\section{Introduction}\label{sec:introduction}
\IEEEPARstart{T}{raining} a high-performing deep learning model is incredibly costly, consuming massive amounts of computation, electricity, and human effort.
In fact, training such models is so time-consuming and expensive that stealing a model is comparatively much simpler and cheaper.
Beyond directly copying an original model and simply claiming its ownership, there are several genres of deep learning model theft.
These include model fine-tuning \cite{wang2017growing, nagabandi2018neural, howard2018universal}, model pruning \cite{liu2018rethinking, he2017channel, zhu2017prune}, and knowledge distillation.
In the face of so many potential threats, model owners have sought ways to protect their intellectual property, and one such method is model watermarking.
Model watermarking is a relatively new technique that embeds a traceable digital watermark into a deep learning model.
As such, it offers a promising approach to model copyright protection.
The first attempt at model watermarking was made in 2017 by Yusuke Uchida {\it et al.} \cite{uchida2017embedding}, who proposed a method of embedding a watermark into a deep learning model.
The watermark was designed to verify ownership of the model given white-box access.
Since then, several exemplary works \cite{rouhani2018deepsigns, szyller2021dawn, namba2019robust} have emerged to provide better protection for deep learning models in different scenarios, to the point where even black-box attacks can now be effectively countered.
\begin{figure}[t!]
\centering
\includegraphics[width=.4\textwidth]{figure/attack_idea.drawio.pdf}
\caption{Attack workflow: The adversary sends the input image to the watermarked image-processing model to derive the watermarked output. If the adversary trains a surrogate model directly with the input image and the watermarked output, the surrogate model will inherit the watermark information. By overwriting the watermarked output, the adversary removes the watermark from the output set and forges its own watermark inside the overwritten output. Finally, a watermark-free surrogate model can be trained.}
\label{fig:atk_idea}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=.4\textwidth]{figure/defence_idea.drawio.pdf}
\caption{Defense workflow:
After embedding the watermark, an overwriting network performs an overwriting attack that yields an overwritten image that is then fed to the retrieval network.
The retrieval network is then required to retrieve a valid recovered watermark from the overwritten image.
Together, these networks form the defense network.
}
\label{fig:def_idea}
\end{figure}
However, most model watermarking approaches are designed to protect classification models; methods that work for image-processing models are few and far between.
In short, an image-processing model takes images as its input and outputs modified images, quite unlike a classification model, which takes in images and simply outputs a category label.
In 2020, Zhang {\it et al.} \cite{zhang2020model} proposed a framework to watermark image-processing neural networks, which, to the best of our knowledge, was the first work in image-processing model watermarking.
Essentially, Zhang's work combines model watermarking with deep steganography so as to forcibly embed watermark information in the outputs of the released models.
Deep steganography is a technique that uses deep learning models to hide a secret image completely within a cover image, such that it is invisible to the naked human eye.
The image containing the embedded secret image is called the container image.
By releasing a set of processed images containing a hidden watermark, any attacker intending to steal the model is compelled to train their own watermarked model.
Subsequently, Quan {\it et al.} \cite{quan2020watermarking} devised another image-processing model watermarking scheme that takes a backdoor watermarking approach.
Briefly, the watermarked model functions normally when it receives normal input images.
When it receives a noise trigger input, it outputs a pre-defined watermark to validate ownership.
Even though recent studies show that steganography plays an essential role in the protection of images, this type of approach might still be vulnerable to attacks.
In fact, our study shows that current watermarking methods for image-processing models are not adequately robust.
For example, we find that, due to the properties of deep steganography, watermarking for image-processing models is vulnerable to perturbations in the frequency domain, especially the high-frequency components.
To demonstrate this, we devise an overwriting attack that shows how existing image-processing model watermarking methods, and even deep steganography itself, can be nullified.
Having designed the attack, we also designed a defense against it that aims to safeguard deep learning models.
The defense method mitigates the overwriting attack through a new adversarial training framework that combines a watermarking method with the overwriting attack.
The general workflow of the attack is described in Figure \ref{fig:atk_idea}.
Here, a released image-processing deep learning model is watermarked such that every image it outputs contains an invisible watermark.
If an attacker tries to train a surrogate model via knowledge distillation, the surrogate model will carry the watermark information automatically.
However, in our attack, we train an overwriting network that overwrites the embedded watermark in the output from the watermarked model.
A surrogate model is also trained with the overwritten output and the input image sets.
Thus, the watermark is nullified, for the original watermark can no longer be retrieved from the output of the surrogate model.
To effectively counter the overwriting attack, we propose an adversarial training framework that deliberately incorporates an overwriting network to enhance the robustness of the watermarking network.
Figure \ref{fig:def_idea} illustrates this process.
Briefly, an overwriting network is trained along with a watermarking network, which together form a defense network.
There is an adversarial training process, where the overwriting network tries to overwrite the watermark in the container image so that the retrieval network in the watermarking network cannot retrieve a valid recovered watermark from it.
In contrast, the watermarking network tries to retrieve a valid recovered watermark even if the container image has been overwritten.
This competitive process significantly boosts the robustness of the watermarking network.
Overall, our contributions are as follows:
\begin{enumerate}[label=\roman*)]
\item Through frequency analysis, we unraveled where a secret image signal is embedded in a container image.
Accordingly, we devised an attack that nullifies existing image-processing model watermarking methods.
\item We devised a corresponding defense method based on adversarial training that counters the proposed attack method with a new adversarial training framework to protect the image-processing network.
\item We discovered an overfitting problem with the current watermarking method for protecting image-processing models that will nullify the protection, and fixed it by modifying the training process.
\end{enumerate}
The rest of this paper is organized as follows.
In Section \ref{sect:pre}, we present the preliminaries, listing the notations used throughout the paper and reviewing the background and related work.
We then describe our proposed method in detail in Section \ref{sect:method}.
Our experiment processes and results are presented in Section \ref{sect:exp}, and they are analyzed and discussed in Section \ref{sect:dis}.
Lastly, we conclude this work in Section \ref{sect:con}.
\section{Preliminary}
\label{sect:pre}
\begin{table}[ht!]
\caption{Notations}
\label{tab:notation}
\begin{tabularx}{.48\textwidth}{
|>{\centering\arraybackslash}m{.05\textwidth}
|>{\arraybackslash}m{.377\textwidth}
|}
\hline
\multicolumn{1}{|c|}{Notation}
&
\multicolumn{1}{c|}{Definition}
\\\hline
$\mcl U$
&
The overwriting network.
\\\hline
$\mcl O$
&
The defense network.
\\\hline
$\mcl E$
&
An embedding network that embeds a secret image into a cover image to yield a container image.
\\\hline
$\mcl R$
&
A retrieval network that retrieves a recovered secret image from a container image.
\\\hline
$\mcl D$
&
A discriminator network that identifies whether or not a given image contains hidden content.
\\\hline
$\mcl E_{\mcl U}$
&
The overwriting embedding network.
\\\hline
$\mcl R_{\mcl U}$
&
The overwriting retrieval network.
\\\hline
$H$
&
The original and watermark-free image-processing model.
\\\hline
$H'$
&
A surrogate model mimicking $H$, but trained on a watermarked dataset.
\\\hline
$H_0$
&
A surrogate model mimicking $H$, but trained on a watermark-free dataset.
\\\hline
$A$
&
A set of images for the image-processing network to process.
\\\hline
$B$
&
A set of processed images originating from $A$.
\\\hline
$B'$
&
A set of watermarked and processed images, originating from $B$.
\\\hline
$B''$
&
A set of noisy output images produced by the surrogate model $H'$, originating from $B$.
\\\hline
$B_{\mcl U}$
&
A set of watermarked and processed images, but having suffered from the overwriting attack.
\\\hline
$B_0$
&
A set of processed images from a surrogate model that is not trained on the watermarked dataset.
\\\hline
$C/c$
&
A set of cover images/a cover image for concealing secrets.
\\\hline
$C'/c'$
&
A set of container images/a container image where secrets are hidden inside.
\\\hline
$S/s$
&
A set of secret images/a secret image to hide.
\\\hline
$S'/s'$
&
A set of recovered secret images/a recovered secret image.
\\\hline
$w$
&
A watermark.
\\\hline
$w'$
&
A recovered watermark.
\\\hline
$w_0$
&
A pure black null image.
\\\hline
$c'$
&
A container image that contains a watermark.
\\\hline
$x$
&
An arbitrary image that is the same size as $c'$.
\\\hline
$x'$
&
A recovered image originating from $x$.
\\\hline
$\epsilon$
&
A tolerable error range of a recovered secret image.
\\\hline
$\mcl L$
&
A loss function.
\\\hline
$\lambda$
&
A weight parameter for a regularizer in the loss function.
\\
\hline
\end{tabularx}
\end{table}
\subsection{Watermarking \& Deep Learning}
Watermarking is a powerful method for object authentication and ownership validation. It has established strong ties with deep learning in recent times. To provide a comprehensive overview of these interactions, we have categorized them into two main categories: model watermarking and image watermarking using deep learning. For the reader's convenience, a list of all the notations used in the subsequent sections can be found in Table \ref{tab:notation}.
\subsubsection{Model watermarking}
The existing techniques for model watermarking can be classified into three categories: model weight watermarking, backdoor watermarking, and active watermarking.
In model weight watermarking, as described in \cite{uchida2017embedding}, the watermark is embedded into the model's weight parameters during the training process. To retrieve the watermark, one needs complete access to the model's internal structure, which is often not feasible in real-world scenarios. Furthermore, these methods are not highly resilient against attacks such as model pruning, fine-tuning, and knowledge distillation.
Backdoor watermarking, as discussed in \cite{szyller2021dawn}, involves the deliberate alteration of a portion of the training data to create an overfitted model. This portion is referred to as the trigger dataset and can be used to validate the ownership of a suspect model. If the majority of the trigger data result in the suspect model producing the watermark labels, the model's ownership can be confirmed with just black-box access. Compared to model weight watermarking, this method is more robust against the previously mentioned attacks.
On the other hand, active watermarking methods aim to prevent model theft proactively. For instance, Tang et al. \cite{tang2020deep} proposed a method that requires the user to enter a valid serial number before using the desired model. This model is a student model derived from a teacher model and functions correctly only with a valid serial number. Although this approach is proactive in nature and protects the model, a malicious entity can still crack the serial number generator and propagate the stolen model.
\subsubsection{Image watermarking via deep learning}
Image watermarking methods that leverage deep learning can be further categorized into auto-encoder image watermarking and generative adversarial network image watermarking.
Auto-encoder image watermarking, first introduced by Baluja in \cite{baluja2017hiding}, involves the use of an embedding network and a retrieval network.
The embedding network embeds a watermark or secret image into a cover image to produce a container image that is visually similar to the cover image.
The retrieval network then retrieves the watermark from the container image with a tolerable error range.
While these methods achieve high perceptual quality, they are susceptible to steganalysis, a detection attack that identifies hidden content in an image.
Additionally, the container images generated by these methods lack robustness against distortions and malicious attacks that can result in damage or removal of the hidden content.
Generative adversarial network image watermarking is similar to auto-encoder image watermarking, but with the addition of a discriminator in the framework.
During adversarial training, the discriminator is trained to detect hidden content in any image, while the embedding network is tasked with deceiving the discriminator with the container images it generates.
This enhances the covertness of the container images against steganalysis.
However, they remain vulnerable to distortions during transmission and malicious attacks, such as JPEG compression and overwriting attacks.
\subsection{Related Work}
\begin{figure*}[t!]
\centering
\includegraphics[width=.9\textwidth]{figure/rerendered_wmer.drawio.pdf}
\caption{Framework of the Watermarking Network:
Starting from the very left, the embedding network is trained to embed a watermark into a processed image set so as to yield a watermarked image set.
An adversary trains a surrogate model with a set of raw images and the watermarked image set, and thus the surrogate model carries the watermark information.
Whenever the surrogate model yields a set of noisy outputs, the retrieval network is able to retrieve a recovered watermark from them to validate the model's ownership.
}
\label{fig:wm_net}
\end{figure*}
Watermarking is a powerful method for safeguarding intellectual property and preventing copyright infringement in various domains, including image protection \cite{potdar2005survey}, audio \cite{arnold2000audio} and video files \cite{doerr2003guide}. By embedding unique and imperceptible marks within the intellectual property, the watermark serves as evidence of ownership and can be used in legal proceedings to defend against infringement claims. Despite having a long history of use, watermarking is a relatively new application in the realm of deep learning models.
In 2017, Uchida {\it et al.} \cite{uchida2017embedding} introduced a novel method for embedding a watermark into the weight parameters of a model, which was considered to be the first attempt at using watermarking techniques for the protection of intellectual property in neural networks. Despite its pioneering efforts, the method's validity in proving ownership required complete access to the parameters, or white-box access, which made it not practical for real-world scenarios. Furthermore, its robustness to different types of attacks was subject to improvement.
Rouhani {\it et al.} \cite{rouhani2018deepsigns} then proposed a watermarking framework that provides protection against fine-tuning, pruning, and overwriting of watermarks in both white-box and black-box scenarios. This approach was more robust to attacks, however, it was not capable of preventing knowledge distillation attacks.
Szyller {\it et al.} \cite{szyller2021dawn} then introduced an approach that was capable of countering all types of attacks, including knowledge distillation, by making a portion of the output from the watermarked model deliberately false. This strategy forces the surrogate model to include the watermark information by overfitting the falsified labels, thus representing a trade-off between robustness and accuracy.
It is worth noting that all of these methods, including the one proposed by Szyller {\it et al.}, work in a passive manner to defend against attacks, as the watermarks only serve to prove ownership after a copyright violation has already occurred, rather than preventing the violation from happening in the first place. Furthermore, these methods, along with most other model watermarking methods, are designed for classification models, with only a limited number of watermarking methods available for image-processing models.
In 2020, Zhang {\it et al.} proposed a watermarking method for image-processing deep learning models \cite{zhang2020model}. This method is the first of its kind and incorporates the concept of deep steganography, which is the technique of hiding information in such a way that it is not detected. The method fuses imperceptible image watermarking with model watermarking, making it effective against black-box knowledge distillation attacks.
The technique of steganography has a long history, dating back centuries, and has been utilized in different domains. Baluja first introduced the use of a deep learning model for image steganography in 2017 \cite{baluja2017hiding}. The method involves hiding one image within another image in such a way that it is not visible to the naked eye.
Several advancements have been made in the field of deep steganography since then, with Wu {\it et al.} designing a framework to perform end-to-end deep steganography \cite{wu2018image} and Zhang {\it et al.} developing a framework to hide an arbitrary image within another image \cite{zhang2020udh}. In \cite{zhao2022jointw}, an image watermarking method was merged with the image-generative network, using deep steganography to prevent the model from being misused.
However, as deep steganography evolves, so do the attacks. Traditional attacks on steganography include image resizing, cropping, distortion, and compression, as illustrated in Hosam's work \cite{hosam2019attacking}. Additionally, deep learning has been utilized to perform these attacks, as seen in Boroumand {\it et al.}'s work \cite{boroumand2018deep}, where a deep convolution neural network (DCNN) framework was proposed to perform deep steganalysis. Corley {\it et al.} \cite{corley2019destruction} designed a framework based on a generative adversarial network (GAN) with significant performance that is capable of purging secret images hidden in container images. Thus, similar to the battles between attacks and defenses in model watermarking, intense battles also exist in image watermarking through deep steganography.
\section{Method: Attack and Defense}
\label{sect:method}
\begin{figure*}[t!]
\centering
\includegraphics[width=.7\textwidth]{figure/rerendered_ower.drawio.pdf}
\caption{Framework of the Overwriting Network:
The overwriting network is trained to embed an arbitrary image, or a watermark of its own, into a container image so as to yield an overwritten image.
The overwriting network also contains a retrieval network that is able to retrieve the recovered image, whereas the retrieval network in the watermarking network can only retrieve a null image from the overwritten image.
}
\label{fig:usurper}
\end{figure*}
\subsection{Attack Analysis}
In the current watermarking method for image-processing neural networks, deep steganography is seamlessly integrated with model watermarking.
The watermarking process is composed of two key components, an embedding network $\mathcal{E}$ and a retrieval network $\mathcal{R}$.
As illustrated in Figure \ref{fig:wm_net}, the watermarking process begins by training $\mathcal{E}$ and $\mathcal{R}$ on a set of processed images, $B$, and a watermark image, $w$.
The embedding network $\mathcal{E}$ then embeds the watermark image $w$ into each image $b_i$ in the set $B$ to produce a watermarked image set, $B'$.
This process is denoted as
\begin{equation}
\begin{aligned}
B' = \mcl E(B, w).
\end{aligned}
\end{equation}
In the event of the presence of an adversary, they will only have access to the unprocessed image set $A$, and the watermarked processed image set $B'$. The adversary can then train a surrogate model, denoted as $H'$, using $A$ and $B'$, such that the model learns to produce processed images with watermarks similar to those in $B'$. Finally, the retrieval network $\mcl R$ should be capable of retrieving a recovered watermark $w'$ from both the original watermarked image set $B'$ and the noisy output set $B''$ produced by the surrogate model $H'$, denoted as
\begin{equation}
\begin{aligned}
w' = \mcl R(b'), \text{ s.t. } w' = w + \epsilon, \text{ iff } b' \in B' \cup B'',
\end{aligned}
\end{equation}
where $\epsilon$ represents a tolerable error range.
Meanwhile, if $\mcl R$ receives a watermark-free image $x$ as input, $\mcl R$ will yield a null image $w_0$ that is purely dark, denoted as
\begin{equation}
\begin{aligned}
w_0 = \mcl R(x), \forall \ x \not \in B' \cup B''.
\end{aligned}
\end{equation}
However, deep steganography is vulnerable to perturbations in the frequency domain, as highlighted in the work of Zhang {\it et al.} \cite{zhang2021universal}. This motivates us to explore an overwriting attack on the watermarked image set $B'$. The objective of the attack is to generate an overwritten image set $B_{\mcl U}$ such that the retrieval network $\mcl R$ is unable to retrieve a valid watermark from $B_{\mcl U}$ or from the outputs of a surrogate model $H'$ trained on $B_{\mcl U}$.
The objective of this attack is denoted as
\begin{equation}
\begin{aligned}
\forall \ b_u \in B_{\mcl U} \cup B'', \mcl R(b_u) \neq w + \epsilon.
\end{aligned}
\end{equation}
In other words, the goal here is to purge the signal of the watermark inside the container images so that the surrogate model trained on them does not contain the watermark's information.
Thus, the watermarking method is nullified.
Conversely, to counter the overwriting attack, we need a watermarking network that is sufficiently robust so as to be able to retrieve a valid recovered watermark $w'$ under such an attack.
The objective of the defense is denoted as
\begin{equation}
\begin{aligned}
\exists \ \mcl R, w' = \mcl R(b_u), \text{ s.t. } w' = w + \epsilon, \forall \ b_u \in B_{\mcl U} \cup B''.
\end{aligned}
\end{equation}
This objective requires the watermarking method to completely withstand the overwriting attack.
\subsection{The Attack Network}
\subsubsection{The overwriting attack}
\paragraph{Overview}
As depicted in Figure \ref{fig:usurper}, the overwriting attack aims at the output image set $B'$, which contains the watermark.
A deep steganographic model $\mcl U$ is trained, which consists of an embedding function $\mcl E_{\mcl U}$ and a retrieval function $\mcl R_{\mcl U}$.
As illustrated in Algorithm \ref{alg:atk_U}, this model is capable of embedding an arbitrary image into another arbitrary image so as to perform an overwriting attack on the given container image set $B'$.
The result is a set of overwritten container images $B_{\mcl U}$, where $w'$ cannot be validly retrieved.
\begin{algorithm}
\caption{Train the Overwriting Network}\label{alg:atk_U}
\begin{algorithmic}
\While{$\mcl L_{\mcl U}$ not converged}
\State $c_{\mcl U} \gets \mcl E_{\mcl U}(x, c')$ \Comment{Overwrite}
\State $x' \gets \mcl R_{\mcl U}(c_{\mcl U})$ \Comment{Retrieve}
\State $\mcl L_{\mcl U} \gets \mcl L^{\mcl U}_{\mcl E}(c', c_{\mcl U}) + \mcl L^{\mcl U}_{\mcl R}(x, x')$ \Comment{Get loss}
\State $\mcl L_{\mcl U}$.back\_propagation() \Comment{Backwards}
\EndWhile
\end{algorithmic}
\end{algorithm}
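For concreteness, the loop in Algorithm \ref{alg:atk_U} can be sketched in PyTorch-style Python as follows; the module and function names (\texttt{embed\_u}, \texttt{retrieve\_u}, \texttt{loss\_e}, \texttt{loss\_r}) are placeholders for $\mcl E_{\mcl U}$, $\mcl R_{\mcl U}$, and the loss terms defined below, and the sketch is illustrative rather than the exact implementation.
\begin{verbatim}
# Minimal sketch of Algorithm 1 (assumed module names,
# not the exact implementation).
import torch

def train_overwriter(embed_u, retrieve_u, loader,
                     loss_e, loss_r, epochs=30, lr=1e-3):
    params = (list(embed_u.parameters())
              + list(retrieve_u.parameters()))
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for x, c_prime in loader:   # hidden image, container
            c_u = embed_u(x, c_prime)       # overwrite
            x_rec = retrieve_u(c_u)         # retrieve
            loss = (loss_e(c_prime, c_u)
                    + loss_r(x, x_rec))     # get loss
            opt.zero_grad()
            loss.backward()                 # backwards
            opt.step()
    return embed_u, retrieve_u
\end{verbatim}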
This attack is denoted as
\begin{equation}
\begin{aligned}
\mcl E_{\mcl U}(B') = B_{\mcl U}, \ \text{s.t.} \ \mcl R(B_{\mcl U}) \not= w + \epsilon.
\end{aligned}
\end{equation}
Since the watermark information in $B_{\mcl U}$ is lost, an attacker can train a surrogate model $\mcl H_{\mcl U}$ with $A$ and $B_{\mcl U}$, which is either watermark-free, or it contains a self-made watermark $w_{\mcl U}$.
\paragraph{Loss Functions.}
The loss function for training $\mcl U$ is defined as
\begin{equation}
\begin{aligned}
\mcl L_{\mcl U} = \mcl L_{\mcl E}^{\mcl U} + \mcl L_{\mcl R}^{\mcl U},
\end{aligned}
\label{eq:loss_u}
\end{equation}
where $\mcl L_{\mcl E}^{\mcl U}$ and $\mcl L_{\mcl R}^{\mcl U}$ respectively denote the embedding loss and the retrieval loss of $\mcl U$.
$\mcl L_{\mcl E}^{\mcl U}$ is further decomposed into
\begin{equation}
\begin{aligned}
\mcl L_{\mcl E}^{\mcl U} = \lambda_{mse} l_{mse} + \lambda_{vgg} l_{vgg} + \lambda_{freq} l_{freq},
\end{aligned}
\label{eq:loss_u_e}
\end{equation}
where the $\lambda$s are weight parameters.
$l_{mse}$ is the $L_2$ loss between the cover images $C$ and the container images $C'$, defined as
\begin{equation}
\begin{aligned}
l_{mse} = \sum_{c_i \in C, c_i' \in C'} \frac{1}{N_c} \| c_i - c_i'\|^2,
\end{aligned}
\end{equation}
where $N_c$ is the total number of pixels.
$l_{vgg}$ denotes the perceptual loss between $C$ and $C'$, defined as
\begin{equation}
\begin{aligned}
l_{vgg} = \sum_{c_i \in C, c_i' \in C'} \frac{1}{N_f} \| VGG_k(c_i) - VGG_k(c_i')\|^2,
\end{aligned}
\end{equation}
where $N_f$ and $VGG_k$ respectively denote the total number of feature neurons and the features extracted at layer $k$.
$l_{freq}$ is the frequency loss \cite{jiang2021focal} between $C$ and $C'$ for controlling consistency in the frequency domain, defined as
\begin{equation}
\begin{aligned}
l_{freq} = \sum_{c_i \in C, c_i' \in C'} \frac{1}{N_p} \mcl F(c_i, c_i'),
\end{aligned}
\end{equation}
where $\mcl F$ and $N_p$ are the focal frequency loss function and the total number of image pairs.
$\mcl L_{\mcl R}^{\mcl U}$ is further decomposed into
\begin{equation}
\begin{aligned}
\mcl L_{\mcl R}^{\mcl U} = \lambda_{mse} l_{mse} + \lambda_{vgg} l_{vgg} + \lambda_{freq} l_{freq},
\end{aligned}
\end{equation}
where the terms are identical to those in $\mcl L_{\mcl E}^{\mcl U}$ but applied to image pairs $(s_i, s_i')$ from the secret images $S$ and the recovered secret images $S'$.
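A rough Python sketch of this composite loss is given below; the VGG feature layer and the plain FFT distance used for the frequency term are simplifications we assume for illustration (the actual method uses the focal frequency loss of \cite{jiang2021focal}), and input normalization for the VGG features is omitted for brevity.
\begin{verbatim}
# Sketch of the composite loss (MSE + perceptual + frequency).
# The frequency term is a plain FFT L2 distance standing in for
# the focal frequency loss; the layer choice is an assumption.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

_vgg = vgg16(pretrained=True).features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def composite_loss(pred, target,
                   w_mse=1.0, w_vgg=1.0, w_freq=1.0):
    l_mse = F.mse_loss(pred, target)
    l_vgg = F.mse_loss(_vgg(pred), _vgg(target))
    fp = torch.fft.fft2(pred)
    ft = torch.fft.fft2(target)
    l_freq = (fp - ft).abs().pow(2).mean()
    return w_mse * l_mse + w_vgg * l_vgg + w_freq * l_freq
\end{verbatim}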
\subsection{The Defense Network}
\begin{figure*}[t!]
\centering
\includegraphics[width=.8\textwidth]{figure/rerendered_def_net.drawio.pdf}
\caption{Framework of the Defense Network:
The initial training stage is critical for the success of the proposed method, as it involves the concurrent training of both the watermarking and overwriting networks. The retrieval network must be able to extract a valid watermark from the overwritten images generated by the overwriting embedding network. During the adversarial training stage, the retrieval network is further refined through exposure to both watermarked images and watermark-free images. If the retrieval network encounters a watermarked image, it should produce a valid recovered watermark. Conversely, when it encounters a watermark-free image, it should output a null image.
}
\label{fig:oathkeeper}
\end{figure*}
\subsubsection{Overview}
To counter the threat of the attack network, we devised an adversarial training framework, i.e., the defense network $\mcl O$, which includes both the watermarking framework $f_{wm}$ and $\mcl U$, where $f_{wm}$ and $\mcl U$ play the two sides of a two-party-mini-max game.
In short, we set up an adversarial training scheme by training $f_{wm}$ along with $\mcl U$ according to the settings shown in Figure \ref{fig:oathkeeper}:
As demonstrated in Algorithm \ref{alg:def_O_init}, the embedding network $\mcl E$ in $f_{wm}$ is initially trained to embed $w$ into $B$ to get $B'$.
A discriminator network $\mcl D$ then determines whether $B'$ contains a watermark, which pushes $\mcl E$ to hide the watermark more covertly.
Meanwhile, the overwriting embedding network $\mcl E_{\mcl U}$ is trained to embed an arbitrary image into another arbitrary image so as to perform an overwriting attack.
$B'$ is then fed to $\mcl E_{\mcl U}$ along with an arbitrary image set $S$ of the same size as $B'$ to yield an overwritten image set $B_{\mcl U}$.
Lastly, $B'$ and $B_{\mcl U}$ are passed to the retrieval network $\mcl R$ in $f_{wm}$ to retrieve $w'$, and $\mcl R$ is required to produce a null image $w_0$ when it receives watermark-free images from $A$ and $B$.
\begin{algorithm}
\caption{The Defense Network - Initial Training Stage}
\label{alg:def_O_init}
\begin{algorithmic}
\While{$\mcl L$ not converged}
\State $B' \gets \mcl E(w, B)$ \Comment{Embed}
\State $B_{\mcl U} \gets \mcl E_{\mcl U}(w_{\mcl U}, B')$ \Comment{Overwrite}
\State $w_0 \gets \mcl R(A; B)$ \Comment{Null retrieval}
\State $w' \gets \mcl R(B'; B_{\mcl U})$ \Comment{Ret. watermark}
\State $\mcl L \gets \mcl L_{\mcl U}(A, B, B', B_{\mcl U}, w, w_{\mcl U}, w_0, w')$ \Comment{Get Loss}
\State $\mcl L \gets \mcl L + \mcl L_{\mcl O}(A, B, B', B_{\mcl U}, w, w_{\mcl U}, w_0, w')$
\State $\mcl L$.back() \Comment{Backwards}
\EndWhile
\end{algorithmic}
\end{algorithm}
At the adversarial training stage, as illustrated in Algorithm \ref{alg:def_O_adv}, only $\mcl R$ is trained for better robustness.
On top of the previous training settings, $\mcl R$ is further forced to retrieve a watermark from the noisy output $B''$ generated by the surrogate model $H'$.
Meanwhile, a clean surrogate model $H_0$ is trained to produce clean output $B_0$, which boosts the specificity of $\mcl R$.
Further, $\mcl R$ must also retrieve a null image when it receives $B_0$.
This solves an intractable problem that we encountered in the experiments, which is further discussed in Section \ref{sect:problem}.
\begin{algorithm}
\caption{The Defense Network - Adversarial Training Stage}
\label{alg:def_O_adv}
\begin{algorithmic}
\While{$\mcl L_{\mcl O}$ not converged}
\State $B_0 \gets \mcl H_0(A)$
\State $B'' \gets \mcl H'(A)$
\State $w_0 \gets \mcl R(A; B; B_0)$\Comment{Null Retrieval}
\State $w' \gets \mcl R(B'; B''; B_{\mcl U})$\Comment{Watermark Retrieval}
\State $\mcl L_{\mcl O} = \mcl L_{\mcl O}(A, B, B', B'', w, w_0, w')$\Comment{Get Loss}
\State $\mcl L_{\mcl O}$.back()\Comment{Backwards}
\EndWhile
\end{algorithmic}
\end{algorithm}
The two-party-mini-max game is defined as
\begin{equation}
\begin{aligned}
& \qquad \qquad \qquad \underset{\mcl E, \mcl R}{\min} \ \underset{\mcl E_{\mcl U}}{\max} \ \mcl L(\mcl E, \mcl R, \mcl E_{\mcl U}) =\\
& \bigg(
\mbb E \big[ \sum_{b_i \in B, s_i \in S} \frac{1}{N_c} \big\|
\mcl R \big(
\mcl E_{\mcl U}(
\mcl E(b_i, w),
s_i)
\big)
- w \big\|^2 \big]
\bigg),
\end{aligned}
\end{equation}
where $\mcl E_{\mcl U}$ benefits most when $\mcl R$ cannot retrieve a valid $w'$, while $\mcl E$ and $\mcl R$ benefit most when the retrieved $w'$ is as close to $w$ as possible.
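A single alternating update of this game can be sketched as follows; the optimizer split, the argument orders, and the assumption that $w$ is already expanded to the batch shape are ours, so this only illustrates the adversarial schedule rather than the exact training code.
\begin{verbatim}
# One alternating step of the two-party mini-max game (sketch).
# opt_wm holds only E's and R's parameters; opt_u holds E_u's.
import torch.nn.functional as F

def minimax_step(E, R, E_u, opt_wm, opt_u, b, w, s):
    # (1) E_u ascends: try to make watermark retrieval fail.
    b_wm = E(b, w).detach()          # container, fixed for E_u
    loss_u = -F.mse_loss(R(E_u(s, b_wm)), w)
    opt_u.zero_grad(); loss_u.backward(); opt_u.step()

    # (2) E and R descend: survive the overwriting attack.
    loss_wm = F.mse_loss(R(E_u(s, E(b, w))), w)
    opt_wm.zero_grad(); loss_wm.backward(); opt_wm.step()
    return loss_wm.item()
\end{verbatim}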
\subsubsection{Loss Functions}
The loss function for training the defense network is defined as
\begin{equation}
\begin{aligned}
\mcl L = \mcl L_{\mcl U} + \mcl L_{\mcl O}
\end{aligned}
\end{equation}
where $\mcl L_{\mcl U}$ and $\mcl L_{\mcl O}$ respectively denote the loss of training the overwriting network and the watermarking part of the defense network.
Similar to Equation \ref{eq:loss_u}, $\mcl L_{\mcl U}$ here is defined as
\begin{equation}
\begin{aligned}
\mcl L_{\mcl U} = \mcl L_{\mcl E}^{\mcl U} + \mcl L_{\mcl R}^{\mcl U} + l_{\mcl U}.
\end{aligned}
\end{equation}
The extra term $l_{\mcl U}$ denotes the adversarial overwriting loss that attempts to make $\mcl R$ retrieve a blank image $w_0$ from $B_{\mcl U}$.
This is defined as
\begin{equation}
\begin{aligned}
l_{\mcl U} = \sum_{b_i \in B_{\mcl U}} \frac{1}{N_c} \| \mcl R(b_i) - w_0 \|^2.
\end{aligned}
\end{equation}
$\mcl L_{\mcl O}$ is then further decomposed into
\begin{equation}
\begin{aligned}
\mcl L_{\mcl O} = \mcl L^{\mcl O}_{\mcl E} + \mcl L^{\mcl O}_{\mcl R} + \mcl L^{\mcl O}_{\mcl D} + l_{\mcl O}
\end{aligned}
\end{equation}
where the terms represent the loss of training the embedding network, the retrieval network, the discriminator, and the defense adversarial loss.
Further, $\mcl L_{\mcl E}^{\mcl O}$ comprises the following terms:
\begin{equation}
\begin{aligned}
\mcl L_{\mcl E}^{\mcl O} = \lambda_{mse} l_{mse} + \lambda_{freq} l_{freq} + \lambda_{vgg} l_{vgg} + \lambda_{adv} l_{adv},
\end{aligned}
\end{equation}
where the former three losses are identical to those appearing in Equation \ref{eq:loss_u_e}.
The last term $l_{adv}$ represents the adversarial loss against the discriminator network, defined as
\begin{equation}
\begin{aligned}
l_{adv} = \mbb E_{b_i' \in B'} \big[ \log(\mcl D(b_i')) \big].
\end{aligned}
\end{equation}
The goal is to make the embedding network produce container images that cannot be detected by the discriminator network.
$\mcl L_{\mcl R}^{\mcl O}$ is decomposed into
\begin{equation}
\begin{aligned}
\mcl L_{\mcl R} = \lambda_{wm} l_{wm} + \lambda_{clean} l_{clean} + \lambda_{cst} l_{cst},
\end{aligned}
\end{equation}
where the $\lambda$s are weight parameters.
$l_{wm}$ denotes the watermark retrieval loss, defined as
\begin{equation}
\begin{aligned}
l_{wm} = \sum_{b_i' \in B'} \frac{1}{N_c} \| \mcl R(b_i') - w\|^2 + \sum_{b_i'' \in B''} \frac{1}{N_c} \| \mcl R(b_i'') - w\|^2.
\end{aligned}
\end{equation}
$l_{clean}$ represents the blank-extraction loss for guiding $\mcl R$ to extract only blank images from images that carry no watermark information, denoted as
\begin{equation}
\begin{aligned}
l_{clean} = \sum_{a_i \in A} \frac{1}{N_c} \| \mcl R(a_i) - w_0\| + \sum_{b_i \in B} \frac{1}{N_c} \| \mcl R(b_i) - w_0\|,
\end{aligned}
\end{equation}
where $w_0$ is a blank image.
Lastly, $l_{cst}$ is the consistency loss for ensuring that the watermarks extracted from different images are consistent, denoted as
\begin{equation}
\begin{aligned}
l_{cst} = \sum_{x, y \in B' \bigcup B''} \| \mcl R(x) - \mcl R(y) \|^2
\end{aligned}
\end{equation}
$l_{\mcl O}$ stands for the defense adversarial loss, which drives $\mcl R$ to retrieve $w' = w + \epsilon$ from the overwritten images $B_{\mcl U}$; it is defined as
\begin{equation}
\begin{aligned}
l_{\mcl O} = \sum_{b_i \in B_{\mcl U}} \frac{1}{N_c} \| \mcl R(b_i) - w \|^2.
\end{aligned}
\end{equation}
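The retrieval-side terms above can be summarized in a short sketch; the helper names are ours, MSE is used throughout for brevity (the blank-extraction loss in the text uses an un-squared norm), and the consistency term is reduced to a single pair of sources.
\begin{verbatim}
# Sketch of the retrieval-side losses of the defense network.
# Simplifications: MSE everywhere, single-pair consistency term.
import torch.nn.functional as F

def retrieval_losses(R, Bp, Bpp, A, B, Bu, w, w0):
    def mse(x, t):                  # broadcast target over batch
        return F.mse_loss(x, t.expand_as(x))
    l_wm    = mse(R(Bp), w) + mse(R(Bpp), w)   # watermark loss
    l_clean = mse(R(A), w0) + mse(R(B), w0)    # blank extraction
    l_cst   = F.mse_loss(R(Bp), R(Bpp))        # consistency
    l_o     = mse(R(Bu), w)                    # defense adversarial
    return l_wm, l_clean, l_cst, l_o
\end{verbatim}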
\subsection{Discussion}
In our defense framework, the overwriting network is trained in tandem with the watermarking network to form the defense network. The purpose of the overwriting network is to overwrite the original watermark with a forged watermark, creating an adversarial relationship between the two. The retrieval network of the watermarking network must then be able to retrieve the original watermark from the overwritten images, as demonstrated in previous work \cite{baluja2019hiding}.
As the two embedding networks embed the original and forged watermarks into the same container image in parallel, both secret images are preserved within the container image. This is because, as shown in \cite{baluja2019hiding}, it is possible to embed multiple secret images into one cover image, albeit with a higher perceptual quality loss.
Our experiments show that without proper adversarial training, the watermarking network is unable to retrieve a valid watermark. Thus, our adversarial training scheme is a crucial component of the defense framework.
\section{Experiment}
\label{sect:exp}
\subsection{Experimental Setup}
\subsubsection{Dataset}
Two datasets were used to train the image-processing surrogate model: the de-raining dataset from \cite{zhang2018density} and an 8-bit image dataset generated via the algorithm in \cite{zou2021stylized}.
The de-raining dataset is publicly available, while the 8-bit image dataset was generated from images in the ImageNet dataset.
The goal of the first task was to remove raindrops from the images.
The second task was to transform an input image into an 8-bit style artwork.
With each task, we split the dataset into two subsets: a training set of $4,000$ images and a test set of $1,000$ images.
All the images were resized to $256 \times 256$ for training.
We also took samples from the ImageNet dataset to train $\mcl U$.
Here, there were $40,000$ images in the training set and $10,000$ images in the test set.
Each image was greater than $256 \times 256$ in size so we randomly cropped the images down to $256 \times 256$ so as to enhance $\mcl U$'s robustness.
\subsubsection{Implementation details}
$\mcl E$'s network structure follows UNet \cite{ronneberger2015u}.
UNet performs well on image-to-image translation and semantic segmentation tasks, so the network produces good results when the inputs and outputs are closely related.
CEILNet \cite{fan2017generic} was used as the model for $\mcl R$, which is believed to work well when the inputs somewhat differ from the outputs.
Patch-GAN \cite{isola2017image} was used for $\mcl D$.
In terms of the overwriting network, UNet \cite{ronneberger2015u} was once more used as the network structure for $\mcl E_{\mcl U}$.
For $\mcl R_{\mcl U}$, we simply used stacks of convolutional layers, as the critical point lies in the overwriting procedure, and, here, the embedding network plays the more crucial role.
Moreover, there is no discriminator in this training process. Lastly, the defense network comprised the watermarking network and $\mcl U$.
Together, they form $\mcl O$.
\subsubsection{Evaluation metrics}
We chose PSNR and SSIM \cite{hore2010image} to evaluate the visual quality of the container image in comparison to its corresponding cover image.
Additionally, we used normalized cross correlation (NCC) to measure whether a retrieved watermark was valid.
If the NCC between a retrieved watermark and the original watermark was greater than $0.95$, the retrieved watermark was considered to be legitimate.
NCC is defined as
\begin{equation}
\begin{aligned}
NCC = \frac{\langle \mcl R (b_i'), w \rangle}{\| \mcl R (b_i') \| \cdot \| w\|}
\end{aligned}
\end{equation}
where $\langle \cdot , \cdot \rangle$ denotes the inner product, and $\| \cdot \|$ denotes the L2 norm.
The success rate $SR$ is defined as the proportion of container images from which the watermark is successfully retrieved.
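The NCC check and the success rate can be computed as in the following sketch (tensor inputs assumed; the $0.95$ threshold is the one stated above).
\begin{verbatim}
# Sketch of the NCC and success-rate computation.
import torch

def ncc(retrieved, w):
    r, t = retrieved.flatten(), w.flatten()
    return torch.dot(r, t) / (r.norm() * t.norm())

def success_rate(R, containers, w, thresh=0.95):
    hits = sum(float(ncc(R(c.unsqueeze(0)).squeeze(0), w) > thresh)
               for c in containers)
    return hits / len(containers)
\end{verbatim}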
\subsection{Baseline Reproduction}
\subsubsection{Training the watermarking network}
First, we reproduced Zhang's method \cite{zhang2020model} as our experimental baseline.
When training the watermarking networks $\mcl E$, $\mcl R$, and $\mcl D$, we set the initial learning rate of the Adam optimizer to 0.001.
Here, the goal was to equip the watermarking network with the ability to embed a fixed watermark into an arbitrary image, and to retrieve the watermark from the container image.
Therefore, we trained the network on the ``ImageNet'' training dataset, where one epoch contains 40,000 images.
The images were randomly cropped down to $256 \times 256$ so as to increase the randomness of the input data.
We set the batch size to 10, which means that the model ran through 10 images in one iteration.
If there was no loss descent within 4,000 iterations, we decayed the learning rate by a factor of 0.2.
All $\lambda$s were set to 1 except for $\lambda_{adv} = 0.01$, the weight parameter of the adversarial loss.
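The optimizer and learning-rate schedule described above correspond to a setup along the following lines; the scheduler call pattern (once per iteration, driven by the current loss) is our reading of the settings rather than the original training script.
\begin{verbatim}
# Sketch of the optimizer/schedule: Adam at 1e-3, decayed by a
# factor of 0.2 when the loss stops improving for 4,000 steps.
import torch

def make_optim(modules, lr=1e-3, patience=4000):
    params = [p for m in modules for p in m.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
        opt, mode="min", factor=0.2, patience=patience)
    return opt, sched
# Usage: call sched.step(loss.item()) once per iteration.
\end{verbatim}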
Figure \ref{fig:test_wmer_res} depicts the test results of the two trained models.
Each row of the two images is an instance.
From left to right, each column represents the cover images $c$, the secret images $s$, the container images $c'$, the retrieved secret images $s'$, and the null images retrieved from watermark-free images.
From the results, it is clear that the watermarking network was able to complete both the embedding and the retrieval tasks quite well.
Further, a pure black image was guaranteed when the input contained no hidden content.
Here, our settings slightly differ from Zhang's method.
Zhang set the null image to pure white, but for ease of reading, we set the null image to black.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/wmer_test.png}
\caption{Flower}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/wmer_test_pepper.png}
\caption{Pepper}
\end{subfigure}
\caption{Test Results of the Watermarking Network}
\label{fig:test_wmer_res}
\end{figure}
\subsubsection{The adversarial stage}
This stage was designed to enhance the robustness of the retrieval network by training it to retrieve the watermark from the surrogate model's processed images.
The retrieval will fail if this step is not conducted, because the retrieval network has never encountered any noisy samples from the surrogate model.
However, because of a problem we discovered (discussed in Section \ref{subsect:atk}), we made a change to this stage of the process where we involved the outputs from a watermark-free surrogate model in the fine-tuning process.
To train the surrogate models, we used the de-raining and the 8-bit datasets in two parallel experiments.
The paired processed images were watermarked by the watermarking network.
By training the surrogate models this way, we forced the surrogate models to overfit the watermark signals hidden in the processed images, such that every output from the surrogate models carried the watermark signal.
Here, the batch size was set to $20$ and the number of epochs was set to $50$ based on some previous experiments.
The initial learning rate was set to $0.001$, and then decayed by $0.2$ if the loss remained unchanged for 5 epochs.
Additionally, we used the same settings to train a watermark-free surrogate model with the watermark-free datasets.
After training the surrogate models, we used them to produce noisy watermarked images and the watermark-free images, which were then fed into the retrieval network $\mcl R$.
In the adversarial training stage, the hyperparameters for updating $\mcl R$ remained the same as those used to train the watermarking network.
However, we reset the learning rate back to $0.001$ so as to let the trained network escape the local minimum.
The fine-tuning process lasted for $50$ epochs.
As a result, the fine-tuned retrieval network was able to retrieve the watermark from both the watermarked image $B'$ and the surrogate model's outputs $B''$.
The results are visually presented in Figure \ref{fig:org_meth_res}.
Details of the results are listed in Table \ref{tab:res_org}.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/method_org_val.png}
\caption{De-raining + Flower}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/method_org_val_8bit.png}
\caption{8bit + Pepper}
\end{subfigure}
\caption{Watermarks Retrieved from the Surrogate Models' Output}
\label{fig:org_meth_res}
\end{figure}
\begin{table}[t!]
\centering
\scriptsize
\caption{Original Method Results}
\renewcommand\arraystretch{1.5}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{l|ccccccc}
\toprule
Condition/Metric & \textbf{PSNR} & \textbf{SSIM} & \textbf{NCC} & \textbf{SR}(\%) \\
\midrule
\textbf{De-raining $\times$ $\mcl W$} & 30.49 & 0.8688 & 0.9992 & 100 \\
\textbf{De-raining $\times$ $\mcl W$ $\times$ UNet} & / & / & 0.9974 & 100 \\
\textbf{De-raining $\times$ $\mcl W$ $\times$ Res16} & / & / & 0.9877 & 100 \\
\textbf{8-bit $\times$ $\mcl W$} & 32.89 & 0.8739 & 0.9999 & 100 \\
\textbf{8-Bit $\times$ $\mcl W$ $\times$ UNet} & / & / & 0.9985 & 100 \\
\textbf{8-Bit $\times$ $\mcl W$ $\times$ Res16} & / & / & 0.9910 & 100 \\
\bottomrule
\end{tabular}}%
\label{tab:res_org}%
\end{table}%
With each dataset, we conducted three parallel experiments: one using UNet, one using a residue network with $16$ blocks (Res16), and one performed directly on the watermarked image $B'$.
PSNR and SSIM were used to measure the quality of the container image $c'$ compared to its corresponding cover image $c$.
NCC and SR were only used to validate the watermark retrieval.
Remarkably, the success rate of the watermark retrieval reached $100\%$ in each experiment, which firmly verifies the efficacy of Zhang's method.
\subsection{Attacks}
\label{subsect:atk}
We trained our overwriting network with the Adam optimizer on the ImageNet training set.
The learning rate and batch size were set to $0.001$ and $20$.
We decreased the learning rate by $0.2$ if there was no loss decrease within $2,000$ iterations.
After $20$ epochs, the loss no longer decreased significantly, so we set the number of epochs to $30$.
The $\lambda$s were all set to $1$.
The cover images were randomly cropped to $256 \times 256$ so as to increase the randomness of the input data and, in turn, enhance the robustness of the overwriting network.
Further, the overwriting network was trained to embed one of four selected watermarks: ``flower", ``copyright", ``lena", or ``pepper", into an arbitrary image, and then retrieve the watermark.
The effect is depicted in Figure \ref{fig:test_u}, where each row is an instance.
From left to right, each column respectively represents the cover images $c$, the secret images $s$, the container images $c'$, and the recovered secret images $s'$.
\begin{figure}[t!]
\centering
\includegraphics[width=.3\textwidth]{figure/usurper_test.png}
\caption{Test Results of the Overwriting Network}
\label{fig:test_u}
\end{figure}
With the trained overwriting network in hand, we launched an attack on the watermarked images.
Generally, the watermarked images $B'$ were overwritten with another watermark so as to prevent the watermark from being retrieved.
The direct effect of the attack is depicted in Figure \ref{fig:usurper_atk_res}, where each row is an instance.
From left to right, each column respectively represents the cover images $c$, the secret images $s$, the container images $c'$, and the retrieved secret images $s'$.
Table \ref{tab:res_atk} lists the results of the visual quality test for the container image and watermark retrieval under various conditions, namely, different combinations of surrogate model types and datasets.
Each value is an average over 100 randomly selected images.
We performed three experiments with each of the two tasks, i.e., a direct attack on $B'$, an attack on the UNet surrogate model, and another on the Res16 surrogate model.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/usurper_attack.png}
\caption{De-raining + Flower}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/usurper_attack_8bit.png}
\caption{8bit + Pepper}
\end{subfigure}
\caption{Results of the Overwriting Attack}
\label{fig:usurper_atk_res}
\end{figure}
Compared to the watermarked images $B'$, the quality of the attacked image $B_{\mcl U}$ decreased slightly.
However, the quality loss was still negligible to the human eye.
Success rates across all the experiments were no greater than 10\%, and half of them reached 0\%, which proves the efficacy of our attack.
Notably, the success rate of the watermark retrieval with the Res16 surrogate model on both tasks was higher than the others, which is an interesting phenomenon.
\begin{table}[t!]
\centering
\scriptsize
\caption{Attack Results}
\renewcommand\arraystretch{1.5}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{l|ccccccc}
\toprule
Condition/Metric & \textbf{PSNR} & \textbf{SSIM} & \textbf{NCC} & \textbf{SR}(\%) \\
\midrule
\textbf{De-raining $\times$ $\mcl W$ $\times$ $\mcl U$} & 27.24 & 0.8031 & 0.0565 & 4 \\
\textbf{De-raining $\times$ $\mcl W$ $\times$ $\mcl U$ $\times$ UNet} & / & / & 0.0109 & 0 \\
\textbf{De-raining $\times$ $\mcl W$ $\times$ $\mcl U$ $\times$ Res16} & / & / & 0.1527 & 10 \\
\textbf{8-bit $\times$ $\mcl W$ $\times$ $\mcl U$} & 31.91 & 0.6061 & 0.2968 & 0 \\
\textbf{8-Bit $\times$ $\mcl W$ $\times$ $\mcl U$ $\times$ UNet} & / & / & 0.0678 & 0 \\
\textbf{8-Bit $\times$ $\mcl W$ $\times$ $\mcl U$ $\times$ Res16} & / & / & 0.2248 & 5 \\
\bottomrule
\end{tabular}}%
\label{tab:res_atk}%
\end{table}%
\subsubsection{The overfitting problem in the retrieval network}
\label{sect:problem}
\begin{figure*}[t!]
\centering
\includegraphics[width=.8\textwidth]{figure/fine_tune.png}
\caption{Training Process of the Defense Network:
From left to right:
the images to process $A$,
the processed images $B$,
the watermarked processed images $B'$,
the overwritten watermarked processed images $B_{\mcl U}$,
the overwritten watermark-free processed images $B_{\mcl U}'$,
the null images retrieved from $A$ and $B$,
the watermarks retrieved from $B'$ and $B_{\mcl U}$,
the null images retrieved from $B_{\mcl U}'$,
and the watermark image $w$.
}
\label{fig:train_o}
\end{figure*}
In the attack, we also tried to use the fine-tuned retrieval network to extract watermarks from the images that were only processed by the overwriting network.
In other words, we tried to extract watermarks from images that did not contain a watermark signal embedded by the watermarking network.
Under these circumstances, the retrieval network was still able to retrieve the watermark, albeit with decreased quality, as demonstrated in Figure \ref{fig:overfitting}.
This indicated that, during the fine-tuning, the retrieval network was tuned to output a pre-defined watermark if there was any secret image signal in the container images, regardless of what exactly the signal represented.
\begin{figure}[t!]
\centering
\includegraphics[width=.3\textwidth]{figure/overfitting.png}
\caption{Overfitting Phenomenon:
From left to right, the images depict the rainy image to process, the watermarked image from the overwriting network, the overwriting watermark, and the retrieved watermark from the second image by the retrieval network.
The watermark can be retrieved from any container image that has some steganographic content.}
\label{fig:overfitting}
\end{figure}
Although this overfitting phenomenon allows the method to withstand an overwriting attack, it is ultimately harmful to the method.
This is because the watermarking scheme is nullified if a valid watermark can be retrieved from a container image that does not actually contain the corresponding watermark information.
We managed to overcome this problem with a fairly simple manoeuvre.
We trained a watermark-free surrogate model, and then, we added its output images into the adversarial stage of fine-tuning the retrieval network.
The retrieval network was therefore made to differentiate the outputs of the watermark-free surrogate model from those of the watermarked surrogate model, and output the null images correspondingly.
This extra step successfully mitigates this problem.
\subsection{Defenses}
Lastly, we trained the defense network with the same hyperparameters as above.
The main idea was to concurrently train a watermarking network and an overwriting network, and to make the retrieval network retrieve the watermark from the overwritten container image.
Meanwhile, as the adversary, the overwriting network attempts to overwrite the watermark within the container image so that the retrieval network will only yield null images.
Figure \ref{fig:train_o} directly illustrates the training process, where the defense network is trained to embed the watermark into an arbitrary image, and retrieve the watermark from the container and overwritten images.
Further, the retrieval network must generate a null image if the input does not contain the embedded watermark signal.
The settings in the fine-tuning stage were almost the same as for the watermarking network's adversarial stage.
Additionally, the overwriting network also participated in this stage so as to force the retrieval network to produce the watermark when it encounters an overwritten container image, and a null image when it encounters a container image generated only by the overwriting network.
Finally, we tested the defense network on two datasets with different settings.
Table \ref{tab:res_def} shows the test results.
As shown, the container images generated by the defense network have a better visual quality than those generated by the watermarking network.
Among the watermark retrieval tests, all success rates reached 100\% except for the direct overwriting attack on the 8-bit dataset, which verifies the efficacy of our defense method.
\begin{table}[t!]
\centering
\scriptsize
\caption{Watermark Retrieval Results Comparison among Defenses}
\renewcommand\arraystretch{1.5}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{l|ccccccc}
\toprule
Condition/Metric & \textbf{PSNR} & \textbf{SSIM} & \textbf{NCC} & \textbf{SR}(\%) \\
\midrule
\textbf{De-raining $\times$ $\mcl O$} & 34.07 & 0.9022 & 0.9997 & 100 \\
\textbf{De-raining $\times$ $\mcl O$ $\times$ $\mcl U$} & / & / & 0.9924 & 100 \\
\textbf{De-raining $\times$ $\mcl O$ $\times$ $\mcl U$ $\times$ UNet} & / & / & 0.9915 & 100 \\
\textbf{De-raining $\times$ $\mcl O$ $\times$ $\mcl U$ $\times$ Res16} & / & / & 0.9914 & 100 \\
\textbf{8-bit $\times$ $\mcl O$} & 34.54 & 0.8796 & 0.9998 & 100 \\
\textbf{8-bit $\times$ $\mcl O$ $\times$ $\mcl U$} & / & / & 0.9040 & 0.81 \\
\textbf{8-bit $\times$ $\mcl O$ $\times$ $\mcl U$ $\times$ UNet} & / & / & 0.9991 & 100 \\
\textbf{8-bit $\times$ $\mcl O$ $\times$ $\mcl U$ $\times$ Res16} & / & / & 0.9982 & 100 \\
\bottomrule
\end{tabular}}%
\label{tab:res_def}%
\end{table}%
\section{Discussion}
\label{sect:dis}
\subsection{Analysis of the Overwriting Attack}
\begin{figure*}[t!]
\centering
\begin{subfigure}[b]{.45\textwidth}
\includegraphics[width=\textwidth]{figure/freq_als_atk.pdf}
\caption{The Overwriting Attack}
\label{fig:freq_als_atk}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/freq_als_def.pdf}
\caption{The Defense Method}
\label{fig:freq_als_def}
\end{subfigure}
\caption{Frequency Analysis}
\label{fig:freq_als}
\end{figure*}
\subsubsection{Frequency Analysis}
The objective of the study is to investigate the cause of the overwriting attack's capability to render the embedded watermark in the container image ineffective. This is achieved by calculating the Azimuthal Integral of the experimental images and comparing their frequency domains. The final data is obtained by averaging the Azimuthal Integral computed from 1,000 groups of test images, each group consisting of the container image generated by the watermarking network, the cover image, the overwritten container image, the output from the surrogate model, and the overwritten output from the surrogate model. The images within each group correspond to the same processed image.
Typically, images processed by Deep Convolutional Neural Networks (DCNNs) display a bias in the high frequency domain. As illustrated in Figure \ref{fig:freq_als_atk}, the container image generated by the watermarking network and its corresponding image generated by the surrogate model exhibit an abnormally high amplitude in the high frequency domain, which distinguishes them greatly from the cover image. This is the reason why the watermark can be invisibly embedded into the cover image, as human eyes are not sensitive enough to the high frequency domain of an image.
However, through fine-tuning, the retrieval network in the watermarking network can still retrieve the watermark from the surrogate model's output, despite its significant deviation from the frequency distribution of the container image. This emphasizes the significance of the fine-tuning stage. In the case of the overwritten container image, it displays a marked bias in the high frequency domain, both in comparison to the cover image and the watermarked image. A peak can be observed in the range of 160 to 175 on the frequency axis, which neutralizes the previously embedded watermark.
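The azimuthal integral used here can be computed as in the sketch below, which averages the (log) power spectrum over rings of equal radius around the DC component; the grayscale input and the logarithmic scaling are assumptions made for illustration.
\begin{verbatim}
# Sketch of the azimuthal integral (radially averaged spectrum).
import numpy as np

def azimuthal_average(img):
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.log(np.abs(f) ** 2 + 1e-8)
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts      # 1-D profile over frequency radius
\end{verbatim}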
To further ascertain the location where the watermark is embedded, a low-pass filter is applied to the watermarked images. The filtered image retains its visual quality to the extent that changes are not easily noticeable by the human eye. This filter is applied to 1,000 container images and then the watermark retrieval is performed. As expected, the success rate of the retrieval drops to 0, and the direct effect can be seen in Figure \ref{fig:low_pass}, where each column in each row, from left to right, respectively represents the container image, the retrieved watermark, the filtered container image, and the nullified retrieval result. This underscores the high sensitivity of the watermark retrieval to the high frequency distribution of the container image.
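The low-pass filtering step can be reproduced with a simple frequency-domain mask as sketched below; the cutoff radius is an assumed value, chosen only so that the visual quality of the filtered image is preserved.
\begin{verbatim}
# Sketch of the low-pass filter applied to a container image.
import numpy as np

def low_pass(img, radius=100):      # radius is an assumed cutoff
    h, w = img.shape[:2]
    y, x = np.indices((h, w))
    mask = np.hypot(y - h // 2, x - w // 2) <= radius
    out = np.empty_like(img, dtype=float)
    for ch in range(img.shape[2]):  # filter each channel
        f = np.fft.fftshift(np.fft.fft2(img[:, :, ch]))
        out[:, :, ch] = np.fft.ifft2(
            np.fft.ifftshift(f * mask)).real
    return np.clip(out, 0, 255)
\end{verbatim}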
\begin{figure}[t!]
\centering
\includegraphics[width=.3\textwidth]{figure/low_pass_attack.png}
\caption{Watermark Retrieval of Low-pass Filtered Container Images}
\label{fig:low_pass}
\end{figure}
\subsection{Analysis of the Defense Network}
\subsubsection{Residue Analysis}
First, we performed a residue analysis on the container images generated by our defense and watermarking networks.
The details can be seen in Figure \ref{fig:residue_als}, where from left to right, each column in each row respectively represents the cover image, the container image, and the residue enhanced 10 times.
Intuitively, the residues of the defense network's output are visibly darker (i.e., smaller, and thus better) than those of the watermarking network's output.
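The residue images follow the usual visualization procedure, sketched below for 8-bit images: the absolute difference between the cover and container images is amplified tenfold and clipped to the displayable range.
\begin{verbatim}
import numpy as np

def residue(cover: np.ndarray, container: np.ndarray, gain: int = 10) -> np.ndarray:
    """Absolute cover/container difference, amplified for display (8-bit images)."""
    diff = np.abs(container.astype(np.int16) - cover.astype(np.int16))
    return np.clip(diff * gain, 0, 255).astype(np.uint8)
\end{verbatim}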
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/residue_oath.png}
\caption{Defense Network}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/residue_wmer.png}
\caption{Watermarking Network}
\end{subfigure}
\caption{Residue Analysis (10x Enhanced)}
\label{fig:residue_als}
\end{figure}
\subsubsection{Frequency Analysis}
In the adversarial stage of the watermarking network, the retrieval network is required to retrieve the watermark from both the surrogate model's output and the container images.
Because the overwritten surrogate model's output exhibits the frequency-domain bias shown in Figure \ref{fig:freq_als_atk}, the retrieval fails: the network has never encountered an input with such a frequency distribution.
In the defense network's fine-tuning stage, however, the surrogate model's output is protected by the defense network, and it shares almost the same frequency distribution as the overwritten surrogate model's output in Figure \ref{fig:freq_als_def}.
This forces the retrieval network to become more robust to a mutable watermark signal.
Further, with the assistance of the frequency loss, the container images generated by the defense network have a frequency distribution more similar to that of the cover images than those generated by the watermarking network.
Our defense method therefore shows good robustness to the overwriting attack, even if the type of the surrogate model does not match the one used in fine-tuning.
Nevertheless, an adversary could still switch to an attack that cripples the watermark at a greater cost in visual quality, for example low-pass filtering.
\subsection{Ablation Study}
\subsubsection{The Frequency Loss}
The ablation experiments in Zhang {\it et al.} \cite{zhang2020model} prove the necessity of several loss terms, including the clean loss and the consistent loss.
In our defense network, the frequency loss regularizer is added to the loss function to guide the network toward generating container images whose frequency distribution is closer to that of the cover image.
The frequency loss is, in fact, the only difference between the loss terms of our defense network and those of the watermarking network.
It boosts image quality, as the test results in Tables \ref{tab:res_org} and \ref{tab:res_def} show:
both the PSNR and SSIM values of the container images generated by our defense network are higher than those from the watermarking network.
Further, as Figure \ref{fig:freq_als_def} shows, the high-frequency distribution of the containers from the defense network is closer to that of the cover image than that of the containers from the watermarking network.
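As an illustration, one plausible instantiation of such a regularizer penalizes the distance between the (log-)magnitude spectra of the container and cover batches; the exact form and weighting used in the defense network may differ.
\begin{verbatim}
import torch
import torch.nn.functional as F

def frequency_loss(container: torch.Tensor, cover: torch.Tensor) -> torch.Tensor:
    """L1 distance between log-magnitude spectra of (B, C, H, W) image batches."""
    mag_container = torch.log1p(torch.abs(torch.fft.fft2(container, norm="ortho")))
    mag_cover = torch.log1p(torch.abs(torch.fft.fft2(cover, norm="ortho")))
    return F.l1_loss(mag_container, mag_cover)
\end{verbatim}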
\subsubsection{Fine-tuning}
Unlike the original watermarking method in \cite{zhang2020model}, we additionally add to the fine-tuning dataset a group of watermark-free images generated by a surrogate model trained on the watermark-free training dataset.
This prevents the watermarking network from overfitting to the steganographic signal in the container images, i.e., from retrieving the watermark regardless of which watermark signal, if any, actually lies in the container images.
Figure \ref{fig:overfitting} shows how the overfitting phenomenon nullifies the watermarking method.
Therefore, the watermark-free surrogate model's output is essential in this stage.
If a watermark can be retrieved from a container image that does not contain the specific watermarking signal, the method can be claimed unreliable.
Further, by inserting the overwriting network into the watermarking network to form the defense network, the defense network is pushed to become more robust to both the overwriting attack and the addition of noise.
Moreover, the embedding network hides the watermark more covertly, and the retrieval network learns to differentiate container images carrying the specific watermark (including those additionally carrying an overwritten watermark) from watermark-free images, the watermark-free surrogate model's outputs, and container images carrying any other secret image.
\section{Conclusion}
\label{sect:con}
In this study, we present an overwriting attack that effectively nullifies the watermark embedded in images processed by image processing neural networks. Our attack is also a threat to deep steganography, as it can invisibly replace a secret image with minimal impact on the visual quality of the image. Additionally, we identify an overfitting issue in the original watermarking method and resolve it with an alternative training approach.
To defend against our proposed overwriting attack, we develop an adversarial framework, the defense network, which integrates the watermarking network and the overwriting network. To the best of our knowledge, it is the first defense network shown to be resilient against such an overwriting attack. Through adversarial training, the defense network is able to retrieve valid watermarks from overwritten images and from the output of the overwritten surrogate model.
There is ample room for future research in the area of image-processing model watermarking, including the development of robust watermarking techniques and malicious attacks. Although our method demonstrates robustness against overwriting attacks, the adversary can still manipulate the frequency domain of the output to erase the embedded watermark with minimal perceptual impact. To address this issue, a more robust watermarking method that embeds the watermark in the low frequency domain of the image should be explored.
\bibliographystyle{IEEEtran}
\begin{abstract}
Multispectral pedestrian detection is a technology designed to detect and locate pedestrians in Color and Thermal images, and it has been widely used in automatic driving, video surveillance, etc. So far, most available multispectral pedestrian detection algorithms have achieved only limited success, because they fail to take into account the confusion between pedestrian information and background noise in Color and Thermal images. Here we propose a multispectral pedestrian detection algorithm, which mainly consists of a cascaded information enhancement module and a cross-modal attention feature fusion module. On the one hand, the cascaded information enhancement module adopts channel and spatial attention mechanisms to perform attention weighting on the features fused by the cascaded feature fusion block. Moreover, it multiplies the single-modal features element by element with the attention weights to enhance the pedestrian features in each single modality and thus suppress interference from the background. On the other hand, the cross-modal attention feature fusion module mines the features of the Color and Thermal modalities so that they complement each other; global features are then constructed by adding the cross-modal complemented features element by element and are attention-weighted to achieve an effective fusion of the two modal features. Finally, the fused features are input into the detection head to detect and locate pedestrians. Extensive experiments have been performed on two improved versions of annotations (sanitized annotations and paired annotations) of the public dataset KAIST. The experimental results show that our method achieves a lower pedestrian miss rate and more accurate pedestrian detection boxes than the comparison methods. Additionally, the ablation experiments also prove the effectiveness of each module designed in this paper.
\tiny
\fontsize{8}{11}\helveticabold { \section{Keywords:} multispectral pedestrian detection, attention mechanism, feature fusion, convolutional neural network, background noise}
\end{abstract}
\section{Introduction}
Pedestrian detection, which parses visual content to identify and locate pedestrians in an image or video, has been viewed as an essential and central task within the computer vision field and is widely employed in various applications, e.g., autonomous driving, video surveillance and person re-identification \citep{jeong2017,zhang2018,lilingli2021,chenyiwen2021,dongneng2022,lishuang,wangshujuan}. The performance of such technology has advanced greatly with the adoption of convolutional neural networks (CNNs). Typically, pedestrian detectors take Color images as input and try to retrieve pedestrian information from them. However, the quality of Color images depends heavily on the lighting conditions. Pedestrians are frequently missed when detectors process Color images with poor resolution and contrast caused by unfavorable lighting. Consequently, the usefulness of such models for all-weather applications has been limited.
Thermal imaging captures the infrared radiation of pedestrians and is barely affected by changes in ambient light. The technique of combining Color and Thermal images has therefore been explored in recent years \citep{hwang2015,liu2016,gonzalez2016,yangmoyuan2020,liu2020,li2018joint,li2020discriminative,wangjiaxin2021,huangbochun}. These methods have been shown to improve pedestrian detection performance in complex environments, as they can retrieve more pedestrian information. However, despite important initial success, two major challenges remain. First, as shown in Figure \ref{fig:1}, pedestrians tend to blend with the background in nighttime Color images because of insufficient light \citep{zhuzhipin}, and in daytime Thermal images as well because of the similar temperatures of the human body and the ambient environment \citep{yang2022}. Second, there is an essential difference between Color and Thermal images: the former display the color and texture details of pedestrians, while the latter show temperature information. Therefore, measures need to be taken to augment the pedestrian features in the Color and Thermal modalities in order to suppress background interference, and to better integrate and understand both Color and Thermal images so as to improve the accuracy of pedestrian detection in complex environments.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=15cm]{logo2.eps}
\end{center}
\caption{Example of Color and Thermal images of pedestrians in daytime and nighttime scenes}\label{fig:1}
\end{figure}
To address the challenges above, the studies \citep{guan2019,zhou2020} designed illumination-aware networks to obtain illumination-measured parameters of the Color and Thermal images, which were used as fusion weights for the Color and Thermal features in order to fuse the two modal features adaptively. However, the acquisition of illumination-measured parameters relied heavily on classification scores, whose accuracy was limited by the performance of the classifier. Li et al. \citep{li2022} reported confidence-aware networks to predict the confidence of the detection boxes for each modality, and Dempster-Shafer combination rules were then employed to fuse the results of the different branches based on uncertainty. Nevertheless, the accuracy of the predicted detection-box confidence is also affected by the performance of the confidence-aware network. A cyclic fusion and refinement scheme was introduced in \citep{zhang2020b} to gradually improve the quality of the Color and Thermal features and to automatically adjust the balance of complementary and consistent information between the two modalities, so as to effectively utilize the information of both. However, this method only used a simple feature cascade operation to fuse Color and Thermal features and failed to fully exploit the complementary features of the two modalities.
To tackle the problems aforementioned, we propose a multispectral pedestrian detection algorithm with cascaded information enhancement and cross-modal attention feature fusion. The cascaded information enhancement module (CIEM) is designed to enhance the pedestrian information suppressed by the background in the Color and Thermal images. CIEM uses a cascaded feature fusion block to fuse Color and Thermal features to obtain fused features of both modalities. Since the fused features contain the consistency and complementary information of Color and Thermal modalities, the fused features can be used to enhance Color and Thermal features respectively to reduce the interference of background on pedestrian information. Inspired by the attention mechanism, the attention weights of the fused features are sequentially obtained by channel and spatial attention learning, and the Color and Thermal features are multiplied with the attention weights element by element, respectively. In this way, the single-modal features have the combined information of the two modalities, and the single-modal information is enhanced from the perspective of the fused features. Although CIEM enriches single-modal pedestrian features, simple feature fusion of the enhanced single-modal features is still insufficient for robust multispectral pedestrian detection. Thus, we design the cross-modal attention feature fusion module (CAFFM) to efficiently fuse Color and Thermal features. Cross-modal attention is used in this module to implement the differentiation of pedestrian features between different modalities. In order to supplement the pedestrian information of the other modality to the local modality, the attention of the other modality is adopted to augment the pedestrian characteristics of the local modality. A global feature is constructed by adding the Color and Thermal features after performing cross-modal feature enhancement, and the global feature is used to guide the fusion of the Color and Thermal features. Overall, the method presented in this paper enables more comprehensive pedestrian features acquisition through cascaded information enhancement and cross-modal attention feature fusion, which effectively enhances the accuracy of multispectral image pedestrian detection. The main contributions of this paper are summarized as follows:
(1) A cascaded information enhancement module is proposed. From the perspective of fused features, it reduces the interference from the background of Color and Thermal modalities on pedestrian detection and augments the pedestrian features of Color and Thermal modalities separately through an attention mechanism.
(2) The designed cross-modal attention feature fusion module first mines the features of both Color and Thermal modalities separately through a cross-modal attention network and adds them to the other modality for cross-modal feature enhancement. Meanwhile, the cross-modal enhanced Color and Thermal features are used to construct global features to guide the feature fusion of the two modalities.
(3) Numerous experiments are conducted on the public dataset KAIST to demonstrate the effectiveness and superiority of the proposed method. In addition, the ablation experiments also demonstrate the effectiveness of the proposed modules.
\section{RELATED WORKS}
\subsection{Multispectral Pedestrian Detection}
Multispectral sensors can obtain paired Color-Thermal images to provide complementary information about pedestrian targets. A large multispectral pedestrian detection (KAIST) dataset was constructed by \citep{hwang2015}. Meanwhile, by combining the traditional aggregated channel feature (ACF) pedestrian detector \citep{dollar2014} with the HOG algorithm \citep{dalal2015}, an extended ACF (ACF+T+THOG) method was proposed to fuse Color and Thermal features. In 2016, Liu et al. \citep{liu2016} proposed four fusion schemes (low-layer feature, middle-layer feature, high-layer feature, and confidence score fusion) with VGG16 as the backbone network, and middle-layer feature fusion was shown to offer the best integration of Color and Thermal features. Inspired by this, the work in \citep{konig2017} developed a multispectral region candidate network with Faster RCNN (Region with CNN features, RCNN) \citep{ren2017} as the architecture and replaced the original classifier in Faster RCNN with an enhanced decision tree classifier to reduce missed and false detections of pedestrians. Recently, Kim et al. \citep{kim2021a} deployed EfficientDet as the backbone network and proposed an EfficientDet-based fusion framework for multispectral pedestrian detection, which improves detection accuracy in Color and Thermal images by adding and cascading the Color and Thermal features. Although the studies \citep{hwang2015,liu2016,konig2017,kim2021a} fused Color and Thermal features for pedestrian detection, they mainly focused on exploring the impact of different fusion stages on pedestrian detection, adopted only simple feature fusion, and did not focus on the case where pedestrians are confused with the background.
In 2019, Zhang et al. \citep{zhang2019a} observed a weak alignment problem of pedestrian position between Color and Thermal images, for which the KAIST dataset was re-annotated and Aligned Region CNN (AR-CNN) was proposed to handle weakly aligned multispectral pedestrian detection data in an end-to-end manner. But the deployment of this algorithm requires pairs of annotations, and the annotation of the dataset is a time-consuming and labor-intensive task, which makes the algorithm difficult to be applied in realistic scenes. Kim et al. \citep{kim2021b} proposed a new single-stage multispectral pedestrian detection framework. This framework used multi-label learning to learn input state-aware features based on the state of the input image pair by assigning an individual label (if the pedestrian is visible in only one image of the image pair, the label vector is assigned as $y_1 \in[0,1]$ or $ {y_2} \in [1,0] $ ; if the pedestrian is visible in both images of the image pair, the label vector is assigned as $ {y_3} \in [1,1] $ ) to solve the problem of weak alignment of pedestrian locations between Color and Thermal images, but the model still requires pairs of annotations during training. Guan et al. \citep{guan2019} designed illumination-aware networks to obtain illumination-measured parameters for Color and Thermal images separately and used them as the fusion weights for Color and Thermal features. Zhou et al. \citep{zhou2020} designed a differential modality perception fusion module to guide the features of the two modalities to become similar, and then used the illumination perception network to assign fusion weights to the Color and Thermal features. Kim et al. \citep{kim2022} reported an uncertainty-aware cross-modal guidance (UCG) module to guide the distribution of modal features with high prediction uncertainty to align with the distribution of modal features with low prediction uncertainty. The researches \citep{guan2019,zhou2020} noticed that the pedestrians in Color and Thermal images are easily confused with the background and used illumination-aware networks to assign fusion weights to Color and Thermal features. However, the acquisition of illumination-measured parameters relied heavily on the classification scores, whose accuracy was limited by the performance of the classifier. In contrast, the method proposed in this paper not only considers the confusion of pedestrians and background in Color and Thermal images but also effectively fuses the two modal features.
\subsection{Attention Mechanisms}
Attention mechanisms \citep{atention} in computer vision are designed to focus processing on the most informative parts of the visual input. Currently, attention mechanisms are widely used in semantic segmentation \citep{li2020a}, image captioning \citep{li2020b}, image fusion \citep{xiaowanxin2022,cenyueliang2021}, image dehazing \citep{gaojirui2022}, saliency target detection \citep{xu2021}, person re-identification \citep{xukaixiong2022,ACMMM2021,wangyiming}, etc. Hu et al. \citep{hu2020} introduced the squeeze-and-excitation network (SENet) to model the interdependence between feature channels and generate channel attention that recalibrates channel-wise feature maps. Li et al. \citep{li2019a} employed a selective kernel unit (SKNet) to adaptively fuse branches with different kernel sizes based on the input information. Inspired by this, Dai et al. \citep{dai2021} designed a multi-scale channel attention feature fusion network that uses channel attention mechanisms to replace simple fusion operations, such as feature cascades or summations, to produce richer feature representations. However, recent progress in multispectral pedestrian detection remains limited by two main challenges: the interference caused by the background, and the difference in fundamental characteristics between Color and Thermal images. Therefore, we propose a multispectral pedestrian detection algorithm with cascaded information enhancement and cross-modal attention feature fusion based on the attention mechanism.
\section{Methods}
The overall network framework of the proposed algorithm is shown in Figure \ref{fig:2}. The network consists of an encoder, a cascaded information enhancement module (CIEM), a cross-modal attentional feature fusion module (CAFFM) and a detection head. Specifically, ResNet-101 \citep{he2016} is used as the backbone network of the encoder to encode the features of the input Color images $\boldsymbol{X}_c$ and Thermal images $\boldsymbol{X}_t$ to obtain the corresponding feature maps ${\boldsymbol{F}_c} \in {{\rm{R}}^{W \times H \times C}}$ and ${\boldsymbol{F}_t} \in {{\rm{R}}^{W \times H \times C}}$ ($ W $, $ H $, $ C $ represent the width, height and the number of channels of the feature maps, respectively). CIEM enhances single-modal information from the perspective of fused features by cascading feature fusion blocks to fuse ${\boldsymbol{F}_c}$ and ${\boldsymbol{F}_t}$, and attention weighting the fused features to enrich pedestrian features. CAFFM complements the features of different modalities by mining the complementary features between the two modalities and constructs global features to guide the effective fusion of the two modal features. The detection head is employed for pedestrian recognition and localization of the final fused features.
\subsection{Cascaded Information Enhancement Module}
Considering the confusion of pedestrians with the backgrounds in Color and Thermal images, we design a cascaded information enhancement module (CIEM) to augment the pedestrian features of both modalities to mitigate the effect of background interference on pedestrian detection.
Specifically, a cascaded feature fusion block is used to fuse the Color features ${\boldsymbol{F}_c}$ and Thermal features ${\boldsymbol{F}_t}$. The cascaded feature fusion block consists of a feature cascade, a $1 \times 1$ convolution, a $3 \times 3$ convolution, a $BN$ layer, and a $ReLu$ activation function. The feature cascade operation splices ${\boldsymbol{F}_c}$ and ${\boldsymbol{F}_t}$ along the channel direction. The $1 \times 1$ convolution facilitates cross-channel feature interaction and reduces the number of channels of the spliced feature map, while the $3 \times 3$ convolution enlarges the receptive field and fuses the features more comprehensively to generate the fused feature $ {\boldsymbol{F}_{ct}}$ :
\begin{equation}
\boldsymbol{F}_{c t}={ReLu}\left({BN}\left({Conv}_3\left({Conv}_1\left[\boldsymbol{F}_c, \boldsymbol{F}_t\right]\right)\right)\right)
\end{equation}
where $ BN $ denotes batch normalization, $ Con{v_n}\left( \cdot \right)\ $ denotes a convolution kernel with kernel size $ n \times n $,
$ [ \cdot , \cdot ] $ denotes the cascade of features along the channel direction, $ ReLu( \cdot )$ represents $ReLu$ activation function. Fusion feature $ {\boldsymbol{F}_{ct}}$ is used to enhance the single-modal information because $ {\boldsymbol{F}_{ct}}$ combines the consistency and complementarity of the Color features ${\boldsymbol{F}_c}$ and Thermal features ${\boldsymbol{F}_t}$ . The use of $ {\boldsymbol{F}_{ct}}$ for enhancing the single-modal feature can reduce the interference of the noise in the single-modal features (for example, it is difficult to distinguish between the pedestrian information and the background noise).
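A minimal PyTorch sketch of the cascaded feature fusion block described above is given below; the channel width follows the backbone, and the padding of the $3 \times 3$ convolution is an assumption.
\begin{verbatim}
import torch
import torch.nn as nn

class CascadedFeatureFusion(nn.Module):
    """Concatenate F_c and F_t, then apply 1x1 conv, 3x3 conv, BN and ReLU."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, f_c: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
        x = torch.cat([f_c, f_t], dim=1)   # cascade along the channel direction
        return self.relu(self.bn(self.conv3(self.conv1(x))))
\end{verbatim}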
\begin{figure}[h!]
\begin{center}
\includegraphics[width=18cm]{logo3.eps}
\end{center}
\caption{Overall framework of the proposed algorithm}\label{fig:2}
\end{figure}
In order to effectively enhance pedestrian features, the fusion feature $ {\boldsymbol{F}_{ct}}$ is sent into the channel attention module (CAM) and spatial attention module (PAM) \citep{woo2018} to make the network pay attention to pedestrian features. The network structure of CAM and PAM is shown in Figure \ref{fig:3}. $ {\boldsymbol{F}_{ct}}$ first learns the channel attention weight ${\boldsymbol{w}_{ca}} \in {{\rm{R}}^{1 \times 1 \times C}}$ by CAM, then uses $ {\boldsymbol{w}_{ca}}$ to weight $ {\boldsymbol{F}_{ct}}$ , and the spatial attention weight $ {\boldsymbol{w}_{pa}} \in {{\rm{R}}^{W \times H \times 1}} $ is obtained from the weighted features by PAM.
The single-modal Color features ${\boldsymbol{F}_c}$ and Thermal features ${\boldsymbol{F}_t}$ are multiplied element by element with the attention weights $ {\boldsymbol{w}_{ca}}$ and $ {\boldsymbol{w}_{pa}}$ to enhance the single-modal features from the perspective of fused features. The whole process can be described as follows:
\begin{equation}
\boldsymbol{F}_t^{\prime}=\left(\boldsymbol{F}_t \otimes \boldsymbol{w}_{c a}\right) \otimes \boldsymbol{w}_{p a}
\end{equation}
\begin{equation}
\boldsymbol{F}_c^{\prime}=\left(\boldsymbol{F}_c \otimes \boldsymbol{w}_{c a}\right) \otimes \boldsymbol{w}_{p a}
\end{equation}
where $\boldsymbol{F}_t^{\prime}$ and $\boldsymbol{F}_c^{\prime}$ denote the Color features and Thermal features obtained by the cascaded information enhancement module, respectively. $\otimes$ represents the element by element multiplication.
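The attention weighting of CIEM can be sketched as follows, reusing the \texttt{CascadedFeatureFusion} block above. CAM and PAM follow the CBAM-style design of \citep{woo2018}; the reduction ratio, the single average-pooling branch in CAM, and the $7 \times 7$ spatial kernel are simplifying assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class CIEM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fuse = CascadedFeatureFusion(channels)
        self.cam = nn.Sequential(                 # channel attention -> w_ca
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.pam = nn.Sequential(                 # spatial attention -> w_pa
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, f_c, f_t):
        f_ct = self.fuse(f_c, f_t)
        w_ca = self.cam(f_ct)                                    # (B, C, 1, 1)
        weighted = f_ct * w_ca
        stats = torch.cat([weighted.mean(dim=1, keepdim=True),
                           weighted.amax(dim=1, keepdim=True)], dim=1)
        w_pa = self.pam(stats)                                   # (B, 1, H, W)
        return (f_c * w_ca) * w_pa, (f_t * w_ca) * w_pa          # F'_c, F'_t
\end{verbatim}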
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\textwidth]{logo4.eps}
\end{center}
\caption{Network structure of channel attention and spatial attention}\label{fig:3}
\end{figure}
\subsection{Cross-modal Attention Feature Fusion Module}
There is an essential difference between Color and Thermal images, Color images reflect the color and texture detail information of pedestrians while Thermal images contain the temperature information of pedestrians, however, they also have some complementary information. In order to explore the complementary features of different image modalities and fuse them effectively, we design a cross-modal attention feature fusion module.
Specifically, the Color features $\boldsymbol{F}_c^{\prime} $ and Thermal features $\boldsymbol{F}_t^{\prime}$ enhanced by CIEM are first mapped into feature vectors $\boldsymbol{v}_c \in \mathrm{R}^{1 \times 1 \times C}$ and $\boldsymbol{v}_t \in \mathrm{R}^{1 \times 1 \times C}$ , respectively, by using global average pooling operation. The cross-modal attention network consists of a set of symmetric $1 \times 1$ convolutions, $R e L u$ activation functions, and $Sigmoid$ activation functions. In order to obtain the complementary features of the two modalities, more pedestrian features need to be mined from the single-modal. The feature vectors $\boldsymbol{v}_t$ and $\boldsymbol{v}_c$ are learned to the respective modal attention weights $\boldsymbol{w}_t \in \mathrm{R}^{1 \times 1 \times C}$ and $\boldsymbol{w}_c \in \mathrm{R}^{1 \times 1 \times C}$ by a cross-modal attention network, and then the Color features $\boldsymbol{F}_c^{\prime} $ are multiplied element by element with the attention weights $\boldsymbol{w}_t$ of the Thermal modality, and the Thermal features $\boldsymbol{F}_t^{\prime}$ are multiplied element by element with the attention weights $\boldsymbol{w}_c$ of the Color modality to complement the features of the other modality into the present modality. The specific process can be expressed as follows.
\begin{equation}
\boldsymbol{w}_t=\operatorname{\mathit{Sigmoid}}\left(\mathit{ReLu}\left(\mathit{Conv}_1\left(\mathit{GAP}\left(\boldsymbol{F}_t^{\prime}\right)\right)\right)\right)
\end{equation}
\begin{equation}
\boldsymbol{F}_{c t}^{\prime}=\boldsymbol{w}_t \otimes G A P\left(\boldsymbol{F}_c^{\prime}\right)
\end{equation}
\begin{equation}
\boldsymbol{w}_c={Sigmoid}\left({ReLu}\left({Conv}_1\left({GAP}\left(\boldsymbol{F}_c^{\prime}\right)\right)\right)\right)
\end{equation}
\begin{equation}
\boldsymbol{F}_{t c}^{\prime}=\boldsymbol{w}_c \otimes G A P\left(\boldsymbol{F}_t^{\prime}\right)
\end{equation}
where $\boldsymbol{F}_{c t}^{\prime}$ denotes Color features after supplementation with Thermal features, $\boldsymbol{F}_{t c}^{\prime}$ denotes Thermal features after supplementation with Color features, $G A P(\cdot)$ denotes global average pooling operation, ${Conv}_1(\cdot)$ denotes convolution with convolution kernel size $1 \times 1$ , $ ReLu( \cdot )$ denotes $ReLu$ activation operation, and $Sigmoid$ $(\cdot)$ denotes $Sigmoid $ activation operation.
In order to efficiently fuse the two modal features, the features $\boldsymbol{F}_{c t}^{\prime}$ and $\boldsymbol{F}_{t c}^{\prime}$ are subjected to an element by element addition operation to obtain a global feature vector containing Color and Thermal features. Then, the features $\boldsymbol{F}_t^{\prime}$ and $\boldsymbol{F}_c^{\prime} $ are added element by element and multiplied with the attention weight $\boldsymbol{w}_{c t}$ of the global feature vector element by element to guide the fusion of Color and Thermal features from the perspective of global features to obtain the final fused feature $\boldsymbol{F}$. The fused feature $\boldsymbol{F}$ is input to the detection head to obtain the pedestrian detection results. The feature fusion process can be expressed as follows:
\begin{equation}
\boldsymbol{w}_{c t}={Sigmoid}\left({ReLu}\left({Conv}_1\left(\boldsymbol{F}_{c t}^{\prime} \oplus \boldsymbol{F}_{t c}^{\prime}\right)\right)\right)
\end{equation}
\begin{equation}
\boldsymbol{F}=\boldsymbol{w}_{c t} \otimes\left(\boldsymbol{F}_t^{\prime} \oplus \boldsymbol{F}_c^{\prime}\right)
\end{equation}
where $\oplus$ denotes element by element addition.
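A PyTorch sketch of CAFFM following the equations above is shown below; keeping the width of the $1 \times 1$ convolutions equal to the input channel count is an assumption.
\begin{verbatim}
import torch
import torch.nn as nn

class CAFFM(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)
        def attn():
            return nn.Sequential(nn.Conv2d(channels, channels, kernel_size=1),
                                 nn.ReLU(inplace=True), nn.Sigmoid())
        self.attn_t, self.attn_c, self.attn_g = attn(), attn(), attn()

    def forward(self, f_c, f_t):
        v_c, v_t = self.gap(f_c), self.gap(f_t)   # 1x1xC feature vectors
        w_t, w_c = self.attn_t(v_t), self.attn_c(v_c)
        f_ct = w_t * v_c                          # Thermal attention -> Color vector
        f_tc = w_c * v_t                          # Color attention -> Thermal vector
        w_ct = self.attn_g(f_ct + f_tc)           # global attention weight
        return w_ct * (f_t + f_c)                 # fused feature F
\end{verbatim}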
\subsection{Loss Function}
The loss function in this paper is consistent with the literature \citep{ren2017} and uses the Region Proposal Network (RPN) loss function $L_{R P N}$ and Fast RCNN \citep{girshick2015} loss function $L_{F R}$ to jointly optimize the network:
\begin{equation}
L=L_{R P N}+L_{F R}
\end{equation}
Both $L_{R P N}$ and $L_{F R}$ consist of classification loss $L_{c l s}$ and bounding box regression loss $L_{r e g}$:
\begin{equation}
L\left(\left\{p_i\right\},\left\{t_i\right\}\right)=\frac{1}{N_{c l s}} \sum_i L_{c l s}\left(p_i, p_i^*\right)+\lambda \frac{1}{N_{\text {reg }}} \sum_i p_i^* L_{r e g}\left(t_i, t_i^*\right)
\end{equation}
where $N_{c l s}$ is the number of anchors, $N_{r e g}$ is the total number of positive and negative samples, $p_{i}$ is the predicted probability that the $i$-th anchor is a target, $p_i^*$ is 1 when the anchor is a positive sample and 0 otherwise, $t_{i}$ denotes the predicted bounding box regression parameters of the $i$-th anchor, $t_i^*$ denotes the ground-truth bounding box parameters of the $i$-th anchor, and $\lambda=1$.
The difference between the classification loss of RPN network and Fast RCNN network is that the RPN network focuses only on the foreground and background when classifying, so its loss is a binary cross-entropy loss, while the Fast RCNN classification is focused to the target category and is a multi-category cross-entropy loss:
\begin{equation}
L_{c l s}\left(p_i, p_i^*\right)=-\log \left[p_i^* p_i+\left(1-p_i^*\right)\left(1-p_i\right)\right]
\end{equation}
The bounding box regression loss of RPN network and Fast RCNN network uses Smooth $_{L_1}$ loss:
\begin{equation}
L_{\text {reg }}\left(t_i, t_i^*\right)=R\left(t_i-t_i^*\right)
\end{equation}
where $R$ denotes the Smooth$_{L_1}$ function:
\begin{equation}
\text { Smooth }_{L_1}(x)=\left\{\begin{array}{cc}
0.5(\sigma x)^2 & \text { if }|x|<\frac{1}{\sigma^2} \\
|x|-\frac{0.5}{\sigma^2} & \text { otherwise }
\end{array}\right.
\end{equation}
The only difference between the bounding box regression losses of the RPN and Fast RCNN is the value of $\sigma$: the RPN network is trained with $\sigma=3$ and the Fast RCNN network with $\sigma=1$.
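For completeness, the $\sigma$-parameterized Smooth$_{L_1}$ can be written as follows, mirroring the standard Faster RCNN implementation.
\begin{verbatim}
import torch

def smooth_l1(x: torch.Tensor, sigma: float) -> torch.Tensor:
    """Smooth L1 with sigma=3 for the RPN branch and sigma=1 for Fast RCNN."""
    beta = 1.0 / (sigma ** 2)
    abs_x = x.abs()
    return torch.where(abs_x < beta, 0.5 * (sigma * x) ** 2, abs_x - 0.5 * beta)
\end{verbatim}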
\section{EXPERIMENTAL RESULTS AND ANALYSIS}
\subsection{Datasets}
This paper evaluates the algorithm performance on the KAIST pedestrian dataset \citep{hwang2015}, which is composed of 95,328 pairs of Color and Thermal images captured during daytime and nighttime. It is the most widely used multispectral pedestrian detection dataset at present. The dataset is labeled with four categories: person, people, person?, and cyclist. Considering the application areas of multispectral pedestrian detection (e.g., automatic driving), all four categories are treated as positive examples for detection in this paper. To address the problem of annotation errors and missing annotations in the original annotation of the KAIST dataset, the studies \citep{liu2016,li2018,zhang2019a} performed data cleaning and re-annotation of the original data. Given that the annotations used in various studies are not consistent, we use 7601 pairs of Color and Thermal images from the sanitized annotation (SA) \citep{li2018} and 8892 pairs of Color and Thermal images from the paired annotation (PA) \citep{zhang2019a} for model training. The test set consists of 2252 pairs of Color and Thermal images, of which 1455 pairs are from the daytime and 797 pairs are from the nighttime. For a fair comparison with other methods, the test experiments were performed according to the reasonable setting proposed in the literature \citep{hwang2015}.
\subsection{Evaluation Indexes}
In this paper, Log-average Miss Rate (MR) proposed by Dollar et al.\citep {dollar2012} is employed as an evaluation index and combined with the plotting of the Miss Rate-FPPI curve to assess the effectiveness of the algorithm. The horizontal coordinate of the Miss Rate-FPPI curve indicates the average number of False Positives Per Image (FPPI), and the vertical coordinate represents the Miss Rate (MR), which is expressed as:
\begin{equation}
\text { MissRate }=\frac{F N}{T P+F N}
\end{equation}
\begin{equation}
F P P I=\frac{F P}{\text { Total }(\text { images })}
\end{equation}
where ${F N}$ denotes False Negative, ${T P}$ denotes True Positive, ${F P}$ denotes False Positive, the sum of ${T P}$ and ${F N}$ is the number of all positive samples, and $\text { Total }(\text { images })$ denotes the total number of predicted images. It is worth noting that the lower the Miss Rate-FPPI curve trend, the better the detection performance; the smaller the MR value, the better the detection performance.
To calculate MR, 9 FPPI values are sampled uniformly in logarithmic space from the range $\left[10^{-2}, 10^0\right]$ along the horizontal axis of the Miss Rate-FPPI curve, yielding 9 corresponding miss rates $m_1, m_2, \dots, m_9$. MR is obtained by averaging these values in log space:
\begin{equation}
\mathrm{MR}=\exp \left[\frac{1}{n} \sum_{i=1}^n \ln \left(m_i\right)\right]
\end{equation}
where $n$ is 9.
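The computation can be sketched as follows; the interpolation convention at each reference FPPI point is an assumption, as implementations differ slightly.
\begin{verbatim}
import numpy as np

def log_average_miss_rate(fppi: np.ndarray, miss_rate: np.ndarray) -> float:
    """Average of log miss rates at 9 FPPI reference points in [1e-2, 1e0].

    `fppi` and `miss_rate` describe the Miss Rate-FPPI curve (fppi ascending)."""
    refs = np.logspace(-2, 0, 9)
    samples = []
    for p in refs:
        idx = np.where(fppi <= p)[0]
        samples.append(miss_rate[idx[-1]] if len(idx) else 1.0)
    return float(np.exp(np.mean(np.log(np.maximum(samples, 1e-12)))))
\end{verbatim}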
\subsection{Implementation Details}
In this paper, the deep learning framework PyTorch 1.7 is adopted. The experimental platform is the Ubuntu 18.04 operating system with a single NVIDIA GeForce RTX 2080Ti GPU. The Stochastic Gradient Descent (SGD) algorithm is used to optimize the network during training, with a momentum of 0.9, a weight decay of $5 \times 10^{-4}$, and an initial learning rate of $1 \times 10^{-3}$. The model is trained for 5 epochs with a batch size of 4, and the learning rate decays to $1 \times 10^{-4}$ after the 3rd epoch.
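This schedule corresponds roughly to the following setup, where \texttt{model} is a placeholder for the detector described in Section 3.
\begin{verbatim}
import torch

# `model` is a placeholder for the multispectral detector (Section 3).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            momentum=0.9, weight_decay=5e-4)
# Learning rate decays from 1e-3 to 1e-4 after the 3rd of 5 epochs (batch size 4).
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[3], gamma=0.1)
\end{verbatim}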
\subsection{Experimental Results and Analysis}
\subsubsection{Construction of the Baseline}
This work constructs a baseline algorithm based on a ResNet-101 backbone network and a Faster RCNN detection head. Simple feature fusion (feature cascade, element by element addition and element by element multiplication) of the Color and Thermal features output by the backbone network is carried out in three sets of experiments, and the fused feature is used as the input of the detection head. To ensure the efficiency of the constructed baseline, the sanitized annotation (SA) is employed to train and test the baseline. The test results are shown in Table \ref{tab:1}. The MR values using feature cascade, element by element addition and element by element multiplication in the all-weather scene are 14.62$\%$, 13.84$\%$ and 14.26$\%$, respectively. Comparing these three results, element by element addition performs best. Therefore, we adopt element by element addition of features as the baseline fusion method.
\begin{table}[!ht]
\centering
\begin{spacing}{1.5}
\caption{Experimental results of baseline under different fusion modes}\label{tab:1}
\begin{tabular}{cc} \hline
Fusion modes & All-weather \\ \hline
feature cascade & 14.62 \\
element by element multiplication & 14.26 \\
element by element addition & \textbf{\scriptsize 13.84} \\ \hline
\end{tabular}
\end{spacing}
\end{table}
\subsubsection{Performance comparison of different methods}
The performance of this method is compared with several other state-of-the-art methods. The compared methods include hand-represented methods, e.g., ACT+T+THOG \citep{hwang2015} and deep learning-based methods, e.g., Halfway Fusion \citep{liu2016}, CMT\_CNN\citep{xu2017}, CIAN\citep{zhang2019b}, IAF R-CNN\citep{li2019b}, IATDNN+IAMSS\citep{guan2019}, CS-RCNN \citep{zhang2020a}, IT-MN \citep{zhuang2022}, and DCRD \citep{liu2022}. Here, the model is trained using 7601 pairs of Color and Thermal images from SA and 8892 pairs of Color and Thermal images from PA, respectively. Besides, 2252 pairs of Color and Thermal images from the test set are used for model testing. Table \ref{tab:2} lists the experimental results.
Table \ref{tab:2} shows that when the model is trained with SA, the MRs of the method proposed in this paper are 10.71$\%$, 13.09$\%$ and 8.45$\%$ for the all-weather, daytime and nighttime scenes, respectively, which are 0.72$\%$, -1.23$\%$ and 0.37$\%$ lower than those of CS-RCNN, the best-performing compared method. The PA (Color) and PA (Thermal) columns in Table \ref{tab:2} represent models trained with the Color annotation and the Thermal annotation of the pairwise annotation PA, respectively. It can be seen from Table \ref{tab:2} that the MRs of our method in the all-weather scene are 11.11$\%$ and 10.98$\%$ when using the Color annotation and the Thermal annotation, which are 2.53$\%$ and 3.70$\%$ lower, respectively, than those of the best-performing compared methods. In addition, by analyzing the experimental results for the two improved versions of annotations, it can be found that pedestrian detection results differ when different annotations are used, indicating the importance of the annotations.
\begin{table}[!ht]
\centering
\setlength{\tabcolsep}{1mm}
\begin{spacing}{1.5}
\caption{MRs of different methods on KAIST datasets}\label{tab:2}
\begin{tabular}{cccccccccc}\hline
\multicolumn{1}{c}{\multirow{2}{*}{Methods}} & \multicolumn{3}{c}{SA} & \multicolumn{3}{c}{PA(Color)} & \multicolumn{3}{c}{PA(Thermal)} \\ \cline{2-10}
& All-weather & Day & Night & All-weather & Day & Night & All-weather & Day & Night \\ \hline
ACF+T+THOG& 41.65 & 39.18 & 48.29 & 41.74 & 39.30 & 49.52 & 41.36 & 38.74 & 48.30\\
Halfway Fusion& 25.75 & 24.88 & 26.59 & 25.10 & 24.29 & 26.12 & 25.51 & 25.20 & 24.90\\
CMT\_CNN& 36.83 & 34.56 & 41.82 & 36.25 & 34.12 & 41.21 & -- & -- & --\\
IAF R-CNN& 15.73 & 14.55 & 18.26 & 15.65 & 14.95 & 18.11 & 16.00 & 15.22 & 17.56\\
IATDNN+IAMSS& 14.95 & 14.67 & 15.72 & 15.14 & 14.82 & 15.87 & 15.08 & \textcolor{blue}{15.02} & 15.20\\
CIAN& 14.12 & 14.77 & 11.13 & 14.64& 15.13 & \textcolor{blue}{12.43} & \textcolor{blue}{14.68} & 16.21 & \textcolor{blue}{9.88}\\
CS-RCNN & \textcolor{blue}{11.43} & \textcolor{red}{11.86} & \textcolor{blue}{8.82} & -- & -- & -- & -- & -- & -- \\
IT-MN& 14.19 & 14.30 & 13.98 & -- & -- & -- & -- & -- & -- \\
DCRD& 12.58 & 13.12 & 11.65 & \textcolor{blue}{13.64} & \textcolor{blue}{13.15} & 13.98 & -- & -- & --\\
Ours& \textcolor{red}{10.71} & \textcolor{blue}{13.09} & \textcolor{red}{8.45} & \textcolor{red}{11.11} & \textcolor{red}{12.85} & \textcolor{red}{8.77} & \textcolor{red}{10.98} & \textcolor{red}{13.07} & \textcolor{red}{8.53} \\ \hline
\end{tabular}
\end{spacing}
\end{table}
\subsubsection{Analysis of Ablation Experiments}
\noindent(1) Complementarity and importance of Color and Thermal features
\begin{table}[!ht]
\centering
\begin{spacing}{1.5}
\caption{MRs of different modal inputs}\label{tab:3}
\begin{tabular}{cccc} \hline
Input & All-weather& Day & Night \\ \hline
dual-stream Color images & 25.37& 19.31 & 31.18\\
dual-stream Thermal images & 17.55 & 22.81 & 12.61 \\
Color images + Thermal images & \textbf{\scriptsize 13.84} & \textbf{\scriptsize 15.35} & \textbf{\scriptsize 12.48} \\ \hline
\end{tabular}
\end{spacing}
\end{table}
This section compares the effect of different input sources on pedestrian detection performance. In order to eliminate the impact of the proposed modules on detection performance, three sets of experiments are conducted on the baseline: 1) the combination of Color and Thermal images as the input source (the inputs of the two branches of the backbone network are the Color and Thermal images, respectively); 2) dual-stream Color images as the input source (Color images replace Thermal images, i.e., both backbone inputs are Color images); 3) dual-stream Thermal images as the input source (Thermal images replace Color images, i.e., both backbone inputs are Thermal images). The training set of the model here is the 7601 image pairs of SA, and the test set is 2252 pairs of Color and Thermal images. Table \ref{tab:3} shows the MRs of these three input sources for the all-weather, daytime, and nighttime scenes. It can be seen from Table \ref{tab:3} that the MRs obtained using Color and Thermal images as input to the network are 13.84$\%$, 15.35$\%$ and 12.48$\%$ for the all-weather, daytime and nighttime scenes, respectively, which are 11.53$\%$, 3.96$\%$ and 18.70$\%$ lower than using dual-stream Color images and 3.71$\%$, 7.46$\%$ and 0.13$\%$ lower than using dual-stream Thermal images as input. The experimental results prove that the detection network combining Color and Thermal features delivers better performance, indicating that both Color and Thermal features are important for pedestrian detection.
Figure \ref{fig:4} shows the Miss Rate-FPPI curves of the detection results for these three input sources in the all-weather, daytime, and nighttime scenes (blue, red and green curves indicate dual-stream Thermal images, dual-stream Color images, and Color and Thermal images, respectively). By analyzing the Miss Rate-FPPI curve trend and combining with the experimental data in Table \ref{tab:3}, it can be seen that the detection effect of Color images as the input source is better than that of Thermal images in the daytime scene while the result is the opposite for the night scene, and the detection effect of Color and Thermal images combined as the input source is better than that of single-modal input in both daytime and nighttime. It shows that there are complementary features between Color and Thermal modalities, and the fusion of the two modal features can improve the pedestrian detection performance.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=18cm]{logo5.eps}
\end{center}
\caption{The Miss Rate-FPPI curves of the detection results of the three groups of input sources in the All-weather, Daytime and Nighttime scenes (From left to right, All-weather, Daytime and Nighttime Miss Rate-FPPI curves are shown in the figure)}\label{fig:4}
\end{figure}
\noindent(2) Ablation experiments
In this section, ablation experiments are conducted to demonstrate the effectiveness of the proposed cascaded information enhancement module (CIEM) and cross-modal attention feature fusion module (CAFFM). Here, 7601 pairs of SA images are used to train the model, and 2252 pairs of Color and Thermal images in the test set are used to test the model.
Effectiveness of CIEM: CIEM is used to enhance the pedestrian features in Color and Thermal images to reduce the interference from the background. The experimental results are shown in Table \ref{tab:4}. The MRs of baseline on SA are 13.84$\%$, 15.35$\%$ and 12.48$\%$ for all-weather, daytime and nighttime scenes, respectively. When CIEM is additionally employed, the MRs are 11.21$\%$, 13.15$\%$ and 9.07$\%$ for all-weather, daytime and nighttime scenes, respectively, which are reduced by 2.63$\%$, 2.20$\%$ and 3.41$\%$ compared to the baseline, respectively. It is shown that the proposed CIEM effectively enhances the pedestrian features in both modalities, reduces the interference of background, and improves the pedestrian detection performance.
\begin{table}[!ht]
\centering
\begin{spacing}{1.5}
\caption{MRs for ablation studies of the proposed method on SA}\label{tab:4}
\begin{tabular}{cccc} \hline
Methods & All-weather& Day & Night \\ \hline
baseline & 13.84 & 15.35 & 12.48\\
baseline + CIEM & 11.21 & 13.15 & 9.07 \\
baseline + CAFFM & 11.68 & 13.81 & 9.50 \\
Overall model & \textbf{\scriptsize 10.71} & \textbf{\scriptsize 13.09} & \textbf{\scriptsize 8.45} \\ \hline
\end{tabular}
\end{spacing}
\end{table}
Validity of CAFFM: CAFFM is used to effectively fuse Color and Thermal features. The experimental results are shown in Table \ref{tab:4}. On the SA, when the baseline is used with CAFFM, the MRs are 11.68$\%$, 13.81$\%$ and 9.50$\%$ in all-weather, daytime and nighttime scenes, respectively, which are reduced by 2.16$\%$, 1.54$\%$ and 2.98$\%$ compared baseline, respectively. It shows that the proposed CAFFM effectively fuses the two modal features to achieve robust multispectral pedestrian detection.
Overall effectiveness: The proposed CIEM and CAFFM are additionally used on the basis of baseline. Experimental results show a reduction of 3.13$\%$, 2.26$\%$ and 4.03$\%$ in MRs for all-weather, daytime and nighttime scenes, respectively, compared to the baseline, indicating the overall effectiveness of the proposed method. A closer look reveals that with additional employment of CIEM and CAFFM alone, MRs are decreased by 2.63$\%$ and 2.16$\%$, respectively, in the all-weather scene, but the MR of the overall model is reduced by 3.13$\%$. It demonstrates that there is some orthogonal complementarity in the role of the proposed two modules.
Figure \ref{fig:5} shows the Miss Rate-FPPI curves for CIEM and CAFFM ablation studies in all-weather, daytime and nighttime scenes (blue, red, orange and green curves represent baseline, baseline + CIEM, baseline + CAFFM and overall model, respectively). It is clear that the curve trends of each module and the overall model are both lower than that of the baseline, which further proves the effectiveness of the method presented in this work.
Furthermore, in order to qualitatively analyze the effectiveness of the proposed CIEM and CAFFM, four pairs of Color and Thermal images (two pairs taken from daytime and two pairs taken from nighttime) are selected from the test set for testing. The pedestrian detection results of the baseline and each proposed module are shown in Figure \ref{fig:6}. The first row shows the visualization of the labeled boxes for the Color and Thermal images, and the second to fifth rows show the visualization of the labeled and prediction boxes for the baseline, baseline + CIEM, baseline + CAFFM, and the overall model, with the green and red boxes representing the labeled and prediction boxes, respectively. It can be seen that the proposed method successfully addresses the problem of missed pedestrian detections in complex environments and achieves more accurate detection boxes. For example, in the second row, pedestrians are missed in the first, third, and fourth image pairs of the baseline results; these missed detections are resolved once CIEM and CAFFM are added to the baseline, and the overall model produces more accurate pedestrian detection boxes.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=18cm]{logo6.eps}
\end{center}
\caption{The Miss Rate-FPPI curves of CIEM and CAFFM ablation studies in All-weather, Daytime and Nighttime scenes (From left to right, All-weather, Daytime and Nighttime Miss Rate-FPPI curves are shown in the figure)}\label{fig:5}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=18cm]{logo7.eps}
\end{center}
\caption{In this paper, each module and baseline pedestrian detection results (The first row is the visualization results of labeled boxes for Color and Thermal images, and the second to the fifth rows are the visualization results of the labeled and prediction boxes for baseline, baseline + CIEM, baseline + CAFFM and the overall model pedestrian detection with the green and red boxes representing the labeled and prediction boxes, respectively.)}\label{fig:6}
\end{figure}
\section{CONCLUSION}
In this paper, we propose a multispectral pedestrian detection algorithm comprising a cascaded information enhancement module and a cross-modal attention feature fusion module. The proposed method improves the accuracy of pedestrian detection in multispectral images (Color and Thermal images) by effectively fusing the features of the two modalities and augmenting the pedestrian features. Specifically, on the one hand, a cascaded information enhancement module (CIEM) is designed to enhance single-modal features so as to enrich the pedestrian features and suppress interference from background noise. On the other hand, unlike previous methods that simply splice Color and Thermal features directly, a cross-modal attention feature fusion module (CAFFM) is introduced to mine the features of both the Color and Thermal modalities so that they complement each other; the complementarily enhanced modal features are then used to construct global features. Extensive experiments have been conducted on two improved annotations of the public dataset KAIST. The experimental results show that the proposed method obtains more comprehensive pedestrian features and improves the accuracy of multispectral image pedestrian detection.
\section*{Conflict of Interest Statement}
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
\section*{Author Contributions}
YY responsible for paper scheme design, experiment and paper writing. WK guide the paper scheme design and revision. XK guide to do experiments and write papers.
\section*{Funding}
This work was supported by the National Natural Science Foundation of China (No. 52107017) and the Fundamental Research Fund of the Science and Technology Department of Yunnan Province (No. 202201AU070172).
\bibliographystyle{Frontiers-Vancouver}
\section{Introduction}\label{sec:intro}
\begin{figure}
\centering
\begin{subfigure}[c]{0.5\textwidth}
\includegraphics[width=0.9\linewidth]{figures/fig1.pdf}
\end{subfigure}
\begin{subfigure}[c]{0.5\textwidth}
\vspace{-210px}
\begin{subfigure}[b]{0.4\textwidth}
\quad
\end{subfigure}%
\begin{subfigure}[b]{0.75\textwidth}
\tiny{{
\line{mle_blue} Conventional LM pretraining \quad \hfill \break
\line{cond_orange} Pretraining with feedback \\
\line{dashed,cond_orange} Finetuning with feedback for 1.6B tokens \\
\line{dotted,cond_orange} Finetuning with feedback for 330M tokens}}
\end{subfigure}
\end{subfigure}
\vspace{-20px}
\caption{Toxicity score (lower is better) of LMs pretrained with the standard objective (solid \textcolor{mle_blue}{blue}), using conditional training (solid \textcolor{cond_orange}{orange}) and LMs finetuned using conditional training for 1.6B (\textcolor{cond_orange}{orange} dashed) and 330M tokens (\textcolor{cond_orange}{orange} dotted). Pretraining with Human Feedback (PHF) reduces the amount of offensive content much more effectively than finetuning with human feedback.}
\label{fig:fig1}
\vspace{-15px}
\end{figure}
Language models (LMs) are trained to imitate text from large and diverse datasets.
These datasets often contain content that violates human preferences, e.g., falsehoods \cite{lin2021truthfulqa}, offensive comments \cite{gehman-etal-2020-realtoxicityprompts}, personally identifiable information \citep[PII;][]{carlini2020} or low-quality code \cite{chen2021codex}.
Imitating such data stands in stark contrast with the behavior people desire from language models, e.g., to generate text that is helpful, honest and harmless \cite{lab}.
In this paper, we explore alternative objectives for pretraining LMs on large amounts of diverse data that guide them to generate text aligned with human preferences.
Prior work on aligning LMs with human preferences almost exclusively focused on making adjustments to pretrained LMs. A widely adopted strategy of adding safety filters on top of pretrained LMs \cite{recipes} works only to an extent: even the most effective safety filters fail to catch a large amount of undesirable content \cite{gehman-etal-2020-realtoxicityprompts,weibl2021,ziegler2022}. Another approach involves finetuning LMs using either supervised learning on curated data \citep{solaiman2021,scheurer2022} or reinforcement learning from human feedback \citep[RLHF;][]{ziegler2019fine,Ouyang2022,bai2022training,menick_2022_sparrow}, but this strategy is also limited by the fact that large LMs are quite resistant to forgetting their training data \citep[an effect that increases with model size;][]{carlini2022,vu2022,ramasesh2022effect}. While filtering out all undesirable content from pretraining data could seem to be a simple solution, it severely handicaps the capabilities of LMs \cite{weibl2021} which are already bottlenecked by high-quality data \citep{Hoffmann2022,Villalobos2022_will_we}.
Moreover, reducing the diversity of training data can negatively impact alignment with human preferences by decreasing robustness \cite{Hendrycks2019,Hendrycks2020} and amplifying existing social biases \cite{xu_detoxifying,weibl2021}.
These limitations suggest that while human preferences should be imposed in pretraining itself, content violating those preferences should still be present in the training data.
In this paper, we explore objectives for aligning LMs with human preferences during pretraining. Instead of filtering the training data, we propose pretraining with human feedback (PHF), where we estimate human preference judgments using a reward function (e.g. a toxic text classifier).
In this way, we allow the LM to learn from undesirable content while guiding the LM \emph{not} to imitate it at inference time.
We experiment with five PHF objectives: conditional training \cite{keskar}, dataset filtering, unlikelihood loss \cite{welleck2019} and two offline RL algorithms, reward-weighted regression \citep[RWR;][]{peters2007} and advantage-weighted regression \citep[AWR;][]{peng2019}. We compare them to maximum likelihood estimation (MLE), the standard pretraining objective.
We evaluate PHF objectives on three tasks: generating non-toxic text, text without personally identifiable information (PII), and PEP8-compliant Python \cite{pep8}.
We compare LMs pretrained with feedback in terms of \emph{alignment} (how well they satisfy preferences) and \emph{capabilities} (how well they perform on downstream tasks).
While different objectives offer different alignment--capabilities trade-offs for different tasks, we find that \emph{conditional training} is on the Pareto frontier across all three tasks. Conditional training is a simple algorithm that learns a distribution over tokens conditional
on their human preference score, reminiscent of decision transformer in reinforcement learning \citep{chen2021decisiontransformer}.
Conditional training decreases the frequency of undesirable content in LM samples up to an order of magnitude, reaping continued improvements with increasing training data (\S\ref{sec:pretraining/tradeoffs}).
Superior alignment persists when the LM is faced with an adversary prompting it to elicit undesirable behavior, as evaluated using the automated red-teaming approach from \citet{perez_2022} (\S\ref{sec:pretraining/red_teaming}).
At the same time, conditional training achieves comparable performance to MLE-trained LMs on zero-shot benchmarks \citep{paperno-etal-2016-lambada,chen2021codex} and after finetuning on GLUE tasks \citep{wang2018_glue} (\S\ref{sec:pretraining/downstream}); conditional training is able to learn representations from the entire training distribution, without learning to regurgitate undesirable content as MLE-trained LMs do.
Finally, in \S\ref{sec:finetuning} we examine whether PHF improves over the standard practice of MLE pretraining followed by finetuning with human feedback.
We find that PHF results in equal or (sometimes dramatically) better alignment across all three tasks (Fig.~\ref{fig:fig1}) as well as improved adversarial robustness.
These findings suggest that it is more effective to train LMs to exhibit desirable behaviors from the outset, rather than having them learn undesirable behavior and then attempt to unlearn it.
Our results challenge the standard practice of aligning LMs with human preferences during finetuning alone, suggesting that we should incorporate human preferences from the very beginning of training.\footnote{The code and datasets accompanying the paper are available at \href{https://github.com/tomekkorbak/pretraining-with-human-feedback}{github.com/tomekkorbak/pretraining-with-human-feedback}}
\section{Methods}
\label{sec:method}
Here we present five PHF objectives that we will evaluate in \S\ref{sec:pretraining}, in terms of various capabilities and alignment metrics for different tasks. In LM pretraining, we start with an LM $\pi_\theta$ with randomly initialized weights $\theta$ and an unlabeled dataset of documents $\mathcal{D}$. Each document $x \in \mathcal{D}$ is a sequence of segments (sentences or lines): $x = (x^1, \dots, x^{|x|})$. Each segment $x^i \in x$ is a sequence of $N_i$ tokens: $x^i = (x^i_1, \dots, x^i_{N_i})$, where $N_i = |x^i|$. Tokens come from a fixed vocabulary $\mathcal{V}$. In PHF, we additionally assume access to a segment-level reward function $R$ that takes a document segment $x^i$ and outputs a scalar score $R(x^i)$ indicating how preferable $x^i$ is. For instance, $R(x^i)$ could be the negative likelihood that a sentence would be harmful to civil conversation. At a high level, pretraining can be posed as maximizing some pretraining objective $\mathcal{L}$ across documents: $\pi_\theta = \operatorname*{argmax}_\theta \sum_{x \in \mathcal{D}} \mathcal{L}(x)$. In the rest of the section, we describe MLE, the standard objective, followed by the five PHF objectives.
\paragraph{MLE} Maximum likelihood estimation \citep[MLE;][]{Bengio2013,mikolov2021,Radford2018ImprovingLU,brown_gpt3} is the dominant approach to pretraining and finetuning LMs. This objective boils down to the log likelihood of training documents:
\begin{equation}\label{obj:mle}
\mathcal{L}_\text{MLE}(x) = \log \pi_\theta(x),
\end{equation}
where $\log \pi_\theta(x)$ can be decomposed autoregressively as
\begin{align}
\log \pi_\theta(x) &= \sum_{i=1}^{|x|} \log \pi_\theta(x^i|x^{<i}) \\
&= \sum_{i=1}^{|x|} \sum_{j=1}^{|x^i|} \log \pi_\theta(x^i_j|x^{\leq i}_{<j}),
\end{align}
where $x^{<i} = (x^1, \dots, x^{i-1})$ denotes all segments in a document prior to $x^i$ and $x^{\leq i}_{<j} = (x^1_1, \dots, x^i_{j-1})$ denotes all tokens in a document $x$ prior to $x^i_j$.
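For concreteness, the token-level loss can be sketched as follows (illustrative PyTorch; tensor names are ours and do not refer to any released implementation):
\begin{verbatim}
# Minimal sketch of the MLE objective: the (negative) sum of per-token
# log-probabilities under the autoregressive decomposition above.
# `logits` has shape (seq_len, vocab); `tokens` holds the token ids.
import torch
import torch.nn.functional as F

def mle_nll(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    logprobs = F.log_softmax(logits[:-1], dim=-1)  # predictions for next token
    tok_lp = logprobs.gather(-1, tokens[1:, None]).squeeze(-1)
    return -tok_lp.sum()   # minimized by the optimizer
\end{verbatim}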
\paragraph{MLE with Filtering} Dataset filtering \citep{solaiman2021,Wang2022} corresponds to an objective identical to MLE except that it is zero for documents $x$ whose document-level reward $\text{avg}(R(x)) = \frac{1}{|x|} \sum_{i=1}^{|x|} R(x^i)$ falls at or below a threshold $t$:
\begin{equation}
\mathcal{L}_\text{Filt}(x) = \begin{cases}\log \pi_\theta(x), & \text{if } \text{avg}(R(x)) > t, \\ 0, & \text{otherwise}.\end{cases}
\end{equation}
$t$ is a hyperparameter we set to a certain percentile of document-level rewards in the training data (see Appendix~\ref{appendix:hparams} for values used in experiments and an ablation study). In practice, we train with this objective by discarding documents with rewards below $t$ and training for multiple epochs on the remaining ones at a fixed budget of training tokens.
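As an illustration, filtering amounts to a single preprocessing pass over the corpus (a sketch; \texttt{reward\_fn} and the data format are placeholders):
\begin{verbatim}
# Sketch of dataset filtering: keep only documents whose average
# segment-level reward exceeds the threshold t, then train with
# ordinary MLE (possibly for multiple epochs) on what remains.
def filter_corpus(documents, reward_fn, t):
    kept = []
    for segments in documents:          # each document: a list of segments
        rewards = [reward_fn(s) for s in segments]
        if sum(rewards) / len(rewards) > t:
            kept.append(segments)
    return kept
\end{verbatim}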
\paragraph{Conditional Training} Conditional training \citep{ficler-goldberg-2017-controlling,fan2018,keskar} has been shown to be successful across tasks as diverse as controllable language generation \cite{peng-etal-2018-towards,dai2019}, mitigating toxicity \cite{gehman-etal-2020-realtoxicityprompts,recipes,Lu2022QuarkCT} and robotic control \cite{chen2021decisiontransformer,janner2021sequence}. In its simplest form, it extends MLE by prepending each segment $x^i$ with a control token $c^i$ based on that segment's reward $R(x^i)$:
\begin{equation}\label{obj:cond}
\mathcal{L}_\text{Cond}(x) = \log \pi_\theta(c^1, x^1, \dots, c^{|x|}, x^{|x|})
\end{equation}
We use two control tokens: \texttt{<|good|>} if $R(x^i) \geq t$ and \texttt{<|bad|>} otherwise. The threshold $t$ is a hyperparameter.
At inference time, we sample from $\pi_\theta(\cdot|c_1=\texttt{<|good|>})$. See Appendix~\ref{appendix:hparams} for details.
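The corresponding data transformation is simple enough to sketch directly (illustrative Python; the commented-out inference call assumes a Hugging Face-style API):
\begin{verbatim}
# Sketch of conditional-training preprocessing: prepend <|good|> or
# <|bad|> to every segment according to its reward, then train with
# ordinary MLE on the resulting token stream.
GOOD, BAD = "<|good|>", "<|bad|>"

def annotate(segments, reward_fn, t):
    return "".join(
        (GOOD if reward_fn(s) >= t else BAD) + s for s in segments
    )

# At inference time we condition on the <|good|> prefix, e.g.:
#   ids = tokenizer("<|good|>", return_tensors="pt").input_ids
#   model.generate(ids, do_sample=True, top_p=0.9, temperature=0.7)
\end{verbatim}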
\paragraph{Unlikelihood} Unlikelihood training \citep{welleck2019} follows MLE in maximizing the likelihoods of segments exceeding a certain reward threshold $t$. However, for segments with rewards below the threshold, we use token-level \emph{unlikelihood} instead. The unlikelihood of a token $x^i_j$ is the total log probability of all other tokens in the vocabulary on position $j$ of segment $i$. This gives rise to the objective:
\begin{align}
\mathcal{L}_\text{UL}(x) = &\sum_{\substack{ i=1 \\ R(x^i) > t}}^{|x|} \log \pi_\theta(x^i|x^{<i}) \nonumber \\ + \alpha &\sum_{\substack{ i=1 \\ R(x^i) \leq t}}^{|x|} \sum_{j=1}^{|x^i|} \log (1-\pi_\theta(x^i_j|x^{\leq i}_{<j}))
\end{align}
The threshold $t$ and $\alpha$, a coefficient scaling the second (unlikelihood) term, are hyperparameters.
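The unlikelihood term for a single below-threshold segment can be sketched as follows (illustrative PyTorch):
\begin{verbatim}
# Sketch of the unlikelihood term: the sum over tokens of
# log(1 - pi_theta(x_j | x_<j)) for a segment with R(x^i) <= t.
import torch
import torch.nn.functional as F

def unlikelihood_term(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    probs = F.softmax(logits[:-1], dim=-1)
    tok_p = probs.gather(-1, tokens[1:, None]).squeeze(-1)
    # the clamp avoids log(0) when the model is fully confident
    return torch.log1p(-tok_p.clamp(max=1.0 - 1e-6)).sum()
\end{verbatim}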
\paragraph{RWR} Reward-weighted regression \citep[RWR;][]{peters2007} extends MLE by reweighting each segment by a term proportional to exponentiated reward:
\begin{equation}
\mathcal{L}_\text{RWR}(x) = \sum_{i=1}^{|x|} \log \pi_\theta(x^i|x^{<i}) \exp(R(x^i)/\beta)
\end{equation}
$\beta$, the coefficient controlling how much reward affects the loss, is a hyperparameter.
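In implementation terms, RWR only changes the weight applied to each segment's log-likelihood (a sketch over precomputed per-segment quantities):
\begin{verbatim}
# Sketch of RWR: each segment's log-likelihood is weighted by
# exp(R(x^i) / beta). `seg_logprobs` and `seg_rewards` are 1-D tensors
# with one entry per segment of a document.
import torch

def rwr_loss(seg_logprobs, seg_rewards, beta):
    weights = torch.exp(seg_rewards / beta)   # rewards are data, no gradient
    return -(weights * seg_logprobs).sum()
\end{verbatim}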
\paragraph{AWR} Advantage-weighted regression \citep[AWR;][]{peng2019} extends RWR by subtracting a token-level value estimate $V_\theta(x^i_j)$ from each segment-level reward $R(x^i)$. Value estimates are produced by a value function that shares parameters $\theta$ with the LM but is trained to minimize the mean-squared error between token-level value estimate and ground-truth returns $R(x^i)$. The LM and the value head are trained jointly to maximize:
\begin{align}
\mathcal{L}_\text{AWR}(x) = \alpha &\sum_{i=1}^{|x|} \sum_{j=1}^{|x^i|} \log \pi_\theta(x^i_j|x^{\leq i}_{<j}) \exp \Big(A(x^i_j)/\beta \Big) \nonumber \\
- (1-\alpha) &\sum_{i=1}^{|x|} \sum_{j=1}^{|x^i|} \big[ V_\theta(x^i_j) - R(x^i) \big]^2
\end{align}
where $A(x^i_j) = R(x^i)-V_\theta(x^i_j)$ is the advantage. The two hyperparameters are $\alpha$ (controlling the trade-off between value loss and policy loss) and $\beta$ (again, controlling the amount of reweighting). We implement the value function $V_\theta$ as a linear head on top of the LM $\pi_\theta$; they share the parameters of all other layers.
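A sketch of the corresponding loss, with segment rewards broadcast to the token level and the advantage treated as a fixed weight (one common implementation choice, shown here purely for illustration):
\begin{verbatim}
# Sketch of AWR: token-level log-probs weighted by exp(A / beta), plus
# a value-regression term. All tensors are per-token; `rewards` repeats
# the segment reward R(x^i) over that segment's tokens.
import torch

def awr_loss(tok_logprobs, values, rewards, alpha, beta):
    advantages = (rewards - values).detach()    # treated as a fixed weight
    policy = (tok_logprobs * torch.exp(advantages / beta)).sum()
    value = ((values - rewards) ** 2).sum()
    return -(alpha * policy) + (1.0 - alpha) * value
\end{verbatim}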
\section{Experimental Setup}\label{sec:setup}
Here, we describe the setup of our pretraining (\S\ref{sec:pretraining}) and finetuning experiments~(\S\ref{sec:finetuning}), which we use to compare MLE and various PHF objectives on both capabilities and alignment.
\subsection{Tasks}
We evaluate PHF objectives on three tasks: (i) avoiding offensive content, (ii) avoiding leaking personally identifiable information (PII), and (iii) generating Python code following PEP8, the style guide for Python \cite{pep8}. Each task is associated with a reward function $R$ and a dataset $\mathcal{D}$ as defined in \S\ref{sec:method}. For evaluation, we use misalignment scores equal to the negative rewards.
\paragraph{Toxicity}
LMs can generate highly harmful language, including insults, profanities and threats \cite{sap-etal-2019-risk,gehman-etal-2020-realtoxicityprompts,abid2021}. Following \citet{weibl2021}, we group these harms under the name of ``toxicity,'' understood as ``a rude, disrespectful, or unreasonable comment that is somewhat likely to make you leave a discussion or give up on sharing your perspective'' \citep{Borkan2019}. To obtain toxicity scores, we follow \citet{lab} and use Detoxify \citep{Detoxify}, %
a toxic comment classifier. We used the \texttt{unbiased} model, based on the 124M parameter RoBERTa \citep{Liu2019} and trained on the Jigsaw Unintended Bias in Toxicity Classification dataset \citep{Borkan2019}. We define our reward $R$ as negative probability of toxicity according to Detoxify and misalignment score as the probability of toxicity. Since Detoxify was trained on short documents (predominantly comments), we first segment our training documents using a SpaCy \citep{spacy} sentence segmenter and score them at sentence level. When scoring LM samples during evaluation, we skip segmentation.
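The scoring pipeline can be sketched as follows (we assume Detoxify's \texttt{predict} interface and a standard SpaCy pipeline; exact APIs may differ across versions):
\begin{verbatim}
# Sketch of sentence-level toxicity rewards. The detoxify and spacy
# calls below are assumptions about their APIs; consult the libraries'
# documentation for your versions.
import spacy
from detoxify import Detoxify

nlp = spacy.load("en_core_web_sm")
scorer = Detoxify("unbiased")

def toxicity_rewards(document: str):
    sentences = [s.text for s in nlp(document).sents]
    scores = scorer.predict(sentences)["toxicity"]  # probabilities of toxicity
    return [-p for p in scores]                     # reward = -P(toxic)
\end{verbatim}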
\paragraph{PII}
LMs sometimes generate text that occurs verbatim in their training data \cite{carlini2019,perez_2022}. This poses privacy risks if the text contains confidential information identifying living people (PII) such as email addresses or social security numbers \cite{henderson2017}. To detect such PII, we use Scrubadub,\footnote{\href{https://github.com/LeapBeyond/scrubadub}{github.com/LeapBeyond/scrubadub}} a PII detector using both pattern matching rules and a pretrained SpaCy \citep{spacy} named entity recognizer. We use pattern matching for detecting emails, addresses and postal codes, phone numbers, credit card numbers, US social security numbers, vehicle plate numbers, dates of birth, URLs and login credentials. The named entity recognizer detects mentions of people's names, locations and organizations. We define our reward $R$ as the negative number of detected PII instances per character. Similarly to toxicity, we score training documents at the sentence level.
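A sketch of how such a reward could be computed (the \texttt{iter\_filth} call is an assumption about Scrubadub's interface rather than a documented guarantee):
\begin{verbatim}
# Sketch of the PII reward: negative number of detected PII instances
# per character. The iter_filth call is assumed; check the scrubadub
# documentation for the interface in your version.
import scrubadub

scrubber = scrubadub.Scrubber()

def pii_reward(segment: str) -> float:
    n_filth = sum(1 for _ in scrubber.iter_filth(segment))
    return -n_filth / max(len(segment), 1)
\end{verbatim}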
\paragraph{PEP8}
While LMs are highly successful at generating code, the generated code is not always aligned with user intent \cite{chen2021codex}. For instance, prompted with low-quality code, LMs are likely to produce a low-quality completion even if user's intent is to write high-quality code. We explore alignment failures in the context of code by requiring compliance with PEP8 \cite{pep8}, the style guide for Python. To detect PEP8 violations, we use \texttt{pycodestyle}, a popular static code analysis tool.\footnote{\href{https://github.com/PyCQA/pycodestyle}{github.com/PyCQA/pycodestyle}} Our reward function $R$ is the negative number of PEP8 violations per character. We assign rewards to individual lines of training documents, but note that the presence of PEP8 violations on a particular line does depend on previous lines.
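A sketch of the reward computation is shown below (the \texttt{StyleGuide} usage follows the \texttt{pycodestyle} documentation; per-line scoring of training documents requires a more involved report handler, which we omit):
\begin{verbatim}
# Sketch of the PEP8 reward: negative number of pycodestyle violations
# per character of the checked code snippet.
import os
import tempfile
import pycodestyle

def pep8_reward(code: str) -> float:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        report = pycodestyle.StyleGuide(quiet=True).check_files([path])
        return -report.total_errors / max(len(code), 1)
    finally:
        os.remove(path)
\end{verbatim}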
\subsection{Model Architecture and Hyperparameters}
All of our LMs use the neural network architecture of \texttt{gpt2-small} \citep[124M parameters;][]{radford2019language}. We keep the original hyperparameters of \texttt{gpt2-small} except for learning rate and batch size, which we tune for each task-objective pair based on training loss. If an objective has its own hyperparameters (e.g. $t$, $\alpha$ or $\beta$), we tune learning rate and batch size separately for each $(t, \alpha, \beta)$ configuration considered and then choose the best $(t, \alpha, \beta)$ configuration based on the misalignment score of LM samples and the KL divergence from GPT-3~(\S\ref{sec:pretraining/tradeoffs}). See Appendix~\ref{appendix:hparams} for hyperparameters used in experiments and ablations on them.
\subsection{Training Data}
We fixed training set size to 3.32B tokens which is compute-optimal for our model size according to the scaling laws from \citet{Hoffmann2022}. For toxicity and PII, we prepared training data by subsampling 1.95M documents (totaling 3.32B tokens) from the Pile \citep{pile}. For code generation, we subsampled 1.5M Python files (again totaling 3.32B tokens) from a cleaned and filtered version of the GitHub dataset from Google BigQuery released by \citet{tunstall2022natural}.\footnote{\href{https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code}{GitHub on BigQuery}}
\section{Pretraining Experiments}\label{sec:pretraining}
\begin{figure*}[ht!]
\centering
\legend
\vspace{-10px}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: toxicity}\hspace{25px}}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_kl_score_scatterplot.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_kl.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_score.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\vspace{-15px}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PII}\hspace{38px}}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_kl_score_scatterplot.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_kl.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_score.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\vspace{-15px}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PEP8}\hspace{35px}}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_pretrain_kl_score_scatterplot.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_pretrain_kl.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_pretrain_score.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\vspace{-15px}
\caption{KL from GPT-3 and average misalignment score of LM samples for MLE and PHF objectives (lower is better). We show KL from GPT-3 versus average score on a scatter plot (first column) and also each of these two metrics over training time (with log-log axes; second and third columns). Conditional training (\textcolor{cond_orange}{orange}) is either strictly optimal (toxicity, PEP8) or on the Pareto frontier (PII) of PHF objectives.}
\label{results:pretrain-main}
\end{figure*}
In this section, we investigate how PHF affects the alignment and capabilities of resulting models. In \S\ref{sec:pretraining/tradeoffs} we introduce two primary metrics: misalignment score (indicating how well unconditional samples from an LM satisfy human preferences) and the KL divergence from GPT-3 (indicating general capabilities), and discuss the Pareto frontier of the capability-alignment trade-off. We additionally evaluate alignment by analyzing LM behavior when conditioned on adversarial prompts (``red-teaming''; \S\ref{sec:pretraining/red_teaming}) and evaluate capabilities by reporting performance on downstream tasks (\S\ref{sec:pretraining/downstream}). Finally, we measure the diversity of LM samples (\S\ref{sec:pretraining/diversity}).
\subsection{Capabilities-Alignment Trade-offs} \label{sec:pretraining/tradeoffs}
\paragraph{Misalignment Score} To estimate the frequency of undesirable content in text generated by an LM, we obtain a set of $K=4096$ samples from it by nucleus sampling \cite{holtzman2019} with temperature $T = 0.7$ and top-$p = 0.9$, constraining sequence length to be between 10 and 128 tokens. Unless specified otherwise, we generate unconditionally, i.e. only condition on a special \texttt{<|endoftext|>} token (or on \texttt{<|endoftext|><|good|>} when using conditional training). We then score those samples using the same scorers that had been used as reward functions during training. %
We report misalignment scores averaged across $K$ samples. In Appendix~\ref{appendix:lm_scores}, we also report metrics tracking the worst-case tail of misalignment score distribution.
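Concretely, the evaluation loop looks roughly as follows (a sketch against a Hugging Face-style generation API; argument names may vary across versions and the scorer is a placeholder):
\begin{verbatim}
# Sketch of the misalignment-score evaluation: draw K unconditional
# samples with nucleus sampling and average their scores.
def misalignment_score(model, tokenizer, score_fn, K=4096, batch=64):
    prompt = tokenizer("<|endoftext|>", return_tensors="pt").input_ids
    scores = []
    for _ in range(K // batch):
        out = model.generate(prompt.repeat(batch, 1), do_sample=True,
                             temperature=0.7, top_p=0.9,
                             min_length=10, max_length=128)
        texts = tokenizer.batch_decode(out, skip_special_tokens=True)
        scores += [score_fn(t) for t in texts]
    return sum(scores) / len(scores)
\end{verbatim}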
\paragraph{KL from GPT-3} As a measure of an LM's general capabilities, we estimate the Kullback-Leibler (KL) divergence of its output distribution from that of a highly capable model, GPT-3 \citep{brown_gpt3}.
Lower divergence from GPT-3 likely translates into an increase in capabilities.
We qualitatively found KL from GPT-3 to be sensitive to the most egregious failure modes of PHF, e.g., degeneration \citep{holtzman2019}, repetition or reduced sample diversity. Note that KL from GPT-3 favors models trained like GPT-3, namely with MLE and without any alignment-relevant constraints; such constraints may cause the distribution to change in ways that do not impact a model's performance on downstream tasks.
We estimate $D_\text{KL}(p_\text{GPT-3}, \pi_\theta)$ by computing $\frac{1}{N}\sum_{n=1}^N \log \frac{p_\text{GPT-3}(x_n)}{\pi_\theta(x_n)}$, where $x_1, \dots, x_N \sim p_\text{GPT-3}$ are samples from GPT-3 obtained using its public API\footnote{\href{https://openai.com/api/}{openai.com/api/}} and $\pi_\theta$ is the LM being evaluated. We generate $N = 4096$ unbiased (temperature 1, top-$p$ 1) samples of at most 64 tokens, using \texttt{<|endoftext|>} as a stop token. To decrease variance due to the stochasticity of sampling, we used the same set of $N$ samples for all evaluations.
For toxicity and PII experiments, we use GPT-3 (175B; \texttt{davinci}) as $p_\text{GPT-3}$. For PEP8, we use a 12B Codex model \citep[\texttt{code-cushman-001};][]{chen2021codex}. In prior experiments, we found that using InstructGPT \citep[\texttt{text-davinci-002};][]{Ouyang2022} as a target distribution gives very similar results.
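The estimator itself is a plain Monte Carlo average (a sketch; obtaining $\log p_\text{GPT-3}$ requires the API's per-token log-probabilities, which we elide here):
\begin{verbatim}
# Sketch of the KL estimate: for samples x_1..x_N drawn from GPT-3,
# average log p_GPT-3(x_n) - log pi_theta(x_n), where each log-prob is
# the sum of per-token log-probabilities of the whole sample.
def estimate_kl(samples, logp_gpt3, logp_theta):
    # samples: list of texts; logp_*: callables returning total log-prob
    return sum(logp_gpt3(x) - logp_theta(x) for x in samples) / len(samples)
\end{verbatim}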
\begin{figure*}[t]
\centering
\legend
\vspace{-15px}
\hspace{-0.05\textwidth}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: toxicity}\hspace{25px}}
\end{subfigure}
\hspace{-8px}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_red_team.pdf}
\vspace{-20px}
\end{subfigure}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PII}\hspace{35px}}
\end{subfigure}
\hspace{-8px}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_red_team.pdf}
\vspace{-20px}
\end{subfigure}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PEP8}\hspace{38px}}
\end{subfigure}
\hspace{-8px}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_pretrain_red_team.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.05\textwidth}
\vspace{-10px}
\caption{Average misalignment score of LM responses to adversarial prompts in the pool found in the course of red-teaming. With each additional round, more optimization pressure is applied to the search for adversarial prompts. A target LM is considered more robust when its misalignment score increases at a slower rate.}
\label{fig:pretrain_red-team}
\end{figure*}
\paragraph{Results}
We present our main results in Fig.~\ref{results:pretrain-main}. All PHF objectives are able to reduce the amount of undesirable content significantly, sometimes by an order of magnitude. For instance, on toxicity the average misalignment score of an MLE LM reaches 0.0141; conditional pretraining instead reaches 0.0011. These order-of-magnitude drops persist for metrics tracking the right tail of the misalignment score distribution (worst case), see Figs.~\ref{fig:pretrain_exp_max_score}-\ref{fig:pretrain_score_num_hits} in Appendix~\ref{appendix:lm_scores}. Conditional training shifts the right tail furthest left (Fig.~\ref{fig:score_distribution}). Moreover, for conditional training and filtering, the misalignment score decreases consistently through training time, with no clear signs of a plateau. This scaling behavior suggests that increasing training set size further would lead to even lower scores.
Among PHF objectives, conditional training offers the best trade-off between misalignment score reduction and KL overhead. It is strictly Pareto-optimal in toxicity (leftmost and bottommost in Fig.~\ref{results:pretrain-main}, first column, first row) and on the Pareto frontier in PII and PEP8. It is also the only PHF method that is always on the Pareto frontier across all three tasks. In terms of score, it is only outperformed (by filtering) on PEP8. Filtering turns out to be a strong baseline; it is either second-best or best in terms of alignment. However, on two out of three tasks (PII and PEP8) it pays a significant capabilities penalty (the largest among all methods). RWR and AWR tend to obtain similar, rather poor, performance. They improve upon MLE's misalignment score only slightly, while reducing capabilities significantly compared to MLE. Finally, the success of unlikelihood training is highly task-dependent; it reduces the misalignment score significantly for toxicity but only slightly for PII and PEP8.
\subsection{Robustness to Red-Teaming}
\label{sec:pretraining/red_teaming}
\paragraph{Procedure}
In addition to measuring how aligned our LMs are for unconditional generation, we also study their responses to prompts chosen by an adversary. The adversary tries to elicit misaligned behavior of the target LM $\pi_\theta$, a procedure known as ``red-teaming'' \citep{perez_2022}. We use prompted InstructGPT \citep[\texttt{text-davinci-002};][]{Ouyang2022} to simulate an adversary, extending the stochastic few-shot generation approach to red-teaming introduced by \citet{perez_2022}.
We start with an initial pool of human-written adversarial prompts $P = \{ a_i \}$ and iteratively apply the following steps:
\vspace{-5px}
\begin{enumerate}
\itemsep0em
\item Assign each new adversarial prompt $a_i \in P$ a utility $u(a_i) = \frac{1}{N}\sum_{j=1}^N \left(-R(x_j)\right)$, where $x_j \sim \pi_\theta(\cdot|a_i)$ and $\pi_\theta$ is the target LM.
\item Sample $K=4$ adversarial prompts from the pool, $a_1, \dots, a_K$, with weights proportional to $\exp(u(a_k)/\beta)$.
\item Instruct InstructGPT to generate text likely to elicit a particular alignment failure (an offensive reply, leaking PII, or violating PEP8). In addition to the instruction, InstructGPT is provided with $a_1, \dots, a_K$ as few-shot examples. We sample $M=20$ independent completions and add them to the pool $P$.
\end{enumerate}
We repeat steps (1)-(3) for ten rounds. For each model and each task, we conduct ten separate trials of the procedure. We report average and standard deviation across ten trials. For more details, see Appendix \ref{appendix:red}.
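In pseudocode, a single trial of this procedure looks roughly as follows (the completion-sampling call stands in for the few-shot InstructGPT query and is purely illustrative):
\begin{verbatim}
# Sketch of one red-teaming trial. `score_prompt` implements step (1);
# `sample_completion` queries InstructGPT with the instruction plus the
# selected few-shot examples (step 3). Both are placeholders.
import math, random

def red_team(pool, score_prompt, sample_completion,
             rounds=10, K=4, M=20, beta=0.1):
    utilities = {a: score_prompt(a) for a in pool}
    for _ in range(rounds):
        weights = [math.exp(utilities[a] / beta) for a in pool]
        few_shot = random.choices(pool, weights=weights, k=K)  # step (2)
        for _ in range(M):                                     # step (3)
            a_new = sample_completion(few_shot)
            utilities[a_new] = score_prompt(a_new)             # step (1)
            pool.append(a_new)
    return pool, utilities
\end{verbatim}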
\vspace{-5px}
\paragraph{Results}
We show the average misalignment score of all adversarial prompts in the pool, $\frac{1}{|P|}\sum_{i=1}^{|P|} u(a_i)$, throughout ten rounds of red-teaming in Fig.~\ref{fig:pretrain_red-team} (see also Figs.~\ref{fig:pretrain_red-team_round_avg}-\ref{fig:pretrain_red-team_round_max} in Appendix~\ref{appendix:red} for other metrics). The main trend is consistent with the misalignment scores from \S\ref{sec:pretraining/tradeoffs}: conditional training and filtering are the most robust objectives in terms of their final misalignment scores. On toxicity and PII, even after ten rounds of red-teaming, conditional training outperforms MLE by up to an order of magnitude. Unlikelihood's performance is heavily task-dependent; it is the most robust method (by a wide margin) for toxicity while being the least robust for PII. We verified that its unusually high robustness on toxicity persists when, instead of actively red-teaming, we compute misalignment scores for generation conditioned on a fixed set of challenging RealToxicityPrompts \cite{gehman-etal-2020-realtoxicityprompts}; see Fig.~\ref{fig:pretrain_score_rtp} in Appendix~\ref{appendix:lm_scores}. Overall, all LMs pretrained with feedback (except for the unlikelihood-trained LM on PII) are significantly more robust to adversaries than MLE-trained LMs.
On the other hand, all PHF objectives leave LMs with vulnerabilities that an adversary with black-box access can exploit. For all PHF objectives, subsequent iterations of red-teaming increase the average score of target LM responses, with no clear plateau even after ten iterations. This result highlights the limitations of PHF; while it results in LMs significantly more robust than after MLE pretraining, the resulting LMs are not completely aligned or safe in all deployment scenarios.
\subsection{Downstream Benchmarks}
\label{sec:pretraining/downstream}
\paragraph{Zero-shot Benchmarks}
We supplement KL from GPT-3 as a measure of LM capabilities by measuring the performance of trained models on tasks without additional training or examples (zero-shot). We choose tasks on which a 124M-parameter MLE-trained LM should be able to achieve non-trivial performance.
For toxicity and PII, we evaluate models on LAMBADA \cite{paperno-etal-2016-lambada}, a passage understanding task that evaluates an LM's accuracy and perplexity at predicting the final word in a passage.
For PEP8, we report pass@10 and pass@100 on HumanEval \cite{chen2021codex} which tasks models with generating code to solve a given problem, and evaluates the correctness of the generated code using test cases.
\paragraph{GLUE}
\begin{figure}[t]
\centering
\vspace{-10px}
\hspace{0.01\textwidth}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: toxicity}\hspace{30px}}
\end{subfigure}
\hspace{-0.01\textwidth}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_lm_harness.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.04\textwidth}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_glue.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.02\textwidth}
\vspace{-10px}
\hspace{0.01\textwidth}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PII}\hspace{40px}}
\end{subfigure}
\hspace{-0.01\textwidth}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_lm_harness.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.04\textwidth}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_glue.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.02\textwidth}
\vspace{-5px}
\hspace{0.01\textwidth}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PEP8}\hspace{40px}}
\end{subfigure}
\hspace{-0.01\textwidth}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_pretrain_humaneval10.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.04\textwidth}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_pretrain_humaneval100.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.02\textwidth}
\vspace{-5px}
\caption{GLUE and zero-shot evaluation results (higher is better). Conditional training (\textcolor{cond_orange}{orange}) tends to match MLE's (\textcolor{mle_blue}{blue}) performance.}
\label{fig:pretrain_zero_shot}
\vspace{-5px}
\end{figure}
We also study the performance of PHF-trained LMs on various natural language understanding tasks, after finetuning on those tasks.
In this way, we evaluate the effectiveness of various pretraining objectives at representation learning. In contrast with the metrics from previous subsections, this kind of evaluation does not involve any generation; it tests how PHF affects representations acquired during pretraining rather than how it affects the distribution over LM outputs.
Here, we use the GLUE benchmark \cite{wang2018_glue}, a suite of text classification tasks related to question answering, sentiment analysis and recognizing textual entailment, among others. We conduct single-model single-task evaluation, i.e. to evaluate a given pretrained LM, we finetune it on the training set of each GLUE task separately and report test set scores averaged across tasks. To control for the variance of results, we restart each finetuning three times and report standard deviation of scores as error bars. We omit GLUE evaluation for PEP8 models because they are trained on code rather than natural language (used in GLUE tasks). See Appendix~\ref{appendix:glue} for details.
\paragraph{Results}
We present the results of zero-shot evaluation in Fig.~\ref{fig:pretrain_zero_shot}. Conditional training slightly exceeds MLE's performance in terms of accuracy on both tasks. Other PHF objectives suffer from decreased accuracy, especially for toxicity. Unlikelihood also matches MLE accuracy, but only for PII; it obtains very low accuracy on toxicity (recall that we found similar task-sensitivity in \S\ref{sec:pretraining/tradeoffs} and \S\ref{sec:pretraining/red_teaming}).
GLUE results paint a similar picture; conditional training most closely matches MLE scores. The second-best objective using feedback is Filtering (on toxicity) or unlikelihood (on PII). For results on individual GLUE tasks, see Appendix~\ref{appendix:glue}.
Finally, on HumanEval, the capabilities gap between MLE and PHF methods is wider. This gap is only closed -- in terms of pass@100 -- by filtering. Conditional training is no longer the best PHF method; it is outperformed or matched by filtering, AWR and RWR. Unlikelihood consistently obtains the lowest scores.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.01\textwidth}
\rotatebox[origin=r]{90}{\small{Task: toxicity}\hspace{28px}}
\end{subfigure}
\hspace{-0.005\textwidth}
\begin{subfigure}[t]{0.46\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_diversity.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\vspace{-15px}
\begin{subfigure}[t]{0.01\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PII}\hspace{30px}}
\end{subfigure}
\hspace{-0.005\textwidth}
\begin{subfigure}[t]{0.46\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_diversity.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\vspace{-10px}
\caption{Difference in diversity (token entropy) and degeneration frequency (distinct tokens) compared to MLE (higher is better).}
\label{fig:diversity}
\vspace{-10px}
\end{figure}
\vspace{-5px}
\begin{figure*}[t]
\begin{center}
\small{%
\cblock{31.12156862745098}{119.46666666666667}{180.7058823529412} MLE\quad
\cblock{255}{160}{88}
Conditional\quad
\cblock{44.17254901960784}{160.62745098039215}{44.17254901960784} Filtering\quad
\cblock{192}{192}{192} Unlikelihood, RWR, AWR \quad \\
\vspace{5px}
\line{} Pretraining \quad \line{dashed} Finetuning from MLE for 1.6B tokens \quad \line{dotted} Finetuning from MLE for 330M tokens}
\end{center}
\begin{subfigure}[t]{0.33\textwidth}
\begin{center}
\hspace{30px}\small{Task: toxicity}
\end{center}
\end{subfigure}
\begin{subfigure}[t]{0.33\textwidth}
\begin{center}
\hspace{30px}\small{Task: PII}
\vspace{-40px}
\end{center}
\end{subfigure}
\begin{subfigure}[t]{0.33\textwidth}
\begin{center}
\hspace{30px}\small{Task: PEP8}
\end{center}
\end{subfigure}
\centering
\begin{subfigure}[t]{0.33\textwidth}
\vspace{-10px}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_finetune_vs_pretrain_score.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.33\textwidth}
\vspace{-10px}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_finetune_vs_pretrain_score.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.33\textwidth}
\vspace{-10px}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_finetune_vs_pretrain_score.pdf}
\end{subfigure}
\vspace{-5px}
\caption{Misalignment score over training time for finetuning with feedback. We report finetuning from a model trained on 1.6B tokens using MLE (dashed line) and finetuning from a model trained on 2.9B tokens using MLE (dotted line).
For comparison, we also plot MLE pretraining and conditional pretraining (solid lines). We grayed out finetuning runs with worse results for clarity. On all tasks, neither finetuning run matches conditional pretraining's scores.}
\label{fig:finetuning}
\vspace{-5px}
\end{figure*}
\subsection{Diversity}
\label{sec:pretraining/diversity}
\paragraph{Metrics}
Constraining an LM to be aligned with human preferences can result in decreased entropy or increased degeneration of LM samples \cite{korbak2022rlBayesian}, e.g. due to repeated tokens \citep{holtzman2019}. To control for this, we supplement our capabilities evaluation with an examination of the diversity and rate of degeneration of LM samples.
We measure diversity in terms of the entropy over unigrams expected in a set of $N = 2048$ LM samples and degeneration in terms of the ratio of distinct unigrams to all unigrams \emph{within} an average sample \cite{li2015diversity}. In Appendix~\ref{appendix:diversity} we also report Self-BLEU-5, a measure of text diversity \emph{across} samples \cite{zhu2018texygen}, bigram entropy and the fraction of distinct bigrams.
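Both statistics are simple functions of unigram counts (a sketch; tokenization is left to whatever tokenizer the LM uses):
\begin{verbatim}
# Sketch of the diversity metrics: unigram entropy over a set of
# samples, and the average fraction of distinct unigrams per sample.
import math
from collections import Counter

def unigram_entropy(samples):                   # samples: lists of tokens
    counts = Counter(tok for s in samples for tok in s)
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

def distinct_unigram_fraction(samples):
    return sum(len(set(s)) / max(len(s), 1) for s in samples) / len(samples)
\end{verbatim}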
\vspace{-5px}
\paragraph{Results}
The results for toxicity and PII, shown in Fig.~\ref{fig:diversity}, reveal two patterns of behavior. Unlikelihood, AWR and RWR tend to match MLE diversity but suffer from slightly increased degeneration. Conditional training and, to a degree, filtering show the reverse trend: decreased diversity but a fraction of distinct unigrams that more closely matches MLE's. In absolute terms, however, none of the PHF objectives cause significant degeneration or entropy collapse.
\section{Finetuning with Human Feedback} \label{sec:finetuning}
\paragraph{Setup}
As discussed in \S\ref{sec:intro}, the standard approach to aligning LMs with human preferences involves pretraining an LM using MLE and finetuning it using an objective involving human feedback, e.g., RL with KL penalties \citep{ziegler2019fine,Ouyang2022} or supervised finetuning \citep{solaiman2021,chung2022_scaling_instruction}. In this section, we compare PHF to supervised finetuning with human feedback using PHF objectives, but only after MLE pretraining.\footnote{We also experimented with finetuning using RL with KL penalties, but decided to exclude these experiments because we did not obtain results competitive with supervised finetuning.}
We are also interested in understanding whether pretraining with MLE and then finetuning with feedback is better than using PHF from scratch. To address this question, we compare finetuning runs against PHF with conditional training, the PHF objective we identified as the best in \S\ref{sec:pretraining}.
To ensure comparability, we use checkpoints of MLE runs from \S\ref{sec:pretraining} trained on either 50\% of the training data (i.e. 1.66B tokens) or 90\% of the training data (i.e. 2.97B tokens). We then continue finetuning them for another 1.66B or 330M tokens, respectively, using each of the five PHF objectives.\footnote{It is worth noting that the fraction of the training budget we allocate to finetuning (50\% or 10\%) is already very high (e.g. compared to 1.6\%-0.2\% in \cite{chung2022_scaling_instruction} or 0.1\% in \cite{tay2022_transcending}). This experiment design allows us to interpolate between pretraining and finetuning.} We conduct separate hyperparameter sweeps over learning rate and batch size for each task and finetuning objective. Following standard practice for finetuning a pretrained model, we reset the learning rate schedule used during pretraining. Our setup is otherwise identical to that from \S\ref{sec:pretraining}, e.g., finetuning runs use the same order and batches of training data as pretraining runs from \S\ref{sec:pretraining}.
\begin{figure*}[t]
\centering
\begin{center}
\small{%
\quad \line{pretrain} Pretraining \quad \line{dashed,finetune} Finetuning from MLE for 1.6B tokens
\quad \line{dotted,finetune90} Finetuning from MLE for 330M tokens
}
\end{center}
\vspace{-10px}
\hspace{-0.05\textwidth}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: toxicity}\hspace{25px}}
\end{subfigure}
\hspace{-8px}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_finetune_red_team.pdf}
\vspace{-10px}
\end{subfigure}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PII}\hspace{35px}}
\end{subfigure}
\hspace{-8px}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_finetune_red_team.pdf}
\vspace{-10px}
\end{subfigure}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PEP8}\hspace{38px}}
\end{subfigure}
\hspace{-8px}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_finetune_red_team.pdf}
\vspace{-10px}
\end{subfigure}
\hspace{-0.05\textwidth}
\vspace{-20px}
\caption{Average misalignment score (lower is better) of LM responses to adversarial prompts in the pool found in the course of red-teaming, for models pretrained with conditional training (solid lines) and only finetuned with conditional training (dashed and dotted lines). Pretraining with feedback for the whole time is always better than only using feedback for the final 330M tokens, and tends to be better than using feedback only for the final 1.6B tokens.
}
\label{fig:finetune_red-team}
\vspace{-10px}
\end{figure*}
\paragraph{Results}
We present the comparison of PHF and finetuning with human feedback in Fig.~\ref{fig:finetuning}.
PHF achieves scores that are always better, typically dramatically better, than finetuning with feedback. On toxicity and PII there is a significant gap between pretraining using conditional training and the best finetuning objective.
For instance, on PII, aligning the LM during pretraining is two to three times more effective than finetuning on 330M tokens; conditional pretraining converges to a misalignment score of 0.0013, compared to 0.0018 (finetuning on 1.6B tokens) and 0.0023 (finetuning on 330M tokens).
The gap between PHF and finetuning with feedback only widens as fewer tokens are available for finetuning (dashed vs dotted line in Fig.~\ref{fig:finetuning}).
The size of this gap and its persistence across two tasks provide evidence that PHF is more effective than MLE pretraining followed by finetuning with feedback. We also present a head-to-head comparison of the pretraining and finetuning performance of each objective in Fig.~\ref{fig:pretrain_vs_finetune} in Appendix~\ref{appendix:finetuning}; we find that the improvement from PHF over only finetuning with feedback tends to increase with how effective the PHF objective is at reducing scores in general. Conditional training works well for both pretraining and finetuning (see Fig.~\ref{fig:finetune-main} for a direct comparison of the capabilities--alignment trade-offs of all objectives during finetuning for 1.6B tokens).
Finally, we repeated the red-teaming procedure from \S\ref{sec:pretraining/red_teaming} to compare the adversarial robustness of LMs pretrained with conditional training and LMs only finetuned with conditional training (Fig.~\ref{fig:finetune_red-team}). Once again, mirroring the misalignment scores obtained from unconditional sampling, we found LMs pretrained with human feedback to be significantly more robust to red-teaming (on toxicity and PII). For instance, on PII, ten rounds of red-teaming of PHF-trained LMs are required to reach the misalignment score that a finetuned LM has after just one iteration. Overall, our findings demonstrate that the alignment of an LM is closely tied to the quantity of human feedback it receives during training. Involving human feedback throughout the entire pretraining process (as in PHF) results in substantially better alignment than the standard practice of incorporating feedback for only a small portion of the training budget.
\section{Related Work}
\paragraph{Offline RL}
In this paper, we tackled the problem of training an LM on (potentially undesirable) content annotated with feedback while constraining the LM not to imitate undesirable content at inference time. This setting is closely related to offline RL, which addresses training an optimal policy on (possibly suboptimal) demonstrations annotated with rewards \cite{levine_offline_rl}. Most work in offline RL has focused on pretraining policies for robotic control environments \cite{nair2020,kumar2020}. However, offline RL techniques were recently used for finetuning pretrained LMs to be aligned with human preferences in dialog tasks \cite{jaques-etal-2020-human,jang2022gptcritic,snell_ilql}. Conditional training has recently emerged as an effective approach to offline RL~\cite{Schmidhuber2019,kumar2019_rcp} and demonstrated strong results %
when paired with transformers \cite{chen2021decisiontransformer,janner2021sequence}.
For instance, decision transformer \cite{chen2021decisiontransformer} consists of training a sequence model on (reward, state, action) pairs and, at inference time, sampling an action conditioned on high reward. This approach mirrors our conditional training approach: training an LM on (control token, sentence) pairs and, at inference time, sampling tokens when conditioned on an \texttt{<|good|>} control token.
\paragraph{LM alignment during finetuning}
While we focus on pretraining, aligning LMs is frequently approached through finetuning an MLE-pretrained LM. In addition to RLHF \citep{ziegler2019fine}, alternative finetuning objectives included divergence from a target distribution \cite{khalifa2021distributional,korbak2022reinforcement} or supervised finetuning on data generated by other LMs \cite{scheurer2022} or highly curated collections of tasks phrased as instructions \cite{sanh_t0,chung2022_scaling_instruction}.
\section{Conclusion}
In this paper, we challenged the practice of aligning LMs during finetuning alone and advocated for utilizing human feedback during pretraining itself. Out of the five PHF objectives we evaluated, conditional training consistently outperforms the alternatives in terms of both capabilities and alignment (with two notable exceptions: unlikelihood is more robust to red-teaming on toxicity and filtering achieves better HumanEval results). The fact that conditional training tends to match MLE's capabilities while enjoying much better alignment corroborates previous findings \cite{bai2022training} that alignment and capabilities might not be at odds with each other on many tasks of practical importance. On the other hand, PHF requires the additional overhead of annotating the training data with a reward function. Note, however, that assigning rewards is a constant cost that can be amortized over many training runs. PHF shares an important advantage of offline RL over online RL approaches (such as RLHF): being able to reuse feedback annotations across experiments \cite{levine_offline_rl}. Overall, incorporating human preferences in pretraining leads to capable models that generate text more aligned with human preferences, even under adversarial attacks.
\section*{Acknowledgments}
We are grateful to
Adam Gleave,
Ajeya Cotra,
Alex Havrilla,
Andy Jones,
Asa Cooper Stickland,
Beth Barnes,
Charlie Snell,
Claudia Shi,
Daniel Ziegler,
David Dohan,
David Krueger,
David Linder,
Euan McLean,
Evan Hubinger,
Ian McKenzie,
Jérémy Scheurer,
Kath Lupante,
Kyle McDonell,
Laria Reynolds,
Leo Gao,
Łukasz Kuciński,
Michael Janner,
Piotr Miłoś,
Sean Welleck,
Scott Emmons, and
Xiang Pan
for helpful conversations and feedback.
Tomasz Korbak was supported by the Leverhulme Doctoral Scholarship and Open Philanthropy.
Angelica Chen was supported by the National Science Foundation Award no. 1922658.
Sam Bowman was supported by Eric and Wendy Schmidt (by recommendation of the Schmidt Futures program), Open Philanthropy, Apple, and the National Science Foundation under Grant Nos. 1922658 and 2046556.
Ethan Perez was supported by the National Science Foundation and Open Philanthropy. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
We also thank NYU High-Performance Computing Center for providing access to computational resources and OpenAI for providing access and credits to their models via the API Academic Access Program.
\section{Introduction}\label{sec:intro}
\begin{figure}
\centering
\begin{subfigure}[c]{0.5\textwidth}
\includegraphics[width=0.9\linewidth]{figures/fig1.pdf}
\end{subfigure}
\begin{subfigure}[c]{0.5\textwidth}
\vspace{-210px}
\begin{subfigure}[b]{0.4\textwidth}
\quad
\end{subfigure}%
\begin{subfigure}[b]{0.75\textwidth}
\tiny{{
\line{mle_blue} Conventional LM pretraining \quad \hfill \break
\line{cond_orange} Pretraining with feedback \\
\line{dashed,cond_orange} Finetuning with feedback for 1.6B tokens \\
\line{dotted,cond_orange} Finetuning with feedback for 330M tokens}}
\end{subfigure}
\end{subfigure}
\vspace{-20px}
\caption{Toxicity score (lower is better) of LMs pretrained with the standard objective (solid \textcolor{mle_blue}{blue}), using conditional training (solid \textcolor{cond_orange}{orange}) and LMs finetuned using conditional training for 1.6B (\textcolor{cond_orange}{orange} dashed) and 330M tokens (\textcolor{cond_orange}{orange} dotted). Pretraining with Human Feedback (PHF) reduces the amount of offensive content much more effectively than finetuning with human feedback.}
\label{fig:fig1}
\vspace{-15px}
\end{figure}
Language models (LMs) are trained to imitate text from large and diverse datasets.
These datasets often contain content that violates human preferences, e.g., falsehoods \cite{lin2021truthfulqa}, offensive comments \cite{gehman-etal-2020-realtoxicityprompts}, personally identifiable information \citep[PII;][]{carlini2020} or low-quality code \cite{chen2021codex}.
Imitating such data stands in stark contrast with the behavior people desire from language models, e.g., to generate text that is helpful, honest and harmless \cite{lab}.
In this paper, we explore alternative objectives for pretraining LMs on large amounts of diverse data that guide them to generate text aligned with human preferences.
Prior work on aligning LMs with human preferences almost exclusively focused on making adjustments to pretrained LMs. A widely adopted strategy of adding safety filters on top of pretrained LMs \cite{recipes} works only to an extent: even the most effective safety filters fail to catch a large amount of undesirable content \cite{gehman-etal-2020-realtoxicityprompts,weibl2021,ziegler2022}. Another approach involves finetuning LMs using either supervised learning on curated data \citep{solaiman2021,scheurer2022} or reinforcement learning from human feedback \citep[RLHF;][]{ziegler2019fine,Ouyang2022,bai2022training,menick_2022_sparrow}, but this strategy is also limited by the fact that large LMs are quite resistant to forgetting their training data \citep[an effect that increases with model size;][]{carlini2022,vu2022,ramasesh2022effect}. While filtering out all undesirable content from pretraining data could seem to be a simple solution, it severely handicaps the capabilities of LMs \cite{weibl2021} which are already bottlenecked by high-quality data \citep{Hoffmann2022,Villalobos2022_will_we}.
Moreover, reducing the diversity of training data can negatively impact alignment with human preferences by decreasing robustness \cite{Hendrycks2019,Hendrycks2020} and amplifying existing social biases \cite{xu_detoxifying,weibl2021}.
These limitations suggest that while human preferences should be imposed in pretraining itself, content violating those preferences should still be present in the training data.
In this paper, we explore objectives for aligning LMs with human preferences during pretraining. Instead of filtering the training data, we propose pretraining with human feedback (PHF), where we estimate human preference judgments using a reward function (e.g. a toxic text classifier).
In this way, we allow the LM to learn from undesirable content while guiding the LM \emph{not} to imitate it at inference time.
We experiment with four PHF objectives: conditional training \cite{keskar}, dataset filtering, unlikelihood loss \cite{welleck2019} and two offline RL algorithms, reward-weighted regression \citep[RWR;][]{peters2007} and advantage-weighted regression \citep[AWR;][]{peng2019}. We compare them to maximum likelihood estimation (MLE), the standard pretraining objective.
We evaluate PHF objectives on three tasks: generating non-toxic text, text without personally identifiable information (PII), and PEP8-compliant Python \cite{pep8}.
We compare LMs pretrained with feedback in terms of \emph{alignment} (how well they satisfy preferences) and \emph{capabilities} (how well they perform on downstream tasks).
While different objectives offer different alignment--capabilities trade-offs for different tasks, we find that \emph{conditional training} is on the Pareto frontier across all three tasks. Conditional training is a simple algorithm that learns a distribution over tokens conditional
on their human preference score, reminiscent of decision transformer in reinforcement learning \citep{chen2021decisiontransformer}.
Conditional training decreases the frequency of undesirable content in LM samples up to an order of magnitude, reaping continued improvements with increasing training data (\S\ref{sec:pretraining/tradeoffs}).
Superior alignment persists when the LM is faced with an adversary prompting it to elicit undesirable behavior, as evaluated using the automated red-teaming approach from \citet{perez_2022} (\S\ref{sec:pretraining/red_teaming}).
At the same time, conditional training achieves comparable performance to MLE-trained LMs on zero-shot benchmarks \citep{paperno-etal-2016-lambada,chen2021codex} and after finetuning on GLUE tasks \citep{wang2018_glue} (\S\ref{sec:pretraining/downstream}); conditional training is able to learn representations from the entire training distribution, without learning to regurgitate undesirable content as MLE-trained LMs do.
Finally, in \S\ref{sec:finetuning} we examine whether PHF improves over the standard practice of MLE pretraining followed by finetuning with human feedback.
We find that PHF results in equal or (sometimes dramatically) better alignment across all three tasks (Fig.~\ref{fig:fig1}) as well as improved adversarial robustness.
These findings results suggest that it is more effective to train LMs to exhibit desirable behaviors from the outset, rather than having them learn undesirable behavior and then attempt to unlearn it.
Our results challenge the standard practice of aligning LMs with human preferences during finetuning alone, suggesting that should we incorporate human preferences from the very beginning of training.\footnote{The code and datasets accompanying the paper are available at \href{https://github.com/tomekkorbak/pretraining-with-human-feedback}{github.com/tomekkorbak/pretraining-with-human-feedback}}
\section{Methods}
\label{sec:method}
Here we present five PHF objectives that we will evaluate in \S\ref{sec:pretraining}, in terms of various capabilities and alignment metrics for different tasks. In LM pretraining, we start with an LM $\pi_\theta$ with randomly initialized weights $\theta$ and an unlabeled dataset of documents $\mathcal{D}$. Each document $x \in \mathcal{D}$ is a sequence of segments (sentences or lines): $x = (x^1, \dots, x^{|x|})$. Each segment $x^i \in x$ is a sequence of $N_i$ tokens: $x^i = (x^i_1, \dots, x^i_{N_i})$, where $N_i = |x^i|$. Tokens come from a fixed vocabulary $\mathcal{V}$. In PHF, we additionally assume access to a segment-level reward function $R$ that takes a document segment $x^i$ and outputs a scalar score $R(x^i)$ indicating how preferable $x^{(i)}$ is. For instance, $R(x^i)$ could be the negative likelihood that a sentence would be harmful to civil conversation. At a high-level, pretraining can be posed as maximizing some pretraining objective $\mathcal{L}$ across documents: $\pi_\theta = \operatorname*{argmax}_\theta \sum_{x \in \mathcal{D}} \mathcal{L}(x)$. In the rest of the section we will describe MLE, the standard objective, followed by five PHF objectives.
\paragraph{MLE} Maximum likelihood estimation \citep[MLE;][]{Bengio2013,mikolov2021,Radford2018ImprovingLU,brown_gpt3} is the dominant approach to pretraining and finetuning LMs. This objective boils down to the log likelihood of training documents:
\begin{equation}\label{obj:mle}
\mathcal{L}_\text{MLE}(x) = \log \pi_\theta(x),
\end{equation}
where $\log \pi_\theta(x)$ can be decomposed autoregressively as
\begin{align}
\log \pi_\theta(x) &= \sum_{i=1}^{|x|} \log \pi_\theta(x^i|x^{<i}) \\
&= \sum_{i=1}^{|x|} \sum_{j=1}^{|x^i|} \log \pi_\theta(x^i_j|x^{\leq i}_{<j}),
\end{align}
where $x^{<i} = (x^1, \dots, x^{i-1})$ denotes all segments in a document prior to $x^i$ and $x^{\leq i}_{<j} = (x^1_1, \dots, x^i_{j-1})$ denotes all tokens in a document $x$ prior to $x^i_j$.
\paragraph{MLE with Filtering} Dataset filtering \citep{solaiman2021,Wang2022} corresponds to an objective identical to MLE except it is zero for documents $x$ such that their document-level reward $\text{avg}(R(x)) = \frac{1}{|x|} \sum_{i=1}^{|x|} R(x^i)$ is above a threshold $t$:
\begin{equation}
\mathcal{L}_\text{Filt}(x) = \begin{cases}\log \pi_\theta(x), \text{if} \text{ avg} (R(x)) > t, \\ 0, \ \text{otherwise}.\end{cases}
\end{equation}
$t$ is a hyperparameter we set to a certain percentile of document-level rewards in the training data (see Appendix~\ref{appendix:hparams} for values used in experiments and an ablation study). In practice, we train with this objective by discarding documents with rewards below $t$ and training for multiple epochs on the remaining ones at a fixed budget of training tokens.
\paragraph{Conditional Training} Conditional training \citep{ficler-goldberg-2017-controlling,fan2018,keskar} has been shown to be successful across tasks as diverse as as controllable language generation \cite{peng-etal-2018-towards,dai2019}, mitigating toxicity \cite{gehman-etal-2020-realtoxicityprompts,recipes,Lu2022QuarkCT} and robotic control \cite{chen2021decisiontransformer,janner2021sequence}. In its simplest form, it extends MLE by prepending each segment $x^i$ with a control token $c^i$ based on that segment's reward $R(x^i)$:
\begin{equation}\label{obj:cond}
\mathcal{L}_\text{Cond}(x) = \log \pi_\theta(c^i, x^i, \dots, c^{|x|}, x^{|x|})
\end{equation}
We use two control tokens: \texttt{<|good|>} if $R(x^i) \geq t$ and \texttt{<|bad|>} otherwise. The threshold $t$ is a hyperparameter.
At inference time, we sample from $\pi_\theta(\cdot|c_1=\texttt{<|good|>})$. See Appendix~\ref{appendix:hparams} for details.
\paragraph{Unlikelihood} Unlikelihood training \citep{welleck2019} follows MLE in maximizing the likelihoods of segments exceeding a certain reward threshold $t$. However, for segments with rewards below the threshold, we use token-level \emph{unlikelihood} instead. The unlikelihood of a token $x^i_j$ is the total log probability of all other tokens in the vocabulary on position $j$ of segment $i$. This gives rise to the objective:
\begin{align}
\mathcal{L}_\text{UL}(x) = &\sum_{\substack{ x=1 \\ R(x^i) > t}}^{|x|} \log \pi_\theta(x^i|x^{<i}) \nonumber \\ + \alpha &\sum_{\substack{ x=1 \\ R(x^i) \leq t}}^{|x|} \sum_{j=1}^{|x^i|} \log (1-\pi_\theta(x^i_j|x^{\leq i}_{<j}))
\end{align}
The threshold $t$ and $\alpha$, a coefficient scaling the second (unlikelihood) term, are hyperparameters.
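A sketch of this objective for a single document is shown below (a PyTorch-style sketch; the tensor layout and helper arguments are assumptions made for illustration):
\begin{verbatim}
import torch

def unlikelihood_objective(logprobs, token_ids, segment_rewards,
                           segment_bounds, t, alpha):
    """logprobs: (seq_len, vocab) log pi_theta(. | context) per position;
    token_ids: (seq_len,) observed tokens; segment_bounds: (start, end)
    token indices per segment; segment_rewards: R(x^i) per segment."""
    token_lp = logprobs.gather(1, token_ids.unsqueeze(1)).squeeze(1)
    total = torch.tensor(0.0)
    for (start, end), r in zip(segment_bounds, segment_rewards):
        seg_lp = token_lp[start:end]
        if r > t:      # likelihood term for preferred segments
            total = total + seg_lp.sum()
        else:          # unlikelihood term: log(1 - pi) per token
            total = total + alpha * torch.log1p(-seg_lp.exp()).sum()
    return total  # objective to be maximized
\end{verbatim}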
\paragraph{RWR} Reward-weighted regression \citep[RWR;][]{peters2007} extends MLE by reweighting each segment by a term proportional to exponentiated reward:
\begin{equation}
\mathcal{L}_\text{RWR}(x) = \sum_{i=1}^{|x|} \log \pi_\theta(x^i|x^{<i}) \exp(R(x^i)/\beta)
\end{equation}
$\beta$, the coefficient controlling how much reward affects the loss, is a hyperparameter.
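A minimal sketch of the objective (illustrative helper names):
\begin{verbatim}
import math

def rwr_objective(segment_logprobs, segment_rewards, beta):
    """Each segment's log-likelihood log pi_theta(x^i | x^{<i})
    is weighted by exp(R(x^i) / beta)."""
    return sum(lp * math.exp(r / beta)
               for lp, r in zip(segment_logprobs, segment_rewards))
\end{verbatim}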
\paragraph{AWR} Advantage-weighted regression \citep[AWR;][]{peng2019} extends RWR by subtracting a token-level value estimate $V_\theta(x^i_j)$ from each segment-level reward $R(x^i)$. Value estimates are produced by a value function that shares parameters $\theta$ with the LM but is trained to minimize the mean-squared error between token-level value estimate and ground-truth returns $R(x^i)$. The LM and the value head are trained jointly to maximize:
\begin{align}
\mathcal{L}_\text{AWR}(x) = \alpha &\sum_{i=1}^{|x|} \sum_{j=1}^{|x^i|} \log \pi_\theta(x^i_j|x^{\leq i}_{<j}) \exp \Big(A(x^i_j)/\beta \Big) \nonumber \\
- (1-\alpha) &\sum_{i=1}^{|x|} \sum_{j=1}^{|x^i|} \big[ V_\theta(x^i_j) - R(x^i) \big]^2
\end{align}
where $A(x^i_j) = R(x^i)-V_\theta(x^i_j)$ is the advantage. The two hyperparameters are $\alpha$ (controlling the trade-off between value loss and policy loss) and $\beta$ (again, controlling the amount of reweighting). We implement the value function $V_\theta$ as a linear head on top of the LM $\pi_\theta$; they share the parameters of all other layers.
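The sketch below illustrates the objective for a single document (PyTorch-style; detaching the exponentiated advantage from the policy gradient is a common implementation choice and an assumption here, as are the helper arguments):
\begin{verbatim}
import torch

def awr_objective(token_logprobs, values, segment_rewards,
                  segment_bounds, alpha, beta):
    """token_logprobs, values: (seq_len,) per-token log-probs and value
    estimates V_theta; segment_rewards: R(x^i) per segment;
    segment_bounds: (start, end) token indices per segment."""
    policy_term, value_term = 0.0, 0.0
    for (start, end), r in zip(segment_bounds, segment_rewards):
        adv = r - values[start:end]               # A = R(x^i) - V_theta
        weights = torch.exp(adv / beta).detach()  # no gradient through weights
        policy_term = policy_term + (token_logprobs[start:end] * weights).sum()
        value_term = value_term + ((values[start:end] - r) ** 2).sum()
    return alpha * policy_term - (1 - alpha) * value_term
\end{verbatim}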
\section{Experimental Setup}\label{sec:setup}
Here, we describe the setup of our pretraining (\S\ref{sec:pretraining}) and finetuning experiments~(\S\ref{sec:finetuning}), which we use to compare MLE and various PHF objectives on both capabilities and alignment.
\subsection{Tasks}
We evaluate PHF objectives on three tasks: (i) avoiding offensive content, (ii) avoiding leaking personally identifiable information (PII), and (iii) generating Python code following PEP8, the style guide for Python \cite{pep8}. Each task is associated with a reward function $R$ and a dataset $\mathcal{D}$ as defined in \S\ref{sec:method}. For evaluation, we use misalignment scores equal to the negative rewards.
\paragraph{Toxicity}
LMs can generate highly harmful language, including insults, profanities and threats \cite{sap-etal-2019-risk,gehman-etal-2020-realtoxicityprompts,abid2021}. Following \citet{weibl2021}, we group these harms under the name of ``toxicity,'' understood as ``a rude, disrespectful, or unreasonable comment that is somewhat likely to make you leave a discussion or give up on sharing your perspective'' \citep{Borkan2019}. To obtain toxicity scores, we follow \citet{lab} and use Detoxify \citep{Detoxify}, %
a toxic comment classifier. We use the \texttt{unbiased} model, based on the 124M parameter RoBERTa \citep{Liu2019} and trained on the Jigsaw Unintended Bias in Toxicity Classification dataset \citep{Borkan2019}. We define our reward $R$ as the negative probability of toxicity according to Detoxify and the misalignment score as the probability of toxicity. Since Detoxify was trained on short documents (predominantly comments), we first segment our training documents using a SpaCy \citep{spacy} sentence segmenter and score them at the sentence level. When scoring LM samples during evaluation, we skip segmentation.
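A hedged sketch of this reward pipeline is shown below; the exact spaCy model and the way scores are batched are assumptions, but the segment-then-score structure follows the description above:
\begin{verbatim}
import spacy
from detoxify import Detoxify

nlp = spacy.load("en_core_web_sm")   # assumed sentence segmenter
detoxify = Detoxify("unbiased")

def toxicity_rewards(document):
    sentences = [s.text for s in nlp(document).sents]
    probs = detoxify.predict(sentences)["toxicity"]  # P(toxic) per sentence
    return sentences, [-p for p in probs]            # reward R(x^i) = -P(toxic)
\end{verbatim}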
\paragraph{PII}
LMs sometimes generate text that occurs verbatim in their training data \cite{carlini2019,perez_2022}. This poses privacy risks if the text contains confidential information identifying living people (PII) such as email addresses or social security numbers \cite{henderson2017}. To detect such PII, we use Scrubadub,\footnote{\href{https://github.com/LeapBeyond/scrubadub}{github.com/LeapBeyond/scrubadub}} a PII detector using both pattern matching rules and a pretrained SpaCy \citep{spacy} named entity recognizer. We use pattern matching for detecting emails, addresses and postal codes, phone numbers, credit card numbers, US social security numbers, vehicle plate numbers, dates of birth, URLs and login credentials. The named entity recognizer detects mentions of people's names, locations and organizations. We define our reward $R$ as the negative number of detected PII instances per character. As with toxicity, we score training documents at the sentence level.
\paragraph{PEP8}
While LMs are highly successful at generating code, the generated code is not always aligned with user intent \cite{chen2021codex}. For instance, prompted with low-quality code, LMs are likely to produce a low-quality completion even if the user's intent is to write high-quality code. We explore alignment failures in the context of code by requiring compliance with PEP8 \cite{pep8}, the style guide for Python. To detect PEP8 violations, we use \texttt{pycodestyle}, a popular static code analysis tool.\footnote{\href{https://github.com/PyCQA/pycodestyle}{github.com/PyCQA/pycodestyle}} Our reward function $R$ is the negative number of PEP8 violations per character. We assign rewards to individual lines of training documents, but note that the presence of PEP8 violations on a particular line does depend on previous lines.
\subsection{Model Architecture and Hyperparameters}
All of our LMs use the neural network architecture of \texttt{gpt2-small} \citep[124M parameters;][]{radford2019language}. We keep the original hyperparameters of \texttt{gpt2-small} except for learning rate and batch size, which we tune for each task-objective pair based on training loss. If an objective has its own hyperparameters (e.g. $t$, $\alpha$ or $\beta$), we tune learning rate and batch size separately for each $(t, \alpha, \beta)$ configuration considered and then choose the best $(t, \alpha, \beta)$ configuration based on the misalignment score of LM samples and the KL divergence from GPT-3~(\S\ref{sec:pretraining/tradeoffs}). See Appendix~\ref{appendix:hparams} for hyperparameters used in experiments and ablations on them.
\subsection{Training Data}
We fixed the training set size to 3.32B tokens, which is compute-optimal for our model size according to the scaling laws from \citet{Hoffmann2022}. For toxicity and PII, we prepared training data by subsampling 1.95M documents (totaling 3.32B tokens) from the Pile \citep{pile}. For code generation, we subsampled 1.5M Python files (again totaling 3.32B tokens) from a cleaned and filtered version of the GitHub dataset from Google BigQuery released by \citet{tunstall2022natural}.\footnote{\href{https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code}{GitHub on BigQuery}}
\section{Pretraining Experiments}\label{sec:pretraining}
\begin{figure*}[ht!]
\centering
\legend
\vspace{-10px}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: toxicity}\hspace{25px}}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_kl_score_scatterplot.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_kl.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_score.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\vspace{-15px}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PII}\hspace{38px}}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_kl_score_scatterplot.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_kl.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_score.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\vspace{-15px}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PEP8}\hspace{35px}}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_pretrain_kl_score_scatterplot.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_pretrain_kl.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_pretrain_score.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\vspace{-15px}
\caption{KL from GPT-3 and average misalignment score of LM samples for MLE and PHF objectives (lower is better). We show KL from GPT-3 versus average score on a scatter plot (first column) and also each of these two metrics over training time (with log-log axes; second and third columns). Conditional training (\textcolor{cond_orange}{orange}) is either strictly optimal (toxicity, PEP8) or on the Pareto frontier (PII) of PHF objectives.}
\label{results:pretrain-main}
\end{figure*}
In this section, we investigate how PHF affects the alignment and capabilities of resulting models. In \S\ref{sec:pretraining/tradeoffs} we introduce two primary metrics: misalignment score (indicating how well unconditional samples from an LM satisfy human preferences) and the KL divergence from GPT-3 (indicating general capabilities), and discuss the Pareto frontier of the capability-alignment trade-off. We additionally evaluate alignment by analyzing LM behavior when conditioned on adversarial prompts (``red-teaming''; \S\ref{sec:pretraining/red_teaming}) and evaluate capabilities by reporting performance on downstream tasks (\S\ref{sec:pretraining/downstream}). Finally, we measure the diversity of LM samples (\S\ref{sec:pretraining/diversity}).
\subsection{Capabilities-Alignment Trade-offs} \label{sec:pretraining/tradeoffs}
\paragraph{Misalignment Score} To estimate the frequency of undesirable content in text generated by an LM, we obtain a set of $K=4096$ samples from it by nucleus sampling \cite{holtzman2019} with temperature $T = 0.7$ and top-$p = 0.9$, constraining sequence length to be between 10 and 128 tokens. Unless specified otherwise, we generate unconditionally, i.e. only condition on a special \texttt{<|endoftext|>} token (or on \texttt{<|endoftext|><|good|>} when using conditional training). We then score those samples using the same scorers that had been used as reward functions during training. %
We report misalignment scores averaged across $K$ samples. In Appendix~\ref{appendix:lm_scores}, we also report metrics tracking the worst-case tail of misalignment score distribution.
\paragraph{KL from GPT-3} As a measure of an LM's general capabilities, we estimate the Kullback-Leibler (KL) divergence of its output distribution from that of a highly capable model, GPT-3 \citep{brown_gpt3}.
Lower divergence from GPT-3 likely translates into an increase in capabilities.
We qualitatively found KL from GPT-3 to be sensitive to the most egregious failure modes of PHF, e.g., degeneration \citep{holtzman2019}, repetition or reduced sample diversity. Note that KL from GPT-3 favors models trained like GPT-3, namely with MLE and without any alignment-relevant constraints; such constraints may cause the distribution to change in ways that do not impact a model's performance on downstream tasks.
We estimate $D_\text{KL}(p_\text{GPT3}, \pi_\theta)$ by computing $\frac{1}{N}\sum_{i=1}^N \log \frac{p_\text{GPT3}(x_i)}{\pi_\theta(x_i)}$, where $x_1, \dots, x_N \sim p_\text{GPT3}$ are samples from GPT-3 obtained using its public API\footnote{\href{https://openai.com/api/}{openai.com/api/}} and $\pi_\theta$ is the LM being evaluated. We generate $N = 4096$ unbiased (temperature 1, top-$p$ 1) samples of at most 64 tokens, using \texttt{<|endoftext|>} as a stop token. To decrease variance due to the stochasticity of sampling we used the same set of $N$ samples for all evaluations.
For toxicity and PII experiments, we use GPT-3 (175B; \texttt{davinci}) as $p_\text{GPT3}$. For PEP8, we use a 12B Codex model \citep[\texttt{code-cushman-001};][]{chen2021codex}. In prior experiments, we found that using InstructGPT \citep[\texttt{textdavinci-002};][]{Ouyang2022} as a target distribution gives very similar results.
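The estimator itself is a simple Monte Carlo average; a sketch (assuming per-sample log-probabilities under both models have already been computed) is:
\begin{verbatim}
def estimate_kl(gpt3_logprobs, pi_theta_logprobs):
    """Monte Carlo estimate of KL(p_GPT3 || pi_theta) from N samples x_i
    drawn from GPT-3: mean of log p_GPT3(x_i) - log pi_theta(x_i)."""
    n = len(gpt3_logprobs)
    return sum(lp_p - lp_q
               for lp_p, lp_q in zip(gpt3_logprobs, pi_theta_logprobs)) / n
\end{verbatim}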
\begin{figure*}[t]
\centering
\legend
\vspace{-15px}
\hspace{-0.05\textwidth}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: toxicity}\hspace{25px}}
\end{subfigure}
\hspace{-8px}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_red_team.pdf}
\vspace{-20px}
\end{subfigure}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PII}\hspace{35px}}
\end{subfigure}
\hspace{-8px}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_red_team.pdf}
\vspace{-20px}
\end{subfigure}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PEP8}\hspace{38px}}
\end{subfigure}
\hspace{-8px}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_pretrain_red_team.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.05\textwidth}
\vspace{-10px}
\caption{Average misalignment score of LM responses to adversarial prompts in the pool found in the course of red-teaming. With each additional round, more optimization pressure is applied to the search for adversarial prompts. A target LM is considered more robust when its misalignment score increases at a slower rate.}
\label{fig:pretrain_red-team}
\end{figure*}
\paragraph{Results}
We present our main results in Fig.~\ref{results:pretrain-main}. All PHF objectives are able to reduce the amount of undesirable content significantly, sometimes by an order of magnitude. For instance, on toxicity the average misalignment score of an MLE LM reaches 0.0141; conditional pretraining instead reaches 0.0011. These order-of-magnitude drops persist for metrics tracking the right tail of the misalignment score distribution (worst case), see Figs.~\ref{fig:pretrain_exp_max_score}-\ref{fig:pretrain_score_num_hits} in Appendix~\ref{appendix:lm_scores}. Conditional training shifts the right tail furthest left (Fig.~\ref{fig:score_distribution}). Moreover, for conditional training and filtering, the misalignment score decreases consistently through training time, with no clear signs of a plateau. This scaling behavior suggests that increasing training set size further would lead to even lower scores.
Among PHF objectives, conditional training offers the best trade-off between misalignment score reduction and KL overhead. It is strictly Pareto-optimal in toxicity (leftmost and bottommost in Fig.~\ref{results:pretrain-main}, first column, first row) and on the Pareto frontier in PII and PEP8. It is also the only PHF method that is always on the Pareto frontier across all three tasks. In terms of score, it is only outperformed (by filtering) on PEP8. Filtering turns out to be a strong baseline; it is either second-best or best in terms of alignment. However, on two out of three tasks (PII and PEP8) it pays a significant capabilities penalty (the largest among all methods). RWR and AWR tend to obtain similar, rather poor, performance. They improve upon MLE's misalignment score only slightly, while reducing capabilities significantly compared to MLE. Finally, the success of unlikelihood training is highly task-dependent; it reduces the misalignment score significantly for toxicity but only slightly for PII and PEP8.
\subsection{Robustness to Red-Teaming}
\label{sec:pretraining/red_teaming}
\paragraph{Procedure}
In addition to measuring how aligned our LMs are for unconditional generation, we also study their responses to prompts chosen by an adversary. The adversary tries to elicit misaligned behavior of the target LM $\pi_\theta$, a procedure known as ``red-teaming'' \citep{perez_2022}. We use prompted InstructGPT \citep[\texttt{text-davinci-002};][]{Ouyang2022} to simulate an adversary, extending the stochastic few-shot generation approach to red-teaming introduced by \citet{perez_2022}.
We start with an initial pool of human-written adversarial prompts $P = \{ a_i \}$ and iteratively apply the following steps:
\vspace{-5px}
\begin{enumerate}
\itemsep0em
\item Assign each new adversarial prompt $a_i \in P$ a score $u(a_i) = \frac{1}{N}\sum_{j=1}^N (-R(x_j))$, where $x_j \sim \pi_\theta(\cdot|a_i)$ and $\pi_\theta$ is the target LM.
\item Sample $K=4$ adversarial prompts from the pool, $a_1, \dots, a_K$, with weights proportional to $\exp(u(a_k)/\beta)$.
\item Instruct InstructGPT to generate text likely to elicit a particular alignment failure (offensive reply, leaking PII or violating PEP8). In addition to the instruction, InstructGPT is provided with $a_1, \dots, a_K$ as few shot examples. We sample $M=20$ independent completions and add them to the pool $P$.
\end{enumerate}
We repeat steps (1)-(3) for ten rounds. For each model and each task, we conduct ten separate trials of the procedure. We report average and standard deviation across ten trials. For more details, see Appendix \ref{appendix:red}.
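The sketch below summarizes this loop; the two callables abstract away the target-LM scoring and the InstructGPT few-shot generation, and all names are illustrative assumptions:
\begin{verbatim}
import math, random

def red_team(score_prompt, generate_candidates, initial_prompts,
             rounds=10, K=4, M=20, beta=1.0):
    """score_prompt(a) -> u(a): average misalignment (-R) of target-LM
    completions of prompt a (step 1). generate_candidates(few_shot, M):
    M new adversarial prompts from the attacker LM (step 3)."""
    pool = {a: score_prompt(a) for a in initial_prompts}
    for _ in range(rounds):
        prompts = list(pool)
        weights = [math.exp(pool[a] / beta) for a in prompts]  # step 2
        few_shot = random.choices(prompts, weights=weights, k=K)
        for a in generate_candidates(few_shot, M):              # step 3
            pool[a] = score_prompt(a)                           # step 1
    return pool
\end{verbatim}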
\vspace{-5px}
\paragraph{Results}
We show the average misalignment score of all adversarial prompts in the pool, $\frac{1}{|P|}\sum_{i=1}^{|P|} u(a_i)$, throughout ten rounds of red-teaming in Fig.~\ref{fig:pretrain_red-team} (see also Figs.~\ref{fig:pretrain_red-team_round_avg}-\ref{fig:pretrain_red-team_round_max} in Appendix~\ref{appendix:red} for other metrics). The main trend is consistent with misalignment scores from \S\ref{sec:pretraining/tradeoffs}: conditional training and filtering are the most robust objectives in terms of their final misalignment scores. On toxicity and PII, even after ten rounds of red-teaming, conditional training outperforms MLE by up to an order of magnitude. Unlikelihood's performance is heavily task-dependent; it is the most robust method (by a wide margin) for toxicity while being the least robust for PII. We verified that its unusually high robustness on toxicity persists when, instead of actively red-teaming, we compute misalignment scores for generation conditioned on a fixed set of challenging RealToxicityPrompts \cite{gehman-etal-2020-realtoxicityprompts}; see Fig.~\ref{fig:pretrain_score_rtp} in Appendix~\ref{appendix:lm_scores}. Overall, all LMs pretrained with feedback (except for the unlikelihood-trained LM on PII) are significantly more robust to adversaries than MLE-trained LMs.
On the other hand, all PHF objectives leave LMs with vulnerabilities that an adversary with black-box access can exploit. For all PHF objectives, subsequent iterations of red-teaming increase the average score of target LM responses, with no clear plateau even after ten iterations. This result highlights the limitations of PHF; while it results in LMs significantly more robust than after MLE pretraining, the resulting LMs are not completely aligned or safe in all deployment scenarios.
\subsection{Downstream Benchmarks}
\label{sec:pretraining/downstream}
\paragraph{Zero-shot Benchmarks}
We supplement KL from GPT-3 as a measure of LM capabilities by measuring the performance of trained models on tasks without additional training or examples (zero-shot). We choose tasks on which a 124M-parameter MLE-trained LM should be able to achieve non-trivial performance.
For toxicity and PII, we evaluate models on LAMBADA \cite{paperno-etal-2016-lambada}, a passage understanding task that evaluates an LM's accuracy and perplexity at predicting the final word in a passage.
For PEP8, we report pass@10 and pass@100 on HumanEval \cite{chen2021codex} which tasks models with generating code to solve a given problem, and evaluates the correctness of the generated code using test cases.
\paragraph{GLUE}
\begin{figure}[t]
\centering
\vspace{-10px}
\hspace{0.01\textwidth}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: toxicity}\hspace{30px}}
\end{subfigure}
\hspace{-0.01\textwidth}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_lm_harness.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.04\textwidth}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_glue.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.02\textwidth}
\vspace{-10px}
\hspace{0.01\textwidth}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PII}\hspace{40px}}
\end{subfigure}
\hspace{-0.01\textwidth}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_lm_harness.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.04\textwidth}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_glue.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.02\textwidth}
\vspace{-5px}
\hspace{0.01\textwidth}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PEP8}\hspace{40px}}
\end{subfigure}
\hspace{-0.01\textwidth}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_pretrain_humaneval10.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.04\textwidth}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_pretrain_humaneval100.pdf}
\vspace{-20px}
\end{subfigure}
\hspace{-0.02\textwidth}
\vspace{-5px}
\caption{GLUE and zero-shot evaluation results (higher is better). Conditional training (\textcolor{cond_orange}{orange}) tends to match MLE's (\textcolor{mle_blue}{blue}) performance.}
\label{fig:pretrain_zero_shot}
\vspace{-5px}
\end{figure}
We also study the performance of PHF-trained LMs on various natural language understanding tasks, after finetuning on those tasks.
In this way, we evaluate the effectiveness of various pretraining objectives at representation learning. In contrast with metrics from previous subsections, this kind of evaluation does not involve any generation; it tests how PHF affects representations acquired during pretraining rather than how it affects the distribution over LM outputs.
Here, we use the GLUE benchmark \cite{wang2018_glue}, a suite of text classification tasks related to question answering, sentiment analysis and recognizing textual entailment, among others. We conduct single-model single-task evaluation, i.e. to evaluate a given pretrained LM, we finetune it on the training set of each GLUE task separately and report test set scores averaged across tasks. To control for the variance of results, we restart each finetuning three times and report standard deviation of scores as error bars. We omit GLUE evaluation for PEP8 models because they are trained on code rather than natural language (used in GLUE tasks). See Appendix~\ref{appendix:glue} for details.
\paragraph{Results}
We present the results of zero-shot evaluation in Fig.~\ref{fig:pretrain_zero_shot}. Conditional training slightly exceeds MLE's performance in terms of accuracy on both tasks. Other PHF objectives suffer from decreased accuracy, especially for toxicity. Unlikelihood also matches MLE accuracy, but only for PII; it obtains very low accuracy on toxicity (recall that we found similar task-sensitivity in \S\ref{sec:pretraining/tradeoffs} and \S\ref{sec:pretraining/red_teaming}).
GLUE results paint a similar picture; conditional training most closely matches MLE scores. The second-best objective using feedback is Filtering (on toxicity) or unlikelihood (on PII). For results on individual GLUE tasks, see Appendix~\ref{appendix:glue}.
Finally, on HumanEval, the capabilities gap between MLE and PHF methods is wider. This gap is only closed -- in terms of pass@100 -- by filtering. Conditional training is no longer the best PHF method; it is outperformed or matched by filtering, AWR and RWR. Unlikelihood consistently obtains the lowest scores.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.01\textwidth}
\rotatebox[origin=r]{90}{\small{Task: toxicity}\hspace{28px}}
\end{subfigure}
\hspace{-0.005\textwidth}
\begin{subfigure}[t]{0.46\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_pretrain_diversity.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\vspace{-15px}
\begin{subfigure}[t]{0.01\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PII}\hspace{30px}}
\end{subfigure}
\hspace{-0.005\textwidth}
\begin{subfigure}[t]{0.46\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_pretrain_diversity.pdf}
\end{subfigure}
\hspace{-0.03\textwidth}
\vspace{-10px}
\caption{Difference in diversity (token entropy) and degeneration frequency (distinct tokens) compared to MLE (higher is better).}
\label{fig:diversity}
\vspace{-10px}
\end{figure}
\vspace{-5px}
\begin{figure*}[t]
\begin{center}
\small{%
\cblock{31.12156862745098}{119.46666666666667}{180.7058823529412} MLE\quad
\cblock{255}{160}{88}
Conditional\quad
\cblock{44.17254901960784}{160.62745098039215}{44.17254901960784} Filtering\quad
\cblock{192}{192}{192} Unlikelihood, RWR, AWR \quad \\
\vspace{5px}
\line{} Pretraining \quad \line{dashed} Finetuning from MLE for 1.6B tokens \quad \line{dotted} Finetuning from MLE for 330M tokens}
\end{center}
\begin{subfigure}[t]{0.33\textwidth}
\begin{center}
\hspace{30px}\small{Task: toxicity}
\end{center}
\end{subfigure}
\begin{subfigure}[t]{0.33\textwidth}
\begin{center}
\hspace{30px}\small{Task: PII}
\vspace{-40px}
\end{center}
\end{subfigure}
\begin{subfigure}[t]{0.33\textwidth}
\begin{center}
\hspace{30px}\small{Task: PEP8}
\end{center}
\end{subfigure}
\centering
\begin{subfigure}[t]{0.33\textwidth}
\vspace{-10px}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_finetune_vs_pretrain_score.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.33\textwidth}
\vspace{-10px}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_finetune_vs_pretrain_score.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.33\textwidth}
\vspace{-10px}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_finetune_vs_pretrain_score.pdf}
\end{subfigure}
\vspace{-5px}
\caption{Misalignment score over training time for finetuning with feedback. We report finetuning from a model trained on 1.6B tokens using MLE (dashed line) and finetuning from a model trained on 2.9B tokens using MLE (dotted line).
For comparison, we also plot MLE pretraining and conditional pretraining (solid lines). We grayed out finetuning runs with worse results for clarity. On all tasks, neither finetuning run matches conditional pretraining's scores.}
\label{fig:finetuning}
\vspace{-5px}
\end{figure*}
\subsection{Diversity}
\label{sec:pretraining/diversity}
\paragraph{Metrics}
Constraining an LM to be aligned with human preferences can result in decreased entropy or increased degeneration of LM samples \cite{korbak2022rlBayesian}, e.g. due to repeated tokens \citep{holtzman2019}. To control for this, we supplement our capabilities evaluation with an examination of the diversity and rate of degeneration of LM samples.
We measure diversity in terms of the entropy over unigrams expected in a set of $N = 2048$ LM samples and degeneration in terms of the ratio of distinct unigrams to all unigrams \emph{within} an average sample \cite{li2015diversity}. In Appendix~\ref{appendix:diversity} we also report Self-BLEU-5, a measure of text diversity \emph{across} samples \cite{zhu2018texygen}, bigram entropy and the fraction of distinct bigrams.
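A sketch of the two metrics, assuming each sample is given as a nonempty list of tokens:
\begin{verbatim}
from collections import Counter
import math

def unigram_entropy(samples):
    """Entropy of the unigram distribution pooled over all samples."""
    counts = Counter(tok for s in samples for tok in s)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def distinct_unigram_fraction(samples):
    """Average within-sample fraction of distinct unigrams; lower values
    indicate more repetition (degeneration)."""
    return sum(len(set(s)) / len(s) for s in samples) / len(samples)
\end{verbatim}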
\vspace{-5px}
\paragraph{Results}
The results for toxicity and PII, shown on Fig.~\ref{fig:diversity}, reveal two patterns of behavior. Unlikelihood, AWR and RWR tend to match MLE diversity but suffer from slightly increased degeneration. Conditional training and, to a degree, filtering, show the reverse trend; decreased diversity but more closely matching MLE's fraction of distinct unigrams. In absolute terms, however, none of the PHF objectives cause significant degeneration or entropy collapse.
\section{Finetuning with Human Feedback} \label{sec:finetuning}
\paragraph{Setup}
As discussed in \S\ref{sec:intro}, the standard approach to aligning LMs with human preferences involves pretraining an LM using MLE and finetuning it using an objective involving human feedback, e.g., RL with KL penalties \citep{ziegler2019fine,Ouyang2022} or supervised finetuning \citep{solaiman2021,chung2022_scaling_instruction}. In this section, we compare PHF to supervised finetuning with human feedback using PHF objectives, but only after MLE pretraining.\footnote{We also experimented with finetuning using RL with KL penalties, but decided to exclude these experiments because we did not obtain results competitive with supervised finetuning.}
We are also interested in understanding whether pretraining with MLE and then finetuning with feedback is better than using PHF from scratch. To address this question, we compare finetuning runs against PHF with conditional training, the PHF objective we identified as the best in \S\ref{sec:pretraining}.
To ensure comparability, we use checkpoints of MLE runs from \S\ref{sec:pretraining} trained on either 50\% of the training data (i.e. 1.66B tokens) or 90\% of the training data (i.e. 2.97B tokens). We then continue finetuning them for another 1.66B or 330M tokens, respectively, using each of the five PHF objectives.\footnote{It is worth noting that the fraction of the training budget we allocate to finetuning (50\% or 10\%) is already very high (e.g. compared to 1.6\%-0.2\% in \cite{chung2022_scaling_instruction} or 0.1\% in \cite{tay2022_transcending}). This experiment design allows us to interpolate between pretraining and finetuning.} We conduct separate hyperparameter sweeps over learning rate and batch size for each task and finetuning objective. Following standard practice for finetuning a pretrained model, we reset the learning rate schedule used during pretraining. Our setup is otherwise identical to that from \S\ref{sec:pretraining}, e.g., finetuning runs use the same order and batches of training data as pretraining runs from \S\ref{sec:pretraining}.
\begin{figure*}[t]
\centering
\begin{center}
\small{%
\quad \line{pretrain} Pretraining \quad \line{dashed,finetune} Finetuning from MLE for 1.6B tokens
\quad \line{dotted,finetune90} Finetuning from MLE for 330M tokens
}
\end{center}
\vspace{-10px}
\hspace{-0.05\textwidth}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: toxicity}\hspace{25px}}
\end{subfigure}
\hspace{-8px}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/tox_finetune_red_team.pdf}
\vspace{-10px}
\end{subfigure}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PII}\hspace{35px}}
\end{subfigure}
\hspace{-8px}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pii_finetune_red_team.pdf}
\vspace{-10px}
\end{subfigure}
\begin{subfigure}[t]{0.02\textwidth}
\rotatebox[origin=r]{90}{\small{Task: PEP8}\hspace{38px}}
\end{subfigure}
\hspace{-8px}
\begin{subfigure}[t]{0.32\textwidth}
\vskip 0pt
\centering
\includegraphics[width=\linewidth]{figures/pep8_finetune_red_team.pdf}
\vspace{-10px}
\end{subfigure}
\hspace{-0.05\textwidth}
\vspace{-20px}
\caption{Average misalignment score (lower is better) of LM responses to adversarial prompts in the pool found in the course of red-teaming, for models pretrained with conditional training (solid lines) and only finetuned with conditional training (dashed and dotted lines). Pretraining with feedback for the whole time is always better than only using feedback with the final 330M tokens, and tends to be better than using feedback only with the final 1.6B tokens.
}
\label{fig:finetune_red-team}
\vspace{-10px}
\end{figure*}
\paragraph{Results}
We present the comparison of PHF and finetuning with human feedback in Fig.~\ref{fig:finetuning}.
PHF achieves scores that are always better, typically dramatically better, than finetuning with feedback. On toxicity and PII there is a significant gap between pretraining using conditional training and the best finetuning objective.
For instance, in PII, aligning the LM during pretraining is two to three times more effective than finetuning on 330M tokens; conditional pretraining converges to misalignment score 0.0013 compared to 0.0018 (finetuning on 1.6B tokens) and 0.0023 (finetuning on 330M tokens).
The gap between PHF and finetuning with feedback only widens as fewer tokens are available for finetuning (dashed vs dotted line in Fig.~\ref{fig:finetuning}).
The size of this gap and its persistence across two tasks provides evidence that PHF is more effective than MLE pretraining followed by finetuning with feedback. We also present a head-to-head comparison of the pretraining and finetuning performance of each objective in Fig.~\ref{fig:pretrain_vs_finetune} in Appendix~\ref{appendix:finetuning}; we find that the improvement from PHF over only finetuning with feedback tends to increase with how effective the PHF objective is at reducing scores in general. Conditional training works well for both pretraining and finetuning (see Fig.~\ref{fig:finetune-main} for a direct comparison of the capabilities-alignment trade-offs of all objectives during finetuning for 1.6B tokens).
Finally, we repeated the red-teaming procedure from \S\ref{sec:pretraining/red_teaming} to compare the adversarial robustness of LMs pretrained with conditional training and LMs only finetuned with conditional training (Fig.~\ref{fig:finetune_red-team}). Once again, lower misalignment scores indicate increased robustness, and we found LMs pretrained with human feedback to be significantly more robust to red-teaming (on toxicity and PII). For instance, on PII, ten rounds of red-teaming of PHF-trained LMs are required to reach the misalignment score that a finetuned LM has after just one iteration. Overall, our findings demonstrate that the alignment of an LM is closely tied to the quantity of human feedback it receives during training. Involving human feedback throughout the entire pretraining process (as in PHF) results in substantially better alignment than the standard practice of incorporating feedback for only a small portion of the training budget.
\section{Related Work}
\paragraph{Offline RL}
In this paper, we tackled the problem of training an LM on (potentially undesirable) content annotated with feedback while constraining the LM not to imitate undesirable content at inference time. This setting is closely related to offline RL which addresses training an optimal policy on (possibly suboptimal) demonstrations annotated with rewards \cite{levine_offline_rl}. Most work in offline RL has focused on pretraining policies for robotic control environments \cite{nair2020,kumar2020}. However, offline RL techniques were recently used for finetuning pretrained LMs to be aligned with human preferences in dialog tasks \cite{jaques-etal-2020-human,jang2022gptcritic,snell_ilql}. Conditional training has recently emerged as an effective approach to offline RL~\cite{Schmidhuber2019,kumar2019_rcp} and demonstrated strong results %
when paired with transformers \cite{chen2021decisiontransformer,janner2021sequence}.
For instance, the decision transformer \cite{chen2021decisiontransformer} consists of training a sequence model on (reward, state, action) tuples and, at inference time, sampling an action conditioned on high reward. This approach mirrors our conditional training approach: training an LM on (control token, sentence) pairs and, at inference time, sampling tokens conditioned on a \texttt{<|good|>} control token.
\paragraph{LM alignment during finetuning}
While we focus on pretraining, aligning LMs is frequently approached through finetuning an MLE-pretrained LM. In addition to RLHF \citep{ziegler2019fine}, alternative finetuning objectives included divergence from a target distribution \cite{khalifa2021distributional,korbak2022reinforcement} or supervised finetuning on data generated by other LMs \cite{scheurer2022} or highly curated collections of tasks phrased as instructions \cite{sanh_t0,chung2022_scaling_instruction}.
\section{Conclusion}
In this paper, we challenged the practice of aligning LMs during finetuning and advocated for utilizing human feedback during pretraining itself. Out of the five PHF objectives we evaluated, conditional training consistently outperforms the alternatives in terms of both capabilities and alignment (with two notable exceptions: unlikelihood is more robust to red-teaming on toxicity, and filtering achieves better HumanEval results). The fact that conditional training tends to match MLE's capabilities while enjoying much better alignment corroborates previous findings \cite{bai2022training} that alignment and capabilities might not be at odds with each other on many tasks of practical importance. On the other hand, PHF requires the additional overhead of annotating the training data with a reward function. Note, however, that assigning rewards is a constant cost that can be amortized over many training runs. PHF shares an important advantage of offline RL over online RL approaches (such as RLHF): being able to reuse feedback annotations across experiments \cite{levine_offline_rl}. Overall, incorporating human preferences in pretraining leads to capable models that generate text more aligned with human preferences, even under adversarial attacks.
\section*{Acknowledgments}
We are grateful to
Adam Gleave,
Ajeya Cotra,
Alex Havrilla,
Andy Jones,
Asa Cooper Stickland,
Beth Barnes,
Charlie Snell,
Claudia Shi,
Daniel Ziegler,
David Dohan,
David Krueger,
David Linder,
Euan McLean,
Evan Hubinger,
Ian McKenzie,
Jérémy Scheurer,
Kath Lupante,
Kyle McDonell,
Laria Reynolds,
Leo Gao,
Łukasz Kuciński,
Michael Janner,
Piotr Miłoś,
Sean Welleck,
Scott Emmons, and
Xiang Pan
for helpful conversations and feedback.
Tomasz Korbak was supported by the Leverhulme Doctoral Scholarship and Open Philanthropy.
Angelica Chen was supported by the National Science Foundation Award no. 1922658.
Sam Bowman was supported by Eric and Wendy Schmidt (by recommendation of the Schmidt Futures program), Open Philanthropy, Apple, and the National Science Foundation under Grant Nos. 1922658 and 2046556.
Ethan Perez was supported by the National Science Foundation and Open Philanthropy. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
We also thank NYU High-Performance Computing Center for providing access to computational resources and OpenAI for providing access and credits to their models via the API Academic Access Program.
|
{
"arxiv_id": "2302.08606",
"language": "en",
"timestamp": "2023-02-20T02:03:02",
"url": "https://arxiv.org/abs/2302.08606",
"yymm": "2302"
} | \section{Introduction}
The last two decades have witnessed an explosive development in deep learning approaches. These approaches have achieved breakthrough performance in a broad range of learning problems from a variety of application fields such as image recognition \citep{krizhevsky2012imagenet}, speech recognition \citep{speechhinton}, natural language processing \citep{Bahdanau2015NeuralMT} and other areas of computer vision \citep{Voulodimos2018DeepLF}. Deep learning has also served as the main impetus for the advancement of recent artificial intelligence (AI) technologies. This unprecedented success has been made possible due to the increasing computational prowess, availability of large data sets, and the development of efficient computational algorithms for training deep neural networks. There have been increasing efforts to understand the theoretical foundations of deep neural networks, including in the statistics community \citep{schmidt-hieber2020, ohn2019smooth, kim2018fast, bauer2019deep, suzuki2018adaptivity, Kohler2019OnTR, Fan2019ASO}.
Most of these efforts from model and algorithmic development to theoretical understanding, however, have been largely focused on the Euclidean domains. In a wide range of problems arising in computer and machine vision, medical imaging, network science, recommender systems, computer graphics, and so on, one often encounters learning problems concerned with non-Euclidean data, particularly manifold-valued data. For example, in neuroscience, data collected in diffusion tensor imaging (DTI), now a powerful tool in neuroimaging for clinical trials, are represented by the diffusion matrices, which are $3\times3$ \emph{positive definite matrices} \citep{dti-ref}. In engineering and machine learning, pictures or images are often preprocessed or reduced to a collection of \emph{subspaces} with each data point (an image) in the sample data represented by a subspace \citep{subspacepaper, facialsubpace}. In machine vision, a digital image can also be represented by a set of $k$-landmarks, the collection of which form \emph{landmark-based shape spaces} \citep{kendall84}. One may also encounter data that are stored as \emph{orthonormal frames} \citep{vecdata}, \emph{surfaces}, \emph{curves}, and \emph{networks} \citep{kolaczyk2020}. The underlying space where these general objects belong falls in the general category of \emph{manifolds} whose geometry is generally well-characterized, which should be utilized and incorporated for learning and inference. Thus, there is a natural need and motivation for developing deep neural network models over manifolds.
This work aims to develop general deep neural network architectures on manifolds and take some steps toward understanding their theoretical foundations. The key challenge lies in incorporating the underlying geometry and structure of manifolds in designing deep neural networks.
Although some recent works propose deep neural networks for specific manifolds \citep{Zhang2018GrassmannianLE, matrixback-prop, deep-LG, deepgra}, there is a lack of general frameworks or paradigms that work for arbitrary manifolds. In addition, the theoretical understanding of deep neural networks on manifolds remains largely unexplored. To fill in these gaps, in this work, we make the following contributions: (1) we develop \emph{extrinsic deep neural networks (eDNNs)} on manifolds to generalize the popular feedforward networks in the Euclidean space to manifolds via equivariant embeddings. The extrinsic framework is conceptually simple and computationally easy and works for general manifolds where nice embeddings such as \emph{equivariant embeddings} are available;
(2) we develop \emph{intrinsic deep neural networks (iDNNs)} on manifolds that employ a Riemannian structure of the manifold; (3) we study theoretical properties such as approximation ability and estimation error of both eDNNs and iDNNs; and (4) we implement various DNNs over a large class of manifolds in simulations and on real datasets, including eDNNs, iDNNs and \emph{tangential deep neural networks (tDNNs)}, a special case of iDNNs with only one tangent space.
The rest of the paper is organized as follows. In Section 2, we introduce the eDNNs on manifolds and study their theoretical properties. In Section 3, we propose the iDNNs on manifolds that take into account the intrinsic geometry of the manifold. The simulation study and the real data analysis are carried out in Section 4. Our work ends with a discussion.
\section{Extrinsic deep neural networks (eDNNs) on manifolds}
\label{sec-manifold}
\subsection{eDNNs and equivariant embeddings}
Let $M$ be a $d$-dimensional manifold. Let $(x_i, y_i)$, $i=1,\ldots, n$ be a sample of data from some regression model with input $x_i\in \mathcal X= M$ and output $y_i\in \mathcal Y={\mathbb R}$, and we propose deep neural networks for learning the underlying function $f: M\rightarrow \mathbb{R}$. The output space can be $\mathcal Y=\{1,\ldots, k \}$ for a classification problem. In this work, we propose to develop two general deep neural network architectures on manifolds based on an extrinsic and an intrinsic framework, respectively. The first framework employs an equivariant embedding of a manifold into the Euclidean space and builds a deep neural network on its image after embedding, which is the focus of this section, while the intrinsic framework utilizes Riemannian or intrinsic geometry of the manifold for designing the deep neural networks (Section \ref{sec-idnn}). Our initial focus will be on proposing appropriate analogs of feed-forward neural networks on manifolds which are popular DNNs in the Euclidean space and suitable objects for theoretical analysis. The theoretical properties of the proposed geometric DNNs will be studied.
Before describing our proposed frameworks, we introduce our mathematical definition of DNNs and related classes. A DNN $\tilde f$ with depth $L$ and a width vector $\mathbf{p} = (p_0 ,\cdots, p_{L+1} ) \in \mathbb{N}^{L+2}$ is a function of the form
\begin{align}
\label{eq-ednnn}
\tilde f(\tilde x):=A_{L+1}\circ \sigma_L\circ A_L\circ\dots\circ\sigma_1\circ A_1(\tilde x),
\end{align}
where $A_l: {\mathbb R}^{p_{l-1}}\rightarrow {\mathbb R}^{p_l}$ is an affine linear map defined by
$A_l(\tilde x)=\boldsymbol{W}_l\tilde x+\boldsymbol{b}_l$ for a $p_l\times p_{l-1}$ weight matrix $\boldsymbol W_l$ and a $p_l$-dimensional bias vector $\boldsymbol b_l$, and $\sigma_l: {\mathbb R}^{p_{l}}\rightarrow {\mathbb R}^{p_l}$ is an element-wise nonlinear activation map with the ReLU activation function $\sigma(z)=\max \{0, z\}$ as a popular choice. We refer to the maximum value $\max_{j=1,\dots, L }p_j$ of the width vector as the width of the DNN. We denote $\boldsymbol{\theta}$ as the collection of all weight matrices and bias vectors:
$\boldsymbol \theta:=\left ((\boldsymbol{W}_1, \boldsymbol{b}_1),\ldots, (\boldsymbol{W}_{L+1}, \boldsymbol{b}_{L+1}) \right),$ the parameters of the DNN. Moreover, we denote by $\|\boldsymbol{\theta}\|_0$ the number of non-zero parameter values (i.e., the sparsity) and by $\|\boldsymbol{\theta}\|_\infty$ the maximum of parameters. We denote by $\mathcal{F}(L, (p_0\sim P\sim p_{L+1}), S, B)$ the class of DNNs with depth $L$, input dimension $p_0$, width $P$, output dimension $p_{L+1}$, sparsity $S$ and the maximum of parameters $B$. For simplicity, if the input and output dimensions are clear in the context, we write $\mathcal{F}(L, P, S, B)=\mathcal{F}(L, (p_0\sim P\sim p_{L+1}), S, B)$.
Let $J: M\rightarrow{\mathbb R}^D$ be an embedding of $M$ into some higher dimensional Euclidean space $\mathbb{R}^D$ ($D\geq d$) and denote the image of the embedding as $\tilde{M} = J(M)$. By definition of an embedding, $J$ is a smooth map such that its differential $dJ: T_xM\rightarrow T_{J(x)}{\mathbb R}^D$ at each point $x\in M$ is an injective map from its tangent space $T_xM$ to $T_{J(x)}{\mathbb R}^D$, and $J$ is a homeomorphism between $M$ and its image $\tilde{M}$. Our idea of building \emph{an extrinsic DNN} on manifold relies on building a DNN on the image of the manifold after the embedding. The geometry of the manifold of $M$ can be well-preserved with a good choice of embedding, such as an equivariant embedding which will be defined rigorously in Remark \ref{remark:embedding} below. The extrinsic framework has been adopted for the estimation of Fr\'echet means \citep{ linclt}, regression on manifolds \citep{linjasa}, and construction of Gaussian processes on manifolds \citep{lin2019}, which have enjoyed some notable features such as ease of computations and accurate estimations.
The key idea of an extrinsic feedforward neural network on a manifold $M$ is to build a DNN on the image of the manifold after the embedding. More specifically, we say that $f$ is an \emph{extrinsic deep neural network (eDNN)} if $f$ is of the form
\begin{align}
\label{embedF}
f(x) = \tilde f(J(x)),
\end{align}
with a DNN $\tilde{f}$. We denote the eDNN class induced by $\mathcal{F}(L, P, S, B)$ as
$$\mathcal F_{eDNN}(L,P,S,B):=\{f=\tilde{f}\circ J:\tilde{f}\in\mathcal{F}(L, P, S, B)\}.$$
The extrinsic framework is very general and works for any manifold where a good embedding, such as an equivariant embedding, is available. Under this framework, training algorithms in the Euclidean space, such as stochastic gradient descent (SGD) with backpropagation, can be used directly on the data $(J(x_i), y_i)$, $i=1,\ldots ,n$, with the only additional computational burden potentially arising from working in the higher-dimensional ambient space. In our simulation Section \ref{sec-sim}, the extrinsic DNN yields better accuracy than the Naive Bayes classifier, kernel SVM, logistic regression classifier, and the random forest classifier on the planar shape datasets. Due to its simplicity and generality, there is potential for applying eDNNs in medical imaging and machine vision for broader scientific impact.
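To make the construction concrete, the following sketch (in Python/PyTorch; the width, depth and embedding choice are illustrative assumptions, not the settings used in our experiments) implements an eDNN as a feedforward ReLU network composed with an embedding $J$:
\begin{verbatim}
import torch
import torch.nn as nn

class ExtrinsicDNN(nn.Module):
    """f(x) = f_tilde(J(x)): a ReLU feedforward net on the embedded image."""
    def __init__(self, embed, ambient_dim, width=64, depth=3):
        super().__init__()
        self.embed = embed   # J: manifold point -> R^D coordinates
        layers, in_dim = [], ambient_dim
        for _ in range(depth):
            layers += [nn.Linear(in_dim, width), nn.ReLU()]
            in_dim = width
        layers.append(nn.Linear(in_dim, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(self.embed(x))

# Example: for the sphere viewed as a subset of R^D, J can be the
# inclusion map, i.e. embed = lambda x: x.
\end{verbatim}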
\begin{remark}
In \cite{SchmidtHieber2019DeepRN} and \cite{2019chen}, a feedforward neural network was used for nonparametric regression on a lower-dimensional submanifold embedded in some higher-dimensional ambient space. They showed that, with appropriate conditions on the neural network structure, the convergence rates of the ERM depend on the dimension of the submanifold $d$ instead of the dimension of the ambient space $D$. In their framework, they assume the geometry of the submanifold is unknown. From a conceptual point of view, our extrinsic framework can be viewed as a special case of theirs by ignoring the underlying geometry. In this case, the image of the manifold $\tilde M=J(M)$ can be viewed as a submanifold in ${\mathbb R}^D$, so their results follow. On the other hand, our embedding framework allows us to work with very complicated manifolds, such as quotient manifolds for which no natural ambient coordinates are available. An example is the planar shape space, which is the quotient of a typically high-dimensional sphere consisting of orbits of equivalence classes, with the submanifold structure only arising after the embedding; moreover, such an embedding is typically not isometric.
In \cite{2019chen}, the charts were constructed by intersecting small balls in ${\mathbb R}^D$ with the submanifold $M$. In our case, we provide explicit charts of the submanifold based on the knowledge of the geometry of the original manifold $M$ and the embedding map $J$ that works with the ambient coordinates in ${\mathbb R}^D$.
\end{remark}
\begin{remark}
\label{remark:embedding}
One of the essential steps in employing an eDNN is the choice of the embedding $J$, which is generally not unique. It is desirable to have an embedding that preserves as much geometry as possible. An equivariant embedding is one type of embedding that preserves a substantial amount of geometry. Figure \ref{fig-embed} provides a visual illustration of equivariant embedding. Suppose $M$ admits an action of a (usually `large') Lie group $H$. Then we say that $J$ is an equivariant embedding if we can find a Lie group homomorphism $\phi: H\rightarrow GL(D, \mathbb R)$ from $H$ to the general linear group $GL(D, \mathbb R)$ of degree $D$ acting on $\tilde M$ such that
\begin{align*}
J(hp)=\phi(h)J(p)
\end{align*}
for any $h\in H$ and $p\in M$. The definition seems technical at first sight, but the intuition is clear: if a large group $H$ acts on the manifold before embedding, for example by rotation, such an action is preserved via $\phi$ on the image $\tilde M$, thus potentially preserving many of the geometric features of $M$, such as its symmetries. The embedding is geometry-preserving in this sense. For example, the planar shape space, the collection of shapes consisting of $k$ landmarks modulo Euclidean motions such as rotation, scaling, and translation, is a quotient manifold of the sphere $S^{2k-3}$, and the embedding can be given by the Veronese--Whitney embedding, which is equivariant under the special unitary group. A less abstract example is the manifold of symmetric positive definite matrices, whose embedding can be given by the matrix $\log$ map into the space of symmetric matrices; this embedding is equivariant with respect to the action of the general linear group by conjugation.
See Section \ref{sec-sim} for some concrete examples of equivariant embeddings for well-known manifolds, such as the space of the sphere, symmetric positive definite matrices, and planar shapes.
\end{remark}
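As a small numerical illustration of the symmetric-positive-definite example above, the identity $\log(hph^{\top}) = h \log(p) h^{\top}$ can be checked directly for orthogonal $h$ (a subgroup of $GL(3,\mathbb R)$), since conjugation by an orthogonal matrix is a similarity transform; the snippet and its random test matrices are purely illustrative:
\begin{verbatim}
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
p = A @ A.T + 3 * np.eye(3)                   # a random SPD matrix
h, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # an orthogonal h

lhs = logm(h @ p @ h.T)        # J(h . p)
rhs = h @ logm(p) @ h.T        # phi(h) applied to J(p)
print(np.allclose(lhs, rhs, atol=1e-8))       # True up to numerical error
\end{verbatim}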
\begin{figure}
\includegraphics[width=1.2\linewidth, angle=0]{embedding.pdf}
\caption{\footnotesize A simple illustration of equivariant embeddings}
\label{fig-embed}
\end{figure}
\subsection{Approximation analysis for eDNNs}
In this section, we study the ability of the eDNN class to approximate an appropriate smooth class of functions on manifolds. First, we define the ball of $\beta$-H\"older functions on a set $U\subset\mathbbm R^D$ with radius $K$ as
\begin{align*}
\mathcal{C}_D^\beta(U,K) = \{f: \| f\|_{\mathcal{C}^\beta_D(U)}\le K\},
\end{align*}
where $\|\cdot\|_{\mathcal{C}^\beta_D(U)}$ denotes the $\beta$-H\"older norm defined as
\begin{align*}
\|f\|_{\mathcal{C}^{\beta}_D(U)}
=\sum_{m\in\mathbbm N_0^D:\|m\|_1\le \lfloor\beta\rfloor}\|\partial^{m}f\|_{\infty}
+\sum_{m\in\mathbbm N_0^D:\|m\|_1= \lfloor\beta\rfloor}\sup_{x_1,x_2\in U, x_1\neq x_2 }\frac{|\partial^{m}f(x_1)-\partial^{m}f(x_2)|}{\|x_1-x_2\|_\infty^{\beta- \lfloor\beta\rfloor}}.
\end{align*}
Here, $\partial^{m}f$ denotes the partial derivative of $f$ of order $m$ and $\mathbbm N_0:=\mathbbm N\cup\{0\}$. To facilitate smooth function approximation on manifolds, following \cite{SchmidtHieber2019DeepRN}, we impose an additional smoothness assumption on the local coordinates, which project inputs in an ambient space to a lower-dimensional space.
\begin{definition}
We say that a compact $d$-dimensional manifold $M\subset \mathbbm R^D$ has smooth local coordinates if there exist charts $(V_1, \psi_1),\dots,(V_r, \psi_r)$, such that for any $\gamma>0$, $\psi_j \in \mathcal{C}^\gamma_D(\psi_j(V_j))$.
\end{definition}
The next theorem reveals the approximation ability of the eDNN class. For a measure of approximation, we consider the sup norm defined as $ \| f_1-f_2\|_{L^\infty(M)}:=\sup_{x\in M}|f_1(x)-f_2(x)|$ for two functions $f_1,f_2:M\to \mathbbm R$.
\begin{theorem}
\label{thm:ednn:approx}
Let $M\subset \mathbb{R}^D$ be a $d$-dimensional compact manifold and $J:M\to \mathbbm R^D$ be an embedding map. Assume that $J(M)$ has smooth local coordinates.
Then there exist positive constants $c_1,c_2$ and $c_3$ depending only on $D,d,\beta,K$ and the surface area of $M$ such that for any $\eta \in (0, 1)$,
\begin{align*}
\sup_{f_0:M\to [-1,1] \textup{ s.t } f_0\circ J^{-1}\in C^\beta_D(J(M), K)} \inf_{f \in \mathcal{F}_{eDNN}(L, P, S, B=1)}\|f - f_0\|_{L^{\infty}(M)} \leq \eta
\end{align*}
with $L \leq c_1\log\frac{1}{ \eta}$, $P \leq c_2 \eta^{-\frac{d}{\beta}}$ and $S\le c_3 \eta^{-\frac{d}{\beta}}\log\frac{1}{ \eta}$.
\end{theorem}
\begin{proof}
Let $\tilde{f}_0 = f_0\circ J^{-1}$; then $\tilde{f}_0$ is a function on the $d$-dimensional manifold $\tilde{M}=J(M) \subset\mathbbm R^D$. Since $\tilde{M}$ has smooth local coordinates, we can apply Theorem 2 in \cite{SchmidtHieber2019DeepRN}: there exists a network $\tilde{f}\in\mathcal{F}(L, (D\sim P \sim 1), S,1)$ such that $\|\tilde{f}-\tilde{f}_0\|_{L^{\infty}(\tilde{M})}<\eta$ with $L \leq c_1\log\frac{1}{ \eta}$, $P \leq c_2 \eta^{-\frac{d}{\beta}}$ and $S\le c_3 \eta^{-\frac{d}{\beta}}\log\frac{1}{ \eta}$ for some $c_1>0,c_2>0$ and $c_3>0$. Now, let $f = \tilde{f}\circ J\in \mathcal{F}_{eDNN}(L, (D\sim P \sim 1), S,1)$. Then
\begin{equation*}
\|f-f_0\|_{L^{\infty}(M)} = \|\tilde{f}\circ J-\tilde{f}_0\circ J \|_{L^{\infty}(M)} = \|\tilde{f}-\tilde{f}_0\|_{L^{\infty}(\tilde{M})}.
\end{equation*}
Therefore, we get the desired result.
\end{proof}
\subsection{Statistical risk analysis for eDNNs}
In this section, we study the statistical risk of the empirical risk minimizer (ERM) based on the eDNN class. We assume the following regression model
\begin{align}
\label{eq:regmodel}
y_i = f_0(x_i) + \epsilon_i
\end{align}
for $i=1,\dots, n$, where $x_1,\dots, x_n\in M$ are i.i.d inputs following a distribution $P_x$ on the manifold and $\epsilon_1,\dots, \epsilon_n$ are i.i.d. sub-Gaussian errors. We consider the ERM over the eDNN class such that
\begin{equation}
\label{eq-erm}
\hat{f}_{eDNN}=\underset{f\in \mathcal F_{eDNN} (L, P, S, B) }{\argmin}\frac{1}{n}\sum_{i=1}^n(y_i-f(x_i))^2.
\end{equation}
A natural question to ask is whether ERM-type estimators such as $\hat{f}_{eDNN}$ defined above achieve minimax optimal estimation of $\beta$-H\"older smooth functions on manifolds, in terms of the excess risk
\begin{align*}
R(\hat{f}_{eDNN}, f_0)
:=E(\hat{f}_{eDNN}(x)-f_0(x))^2
\end{align*}
where the expectation is taken over the random variable $x\sim P_x$.
\begin{theorem}
\label{thm:ednn:risk}
Assume the model (\ref{eq:regmodel}) with a $d$-dimensional compact manifold $M\subset \mathbb{R}^D$ and an embedding map $J:M\to\mathbbm R^D$. Moreover, assume that $J(M)$ has smooth local coordinates. Then the ERM estimator $\hat{f}_{eDNN}$ over the eDNN class $\mathcal{F}_{eDNN}(L, P, S, B=1)$ in (\ref{eq-erm})
with $L \asymp \log(n)$, $n\gtrsim P\gtrsim n^{d/(2\beta+d)}$ and $S\asymp n^{d/(2\beta+d)}\log n$ satisfies
\begin{align*}
\sup_{f_0:M\to [-1,1] \textup{ s.t } f_0\circ J^{-1}\in C^\beta_D(\mathbbm R^D, K)} R(\hat{f}_{eDNN},f_0)\lesssim n^{-\frac{2\beta}{2\beta+d}}\log^3n.
\end{align*}
\end{theorem}
\begin{proof}
For any $\tilde{f_1},\tilde{f_2}\in\mathcal{F}(L, P, S, B=1)$, we have $\|\tilde{f_1}\circ J-\tilde{f_2}\circ J\|_{L^\infty(M)}= \|\tilde{f_1}-\tilde{f_2}\|_{L^\infty(\tilde{M})}\le \|\tilde{f_1}-\tilde{f_2}\|_{L^\infty(\mathbbm R^D)}$. Hence the entropy of the eDNN class $\mathcal{F}_{eDNN}(L, P, S, B=1)$ is bounded by that of $\mathcal{F}(L, P, S, B=1)$. Thus, by Lemmas 4 and 5 of \cite{schmidt-hieber2020}, we have
\begin{equation*}
R\left(\hat{f}_{eDNN}, f_{0}\right) \lesssim \inf _{f \in \mathcal{F}_{eDNN}(L, P, S, B=1)} \|f-f_0\|^2_{L^{\infty}(M)}+\frac{(S+1) \log \left(2 n(L+1)P^{2L}(D+1)^2\right)+1}{n}.
\end{equation*}
Therefore, by Theorem \ref{thm:ednn:approx}, if we take $L, P$ and $S$ as in the theorem, we get the desired result.
\end{proof}
\section{Intrinsic deep neural networks (iDNNs) on manifolds}
\label{sec-idnn}
\subsection{The iDNN architectures on a Riemannian manifold}
Despite the generality and computational advantage enjoyed by the eDNNs on manifolds proposed in the previous section, one potential drawback is that an explicit embedding is not always available for complex manifolds, such as spatial domains with intrinsic structure. In this section, we propose a class of intrinsic DNNs on manifolds (iDNNs) that employ the intrinsic geometry of a manifold through its exponential and log maps with respect to a Riemannian structure. Some works construct a DNN on the manifold by mapping the points on the manifold to a \emph{single tangent space} (e.g., with respect to some central point of the data) or propose DNNs on specific manifolds, in particular matrix manifolds \cite{deepspd, matrixback-prop}. A DNN built on a single tangent-space approximation cannot approximate a function well over the whole manifold. Below we provide a rigorous framework for local approximation of a function on a Riemannian manifold via the Riemannian exponential and logarithm maps and thoroughly investigate its theoretical properties.
The key ideas here are to first cover the manifold with images, under the exponential map, of subsets $U_1,\ldots, U_K$ of tangent spaces, approximate the function locally over each tangent space using DNNs, and then patch these local approximations together via the transition maps and a partition of unity on the Riemannian manifold. Specifically, let $\{x_1, \dots, x_K\in M\}$ be a finite set of points such that, for open subsets $U_k\subset T_{x_k}M$ with $k=1, \dots, K,$ one has $\bigcup_{k=1}^K\exp_{x_k}(U_k)=M.$ Namely, one has $\big\{\big(\exp_{x_k}(U_k), \ \exp_{x_k}\big), \ \ k=1 \dots, K\big\}$ as the charts of the manifold $M$.
For each $k=1, \dots, K$ one has an orthonormal basis $v_{k1}, \dots, v_{kd}\in T_{x_k}M$ and, respectively, the normal coordinates of $x\in\exp_{x_k}(U_k)$
\begin{align*}
v_k^j(x)=\big\langle\log_{x_k}x, v_{kj}\big\rangle \quad \textrm{for} \quad j=1, \dots, d.
\end{align*}
Thus
\begin{align*}
v_k(x)=\big(v_k^1(x), \dots, v_k^d(x)\big)=\sum_{j=1}^dv_k^j(x)v_{kj}\in T_{x_k}M.
\end{align*}
The normal coordinate allows one to perform elementwise non-linear activation to tangent vectors easily.
For example, for any $1\leq k<l \leq K$ one has the transition map on $\exp_{x_l}(U_l)\cap\exp_{x_k}(U_k)$
\begin{align*}
v_k^j(x)=\big\langle\log_{x_k}x, v_{kj}\big\rangle = \big\langle\log_{x_k}\exp_{x_l}v_l(x), v_{kj}\big\rangle \quad \textrm{for} \quad j=1, \dots, d.
\end{align*}
A compact manifold $M$ always admits a \emph{finite partition of unity} $\big\{\tau_k, \ k=1, \dots, K\big\}$, $\tau_k(\cdot): M\rightarrow \mathbb R_+$ such that $\sum _{k=1}^K\tau_k (x)=1,$ and for every $x\in M$ there is a neighbourhood of $x$ where all but a finite number of functions are $0$ (e.g., Proposition 13.9 of \cite{loring2011introduction}). Therefore, for each function $f:M\rightarrow\mathbb{R}$, we can write
\begin{align}
\label{eq-idnnarchitecture}
f(x)=\sum_{k=1}^K\tau_k(x)f\Big(\exp_{x_k}\big(\log_{x_k}x\big)\Big)\doteq\sum_{k=1}^K\tau_k(x) f_k(\log _{x_k}(x)).
\end{align}
As a result, one can model the compositions $f_k=f\circ\exp_{x_k}:U_k\rightarrow\mathbb{R}$ instead of $f$, for which we propose to use DNNs. This idea gives rise to our iDNN architecture $f(x)=\sum_{k=1}^K\tau_k(x)f_k\left(\log _{x_k}(x)\right)$. Figure \ref{fig:idnn} illustrates the core ideas of the iDNN architecture. Given a set of points $\{x_1,\dots,x_K\}\subset M$, we define the iDNN class with depth $L$, width $P$, sparsity $S$ and maximum parameter magnitude $B$ as
\begin{align}
\mathcal{F}_{iDNN}(L,P, S, B)=\left\{\sum_{k=1}^K\tau_k(x)f_k\left(\log _{x_k}(x)\right):f_k\in \mathcal{F}(L,(d\sim P\sim 1), S, B)\right\}.
\end{align}
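For illustration, the following is a minimal PyTorch-style sketch of an iDNN of this form, assuming the log maps $\log_{x_k}$ and the partition-of-unity weights $\tau_k$ are supplied as callables; the class and function names are illustrative rather than part of our implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class iDNN(nn.Module):
    """Sketch of f(x) = sum_k tau_k(x) f_k(log_{x_k}(x)).

    log_maps[k] maps ambient inputs (n, D) to normal coordinates (n, d) on
    the k-th tangent space; partition(x) returns the (n, K) nonnegative
    weights tau_1(x), ..., tau_K(x) with rows summing to one.
    """
    def __init__(self, log_maps, partition, d, width=100, depth=5):
        super().__init__()
        self.log_maps = log_maps      # list of K callables (assumed given)
        self.partition = partition    # callable (assumed given)

        def mlp():
            layers, in_dim = [], d
            for _ in range(depth):
                layers += [nn.Linear(in_dim, width), nn.ReLU()]
                in_dim = width
            return nn.Sequential(*layers, nn.Linear(in_dim, 1))

        self.charts = nn.ModuleList([mlp() for _ in range(len(log_maps))])

    def forward(self, x):             # x: (n, D) points on the manifold
        tau = self.partition(x)       # (n, K)
        outs = [f_k(log_k(x)) for f_k, log_k
                in zip(self.charts, self.log_maps)]   # K tensors of shape (n, 1)
        return (tau * torch.cat(outs, dim=1)).sum(dim=1, keepdim=True)
\end{verbatim}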
\begin{figure}[ht]
\centering
\includegraphics[width=15cm]{idnn1.pdf}
\caption{\footnotesize The iDNN architecture on a Riemannian manifold $M$. Given the base points $\{x_1, \dots, x_K\in M\}$ and the charts $\big\{U_k\subset T_{x_k}M, k=1 \dots, K\big\}$ on the manifold $M$, the input data $X$ is mapped to the $k$-th chart $U_k$ by the log map $\log_{x_k}(\cdot)$. Afterward, the transformed data is fed into the DNN $f_k$ on each chart $k$. The final prediction $Y$ is given through the partition of unity $\tau(\cdot)$ as $Y = \sum_{k=1}^K\tau_k(x)f_k\left(\log _{x_k}(x)\right)$. }
\label{fig:idnn}
\end{figure}
\subsection{Approximation analysis for iDNNs}
In this section, we investigate the approximation theory for the iDNN for smooth functions on manifolds.
\begin{theorem}
\label{thm:idnn:approx}
Let $M\subset \mathbb{R}^D$ be a $d$-dimensional compact manifold. Assume that $\exp_{x_k}\in \mathcal{C}_D^{\gamma}(U_k)$ for some $\gamma>\beta$ for every $k=1,\dots, K$. Then there exist positive constants $c_1,c_2$ and $c_3$ depending only on $D,d,\beta,K$ and the surface area of $M$ such that for any $\eta \in (0, 1)$,
\begin{align*}
\sup_{f_0:M\mapsto[-1, 1] \text{ s.t. } f_0\in\mathcal{C}_D^{\beta}(M, K)}\inf_{f\in \mathcal{F}_{iDNN}(L,P, S, B=1)}\|f-f_0\|_{L^{\infty}(M)}\leq \eta
\end{align*}
with $L \leq c_1\log\frac{1}{ \eta}$, $P \leq c_2 \eta^{-\frac{d}{\beta}}$ and $S\le c_3 \eta^{-\frac{d}{\beta}}\log\frac{1}{ \eta}$.
\end{theorem}
\begin{proof}
We construct a DNN approximating $f_{0k}=f_0\circ \exp_{x_k}$ for each $k=1,\dots, K$. Note that $f_{0k}$ is $\beta$-H\"older smooth by assumption. Therefore, by Theorem 1 in \cite{SchmidtHieber2019DeepRN}, there exist DNNs $f_1,\dots, f_K\in\mathcal{F}(L, (d\sim P \sim 1), S,1)$ such that $\|f_k-f_{0k}\|_{L^{\infty}(U_k)}<\eta$ with $L \leq c_1\log\frac{1}{ \eta}$, $P \leq c_2 \eta^{-\frac{d}{\beta}}$ and $S\le c_3 \eta^{-\frac{d}{\beta}}\log\frac{1}{ \eta}$ for some $c_1>0,c_2>0$ and $c_3>0$. Now, let $f = \sum_{k=1}^K\tau_k(x)f_k(\log _{x_k}(x))\in \mathcal{F}_{iDNN}(L,P, S,1)$. Then
\begin{align*}
\|f-f_0\|_{L^{\infty}(M)}
&=\sup_{x\in M }\left|\sum_{k=1}^K\tau_k(x)f_k(\log _{x_k}(x))-\sum_{k=1}^K\tau_k(x)f_{0k}(\log _{x_k}(x))\right|\\
&\le \sup_{x\in M }\sum_{k=1}^K\tau_k(x)\left|f_k(\log _{x_k}(x))-f_{0k}(\log _{x_k}(x))\right|\\
&\le \max_{1\le k\le K}\left\|f_k-f_{0k}\right\|_{L^{\infty}(U_k)}<\eta
\end{align*}
which completes the proof.
\end{proof}
\begin{remark}
\cite{SchmidtHieber2019DeepRN} and \cite{2019chen} propose feedforward neural networks on a manifold that is embedded in a higher-dimensional Euclidean space. In the approximation theory of \cite{SchmidtHieber2019DeepRN} and \cite{2019chen}, local charts and partitions of unity are used, but due to the unknown geometry of the manifold, DNNs are needed to approximate the local charts $\psi_j$, the partition of unity functions, as well as the mappings $f\circ \psi_j^{-1}$. Under our iDNN framework, we utilize the Riemannian geometry of the manifold and the $\log$ map. Further, the partition of unity functions can be constructed explicitly, so there is no need to approximate them with DNNs.
\end{remark}
\subsection{Statistical risk analysis for iDNNs}
In this section, we study the statistical risk of the ERM over the iDNN class given by
\begin{equation}
\label{eq-erm-idnn}
\hat{f}_{iDNN}=\underset{f\in \mathcal F_{iDNN} (L, P, S, B) }{\argmin}\frac{1}{n}\sum_{i=1}^n(y_i-f(x_i))^2
\end{equation}
for the nonparametric regression model (\ref{eq:regmodel}), where the true function $f_0$ is $\beta$-H\"older smooth on a manifold. The following theorem shows that the iDNN estimator attains the optimal rate. The proof is almost the same as that of Theorem \ref{thm:ednn:risk}, except that it uses the approximation result for the iDNN class given in Theorem \ref{thm:idnn:approx}; we only sketch the entropy bound below.
\begin{theorem}
Assume the model (\ref{eq:regmodel}) with a $d$-dimensional compact manifold $M$ isometrically embedded in $\mathbbm R^D$. Then the ERM estimator $\hat{f}_{iDNN}$ over the iDNN class $\mathcal{F}_{iDNN}(L, P, S, B=1)$ in (\ref{eq-erm-idnn})
with $L \asymp \log(n)$, $n\gtrsim P\gtrsim n^{d/(2\beta+d)}$ and $S\asymp n^{d/(2\beta+d)}\log n$ satisfies
\begin{align*}
\sup_{f_0:M\mapsto[-1, 1] \text{ s.t. } f_0\in\mathcal{C}_D^{\beta}(M, K)}R(\hat{f}_{iDNN},f_0)\lesssim n^{-\frac{2\beta}{2\beta+d}}\log^3n.
\end{align*}
\end{theorem}
\begin{proof}
For any two iDNNs $f(\cdot) = \sum_{k=1}^K\tau_k(\cdot)f_k(\log _{x_k}(\cdot))$ and $f'(\cdot) = \sum_{k=1}^K\tau_k(\cdot)f_k'(\log _{x_k}(\cdot))$ in $ \mathcal{F}_{iDNN}(L,P, S,B)$, we have
\begin{align*}
\|f-f'\|_{L^\infty(M)}
&\le \sup_{x\in M }\sum_{k=1}^K\tau_k(x)\left|f_k(\log _{x_k}(x))-f_k'(\log _{x_k}(x))\right|\\
&\le \max_{1\le k\le K}\left\|f_k-f_k'\right\|_{L^\infty(\mathbbm R^d)}.
\end{align*}
Therefore, the entropy of $ \mathcal{F}_{iDNN}(L,P, S,B)$ is bounded by $K$ times the entropy of the class $ \mathcal{F}(L,P, S,B)$. Then, in the same way as in the proof of Theorem \ref{thm:ednn:risk}, we get the desired result.
\end{proof}
\section{Simulations study and real data analysis}
\label{sec-sim}
We illustrate the practical impact and utility of our methods on simulated data sets and several important real data sets, including the AFEW database, the HDM05 database, the ADHD-200 dataset, an HIV study, and others. The proposed eDNNs, tDNNs, and iDNNs are applied to learning problems such as regression and classification on various manifolds, including the sphere, the planar shapes, and the manifold of symmetric positive definite matrices, which are among the most popular classes of manifolds encountered in medical diagnostics using medical imaging and in image classification in digital imaging analysis. For the eDNN models, we list explicit embeddings below and the corresponding Lie groups that act on them equivariantly. For the iDNN models, we elaborate on the exponential map and inverse-exponential (log) map on those manifolds. As mentioned before, the tDNN model is the special case of the iDNN model with $K=1$, which utilizes the exponential map and inverse-exponential map as well.
\subsection{Sphere}
One of the simplest manifolds of interest is the sphere in particular in directional statistics and spatial statistics \citep{fisherra53, maridajpuu, fisher87, jun2008, chunfeng}. Statistical analysis of data from the two-dimensional sphere $S^2$, often called directional statistics, has a fairly long history \citep{fisherra53, maridajpuu, fisher87}. Modeling on the sphere has also received recent attention due to applications in spatial statistics, for example, global models for climate or satellite data \citep{jun2008, chunfeng}.
To build the eDNN on the sphere, first note that $S^d$ is a submanifold of ${\mathbb R}^{d+1}$, so that the inclusion map $J$ serves as a natural embedding of $S^d$ into $\mathbb R^{d+1}$. It is easy to check that $J$ is an equivariant embedding with respect to the Lie group $H=SO(d+1)$, the group of $(d+1)\times (d+1)$ special orthogonal matrices. Intuitively speaking, this embedding preserves many of the symmetries of the sphere. On the other hand, one can use the geodesics (in this case, the great circles on the sphere), for which closed-form exponential and inverse-exponential maps are available, to construct the iDNN model. Furthermore, given the base points $x_i$, $i=1,\dots,K$, one can construct the partition of unity from the bump functions $\tau_i(x) \propto e^{-\frac{1}{1 - \|x - x_i\|^2}}$ on the sphere.
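For concreteness, a minimal NumPy sketch of the closed-form exponential and log maps on the sphere used by the tDNN/iDNN models is given below; the function names and the numerical clipping are illustrative implementation choices.
\begin{verbatim}
import numpy as np

def sphere_exp(p, v):
    """Exp map on S^d: tangent vector v at unit base point p -> point on the sphere."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * v / nv

def sphere_log(p, x):
    """Log map on S^d: point x on the sphere -> tangent vector at base point p."""
    c = np.clip(np.dot(p, x), -1.0, 1.0)
    theta = np.arccos(c)                 # geodesic distance between p and x
    u = x - c * p                        # component of x orthogonal to p
    nu = np.linalg.norm(u)
    return np.zeros_like(p) if nu < 1e-12 else theta * u / nu

# Round trip: exp after log recovers the original point (up to rounding).
p = np.array([0.0, 0.0, 1.0])
x = np.array([1.0, 0.0, 0.0])
assert np.allclose(sphere_exp(p, sphere_log(p, x)), x)
\end{verbatim}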
In this simulation study, we consider a classification problem based on the von Mises-Fisher (MF) distribution on the sphere $S^2$, which has the following density:
\begin{equation}
f_{\mathrm{MF}}(y ; \mu, \kappa) \propto \exp \left(\kappa \mu^{T} y\right),
\end{equation}
where $\kappa$ is a concentration parameter and $\mu$ is a location parameter. We then simulate data from $K$ different classes on the sphere $ S^d$ via a mixture of MF distributions as:
\begin{align}
& u_{i1},\dots,u_{i10} \sim \mathrm{MF}(\mu_i,\kappa_1), \quad i = 1,\dots,K,\\
& m_{ij} \sim \mathrm{unif}\{u_{i1},\dots,u_{i10}\}, \quad x_{ij} \sim \mathrm{MF}(m_{ij},\kappa_2), \\
& i = 1,\dots,K, \quad j= 1,\dots,N.
\end{align}
Here $x_{ij}$ is the $j$th sample from the $i$th class, $\mu_i$ is the mean for the $i$th class, and $\kappa_1,\kappa_2$ are dispersion parameters shared by all classes. We first generated $10$ means $u_{i1},\dots,u_{i10}$ from the $\mathrm{MF}$ distribution for the $i$th class. Then for each class, we generated $N$ observations as follows: for each observation $x_{ij}$, we randomly picked $m_{ij}$ from $u_{i1},\dots,u_{i10}$ with probability $1/10$, and then generated an observation from $\mathrm{MF}(m_{ij},\kappa_2)$, thus leading to a mixture of $\mathrm{MF}$ distributions. Moreover, $\kappa_1$ controls the dispersion of the intermediate variables $m_{ij}$ while $\kappa_2$ controls the dispersion of the observations $x_{ij}$. Figure \ref{fig:sphere} shows observations from the mixture model on the sphere under different dispersions.
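For illustration, a minimal sketch of this data-generating mechanism is given below, assuming SciPy $\geq 1.11$, which provides \texttt{scipy.stats.vonmises\_fisher}; the helper name and the choice of class means are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import vonmises_fisher   # assumes SciPy >= 1.11

rng = np.random.default_rng(0)

def sample_class(mu, kappa1, kappa2, n):
    """One class of the mixture: 10 intermediate means from MF(mu, kappa1),
    then each observation from MF(m, kappa2) with m picked uniformly."""
    centers = vonmises_fisher(mu, kappa1).rvs(10)
    idx = rng.integers(0, 10, size=n)
    return np.vstack([vonmises_fisher(centers[j], kappa2).rvs(1) for j in idx])

# Two classes on S^2 with antipodal class means (an illustrative choice).
X1 = sample_class(np.array([0.0, 0.0, 1.0]), kappa1=4, kappa2=20, n=100)
X2 = sample_class(np.array([0.0, 0.0, -1.0]), kappa1=4, kappa2=20, n=100)
\end{verbatim}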
\begin{figure*}[htbp]
\centering
\subfloat[$\kappa_1=10, \kappa_2 = 50$]{
\includegraphics[width=.35\textwidth]{kappa1=10.png}}
\subfloat[$\kappa_1=8, \kappa_2 = 40$]{
\includegraphics[width=.35\textwidth]{kappa1=8.png}}
\subfloat[$\kappa_1=4, \kappa_2 = 20$]{
\includegraphics[width=.35\textwidth]{kappa1=4.png}}\\
\caption{Observations for $K =2$ classes from the mixture $\mathrm{MF}$ distribution, $N =100$. The nonlinear boundary between the two classes becomes hard to see with the naked eye as $\kappa_1$ and $\kappa_2$ drop and the variance of the data increases, which makes the classification problem harder. }
\label{fig:sphere}
\end{figure*}
In the following simulation, we use the mixture model on the hyperspheres $S^2, S^{10}, S^{50}$ and $S^{100}$ with $K =2$, $N = 2000$, $\kappa_1 = 4$, $\kappa_2 = 20$, and divide the data into a 75 percent training set and a 25 percent test set. We repeat this split $50$ times. We then compare the eDNN, tDNN, and iDNN models to other competing estimators via the classification accuracy on the test set in Table \ref{tab:test-sphere}.
For competing estimators, we consider the k-nearest neighbors (kNN), the random forest (RF), the logistic regression (LR), and the support vector machine (SVM) with the radial basis function (RBF) kernel. The tuning parameters in each method are selected by evaluation on a validation data set whose size is $25\%$ of the training set.
For all DNN models, we apply a network architecture of $5$ hidden layers with widths $(100,100,100,100,100)$. The DNN model coincides with the eDNN model here, since the embedding map from the sphere into the ambient Euclidean space is the inclusion map. In the tDNN model, we use the Fr\'echet mean of the training set as the base point and transform all data in the batch to tangent vectors before feeding them to the neural network. In the iDNN model, we consider the north and south poles $(\pm 1,0,\dots,0)$ as base points and use a neural network with the same structure for all tangent spaces. All models are trained with the Adam optimizer \cite{kingma2014adam}. As shown in Table \ref{tab:test-sphere}, our tDNN and iDNN models outperform the other competing estimators. Specifically, our tDNN models achieve the best accuracies of $94.88 \pm 0.53$ and $97.13 \pm 0.39$ in the low-dimensional cases. Our iDNN models obtain the best results of $ 80.72 \pm 0.94$ and $68.43 \pm 1.20$ in the high-dimensional cases.
\begin{table}[htbp]
\centering
\caption{The test accuracy is calculated over $50$ random splits. The $5$-layer network (with $100$ hidden nodes in each layer) is used for our DNN models in all experiments. Our tDNN model achieves the best results when the dimension is low ($S^2$, $S^{10}$), while our iDNN is the best in the high-dimensional cases ($S^{50}$, $S^{100}$). Moreover, our tDNN and iDNN models show better accuracy than the classical DNN, especially in high-dimensional cases. }
\begin{tabular}{lccccc}
& $S^2$ & $S^{10}$ & $S^{50}$ & $S^{100}$\\
\hline
$\text{DNN}$ & $ 94.12 \pm 0.67 $ & $ 96.22 \pm 0.63 $ & $ 75.93 \pm 1.07 $ & $62.53 \pm 1.35$\\
$\text{tDNN}$ & $ \mathbf{94.88 \pm 0.53} $ & $\mathbf{ 97.13 \pm 0.39} $ & $ 80.07 \pm 0.95$ & $68.26 \pm 1.16$\\
$\text{iDNN}$ & $94.69 \pm 0.65 $ & $ 97.11\pm 0.41 $ & $ \mathbf{80.72 \pm 0.94} $ &$\mathbf{68.43 \pm 1.20} $ \\
\hline
$\text{kNN}$ & $ 92.16\pm 0.77 $ & $ 94.98 \pm 0.60$ & $ 69.18\pm 1.44$ & $56.24 \pm 1.30$\\
$\text{LR}$& $ 92.98 \pm 0.76 $ & $ 88.64 \pm 0.76$ & $ 72.38 \pm 1.14$ & $66.73 \pm 1.37$\\
$\text{RF}$& $ 93.66 \pm 0.83 $ & $89.93 \pm 0.65$ & $ 70.29 \pm 1.48$ & $62.29 \pm 1.45$\\
$\text{SVM}$& $ 94.07 \pm 0.1 $ & $96.85 \pm 0.44$ & $ 79.38 \pm 1.15$ & $68.25 \pm 1.18$\\
\end{tabular}
\label{tab:test-sphere}
\end{table}
\subsection{The planar shape}
Let $z=(z_1,\ldots, z_k)$, with $z_1,\ldots, z_k\in {\mathbb R}^2$, be a set of $k$ landmarks. The planar shape space $\Sigma_2^k$ is the collection of such $z$'s modulo the Euclidean similarity motions, namely translation, scaling, and rotation. One has $\Sigma_2^k=S^{2k-3}/SO(2)$, the quotient of the sphere by the action of $SO(2)$ (rotations), the group of $2\times 2$ special orthogonal matrices.
A point in $\Sigma_2^k$ can be identified as the orbit of some $u\in S^{2k-3}$, which we denote as $\sigma(z)$. Viewing $z$ as a vector of complex numbers, one can embed $\Sigma_2^k$ into $S(k,\mathbb C)$, the space of $k\times k$ complex Hermitian matrices, via the Veronese-Whitney embedding (see, e.g., \cite{rabimono}):
\begin{equation}
\label{eq-planaremb}
J(\sigma(z))=uu^*=((u_i\bar{u}_j))_{1\leq i,j\leq k}.
\end{equation}
One can verify that $J$ is equivariant (see \cite{kendall84}) with respect to the Lie group
$$H=SU(k)=\{A\in GL(k, \mathbb C): AA^*=I, \det(A)=1\},$$
with its action on $\Sigma_2^k$ induced by left multiplication.
We consider a planar shape data set involving measurements of a group of typically developing children and a group of children with ADHD (attention deficit hyperactivity disorder). ADHD is one of the most common psychiatric disorders in children and can continue through adolescence and adulthood. Symptoms include difficulty staying focused and paying attention, difficulty controlling behavior, and hyperactivity (over-activity). In general, ADHD has three subtypes: (1) ADHD hyperactive-impulsive, (2) ADHD-inattentive, and (3) combined hyperactive-impulsive and inattentive (ADHD-combined). The ADHD-200 dataset (\url{http://fcon_1000.projects.nitrc.org/indi/adhd200/}) records both anatomical and resting-state functional MRI data of 776 labeled subjects across 8 independent imaging sites, 491 of which were obtained from typically developing individuals and 285 from children and adolescents with ADHD (ages 7-21 years).
The planar Corpus Callosum (CC) shape data are extracted, with 50 landmarks on the contour of the Corpus Callosum of each subject (see \cite{hongtu15}). See Figure \ref{fig:landmark} for a plot of the raw landmarks of a typically developing child and an ADHD child.
After quality control, CC shape data for 647 of the 776 subjects were obtained, including 404 ($n_1$) typically developing children, 150 ($n_2$) diagnosed with ADHD-Combined, 8 ($n_3$) diagnosed with ADHD-Hyperactive/Impulsive, and 85 ($n_4$) diagnosed with ADHD-Inattentive. Therefore, the data lie in the space $\Sigma_2^{50}$, which has a high dimension of $2\times 50-4=96$.
\begin{table}[htbp]
\centering
\caption{Demographic information about processed ADHD-200 CC shape dataset, including disease status, age, and gender.}
\begin{tabular}{cccc}
\hline Disease status & Num. & Range of age in years(mean) & Gender(female/male) \\
\hline Typically Developing Children & 404 & $7.09-21.83(12.43)$ & $179 / 225$ \\
ADHD-Combined & 150 & $7.17-20.15(10.96)$ & $39 / 111$ \\
ADHD-Hyperactive/Impulsive & 8 & $9.22-20.89(14.69)$ & $1 / 7$ \\
ADHD-Inattentive & 85 & $7.43-17.61(12.23)$ & $18 / 67$ \\
All data & 647 & $7.09-21.83(12.09)$ & $237 / 410$ \\
\hline
\end{tabular}
\label{tab-Demo}
\end{table}
As shown in Table \ref{tab-Demo}, we consider the classification problem with 4 different classes. We divided the dataset into a $75$ percent training set and a $25$ percent test set and evaluated the classification accuracy on the test set against other learning methods. Since the class sizes are unbalanced and some classes are very small (e.g., the ADHD-Hyperactive/Impulsive class has only 8 subjects), we also considered the classification with two classes by combining the ADHD samples into one class, as shown in the right panel of Figure \ref{fig:landmark}.
\begin{figure*}[htbp]
\centering
\subfloat[Mean shapes of different classes]{
\includegraphics[width=1.0\textwidth]{planar.png}}
\caption{CC shapes}
\label{fig:landmark}
\end{figure*}
Similar to the sphere case, we select the k-nearest neighbors (kNN), the random forest (RF), the logistic regression (LR), and the support vector machine (SVM) with the radial basis function (RBF) kernel as competing estimators. The tuning parameters in each method are selected by evaluation on a validation data set whose size is $25\%$ of the training set. For all DNN models, we utilize the same network architecture of $5$ hidden layers with widths $(100,100,100,100,100)$. The DNN model is applied to the raw data, while the eDNN model is applied to the data embedded by the Veronese-Whitney embedding. The preshape data (normalized raw data), lying in the hypersphere $S^{100}$, are used for the tDNN and iDNN models. In the iDNN model, we chose the north and south poles $(\pm 1,0,\dots,0)$ as base points and utilized the geometry of the hypersphere as before. In the tDNN model, we pick the Fr\'echet mean of the training set as the base point and transform all data in a batch to tangent vectors before feeding them to the neural network. All models are trained with the Adam optimizer. The results are reported in Table \ref{tab:planar1}. Our tDNN model achieves the best accuracy of $65.84\pm 3.10$ over 50 splits in the 2-class case, and our iDNN model shows the best result of $63.55\pm 3.80$ in the 4-class case.
\begin{table}[htbp]
\centering
\caption{The average accuracy on the test dataset is calculated over $50$ random splits. The $5$-layer network (with $100$ hidden nodes in each layer) is used for our DNN models in all experiments. Our tDNN model obtains the best accuracy in the 2-class case, while our iDNN model achieves the best accuracy in the 4-class case. Furthermore, all our eDNN, tDNN and iDNN models outperform the classical DNN model, indicating the advantages of our frameworks. }
\begin{tabular}{lccc}
& 4 Classes & 2 Classes\\
\hline
$\text{DNN}$ & $ 56.40 \pm 10.83 $ & $ 61.09 \pm 8.44 $ \\
$\text{eDNN}$ & $ 62.98\pm 3.91 $ & $ 63.81\pm 3.72 $ \\
$\text{tDNN}$ & $ 63.20\pm 3.70 $ & $ \mathbf{65.84\pm 3.10} $ \\
$\text{iDNN}$ & $ \mathbf{ 63.55\pm 3.80 }$ & $ 65.42\pm 3.41 $ \\
\hline
$\text{kNN}$ & $ 57.62 \pm 3.37$ & $ 61.26\pm 3.84$\\
$\text{LR}$& $ 61.35 \pm 3.54 $ & $ 59.58 \pm 3.44$\\
$\text{RF}$& $ 61.38 \pm 3.50 $ & $ 63.20 \pm 3.13$\\
$\text{SVM}$& $ 61.80 \pm 3.92 $ & $ 64.89 \pm 3.64$\\
\end{tabular}
\label{tab:planar1}
\end{table}
\subsection{Symmetric positive definite (SPD) matrices}
Covariance matrices are ubiquitous and attractive in machine learning applications due to their capacity to capture the structure inside the data. The main challenge is to take the particular geometry of the Riemannian manifold of symmetric positive definite ($\text{SPD}$) matrices into consideration. The space $\text{SPD}(d)$ of all $d\times d$ positive definite matrices belongs to an important class of manifolds that possesses particular geometric structures, which should be taken into account for building the DNNs. \cite{Fletcher2007250} investigates its Riemannian structure and provides somewhat concrete forms of all its geometric quantities. \cite{dryden2009non} studies different notions of means and averages in $\text{SPD}(3)$ with respect to different distance metrics and considers applications to DTI data and covariance matrices.
Under the Riemannian framework of tensor computing \cite{pennec2006riemannian}, several metrics play an important role in machine learning on \text{SPD} matrices. Generally, the Riemannian distance $d(P_1, P_2)$ between two points $P_1$ and $P_2$ on the manifold is defined as the length of the geodesic $\gamma_{P_1 \to P_2}$, i.e., the shortest parameterized curve connecting them. In the $\text{SPD}$ manifold, the distance under the affine metric could be computed as \cite{pennec2006riemannian}:
\begin{align*}
d\left(P_{1}, P_{2}\right)=\frac{1}{2}\left\|\log \left(P_{1}^{-\frac{1}{2}} P_{2} P_{1}^{-\frac{1}{2}}\right)\right\|_{F}.
\end{align*}
Other important natural mappings to and from the manifold and its tangent bundle are the logarithmic mapping $Log_{P_0}$ and the exponential mapping $Exp_{P_0}$ at the point $P_0$. Under the affine metric, those two mappings are known in closed form:
\begin{align*}
\forall S \in \mathcal{T}_{P_{0}}, Exp_{P_{0}}(S)=P_{0}^{\frac{1}{2}} \exp \left(P_{0}^{-\frac{1}{2}} S P_{0}^{-\frac{1}{2}}\right) P_{0}^{\frac{1}{2}} \in \text{SPD}(d)
\end{align*}
\begin{align*}
\forall P \in \text{SPD}(d), Log_{P_{0}}(P)=P_{0}^{\frac{1}{2}} \log \left(P_{0}^{-\frac{1}{2}} P P_{0}^{-\frac{1}{2}}\right) P_{0}^{\frac{1}{2}} \in \mathcal{T}_{P_{0}},
\end{align*}
where $\mathcal{T}_{P_{0}}$ denotes the tangent space at $P_0$. Furthermore, we consider the matrix log map as the embedding $J$, mapping $\text{SPD}(d)$ to $Sym(d)$, the space of symmetric matrices. For example, let $P \in \text{SPD}(d)$ have the spectral decomposition $P=U\Sigma U^{T}$; then the log map of $P$ is $\log(P) = U\log(\Sigma)U^{T}$, where $\log(\Sigma)$ denotes the diagonal matrix whose diagonal entries are the logarithms of the diagonal entries of $\Sigma$. Moreover, the embedding $J$ is a diffeomorphism, equivariant with respect to the actions of $GL(d,{\mathbb R})$, the $d$ by $d$ general linear group. That is, for $H\in GL(d,{\mathbb R})$, we have $\log(HPH^{T}) = H \log(P)H^{-1}$.
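For illustration, a minimal NumPy sketch of the matrix log map and of the affine-invariant exponential and log maps above is given below; the helper names are illustrative and no numerical safeguards are included.
\begin{verbatim}
import numpy as np

def sym_fun(P, fun):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, U = np.linalg.eigh(P)
    return (U * fun(w)) @ U.T

def logm(P):  return sym_fun(P, np.log)    # embedding J: SPD(d) -> Sym(d)
def expm(S):  return sym_fun(S, np.exp)
def sqrtm(P): return sym_fun(P, np.sqrt)

def spd_exp(P0, S):
    """Exp map at P0 under the affine-invariant metric: tangent S -> SPD(d)."""
    R, Rinv = sqrtm(P0), np.linalg.inv(sqrtm(P0))
    return R @ expm(Rinv @ S @ Rinv) @ R

def spd_log(P0, P):
    """Log map at P0 under the affine-invariant metric: SPD(d) -> tangent at P0."""
    R, Rinv = sqrtm(P0), np.linalg.inv(sqrtm(P0))
    return R @ logm(Rinv @ P @ Rinv) @ R
\end{verbatim}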
In the context of deep networks on $\text{SPD}$ manifolds, we build our models on the SPDNet introduced by \cite{huang2017riemannian}, which mimics classical neural networks with a first stage computing an invariant representation of the input data points and a second stage devoted to the final classification. The SPDNet exploits the geometry through three types of layers:
The BiMap (bilinear transformation) layer, analogous to the usual dense layer; the induced dimension reduction eases the computational burden often found in learning algorithms on SPD data:
$$
X^{(l)}=W^{(l)^{T}} P^{(l-1)} W^{(l)} \text { with } W^{(l)} \text { semi-orthogonal. }
$$
The ReEig (rectified eigenvalues activation) layer, analogous to the ReLU activation, can also be seen as an Eigen-regularization, protecting the matrices from degeneracy:
$$
X^{(l)}=U^{(l)} \max \left(\Sigma^{(l)}, \epsilon I_{n}\right) U^{(l)^{T}}, \text { with } P^{(l)}=U^{(l)} \Sigma^{(l)} U^{(l)^{T}}.
$$
The LogEig (log eigenvalues Euclidean projection) layer:
$X^{(l)}=\operatorname{vec}\left(U^{(l)} \log \left(\Sigma^{(l)}\right) U^{(l)^{T}}\right)$, with again $U^{(l)}$ the eigenspace of $P^{(l)}$.
Under our framework, the SPDNet can be viewed as both an eDNN and a tDNN model: the LogEig layer coincides with the logarithmic mapping at the identity, $\log_{I}(P^{(l)}) = \operatorname{vec}\left(U^{(l)} \log \left(\Sigma^{(l)}\right) U^{(l)^{T}}\right)$, so the embedding used by the eDNN and the tangent-space transformation used by the tDNN are identical in this case. In our experiments, we consider tDNN rather than iDNN models, as one tangent space at the base point is sufficient to cover the entire manifold. Our eDNN models on $\text{SPD}(d)$ consist of 3 BiMap layers, 3 ReEig layers, one LogEig layer (for the embedding), and a 5-layer DNN with 100 hidden nodes per layer. In the tDNN models, we replace the LogEig layer with the intrinsic logarithmic mapping under different metrics.
In our experiments, we evaluate the performance of the tDNN and eDNN models on the AFEW and HDM05 datasets using the same setup and protocol as in \cite{huang2017riemannian}. The AFEW dataset \cite{dhall2011static} includes 600 video clips with per-frame annotations of valence and arousal levels and 68 facial landmarks, depicting 7 classes of emotions. The HDM05 dataset \cite{muller2007mocap} contains over three hours of motion capture data in C3D and ASF/AMC formats, covering more than 70 motion classes across multiple actors. We divide the data into a 75-25 percent training-test split, with 10 repetitions, and use the validation set (25 percent of the training data) to tune hyperparameters. We implement tDNN models under both the affine and log-Euclidean metrics, using the Fr\'echet mean of the batch as the base point. As shown in Table \ref{tab:spd}, our tDNN model under the log-Euclidean metric achieves the best results on both datasets, with accuracies of $35.85 \pm 1.49$ on the AFEW dataset and $62.59 \pm 1.35$ on the HDM05 dataset.
\begin{table}[htbp]
\centering
\caption{Test-set accuracy over 10 repetitions. We follow the setup and protocol in \cite{huang2017riemannian}; our tDNN models outperform the eDNN (SPDNet) under both the log-Euclidean and affine metrics. }
\begin{tabular}{lccc}
\hline Data & AFEW & HDM05 \\
\hline$(n, d)$ & $(2135,400^2)$ &$(2086,93^2)$ \\
\hline \hline eDNN(SPDNet) & $34.23 \pm 1.44 $ & $61.35 \pm 1.12$ \\
tDNN-Log & $\mathbf{35.85 \pm 1.49}$ & $\mathbf{62.59 \pm 1.35}$ \\
tDNN-Affine &$35.31 \pm 1.68$ & $62.23 \pm 1.43$ \\
\hline
\end{tabular}
\label{tab:spd}
\end{table}
\section{Discussion}
In this work, we develop intrinsic and extrinsic deep neural network architectures on manifolds and characterize their theoretical properties in terms of the approximation error and the statistical error of the corresponding ERM-based estimators. These neural networks exploit the underlying geometry of the manifolds for learning and inference. Future work will focus on developing convolutional neural networks on manifolds for classification of manifold-valued images, which have abundant applications in medical imaging and computer vision.
\section*{Acknowledgments}
We would like to thank Dong Quan Nguyen, Steve Rosenberg, and Bayan Saparbayeva for the very helpful discussions. We acknowledge the generous support of NSF grants DMS CAREER 1654579 and DMS 2113642. The second author was supported by INHA UNIVERSITY Research Grant.
\bibliographystyle{abbrv}
|
{
"arxiv_id": "2302.08682",
"language": "en",
"timestamp": "2023-02-20T02:06:28",
"url": "https://arxiv.org/abs/2302.08682",
"yymm": "2302"
} | \section{Introduction}
The convolutional neural network (CNN) is an important component in computer vision and plays a key role in deep learning architectures; it integrates low-/mid-/high-level features \cite{zeiler2014visualizing} and classifiers naturally in an end-to-end multilayer fashion, and the ``levels'' of features can be enriched by the depth of the stacked layers. The design of CNN models is inspired by the inherent visual perception process of living organisms \cite{hubel1962receptive}.
The shallow layers learn local features of the image, such as the color and geometric shape of the image object, while the deep layers learn more abstract features from the input data, such as contour characteristics and other high-dimensional properties. The multi-layer structure of the CNN can automatically learn the spatial information of features from the input image data. In hierarchical CNN models, spatial information is represented by matrices that convert between the overall coordinate system of the object and the coordinate system of each of its parts.
Thus, the spatial information of the features learned by a CNN supports shift-invariant classification of the input \cite{zhang1996improved}.
In fact, what really provides free translation and deformation invariance for the CNN architecture are the convolution structure and pooling operation, while the use of sub-sampling and stride will break this special characteristic \cite{azulay2019deep}.
Even if the target position is changed, the operations of convolution and pooling can still extract the same information from images, which is then flattened to the same feature values, possibly in a different order, in the following fully connected layer. However, sub-sampling and stride discard part of the information in the input image, which leads to the loss of some features and thus breaks the translation invariance of the CNN.
Recent research on data augmentation has attempted to enhance invariance by performing translation, rotation, reflection, and scaling \cite{bruna2013invariant,sifre2013rotation,cohen2016group,worrall2017harmonic,esteves2018learning} on the input images. However, such augmentation cannot fundamentally improve the model's ability to learn shift invariance.
The shift-invariant ability of CNN depends on the learning of features' spatial information, which contains two types of information, i.e., \textit{features relationship} and \textit{position information}. \textit{Features relationship} refers to the relative position among different features, while \textit{position information} represents the absolute position of features in the image.
We deem that features relationship is helpful in CNN, as if a feature is useful in one image location, the same feature is likely to be useful in other locations. The Capsule Network \cite{kosiorek2019stacked} is designed to learn features relationship from images, i.e., the spatial relationships between whole objects and their parts. However, it is difficult to implement on complex datasets, e.g., CIFAR-10 and ImageNet. On the other hand, we believe position information is harmful to CNN, as learning it will impede the model's acquisition of features relationship. Recent evidence suggests that position information is implicitly encoded in the extracted feature maps, thus
non-linear readout of position information further augments the readout of absolute position \cite{islam2019much}. Additionally, these studies point out that zero-padding and borders serve as an anchor for spatial information that is generated and ultimately propagated over the whole image during spatial abstraction. Hence, reducing the position information introduced by zero-padding is an issue that has long been ignored in CNN solutions to vision problems.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{RP1.pdf}
\caption{Traditional Padding and Random Padding.}
\label{fig1}
\end{figure}
In this paper, with the purpose of reducing a CNN's learning of position information, we propose the Random Padding operation shown in Figure \ref{fig1}, a variant of the traditional padding technique. Random Padding adds zero-padding to randomly chosen boundaries of the feature maps, which weakens the position information and lets the CNN model better capture the features relationship. This technique makes CNN models more robust to changes in an object's absolute position in the images.
The contributions of this paper are summarized as follows:
\begin{itemize}
\item We investigate the usefulness of spatial information in CNNs and propose a new approach to improving model accuracy, i.e., reducing the position information in CNNs.
\item We propose the Random Padding operation, which can be directly added to a variety of CNN models without any changes to the model structure. It is lightweight and requires no additional parameter learning or computation in model training. As the operation does not modify the input images, it is complementary to the traditional data augmentations used in CNN training.
\item We conduct extensive experiments on popular CNN models. The results show that the Random Padding operation reduces the extraction of position information in CNN models and improves accuracy on image classification. Comprehensive experimental results and the complete program code are provided in the supplementary materials.
\end{itemize}
\section{Related Work}
\subsection{Approaches to Improve Accuracy of CNNs}
The structural evolution of convolutional neural networks has gradually improved accuracy, e.g., AlexNet with ReLU and Dropout \cite{krizhevsky2012imagenet}, VGG with $3 \times 3$ kernels \cite{DBLP:journals/corr/SimonyanZ14a}, GoogLeNet with Inception modules \cite{szegedy2015going}, and ResNet with residual blocks \cite{he2016deep}. In addition to the upgrade of CNN architectures, the augmentation of input data is also an indispensable part of improving performance.
\noindent\textbf{Data Augmentation.} The common demonstrations showing the effectiveness of data augmentation come from simple transformations, such as translation, rotation, flipping, cropping, adding noises, etc, which aim at artificially enlarging the training dataset \cite{shorten2019survey}. The shift invariance of the object is encoded by CNNs to improve the model's learning ability for image recognition tasks. For example, rotation augmentation on source images is performed by randomly rotating clockwise or counterclockwise between 0 and 360 degrees with the center of the image as the origin, reversing the entire rows and columns of image pixels horizontally or vertically is called flipping augmentation, and random cropping is a method to reduce the size of the input and create random subsets of the original input \cite{krizhevsky2012imagenet}. Random Erasing \cite{zhong2020random} is another interesting data augmentation technique that produces training images with varying degrees of occlusion.
\noindent\textbf{Other Approaches.}
In addition to geometric transformations and Random Erasing, there are many other image manipulations, such as noise injection, kernel filters, color space transformations, and mixing images \cite{shorten2019survey}. Noise injection adds a matrix of random values, usually drawn from a Gaussian distribution, which can help CNNs learn more robust features \cite{moreno2018forward}. Kernel filters are a widely used image processing method for sharpening and blurring images \cite{kang2017patchshuffle}, whereas color space transformations aim to alter the color values or distribution of images \cite{DBLP:journals/corr/ChatfieldSVZ14,jurio2010comparison}. Image mixing has appeared in recent years and has two main approaches: one randomly crops images and concatenates the crops to form new images \cite{inoue2018data}, while the other uses non-linear methods to combine images into new training examples \cite{summers2019improved}.
Different from the above approaches, we design the Random Padding operation to improve the performance of CNN models from the perspective of reducing the position information in the network.
\subsection{Padding in CNN}
The boundary effect is a well-researched phenomenon in biological neural networks \cite{tsotsos1995modeling,sirovich1979effect}. Previous research has addressed the boundary effect for artificial CNNs by using specific convolution filters for the border regions \cite{innamorati2019learning}. At some point during the convolution process, the filter kernel will come into contact with the image border \cite{islam2019much}. Classic CNNs use zero-padding to enlarge the image for filtering by kernels. The cropped images are filled by paddings to reach the specified size \cite{xie2018mitigating}. Guilin Liu and his colleagues proposed a simple and effective padding scheme, called partial convolution-based padding. Convolution results are reweighted near image edges relying on the ratios between the padded region and the convolution sliding window area \cite{liu2018partial}.
Padding adds extra pixels to the border of an image. During convolution, a corner pixel is covered by the kernel only once while a central pixel is generally covered multiple times, which causes shrinking outputs and loss of information at the corners of the image. Padding works by extending the area over which a convolutional neural network processes an image: pixels are added around the frame of the image to enlarge it so that the kernel can cover it more completely, which assists the kernel in processing the image. Adding padding operations to a CNN helps the model analyze images more accurately.
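As a concrete example (standard convolution output-size arithmetic, not specific to any model considered here), for an input of width $W$, kernel size $K$, padding $P$ and stride $S$, the output width is
\begin{equation*}
W_{\mathrm{out}} = \left\lfloor \frac{W + 2P - K}{S} \right\rfloor + 1,
\end{equation*}
so a $32\times 32$ input convolved with a $3\times 3$ kernel at stride $1$ shrinks to $30\times 30$ without padding but keeps its $32\times 32$ size with $P=1$.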
\section{Random Padding for CNN}
This section presents the Random Padding operation for training convolutional neural networks (CNNs). First, we introduce the detailed procedure of Random Padding. Next, we use comparative experiments to verify that Random Padding reduces the extraction of position information in CNNs. Finally, we describe the implementation of Random Padding in different CNN models.
\subsection{Random Padding Operation}\label{AA}
In CNN training, we replace the traditional padding with the Random Padding technique, which has four types of padding selections, shown in Figure \ref{fig1}. For the feature maps generated in the network, Random Padding performs the zero-padding operation randomly over the four boundaries according to the padding thickness required by the feature maps.
When padding thickness equals 1, Random Padding will first randomly select one padding of the four patterns. Assuming that the size of the feature map in the training is ${w * w}$, the feature map will become ${(w+1)*(w+1)}$ after this new padding method. Then, Random Padding will randomly select one of the four modes again, which will change the size of the feature map to $((w+1)+1)*((w+1)+1)$. In general, Random Padding will perform $2n$ padding selections if the padding thickness is $n$, where $n=1,2,3$ are most common in the CNN models. The detailed steps of Random Padding are shown in Algorithm \ref{alg:1}. In this process, the position of features will be randomly changed by adding Random Padding, hence the learning of the object's position information by CNN will be reduced.
\begin{algorithm}[h]
\caption{Random Padding Procedure}
\label{alg:1}
\begin{algorithmic}[1]
\STATE {\bfseries Input:}
Input feature map: $I$; The thickness of padding: $n$; The random padding thickness of four boundaries, left, right, top and bottom: $l, r, t, b$; Padding options: $S$
\STATE {\bfseries Output:}
Feature map with padding $I^*$
\STATE$l,\,r,\,t,\,b \gets 0$;
\STATE$S \gets[[1,0,1,0],[1,0,0,1],[0,1,1,0],[0,1,0,1]]$;
\FOR{$i\leftarrow 1$,\, $2n$}
\STATE $P_r \gets \textrm{RandomChoice}(S)$
\COMMENT{select a padding option randomly}
\STATE $l \gets l + P_r[0]$
\COMMENT{padding\_left}
\STATE $r \gets r + P_r[1]$
\COMMENT{padding\_right}
\STATE $t \gets t + P_r[2]$
\COMMENT{padding\_top}
\STATE $b \gets b + P_r[3]$
\COMMENT{padding\_bottom}
\ENDFOR
\STATE $I^* \gets \textrm{ZeroPad}(l,\,r,\,t,\,b)(I)$\
\end{algorithmic}
\end{algorithm}
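For illustration, the following is a minimal PyTorch sketch of Algorithm \ref{alg:1} as a drop-in module placed before a convolution; the module name and the use of \texttt{F.pad} are illustrative choices rather than a reference implementation.
\begin{verbatim}
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomPad2d(nn.Module):
    """Random Padding: for thickness n, draw 2n of the four half-boundary
    patterns and accumulate zero-padding on the chosen sides."""
    # Each option adds one pixel of padding as (left, right, top, bottom).
    OPTIONS = [(1, 0, 1, 0), (1, 0, 0, 1), (0, 1, 1, 0), (0, 1, 0, 1)]

    def __init__(self, n=1):
        super().__init__()
        self.n = n

    def forward(self, x):                     # x: (batch, channels, H, W)
        l = r = t = b = 0
        for _ in range(2 * self.n):
            dl, dr, dt, db = random.choice(self.OPTIONS)
            l, r, t, b = l + dl, r + dr, t + dt, b + db
        return F.pad(x, (l, r, t, b))         # zero-pad (left, right, top, bottom)

# A 3x3 convolution preceded by Random Padding with n=1 preserves the
# spatial size, just like conventional padding of 1 pixel on every side.
layer = nn.Sequential(RandomPad2d(n=1), nn.Conv2d(3, 64, 3, padding=0))
print(layer(torch.randn(8, 3, 32, 32)).shape)   # torch.Size([8, 64, 32, 32])
\end{verbatim}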
\subsection{Validation Method for Position Information Reduction in CNNs}
Position information has been shown to be implicitly encoded in the feature maps extracted by CNNs, introduced by the traditional zero-padding \cite{islam2019much}. In this article, we propose the hypothesis that the Random Padding operation reduces the extraction of position information in CNNs. In this sub-section, we test this hypothesis by comparing, in an end-to-end manner, the position information encoded by a CNN with traditional padding and by a CNN with Random Padding.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{CNN_with_RP.pdf}
\caption{Random Padding in the CNN.}
\label{fig2}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{posenet.pdf}
\caption{Compare position information between the CNN with traditional padding and the CNN with Random Padding.}
\label{fig3}
\end{figure*}
This validation experiment uses the Position Encoding Network \cite{islam2019much}, which is composed of two critical components: a feedforward convolutional encoder network and a simple position encoding module. For this task, two encoder networks collect characteristics, from the CNN with traditional padding and from the CNN with Random Padding, at different layers of abstraction. After collecting the multi-scale features from the front networks as inputs, the two position encoding modules output predictions of the position information. Because position encoding measurement is a novel concept, there is no universal metric; we use Spearman's rank correlation coefficient (SPC) and mean absolute error (MAE), as proposed by \cite{islam2019much}, to evaluate the position encoding performance and to verify that the Random Padding operation reduces the amount of position information extracted by CNNs. The higher the SPC, the higher the correlation between the output and the ground-truth map, while the MAE behaves oppositely. We present the detailed experiment settings and results in Section 4.
\subsection{Construct CNN with Random Padding}
The Random Padding operation can be added to different types of backbone networks to construct CNN for image classification.
In order to better analyze how the placement of Random Padding in a CNN affects model performance, we replace the traditional padding of the first one, first two, and first three padding layers with the Random Padding operation, and compare the image classification accuracy of various CNN models, as shown in Figure \ref{fig2}; a sketch of this construction is given below. Since the Random Padding operation is complementary to general data augmentation methods, we employ random cropping, random flipping, random rotation, and Random Erasing to enrich the training datasets. We present the detailed experiment settings and results in Section 5.
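As a hedged example (not the exact architectures used in our experiments), the first padded convolution layers of a VGG-style feature extractor could be modified as follows; the \texttt{RandomPad2d} module from the earlier sketch is repeated here so that the snippet is self-contained.
\begin{verbatim}
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomPad2d(nn.Module):
    """Same Random Padding module as sketched earlier (repeated for completeness)."""
    OPTIONS = [(1, 0, 1, 0), (1, 0, 0, 1), (0, 1, 1, 0), (0, 1, 0, 1)]
    def __init__(self, n=1):
        super().__init__()
        self.n = n
    def forward(self, x):
        l = r = t = b = 0
        for _ in range(2 * self.n):
            dl, dr, dt, db = random.choice(self.OPTIONS)
            l, r, t, b = l + dl, r + dr, t + dt, b + db
        return F.pad(x, (l, r, t, b))

# First two padded conv layers use Random Padding (RP_2 in our notation);
# later layers keep ordinary zero-padding.
features = nn.Sequential(
    RandomPad2d(1), nn.Conv2d(3, 64, 3, padding=0), nn.ReLU(),
    RandomPad2d(1), nn.Conv2d(64, 64, 3, padding=0), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),      # traditional padding
)
print(features(torch.randn(8, 3, 32, 32)).shape)      # torch.Size([8, 128, 16, 16])
\end{verbatim}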
\section{Evaluation of Position Information in CNNs}
This section quantitatively evaluates the impact of traditional padding and Random Padding on position information extraction in CNNs. The first part introduces the dataset used by the pretrained models and the evaluation metrics for position information. Second, we compare the ability of the CNN with traditional padding and the CNN with Random Padding to extract position information. In the third part, we analyze the experimental results and show that the CNN with Random Padding reduces the extraction of position information.
\subsection{Dataset and Evaluation Metrics}
\noindent\textbf{Dataset}
We use the Imagenet dataset \cite{he2019rethinking} to train the basic VGG and VGG with Random Padding as our initialization networks, and then use the DUT-S dataset \cite{wang2017learning} as our training set, which contains 10,533 images for training. Following the common training protocol used in \cite{zhang2017amulet,liu2018picanet}, we train the models on the same training set of DUT-S and evaluate the existence of position information on the synthetic images (white, black, and Gaussian noise) and the natural image from the website of News-Leader. Notably, we adhere to the standard setting used in saliency detection to ensure that the training and test sets do not overlap. Since the position information is largely content-independent, any image or model can be used in our studies.
\noindent\textbf{Evaluation}
At present, there is no universal standard for measuring position encoding, so we evaluate the amount of position information according to two metrics previously used by \cite{islam2019much}: Spearman's rank correlation coefficient (SPC) and mean absolute error (MAE). SPC is a non-parametric measure of the association between the ground truth and the predicted position map. We keep the SPC score within the range [-1, 1] to facilitate understanding. MAE measures the average magnitude of the errors between the predicted position map and the ground-truth gradient position map, without considering their direction.
The lower the SPC value, the less position information the model produces, and the higher the MAE value, the less position information the model outputs.
We expect lower SPC values and higher MAE values after applying the Random Padding operation in CNNs.
\begin{table*}[htbp]
\caption{Comparison of SPC and MAE in CNNs using traditional padding or Random Padding across different image types}
\centering
\small
\begin{tabular}{|l|l|ll|ll|ll|ll|}
\hline
& & \multicolumn{2}{c|}{Natrue} & \multicolumn{2}{c|}{Black} & \multicolumn{2}{c|}{White} & \multicolumn{2}{c|}{Noise} \\ \hline
& Model & \multicolumn{1}{c|}{SPC} & \multicolumn{1}{c|}{MAE} & \multicolumn{1}{c|}{SPC} & \multicolumn{1}{c|}{MAE} & \multicolumn{1}{c|}{SPC} & \multicolumn{1}{c|}{MAE} & \multicolumn{1}{c|}{SPC} & \multicolumn{1}{c|}{MAE} \\ \hline
\multirow{3}{*}{H} & PosENet & \multicolumn{1}{l|}{0.130} & 0.284 & \multicolumn{1}{l|}{0.} & 0.251 & \multicolumn{1}{l|}{0.} & 0.251 & \multicolumn{1}{l|}{0.015} & 0.251 \\ \cline{2-10}
& VGG & \multicolumn{1}{l|}{0.411} & 0.239 & \multicolumn{1}{l|}{0.216} & 0.242 & \multicolumn{1}{l|}{0.243} & 0.242 & \multicolumn{1}{l|}{0.129} & 0.245 \\ \cline{2-10}
& VGG\_RP & \multicolumn{1}{l|}{-0.116} & 0.253 & \multicolumn{1}{l|}{0.021} & 0.252 & \multicolumn{1}{l|}{0.023} & 0.255 & \multicolumn{1}{l|}{-0.045} & 0.251 \\ \hline
\multirow{3}{*}{V} & PosENet & \multicolumn{1}{l|}{0.063} & 0.247 & \multicolumn{1}{l|}{0.063} & 0.254 & \multicolumn{1}{l|}{0.} & 0.253 & \multicolumn{1}{l|}{-0.052} & 0.251 \\ \cline{2-10}
& VGG & \multicolumn{1}{l|}{0.502} & 0.234 & \multicolumn{1}{l|}{0.334} & 0.242 & \multicolumn{1}{l|}{0.433} & 0.247 & \multicolumn{1}{l|}{0.120} & 0.250 \\ \cline{2-10}
& VGG\_RP & \multicolumn{1}{l|}{-0.174} & 0.249 & \multicolumn{1}{l|}{-0.174} & 0.249 & \multicolumn{1}{l|}{-0.027} & 0.257 & \multicolumn{1}{l|}{0.100} & 0.249 \\ \hline
\multirow{3}{*}{G} & PosENet & \multicolumn{1}{l|}{0.428} & 0.189 & \multicolumn{1}{l|}{0.} & 0.196 & \multicolumn{1}{l|}{0.} & 0.206 & \multicolumn{1}{l|}{0.026} & 0.198 \\ \cline{2-10}
& VGG & \multicolumn{1}{l|}{0.765} & 0.14 & \multicolumn{1}{l|}{0.421} & 0.205 & \multicolumn{1}{l|}{0.399} & 0.192 & \multicolumn{1}{l|}{0.161} & 0.187 \\ \cline{2-10}
& VGG\_RP & \multicolumn{1}{l|}{-0.49} & 0.200 & \multicolumn{1}{l|}{-0.009} & 0.196 & \multicolumn{1}{l|}{-0.040} & 0.195 & \multicolumn{1}{l|}{-0.051} & 0.196 \\ \hline
\multirow{3}{*}{HS} & PosENet & \multicolumn{1}{l|}{0.187} & 0.306 & \multicolumn{1}{l|}{0.} & 0.308 & \multicolumn{1}{l|}{0.} & 0.308 & \multicolumn{1}{l|}{-0.060} & 0.308 \\ \cline{2-10}
& VGG & \multicolumn{1}{l|}{0.234} & 0.211 & \multicolumn{1}{l|}{0.227} & 0.297 & \multicolumn{1}{l|}{0.285} & 0.301 & \multicolumn{1}{l|}{0.253} & 0.292 \\ \cline{2-10}
& VGG\_RP & \multicolumn{1}{l|}{0.043} & 0.308 & \multicolumn{1}{l|}{0.049} & 0.310 & \multicolumn{1}{l|}{-0.127} & 0.306 & \multicolumn{1}{l|}{-0.066} & 0.309 \\ \hline
\multirow{3}{*}{VS} & PosENet & \multicolumn{1}{l|}{0.015} & 0.315 & \multicolumn{1}{l|}{0.} & 0.308 & \multicolumn{1}{l|}{0.} & 0.313 & \multicolumn{1}{l|}{-0.022} & 0.310 \\ \cline{2-10}
& VGG & \multicolumn{1}{l|}{0.339} & 0.292 & \multicolumn{1}{l|}{0.240} & 0.296 & \multicolumn{1}{l|}{0.229} & 0.299 & \multicolumn{1}{l|}{0.249} & 0.292 \\ \cline{2-10}
& VGG\_RP & \multicolumn{1}{l|}{0.004} & 0.308 & \multicolumn{1}{l|}{0.040} & 0.308 & \multicolumn{1}{l|}{0.026} & 0.308 & \multicolumn{1}{l|}{0.050} & 0.308 \\ \hline
\end{tabular}
\label{table1}
\end{table*}
\subsection{Architectures and Settings}
\noindent\textbf{Architectures}
We first build two pre-trained networks based on the basic architecture of VGG with 16 layers. The first network uses traditional padding, and the second one applies the technique of Random Padding on the first two padding layers. The proper number of padding layers added to the CNN is analyzed in Section 5. Meanwhile, we construct a randomization test by using a normalized gradient-like position map as the ground-truth. The generated gradient-like ground-truth position maps contain Horizontal gradient (HG) and vertical gradient (VG) masks, horizontal and vertical stripes (HS, VS), and Gaussian distribution (G). As shown in Figure \ref{fig3}, the combination of natural images $I_m\in\mathbb{R}^{h\times w\times 3} $ and gradient-like masks $G_{pos}\in\mathbb{R}^{h\times w}$ is used as the input of two pretrained models with fixed weights. We remove the average pooling layer and the layer that assigns categories of the pretrained model to construct an encoder network $f_p$ and $f_{rp}$ for extracting feature maps. The features ($f^1_\theta,f^2_\theta,f^3_\theta,f^4_\theta,f^5_\theta$) and ($f^1_{r\theta},f^2_{r\theta},f^3_{r\theta},f^4_{r\theta},f^5_{r\theta}$) we extract from the two encoder networks respectively come from five different abstraction layers, from shallow to deep. The following is a summary of the major operations:
\begin{equation}
\begin{aligned}
f^i_\theta=\bm{W}_p \ast I_m(G_{pos})\\
f^i_{r\theta}=\bm{W}_{rp} \ast I_m(G_{pos})
\end{aligned}
\end{equation}
where $\bm{W}_p$ denotes frozen weights from the model using traditional padding, and $\bm{W}_{rp}$ represents frozen weights from the model using the Random Padding operation. $\ast$ indicates the model operation.
After multi-scale features collection, the transformation function $T_{pos}$ performs bi-linear interpolation on the extracted feature maps of different sizes to create feature maps with the same spatial dimension. ($f^1_{pos},f^2_{pos},f^3_{pos},f^4_{pos},f^5_{pos}$) and ($f^1_{rpos},f^2_{rpos},f^3_{rpos},f^4_{rpos},f^5_{rpos}$) can be summarized as:
\begin{align}
\begin{aligned}
f^i_{pos}=T_{pos}(f^i_\theta)\\
f^i_{rpos}=T_{pos}(f^i_{r\theta})
\end{aligned}
\end{align}
These resized feature maps are concatenated and then fed into the Position Encoding Module (PosENet) \cite{islam2019much}, which has only one convolutional layer. The features are delivered to PosENet and the module is trained, with the goal of generating a pattern that is related only to the position information and is independent of other features. Note that during the training process, the parameters of the pretrained networks are fixed. The final stage of this study is to compare the amount of position information extracted by the CNN with traditional padding and by the CNN with Random Padding.
\noindent\textbf{Settings}
The models we choose for comparing the position information of feature maps are the traditional VGG16 and VGG16 with Random Padding. We initialize the CNN models by pre-training on the ImageNet dataset and keep the weights frozen in our comparison experiment. The input image size is $224\times 224$; the input can be a natural picture, a black image, a white image, or a noise image. We also apply five different ground-truth patterns HG, VG, G, HS and VS, which represent horizontal and vertical gradients, a 2D Gaussian distribution, and horizontal and vertical stripes, respectively. All feature maps from the five different layers extracted from the pretrained models are resized to $28\times 28$. Taking these feature maps as input, the Position Encoding Module (PosENet) is trained with stochastic gradient descent for 15 epochs with a momentum of 0.9 and a weight decay of $10^{-4}$. For this task, PosENet has only one convolutional layer with a kernel size of $3\times 3$ and no padding, which learns position information directly from the input.
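For concreteness, a minimal PyTorch-style sketch of such a single-convolution readout over resized multi-scale features is given below; the module name and the commented training setup are illustrative, and the frozen feature extractor is assumed to be given.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionReadout(nn.Module):
    """Single 3x3 convolution (no padding) reading position information
    from frozen multi-scale feature maps."""
    def __init__(self, in_channels):
        super().__init__()
        self.readout = nn.Conv2d(in_channels, 1, kernel_size=3, padding=0)

    def forward(self, features, size=(28, 28)):
        # Resize every frozen feature map to a common grid and concatenate.
        resized = [F.interpolate(f, size=size, mode='bilinear',
                                 align_corners=False) for f in features]
        return self.readout(torch.cat(resized, dim=1))

# Only the readout parameters are trained; the backbone stays frozen.
# readout = PositionReadout(in_channels=total_channels)   # total_channels: assumed
# opt = torch.optim.SGD(readout.parameters(), lr=0.01,
#                       momentum=0.9, weight_decay=1e-4)
\end{verbatim}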
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{GT.pdf}
\caption{Results of PosENet based networks with traditional padding or Random Padding corresponding to different ground-truth (GT) patterns.}
\label{fig4}
\end{figure}
\begin{table*}[htbp]
\caption{Test errors (\%) with different architectures on CIFAR-10, CIFAR-100 and Fashion-MNIST. \textbf{Baseline:} Baseline model, \textbf{RP\_1:} Random Padding on the first padding layer, \textbf{RP\_2:} Random Padding on the first two padding layers, \textbf{RP\_3:} Random Padding on the first three padding layers.}
\centering
\small
\renewcommand\tabcolsep{3.0pt}
\begin{tabular}{|l|l|l|l|l|c|}
\hline
Dataset & Method & \multicolumn{1}{c|}{Alexnet} & \multicolumn{1}{c|}{VGG16} & \multicolumn{1}{c|}{Googlenet} & Resnet18 \\ \hline
\multirow{4}{*}{CIFAR10} & Baseline & 15.19 ± 0.07 & 12.41 ± 0.05 & 11.47 ± 0.08 & \multicolumn{1}{l|}{12.08 ± 0.04} \\ \cline{2-6}
& RP\_1 & 13.39 ± 0.05 & 11.34 ± 0.07 & 10.37 ± 0.09 & \multicolumn{1}{l|}{8.21 ± 0.07} \\ \cline{2-6}
& RP\_2 & 12.93 ± 0.12 & 10.54 ± 0.04 & 10.25 ± 0.07 & - \\ \cline{2-6}
& RP\_3 & 12.75 ± 0.07 & 10.61 ± 0.05 & 10.20 ± 0.13 & - \\ \hline
\multirow{4}{*}{CIFAR100} & Baseline & 44.57 ± 0.06 & 46.93 ± 0.04 & 34.81 ± 0.07 & \multicolumn{1}{l|}{36.90 ± 0.09} \\ \cline{2-6}
& RP\_1 & 42.36 ± 0.07 & 42.32 ± 0.06 & 34.50 ± 0.07 & \multicolumn{1}{l|}{31.29 ± 0.08} \\ \cline{2-6}
& RP\_2 & 41.54 ± 0.06 & 40.66 ± 0.05 & 33.12 ± 0.09 & - \\ \cline{2-6}
& RP\_3 & 40.66 ± 0.05 & 39.66 ± 0.04 & 32.94 ± 0.06 & - \\ \hline
\multirow{4}{*}{Fashion-MNIST} & Baseline & 12.74 ± 0.06 & 7.02 ± 0.08 & 6.48 ± 0.09 & \multicolumn{1}{l|}{5.80 ± 0.04} \\ \cline{2-6}
& RP\_1 & 12.74 ± 0.06 & 5.49 ± 0.05 & 5.88 ± 0.07 & \multicolumn{1}{l|}{5.64 ± 0.07} \\ \cline{2-6}
& RP\_2 & 9.59 ± 0.05 & 5.49 ± 0.06 & 5.82 ± 0.05 & - \\ \cline{2-6}
& RP\_3 & 9.40 ± 0.06 & 5.72 ± 0.08 & 5.94 ± 0.07 & - \\ \hline
\end{tabular}
\label{table2}
\end{table*}
\subsection{Comparison and Evaluation}
We first apply Random Padding to the first two padding layers of one model and then conduct experiments to verify and compare the differences in the position information encoded by the two pretrained models. Following the same protocol, we train networks based on traditional VGG16 and on VGG16 with Random Padding for each type of ground-truth, and report the experimental results in Table \ref{table1}. As a reference, we also report the result of training PosENet without feature maps from any pretrained model as input. For this reference, we use not only the original image but also pure black, pure white, and Gaussian noise images as inputs to PosENet, in order to verify whether features contain position information in the absence of semantic information. The structure of PosENet is very simple and can only read out the input features. If the input features contain more position information, the output image approximates the target pattern more closely; if the input features contain no position information, the output feature map resembles random noise and cannot produce a regular pattern, which indicates that the position information is not derived from prior knowledge of the object. Our experiment is not designed to evaluate the performance of the models, but to compare the impact of different padding methods on the position information encoded by the CNN.
Our experiment takes three kinds of features as input: the feature maps extracted by VGG16 with traditional padding, the feature maps extracted by VGG16 with Random Padding, and the natural image without any processing; these are recorded as VGG, VGG\_RP, and PosENet, respectively, in Table \ref{table1}. The original image is used as input because it contains no encoded position information, so this setting reveals PosENet's own ability to extract position information and serves as a point of comparison for the other results. PosENet can easily extract position information from the pretrained VGG models, while it is difficult to extract position information directly from the original image. Only when combined with a deep neural network can this readout network extract position information that is coupled with the ground-truth position map. Prior studies have noted that traditional zero-padding delivers position information for convolutional neural networks to learn.
According to the results in Table \ref{table1}, VGG\_RP yields lower SPC values than VGG and, in almost all cases, higher MAE values; sometimes VGG\_RP is even lower than PosENet in terms of SPC. The qualitative results for CNNs with traditional padding or Random Padding across the different patterns are shown in Figure \ref{fig4}. The first two columns show the input image and the target pattern; the third column is the visualization obtained by feeding the source image directly into PosENet; the fourth and fifth columns are the outputs for VGG and VGG\_RP, respectively. For the VG, G, and VS patterns, the maps predicted from VGG features correlate with the ground-truth position maps, whereas the maps predicted from VGG\_RP features do not, indicating that CNNs with Random Padding learn more about the object itself that needs to be recognized; that is, the Random Padding technique can indeed effectively reduce the CNN's extraction of position information. Therefore, the basic CNN network learns the position information of the object, while both PosENet with only one convolutional layer and the CNN with Random Padding hardly learn position information from the input image.
\section{Evaluation of Random Padding}
This section evaluates the Random Padding operation in CNN for improving the accuracy of image classification.
\subsection{Dataset and Evaluation Metrics}
\noindent\textbf{Dataset} We use three image datasets to train the CNN models: the well-known CIFAR-10 and CIFAR-100 datasets and the grayscale clothing dataset Fashion-MNIST. CIFAR-10 has 10 classes, each with 6000 32×32 color images; the training set has 50000 images and the test set has 10000 images. CIFAR-100 is just like CIFAR-10, except that it has 100 classes containing 600 images each. Fashion-MNIST contains 28×28 grayscale images of 70,000 fashion items divided into ten categories, each with 7,000 images; the training set has 60,000 images, whereas the test set contains 10,000. The image size and data format of Fashion-MNIST are identical to those of the original MNIST.
\noindent\textbf{Evaluation Metrics}
Test error assessment is an essential part of any classification task: the predicted label of each image is compared with its ground-truth label. For image classification, the test error is the ratio of incorrectly recognized images to the total number of images to be recognized.
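Stated formally, with $N_{\text{wrong}}$ incorrectly classified images out of $N_{\text{total}}$ test images (a standard definition, included here only for completeness):
\begin{equation}
\text{Test error} = \frac{N_{\text{wrong}}}{N_{\text{total}}} \times 100\%.
\end{equation}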
\subsection{Experiment Setting}
We use CIFAR-10, CIFAR-100 and Fashion-MNIST to train four CNN architectures: Alexnet, VGG, Googlenet and Resnet. We use the 16-layer network for VGG and the 18-layer network for Resnet. The models with Random Padding on different layers were trained for 200 epochs with a learning rate of $10^{-3}$. In our first experiment, we compare CNN models trained with Random Padding on different layers. For the same deep learning architecture, all models are trained from the same weight initialization and none of the input data is augmented. The second experiment applies the Random Padding operation together with various data augmentations (e.g., flipping, rotation, cropping and erasing) to show that the Random Padding operation is complementary to data augmentation methods.
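For reference, a minimal training configuration consistent with the setting above (200 epochs, learning rate $10^{-3}$, no data augmentation) is sketched below in PyTorch; the optimizer type, batch size, and input preprocessing are not specified in the text and are assumptions here.
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

# CIFAR-10 without any data augmentation (first experiment).
train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

# Baseline VGG16; the Random Padding variants would replace its early padding layers.
model = models.vgg16(num_classes=10)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)   # lr = 10^-3 as stated

for epoch in range(200):                                   # 200 epochs as stated
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
\end{verbatim}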
\subsection{Classification accuracy on different CNNs}
The experiments in Section 4 showed that the Random Padding operation reduces the extraction of position information, but not how much this method improves model performance. We therefore design the comparative experiment shown in Table \ref{table2}, which reports the results of applying the Random Padding operation to different padding layers of different CNN models trained on the CIFAR-10, CIFAR-100 and Fashion-MNIST datasets. For each CNN architecture, we apply the Random Padding operation to the first padding layer, the first two padding layers, and the first three padding layers. For Resnet18, we only replace the traditional padding with Random Padding on the first padding layer, due to the shortcut structure of Resnet. Based on the principle of controlling variables, we train and test the basic CNN architectures on the same dataset. All results are shown in Table \ref{table2}.
For CIFAR-10, our approach obtains a 12.75\% error rate using Alexnet with Random Padding on the first three padding layers. The error rate of VGG16 with Random Padding on two padding layers reaches 10.54\%, improving accuracy by 1.87\%. Replacing the first three traditional paddings with Random Padding lets Googlenet reach its best test error of 10.20\% in our experiments. This method also works well on the Resnet18 architecture, improving accuracy by 3.87\% over the baseline.
For CIFAR-100, all models using Random Padding improve their recognition rates. Alexnet with three Random Padding layers achieves a 40.66\% error rate, increasing accuracy by 3.91\%. Accuracy trends upward as the Random Padding operation is added to more padding layers in VGG16 and Googlenet, whose test errors reach 39.66\% and 32.94\%, respectively. Resnet18 with the Random Padding technique improves accuracy significantly, reaching 68.71\%.
For Fashion-MNIST, although the recognition rates of all models are already very high, adding the Random Padding operation still improves performance. Alexnet, which has a small number of layers, gains 3.34\% in recognition rate from Random Padding on this simple dataset. VGG16 and Googlenet with two Random Padding layers perform better than with three, reaching 5.49\% and 5.82\% test error, respectively. The error rate of Resnet with one Random Padding layer is reduced by only 0.16\%.
In general, as the network deepens, the extracted object features become increasingly abstract and the encoding of position information becomes more implicit. Adding the Random Padding operation in the deeper layers would therefore jeopardize the model's learning of abstract features. From our experimental results, we conclude that applying Random Padding to the first two padding layers consistently improves the performance of various deep learning models.
\begin{table}[]
\caption{Test errors (\%) with different data augmentation methods on CIFAR-10 based on VGG16 with traditional padding and VGG16 with Random Padding. \textbf{Baseline:} Baseline model, \textbf{RC:} Random Cropping, \textbf{RR:} Random Rotation, \textbf{RF:} Random Flipping, \textbf{RE:} Random Erasing.}
\centering
\small
\renewcommand\tabcolsep{9.0pt}
\begin{tabular}{|c|c|c|}
\hline
Augmentation & Random Padding & Test error (\%) \\ \hline
\multirow{2}{*}{Baseline} & - & 12.41 ± 0.08 \\ \cline{2-3}
& \checkmark & 10.54 ± 0.05 \\ \hline
\multirow{2}{*}{RC} & - & 10.54 ± 0.14 \\ \cline{2-3}
& \checkmark & 10.08 ± 0.09 \\ \hline
\multirow{2}{*}{RR} & - & 15.12 ± 0.05 \\ \cline{2-3}
& \checkmark & 9.82 ± 0.06 \\ \hline
\multirow{2}{*}{RF} & - & 10.37 ± 0.12 \\ \cline{2-3}
& \checkmark & 8.69 ± 0.07 \\ \hline
\multirow{2}{*}{RE} & - & 11.03 ± 0.08 \\ \cline{2-3}
& \checkmark & 8.89 ± 0.09 \\ \hline
\multirow{2}{*}{RF+RE} & - & 8.75 ± 0.13 \\ \cline{2-3}
& \checkmark & 7.75 ± 0.04 \\ \hline
\multirow{2}{*}{RC+RE} & - & 8.85 ± 0.07 \\ \cline{2-3}
& \checkmark & 8.34 ± 0.08 \\ \hline
\multirow{2}{*}{RC+RF} & - & 8.74 ± 0.06 \\ \cline{2-3}
& \checkmark & 8.26 ± 0.07 \\ \hline
\multirow{2}{*}{RC+RF+RE} & - & 7.83 ± 0.09 \\ \cline{2-3}
& \checkmark & 7.21 ± 0.05 \\ \hline
\end{tabular}
\label{table3}
\end{table}
\subsection{Classification accuracy with data augmentation}
In this experiment, we use VGG16 as the benchmark model, apply Random Padding on its first two padding layers, and use CIFAR-10 as the test dataset. We choose four types of data augmentation: Random Rotation (RR), Random Cropping (RC), Random Horizontal Flip (RF) and Random Erasing (RE). The test error obtained on CIFAR-10 with the basic VGG16 model is used as the baseline for this task. The effectiveness of our approach is then evaluated by adding various data augmentations and combining them with the Random Padding operation.
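The augmentation pipelines compared in Table \ref{table3} can be assembled from standard torchvision transforms, as in the sketch below; the concrete parameters (rotation degrees, crop padding, erasing probability) are illustrative assumptions, since the text does not specify them.
\begin{verbatim}
from torchvision import transforms

# Individual augmentations from Table 3 (parameters are illustrative).
rc = transforms.RandomCrop(32, padding=4)      # RC: Random Cropping
rr = transforms.RandomRotation(degrees=15)     # RR: Random Rotation
rf = transforms.RandomHorizontalFlip(p=0.5)    # RF: Random Flipping
re = transforms.RandomErasing(p=0.5)           # RE: Random Erasing (tensor-level)

# Combined pipeline RC+RF+RE; RandomErasing must follow ToTensor.
rc_rf_re = transforms.Compose([rc, rf, transforms.ToTensor(), re])
\end{verbatim}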
As shown in Table \ref{table3}, the Random Rotation augmentation is not well suited to CIFAR-10 on VGG16: using it makes the model's accuracy lower than the baseline. After adding the Random Padding operation, however, the model's recognition rate on the CIFAR-10 test set exceeds the baseline, which indicates that the Random Padding operation helps the model learn features better. A model that combines a single data augmentation with the Random Padding technique has a stronger learning ability than a model that uses only data augmentation; the Random Padding operation is therefore complementary to data augmentation methods. In particular, combining all of these methods achieves a 7.21\% error rate, a 5.20\% improvement over the baseline.
\section{Conclusions}
In this paper, by investigating how convolutional neural networks (CNNs) learn spatial information, we propose a new padding approach named ``Random Padding'' for training CNNs. The Random Padding operation reduces the extraction of positional information from features and helps the model better understand the relationships between features. Experiments conducted on CIFAR-10, CIFAR-100 and Fashion-MNIST with various data augmentation methods validate the effectiveness of our method in improving the performance of many CNN models. In future work, we will apply our approach to large-scale datasets and to other CNN recognition tasks, such as object detection and face recognition.
\bibliographystyle{splncs04}
\section{Introduction}
\label{sec:intro}
\input{sections/introduction.tex}
\section{Background}
\label{sec:background}
\input{sections/background}
\vspace{-0.1cm}
\section{CarbonScaler Design}
\label{sec:carbon-scale-algo}
\input{sections/scaling}
\vspace{-0.2cm}
\section{CarbonScaler Implementation}
\label{sec:carbon-scaler-system}
\input{sections/implementation}
\vspace{-0.2cm}
\section{Experimental Evaluation}
\label{sec:evaluation}
\input{sections/evaluation.tex}
\vspace{-0.2cm}
\section{Related Work}
\label{sec:relatedwork}
\input{sections/related_work}
\section{Conclusion}
\input{sections/conclusion}
\label{sec:conclusion}
\bibliographystyle{ACM-Reference-Format}
\subsection{Sustainable Data Centers}
\vspace{-0.05cm}
In addition to their long-standing emphasis on improving energy efficiency through PUE\footnote{Power Usage Effectiveness (PUE) is the ratio of the total amount of energy used by a data center to the energy delivered to computing.} reductions, cloud data centers have recently begun to focus on reducing their carbon footprint~\cite{Acun2022AHA, Radovanovic2021CarbonAwareCF}.
This can be achieved by reducing operational greenhouse gas (GHG) emissions measured in gCO2eq/kWh, also known as Scope 2 emissions \cite{ghg}, resulting from electricity use, as well as by reducing embodied carbon---Scope 3 emissions---that arise during the manufacturing of data center hardware (e.g., servers).
Our work focuses on reducing Scope 2 emissions. Cloud platforms have negligible direct (Scope 1) emissions, and optimizing the embodied carbon of computing workloads is beyond the scope of this paper.
\subsection{Carbon Intensity of Electricity}
\label{sec:carbon_intensity}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{fig/tracesample_new.png}
\vspace{-0.5cm}
\caption{\emph{The grid's carbon intensity, shown over a 10-day period, varies spatially and temporally.}}
\vspace{-0.5cm}
\label{fig:carbonTrace}
\end{figure}
To reduce their Scope 2 emissions, cloud data centers need to track the carbon cost of the electricity supply and modulate their electricity consumption over time to reduce emissions.
The carbon cost of electricity depends directly on the source of generation.
For example, a unit of energy generated by a coal plant will have a high carbon cost (i.e., emissions in terms of gCO2eq/kWh), while that generated from a hydroelectric plant will have no emissions.
The electricity supplied by the grid is produced by a mix of generation sources and the resulting carbon cost is a weighted average of the corresponding sources.
Importantly, the mix varies from one region to another---based on the local power plants in each region---and also varies over time since the generation mix changes based on demand, relative cost of generation, and intermittent generation from renewable sources.
Figure~\ref{fig:carbonTrace} depicts how the carbon cost differs by country/region and also how it exhibits diurnal variations over the course of each day.
In this case, Ontario tends to have a low but variable carbon cost because its energy mix consists of a large fraction of carbon-free nuclear and hydroelectric energy combined with some coal plants, which results in non-zero carbon intensity, and solar, which causes the diurnal fluctuations.
California is similar, but has a higher fraction of solar, which results in larger fluctuations, but also a higher fraction of coal plants, which elevates the average carbon intensity.
By contrast, the carbon intensity of electricity in Iceland is nearly zero and flat due to its unique abundance of carbon-free geothermal energy.
\vspace{-0.2cm}
\subsection{Carbon-aware Cloud Scheduling}
\label{sec:cloud-scheduling}
\vspace{-0.05cm}
Many cloud workloads, due to their temporal flexibility and resource elasticity, are capable of exploiting the temporal and spatial variations in energy's carbon intensity.
To facilitate such efforts, commercial services, such as electricityMap~\cite{electricity-map} and WattTime~\cite{watttime}, have emerged that aggregate data from grids in different parts of the world and expose carbon intensity data and forecasts to cloud providers and users.
Researchers, in turn, are exploiting this data to design carbon-aware schedulers that dynamically shift workloads across time and space to reduce emissions.
As noted earlier, temporal shifting involves moving delay-tolerant batch workloads to periods of low carbon intensity.
In Figure \ref{fig:carbonTrace}, for instance, rather than running a batch job continuously in a \emph{carbon-agnostic} manner, \emph{suspend-resume} approaches can execute the job in the ``valleys'', where the carbon cost is low, and suspend the job during peak periods.
This technique has been explored in recent efforts~\cite{Radovanovic2021CarbonAwareCF, Wiesner2021LetsWA, cloudcarbon}.
Threshold-based suspend-resume approaches suspend jobs whenever the carbon cost rises above a certain threshold, while deadline-based methods choose the $n$ lowest carbon cost periods between the arrival time and the deadline to execute the job.
A drawback of \emph{suspend-resume} methods, whether threshold-based or deadline-based, is that the savings depend on the amount of time the user is willing to wait for their job to complete---higher delay tolerance yields higher savings but also longer completion times.
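To make the two suspend-resume variants concrete, the sketch below selects execution slots from an hourly carbon-intensity forecast. This is an illustrative re-implementation of the idea, not code from the cited works.
\begin{verbatim}
def threshold_slots(carbon, threshold):
    """Threshold-based: run whenever the carbon intensity is below the threshold."""
    return [t for t, c in enumerate(carbon) if c <= threshold]

def deadline_slots(carbon, job_hours, deadline_hours):
    """Deadline-based: pick the job_hours lowest-carbon slots before the deadline."""
    window = list(enumerate(carbon[:deadline_hours]))
    window.sort(key=lambda tc: tc[1])             # cheapest-carbon slots first
    return sorted(t for t, _ in window[:job_hours])

# Example: a 6-hour job with a 12-hour deadline on a toy hourly trace (gCO2eq/kWh).
trace = [320, 280, 150, 120, 110, 130, 220, 300, 310, 180, 140, 125]
print(deadline_slots(trace, job_hours=6, deadline_hours=12))   # -> [2, 3, 4, 5, 10, 11]
\end{verbatim}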
Geographic or spatial shifting, in contrast, migrates jobs or workloads to regions with the greenest electricity grid~\cite{Moghaddam2014CarbonawareDC, Zheng2020MitigatingCA, Zhou2013CarbonAwareLB, cloudcarbon}.
However, batch computation may be unable to exploit geographic shifting due to data privacy (e.g., GDPR) regulations that impose region restrictions.
Even when possible, spatial shifting methods can incur high migration costs if substantial state or data needs to be moved with the job.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{fig/speedup_profiles.png}
\vspace{-0.4cm}
\caption{\emph{Scaling characteristics of common MPI jobs and machine learning training frameworks.}}
\vspace{-0.5cm}
\label{fig:workloadScale}
\end{figure}
\vspace{-0.2cm}
\subsection{Cloud Autoscaling}
\vspace{-0.05cm}
Cloud workloads fall into two broad classes: interactive and batch.
Due to their latency-sensitive nature, interactive workloads are not amenable to temporal shifting optimizations.
Hence, our work primarily focuses on batch workloads, specifically clustered batch workloads that run on a cluster of cloud servers.
Many popular cloud workloads, such as MPI-style parallel batch jobs and machine learning training frameworks (e.g., PyTorch) fall into this category.
Our work is applicable to any batch job that is amenable to cloud autoscaling, where
the job is able to handle dynamic changes to the number of servers allocated to it.
Typically, cloud autoscalers provide horizontal scaling to meet a job's service level objectives (SLOs)~\cite{aws-autoscaling}, while our carbon scaler exploits this resource flexibility to optimize the job's carbon emissions.
Cloud autoscalers are used for interactive web services to handle sudden workload spikes, rather than for batch jobs.
However, clustered batch computations such as scientific MPI applications and the Horovod elastic framework for PyTorch~\cite{sergeev2018horovod} have recently gained horizontal scaling capabilities, making them amenable to carbon scaling.
\vspace{-0.2cm}
\subsection{Batch Workload Resource Elasticity}
\vspace{-0.05cm}
Many modern clustered batch applications have elastic scaling capabilities, but their scaling behavior can significantly vary.
Figure~\ref{fig:workloadScale} depicts the scaling behavior of four deep learning training jobs, all of which use Horovod and PyTorch for elastic scaling, and two MPI tasks that perform scientific computations.
As can be seen, ResNet18 training and the larger $N$-body MPI computation show a linear increase in throughput as the number of servers increases, indicating linear scaling behavior.
In contrast, the VGG16 training task exhibits a sub-linear increase in throughput due to scaling bottlenecks.
Finally, ResNet50 training and the smaller $N$-body MPI computation exhibit a diminishing growth in throughput with increased server allocation.
These differences in scaling behavior should be considered by a carbon scaling approach when optimizing for carbon savings.
\subsection{Experimental Setup}
\vspace{-0.05cm}
\noindent\textbf{Workload.}
Table~\ref{tab:workloads} describes the elastic workloads used for evaluating \texttt{CarbonScaler}\xspace and their specifications.
The workloads span both CPU- and GPU-intensive applications, such as the $N$-body problem~\cite{nbodysimulation} implemented using MPI~\cite{mpi} and machine learning models including ResNet~\cite{resnet}, EfficientNet~\cite{efficient-net}, and VGG~\cite{efficient-net}, implemented using PyTorch~\cite{pytorch}.
The table shows the base configurations and power measurements for jobs that need 24hrs to finish.
The chosen workloads have a wide-range of scaling characteristics (shown in Figure~\ref{fig:workloadScale}), configurations, and energy requirements.
\noindent\textbf{Infrastructure.}
We deployed \texttt{CarbonScaler}\xspace in two different settings to demonstrate its adaptability to the underlying infrastructure.
For CPU-intensive workloads, we used a local computing cluster consisting of 7 servers, each equipped with a 16-core Xeon CPU E5-2620, connected through a 10G network.
For GPU-intensive workloads, we deployed \texttt{CarbonScaler}\xspace on Amazon Web Services (AWS) using 8 \texttt{p2.xlarge} instances each equipped with NVIDIA K80 GPU.
\begin{table}[t]
\resizebox{0.9\columnwidth}{!}{%
\begin{tabular}{||c|c|c|c|c||}
\hline \hline
\textbf{Name} & \textbf{Implementation} & \textbf{Epochs} & \textbf{BatchSize} & \textbf{Power (W)} \\ \hline
$N$-Body Simulation (10,000) & MPI & 138000 & NA & CPU~(60) \\ \hline
$N$-Body Simulation (100,000) & MPI & 1500 & NA & CPU~(60) \\ \hline
Resnet18 (Tiny ImageNet) & Pytorch & 173 & 256 & CPU+GPU~(210) \\ \hline
EfficientNetB1 (ImageNet) & Pytorch & 45 & 96 & CPU+GPU~(210) \\ \hline
VGG16 (ImageNet) & Pytorch & 31 & 96 & CPU+GPU~(210) \\ \hline \hline
\end{tabular}%
}
\caption{\emph{Key details of elastic workloads in evaluation. Epochs is the number of epochs needed for a 24hr job.}}
\vspace{-0.2cm}
\label{tab:workloads}
\end{table}
\begin{figure}[t]
\centering
\vspace{-0.5cm}
\includegraphics[width=0.96\linewidth]{fig/region_carbon.png}
\vspace{-0.3cm}
\caption{\emph{Most cloud regions globally have high carbon cost but also show significant daily variations, providing an opportunity for CarbonScaler to optimize carbon emissions.}}
\vspace{-0.5cm}
\label{fig:intensity_variability}
\end{figure}
\noindent\textbf{Carbon Traces.}
We collected carbon traces for different geographical locations using electricityMap~\cite{electricity-map}, an online service that provides real-time and archival carbon intensity information.
We use average carbon intensity values, measured in grams of carbon dioxide equivalent per kilowatt-hour (gCO2eq/kWh), provided at hourly granularity.
The collected carbon traces span January 2020 to September 2022; we specify the duration of each trace where it is used.
To choose representative regions for our evaluation, we analyzed the average carbon intensity and the coefficient of variation (computed as standard deviation over mean) for different Amazon AWS cloud availability regions.
Figure~\ref{fig:intensity_variability} shows the results for 37 different regions.
As shown, most regions have high carbon intensity but also show high daily variations, while some have low carbon intensity with similarly high daily variations.
Since \texttt{suspend-resume}\xspace and \texttt{CarbonScaler}\xspace rely on these high variations to reduce emissions, the figure indicates that both techniques will be effective in the majority of low-carbon as well as high-carbon cloud regions.
A few cloud regions have stable carbon costs (i.e., low variations), including low carbon regions such as Iceland and Sweden, and high carbon regions such as India and Singapore.
The effectiveness of \texttt{suspend-resume}\xspace and \texttt{CarbonScaler}\xspace is diminished in such cloud regions. Still, such regions are a small minority of the total cloud regions in a global cloud platform such as AWS.
Based on this analysis, we choose
Netherlands (\worldflag[width=2mm]{NL}) as a representative high carbon region and Ontario, Canada (\worldflag[width=2mm]{CA}) as an example of a low carbon region for our subsequent experiments.
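As a simple sketch of how such regions can be characterized from a carbon trace, the average intensity and coefficient of variation used above can be computed as follows (assuming an hourly trace in gCO2eq/kWh):
\begin{verbatim}
import statistics

def trace_stats(hourly_carbon):
    """Return (mean, coefficient of variation) for an hourly carbon trace."""
    mean = statistics.mean(hourly_carbon)
    cv = statistics.pstdev(hourly_carbon) / mean   # standard deviation over mean
    return mean, cv

# Toy examples: a flat trace has CV near 0; a variable one has a large CV.
print(trace_stats([20, 21, 19, 20]))        # low-carbon and stable
print(trace_stats([300, 120, 450, 180]))    # high-carbon and variable
\end{verbatim}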
\noindent{\bf Baselines Policies.}
We evaluate the performance of \texttt{CarbonScaler}\xspace against three baseline policies: \texttt{carbon-agnostic}\xspace, \texttt{suspend-resume}\xspace, and \texttt{static-scale}\xspace.
The \texttt{carbon-agnostic}\xspace policy simply runs a job without considering carbon emissions and represents the status quo.
The \texttt{suspend-resume}\xspace policy is inspired by prior work~\cite{Wiesner2021LetsWA, cloudcarbon}.
As mentioned in \S\ref{sec:cloud-scheduling}, \texttt{suspend-resume}\xspace can be implemented in two ways: threshold-based, which uses a carbon threshold to suspend and resume a job in a deadline-unaware manner, and deadline-based, which chooses the $k$ lowest-carbon periods before the specified deadline for execution. In either case, \texttt{suspend-resume}\xspace defaults to the \texttt{carbon-agnostic}\xspace policy when the completion time equals the job length ($T=l$), i.e., when there is no slack, since execution cannot be deferred.
This policy acts as a baseline for temporal shifting scenarios where we assume a job has a completion time higher than job length ($T>l$).
Finally, \texttt{static-scale}\xspace is another policy inspired by prior work~\cite{ecovisor}, where an application picks the lowest carbon intensity points and runs with a certain \emph{static} scale factor to better utilize the carbon intensity variations.
This is our default baseline for scenarios where we evaluate \texttt{CarbonScaler}\xspace for its ability to leverage workload elasticity and scaling.
Unless stated otherwise, we report the mean across 7 runs for our cloud experiments and across 100 runs for \texttt{Carbon Advisor}\xspace's simulated executions.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{fig/sample_saving_real_CA-ON_0.png}
\vspace{-0.4cm}
\caption{\emph{CarbonScaler in action for a 48hrs long $N$-body MPI job ($N$=100k), where $T=2\times l$.}}
\vspace{-0.7cm}
\label{fig:illustrative_example}
\end{figure}
\vspace{-0.2cm}
\subsection{CarbonScaler in Action}
\vspace{-0.1cm}
To show \texttt{CarbonScaler}\xspace in action, we ran a 48hr $N$-body MPI job on our CPU cluster and compared its execution to the threshold-based \texttt{suspend-resume}\xspace (deadline-unaware) and \texttt{carbon-agnostic}\xspace policies.
As shown in Figure~\ref{fig:illustrative_example},
the \texttt{carbon-agnostic}\xspace policy starts the job as soon as it arrives and finishes in 48hrs at the cost of 184g of $CO_2$ emissions.
The \texttt{suspend-resume}\xspace policy \emph{suspends} the job during high carbon intensity periods and waits for the carbon emissions to fall below a threshold (25$^{th}$ percentile in this case) to \emph{resume} the job.
By leveraging temporal flexibility, \texttt{suspend-resume}\xspace saved 45\% carbon compared to the \texttt{carbon-agnostic}\xspace policy, but increased the job completion time by 4$\times$.
Finally, we set the desired completion time $T$ to be 96hrs, i.e. $T=2\times l$, and execute our proposed \texttt{CarbonScaler}\xspace policy. \texttt{CarbonScaler}\xspace scales the number of servers depending on the application's scaling properties and the carbon cost at a given time.
As a result, \texttt{CarbonScaler}\xspace achieves 42\% carbon saving over \texttt{carbon-agnostic}\xspace policy.
\texttt{CarbonScaler}\xspace achieves savings comparable to \texttt{suspend-resume}\xspace while reducing the job completion time to 2$\times$ that of the \texttt{carbon-agnostic}\xspace policy.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{fig/scalability_effect_real.png}
\vspace{-0.5cm}
\caption{\emph{Carbon footprint (saving \% on-top) of different workloads scheduled under carbon-agnostic, static-scale (2$\times$), and CarbonScaler, in Ontario, Canada, where $T=l$ (i.e., no slack and on-time completion).}}
\vspace{-0.6cm}
\label{fig:scalabilty-effect}
\end{figure}
\vspace{-0.2cm}
\subsection{Impact of Workload Elasticity}
\vspace{-0.1cm}
The two key aspects that impact carbon savings from \texttt{CarbonScaler}\xspace are the temporal flexibility and workload elasticity.
While prior work necessitates temporal flexibility for carbon savings, \texttt{CarbonScaler}\xspace can achieve significant savings by leveraging workload elasticity even when no temporal flexibility is available.
The extent of savings depends on the scalability properties of the workload: a highly scalable job (with a flat or nearly flat marginal capacity curve) can achieve higher savings, as illustrated for the simple workload in Figure~\ref{fig:algoExample}.
To demonstrate the effect of elasticity, we limit the job completion time to the job length, i.e., $T=l$, which means no temporal flexibility is available.
We run 24hrs long jobs for various applications in Table~\ref{tab:workloads} using \texttt{carbon-agnostic}\xspace policy, \texttt{static-scale}\xspace (2$\times$), and \texttt{CarbonScaler}\xspace.
Figure~\ref{fig:scalabilty-effect} shows the performance of the three policies for different workloads.
The \texttt{static-scale}\xspace (2$\times$) achieves up to 28\% savings over \texttt{carbon-agnostic}\xspace for highly scalable workloads ($N$-body ($N$=100k) and ML (ResNet18)).
However, since it does not consider the scalability properties of the job, it can instead \emph{increase} the carbon consumption for some workloads by as much as 26\% by scaling the job beyond a single \emph{optimal} scale factor.
On the other hand, \texttt{CarbonScaler}\xspace is cognizant of scaling behavior and scales the job only in slots that have the highest work done per unit carbon cost.
\texttt{CarbonScaler}\xspace yields up to 47\% savings in the best case (for the $N$-body task with good scaling behavior) and 2\% savings in the worst case (for VGG training, which has poor scaling behavior).
\begin{figure*}[t]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.3\textwidth]{fig/scalefactor_effect_CA-ON_nbody100000.png} &
\includegraphics[width=0.3\textwidth]{fig/scalefactor_effect_CA-ON_tinyimagenet-resnet18-weak.png} &
\includegraphics[width=0.3\textwidth]{fig/scalefactor_effect_CA-ON_nbody10000.png} \vspace{-0.1cm} \\%&
(a) $N$-body ($N$=100k) &(b) ML (ResNet18) & (c) $N$-body ($N$=10k) \\
\end{tabular}
\caption{\emph{Normalized carbon w.r.t. CarbonScaler (CS) for static-scale (SS) policy with different scale factors. The carbon-agnostic policy (CA) is shown as reference.}}
\label{fig:scale_factor_ca_on}
\end{figure*}
In Figure~\ref{fig:scalabilty-effect}, we set the scale factor for the \texttt{static-scale}\xspace as 2$\times$ for initial analysis.
To further explore the effect of the scale factor, we use our \texttt{Carbon Advisor}\xspace to compare \texttt{CarbonScaler}\xspace to policies with static scale factors of 2$\times$, 4$\times$, and 6$\times$ for three of our workloads, spanning almost linear and non-linear scalability, as shown in Figure~\ref{fig:scale_factor_ca_on}.
No single scale factor works well for all workloads and the extent of savings depends on the scalability properties of the workload.
For jobs with almost linear scalability, $N$-body ($N$=100k) and ML (ResNet 18), a high static scale factor can be chosen to obtain savings similar to \texttt{CarbonScaler}\xspace.
For a job with non-linear scalability, $N$-body ($N$=10k), it is important to pick different scale factors for different slots depending on the carbon cost in that slot. The scalability-aware nature of \texttt{CarbonScaler}\xspace means that it always picks an appropriate scale factor for the job.
\noindent\emph{\textbf{Key Takeaway.} CarbonScaler better leverages the workload elasticity by choosing a dynamic scale factor depending on the marginal capacity curve of the job normalized to carbon cost.}
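The full algorithm appears in \S\ref{sec:algorithm}; as a rough, simplified sketch of the behaviour described above, the snippet below greedily assigns server-hours to the time slots that currently offer the most additional work per unit of carbon, given a marginal capacity curve. It is our illustrative paraphrase, not the actual implementation.
\begin{verbatim}
import heapq

def greedy_schedule(carbon, marginal_work, total_work, max_servers):
    """Greedily add server-hours where marginal work per unit carbon is highest.

    carbon[t]        -- forecast carbon intensity of slot t
    marginal_work[k] -- extra work contributed by the (k+1)-th server in one slot
    total_work       -- work units required to finish the job
    max_servers      -- maximum servers per slot (M)
    """
    alloc = [0] * len(carbon)                  # servers allocated to each slot
    done = 0.0
    # max-heap keyed on marginal work per unit carbon of the next server in each slot
    heap = [(-marginal_work[0] / carbon[t], t) for t in range(len(carbon))]
    heapq.heapify(heap)
    while done < total_work and heap:
        _, t = heapq.heappop(heap)
        done += marginal_work[alloc[t]]
        alloc[t] += 1
        if alloc[t] < max_servers:             # re-insert slot with its next marginal gain
            heapq.heappush(heap, (-marginal_work[alloc[t]] / carbon[t], t))
    return alloc

# Toy example: diminishing marginal capacity over a 6-hour window.
print(greedy_schedule(carbon=[300, 120, 450, 180, 110, 260],
                      marginal_work=[1.0, 0.9, 0.7, 0.4],
                      total_work=4.0, max_servers=4))   # -> [0, 2, 0, 0, 3, 0]
\end{verbatim}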
\subsection{Impact of Temporal Flexibility}
\vspace{-0.05cm}
In the last section, we focused on the savings achieved by utilizing workload elasticity alone.
However, temporal flexibility is also an important source of carbon savings for delay-tolerant jobs.
Therefore, we evaluate the impact of using temporal flexibility by running workloads from Table~\ref{tab:workloads} using \texttt{carbon-agnostic}\xspace policy, \texttt{suspend-resume}\xspace policy, and \texttt{CarbonScaler}\xspace with extended completion times where $T>l$.
To ensure that the \texttt{suspend-resume}\xspace respects the job-specified completion time, we use the deadline-aware version of the \texttt{suspend-resume}\xspace policy~\cite{Wiesner2021LetsWA}.
Figure~\ref{fig:time_shifting_apps_locations} shows the carbon consumption when \emph{running} the workloads with a job length $l$ of 24hrs and a completion time $T$ of 36hrs, i.e., $T=1.5\times l$, across two locations.
\texttt{CarbonScaler}\xspace is better at exploiting the temporal flexibility and outperforms the \texttt{suspend-resume}\xspace policy for most workloads.
The extent of savings depends on the scalability characteristics of the workload: highly scalable workloads yield 29-46\% savings.
On the other hand, for less scalable workloads such as ML (VGG16), most of \texttt{CarbonScaler}\xspace's savings come from time shifting, and its savings are hence comparable to those of \texttt{suspend-resume}\xspace.
It is also worth noting that \texttt{CarbonScaler}\xspace consistently achieves higher savings in both regions despite a significant difference in their average carbon cost.
The extent of savings depends on the variability in the carbon trace for a given region and we evaluate its effect on carbon savings in Section~\ref{sec:trace-character}.
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.48\textwidth]{fig/time_shif_final_ca-on.png} \vspace{-0.15cm} \\
(a) Ontario, Canada \\
\includegraphics[width=0.48\textwidth]{fig/time_shif_final_nl.png}\vspace{-0.15cm} \\
(b) Netherlands \\
\end{tabular}
\vspace{-0.4cm}
\caption{\emph{Carbon footprint (saving \% on-top) of different workloads scheduled under carbon-agnostic, suspend-resume, and CarbonScaler policies, where $T=1.5\times l$.}}
\vspace{-0.4cm}
\label{fig:time_shifting_apps_locations}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{fig/real_resnet18_deadline.png}
\vspace{-0.45cm}
\caption{\emph{Effect of completion time on the carbon footprint of a 12hrs long ResNet18 job in Ontario, Canada.}}
\vspace{-0.5cm}
\label{fig:job_deadline}
\end{figure}
\noindent{\bf Effect of Completion Time.}
Prior work has demonstrated that temporal flexibility can yield significant savings.
Figure~\ref{fig:job_deadline} evaluates the gain in carbon savings with increasing temporal flexibility (higher desired completion time $T$).
We run a 12hrs ML training job (ResNet18) and configure it to complete in 12hrs ($T=l$) up to 36hrs ($T=3\times l$).
For higher completion times, more low carbon slots become available, which allows \texttt{CarbonScaler}\xspace and \texttt{suspend-resume}\xspace to reduce the carbon consumption by 30-45\% and 0-32\%, respectively.
Note that when $T=l$, \texttt{suspend-resume}\xspace cannot suspend the job since it always runs at a 1$\times$ scale factor, and is therefore equivalent to the \texttt{carbon-agnostic}\xspace policy.
\texttt{CarbonScaler}\xspace achieves higher savings by using a higher scale factor during the lowest carbon slots and only picks a higher carbon slot if it gives a better marginal work done per unit carbon.
For very high completion times, the savings of \texttt{CarbonScaler}\xspace over \texttt{suspend-resume}\xspace diminish, since it begins to prefer job suspensions over high scale factors to avoid the impact of non-linear scaling behaviour.
\noindent{\bf Effect of Job Length.}
The length of a job is another key factor in determining the carbon savings.
As the job length increases, more low-carbon slots become available, since the grid's carbon intensity generally has a diurnal pattern.
To evaluate the impact of job length, we varied the job length from 6 hours to 96 hours and used our \texttt{Carbon Advisor}\xspace to analyze the estimated carbon savings.
Figure~\ref{fig:job_length_n_body} shows the carbon savings of different policies, against a \texttt{carbon-agnostic}\xspace baseline, for the $N$-body($N$=100k) MPI task when $T = 1.5 \times l$.
The figure shows that \texttt{CarbonScaler}\xspace outperforms \texttt{suspend-resume}\xspace and \texttt{carbon-agnostic}\xspace over a range of job lengths. The carbon savings increase with job length
since there are more low-carbon time slots to choose from, providing opportunities for greater savings. Overall, \texttt{CarbonScaler}\xspace achieves 30\% more savings than \texttt{suspend-resume}\xspace for long running batch jobs.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{fig/time_shift_task_length_nbody100000CA-ON_1.5.png}
\vspace{-0.45cm}
\caption{\emph{Effect of job length on CO$_2$ savings for an $N$-Body ($N$=100k) job in Ontario, Canada, where $T=1.5\times l$.}}
\vspace{-0.5cm}
\label{fig:job_length_n_body}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{fig/large_scale_1.5_CA-ON_nbody100000.png}
\vspace{-0.4cm}
\caption{\emph{Carbon consumption (\% saving on-top) of 24-hour $N$-body ($N$=100k) MPI job with different cluster sizes in Ontario, Canada, where $T=1.5\times l$.}}
\vspace{-0.5cm}
\label{fig:job_largescale}
\end{figure}
\noindent{\bf Effect of Cluster Size.}
Our experiments thus far have used a lower bound of 1 server ($m=1$) and an upper bound of 8 servers ($M=8$) for the workloads due to cluster size and cloud cost constraints.
However, larger batch jobs execute on larger clusters, with larger $m$ and $M$.
For example, certain HPC and ML training applications run on tens or even hundreds of servers in the cloud~\cite{hpcML, bert}.
To evaluate the efficacy of \texttt{CarbonScaler}\xspace for large-scale clusters, we extrapolate the marginal capacity curve of the $N$-body ($N$=100k) job. We then use \texttt{Carbon Advisor}\xspace to estimate how carbon savings change when running progressively bigger jobs on increasing cluster sizes while keeping the job length unchanged at 24hrs.
Figure~\ref{fig:job_largescale} shows that \texttt{CarbonScaler}\xspace can outperform both baseline policies over a range of cluster sizes and obtain 20 to 42\% additional savings. However, the savings over \texttt{suspend-resume}\xspace reduce as the cluster size increases, since the marginal capacity curve shows diminishing gains for larger cluster sizes, which in turn reduces the savings by \texttt{CarbonScaler}\xspace.
Interestingly, \texttt{suspend-resume}\xspace achieves the same savings of 17\% over \texttt{carbon-agnostic}\xspace policy across all cluster sizes.
Since the job length of 24hrs stays the same while running on bigger cluster sizes, \texttt{suspend-resume}\xspace suspends all jobs in the same high carbon periods regardless of the cluster size and obtains the same savings in all cases.
\vspace{0.05cm}
\noindent \emph{\textbf{Key Takeaway.} CarbonScaler can exploit temporal flexibility to outperform suspend-resume policy across regions with widely different average carbon costs and over different job lengths, completion times, and cluster sizes.}
\subsection{Monetary Cost Overhead}
\label{sec:monetary-overhead}
As discussed in Section~\ref{sec:algorithm}, \texttt{CarbonScaler}\xspace can potentially incur extra cloud costs for the workloads with diminishing marginal capacity curves.
Figure~\ref{fig:added_cost} shows the monetary cost overhead in two of the aforementioned scenarios: the added cost when evaluating \texttt{CarbonScaler}\xspace across workloads (Figure~\ref{fig:scalabilty-effect}) and when evaluating the effect of extended completion times (Figure~\ref{fig:job_deadline}). We quantify the cost overhead as the additional compute-hours needed compared to the \texttt{carbon-agnostic}\xspace policy. As shown in Figure~\ref{fig:added_cost}(a), highly scalable workloads, such as $N$-body ($N$=100k) and ML (ResNet18), cost only 5-10\% more than the \texttt{carbon-agnostic}\xspace policy. However, when optimizing only for carbon savings, the less scalable workloads incur higher costs. Figure~\ref{fig:added_cost}(b) also highlights an important aspect of extending the completion time: since better slots are more often available, tasks are more often scaled to a larger number of servers, yielding higher cost. Nonetheless, in all cases the cost overhead did not exceed 18\%.
\noindent \emph{\textbf{Key Takeaway.}
The monetary cost overhead of CarbonScaler is small; for less scalable tasks, both carbon and monetary costs should constrain how aggressively they are scaled.
}
\begin{figure}[t]
\centering
\begin{tabular}{c}
\includegraphics[width=0.48\textwidth]{fig/all_models_0slack_added_cost.png} \\
(a) Workloads overhead, $T=l$ (Fig.~\ref{fig:scalabilty-effect}) \\
\includegraphics[width=0.48\textwidth]{fig/resnet18_slack_added_cost.png}\\
(b) ResNet18 overhead, $\mathord{T=[1-3]\times}l$ (Fig.~\ref{fig:job_deadline}) \\
\end{tabular}
\vspace{-0.4cm}
\caption{\emph{Monetary cost overhead of CarbonScaler over carbon-agnostic execution for different scenarios.}}
\vspace{-0.4cm}
\label{fig:added_cost}
\end{figure}
\subsection{Impact of Carbon Cost Dynamics}
\label{sec:trace-character}
Since achievable carbon savings depend on the temporal characteristics of the carbon costs within a cloud region, which significantly vary across regions, we next evaluate the impact of regions and carbon intensity variability on carbon savings.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{fig/traces_tinyimagenet-resnet18-weak-byused_carbon.png}
\vspace{-0.45cm}
\caption{\emph{Carbon consumption and carbon savings (\%), for an ML (ResNet18) job, where $T=l$, across multiple geographical regions. The average carbon cost increases from left to right.}}
\label{fig:trace_resnet18}
\end{figure}
\vspace{0.1cm}
\noindent{\bf Carbon Savings Across Cloud Regions.}
To assess the effect of regions on carbon savings, we use \texttt{Carbon Advisor}\xspace to compute carbon savings achieved by a 24hrs long ML (ResNet18) job, with $T=l$, across 16 different AWS cloud regions.
Figure~\ref{fig:trace_resnet18} provides several insights about the average relative and absolute carbon savings compared to the \texttt{carbon-agnostic}\xspace policy.
First, the figure shows that the carbon emissions of the same job can vary by an {\em order of magnitude} depending on which cloud region is used to execute it.
Second, \texttt{CarbonScaler}\xspace is able to achieve significant carbon savings (in \%) for most regions, with a median and average savings of 16\% and 19\%, respectively.
So long as the carbon costs exhibit diurnal variations, \texttt{CarbonScaler}\xspace can reduce the job's emissions over the \texttt{carbon-agnostic}\xspace policy regardless of whether it runs in a low or high carbon region.
Finally, Figure~\ref{fig:trace_resnet18} shows that India's (\worldflag[width=2mm]{IN}) region is an exception: while it has high absolute carbon costs, its low hourly variations prevent \texttt{CarbonScaler}\xspace from generating much savings.
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.24\textwidth]{fig/scatter_tinyimagenet-resnet18-weak_CA-ON.png}&
\includegraphics[width=0.24\textwidth]{fig/cdf_tinyimagenet-resnet18-weak.png}\vspace{-0.15cm} \\
(a) Savings vs. Variation &
(b) Savings Distribution \\
\end{tabular}
\vspace{-0.4cm}
\caption{\emph{Effect of Variation (a) and location (b) on \emph{realized} savings for an ML (ResNet18) job with $T=l=24$hrs.}}
\vspace{-0.5cm}
\label{fig:achivable_saving}
\end{figure}
\vspace{0.05cm}
\noindent{\bf Effect of Variability.}
As noted earlier, regions with variable carbon cost tend to generate higher carbon savings.
This is because the high variations in such regions provide more low carbon periods to exploit for carbon reductions.
We use the coefficient of variation, i.e., the standard deviation divided by the mean, as a metric to quantify the variability of a region.
Figure~\ref{fig:achivable_saving}(a) shows the carbon savings, for each starting point of the year, for a 24hrs ML (ResNet18) job with no excess time for Ontario, Canada using \texttt{Carbon Advisor}\xspace.
The carbon savings are highly correlated with the coefficient of variation, with a Pearson coefficient of 0.82.
However, even a highly variable location like Ontario has a small fraction of hours when savings are less than 20\%, a fraction that will vary depending on the region.
Figure~\ref{fig:achivable_saving}(b) presents the distribution of carbon savings and compares regions with different average coefficients of variation.
Note that the curves on the right are better as they lead to high carbon savings most of the time.
The regions represented by the curves are also strictly ordered by their coefficient of variation, which means that the coefficient of variation can be used to rank regions by their carbon-saving potential when their mean carbon costs are comparable.
\vspace{0.05cm}
\noindent \emph{\textbf{Key Takeaway.} CarbonScaler achieves carbon savings for most cloud regions regardless of their absolute carbon cost. In addition, higher diurnal variations in carbon cost translate to greater savings.}
\subsection{Robustness to Errors}
\label{sec:assumptions}
In the prior experiments, we assumed that the carbon forecasts are perfect and that applications are profiled in an environment similar to the one they eventually run in, yielding highly accurate marginal capacity curves.
In practice, however, these assumptions may not always hold, and we evaluate the impact of deviations from the ground truth for both factors.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{fig/error_sample_nbody100000-xlarge.png}
\vspace{-0.5cm}
\caption{\emph{Illustrative example of error in carbon forecasts for an $N$-Body ($N=100k$) workload.}}
\vspace{-0.6cm}
\label{fig:error_nbody100_30}
\end{figure}%
\begin{figure*}[t]
\centering
\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=\textwidth]{fig/error_nbody100000-xlarge.png}
\vspace{-0.8cm}
\caption{\emph{Effect of carbon forecast errors for an $N$-body ($N$=100k) job.}}
\label{fig:error_nbody100}
\end{minipage}
\hfill
\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=\textwidth]{fig/profile_error.png}
\vspace{-0.8cm}
\caption{\emph{Effect of errors in observed and profiled marginal capacity curves.}}
\label{fig:profile_error}
\end{minipage}
\hfill
\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=\textwidth]{fig/denial_CA-ON.png}
\vspace{-0.8cm}
\caption{\emph{Carbon overhead of the server procurement denial.}}
\label{fig:denial_effect}
\end{minipage}
\end{figure*}
\noindent{\bf Effect of Carbon Forecast Error.}
Carbon forecasts are easily available through online tools and services such as~\cite{electricity-map, watttime, dayahead_estimations}, with a reported mean error of 6.4\%.
More importantly, the fidelity of \texttt{CarbonScaler}\xspace does not depend on the actual magnitude of the carbon forecast and instead relies on correctly identifying the hills (high carbon slots) and valleys (low carbon slots) in the carbon trace, which can be predicted with high accuracy.
To illustrate this effect, we generate carbon traces with forecast errors of up to 30\% by adding, for an error level of X\%, a uniformly random error in the range of -X\% to X\%.
Figure~\ref{fig:error_nbody100_30}(top) shows an example ground-truth and forecasted (X = 30\% error) carbon cost time-series.
While an erroneous forecast deviates from ground-truth at certain points, it still retains the hills and valleys, leading to harmonious schedules in both cases.
To further quantify the effect of forecast errors, we compare the performance of \texttt{CarbonScaler}\xspace with a perfect carbon forecast against an error-agnostic variant of \texttt{CarbonScaler}\xspace that is oblivious to forecast errors, and a variant of \texttt{CarbonScaler}\xspace that recomputes the schedule when the realized forecast error exceeds 5\%.
Figure~\ref{fig:error_nbody100} shows the carbon overhead over the perfect forecast scenario.
The results highlight the resiliency of \texttt{CarbonScaler}\xspace to forecast errors: a 30\% forecast error results in merely 4\% added carbon at the 95$^{th}$ percentile.
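The error-injection step used above can be sketched as follows; the toy check at the end merely illustrates that the lowest-carbon ``valleys'' usually survive moderate perturbations, which is what the schedule depends on.
\begin{verbatim}
import random

def add_forecast_error(trace, max_err_pct):
    """Perturb each hourly carbon value by a uniform error in [-X%, +X%]."""
    return [c * (1 + random.uniform(-max_err_pct, max_err_pct) / 100.0) for c in trace]

def lowest_slots(trace, k):
    """Indices of the k lowest-carbon slots (the valleys a scheduler targets)."""
    return set(sorted(range(len(trace)), key=lambda t: trace[t])[:k])

truth = [320, 280, 150, 120, 110, 130, 220, 300, 310, 180, 140, 125]
noisy = add_forecast_error(truth, 30)
# With moderate error the two slot sets typically overlap heavily.
print(lowest_slots(truth, 6), lowest_slots(noisy, 6))
\end{verbatim}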
\noindent{\bf Effect of Profiling Errors.}
The marginal capacity curves generated by the \texttt{Carbon Profiler}\xspace can become erroneous if the environment characteristics, such as network bottlenecks \cite{network_aware_scheduling}, change during the execution.
This can impact the carbon savings of a given job if scaling behaviour changes as a result of deviation from actual marginal capacity curves.
To evaluate the effect of erroneous profiles, we added uniformly random errors to the marginal capacity curves and measured the carbon consumption using \texttt{Carbon Advisor}\xspace.
Figure~\ref{fig:profile_error} shows the carbon overhead over \texttt{CarbonScaler}\xspace with accurate marginal capacity profiles.
The results show that the magnitude of the error depends on the application's power consumption and scalability behaviour; e.g., the $N$-body job is less affected by errors as it has low power consumption and scales somewhat linearly.
Additionally, we only show the results for the initial phase of execution where errors persist.
\texttt{CarbonScaler}\xspace's error-handling mechanism of updating the marginal capacity curves when they deviate corrects these errors, and the net overhead over the entire execution of the workload would be considerably smaller.
\vspace{0.05cm}
\noindent{\bf Impact of Server Procurement Denial.}
Since \texttt{CarbonScaler}\xspace dynamically scales each job independently, similar to cloud autoscalers, many jobs may request cloud servers
during low carbon periods, creating a high demand for servers during such periods. Thus, jobs may end up competing with one another
for additional servers, which can cause the cloud platform to deny some requests for new instances. For example, it is not uncommon to see denials
for popular GPU instances even in the absence of carbon scaling.
To evaluate the effect of such denials, we run a 24hr job with a 48hr completion time ($T=2\times l$) under different probabilities of random procurement denials. In such cases,
\texttt{CarbonScaler}\xspace keeps retrying its request and then recomputes the schedule to mitigate the impact of denials on job completion.
Figure~\ref{fig:denial_effect} illustrates that the carbon overhead, compared to a no denial scenario, increases as the denial percentage increases.
The overhead's magnitude depends on a job's scalability behaviour.
For example, a highly scalable $N$-body job incurs 5\% overhead while a non-scalable ML job (VGG16) incurs up to 15\% overhead.
\vspace{0.05cm}
\noindent \emph{\textbf{Key Takeaway.} CarbonScaler depends only on carbon cost trends, and simple recomputations achieve savings comparable to those under perfect forecasts. The potential overhead of profiling errors can be overcome by updating the marginal capacity curves as they start to deviate. Finally, resource availability can impact the achievable savings, but the magnitude depends on the scalability properties of the workloads.}
\subsection{System Overheads}
\texttt{CarbonScaler}\xspace incurs two types of systemic overheads in its execution.
First is the overhead of scaling as the number of resources change over time.
The scaling overhead is a function of the application state size (e.g., the number of parameters in ML models).
In our experiments, the scaling overhead was between 20-40 seconds. We note that \texttt{suspend-resume}\xspace incurs similar overheads as the state is scale-independent.
The second source of overhead is the time needed by \texttt{Carbon Profiler}\xspace to obtain marginal capacity curves.
As mentioned in \S\ref{sec:profiler}, profiling time can be configured using profile duration $\alpha$ at each allocation level, and granularity $\beta$ of allocations profiled.
We use $\alpha=1$ minute, and $\beta=1$, i.e., we profile across all possible allocation levels.
Thus, for our evaluation, the one-time total profiling cost was 40 minutes, i.e., 8 minutes per workload in Figure~\ref{fig:scalabilty-effect}.
\vspace{0.05cm}
\noindent \emph{\textbf{Key Takeaway.} CarbonScaler's systemic overheads are small, configurable, and generally occur once.}
\subsection{Carbon Profiler}
\label{sec:profiler}
\vspace{-0.1cm}
\texttt{CarbonScaler}\xspace requires the marginal capacity curve of a job for carbon-aware scaling.
\texttt{Carbon Profiler}\xspace performs a one time offline profiling of a new job to derive its marginal capacity curve.
To do so, it runs the job with server allocations ranging from the job-specified minimum number of servers, $m$, to the maximum number of servers, $M$, and records the work done at each allocation.
To minimize the overhead of profiling, \texttt{Carbon Profiler}\xspace runs the job for a small, configurable amount of time $\alpha$ (typically a few minutes) and varies the resource allocation with a granularity $\beta$, which depends on $M$.
If $\beta > 1$, \texttt{Carbon Profiler}\xspace interpolates the recorded measurements to obtain a complete marginal capacity curve.
Finally, the marginal capacity curves are valid for a computing environment similar, or identical, to the profiling environment. The scaling behaviour and the expected savings may change if the environment is significantly different, necessitating environment-specific profiling or an online update of the capacity curves.
\texttt{CarbonScaler}\xspace also allows substituting \texttt{Carbon Profiler}\xspace with alternative workload profiling approaches from prior work~\cite{paleo, Oyama2016, Pei2019, justus2018, cai2017neuralpower, optimus, performance_modeling}.
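As a rough sketch of this procedure, with \texttt{run\_job} as a placeholder for briefly launching the job at a given allocation, the profiling and interpolation steps might look as follows:
\begin{verbatim}
import numpy as np

def profile_marginal_capacity(run_job, m, M, alpha_minutes=1, beta=1):
    """Profile throughput at allocations m..M (step beta) and return the
    marginal capacity curve, interpolating any skipped allocation levels.

    run_job(servers, minutes) is a placeholder that runs the job briefly and
    returns the work done (e.g., iterations) in that interval.
    """
    sampled_n = list(range(m, M + 1, beta))
    sampled_tp = [run_job(n, alpha_minutes) for n in sampled_n]
    all_n = list(range(m, M + 1))
    throughput = np.interp(all_n, sampled_n, sampled_tp)   # fill gaps if beta > 1
    # marginal capacity: extra work contributed by each additional server
    marginal = np.diff(np.concatenate(([0.0], throughput)))
    return dict(zip(all_n, marginal))
\end{verbatim}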
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{fig/carbonAutoscalerOverview.png}
\vspace{-0.35cm}
\caption{\emph{An overview of Carbon AutoScaler.}}
\vspace{-0.6cm}
\label{fig:systemOverview}
\end{figure}
\vspace{-0.3cm}
\subsection{Carbon AutoScaler}
\vspace{-0.1cm}
Figure~\ref{fig:systemOverview} shows an overview of \texttt{Carbon AutoScaler}\xspace that uses Kubeflow~\cite{kubeflow} to implement our \texttt{Carbon Scaling Algorithm}\xspace from \S\ref{sec:algorithm}.
The incoming elastic batch applications use Kubernetes' Custom Resource Definition (CRD), written in \texttt{.yaml} format, to provide their general job specifications alongside scaling and scheduling information including minimum $m$ and maximum $M$ number of servers, completion time $T$, estimated job length $l$, and marginal capacity curve.
\texttt{Carbon AutoScaler}\xspace deploys CRDs using Kubernetes APIs such as \texttt{kubectl}.
We implement \texttt{Carbon AutoScaler}\xspace as a controller that sits on top of the Kubeflow training operator and leverages its core resource management functionalities for clustered batch jobs such as ML training and MPI.
It first runs the \texttt{Carbon Scaling Algorithm}\xspace to compute the initial schedule for each job.
It executes the schedule by scaling the resources allocated to the job by modifying the Kubeflow job specification, such as the number of replicas.
It delegates the task of maintaining job status to the Kubeflow operator.
\texttt{Carbon AutoScaler}\xspace also registers a reconcile callback function, which is called when the carbon intensity changes.
The function updates the job's specifications based on the initial schedule and applies the changes to the Kubeflow operator.
\texttt{Carbon AutoScaler}\xspace implements resource-level and application-level monitoring.
\texttt{Carbon AutoScaler}\xspace implements additional Kubernetes services to monitor resource usage, energy usage, and carbon usage over time.
We track CPU usage using Kubernetes Metrics Server~\cite{metrics-server}, CPU energy usage using Running Average Power Limiting (RAPL)~\cite{david2010rapl} interfaces and powerAPI~\cite{bourdon2013powerapi}, and GPU energy usage using NVIDIA Data Center GPU Manager (DCGM)~\cite{nvidia-dcgm}. \texttt{Carbon AutoScaler}\xspace monitors application-level metrics such as progress and throughput via application-level interfaces.
In addition, \texttt{Carbon AutoScaler}\xspace tracks carbon intensity using a dedicated service that provides both the instantaneous and the forecasted carbon cost.
Finally, \texttt{Carbon AutoScaler}\xspace reacts to divergence in progress, throughput, or carbon intensity by recomputing the schedule as explained in \S\ref{sec:algorithm}.
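The control flow of the reconcile callback can be sketched as follows; \texttt{kube\_client} and \texttt{carbon\_service} are hypothetical stand-ins for the Kubernetes API wrapper and the carbon intensity service, not actual library objects.
\begin{verbatim}
def reconcile(job, schedule, kube_client, carbon_service):
    """Called when the carbon-intensity service reports a new interval (hypothetical)."""
    slot = carbon_service.current_slot()
    replicas = schedule[slot]                  # 0 means suspend the job in this slot
    spec = kube_client.get_job_spec(job)
    if spec["replicas"] != replicas:
        spec["replicas"] = replicas
        kube_client.patch_job_spec(job, spec)  # the Kubeflow operator scales the workers
\end{verbatim}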
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{fig/analysis_tool.png}
\vspace{-0.4cm}
\caption{\emph{Carbon Advisor user interface.}}
\vspace{-0.9cm}
\label{fig:carbon_advisor}
\end{figure}
\vspace{-0.3cm}
\subsection{Carbon Advisor}
\vspace{-0.1cm}
\texttt{Carbon Advisor}\xspace enables pre-deployment analysis of the carbon scaling algorithm in an environment that simulates the operation of \texttt{Carbon AutoScaler}\xspace.
It takes, as an input, carbon trace, job start time, deadline, job length, and \texttt{CarbonScaler}\xspace-specific parameters such as range of server allocations $[m, M]$ and marginal capacity curve.
It simulates the running of the job and reports savings for various policies outlined in the paper.
The tool's simple plug-and-play nature allows application developers to run what-if scenarios and explore a wide range of parameters before actual deployment. For example, a user can explore the benefit of extending the job's waiting time and its impact on carbon savings.
We also enable key high-level analyses by default, such as computing the distribution of carbon savings across different start times of the year.
Finally, to facilitate initial exploration, we will provide carbon traces and marginal capacity curves used in the paper alongside the tool.
Figure~\ref{fig:carbon_advisor} shows the web interface of \texttt{Carbon Advisor}\xspace.
\subsection{Problem Formulation}
\label{sec:problem}
\vspace{-0.05cm}
Similar to cloud autoscalers that scale each application independently, a carbon scaler operates independently on each cloud application that wishes to optimize its carbon emissions.
When a new batch application arrives at time $t$, it specifies (i) the minimum number of servers, $m$, that it needs to run, where $m\geq1$, and (ii) the maximum number of servers $M$ that can be allocated to it, $M \geq m$.
The carbon scaler can vary the servers allocated to the application between $m$ and $M$. Suppose that $l$ is the estimated job length when executing on the baseline allocation of $m$ servers.\footnote{The job length $l$ can be estimated using profiling and modeling \cite{optimus, performance_modeling} or using prior execution history. For example \cite{weng2022mlaas} reports that 65\% of batch jobs see repeated execution of at least 5 times within a two month period.}
By default, we assume that the desired job completion time is $T=t+l$, which means that the job should complete ``on-time'' with no delays. Although $T$ must be at least $t+l$ for all jobs, some delay-tolerant jobs have significant temporal flexibility and can {\em optionally} specify a longer completion time $T$ such that $T > t+l$. The value $T - (t+l)$ represents the slack available when executing the job, i.e., the willingness of users to wait in order to maximize carbon savings. The default case of $T=t+l$ assumes on-time completion and zero slack.
The completion time $T$ specifies the \emph{temporal flexibility} (delay tolerance) available to the job, while the maximum server allocation $M$ specifies the \emph{resource elasticity} of the job.
The parameters $T$ and $M$ can be specified in different ways to obtain a range of carbon scaling behaviors.
For example, when $T=t+l$, the application has no temporal flexibility and cannot be subjected to suspend-resume mechanisms.
In this case, the job can only exploit resource elasticity by scaling up to $M$ workers during low carbon periods and must be completed on time with no delays.
In contrast, when $M=m$, the job has no resource flexibility and a carbon scaler is limited to performing only suspend-resume optimizations with a static number of servers, $m$, while also ensuring it completes the job by the specified completion time $T$.
Of course, when $T>t+l$ and $M>m$, a carbon scaler has the most flexibility and can exploit both resource elasticity and temporal shifting via suspend-resume.
Our goal is to design a carbon scaler that minimizes a job's carbon emissions subject to the available resource elasticity and temporal flexibility.
\begin{figure}[t!]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.24\textwidth]{fig/mc_linear_scaling.png} &
\includegraphics[width=0.24\textwidth]{fig/mc_diminishing_scaling.png}\\%\vspace{-0.1cm} \\
(a) Linear scaling & (b) Diminishing scaling
\end{tabular}
\vspace{-0.4cm}
\caption{\emph{Example marginal capacity curves.}}
\vspace{-0.6cm}
\label{fig:scalabilityExample}
\end{figure}
\vspace{-0.2cm}
\subsection{Basic Design}
\vspace{-0.05cm}
When a new batch job arrives, our system, which we refer to as \texttt{CarbonScaler}\xspace, computes an \emph{initial} schedule for executing the job through completion.
The execution schedule specifies how many servers to allocate to the batch job over time and when to dynamically change the allocation in response to variations in carbon cost.
This initial schedule is based on a forecast of future carbon cost as well as expected progress of the job over time based on its resource allocation.
As the job executes, \texttt{CarbonScaler}\xspace adjusts its schedule periodically if it encounters forecast errors or deviations in expected job progress, so as to ensure completion by the specified completion time $T$. Observed deviations can occur due to profiling errors, network and locality interference \cite{network_aware_scheduling}, or resource procurement denials, and are discussed further in \S\ref{sec:assumptions}.
\texttt{CarbonScaler}\xspace assumes that forecasts of future carbon cost of electricity are available --- this is reasonable since commercial services~\cite{electricity-map, watttime} already provide such forecasts.
Since the application specifies its temporal flexibility (in terms of completion time $T$) and its resource elasticity (in terms of the varying server allocation from $m$ to $M$), \texttt{CarbonScaler}\xspace's schedule responds to fluctuations in forecasted carbon cost by scaling down or completely suspending the job when the cost is high and opportunistically scaling up when the cost is low.
Different clustered batch applications will have different scaling behaviors, as shown in Figure~\ref{fig:workloadScale}, which should be considered when scaling an application's server capacity between the specified range of $m$ to $M$.
As noted in Figure~\ref{fig:workloadScale}, application throughput either increases sub-linearly throughout, or increases roughly linearly at first and then shows diminishing returns as server capacity increases further.
This behavior is a direct consequence of Amdahl's law~\cite{amdahl-law}, which states that the speedup of a parallel application is limited by the amount of sequential code within it --- adding server capacity only speeds up the parallel component of the application.
Software bottlenecks such as synchronization overheads also limit the ability to scale up.
\texttt{CarbonScaler}\xspace considers this scaling behavior in terms of a \emph{marginal capacity curve}, shown in Figure~\ref{fig:scalabilityExample}, which captures the incremental increase in application capacity (i.e., throughput) for each unit increase in server capacity.
The ideal case of linear scaling translates to a \emph{flat} marginal capacity curve where each additional server results in a unit increase in (normalized) application capacity (see Figure~\ref{fig:scalabilityExample}(a)).
Most applications will have a diminishing marginal capacity curve, where marginal capacity decreases monotonically with increase in the server capacity (see Figure~\ref{fig:scalabilityExample}(b)).
\RestyleAlgo{ruled}
\newlength{\textfloatsepsave}
\setlength{\textfloatsepsave}{\textfloatsep}
\setlength{\textfloatsep}{0pt}
\begin{algorithm}[t]
\caption{\texttt{Carbon Scaling Algorithm()}}
\label{alg:proposed_algorithm}
\KwIn{Marginal capacity ($MC$), time slots $[t, T]$, carbon cost forecast ($c$), total work ($W$)}
\KwOut{Execution Schedule $S$}
$S \gets [0..0]$\;
$L \gets []$ \;
\For{$i \in [t, T]$}{
\For{$j \in [m, M]$}{%
$L$.append($i, j, MC_j/c_i$)\;
}
} \label{algline:marg-carb}
$L \gets$ Sort($L$) ; \tcp{w.r.t. Norm. Marginal Cap.} \label{algline:greedy-start}
$w \gets 0$ \;
\While{$w < W$ }{
$i,j, * \gets L.pop()$; \tcp{next highest $MC_j/c_i$}
$S[i] = j$; \tcp{increase allocation in slot $i$}
$w.update(S)$ \;
} \label{algline:greedy-end}
\Return $S$
\vspace{-0.1cm}
\end{algorithm}
\setlength{\textfloatsep}{\textfloatsepsave}
The marginal capacity curve and the carbon intensity curve can then be used to scale the application up or down in a carbon-efficient manner.
To do so, the marginal capacity curve is normalized by the forecasted carbon cost in each time step to compute the {\em marginal capacity per unit carbon} --- the marginal work done per unit of carbon.
\texttt{CarbonScaler}\xspace then adds server capacity to the time slots that maximize the work done per unit of carbon.
By doing so, \texttt{CarbonScaler}\xspace allocates {\em more server resources when the carbon cost is low, since more marginal work can be done at a lower carbon cost.} \texttt{CarbonScaler}\xspace will incrementally add servers to various time slots until sufficient server capacity has been added to complete the job within the desired completion time $T$, thereby yielding a carbon-efficient execution schedule that optimizes the carbon emissions.
In practice, each application can have multiple marginal capacity curves, each representing a different
phase of its execution. For example, a MapReduce job can have different scaling behaviors, and a different marginal capacity curve, for its map and reduce phases. For ease of exposition, our discussion below assumes a single marginal capacity curve per application. However, our approach generalizes to multiple marginal capacity curves by considering the appropriate scaling curve in each time slot that corresponds to the current phase of the application's execution.
\vspace{-0.1cm}
\subsection{Carbon Scaling Algorithm}\label{sec:algorithm}
\vspace{-0.05cm}
Our greedy carbon scaling algorithm, which computes a batch job's execution schedule by horizontally scaling it up or down or suspending it entirely, is shown as Algorithm~\ref{alg:proposed_algorithm}.
As noted in Section \ref{sec:problem}, when a new job arrives at time $t$, it specifies a \emph{desired} completion time (``deadline'') of time $T$.
We assume that the interval $[t, T]$ is discretized into smaller fixed-length intervals (e.g., 15 minutes or an hour), and the number of servers allocated to the job can be changed at the start of each interval.
Suppose that there are $n$ time intervals between $[t, T]$, $n\geq 1$.
Let $c_1, c_2, ..., c_n$ denote the forecasted carbon cost in each such interval $i$, $1 \leq i \leq n$.
We also assume that the marginal capacity curve of the application is obtained by profiling the application offline (see Section~\ref{sec:profiler}) and is known at arrival time.
Suppose that the marginal capacity curve is denoted by $MC_{m}, MC_{m+1}, ..., MC_{M}$, where $MC_j$ is the marginal capacity increase after allocating the $j$-th server, $j \in [m, M]$.
Since the estimated job length is $l$ when executing with minimum server capacity $m$, the total amount of work the job needs to perform is $W = l \cdot MC_m$.
Our algorithm needs to compute a schedule where the aggregate server capacity allocated to the job over $[t,T]$ can perform this amount of work before the completion time $T$, while also minimizing carbon emissions.
To do so, our algorithm first computes the {\em marginal capacity per unit carbon} in each time interval $i$ by normalizing the $MC$ curve with carbon cost $c_i$ in that interval (line~\ref{algline:marg-carb}).\footnote{If the application has multiple marginal capacity curves, the one corresponding to the execution phase in time slot $i$ is chosen.}
That is, the marginal capacity per unit carbon in time interval $i$ is simply ${MC_m}/c_i, MC_{m+1}/c_i, ..., MC_{M}/c_i$.
The greedy algorithm then iteratively and incrementally allocates server capacity to various time slots in order of decreasing {\em marginal capacity per unit carbon} (lines~\ref{algline:greedy-start}-\ref{algline:greedy-end}).
In each iteration, it chooses the interval $i$ from [1, n] such that allocating incremental server capacity to that time slot maximizes the work done per unit carbon (i.e., chooses the interval with the greatest $MC_j/c_i$ across all intervals).
After allocating server capacity to that interval, it iteratively determines the next interval where allocating additional server capacity yields the next highest work done per unit carbon, and so on.
Note that the greedy algorithm may allocate additional capacity to the same interval as the previous iteration if its marginal work done per unit carbon continues to be the highest across all intervals.
Otherwise, a new time interval with the next highest marginal work done per unit is chosen for allocating server capacity.
Also, when a time interval is initially chosen by the greedy algorithm for capacity allocation, it must be allocated the minimum requirement of $m$ servers, after which the allocation can be increased incrementally by one in subsequent steps. Similarly, if a time slot reaches the maximum allocation of $M$ servers, it is not considered further by the greedy algorithm.
The process continues until sufficient capacity has been allocated across the $n$ time intervals to complete $W$ units of work.
This yields an initial schedule where each time interval has either a zero allocation (causing the job to be suspended in that period) or has a non-zero allocation between $m$ and $M$, with the server allocation potentially changing at interval boundaries.
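The following Python sketch mirrors the greedy procedure above (and Algorithm~\ref{alg:proposed_algorithm}) under the assumption of a monotonically decreasing marginal capacity curve; names are illustrative, not our actual implementation.
\begin{verbatim}
def carbon_scaling(mc, carbon, m, M, W):
    """Greedy sketch of Algorithm 1.  mc[j]: marginal capacity of allocation level j
    (mc[m] is the baseline capacity on m servers); carbon[i]: forecasted cost c_i of
    slot i; returns S, the per-slot server allocation (0 = job suspended)."""
    n = len(carbon)
    # marginal capacity per unit carbon for every (slot, allocation level) pair
    L = sorted(((mc[j] / carbon[i], i, j)
                for i in range(n) for j in range(m, M + 1)), reverse=True)
    S, work = [0] * n, 0.0
    for _, i, j in L:                      # visit pairs in decreasing MC_j / c_i
        if work >= W:
            break                          # enough capacity to finish the job
        # with a monotone MC curve, slot i's entries arrive in order m, m+1, ..., M,
        # so its first pick jumps straight to the minimum allocation of m servers
        if j != (m if S[i] == 0 else S[i] + 1):
            continue
        S[i] = j                           # raise the allocation in slot i
        work += mc[j]                      # add the marginal work of this step
    return S
\end{verbatim}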
\begin{figure*}[t]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.23\textwidth]{fig/algorithm_example-01.png} &
\includegraphics[width=0.23\textwidth]{fig/algorithm_example-02.png} &
\includegraphics[width=0.23\textwidth]{fig/algorithm_example-03.png} &
\includegraphics[width=0.2\textwidth]{fig/algorithm_example-04.png}\vspace{-0.1cm}\\
\footnotesize{(a) Carbon-agnostic} & \footnotesize{(b) Linear scaling} & \footnotesize{(c) Diminishing scaling} & \footnotesize{(d) MC per unit carbon}
\end{tabular}
\vspace{-0.1cm}
\caption{\emph{An illustrative example of our carbon scaling algorithm at work.}}
\label{fig:algoExample}
\end{figure*}
\noindent{\bf Example.} To illustrate our carbon scaling algorithm, consider a job of length 2 that arrives at $t=0$ and needs to finish by $T=3$.
Suppose that the job needs to execute on at least one server ($m=1$) and at most two servers ($M=2$).
Carbon-agnostic execution will run the job as soon as it arrives and it will complete at time 2, as shown in Figure~\ref{fig:algoExample}(a).
Suppose that the forecasted carbon cost in time slots 1, 2 and 3 is $c_1=10$ (``low''), $c_2=100$ (``high'') and $c_3=20$ (``medium''), respectively.
First, assume that the job has ideal scaling behavior and a flat marginal capacity curve of $MC_1=1$ and $MC_2=1$.
The algorithm simply allocates two servers to the job in slot 1, since it has the lowest carbon cost and the highest marginal capacity per unit carbon.
As shown in Figure~\ref{fig:algoExample}(b), such a job runs with two servers and terminates at the end of slot 1.
Next assume a job with a diminishing marginal capacity curve, given by $MC_1=1$ and $MC_2 =0.7$.
Figure~\ref{fig:algoExample}(d) shows the marginal capacity per unit cost table ($MC_j/c_i$) for all three slots.
The greedy algorithm allocates the first server to slot 1, since it has the highest marginal capacity per unit cost of 0.1.
In the next iteration, the greedy algorithm allocates a second server to slot 1 as it still has the highest marginal capacity per unit cost ($MC_2/c_1 = 0.07$).
Although two servers have been allocated, the total work done by these two servers is only 1.7 ($MC_1+MC_2$), which cannot complete the job of length 2 ($W=2$).
The algorithm then allocates another server to slot 3, which has the next highest marginal capacity per unit cost ($MC_1/c_3 = 0.05$).
This yields a schedule where the job is given 2 servers in slot 1, zero in slot 2 and one server in slot 3.
The job only runs for one-third of slot 3 before it completes.
The example also illustrates a tradeoff where \texttt{CarbonScaler}\xspace reduces the emissions compared to carbon-agnostic execution (from 110 to 40 carbon units), but increases cloud costs by 15\% due to the need for a third server.
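For reference, running the sketch above on this example reproduces the same schedule:
\begin{verbatim}
mc = {1: 1.0, 2: 0.7}                      # diminishing marginal capacity curve
S = carbon_scaling(mc, carbon=[10, 100, 20], m=1, M=2, W=2)
print(S)   # [2, 0, 1]: two servers in slot 1, suspended in slot 2, one in slot 3
\end{verbatim}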
\noindent{\bf Periodic Schedule Recomputation.}
Once the algorithm computes an initial schedule, \texttt{CarbonScaler}\xspace can begin execution of the job by auto-scaling it up or down, or suspending it, in each time slot as per the schedule.
\texttt{CarbonScaler}\xspace continuously monitors the work done (``job progress'') and the emissions of the job over the course of its execution.
Recall that the initial schedule is computed based on a \emph{forecasted} carbon cost and an \emph{estimated} marginal capacity curve derived from profiling, both of which can have errors in their estimates.
Similar to weather forecasts, carbon forecasts can have errors, especially over a period of multiple days~\cite{dayahead_estimations}.
Similarly, the marginal capacity curves may not be exact since the production environments may somewhat differ from the profiling environment \cite{network_aware_scheduling, optimus, performance_modeling}.
These errors can cause deviations in the expected work done or the expected carbon emissions as estimated by the initial schedule.
To be robust to carbon prediction or profile estimation errors, \texttt{CarbonScaler}\xspace compares the observed work and carbon emissions to the schedule's estimates at the end of each time interval.
If the deviations exceed a threshold, it recomputes the schedule for the remainder of the job's execution from the current time $t'$ to the completion time $T$.
When doing so, it can use an updated carbon forecast if available, since such forecasts are often updated every few hours, similar to weather forecasts.
Thus, if the progress deviates from the plan (e.g., due to profiling errors), \texttt{CarbonScaler}\xspace will recompute the schedule to ensure the highest carbon savings.
Since some batch jobs can execute for days~\cite{Tirmazi2020BorgTN}, such schedule adjustments provide robustness to prediction errors and ensure timely job completion while minimizing carbon emissions.
\noindent{\bf{Carbon Scaler Optimality.}} The greedy algorithm in Algorithm~\ref{alg:proposed_algorithm} yields optimal carbon savings.
\vspace{-0.1cm}
\begin{theorem}
Consider a batch job with a monotonically decreasing marginal capacity curve, s.t. $MC_m > MC_{m+1} > \dots > MC_{M}$. The job needs to finish work $W$ within $n$ time slots with carbon costs $c_1, c_2,..., c_n$, respectively. Greedily selecting, in each step, the slot $i$ and scaling the job to $j$ servers with the highest marginal capacity per unit carbon $MC_j/c_i$ results in the lowest (optimal) amount of carbon consumption.
\end{theorem}
\vspace{-0.1cm}
The carbon scaling problem can be represented as a fractional knapsack problem that can be solved optimally using greedy algorithms \cite{clrs}.
We provide a detailed proof in Appendix~\ref{app:optimality}.
\noindent{\bf Run Time Complexity.}
In Algorithm \ref{alg:proposed_algorithm}, the time complexity of computing the marginal capacity per unit carbon (steps 3--5) is $\mathcal{O}(nM)$, sorting the list is $\mathcal{O}(nM \log nM)$, and computing the schedule is $\mathcal{O}(nM)$ (steps 8--11).
The total complexity is therefore $\mathcal{O}(nM + nM\log nM) = \mathcal{O}(nM\log nM)$.
|
{
"arxiv_id": "2302.08643",
"language": "en",
"timestamp": "2023-02-28T02:05:57",
"url": "https://arxiv.org/abs/2302.08643",
"yymm": "2302"
} |
\section{Introduction} \label{sec:intro}
Time series modeling has been a long-standing quest in a wide range of academic fields and industrial applications, including neuroscience~\cite{POURAHMADI2016neuro} and traffic modeling~\cite{dcrnn}. Traditionally, model-based approaches such as autoregressive (AR) models and Support Vector Regression~\cite{smola2004svr} require domain knowledge as well as stationarity assumptions, which are often violated by the complex and non-linear structure of neural and traffic data.
Recently, there has been intensive research with promising results on the traffic forecasting problem using deep learning, such as Recurrent Neural Networks (RNN)~\cite{qin2017rnn}, LSTM~\cite{koprinska2018lstm}, and graph learning using Transformers~\cite{xu2020spatial}. On the other hand, forecasting in neuroscience has focused mainly on the long-term evolution of brain network structure based on fMRI data, such as predicting the brain connectivities of an Alzheimer’s patient after several months~\cite{bessadok2022graph}, where existing methods are GCN-based~\cite{goktacs2020residual} or GAN-based graph autoencoders~\cite{gurler2020foreseeing}. Meanwhile, research on instantaneous time series forecasting of electroencephalogram (EEG) or
electrocorticography (ECoG) remains untouched, even though EEG and ECoG are often cheaper and quicker to obtain than fMRI, while short-term forecasting may be beneficial for patients with strokes or epilepsy~\cite{shoeibi2022overview}.
In graph representation learning, a dense adjacency matrix expressing a densely connected graph can be a waste of computational resources, while physically, it may fail to capture the local ``smoothness'' of the network. To tackle such problems, a mathematical framework called Multiresolution Matrix Factorization (MMF) \cite{pmlr-v32-kondor14} has been adopted to ``sparsify'' the adjacency and graph Laplacian matrices of highly dense graphs. MMF is unusual amongst fast matrix factorization algorithms in that it does not make a low rank assumption; instead, it is designed to capture structure at multiple different scales. This makes MMF especially well suited to modeling certain types of graphs with complex multiscale or hierarchical structure \cite{pmlr-v196-hy22a}, compressing hierarchical matrices (e.g., kernel/gram matrices) \cite{pmlr-v51-teneva16,NIPS2017_850af92f}, and other applications in computer vision \cite{8099564}. One important aspect of MMF is its ability to construct wavelets on graphs and matrices during the factorization process \cite{pmlr-v32-kondor14,pmlr-v196-hy22a}. The wavelet basis inferred by MMF tends to be highly sparse, which allows the corresponding wavelet transform to be executed efficiently via sparse matrix multiplication. \cite{pmlr-v196-hy22a} exploited this property to construct fast wavelet convolutions and, consequently, wavelet neural networks that learn on graphs for graph classification and node classification tasks. In this work, we propose incorporating fast wavelet convolution based on MMF to build a time- and memory-efficient temporal architecture that learns on time series data with an underlying graph structure.
From the aforementioned arguments, we propose the \textit{Fast Temporal Wavelet Graph Neural Network} (FTWGNN) for graph time series forecasting, in which the MMF theory is utilized to describe the local smoothness of the network as well as to accelerate the calculations. Experiments on real-world traffic and ECoG datasets show competitive performance along with a remarkably smaller computational footprint for FTWGNN. In summary:
\begin{compactitem}
\item We model the spatial domain of the graph time series as a diffusion process, in which the theories of \textit{multiresolution analysis} and \textit{wavelet theory} are adopted. We employ \textit{Multiresolution Matrix Factorization} (MMF) to factorize the underlying graph structure and derive its sparse wavelet basis.
\item We propose the \textit{Fast Temporal Wavelet Graph Neural Network} (FTWGNN), an end-to-end model capable of modeling spatiotemporal structures.
\item We test FTWGNN on two real-world traffic datasets and an ECoG dataset, achieving results competitive with state-of-the-art methods while remarkably reducing computational time.
\end{compactitem}
\section{Related work} \label{sec:related}
A spatial-temporal forecasting task utilizes
spatial-temporal data gathered from various sensors to predict their future states. Traditional approaches, such as the autoregressive integrated moving average (ARIMA), the k-nearest neighbors algorithm (kNN), and support vector machines (SVM), can only take into account temporal information without considering spatial
features~\cite{van2012short,jeong2013supervised}. Aside from traditional approaches, deep neural networks have been proposed to model much more complex spatial-temporal relationships. Specifically, by using an extended fully-connected LSTM with embedded convolutional layers, FC-LSTM \cite{sutskever2014sequence} combines CNN and LSTM to model spatial and temporal relations. When predicting traffic, ST-ResNet~\cite{zhang2017deep} uses a deep residual CNN network, revealing the powerful capabilities of residual networks. Despite the impressive results obtained, all of the aforementioned methods are incompatible with traffic forecasting scenarios involving graph-structured data because they are built for grid data. For learning tasks on graphs, GNNs~\cite{Kipf_GCN} compute node representations using a neighborhood aggregation scheme, which involves sampling and aggregating the features of nearby nodes. Since spatial-temporal data such as traffic data or brain networks are a well-known type of non-Euclidean graph-structured data, great efforts have been made to use graph convolution methods in traffic forecasting. As an illustration, DCRNN \cite{dcrnn} models traffic flow as a diffusion process and uses bidirectional random walks on a directed graph to model spatial dependency.
In the field of image and signal processing, computation is more efficient and simpler in a sparse representation, where fewer coefficients reveal the information we are searching for. Based on this motivation, Multiresolution Analysis (MRA) has been proposed by \cite{192463} as a design for multiscale signal approximation in which sparse representations can be constructed by decomposing signals over elementary waveforms chosen from a family called \textit{wavelets}. Besides Fourier transforms, the discovery of wavelet orthogonal bases such as Haar \cite{Haar1910ZurTD} and Daubechies \cite{Daubechies1988OrthonormalBO} has opened the door to new transforms such as continuous and discrete wavelet transforms and the fast wavelet transform algorithm that have become crucial for several computer applications \cite{10.5555/1525499}.
\cite{pmlr-v32-kondor14} and \cite{pmlr-v196-hy22a} have introduced Multiresolution Matrix Factorization (MMF) as a novel method for constructing sparse wavelet transforms of functions defined on the nodes of an arbitrary graph while giving a multiresolution approximation of hierarchical matrices. MMF is closely related to other works on constructing wavelet bases on discrete spaces, including wavelets defined based on diagonalizing the diffusion operator or the normalized graph Laplacian \cite{COIFMAN200653} \cite{HAMMOND2011129} and multiresolution
on trees \cite{10.5555/3104322.3104370} \cite{10.2307/30244209}.
\section{Background} \label{sec:background}
\subsection{Multiresolution Matrix Factorization}
Most commonly used matrix factorization algorithms, such as principal component analysis (PCA), singular value decomposition (SVD), or non-negative matrix factorization (NMF) are inherently single-level algorithms. Saying that a symmetric matrix ${\bm{A}} \in \mathbb{R}^{n \times n}$ is of rank $r \ll n$ means that it can be expressed in terms of a dictionary of $r$ mutually orthogonal unit vectors $\{u_1, u_2, \dots, u_r\}$ in the form
$${\bm{A}} = \sum_{i = 1}^r \lambda_i u_i u_i^T,$$
where $u_1, \dots, u_r$ are the normalized eigenvectors of $A$ and $\lambda_1, \dots, \lambda_r$ are the corresponding eigenvalues. This is the decomposition that PCA finds, and it corresponds to factorizing ${\bm{A}}$ in the form
\begin{equation}
{\bm{A}} = {\bm{U}}^T {\bm{H}} {\bm{U}},
\label{eq:eigen}
\end{equation}
where ${\bm{U}}$ is an orthogonal matrix and ${\bm{H}}$ is a diagonal matrix with the eigenvalues of ${\bm{A}}$ on its diagonal. The drawback of PCA is that eigenvectors are almost always dense, while matrices occurring in learning problems, especially those related to graphs, often have strong locality properties, in the sense that they couple certain clusters of nearby coordinates more closely than those farther apart with respect to the underlying topology. In such cases, modeling ${\bm{A}}$ in terms of a basis of global eigenfunctions is both computationally wasteful and conceptually unreasonable: a localized dictionary would be more appropriate. In contrast to PCA, \cite{pmlr-v32-kondor14} proposed \textit{Multiresolution Matrix Factorization}, or MMF for short, to construct a sparse hierarchical system of $L$-level dictionaries. The corresponding matrix factorization is of the form
$${\bm{A}} = {\bm{U}}_1^T {\bm{U}}_2^T \dots {\bm{U}}_L^T {\bm{H}} {\bm{U}}_L \dots {\bm{U}}_2 {\bm{U}}_1,$$
where ${\bm{H}}$ is close to diagonal and ${\bm{U}}_1, \dots, {\bm{U}}_L$ are sparse orthogonal matrices with the following constraints:
\begin{compactitem}
\item Each ${\bm{U}}_\ell$ is $k$-point rotation (i.e. Givens rotation) for some small $k$, meaning that it only rotates $k$ coordinates at a time. Formally, Def.~\ref{def:rotation-matrix} defines the $k$-point rotation matrix.
\item There is a nested sequence of sets ${\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n]$ such that the coordinates rotated by ${\bm{U}}_\ell$ are a subset of ${\mathbb{S}}_\ell$.
\item ${\bm{H}}$ is an ${\mathbb{S}}_L$-core-diagonal matrix that is formally defined in Def.~\ref{def:core-diagonal}.
\end{compactitem}
We formally define MMF in Defs.~\ref{def:mmf} and \ref{def:factorizable}. A special case of MMF is the Jacobi eigenvalue algorithm \cite{Jacobi+1846+51+94} in which each ${\bm{U}}_\ell$ is a 2-point rotation (i.e. $k = 2$).
\begin{definition} \label{def:rotation-matrix}
We say that ${\bm{U}} \in \mathbb{R}^{n \times n}$ is an \textbf{elementary rotation of order $k$} (also called as a $k$-point rotation) if it is an orthogonal matrix of the form
$${\bm{U}} = {\bm{I}}_{n - k} \oplus_{(i_1, \cdots, i_k)} {\bm{O}}$$
for some ${\mathbb{I}} = \{i_1, \cdots, i_k\} \subseteq [n]$ and ${\bm{O}} \in {\mathbb{S}}{\mathbb{O}}(k)$. We denote the set of all such matrices as ${\mathbb{S}}{\mathbb{O}}_k(n)$.
\end{definition}
\begin{definition} \label{def:core-diagonal}
Given a set ${\mathbb{S}} \subseteq [n]$, we say that a matrix ${\bm{H}} \in \mathbb{R}^{n \times n}$ is ${\mathbb{S}}$-core-diagonal if ${\bm{H}}_{i, j} = 0$ unless $i, j \in {\mathbb{S}}$ or $i = j$. Equivalently, ${\bm{H}}$ is ${\mathbb{S}}$-core-diagonal if it can be written in the form ${\bm{H}} = {\bm{D}} \oplus_{{\mathbb{S}}} \overline{{\bm{H}}}$, for some $\overline{H} \in \mathbb{R}^{\lvert {\mathbb{S}} \rvert \times \lvert {\mathbb{S}} \rvert}$ and ${\bm{D}}$ is diagonal. We denote the set of all ${\mathbb{S}}$-core-diagonal symmetric matrices of dimension $n$ as ${\mathbb{H}}^{{\mathbb{S}}}_n$.
\end{definition}
\begin{definition} \label{def:mmf}
Given an appropriate subset ${\mathbb{O}}$ of the group ${\mathbb{S}}{\mathbb{O}}(n)$ of $n$-dimensional rotation matrices, a depth parameter $L \in \mathbb{N}$, and a sequence of integers $n = d_0 \ge d_1 \ge d_2 \ge \dots \ge d_L \ge 1$, a \textbf{Multiresolution Matrix Factorization (MMF)} of a symmetric matrix ${\bm{A}} \in \mathbb{R}^{n \times n}$ over ${\mathbb{O}}$ is a factorization of the form
\begin{equation} \label{eq:mmf}
{\bm{A}} = {\bm{U}}_1^T {\bm{U}}_2^T \dots {\bm{U}}_L^T {\bm{H}} {\bm{U}}_L \dots {\bm{U}}_2 {\bm{U}}_1,
\end{equation}
where each ${\bm{U}}_\ell \in {\mathbb{O}}$ satisfies $[{\bm{U}}_\ell]_{[n] \setminus {\mathbb{S}}_{\ell - 1}, [n] \setminus {\mathbb{S}}_{\ell - 1}} = {\bm{I}}_{n - d_\ell}$ for some nested sequence of sets ${\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n]$ with $\lvert {\mathbb{S}}_\ell \rvert = d_\ell$, and ${\bm{H}} \in {\mathbb{H}}^{{\mathbb{S}}_L}_n$ is an ${\mathbb{S}}_L$-core-diagonal matrix.
\end{definition}
\begin{definition} \label{def:factorizable}
We say that a symmetric matrix ${\bm{A}} \in \mathbb{R}^{n \times n}$ is \textbf{fully multiresolution factorizable} over ${\mathbb{O}} \subset {\mathbb{S}}{\mathbb{O}}(n)$ with $(d_1, \dots, d_L)$ if it has a decomposition of the form described in Def.~\ref{def:mmf}.
\end{definition}
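To make these definitions concrete, the following numpy sketch assembles two elementary rotations of order $k = 2$ and verifies the multilevel change of basis; the rotation angles and index pairs are illustrative, and no core-diagonal truncation is applied, so the reconstruction is exact.
\begin{verbatim}
import numpy as np

def k_point_rotation(n, idx, O):
    """Embed a k x k orthogonal block O into the n x n identity at rows/cols idx."""
    U = np.eye(n)
    U[np.ix_(idx, idx)] = O
    return U

def givens(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

n = 4
U1 = k_point_rotation(n, [0, 1], givens(0.3))     # first 2-point rotation
U2 = k_point_rotation(n, [1, 2], givens(-0.7))    # second 2-point rotation
A = np.random.randn(n, n); A = A + A.T            # a symmetric test matrix
H = U2 @ U1 @ A @ U1.T @ U2.T                     # A expressed in the new basis
A_rec = U1.T @ U2.T @ H @ U2 @ U1                 # multilevel factorization form
assert np.allclose(A, A_rec)                      # exact, since H is not truncated
\end{verbatim}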
\subsection{Multiresolution analysis} \label{sec:multiresolution-analysis}
\cite{pmlr-v32-kondor14} has shown that MMF carries the classical theory of multiresolution analysis (MRA) on the real line \cite{192463} over to discrete spaces. The functional analytic view of wavelets is provided by MRA, which, similarly to Fourier analysis, is a way of filtering some function space into a sequence of subspaces
\begin{equation}
\dots \subset {\mathbb{V}}_{-1} \subset {\mathbb{V}}_0 \subset {\mathbb{V}}_1 \subset {\mathbb{V}}_2 \subset \dots
\label{eq:subspace-sequence}
\end{equation}
However, it is best to conceptualize (\ref{eq:subspace-sequence}) as an iterative process of splitting each ${\mathbb{V}}_\ell$ into the orthogonal sum ${\mathbb{V}}_\ell = {\mathbb{V}}_{\ell + 1} \oplus {\mathbb{W}}_{\ell + 1}$ of a smoother part ${\mathbb{V}}_{\ell + 1}$, called the \textit{approximation space}; and a rougher part ${\mathbb{W}}_{\ell + 1}$, called the \textit{detail space} (see Fig.~\ref{fig:subspaces}). Each ${\mathbb{V}}_\ell$ has an orthonormal basis $\Phi_\ell \triangleq \{\phi_m^\ell\}_m$ in which each $\phi$ is called a \textit{father} wavelet. Each complementary space ${\mathbb{W}}_\ell$ is also spanned by an orthonormal basis $\Psi_\ell \triangleq \{\psi_m^\ell\}_m$ in which each $\psi$ is called a \textit{mother} wavelet. In MMF, each individual rotation ${\bm{U}}_\ell: {\mathbb{V}}_{\ell - 1} \rightarrow {\mathbb{V}}_\ell \oplus {\mathbb{W}}_\ell$ is a sparse basis transform that expresses $\Phi_\ell \cup \Psi_\ell$ in the previous basis $\Phi_{\ell - 1}$ such that:
$$\phi_m^\ell = \sum_{i = 1}^{\text{dim}({\mathbb{V}}_{\ell - 1})} [{\bm{U}}_\ell]_{m, i} \phi_i^{\ell - 1},$$
$$\psi_m^\ell = \sum_{i = 1}^{\text{dim}({\mathbb{V}}_{\ell - 1})} [{\bm{U}}_\ell]_{m + \text{dim}({\mathbb{V}}_{\ell - 1}), i} \phi_i^{\ell - 1},$$
in which $\Phi_0$ is the standard basis, i.e. $\phi_m^0 = e_m$; and $\text{dim}({\mathbb{V}}_\ell) = d_\ell = \lvert {\mathbb{S}}_\ell \rvert$. In the $\Phi_1 \cup \Psi_1$ basis, ${\bm{A}}$ compresses into ${\bm{A}}_1 = {\bm{U}}_1{\bm{A}}{\bm{U}}_1^T$. In the $\Phi_2 \cup \Psi_2 \cup \Psi_1$ basis, it becomes ${\bm{A}}_2 = {\bm{U}}_2{\bm{U}}_1{\bm{A}}{\bm{U}}_1^T{\bm{U}}_2^T$, and so on. Finally, in the $\Phi_L \cup \Psi_L \cup \dots \cup \Psi_1$ basis, it takes on the form ${\bm{A}}_L = {\bm{H}} = {\bm{U}}_L \dots {\bm{U}}_2{\bm{U}}_1 {\bm{A}} {\bm{U}}_1^T{\bm{U}}_2^T \dots {\bm{U}}_L^T$ that consists of four distinct blocks (assuming that we permute the rows/columns accordingly):
$${\bm{H}} = \begin{pmatrix} {\bm{H}}_{\Phi, \Phi} & {\bm{H}}_{\Phi, \Psi} \\ {\bm{H}}_{\Psi, \Phi} & {\bm{H}}_{\Psi, \Psi} \end{pmatrix},$$
where ${\bm{H}}_{\Phi, \Phi} \in \mathbb{R}^{\text{dim}({\mathbb{V}}_L) \times \text{dim}({\mathbb{V}}_L)}$ is effectively ${\bm{A}}$ compressed to ${\mathbb{V}}_L$, ${\bm{H}}_{\Phi, \Psi} = {\bm{H}}_{\Psi, \Phi}^T = 0$ and ${\bm{H}}_{\Psi, \Psi}$ is diagonal. MMF approximates ${\bm{A}}$ in the form
$${\bm{A}} \approx \sum_{i, j = 1}^{d_L} h_{i, j} \phi_i^L {\phi_j^L}^T + \sum_{\ell = 1}^L \sum_{m = 1}^{d_\ell} c_m^\ell \psi_m^\ell {\psi_m^\ell}^T,$$
where $h_{i, j}$ coefficients are the entries of the ${\bm{H}}_{\Phi, \Phi}$ block, and $c_m^\ell = \langle \psi_m^\ell, {\bm{A}} \psi_m^\ell \rangle$ wavelet frequencies are the diagonal elements of the ${\bm{H}}_{\Psi, \Psi}$ block.
In particular, the dictionary vectors corresponding to certain rows of ${\bm{U}}_1$ are interpreted as level one wavelets, the dictionary vectors corresponding to certain rows of ${\bm{U}}_2{\bm{U}}_1$ are interpreted as level two wavelets, and so on. One thing that is immediately clear is that whereas Eq.~(\ref{eq:eigen}) diagonalizes ${\bm{A}}$ in a single step, multiresolution analysis will involve a sequence of basis transforms ${\bm{U}}_1, {\bm{U}}_2, \dots, {\bm{U}}_L$, transforming ${\bm{A}}$ step by step as
\begin{equation}
{\bm{A}} \rightarrow {\bm{U}}_1{\bm{A}}{\bm{U}}_1^T \rightarrow \dots \rightarrow {\bm{U}}_L \dots {\bm{U}}_1{\bm{A}}{\bm{U}}_1^T \dots {\bm{U}}_L^T \triangleq {\bm{H}},
\label{eq:mmf-transform}
\end{equation}
so the corresponding matrix factorization must be a multilevel factorization
\begin{equation}
{\bm{A}} \approx {\bm{U}}_1^T {\bm{U}}_2^T \dots {\bm{U}}_\ell^T {\bm{H}} {\bm{U}}_\ell \dots {\bm{U}}_2 {\bm{U}}_1.
\label{eq:mmf-factorization}
\end{equation}
\begin{figure}[t]
$$
\xymatrix{
L_2({\mathbb{X}}) \ar[r] & \cdots \ar[r] & {\mathbb{V}}_0 \ar[r] \ar[dr] & {\mathbb{V}}_1 \ar[r] \ar[dr] & {\mathbb{V}}_2 \ar[r] \ar[dr] & \cdots \\
& & & {\mathbb{W}}_1 & {\mathbb{W}}_2 & {\mathbb{W}}_3
}
$$
\vspace{-20pt}
\caption{\label{fig:subspaces}
Multiresolution analysis splits each function space ${\mathbb{V}}_0, {\mathbb{V}}_1, \dots$ into the direct sum of a smoother part ${\mathbb{V}}_{\ell + 1}$ and a rougher part ${\mathbb{W}}_{\ell + 1}$.
}
\end{figure}
\subsection{MMF optimization problem}
Finding the best MMF factorization to a symmetric matrix ${\bm{A}}$ involves solving
\begin{equation}
\min_{\substack{{\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n] \\ {\bm{H}} \in {\mathbb{H}}^{{\mathbb{S}}_L}_n; {\bm{U}}_1, \dots, {\bm{U}}_L \in {\mathbb{O}}}} \| {\bm{A}} - {\bm{U}}_1^T \dots {\bm{U}}_L^T {\bm{H}} {\bm{U}}_L \dots {\bm{U}}_1 \|.
\label{eq:mmf-opt}
\end{equation}
Assuming that we measure error in the Frobenius norm, (\ref{eq:mmf-opt}) is equivalent to
\begin{equation}
\min_{\substack{{\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n] \\ {\bm{U}}_1, \dots, {\bm{U}}_L \in {\mathbb{O}}}} \| {\bm{U}}_L \dots {\bm{U}}_1 {\bm{A}} {\bm{U}}_1^T \dots {\bm{U}}_L^T \|^2_{\text{resi}},
\label{eq:mmf-resi}
\end{equation}
where $\| \cdot \|_{\text{resi}}^2$ is the squared residual norm
$\|{\bm{H}} \|_{\text{resi}}^2 = \sum_{i \neq j; (i, j) \not\in {\mathbb{S}}_L \times {\mathbb{S}}_L} \lvert {\bm{H}}_{i, j} \rvert^2$. The optimization problem in (\ref{eq:mmf-opt}) and (\ref{eq:mmf-resi}) is equivalent to the following 2-level one:
\begin{equation}
\min_{{\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n]} \min_{{\bm{U}}_1, \dots, {\bm{U}}_L \in {\mathbb{O}}} \| {\bm{U}}_L \dots {\bm{U}}_1 {\bm{A}} {\bm{U}}_1^T \dots {\bm{U}}_L^T \|^2_{\text{resi}}.
\label{eq:mmf-two-phases}
\end{equation}
There are two fundamental problems in solving this 2-level optimization:
\begin{compactitem}
\item For the inner optimization, the variables (i.e. Givens rotations ${\bm{U}}_1, \dots, {\bm{U}}_L$) must satisfy the orthogonality constraints.
\item For the outer optimization, finding the optimal nested sequence of indices ${\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n]$ is a combinatorics problem, given an exponential search space.
\end{compactitem}
To address the above problems, \cite{pmlr-v196-hy22a} proposes a learning algorithm combining Stiefel manifold optimization and Reinforcement Learning (RL) for the inner and outer optimization, respectively. In this paper, we assume that a nested sequence of indices ${\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n]$ is given by a fast heuristic instead of the computationally expensive RL. There are several heuristics to find the nested sequence, for example, clustering based on similarity between rows \cite{pmlr-v32-kondor14} \cite{DBLP:journals/corr/KondorTM15}. In the next section, we introduce the solution for the inner problem.
\subsection{Stiefel manifold optimization}
In order to solve the inner optimization problem of (\ref{eq:mmf-two-phases}),
we consider the following generic optimization with orthogonality constraints:
\begin{equation}
\min_{{\bm{X}} \in \mathbb{R}^{n \times p}} \mathcal{F}({\bm{X}}), \ \ \text{s.t.} \ \ {\bm{X}}^T {\bm{X}} = {\bm{I}}_p,
\label{eq:opt-prob}
\end{equation}
where ${\bm{I}}_p$ is the identity matrix and $\mathcal{F}({\bm{X}}): \mathbb{R}^{n \times p} \rightarrow \mathbb{R}$ is a differentiable function. The feasible set $\mathcal{V}_p(\mathbb{R}^n) = \{{\bm{X}} \in \mathbb{R}^{n \times p}: {\bm{X}}^T {\bm{X}} = {\bm{I}}_p\}$ is referred to as the Stiefel manifold of $p$ orthonormal vectors in $\mathbb{R}^{n}$. We will view $\mathcal{V}_p(\mathbb{R}^n)$ as an embedded submanifold of $\mathbb{R}^{n \times p}$. In the case there are more than one orthogonal constraints, (\ref{eq:opt-prob}) is written as
\begin{equation}
\min_{{\bm{X}}_1 \in \mathcal{V}_{p_1}(\mathbb{R}^{n_1}), \dots, {\bm{X}}_q \in \mathcal{V}_{p_q}(\mathbb{R}^{n_q})} \mathcal{F}({\bm{X}}_1, \dots, {\bm{X}}_q)
\label{eq:opt-prob-extended}
\end{equation}
where there are $q$ variables with corresponding $q$ orthogonal constraints.
In the MMF optimization problem (\ref{eq:mmf-two-phases}), suppose we are already given ${\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n]$ meaning that the indices of active rows/columns at each resolution were already determined, for simplicity. In this case, we have $q = L$ number of variables such that each variable ${\bm{X}}_\ell = {\bm{O}}_\ell \in \mathbb{R}^{k \times k}$, where ${\bm{U}}_\ell = {\bm{I}}_{n - k} \oplus_{{\mathbb{I}}_\ell} {\bm{O}}_\ell \in \mathbb{R}^{n \times n}$ in which ${\mathbb{I}}_\ell$ is a subset of $k$ indices from ${\mathbb{S}}_\ell$, must satisfy the orthogonality constraint. The corresponding objective function is
\begin{equation}
\mathcal{F}({\bm{O}}_1, \dots, {\bm{O}}_L) = \| {\bm{U}}_L \dots {\bm{U}}_1 {\bm{A}} {\bm{U}}_1^T \dots {\bm{U}}_L^T \|^2_{\text{resi}}.
\label{eq:mmf-core}
\end{equation}
Therefore, we can cast the inner problem of (\ref{eq:mmf-two-phases}) as an optimization problem on the Stiefel manifold, and solve it by the specialized steepest gradient descent \cite{Tagare2011NotesOO}.
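As an illustration, a single descent step on the Stiefel manifold can be sketched as follows; this is a generic projection-and-retraction step for the constraint ${\bm{X}}^T {\bm{X}} = {\bm{I}}_p$ (using a QR retraction), not necessarily the specialized update of \cite{Tagare2011NotesOO}.
\begin{verbatim}
import torch

def stiefel_step(X, grad, lr=1e-2):
    """One retraction-based descent step for min F(X) s.t. X^T X = I (X: n x p)."""
    # project the Euclidean gradient onto the tangent space of the Stiefel manifold
    sym = (X.t() @ grad + grad.t() @ X) / 2
    G = grad - X @ sym
    # take a step, then retract back onto the manifold via a sign-fixed QR factorization
    Q, R = torch.linalg.qr(X - lr * G)
    return Q * torch.sign(torch.diagonal(R))
\end{verbatim}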
\section{Method} \label{sec:method}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{waveletDCRNN.png}
\centering
\caption{Architecture of Fast Temporal Wavelet Neural Network. \textbf{WC:} graph wavelet convolution given MMF's wavelet basis.}
\label{fig:architecture}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{WCGRU.png}
\centering
\caption{Architecture for the Wavelet Convolutional Gated Recurrent Unit. \textbf{WC}: graph wavelet convolution given MMF's wavelet basis}
\label{fig:architecture2}
\end{figure}
\subsection{Wavelet basis and convolution on graph}
Section \ref{sec:multiresolution-analysis} introduces the theory of multiresolution analysis behind MMF as well as the construction of a \textit{sparse} wavelet basis for a symmetric matrix ${\bm{A}} \in \mathbb{R}^{n \times n}$. Without loss of generality, we assume that ${\bm{A}}$ is a weight matrix of a weighted undirected graph $\mathcal{G} = (V, E)$ in which $V = \{v_1, \dots, v_n\}$ is the set of vertices and $E = \{(v_i, v_j)\}$ is the set of edges, where the weight of edge $(v_i, v_j)$ is given by ${\bm{A}}_{i, j}$. Given a graph signal ${\bm{f}} \in \mathbb{R}^n$ that is understood as a function $f: V \rightarrow \mathbb{R}$ defined on the vertices of the graph, the wavelet transform (up to level $L$) expresses this graph signal, without loss of generality $f \in {\mathbb{V}}_0$, as:
$$f(v) = \sum_{\ell = 1}^L \sum_m \alpha_m^\ell \psi_m^\ell(v) + \sum_m \beta_m \phi_m^L(v), \ \ \ \ \text{for each} \ \ v \in V,$$
where $\alpha_m^\ell = \langle f, \psi_m^\ell \rangle$ and $\beta_m = \langle f, \phi_m^L \rangle$ are the wavelet coefficients. Based on the wavelet basis construction via MMF detailed in \cite{pmlr-v196-hy22a}:
\begin{compactitem}
\item For $L$ levels of resolution, we get exactly $L$ mother wavelets $\overline{\psi} = \{\psi^1, \psi^2, \dots, \psi^L\}$, each corresponds to a resolution (see Figure~\ref{fig:waveletVisual} for visualizations).
\item The rows of ${\bm{H}} = {\bm{A}}_L$ make exactly $n - L$ father wavelets $\overline{\phi} = \{\phi^L_m = {\bm{H}}_{m, :}\}_{m \in {\mathbb{S}}_L}$. In total, a graph of $n$ vertices has exactly $n$ wavelets, both mothers and fathers.
\end{compactitem}
Analogous to the convolution based on Graph Fourier Transform \cite{ae482107de73461787258f805cf8f4ed}, each convolution layer $k \in \{1, .., K\}$ of wavelet neural network transforms an input vector ${\bm{f}}^{(k - 1)}$ of size $\lvert V \rvert \times F_{k - 1}$ into an output ${\bm{f}}^{(k)}$ of size $\lvert V \rvert \times F_k$ as
\begin{equation}
{\bm{f}}^{(k)}_{:, j} = \sigma \bigg( {\bm{W}} \sum_{i = 1}^{F_{k - 1}} {\bm{g}}^{(k)}_{i, j} {\bm{W}}^T {\bm{f}}^{(k - 1)}_{:, i} \bigg) \ \ \ \ \text{for} \ \ j = 1, \dots, F_k,
\label{eq:wavevlet-conv}
\end{equation}
where ${\bm{W}}$ is our wavelet basis matrix obtained by concatenating $\overline{\phi}$ and $\overline{\psi}$ column-by-column, ${\bm{g}}^{(k)}_{i, j}$ is a parameter/filter in the form of a diagonal matrix learned in the spectral domain, and $\sigma$ is an element-wise non-linearity (e.g., ReLU, sigmoid, etc.). In Eq.~(\ref{eq:wavevlet-conv}), we first apply the wavelet transform to map a graph signal ${\bm{f}}$ into the spectral domain (i.e., $\hat{{\bm{f}}} = {\bm{W}}^T {\bm{f}}$ is the forward transform and ${\bm{f}} = {\bm{W}} \hat{{\bm{f}}}$ is the inverse transform), then apply a learnable filter ${\bm{g}}$ to the wavelet coefficients, transform back to the spatial domain, and finally pass everything through a non-linearity $\sigma$. Since the wavelet basis matrix ${\bm{W}}$ is \textit{sparse}, both the wavelet transform and its inverse can be implemented efficiently via sparse matrix multiplication.
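A minimal PyTorch sketch of this layer, assuming the sparse wavelet basis ${\bm{W}}$ is stored as a COO tensor and keeping each diagonal filter as a learnable vector, is given below; class and variable names are illustrative rather than our released implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class WaveletConv(nn.Module):
    def __init__(self, W_basis, in_channels, out_channels):
        super().__init__()
        self.W = W_basis.coalesce()            # sparse |V| x |V| wavelet basis
        self.Wt = self.W.t().coalesce()        # transpose = forward wavelet transform
        n = self.W.size(0)
        # one learnable diagonal spectral filter per (input, output) channel pair
        self.g = nn.Parameter(torch.randn(in_channels, out_channels, n))

    def forward(self, f):                      # f: |V| x in_channels
        f_hat = torch.sparse.mm(self.Wt, f)    # wavelet coefficients of each channel
        out = []
        for j in range(self.g.size(1)):
            # diagonal filtering in the spectral domain, summed over input channels
            filtered = (self.g[:, j, :].t() * f_hat).sum(dim=1, keepdim=True)
            out.append(torch.sparse.mm(self.W, filtered))   # inverse transform
        return torch.relu(torch.cat(out, dim=1))            # |V| x out_channels
\end{verbatim}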
\subsection{Temporal Wavelet Neural Networks}
Capturing spatiotemporal dependencies among time series in various spatiotemporal forecasting problems demands both spatial and temporal models. We build our novel \textit{Fast Temporal Wavelet Graph Neural Network} on the architectural backbone of the \textit{Diffusion Convolutional Recurrent Neural Network} (DCRNN) \cite{dcrnn}, which combines both spatial and temporal models to solve these tasks.
\textbf{Spatial Dependency Model} \quad The spatial dynamics in the network are captured by a diffusion process. Let $G = ({\bm{X}},{\bm{A}})$ represent an undirected graph, where ${\bm{X}} =[{\bm{x}}_{1}^T,\dots,{\bm{x}}_{N}^T]^T \in \mathbb{R}^{N \times D}$ denotes the signals of $N$ nodes, each with $D$ features. Define further the right-stochastic edge weight matrix $\Tilde{{\bm{A}}} \in \mathbb{R}^{N \times N}$, in which $\sum_j \Tilde{{\bm{A}}}_{ij}=1 \ \forall i$. In the simplest case, when $\Tilde{{\bm{L}}}={\bm{I}}-\Tilde{{\bm{A}}}$ is the random walk normalized Laplacian, the diffusion process on the graph is governed by the following equation \cite{GRAND}:
\begin{equation}
\frac{{\rm d} {\bm{X}}(t)}{{\rm d}t} = (\Tilde{{\bm{A}}}-{\bm{I}}){\bm{X}}(t)
\label{eq:diffusion}
\end{equation}
where ${\bm{X}}(t) = [{\bm{x}}_{1}^T(t),\dots,{\bm{x}}_{N}^T(t)]^T \in \mathbb{R}^{N \times D}$ and ${\bm{X}}(0)={\bm{X}}$. Applying forward Euler discretization with step size 1 gives:
\begin{align}
{\bm{X}}(k) &= {\bm{X}}(k-1) + (\Tilde{{\bm{A}}}-{\bm{I}}){\bm{X}}(k-1) \notag \\
&= {\bm{X}}(k-1) -\Tilde{{\bm{L}}}{\bm{X}}(k-1) \notag \\
& = \Tilde{{\bm{A}}}{\bm{X}}(k-1) \notag \\
& = \Tilde{{\bm{A}}}^k{\bm{X}}(0)
\label{eq:random_walk}
\end{align}
Eq.~\ref{eq:random_walk} is similar to the well-established GCN architecture proposed in \cite{Kipf_GCN}. The diffusion convolution operation over a graph signal ${\bm{X}} \in \mathbb{R}^{N \times D}$ and a filter $f_{\boldsymbol{\theta}}$ is then defined as:
\begin{equation}
{\bm{X}}_{:,d}\star_\mathcal{G}f_{\boldsymbol{\theta}} = \sum_{k=0}^{K-1}\theta_k \Tilde{{\bm{A}}}^k {\bm{X}}_{:,d} \quad \forall d \in \{1,\dots,D\}
\label{eq:graph_conv}
\end{equation}
where ${\bm{\Theta}} \in \mathbb{R}^{K \times 2}$ are the parameters for the filter.
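For illustration, the diffusion convolution can be sketched as a $K$-step polynomial in $\Tilde{{\bm{A}}}$ applied to all feature channels; for simplicity the sketch shares a single $\theta_k$ per diffusion step rather than the full per-direction parameterization.
\begin{verbatim}
import torch

def diffusion_conv(A_tilde, X, theta):
    """sum_k theta[k] * A_tilde^k @ X, applied to every feature channel of X."""
    out = torch.zeros_like(X)
    Xk = X                               # A_tilde^0 X
    for k in range(theta.shape[0]):
        out = out + theta[k] * Xk
        Xk = A_tilde @ Xk                # next power of the random-walk matrix
    return out
\end{verbatim}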
\textbf{Temporal Dependency Model} \quad DCRNN leverages recurrent neural networks (RNNs) to model the temporal dependency. In particular, the matrix multiplications in the GRU are replaced with the diffusion convolution, yielding the \textit{Diffusion Convolutional Gated Recurrent Unit} (DCGRU).
\begin{align}
{\bm{r}}^{(t)} &= \sigma({\bm{\Theta}}_r \star_\mathcal{G} [{\bm{X}}^{(t)}, {\bm{H}}^{(t-1)}] + \boldsymbol{b_r} ) \notag \\
\boldsymbol{u}^{(t)} &= \sigma({\bm{\Theta}}_u \star_\mathcal{G} [{\bm{X}}^{(t)}, {\bm{H}}^{(t-1)}] + \boldsymbol{b_u} ) \notag \\
{\bm{C}}^{(t)} &= \tanh({\bm{\Theta}}_C \star_\mathcal{G} [{\bm{X}}^{(t)}, ({\bm{r}}^{(t)} \odot {\bm{H}}^{(t-1)})] + \boldsymbol{b_c} ) \notag \\
{\bm{H}}^{(t)} &= \boldsymbol{u}^{(t)} \odot {\bm{H}}^{(t-1)} + (1-\boldsymbol{u}^{(t)}) \odot {\bm{C}}^{(t)} \notag
\end{align}
where ${\bm{X}}^{(t)}, {\bm{H}}^{(t)}$ denote the input and output at time $t$, while ${\bm{r}}^{(t)},\boldsymbol{u}^{(t)}$ are the reset and update gates at time $t$, respectively.
Both the encoder and the decoder are recurrent neural networks with DCGRU cells, following the \textit{Sequence-to-Sequence} style. To mitigate the distribution differences between training and testing data, the scheduled sampling technique \cite{bengio2015scheduled} is used, where the model is fed either the ground truth with probability $\epsilon_i$ or its own prediction with probability $1-\epsilon_i$.
For our novel \textit{Fast Temporal Wavelet Graph Neural Network} (FTWGNN), the fundamental difference is that instead of using the temporal traffic graph directly as in DCRNN, we use the sparse wavelet basis matrix ${\bm{W}}$ which is extracted via MMF (see Section \ref{sec:multiresolution-analysis}) and replace the diffusion convolution by our fast \textit{wavelet convolution}. Given the sparsity of our wavelet basis, we significantly reduce the overall computational time and memory usage. Each Givens rotation matrix ${\bm{U}}_\ell$ (see Def.~\ref{def:rotation-matrix}) is a highly-sparse orthogonal matrix with a non-zero core of size $K \times K$. The number of non-zeros in MMF's wavelet basis ${\bm{W}}$, which can be computed as the product ${\bm{U}}_1{\bm{U}}_2 \cdots {\bm{U}}_L$, is $O(LK^2)$ where $L$ is the number of resolutions (i.e., the number of Givens rotation matrices) and $K$ is the number of columns in a Givens rotation matrix. \cite{pmlr-v32-kondor14} and \cite{pmlr-v196-hy22a} have shown in both theory and practice that $L$ only needs to be in $O(n)$ where $n$ is the number of columns and $K$ small (e.g., 2, 4, 8) to get a decent approximation/compression for a symmetric hierarchical matrix. Technically, MMF is able to compress a symmetric hierarchical matrix from the original quadratic size $n \times n$ to a linear number of non-zero elements $O(n)$. Practically, all the Givens rotation matrices $\{{\bm{U}}_\ell\}_{\ell = 1}^L$ and the wavelet basis ${\bm{W}}$ can be stored in Coordinate Format (COO), and the wavelet transform and its inverse in wavelet convolution (see Eq.~\ref{eq:wavevlet-conv}) can be implemented efficiently by sparse matrix multiplication in PyTorch's sparse library \cite{paszke2019pytorch}. The architecture of our model is shown in Figures \ref{fig:architecture} and \ref{fig:architecture2}.
\section{Experiments} \label{sec:experiments}
Our PyTorch implementation is publicly available at \url{https://github.com/HySonLab/TWGNN}. Implementation of Multiresolution Matrix Factorization and graph wavelet computation \cite{pmlr-v196-hy22a} is publicly available at \url{https://github.com/risilab/Learnable_MMF}.
To showcase the competitive performance and remarkable acceleration of FTWGNN, we conducted experiments on two well-known traffic forecasting benchmarks METR-LA and PEMS-BAY, and one challenging ECoG dataset AJILE12. Following \cite{dcrnn}, we compare our model with widely used time series models, including:
\begin{enumerate*}
\item HA: Historical Average, which models the traffic flow as a seasonal process, and uses weighted average of previous seasons as the prediction;
\item $\text{ARIMA}_{kal}$: Auto-Regressive Integrated Moving Average model with Kalman filter;
\item VAR: Vector Auto-Regression.
\item SVR: Support Vector Regression.
\item FNN: Feed forward neural network with two hidden layers and L2 regularization.
\item FC-LSTM: Recurrent Neural Network with fully connected LSTM hidden units~\cite{sutskever2014sequence}.
\end{enumerate*}
Methods are evaluated on three metrics:
\begin{enumerate*}[label= \textbf{(\roman*)}]
\item Mean Absolute Error (MAE);
\item Mean Absolute Percentage Error (MAPE); and
\item Root Mean Squared Error (RMSE).
\end{enumerate*}
FTWGNN and DCRNN are implemented using PyTorch~\cite{paszke2019pytorch} on an NVIDIA A100-SXM4-80GB GPU. Details about parameters and model structure can be found in the Appendix~\ref{app:baseline} and \ref{app:ftwgnn}.
\begin{table*}[hbt!]
\centering
\resizebox{\textwidth}{!}{\begin{tabular}{c||c|c|cccccccc}
\hline
Dataset & $T$ & Metric & HA & $\text{ARIMA}_{kal}$ & VAR & SVR & FNN & FC-LSTM & DCRNN & FTWGNN \\
\hline
\multirow{10}{*}{METR-LA}&&MAE & 4.16 & 3.99 &4.42 & 3.99 & 3.99 & 3.44 & 2.77 & \textbf{2.70}\\
&15 min &RMSE& 7.80 & 8.21 & 7.89 & 8.45 & 7.94 & 6.30 & 5.38 & \textbf{5.15}\\
&&MAPE& 13.0\% & 9.6\% & 10.2\% & 9.3\% & 9.9\% & 9.6\% & 7.3\% & \textbf{6.81\%}\\
\cline{2-11}
&&MAE& 4.16 & 5.15 & 5.41 & 5.05 & 4.23 & 3.77 & 3.15 & \textbf{3.0}\\
&30 min &RMSE& 7.80 & 10.45 & 9.13 & 10.87 & 8.17 & 7.23 & 6.45 & \textbf{5.95}\\
&&MAPE& 13.0\% & 12.7\% & 12.7\% & 12.1\% & 12.9\% & 10.9\% & 8.80\% & \textbf{8.01\%}\\
\cline{2-11}
&&MAE& 4.16 & 6.90 & 6.52 & 6.72 & 4.49 & 4.37 & 3.60 & \textbf{3.42}\\
&60 min &RMSE& 7.80 & 13.23 & 10.11 & 13.76 & 8.69 & 8.69 & 7.59 & \textbf{6.92}\\
&&MAPE& 13.0\% & 17.4\% & 15.8\% & 16.7\% & 14.0\% & 13.2\% & 10.5\% &\textbf{9.83\%}\\
\hline \hline
\multirow{10}{*}{PEMS-BAY}&&MAE& 2.88 & 1.62 & 1.74 & 1.85 & 2.20 & 2.05 & 1.38 & \textbf{1.14}\\
&15 min &RMSE& 5.59 & 3.30 & 3.16 & 3.59 & 4.42 & 4.19 & 2.95 & \textbf{2.40}\\
&&MAPE& 6.8\% & 3.5\% & 3.6\% & 3.8\% & 5.2\% & 4.8\% & 2.9\% & \textbf{2.28\%}\\
\cline{2-11}
&&MAE& 2.88 & 2.33 & 2.32 & 2.48 & 2.30 & 2.20 & 1.74 & \textbf{1.50}\\
&30 min &RMSE& 5.59 & 4.76 & 4.25 & 5.18 & 4.63 & 4.55 & 3.97 & \textbf{3.27}\\
&&MAPE& 6.8\% & 5.4\% & 5.0\% & 5.5\% & 5.43\% & 5.2\% & 3.9\% & \textbf{3.15\%}\\
\cline{2-11}
&&MAE& 2.88 & 3.38 & 2.93 & 3.28 & 2.46 & 2.37 & 2.07 & \textbf{1.79}\\
&60 min &RMSE& 5.59 & 6.5 & 5.44 & 7.08 & 4.98 & 4.96 & 4.74 & \textbf{3.99}\\
&&MAPE& 6.8\% & 8.3\% & 6.5\% & 8.0\% & 5.89\% & 5.7\% & 4.9\% & \textbf{4.14\%}\\
\hline
\end{tabular}}
\caption{Performance comparison of different models for traffic speed forecasting.}
\vspace{0.3cm}
\label{tab:result_acc}
\end{table*}
\begin{table}[!hbt]
\centering
\small
\begin{tabular}{c||c|cc|c}
\hline
Dataset & $T$ & DCRNN & FTWGNN & Speedup \\
\hline
\multirow{3}{*}{METR-LA}&15 min& 350s & \textbf{217s} & 1.61x \\
\cline{2-5}
& 30 min& 620s & \textbf{163s} & 3.80x \\
\cline{2-5}
& 60 min& 1800s & \textbf{136s} & 13.23x \\
\hline
\multirow{3}{*}{PEMS-BAY}&15 min& 427s & \textbf{150s} & 2.84x \\
\cline{2-5}
& 30 min& 900s & \textbf{173s} & 5.20x \\
\cline{2-5}
& 60 min& 1800s & \textbf{304s} & 5.92x \\
\hline
\multirow{3}{*}{AJILE12}&1 sec& 80s & \textbf{35s} & 2.28x \\
\cline{2-5}
&5 sec& 180s & \textbf{80s} & 2.25x \\
\cline{2-5}
& 15 sec& 350s & \textbf{160s} & 2.18x \\
\hline
\end{tabular}
\caption{Training time per epoch of DCRNN and FTWGNN.}
\label{tab:trainTime}
\end{table}
\textbf{Adjacency matrix} \quad Following DCRNN~\cite{dcrnn}, the traffic sensor network is represented by an adjacency matrix constructed with a thresholded Gaussian kernel~\cite{shuman2013signal}. Specifically, for each pair of sensors $v_i$ and $v_j$, the edge weight from $v_i$ to $v_j$, denoted by $A_{ij}$, is defined as
\begin{equation} \label{eq:adjacency}
A_{ij} \coloneqq
\left\{
\begin{aligned}
& \exp\left( -\frac{\text{dist}(v_i, v_j)^2}{\sigma^2} \right), \quad & \text{dist}(v_i, v_j) \leq k \\
& 0, \quad & \text{otherwise}
\end{aligned}
\right. \,,
\end{equation}
where $\text{dist}(v_i, v_j)$ denotes the spatial distance from $v_i$ to $v_j$, $\sigma$ is the standard deviation of the distances, and $k$ is the distance threshold.
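For concreteness, a minimal NumPy sketch of this construction (assuming a precomputed matrix of pairwise road-network distances; the function name is ours, for illustration only) is:
\begin{verbatim}
import numpy as np

def gaussian_kernel_adjacency(dist, k):
    """Thresholded Gaussian kernel adjacency of Eq. (eq:adjacency).

    dist: (N, N) array of pairwise road-network distances.
    k:    distance threshold; pairs farther apart than k get weight 0.
    """
    sigma = dist.std()                     # std. deviation of the distances
    A = np.exp(-(dist ** 2) / sigma ** 2)  # Gaussian kernel weights
    A[dist > k] = 0.0                      # zero out far-apart sensor pairs
    return A
\end{verbatim}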
Nevertheless, such a user-defined adjacency matrix requires expert knowledge and thus may not work in other domains, \eg, brain networks. In the ECoG time series forecasting case, the adjacency matrix is instead computed based on the popular Locally Linear Embedding (LLE)~\cite{saul2003think}. In particular, for the data matrix ${\bm{X}}=[{\bm{x}}_1, \ldots, {\bm{x}}_N] \in {\mathbb{R}}^{T \times N}$, where ${\bm{x}}_i$ denotes the time series data of node $i$ for $i \in \{1, \ldots, N\}$, an adjacency matrix ${\bm{A}}$ gathering the coefficients of the affine dependencies among $\{{\bm{x}}_i\}_{i=1}^N$ is identified by solving the following optimization problem
\begin{align}
{\bm{A}} \coloneqq \arg\min_{\hat{{\bm{A}}} \in {\mathbb{R}}^{N \times N}}
{} & {} \norm{ {\bm{X}} - {\bm{X}} \hat{{\bm{A}}}^T}_{\text{F}}^2 + \lambda_{A} \norm{ \hat{{\bm{A}}} }_1 \notag \\
\text{s.to} \quad
{} & {} {\bm{1}}^T_N \hat{{\bm{A}}} = {\bm{1}}^T_N\,,
\quad \text{diag}(\hat{{\bm{A}}}) = {\bm{0}} \,, \label{manifold.identify}
\end{align}
where the constraint ${\bm{1}}^T_N \hat{{\bm{A}}} = {\bm{1}}^T_N$ realizes the affine combinations, while
$\text{diag}(\hat{{\bm{A}}}) = {\bm{0}}$ excludes the trivial solution $\hat{{\bm{A}}} = {\bm{I}}_N$. Furthermore, to promote the local smoothness of the graph, each data point ${\bm{x}}_i$ is assumed to be approximated by only a few neighbors $\{{\bm{x}}_{j_1}, {\bm{x}}_{j_2}, \ldots, {\bm{x}}_{j_k}\}$, so $\hat{{\bm{A}}}$ is encouraged to be sparse via the $\ell_1$-norm regularizer $\norm{ \hat{{\bm{A}}} }_1$.
Task (\ref{manifold.identify}) is a composite convex minimization problem with affine constraints, and hence can be solved by the method of~\cite{slavakis2018fejer}.
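For illustration only, Task (\ref{manifold.identify}) can also be handed to a generic convex solver; the sketch below uses \texttt{cvxpy}, which is our assumption for demonstration rather than the Fej\'er-monotone solver of~\cite{slavakis2018fejer}:
\begin{verbatim}
import cvxpy as cp
import numpy as np

def identify_adjacency(X, lam=1e-5):
    """Solve the LLE-style adjacency identification problem.

    X:   (T, N) data matrix, one column per node.
    lam: l1 regularization weight (lambda_A).
    """
    N = X.shape[1]
    A = cp.Variable((N, N))
    objective = cp.sum_squares(X - X @ A.T) + lam * cp.sum(cp.abs(A))
    constraints = [cp.sum(A, axis=0) == np.ones(N),  # 1^T A = 1^T
                   cp.diag(A) == 0]                  # exclude A = I
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return A.value
\end{verbatim}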
\subsection{Traffic prediction} \label{sec:traffic}
Two real-world large-scale traffic datasets are considered:
\begin{compactitem}
\item \textbf{METR-LA} Data from 207 sensors on the highways of Los Angeles County~\cite{metrladata} over the 4-month period from Mar 1st, 2012 to Jun 30th, 2012.
\item \textbf{PEMS-BAY} Data from 325 sensors in the Bay Area over the 6-month period from Jan 1st, 2017 to May 31st, 2017, collected by the California Transportation Agencies (CalTrans) Performance Measurement System (PeMS).
\end{compactitem}
\begin{figure*}[!ht]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{augmentedECG.pdf}
{\phantomcaption\ignorespaces\label{fig:augmentedECG}}
\vspace{-5pt}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{ECG1s.pdf}
{\phantomcaption\ignorespaces\label{fig:expECG1s}}
\vspace{-5pt}
\end{subfigure}
\caption{(a) Augmented ECoG signals by spline interpolation envelopes, (b) 1-second prediction of ECoG signals}
\vspace{-5pt}
\end{figure*}
\begin{table*}[!hbt]
\centering
\resizebox{0.7\textwidth}{!}{\begin{tabular}{c||c|c|ccccccc}
\hline
Dataset & $T$ & Metric & HA & VAR & LR & SVR & LSTM & DCRNN & FTWGNN \\
\hline
\multirow{10}{*}{AJILE12}&&MAE& 0.88 & 0.16 & 0.27 & 0.27 & 0.07 & 0.05 & \textbf{0.03} \\
&1 sec &RMSE& 1.23 & 0.25 & 0.37 & 0.41 & \textbf{0.09} & 0.45 & 0.35 \\
&&MAPE& 320\% & 58\% & 136\% & 140\% & 38\% & 7.84\% & \textbf{5.27\%} \\
\cline{2-10}
&&MAE & 0.88 & 0.66 & 0.69 & 0.69 & 0.39 & 0.16 & \textbf{0.11}\\
&5 sec &RMSE& 1.23 & 0.96 & 0.92 & 0.93 & 0.52 & 0.24 & \textbf{0.15} \\
&&MAPE& 320\% & 221\% & 376\% & 339\% & 147\% & 64\% & \textbf{57\%}\\
\cline{2-10}
&&MAE& 0.88 & 0.82 & 0.86 & 0.86 & 0.87 & 0.78 & \textbf{0.70} \\
&15 sec &RMSE& 1.23 & 1.15 & 1.13 & 1.13 & 1.14 & 1.01 & \textbf{0.93} \\
&&MAPE& 320\% & 320\% & 448\% & 479\% & 330\% & 294\% & \textbf{254\%} \\
\hline
\end{tabular}}
\caption{\label{tab:resultECG} Performance comparison on ECoG signals forecast.}
\vspace{-5pt}
\end{table*}
The distance function $\text{dist}(v_i, v_j)$ in (\ref{eq:adjacency}) represents the road network distance from sensor $v_i$ to sensor $v_j$, producing an asymmetric adjacency matrix for a directed graph. Therefore, the symmetrized matrix $\hat{{\bm{A}}} \coloneqq \frac{1}{2} ({\bm{A}} + {\bm{A}}^T)$ is taken to compute the wavelet basis matrix ${\bm{W}}$ following Sec.~\ref{sec:multiresolution-analysis}. Parameters can be found in the Appendix~\ref{app:ftwgnn}.
Table \ref{tab:result_acc} shows the evaluation of the different approaches on the two traffic datasets, while Table \ref{tab:trainTime} reports the training time per epoch of FTWGNN and DCRNN. Overall, FTWGNN outperforms DCRNN by only about 10\% in accuracy, but it is significantly faster, by about 5 times on average.
\subsection{Brain networks} \label{sec:brain}
\textit{Annotated Joints in Long-term Electrocorticography for 12 human participants} (AJILE12), publicly available at~\cite{brainnetdata}, records intracranial neural activity via invasive ECoG, which involves implanting electrodes directly under the skull~\cite{ajile12}. For each participant, ECoG recordings are sampled sporadically at 500 Hz over $7.4\pm2.2$ days (mean$\pm$std) from at least 64 electrodes, each of which is encoded with a unique set of Montreal Neurological Institute (MNI) x, y, z coordinates.
The proposed model is tested on the first hour of recordings of subject number 5, which has 116 good-quality electrodes. Signals are downsampled to 1 Hz, producing a network of 116 nodes, each with $3{,}600$ data points. Furthermore, the signals are augmented by applying spline interpolation to obtain the upper and lower envelopes along with an average curve~\cite{melia2014filtering} (see Figure~\ref{fig:augmentedECG}). The adjacency matrix ${\bm{A}}$ is obtained by solving task~(\ref{manifold.identify}), and the wavelet basis matrix ${\bm{W}}$ is then constructed following Sec.~\ref{sec:multiresolution-analysis}. Parameters can be found in Appendix~\ref{app:ftwgnn}.
Table \ref{tab:resultECG} reports the performance of different methods on the AJILE12 dataset for 1-, 5-, and 15-second prediction. Generally, errors are much higher than in the traffic forecasting problem, since the connections within the brain network are much more complicated and ambiguous~\cite{breakspear2017dynamic}. The high errors of the HA and VAR methods show that the AJILE12 data follows no particular pattern or periodicity, making long-step prediction extremely challenging. Although DCRNN achieves decent quantitative performance, Figure~\ref{fig:expECG1s} demonstrates the qualitative superiority of FTWGNN: DCRNN fails to approximate the trend and the magnitude of the signals. Even though FTWGNN performs well at 1-second prediction, it produces unstable and erroneous forecasts at the longer horizons of 5 or 15 seconds (see Figure~\ref{fig:expECG5s}). Meanwhile, similar to the traffic prediction case, FTWGNN also sees a remarkable improvement in computation time, by around 2 times on average (see Table \ref{tab:trainTime}).
Table \ref{tab:basisDensity} shows the density of our MMF wavelet basis ${\bm{W}}$ for each dataset. It is important to note that ${\bm{W}}$ is extremely sparse, with no more than $2\%$ non-zero entries, which makes our model run much faster with significantly lower memory usage while achieving competitive results.
\section{Conclusion} \label{sec:conclusion}
We propose a new class of spatial-temporal graph neural networks based on multiresolution analysis and wavelet theory on discrete spaces, with an RNN backbone, coined the \textit{Fast Temporal Wavelet Graph Neural Network} (FTWGNN). Fundamentally, we employ \textit{Multiresolution Matrix Factorization} to factorize the underlying graph structure and extract its corresponding sparse wavelet basis, which consequently allows us to construct an efficient wavelet transform and convolution on graphs. Experiments on real-world large-scale datasets show promising results and the computational efficiency of FTWGNN in network time series modeling, including traffic prediction and brain networks. Several future directions are:
\begin{enumerate*}[label= \textbf{(\roman*)}]
\item investigating synchronization phenomena in brain networks \cite{honda2018mathematical};
\item developing a robust model against outliers/missing data that appear frequently in practice.
\end{enumerate*}
\section{Appendix} \label{sec:appendix}
\subsection{Baseline settings} \label{app:baseline}
\textbf{HA} \quad Historical Average, which models the traffic flow as a seasonal/periodical process, and uses weighted average of previous seasons/periods as the prediction.
$\textbf{ARIMA}_{kal}$ \quad Auto-Regressive Integrated Moving Average model with a Kalman filter, implemented with the \textit{statsmodels} package in Python.
\textbf{VAR} \quad Vector Auto-Regressive model~\cite{hamilton2020time} with orders $(3, 0, 1)$, implemented with the Python \textit{statsmodels} package.
\textbf{LR} \quad Linear Regression with 5 historical observations.
\textbf{SVR} \quad Linear Support Vector Regression~\cite{smola2004svr} with 5 historical observations.
\textbf{FNN} \quad Feed-forward neural network with two hidden layers, each with 256 units. The initial learning rate is $10^{-3}$ and decays by a factor of 10 every 20 epochs. In addition, for all hidden layers, dropout with ratio 0.5 and L2 weight decay of $10^{-2}$ are used. The model is trained to minimize the MAE with batch size 64.
\textbf{FC-LSTM} \quad The encoder-decoder framework using LSTM with peephole connections~\cite{sutskever2014sequence}. The encoder and the decoder contain two recurrent layers, each consisting of 256 LSTM units, with an L1 weight decay of $2 \times 10^{-5}$ and an L2 weight decay of $5 \times 10^{-4}$. The initial learning rate is $10^{-4}$ and decays by a factor of 10 every 20 epochs.
\textbf{DCRNN} \quad Settings of the Diffusion Convolutional Recurrent Neural Network follow its original work~\cite{dcrnn}.
\subsection{FTWGNN settings} \label{app:ftwgnn}
\textbf{Data preparation} \quad For all datasets, the train/validation/test ratio is $0.7/0.2/0.1$, with a batch size of 64.
\textbf{Adjacency matrix} \quad The distance threshold in Eq.~\ref{eq:adjacency} is set to $k=0.01$ for the traffic networks, while for the brain network, the parameter $\lambda_A$ in Task (\ref{manifold.identify}) is set to $10^{-5}$.
\textbf{Wavelet basis} \quad For the traffic datasets, 100 mother wavelets are extracted, \ie, $L=100$, while for the AJILE12 dataset, $L=10$ is used.
\textbf{Model architecture} \quad For the wavelet convolution RNN, both the encoder and the decoder contain two recurrent layers, each with 64 units. The initial learning rate is $10^{-2}$, decaying by a factor of 10 every 20 epochs; the dropout ratio is $0.1$; and the maximum diffusion step $K$ is set to 2. The Adam optimizer~\cite{kingma2014adam} is used.
\begin{table}[!ht]
\centering
\begin{tabular}{|c|cc|}
\hline
\textbf{Dataset} & \textbf{Fourier basis} & \textbf{Wavelet basis} \\
\hline
\hline
METR-LA & 99.04\% & \textbf{1.11\%} \\
\hline
PEMS-BAY & 96.35\% & \textbf{0.63\%} \\
\hline
AJILE12 & 100\% & \textbf{1.81\%} \\
\hline
\end{tabular}
\caption{Density (percentage of non-zeros) of the Fourier basis versus the MMF wavelet basis.}
\label{tab:basisDensity}
\end{table}
\subsection{Visualization of the AJILE12 data}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{ECG5sec.pdf}
\caption{5-second prediction of ECoG signals.}
\label{fig:expECG5s}
\end{figure}
\begin{figure*}[!ht]
\subfloat[$\ell=1$]
{\includegraphics[width=0.33\textwidth]{wavelet_0.pdf}}
\subfloat[$\ell=20$]
{\includegraphics[width=0.33\textwidth]{wavelet_19.pdf}}
\subfloat[$\ell=49$]
{\includegraphics[width=0.33\textwidth]{wavelet_48.pdf}}
\caption{Visualization of some of the wavelets on the brain network of 116 nodes. The low index wavelets (low $\ell$) are highly localized, whereas the high index ones are smoother and more dispersed.}
\label{fig:waveletVisual}
\end{figure*}
\section{Background} \label{sec:background}
\subsection{Multiresolution Matrix Factorization}
Most commonly used matrix factorization algorithms, such as principal component analysis (PCA), singular value decomposition (SVD), or non-negative matrix factorization (NMF), are inherently single-level algorithms. Saying that a symmetric matrix ${\bm{A}} \in \mathbb{R}^{n \times n}$ is of rank $r \ll n$ means that it can be expressed in terms of a dictionary of $r$ mutually orthogonal unit vectors $\{u_1, u_2, \dots, u_r\}$ in the form
$${\bm{A}} = \sum_{i = 1}^r \lambda_i u_i u_i^T,$$
where $u_1, \dots, u_r$ are the normalized eigenvectors of ${\bm{A}}$ and $\lambda_1, \dots, \lambda_r$ are the corresponding eigenvalues. This is the decomposition that PCA finds, and it corresponds to factorizing ${\bm{A}}$ in the form
\begin{equation}
{\bm{A}} = {\bm{U}}^T {\bm{H}} {\bm{U}},
\label{eq:eigen}
\end{equation}
where ${\bm{U}}$ is an orthogonal matrix and ${\bm{H}}$ is a diagonal matrix with the eigenvalues of ${\bm{A}}$ on its diagonal. The drawback of PCA is that eigenvectors are almost always dense, while matrices occurring in learning problems, especially those related to graphs, often have strong locality properties, in the sense that they couple certain clusters of nearby coordinates more strongly than coordinates that are farther apart with respect to the underlying topology. In such cases, modeling ${\bm{A}}$ in terms of a basis of global eigenfunctions is both computationally wasteful and conceptually unreasonable: a localized dictionary would be more appropriate. In contrast to PCA, \cite{pmlr-v32-kondor14} proposed \textit{Multiresolution Matrix Factorization}, or MMF for short, to construct a sparse hierarchical system of $L$-level dictionaries. The corresponding matrix factorization is of the form
$${\bm{A}} = {\bm{U}}_1^T {\bm{U}}_2^T \dots {\bm{U}}_L^T {\bm{H}} {\bm{U}}_L \dots {\bm{U}}_2 {\bm{U}}_1,$$
where ${\bm{H}}$ is close to diagonal and ${\bm{U}}_1, \dots, {\bm{U}}_L$ are sparse orthogonal matrices with the following constraints:
\begin{compactitem}
\item Each ${\bm{U}}_\ell$ is a $k$-point rotation (i.e. Givens rotation) for some small $k$, meaning that it only rotates $k$ coordinates at a time. Formally, Def.~\ref{def:rotation-matrix} defines the $k$-point rotation matrix.
\item There is a nested sequence of sets ${\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n]$ such that the coordinates rotated by ${\bm{U}}_\ell$ are a subset of ${\mathbb{S}}_\ell$.
\item ${\bm{H}}$ is an ${\mathbb{S}}_L$-core-diagonal matrix that is formally defined in Def.~\ref{def:core-diagonal}.
\end{compactitem}
We formally define MMF in Defs.~\ref{def:mmf} and \ref{def:factorizable}. A special case of MMF is the Jacobi eigenvalue algorithm \cite{Jacobi+1846+51+94} in which each ${\bm{U}}_\ell$ is a 2-point rotation (i.e. $k = 2$).
\begin{definition} \label{def:rotation-matrix}
We say that ${\bm{U}} \in \mathbb{R}^{n \times n}$ is an \textbf{elementary rotation of order $k$} (also called as a $k$-point rotation) if it is an orthogonal matrix of the form
$${\bm{U}} = {\bm{I}}_{n - k} \oplus_{(i_1, \cdots, i_k)} {\bm{O}}$$
for some ${\mathbb{I}} = \{i_1, \cdots, i_k\} \subseteq [n]$ and ${\bm{O}} \in {\mathbb{S}}{\mathbb{O}}(k)$. We denote the set of all such matrices as ${\mathbb{S}}{\mathbb{O}}_k(n)$.
\end{definition}
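As a toy illustration of Def.~\ref{def:rotation-matrix} (our own sketch, not part of the MMF learning algorithm), the following NumPy snippet embeds a random $k \times k$ rotation ${\bm{O}} \in {\mathbb{S}}{\mathbb{O}}(k)$ into the identity at a given index set ${\mathbb{I}}$ and checks orthogonality:
\begin{verbatim}
import numpy as np

def elementary_rotation(n, idx, seed=0):
    """Build U = I_{n-k} (+)_{idx} O with O a random k x k rotation."""
    k = len(idx)
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((k, k)))  # random orthogonal O
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]            # flip one column so that O is in SO(k)
    U = np.eye(n)
    U[np.ix_(idx, idx)] = Q           # place the k x k core at rows/cols idx
    return U

U = elementary_rotation(6, [1, 3, 4])
assert np.allclose(U @ U.T, np.eye(6))   # U is orthogonal
\end{verbatim}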
\begin{definition} \label{def:core-diagonal}
Given a set ${\mathbb{S}} \subseteq [n]$, we say that a matrix ${\bm{H}} \in \mathbb{R}^{n \times n}$ is ${\mathbb{S}}$-core-diagonal if ${\bm{H}}_{i, j} = 0$ unless $i, j \in {\mathbb{S}}$ or $i = j$. Equivalently, ${\bm{H}}$ is ${\mathbb{S}}$-core-diagonal if it can be written in the form ${\bm{H}} = {\bm{D}} \oplus_{{\mathbb{S}}} \overline{{\bm{H}}}$, for some $\overline{{\bm{H}}} \in \mathbb{R}^{\lvert {\mathbb{S}} \rvert \times \lvert {\mathbb{S}} \rvert}$ and ${\bm{D}}$ diagonal. We denote the set of all ${\mathbb{S}}$-core-diagonal symmetric matrices of dimension $n$ as ${\mathbb{H}}^{{\mathbb{S}}}_n$.
\end{definition}
\begin{definition} \label{def:mmf}
Given an appropriate subset ${\mathbb{O}}$ of the group ${\mathbb{S}}{\mathbb{O}}(n)$ of $n$-dimensional rotation matrices, a depth parameter $L \in \mathbb{N}$, and a sequence of integers $n = d_0 \ge d_1 \ge d_2 \ge \dots \ge d_L \ge 1$, a \textbf{Multiresolution Matrix Factorization (MMF)} of a symmetric matrix ${\bm{A}} \in \mathbb{R}^{n \times n}$ over ${\mathbb{O}}$ is a factorization of the form
\begin{equation} \label{eq:mmf}
{\bm{A}} = {\bm{U}}_1^T {\bm{U}}_2^T \dots {\bm{U}}_L^T {\bm{H}} {\bm{U}}_L \dots {\bm{U}}_2 {\bm{U}}_1,
\end{equation}
where each ${\bm{U}}_\ell \in {\mathbb{O}}$ satisfies $[{\bm{U}}_\ell]_{[n] \setminus {\mathbb{S}}_{\ell - 1}, [n] \setminus {\mathbb{S}}_{\ell - 1}} = {\bm{I}}_{n - d_\ell}$ for some nested sequence of sets ${\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n]$ with $\lvert {\mathbb{S}}_\ell \rvert = d_\ell$, and ${\bm{H}} \in {\mathbb{H}}^{{\mathbb{S}}_L}_n$ is an ${\mathbb{S}}_L$-core-diagonal matrix.
\end{definition}
\begin{definition} \label{def:factorizable}
We say that a symmetric matrix ${\bm{A}} \in \mathbb{R}^{n \times n}$ is \textbf{fully multiresolution factorizable} over ${\mathbb{O}} \subset {\mathbb{S}}{\mathbb{O}}(n)$ with $(d_1, \dots, d_L)$ if it has a decomposition of the form described in Def.~\ref{def:mmf}.
\end{definition}
\subsection{Multiresolution analysis} \label{sec:multiresolution-analysis}
\cite{pmlr-v32-kondor14} has shown that MMF mirrors, in discrete spaces, the classical theory of multiresolution analysis (MRA) on the real line \cite{192463}. MRA provides the functional analytic view of wavelets: similarly to Fourier analysis, it is a way of filtering some function space into a sequence of subspaces
\begin{equation}
\dots \subset {\mathbb{V}}_{-1} \subset {\mathbb{V}}_0 \subset {\mathbb{V}}_1 \subset {\mathbb{V}}_2 \subset \dots
\label{eq:subspace-sequence}
\end{equation}
However, it is best to conceptualize (\ref{eq:subspace-sequence}) as an iterative process of splitting each ${\mathbb{V}}_\ell$ into the orthogonal sum ${\mathbb{V}}_\ell = {\mathbb{V}}_{\ell + 1} \oplus {\mathbb{W}}_{\ell + 1}$ of a smoother part ${\mathbb{V}}_{\ell + 1}$, called the \textit{approximation space}; and a rougher part ${\mathbb{W}}_{\ell + 1}$, called the \textit{detail space} (see Fig.~\ref{fig:subspaces}). Each ${\mathbb{V}}_\ell$ has an orthonormal basis $\Phi_\ell \triangleq \{\phi_m^\ell\}_m$ in which each $\phi$ is called a \textit{father} wavelet. Each complementary space ${\mathbb{W}}_\ell$ is also spanned by an orthonormal basis $\Psi_\ell \triangleq \{\psi_m^\ell\}_m$ in which each $\psi$ is called a \textit{mother} wavelet. In MMF, each individual rotation ${\bm{U}}_\ell: {\mathbb{V}}_{\ell - 1} \rightarrow {\mathbb{V}}_\ell \oplus {\mathbb{W}}_\ell$ is a sparse basis transform that expresses $\Phi_\ell \cup \Psi_\ell$ in the previous basis $\Phi_{\ell - 1}$ such that:
$$\phi_m^\ell = \sum_{i = 1}^{\text{dim}({\mathbb{V}}_{\ell - 1})} [{\bm{U}}_\ell]_{m, i} \phi_i^{\ell - 1},$$
$$\psi_m^\ell = \sum_{i = 1}^{\text{dim}({\mathbb{V}}_{\ell - 1})} [{\bm{U}}_\ell]_{m + \text{dim}({\mathbb{V}}_{\ell - 1}), i} \phi_i^{\ell - 1},$$
in which $\Phi_0$ is the standard basis, i.e. $\phi_m^0 = e_m$; and $\text{dim}({\mathbb{V}}_\ell) = d_\ell = \lvert {\mathbb{S}}_\ell \rvert$. In the $\Phi_1 \cup \Psi_1$ basis, ${\bm{A}}$ compresses into ${\bm{A}}_1 = {\bm{U}}_1{\bm{A}}{\bm{U}}_1^T$. In the $\Phi_2 \cup \Psi_2 \cup \Psi_1$ basis, it becomes ${\bm{A}}_2 = {\bm{U}}_2{\bm{U}}_1{\bm{A}}{\bm{U}}_1^T{\bm{U}}_2^T$, and so on. Finally, in the $\Phi_L \cup \Psi_L \cup \dots \cup \Psi_1$ basis, it takes on the form ${\bm{A}}_L = {\bm{H}} = {\bm{U}}_L \dots {\bm{U}}_2{\bm{U}}_1 {\bm{A}} {\bm{U}}_1^T{\bm{U}}_2^T \dots {\bm{U}}_L^T$, which consists of four distinct blocks (assuming that we permute the rows/columns accordingly):
$${\bm{H}} = \begin{pmatrix} {\bm{H}}_{\Phi, \Phi} & {\bm{H}}_{\Phi, \Psi} \\ {\bm{H}}_{\Psi, \Phi} & {\bm{H}}_{\Psi, \Psi} \end{pmatrix},$$
where ${\bm{H}}_{\Phi, \Phi} \in \mathbb{R}^{\text{dim}({\mathbb{V}}_L) \times \text{dim}({\mathbb{V}}_L)}$ is effectively ${\bm{A}}$ compressed to ${\mathbb{V}}_L$, ${\bm{H}}_{\Phi, \Psi} = {\bm{H}}_{\Psi, \Phi}^T = 0$ and ${\bm{H}}_{\Psi, \Psi}$ is diagonal. MMF approximates ${\bm{A}}$ in the form
$${\bm{A}} \approx \sum_{i, j = 1}^{d_L} h_{i, j} \phi_i^L {\phi_j^L}^T + \sum_{\ell = 1}^L \sum_{m = 1}^{d_\ell} c_m^\ell \psi_m^\ell {\psi_m^\ell}^T,$$
where $h_{i, j}$ coefficients are the entries of the ${\bm{H}}_{\Phi, \Phi}$ block, and $c_m^\ell = \langle \psi_m^\ell, {\bm{A}} \psi_m^\ell \rangle$ wavelet frequencies are the diagonal elements of the ${\bm{H}}_{\Psi, \Psi}$ block.
In particular, the dictionary vectors corresponding to certain rows of ${\bm{U}}_1$ are interpreted as level one wavelets, the dictionary vectors corresponding to certain rows of ${\bm{U}}_2{\bm{U}}_1$ are interpreted as level two wavelets, and so on. One thing that is immediately clear is that whereas Eq.~(\ref{eq:eigen}) diagonalizes ${\bm{A}}$ in a single step, multiresolution analysis will involve a sequence of basis transforms ${\bm{U}}_1, {\bm{U}}_2, \dots, {\bm{U}}_L$, transforming ${\bm{A}}$ step by step as
\begin{equation}
{\bm{A}} \rightarrow {\bm{U}}_1{\bm{A}}{\bm{U}}_1^T \rightarrow \dots \rightarrow {\bm{U}}_L \dots {\bm{U}}_1{\bm{A}}{\bm{U}}_1^T \dots {\bm{U}}_L^T \triangleq {\bm{H}},
\label{eq:mmf-transform}
\end{equation}
so the corresponding matrix factorization must be a multilevel factorization
\begin{equation}
{\bm{A}} \approx {\bm{U}}_1^T {\bm{U}}_2^T \dots {\bm{U}}_\ell^T {\bm{H}} {\bm{U}}_\ell \dots {\bm{U}}_2 {\bm{U}}_1.
\label{eq:mmf-factorization}
\end{equation}
\begin{figure}[t]
$$
\xymatrix{
L_2({\mathbb{X}}) \ar[r] & \cdots \ar[r] & {\mathbb{V}}_0 \ar[r] \ar[dr] & {\mathbb{V}}_1 \ar[r] \ar[dr] & {\mathbb{V}}_2 \ar[r] \ar[dr] & \cdots \\
& & & {\mathbb{W}}_1 & {\mathbb{W}}_2 & {\mathbb{W}}_3
}
$$
\vspace{-20pt}
\caption{\label{fig:subspaces}
Multiresolution analysis splits each function space ${\mathbb{V}}_0, {\mathbb{V}}_1, \dots$ into the direct sum of a smoother part ${\mathbb{V}}_{\ell + 1}$ and a rougher part ${\mathbb{W}}_{\ell + 1}$.
}
\end{figure}
\subsection{MMF optimization problem}
Finding the best MMF factorization to a symmetric matrix ${\bm{A}}$ involves solving
\begin{equation}
\min_{\substack{{\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n] \\ {\bm{H}} \in {\mathbb{H}}^{{\mathbb{S}}_L}_n; {\bm{U}}_1, \dots, {\bm{U}}_L \in {\mathbb{O}}}} \| {\bm{A}} - {\bm{U}}_1^T \dots {\bm{U}}_L^T {\bm{H}} {\bm{U}}_L \dots {\bm{U}}_1 \|.
\label{eq:mmf-opt}
\end{equation}
Assuming that we measure error in the Frobenius norm, (\ref{eq:mmf-opt}) is equivalent to
\begin{equation}
\min_{\substack{{\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n] \\ {\bm{U}}_1, \dots, {\bm{U}}_L \in {\mathbb{O}}}} \| {\bm{U}}_L \dots {\bm{U}}_1 {\bm{A}} {\bm{U}}_1^T \dots {\bm{U}}_L^T \|^2_{\text{resi}},
\label{eq:mmf-resi}
\end{equation}
where $\| \cdot \|_{\text{resi}}^2$ is the squared residual norm
$\|{\bm{H}} \|_{\text{resi}}^2 = \sum_{i \neq j; (i, j) \not\in {\mathbb{S}}_L \times {\mathbb{S}}_L} \lvert {\bm{H}}_{i, j} \rvert^2$. The optimization problem in (\ref{eq:mmf-opt}) and (\ref{eq:mmf-resi}) is equivalent to the following 2-level one:
\begin{equation}
\min_{{\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n]} \min_{{\bm{U}}_1, \dots, {\bm{U}}_L \in {\mathbb{O}}} \| {\bm{U}}_L \dots {\bm{U}}_1 {\bm{A}} {\bm{U}}_1^T \dots {\bm{U}}_L^T \|^2_{\text{resi}}.
\label{eq:mmf-two-phases}
\end{equation}
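For concreteness, the residual objective above can be evaluated with a few lines of NumPy (a sketch; \texttt{S\_L} denotes the index set ${\mathbb{S}}_L$):
\begin{verbatim}
import numpy as np

def residual_norm_sq(H, S_L):
    """Squared residual norm: sum of squared off-diagonal entries
    of H lying outside the S_L x S_L core block."""
    n = H.shape[0]
    mask = np.ones((n, n), dtype=bool)
    np.fill_diagonal(mask, False)         # the diagonal is always kept
    core = np.zeros(n, dtype=bool)
    core[list(S_L)] = True
    mask &= ~np.outer(core, core)         # entries inside S_L x S_L are kept
    return float(np.sum(H[mask] ** 2))
\end{verbatim}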
There are two fundamental problems in solving this 2-level optimization:
\begin{compactitem}
\item For the inner optimization, the variables (i.e. Givens rotations ${\bm{U}}_1, \dots, {\bm{U}}_L$) must satisfy the orthogonality constraints.
\item For the outer optimization, finding the optimal nested sequence of indices ${\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n]$ is a combinatorial problem with an exponentially large search space.
\end{compactitem}
In order to address the problems above, \cite{pmlr-v196-hy22a} proposes a learning algorithm combining Stiefel manifold optimization and Reinforcement Learning (RL) for the inner and outer optimization, respectively. In this paper, we assume that a nested sequence of indices ${\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n]$ is given by a fast heuristic instead of the computationally expensive RL; one such heuristic is clustering based on the similarity between rows \cite{pmlr-v32-kondor14,DBLP:journals/corr/KondorTM15}. In the next subsection, we introduce the solution to the inner problem.
\subsection{Stiefel manifold optimization}
In order to solve the inner optimization problem of (\ref{eq:mmf-two-phases}),
we consider the following generic optimization with orthogonality constraints:
\begin{equation}
\min_{{\bm{X}} \in \mathbb{R}^{n \times p}} \mathcal{F}({\bm{X}}), \ \ \text{s.t.} \ \ {\bm{X}}^T {\bm{X}} = {\bm{I}}_p,
\label{eq:opt-prob}
\end{equation}
where ${\bm{I}}_p$ is the identity matrix and $\mathcal{F}({\bm{X}}): \mathbb{R}^{n \times p} \rightarrow \mathbb{R}$ is a differentiable function. The feasible set $\mathcal{V}_p(\mathbb{R}^n) = \{{\bm{X}} \in \mathbb{R}^{n \times p}: {\bm{X}}^T {\bm{X}} = {\bm{I}}_p\}$ is referred to as the Stiefel manifold of $p$ orthonormal vectors in $\mathbb{R}^{n}$. We will view $\mathcal{V}_p(\mathbb{R}^n)$ as an embedded submanifold of $\mathbb{R}^{n \times p}$. In the case of more than one orthogonality constraint, (\ref{eq:opt-prob}) is written as
\begin{equation}
\min_{{\bm{X}}_1 \in \mathcal{V}_{p_1}(\mathbb{R}^{n_1}), \dots, {\bm{X}}_q \in \mathcal{V}_{p_q}(\mathbb{R}^{n_q})} \mathcal{F}({\bm{X}}_1, \dots, {\bm{X}}_q)
\label{eq:opt-prob-extended}
\end{equation}
where there are $q$ variables with $q$ corresponding orthogonality constraints.
In the MMF optimization problem (\ref{eq:mmf-two-phases}), suppose for simplicity that we are already given ${\mathbb{S}}_L \subseteq \cdots \subseteq {\mathbb{S}}_1 \subseteq {\mathbb{S}}_0 = [n]$, meaning that the indices of the active rows/columns at each resolution have already been determined. In this case, we have $q = L$ variables, and each variable ${\bm{X}}_\ell = {\bm{O}}_\ell \in \mathbb{R}^{k \times k}$, where ${\bm{U}}_\ell = {\bm{I}}_{n - k} \oplus_{{\mathbb{I}}_\ell} {\bm{O}}_\ell \in \mathbb{R}^{n \times n}$ and ${\mathbb{I}}_\ell$ is a subset of $k$ indices from ${\mathbb{S}}_\ell$, must satisfy the orthogonality constraint. The corresponding objective function is
\begin{equation}
\mathcal{F}({\bm{O}}_1, \dots, {\bm{O}}_L) = \| {\bm{U}}_L \dots {\bm{U}}_1 {\bm{A}} {\bm{U}}_1^T \dots {\bm{U}}_L^T \|^2_{\text{resi}}.
\label{eq:mmf-core}
\end{equation}
Therefore, we can cast the inner problem of (\ref{eq:mmf-two-phases}) as an optimization problem on the Stiefel manifold and solve it by specialized steepest gradient descent \cite{Tagare2011NotesOO}.
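As an illustrative sketch (one common recipe among those discussed in \cite{Tagare2011NotesOO}; the step size and names are ours), a single steepest-descent step on the Stiefel manifold can be written with a tangent-space projection followed by a QR retraction:
\begin{verbatim}
import numpy as np

def stiefel_descent_step(X, grad_F, lr=1e-2):
    """One projected-gradient step for min F(X) s.t. X^T X = I_p.

    X:      (n, p) point on the Stiefel manifold.
    grad_F: (n, p) Euclidean gradient of F at X.
    """
    # Project the Euclidean gradient onto the tangent space at X:
    # grad = G - X sym(X^T G), with sym(M) = (M + M^T) / 2.
    XtG = X.T @ grad_F
    riem_grad = grad_F - X @ (XtG + XtG.T) / 2.0
    # Step along the negative Riemannian gradient, then retract back
    # onto the manifold with a QR decomposition.
    Q, R = np.linalg.qr(X - lr * riem_grad)
    d = np.sign(np.diag(R))
    d[d == 0] = 1.0
    return Q * d                      # fix the sign ambiguity of the QR
\end{verbatim}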
\section{Introduction} \label{sec:intro}
Time series modeling has been a long-standing quest in a wide range of academic fields and industrial applications, including neuroscience~\cite{POURAHMADI2016neuro} and traffic modeling~\cite{dcrnn}. Traditionally, model-based approaches such as autoregressive (AR) models and Support Vector Regression~\cite{smola2004svr} require domain knowledge as well as stationarity assumptions, which are often violated by the complex and non-linear structure of neural and traffic data.
Recently, there has been intensive research with promising results on the traffic forecasting problem using deep learning, such as Recurrent Neural Networks (RNN)~\cite{qin2017rnn}, LSTM~\cite{koprinska2018lstm}, and graph learning using Transformers~\cite{xu2020spatial}. On the other hand, forecasting in neuroscience has focused mainly on the long-term evolution of brain network structure based on fMRI data, such as predicting the brain connectivity of an Alzheimer's patient several months ahead~\cite{bessadok2022graph}, where existing methods are GCN-based~\cite{goktacs2020residual} or GAN-based graph autoencoders~\cite{gurler2020foreseeing}. Meanwhile, research on instantaneous time series forecasting of electroencephalogram (EEG) or electrocorticography (ECoG) signals remains untouched, even though EEG and ECoG are often cheaper and quicker to obtain than fMRI, while short-term forecasting may be beneficial for patients with strokes or epilepsy~\cite{shoeibi2022overview}.
In graph representation learning, a dense adjacency matrix expressing a densely connected graph can be a waste of computational resources, while physically, it may fail to capture the local ``smoothness'' of the network. To tackle such problems, a mathematical framework called Multiresolution Matrix Factorization (MMF) \cite{pmlr-v32-kondor14} has been adopted to ``sparsify'' the adjacency and graph Laplacian matrices of highly dense graphs. MMF is unusual amongst fast matrix factorization algorithms in that it does not make a low-rank assumption; instead, it is designed to capture structure at multiple different scales. This makes MMF especially well suited to modeling graphs with complex multiscale or hierarchical structure \cite{pmlr-v196-hy22a}, compressing hierarchical matrices (e.g., kernel/gram matrices) \cite{pmlr-v51-teneva16,NIPS2017_850af92f}, and other applications in computer vision \cite{8099564}. One important aspect of MMF is its ability to construct wavelets on graphs and matrices during the factorization process \cite{pmlr-v32-kondor14,pmlr-v196-hy22a}. The wavelet basis inferred by MMF tends to be highly sparse, which allows the corresponding wavelet transform to be executed efficiently via sparse matrix multiplication. \cite{pmlr-v196-hy22a} exploited this property to construct fast wavelet convolutions and, consequently, wavelet neural networks on graphs for graph classification and node classification tasks. In this work, we incorporate fast wavelet convolution based on MMF to build a time- and memory-efficient temporal architecture for time series data with an underlying graph structure.
Based on the aforementioned arguments, we propose the \textit{Fast Temporal Wavelet Graph Neural Network} (FTWGNN) for graph time series forecasting, in which MMF theory is utilized to describe the local smoothness of the network as well as to accelerate the computations. Experiments on real-world traffic and ECoG datasets show competitive performance along with a remarkably smaller computational footprint for FTWGNN. In summary:
\begin{compactitem}
\item We model the spatial domain of the graph time series as a diffusion process, in which the theories of \textit{multiresolution analysis} and \textit{wavelet theory} are adopted. We employ \textit{Multiresolution Matrix Factorization} (MMF) to factorize the underlying graph structure and derive its sparse wavelet basis.
\item We propose the \textit{Fast Temporal Wavelet Graph Neural Network} (FTWGNN), an end-to-end model capable of modeling spatiotemporal structures.
\item We test FTWGNN on two real-world traffic datasets and an ECoG dataset and achieve results competitive with state-of-the-art methods, with a remarkable reduction in computation time.
\end{compactitem}
\section{Method} \label{sec:method}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{waveletDCRNN.png}
\centering
\caption{Architecture of the Fast Temporal Wavelet Graph Neural Network. \textbf{WC:} graph wavelet convolution given MMF's wavelet basis.}
\label{fig:architecture}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{WCGRU.png}
\centering
\caption{Architecture of the Wavelet Convolutional Gated Recurrent Unit. \textbf{WC:} graph wavelet convolution given MMF's wavelet basis.}
\label{fig:architecture2}
\end{figure}
\subsection{Wavelet basis and convolution on graph}
Section \ref{sec:multiresolution-analysis} introduces the theory of multiresolution analysis behind MMF as well as the construction of a \textit{sparse} wavelet basis for a symmetric matrix ${\bm{A}} \in \mathbb{R}^{n \times n}$. Without loss of generality, we assume that ${\bm{A}}$ is the weight matrix of a weighted undirected graph $\mathcal{G} = (V, E)$, in which $V = \{v_1, .., v_n\}$ is the set of vertices and $E = \{(v_i, v_j)\}$ is the set of edges, with the weight of edge $(v_i, v_j)$ given by ${\bm{A}}_{i, j}$. Given a graph signal ${\bm{f}} \in \mathbb{R}^n$, understood as a function $f: V \rightarrow \mathbb{R}$ defined on the vertices of the graph, the wavelet transform (up to level $L$) expresses this graph signal, without loss of generality $f \in {\mathbb{V}}_0$, as:
$$f(v) = \sum_{\ell = 1}^L \sum_m \alpha_m^\ell \psi_m^\ell(v) + \sum_m \beta_m \phi_m^L(v), \ \ \ \ \text{for each} \ \ v \in V,$$
where $\alpha_m^\ell = \langle f, \psi_m^\ell \rangle$ and $\beta_m = \langle f, \phi_m^L \rangle$ are the wavelet coefficients. Based on the wavelet basis construction via MMF detailed in \cite{pmlr-v196-hy22a}:
\begin{compactitem}
\item For $L$ levels of resolution, we get exactly $L$ mother wavelets $\overline{\psi} = \{\psi^1, \psi^2, \dots, \psi^L\}$, each corresponds to a resolution (see Figure~\ref{fig:waveletVisual} for visualizations).
\item The rows of ${\bm{H}} = {\bm{A}}_L$ yield exactly $n - L$ father wavelets $\overline{\phi} = \{\phi^L_m = {\bm{H}}_{m, :}\}_{m \in {\mathbb{S}}_L}$. In total, a graph of $n$ vertices has exactly $n$ wavelets, mothers and fathers combined.
\end{compactitem}
Analogous to the convolution based on the Graph Fourier Transform \cite{ae482107de73461787258f805cf8f4ed}, each convolution layer $k \in \{1, .., K\}$ of the wavelet neural network transforms an input vector ${\bm{f}}^{(k - 1)}$ of size $\lvert V \rvert \times F_{k - 1}$ into an output ${\bm{f}}^{(k)}$ of size $\lvert V \rvert \times F_k$ as
\begin{equation}
{\bm{f}}^{(k)}_{:, j} = \sigma \bigg( {\bm{W}} \sum_{i = 1}^{F_{k - 1}} {\bm{g}}^{(k)}_{i, j} {\bm{W}}^T {\bm{f}}^{(k - 1)}_{:, i} \bigg) \ \ \ \ \text{for} \ \ j = 1, \dots, F_k,
\label{eq:wavevlet-conv}
\end{equation}
where ${\bm{W}}$ is our wavelet basis matrix, obtained by concatenating $\overline{\phi}$ and $\overline{\psi}$ column by column, ${\bm{g}}^{(k)}_{i, j}$ is a parameter/filter in the form of a diagonal matrix learned in the spectral domain, and $\sigma$ is an element-wise non-linearity (e.g., ReLU, sigmoid, etc.). In Eq.~(\ref{eq:wavevlet-conv}), we first apply the wavelet transform of a graph signal ${\bm{f}}$ into the spectral domain (i.e., $\hat{{\bm{f}}} = {\bm{W}}^T {\bm{f}}$ is the forward transform and ${\bm{f}} = {\bm{W}} \hat{{\bm{f}}}$ is the inverse transform), then apply a learnable filter ${\bm{g}}$ to the wavelet coefficients, apply the inverse transform back to the spatial domain, and finally pass the result through the non-linearity $\sigma$. Since the wavelet basis matrix ${\bm{W}}$ is \textit{sparse}, both the wavelet transform and its inverse can be implemented efficiently via sparse matrix multiplication.
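A minimal PyTorch sketch of one such wavelet convolution layer, following Eq.~(\ref{eq:wavevlet-conv}), might look as follows; the class name, initialization, and shapes are illustrative assumptions rather than our exact implementation:
\begin{verbatim}
import torch
import torch.nn as nn

class WaveletConv(nn.Module):
    """One graph wavelet convolution layer: f_out = sigma(W g W^T f_in)."""

    def __init__(self, W, in_features, out_features):
        super().__init__()
        self.W = W                       # sparse |V| x |V| wavelet basis (COO)
        self.Wt = W.t().coalesce()       # its transpose, also sparse
        n = W.shape[0]
        # One diagonal spectral filter g_{i,j} per (input, output) pair.
        self.g = nn.Parameter(torch.ones(in_features, out_features, n))

    def forward(self, f):                        # f: [|V|, in_features]
        f_hat = torch.sparse.mm(self.Wt, f)      # forward transform W^T f
        out = []
        for j in range(self.g.shape[1]):
            # filter each input channel in the spectral domain, then sum
            filtered = (self.g[:, j, :].t() * f_hat).sum(dim=1, keepdim=True)
            out.append(torch.sparse.mm(self.W, filtered))  # inverse transform
        return torch.relu(torch.cat(out, dim=1))  # relu stands in for sigma
\end{verbatim}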
\subsection{Temporal Wavelet Neural Networks}
Capturing the spatiotemporal dependencies among time series in various spatiotemporal forecasting problems demands both spatial and temporal models. We build our novel \textit{Fast Temporal Wavelet Graph Neural Network} on the architectural backbone of the \textit{Diffusion Convolutional Recurrent Neural Network} (DCRNN) \cite{dcrnn}, which combines both spatial and temporal models to solve these tasks.
\textbf{Spatial Dependency Model} \quad The spatial dynamics in the network are captured by a diffusion process. Let $G = ({\bm{X}},{\bm{A}})$ represent an undirected graph, where ${\bm{X}} =[{\bm{x}}_{1}^T,\dots,{\bm{x}}_{N}^T]^T \in \mathbb{R}^{N \times D}$ denotes the signals of $N$ nodes, each with $D$ features. Define further the right-stochastic edge weight matrix $\Tilde{{\bm{A}}} \in \mathbb{R}^{N \times N}$, in which $\sum_j \Tilde{{\bm{A}}}_{ij}=1$ for all $i$. In the simplest case, when $\Tilde{{\bm{L}}}={\bm{I}}-\Tilde{{\bm{A}}}$ is the normalized random walk Laplacian, the diffusion process on the graph is governed by the following equation \cite{GRAND}:
\begin{equation}
\frac{{\rm d} {\bm{X}}(t)}{{\rm d}t} = (\Tilde{{\bm{A}}}-{\bm{I}}){\bm{X}}(t)
\label{eq:diffusion}
\end{equation}
where ${\bm{X}}(t) = [{\bm{x}}_{1}^T(t),\dots,{\bm{x}}_{N}^T(t)]^T \in \mathbb{R}^{N \times D}$ and ${\bm{X}}(0)={\bm{X}}$. Applying the forward Euler discretization with step size 1 gives:
\begin{align}
{\bm{X}}(k) &= {\bm{X}}(k-1) + (\Tilde{{\bm{A}}}-{\bm{I}}){\bm{X}}(k-1) \notag \\
&= {\bm{X}}(k-1) -\Tilde{{\bm{L}}}{\bm{X}}(k-1) \notag \\
& = \Tilde{{\bm{A}}}{\bm{X}}(k-1) \notag \\
& = \Tilde{{\bm{A}}}^k{\bm{X}}(0)
\label{eq:random_walk}
\end{align}
Eq.~\ref{eq:random_walk} is similar to the well-established GCN architecture proposed in \cite{Kipf_GCN}. The diffusion convolution operation over a graph signal ${\bm{X}} \in \mathbb{R}^{N \times D}$ with a filter $f_{\boldsymbol{\theta}}$ is then defined as:
\begin{equation}
{\bm{X}}_{:,d}\star_\mathcal{G}f_{\boldsymbol{\theta}} = \sum_{k=0}^{K-1}\theta_k \Tilde{{\bm{A}}}^k {\bm{X}}_{:,d} \quad \forall d \in \{1,\dots,D\}
\label{eq:graph_conv}
\end{equation}
where $\boldsymbol{\theta} = (\theta_0, \dots, \theta_{K-1})$ are the parameters of the filter.
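For illustration, Eq.~(\ref{eq:graph_conv}) can be transcribed directly into a few lines of PyTorch (a sketch; variable names are ours):
\begin{verbatim}
import torch

def diffusion_conv(X, A_tilde, theta):
    """Diffusion convolution: sum_k theta_k A_tilde^k X.

    X:       [N, D] graph signal.
    A_tilde: [N, N] right-stochastic transition matrix.
    theta:   [K] filter coefficients.
    """
    out = torch.zeros_like(X)
    Xk = X                            # A_tilde^0 X
    for k in range(theta.shape[0]):
        out = out + theta[k] * Xk
        Xk = A_tilde @ Xk             # step the random walk: A_tilde^{k+1} X
    return out
\end{verbatim}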
\textbf{Temporal Dependency Model} \quad DCRNN leverages recurrent neural networks (RNNs) to model the temporal dependency. In particular, the matrix multiplications in the GRU are replaced with the diffusion convolution, yielding the \textit{Diffusion Convolutional Gated Recurrent Unit} (DCGRU):
\begin{align}
{\bm{r}}^{(t)} &= \sigma({\bm{\Theta}}_r \star_\mathcal{G} [{\bm{X}}^{(t)}, {\bm{H}}^{(t-1)}] + \boldsymbol{b_r} ) \notag \\
\boldsymbol{u}^{(t)} &= \sigma({\bm{\Theta}}_u \star_\mathcal{G} [{\bm{X}}^{(t)}, {\bm{H}}^{(t-1)}] + \boldsymbol{b_u} ) \notag \\
{\bm{C}}^{(t)} &= \tanh({\bm{\Theta}}_C \star_\mathcal{G} [{\bm{X}}^{(t)}, ({\bm{r}}^{(t)} \odot {\bm{H}}^{(t-1)})] + \boldsymbol{b_c} ) \notag \\
{\bm{H}}^{(t)} &= \boldsymbol{u}^{(t)} \odot {\bm{H}}^{(t-1)} + (1-\boldsymbol{u}^{(t)}) \odot {\bm{C}}^{(t)} \notag
\end{align}
where ${\bm{X}}^{(t)}, {\bm{H}}^{(t)}$ denote the input and output at time $t$, while ${\bm{r}}^{(t)},\boldsymbol{u}^{(t)}$ are the reset and update gates at time $t$, respectively.
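For clarity, here is a small NumPy sketch of one DCGRU update built on a diffusion convolution with feature mixing; the per-gate weight shapes ($W_k$ of size $(D+H)\times H$ and a bias of size $H$) and all names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def graph_conv(X, A_tilde, W_list, b):
    # diffusion convolution with feature mixing: sum_k A_tilde^k X W_k + b
    out = np.zeros((X.shape[0], W_list[0].shape[1])) + b
    Ak_X = X.copy()
    for W_k in W_list:
        out += Ak_X @ W_k
        Ak_X = A_tilde @ Ak_X
    return out

def dcgru_step(X_t, H_prev, A_tilde, p):
    # one DCGRU update: GRU matrix products are replaced by graph convolutions;
    # p holds per-gate parameters, e.g. p['r'] = ([W_0, ..., W_{K-1}], b_r)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    XH = np.concatenate([X_t, H_prev], axis=1)           # [X^(t), H^(t-1)]
    r = sigmoid(graph_conv(XH, A_tilde, *p["r"]))        # reset gate
    u = sigmoid(graph_conv(XH, A_tilde, *p["u"]))        # update gate
    XrH = np.concatenate([X_t, r * H_prev], axis=1)
    C = np.tanh(graph_conv(XrH, A_tilde, *p["C"]))       # candidate state
    return u * H_prev + (1.0 - u) * C                    # new hidden state H^(t)
\end{verbatim}
In FTWGNN the same recurrence is kept, with the diffusion convolution replaced by the sparse wavelet convolution sketched earlier.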
Both the encoder and the decoder are recurrent neural networks built from DCGRU cells, following the \textit{Sequence-to-Sequence} style. To mitigate the distribution differences between training and testing data, the scheduled sampling technique \cite{bengio2015scheduled} is used, where the model is fed either the ground truth with probability $\epsilon_i$ or its own prediction with probability $1-\epsilon_i$ at the $i$-th training iteration.
For our novel \textit{Fast Temporal Wavelet Graph Neural Network} (FTWGNN), the fundamental difference is that instead of using the temporal traffic graph as the input of DCRNN, we use the sparse wavelet basis matrix ${\bm{W}}$ extracted via MMF (see Section \ref{sec:multiresolution-analysis}) and replace the diffusion convolution with our fast \textit{wavelet convolution}. Given the sparsity of our wavelet basis, we significantly reduce the overall computational time and memory usage. Each Givens rotation matrix ${\bm{U}}_\ell$ (see Def.~\ref{def:rotation-matrix}) is a highly sparse orthogonal matrix with a non-zero core of size $K \times K$. The number of non-zeros in MMF's wavelet basis ${\bm{W}}$, which can be computed as the product ${\bm{U}}_1{\bm{U}}_2 \cdots {\bm{U}}_L$, is $O(LK^2)$, where $L$ is the number of resolutions (i.e., the number of Givens rotation matrices) and $K$ is the number of columns in a Givens rotation matrix. \cite{pmlr-v32-kondor14} and \cite{pmlr-v196-hy22a} have shown, both in theory and in practice, that $L$ only needs to be $O(n)$, where $n$ is the number of columns, with $K$ small (e.g., 2, 4, 8), to obtain a decent approximation/compression of a symmetric hierarchical matrix. Technically, MMF is able to compress a symmetric hierarchical matrix from its original quadratic size $n \times n$ to a linear number of non-zero elements $O(n)$. Practically, all the Givens rotation matrices $\{{\bm{U}}_\ell\}_{\ell = 1}^L$ and the wavelet basis ${\bm{W}}$ can be stored in Coordinate Format (COO), and the wavelet transform and its inverse in the wavelet convolution (see Eq.~\ref{eq:wavevlet-conv}) can be implemented efficiently by sparse matrix multiplication in PyTorch's sparse library \cite{paszke2019pytorch}, as sketched below. The architecture of our model is shown in Figures \ref{fig:architecture} and \ref{fig:architecture2}.
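The snippet below illustrates this storage scheme and the sparse forward/inverse transforms; the hand-built Givens rotations are a toy stand-in for the actual MMF factorization, so the reported non-zero count is only indicative.
\begin{verbatim}
import math
import torch

def givens_rotation(n, i, j, theta):
    # dense n x n Givens rotation on coordinates (i, j); a K = 2 rotation core
    U = torch.eye(n)
    c, s = math.cos(theta), math.sin(theta)
    U[i, i] = c; U[j, j] = c; U[i, j] = -s; U[j, i] = s
    return U

torch.manual_seed(0)
n, L = 16, 16
W = torch.eye(n)
for ell in range(L):                                 # W = U_1 U_2 ... U_L
    i, j = torch.randperm(n)[:2].tolist()
    W = givens_rotation(n, i, j, 0.3 * (ell + 1)) @ W

W_sparse = W.to_sparse().coalesce()                  # COO storage of the basis
print(W_sparse.values().numel(), "non-zeros out of", n * n)

f = torch.randn(n, 3)                                # a 3-channel graph signal
f_hat = torch.sparse.mm(W_sparse.t().coalesce(), f)  # forward transform W^T f
f_rec = torch.sparse.mm(W_sparse, f_hat)             # inverse transform W f_hat
print(torch.allclose(f, f_rec, atol=1e-5))           # True: W is orthogonal
\end{verbatim}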
\section{Related work} \label{sec:related}
A spatial-temporal forecasting task utilizes spatial-temporal data gathered from various sensors to predict their future states. Traditional approaches, such as the autoregressive integrated moving average (ARIMA), the k-nearest neighbors algorithm (kNN), and the support vector machine (SVM), can only take temporal information into account, without considering spatial features~\cite{van2012short,jeong2013supervised}. Beyond these traditional approaches, deep neural networks have been proposed to model much more complex spatial-temporal relationships. Specifically, by using an extended fully-connected LSTM with embedded convolutional layers, FC-LSTM \cite{sutskever2014sequence} combines CNN and LSTM to model spatial and temporal relations. For traffic prediction, ST-ResNet~\cite{zhang2017deep} uses a deep residual CNN, revealing the powerful capabilities of the residual network. Despite the impressive results obtained, traffic forecasting scenarios with graph-structured data are incompatible with all of the aforementioned methods, because these methods are built for grid data. For learning tasks on graphs, GNNs~\cite{Kipf_GCN} compute node representations with a neighborhood aggregation scheme, which involves sampling and aggregating the features of nearby nodes. Since spatial-temporal data such as traffic or brain networks are a well-known type of non-Euclidean, graph-structured data, great efforts have been made to use graph convolution methods in traffic forecasting. As an illustration, DCRNN \cite{dcrnn} models traffic flow as a diffusion process and uses bidirectional random walks on a directed graph to model spatial dependency.
In the field of image and signal processing, computations are more efficient and simpler in a sparse representation, where a few coefficients reveal the information we are searching for. Based on this motivation, Multiresolution Analysis (MRA) was proposed by \cite{192463} as a design for multiscale signal approximation, in which sparse representations are constructed by decomposing signals over elementary waveforms chosen from a family called \textit{wavelets}. Besides Fourier transforms, the discovery of wavelet orthogonal bases such as Haar \cite{Haar1910ZurTD} and Daubechies \cite{Daubechies1988OrthonormalBO} has opened the door to new transforms, such as the continuous and discrete wavelet transforms and the fast wavelet transform algorithm, which have become crucial for several computer applications \cite{10.5555/1525499}.
\cite{pmlr-v32-kondor14} and \cite{pmlr-v196-hy22a} introduced Multiresolution Matrix Factorization (MMF) as a novel method for constructing sparse wavelet transforms of functions defined on the nodes of an arbitrary graph, while giving a multiresolution approximation of hierarchical matrices. MMF is closely related to other work on constructing wavelet bases on discrete spaces, including wavelets defined by diagonalizing the diffusion operator or the normalized graph Laplacian \cite{COIFMAN200653,HAMMOND2011129} and multiresolution on trees \cite{10.5555/3104322.3104370,10.2307/30244209}.
\section{Introduction}
Quantum entanglement is a key resource in quantum information, with wide applications in entanglement swapping \cite{bvk}, quantum cryptography \cite{eak} and quantum secure communication \cite{bw}. Genuine multipartite entanglement (GME) stands out for its significant properties \cite{hp,tg}. Thus the measurement and detection of genuine multipartite entanglement in given states is an important task in quantum computation.
Many methods have been presented for detecting entanglement and genuine entanglement \cite{pa,hhh,hong,skt}. For tripartite quantum states, sufficient conditions to test entanglement of three-qubit states in the vicinity of the GHZ state, the W state and the PPT entangled states were found in \cite{akb}. A sufficient criterion for the entanglement of tripartite systems based on local sum uncertainty relations was proposed in \cite{ymm}. Sufficient conditions for judging genuine tripartite entanglement were presented by using partial transposition and realignment of density matrices in \cite{mes}. Yang et al. \cite{ysc} derived a criterion for detecting genuine tripartite entanglement based on quantum Fisher information. By using the Bloch representation of density matrices and the norms of correlation tensors, criteria for genuine tripartite entanglement were presented in \cite{lmj,dgh}. The authors in \cite{zzj} studied separability criteria in tripartite and four-partite quantum systems by the matrix method. Separability criteria for four-partite quantum systems based on upper bounds of Bloch vectors were discussed in \cite{lww}. For higher-dimensional quantum systems, Chen et al. \cite{cw} presented a generalized partial separability criterion for multipartite quantum systems in arbitrary dimensions. Separability criteria and $k$-separability criteria for general $n$-partite quantum states were given in \cite{hgy,lwf,xzz}.
Many of these methods use the Bloch representation of the density matrix, which becomes more complex as the dimension of the quantum system increases. This is partly due to the fact that the Bloch representation relies on the Gell-Mann basis of the special unitary Lie algebra $\mathfrak{su}(d)$, which has three kinds of basis elements: upper, diagonal and lower matrices. In view of this, using another well-known basis of the Lie algebra $\mathfrak{su}(d)$, the Weyl basis, to study quantum entanglement is likely to simplify some of the criteria, as the latter consists of uniform basis elements. In Ref. \cite{bgj}, the authors showed that the principal basis matrix plays an essential role in the representation theory of the Yangian $Y(\mathfrak{sl}(3))$, which has a close relation with the study of entangled states in quantum information (see also \cite{fss} for recent applications).
In this paper, we study the genuine entanglement of multipartite quantum systems by using the Weyl representation with generalized Pauli operators, and we obtain several criteria that detect GME better than previously available tests. The paper is organized as follows. In Section 2, after reviewing the Weyl representation of a quantum state in terms of generalized Pauli operators, we construct matrices from the correlation tensors and derive criteria to detect entanglement and genuine tripartite entanglement. Detailed examples show that our results detect more genuinely entangled states. In Section 3, we generalize the results to arbitrary $n$-partite quantum systems. Conclusions are given in Section 4.
\section{Genuine entanglement for tripartite quantum states}
We first consider the GME for tripartite states. Let $E_{ij}$ be the $d\times d$ unit matrix whose only nonzero entry is 1 at position $(i,j)$, and let $\omega$ be a fixed $d$-th primitive root of unity. For fixed $d_s$, by division with remainder, every $u_s\in\{0,\cdots,d_s^2-1\}$ is
expressed uniquely as $u_s=d_si+j$ for a pair $(i, j)$ with $0\leq i,j\leq d_s-1$. Then the generalized Pauli operators of the $s$th $d_s$-dimensional Hilbert space $H_s^{d_s}$ are given by
\begin{equation}\label{1}
A_{u_s}^{(s)}=A_{d_si+j}^{(s)}=\sum\limits_{m=0}^{d_s-1}\omega^{im}E_{m,m+j}, \qquad 0\leq u_s\leq d_s^2-1,
\end{equation}
where $\omega^{d_s}=1$ and the column index $m+j$ is understood modulo $d_s$. The basis obeys the algebraic relation:
\begin{equation*}
A_{d_si+j}^{(s)}A_{d_sk+l}^{(s)}=\omega^{jk}A_{d_s(i+k)+(j+l)}^{(s)},
\end{equation*}
then $(A_{d_si+j}^{(s)})^{\dagger}=\omega^{ij}A_{d_s(d_s-i)+(d_s-j)}^{(s)}$, so $tr(A_{d_si+j}^{(s)}(A_{d_sk+l}^{(s)})^{\dagger})=\delta_{ik}\delta_{jl}d_s$\cite{hjz}.
Denote by $\|\cdot\|$ the norm of a (column) complex vector, i.e. $\|v\|=\sqrt{v^{\dagger} v}$.
The trace norm (Ky Fan norm) of a rectangular matrix $A\in \mathbb{C}^{m\times n}$ is defined as $\|A\|_{tr}=\sum_i\sigma_i=tr\sqrt{AA^\dagger}$,
where $\sigma_i$ are the singular values of $A$. Clearly $\|A\|_{tr}=\|A^{\dagger}\|_{tr}$.
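As a quick numerical illustration (a NumPy sketch, not part of the derivation), one can construct the operators $A_{d_si+j}$ and verify the orthogonality relation $tr(A_{u}(A_{v})^{\dagger})=\delta_{uv}d_s$ as well as the trace norm just defined.
\begin{verbatim}
import numpy as np

def pauli_ops(d):
    # generalized Pauli (Weyl) operators A_{d*i+j} = sum_m w^{im} E_{m, m+j mod d}
    w = np.exp(2j * np.pi / d)
    ops = []
    for i in range(d):
        for j in range(d):
            A = np.zeros((d, d), dtype=complex)
            for m in range(d):
                A[m, (m + j) % d] = w ** (i * m)
            ops.append(A)
    return ops

def trace_norm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

d = 3
ops = pauli_ops(d)
G = np.array([[np.trace(A @ B.conj().T) for B in ops] for A in ops])
print(np.allclose(G, d * np.eye(d * d)))   # True: tr(A_u A_v^dagger) = delta_uv d
print(trace_norm(ops[1]))                  # each A_u is unitary, so ||A_u||_tr = d
\end{verbatim}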
\begin{lemma}\label{lemma:1} Let $H_s^{d_s}$ denote the $s^{th}$ $d_s$-dimensional Hilbert space. For a quantum state $\rho_1\in H_1^{d_1}$, $\rho_1$ can be expressed as
$\rho_1=\frac{1}{d_1}\sum\limits_{u_1=0}^{d_1^2-1}t_{u_1}A_{u_1}^{(1)}$,
where $A_0^{(1)}=I_{d_1}$ and $t_{u_1}=tr(\rho_1(A_{u_1}^{(1)})^{\dagger})$ are complex coefficients. Let $T^{(1)}$ be the column vector with entries $t_{u_1}$ for $u_1\neq0$. Then
\begin{equation}\label{2}
\|T^{(1)}\|^2\leq d_1-1.
\end{equation}
\end{lemma}
{\it Proof}~~Since $tr(\rho_1^2)\leq1$, we have
$$tr(\rho_1^2)=tr(\rho_1\rho_1^{\dagger})=\frac{1}{d_1}(1+\|T^{(1)}\|^2)\leq1,$$
namely, $\|T^{(1)}\|^2\leq d_1-1$.
\qed
For a state $\rho_{12}\in H_1^{d_1}\otimes H_2^{d_2}$,
$\rho_{12}$ has the generalized Pauli operators representation:
\begin{equation}\label{3}
\rho_{12}=\frac{1}{d_1d_2}\sum\limits_{u_1=0}^{d_1^2-1}
\sum\limits_{u_2=0}^{d_2^2-1}t_{u_1,u_2}A_{u_1}^{(1)}\otimes A_{u_2}^{(2)}
\end{equation}
where $A_0^{(s)}=I_{d_s}$ $(s=1,2)$ and the coefficients
$t_{u_1,u_2}=tr(\rho_{12}(A_{u_1}^{(1)})^{\dagger}\otimes (A_{u_2}^{(2)})^{\dagger})$ are complex numbers.
Let $T^{(1)}$, $T^{(2)}$, $T^{(12)}$ be the vectors with entries $t_{u_1,0}$, $t_{0,u_2}$, $t_{u_1,u_2}$ for $u_1,u_2\neq0$.
\begin{lemma}\label{lemma:2}
Let $\rho_{12}\in H_1^{d_1}\otimes H_2^{d_2}$ be a mixed state. Then $\|T^{(12)}\|^2\leq d_1d_2(1-\frac{1}{d_1^2}-\frac{1}{d_2^2})+1$.
\end{lemma}
{\it Proof}~~For a pure state $\rho_{12}$, we have $tr(\rho_{12}^2)=1$, namely
\begin{equation}\label{4}
tr(\rho_{12}^2)=tr(\rho_{12}\rho_{12}^{\dagger})
=\frac{1}{d_1d_2}(1+\|T^{(1)}\|^2+\|T^{(2)}\|^2+\|T^{(12)}\|^2)
=1.
\end{equation}
By using $tr(\rho_1^2)=tr(\rho_2^2)$, we have $\frac{1}{d_1}(1+\|T^{(1)}\|^2)=\frac{1}{d_2}(1+\|T^{(2)}\|^2)$, where $\rho_1$ and $\rho_2$ are the reduced density operators on $H_1^{d_1}$ and $H_2^{d_2}$, respectively. Then
\begin{equation*}
\frac{1}{d_1^2}(1+\|T^{(1)}\|^2)+\frac{1}{d_2^2}(1+\|T^{(2)}\|^2)=
\frac{1}{d_1d_2}(2+\|T^{(1)}\|^2+\|T^{(2)}\|^2).
\end{equation*}
Using the above two equations we obtain that
\begin{equation*}
\begin{split}
\|T^{(12)}\|^2&=d_1d_2-1-\|T^{(1)}\|^2-\|T^{(2)}\|^2\\
&=d_1d_2+1-[\frac{d_2}{d_1}(1+\|T^{(1)}\|^2)+\frac{d_1}{d_2}(1+\|T^{(2)}\|^2)].
\end{split}
\end{equation*}
By $\|T^{(1)}\|^2\geq0$, $\|T^{(2)}\|^2\geq0$, we have
\begin{equation}\label{5}
\begin{split}
\|T^{(12)}\|^2&\leq d_1d_2+1-(\frac{d_2}{d_1}\cdot1+\frac{d_1}{d_2}\cdot1)\\
&=d_1d_2(1-\frac{1}{d_1^2}-\frac{1}{d_2^2})+1,
\end{split}
\end{equation}
If $\rho$ is a mixed state, then $\rho=\sum_{i}p_i\rho_i$ is a convex sum of pure states, $\sum_ip_i=1$.
Then $\|T^{(12)}(\rho)\|\leq \sum_ip_i\|T^{(12)}(\rho_i)\|\leq \sqrt{d_1d_2(1-\frac{1}{d_1^2}-\frac{1}{d_2^2})+1}.$ \qed
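As a sanity check of Lemma 2 (an illustrative NumPy sketch rather than a proof), one can sample a random pure state on $H_1^{d_1}\otimes H_2^{d_2}$, compute the block $T^{(12)}$ of correlation coefficients, and compare $\|T^{(12)}\|^2$ with the bound.
\begin{verbatim}
import numpy as np

def pauli_ops(d):
    w = np.exp(2j * np.pi / d)
    return [np.array([[w ** (i * m) if c == (m + j) % d else 0 for c in range(d)]
                      for m in range(d)]) for i in range(d) for j in range(d)]

d1, d2 = 2, 3
rng = np.random.default_rng(0)
psi = rng.normal(size=d1 * d2) + 1j * rng.normal(size=d1 * d2)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())                 # random pure state on H_1 x H_2

A1, A2 = pauli_ops(d1), pauli_ops(d2)
T12 = np.array([[np.trace(rho @ np.kron(A1[u1], A2[u2]).conj().T)
                 for u2 in range(1, d2 * d2)] for u1 in range(1, d1 * d1)])
bound = d1 * d2 * (1 - 1 / d1**2 - 1 / d2**2) + 1
print(np.linalg.norm(T12)**2, "<=", bound)      # the inequality of Lemma 2 holds
\end{verbatim}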
A general tripartite state $\rho\in H_{1}^{d_1}\otimes H_{2}^{d_2}\otimes H_{3}^{d_3}$ can be written in terms of the generalized Pauli operators:
\begin{equation}\label{7}
\rho=\frac{1}{d_1d_2d_3}\sum_{u_1=0}^{d_1^2-1}\sum_{u_2=0}^{d_2^2-1}\sum_{u_3=0}^{d_3^2-1}t_{u_1,u_2,u_3}A_{u_1}^{(1)}\otimes A_{u_2}^{(2)}\otimes A_{u_3}^{(3)}
\end{equation}
where $A_{u_f}^{(f)}$ stands for the operator $A_{u_f}$ acting on the space $H_f^{d_f}$, $A_{0}^{(f)}=I_{d_f}$, and $t_{u_1,u_2,u_3}=tr(\rho (A_{u_1}^{(1)})^{\dagger}\otimes (A_{u_2}^{(2)})^{\dagger}\otimes (A_{u_3}^{(3)})^{\dagger})$ are the
complex coefficients. Let $T^{(f)}$ and $T^{(fg)}$ be the (column) vectors with entries of $t_{u_f,0,0}$ and $t_{u_f,u_g,0}$ for $u_f, u_g\neq0$, $f,g\in\{1,2,3\}$.
In the following we construct matrices out of the expansion coefficients of the density matrix $\rho$ in \eqref{7}. For real numbers $\alpha$, $\beta$ and distinct indices
$f, g, h\in\{1,2,3\}$, set
\begin{equation}\label{8}
N^{f|gh}=\alpha S_0^{f|g}+\beta S^{f|gh},
\end{equation}
where $S_0^{f|g}=[S^{f|g}~~O_f]\in \mathbb R^{(d_f^2-1)\times (d_g^2-1)(d_h^2-1)}$ is the block matrix with submatrix $S^{f|g}=T^{(f)}(T^{(g)})^{\dagger}=[t_{u_f,u_g,0}]$ of size $(d_f^2-1)\times(d_g^2-1)$ appended with the zero matrix of size $(d_f^2-1)\times(d_g^2-1)(d_h^2-2)$ and $S^{f|gh}=T^{(f)}(T^{(gh)})^{\dagger}=[t_{u_f,u_g,u_h}]$ is a $(d_f^2-1)\times(d_g^2-1)(d_h^2-1)$ matrix.
For example, when $\rho\in H_1^2\otimes H_2^2\otimes H_3^3$, $N^{2|13}=\alpha S_0^{2|1}+\beta S^{2|13}$, where
$$S^{2|1}=\left[
\begin{array}{ccc}
t_{1,1,0} & t_{2,1,0} & t_{3,1,0} \\
t_{1,2,0} & t_{2,2,0} & t_{3,2,0} \\
t_{1,3,0} & t_{2,3,0} & t_{3,3,0} \\
\end{array}
\right],~~~
S^{2|13}=\left[
\begin{array}{cccccc}
t_{1,1,1} & t_{1,1,2} & \cdots & t_{1,1,8} & \cdots & t_{3,1,8} \\
t_{1,2,1} & t_{1,2,2} & \cdots & t_{1,2,8} & \cdots & t_{3,2,8} \\
t_{1,3,1} & t_{1,3,2} & \cdots & t_{1,3,8} & \cdots & t_{3,3,8} \\
\end{array}
\right].
$$
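To see how these matrices are assembled in practice, the following NumPy sketch (the noisy GHZ state is only a placeholder) builds the correlation tensor of a tripartite state, forms $N^{f|gh}$ with the zero padding described above, and evaluates its trace norm.
\begin{verbatim}
import numpy as np

def pauli_ops(d):
    w = np.exp(2j * np.pi / d)
    return [np.array([[w ** (i * m) if c == (m + j) % d else 0 for c in range(d)]
                      for m in range(d)]) for i in range(d) for j in range(d)]

def corr_tensor(rho, dims):
    # t_{u1,u2,u3} = tr(rho (A_{u1} x A_{u2} x A_{u3})^dagger)
    ops = [pauli_ops(d) for d in dims]
    t = np.zeros([d * d for d in dims], dtype=complex)
    for u1 in range(dims[0] ** 2):
        for u2 in range(dims[1] ** 2):
            for u3 in range(dims[2] ** 2):
                A = np.kron(np.kron(ops[0][u1], ops[1][u2]), ops[2][u3])
                t[u1, u2, u3] = np.trace(rho @ A.conj().T)
    return t

def trace_norm_N(t, dims, f, g, h, alpha, beta):
    # N^{f|gh} = alpha * S_0^{f|g} + beta * S^{f|gh}, with S^{f|g} padded by zeros
    df2, dg2, dh2 = dims[f] ** 2, dims[g] ** 2, dims[h] ** 2
    tt = np.moveaxis(t, (f, g, h), (0, 1, 2))
    S_fg = tt[1:, 1:, 0]
    S_fgh = tt[1:, 1:, 1:].reshape(df2 - 1, (dg2 - 1) * (dh2 - 1))
    S0 = np.zeros_like(S_fgh)
    S0[:, : dg2 - 1] = S_fg
    return np.linalg.svd(alpha * S0 + beta * S_fgh, compute_uv=False).sum()

dims = (2, 2, 2)
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho = 0.9 * np.outer(ghz, ghz) + 0.1 * np.eye(8) / 8     # noisy GHZ, illustrative
print(trace_norm_N(corr_tensor(rho, dims), dims, 0, 1, 2, alpha=1.0, beta=1.0))
\end{verbatim}
Comparing such trace norms with the thresholds in the theorems below is all that is needed to apply the criteria in practice.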
\begin{theorem}\label{1}
If the tripartite state $\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes H_3^{d_3}$ is separable under the bipartition $f|gh$, we have
\begin{equation}\label{9}
\|N^{f|gh}\|_{tr}\leq\sqrt{d_f-1}\left(|\alpha|\sqrt{d_g-1}+|\beta|\sqrt{d_gd_h(1-\frac{1}{d_g^2}-\frac{1}{d_h^2})+1}\right).
\end{equation}
\end{theorem}
{\it Proof}~~If the tripartite mixed state $\rho$ is separable under the bipartition $f|gh$, then it can be expressed as
\begin{equation}\label{10}
\rho_{f|gh}=\sum_sp_s\rho_f^s\otimes \rho_{gh}^s, \quad 0<p_s\leq1, \sum_sp_s=1,
\end{equation}
where
\begin{equation}\label{11}
\rho_f^s=\frac{1}{d_f}\sum\limits_{u_f=0}^{d_f^2-1}t_{u_f}^sA_{u_f}^{(f)},
\end{equation}
\begin{equation}\label{12}
\begin{split}
\rho_{gh}^s=&\frac{1}{d_gd_h}\sum\limits_{u_g=0}^{d_g^2-1}\sum\limits_{u_h=0}^{d_h^2-1}t_{u_g,u_h}^sA_{u_g}^{(g)}\otimes A_{u_h}^{(h)}.
\end{split}
\end{equation}
Let $T_s^{(f)}$, $T_s^{(g)}$, $T_s^{(h)}$ and $T_s^{(gh)}$ be the vectors with entries $t_{u_f}^s$, $t_{u_g,0}^s$, $t_{0,u_h}^s$ and $t_{u_g,u_h}^s$ for $u_f,u_g,u_h\neq0$. Then
\begin{equation}\label{13}
S^{f|g}=\sum_sp_sT_s^{(f)}(T_s^{(g)})^{\dagger},~~ S^{f|gh}=\sum_sp_sT_s^{(f)}(T_s^{(gh)})^{\dagger},
\end{equation}
where $\dagger$ stands for conjugate transpose. It follows from Lemma 1 and Lemma 2 that
\begin{equation*}
\begin{split}
\|N^{f|gh}\|_{tr}&\leq\sum_sp_s(|\alpha|\|T_s^{(f)}\cdot[(T_s^{(g)})^{\dagger}~0~\cdots~0]\|_{tr}+
|\beta|\|T_s^{(f)}(T_s^{(gh)})^{\dagger}\|_{tr})\\
&=\sum_sp_s(|\alpha|\|T_s^{(f)}\|\|T_s^{(g)}\|+
|\beta|\|T_s^{(f)}\|\|T_s^{(gh)}\|)\\
&\leq\sqrt{d_f-1}\left(|\alpha|\sqrt{d_g-1}+|\beta|\sqrt{d_gd_h(1-\frac{1}{d_g^2}-\frac{1}{d_h^2})+1}\right),
\end{split}
\end{equation*}
where we have used the triangular inequality of the norm $\| \cdot \|_{tr}$
and $\||a\rangle\langle b|\|_{tr}=\||a\rangle\|\||b\rangle\|$
for real vectors $|a\rangle$ and $|b\rangle$.
\qed
Now we consider genuine tripartite entanglement. A mixed state is said to be genuinely multipartite entangled if it cannot be written as a convex combination of biseparable states. Let $T(\rho)=\frac{1}{3}(\|N^{1|23}\|_{tr}+\|N^{2|13}\|_{tr}+\|N^{3|12}\|_{tr})$. After choosing certain (fixed) values of $\alpha$ and $\beta$, we define
\begin{equation*}
\begin{split}
K_1=\textrm{Max}\{&\sqrt{d_1-1}\left(|\alpha|\sqrt{d_2-1}+|\beta|\sqrt{d_2d_3(1-\frac{1}{d_2^2}-\frac{1}{d_3^2})+1}\right),\\ &\sqrt{d_2-1}\left(|\alpha|\sqrt{d_1-1}+|\beta|\sqrt{d_1d_3(1-\frac{1}{d_1^2}-\frac{1}{d_3^2})+1}\right),\\
&\sqrt{d_3-1}\left(|\alpha|\sqrt{d_1-1}+|\beta|\sqrt{d_1d_2(1-\frac{1}{d_1^2}-\frac{1}{d_2^2})+1}\right)\}.
\end{split}
\end{equation*}
We have the following theorem.
\begin{theorem}\label{2}
A mixed state $\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes H_3^{d_3}$ is genuine tripartite entangled if $T(\rho)>K_1$.
\end{theorem}
{\it Proof}~ Suppose $\rho$ is a biseparable mixed state, then
$\rho=\sum_i p_i\rho_i^1\otimes\rho_i^{23}+\sum_j r_j\rho_j^2\otimes\rho_j^{13}+\sum_k s_k\rho_k^3\otimes\rho_k^{12}$ with $0\leq p_i,r_j,s_k\leq1$ and $\sum_i p_i+\sum_j r_j+\sum_k s_k=1$. By Theorem 1, we have that
\begin{equation*}
\begin{split}
T(\rho)&=\frac{1}{3}(\sum_i p_i\|N^{1|23}(\rho_i)\|_{tr}+\sum_jr_j\|N^{2|13}(\rho_j)\|_{tr}+\sum_ks_k\|N^{3|12}(\rho_k)\|_{tr})\\
&\leq\frac{1}{3}(K_1+K_1+K_1)=K_1.
\end{split}
\end{equation*}
Consequently, if $T(\rho)>K_1$, $\rho$ is genuine tripartite entangled.
\qed
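The threshold $K_1$ is elementary to evaluate; the small helper below (an illustrative sketch) reproduces, for example, the three-qubit value $1+\sqrt{3}$ that appears in Table \ref{tab:1} for $\alpha=\beta=1$.
\begin{verbatim}
import numpy as np

def K1(d1, d2, d3, alpha, beta):
    # threshold of Theorem 2 for fixed alpha, beta
    def term(df, dg, dh):
        return np.sqrt(df - 1) * (abs(alpha) * np.sqrt(dg - 1)
            + abs(beta) * np.sqrt(dg * dh * (1 - 1 / dg**2 - 1 / dh**2) + 1))
    return max(term(d1, d2, d3), term(d2, d1, d3), term(d3, d1, d2))

print(K1(2, 2, 2, 1, 1))   # three qubits, alpha = beta = 1: 1 + sqrt(3) = 2.732...
\end{verbatim}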
Next we consider permutationally invariant states $\rho$, i.e. $\rho=\rho^p=p\rho p^{\dagger}$ for any permutation $p$ of the qudits. A biseparable permutationally invariant state can be written as $\rho=\sum_i p_i\rho_i^1\otimes\rho_i^{23}+\sum_j r_j\rho_j^2\otimes\rho_j^{13}+\sum_k s_k\rho_k^3\otimes\rho_k^{12}$, where $0<p_i,r_j,s_k\leq1$. We have the following corollary.
\begin{corollary}
If a permutationally invariant mixed state $\rho$ is biseparable, then
$$T(\rho)=\frac{1}{3}(\|N^{1|23}\|_{tr}+\|N^{2|13}\|_{tr}+\|N^{3|12}\|_{tr})\leq J_1.$$
Therefore if $T(\rho)>J_1$, $\rho$ is genuine tripartite entangled. Here
\begin{equation*}
\begin{split}
J_1=&\frac{1}{3}\left(\sqrt{d_1-1}(|\alpha|\sqrt{d_2-1}+|\beta|\sqrt{d_2d_3(1-\frac{1}{d_2^2}-\frac{1}{d_3^2})+1}) \right.\\ &+\sqrt{d_2-1}(|\alpha|\sqrt{d_1-1}+|\beta|\sqrt{d_1d_3(1-\frac{1}{d_1^2}-\frac{1}{d_3^2})+1})\\ &+\left. \sqrt{d_3-1}(|\alpha|\sqrt{d_1-1}+|\beta|\sqrt{d_1d_2(1-\frac{1}{d_1^2}-\frac{1}{d_2^2})+1})\right).
\end{split}
\end{equation*}
\end{corollary}
\textit{\textbf{Example 1}} Consider the mixed three-qubit $W$ state,
\begin{equation}\label{14}
\rho=\frac{1-x}{8}I_8+x|W\rangle\langle W|, \quad 0\leq x\leq1,
\end{equation}
where $|W\rangle=\frac{1}{\sqrt{3}}(|001\rangle+|010\rangle+|100\rangle)$ and $I_8$ is the $8\times8$ identity matrix. With different choices of $\alpha$ and $\beta$, using Theorem 2 we obtain
the corresponding intervals of $x$ for which $\rho$ is genuine tripartite entangled. The results
are shown in Table \ref{tab:1}. Explicitly, for $\alpha=\frac{1}{10}$ and $\beta=2$, let
$f_1(x)=T(\rho)-K_1=6.5225x-(\frac{1}{10}+2\sqrt{3})$; then $f_1(x)>0$, i.e. $\rho$ is genuinely tripartite entangled, for $0.5464<x\leq1$. Set $g_1(x)=\frac{1}{12}(\sqrt{66}x-6)$; Theorem 2 in \cite{lmj} says that $\rho$ is genuinely entangled when $g_1(x)>0$, i.e. for $0.7385<x\leq1$. Theorem 2 in \cite{dgh} implies that $\rho$ is genuinely entangled if $g_2(x)=3.26x-\frac{6+\sqrt{3}}{3}>0$, i.e. for $0.791<x\leq1$. Our result clearly outperforms these two criteria; see the comparison in Fig. \ref{fig:1}.
\begin{table}[!htb]
\caption{$T(\rho_W)$, $K_1$ and the range of GME of the state (\ref{14}) for different $\alpha$ and $\beta$.}
\label{tab:1}
\centering
\begin{tabular}{cccc}
\hline\noalign{\smallskip}
\ & $T(\rho_W)$ & $K_1$ & the range of GME \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\alpha=1,\beta=1$ & $3.7177x$ & $1+\sqrt{3}$ & $0.7349<x\leq1$ \\
$\alpha=\frac{1}{2},\beta=2$ & $6.5825x$ & $\frac{1}{2}+2\sqrt{3}$ & $0.6022<x\leq1$\\
$\alpha=\frac{1}{10},\beta=2$ & $6.5225x$ & $\frac{1}{10}+2\sqrt{3}$ & $0.5464<x\leq1$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.75\textwidth]{figure1.eps}\\
\caption{$f_1(x)$ from our result (solid straight line), $g_1(x)$ from Theorem 2 in \cite{lmj}(dashed straight line), $g_2(x)$ from Theorem 2 in \cite{dgh}(dash-dot straight line). }
\label{fig:1}
\end{figure}
\textit{\textbf{Example 2}} Consider the quantum state $\rho\in H_1^3\otimes H_2^3\otimes H_3^2$,
\begin{equation}\label{15}
\rho=\frac{1-x}{18}I_{18}+x|\varphi\rangle\langle\varphi|,
\end{equation}
where $|\varphi\rangle=\frac{1}{\sqrt{5}}[(|10\rangle+|21\rangle)|0\rangle
+(|00\rangle+|11\rangle+|22\rangle)|1\rangle]$, $0\leq x\leq1$, and $I_{18}$ is the $18\times18$ identity matrix. By Theorem 1, we can determine the range of $x$ for which $\rho$ is certainly entangled.
Table \ref{tab:2} shows that when $\alpha=0$, $\beta=1$, our criterion detects the entanglement for $0.3405<x\leq1$, which is better than the result $0.35\leq x\leq1$ given in \cite{lc}.
Similarly, when $\alpha=1$, $\beta=1$, Theorem 2 gives $f_2(x)=T(\rho)-(2+\sqrt{\frac{29}{3}})=9.0668x-(2+\sqrt{\frac{29}{3}})$; therefore $\rho$ is genuinely tripartite entangled when $f_2(x)>0$, i.e. for $0.5635<x\leq1$. Note that the criterion given in \cite{lc} cannot detect genuine entanglement.
\begin{table}[!htb]
\caption{The entanglement regions of $\rho$ as given by Theorem 1.}
\label{tab:2}
\centering
\begin{tabular}{ccc}
\hline\noalign{\smallskip}
\ & $\|N^{2|13}\|_{tr}$ & The range of entanglement \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\alpha=1, \beta=1$ & $10.5292x$ & $0.4852<x\leq1$ \\
$\alpha=\frac{1}{2}, \beta=2$ & $18.4650x$ & $0.3909<x\leq1$\\
$\alpha=0, \beta=1$ & $9.1321x$ & $0.3405<x\leq1$\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Genuine entanglement for multipartite quantum states}
Now we consider genuine entanglement of $n$-partite quantum systems. Let $\{A_{u_s}^{(s)}\}$ $(u_s=0,\cdots,d_s^2-1)$ be the generalized Pauli operators of the $s$th $d_s$-dimensional Hilbert space $H_s^{d_s}$. Any quantum state
$\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes\cdots\otimes H_n^{d_n}$ has the generalized Pauli operators representation:
\begin{equation}\label{16}
\rho=\frac{1}{d_1d_2\cdots d_n}\sum\limits_{s=1}^n\sum_{u_s=0}^{d_s^2-1}t_{u_1,u_2,\cdots,u_n}A_{u_1}^{(1)}\otimes A_{u_2}^{(2)}\otimes \cdots\otimes A_{u_n}^{(n)}
\end{equation}
where $A_0^{(s)}=I_{d_s}$ $(s=1,\cdots,n)$ and $t_{u_1,u_2,\cdots,u_n}=tr(\rho (A_{u_1}^{(1)})^{\dagger}\otimes(A_{u_2}^{(2)})^{\dagger}\otimes \cdots\otimes (A_{u_n}^{(n)})^{\dagger})$ are complex coefficients.
Let $T^{(l_1\cdots l_k)}$ be the vectors with entries $t_{u_{l_1},\cdots,u_{l_k},\cdots,0}$, $u_{l_1},\cdots,u_{l_k}\neq0$ and $1\leq l_1<\cdots< l_k\leq n$.
We have
$$\|T^{(1)}\|^2=\sum_{u_1=1}^{d_1^2-1}t_{u_1,\cdots,0}t_{u_1,\cdots,0}^{*},$$
$$\cdots,$$
$$\|T^{(l_1\cdots l_k)}\|^2=\sum\limits_{s=1}^{k}\sum_{u_{l_s}=1}^{d_{l_s}^2-1}t_{u_{l_1}\cdots u_{l_k}\cdots0}t_{u_{l_1}\cdots u_{l_k}\cdots0}^{*},$$
$$\cdots,$$
$$\|T^{(12\cdots n)}\|^2=\sum\limits_{s=1}^n\sum_{u_s=1}^{d_s^2-1}t_{u_1\cdots u_n}t_{u_1\cdots u_n}^{*},$$
where $*$ represents the conjugate.
Set
$$A_1=\|T^{(1)}\|^2+\cdots+\|T^{(n)}\|^2,$$
$$A_2=\|T^{(12)}\|^2+\cdots+\|T^{((n-1),n)}\|^2,$$ $$\cdots,$$ $$A_n=\|T^{(1\cdots n)}\|^2.$$
\begin{lemma}\label{3}
Let $\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes\cdots\otimes H_n^{d_n}$ $(n\geq2)$ be an $n$-partite pure quantum state. Then
\begin{equation}\label{17}
\|T^{(12\cdots n)}\|^2\leq
\frac{d_1\cdots d_n(n-1-\sum\limits_{s=1}^n\frac{1}{d_s^2})+1}{n-1}.
\end{equation}
\end{lemma}
{\it Proof}~~For the pure state $\rho$ we have $tr(\rho^2)=1$ and $tr(\rho_{l_1}^2)=tr(\rho_{l_2\cdots l_n}^2)$ for any distinct indices $l_1, \ldots, l_n\in\{1,2,\cdots,n\}$. Here
$\rho_{l_1}$ and $\rho_{l_2\cdots l_n}$ are the reduced states for the subsystem $H_{l_1}^{d_{l_1}}$ and $H_{l_2}^{d_{l_2}}\otimes\cdots\otimes H_{l_n}^{d_{l_n}}$. Therefore, we have
\begin{equation}\label{18}
tr(\rho^2)=\frac{1}{d_1d_2\cdots d_n}(1+A_1+\cdots+A_n)=1,
\end{equation}
and
\begin{equation}\label{19}
\frac{1}{d_{l_1}}(1+\|T^{(l_1)}\|^2)=\frac{1}{d_{l_2}\cdots d_{l_n}}(1+\|T^{(l_2)}\|^2+\cdots+\|T^{(l_n)}\|^2+\cdots+\|T^{(l_2\cdots l_n)}\|^2).
\end{equation}
Since $\sum\limits_{l_1=1}^n\frac{1}{d_{l_1}}tr(\rho_{l_1}^2)=\sum\limits_{l_1=1}^n\frac{1}{d_{l_1}}tr(\rho_{l_2\cdots l_n}^2)$,
we get that
\begin{equation*}
\sum\limits_{l_1=1}^n\frac{1}{d_{l_1}^2}(1+\|T^{(l_1)}\|^2)=\frac{1}{d_{1}\cdots d_{n}}[n+(n-1)A_1+(n-2)A_2+\cdots+A_{n-1}].
\end{equation*}
Therefore,
\begin{equation}\label{20}
A_1=\frac{d_{1}\cdots d_{n}}{n-1}\sum\limits_{s=1}^n\frac{1}{d_{s}^2}(1+\|T^{(s)}\|^2)
-\frac{n}{n-1}-\frac{n-2}{n-1}A_2-\frac{n-3}{n-1}A_3-\cdots-\frac{1}{n-1}A_{n-1}.
\end{equation}
Substituting (\ref{20}) into (\ref{18}), we get
\begin{equation}\label{21}
\begin{split}
A_n=&d_1\cdots d_n-1-\frac{1}{n-1}\left(d_{1}\cdots d_{n}\sum\limits_{s=1}^n\frac{1}{d_{s}^2}(1+\|T^{(s)}\|^2)
-n\right)-\frac{1}{n-1}A_2\\
&-\frac{2}{n-1}A_3-\cdots-\frac{n-2}{n-1}A_{n-1}\\
\leq&\frac{d_1\cdots d_n(n-1-\sum\limits_{s=1}^n\frac{1}{d_s^2})+1}{n-1}
\end{split}
\end{equation}
where the inequality follows by dropping the nonnegative terms $\|T^{(s)}\|^2$ and $A_2,\dots,A_{n-1}$.
\qed
Let $\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes\cdots\otimes H_n^{d_n}$ be an $n$-partite state represented as \eqref{16}, and suppose $\rho$ is separable under the bipartition $l_1\cdots l_{k-1}|l_k\cdots l_{n}$. For real numbers $\alpha$, $\beta$ and distinct
indices $l_1, \ldots, l_n\in\{1, 2, \cdots, n\}$,
set
\begin{equation}\label{22}
N^{l_1\cdots l_{k-1}|l_k\cdots l_{n}}=\alpha S_0^{l_1\cdots l_{k-1}|l_k}+\beta S^{l_1\cdots l_{k-1}|l_k\cdots l_n},
\end{equation}
for $k-1=1, 2,\cdots, [n/2]$, where $[n/2]$ denotes the largest integer less than or equal to $n/2$. Let $T^{(l_1\cdots l_k)}$ be the $(d^2_{l_1}-1)\cdots (d^2_{l_k}-1)$-dimensional column vector with entries $t_{u_{l_1}\cdots u_{l_k}\cdots0}$
associated with the generalized Pauli operators representation of $\rho$, and define
$S_0^{l_1\cdots l_{k-1}|l_k}$ to be the block matrix
$S_0^{l_1\cdots l_{k-1}|l_k}=[S^{l_1\cdots l_{k-1}|l_k}~~O_{l_1\cdots l_{k-1}}]$, where
$S^{l_1\cdots l_{k-1}|l_k}=T^{(l_1\cdots l_{k-1})}(T^{(l_k)})^{\dagger}=[t_{u_{l_1}\cdots u_{l_k}\cdots0}]$ is the
$\prod\limits_{s=1}^{k-1}(d_{l_s}^2-1)\times (d_{l_k}^2-1)$ matrix and
$O_{l_1\cdots l_{k-1}}$ is the $\prod\limits_{s=1}^{k-1}(d_{l_s}^2-1)\times
[\prod\limits_{s=k}^{n}(d_{l_s}^2-1)-(d_{l_k}^2-1)]$
zero matrix, and $S^{l_1\cdots l_{k-1}|l_k\cdots l_n}=T^{(l_1\cdots l_{k-1})}(T^{(l_k\cdots l_n)})^{\dagger}=[t_{u_1,\cdots, u_n}]$ to be a $\prod\limits_{s=1}^{k-1}(d_{l_s}^2-1)\times
\prod\limits_{s=k}^{n}(d_{l_s}^2-1)$ matrix. For example, when $\rho\in H_1^{2}\otimes H_2^{2}\otimes H_3^{2}\otimes H_4^{3}$, $N^{13|24}=\alpha S_0^{13|2}+\beta S^{13|24}$, where
$$S^{13|2}=\left[
\begin{array}{ccc}
t_{1,1,1,0}~~ & t_{1,2,1,0}~~ & t_{1,3,1,0} \\
t_{1,1,2,0}~~ & t_{1,2,2,0}~~ & t_{1,3,2,0} \\
t_{1,1,3,0}~~ & t_{1,2,3,0}~~ & t_{1,3,3,0} \\
\vdots & \vdots & \vdots \\
t_{3,1,3,0}~~ & t_{3,2,3,0}~~ & t_{3,3,3,0} \\
\end{array}
\right],~~
S^{13|24}=\left[
\begin{array}{cccccc}
t_{1,1,1,1}~ & t_{1,1,1,2} & \cdots & t_{1,1,1,8} & \cdots & t_{1,3,1,8}\\
t_{1,1,2,1}~ & t_{1,1,2,2} & \cdots & t_{1,1,2,8} & \cdots & t_{1,3,2,8}\\
t_{1,1,3,1}~ & t_{1,1,3,2} & \cdots & \cdot & \cdots & \cdot \\
\vdots~ & \vdots & \vdots & \vdots & \vdots & \vdots \\
t_{3,1,3,1}~ & t_{3,1,3,2} & \cdots & \cdot & \cdots & \cdot \\
\end{array}
\right].
$$
\begin{theorem}\label{3} Fix $\alpha, \beta$ as above.
If the $n$-partite state $\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes\cdots\otimes H_n^{d_n}$ is separable under the bipartition $l_1\cdots l_{k-1}|l_k\cdots l_{n}$, then we have that\\
(i) $\|N^{l_1|l_{2}\cdots l_{n}}\|_{tr}\leq M_{l_1}$;\\
(ii) $\|N^{l_1\cdots l_{k-1}|l_k\cdots l_{n}}\|_{tr}\leq M_{l_1\cdots l_{k-1}}$ $(k\geq3)$;\\
where
\begin{footnotesize}
\begin{equation*}
M_{l_1}=\sqrt{d_{l_1}-1}\left(|\alpha|\sqrt{d_{l_2}-1}+|\beta|
\sqrt{\frac{d_{l_2}\cdots d_{l_n}(n-2-\sum\limits_{s=2}^nd_{l_s}^{-2})+1}{n-2}}\right),
\end{equation*}
\begin{equation*}
M_{l_1\cdots l_{k-1}}=\sqrt{\frac{d_{l_1}\cdots d_{l_{k-1}}(k-2-\sum\limits_{s=1}^{k-1}d_{l_s}^{-2})+1}{k-2}}
\left(|\alpha|\sqrt{d_{l_k}-1}+|\beta|\sqrt{\frac{d_{l_k}\cdots d_{l_n}(n-k-\sum\limits_{s=k}^nd_{l_s}^{-2})+1}{n-k}}\right).
\end{equation*}
\end{footnotesize}
\end{theorem}
{\it Proof}~~$(i)$ If the $n$-partite mixed state is separable under the bipartition $l_1|l_2\cdots l_{n}$, it can be expressed as
\begin{equation}\label{23}
\rho_{l_1|l_2\cdots l_{n}}=\sum\limits_s p_s\rho_{l_1}^s\otimes\rho_{l_2\cdots l_{n}}^s, \ 0<p_s\leq1, \sum\limits_s p_s=1,
\end{equation}
where
\begin{equation}\label{24}
\rho_{l_1}^s=\frac{1}{d_{l_1}}\sum_{u_{l_1}=0}^{d_{l_1}^2-1} t_{u_{l_1}}^sA_{u_{l_1}}^{(l_1)},
\end{equation}
\begin{equation}\label{25}
\rho_{l_2\cdots l_{n}}^s=\frac{1}{d_{l_2}\cdots d_{l_{n}}}\sum\limits_{q=2}^{n}\sum_{u_{l_q}=0}^{d_{l_q}^2-1}t_{u_{l_2},\cdots,u_{l_n}}^s
A_{u_{l_2}}^{(l_2)}\otimes \cdots\otimes A_{u_{l_{n}}}^{(l_{n})}
\end{equation}
Then,
\begin{equation}\label{26}
S^{l_1|l_2}=\sum\limits_sp_sT_s^{(l_1)}(T_s^{(l_2)})^{\dagger},~~
S^{l_1|l_2\cdots l_n}=\sum\limits_sp_sT_s^{(l_1)}(T_s^{(l_2\cdots l_n)})^{\dagger}.
\end{equation}
By Lemma 1 and Lemma 3, we have
\begin{equation*}
\begin{split}
\|N^{l_1|l_2\cdots l_{n}}\|_{tr}&\leq\sum_sp_s(|\alpha|\|T_s^{(l_1)}\|\|T_s^{(l_2)}\|+|\beta|\|T_s^{(l_1)}\|\|T_s^{(l_2\cdots l_n)}\|)\\
&\leq\sqrt{d_{l_1}-1}\left(|\alpha|\sqrt{d_{l_2}-1}+|\beta|
\sqrt{\frac{d_{l_2}\cdots d_{l_n}(n-2-\sum\limits_{s=2}^n\frac{1}{d_{l_s}^2})+1}{n-2}}\right)\\
&=M_{l_1},
\end{split}
\end{equation*}
where we have used $\|A+B\|_{tr}\leq\|A\|_{tr}+\|B\|_{tr}$ for matrices $A$ and $B$ and $\||a\rangle\langle b|\|_{tr}=\||a\rangle\|\||b\rangle\|$
for vectors $|a\rangle$ and $|b\rangle$.\\
$(ii)$ If $\rho$ is separable under the bipartition $l_1\cdots l_{k-1}|l_k\cdots l_{n}$, it can be expressed as
\begin{equation}\label{27}
\rho_{l_1\cdots l_{k-1}|l_k\cdots l_{n}}=\sum\limits_s p_s\rho_{l_1\cdots l_{k-1}}^s\otimes\rho_{l_k\cdots l_{n}}^s, 0<p_s\leq1, \sum\limits_s p_s=1,
\end{equation}
where
\begin{equation}\label{28}
\rho_{l_1\cdots l_{k-1}}^s=\frac{1}{d_{l_1}\cdots d_{l_{k-1}}}\sum\limits_{p=1}^{k-1}\sum_{u_{l_p}=0}^{d_{l_p}^2-1}
t_{u_{l_1},\cdots,u_{l_{k-1}}}^s
A_{u_{l_1}}^{(l_1)}\otimes \cdots\otimes A_{u_{l_{k-1}}}^{(l_{k-1})},
\end{equation}
\begin{equation}\label{29}
\rho_{l_k\cdots l_{n}}^s=\frac{1}{d_{l_k}\cdots d_{l_{n}}}\sum\limits_{q=k}^{n}\sum_{u_{l_q}=0}^{d_{l_q}^2-1}t_{u_{l_k},\cdots,u_{l_{n}}}^s
A_{u_{l_k}}^{(l_k)}\otimes \cdots\otimes A_{u_{l_{n}}}^{(l_{n})}.
\end{equation}
Then,
\begin{equation}\label{30}
S^{l_1\cdots l_{k-1}|l_k}=\sum\limits_sp_sT_s^{(l_1\cdots l_{k-1})}(T_s^{(l_k)})^{\dagger},~~
S^{l_1\cdots l_{k-1}|l_k\cdots l_n}=\sum\limits_sp_sT_s^{(l_1\cdots l_{k-1})}(T_s^{(l_k\cdots l_n)})^{\dagger}.
\end{equation}
Similarly, we get
\begin{equation*}
\begin{split}
&\|N^{l_1\cdots l_{k-1}|l_k\cdots l_{n}}\|_{tr}\\
\leq&\sum_sp_s(|\alpha|\|T_s^{(l_1\cdots l_{k-1})}\|\|T_s^{(l_k)}\|+|\beta|\|T_s^{(l_1\cdots l_{k-1})}\|\|T_s^{(l_k\cdots l_n)}\|)\\
\leq&\sqrt{\frac{d_{l_1}\cdots d_{l_{k-1}}(k-2-\sum\limits_{s=1}^{k-1}\frac{1}{d_{l_s}^2})+1}{k-2}}
[|\alpha|\sqrt{d_{l_k}-1}+|\beta|\sqrt{\frac{d_{l_k}\cdots d_{l_{n}}(n-k-\sum\limits_{s=k}^n\frac{1}{d_{l_s}^2})+1}{n-k}}]\\
=&M_{l_1\cdots l_{k-1}}.
\end{split}
\end{equation*}
\qed
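The thresholds of Theorem 3 are equally easy to evaluate numerically; the sketch below (illustrative, and restricted to the single-site bipartition $l_1|l_2\cdots l_n$ with $n\geq3$) reproduces the four-qubit value $1+\sqrt{11/2}$ used in Example 3 below.
\begin{verbatim}
import numpy as np

def M_first(d, alpha, beta):
    # M_{l_1} of Theorem 3 for the bipartition l_1 | l_2 ... l_n,
    # with d = (d_{l_1}, ..., d_{l_n}) and n >= 3
    n = len(d)
    inner = (np.prod(d[1:]) * (n - 2 - sum(1 / x**2 for x in d[1:])) + 1) / (n - 2)
    return np.sqrt(d[0] - 1) * (abs(alpha) * np.sqrt(d[1] - 1)
                                + abs(beta) * np.sqrt(inner))

print(M_first((2, 2, 2, 2), 1, 1))   # four qubits: 1 + sqrt(11/2) = 3.345...
\end{verbatim}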
Now we consider the question of genuine multipartite entanglement. An $n$-partite mixed state $\rho=\sum_i p_i|\varphi_i\rangle\langle\varphi_i|$ is said to be biseparable if each $|\varphi_i\rangle$ can be expressed in one of the forms $|\varphi_i\rangle=|\varphi_i^{l_1\cdots l_{k-1}}\rangle\otimes|\varphi_i^{l_k\cdots l_n}\rangle$, where $|\varphi_i^{l_1\cdots l_{k-1}}\rangle$ and $|\varphi_i^{l_k\cdots l_n}\rangle$ are pure states in $H_{l_1}^{d_{l_1}}\otimes\cdots \otimes H_{l_{k-1}}^{d_{l_{k-1}}}$ and $H_{l_k}^{d_{l_k}}\otimes\cdots \otimes H_{l_{n}}^{d_{l_{n}}}$, respectively, with distinct indices $l_1,\ldots,l_n\in\{1,2,\cdots,n\}$. Otherwise, $\rho$ is said to be genuinely multipartite entangled. Let
\begin{equation}\label{31}
T(\rho)=\frac{1}{\sum\limits_{s=1}^mC_n^s}(\sum\limits _{l_1=1}^n\|N^{l_1|l_2\cdots l_n}\|_{tr}+\sum\limits _{1\leq l_1<\atop l_2\leq n}\|N^{l_1l_2|l_3\cdots l_n}\|_{tr}+\cdots+\sum\limits _{1\leq l_1<\cdots\atop<l_m\leq n}\|N^{l_1\cdots l_m|l_{m+1}\cdots l_n}\|_{tr}),
\end{equation}
where $C_n^s=\frac{n!}{s!(n-s)!}$ and $m=[n/2]$. Set $K_2=\textrm{Max}\{M_{l}, M_{l_1\cdots l_{k-1}}\}$, where the maximum is taken over $l=1,2,\cdots,n$ and over all $l_1<\cdots<l_{k-1}$ with $l_1,\ldots,l_{k-1}\in\{1,2,\cdots,n\}$ and $2\leq k-1\leq m$.
We have the following criterion.
\begin{theorem}\label{4}
A mixed state $\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes\cdots\otimes H_n^{d_n}$ is genuine multipartite entangled if $T(\rho)>K_2$.
\end{theorem}
{\it Proof}~~If $\rho$ is biseparable, it follows from Theorem 3 that
\begin{equation*}
\begin{split}
T(\rho)&=\frac{1}{\sum\limits_{s=1}^mC_n^s}(\sum\limits _{l_1=1}^n\|N^{l_1|l_2\cdots l_n}\|_{tr}+\sum\limits _{1\leq l_1<\atop l_2\leq n}\|N^{l_1l_2|l_3\cdots l_n}\|_{tr}+\cdots+\sum\limits _{1\leq l_1<\cdots\atop<l_m\leq n}\|N^{l_1\cdots l_m|l_{m+1}\cdots l_n}\|_{tr})\\
&\leq\frac{1}{\sum\limits_{s=1}^mC_n^s}(K_2+\cdots+K_2)\\
&\leq K_2,
\end{split}
\end{equation*}
Consequently, if $T(\rho)>K_2$, $\rho$ is genuine multipartite entangled.
\qed
For a permutationally invariant density matrix, similar to Corollary 1, we have the following result:
\begin{corollary} Let $\rho$ be a permutationally invariant density matrix.
If $T(\rho)>J_2$, then $\rho$ is genuinely multipartite entangled, where
$$J_2=\frac{1}{\sum\limits_{s=1}^mC_n^s}(\sum\limits _{l=1}^nM_{l}+\sum\limits _{1\leq l_1< l_2\leq n}M_{l_1l_2}+\cdots+\sum\limits _{1\leq l_1<\cdots<l_m\leq n}M_{l_1\cdots l_m}).$$
\end{corollary}
\textit{\textbf{Example 3}} Consider the four-qubit state $\rho\in H_1^2\otimes H_2^2\otimes H_3^2\otimes H_4^2$,
\begin{equation}\label{32}
\rho=x|\psi\rangle\langle\psi|+\frac{1-x}{16}I_{16},
\end{equation}
where $|\psi\rangle=\frac{1}{\sqrt{2}}(|0000\rangle+|1111\rangle)$, $0\leq x\leq1$,
$I_{16}$ is the $16\times16$ identity matrix.
Using Theorem 3 $(i)$ with $\alpha=1, \beta=1$, we set
$f_3(x)=\|N^{l_1|l_2l_3l_4}\|_{tr}-(1+\sqrt{\frac{11}{2}})=(4+\sqrt{2})x-(1+\sqrt{\frac{11}{2}})$; then $\rho$ is not separable under the bipartition $l_1|l_2l_3l_4$ for $f_3(x)>0$, i.e. for $0.6179<x\leq1$, while according to Theorem 3 in \cite{lww}, $\rho$ is not separable under the bipartition $l_1|l_2l_3l_4$ for $g_3(x)=9x^2-4>0$, i.e. for $0.6667<x\leq1$. Fig. \ref{fig:2} shows that our method detects more entanglement.
Now using Corollary 2 with $\alpha=1, \beta=1$, we get $f_4(x)=T(\rho)-J_2=\frac{23+2\sqrt{2}}{5}x-\frac{11+\sqrt{22}+3\sqrt{3}}{5}$;
$\rho$ is genuinely entangled for $f_4(x)>0$, i.e. for $0.8087<x\leq1$, while the criterion given in \cite{lww} cannot detect the genuine four-qubit entanglement at all.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.75\textwidth]{figure2.eps}
\caption{$f_3(x)$ from our result (solid straight line), $g_3(x)$ from Theorem 3 in \cite{lww} (dashed curve line). }
\label{fig:2}
\end{figure}
\section{Conclusions}
By adopting the Weyl representation of density matrices in terms of generalized Pauli operators, we have derived several general criteria for detecting genuine entanglement in multipartite quantum systems. Our approach starts from finer upper bounds on the norms of the correlation tensors obtained from the generalized Pauli operator representation; we then obtain entanglement criteria for genuine tripartite quantum states based on certain matrices constructed from the correlation tensors of the density matrices. These results are generalized to higher-dimensional multipartite quantum systems to detect genuine entanglement in arbitrary dimensions. Compared with previously available criteria, ours detect more states, as illustrated in detail with several examples.
\textbf{Acknowledgments} This work is supported by the National Natural Science Foundation of China under grant nos. 12075159, 12126351 and 12171044, Simons Foundation under grant no. 523868, Beijing Natural Science Foundation (grant no. Z190005), Academy for Multidisciplinary Studies, Capital Normal University, and Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology (no. SIQSE202001), and the Academician Innovation Platform of Hainan Province.
\noindent\textbf {Data Availability Statements} All data generated or analysed during this study are available from the corresponding author on reasonable request.
\section{Introduction}
Quantum entanglement is a key resource in quantum information with wide applications in entanglement swapping \cite{bvk}, quantum cryptography \cite{eak} and quantum secure communication \cite{bw}. The genuine multipartite entanglement (GME) stands out
with significant properties \cite{hp,tg}. Thus measuring and detection of genuine multipartite entanglement for given states has been an important task in quantum computation.
A lot of methods have been presented in detecting
entanglement and genuine entanglement \cite{pa,hhh,hong,skt}. For tripartite quantum states, sufficient conditions to test entanglement of three-qubit states in the vicinity of the GHZ, the W states and the PPT entangled states were found in \cite{akb}. A sufficient criterion for the entanglement of tripartite systems based on local sum uncertainty relations was proposed in \cite{ymm}. Sufficient conditions for judging genuine tripartite entanglement were presented by using partial transposition and realignment of density matrices in \cite{mes}. Yang et al \cite{ysc} derived a criterion for detecting genuine tripartite entanglement based on quantum Fisher information. By using the Bloch representation of density matrices and the norms of correlation tensors, the genuine tripartite entangled criteria were presented in \cite{lmj,dgh}. The authors in \cite{zzj} studied the separability criteria in tripartite and four-partite quantum system by the matrix method. The separability criteria for four-partite quantum system based on the upper bound of Bloch vectors were discussed in \cite{lww}. For higher dimensional quantum system, Chen et al \cite{cw} presented a generalized partial separability criterion of multipartite quantum systems in arbitrary dimensions. The separable criteria and $k$-separable criteria for general $n$-partite quantum states were given in \cite{hgy,lwf,xzz}.
Many of these methods have used the Bloch representation of the density matrix, which has
become more complex as dimension of the quantum system increases. This is partly due to the
fact that the Bloch representation is relied on the Gell-Mann basis of the special unitary Lie algebra $\mathfrak{su}(d)$
which has three kinds of basis elements: upper, diagonal and lower matrices. In view of this, perhaps
using another well-known basis of the Lie algebra $\mathfrak{su}(d)$: the Weyl basis to study quantum entanglement will likely simplify some of the criteria, as the latter consists of uniformed basis elements. In Ref. \cite{bgj}, the authors showed that the principal basis matrix plays an essential role in the representation theory of the Yangian $Y(\mathfrak{sl}(3))$ which has a close relation with the study of entangled states in quantum information (also see \cite{fss} for recent applications).
In this paper, we study the genuine entanglement of multipartite quantum systems by using the Weyl representation with generalized Pauli operators, and we obtain several better criteria in detecting the GME than previously available tests. The paper is organized as follows. In section 2, after reviewing the Weyl representation of the quantum state with generalized Pauli operators, we construct matrices by using the correlation tensors and derive the criteria to detect entanglement and genuine tripartite entanglement. By detailed examples, our results can detect more genuine entangled states. In section 3, we generalize the results to arbitrary $n$-partite quantum systems. Conclusions are given in section 4.
\section{Genuine entanglement for tripartite quantum state}
We first consider the GME for tripartite states. Let $E_{ij}$ be the $d\times d$ unit matrix with the only nonzero entry 1 at the position $(i,j)$, and let $\omega$ be a fixed $d$-th primitive root of unity. For fixed $d_s$, by means of the division with remainder, every $u_s\in\{0,\cdots,d_s^2-1\}$ is
expressed uniquely as $u_s=d_si+j$ for a pair $(i, j)$, where $(0\leq i,j\leq d_s-1)$. Then the generalized Pauli operators of the $s$th $d_s$-dimensional Hilbert space $H_s^{d_s}$ are given by
\begin{equation}\label{1}
A_{u_s}^{(s)}=A_{d_si+j}^{(s)}=\sum\limits_{m=0}^{d_s-1}\omega^{im}E_{m,m+j}, \qquad 0\leq u_s\leq d_2^2-1
\end{equation}
where $\omega^{d_s}=1$. The basis obeys the algebraic relation:
\begin{equation*}
A_{d_si+j}^{(s)}A_{d_sk+l}^{(s)}=\omega^{jk}A_{d_s(i+k)+(j+l)}^{(s)},
\end{equation*}
then $(A_{d_si+j}^{(s)})^{\dagger}=\omega^{ij}A_{d_s(d_s-i)+(d_s-j)}^{(s)}$, so $tr(A_{d_si+j}^{(s)}(A_{d_sk+l}^{(s)})^{\dagger})=\delta_{ik}\delta_{jl}d_s$\cite{hjz}.
Denote by $\|\cdot\|$ the norm of a (column) complex vector, i.e. $\|v\|=\sqrt{v^{\dagger} v}$.
The trace norm (Ky Fan norm) of a retangular matrix $A\in \mathbb{C}^{m\times n}$ is defined as $\|A\|_{tr}=\sum\sigma_i=tr\sqrt{AA^\dagger}$,
where $\sigma_i$ are the singular values of $A$. Clearly $\|A\|_{tr}=\|A^{\dagger}\|_{tr}$.
\begin{lemma}\label{lemma:1} Let $H_s^{d_s}$ denote the $s^{th}$ $d_s$-dimensional Hilbert space. For a quantum state $\rho_1\in H_1^{d_1}$, $\rho_1$ can be expressed as
$\rho_1=\frac{1}{d_1}\sum\limits_{u_1=0}^{d_1^2-1}t_{u_1}A_{u_1}^{(1)}$,
where $A_0^{(1)}=I_{d_1}$, $t_{u_1}=tr(\rho_1(A_{u_1}^{(1)})^{\dagger})$ are complex coefficients. Let $T^{(1)}$ be the column vector with entries $t_{u_1}$ for $u_1\neq0$,
we have
\begin{equation}\label{2}
\|T^{(1)}\|^2\leq d_1-1.
\end{equation}
\end{lemma}
{\it Proof}~~Since $tr(\rho_1^2)\leq1$, we have
$$tr(\rho_1^2)=tr(\rho_1\rho_1^{\dagger})=\frac{1}{d_1}(1+\|T^{(1)}\|^2)\leq1,$$
namely, $\|T^{(1)}\|^2\leq d_1-1$.
\qed
For a state $\rho_{12}\in H_1^{d_1}\otimes H_2^{d_2}$,
$\rho_{12}$ has the generalized Pauli operators representation:
\begin{equation}\label{3}
\rho_{12}=\frac{1}{d_1d_2}\sum\limits_{u_1=0}^{d_1^2-1}
\sum\limits_{u_2=0}^{d_2^2-1}t_{u_1,u_2}A_{u_1}^{(1)}\otimes A_{u_2}^{(2)}
\end{equation}
where $A_0^{(s)}=I_{d_s}(s=1,2)$, the coefficients
$t_{u_1,u_2}=tr(\rho_{12}(A_{u_1}^{(1)})^{\dagger}\otimes (A_{u_2}^{(2)})^{\dagger})$ are complex numbers.
Let $T^{(1)}$, $T^{(2)}$, $T^{(12)}$ be the vectors with entries $t_{u_1,0}$, $t_{0,u_2}$, $t_{u_1,u_2}$ for $u_1,u_2\neq0$.
\begin{lemma}\label{lemma:2}
Let $\rho_{12}\in H_1^{d_1}\otimes H_2^{d_2}$ be a mixed state, we have $\|T^{(12)}\|^2\leq d_1d_2(1-\frac{1}{d_1^2}-\frac{1}{d_2^2})+1$.
\end{lemma}
{\it Proof}~~For a pure state $\rho_{12}$, we have $tr(\rho_{12}^2)=1$, namely
\begin{equation}\label{4}
tr(\rho_{12}^2)=tr(\rho_{12}\rho_{12}^{\dagger})
=\frac{1}{d_1d_2}(1+\|T^{(1)}\|^2+\|T^{(2)}\|^2+\|T^{(12)}\|^2)
=1.
\end{equation}
By using $tr(\rho_1^2)=tr(\rho_2^2)$, we have $\frac{1}{d_1}(1+\|T^{(1)}\|^2)=\frac{1}{d_2}(1+\|T^{(2)}\|^2)$, where $\rho_1$ and $\rho_2$ are the reduced density operators on $H_1^{d_1}$ and $H_2^{d_2}$, respectively. Then
\begin{equation*}
\frac{1}{d_1^2}(1+\|T^{(1)}\|^2)+\frac{1}{d_2^2}(1+\|T^{(2)}\|^2)=
\frac{1}{d_1d_2}(2+\|T^{(1)}\|^2+\|T^{(2)}\|^2).
\end{equation*}
Using the above two equations we obtain that
\begin{equation*}
\begin{split}
\|T^{(12)}\|^2&=d_1d_2-1-\|T^{(1)}\|^2-\|T^{(2)}\|^2\\
&=d_1d_2+1-[\frac{d_2}{d_1}(1+\|T^{(1)}\|^2)+\frac{d_1}{d_2}(1+\|T^{(2)}\|^2)].
\end{split}
\end{equation*}
By $\|T^{(1)}\|^2\geq0$, $\|T^{(2)}\|^2\geq0$, we have
\begin{equation}\label{5}
\begin{split}
\|T^{(12)}\|^2&\leq d_1d_2+1-(\frac{d_2}{d_1}\cdot1+\frac{d_1}{d_2}\cdot1)\\
&=d_1d_2(1-\frac{1}{d_1^2}-\frac{1}{d_2^2})+1,
\end{split}
\end{equation}
If $\rho$ is a mixed state, then $\rho=\sum_{i}p_i\rho_i$ is a convex sum of pure states, $\sum_ip_i=1$.
Then $\|T^{(12)}(\rho)\|\leq \sum_ip_i\|T^{(12)}(\rho_i)\|\leq \sqrt{d_1d_2(1-\frac{1}{d_1^2}-\frac{1}{d_2^2})+1}.$ \qed
A general tripartite state $\rho\in H_{1}^{d_1}\otimes H_{2}^{d_2}\otimes H_{3}^{d_3}$ can be written in terms of the generalized Pauli operators:
\begin{equation}\label{7}
\rho=\frac{1}{d_1d_2d_3}\sum_{u_1=0}^{d_1^2-1}\sum_{u_2=0}^{d_2^2-1}\sum_{u_3=0}^{d_3^2-1}t_{u_1,u_2,u_3}A_{u_1}^{(1)}\otimes A_{u_2}^{(2)}\otimes A_{u_3}^{(3)}
\end{equation}
where $A_{u_f}^{(f)}$ stands for that the tensor operator with $A_{u_f}$ acting on the space $H_f^{d_f}$, $A_{0}^{(f)}=I_{d_f}$, $t_{u_1,u_2,u_3}=tr(\rho (A_{u_1}^{(1)})^{\dagger}\otimes (A_{u_2}^{(2)})^{\dagger}\otimes (A_{u_3}^{(3)})^{\dagger})$ are the
complex coefficients. Let $T^{(f)}$ and $T^{(fg)}$ be the (column) vectors with entries of $t_{u_f,0,0}$ and $t_{u_f,u_g,0}$ for $u_f, u_g\neq0$, $f,g\in\{1,2,3\}$.
In the following we will construct some matrices out of the expansion coefficients of the density matrix $\rho$ in \eqref{7}. Let $\rho$ is separable under the bipartition $f|gh$, for real numbers $\alpha$, $\beta$ and distinct indices
$f, g, h\in\{1,2,3\}$, set
\begin{equation}\label{8}
N^{f|gh}=\alpha S_0^{f|g}+\beta S^{f|gh},
\end{equation}
where $S_0^{f|g}=[S^{f|g}~~O_f]\in \mathbb R^{(d_f^2-1)\times (d_g^2-1)(d_h^2-1)}$ is the block matrix with submatrix $S^{f|g}=T^{(f)}(T^{(g)})^{\dagger}=[t_{u_f,u_g,0}]$ of size $(d_f^2-1)\times(d_g^2-1)$ appended with the zero matrix of size $(d_f^2-1)\times(d_g^2-1)(d_h^2-2)$ and $S^{f|gh}=T^{(f)}(T^{(gh)})^{\dagger}=[t_{u_f,u_g,u_h}]$ is a $(d_f^2-1)\times(d_g^2-1)(d_h^2-1)$ matrix.
For example, when $\rho\in H_1^2\otimes H_2^2\otimes H_3^3$, $N^{2|13}=\alpha S_0^{2|1}+\beta S^{2|13}$, where
$$S^{2|1}=\left[
\begin{array}{ccc}
t_{1,1,0} & t_{2,1,0} & t_{3,1,0} \\
t_{1,2,0} & t_{2,2,0} & t_{3,2,0} \\
t_{1,3,0} & t_{2,3,0} & t_{3,3,0} \\
\end{array}
\right],~~~
S^{2|13}=\left[
\begin{array}{cccccc}
t_{1,1,1} & t_{1,1,2} & \cdots & t_{1,1,8} & \cdots & t_{3,1,8} \\
t_{1,2,1} & t_{1,2,2} & \cdots & t_{1,2,8} & \cdots & t_{3,2,8} \\
t_{1,3,1} & t_{1,3,2} & \cdots & t_{1,3,8} & \cdots & t_{3,3,8} \\
\end{array}
\right].
$$
\begin{theorem}\label{1}
If the tripartite state $\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes H_3^{d_3}$ is separable under the bipartition $f|gh$, we have
\begin{equation}\label{9}
\|N^{f|gh}\|_{tr}\leq\sqrt{d_f-1}\left(|\alpha|\sqrt{d_g-1}+|\beta|\sqrt{d_gd_h(1-\frac{1}{d_g^2}-\frac{1}{d_h^2})+1}\right).
\end{equation}
\end{theorem}
{\it Proof}~~If the tripartite mixed state $\rho$ is separable under the bipartition $f|gh$, then it can be expressed as
\begin{equation}\label{10}
\rho_{f|gh}=\sum_sp_s\rho_f^s\otimes \rho_{gh}^s, \quad 0<p_s\leq1, \sum_sp_s=1,
\end{equation}
where
\begin{equation}\label{11}
\rho_f^s=\frac{1}{d_f}\sum\limits_{u_f=0}^{d_f^2-1}t_{u_f}^sA_{u_f}^{(f)},
\end{equation}
\begin{equation}\label{12}
\begin{split}
\rho_{gh}^s=&\frac{1}{d_gd_h}\sum\limits_{u_g=0}^{d_g^2-1}\sum\limits_{u_h=0}^{d_h^2-1}t_{u_g,u_h}^sA_{u_g}^{(g)}\otimes A_{u_h}^{(h)}.
\end{split}
\end{equation}
Let $T_s^{(f)}$, $T_s^{(g)}$, $T_s^{(h)}$ and $T_s^{(gh)}$ be the vectors with entries $t_{u_f}^s$, $t_{u_g,0}^s$, $t_{0,u_h}^s$ and $t_{u_g,u_h}^s$ for $u_f,u_g,u_h\neq0$. Then
\begin{equation}\label{13}
S^{f|g}=\sum_sp_sT_s^{(f)}(T_s^{(g)})^{\dagger},~~ S^{f|gh}=\sum_sp_sT_s^{(f)}(T_s^{(gh)})^{\dagger},
\end{equation}
where $\dagger$ stands for conjugate transpose. It follows from Lemma 1 and Lemma 2 that
\begin{equation*}
\begin{split}
\|N^{f|gh}\|_{tr}&\leq\sum_sp_s(|\alpha|\|T_s^{(f)}\cdot[(T_s^{(g)})^{\dagger}~0~\cdots~0]\|_{tr}+
|\beta|\|T_s^{(f)}(T_s^{(gh)})^{\dagger}\|_{tr})\\
&=\sum_sp_s(|\alpha|\|T_s^{(f)}\|\|T_s^{(g)}\|+
|\beta|\|T_s^{(f)}\|\|T_s^{(gh)}\|)\\
&\leq\sqrt{d_f-1}\left(|\alpha|\sqrt{d_g-1}+|\beta|\sqrt{d_gd_h(1-\frac{1}{d_g^2}-\frac{1}{d_h^2})+1}\right),
\end{split}
\end{equation*}
where we have used the triangular inequality of the norm $\| \cdot \|_{tr}$
and $\||a\rangle\langle b|\|_{tr}=\||a\rangle\|\||b\rangle\|$
for real vectors $|a\rangle$ and $|b\rangle$.
\qed
Now we consider genuine tripartite entanglement. A mixed state is said to be genuine multipartite entangled if it cannot be written as a convex combination of biseparable states. Let $T(\rho)=\frac{1}{3}(\|N^{1|23}\|_{tr}+\|N^{2|13}\|_{tr}+\|N^{3|12}\|_{tr})$, after choosing certain (fixed) values of $\{\alpha, \beta\}$, we define
\begin{equation*}
\begin{split}
K_1=\textrm{Max}\{&\sqrt{d_1-1}\left(|\alpha|\sqrt{d_2-1}+|\beta|\sqrt{d_2d_3(1-\frac{1}{d_2^2}-\frac{1}{d_3^2})+1}\right),\\ &\sqrt{d_2-1}\left(|\alpha|\sqrt{d_1-1}+|\beta|\sqrt{d_1d_3(1-\frac{1}{d_1^2}-\frac{1}{d_3^2})+1}\right),\\
&\sqrt{d_3-1}\left(|\alpha|\sqrt{d_1-1}+|\beta|\sqrt{d_1d_2(1-\frac{1}{d_1^2}-\frac{1}{d_2^2})+1}\right)\}.
\end{split}
\end{equation*}
We have the following theorem.
\begin{theorem}\label{2}
A mixed state $\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes H_3^{d_3}$ is genuine tripartite entangled if $T(\rho)>K_1$.
\end{theorem}
{\it Proof}~ Suppose $\rho$ is a biseparable mixed state, then
$\rho=\sum_i p_i\rho_i^1\otimes\rho_i^{23}+\sum_j r_j\rho_j^2\otimes\rho_j^{13}+\sum_k s_k\rho_k^3\otimes\rho_k^{12}$ with $0\leq p_i,r_j,s_k\leq1$ and $\sum_i p_i+\sum_j r_j+\sum_k s_k=1$. By Theorem 1, we have that
\begin{equation*}
\begin{split}
T(\rho)&=\frac{1}{3}(\sum_i p_i\|N^{1|23}(\rho_i)\|_{tr}+\sum_jr_j\|N^{2|13}(\rho_j)\|_{tr}+\sum_ks_k\|N^{3|12}(\rho_k)\|_{tr})\\
&\leq\frac{1}{3}(K_1+K_1+K_1)=K_1.
\end{split}
\end{equation*}
Consequently, if $T(\rho)>K_1$, $\rho$ is genuine tripartite entangled.
\qed
Next we consider the permutational invariant state $\rho$, i.e. $\rho=\rho^p=p\rho p^{\dagger}$ for any permutation $p$ of the qudits. A biseparable permutational invariant state can be written as $\rho=\sum_i p_i\rho_i^1\otimes\rho_i^{23}+\sum_j r_j\rho_j^2\otimes\rho_j^{13}+\sum_k s_k\rho_k^3\otimes\rho_k^{12}$, where $0<p_i,r_j,s_k\leq1$. We have the following corollary.
\begin{corollary}
If a permutational invariant mixed state is biseparable, then we have
$$T(\rho)=\frac{1}{3}(\|N^{1|23}\|_{tr}+\|N^{2|13}\|_{tr}+\|N^{3|12}\|_{tr})\leq J_1.$$
Therefore if $T(\rho)>J_1$, $\rho$ is genuine tripartite entangled. Here
\begin{equation*}
\begin{split}
J_1=&\frac{1}{3}\left(\sqrt{d_1-1}(|\alpha|\sqrt{d_2-1}+|\beta|\sqrt{d_2d_3(1-\frac{1}{d_2^2}-\frac{1}{d_3^2})+1}) \right.\\ &+\sqrt{d_2-1}(|\alpha|\sqrt{d_1-1}+|\beta|\sqrt{d_1d_3(1-\frac{1}{d_1^2}-\frac{1}{d_3^2})+1})\\ &+\left. \sqrt{d_3-1}(|\alpha|\sqrt{d_1-1}+|\beta|\sqrt{d_1d_2(1-\frac{1}{d_1^2}-\frac{1}{d_2^2})+1})\right).
\end{split}
\end{equation*}
\end{corollary}
\textit{\textbf{Example 1}} Consider the mixed three-qubit $W$ state,
\begin{equation}\label{14}
\rho=\frac{1-x}{8}I_8+x|W\rangle\langle W|, \quad 0\leq x\leq1,
\end{equation}
where $|W\rangle=\frac{1}{\sqrt{3}}(|001\rangle+|010\rangle+|100\rangle)$ and $I_8$ is the $8\times8$ identity matrix. With different choices of $\alpha$ and $\beta$, using Theorem 2 we obtain
the corresponding intervals of $x$ for which $\rho$ is genuine tripartite entangled. The results
are shown in Table \ref{tab:1}. Explicitly, let
$f_1(x)=T(\rho)-K_1=6.5225x-(\frac{1}{10}+2\sqrt{3})>0$, i.e. $0.5464<x\leq1$. Set $g_1(x)=\frac{1}{12}(\sqrt{66}x-6)$, Theorem 2 in \cite{lmj} says that when $g_1(x)>0$, $\rho$ is genuine entangled, i.e. for $0.7385<x\leq1$. Theorem 2 in \cite{dgh} implies that $\rho$ is genuine entangled if $g_2(x)=3.26x-\frac{6+\sqrt{3}}{3}>0$, i.e. $0.791<x\leq1$. Our result clearly outperforms these two results in some intervals, see the comparison in Fig. \ref{fig:1}.
\begin{table}[!htb]
\caption{$T(\rho_W)$, $K_1$ and the range of GME of the state (\ref{14}) for different $\alpha$ and $\beta$.}
\label{tab:1}
\centering
\begin{tabular}{cccc}
\hline\noalign{\smallskip}
\ & $T(\rho_W)$ & $K_1$ & the range of GME \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\alpha=1,\beta=1$ & $3.7177x$ & $1+\sqrt{3}$ & $0.7349<x\leq1$ \\
$\alpha=\frac{1}{2},\beta=2$ & $6.5825x$ & $\frac{1}{2}+2\sqrt{3}$ & $0.6022<x\leq1$\\
$\alpha=\frac{1}{10},\beta=2$ & $6.5225x$ & $\frac{1}{10}+2\sqrt{3}$ & $0.5464<x\leq1$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.75\textwidth]{figure1.eps}\\
\caption{$f_1(x)$ from our result (solid straight line), $g_1(x)$ from Theorem 2 in \cite{lmj}(dashed straight line), $g_2(x)$ from Theorem 2 in \cite{dgh}(dash-dot straight line). }
\label{fig:1}
\end{figure}
\textit{\textbf{Example 2}} Consider the quantum state $\rho\in H_1^3\otimes H_2^3\otimes H_3^2$,
\begin{equation}\label{15}
\rho=\frac{1-x}{18}I_{18}+x|\varphi\rangle\langle\varphi|,
\end{equation}
where $|\varphi\rangle=\frac{1}{\sqrt{5}}[(|10\rangle+|21\rangle)|0\rangle
+(|00\rangle+|11\rangle+|22\rangle)|1\rangle]$, $0\leq x\leq1$, $I_{18}$ is the $18\times18$ identity matrix. By Theorem 1, we can determine the range of $x$ where $\rho$ is surely entangled.
Table \ref{tab:2} shows that when $\alpha=0$, $\beta=1$, our criterion detects the entanglement for $0.3405<x\leq1$, which is better than the result $0.35\leq x\leq1$ given in \cite{lc}.
Similarly when $\alpha=1$, $\beta=1$, we get $f_2(x)=T(\rho)-(2+\sqrt{\frac{29}{3}})=9.0668x-(2+\sqrt{\frac{29}{3}})$, therefore $\rho$ is genuine tripartite entangled when $f_2(x)>0$, i.e. $0.5635<x\leq1$. Note that the criterion given in \cite{lc} cannot detect genuine entanglement.
\begin{table}[!htb]
\caption{The entanglement regions of $\rho$ as given by Theorem 1.}
\label{tab:2}
\centering
\begin{tabular}{ccc}
\hline\noalign{\smallskip}
\ & $\|N^{2|13}\|_{tr}$ & The range of entanglement \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\alpha=1, \beta=1$ & $10.5292x$ & $0.4852<x\leq1$ \\
$\alpha=\frac{1}{2}, \beta=2$ & $18.4650x$ & $0.3909<x\leq1$\\
$\alpha=0, \beta=1$ & $9.1321x$ & $0.3405<x\leq1$\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Genuine entanglement for multipartite quantum state}
Now we consider genuine entanglement of $n$-partite quantum systems. Let $\{A_{u_s}^{(s)}\}$ $(u_s=0,\cdots,d_s^2-1)$ be the generalized Pauli operators of the $s$th $d_s$-dimensional Hilbert space $H_s^{d_s}$. Any quantum state
$\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes\cdots\otimes H_n^{d_n}$ has the generalized Pauli operators representation:
\begin{equation}\label{16}
\rho=\frac{1}{d_1d_2\cdots d_n}\sum\limits_{s=1}^n\sum_{u_s=0}^{d_s^2-1}t_{u_1,u_2,\cdots,u_n}A_{u_1}^{(1)}\otimes A_{u_2}^{(2)}\otimes \cdots\otimes A_{u_n}^{(n)}
\end{equation}
where $A_0^{(s)}=I_{d_s}$$(s=1,\cdots,n)$, $t_{u_1,u_2,\cdots,u_n}=tr(\rho (A_{u_1}^{(1)})^{\dagger}\otimes(A_{u_2}^{(2)})^{\dagger}\otimes \cdots\otimes (A_{u_n}^{(n)})^{\dagger})$ are complex coefficients.
Let $T^{(l_1\cdots l_k)}$ be the vectors with entries $t_{u_{l_1},\cdots,u_{l_k},\cdots,0}$, $u_{l_1},\cdots,u_{l_k}\neq0$ and $1\leq l_1<\cdots< l_k\leq n$.
We have
$$\|T^{(1)}\|^2=\sum_{u_1=1}^{d_1^2-1}t_{u_1,\cdots,0}t_{u_1,\cdots,0}^{*},$$
$$\cdots,$$
$$\|T^{(l_1\cdots l_k)}\|^2=\sum\limits_{s=1}^{k}\sum_{u_{l_s}=1}^{d_{l_s}^2-1}t_{u_{l_1}\cdots u_{l_k}\cdots0}t_{u_{l_1}\cdots u_{l_k}\cdots0}^{*},$$
$$\cdots,$$
$$\|T^{(12\cdots n)}\|^2=\sum\limits_{s=1}^n\sum_{u_s=1}^{d_s^2-1}t_{u_1\cdots u_n}t_{u_1\cdots u_n}^{*},$$
where $*$ represents the conjugate.
Set
$$A_1=\|T^{(1)}\|^2+\cdots+\|T^{(n)}\|^2,$$
$$A_2=\|T^{(12)}\|^2+\cdots+\|T^{((n-1),n)}\|^2,$$ $$\cdots,$$ $$A_n=\|T^{(1\cdots n)}\|^2.$$
\begin{lemma}\label{3}
Let $\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes\cdots\otimes H_n^{d_n}$ $(n\geq2)$ be an $n$-partite pure quantum state. Then
\begin{equation}\label{17}
\|T^{(12\cdots n)}\|^2\leq
\frac{d_1\cdots d_n(n-1-\sum\limits_{s=1}^n\frac{1}{d_s^2})+1}{n-1}.
\end{equation}
\end{lemma}
{\it Proof}~~Since $\rho$ is pure, we have $tr(\rho^2)=1$ and $tr(\rho_{l_1}^2)=tr(\rho_{l_2\cdots l_n}^2)$ for any distinct indices $l_1, \ldots, l_n\in\{1,2,\cdots,n\}$, where
$\rho_{l_1}$ and $\rho_{l_2\cdots l_n}$ are the reduced states of the subsystems $H_{l_1}^{d_{l_1}}$ and $H_{l_2}^{d_{l_2}}\otimes\cdots\otimes H_{l_n}^{d_{l_n}}$, respectively. Therefore, we have
\begin{equation}\label{18}
tr(\rho^2)=\frac{1}{d_1d_2\cdots d_n}(1+A_1+\cdots+A_n)=1,
\end{equation}
and
\begin{equation}\label{19}
\frac{1}{d_{l_1}}(1+\|T^{(l_1)}\|^2)=\frac{1}{d_{l_2}\cdots d_{l_n}}(1+\|T^{(l_2)}\|^2+\cdots+\|T^{(l_n)}\|^2+\cdots+\|T^{(l_2\cdots l_n)}\|^2).
\end{equation}
Since $\sum\limits_{l_1=1}^n\frac{1}{d_{l_1}}tr(\rho_{l_1}^2)=\sum\limits_{l_1=1}^n\frac{1}{d_{l_1}}tr(\rho_{l_2\cdots l_n}^2)$,
we get that
\begin{equation*}
\sum\limits_{l_1=1}^n\frac{1}{d_{l_1}^2}(1+\|T^{(l_1)}\|^2)=\frac{1}{d_{1}\cdots d_{n}}[n+(n-1)A_1+(n-2)A_2+\cdots+A_{n-1}].
\end{equation*}
Therefore,
\begin{equation}\label{20}
A_1=\frac{d_{1}\cdots d_{n}}{n-1}\sum\limits_{s=1}^n\frac{1}{d_{s}^2}(1+\|T^{(s)}\|^2)
-\frac{n}{n-1}-\frac{n-2}{n-1}A_2-\frac{n-3}{n-1}A_3-\cdots-\frac{1}{n-1}A_{n-1}.
\end{equation}
Substituting (\ref{20}) into (\ref{18}), we get
\begin{equation}\label{21}
\begin{split}
A_n=&d_1\cdots d_n-1-\frac{1}{n-1}\left(d_{1}\cdots d_{n}\sum\limits_{s=1}^n\frac{1}{d_{s}^2}(1+\|T^{(s)}\|^2)
-n\right)-\frac{1}{n-1}A_2\\
&-\frac{2}{n-1}A_3-\cdots-\frac{n-2}{n-1}A_{n-1}\\
\leq&\frac{d_1\cdots d_n(n-1-\sum\limits_{s=1}^n\frac{1}{d_s^2})+1}{n-1}
\end{split}
\end{equation}
where the inequality follows by dropping the nonnegative terms $\|T^{(s)}\|^2$ and $A_2,\cdots,A_{n-1}$.
\qed
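For orientation, consider the simplest instance of this bound (a worked special case added for illustration): for an $n=2$ pure state of two qubits ($d_1=d_2=2$), inequality (\ref{17}) reads
\begin{equation*}
\|T^{(12)}\|^2\leq\frac{d_1d_2\Big(1-\frac{1}{d_1^2}-\frac{1}{d_2^2}\Big)+1}{1}=4\cdot\frac{1}{2}+1=3,
\end{equation*}
which is the familiar bound on the norm of the two-qubit correlation tensor, saturated by maximally entangled states.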
Let $\rho$ be a $n$-partite state $\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes\cdots\otimes H_n^{d_n}$ represented as \eqref{16}, suppose $\rho$ is separable under the bipartition $l_1\cdots l_{k-1}|l_k\cdots l_{n}$. For real number $\alpha$, $\beta$ and distinct
indices $l_1, \ldots, l_n\in\{1, 2, \cdots, n\}$,
set
\begin{equation}\label{22}
N^{l_1\cdots l_{k-1}|l_k\cdots l_{n}}=\alpha S_0^{l_1\cdots l_{k-1}|l_k}+\beta S^{l_1\cdots l_{k-1}|l_k\cdots l_n},
\end{equation}
for $k-1=1, 2,\cdots, [n/2]$, where $[n/2]$ denotes the largest integer less than or equal to $n/2$. Let $T^{(l_1\cdots l_k)}$ be the $(d^2_{l_1}-1)\cdots (d^2_{l_k}-1)$-dimensional column vector with entries $t_{u_{l_1}\cdots u_{l_k}\cdots0}$
associated with the generalized Pauli operators representation of $\rho$, and define
$S_0^{l_1\cdots l_{k-1}|l_k}$ to be the block matrix
$S_0^{l_1\cdots l_{k-1}|l_k}=[S^{l_1\cdots l_{k-1}|l_k}~~O_{l_1\cdots l_{k-1}}]$, where
$S^{l_1\cdots l_{k-1}|l_k}=T^{(l_1\cdots l_{k-1})}(T^{(l_k)})^{\dagger}=([t_{u_{l_1}\cdots u_{l_k}\cdots0}])$ is the
$\prod\limits_{s=1}^{k-1}(d_{l_s}^2-1)\times (d_{l_k}^2-1)$ matrix and
$O_{l_1\cdots l_{k-1}}$ is the $\prod\limits_{s=1}^{k-1}(d_{l_s}^2-1)\times
[\prod\limits_{s=k}^{n}(d_{l_s}^2-1)-(d_{l_k}^2-1)]$
zero matrix, and $S^{l_1\cdots l_{k-1}|l_k\cdots l_n}=T^{(l_1\cdots l_{k-1})}(T^{(l_k\cdots l_n)})^{\dagger}=[t_{u_1,\cdots, u_n}]$ to be a $\prod\limits_{s=1}^{k-1}(d_{l_s}^2-1)\times
\prod\limits_{s=k}^{n}(d_{l_s}^2-1)$ matrix. For example, when $\rho\in H_1^{2}\otimes H_2^{2}\otimes H_3^{2}\otimes H_4^{3}$, $N^{13|24}=\alpha S_0^{13|2}+\beta S^{13|24}$, where
$$S^{13|2}=\left[
\begin{array}{ccc}
t_{1,1,1,0}~~ & t_{1,2,1,0}~~ & t_{1,3,1,0} \\
t_{1,1,2,0}~~ & t_{1,2,2,0}~~ & t_{1,3,2,0} \\
t_{1,1,3,0}~~ & t_{1,2,3,0}~~ & t_{1,3,3,0} \\
\vdots & \vdots & \vdots \\
t_{3,1,3,0}~~ & t_{3,2,3,0}~~ & t_{3,3,3,0} \\
\end{array}
\right],~~
S^{13|24}=\left[
\begin{array}{cccccc}
t_{1,1,1,1}~ & t_{1,1,1,2} & \cdots & t_{1,1,1,8} & \cdots & t_{1,3,1,8}\\
t_{1,1,2,1}~ & t_{1,1,2,2} & \cdots & t_{1,1,2,8} & \cdots & t_{1,3,2,8}\\
t_{1,1,3,1}~ & t_{1,1,3,2} & \cdots & \cdot & \cdots & \cdot \\
\vdots~ & \vdots & \vdots & \vdots & \vdots & \vdots \\
t_{3,1,3,1}~ & t_{3,1,3,2} & \cdots & \cdot & \cdots & \cdot \\
\end{array}
\right].
$$
\begin{theorem}\label{3} Fix $\alpha, \beta$ as above.
If the $n$-partite state $\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes\cdots\otimes H_n^{d_n}$ is separable under the bipartition $l_1\cdots l_{k-1}|l_k\cdots l_{n}$, then we have that\\
(i) $\|N^{l_1|l_{2}\cdots l_{n}}\|_{tr}\leq M_{l_1}$;\\
(ii) $\|N^{l_1\cdots l_{k-1}|l_k\cdots l_{n}}\|_{tr}\leq M_{l_1\cdots l_{k-1}}$ $(k\geq3)$;\\
where
\begin{footnotesize}
\begin{equation*}
M_{l_1}=\sqrt{d_{l_1}-1}\left(|\alpha|\sqrt{d_{l_2}-1}+|\beta|
\sqrt{\frac{d_{l_2}\cdots d_{l_n}(n-2-\sum\limits_{s=2}^nd_{l_s}^{-2})+1}{n-2}}\right),
\end{equation*}
\begin{equation*}
M_{l_1\cdots l_{k-1}}=\sqrt{\frac{d_{l_1}\cdots d_{l_{k-1}}(k-2-\sum\limits_{s=1}^{k-1}d_{l_s}^{-2})+1}{k-2}}
\left(|\alpha|\sqrt{d_{l_k}-1}+|\beta|\sqrt{\frac{d_{l_k}\cdots d_{l_n}(n-k-\sum\limits_{s=k}^nd_{l_s}^{-2})+1}{n-k}}\right).
\end{equation*}
\end{footnotesize}
\end{theorem}
{\it Proof}~~$(i)$ If the $n$-partite mixed state is separable under the bipartition $l_1|l_2\cdots l_{n}$, it can be expressed as
\begin{equation}\label{23}
\rho_{l_1|l_2\cdots l_{n}}=\sum\limits_s p_s\rho_{l_1}^s\otimes\rho_{l_2\cdots l_{n}}^s, \ 0<p_s\leq1, \sum\limits_s p_s=1,
\end{equation}
where
\begin{equation}\label{24}
\rho_{l_1}^s=\frac{1}{d_{l_1}}\sum_{u_{l_1}=0}^{d_{l_1}^2-1} t_{u_{l_1}}^sA_{u_{l_1}}^{(l_1)},
\end{equation}
\begin{equation}\label{25}
\rho_{l_2\cdots l_{n}}^s=\frac{1}{d_{l_2}\cdots d_{l_{n}}}\sum\limits_{q=2}^{n}\sum_{u_{l_q}=0}^{d_{l_q}^2-1}t_{u_{l_2},\cdots,u_{l_n}}^s
A_{u_{l_2}}^{(l_2)}\otimes \cdots\otimes A_{u_{l_{n}}}^{(l_{n})}
\end{equation}
Then,
\begin{equation}\label{26}
S^{l_1|l_2}=\sum\limits_sp_sT_s^{(l_1)}(T_s^{(l_2)})^{\dagger},~~
S^{l_1|l_2\cdots l_n}=\sum\limits_sp_sT_s^{(l_1)}(T_s^{(l_2\cdots l_n)})^{\dagger}.
\end{equation}
By Lemma 1 and Lemma 3, we have
\begin{equation*}
\begin{split}
\|N^{l_1|l_2\cdots l_{n}}\|_{tr}&\leq\sum_sp_s(|\alpha|\|T_s^{(l_1)}\|\|T_s^{(l_2)}\|+|\beta|\|T_s^{(l_1)}\|\|T_s^{(l_2\cdots l_n)}\|)\\
&\leq\sqrt{d_{l_1}-1}\left(|\alpha|\sqrt{d_{l_2}-1}+|\beta|
\sqrt{\frac{d_{l_2}\cdots d_{l_n}(n-2-\sum\limits_{s=2}^n\frac{1}{d_{l_s}^2})+1}{n-2}}\right)\\
&=M_{l_1},
\end{split}
\end{equation*}
where we have used $\|A+B\|_{tr}\leq\|A\|_{tr}+\|B\|_{tr}$ for matrices $A$ and $B$ and $\||a\rangle\langle b|\|_{tr}=\||a\rangle\|\||b\rangle\|$
for vectors $|a\rangle$ and $|b\rangle$.\\
$(ii)$ If $\rho$ is separable under the bipartition $l_1\cdots l_{k-1}|l_k\cdots l_{n}$, it can be expressed as
\begin{equation}\label{27}
\rho_{l_1\cdots l_{k-1}|l_k\cdots l_{n}}=\sum\limits_s p_s\rho_{l_1\cdots l_{k-1}}^s\otimes\rho_{l_k\cdots l_{n}}^s, 0<p_s\leq1, \sum\limits_s p_s=1,
\end{equation}
where
\begin{equation}\label{28}
\rho_{l_1\cdots l_{k-1}}^s=\frac{1}{d_{l_1}\cdots d_{l_{k-1}}}\sum\limits_{p=1}^{k-1}\sum_{u_{l_p}=0}^{d_{l_p}^2-1}
t_{u_{l_1},\cdots,u_{l_{k-1}}}^s
A_{u_{l_1}}^{(l_1)}\otimes \cdots\otimes A_{u_{l_{k-1}}}^{(l_{k-1})},
\end{equation}
\begin{equation}\label{29}
\rho_{l_k\cdots l_{n}}^s=\frac{1}{d_{l_k}\cdots d_{l_{n}}}\sum\limits_{q=k}^{n}\sum_{u_{l_q}=0}^{d_{l_q}^2-1}t_{u_{l_k},\cdots,u_{l_{n}}}^s
A_{u_{l_k}}^{(l_k)}\otimes \cdots\otimes A_{u_{l_{n}}}^{(l_{n})}.
\end{equation}
Then,
\begin{equation}\label{30}
S^{l_1\cdots l_{k-1}|l_k}=\sum\limits_sp_sT_s^{(l_1\cdots l_{k-1})}(T_s^{(l_k)})^{\dagger},~~
S^{l_1\cdots l_{k-1}|l_k\cdots l_n}=\sum\limits_sp_sT_s^{(l_1\cdots l_{k-1})}(T_s^{(l_k\cdots l_n)})^{\dagger}.
\end{equation}
Similarly, we get
\begin{equation*}
\begin{split}
&\|N^{l_1\cdots l_{k-1}|l_k\cdots l_{n}}\|_{tr}\\
\leq&\sum_sp_s(|\alpha|\|T_s^{(l_1\cdots l_{k-1})}\|\|T_s^{(l_k)}\|+|\beta|\|T_s^{(l_1\cdots l_{k-1})}\|\|T_s^{(l_k\cdots l_n)}\|)\\
\leq&\sqrt{\frac{d_{l_1}\cdots d_{l_{k-1}}(k-2-\sum\limits_{s=1}^{k-1}\frac{1}{d_{l_s}^2})+1}{k-2}}
[|\alpha|\sqrt{d_{l_k}-1}+|\beta|\sqrt{\frac{d_{l_k}\cdots d_{l_{n}}(n-k-\sum\limits_{s=k}^n\frac{1}{d_{l_s}^2})+1}{n-k}}]\\
=&M_{l_1\cdots l_{k-1}}.
\end{split}
\end{equation*}
\qed
Now we consider the question of genuine multipartite entanglement. An $n$-partite mixed state $\rho=\sum_i p_i|\varphi_i\rangle\langle\varphi_i|$ is said to be biseparable if each $|\varphi_i\rangle$ can be expressed in one of the forms $|\varphi_i\rangle=|\varphi_i^{l_1\cdots l_{k-1}}\rangle\otimes|\varphi_i^{l_k\cdots l_n}\rangle$, where $|\varphi_i^{l_1\cdots l_{k-1}}\rangle$ and $|\varphi_i^{l_k\cdots l_n}\rangle$ are pure states in $H_{l_1}^{d_{l_1}}\otimes\cdots \otimes H_{l_{k-1}}^{d_{l_{k-1}}}$ and $H_{l_k}^{d_{l_k}}\otimes\cdots \otimes H_{l_{n}}^{d_{l_{n}}}$, respectively, with $l_1,\cdots,l_n$ being distinct indices in $\{1,2,\cdots,n\}$. Otherwise, $\rho$ is said to be genuinely multipartite entangled. Let
\begin{equation}\label{31}
T(\rho)=\frac{1}{\sum\limits_{s=1}^mC_n^s}(\sum\limits _{l_1=1}^n\|N^{l_1|l_2\cdots l_n}\|_{tr}+\sum\limits _{1\leq l_1<\atop l_2\leq n}\|N^{l_1l_2|l_3\cdots l_n}\|_{tr}+\cdots+\sum\limits _{1\leq l_1<\cdots\atop<l_m\leq n}\|N^{l_1\cdots l_m|l_{m+1}\cdots l_n}\|_{tr}),
\end{equation}
where $C_n^s=\frac{n!}{s!(n-s)!}$, $m=[n/2]$. Set $K_2=\textrm{Max}\{M_{l}, M_{l_1\cdots l_{k-1}}\}$, where $l=1,2,\cdots,n$, $l_1<\cdots<l_{k-1}\in\{1,2,\cdots,n\}$.
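For orientation (an illustrative count, not part of the original text), the normalization factor in (\ref{31}) is simply the number of trace norms being averaged: for $n=3$ one has $m=1$ and $\sum_{s=1}^{m}C_n^s=C_3^1=3$, while for $n=4$ one has $m=2$ and
\begin{equation*}
\sum_{s=1}^{2}C_4^s=C_4^1+C_4^2=4+6=10,
\end{equation*}
which is the denominator entering $T(\rho)$ and $J_2$ for the four-qubit state of Example 3 below.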
We have the following criterion.
\begin{theorem}\label{4}
A mixed state $\rho\in H_1^{d_1}\otimes H_2^{d_2}\otimes\cdots\otimes H_n^{d_n}$ is genuinely multipartite entangled if $T(\rho)>K_2$.
\end{theorem}
{\it Proof}~~If $\rho$ is biseparable, it follows from Theorem 3 that
\begin{equation*}
\begin{split}
T(\rho)&=\frac{1}{\sum\limits_{s=1}^mC_n^s}(\sum\limits _{l_1=1}^n\|N^{l_1|l_2\cdots l_n}\|_{tr}+\sum\limits _{1\leq l_1<\atop l_2\leq n}\|N^{l_1l_2|l_3\cdots l_n}\|_{tr}+\cdots+\sum\limits _{1\leq l_1<\cdots\atop<l_m\leq n}\|N^{l_1\cdots l_m|l_{m+1}\cdots l_n}\|_{tr})\\
&\leq\frac{1}{\sum\limits_{s=1}^mC_n^s}(K_2+\cdots+K_2)\\
&\leq K_2.
\end{split}
\end{equation*}
Consequently, if $T(\rho)>K_2$, then $\rho$ is genuinely multipartite entangled.
\qed
For a permutationally invariant density matrix, similarly to Corollary 1, we have the following result:
\begin{corollary} Let $\rho$ be a permutationally invariant density matrix.
If $T(\rho)>J_2$, then $\rho$ is genuinely multipartite entangled, where
$$J_2=\frac{1}{\sum\limits_{s=1}^mC_n^s}(\sum\limits _{l=1}^nM_{l}+\sum\limits _{1\leq l_1< l_2\leq n}M_{l_1l_2}+\cdots+\sum\limits _{1\leq l_1<\cdots<l_m\leq n}M_{l_1\cdots l_m}).$$
\end{corollary}
\textit{\textbf{Example 3}} Consider the four-qubit state $\rho\in H_1^2\otimes H_2^2\otimes H_3^2\otimes H_4^2$,
\begin{equation}\label{32}
\rho=x|\psi\rangle\langle\psi|+\frac{1-x}{16}I_{16},
\end{equation}
where $|\psi\rangle=\frac{1}{\sqrt{2}}(|0000\rangle+|1111\rangle)$, $0\leq x\leq1$,
$I_{16}$ is the $16\times16$ identity matrix.
Using Theorem 3 $(i)$ with $\alpha=1, \beta=1$, we set
$f_3(x)=\|N^{l_1|l_2l_3l_4}\|_{tr}-(1+\sqrt{\frac{11}{2}})=(4+\sqrt{2})x-(1+\sqrt{\frac{11}{2}})$; then $\rho$ is not separable under the bipartition $l_1|l_2l_3l_4$ when $f_3(x)>0$, i.e. for $0.6179<x\leq1$, while according to Theorem 3 in \cite{lww}, $\rho$ is not separable under this bipartition only when $g_3(x)=9x^2-4>0$, i.e. for $0.6667<x\leq1$. Fig. \ref{fig:2} shows that our method detects more entanglement.
Now using Corollary 2 with $\alpha=1, \beta=1$, we get $f_4(x)=T(\rho)-J_2=\frac{23+2\sqrt{2}}{5}x-\frac{11+\sqrt{22}+3\sqrt{3}}{5}$;
hence $\rho$ is genuinely entangled for $f_4(x)>0$, i.e. for $0.8087<x\leq1$, while the criterion given in \cite{lww} cannot detect the genuine four-qubit entanglement at all.
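The thresholds quoted in this example can again be checked numerically (an illustrative sketch only):
\begin{verbatim}
import math

# f_3(x) = (4 + sqrt(2)) x - (1 + sqrt(11/2))     (Theorem 3 (i), alpha = beta = 1)
f3_root = (1 + math.sqrt(11 / 2)) / (4 + math.sqrt(2))
# g_3(x) = 9 x^2 - 4                              (Theorem 3 of [lww])
g3_root = 2 / 3
# f_4(x) = ((23 + 2 sqrt(2)) x - (11 + sqrt(22) + 3 sqrt(3))) / 5   (Corollary 2)
f4_root = (11 + math.sqrt(22) + 3 * math.sqrt(3)) / (23 + 2 * math.sqrt(2))

print(round(f3_root, 4), round(g3_root, 4), round(f4_root, 4))
# expected output: 0.6179 0.6667 0.8087, matching the ranges quoted above
\end{verbatim}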
\begin{figure}[!htb]
\centering
\includegraphics[width=0.75\textwidth]{figure2.eps}
\caption{$f_3(x)$ from our result (solid straight line), $g_3(x)$ from Theorem 3 in \cite{lww} (dashed curve line). }
\label{fig:2}
\end{figure}
\section{Conclusions}
By adopting the Weyl representation of density matrices in terms of generalized Pauli operators, we have derived several general criteria for detecting genuine entanglement in multipartite quantum systems. Our approach starts from finer upper bounds on the norms of the correlation tensors obtained from the generalized Pauli operator representation; based on certain matrices constructed from these correlation tensors, we then obtained criteria for genuine tripartite entanglement. These results were further generalized to multipartite quantum systems of arbitrary dimensions. Compared with previously available criteria, ours detect entanglement in more cases, as illustrated in detail by several examples.
\textbf{Acknowledgments} This work is supported by the National Natural Science Foundation of China under grant nos. 12075159, 12126351 and 12171044, Simons Foundation under grant no. 523868, Beijing Natural Science Foundation (grant no. Z190005), Academy for Multidisciplinary Studies, Capital Normal University, and Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology (no. SIQSE202001), and the Academician Innovation Platform of Hainan Province.
\noindent\textbf {Data Availability Statements} All data generated or analysed during this study are available from the corresponding author on reasonable request.
|
{
"arxiv_id": "2302.08608",
"language": "en",
"timestamp": "2023-02-20T02:03:03",
"url": "https://arxiv.org/abs/2302.08608",
"yymm": "2302"
} | \section{Introduction}
In this paper, we build on an existing body of work that examines the extreme values of eigenfunctions of classically chaotic quantum systems. Specifically, we examine the quantum cat map, one of the best-studied models. First introduced by Berry and Hannay in \cite{HB1980}, cat maps are the quantizations of hyperbolic linear maps in $\text{SL}(2, \mathbb{Z})$ acting on the 2-dimensional torus.
The quantum cat map is a toy model in quantum chaos. Another
standard class of quantum chaos models is given by Laplacian eigenfunctions on negatively curved compact manifolds $(M,g)$, satisfying $-\Delta_g u=\lambda^2u$ and normalized so that $\|u\|_{L^2}=1$.
Estimating the suprema of these Laplacian eigenfunctions has been an area of great interest.
For example, Levitan, Avakumovi\'{c}, and H\"{o}rmander in \cite{levitan}, \cite{avakumovic}, \cite{hormander}, respectively, proved the well-known result that on a compact Riemannian manifold, $\|u\|_{L^\infty} \lesssim \lambda^{\frac{n-1}{2}}$ where $n=\dim M$. However, it is conjectured that much stronger results hold. Specifically, in \cite{iwaniecsarnak}, Iwaniec and Sarnak conjectured that for surfaces of constant negative curvature, $\|u\|_{L^\infty} \lesssim_\varepsilon \lambda^{\varepsilon}$ for all $\varepsilon>0$;
in the special case of Hecke eigenfunctions on arithmetic surfaces they obtained the bound $\|u\|_{L^\infty}\lesssim_\varepsilon \lambda^{\frac5{12}+\varepsilon}$. The best known bound outside of the arithmetic cases is the result of B\'erard~\cite{berard}: when $(M, g)$ has no conjugate points, $\|u\|_{L^\infty} = O(\lambda^{\frac{n-1}{2}}/\sqrt{\log \lambda})$.
In this paper, we are concerned with metaplectic transformations, the quantizations of hyperbolic maps
\begin{equation}
\label{e:A-intro}
A =\begin{bmatrix} a & b \\ c & d \end{bmatrix}\in \text{SL}(2, \mathbb{Z}),\quad
|a+d|>2,\quad
ab,cd\in 2\mathbb Z.
\end{equation}
We decompose $L^2(\mathbb{R})$ into a direct integral of finite-dimensional spaces $\mathcal{H}_N(\theta)$ where $\theta \in \mathbb{T}^{2}$ and $N \in \mathbb{N}$.
As shown in Section \ref{preliminaries}, the condition that $ab,cd$ be even is needed to make sure that the metaplectic map
associated to~$A$ descends to a map from $\mathcal{H}_N(0)$ to itself. We denote the resulting quantum cat map by $M_{N,0}$.
Then, we prove the following theorems bounding the supremum norm of eigenfunctions on~$\mathcal{H}_N(0)$. See Figure~\ref{fig:norms} for a numerical illustration.
Note that due to the normalization in the spaces $\ell^2$ and $\ell^\infty$, the standard bound on Laplacian eigenfunctions
$\|u\|_{L^2}\lesssim \|u\|_{L^\infty}\lesssim \lambda^{\frac{n-1}{2}}\|u\|_{L^2}$ becomes the bound $\frac{1}{\sqrt{N}}\|u\|_{\ell^2}\leq\|u\|_{\ell^\infty}\leq \|u\|_{\ell^2}$.
For a sequence of $N$, we have the following lower bound.
\begin{theorem}\label{thm:lowerbound}
Suppose $A$ is a matrix satisfying~\eqref{e:A-intro} with positive eigenvalues, even trace, and coprime off-diagonal terms. Then we can find a sequence of odd integers $N_k\to\infty$ such that for all $\varepsilon>0$, there exists $k_0$ such that for all $k\geq k_0$, there exists an eigenfunction $u$ of $M_{N,0}$ with $\|u\|_{\ell^2}=1$ and
\begin{align}
\|u\|_{\ell^\infty} \geq \frac{1-\varepsilon}{\sqrt{2\log_\lambda N_k}}.
\end{align}
\end{theorem}
Under more general assumptions, for all odd $N$, we have the following upper bound.
\begin{theorem}\label{thm:upperbound}
Suppose $A$ is a matrix satisfying~\eqref{e:A-intro}. Then for $\varepsilon>0$, there exists $N_0$ such that for all odd $N \geq N_0$, if $u$ is an eigenfunction of $M_{N, 0}$ with $\|u\|_{\ell^2}=1$ then
\begin{align}
\|u\|_{\ell^\infty}\leq \frac{1}{\sqrt{(1-\varepsilon)\log_\lambda N}}.
\end{align}
\end{theorem}
\begin{figure}
\centering
\includegraphics[scale=.39]{norms.eps}
\caption{The plot of the maximal $\ell^\infty$-norm of an $\ell^2$-normalized eigenfunction of $M_{N,0}$ where $M_{N,0}$ is associated with $A=\begin{bmatrix} 2 & 3 \\ 1 & 2\end{bmatrix}$. The lower and upper bounds from Theorems \ref{thm:lowerbound} and \ref{thm:upperbound}, $(2\log_\lambda N)^{-1/2}$ and $(\log_\lambda N)^{-1/2}$, respectively, give the solid red lines. The dotted blue line is the trivial lower bound $N^{-1/2}$. Finally, the large red dots correspond to the sequence $N_k$ from Theorem \ref{thm:lowerbound}.}
\label{fig:norms}
\end{figure}
Analogous statements can be proven for a sequence of even $N$ and any $\theta$, using similar proofs to those of Theorems \ref{thm:lowerbound} and \ref{thm:upperbound}. However, we exclude these arguments as they are overly technical and do not introduce any novel ideas.
The $\ell^\infty$ bounds on eigenfunctions of quantum cat maps have been
extensively studied in arithmetic quantum chaos, see~\cite{Kurlberg-Rudnick-1,Kurlberg-1,Olofsson-1,Olofsson-2,Olofsson-3}. These works have focused on \emph{Hecke eigenfunctions},
which are joint eigenfunctions of the quantum cat map $M_{N,0}$ and the Hecke operators,
constructed in this setting by Kurlberg--Rudnick~\cite{Kurlberg-Rudnick-0}.
There always exists an orthonormal basis of $\mathcal H_N(0)$ consisting of Hecke eigenfunctions;
however, due to the possibility of large multiplicities of the eigenspaces of $M_{N,0}$ (see~\cite[footnote~3]{Kurlberg-Rudnick-0}) an upper
bound on the $\ell^\infty$ norm of Hecke eigenfunctions does not imply the same bound for general eigenfunctions. We list
below the known bounds on $\ell^2$-normalized Hecke eigenfunctions $u$:
\begin{itemize}
\item Kurlberg--Rudnick~\cite{Kurlberg-Rudnick-1} proved the upper bound $\|u\|_{\ell^\infty}\lesssim_\varepsilon N^{-\frac18+\varepsilon}$.
\item Building on~\cite{Kurlberg-Rudnick-1} (which handled roughly half of the prime values of $N$), Kurlberg~\cite{Kurlberg-1} showed that for all but finitely many \emph{prime} values of $N$ one
has the upper bound $\|u\|_{\ell^\infty}\leq 2N^{-\frac 12}$, and deduced the upper bound $\|u\|_{\ell^\infty}\lesssim_\varepsilon N^{-\frac12+\varepsilon}$ for \emph{square-free}
values of~$N$.
\item Olofsson~\cite{Olofsson-1,Olofsson-2} showed an upper bound $\|u\|_{\ell^\infty}\lesssim N^{-\frac14}$
for \emph{most} values of~$N$ (in the sense of density as $N\to\infty$). On the other hand, these
papers also construct eigenfunctions satisfying a lower bound $\|u\|_{\ell^\infty}\gtrsim N^{-\frac14}$
for most values of $N$ which are \emph{not square-free}.
\end{itemize}
\section{Preliminaries} \label{preliminaries}
We begin with a review of the necessary definitions for this paper. First, recall the semiclassical Weyl quantization. For $a \in \mathscr{S}(\mathbb{R}^2)$ and a semiclassical parameter $h \in (0,1]$,
$$\text{Op}_h(a)f(x) := \frac{1}{2\pi h} \int_{\mathbb{R}^{2}} e^{\frac{i}{h} (x -x') \xi} a\left( \frac{x + x'}{2}, \xi \right) f(x') dx' d\xi, \quad f \in \mathscr{S}(\mathbb{R}).$$
Define the symbol class $$S(1) =\left\{a \in C^\infty\left(\mathbb{R}^{2}\right): \sup_{(x, \xi) \in \mathbb{R}^2} \left|\partial^\alpha_{(x, \xi)} a \right| < \infty \text{ for all } \alpha \in \mathbb{N}^{2}\right\},$$
which naturally induces the seminorms $\|a\|_{C^m} := \max_{|\alpha| \leq m } \sup_{\mathbb{R}^{2}} |\partial_{(x, \xi)}^\alpha a |$ for $m \in \mathbb{N}_0$. From \cite{z12semiclassical}*{Theorem 4.16}, we know for $a \in S(1)$, $\text{Op}_h(a)$ acts on both $\mathscr{S}(\mathbb{R})$ and $\mathscr{S}'(\mathbb{R})$.
Now, let $\omega=(y, \eta), z=(x, \xi) \in \mathbb{R}^{2}$. Define the \textit{standard symplectic form} $\sigma$ on $\mathbb{R}^{2}$ by $\sigma(z, \omega) :=\xi y - x \eta$ and define the \textit{quantum translation} by $U_\omega :=\text{Op}_h(a_\omega)$, where $a_\omega(z):= \exp(\frac{i}{h} \sigma(\omega, z))$. Noting that $a_\omega(z) \in S(1)$, we see $U_\omega$ is well-defined and acts on $\mathscr{S}(\mathbb{R})$. In \cite{z12semiclassical}*{Theorem 4.7}, it is shown that $$U_\omega f(x) = e^{\frac{i}{h} \eta x - \frac{i}{2h}y \eta } f(x-y).$$
Thus, $U_\omega$ is a unitary operator on $L^2(\mathbb{R})$ that satisfies the following exact Egorov's theorem,
\begin{equation}\label{eq:egorov}
U^{-1}_\omega \text{Op}_h(a) U_\omega =\text{Op}_h(\tilde{a}) \quad \text{for all } a \in S(1), \quad \tilde{a}(z):=a(z+\omega).
\end{equation}
From the fact that $U_\omega U_{\omega'} = e^{\frac{i}{2h} \sigma(\omega, \omega')} U_{\omega+\omega'}$, we deduce the following commutator formula,
\begin{equation}\label{eq:commutator}
U_\omega U_{\omega'} = e^{\frac{i}{h} \sigma(\omega, \omega')} U_{\omega'}U_\omega.
\end{equation}
Now let $\text{Sp}(2, \mathbb{R})$ be the group of real symplectic $2 \times 2$ matrices. In other words, $A \in \text{Sp}(2, \mathbb{R})$ if and only if $\sigma(Az, A\omega)=\sigma(z,\omega)$. Note that in this specific 2-dimensional case, $\text{Sp}(2, \mathbb{R})=\text{SL}(2, \mathbb{R})$.
For each $A \in \text{SL}(2, \mathbb{R})$, denote by $\mathcal{M}_A$ the set of all unitary transformations $M :L^2(\mathbb{R}) \rightarrow L^2(\mathbb{R})$ satisfying the following exact Egorov's theorem,
\begin{equation}\label{eq:MA}
M^{-1} \text{Op}_h(a) M= \text{Op}_h(a \circ A) \quad \text{for all } a \in S(1).
\end{equation}
From \cite{z12semiclassical}*{Theorem 11.9}, we have both existence of these transformations and uniqueness up to a unit factor.
Then, $\mathcal{M}:=\cup_{A \in \text{SL} (2, \mathbb{R})} \mathcal{M}_A$ is a subgroup of unitary transformations of $L^2(\mathbb{R})$ called the \textit{metaplectic group} and the map $M \mapsto A$ is a group homomorphism $\mathcal{M} \rightarrow \text{SL}(2, \mathbb{R})$. As a corollary of equation (\ref{eq:MA}), we obtain the following intertwining of the metaplectic and quantum transformations:
$M^{-1} U_\omega M=U_{A^{-1} \omega}$ for all $M \in \mathcal{M}_A$, $\omega \in \mathbb{R}^{2}.$
We turn our attention to quantizations of functions on the torus $\mathbb{T}^{2} := \mathbb{R}^{2}/ \mathbb{Z}^{2}$. Each $a \in C^\infty(\mathbb{T}^{2})$ can be identified with a $\mathbb{Z}^{2}$-periodic function on $\mathbb{R}^{2}$. Note that any $a \in C^\infty(\mathbb{T}^{2})$ is also an element of $S(1)$, therefore its Weyl quantization $\text{Op}_h(a)$ is an operator on $L^2(\mathbb{R})$.
By equation (\ref{eq:commutator}), we have the following commutation relations,
\begin{equation}\label{eq:commutation_relations}
\text{Op}_h(a) U_\omega = U_\omega \text{Op}_h(a) \quad \text{for all } a \in C^\infty(\mathbb{T}^{2}), \quad \omega \in \mathbb{Z}^{2}.
\end{equation}
These commutation relations motivate a decomposition of $L^2(\mathbb{R})$ into a direct integral of finite-dimensional spaces $\mathcal{H}_N(\theta)$, where $\theta \in \mathbb{T}^{2}$, such that $\text{Op}_h(a)$ descends onto these spaces. From \cite{Bouzouina-deBievre}*{Proposition 2.1}, to ensure that these spaces are nontrivial, for the rest of the paper we assume
$$
h= (2\pi N)^{-1}\quad\text{where }N\in\mathbb N.
$$
We call $\mathcal{H}_N(\theta)$ the space of \textit{quantum states}. Specifically, for each $\theta \in \mathbb{T}^{2}$, set $$\mathcal{H}_N(\theta):=\left\{f \in \mathscr{S}'(\mathbb{R}): U_\omega f=e^{2 \pi i \sigma(\theta, \omega) + N \pi i Q(\omega)} f \text{ for all } \omega \in \mathbb{Z}^{2}\right\},$$ where the quadratic form $Q$ on $\mathbb{R}^{2}$ is defined by $Q(\omega)=y\eta$ for $\omega=(y, \eta) \in \mathbb{R}^{2}$.
The following lemma gives an explicit basis for $\mathcal{H}_N(\theta)$.
\begin{lemma}[\cite{dyatlov2021semiclassical}, Lemma 2.5]
The space $\mathcal{H}_N(\theta)$ is $N$-dimensional with a basis $\{e_j^\theta\}$ defined for
$j \in \{0, \ldots, N-1\}$ and $\theta=(\theta_x, \theta_\xi) \in \mathbb{R}^{2}$. In particular,
$$e_j^\theta(x):=\frac{1}{\sqrt{N}} \sum_{k \in \mathbb{Z}} e^{-2 \pi i \theta_\xi k} \delta\left(x- \frac{Nk+j-\theta_x}{N} \right).$$
\end{lemma}
We fix an inner product $\lrang{\cdot, \cdot}_\mathcal{H}$ on each space of quantum states $\mathcal{H}_N(\theta)$ by requiring $\{e_j^\theta\}$ to be an orthonormal basis. It can be shown using translation identities for $e_j^\theta$ (see \cite{dyatlov2021semiclassical}*{(2.35)}) that although each $\{e_j^\theta\}$ depends on the choice of the representative $\theta_x \in \mathbb{R}$, the inner product depends only on $\theta \in \mathbb{T}^{2}$. We often denote the norm defined by this inner product by $\| \cdot\|_{\ell^2}$. Using the bases $\{e_j^\theta\}$, we can consider the spaces $\mathcal{H}_N(\theta)$ as fibers of a smooth $N$-dimensional vector bundle over $\mathbb{T}^{2}$, which we call $\mathcal{H}_N$.
For fixed $N \in \mathbb{N}$ and $a \in C^\infty(\mathbb{T}^{2})$, define the quantization
$$\text{Op}_{N, \theta}(a):= \text{Op}_h(a)|_{\mathcal{H}_N(\theta)} : \mathcal{H}_N(\theta) \rightarrow \mathcal{H}_N(\theta), \quad \theta \in \mathbb{T}^{2},$$ which depends smoothly on $\theta$.
This restriction holds by definition of $\mathcal{H}_N(\theta)$ and the commutation relations given in equation (\ref{eq:commutation_relations}).
We set
\begin{equation}
\label{eq:A-matrix}
A= \begin{bmatrix}
a & b \\ c & d \end{bmatrix} \in \text{SL}(2, \mathbb{Z})
\end{equation}
and choose a metaplectic operator $M \in \mathcal{M}_A$. Recall that for $\omega = (y, \eta), z=(x, \xi) \in \mathbb{Z}^2$, $\sigma(z, \omega) = \xi y - x \eta$ and $Q(\omega) = y \eta$. By \cite{dyatlov2021semiclassical}*{Lemma 2.9}, there exists a unique $\phi_A \in (\mathbb{Z}/2)^2$ such that for all $\omega\in\mathbb Z^2$, $Q(A^{-1} \omega) - Q(\omega) = \sigma(\phi_A, \omega) \mod 2\mathbb{Z}$.
Using the definition of $\mathcal{H}_N(\theta)$ and the fact that $M^{-1} U_\omega M=U_{A^{-1} \omega}$, we can verify that $M(\mathcal{H}_N(\theta)) \subset \mathcal{H}_N(A \theta +\frac{ N \phi_A}{2})$ for all $\theta \in \mathbb{T}^{2}$.
Denote $M_{N, \theta} := M|_{\mathcal{H}_N(\theta)} : \mathcal{H}_N(\theta) \rightarrow \mathcal{H}_N(A \theta +\frac{N \phi_A}{2})$, which depends smoothly on $\theta \in \mathbb{T}^{2}$. We require the domain and range of $M_{N, \theta}$ to be the same, in other words, we must have
\begin{equation}\label{eq:domainrange}
(I-A)\theta =\frac{N \phi_A}{2} \mod \mathbb{Z}^{2}.
\end{equation}
Thus, when $\theta =0$, condition (\ref{eq:domainrange}) is satisfied when $N$ is even or $\phi_A =0 $.
We henceforth assume that $\phi_A=0$, which gives $\sigma(\phi_A, \omega) =0 \mod 2\mathbb{Z}$. Noting that for all $\omega =(y, \eta)$,
$$Q(A^{-1} \omega)-Q(\omega) = (dy -b \eta)(-cy +a \eta) - y \eta=-dcy^2 +2 bc y \eta- ba \eta^2,$$
we must have $dcy^2 + ba \eta^2= 0 \mod 2\mathbb{Z}$. We conclude that $dc$ and $ab$ must be even.
Assuming condition (\ref{eq:domainrange}), we have the following exact Egorov's theorem for all $a \in C^\infty(\mathbb{T}^{2})$,
$$M_{N, \theta}^{-1} \text{Op}_{N, \theta} (a) M_{N, \theta} =\text{Op}_{N, \theta} (a \circ A).$$
Essential to our proof of Theorem \ref{thm:upperbound} is the following explicit formula for $M$.
\begin{lemma}\label{lem:explicitformula}
When $A$ is given by~\eqref{eq:A-matrix} and $b \neq 0$,
an element of $\mathcal M_A$ is given by
\begin{equation} \label{eq:explictformM}
M u(x) = \frac{\sqrt{N}}{\sqrt{|b|}} \int_\mathbb{R} e^{2 \pi N i\Phi(x, y)} u(y) dy,
\end{equation}
where
\begin{equation}\label{explictformphi}
\Phi(x, y) = \frac{d}{2b} x^2 - \frac{xy}{b} + \frac{a}{2b} y^2.
\end{equation}
\end{lemma}
We provide a short outline of this formula's derivation; for further details see Theorem 11.10 and its following remark in \cite{z12semiclassical}.
Set $\Lambda= \{(x, y, \xi, -\eta) : (x, \xi) = A(y, \eta)\}$ and note that $\Lambda$ is a Lagrangian submanifold of $\mathbb{R}^4$. Additionally, as $b \neq 0$, the projection $\Lambda \ni (x,y, \xi, \eta) \mapsto (x, y)$ is a diffeomorphism onto $\mathbb{R}^2$. Therefore, there exists a generating function $\Phi(x,y)$ such that $\Lambda=\{(x, y, \partial_x \Phi, \partial_y \Phi)\}$.
As $\eta = -\partial_y \Phi$ and $\xi = \partial_x \Phi$, we know $\Phi(x, y)$ is given by (\ref{explictformphi}).
We then have (\ref{eq:explictformM}), where the coefficient ensures $M$ is unitary, up to a unit factor.
\section{Proof of Theorem \ref{thm:lowerbound}}\label{secproofoflowerbound}
First, following the presentation in \cite{Bonechi-DeBievre2000_Article_ExponentialMixingAndTimeScales}, in Section \ref{lowerboundsubsection1}, we show that for a sequence $N_k$, $M_{N_k,0}$ has a large degenerate eigenspace. In Section \ref{lowerboundsubsection2}, we then reduce the proof of Theorem \ref{thm:lowerbound} to a linear algebra argument.
\subsection{Eigenspace of $M_{N_k, 0}$} \label{lowerboundsubsection1}
Consider $A$ given by~\eqref{eq:A-matrix} such that $b$ and $c$ are coprime and $\Tr A$ is even and greater than 2.
Let $\lambda$ be the largest eigenvalue of $A$. Then for each $t \in \mathbb{N}$,
\begin{equation} \label{eq:pdef}
A^t =p_t A-p_{t-1}I, \quad p_{t+1} = \Tr(A)p_t - p_{t-1}, \text{ where}\quad p_t=\frac{\lambda^t-\lambda^{-t}}{\lambda - \lambda^{-1}}\in\mathbb Z.
\end{equation}
Set $T_N =\min \{t : A^t =I \mod N \}$ and, for $k \in \mathbb{N}$, define
$$
N'_k:=\max \{N : A^k =I \mod N\}.
$$
Essential to our proof of Theorem \ref{thm:lowerbound} is the following statement from \cite{Bonechi-DeBievre2000_Article_ExponentialMixingAndTimeScales}. For the reader's convenience, we replicate their proof here.
\begin{theorem}[\cite{Bonechi-DeBievre2000_Article_ExponentialMixingAndTimeScales}, Prop. 11]\label{thm:T_n N_k}
For each $k \in \mathbb{N}$, we have
$N'_{2k}=2p_k$, $N'_{2k+1} =p_k + p_{k+1}$, and $T_{N'_k}=k$.
\end{theorem}
\begin{proof}
Using equation (\ref{eq:pdef}), we see that $N'_k$ is the greatest integer such that
$$\begin{bmatrix} p_k a - p_{k-1} -1 & p_k b \\ p_k c & p_k d -p_{k-1} -1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \mod N'_k.$$
Recall that we assumed $b$ and $c$ are coprime. Therefore, $p_k=0 \mod N'_k$ and $p_{k-1} =-1 \mod N'_k$, which gives $N'_k = \gcd(p_k, p_{k-1} +1)$. We claim that for $s=0, \ldots, k-1$,
\begin{equation} \label{eq:N_kdef}
N'_k = \gcd(p_{k-s} +p_s, p_{k-(s+1)} +p_{s+1}).
\end{equation}
We proceed by induction.
Note that $p_0=0$ and $p_1=1$, therefore, (\ref{eq:N_kdef}) clearly holds for $s=0$. Now suppose we know (\ref{eq:N_kdef}) for some $s \geq 0$. Using (\ref{eq:pdef}) and the identity $\gcd(a, ca-b) =\gcd(a, b)$, we have
\begin{align*}
N'_k &= \gcd(\Tr A p_{k-s-1} -p_{k-s-2} + p_s, \text{ } p_{k-s-1} +p_{s+1})\\
&=\gcd(\Tr A (p_{k-s-1} + p_{s+1}) - \Tr A p_{s+1} - p_{k-s-2} +p_s, \text{ } p_{k-s-1} + p_{s+1})\\
&=\gcd(p_{s+2} + p_{k-(s+2)}, \text{ } p_{k -(s+1)} +p_{s+1}),
\end{align*}
which completes the induction. We set $k=2 \ell$, $s=\ell$ in (\ref{eq:N_kdef}) to conclude
$$N'_{2 \ell} =\gcd(2 p_\ell, p_{\ell -1} + p_{\ell +1}) = \gcd(2 p_{\ell}, \Tr A p_{\ell}) = 2 p_\ell,$$
where the last equality follows from our assumption that $\Tr A$ is even. Similarly, setting $k = 2 \ell +1$ and $s=\ell$ in (\ref{eq:N_kdef}) gives $N'_{2\ell +1}=p_\ell + p_{\ell +1}$.
Now note that for each $k$, we have
\begin{equation}\label{eq:N_k1}
A^k=1 \mod N'_k,
\end{equation}
\begin{equation}\label{eq:N_k2}
A^{T_{N'_k}} =1 \mod N'_k, \quad \text{and} \quad A^{T_{N'_k}} =1 \mod N'_{T_{N'_k}}.
\end{equation}
From the definition of $T_{N'_k}$ and (\ref{eq:N_k1}), we see that $T_{N'_k} \leq k$. From the definition of $N'_k$ and (\ref{eq:N_k2}), we see that $N'_k \leq N'_{T_{N'_k}}$. As $\{N'_k\}$ is increasing, we conclude that $T_{N'_k} \geq k$. Therefore, $T_{N'_k}=k$.
\end{proof}
Now, let $n(N)$ denote the period of $M_{N, 0}$; specifically
$$
n(N) := \min \{t : M_{N, 0}^t = e^{i \phi} \text{ for some } \phi \in \mathbb{R}\}.
$$
Suppose $A_N$ is the matrix with integer entries that satisfies $A^{T_N}=1+N A_N$. From \cite{HB1980}*{(36)-(46)}, we know that $n(N)=T_N$ if $N$ is odd or if $N$ is even and $(A_N)_{12}$ and $(A_N)_{21}$ are even. Otherwise, $n(N) =2T_N$.
Using this formula for $n(N)$ and Theorem \ref{thm:T_n N_k}, following \cite{Bonechi-DeBievre2000_Article_ExponentialMixingAndTimeScales}, we show an upper bound for $n(N)$ that depends only on $\lambda$ and $N$.
\begin{figure}%
\centering
\subfloat[\centering $N=991$ ]{{\includegraphics[width=11cm]{cateigf-991.eps} }}%
\linebreak
\subfloat[\centering $N=989$]{{\includegraphics[width=11cm]{cateigf-989.eps} }}%
\caption{The plots of a maximal $\ell^\infty$-norm, $\ell^2$-normalized eigenfunction of $M_{N,0}$, where $M_{N,0}$ corresponds to $A=\begin{bmatrix}
2 & 3\\1 & 2
\end{bmatrix}$. Specifically, each plot point corresponds to the absolute value of the $i$th coordinate of the eigenfunction for $0 \leq i \leq N-1$.
Note that $N=989$ is an element of the sequence $N_k$ in Corollary~\ref{lem:quantumperiod}, while $N=991$ is not.}%
\label{fig:cateigf}%
\end{figure}
\begin{corollary}\label{lem:quantumperiod}
Let $\lambda$ be the largest eigenvalue of $A$. Then, there exists a sequence of odd $N_k$ such that $2 \log_\lambda N_k +1 \geq n(N_k)$.
\end{corollary}
\begin{proof}
Using our assumption that $\Tr A$ is even, we note that $p_{2k}$ is even and $p_{2k+1}$ is odd. Thus, $n(N'_{2k+1}) =T_{N'_{2k+1}}=2k+1$.
We now formulate a bound for $n(N'_{2k+1})$ in terms of $N'_{2k+1}$. As $\Tr A >2$, we know $\lambda >1$. Additionally, using Theorem \ref{thm:T_n N_k}, $N'_{2k+1} = \frac{\lambda^{k+1} + \lambda^k - \lambda^{-k-1} - \lambda^{-k}}{\lambda - \lambda^{-1}} \geq \lambda^{k}$. Therefore, $\log_\lambda N'_{2k+1} \geq k$, which gives $2 \log_\lambda (N'_{2k+1}) +1 \geq 2k +1 =n(N'_{2k+1})$. Labeling $N'_{2k+1}$ as $N_{k}$, we are done.
\end{proof}
Note that the same proof ideas can be adapted for a sequence of even $N$, using the fact that $N'_{2k}$ is even. However, as the odd case is the simpler of the two, our main theorem is proven for a sequence of odd $N$.
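As an illustration (not part of the original argument), the recursion (\ref{eq:pdef}) makes the sequence $N_k=N'_{2k+1}$ easy to tabulate. The short Python sketch below does this for the matrix $A=\begin{bmatrix} 2 & 3 \\ 1 & 2\end{bmatrix}$ used in the figures; in particular it reproduces the value $N=989$ highlighted in Figure \ref{fig:cateigf}.
\begin{verbatim}
# Sketch: tabulate N_k = N'_{2k+1} = p_k + p_{k+1} for A = [[2,3],[1,2]] (Tr A = 4).
tr_A = 4
p = [0, 1]                              # p_0 = 0, p_1 = 1
for t in range(1, 12):
    p.append(tr_A * p[t] - p[t - 1])    # p_{t+1} = Tr(A) p_t - p_{t-1}

N_k = [p[k] + p[k + 1] for k in range(11)]
print(N_k)
# -> [1, 5, 19, 71, 265, 989, 3691, ...]; note that 989 appears,
#    matching the eigenfunction plot for N = 989.
\end{verbatim}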
For ease of notation, again using $N_k := N'_{2k+1}$, set
$$
t_k :=n(N_{k}).
$$
Therefore, if $u$ is an eigenfunction of $M_{N_k,0}$ with eigenvalue $\mu$, then $M_{N_{k},0}^{t_k} u=\mu^{t_k}u$ and also $M_{N_{k},0}^{t_k} u=cu$ for some constant $c$ with $|c|=1$ that is independent of $u$. Thus, each eigenvalue of $M_{N_k, 0}$ satisfies $\mu^{t_k}=c$, so $M_{N_k,0}$ has at most $t_k$ distinct eigenvalues, and by the pigeonhole principle some eigenvalue has multiplicity at least $\frac{N_k}{t_k}$. In other words, $M_{N_{k},0}$ has a degenerate eigenspace with dimension at least $\frac{N_{k}}{t_k}.$
\subsection{Lower Bound}\label{lowerboundsubsection2}
Now we are ready to state the lower bound in linear algebraic terms.
\begin{proposition}\label{prop:lowerbound}
Suppose $V$ is a subspace of $\C^N$ with dimension at least $\frac{N}{t}$. Then,
\begin{align}
\frac{1}{\sqrt{t}}\leq\max_{u\in V,\ \|u\|_{\ell^2}=1}\|u\|_{\ell^\infty}.
\end{align}
\end{proposition}
\begin{proof}
Define $\Pi: \C^N\to V$ to be the orthogonal projection onto $V$ and let $e_j$ denote the $j$th coordinate vector. Note that
$$
\sum_{j=1}^N \|\Pi e_j\|_{\ell^2}^2=\Tr(\Pi^*\Pi)=
\Tr \Pi = \dim V \geq \frac{N}{t}.
$$
Therefore, there exists a $j$ such that $\|\Pi e_j\|_{\ell^2} \geq \frac{1}{\sqrt{t}}$.
We then have
$$\sup_{\substack{u \in V\\ \|u\|_{\ell^2}=1}}\|u\|_{\ell^\infty} =\max_j \sup_{\substack{u \in \C^N \\ \|u\|_{\ell^2}=1}} | \lrang{\Pi u, e_j} |=\max_j \sup_{\substack{u \in \C^N \\ \|u\|_{\ell^2}=1}} | \lrang{u, \Pi e_j} |=\max_j \|\Pi e_j\|_{\ell^2} \geq \frac{1}{\sqrt{t}},$$
which completes the proof.
\end{proof}
We claim that Proposition \ref{prop:lowerbound} implies Theorem \ref{thm:lowerbound}. In particular, fixing $V$ to be the degenerate eigenspace of $M_{N_k, 0}$ with dimension at least $\frac{N_k}{t_k}$ and using the fact that $\C^{N_k} \simeq \mathcal{H}_{N_k}(0)$, Proposition \ref{prop:lowerbound} implies
\begin{align}
\frac{1}{\sqrt{2 \log_\lambda N_{k}+1}} \leq \frac{1}{\sqrt{t_k}} \leq\max_{u\in V,\ \|u\|_{\ell^2}=1}\|u\|_{\ell^\infty}.
\end{align}
Since $2 \log_\lambda N_{k}+1 \leq \frac{2\log_\lambda N_k}{(1-\varepsilon)^2}$ for all sufficiently large $k$, this yields the lower bound stated in Theorem \ref{thm:lowerbound}.
\section{Proof of Theorem \ref{thm:upperbound}}\label{secproofofupperbound}
We examine $A \in \text{SL}(2, \mathbb{Z})$ of the form (\ref{e:A-intro}). Note that the conditions on $A$ are less restrictive than those in Section \ref{secproofoflowerbound}.
Let $N$ be odd. We begin with the following dispersive estimate.
\begin{proposition}\label{propupperbound}
For $N$ odd and $M_{N, 0}: \mathcal{H}_N(0) \rightarrow \mathcal{H}_N(0)$, we have
\begin{align*}
\left|\lrang{M_{N,0} e_j^0,e_k^0}_\mathcal{H} \right|\leq \frac{\sqrt{|b|}}{\sqrt{N}}.
\end{align*}
\end{proposition}
\begin{proof}
We begin by calculating $M_{N, 0} e_j^0$. By Lemma~\ref{lem:explicitformula}, for $\Phi(x, y) = \frac{d}{2b} x^2 - \frac{xy}{b} + \frac{a}{2b} y^2$,
$$M_{N, 0} e^0_j(x) = \frac{1}{\sqrt{|b|}} \int_\mathbb{R} e^{2 \pi iN \Phi(x, y)} \sum_{k \in \mathbb{Z}} \delta \left(y- \frac{Nk+j}{N} \right) dy=\frac{1}{\sqrt{|b|}} \sum_{k \in \mathbb{Z}} e^{2 \pi i N \Phi\left(x, \frac{Nk+j}{N}\right)}.$$
Thus, since $\Phi$ only enters through the phase $e^{2\pi i N\Phi}$, we focus on calculating $\Phi\left(x, \frac{Nk+j}{N}\right)$ mod $\frac{1}{N}\mathbb{Z}$. In the following, we set $k = mb + r$ for $0 \leq r \leq |b| -1$. We have
$$\Phi\left(x, \frac{Nk +j}{N} \right) =\frac{d}{2b}x^2 - xm - \frac{xr}{b} - \frac{xj}{Nb}+ \frac{ar^2}{2b} + \frac{arj}{bN} + \frac{aj^2}{2bN^2} \mod \frac{1}{N}\mathbb{Z}.$$
Thus, using the Poisson summation formula and the fact that $\delta(x-x_0) f(x) = \delta(x-x_0) f(x_0)$, we know
\begin{align*}
M_{N,0} e_j^0(x) &= \frac{1}{\sqrt{|b|}}\sum_{r=0}^{|b|-1} e^{\frac{2 \pi i}{b} \left( \frac{aNr^2}{2} + arj + \frac{aj^2}{2N}\right)} \sum_{m \in \mathbb{Z}} e^{\frac{2 \pi i N}{b} \left(\frac{d}{2} x^2 -xr- \frac{xj}{N}\right)} e^{-2 \pi i N xm}\\
&=\frac{1}{N\sqrt{|b|}} \sum_{l \in \mathbb{Z}} \sum_{r=0}^{|b|-1} e^{\frac{2 \pi i}{b} \left( \frac{d l^2}{2N} -lr - \frac{lj}{N} + \frac{ar^2N}{2} + arj + \frac{aj^2}{2N}\right)} \delta\left(x - \frac{l}{N}\right).
\end{align*}
Now setting $$c_l = \sum_{r=0}^{|b|-1} e^{\frac{2 \pi i}{b} \left(\frac{d l^2}{2N} -lr - \frac{lj}{N} + \frac{ar^2N}{2} + arj + \frac{aj^2}{2N} \right)},$$ we want to show that $c_l$ has period $N$, in other words $c_l =c_{l+N}$.
We know that $$c_{l+N} = e^{\frac{2 \pi i}{b} \left(\frac{d l^2}{2N} +dl + \frac{dN}{2} - \frac{lj}{N} -j + \frac{aj^2}{2N} \right)}\sum_{r=0}^{|b|-1} e^{\frac{2 \pi i}{b} \left(-lr -Nr+ \frac{ar^2N}{2} + arj \right)} =\alpha_{l+N} \sum_{r=0}^{|b|-1} \beta_{l+N, r}.$$
Recall that in Section \ref{preliminaries} we showed that, in order for $M_{N, 0}$ to map $\mathcal{H}_N(0)$ to itself, $ab$ must be even. Therefore, $$\beta_{l, r+b} = e^{\frac{2 \pi i}{b} \left( -lr -lb + \frac{ar^2 N}{2} +abrN + \frac{ab^2 N}{2}+ arj + abj \right)}=e^{\frac{2 \pi i}{b} \left( -lr + \frac{ar^2 N}{2} + arj \right)}=\beta_{l,r}.$$
Therefore, $c_{l+N} = \alpha_{l+N} \sum_{r=d}^{|b|-1+d} \beta_{l+N, r}$.
As $ad-bc=1$, we know $ad= 1$ mod $b$. Then,
$$c_{l+N} = \alpha_{l+N} \sum_{r=d}^{|b|-1+d} e^{\frac{2 \pi i}{b} \left(-lr -Nr+ \frac{ar^2N}{2} + arj \right)} = \alpha_{l+N} \sum_{r=0}^{|b|-1} e^{\frac{2 \pi i}{b} \left(-lr -ld-Nd+ \frac{ar^2N}{2} + \frac{dN}{2} + arj +j \right)}=c_l.$$
Using the fact that $c_k = c_{k+lN}$ for all $l \in \mathbb{Z}$ and that $\{e_j^0\}$ is an orthonormal basis, we have
\begin{align*}
&\lrang{M_{N,0} e_j^0(x), e_k^0(x)}_{\mathcal{H}}\\
&= \frac{1}{\sqrt{N|b|}} \lrang{\frac{1}{\sqrt{N}} \sum_{l \in \mathbb{Z}} c_{k+lN} \delta\left(x - \frac{Nl +k}{N}\right), \frac{1}{\sqrt{N}} \sum_{l \in \mathbb{Z}} \delta\left(x- \frac{Nl + k}{N} \right)}_\mathcal{H}\\
&= \frac{1}{\sqrt{N|b|}} \sum_{r=0}^{|b|-1} e^{\frac{2 \pi i}{b} \left( \frac{ar^2N}{2} + arj + \frac{aj^2}{2N} + \frac{dk^2}{2N}- kr - \frac{kj}{N} \right)}.
\end{align*}
Using the triangle inequality, we conclude $|\lrang{M_{N,0} e_j^0(x), e_k^0(x)}_{\mathcal{H}}| \leq \frac{\sqrt{|b|}}{\sqrt{N}}$.
\end{proof}
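For a concrete check (an illustrative numerical sketch, not part of the original proof), one can assemble the full matrix of $M_{N,0}$ in the basis $\{e_j^0\}$ from the entry formula displayed at the end of the proof and verify the bound of Proposition \ref{propupperbound} directly; the matrix $A$ and the value of $N$ below are chosen only for illustration.
\begin{verbatim}
import numpy as np

a, b, c, d = 2, 3, 1, 2          # the matrix A used in the figures (ab, cd even)
N = 989                          # any odd N

j = np.arange(N)[:, None, None]  # index of e_j^0
k = np.arange(N)[None, :, None]  # index of e_k^0
r = np.arange(abs(b))[None, None, :]

phase = (2 * np.pi / b) * (a * N * r**2 / 2 + a * r * j + a * j**2 / (2 * N)
                           + d * k**2 / (2 * N) - k * r - k * j / N)
# Mjk[j, k] = <M_{N,0} e_j^0, e_k^0>
Mjk = np.exp(1j * phase).sum(axis=2) / np.sqrt(N * abs(b))

print(np.abs(Mjk).max() <= np.sqrt(abs(b) / N) + 1e-9)   # dispersive bound above
print(np.allclose(Mjk.conj().T @ Mjk, np.eye(N)))        # should be unitary up to rounding
\end{verbatim}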
\begin{figure}
\centering
\includegraphics[scale=.39]{cat-propagator-norms.eps}
\caption{The plot of $\|M_{N,0}^j\|_{\ell^1 \rightarrow \ell^\infty}$ for $0 \leq j \leq 50$ and several values of $N$. $M_{N,0}$ corresponds to $A=\begin{bmatrix} 2 & 3 \\ 1 &2 \end{bmatrix}$. We also plot the upper bound $\sqrt{|b|/N}$ for $N=855$.}
\label{fig:cat-prop}
\end{figure}
For $j>0$, we use the notation $A^j = \begin{bmatrix} a_j & b_j \\ c_j & d_j \end{bmatrix}$. As a direct consequence of Proposition \ref{propupperbound}, for $j>0$, $$\left\|M_{N,0}^j\right\|_{\ell^1 \rightarrow \ell^\infty} \leq \frac{\sqrt{|b_j|}}{\sqrt{N}}.$$ Figure \ref{fig:cat-prop} compares this upper bound to actual values of $\|M_{N,0}^j\|_{\ell^1 \rightarrow \ell^\infty}$. Now as $\Tr A>2$, $A$ has eigenvalues $\lambda$, $\lambda^{-1}$ with $\lambda>1$. Then we have $|b_j| \sim \lambda^j$, giving $$\left\|M_{N,0}^j\right\|_{\ell^1 \rightarrow \ell^\infty} \leq \frac{C \lambda^{j/2}}{\sqrt{N}}.$$
Suppose $u$ is an eigenfunction of $M_{N, 0}$ with eigenvalue $\mu$. As $M_{N, 0}$ is unitary, $|\mu| =1$. We have $u = \mu^{-n} M_{N,0}^n u$, which gives $u = \frac{1}{T} \left(\sum_{n=0}^{T-1} \mu^{-n} M^n_{N, 0} \right) u$.
Setting $B= \frac{1}{T} \sum_{n=0}^{T-1} \mu^{-n} M_{N,0}^n$, we see $u=Bu$. Again, as $M_{N, 0}$ is unitary, $$B^* B= \frac{1}{T^2} \sum_{m,n=0}^{T-1} \mu^{m-n} M^{n-m}_{N,0}.$$
Then
\begin{align*}
\|B^* B\|_{\ell^1 \rightarrow \ell^\infty}&\leq \frac{1}{T} + \frac{1}{T^2} \sum_{\substack{m \neq n \\ 0 \leq m,n \leq T-1}} \|M^{n-m}_{N,0} \|_{\ell^1 \rightarrow \ell^\infty}\\
&\leq \frac{1}{T} + \frac{C}{T^2} \sum_{\substack{m \neq n \\ 0 \leq m,n \leq T-1}} \frac{\lambda^{\frac{|n-m|}{2}}}{\sqrt{N}}\\
&\leq \frac{1}{T} + \frac{C}{\sqrt{N}}\lambda^{\frac{T}{2}}.
\end{align*}
We set $T= (1-\varepsilon/2)\log_\lambda N$ to get
$$
\|B^* B\|_{\ell^1 \rightarrow \ell^\infty} \leq \frac{1}{(1-\varepsilon/2)\log_\lambda N} +\frac{C}{N^{\varepsilon/4}}.
$$
Finally, we know that $\|B\|^2_{\ell^2 \rightarrow \ell^\infty} = \|B^* B\|_{\ell^1 \rightarrow \ell^\infty}$. Therefore, for $0<\varepsilon<1$, there exists an $N_0$ such that for odd $N \geq N_0$,
$$\|u\|_{\ell^\infty} \leq \|B\|_{\ell^2 \rightarrow \ell^\infty} \|u\|_{\ell^2} \leq \frac{1}{\sqrt{(1-\varepsilon)\log_\lambda N}}.$$
\begin{bibdiv}
\begin{biblist}
\bib{avakumovic}{article}{
author = {V. G. Avakumovic},
journal = {Mathematische Zeitschrift},
pages = {327-344},
title = {\"{U}ber die Eigenfunktionen auf geschlossenen Riemannschen Mannigfaltigkeiten},
volume = {65},
year = {1956},
}
\bib{berard}{article}{
title = {On the wave equation on a compact Riemannian manifold without conjugate points},
journal = {Mathematische Zeitschrift},
volume = {155},
pages = {249-276},
year = {1977},
author = {P. B\"{e}rard}
}
\bib{berry1977}{article}{
title={Regular and irregular semiclassical wavefunctions},
author={M. V. Berry},
journal={J. Phys. A: Math. Gen.},
year = {1977},
volume={2083},
number={10}
}
\bib{HB1980}{article}{
title={Quantization of linear maps on a torus - Fresnel diffraction by a periodic grating},
author={ M.V. Berry},
author={J.H. Hannay},
journal={Physica D},
volume={1},
pages={267--290},
year={1980}
}
\bib{Bonechi-DeBievre2000_Article_ExponentialMixingAndTimeScales}{article}{
title={Exponential Mixing and {$|\log \hbar|$} Time Scales in Quantized Hyperbolic Maps on the Torus},
author={F. Bonechi},
author={S. {De Bi\'{e}vre}},
journal={Communications in Mathematical Physics},
volume={211},
number={3},
pages={659--686},
year={2000},
publisher={Springer}
}
\bib{Bouzouina-deBievre}{article}{
title={Equipartition of the eigenfunctions of quantized ergodic maps on the torus},
author={A. Bouzouina},
author={S. {De Bi\`{e}vre}},
volume={179},
number={1},
pages={83--105},
year={1996},
journal={Communications in Mathematical Physics}
}
\bib{dyatlov2021semiclassical}{article}{
title={Semiclassical measures for higher dimensional quantum cat maps},
author={S. Dyatlov},
author={M. J{\'e}z{\'e}quel},
journal={arXiv preprint arXiv:2108.10463},
year={2021}
}
\bib{hormander}{article}{
author = {L. H\"{o}rmander},
journal = {Acta Math.},
pages = {193 - 218},
title = {The spectral function of an elliptic operator},
volume = {121},
year = {1968},
}
\bib{iwaniecsarnak}{article}{
title = {$L^\infty$ norms of eigenfunctions of arithmetic surfaces},
journal = {Annals of Mathematics},
volume = {141},
number = {2},
pages = {301-320},
year = {1995},
author = {H. Iwaniec},
author = {P. Sarnak}
}
\bib{Kurlberg-1}{article}{
author = {P. Kurlberg},
journal = {Ann. Henri Poincar\'{e}},
pages = {75 - 89},
title = {Bounds on Supremum Norms for Hecke Eigenfunctions of Quantized Cat Maps},
volume = {8},
year = {2007},
}
\bib{Kurlberg-Rudnick-0}{article}{
author = {P. Kurlberg},
author = {Z. Rudnick},
journal = {Duke Math. J.},
pages = {47-77},
title = {Hecke theory and equidistribution for the quantization of linear maps of the torus},
volume = {103},
number = {1},
year = {2000}
}
\bib{Kurlberg-Rudnick-1}{article}{
title = {Value distribution for eigenfunctions of desymmetrized quantum maps},
journal = {International Mathematics Research Notices},
author = {P. Kurlberg},
author = {Z. Rudnick},
volume = {2001},
number = {18},
pages = {985-1002},
year = {2001},
}
\bib{levitan}{article}{
author = {B. M. Levitan},
journal = {Izv. Akad. Nauk SSSR Ser. Mat.},
pages = {33–58},
title = {On the asymptotic behavior of a spectral function and on expansion in eigenfunctions of a self-adjoint differential equation of second order},
volume = {19},
number = {1},
year = {1955}
}
\bib{Olofsson-1}{article}{
title = {Large Supremum Norms and Small Shannon Entropy for Hecke Eigenfunctions of Quantized Cat Maps},
journal = {Communications in Mathematical Physics},
author = {R. Olofsson},
volume = {286},
number = {3},
pages = {1051-1072},
year = {2009}
}
\bib{Olofsson-2}{article}{
title = {Hecke Eigenfunctions of Quantized Cat Maps Modulo Prime Powers},
journal = {Ann. Henri Poincar\'{e}},
author = {R. Olofsson},
volume = {1111},
number = {10},
year = {2009}
}
\bib{Olofsson-3}{article}{
author = {R. Olofsson},
journal = {Ann. Henri Poincar\'{e}},
pages = {1285–1302},
title = {Large Newforms of the Quantized Cat Map Revisited},
volume = {11},
year = {2010}
}
\bib{z12semiclassical}{book}{
title={Semiclassical analysis},
author={M. Zworski},
volume={138},
year={2012},
publisher={American Mathematical Soc.}
}
\end{biblist}
\end{bibdiv}
\end{document}
|
{
"arxiv_id": "2302.08607",
"language": "en",
"timestamp": "2023-02-20T02:03:03",
"url": "https://arxiv.org/abs/2302.08607",
"yymm": "2302"
} | \section{Introduction}
Accurate and efficient speech recognition models are key to realizing automatic speech recognition (ASR) applications in low-power mobile devices and smart appliances \cite{gopalakrishnan2020hfnet}. ASR systems with advanced deep artificial neural networks (ANN)~\cite{ graves2013speech}, powerful feature extraction algorithms (e.g., MFCC, Log-Mel), and massive data have achieved great success, reaching human-level performance in some tasks. However, the improved performance comes with higher energy consumption, and the current feature extraction method is less biologically realistic~\cite{baby2021convolutional} and processes audio signals in a synchronous manner. This motivates us to explore more efficient solutions.
Recently, biologically realistic auditory encoding frontend \cite{Wu2018, pan2020efficient, wu2018spiking} coupled with an SNN has received increasing attention. In particular, recent studies~\cite{Neftci2019, Shrestha2018, zhang2021rectified} using surrogate gradients or transfer learning \cite{wu2021tandem,wu2021progressive,yang2022training} to circumvent the problem of non-differentiable spike functions have opened the way for applying relatively deep SNN on large datasets. However, applying SNN to ASR is in its early phase, while more recent works have made progress in event-driven audio processing algorithms~\cite{wu2019robust, wu2020deep, 10.1007/978-981-10-5230-9_57, Blouw2020,yilmaz2020deep, zhang2019mpd}.
Nevertheless, most SNN studies focus on training synaptic weights~\cite{shrestha2017robust} and membrane time constants~\cite{perez2021neural, fang2021incorporating}. Several studies also propose training synaptic delays to learn the correct spike-timing~\cite{Shrestha2018,zhang2020supervised}. These methods can be regarded as encoding information in spike-times, while little research has studied the effect of different transmission delays \cite{Shrestha2018}. Sun et al.\cite{Sun2022} adopted a learnable axonal delay, which is effective in learning tasks with high temporal complexity. However, this work implements an axonal delay that is capped at a fixed maximum delay that does not adapt.
In this paper, we propose a training scheduler that adjusts the axonal delay cap independently for each layer during training. This improves the performance in ASR tasks. Our method adopts a two-stage training strategy. In the first stage, the network is pre-trained for several epochs to obtain the initial delay distribution. In the second stage, a mechanism to determine the layer-wise delay cap is introduced that uses two tunable parameters, the sliding window size and cap fraction, to calibrate the caps and redistribute the axonal delays.
\iffalse
The remainder of this paper is structured as follows. In
Section \ref{2} we describe the spiking neuron model and proposed adaptive training method. The description of our experimental setup is given in Section \ref{3}. In Section \ref{4}, we demonstrate the effectiveness of the proposed method on two event-based audio datasets before concluding in Section \ref{5}.
\fi
\section{Methods Description} \label{2}
\begin{figure*}[htbp]
\centering
\vspace{-0.8cm}
\hspace{-7cm}
\includegraphics[scale=0.62]{figure/inter.pdf}\vspace{-4cm}\hspace{-5cm}
\caption{Illustration of how the adaptive delay caps are determined and the axonal delays adjusted. The generated spikes $s^1(t)$ will be shifted in time by $d_{i}$ and then output as spike trains $s_{d}^1(\hat t)$ in the axonal delay module. The adaptive scheduler will adjust the delay cap accordingly. The delay value may be the same across neurons, such as the top two neurons with the same delay value $d_{1}$. The layer can be a traditional convolutional layer, dense layer, or recurrent layer.}
\label{fig:input}
\end{figure*}
We first describe the spiking neuron model in Sec.~\ref{sec:1}; the axonal delay module with an adaptive delay cap is then introduced in Sec.~\ref{sec:2}.
\subsection{Spike Response Model}\label{sec:1}
The spiking neuron model obtains its name from its event-driven computation paradigm, in which neurons communicate with spikes; this is highly energy efficient since no input integration is performed when there are no incoming events. In our work, we employ the Spike Response Model (SRM), whose membrane potential is described below:
\begin{equation}
u^{l}_{i}(t) = \sum_{j}W^{l-1}_{ij}(\epsilon * s^{l-1}_{j})(t) + (\nu * s^{l}_{i})(t)
\end{equation}
where $W_{ij}^{l-1}$ indicates the synaptic weight from neuron $j$ to $i$ at layer $l-1$ and $u^{l}_{i}$ refers to the membrane potential of neuron $i$ in layer $l$, while $s^{l-1}_{j}$ is the incoming spike pattern from the preceding neuron $j$. In this experiment, we use the response signal $a(t)=(\epsilon * s^{l-1})(t)$ to describe the response of neurons by convolving input spikes $s^{l-1}(t)$ with the response kernel $\epsilon$, where $\epsilon(t) = \frac{t}{\tau_s}\exp(1-\frac{t}{\tau_s})\Theta(t)$. Here, $\Theta(t)$ represents the Heaviside step function. Likewise, the refractory signal can be described as $(\nu * s^{l})(t)$, where $\nu(t) = -2\theta_{u}\,\frac{t}{\tau_r}\exp(1-\frac{t}{\tau_r})\Theta(t)$. Here, the parameters $\tau_s$ and $\tau_r$ are the time constants of the corresponding kernels.
An output spike is generated whenever $u^{l}_{i}$ surpasses the pre-defined threshold $\theta_{u}$. This spike-generation process can be formulated as
\begin{equation}
s^{l}_{i}({t}) = \Theta({u^{l}_{i}(t) - \theta_{u}}) \
\end{equation}
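For concreteness, a minimal NumPy sketch of these SRM dynamics in discrete time is given below. It is illustrative only: the function name, array shapes, and default parameter values are our own assumptions rather than the paper's implementation, and the learnable axonal delays of the next subsection are omitted here.
\begin{verbatim}
import numpy as np

def srm_layer(spikes_in, W, tau_s=5.0, tau_r=5.0, theta_u=10.0):
    # spikes_in: (n_in, T) binary spike trains s^{l-1}; W: (n_out, n_in) weights.
    n_out, T = W.shape[0], spikes_in.shape[1]
    t = np.arange(T)
    eps = (t / tau_s) * np.exp(1.0 - t / tau_s)                  # response kernel
    nu = -2.0 * theta_u * (t / tau_r) * np.exp(1.0 - t / tau_r)  # refractory kernel

    # response signals a_j(t) = (eps * s_j)(t), one per input neuron
    a = np.array([np.convolve(s, eps)[:T] for s in spikes_in])

    refr = np.zeros((n_out, T))        # running refractory signal (nu * s^l)(t)
    spikes_out = np.zeros((n_out, T))
    for step in range(T):
        u = W @ a[:, step] + refr[:, step]   # membrane potential u_i^l(t)
        fired = u >= theta_u
        spikes_out[fired, step] = 1.0
        if fired.any():                      # add refractory kernel of new spikes
            refr[fired, step:] += nu[: T - step]
    return spikes_out
\end{verbatim}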
\subsection{Axonal delay module and adaptive delay caps}\label{sec:2}
In Fig.\ref{fig:input}, the axonal delay module and an adaptive training scheduler for delay caps are shown. The axonal delay is part of spike transmission, and we formulate it such that it can be jointly learned with an optimization algorithm.
\begin{equation}
s_{d}^{l}(\hat{t}) = \delta(t-{d^{l}}) * s^{l}(t)
\end{equation}
In layer $l$, $d^{l}$ denotes the set of axonal delays \{${d_{1},d_{2},\ldots,d_{n}}$\}, each constrained to $0\leq d_{i}\leq \theta_d$. Meanwhile, $s_{d}^{l}(\hat{t})$ denotes the spike trains output by the delay module at the shifted time $\hat{t}$. From the optimization point of view, constraining the delay values can facilitate learning. Here, we compute the fraction of delayed neurons against the total number of neurons within a sliding window \cite{niu2019comparison} in the adaptive training scheduler so as to optimize training.
Consider the sliding window of size $m$: when the fraction of delayed neurons within this window exceeds the pre-defined cap fraction $\alpha_{\theta}$, the sliding window shifts right by 1 and the delay cap $\theta_{d}$ increases by 1. Pseudo-code of the proposed adaptive training scheduler is presented in Algorithm \ref{alg1}, and a runnable sketch of the same control flow is given after the algorithm.
During training (in the second while loop), the delay will be clipped as follows
\begin{equation}
{d} = max(0, min(d, \theta_d))
\label{eq4}
\end{equation}
where the $\theta_d$ and delays $d$ will be adaptively adjusted according to our scheduler.
\begin{algorithm}[ht]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\caption{Pseudo-code of the adaptive training scheduler}
\label{alg1}
\begin{algorithmic}[0]
\REQUIRE $\theta_d$: Delay Cap, $d$: Delay value, $\alpha_{\theta}$: Cap fraction, $m$: Sliding window size, $C_{i}$: Count of neurons with delay value $d_{i}$, $n$: Index of max delay value, $Tsteps$: Number of training steps, $Hidden Units$: Number of neurons in this layer
\STATE {\bf Initialization}:$\left\{ {\theta_{d}} \right\},\left\{ d \right\} $
\STATE {\bf Pre-train model}: $x$ epochs
\STATE {\bf Initialization}: $n \leftarrow C.index(max(d))$, $counter \leftarrow 0$
\STATE Fraction of neurons within sliding window:\\
$\alpha =\frac{(C_{n-m}+...+C_{n-1}+C_{n})}{Hidden Units}$
\WHILE{{$ \alpha > \alpha_{\theta}$ }}
\STATE $n \leftarrow n + 1$
\STATE $\theta_{d} \leftarrow \theta_{d} + 1$
\STATE $counter \leftarrow 0$
\WHILE{$counter < Tsteps$}
\STATE Train $d$
\STATE Clip $d$ based on Equation~(\ref{eq4})
\STATE $counter \leftarrow counter + 1$
\ENDWHILE
\ENDWHILE
\ENSURE Maximum Delay Cap $\theta_d$, Delay $d$.
\end{algorithmic}
\end{algorithm}
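To make the control flow of Algorithm \ref{alg1} concrete, a compact Python rendering is sketched below. It is illustrative only: \texttt{train\_one\_step} is a hypothetical placeholder for one SLAYER training step that updates the delay vector \texttt{d} in place, and the delay histogram is simply recomputed from the integer-rounded delays after each cap increase.
\begin{verbatim}
import numpy as np

def adaptive_delay_cap(d, theta_d, alpha_theta, m, t_steps, train_one_step):
    # d: per-neuron axonal delays (1-D float array) after pre-training
    # theta_d: current delay cap; alpha_theta: cap fraction; m: sliding window size
    hidden_units = d.size
    counts = np.bincount(np.rint(d).astype(int), minlength=int(theta_d) + 1)
    n = int(np.rint(d).max())                 # index of the largest delay value

    def window_fraction():
        lo = max(0, n - m)                    # neurons with delays in [n - m, n]
        return counts[lo:n + 1].sum() / hidden_units

    while window_fraction() > alpha_theta:
        n += 1
        theta_d += 1                          # relax the delay cap by one
        for _ in range(t_steps):
            train_one_step(d)                 # gradient update of d (and weights)
            np.clip(d, 0, theta_d, out=d)     # clipping rule: d = max(0, min(d, theta_d))
        counts = np.bincount(np.rint(d).astype(int), minlength=int(theta_d) + 1)
    return theta_d, d
\end{verbatim}
In the actual training, the same schedule is applied independently for each layer, as stated above.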
\section{Experimental Setup}\label{3}
\subsection{Datasets}
We evaluate our proposed methods on the SHD \cite{cramer2020heidelberg} and NTIDIGITS \cite{anumula2018feature} datasets. These datasets are the spike patterns processed by the artificial cochlear models\cite{cramer2020heidelberg, anumula2018feature} from the original audio signals and have high temporal complexity \cite{iyer2021neuromorphic}. The SHD dataset consists of 10,420 utterances of varying durations ($0.24s$ to $1.17s$) by 12 speakers. It contains a total of 20 digits classes from '0' to '9' in English and German language. We adopt the same data preprocessing method in \cite{yin2020effective}, and the same train/validation/test split of the dataset as the official \cite{cramer2020heidelberg}.
NTIDIGITS is another event-based spoken digit recognition task. It was obtained by processing the original TIDIGITS audio with the 64-channel CochleaAMS1b sensor and recording its output. In total, there are 11 spoken digits (the digits '0' to '9' in English and 'oh'). We follow the same train/test split as used in \cite{anumula2018feature}.
\iffalse
\begin{figure}
\centering
\includegraphics[scale=0.50]{figure/input_shd2.png}
\caption{An English spoken "2" example from SHD datasets. The Y-axis represents channels from 0 to 699 and X-axis represents time in $ms$ resolution. }
\label{fig:example1}
\end{figure}
\fi
\subsection{Implementation details}
In our experiments, we use the Adam optimizer to jointly update the synaptic weights and axonal delays, with a constant learning rate of 0.1 and a minibatch size of 128. The initial delay caps are set to 64 for SHD and 128 for NTIDIGITS. The number of pre-training epochs $x$ is set to $40$, and the simulation time step is 1 $ms$ for both datasets. Table \ref{tbl:param} lists the parameters used in our experiments.
We train our SNNs with the SLAYER-PyTorch framework \cite{Shrestha2018}. This recently introduced GPU-accelerated software is publicly available and has proven effective for training SNNs. For both tasks, we use an SNN with two hidden fully-connected layers, with 128 hidden units per layer for SHD and 256 for NTIDIGITS. The total number of model parameters is approximately 0.11M for the SHD dataset and 0.08M for the NTIDIGITS dataset.\\
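As a minimal illustration of the joint update of weights and delays, a toy setup might look as follows (a sketch with illustrative class and parameter names, not the actual SLAYER-PyTorch API):
\begin{verbatim}
import torch
import torch.nn as nn

class DelayedLayer(nn.Module):
    """Toy fully-connected layer carrying both synaptic weights and
    per-neuron axonal delays as trainable parameters."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(n_out, n_in))
        self.delay = nn.Parameter(torch.zeros(n_out))

layer = DelayedLayer(700, 128)   # e.g. 700 SHD input channels, 128 hidden units
# Weights and delays are updated jointly by the same Adam optimizer (lr = 0.1).
optimizer = torch.optim.Adam(layer.parameters(), lr=0.1)
\end{verbatim}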
\vspace{-0.5cm}
\begin{table}[ht]
\small
\centering
\caption{
Detailed parameter settings for the different datasets}
\label{tbl:param}
\begin{tabular}{clrccc}
\cline{1-6}
\multicolumn{1}{c}{\bf Dataset}& \multicolumn{1}{c}{\bf $\tau_s$} & \multicolumn{1}{c}{\bf $\tau_r$} & \textbf{Initial $\theta_{d}$} &\textbf{$\theta_{u}$}&$T_{steps}$
\\ \hline
\multirow{1}{*}{{SHD}}
& 1 & 1 & 64&10&150 \\
\multirow{1}{*}{{NTIDIGITS}}
& 5 & 5 & 128&10&150 \\
\hline
\end{tabular}
\vspace{-0.5cm}
\end{table}
\section{Results}\label{4}
\subsection{Ablation Study of Different Cap Fraction and Sliding Window Size}
The window size and cap fraction are important parameters for obtaining a reasonable delay cap. To evaluate their influence, we design an ablation study covering 5 window sizes and 3 cap fractions. The impact of these parameters on the SHD and NTIDIGITS datasets is shown in Fig.~\ref{fig:timewindow}. For the SHD dataset, we observe that a small fraction consistently gives good results; the best results are obtained by keeping the number of neurons with the two largest delay values within 5\% of the total number of neurons, i.e., a window size of 2 and a cap fraction of 5\%. For the NTIDIGITS dataset, a larger window size and fraction are more helpful, and the accuracy keeps increasing as the window size grows, except for the combination of window size 5 and cap fraction 10\%.
\begin{figure}
\centering\vspace{-0.00cm}
\includegraphics[scale=0.5]{Fig_SHD-dataset.pdf}\vspace{0.2cm}
\includegraphics[scale=0.5]{Fig_N-TIDIGITS-dataset.pdf}
\caption{Ablation study of different sliding window sizes $m$ and cap fractions $\alpha_{\theta}$ on (Top) SHD and (Bottom) N-TIDIGITS. The x-axis indicates the sliding window size and the y-axis the accuracy. We use 3 different cap fractions (5\%, 10\%, and 15\%) and 5 different window sizes (1, 2, 3, 4, and 5) in our experiments.}\vspace{-0.0cm}
\label{fig:timewindow}
\end{figure}
\begin{table*}[ht]
\small
\vspace{-0.8cm}
\centering
\caption{Comparison with the state-of-the-art in terms of network size and accuracy.}
\label{tbl:results}
\setlength{\tabcolsep}{0.5mm}{
\begin{tabular}{clrc}
\cline{1-4}
\multicolumn{1}{c}{Datasets}& \multicolumn{1}{c}{\bf Method} & \multicolumn{1}{c}{\bf Params} & \textbf{Accuracy}
\\ \hline
\multirow{6}{*}{{SHD}}
& Feed-forward SNN \cite{cramer2020heidelberg} & 0.11 MB
& $48.6\pm 0.9\%$\\
& RSNN \cite{cramer2020heidelberg} &
1.79 MB& $83.2\pm 1.3\%$\\
& RSNN with Adaption \cite{yin2020effective} &
0.14 MB&$84.4\%$ \\
& Heterogeneous RSNN \cite{perez2021neural} &
0.11 MB& $82.7\pm 0.8\%$\\
& SNN with Time Attention \cite{yao2021temporal} &
0.12 MB&$91.08\%$ \\
& \textbf{This work (m=2, $\alpha_{\theta}$ = 5\%)} &
\bf0.11 MB&$\bf 92.45\%$ \\
\hline
\multirow{5}{*}{{NTIDIGITS}}
& GRU-RNN \cite{anumula2018feature}$^\dagger$
&
0.11 MB& $90.90\%$ \\
& Phased-LSTM \cite{anumula2018feature}$^\dagger$
&
0.61 MB& $91.25\%$ \\
& ST-RSBP \cite{zhang2019spike}
&
0.35 MB& $93.63\pm 0.27\%$ \\
& SNN with Axonal delay \cite{Sun2022} &
0.08 MB& $94.45\%$ \\
& \bf This work (m=4, $\alpha_{\theta}$ = 10\%) &
\bf0.08 MB& $\bf95.09\%$ \\
\hline
\multicolumn{3}{l}{\footnotesize{$^\dagger$ Non-SNN implementation.}}
\end{tabular}}
\vspace{-0.5cm}
\end{table*}
\subsection{SHD}
Table \ref{tbl:results} reports the performance of the feed-forward SNN, the recurrent spiking neural network (RSNN) with adaptive firing threshold, the heterogeneous RSNN, the RSNN with temporal attention, and our proposed method. As can be seen from Table \ref{tbl:results}, our proposed method achieves an accuracy of 92.45\%, which is the best performance reported for this task. More importantly, it can be observed that the RSNN outperforms the feed-forward SNN, which implies that the task is inherently dynamic. However, our results show that a feed-forward SNN, without recurrent connections but with an adaptive axonal delay module, can achieve flexible short-term memory and better accuracy using fewer parameters.
\subsection{NTIDIGITS}
When the ANNs (GRU-RNN and Phased-LSTM) are applied to this digit recognition task, they achieve accuracies of 90.90\% and 91.25\%, respectively (see Table \ref{tbl:results}). However, these networks cannot fully exploit the advantages of sparse event-based information and have to rely on an event synthesis algorithm, effectively losing the advantage gained when processing information embedded in spike timing. Using our adaptive axonal delay module, we achieve the best performance of 95.09\%; compared with Zhang et al. \cite{zhang2019spike}, which directly uses spike-train-level features, our model improves accuracy by about 1.4\% while using only 23\% of the parameters.
\subsection{Effect of the Delay Value}
As shown in Table \ref{tbl:delay}, the delay cap is important to the performance of the model. For both audio classification tasks, the performance is already competitive without limiting the range of delay values, demonstrating the effectiveness of the axonal delay module. However, it is still beneficial to limit the delay range, and our experiments show that an appropriate delay cap improves the classification ability of the model. Taking the SHD dataset as an example, our adaptive training scheduler determines the optimal delay distribution, which enables the model to achieve the best performance. For other combinations of delay caps, the obtained accuracy drops, which may indicate that each network structure has an optimal delay cap; such caps are, however, difficult to find manually, and our method provides an adaptive training scheduler to search for them. It is worth noting that the NTIDIGITS dataset exhibits the same behavior.
\vspace{-0.0cm}
\begin{table}[ht]
\small
\centering
\caption{Ablation studies for different delay cap methods and the effect of the delay cap $\theta_{d}$ in the axonal delay module. $\theta_{d_i}$ indicates the delay cap of the $i^{th}$ layer. 'Manual' refers to using a static delay cap, while 'Adaptive' refers to using our proposed adaptive scheduler.}
\label{tbl:delay}
\begin{tabular}{cclrc}
\cline{1-5}
\multicolumn{1}{c}{\bf Dataset}&
\multicolumn{1}{c}{\bf Method}&
\multicolumn{1}{c}{\bf ($\theta_{d_1}$,\ $\theta_{d_2}$)} & \multicolumn{1}{c}{\bf Params} & \textbf{Accuracy}
\\ \hline
\multirow{5}{*}{\rotatebox{90}{SHD}}
&Manual& (0,\ \ \ 0) &
108,820& $67.05\%$ \\
&Manual& (64,\ 64) &
109,076& $86.84\%$ \\
&Manual& (128,\ 128) &
109,076& $87.24\%$ \\
&Adaptive& (107,\ 175) &
109,076& $\bf92.45\%$ \\
&Manual& ($+\infty$,$+\infty$) &
109,076& $84.99\%$ \\
\hline
\multirow{4}{*}{\rotatebox{90}{NTIDIGITS}}
&Manual& (0, \ \ \ 0) &
85,259& $78.86\%$ \\
&Manual& (128, \ 128) &
85,771& $94.19\%$ \\
&Adaptive& (215, \ 215) &
85,771& $\bf95.09\%$ \\
&Manual& ($+\infty$,$+\infty$) &
85,771& $93.83\%$ \\
\hline
\end{tabular}
\vspace{-0.0cm}
\vspace{-0.5cm}
\end{table}
\section{Conclusions}\label{5}
In this paper, we integrate a learnable axonal delay module into the spiking neuron model and introduce an adaptive training scheduler to adjust the delay caps in each network layer. Compared to previous work that adopts a static delay cap, our proposed method significantly improves the classification capability without extra parameters. Furthermore, our adaptive scheduler can be easily integrated into existing delay modules and adaptively determines the optimal delay distribution of the network. We achieve the best reported performance on the SHD (92.45\%) and NTIDIGITS (95.09\%) datasets with the fewest parameters. These results suggest that a neuronal axonal delay with an adaptive delay cap can be used to model a lightweight, flexible short-term memory module, enabling an accurate and efficient spoken word recognition system. We conjecture that the axonal delay mechanism introduces a form of short-term memory without increasing the number of trainable parameters. For certain ASR datasets in which 1) information is organized in short sequences, without the need for long-term memory, and 2) data is limited in size and hence prone to overfitting, the axonal delay mechanism may work best in combination with a small feed-forward SNN. Our experiments agree with this view and further confirm the great potential of using spike timing as part of the solution to an ASR problem. Furthermore, the use of spike-based losses \cite{9892379} can expedite decision-making, thereby further reducing the impact of the additional latency. \\
\footnotesize
\bibliographystyle{IEEEbib}
\section{Introduction}
Atomic vibrational dynamics is an underlying factor to a variety of physical and chemical phenomena in molecules and materials.~\cite{BORNHUANG,THERMOSTAT_HILL} In quantum chemistry and physics, the simplest vibrational model is represented by the harmonic approximation (HA) to the Born-Oppenheimer potential energy surface (PES), where higher-than-quadratic terms are neglected and the vibration dynamics is described by a set of independent quantum harmonic oscillators.~\cite{maradudin1963} The HA has represented and still largely represents the standard approach to vibration dynamical investigations in molecules and materials because of its relatively low computational cost (as only second-order energy derivatives with respect to atomic displacements are required) and availability of robust implementations in most quantum chemistry and density functional theory (DFT) programs.~\cite{BARONI,TOGO20151,319,GONZE} We note that most solid state DFT programs implement analytic forces while the Hessian matrix is either computed analytically~\cite{giannozzi2009quantum} or from numerical finite differences of the forces computed at displaced atomic configurations.~\cite{HESSvsOPT,IntStrain}
In order to go beyond the HA and account for couplings among the normal modes of vibration, high-order terms of the PES need to be explicitly computed.
In quantum chemistry software packages, implementations to compute higher-than-quadratic terms of the PES are scarce; moreover, explicit analytic expressions have been derived only for molecular Hartree-Fock (HF) and DFT
(see Ref. [\onlinecite{ringholm2014analytic}] and references therein for a detailed review on the evolution of analytic total energy derivatives at different orders for molecules). Common strategies for the calculation of cubic and quartic terms of the PES (i.e. third- and fourth-order total energy derivatives with respect to nuclear displacements) involve numerical differentiation making use of either just the energy or the energy and lower order analytic derivatives at a set of atomically displaced nuclear configurations. Many different numerical schemes have been proposed, each requiring a specific number of nuclear configurations to be explicitly explored.~\cite{martin1996there,burcl2003vibrational,yagi2004ab,barone2005anharmonic} We refer to Ref. [\onlinecite{Lin}] and references therein for a comprehensive review of different numerical differentiation schemes for cubic and quartic interatomic force-constants for molecular systems. The computational cost associated to each scheme is determined by two factors: the number of configurations $N_\textup{conf}$ needed and the type of calculation required at each configuration (energy only, energy and analytic forces, etc.). The former factor accounts for most of the computational cost as computing the energy through the self-consistent field (SCF) procedure at a new nuclear configuration proves much more expensive than analytically evaluating the forces after the SCF is completed at an already explored geometry.
In a solid state context, much effort has gone into the implementation of schemes for the description of cubic terms of the PES (i.e. those relevant to the computation of the lattice thermal conductivity),~\cite{togo2015-3py,togo2015-py,plata2017,skelton2014,whalley2016,linnera2017} with fewer attempts to include up to quartic terms.~\cite{PhysRevLett.113.185501,prentice2017,souvatzis2008,errea2014,parlinski2018} Let us note that, for solids, the HA limits the description of the lattice dynamics even more than it does for molecular systems because of the corresponding missing dependence of thermodynamic properties on volume. However, this class of limitations can be largely overcome by computing harmonic phonons at different lattice volumes within the so-called quasi-harmonic approximation (QHA).~\cite{QHA,Baroni01012010} Some of the authors of this paper have recently developed a module of the \textsc{Crystal} program for the calculation of quasi-harmonic thermal properties of materials (from thermal expansion to thermo-elasticity).\cite{Grun_MIO,Thermal_MgCaO,CORUNDUM,FORSTERITE,LIF_PRL,UREA_QHA,THERMOELAST_MIO,ryder2019quasi,CU2O_ANTTI,THERMOELA_ORGSEMI,maul2020thermoelasticity}
A numerically robust and computationally efficient finite difference scheme (namely, EGH), based on a Taylor's expansion of the PES in the basis of the normal modes, has been proposed in 2008 for molecular systems by Lin {\it et al.}, which requires a minimal set of nuclear configurations to be explored in the definition of a 2M4T or 3M4T representation of the PES.~\cite{Lin} Such scheme (based on the analysis of the relative importance of different types of cubic and quartic terms) has recently been extended to solids by some of the present authors~\cite{PARTI_ANHARM} and implemented in the \textsc{Crystal} program,~\cite{CRYSTAL17PAP,PAPCRYSTAL23} along with the vibrational self-consistent field (VSCF) and vibrational configuration interaction (VCI) methods for computation of anharmonic vibrational states.~\cite{PARTII_ANHARM,maul2019elucidating,schireman2022anharmonic}
In this paper, we illustrate how group theoretical arguments can be used to drastically reduce the number of configurations $N_\textup{conf}$ needed to achieve a quartic representation of the PES on both molecules and materials belonging to high point symmetry groups. An algorithm is presented, as implemented in a developmental version of the \textsc{Crystal23} program, whose efficiency is documented by numerical tests performed on selected molecules and crystalline materials.
\section{Formal Aspects}
\label{sec:form}
\subsection{The Truncation of the PES}
By computing, mass-weighting and diagonalizing the Hessian matrix of either a molecular system with $N$ atoms or a crystal with $N$ atoms per cell, normal modes $Q_i$ and associated harmonic vibration frequencies $\omega_i$ are obtained, with $i=1,\dots, M$, where $M=3N -6(5)$ for molecules and $M=3N -3$ for solids at the $\Gamma$ point. Within the Born-Oppenheimer approximation, vibrational states are determined by solving the nuclear Schr\"odinger equation, which, in terms of normal coordinates, reads:
\begin{equation}
\label{eq:scrod}
\hat{H} \Psi_s({\bf Q}) = E_s \Psi_s({\bf Q}) \; ,
\end{equation}
where $\Psi_s({\bf Q})$ is the vibrational wavefunction of the $s$-th vibrational state and $E_s$ the corresponding energy.
By setting the rotational angular momentum to zero and by neglecting rotational coupling effects, the Hamiltonian operator in Eq. (\ref{eq:scrod}) can be written as:
\begin{equation}
\label{eq:hamil}
\hat{H} = \sum_{i=1}^M -\frac{1}{2} \frac{\partial^2}{\partial Q_i^2} + \hat{V}(Q_1, \dots, Q_M) \; .
\end{equation}
The Born-Oppenheimer PES can be expanded in a Taylor's series centered at the equilibrium nuclear configuration in the basis of such mass-weighted normal coordinates:
\begin{eqnarray}
\label{eq:pes}
\hat{V}(Q_1, \dots, Q_M) &=& \frac{1}{2}\sum_{i=1}^M \omega_i^2 Q_i^2 \nonumber \\
&+& \frac{1}{3!} \sum_{i,j,k=1}^M \eta_{ijk} Q_iQ_jQ_k + \nonumber \\
&+& \frac{1}{4!} \sum_{i,j,k,l=1}^M \eta_{ijkl} Q_iQ_jQ_kQ_l + \cdots \; , \;\;\;
\end{eqnarray}
where $\eta_{ijk}$ and $\eta_{ijkl}$ are cubic and quartic force constants, respectively:
\begin{eqnarray}
\label{eq:fconst}
\eta_{ijk} &=& \left( \frac{\partial^3 E}{\partial Q_i \partial Q_j \partial Q_k}\right)_\textup{eq} \\
\label{eq:fconst2}
\eta_{ijkl} &=& \left( \frac{\partial^4 E}{\partial Q_i \partial Q_j \partial Q_k \partial Q_l}\right)_\textup{eq} \; .
\end{eqnarray}
These are high-order total energy derivatives with respect to collective normal coordinates, evaluated at the equilibrium nuclear configuration. The PES expansion in Eq. (\ref{eq:pes}) needs to be truncated so as to include only those terms contributing significantly to the description of the vibrational states of the system.
In molecular anharmonic calculations, it is a common practice to truncate it after the fourth-order as in most cases neglected higher-than-quartic terms would produce little corrections to the vibrational states (note that for strongly anharmonic systems such as water this may not be the case).~\cite{Lin} Here, we follow the same strategy and thus we consider terms up to fourth-order in the PES (namely, we use a quartic, 4T, representation of the potential). Within a 4T representation, the PES can be further truncated by considering only those force constants involving a maximum of $n$ distinct modes (namely, a $n$M representation of the potential). By combining the two truncation strategies introduced above, a 1M4T representation of the PES would require the evaluation of the force constants below:
\begin{equation}
\label{eq:1M4T}
\eta_{iii} , \; \eta_{iiii} \quad \forall \; i=1,\dots, M \; .
\end{equation}
This representation of the PES neglects two-mode couplings and almost always results in a wrong description of the vibrational states. A popular representation of the potential is the 2M4T one, which includes all two-mode coupling force constants while neglecting three- and four-mode terms:~\cite{Lin}
\begin{eqnarray}
\label{eq:2M4T}
\eta_{iii} , \; \eta_{iiii} \quad &\forall& \; i=1,\dots, M \nonumber \\
\eta_{ijj} , \; \eta_{iij} , \; \eta_{iiij} , \; \eta_{ijjj} , \; \eta_{iijj} \quad &\forall& \; i<j =1,\dots, M \;. \; \;
\end{eqnarray}
This is the representation of the PES we work with here.
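To make the bookkeeping of the 2M4T representation explicit, the following short Python sketch (ours, for illustration only) counts the distinct force constants of Eqs. (\ref{eq:1M4T}) and (\ref{eq:2M4T}):
\begin{verbatim}
from math import comb

def n_2m4t_constants(M):
    """Distinct force constants of a 2M4T PES with M normal modes:
    2 one-mode terms per mode and 5 two-mode terms per pair i < j."""
    return 2 * M + 5 * comb(M, 2)

# e.g. methane, M = 9: 2*9 + 5*36 = 198 cubic and quartic force constants
\end{verbatim}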
\begin{figure}[h!!]
\centering
\includegraphics[width=8.6cm]{Fig1.png}
\caption{(Left) Schematic representation of the displaced nuclear configurations required by the EGH scheme for a 2M4T description of the PES; (Right) Dependence of the number of nuclear configurations $N_\textup{conf}$ needed for the EGH scheme as a function of the number of normal modes $M$.}
\label{fig:egh}
\end{figure}
\subsection{The EGH Scheme for Cubic and Quartic Terms of the PES}
\label{sec:egh}
We start by briefly recalling the fundamentals of the EGH finite-difference scheme, as originally proposed by Lin {\it et al.}~\cite{Lin} We refer to Figure \ref{fig:egh} for a schematic representation. The zeroth step consists in the evaluation of the Hessian at the equilibrium configuration to obtain the harmonic normal modes and frequencies. Then, as a first step, for each normal coordinate $Q_i$, two nuclear configurations are explored towards positive and negative atomic displacements relative to the equilibrium configuration (blue circles in Figure \ref{fig:egh}). Both the total energy $E$ and the analytical gradients $G$ are computed at these configurations. The corresponding 1M terms of the PES of each mode $Q_i$ can be obtained from:
\begin{eqnarray}
\label{eq:scheme3step11}
\eta_{iii} &=& \dfrac{1}{s_i^2}\left( G_{-1}^{i} + G_{+1}^{i} \right) \\
\label{eq:scheme3step12}
\eta_{iiii} &=& \dfrac{3}{s_i^3}\left( G_{+1}^{i} -2s_i\omega_{i} - G_{-1}^{i} \right) \; ,
\end{eqnarray}
where $s_i = h/\sqrt{\omega_i}$ is an adaptive step (see Ref. [\onlinecite{PARTI_ANHARM}] for more details on the definition of the step size $h$) and $G^i_a$ is the gradient with respect to $Q_i$ computed at a nuclear configuration displaced by $a \cdot s_i \cdot Q_i$ from the equilibrium one.
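A literal transcription of the two 1M expressions of Eqs. (\ref{eq:scheme3step11})-(\ref{eq:scheme3step12}) into Python reads as follows (a sketch; gradients, step and harmonic quantities are assumed to be available as plain numbers in consistent units):
\begin{verbatim}
def egh_one_mode(G_plus, G_minus, s_i, omega_i):
    """1M force constants eta_iii and eta_iiii from the EGH scheme.

    G_plus, G_minus: gradients along Q_i at the configurations displaced
                     by +s_i*Q_i and -s_i*Q_i; s_i = h / sqrt(omega_i).
    """
    eta_iii = (G_minus + G_plus) / s_i**2
    eta_iiii = 3.0 * (G_plus - 2.0 * s_i * omega_i - G_minus) / s_i**3
    return eta_iii, eta_iiii
\end{verbatim}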
As a second step, for each pair of normal modes $(Q_i, Q_j)$ so that $i<j$, two nuclear configurations are explored with positive and negative atomic displacements from the equilibrium configuration along both modes at the same time (green circles in Figure \ref{fig:egh}). For a 2M4T representation of the PES, just the total energy $E$ is required at these configurations. The corresponding 2M terms of the PES for each pair of modes are obtained from:
\begin{eqnarray}
\label{eq:scheme3step21}
\eta_{iij} &=& \dfrac{1}{s_i^2}( G_{-1,0}^{j} + G_{1,0}^{j} ) \\
\label{eq:scheme3step22}
\eta_{iiij} &=& \dfrac{3}{s_i^3}( G_{1,0}^{j} - G_{-1,0}^{j} ) \\
\label{eq:scheme3step23}
\eta_{iijj} &=& -\dfrac{1}{2s_i^2s_j^2}( 8E_{0,0} - 4 E_{-1,-1} -4E_{1,1} + \nonumber \\
&&-s_jG_{0,-1}^{j} + s_jG_{0,1}^{j} - s_iG_{-1,0}^{i} + s_iG_{1,0}^{i} + \nonumber\\
&&-4s_jG_{-1,0}^{j}+4s_jG_{1,0}^{j} - 4s_iG_{0,-1}^{i}+4s_iG_{0,1}^{i} + \nonumber\\
&&+2s_i^2\omega_{i}+ 2s_j^2\omega_{j} ) \; ,
\end{eqnarray}
where $E_{a,b}$ and $G_{a,b}^i$ are the total energy and the gradient with respect to $Q_i$, respectively, computed at a nuclear configuration displaced by $a \cdot s_i \cdot Q_i + b \cdot s_j \cdot Q_j$ from the equilibrium one. Analogously, $G_{a,b}^j$ is the gradient with respect to $Q_j$ computed at the same nuclear configuration. For a system with $M$ normal modes, the total number of nuclear configurations to be explored in the definition of the 2M4T PES with this scheme is therefore given by:
\begin{equation}
\label{eq:count3}
N_\textup{conf} = 1 + 2M + 2 \binom{M}{2} \; .
\end{equation}
Figure \ref{fig:egh} shows the dependence of $N_\textup{conf}$ on $M$.
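In code, Eq. (\ref{eq:count3}) is simply (a trivial sketch, counting the equilibrium geometry as the first configuration):
\begin{verbatim}
from math import comb

def n_conf(M):
    """Configurations required by the EGH scheme for a 2M4T PES:
    equilibrium + 2 per mode + 2 per pair of modes."""
    return 1 + 2 * M + 2 * comb(M, 2)

# n_conf(9)  ->   91   (methane-sized problem)
# n_conf(42) -> 1807   (cubane-sized problem)
\end{verbatim}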
\subsection{From Wigner's Theorem to Symmetry Relations among Terms of the Anharmonic PES}
\label{subsec:symm}
The point symmetry group $\mathcal{P}$ of a system (molecular or crystalline) is the set of point symmetry operators $\hat{R}_r$ (with $r = 1,\dots, \vert \mathcal{P}\vert$, where $\vert \mathcal{P}\vert$ is the so-called order of the group) under which the system is invariant. This is expressed by Wigner's theorem, which states that each point symmetry operator of the group must commute with the Hamiltonian operator:~\cite{wigner1931gruppentheorie,ZicErb2,ZicErb}
\begin{equation}
\label{eq:wigner}
\left[ \hat{H}, \hat{R}_r \right] = 0 \qquad \forall r = 1, \dots, \vert \mathcal{P}\vert \; .
\end{equation}
We note that if the Hamiltonian operator consists of a sum of terms, the condition above must be satisfied by each term individually. In group theory, a finite number $N_\textup{irrep}$ of irreducible representations (irreps) can be associated to a point group. Each irrep $\Gamma_\alpha$ (with $\alpha=1,\dots,N_\textup{irrep}$) has a given dimensionality $n_\alpha$ (for standard molecular or crystalline point groups $n_\alpha=1,2,3$) and is fully characterized by the corresponding $n_\alpha\times n_\alpha$ representation matrices $D^\alpha(\hat{R}_r)$ associated to each point symmetry operator. The characters $\chi^\alpha(\hat{R}_r)$ of a representation are simply defined as the trace of these matrices: $\chi^\alpha(\hat{R}_r) = \textup{Tr}[D^\alpha(\hat{R}_r)]$. Each irrep $\Gamma_\alpha$ is defined by $n_\alpha$ basis functions, corresponding to each row of the representation matrices. Let us introduce so-called projector operators associated to each irrep of the group:
\begin{equation}
\label{eq:proj}
\hat{P}^\alpha = \frac{1}{\vert \mathcal{P}\vert} \sum_{r=1}^{\vert \mathcal{P}\vert} \chi^\alpha(\hat{R}_r)^\ast \hat{R}_r \; .
\end{equation}
This operator acts on any function and ``extracts'' its component of symmetry type $\alpha$.
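Numerically, if the symmetry operators are available as matrices on some chosen basis, the projector of Eq. (\ref{eq:proj}) can be assembled as in the following sketch (ours, for illustration only):
\begin{verbatim}
import numpy as np

def projector(characters, operator_matrices):
    """P^alpha = (1/|P|) * sum_r conj(chi^alpha(R_r)) * R_r.

    characters:        chi^alpha(R_r), one value per group operator
    operator_matrices: matrix of each R_r on the chosen basis
    """
    P = sum(np.conj(chi) * R for chi, R in zip(characters, operator_matrices))
    return P / len(operator_matrices)
\end{verbatim}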
Normal modes are bases of the irreps of the point symmetry group of the system. Different modes can belong to the same irrep so that overall the manifold of all the harmonic modes of a system can be expressed as a direct sum of irreps as:
\begin{equation}
\Gamma_\textup{HA} = m_1\Gamma_1 \oplus \cdots \oplus m_\alpha\Gamma_\alpha \oplus \cdots \oplus m_{N_\textup{irrep}}\Gamma_{N_\textup{irrep}} \, ,
\end{equation}
where $m_\alpha$ is the multiplicity of irrep $\alpha$ in the manifold. The group theoretical approach that we illustrate requires normal modes to be explicitly labeled according to their symmetry properties. Therefore, we introduce the following extended notation for each normal mode:
\begin{equation}
\label{eq:not}
Q_i \rightarrow Q_{\alpha ul} \equiv \ket{\alpha u l} \; ,
\end{equation}
where each normal mode is labeled by the irrep $\alpha$ it belongs to, an index $u = 1,\dots,n_{\alpha}$ identifying the row of the irrep it is associated with, and an index $l = 1,\dots,m_{\alpha}$ marking the occurrence of the irrep $\Gamma_\alpha$ the mode refers to. From now on, we refer to the $n_\alpha$ basis functions of each occurrence of each irrep as a {\it set}. Moreover, in Eq. (\ref{eq:not}) we adopt a ket notation, to be used extensively in what follows, with the aim of highlighting these labels more prominently.
To make the notation introduced with Eq. (\ref{eq:not}) more clear, we shall analyse how it works for the simple molecule of methane in Table \ref{tab:methane}. Methane, CH$_4$, is a non-linear molecule with $N=5$ atoms and $M=9$ normal modes (excluding pure translations and rotations) belonging to the T$_\textup{d}$ point symmetry group. Modes 1, 2 and 3 belong to the first occurrence of the 3D irrep $F_2$, form the first {\it set}, and are degenerate (i.e. they have the same harmonic vibration frequency). Modes 4 and 5 belong to the first occurrence of the 2D irrep $E$, form the second {\it set}, and are degenerate. Mode 6 belongs to the 1D total-symmetric irrep $A_1$ and forms the third {\it set} on its own. Finally, modes 7, 8 and 9 belong to the second occurrence of the 3D irrep $F_2$, form the fourth {\it set}, and are degenerate.
\begin{table}[b]
\centering
\caption{Symmetry features of the normal modes of methane (the six pure translations and rotations are excluded).}
\label{tab:methane}
\begin{tabular} {ccccc}
\hline
\hline
Mode & Irrep $\alpha$ & Row $u$ & Occurrence $l$ & {\it Set}\\
\hline
1 & $F_2$ &1& 1 & 1\\
2 & $F_2$ &2& 1 & 1\\
3 & $F_2$ &3& 1 & 1\\
4 & $E$ &1& 1 & 2\\
5 & $E$ &2& 1 & 2\\
6 & $A_1$ &1& 1 & 3\\
7 & $F_2$ &1& 2& 4\\
8 & $F_2$ &2& 2& 4\\
9 & $F_2$ &3& 2& 4\\
\hline
\hline
\end{tabular}
\end{table}
Let us recall how, from group theory, basis functions of irreps transform upon application of a point symmetry operator:
\begin{equation}
\label{operator-application}
\hat{R}_r\ket{\alpha ul} = \sum_{u'=1}^{n_{\alpha}} D_{u'u}^{\alpha}(\hat{R}_r)\ket{\alpha u'l} \; ,
\end{equation}
that is, when acted upon by a symmetry operator, each normal mode $\ket{\alpha ul}$ is transformed into a linear combination of the basis functions of the {\it set} it belongs to, with coefficients given by the elements of the corresponding representation matrix. Given that cubic and quartic terms of the PES involve products of three or four normal modes, it will prove useful in what follows to show how a symmetry operator acts on a direct product of such functions, through its linearity property:
\begin{equation}
\label{operator-application2}
\hat{R}_r \left( \ket{\alpha ul} \otimes \cdots \otimes \ket{\omega zs} \right) =\hat{R}_r \ket{\alpha ul} \otimes \cdots \otimes \hat{R}_r \ket{\omega zs} \; .
\end{equation}
Wigner's theorem, as introduced in Eq. (\ref{eq:wigner}), proves key to an effective exploitation of point-symmetry to reduce the cost of the evaluation of an anharmonic PES such as the 2M4T one, that is to reduce the number of nuclear configurations $N_\textup{conf}$ to be explored to compute all of the interatomic force constants in Eq. (\ref{eq:2M4T}). Each additive term of the Hamiltonian (\ref{eq:hamil}) and thus of the potential (\ref{eq:pes}) must be invariant to any symmetry operator of the group. Taking into account that cubic and quartic terms of the PES are nothing but products of three or four normal modes, this can be formally expressed as:
$$
\ket{\alpha u l} \otimes \cdots \otimes \ket{\omega z s} \equiv
\hat{R}_r ( \ket{\alpha u l} \otimes \cdots \otimes \ket{\omega z s}) \; .
$$
Because the invariance of each term of the PES has to be satisfied for every symmetry operator, the expression above can also be written as:
\begin{equation}
\label{eq:lll}
\ket{\alpha u l} \otimes \cdots \otimes \ket{\omega z s} \equiv
\frac{1}{\vert \mathcal{P}\vert} \sum_{r=1}^{\vert \mathcal{P}\vert} \hat{R}_r ( \ket{\alpha u l} \otimes \cdots \otimes \ket{\omega z s}) \; .
\end{equation}
We note that by comparison with Eq. (\ref{eq:proj}) and by recalling that all characters of the total-symmetric irrep A$_1$ are 1 (i.e. $\chi^{\textup{A}_1}(\hat{R}_r) = 1 \;\; \forall r=1,\dots,\vert \mathcal{P}\vert$), Eq. (\ref{eq:lll}) can be written as:
\begin{equation}
\label{eq:lll2}
\ket{\alpha u l} \otimes \cdots \otimes \ket{\omega z s} \equiv
\hat{P}^{\textup{A}_1} ( \ket{\alpha u l} \otimes \cdots \otimes \ket{\omega z s}) \; .
\end{equation}
Now, by casting Eq. (\ref{operator-application2}) into Eq. (\ref{eq:lll}) and by use of property (\ref{operator-application}), the invariance condition for any general term of the PES becomes:
\begin{widetext}
\begin{eqnarray}
\label{eq:general_symm_rel}
\ket{\alpha u l} \otimes \cdots \otimes \ket{\omega z s} & \equiv&
\frac{1}{|\mathcal{P}|} \sum_{r=1}^{|\mathcal{P}|} \hat{R}_r (\ket{\alpha u l} \otimes \cdots
\otimes \ket{\omega z s}) \nonumber \\
&=& \frac{1}{|\mathcal{P}|} \sum_{r=1}^{|\mathcal{P}|} \hat{R}_r \ket{\alpha ul} \otimes \cdots \otimes \hat{R}_r \ket{\omega zs} \nonumber \\
&=& \frac{1}{|\mathcal{P}|}\sum_{r=1}^{|\mathcal{P}|}
\sum_{u'=1}^{n_\alpha} \cdots
\sum_{z'=1}^{n_\omega} D_{u'u}^\alpha (\hat{R}_r)\cdots
D_{z'z}^\omega(\hat{R}_r)\ket{\alpha u' l} \otimes \cdots \otimes
\ket{\omega z' s} \nonumber \\
&=& \sum_{u'=1}^{n_\alpha}\cdots\sum_{z'=1}^{n_\omega}C_{u\cdots z,
u'\cdots z'}\ket{\alpha u' l} \otimes \cdots \otimes \ket{\omega z' s} \; ,
\end{eqnarray}
\end{widetext}
where we have introduced coefficients $C_{u\cdots z, u' \cdots z'}$ defined as:
\begin{equation}
\label{eq:coeff}
C_{u\cdots z, u' \cdots z'} =
\frac{1}{|\mathcal{P}|}\sum_{r=1}^{|\mathcal{P}|}D_{u'u}^\alpha(\hat{R}_r)
\cdots D_{z'z}^\omega(\hat{R}_r) \; .
\end{equation}
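The coefficients of Eq. (\ref{eq:coeff}) are straightforward to evaluate once the representation matrices are known; a minimal sketch (ours), using Kronecker products to handle a product of an arbitrary number of modes, is:
\begin{verbatim}
import numpy as np

def coupling_matrix(rep_matrices):
    """Average over the group of D^alpha(R_r) x ... x D^omega(R_r).

    rep_matrices: one list per mode in the product, each containing the
                  representation matrices D(R_r), indexed by operator r.
    The element with composite row (u',...,z') and composite column
    (u,...,z) of the returned matrix equals C_{u...z, u'...z'}.
    """
    n_ops = len(rep_matrices[0])
    C = None
    for r in range(n_ops):
        term = rep_matrices[0][r]
        for D in rep_matrices[1:]:
            term = np.kron(term, D[r])
        C = term if C is None else C + term
    return C / n_ops
\end{verbatim}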
The invariance condition as explicitly worked out in Eq. (\ref{eq:general_symm_rel}) constitutes our working expression for an effective symmetry analysis of the anharmonic PES. Two main scenarios can be met:
\begin{enumerate}
\item If all coefficients are null on the right-hand side (rhs) then the term of the PES on the left-hand side (lhs) must be null;
\item If some coefficients are not null on the rhs then a symmetry relation is determined among the PES term on the lhs and those appearing on the rhs that must hold true. Let us note that those terms of the PES on the rhs involve normal modes belonging to the same {\it sets} of the modes of the PES term on the lhs. We shall call these connected terms {\it relative terms} below. We stress that a term of the PES can be related by symmetry only to those that constitute its {\it relative terms} as identified by Eq. (\ref{eq:general_symm_rel}).
\end{enumerate}
To summarize, use of Eq. (\ref{eq:general_symm_rel}) allows performing a preliminary symmetry analysis of the anharmonic terms of the PES and determining whether specific terms must be null by symmetry (and thus do not need to be explicitly computed) or whether specific symmetry relations must be satisfied among subsets of terms of the PES (i.e. among {\it relative terms}). Let us note that these symmetry relations are such that not all {\it relative terms} are independent. We will discuss below how a minimal number of terms of the PES to be explicitly computed can be identified, which then allows all others to be obtained by exploitation of such symmetry relations.
\subsection{On the Use of the Symmetry Relations among Terms of the Anharmonic PES}
\label{subsec:use}
In this Section, we discuss how Eq. (\ref{eq:general_symm_rel}) can be used to effectively reduce the number of nuclear configurations $N_\textup{conf}$ needed for the evaluation of all the anharmonic interatomic force constants in Eq. (\ref{eq:2M4T}). To do this, we need to inspect Eq. (\ref{eq:general_symm_rel}) more closely. The term on the lhs can be either cubic or quartic; its {\it relative terms} on the rhs will be cubic or quartic, respectively. The number of {\it relative terms} $n_\textup{rt}$ depends on the dimensionality of the irreps involved in the term on the lhs, and is simply given by $n_\textup{rt} = n_\alpha \times \cdots \times n_\omega$.
Thus, to each term of the PES a group of {\it relative terms} can be associated that we label simply $\ket{t}$ (with $t=1,\dots,n_\textup{rt}$) with a shorthand notation. To make this more evident, we introduce the following exemplification where we consider as term on the lhs the two-mode cubic term $\ket{\alpha 1 l} \otimes \ket{\alpha 1 l} \otimes \ket{\beta 1 m}$ with $n_\alpha = 2$ and $n_\beta = 1$. The corresponding {\it relative terms} would be:
\begin{eqnarray}
&\ket{\alpha 1 l} \otimes \ket{\alpha 1 l} \otimes \ket{\beta 1 m} \to \ket{1} &\nonumber \\
& \ket{\alpha 1 l} \otimes \ket{\alpha 2 l} \otimes \ket{\beta 1 m} \to \ket{2}& \nonumber \\
& \ket{\alpha 2 l} \otimes \ket{\alpha 1 l} \otimes \ket{\beta 1 m} \to \ket{3} & \nonumber \\
&\ket{\alpha 2 l} \otimes \ket{\alpha 2 l} \otimes \ket{\beta 1 m} \to \ket{4}& \nonumber \; .
\end{eqnarray}
By use of this simplified notation, the invariance condition of Eq. (\ref{eq:general_symm_rel}) for the first term of a group of {\it relative terms} can be written in a more compact fashion as:
\begin{equation}
\label{eq:cond1}
\ket{1} = C_{1,1} \ket{1} + C_{1,2} \ket{2} + \dots + C_{1,n_\textup{rt}} \ket{n_\textup{rt}} \; ,
\end{equation}
where the coefficients $C_{t,t^\prime}$ are those introduced in Eq. (\ref{eq:coeff}), as expressed in the new shorthand notation. The action of the invariance condition of Eq. (\ref{eq:general_symm_rel}) on the second term of the group of {\it relative terms} would lead to a symmetry relation of the form:
\begin{equation}
\ket{2} = C_{2,1} \ket{1} + C_{2,2} \ket{2} + \dots + C_{2,n_\textup{rt}} \ket{n_\textup{rt}} \; ,
\end{equation}
with different coefficients with respect to those of Eq. (\ref{eq:cond1}) but associated to the same terms of the PES (i.e. those belonging to the selected group of {\it relative terms}). These symmetry relations can be derived for each term of the PES among {\it relative terms} to form a linear system of $n_\textup{rt}$ equations of $n_\textup{rt}$ variables:
\begin{alignat}{4}[left = \empheqlbrace]
\ket{1} &= C_{1,1} \ket{1} + C_{1,2} \ket{2} + \dots + C_{1,n_\textup{rt}}\ket{n_\textup{rt}}
\nonumber \\
\ket{2} &= C_{2,1} \ket{1} + C_{2,2} \ket{2} + \dots + C_{2,n_\textup{rt}}\ket{n_\textup{rt}}
\nonumber \\
&\vdotswithin{\dots} \phantom{0} \nonumber\\
\ket{n_\textup{rt}} &= C_{n_\textup{rt},1} \ket{1} + C_{n_\textup{rt},2} \ket{2} + \dots + C_{n_\textup{rt},n_\textup{rt}} \ket{n_\textup{rt}} \; .
\end{alignat}
The system above can be made homogeneous as:
\begin{alignat}{4}[left = \empheqlbrace]
\label{eq:system}
C^\prime_{1,1} \ket{1} + C^\prime_{1,2} \ket{2} + \dots + C^\prime_{1,n_\textup{rt}} \ket{n_\textup{rt}}
&= 0 \nonumber \\
C^\prime_{2,1} \ket{1} + C^\prime_{2,2} \ket{2} + \dots + C^\prime_{2,n_\textup{rt}} \ket{n_\textup{rt}}
&= 0 \nonumber \\
&\vdotswithin{\dots} \phantom{0} \nonumber\\
C^\prime_{n_\textup{rt},1} \ket{1} + C^\prime_{n_\textup{rt},2} \ket{2} + \dots + C^\prime_{n_\textup{rt},n_\textup{rt}} \ket{n_\textup{rt}} &= 0
\end{alignat}
where the primed coefficients $C^\prime_{t,t^\prime}$ are related to the unprimed ones as:
\begin{equation}
\label{eq:c'}
C^\prime_{t,t^\prime}=
\begin{cases}
\;\;\;\;1 & \text{if} \;t=t^\prime\\
\frac{C_{t,t^\prime}}{C_{t,t} -1} & \text{if}\; t\neq t^\prime
\end{cases}
\end{equation}
In matrix notation, the linear system of Eq. (\ref{eq:system}) can be written as:
\begin{equation}
\label{eq:system2}
\bf{C}^\prime {\bf t} = {\bf 0} \; ,
\end{equation}
where $\bf{C}^\prime$ is the $n_\textup{rt}\times n_\textup{rt}$ matrix of the $C^\prime_{t,t^\prime}$ coefficients and ${\bf t}$ is the vector of the $n_\textup{rt}$ unknown {\it relative terms}. The key point to the whole symmetry analysis of the anharmonic PES we introduce is to look for non-trivial solutions of these linear systems. For each group of {\it relative terms}, we aim at identifying the minimal set of $n_\textup{ec}$ terms of the PES to be explicitly computed via the EGH finite difference numerical approach described in Section \ref{sec:egh} that allows for the system of Eq. (\ref{eq:system2}) to be solved and thus for the other $n_\textup{s} = n_\textup{rt} - n_\textup{ec}$ terms to be obtained by symmetry. By explicitly computing $n_\textup{ec}$ terms, the number of unknown variables reduces to $n_\textup{s}$ and thus we are left with a system of $n_\textup{rt}$ equations and $n_\textup{s}$ variables:
\begin{alignat}{4}[left = \empheqlbrace]
\label{eq:inhomogeneous}
\sum_{t_s}^{n_\textup{s}} C'_{1,t_s} \ket{t_s} &= -\sum_{t_\textup{ec}}^{n_\textup{ec}} C'_{1,t_\textup{ec}}\ket{t_\textup{ec}} \nonumber \\
\sum_{t_s}^{n_\textup{s}} C'_{2,t_s} \ket{t_s} &= -\sum_{t_\textup{ec}}^{n_\textup{ec}} C'_{2,t_\textup{ec}}\ket{t_\textup{ec}} \nonumber \\
&\vdotswithin{\dots} \phantom{0} \nonumber\\
\sum_{t_s}^{n_\textup{s}} C'_{n_\textup{rt},t_s} \ket{t_s} &= -\sum_{t_\textup{ec}}^{n_\textup{ec}} C'_{n_\textup{rt},t_\textup{ec}}\ket{t_\textup{ec}} \nonumber \\
\end{alignat}
It is important to mention that a system like this is overdetermined. A smaller system can be obtained by removing $n_\textup{ec}$ equations from the previous one so as to get a reduced square matrix of coefficients on the lhs:
\begin{equation}
\label{eq:system3}
\bf{C}^\prime_{\bf r} {\bf t}_{\bf s} = {\bf t}_{\bf ec} \; ,
\end{equation}
where $\bf{C}^\prime_{\bf r}$ is a $n_\textup{s}\times n_\textup{s}$ square matrix, ${\bf t}_{\bf s}$ is a vector whose elements are the force constants to be determined by symmetry (i.e. by solving the system of equations), and ${\bf t}_{\bf ec}$ is a vector obtained from the explicitly computed terms, with elements being the rhs of the equations in (\ref{eq:inhomogeneous}). Now, the linear system of equations (\ref{eq:system3}) can be solved if the following condition is satisfied:
\begin{equation}
\label{eq:condition}
\det {\bf C}^\prime_{\bf r} \neq 0 \; .
\end{equation}
Once this condition is met, it is trivial to obtain by symmetry the $n_\textup{s}$ terms of the PES just by inverting the $\bf{C}^\prime_{\bf r}$ matrix as:
\begin{equation}
\label{eq:solution}
{\bf t}_{\bf s} = \left[{\bf C}^\prime_{\bf r}\right]^{-1} {\bf t}_{\bf ec} \; .
\end{equation}
Condition (\ref{eq:condition}) is the one we use to devise an algorithm that performs a preliminary symmetry analysis to identify the minimal set of terms of the PES to be explicitly computed via the EGH numerical scheme that allows for the whole set of terms of a 2M4T PES to be computed.
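The linear algebra of Eqs. (\ref{eq:system})--(\ref{eq:solution}) is equally compact in code. The sketch below (ours) builds the primed coefficients, keeps one equation per unknown (one possible choice; the actual algorithm iterates over such choices), and solves for the symmetry-determined terms, returning nothing when condition (\ref{eq:condition}) is not met; degenerate rows with $C_{t,t}=1$ are not handled.
\begin{verbatim}
import numpy as np

def terms_by_symmetry(C, computed_idx, computed_vals):
    """Recover the remaining relative terms from the computed ones.

    C: n_rt x n_rt matrix of coefficients C_{t,t'}; computed_idx/vals are
    the indices and values of the n_ec terms evaluated with the EGH scheme.
    """
    n_rt = C.shape[0]
    Cp = np.zeros((n_rt, n_rt))
    for t in range(n_rt):
        for tp in range(n_rt):
            Cp[t, tp] = 1.0 if t == tp else C[t, tp] / (C[t, t] - 1.0)
    unknown = [t for t in range(n_rt) if t not in computed_idx]
    rows = unknown                  # one possible choice of retained equations
    C_r = Cp[np.ix_(rows, unknown)]
    if abs(np.linalg.det(C_r)) < 1e-10:   # condition det C'_r != 0 violated
        return None
    rhs = -Cp[np.ix_(rows, computed_idx)] @ np.asarray(computed_vals, float)
    t_full = np.zeros(n_rt)
    t_full[computed_idx] = computed_vals
    t_full[unknown] = np.linalg.solve(C_r, rhs)
    return t_full
\end{verbatim}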
\subsubsection{The Algorithm}
The algorithm we have devised works as follows. For each set of {\it relative terms} of the PES, the invariance condition (\ref{eq:general_symm_rel}) is applied to each term of the set and the linear system of equations (\ref{eq:system2}) is built. In order to identify the minimal set of terms among them that need to be explicitly computed via the EGH scheme to make the linear system solvable, and thus obtain the remaining terms by symmetry, we exploit condition (\ref{eq:condition}) and proceed as discussed below:
\begin{enumerate}
\item From the invariance condition (\ref{eq:general_symm_rel}) we determine what terms must be null by symmetry. Let us label the number of such terms $n_0$. This leaves us with $\tilde{n}_\textup{rt} = n_\textup{rt} -n_0$ non vanishing {\it relative terms}. The $n_0$ null terms, along with the corresponding equations, can be safely eliminated from (\ref{eq:system2}), so that we are left with a linear system of $\tilde{n}_\textup{rt}$ variables and $\tilde{n}_\textup{rt}$ equations.
\item By analysing the {\it relative terms}, 3M or 4M ones are identified (i.e. those terms involving three or four distinct modes). Let the number of such terms be $n_{34}$. It is important to correctly identify these terms because the EGH scheme, as discussed in Section \ref{sec:egh} and implemented in the \textsc{Crystal} program, does not allow for their explicit calculation.
\item The process starts by checking whether the system can be solved by explicitly computing only one term (i.e. by setting $n_\textup{ec} = 1$). This involves an iterative procedure where at each iteration one of the $\tilde{n}_\textup{rt} - n_{34}$ variables is selected and moved to the rhs of (\ref{eq:inhomogeneous}). In order to reduce the overdetermined system to the form (\ref{eq:system3}), one of the $\tilde{n}_\textup{rt}$ equations must be removed. This is done in turn with an iterative procedure. For each combination in the iterative process, condition (\ref{eq:condition}) is checked. If the condition is satisfied the process stops otherwise it keeps going.
\item If condition (\ref{eq:condition}) was never satisfied at the previous step, then the process checks whether the system can be solved by explicitly computing only two terms (i.e. by setting $n_\textup{ec} = 2$). This involves an iterative procedure where at each iteration two of the $\tilde{n}_\textup{rt} - n_{34}$ variables are selected and moved to the rhs of (\ref{eq:inhomogeneous}). In order to reduce the overdetermined system to the form (\ref{eq:system3}), two of the $\tilde{n}_\textup{rt}$ equations must be removed. This is done in turn with an iterative procedure by exploring all possible pairs of equations. For each combination in the iterative process, condition (\ref{eq:condition}) is checked. If the condition is satisfied the process stops otherwise it keeps going.
\item If condition (\ref{eq:condition}) was not satisfied at steps 3 and 4 above, the process goes on by setting $n_\textup{ec} = 3,4,\dots$ until the condition is met.
\end{enumerate}
When the algorithm above identifies a combination that satisfies condition (\ref{eq:condition}), the symmetry analysis for the selected set of {\it relative terms} of the PES is completed having determined: i) the $n_0$ terms that are null by symmetry, ii) the minimal set of $n_\textup{ec}$ terms to be explicitly computed via the EGH scheme that allow all other $n_\textup{s}$ terms to be obtained from Eq. (\ref{eq:solution}). The algorithm then moves to the next set of {\it relative terms} until all terms of the PES in Eq. (\ref{eq:pes}) have been analysed.
Upon completion of this symmetry analysis for all terms of the PES in Eq. (\ref{eq:pes}), we are ready to determine what nuclear configurations can be skipped in the EGH procedure sketched in Figure \ref{fig:egh}. At Step 1 of the EGH procedure, for each selected normal mode $i=1,\dots,M$, the algorithm checks if both the corresponding 1M terms $\eta_{iii}$ and $\eta_{iiii}$ can be obtained as solutions of the linear systems of equations discussed above. If and only if that is the case, then the calculations at the two nuclear configurations obtained by displacing the atoms along the $i$-th normal mode (blue circles in Figure \ref{fig:egh}) can be skipped as $\eta_{iii}$ and $\eta_{iiii}$ do not need to be explicitly computed via the EGH scheme through Eqs. (\ref{eq:scheme3step11}-\ref{eq:scheme3step12}). At Step 2 of the EGH procedure, for each selected pair of normal modes $ i<j =1,\dots, M$, the algorithm checks if all the corresponding 2M terms $\eta_{ijj}$, $\eta_{iij}$, $\eta_{iiij}$, $\eta_{ijjj}$ and $\eta_{iijj}$ can be obtained as solutions of the linear systems of equations discussed above. If and only if that is the case, then the calculations at the two nuclear configurations obtained by simultaneously displacing the atoms along the $i$-th and $j$-th normal modes (green circles in Figure \ref{fig:egh}) can be skipped as the 2M terms do not need to be explicitly computed via the EGH scheme through Eqs. (\ref{eq:scheme3step21}-\ref{eq:scheme3step23}).
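Steps 3--5 of the search described above can be sketched in a few lines of Python (ours, for illustration only). Here \texttt{candidate\_idx} lists the terms that the EGH scheme is allowed to compute explicitly (i.e. excluding 3M and 4M terms), and the first combination yielding a non-singular reduced matrix is accepted:
\begin{verbatim}
from itertools import combinations
import numpy as np

def minimal_explicit_set(Cp, candidate_idx):
    """Smallest set of terms to compute explicitly so that the remaining
    relative terms follow by symmetry (null terms already removed from Cp)."""
    n = Cp.shape[0]
    for n_ec in range(1, len(candidate_idx) + 1):
        for ec in combinations(candidate_idx, n_ec):
            unknown = [t for t in range(n) if t not in ec]
            for rows in combinations(range(n), len(unknown)):
                C_r = Cp[np.ix_(list(rows), unknown)]
                if abs(np.linalg.det(C_r)) > 1e-10:
                    return list(ec), list(rows)   # terms to compute, equations kept
    return None, None
\end{verbatim}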
\section{Results and Discussion}
In this section, we present examples on the application to molecules and solids of the symmetry analysis described in Section \ref{sec:form} to reduce the number of anharmonic terms of the PES to be explicitly computed via the numerical EGH finite-difference scheme. We start by providing examples on the use of the symmetry relations introduced in Section \ref{subsec:symm} and then we present the computational gains obtained from their exploitation in actual calculations.
\subsection{Examples on the Use of the Symmetry Relations}
We discuss a couple of explicit examples on how the symmetry relations obtained with Eq. (\ref{eq:general_symm_rel}) can be exploited so as to make the algorithm described in Section \ref{subsec:use} clearer by exemplification.
We start from the methane molecule, whose normal modes have already been characterized by symmetry in Table \ref{tab:methane}. Let us consider quartic terms $Q_iQ_jQ_kQ_l$ where the first two modes ($Q_i$ and $Q_j$) belong to the first {\it set} (i.e. the first occurrence of the 3D irrep $F_2$) and the last two modes ($Q_k$ and $Q_l$) belong to the second {\it set} (i.e. the first occurrence of the 2D irrep $E$). There is a total of $n_\textup{rt} =18$ potentially distinct terms of this kind that form a group of {\it relative terms}: six 2M terms, nine 3M terms, and three 4M terms. Application of the invariance condition (\ref{eq:general_symm_rel}) to each of the 18 terms leads to the linear system of equations (\ref{eq:system2}). The corresponding 18$\times$18 $\bf{C}^\prime$ matrix is reported in Figure \ref{fig:mat} A$_1$. Here a compact notation is used to label each relative term by use of the index of the row for each mode, where the indices of the modes belonging to the second {\it set} are primed. For instance, the term $Q_1Q_1Q_4Q_4$ (a 2M term) is simply labeled $111^\prime1^\prime$ because mode 1 corresponds to the first row of the irrep $F_2$ and mode 4 corresponds to the first row of the irrep $E$; accordingly, the term $Q_1Q_3Q_4Q_5$ (a 4M term) is labeled $131^\prime2^\prime$, and so on. Inspection of Figure \ref{fig:mat} A$_1$ reveals that 10 out of 18 terms are null by symmetry (i.e. $n_0 =10$) and therefore must not be explicitly computed. By removing the $n_0$ terms from the linear system we are left with an 8$\times$8 matrix shown in Figure \ref{fig:mat} A$_2$. We now look for the minimal set of terms to be explicitly computed, which makes this system solvable via the algorithm described in Section \ref{subsec:use}. In this case, it turns out that by explicitly computing just two terms, namely $111^\prime1^\prime$ and $112^\prime2^\prime$ (first and third columns in Figure \ref{fig:mat} A$_2$), and by removing the first two rows in Figure \ref{fig:mat} A$_2$, the resulting reduced matrix $\bf{C}^\prime_{\bf r}$ has a non null determinant, as shown in Figure \ref{fig:mat} A$_3$, and therefore allows the system to be solved. To summarize, the symmetry analysis of these {\it relative terms} tells us that out of the total of 18 terms, 10 are null by symmetry and that six of the remaining eight non-null ones can be derived by exploitation of symmetry relations by explicitly computing only two of them via the EGH scheme.
\begin{widetext}
\begin{center}
\begin{figure}[h!!]
\centering
\includegraphics[width=14cm]{Fig2}
\caption{Examples of linear systems of equations of the form (\ref{eq:system2}), obtained from the invariance condition (\ref{eq:general_symm_rel}), for selected molecules and selected anharmonic terms of the PES. A$_1$) Linear system for quartic terms of the anharmonic PES of methane involving two modes of the first {\it set} (3D) and two modes of the second {\it set} (2D), see Table \ref{tab:methane}. A$_2$) Same as in A$_1$) but with null terms being removed. A$_3$) Corresponding $\bf{C}^\prime_{\bf r}$ matrix, which satisfies condition (\ref{eq:condition}). B) Linear system for quartic terms of the anharmonic PES of methane involving only modes of the first {\it set} (3D). C) Linear system for quartic terms of the anharmonic PES of tetrahedrane involving only modes of the third {\it set} (3D). The following color scheme is used: blue for 1M, green for 2M, yellow for 3M and red for 4M terms, respectively. Empty boxes correspond to null elements in the matrices.}
\label{fig:mat}
\end{figure}
\end{center}
\end{widetext}
For a second example, let us still work with methane. Let us now consider quartic terms $Q_iQ_jQ_kQ_l$ where all four modes belong to the first {\it set} (i.e. the first occurrence of the 3D irrep $F_2$). In this case, there is a total of $n_\textup{rt} =15$ potentially distinct terms of this kind that form a group of {\it relative terms}: three 1M terms, nine 2M terms, and three 3M terms. Application of the invariance condition (\ref{eq:general_symm_rel}) to each of the 15 terms leads to the linear system of equations (\ref{eq:system2}). The corresponding 15$\times$15 $\bf{C}^\prime$ matrix is reported in Figure \ref{fig:mat} B. The same compact notation introduced above is used to label each relative term. Inspection of Figure \ref{fig:mat} B reveals the peculiar block-diagonal form of the matrix where 1M terms are symmetry related only to other 1M terms, 2M terms only to 2M terms, while all 3M terms are null by symmetry in this case. Each block can thus be analysed independently. For the three 1M terms, it turns out that it is enough to explicitly compute the first one (namely 1111, that is $Q_1Q_1Q_1Q_1$) to make the system solvable and thus to obtain the other two (2222 and 3333, that is $Q_2Q_2Q_2Q_2$ and $Q_3Q_3Q_3Q_3$). For the nine 2M terms, six of them are null by symmetry, and also in this case it turns out that it is enough to explicitly compute the first non-null one (namely 1122, that is $Q_1Q_1Q_2Q_2$) to make the system solvable and obtain the other two non-null ones (1133 and 2233, that is $Q_1Q_1Q_3Q_3$ and $Q_2Q_2Q_3Q_3$). To summarize, the symmetry analysis of these {\it relative terms} tells us that out of the total of 15 terms, 9 are null by symmetry and that four of the remaining six non-null ones can be derived by exploitation of symmetry relations by explicitly computing only two of them via the EGH scheme.
As a last example, we consider the tetrahedrane molecule, C$_4$H$_4$, also belonging to the T$_\textup{d}$ point symmetry group. Let us consider quartic terms $Q_iQ_jQ_kQ_l$ where all four modes belong to the third {\it set} (i.e. the first occurrence of the 3D irrep $F_1$). As in the previous case, there is a total of $n_\textup{rt} =15$ potentially distinct terms of this kind that form a group of {\it relative terms}: three 1M terms, nine 2M terms, and three 3M terms. Application of the invariance condition (\ref{eq:general_symm_rel}) to each of the 15 terms leads to the linear system of equations (\ref{eq:system2}). The corresponding 15$\times$15 $\bf{C}^\prime$ matrix is reported in Figure \ref{fig:mat} C. Inspection of Figure \ref{fig:mat} C reveals another interesting structure, where no terms are found to be null by symmetry and where all symmetry relations link 1M, 2M and 3M terms together. However complex the symmetry relations may look in this case, this system can be solved by explicitly computing a surprisingly low number of terms. Indeed, by computing via the EGH scheme just the first two terms (namely 1111 and 2222, that is two 1M terms), the system becomes solvable and all other 13 terms can be derived.
The few selected examples discussed above show the effectiveness of an {\it a priori} symmetry analysis in reducing the number of anharmonic terms of the PES that need to be explicitly computed, with associated computational gains to be documented in the following section.
\subsection{Computational Gain}
We have implemented in a developmental version of the \textsc{Crystal23} program the group theoretical approach described in Section \ref{sec:form} to simplify the numerical calculation of cubic and quartic anharmonic terms of the PES. In this Section, we illustrate its effectiveness in reducing the number of atomically displaced nuclear configurations $N_\textup{conf}$ at which the energy and forces must be computed. Four molecular systems are considered: methane, CH$_4$ (belonging to the T$_\textup{d}$ point symmetry group, with 9 normal modes), tetrahedrane, C$_4$H$_4$ (belonging to the T$_\textup{d}$ point symmetry group, with 18 normal modes), cyclo-exasulfur, S$_6$ (belonging to the D$_\textup{3d}$ point symmetry group, with 12 normal modes), and cubane, C$_8$H$_8$ (belonging to the O$_\textup{h}$ point symmetry group, with 42 normal modes). Two 3D crystalline solids are also considered: Magnesium oxide, MgO, as described by a conventional cubic cell (belonging to the Fm$\overline{3}$m cubic space group, with 21 normal modes), and a Zinc-imidazolate framework, namely ZIF-8 (belonging to the I$\overline{4}$3m cubic space group). In the latter case, given that ZIF-8 has 138 atoms per primitive cell and thus a total of 411 normal modes, a sub-set of just 12 modes has been selected for the anharmonic analysis, corresponding to the highest frequency ones. Figure \ref{fig:gain} shows the atomic structure of the six selected systems. For each system, the figure also shows bar plots reporting the total number of nuclear configurations $N_\textup{conf}$ that need to be explicitly explored within the EGH finite-difference scheme when symmetry is not exploited and when symmetry is exploited according to the group theoretical approach presented here.
In the case of methane, CH$_4$, there are $M=9$ normal modes. Within a 2M4T representation of the PES, there are a total of 198 cubic and quartic anharmonic force constants to be computed. Without the exploitation of symmetry, 90 atomically displaced nuclear configurations should be explored, which are reduced to 30 by symmetry exploitation as described in Section \ref{sec:form}, with a computational gain that amounts to a factor of 3. For tetrahedrane, C$_4$H$_4$, there are $M=18$ normal modes. Within a 2M4T representation of the PES, there are a total of 801 cubic and quartic anharmonic force constants to be computed. Without the exploitation of symmetry, 342 distinct atomically displaced nuclear configurations should be explored, which are reduced to 110 by symmetry exploitation, with a computational gain that amounts to a factor of 3.1. The cyclo-exasulfur, S$_6$, molecule has $M=12$ normal modes. Within a 2M4T representation of the PES, there are a total of 354 cubic and quartic anharmonic force constants to be computed. Without the exploitation of symmetry, 156 distinct atomically displaced nuclear configurations should be explored, which are reduced to 96 by symmetry exploitation, with a computational gain that amounts to a factor of 1.6. In the case of cubane, C$_8$H$_8$, there are $M=42$ normal modes. Within a 2M4T representation of the PES, there are a total of 4389 cubic and quartic anharmonic force constants to be computed. Without the exploitation of symmetry, 1806 distinct atomically displaced nuclear configurations should be explored, which are reduced to 566 by symmetry exploitation, with a computational gain that amounts to a factor of 3.2. For the MgO crystal, described by its cubic conventional cell with 8 atoms, there are $M=21$ normal modes. Within a 2M4T representation of the PES, there are a total of 1092 cubic and quartic anharmonic force constants to be computed. Without the exploitation of symmetry, 462 distinct atomically displaced nuclear configurations should be explored, which are reduced to 112 by symmetry exploitation, with a computational gain that amounts to a factor of 4.1. Finally, for the ZIF-8 crystal, by considering only a subset of 12 normal modes, there are a total of 354 cubic and quartic anharmonic force constants to be computed within a 2M4T representation of the PES. Without the exploitation of symmetry, 156 distinct atomically displaced nuclear configurations should be explored, which are reduced to 48 by symmetry exploitation, with a computational gain that amounts to a factor of 3.2.
In conclusion, the effectiveness of the approach in reducing the number of atomically displaced nuclear configurations to be explicitly explored depends on two factors: i) the order of the point group of the system (clearly, the richer the point symmetry, the higher the speedup); and ii) the occurrence of high-dimensional (2D or 3D) irreps in the manifold of normal modes.
\begin{widetext}
\begin{center}
\begin{figure}[h!!]
\centering
\includegraphics[width=17cm]{Fig3}
\caption{A) For each of the six test systems, the atomic structure is shown and a bar plot is presented, which reports the total number of nuclear configurations $N_\textup{conf}$ that need to be explicitly explored in building the 2M4T anharmonic PES with the EGH finite-difference scheme when symmetry is not exploited and when symmetry is exploited according to the group theoretical approach presented here. B) Overall computational gain factor due to the symmetry exploitation approach discussed in Section \ref{sec:form}.}
\label{fig:gain}
\end{figure}
\end{center}
\end{widetext}
\section{Conclusions}
A group theoretical approach based on Wigner's theorem has been formally presented to determine symmetry relations among anharmonic terms of the potential energy surface (PES) of a quantum-mechanical system. An algorithm has been devised to take full advantage of such symmetry relations in order to reduce the number of atomically displaced nuclear configurations at which energy and forces must be computed to build a 2M4T representation of the anharmonic PES with the EGH finite-difference scheme. The algorithm has been implemented in a developmental version of the \textsc{Crystal23} software program and tested on six high-symmetry systems (four molecules and two 3D crystals). The results clearly demonstrate the consistent computational gain provided by such an approach for highly symmetric systems. The highest speedup (a factor of 4.1) has been obtained for the cubic crystal of magnesium oxide.
\acknowledgements
A.E. wishes to dedicate this paper to the memory of Prof. Claudio Zicovich-Wilson, for introducing him to the wonders of group theory in quantum chemistry. A.E. gratefully acknowledges insightful discussions with Prof. Michel R\'{e}rat on the application of group theory to anharmonic terms of the potential energy surface. A.E. and J.M. thank the University of Torino and the Compagnia di San Paolo for funding (CSTO169372).
\end{document}
\section{Introduction}
The field of combinatorial game theory is a fertile area of mathematical study, and graphs provide some of the most interesting playing grounds on which to explore these games. Well-known games, such as Col and Snort~\cite{Conway}, originally contrived as map-coloring games, can be played and studied on game boards consisting of planar graphs. The Game of Cycles is an impartial combinatorial game introduced by Francis Su in his book \emph{Mathematics for Human Flourishing}~\cite{Su}. In this game two players take turns marking edges on a planar graph. When playing a move, each player is aware of every move that has been played up to that point in the game and there is no element of chance involved. Such a game is known as a sequential game with perfect information. This game is considered impartial because the moves available at any point in the game depend only on the current configuration of the board and not on the player whose turn it is to move. For more information on combinatorial games, see~\cite{Siegel}.
The Game of Cycles is played on any simple planar graph consisting of vertices and edges, such as the one in Figure~\ref{fig-cell} below.
\begin{figure}[ht]\center
\[
\begin{array}{cc}
\begin{tikzpicture}[scale=.9]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0){};
\node[Bvertex] (B) at (0,1) {};
\node[Bvertex] (C) at (-1,-0.7) {};
\node[Bvertex] (D) at (1,-0.7){};
\draw (A) to(D);
\draw (D)to (C);
\draw (C)to (A);
\draw (B)to (C);
\draw (B) to(A);
\draw (D)to (B);
\end{scope}
\end{tikzpicture}
\hspace{1in}&
\begin{tikzpicture}[scale=.9]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0){};
\node[Bvertex] (B) at (0,1) {};
\node[Bvertex] (C) at (-1,-0.7) {};
\node[Bvertex] (D) at (1,-0.7){};
\node(E) at (-0.31,-0.2)[label={[label distance=-2pt]: \scriptsize $c_1$}]{};
\node(F) at (0.31,-0.2)[label={[label distance=-2pt]: \scriptsize $c_2$}]{};
\node(G) at (0,-0.7)[label={[label distance=-2pt]: \scriptsize $c_3$}]{};
\draw (A) to(D);
\draw (D)to (C);
\draw (C)to (A);
\draw (B)to (C);
\draw (B) to(A);
\draw (D)to (B);
\end{scope}
\end{tikzpicture}\\
\hspace{-1in}K_4& K_4 \text{ with labeled cells }
\end{array}
\]
\caption{The complete graph $K_4$ game board and cells $c_1$, $c_2$, and $c_3$.}\label{fig-cell}
\end{figure}
We utilize the terminology defined originally by Alvarado et al.~\cite{Alvarado} to introduce the Game of Cycles. A planar graph of vertices and edges divides the plane into regions, which we call \emph{cells}, labelled $c_1$, $c_2$, and $c_3$ in Figure~\ref{fig-cell}. The graph together with its bounded cells is the \emph{game board}. Two players take turns marking an unmarked edge with an arrow, subject to the sink-source rule: players are not allowed to create a sink or a source, where a \emph{sink} is a vertex whose edges all point towards it and a \emph{source} is a vertex whose edges all point away from it. An example of a sink and a source is shown in Figure~\ref{fig-cells}. Therefore, every vertex must have at least one edge pointing towards it and at least one edge pointing away from it. The object of the game is to be the first player to create a \emph{cycle cell}, which is a single cell in the game board whose boundary edges are all marked in either a clockwise or a counterclockwise direction. Figure~\ref{fig-cells} gives an example of a cycle cell. Creating a cycle cell is not always possible, so the first player to create a cycle cell or make the last possible move wins the game.
\begin{figure}[ht]\center
\[
\begin{array}{ccc}
\begin{tikzpicture}[scale=.9]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0){};
\node[Bvertex] (B) at (0,1) {};
\node[Bvertex] (C) at (-1,-0.7) {};
\node[Bvertex] (D) at (1,-0.7) {};
\draw (A)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (B);
\draw (A)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (C);
\draw (A)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.9]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0) {};
\node[Bvertex] (B) at (0,1) {};
\node[Bvertex] (C) at (-1,-0.7) {};
\node[Bvertex] (D) at (1,-0.7) {};
\draw (B)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (C)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (D)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.9]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0){};
\node[Bvertex] (B) at (0,1) {};
\node[Bvertex] (C) at (-1,-0.7) {};
\node[Bvertex] (D) at (1,-0.7){};
\draw (A)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (D)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (C);
\draw (C)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (B)to (C);
\draw (B) to(A);
\draw (D)to (B);
\end{scope}
\end{tikzpicture} \\
\text{ Source} &\text{ Sink} & \text{ Cycle cell}\\
\end{array}
\]
\caption{Examples of a source, sink, and cycle cell}\label{fig-cells}
\end{figure}
When an edge is marked, it can lead to consequences for other edges on the game board. A \emph{death move} occurs when an edge is marked with an arrow in such a way that it forms the penultimate arrow of a potential cycle cell. An edge is \emph{currently unplayable} if both possible markings of the edge lead to either a sink/source or a death move. We say an edge is \emph{currently playable} if it is not currently unplayable.
The sink-source rule also has consequences for how edges may be marked. In particular, at a degree 2 vertex the direction of the arrows cannot change as they pass through the vertex, since changing directions would create a sink or a source. A vertex in which all but one of its incident edges are pointed towards it is called an \emph{almost-sink}; the remaining edge, if marked, must be marked with an arrow pointing away from the vertex. Similarly, an \emph{almost-source} is a vertex in which all but one of its incident edges are pointing away from it. An edge is \emph{markable} if it can be marked with an arrow without violating the sink-source rule. In particular, if an edge joins two almost-sinks (or two almost-sources), then the edge is \emph{unmarkable}.
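To make these rule checks concrete, we sketch below one possible encoding of a partially marked board and of the two tests a player must perform before marking an edge: whether the marking would create a sink or a source, and whether it would complete a cycle cell. The sketch is ours and is purely illustrative; the encoding of cells as cyclic vertex lists and all function names are our own choices and do not come from~\cite{Alvarado} or~\cite{Su}.
\begin{verbatim}
# Illustrative sketch (ours) of the basic Game of Cycles rules.  A cell is a
# cyclic list of its boundary vertices, an edge is a frozenset {u, v}, and a
# marking maps an edge to the ordered pair (tail, head) giving its arrow.

def cell_edges(cell):
    """Boundary edges of a cell given as a cyclic vertex list."""
    k = len(cell)
    return [frozenset((cell[i], cell[(i + 1) % k])) for i in range(k)]

def creates_sink_or_source(edges, marking, edge, arrow):
    """True if adding `arrow` on `edge` turns an endpoint into a sink/source."""
    new = dict(marking)
    new[edge] = arrow
    for v in edge:
        incident = [e for e in edges if v in e]
        if any(e not in new for e in incident):   # an unmarked edge remains
            continue
        towards = [new[e][1] == v for e in incident]
        if all(towards) or not any(towards):      # all in (sink) or all out (source)
            return True
    return False

def completes_cycle_cell(cells, marking, edge, arrow):
    """True if adding `arrow` on `edge` orients some cell coherently
    (all one way or all the other along its stored vertex order)."""
    new = dict(marking)
    new[edge] = arrow
    for cell in cells:
        k = len(cell)
        boundary = [(cell[i], cell[(i + 1) % k]) for i in range(k)]
        if any(frozenset(p) not in new for p in boundary):
            continue
        same = [new[frozenset(p)] == p for p in boundary]
        if all(same) or not any(same):
            return True
    return False

# Tiny example: a single square cell with three edges already marked so that
# they follow one another around the cell.
cells = [[0, 1, 2, 3]]
edges = cell_edges(cells[0])
marking = {frozenset((0, 1)): (0, 1),
           frozenset((1, 2)): (1, 2),
           frozenset((2, 3)): (2, 3)}
last = frozenset((3, 0))
print(completes_cycle_cell(cells, marking, last, (3, 0)))    # True
print(creates_sink_or_source(edges, marking, last, (0, 3)))  # True
\end{verbatim}
In this encoding, a move is legal exactly when it does not create a sink or a source, and it wins exactly when it completes a cycle cell.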
A \emph{strategy} is a rule specifying the move a player makes in any situation. A \emph{winning strategy} is a strategy which will force that player to win no matter how the other player plays. Since the Game of Cycles is a finite two-player sequential game with perfect information that cannot end in a draw, Zermelo's Theorem~\cite{Zermelo} tells us that for each game board one of the two players must possess a winning strategy. We say a player simply \emph{wins} on a certain game board if that player wins no matter how either player plays, so that the outcome is predetermined by the board rather than by any strategy. In studying the Game of Cycles, Alvarado et al.~\cite{Alvarado} explored winning strategies for various game boards and determined which player had a winning strategy or, in some cases, a predetermined win.
\section{Preliminary results}
In this section we summarize some results from Alvarado et al.~\cite{Alvarado} that are referenced in the proofs of our results. We begin with results for the cycle board on $n$ vertices. A cycle board on $n$ vertices, denoted $C_n$, is the cycle graph
with $n$ vertices and $n$ edges alternating along the boundary of a single cell.
\begin{theorem}{\rm \cite{Alvarado}} \label{cyclen}
Play on a $C_n$ board is entirely determined by parity. For $n$ odd, Player~1 wins, and for $n$ even, Player~2 wins.
\end{theorem}
The following lemma was used to prove Theorem~\ref{cyclen}.
\begin{lemma}{\rm \cite{Alvarado}}
If a $C_n$ board has no markable edges, the number of unmarkable edges must be even.
\end{lemma}
Game boards with involutive symmetry were also studied by the authors in~\cite{Alvarado}. A game board has \emph{involutive symmetry} if there is a non-trivial symmetry of the board which is its own inverse. Every vertex $v$ (edge $e$) has a \emph{partner vertex} $v'$ (\emph{partner edge} $e'$) to which it is mapped under the involution. A vertex, edge, or cell is \emph{self-involutive} if the involution of that vertex, edge, or cell is itself. A cell is defined to be \emph{part-involutive} if it is not self-involutive, but one of the cell's edges has its partner also in the cell. A cell is \emph{nowhere-involutive} if no edge of the cell has its partner in the cell. For any board with involutive symmetry, every edge is partnered with another edge or itself.
Alvarado et al.~\cite{Alvarado} proved the following result for graphs with an involution.
\begin{theorem}\label{invsym}{\rm \cite{Alvarado}}
Let $G$ be a board with an involution such that each cell is either self-involutive or nowhere-involutive. If there is no self-involutive edge, then Player~2 has a winning strategy. If there is exactly one self-involutive edge whose vertices are not fixed by the involution, then Player~1 has a winning strategy.
\end{theorem}
To prove Theorem~\ref{invsym}, the authors provided a ``mirror-reverse'' strategy for the winning player to use in responding to the other player. The mirror-reverse strategy is as follows:
\begin{itemize}
\item[(1)] If possible to win by completing a cycle, do so.
\item[(2)] If that is not possible, mirror the other player's strategy by observing the player's most recent move on an edge and playing the partner edge with its arrow reversed.
\end{itemize}
When there is exactly one self-involutive edge whose vertices are not fixed by the involution, Player~1 begins the game by marking the self-involutive edge on their first turn and then using the mirror-reverse strategy for all other subsequent moves. The next two results are special cases of Theorem~\ref{invsym}.
\begin{corollary}\cite{Alvarado}\label{rotsym}
Let $G$ be a board with $180^{\circ}$ rotational symmetry in which no edge and its partner are part of the same cell. If there is no edge through the center of the board, then Player~2 has a winning strategy. If there is such an edge, then Player~1 has a winning strategy.
\end{corollary}
\begin{corollary}\label{mirrorsymbasic}\cite{Alvarado}
Let $G$ be a board that is symmetric by reflection across some line, with no edges along that axis of symmetry and at most one edge crossing that axis of symmetry.
On this board, Player~2 has a winning strategy if there is no edge crossing this axis of symmetry. If there is a single edge crossing this axis of symmetry, Player~1 has a winning strategy.
\end{corollary}
\section{Cactus graphs}\label{cactusgraph}
In this paper, we are concerned with winning strategies for games played on certain types of cactus graphs. A \emph{cactus} (sometimes called a cactus tree) is a connected graph in which any two simple cycles have at most one vertex in common. Equivalently, it is a connected graph in which every edge belongs to at most one simple cycle, or (for a nontrivial cactus) in which every block (maximal subgraph without a cut-vertex) is an edge or a cycle. We say that a graph is \emph{triangle-free} if it contains no $C_3$ subgraph, that is, no cycle of length 3.
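These characterizations are easy to check computationally. The sketch below is ours and purely illustrative: assuming the external \texttt{networkx} library, it tests whether a graph is a cactus by checking that every block is a single edge or a cycle, and then additionally checks that no block is a triangle.
\begin{verbatim}
# Illustrative sketch (ours): recognize (triangle-free) cactus graphs using
# the block characterization, assuming the external networkx library.
import networkx as nx

def is_cactus(G):
    """Connected graph in which every block is a single edge or a cycle."""
    if not nx.is_connected(G):
        return False
    for nodes in nx.biconnected_components(G):
        block = G.subgraph(nodes)
        if block.number_of_edges() > 1 and \
           block.number_of_edges() != block.number_of_nodes():
            return False          # block is neither an edge nor a single cycle
    return True

def is_triangle_free_cactus(G):
    """Cactus in which every cycle block has length at least 4."""
    return is_cactus(G) and all(len(nodes) != 3
                                for nodes in nx.biconnected_components(G))

# Example: a C_5 and a C_7 joined at a single vertex (vertex 0).
G = nx.cycle_graph(5)                               # vertices 0,...,4
H = nx.relabel_nodes(nx.cycle_graph(7),
                     {i: (0 if i == 0 else i + 4) for i in range(7)})
G.add_edges_from(H.edges())
print(is_triangle_free_cactus(G))                   # True
\end{verbatim}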
The primary goal of this paper is to extend the symmetry argument of Theorem 2.3 in \cite{Alvarado} to a certain class of graphs that do not possess global symmetry but instead can be viewed as consisting of many joined parts, each possessing its own local ``axis of symmetry.'' This is done in Sections~\ref{cactusgraph} and~\ref{mainresultsection}, with our main result being Theorem~\ref{mainresult}.
The results in these sections pertain to triangle-free cactus graphs in which every edge of the graph belongs to exactly one cycle.
We demonstrate a winning strategy for
graphs of this type satisfying certain symmetry conditions. We will first work through examples of two, three, and four joined cycles in order to help motivate those conditions required in our main theorem in Section~\ref{mainresultsection}. The strategies outlined in the examples in Section~\ref{cactusgraph} are formally proved in the main result Theorem~\ref{mainresult}.
In Alvarado et al.~\cite{Alvarado}, the authors observed that in all of their results, Player~1 had a winning strategy for game boards with an odd number of edges, while Player~2 had a winning strategy otherwise. This led the authors to pose the following question: Is there a game board that does not follow this parity pattern (Player~1 having a winning strategy when the number of edges is odd, and Player~2 otherwise)?
In Section 5, we discuss cactus graphs that are not triangle-free and use them to answer Alvarado et al.'s question by showing there are game boards with an even number of edges in which Player~1 has a winning strategy.
\subsection{Two joined cycles}
We begin by detailing results for the class of triangle-free cactus graphs composed of two cycles joined together at a single vertex. Notice that when two cycles are joined together at a vertex, the resulting graph has reflective symmetry with the axis of symmetry passing through the joining vertex and through either a vertex or an edge in each of the joined cycles, as shown in Figure~\ref{fig:symgraph} Board $A$ below. In the case that the two joined cycles have the same length, the board will also have an axis of reflective symmetry passing only through the joining vertex, as shown in Figure~\ref{fig:symgraph} Board $B$.
\begin{figure}[ht]
\[
\begin{array}{cc}
\begin{tikzpicture}[scale=.45]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (1.5,0){};
\node[Bvertex] (B) at (.5, 1.414){};
\node[Bvertex] (C) at (-1.5,1.414){};
\node[Bvertex] (D) at (-1.5,-1.414){};
\node[Bvertex] (E) at (.5,-1.414){};
\node[Bvertex] (F) at (2.586, 1.414){};
\node[Bvertex] (G) at (4,1.414){};
\node[Bvertex] (H) at (5.414,1.414){};
\node[Bvertex] (I) at (5.414,-1.414){};
\node[Bvertex] (J) at (4,-1.414){};
\node[Bvertex] (K) at (2.586,-1.414){};
\node[Bvertex] (L) at (-2.6, 0){};
\draw (A) to (B);
\draw (B) to (C);
\draw (L) to (C);
\draw (L) to (D);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (K);
\draw (J) to (K);
\draw (K) to (A);
\draw (D) to (E);
\draw[dashed] (5.7,0) --(-2.9,0){};
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.45]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (1.5,0){};
\node[Bvertex] (B) at (.414, 1.414){};
\node[Bvertex] (C) at (-1,1.414){};
\node[Bvertex] (D) at (-1,-1.414){};
\node[Bvertex] (E) at (.414,-1.414){};
\node[Bvertex] (F) at (2.586, 1.414){};
\node[Bvertex] (G) at (4,1.414){};
\node[Bvertex] (H) at (5.414,1.414){};
\node[Bvertex] (I) at (5.414,-1.414){};
\node[Bvertex] (J) at (4,-1.414){};
\node[Bvertex] (K) at (2.586,-1.414){};
\node[Bvertex] (L) at (-2.414,1.414){};
\node[Bvertex] (M) at (-2.414,-1.414){};
\draw (A) to (B);
\draw (B) to (C);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (K);
\draw (J) to (K);
\draw (K) to (A);
\draw (D) to (E);
\draw (C) to (L);
\draw (L) to (M);
\draw (M) to (D);
\draw[dashed] (1.5,1.414) --(1.5,-1.414){};
\end{scope}
\end{tikzpicture}\\
\text{Board } A & \text{ Board } B\\
\end{array}
\]
\caption{Two cycles joined at a vertex with their axis of reflective symmetry shown.}
\label{fig:symgraph}
\end{figure}
Because of this symmetry, a winning strategy in these cases follows from Corollary~\ref{mirrorsymbasic}.
\begin{proposition}\label{parity}
Let $G$ be a board containing two cycles, $C_m$ and $C_n$, with $m>3$ and $n>3$, such that $C_m$ and $C_n$ are connected by a single vertex. If $C_m$ and $C_n$ are of different parity, then Player~1 has a winning strategy. If $C_m$ and $C_n$ are both of even parity, then Player~2 has a winning strategy.
\end{proposition}
\begin{proof}
Note that $G$ has reflective symmetry across the line that passes through the degree 4 vertex and through either a vertex or edge in both $C_m$ and $C_n$, as displayed in Board A of Figure \ref{fig:symgraph}. When $m$ and $n$ are of different parity, the result follows directly from Corollary~\ref{mirrorsymbasic} where $G$ has exactly one self-involutive edge. When $m$ and $n$ are of even parity, there is no edge that crosses the axis of symmetry and the result again follows from Corollary~\ref{mirrorsymbasic}.
\end{proof}
Recall that the proof of Theorem~\ref{invsym} uses the mirror-reverse strategy for the winning player. The mirror-reverse strategy only applies if there is no self-involutive edge or exactly one self-involutive edge whose vertices are not fixed by the involution. In the case where there are two odd cycles of different lengths, there are two self-involutive edges and the mirror-reverse strategy fails.
\begin{example}[Two joined odd cycles]
Figure~\ref{fig:c57} shows an example of the mirror-reverse strategy failing and illustrates a modification to the strategy that yields a winning strategy for Player~2. Note that the labelling on the edges denotes the direction and the order in which the edges were marked, with the odd-numbered edges being marked by Player~1 and the even-numbered edges by Player~2. We use this labelling for all game boards played throughout the paper.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=.5]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (1.5,0){};
\node[Bvertex] (B) at (.5, 1.414){};
\node[Bvertex] (C) at (-1.5,1.414){};
\node[Bvertex] (D) at (-1.5,-1.414){};
\node[Bvertex] (E) at (.5,-1.414){};
\node[Bvertex] (F) at (2.586, 1.414){};
\node[Bvertex] (G) at (4,1.414){};
\node[Bvertex] (H) at (5.414,1.414){};
\node[Bvertex] (I) at (5.414,-1.414){};
\node[Bvertex] (J) at (4,-1.414){};
\node[Bvertex] (K) at (2.586,-1.414){};
\draw (A) to (B);
\draw (B) to (C);
\draw (D) to (C);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (K);
\draw (J) to (K);
\draw (K) to (A);
\draw (D) to (E);
\draw (B) -- node[font=\small, label={[label distance=-6pt]:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (A) -- node[font=\small, label={[label distance=-6pt]:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (E);
\draw (F) -- node[font=\small, label={[label distance=-2pt]:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (G);
\draw (J) -- node[font=\small, label={[label distance=-2pt]:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (K);
\draw (I) -- node[font=\small, label={[label distance=-2pt]:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (J);
\draw (G) -- node[font=\small, label={[label distance=-2pt]:$6$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\draw (C) -- node[font=\small, label={[label distance=-2pt]:$7$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (B);
\draw (H) -- node[font=\small, label={[label distance=-2pt] above:$8$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (I);
\draw (E) -- node[font=\small, label={[label distance=-2pt]:$9$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (D) -- node[font=\small, label={[label distance=-2pt]:$10$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (C);
\draw[dashed] (-1.5,0) --(5.414,0){};
\end{scope}
\end{tikzpicture}
\caption{Two odd cycles $C_5$ and $C_7$ joined at a vertex with reflective symmetry about the horizontal line.}
\label{fig:c57}
\end{figure}
Figure~\ref{fig:c57} shows a game board consisting of two odd cycles joined at a single vertex, the cycle $C_5$ on the left and the cycle $C_7$ on the right. Player~1 begins the game by playing the edge marked 1. Utilizing the mirror-reverse strategy, Player~2 responds on their first turn by marking the partner edge of Player~1's first edge in the opposite direction, which is move 2. Moves 3 through 6 continue to follow the mirror-reverse strategy, with Player~2 utilizing the strategy. After Player~1 plays move 7, note that following the mirror-reverse strategy would be a death move for Player~2 on the cycle $C_5$: if Player~2 were to play the partner edge with the arrow reversed so that it follows the same direction (clockwise or counterclockwise) around the cell, then Player~1 could mark the self-involutive edge (the last unmarked edge on the cycle $C_5$) and win by completing a cycle. Instead of making a death move, Player~2 can modify the mirror-reverse strategy and play the self-involutive edge on the cycle $C_7$ in its playable direction, denoted as move 8. Note that any edge Player~1 plays on their next turn would be a death move. So suppose Player~1 marks the edge denoted 9 on the $C_5$ board. After move 9 is made, Player~2 wins by marking the edge denoted 10 and creating a cycle.
\end{example}
In order to address the odd-odd example above, as well as the larger cactus graphs in our main theorem, we introduce a modified mirror-reverse strategy.
The modified mirror-reverse strategy can be played on boards with involutive symmetry and is defined from the perspective of Player~2 because it relies on responding to the opposing player's moves. At the start of the game, if the number of self-involutive edges on the game board is even, then Player~2 can implement the modified mirror-reverse strategy. However, if the total number of self-involutive edges is odd, then Player~1 will begin the game by marking a self-involutive edge and can then implement the modified mirror-reverse strategy, effectively functioning as the second player.
\begin{defn}
We define the \emph{modified mirror-reverse strategy} as follows:
\begin{enumerate}
\item If possible to win by completing a cycle, do so.
\item If it is not possible to win by completing a cycle and the opposing player plays a self-involutive edge, then play an available self-involutive edge in any playable direction.
\item If it is not possible to win by completing a cycle and the opposing player does not play a self-involutive edge, then mirror the opposing player's strategy by observing the player's most recent move on an edge and playing the partner edge with its arrow reversed unless that move is a death move. If the move is a death move, play an available self-involutive edge in any playable direction.
\end{enumerate}
\end{defn}
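The decision rule above can also be summarized procedurally. The sketch below is ours and purely illustrative, not code from~\cite{Alvarado}: edges, moves, the partner map, and the two predicates are assumed to be supplied by the caller, and the choice to skip death moves when selecting a self-involutive edge is our reading of ``playable direction.''
\begin{verbatim}
# Illustrative sketch (ours) of one response of the modified mirror-reverse
# strategy.  An edge is an undirected key (e.g. a frozenset), a move is
# (edge, (tail, head)), and the caller supplies the board-specific data:
#   partner                -- dict: edge -> its partner under the involution
#   legal_moves            -- list of moves allowed by the sink-source rule
#   self_involutive_edges  -- set of unmarked edges that are their own partner
#   completes_cycle(move), is_death_move(move) -- predicates as in the text

def modified_mirror_reverse(opponent_move, partner, legal_moves,
                            self_involutive_edges,
                            completes_cycle, is_death_move):
    opp_edge, opp_dir = opponent_move

    # 1. If some legal move completes a cycle cell, play it and win.
    for move in legal_moves:
        if completes_cycle(move):
            return move

    def some_self_involutive_move():
        for edge, direction in legal_moves:
            if edge in self_involutive_edges and \
               not is_death_move((edge, direction)):
                return (edge, direction)
        return None          # no playable self-involutive edge remains

    # 2. Opponent marked a self-involutive edge: answer on another one.
    if partner[opp_edge] == opp_edge:
        return some_self_involutive_move()

    # 3. Otherwise mirror-reverse (partner edge, arrow reversed), unless that
    #    is a death move, in which case fall back to a self-involutive edge.
    mirrored = (partner[opp_edge], opp_dir[::-1])
    if mirrored in legal_moves and not is_death_move(mirrored):
        return mirrored
    return some_self_involutive_move()
\end{verbatim}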
Utilizing this modified mirror-reverse strategy, Player~2 has a winning strategy in the case of two joined odd cycles.
The following proposition is a consequence of the main result, Theorem~\ref{mainresult}, in Section~\ref{mainresultsection}.
\begin{proposition}\label{parityodd}
Let $G$ be a board containing two cycles, $C_m$ and $C_n$, with $m>3$ and $n>3$, such that $C_m$ and $C_n$ are connected by a single vertex. If $C_m$ and $C_n$ are both of odd parity, then Player~2 has a winning strategy.
\end{proposition}
\subsection{Three joined cycles}
Let us consider how we might generalize such a strategy to the case of cactus graphs consisting of three joined cycles. The first question that arises is: How should the axis of symmetry be defined for the graph? Consider the cactus graph in Figure~\ref{fig:c579} below. Note that this graph does not have reflective symmetry. However, we can define an axis of symmetry for each of the three individual cycles, which gives each cycle its own local axis of symmetry. For the outer cycles $C_5$ and $C_7$, we choose the axes of symmetry to pass through the vertices of degree 4. For the middle cycle $C_9$, an axis passing through the two degree 4 vertices would not define a reflective symmetry. Instead, we must define an axis of symmetry which is equidistant from the two degree 4 vertices, thus causing them to be partners via reflection, as shown in the figure. The partnering of the higher-degree vertices is crucial to our strategy, as we will discuss later on. We say a vertex is of \emph{high degree} if it has degree 4 or more. Note that there are many ways to join three cycles, including the board in which all three cycles are joined at a single vertex. In this case, we would define all three axes of symmetry to pass through the degree 6 vertex. In general, we utilize local axes of symmetry for individual cycles to define the reflective symmetry of the graph.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=.45 ]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,-1){};
\node[Bvertex] (B) at (1.8, -1){};
\node[Bvertex] (C) at (3.2,0.2){};
\node[Bvertex] (D) at (3.8,2.2){};
\node[Bvertex] (E) at (2.5,3.5){};
\node[Bvertex] (F) at (.75,4.2){};
\node[Bvertex] (G) at (-1,3.5){};
\node[Bvertex] (H) at (-2,2.2){};
\node[Bvertex] (I) at (-1.2,0.2){};
\node[Bvertex] (J) at (-3.2,3.5){};
\node[Bvertex] (K) at (-5,3.5){};
\node[Bvertex] (L) at (-5,0.5){};
\node[Bvertex] (M) at (-3.2,0.5){};
\node[Bvertex] (N) at (5.3,3.5){};
\node[Bvertex] (S) at (5.3,.5){};
\node[Bvertex] (O) at (6.8,3.5){};
\node[Bvertex] (R) at (6.8,0.5){};
\node[Bvertex] (P) at (8.5,3.5){};
\node[Bvertex] (Q) at (8.5,0.5){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (A);
\draw (H) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (H);
\draw (D) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (Q);
\draw (Q) to (R);
\draw (R) to (S);
\draw (S) to (D);
\draw[dashed] (.75,4.2) --(0.9,-1){};
\draw[dashed] (-1.8, 2.2) --(-5,2.2){};
\draw[dashed] (3.5,2.2) --(8.5,2.2){};
\end{scope}
\end{tikzpicture}
\caption{Cactus graph with three cycles}
\label{fig:c579}
\end{figure}
\begin{example}[Three joined cycles winning strategy]
We now utilize the modified mirror-reverse strategy and show that Player~1 has a winning strategy on the graph in Figure~\ref{fig:c579}.
Note that in Figure~\ref{fig:c579}, there are three edges which are self-involutive with respect to the axes of symmetry on the cycles in which they lie. Since there are an odd number of self-involutive edges, Player~1 should begin the game by marking any one of the three self-involutive edges. Suppose Player~1 marks the self-involutive edge on the cycle $C_9$. From this point on, Player~1 will use the modified mirror-reverse strategy to respond to Player~2's moves.
Figure~\ref{fig:c579played} demonstrates an example of a completed game on the game board in Figure~\ref{fig:c579} using the modified mirror-reverse strategy.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=.6]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,-1){};
\node[Bvertex] (B) at (1.8, -1){};
\node[Bvertex] (C) at (3.2,0.2){};
\node[Bvertex] (D) at (3.8,2.2){};
\node[Bvertex] (E) at (2.5,3.5){};
\node[Bvertex] (F) at (.75,4.2){};
\node[Bvertex] (G) at (-1,3.5){};
\node[Bvertex] (H) at (-2,2.2){};
\node[Bvertex] (I) at (-1.2,0.2){};
\node[Bvertex] (J) at (-3.2,3.5){};
\node[Bvertex] (K) at (-5,3.5){};
\node[Bvertex] (L) at (-5,0.5){};
\node[Bvertex] (M) at (-3.2,0.5){};
\node[Bvertex] (N) at (5.3,3.5){};
\node[Bvertex] (S) at (5.3,.5){};
\node[Bvertex] (O) at (6.8,3.5){};
\node[Bvertex] (R) at (6.8,0.5){};
\node[Bvertex] (P) at (8.5,3.5){};
\node[Bvertex] (Q) at (8.5,0.5){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (A);
\draw (H) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (H);
\draw (D) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (Q);
\draw (Q) to (R);
\draw (R) to (S);
\draw (S) to (D);
\draw[dashed] (.75,4.2) --(0.9,-1){};
\draw[dashed] (-1.8, 2.2) --(-5,2.2){};
\draw[dashed] (3.5,2.2) --(8.5,2.2){};
\draw (A) -- node[font=\small, label={[label distance=-2pt]below:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (B);
\draw (F) -- node[font=\small, label={[label distance=-5pt] below right:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (G);
\draw (E) -- node[font=\small, label={[label distance=-5pt] below left:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (F);
\draw (G) -- node[font=\small, label={[label distance=-5pt]:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\draw (D) -- node[font=\small, label={[label distance=-5pt]:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (E);
\draw (H) -- node[font=\small, label={[label distance=-6pt]:$6$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (I);
\draw (C) -- node[font=\small, label={[label distance=-5pt]:$7$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (J) -- node[font=\small, label={[label distance=-7pt]:$8$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\draw (H) -- node[font=\small, label={[label distance=-7pt]:$9$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (M);
\draw (K) -- node[font=\small, label={[label distance=-5pt]:$10$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (J);
\draw (P) -- node[font=\small, label={[label distance=-5pt]:$11$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (Q);
\draw (O) -- node[font=\small, label={[label distance=-3pt]below:$12$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (N);
\draw (S) -- node[font=\small, label={[label distance=-5pt]below:$13$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (R);
\draw (N) -- node[font=\small, label={[label distance=-7pt]below:$14$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (D) -- node[font=\small, label={[label distance=-7pt]below:$15$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (S);
\draw (I) -- node[font=\small, label={[label distance=-5pt]:$16$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (B) -- node[font=\small, label={[label distance=-5pt]:$17$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (C);
\end{scope}
\end{tikzpicture}
\caption{Completed game on the game board in Figure~\ref{fig:c579}.}
\label{fig:c579played}
\end{figure}
In the completed game shown in Figure~\ref{fig:c579played}, note that none of Player~1's moves on the cycle $C_9$ can result in a death move since a death move would only occur by marking the second-to-last edge on the cycle, which would be a Player~2 move by parity.
If Player~2 plays the self-involutive edge on either the cycle $C_5$ or the cycle $C_7$, Player~1 should respond by marking the remaining self-involutive edge. (This does not occur in our sample game.) If Player~2 plays on a non-self-involutive edge on either cycles $C_5$ or $C_7$, then Player~1 should respond by marking the mirror-reverse, as long as this move is not a death move. This is demonstrated by moves 8 and 9 on the $C_5$ and moves 12 through 15 on the $C_7$. If the mirror-reverse were to be a death move, then Player~1 should instead mark the self-involutive edge on the other cycle. When Player~2 plays move 10 on the cycle $C_5$, we note that the mirror-reverse would be a death move for Player~1. So instead Player~1 plays move 11 on the self-involutive edge on the cycle $C_7$. If the death move had instead occurred on the $C_7$, Player~1 would have marked the self-involutive edge on the cycle $C_5$.
In our sample game, Player~2 is forced to make a death move after Player~1 plays move 15. Since sinks and sources are not permitted, the only markable moves remaining at this point in the game are death moves, either on the cycle $C_5$ or on the cycle $C_9$ (the latter occurs in the sample game). Here, Player~1 wins by completing a cycle on the $C_9$. There are a number of other ways in which a game could play out on this board using the modified mirror-reverse strategy, all of which result in a Player~1 win. The reader is encouraged to explore some of these games on their own.
\end{example}
In Theorem~\ref{mainresult} (the main result), we show that, in general, the winning strategy on triangle-free cactus graph game boards depends on the number of self-involutive edges. If there is an odd number of self-involutive edges, Player~1 has the winning strategy; if there is an even number of self-involutive edges, Player~2 has the winning strategy. In both cases, the winning player uses the modified mirror-reverse strategy. In the above game with three self-involutive edges, it was necessary for Player~1 to mark one of them so that the remaining number of self-involutive edges would be even, which then allows Player~1 to implement the modified mirror-reverse strategy with their remaining moves. If the above graph were altered to have a cycle $C_8$ instead of the cycle $C_9$, then there would be only two self-involutive edges and Player~2 could immediately respond to all of Player~1's moves using the modified mirror-reverse strategy. Figure~\ref{fig:c578played} shows a sample game similar to the one in Figure~\ref{fig:c579played}, where a cycle $C_8$ is used instead of the cycle $C_9$.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=.6]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0.9,-1){};
\node[Bvertex] (C) at (2.5,0.7){};
\node[Bvertex] (D) at (3.8,2.2){};
\node[Bvertex] (E) at (2.3,3.8){};
\node[Bvertex] (F) at (.75,5){};
\node[Bvertex] (G) at (-.7,3.8){};
\node[Bvertex] (H) at (-2,2.2){};
\node[Bvertex] (I) at (-0.8,0.7){};
\node[Bvertex] (J) at (-3.2,3.5){};
\node[Bvertex] (K) at (-5,3.5){};
\node[Bvertex] (L) at (-5,0.5){};
\node[Bvertex] (M) at (-3.2,0.5){};
\node[Bvertex] (N) at (5.3,3.5){};
\node[Bvertex] (S) at (5.3,.5){};
\node[Bvertex] (O) at (6.8,3.5){};
\node[Bvertex] (R) at (6.8,0.5){};
\node[Bvertex] (P) at (8.5,3.5){};
\node[Bvertex] (Q) at (8.5,0.5){};
\draw (A) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (A);
\draw (H) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (H);
\draw (D) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (Q);
\draw (Q) to (R);
\draw (R) to (S);
\draw (S) to (D);
\draw[dashed] (.75,5) --(0.9,-1){};
\draw[dashed] (-1.8, 2.2) --(-5,2.2){};
\draw[dashed] (3.5,2.2) --(8.5,2.2){};
\draw (F) -- node[font=\small, label={[label distance=-5pt] below right:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (G);
\draw (E) -- node[font=\small, label={[label distance=-5pt]below left:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (F);
\draw (G) -- node[font=\small, label={[label distance=-7pt]:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\draw (D) -- node[font=\small, label={[label distance=-7pt]:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (E);
\draw (H) -- node[font=\small, label={[label distance=-6pt]:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (I);
\draw (C) -- node[font=\small, label={[label distance=-7pt]:$6$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (J) -- node[font=\small, label={[label distance=-7pt]:$7$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\draw (H) -- node[font=\small, label={[label distance=-7pt]:$8$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (M);
\draw (K) -- node[font=\small, label={[label distance=-5pt]:$9$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (J);
\draw (P) -- node[font=\small, label={[label distance=-5pt]:$10$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (Q);
\draw (O) -- node[font=\small, label={[label distance=-3pt]below:$11$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (N);
\draw (S) -- node[font=\small, label={[label distance=-5pt]below:$12$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (R);
\draw (N) -- node[font=\small, label={[label distance=-7pt]below:$13$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (D) -- node[font=\small, label={[label distance=-7pt]below:$14$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (S);
\draw (A) -- node[font=\small, label={[label distance=-7pt]below:$15$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (C);
\draw (I) -- node[font=\small, label={[label distance=-7pt]below:$16$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\end{scope}
\end{tikzpicture}
\caption{Completed game on the game board with an even number of self-involutive edges.}
\label{fig:c578played}
\end{figure}
\subsection{Generalizing to cactus graphs with four or more cycles}
Now that we have shown how the modified mirror-reverse strategy works on cactus graphs consisting of two cycles and three cycles, a natural question to ask is: Can the modified mirror-reverse strategy be applied to all triangle-free cactus graphs?
Unfortunately, the answer is no. Consider the example game shown in Figure~\ref{fig:c5976played} below. In the example game we have labeled the problematic vertex ``$a$''.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=.6]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,-1.2){};
\node[Bvertex] (B) at (1.8,-1.2){};
\node[Bvertex] (C) at (2.5,0){};
\node[Bvertex] (D) at (3.5,2.2)[label=above:$a$]{};
\node[Bvertex] (E) at (2.3,3.9){};
\node[Bvertex] (F) at (.75,4.9){};
\node[Bvertex] (G) at (-.65,3.9){};
\node[Bvertex] (H) at (-1.85,2.2){};
\node[Bvertex] (I) at (-.9,0){};
\node[Bvertex] (J) at (-2.7,3.2){};
\node[Bvertex] (K) at (-4,3.2){};
\node[Bvertex] (L) at (-4,1.2){};
\node[Bvertex] (M) at (-2.7,1.2){};
\node[Bvertex] (N) at (4.8,3.9){};
\node[Bvertex] (O) at (6.4,3.9){};
\node[Bvertex] (P) at (7.8,3.9){};
\node[Bvertex] (Q) at (7.8,0.2){};
\node[Bvertex] (R) at (6.4,0.2){};
\node[Bvertex] (S) at (4.5,0.2){};
\node[Bvertex] (T) at (5.4,-1){};
\node[Bvertex] (U) at (5.4,-2.5){};
\node[Bvertex] (W) at (3.6,-2.5){};
\node[Bvertex] (X) at (3.6,-1){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (A);
\draw (H) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (H);
\draw (D) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (Q);
\draw (Q) to (R);
\draw (R) to (S);
\draw (S) to (D);
\draw (S) to (T);
\draw (T) to (U);
\draw (U) to (W);
\draw (W) to (X);
\draw (X) to (S);
\draw[dashed] (.75,4.9) --(0.9,-1.05){};
\draw[dashed] (-1.6, 2.2) --(-4,2.2){};
\draw[dashed] (4.1,1.25)--(7.6,3.7){};
\draw[dashed] (4.5,.5) --(4.5,-2.6){};
\draw (S) -- node[font=\small, label={[label distance=-7pt]:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (B) -- node[font=\small, label={[label distance=-2pt]:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (N) -- node[font=\small, label={[label distance=-5pt]:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (S) -- node[font=\small, label={[label distance=-4pt]:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (R);
\draw (C) -- node[font=\small, label={[label distance=-6pt]:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (H) -- node[font=\small, label={[label distance=-6pt]:$6$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (I);
\draw (H) -- node[font=\small, label={[label distance=-6pt]below:$7$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (G);
\draw (E) -- node[font=\small, label={[label distance=-6pt]below:$8$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\end{scope}
\end{tikzpicture}
\caption{Example of a game where the modified mirror reverse strategy fails.}
\label{fig:c5976played}
\end{figure}
The even number of self-involutive edges suggests that Player~2 might win here by employing the modified mirror-reverse strategy. However, the strategy dictates an 8th move that would result in a sink at vertex $a$. Such a move is not permitted and thus our strategy cannot be employed. Note that this does not necessarily mean that Player~2 cannot win the game, only that they cannot win by employing the modified mirror-reverse strategy.
Why does the modified mirror-reverse strategy fail in the case of four joined cycles but never in the case of two or three joined cycles? The sink/source issue is avoided in the case of the two or three joined cycles because each vertex of high degree has an axis of symmetry which passes through it. Thus, since each cycle is triangle-free, our strategy dictates that we can apply the mirror-reverse strategy to edges and their partner edges. Since applying the mirror-reverse strategy does not change the direction of any cycle, a sink or source is prevented at vertices of high degree.
However in the case of four or more joined cycles in a cactus graph, it becomes possible for a vertex of high degree to not have any axis of symmetry passing through it, as seen at vertex $a$ in Figure~\ref{fig:c5976played}.
In Figure~\ref{fig:c59767} below, we resolve this issue by joining another cycle to our graph. The addition of the uppermost cycle $C_7$ shifts the axis of symmetry on the original cycle $C_7$ so that it now passes through vertex $a$.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=.4]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,-1.2){};
\node[Bvertex] (B) at (1.8,-1.2){};
\node[Bvertex] (C) at (2.5,0){};
\node[Bvertex] (D) at (3.5,2.2)[label=above:$a$]{};
\node[Bvertex] (E) at (2.3,3.9){};
\node[Bvertex] (F) at (.75,4.9){};
\node[Bvertex] (G) at (-.65,3.9){};
\node[Bvertex] (H) at (-1.85,2.2){};
\node[Bvertex] (I) at (-.9,0){};
\node[Bvertex] (J) at (-2.7,3.2){};
\node[Bvertex] (K) at (-4,3.2){};
\node[Bvertex] (L) at (-4,1.2){};
\node[Bvertex] (M) at (-2.7,1.2){};
\node[Bvertex] (N) at (4.8,3.9){};
\node[Bvertex] (O) at (6.4,3.9){};
\node[Bvertex] (P) at (7.8,3.9){};
\node[Bvertex] (Q) at (7.8,0.2){};
\node[Bvertex] (R) at (6.4,0.2){};
\node[Bvertex] (S) at (4.5,0.2){};
\node[Bvertex] (T) at (5.4,-1){};
\node[Bvertex] (U) at (5.4,-2.5){};
\node[Bvertex] (W) at (3.6,-2.5){};
\node[Bvertex] (X) at (3.6,-1){};
\node[Bvertex] (Y) at (6,4.8){};
\node[Bvertex] (Z) at (6,6){};
\node[Bvertex] (AA) at (6,7){};
\node[Bvertex] (AB) at (3,7){};
\node[Bvertex] (AC) at (3,6){};
\node[Bvertex] (AD) at (3,4.8){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (A);
\draw (H) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (H);
\draw (D) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (Q);
\draw (Q) to (R);
\draw (R) to (S);
\draw (S) to (D);
\draw (S) to (T);
\draw (T) to (U);
\draw (U) to (W);
\draw (W) to (X);
\draw (X) to (S);
\draw (N) to (Y);
\draw (Y) to (Z);
\draw (Z) to (AA);
\draw (AA) to (AB);
\draw (AB) to (AC);
\draw (AC) to (AD);
\draw (AD) to (N);
\draw[dashed] (.75,4.9) --(0.9,-1.05){};
\draw[dashed] (-1.6, 2.2) --(-4,2.2){};
\draw[dashed] (3.5,2.2)--(7.8,2.2){};
\draw[dashed] (4.5,.5) --(4.5,-3){};
\draw[dashed] (4.7,3.9) --(4.7,7.2){};
\end{scope}
\end{tikzpicture}
\caption{The game board in Figure~\ref{fig:c5976played} modified to ensure that every degree 4 vertex has an axis of symmetry passing through it.}
\label{fig:c59767}
\end{figure}
Although the modified mirror-reverse strategy cannot be successfully applied to every triangle-free cactus graph in which every edge belongs to exactly one cycle, it can be applied to any such graph in which every cycle has an axis of reflective symmetry such that
\begin{itemize}
\item the symmetry partners vertices of degree 2 with one another and vertices of high degree with other vertices of high degree (possibly themselves), and
\item every vertex of high degree has at least one axis of symmetry passing through it.
\end{itemize}
There are many types of graphs that satisfy these criteria. Some examples are shown in Figure~\ref{fig:exampleboards}.
\begin{figure}[H]\center
\[
\begin{array}{ccc}
\begin{tikzpicture}[scale=.5]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0){};
\node[Bvertex] (B) at (-1.2,-.7){};
\node[Bvertex] (C) at (-2.2, -.7){};
\node[Bvertex] (D) at (-2.2, .7){};
\node[Bvertex] (E) at (-1.2, .7){};
\node[Bvertex] (F) at (-.7, 1.2){};
\node[Bvertex] (G) at (-.7, 2.2){};
\node[Bvertex] (H) at (0, 3.2){};
\node[Bvertex] (I) at (.7, 2.2){};
\node[Bvertex] (J) at (.7,1.2){};
\node[Bvertex] (K) at (1.2,.7){};
\node[Bvertex] (L) at (2.2,.7){};
\node[Bvertex] (M) at (3.2,.7){};
\node[Bvertex] (N) at (3.2,-.7){};
\node[Bvertex] (O) at (2.2,-.7){};
\node[Bvertex] (P) at (1.2,-.7){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (J);
\draw (J) to (A);
\draw (A) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (A);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.5]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0){};
\node[Bvertex] (B) at (-1, .7){};
\node[Bvertex] (C) at (-2, .7){};
\node[Bvertex] (D) at (-2.8, 0){};
\node[Bvertex] (E) at (-2, -.7){};
\node[Bvertex] (F) at (-1, -.7){};
\node[Bvertex] (G) at (1, -.7){};
\node[Bvertex] (H) at (2, -.7){};
\node[Bvertex] (I) at (3, -.7){};
\node[Bvertex] (J) at (3, .7){};
\node[Bvertex] (K) at (2,.7){};
\node[Bvertex] (L) at (1,.7){};
\node[Bvertex] (M) at (2.5,1.7){};
\node[Bvertex] (N) at (2.5, 2.7){};
\node[Bvertex] (O) at (3.5, 2.7){};
\node[Bvertex] (P) at (3.5, 1.7){};
\node[Bvertex] (Q) at (4, 1.2){};
\node[Bvertex] (R) at (5, 1.2){};
\node[Bvertex] (S) at (5.8, .7){};
\node[Bvertex] (T) at (5, 0){};
\node[Bvertex] (U) at (4, 0){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (A);
\draw (A) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (A);
\draw (J) to (M);
\draw (M) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (J);
\draw (J) to (Q);
\draw (Q) to (R);
\draw (R) to (S);
\draw (S) to (T);
\draw (T) to (U);
\draw (U) to (J);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.5]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,1.8){};
\node[Bvertex] (B) at (.7,1){};
\node[Bvertex] (C) at (.7, 0){};
\node[Bvertex] (D) at (.7, -1){};
\node[Bvertex] (E) at (0,-1.8){};
\node[Bvertex] (F) at (-.7, -1){};
\node[Bvertex] (G) at (-.7, 0){};
\node[Bvertex] (H) at (-.7, 1){};
\node[Bvertex] (I) at (-1.2, 1.5){};
\node[Bvertex] (J) at (-2.2, 1.5){};
\node[Bvertex] (K) at (-2.8, 1){};
\node[Bvertex] (L) at (-2.2, .5){};
\node[Bvertex] (M) at (-1.2, .5){};
\node[Bvertex] (N) at (-1.2, -.5){};
\node[Bvertex] (O) at (-2.2, -.5){};
\node[Bvertex] (P) at (-2.2, -1.5){};
\node[Bvertex] (Q) at (-1.2, -1.5){};
\node[Bvertex] (R) at (1.2, -1.5){};
\node[Bvertex] (S) at (2.2, -1.5){};
\node[Bvertex] (T) at (2.8, -1){};
\node[Bvertex] (U) at (2.2, -.5){};
\node[Bvertex] (V) at (1.2, -.5){};
\node[Bvertex] (W) at (1.2, .5){};
\node[Bvertex] (X) at (2.2, .5){};
\node[Bvertex] (Y) at (2.2, 1.5){};
\node[Bvertex] (Z) at (1.2, 1.5){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (A);
\draw (H) to (I);
\draw (I) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (H);
\draw (F) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (Q);
\draw (Q) to (F);
\draw (D) to (R);
\draw (R) to (S);
\draw (S) to (T);
\draw (T) to (U);
\draw (U) to (V);
\draw (V) to (D);
\draw (B) to (W);
\draw (W) to (X);
\draw (X) to (Y);
\draw (Y) to (Z);
\draw (Z) to (B);
\end{scope}
\end{tikzpicture}\\
\end{array}
\]
\caption{Examples of game boards that satisfy the criteria needed to use the modified mirror-reverse strategy. }\label{fig:exampleboards}
\end{figure}
\section{Main result for cactus graphs}\label{mainresultsection}
In this section, we state and prove our general result for triangle-free cactus graphs.
\begin{theorem}\label{mainresult}
Let $G$ be a triangle-free cactus graph in which every edge belongs to exactly one cycle. Assume that $G$ has a fixed set of symmetries satisfying the following properties:
\begin{enumerate}
\item Each cycle has an axis of reflective symmetry under which every vertex of degree at least 4 is partnered via symmetry with another vertex of degree at least 4 (possibly itself).
\item For each vertex of degree at least 4 there is at least one of these axes of symmetry which passes through it.
\end{enumerate}
If the total number of self-involutive edges in $G$ is even, then Player~2 can win by employing the modified mirror-reverse strategy. If the total number of self-involutive edges in $G$ is odd, then Player~1 can win by first marking a self-involutive edge and then employing the modified mirror-reverse strategy.
\end{theorem}
\begin{proof}
Note that in the case when the number of self-involutive edges is odd, after marking the first self-involutive edge Player~1 essentially functions as Player~2 would in the case in which there are an even number of self-involutive edges. An example of such a game is shown in Figure~\ref{fig:c579played}. Thus, it suffices to consider only one case. In the style of Alvarado et al.~\cite{Alvarado}, we will call the player with the winning strategy Player W. Here, Player W will be using the modified mirror-reverse strategy to respond to the moves of their opponent, whom we will call Player X. If $ij$ is the edge with endpoints $i$ and $j$, playing $i\rightarrow j$ means that a player will mark an arrow from $i$ to $j$. Let $i'$ and $j'$ denote the partner vertices of $i$ and $j$ respectively under the symmetry. Under the mirror-reverse strategy Player~W would complete a cycle if possible, but otherwise would look at the edge Player~X played on the previous move ($i \rightarrow j$) and play the \emph{mirror-reverse} ($j'\rightarrow i'$).
We can assume that the current number of unmarked self-involutive edges is even. Since there is an even number of self-involutive edges, we can think of them as being ``paired'' by the strategy even before gameplay begins. That is, when Player X plays on a certain self-involutive edge Player W responds by playing on its paired self-involutive edge. Such a response is demonstrated in moves 1 and 2 of the game played in Figure~\ref{fig:c5976played}. And when Player X plays on a non-self-involutive edge but Player W cannot respond by playing its mirror-reverse due to it being a death move, Player W responds by playing on the self-involutive edge that is paired with the self-involutive edge belonging to the cycle in which the death move would have occurred. Such a response is demonstrated in moves 9 and 10 of the game played in Figure~\ref{fig:c578played}.
To prove that Player W indeed has the winning strategy we must show that the move dictated by the strategy is always available (previously unmarked), will never lead to a sink or source for Player W, and will never lead to a death move for Player W. Thus since Player X can never complete a cycle or make the final move in the game, Player W wins.
First, we will show that the move dictated by the modified mirror-reverse strategy is always available for Player W. If the strategy dictates that Player W play on an edge that is not self-involutive, then Player X must have just played on its mirror-reverse edge. Clearly the edge is available for Player W to mark, because had this edge been marked previously, either by Player X or Player W, its mirror-reverse would have already been marked as well and hence would have been unavailable for Player X to mark now.
The case in which the strategy dictates that Player W mark a self-involutive edge is more complicated. This could occur because Player X marked another self-involutive edge or because Player X marked an edge which is not self-involutive, but whose mirror-reverse would be a death move. (Again, examples of each of these situations can be seen in Figures~\ref{fig:c5976played} and \ref{fig:c578played} respectively.) Note that this second case could only occur on a cycle with exactly one unmarked self-involutive edge.
This is because a death move on a cycle only occurs on the second-to-last unmarked edge. If the cycle were even with either two or zero unmarked self-involutive edges, then since the remaining edges are marked in pairs via mirror-reverse, parity dictates that the second-to-last move would belong to Player X. The same is true for an odd cycle in which the single self-involutive edge had already been marked.
Thus, if Player W cannot play a mirror-reverse of a non-self-involutive edge due to it being a death move, then one of the two remaining unmarked edges must have been self-involutive. Marking this self-involutive edge would now be a death move itself, so even though it is unmarked this edge is now considered currently unplayable. We observe this in the cycle $C_5$ of the example game in Figure~\ref{fig:c578played} when Player~2 must use the strategy to respond to the 9th move of the game. So we make the important observation that the modified mirror-reverse strategy causes self-involutive edges to become currently unplayable (either by being marked or becoming death moves) in pairs, with Player W playing immediately after Player X.
That is, whenever the strategy dictates that Player W play on a self-involutive edge, it must be the case that Player X just played in such a way to make its paired edge no longer currently playable.
Thus, since there are an even number of self-involutive edges, it cannot be Player X that marks the last playable self-involutive edge. So Player W's move is always available.
Next we will show that the modified mirror-reverse strategy will never dictate that Player W play in a way that will create a sink or source at a vertex. For high degree vertices, a sink/source is prevented by Property 2 of graph $G$ described in the theorem. Since such a vertex must have an axis of reflective symmetry passing through it and $G$ is triangle-free, this vertex would be incident to a pair of edges being marked via mirror-reverse. Thus, if one edge enters the vertex its partner exits the vertex and this prevents a sink/source at the vertex. For example, consider the degree 4 vertices of the game board shown in Figure~\ref{fig:c578played}. Moves 7 and 8 and moves 13 and 14, respectively, demonstrate how the strategy prevents a sink/source at high degree vertices. Let us instead consider vertices of degree 2. In the case that the two incident edges are both not self-involutive, they will be marked by Player W only by employing the mirror-reverse strategy. By Property 1 of graph $G$, degree 2 vertices are paired via symmetry with other degree 2 vertices, thus a sink/source cannot be created by Player W without a sink/source having just been created by Player X at the partner vertex. In the case in which one of the incident edges is self-involutive, note that this self-involutive edge must be incident to two other edges which are not self-involutive and are in fact paired with one another under symmetry. Thus, mirror-reversing ensures that these edges can only be marked with the same direction (clockwise or counterclockwise). Since there are no rules about the direction which Player W must mark a self-involutive edge, Player W can choose the direction which avoids a sink/source incident to a self-involutive edge. Note that because of the symmetry and mirror-reverse play, the edge cannot be unmarkable.
Finally, we will show that the modified mirror-reverse strategy can never lead to Player W making a death move. Since the strategy dictates that Player W only mark the mirror-reverse when it is not a death move, we need only consider the case in which Player W marks a self-involutive edge. However, as noted before, self-involutive edges become currently unplayable in pairs, with Player W playing second. So if a self-involutive edge were a death move for Player W, then it was already currently unplayable and hence its partner edge would have also been currently unplayable---that is, it was either already marked, and hence unable to be played by Player X, or it was a death move for Player X, in which case the strategy would dictate that Player W complete the cycle and win.
\end{proof}
The careful reader will observe that although the above theorem requires that we fix a set of symmetries on our graph $G$, it is possible that multiple sets of such symmetries exist. We note that our choice of symmetries does not matter, as long as Properties 1 and 2 of the theorem hold. Since every odd cycle will have an axis of symmetry with exactly one self-involutive edge and every even cycle will have an axis of symmetry with either two or zero self-involutive edges, changing the set of symmetries on $G$ cannot affect the parity of self-involutive edges. So although the moves dictated by the strategy may differ, the winning player is fixed regardless of the set of symmetries used.
\section{Cactus graphs that are not triangle-free}\label{trianglecactus}
Recall that in Theorem~\ref{mainresult} we restricted our cactus graphs to be triangle-free. A natural question regarding this restriction is: Is there a winning strategy for any player if the cactus graph is not triangle-free? In the following result, we answer this question for cactus graphs in which two cycles, one of which is a $C_3$, are joined together at a single vertex. This result also provides an example in which there is an even number of edges in the board and Player~1 has a winning strategy, thus answering the question of Alvarado et al.~\cite{Alvarado} mentioned at the start of Section~\ref{cactusgraph}.
\begin{theorem}\label{triangletheorem}
Let $G$ be a board containing two cycles, $C_3$ and $C_n$, with $n\geq4$, such that $C_3$ and $C_n$ are connected by a single vertex, denoted vertex $a$, and every edge belongs to exactly one cycle. Player~1 has a winning strategy on $G$.
\end{theorem}
\begin{proof}
In the case where $n$ is even, $G$ has reflective symmetry by drawing the axis of symmetry to only intersect the degree 4 vertex, as shown in Board $A$ of Figure~\ref{fig:symgraph}, and there is exactly one self-involutive edge whose vertices are not fixed by the reflection. Therefore Corollary~\ref{mirrorsymbasic} holds and Player~1 has a winning strategy by marking the self-involutive edge on their first move and then using the mirror-reverse strategy for subsequent moves.
In the case where $n$ is odd, Player~1's strategy is to ensure that vertex $a$ is an almost-sink. Note that a similar strategy for creating an almost-source at vertex $a$ would also produce a win for Player~1. On Player~1's first move, they will mark one of the edges on the cycle $C_3$ that is incident to vertex $a$ with an arrow towards vertex $a$. Player~1's second move is dependent upon the first move of Player~2. There are three possible moves which Player~2 could make that are not death moves. In what follows we describe the strategy for Player~1 in each of the three cases. For examples of game play illustrating these three cases we have provided sample games on the board consisting of a cycle $C_3$ and a cycle $C_5$ joined at a single vertex in Figure~\ref{fig:C3C5graph} below.
In case one, if Player~2's first move is on the cycle $C_3$, then they would play on the other edge incident to vertex $a$ in the direction towards vertex $a$ to avoid a death move. In this case, Player~1's second move would be to play on the cycle $C_n$ on one of the edges incident to vertex $a$ with an arrow towards vertex $a$. This creates an almost-sink at vertex $a$.
In case two, if Player~2's first move is on the cycle $C_n$ and they mark an edge that is incident to any vertex within distance one of vertex $a$, then Player~1's second move should be to mark the edge paired with that one via symmetry in the direction opposite Player~2's first move (that is, Player~1 should use the mirror-reverse strategy). Then on Player~1's third turn, they should ensure that the remaining edge incident to vertex $a$ on the cycle $C_3$ is marked with an arrow towards vertex $a$ by either marking it or observing that Player~2 marked it on their second move, which did not result in a death move. Note that if an almost-sink has not yet been created, Player~1 can do so on their fourth move.
In case three, if Player~2's first move is not incident to a vertex within distance one of vertex $a$, then Player~1's second move is to mark the remaining edge in the cycle $C_3$ that is incident to vertex $a$ with an arrow towards vertex $a$, which leaves the remaining unmarked edge in $C_3$ unmarkable. After Player~2's second move, Player~1 should either observe that an almost-sink has been created at vertex $a$ or play an edge incident to vertex $a$ on the cycle $C_n$ to create one.
In all, this strategy ensures the creation of an unmarkable edge in the cycle $C_3$ and that the number of unmarkable edges on $C_n$ is even. Hence, there is an odd number of edges that can be marked on the board $G$, and this leads to a Player~1 win.
\end{proof}
\begin{figure}[ht]
\[
\begin{array}{ccc}
\begin{tikzpicture}[scale=.6]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (1.5,0){};
\node[Bvertex] (B) at (-.5, 1.414){};
\node[Bvertex] (E) at (-.5,-1.414){};
\node[Bvertex] (F) at (3.2, 1.414){};
\node[Bvertex] (H) at (5.414,1.414){};
\node[Bvertex] (I) at (5.414,-1.414){};
\node[Bvertex] (K) at (3.2,-1.414){};
\draw (A) to (B);
\draw (B) to (E);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (H);
\draw (H) to (I);
\draw (I) to (K);
\draw (K) to (A);
\draw (B) -- node[font=\small, label={[label distance=-7pt]:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (E) -- node[font=\small, label={[label distance=-9pt]below:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (F) -- node[font=\small, label={[label distance=-9pt] below right:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (I) -- node[font=\small, label={[label distance=-4pt]below:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (K);
\draw (H) -- node[font=\small, label={[label distance=-4pt]below:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (F);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.6]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (1.5,0){};
\node[Bvertex] (B) at (-.5, 1.414){};
\node[Bvertex] (E) at (-.5,-1.414){};
\node[Bvertex] (F) at (3.2, 1.414){};
\node[Bvertex] (H) at (5.414,1.414){};
\node[Bvertex] (I) at (5.414,-1.414){};
\node[Bvertex] (K) at (3.2,-1.414){};
\draw (A) to (B);
\draw (B) to (E);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (H);
\draw (H) to (I);
\draw (I) to (K);
\draw (K) to (A);
\draw (B) -- node[font=\small, label={[label distance=-7pt]:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (A) -- node[font=\small, label={[label distance=-6pt]:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (F);
\draw (K) -- node[font=\small, label={[label distance=-9pt]:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (H) -- node[font=\small, label={[label distance=-5pt]:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (I);
\draw (E) -- node[font=\small, label={[label distance=-9pt]below:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (I) -- node[font=\small, label={[label distance=-4pt]below:$6$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (K);
\draw (F) -- node[font=\small, label={[label distance=-4pt]above:$7$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.6]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (1.5,0){};
\node[Bvertex] (B) at (-.5, 1.414){};
\node[Bvertex] (E) at (-.5,-1.414){};
\node[Bvertex] (F) at (3.2, 1.414){};
\node[Bvertex] (H) at (5.414,1.414){};
\node[Bvertex] (I) at (5.414,-1.414){};
\node[Bvertex] (K) at (3.2,-1.414){};
\draw (A) to (B);
\draw (B) to (E);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (H);
\draw (H) to (I);
\draw (I) to (K);
\draw (K) to (A);
\draw (B) -- node[font=\small, label={[label distance=-7pt]:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (I) -- node[font=\small, label={[label distance=-4pt]below:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\draw (E) -- node[font=\small, label={[label distance=-7pt]below:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (A) -- node[font=\small, label={[label distance=-4pt]:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (F);
\draw (K) -- node[font=\small, label={[label distance=-7pt]:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\end{scope}
\end{tikzpicture}\\
\text{Case } 1 & \text{Case } 2& \text{Case } 3\\
\end{array}
\]
\caption{An example of the cycles $C_3$ and $C_5$ joined at a vertex, displaying the three cases of the winning strategy for Player~1 in Theorem~\ref{triangletheorem}.}
\label{fig:C3C5graph}
\end{figure}
\section{Further Questions}
In this paper, we found a winning strategy for triangle-free cactus graphs with certain properties. However, several questions remain unanswered from our studies:
\begin{itemize}
\item In our main result, we required our graphs to be triangle-free cactus graphs and to have a fixed set of symmetries that satisfied two properties (See Theorem~\ref{mainresult}). If we relax these criteria, can we determine a strategy for cactus graphs without these properties?
\item The modified mirror-reverse strategy was used for certain types of cactus graphs in our results. Are there other classes of graphs that the modified mirror-reverse strategy can be applied to?
\item Can we utilize the modified mirror-reverse strategy on graphs with other types of symmetry (not just reflective symmetry)?
\item We noted that our Theorem~\ref{triangletheorem} answers a question of Alvarado et al.~\cite{Alvarado} in the negative, by leading to examples of game boards with an even number of edges for which Player~1 possesses a winning strategy. However, such examples will all include a 3-cycle. We might ask the following variation of the question in \cite{Alvarado}: If we consider only triangle-free game boards, does it then follow that if the number of edges in the board is odd, Player~1 has a winning strategy, and otherwise Player~2 has a winning strategy?
\end{itemize}
\section{Acknowledgements}
We extend our gratitude to Jonah Amundsen and to Peter Graziano, a doctoral student at the University of Connecticut, for their assistance with early aspects of the project.
The authors would like to thank the reviewers for their comments in reviewing the paper. We also thank the University of Wisconsin-Eau Claire Department of Mathematics, the Office of Research and Sponsored Projects for supporting Jonah Amundsen, Heather Baranek, and Shanise Walker on this project. In addition, we also thank the University of Wisconsin-Eau Claire Foundation, Walter M. Reid First Year Research Fellowship, and the Blugold Fellowship for supporting Heather Baranek. Most of the work for this project was completed while Samuel Adefiyiju and Alison LaBarre were students at Providence College.
|
{
"arxiv_id": "2302.08694",
"language": "en",
"timestamp": "2023-02-20T02:06:58",
"url": "https://arxiv.org/abs/2302.08694",
"yymm": "2302"
} | \section{Introduction}
The immersed boundary (IB) method is generally applicable to problems
of fluid-structure interaction \citep{peskinImmersedBoundaryMethod2002}. The immersed boundary can be thin, and
when idealized as infinitely thin it applies a singular force density
(i.e., a delta-function layer of force per unit volume) to the
surrounding fluid. An example of such a thin elastic boundary immersed in a viscous incompressible fluid is a
heart valve leaflet, and this was the motivating example for the introduction of the IB method \citep{peskinNumericalAnalysisBlood1977}. The IB method smooths the
singularity by introducing a kernel $\delta_h^{(n)}$ which is a smoothed
approximation to the Dirac delta function and is finitely supported on a box, the size of which in each space direction is some small integer $n$ times the meshwidth $h$. This smoothing, which
affects not only the application of force to the fluid but also the
evaluation of the fluid velocity field at the location of the
immersed boundary, is a source of inaccuracy since it gives the
computational immersed boundary an effective thickness, on the order
of a meshwidth, that is not part of the mathematical formulation
of the problem but is needed for computational reasons.
Moreover, the interpolated velocity field of the standard IB
method does not have zero divergence, even when the grid velocity
field that is being interpolated is discretely divergence-free. A
consequence of this is a systematic volume leak that is an especially
disturbing kind of numerical error in some applications, even though
it can be made arbitrarily small by refinement of numerical
parameters. There are several ways to reduce the volume leak by a large constant factor with extra effort, e.g. a modified finite difference operator \citep{peskinImprovedVolumeConservation1993} or the MAC discretization \citep{griffithVolumeConservationImmersed2012}. A more recent development is a divergence-free IB method that involves a vector potential computed on a staggered grid, interpolated as in the standard IB method (but with a delta-function kernel that
has three continuous derivatives), and then differentiated to obtain a divergence-free velocity field in which the immersed boundary moves \citep{baoImmersedBoundaryMethod2017}. The present paper has in common with \citep{baoImmersedBoundaryMethod2017} that the volume leak is eliminated by making the interpolated velocity field continuously divergence-free. This happens here in a much more natural way, however, since our pseudo-spectral fluid solver works directly with a
continuously divergence-free representation of the fluid velocity
field, and since our interpolation method is equivalent to the
direct evaluation of that velocity field at the immersed boundary. Neither the vector potential nor the staggered grid is needed in our new method.
Another issue with the immersed boundary method is translation
invariance. On a periodic (or infinite) domain, a computational mesh of a given
meshwidth can be shifted by an arbitrary amount (not necessarily an
integer multiple of a meshwidth) in each coordinate direction. It
would be ideal for the computed solution to be independent of any
such shift, except for being sampled on the shifted grid. For this
to happen, it must be the case that the influence of one immersed
boundary point on another, even though that influence occurs through
the fluid grid, should depend only on the vector that connects the
two points, and not on how those two points are situated in relation
to the fluid grid. In the standard IB method \citep{peskinImmersedBoundaryMethod2002}, the issue of translation invariance is addressed by introducing a sum-of-squares condition for the delta-function kernel that is used in the velocity interpolation
and the force spreading. The sum-of-squares condition ensures
translation invariance for the self-interaction (mediated by the
fluid grid) of any immersed boundary point, and it also ensures that
pairwise interactions are bounded by the translation-invariant
self-interaction. Recently, the translation invariance for pair interaction has been improved through the introduction of
Gaussian-like delta-function kernels with bounded support \citep{baoGaussianLikeImmersedBoundary2016}. Exact translation invariance has not been achieved in the context of
immersed boundary methods, and indeed it can be shown that exact
translation invariance is incompatible with bounded support of the
regularized delta functions. Although regularized delta functions
with unbounded support exist, their use would seem to be impractical,
since the cost of interpolation and force spreading would then be
proportional to the product of the number of immersed boundary points
and the number of fluid grid points. Thus, it might well be believed
that exact translation invariance is unachievable within the
framework of the IB method.
The purpose of this article is to overcome these limitations of the IB method. We introduce a new Fourier spectral immersed boundary (FSIB) method for a thin elastic boundary immersed in a viscous incompressible fluid. The fluid equations, including both the Stokes equations and the Navier-Stokes equations, are solved by a Fourier spectral method,
also known as a pseudospectral method \citep{gottliebNumericalAnalysisSpectral1977a}. The force spreading and the velocity interpolation steps of the IB method
are here expressed in terms of finite Fourier series. The coefficients of the Fourier series can be calculated efficiently by a fast algorithm, the Non-uniform Fast Fourier Transform (NUFFT) \citep{duttFastFourierTransforms1993}. This plays the role of the force spreading step of the IB method. The Fourier series can be evaluated at any location in physical space by the NUFFT as well, and this plays the role of the velocity interpolation step of the IB method. We use a fast and parallel implementation of the NUFFT called the finufft \citep{barnettParallelNonuniformFast2019,barnettAliasingErrorExp2020}. From an algorithmic point of view, there is no need for
any regularized delta function and the FSIB method does not employ an Eulerian grid in the physical space. This is the key to translation invariance.
Despite the foregoing, we prove herein that
our new method is equivalent to an IB method with a
`$\mathtt{sinc}$' function kernel as the regularized delta function. Moreover, the FSIB method has a duality between force spreading and velocity interpolation similar to that of the IB method, and the `$\mathtt{sinc}$' kernel even satisfies the conditions of the standard IB kernels both continuously and discretely. Thus, the FSIB method preserves the merits of the IB method, including the conservation of momentum and the conservation of energy.
Related to the present paper but in a different field is \citep{mitchellEfficientFourierBasis2019}, in which the Vlasov-Poisson equations are solved by a Fourier spectral method. In this reference, however, the counterpart of the immersed boundary is a collection of point charges that are smoothed by a Gaussian shape function. In contrast to this, our motivation in the present paper is to avoid any such smoothing, and of course, the Vlasov-Poisson equations are different from the Navier-Stokes equations.
The structure of the present paper is as follows. In \cref{sec:equations}, we describe the continuum formulation that is the foundation of the FSIB method, and in \cref{sec:discretization}, we discuss the
details of discretization. In \cref{sec:relation}, we show that the FSIB method can be viewed as an
immersed boundary method with a `$\mathtt{sinc}$' function kernel, which is globally supported, as the
regularized delta function. We will also show that the FSIB method has exact translation invariance,
and its time-continuous but spatially discretized version
conserves both energy and momentum. In \cref{sec:numerical}, a series of numerical experiments will be provided to verify these properties of FSIB in Stokes flow and in Navier-Stokes flow in two space dimensions and also in three space dimensions. In this section, we also study the convergence rate and boundary resolution of the FSIB method.
\section{Mathematical Formulation of the FSIB method}\label{sec:equations}
In this section, we provide a general description of the equations used in the FSIB method. We consider a thin, massless, elastic boundary $\Gamma$, immersed in a viscous incompressible fluid. The fluid is described in Eulerian form by the Navier-Stokes equations:
\begin{equation}\label{eq:navierstokes}
\left\{\begin{array}{l}
\rho\left(\frac{\partial \boldsymbol{u}}{\partial t}+\boldsymbol{u} \cdot \nabla \boldsymbol{u}\right)+\nabla p=\mu \Delta \boldsymbol{u}+\boldsymbol{f}, \\
\nabla \cdot \boldsymbol{u}=0,
\end{array}\right.
\end{equation}
where $\boldsymbol u(\boldsymbol x,t)$ is the fluid velocity, $p(\boldsymbol x,t)$ is the fluid pressure, and
$\boldsymbol f(\boldsymbol x,t)$ is the force per unit volume applied to the fluid. Our principal use of $\boldsymbol f(\boldsymbol x,t)$ will be to represent the force per unit
volume applied to the fluid by the immersed elastic boundary, in
which case $\boldsymbol f(\boldsymbol x,t)$ will be a delta-function layer with support on the
immersed boundary, but sometimes we will also consider given external
forces that drive a flow. The constant parameters $\rho$ and $\mu$ in
equation \cref{eq:navierstokes} are the mass density and the dynamic viscosity of the
fluid, respectively.
The spatial domain occupied by the fluid will be the 3-torus, that is, a
cube with periodic boundary conditions. It is equivalent, however,
to say that the fluid occupies all of $\mathcal{R}^3$ and that the functions $\boldsymbol u$,
$p$, and $\boldsymbol f$ are periodic with some specified period $L$ in all three
spatial dimensions. Both points of view will be useful herein.
We will sometimes consider the two-dimensional case in numerical experiments,
but our description of the method will be three-dimensional.
As in the IB method, the immersed boundary and the fluid are coupled as follows
\begin{equation}\label{eq:force_spreading}
\left\{\begin{array}{l}
\boldsymbol{f}(\boldsymbol{x}, t)=\int \boldsymbol{F}(\boldsymbol{\theta}, t) \delta(\boldsymbol{x}-\boldsymbol{X}(\boldsymbol{\theta}, t))\, d \boldsymbol{\theta},\\
\boldsymbol U(\boldsymbol \theta , t) = \frac{\partial \boldsymbol{X}}{\partial t}(\boldsymbol{\theta}, t)=\boldsymbol{u}(\boldsymbol{X}(\boldsymbol{\theta}, t), t)=\int \boldsymbol{u}(\boldsymbol{x}, t) \delta(\boldsymbol{x}-\boldsymbol{X}(\boldsymbol{\theta}, t)) d \boldsymbol{x},
\end{array}\right.
\end{equation}
where $\boldsymbol{F}(\boldsymbol \theta,t)$ denotes the force density exerted on the fluid by the immersed boundary, $\boldsymbol{X}(\boldsymbol \theta,t)$ denotes the location of the immersed boundary, and $\boldsymbol U (\boldsymbol \theta,t)$ denotes the velocity of the immersed boundary, all parameterized by the Lagrangian variable $\boldsymbol{\theta}$. The force density defined on the immersed boundary is spread to the fluid through convolution with the Dirac delta function, and the velocity of the immersed boundary is similarly obtained by convolution of the velocity of the fluid with the Dirac delta function.
The foundation of the FSIB method is a rewrite of the above equations in terms of Fourier series. Any periodic function $g \in \mathcal L^2_{\Omega_L}$ on the periodic box $\Omega_L = [0, L]^3$ can be written as:
\begin{equation}\label{eq:contfourseri}
g(\boldsymbol x)=\sum_{\boldsymbol k \in \mathcal{K}} \hat{g}(\boldsymbol k) \exp (i \boldsymbol k \cdot \boldsymbol x),
\end{equation}
where the $\hat{g}(\boldsymbol k)$ denotes the Fourier series coefficient of wavenumber $\boldsymbol k \in \mathcal{K}$ and $\mathcal{K}=\frac{2\pi}{L}\mathbb{Z}^3$. We denote
\begin{equation}\label{eq:hat_operator}
\hat{g} = \mathcal{H}(g),
\end{equation}
where the operator $\mathcal{H}$ is the map from a periodic function to its Fourier coefficients and it gets the name `H' because it puts a hat on the function. In this way, all of the periodic functions $\boldsymbol f, p$ and $\boldsymbol u$ can be written as Fourier series with coefficients $\hat{\boldsymbol f}, \hat{p}$ and $\hat{\boldsymbol u}$. In the following, we omit arguments such as $t$, $\boldsymbol x$, $\boldsymbol k$, and $\boldsymbol \theta$ in the notation for convenience as long as the meaning is clear. To express the Navier-Stokes equations \cref{eq:navierstokes} in the Fourier domain, we first note that the pressure $p$ satisfies the pressure Poisson equation:
\begin{equation}\label{eq:pressure_poisson}
\Delta p = \nabla \cdot (\boldsymbol f - \rho \boldsymbol u\cdot \nabla \boldsymbol u),
\end{equation}
which is obtained by taking the divergence of the momentum equation of the Navier-Stokes equations (\ref{eq:navierstokes}) and eliminating the divergence-free terms.
We denote the nonlinear term $\boldsymbol s(\boldsymbol u):=\boldsymbol u \cdot \nabla \boldsymbol u$ and its Fourier coefficients as $\hat{\boldsymbol s}(\hat{\boldsymbol u})$. The calculation of $\boldsymbol s$ usually takes place in the physical space as
\begin{equation}\label{eq:nonlinear_physical}
\hat{\boldsymbol s}(\hat{\boldsymbol u}) = \mathcal{H}\left( (\mathcal{H}^{-1}(\hat{\boldsymbol u}))\cdot (\mathcal{H}^{-1}(i\boldsymbol k \hat{\boldsymbol u}))\right).
\end{equation}
In Fourier space, the pressure Poisson equation becomes
\begin{equation}\label{eq:pressure_poisson_k}
\hat{p}=-\frac{i\boldsymbol k \cdot (\hat{\boldsymbol f}-\rho \hat{\boldsymbol s}(\hat{\boldsymbol u}))}{|\boldsymbol k|^2}.
\end{equation}
The case $\boldsymbol k=0$, for which the denominator vanishes, poses no difficulty because the addition of a constant to the pressure makes no difference, so we may set $\hat{p}(\boldsymbol k=0)$ to any constant.
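For concreteness, the following is a minimal sketch of how \cref{eq:pressure_poisson_k} could be evaluated on a truncated set of $N^3$ wavenumbers. The sketch is written in Python with NumPy, which is purely an illustrative choice of ours and not part of the method; the array layout (vector components first, standard FFT ordering of wavenumbers) is an assumption made only for this example.
\begin{verbatim}
import numpy as np

# Sketch: evaluate eq. (pressure_poisson_k) on N^3 wavenumbers.
# fhat, shat: Fourier coefficients of f and s(u) = u . grad u,
# each of shape (3, N, N, N); rho is the fluid density.
def pressure_hat(fhat, shat, rho, L, N):
    k1d = 2.0 * np.pi / L * np.fft.fftfreq(N, d=1.0 / N)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k = np.stack([kx, ky, kz])                 # shape (3, N, N, N)
    k2 = np.sum(k**2, axis=0)
    k2[0, 0, 0] = 1.0                          # avoid 0/0 at k = 0
    g = fhat - rho * shat
    phat = -1j * np.sum(k * g, axis=0) / k2
    phat[0, 0, 0] = 0.0                        # pressure is defined up to a constant
    return phat
\end{verbatim}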
With the use of \cref{eq:pressure_poisson_k}, the Fourier series version of \cref{eq:navierstokes} becomes
\begin{equation}\label{eq:contfoursoln}
\frac{\partial \hat{\bm{u}}(\boldsymbol k)}{\partial t}=(\bm{I}-\frac{\bm{k} \bm{k}^T}{|\bm{k}|^2})(\frac{\hat{\bm{f}}(\bm k)}{\rho}-\hat{\boldsymbol s}(\hat{\boldsymbol u}, \bm k))-\frac{\mu}{\rho}|\bm{k}|^2\hat{\bm{u}}(\bm k),
\end{equation}
where $\boldsymbol k \in \mathcal{K}$. Note that $\bm{I}-\frac{\bm{k} \bm{k}^T}{|\bm{k}|^2}$ is the projection operator onto the divergence-free space in the Fourier domain. Inherently, this enforces the divergence-free condition of velocity, i.e. $\boldsymbol k \cdot \hat{\boldsymbol u}=0$ in Fourier space or $\nabla \cdot \boldsymbol u=0$ in physical space, analytically.
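Likewise, the right-hand side of \cref{eq:contfoursoln} can be assembled directly in Fourier space. The sketch below (again Python with NumPy, under the same illustrative array conventions as above) applies the projection $\bm{I}-\frac{\bm{k} \bm{k}^T}{|\bm{k}|^2}$ componentwise; at $\bm k = 0$, where the projector is not defined, this sketch simply leaves the forcing unprojected, one common choice that lets a nonzero mean force drive the mean flow.
\begin{verbatim}
import numpy as np

def velocity_rhs(uhat, fhat, shat, rho, mu, L, N):
    # Sketch of the right-hand side of eq. (contfoursoln).
    # uhat, fhat, shat: Fourier coefficients of u, f and s(u),
    # each of shape (3, N, N, N).  Returns d(uhat)/dt.
    k1d = 2.0 * np.pi / L * np.fft.fftfreq(N, d=1.0 / N)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k = np.stack([kx, ky, kz])
    k2 = np.sum(k**2, axis=0)
    k2_safe = np.where(k2 == 0.0, 1.0, k2)     # avoid 0/0 at the k = 0 mode
    g = fhat / rho - shat                      # unprojected forcing term
    kdotg = np.sum(k * g, axis=0)
    proj_g = g - k * kdotg / k2_safe           # (I - k k^T/|k|^2) g; identity at k = 0
    return proj_g - (mu / rho) * k2 * uhat     # add the viscous term
\end{verbatim}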
By Fourier series, the coupling between the immersed boundary and the fluid in \cref{eq:force_spreading} can be evaluated as
\begin{equation}\label{eq:force_spreading_1}
\hat{\boldsymbol f}(\boldsymbol k)=\frac{1}{L^3} \int_{\boldsymbol x \in \Omega_L} \boldsymbol f(\boldsymbol x) \exp(-i\boldsymbol k \cdot \boldsymbol x) d\boldsymbol x = \frac{1}{L^3} \int_{\boldsymbol \theta \in \Gamma} \boldsymbol F(\boldsymbol \theta) \exp(-i \boldsymbol k \cdot \boldsymbol X(\boldsymbol \theta)) d \boldsymbol \theta,\quad \boldsymbol k \in \mathcal{K},
\end{equation}
\begin{equation}\label{eq:velocity_interpolation_1}
\bm U(\bm \theta)= \bm u(\bm X(\bm \theta))=\sum_{\bm{k} \in \mathcal{K}}\hat{\bm u}(\bm{k})\exp(i\bm{k}\cdot \bm{X}(\bm \theta)).
\end{equation}
It is encouraging that the singularity of the Dirac delta function no longer appears in these equations.
In summary, for a given system state with Lagrangian coordinates of the immersed boundary in physical space $\boldsymbol X(\boldsymbol \theta,t)$ and fluid velocity in Fourier space $\hat{\boldsymbol u}(\boldsymbol k,t)$, we first compute the force in Fourier space $\hat{\boldsymbol f}(\boldsymbol k,t)$ from \cref{eq:force_spreading_1}. Then, by evaluating the right-hand side of \cref{eq:contfoursoln}, we are able to obtain the rate of change of the velocity $\hat{\boldsymbol u}(\boldsymbol k,t)$. By evaluating the Fourier series \cref{eq:velocity_interpolation_1}, we are able to obtain the velocity of the immersed boundary $\boldsymbol U(\boldsymbol \theta,t)$. Last, we are able to compute the rate of change of the location of the immersed boundary $\boldsymbol X(\boldsymbol \theta,t)$ as $\partial \boldsymbol X(\boldsymbol \theta,t) /\partial t=\boldsymbol U(\boldsymbol \theta,t)$.
\section{Numerical implementation of the FSIB method}\label{sec:discretization}
First, we provide an overview of the structure of this section. According to the Fourier spectral solver in \cref{eq:contfoursoln}, we need to compute $\hat{\boldsymbol f}$ and $\hat{\boldsymbol s}(\hat{\boldsymbol u})$ for given $\hat{\boldsymbol u}$ and $\boldsymbol X$, all of which are discussed in \cref{sec:spatial_discretization}. Note that $\hat{\boldsymbol f}$ is obtained from \cref{eq:force_spreading_1}, and \cref{eq:velocity_interpolation_1} is used to update the velocity of the immersed boundary. Both are types of NUFFT and are described in detail in appendix \ref{appendix:nufft}. Furthermore, a temporal integrator is needed to evolve \cref{eq:contfoursoln}, and we show this process in \cref{sec:temporal_discretization}. We summarize the FSIB algorithm in \cref{sec:algo_summary}.
It should be noticed that the Lagrangian variables including $\boldsymbol X$, $\boldsymbol F$ and $\boldsymbol U$ are defined in physical space while the Eulerian variables like $\hat{\boldsymbol u}$ are defined in Fourier space. These two types of variables are connected by force spreading and velocity interpolation. Note that the FSIB method has no Eulerian grid in the physical space. This is unlike the standard IB method, and is one of the key features of the FSIB method.
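To fix ideas before going into the details, the update cycle described above can be written as a single schematic time step. In the sketch below (forward Euler is used purely for illustration; the integrators actually used are discussed in \cref{sec:temporal_discretization}), the callables \texttt{boundary\_force}, \texttt{spread\_force}, \texttt{interp\_velocity}, \texttt{nonlinear\_hat} and \texttt{velocity\_rhs} are hypothetical stand-ins for the boundary constitutive law, the force-spreading sum \cref{eq:force_spreading_1}, the interpolation sum \cref{eq:velocity_interpolation_1}, the dealiased evaluation of $\hat{\boldsymbol s}(\hat{\boldsymbol u})$, and the right-hand side of \cref{eq:contfoursoln}, respectively.
\begin{verbatim}
def forward_euler_step(X, uhat, dt, boundary_force, spread_force,
                       interp_velocity, nonlinear_hat, velocity_rhs):
    # One schematic FSIB time step (forward Euler, for illustration only).
    # X:    Lagrangian marker positions, shape (Nb, 3).
    # uhat: Fourier coefficients of the fluid velocity, shape (3, N, N, N).
    # velocity_rhs(uhat, fhat, shat): e.g. the earlier sketch with
    # rho, mu, L, N bound in advance (say via functools.partial).
    F = boundary_force(X)            # Lagrangian force density F(theta)
    fhat = spread_force(X, F)        # force spreading (type-1 NUFFT-style sum)
    shat = nonlinear_hat(uhat)       # dealiased nonlinear term s-hat
    duhat_dt = velocity_rhs(uhat, fhat, shat)
    U = interp_velocity(X, uhat)     # velocity interpolation (type-2 NUFFT-style sum)
    return X + dt * U, uhat + dt * duhat_dt
\end{verbatim}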
\subsection{Spatial approximation}\label{sec:spatial_discretization}
In the standard IB method, force spreading and velocity interpolation are the most important steps, as they connect Eulerian variables and Lagrangian variables. The singular Dirac delta function in \cref{eq:force_spreading} is approximated by a regularized delta function that has a finitely large height and finitely small width \citep{peskinImmersedBoundaryMethod2002}. It thus computes the force spreading and the velocity interpolation by convolutions with the regularized kernel in physical space. In the new FSIB method, however, these two processes are easily done in Fourier space and do not require any regularization of the Dirac delta function in the physical space.
In practice, the coupling between the immersed boundary and the fluid in eqs. (\ref{eq:force_spreading_1}) and (\ref{eq:velocity_interpolation_1}) must be computed with a finite number of Lagrangian grid points and Fourier modes. First, it is natural to use a finite set of Fourier modes $\mathcal{K}_{N}$ that is a truncation of $\mathcal{K}$
\begin{equation}\label{eq:kNh}
\mathcal{K}_{N}=\left\{
\begin{array}{ll}
\left\{\frac{2\pi}{L} [-N/2,...,0,...,N/2-1]\right\}^3, & N\text{ is even,} \\
\left\{\frac{2\pi}{L} [-(N-1)/2,...,0,...,(N-1)/2]\right\}^3, & N\text{ is odd,}
\end{array}\right.
\end{equation}
where $N$ is the number of Fourier modes in each dimension, so there are $N^3$ Fourier modes in total, and the power of $3$ denotes the three-fold Cartesian product. On the truncated range of frequencies $\mathcal{K}_{N}$, we evaluate the integral of \cref{eq:force_spreading_1} with some quadrature nodes $\boldsymbol \theta_i$, where $i\in \{1,...,N_b\}$ and $N_b$ denotes the number of Lagrangian grid points. Note that the Lagrangian grid points $\boldsymbol \theta_i$ are not necessarily equispaced. This gives the force spreading in the FSIB method as follows
\begin{equation}\label{eq:force_spreading_quad}
\hat{\boldsymbol f}(\boldsymbol k)= \frac{1}{L^3} \int_{\boldsymbol \theta \in \Gamma} \boldsymbol F(\boldsymbol \theta) \exp(-i \boldsymbol k \cdot \boldsymbol X(\boldsymbol \theta)) d \boldsymbol \theta \approx \frac{1}{L^3} \sum_{j=1}^{N_b} \omega_j \boldsymbol F(\boldsymbol \theta_j) \exp(-i \boldsymbol k \cdot \boldsymbol X(\boldsymbol \theta_j)),\quad \boldsymbol k \in \mathcal{K}_{N},
\end{equation}
where the $\omega_j$ are quadrature weights. Note that the Lagrangian variable $\boldsymbol \theta$ is usually periodic, e.g. a closed curve in two dimensions, in which case a simple equispaced quadrature will give exponential convergence, and we also show this numerically in \cref{sec:stokes}.
We compute the velocity interpolation on the Lagrangian grids:
\begin{equation}\label{eq:velo_intp_fini}
\bm U(\bm \theta_j)\approx \sum_{\bm{k} \in \mathcal{K}_{N}}\hat{\bm u}(\bm{k})\exp(i\bm{k}\cdot \bm{X}(\bm \theta_j)),\quad j=1,...,N_b.
\end{equation}
Direct calculation of the new force spreading and the velocity interpolation above (eqs. (\ref{eq:force_spreading_quad}) and (\ref{eq:velo_intp_fini})) is quite expensive. Both require $O(N_b\cdot N^3)$ operations when implemented as naive for-loops. Moreover, we will show that, in practice, the numbers of Fourier modes and Lagrangian grid points are refined together as $N_b \sim N^2$ in three dimensions, which makes the computational cost $O(N^5)$. This is unaffordable, and it is why we need the NUFFT algorithm to reduce the computational cost to $O(N^3\log(N))$. The force spreading in \cref{eq:force_spreading_quad} is from nonuniform grids to uniform grids and is a type-1 NUFFT. The velocity interpolation in \cref{eq:velo_intp_fini} is from uniform grids to nonuniform grids and is a type-2 NUFFT. To keep the presentation streamlined, we defer the details of using the type-1 and the type-2 NUFFT to compute the force spreading and the velocity interpolation to appendix \ref{appendix:nufft}. For now, we assume that $\hat{\boldsymbol f}(\boldsymbol k)$ in \cref{eq:force_spreading_quad} and $\boldsymbol U(\boldsymbol \theta)$ in \cref{eq:velo_intp_fini} are computed by an efficient algorithm, i.e. the NUFFT, with a complexity of $O(N^3\log(N))$.
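Before turning to the NUFFT, it may help to see the two sums written out naively. The following sketch (Python with NumPy, our illustrative choice; it costs $O(N_b\cdot N^3)$ and is therefore only useful for small problems or for validating a fast implementation) evaluates \cref{eq:force_spreading_quad} and \cref{eq:velo_intp_fini} by direct summation; in the actual method these loops are replaced by the type-1 and type-2 NUFFT.
\begin{verbatim}
import numpy as np

def kgrid(L, N):
    # Wavenumbers of the truncated set K_N, shape (3, N, N, N).
    k1d = 2.0 * np.pi / L * np.fft.fftfreq(N, d=1.0 / N)
    return np.stack(np.meshgrid(k1d, k1d, k1d, indexing="ij"))

def spread_force_direct(X, F, w, L, N):
    # Direct evaluation of eq. (force_spreading_quad).
    # X, F: (Nb, 3) marker positions and force densities; w: (Nb,) weights.
    k = kgrid(L, N)
    fhat = np.zeros((3, N, N, N), dtype=complex)
    for Xj, Fj, wj in zip(X, F, w):
        phase = np.exp(-1j * np.tensordot(Xj, k, axes=(0, 0)))  # exp(-i k . X_j)
        fhat += wj * Fj[:, None, None, None] * phase / L**3
    return fhat

def interp_velocity_direct(X, uhat, L, N):
    # Direct evaluation of eq. (velo_intp_fini): U(theta_j) at each marker.
    k = kgrid(L, N)
    U = np.zeros_like(X, dtype=float)
    for j, Xj in enumerate(X):
        phase = np.exp(1j * np.tensordot(Xj, k, axes=(0, 0)))   # exp(+i k . X_j)
        U[j] = np.real(np.sum(uhat * phase, axis=(1, 2, 3)))
    return U
\end{verbatim}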
Another term that is tricky to compute is the nonlinear term $\boldsymbol s(\boldsymbol u) = \boldsymbol u \cdot \nabla \boldsymbol u$ in Fourier space. By the convolution theorem, a product in physical space becomes a convolution in Fourier space. The multiplication in physical space $\boldsymbol u \cdot \nabla \boldsymbol u$ has only $O(N^3)$ complexity, but the convolution in Fourier space has $O(N^6)$ complexity, which is again unaffordable. Thus, it is better to compute the nonlinear term $\boldsymbol s(\boldsymbol u)$ by multiplication in physical space. An exact computation of \cref{eq:nonlinear_physical} in $\mathcal{K}_N$ is
\begin{equation}\label{eq:nonlinear_kspace}
\hat{\boldsymbol s}(\hat{\boldsymbol u}) = \mathcal{T}_{N} \circ \mathcal{H}\left( (\mathcal{H}^{-1}(\hat{\boldsymbol u}))\cdot (\mathcal{H}^{-1}(i\boldsymbol k \hat{\boldsymbol u}))\right),
\end{equation}
where $\mathcal{T}_{N}$ denotes the truncation operator on $\mathcal{K}_N$.
A naive way to compute $\hat{\boldsymbol s}(\hat{\boldsymbol u})$ is the following. For input $\hat{\boldsymbol u}$, we first compute $\widehat{\nabla \boldsymbol u}=i\boldsymbol k \hat{\bm{u}}$ in Fourier space. We then convert back to physical space by inverse FFT to get $\boldsymbol u$ and $\nabla \boldsymbol u$ and multiply them to get $\boldsymbol s(\boldsymbol u)= \boldsymbol u \cdot \nabla \boldsymbol u$. Finally, we convert $\boldsymbol s(\boldsymbol u)$ back into Fourier space by FFT to get $\hat{\boldsymbol s}(\hat{\boldsymbol u})$. A complication here is that the multiplication in physical space causes aliasing error: it is a product of two functions $\boldsymbol u$ and $\nabla \boldsymbol u$ that are bandlimited to $\mathcal{K}_{N}$, and in Fourier space the convolution of two such functions produces nonzero Fourier coefficients outside of $\mathcal{K}_{N}$. Since the FFT only resolves $\boldsymbol s(\boldsymbol u)$ on the Fourier modes $\mathcal{K}_{N}$, the frequencies outside of $\mathcal{K}_{N}$ pollute the low frequencies inside of $\mathcal{K}_{N}$, which is called aliasing. Moreover, such aliasing error makes the Fourier spectral method suffer from numerical instability \citep{goodmanStabilityUnsmoothedFourierMethod1994, phillips1959example}.
The famous ``three-halves" rule, also known as Orszag's ``two-thirds" rule \citep[Chapter~11]{boydChebyshevFourierSpectral}, is applied to prevent such aliasing. We start from the two sets of Fourier coefficients $\hat{\boldsymbol u}$ and $\widehat{\nabla \boldsymbol u}=i \boldsymbol k \hat{\boldsymbol u}$, each of length $N$ in every dimension. A centered zero padding is applied in each space dimension up to a length $\Tilde{N}>\frac{3}{2}N$, so that the range of frequencies becomes $\mathcal{K}_{\tilde{N}}$. An inverse FFT then gives $\boldsymbol u$ and $\nabla \boldsymbol u$ in physical space with $\Tilde{N}$ points in each dimension, and we compute the product $\boldsymbol s(\boldsymbol u) = \boldsymbol u \cdot \nabla \boldsymbol u$ there. Finally, one FFT gives $\widehat{\boldsymbol u\cdot \nabla \boldsymbol u}$, and we truncate the frequency range from $\mathcal{K}_{\tilde{N}}$ back to $\mathcal{K}_{N}$ to obtain $\hat{\boldsymbol s}(\hat{\boldsymbol u})$ for $\boldsymbol k \in \mathcal{K}_{N}$. As long as $\Tilde{N}>\frac{3}{2}N$, it can be proved that the frequencies in $\mathcal{K}_{N}$ are not polluted by the high-frequency modes outside of $\mathcal{K}_{N}$, so there is no aliasing error for the Fourier modes in $\mathcal{K}_N$. This is why it is called the ``three-halves" rule.
In general, the ``three-halves'' rule provides a way to compute the product of two bandlimited functions in Fourier space with complexity $O(N^3\log(N))$ and without aliasing error. One may disregard the details here and simply regard the ``three-halves'' rule as a black-box algorithm that computes \cref{eq:nonlinear_kspace} exactly.
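For concreteness, here is a one-dimensional NumPy sketch of the ``three-halves'' rule (our own illustrative code, not the implementation used for the results below): two sets of coefficients supported on $\mathcal{K}_N$ are zero-padded to $\tilde N>\frac{3}{2}N$, multiplied in physical space, transformed back, and truncated to $\mathcal{K}_N$; the result matches the exact truncated convolution to round-off.
\begin{verbatim}
import numpy as np

N, Ntil = 9, 16                   # retained modes (odd) and padded size, Ntil > 3*N/2
half = (N - 1) // 2
rng = np.random.default_rng(1)
a_hat = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # coefficients, k = -half..half
b_hat = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def pad_centered(c_hat):
    """Place centered coefficients into a length-Ntil array in np.fft ordering."""
    padded = np.zeros(Ntil, dtype=complex)
    padded[np.arange(-half, half + 1) % Ntil] = c_hat
    return padded

# evaluate both series on the fine grid, multiply, transform back, truncate to K_N
a_phys = np.fft.ifft(pad_centered(a_hat)) * Ntil
b_phys = np.fft.ifft(pad_centered(b_hat)) * Ntil
prod_hat = (np.fft.fft(a_phys * b_phys) / Ntil)[np.arange(-half, half + 1) % Ntil]

# reference: exact convolution of the coefficient sequences, restricted to K_N
exact = np.convolve(a_hat, b_hat)[half:3 * half + 1]
print(np.max(np.abs(prod_hat - exact)))       # round-off level: no aliasing inside K_N
\end{verbatim}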
Until now, we have obtained all the terms that are needed in the Fourier spectral method, so we can compute the rate of change of the velocity numerically as below, which is the spatial discretization and Fourier truncation of \cref{eq:contfoursoln}.
\begin{equation}\label{eq:fourier_spectral_sol}
\frac{\partial \hat{\bm{u}} (\bm k)}{\partial t}=(\bm{I}-\frac{\bm{k} \bm{k}^T}{|\bm{k}|^2})(\frac{\hat{\bm{f}}(\bm k)}{\rho}-\hat{\boldsymbol s}(\bm k))-\frac{\mu}{\rho}|\bm{k}|^2\hat{\bm{u}}(\bm k):=\hat{V}(\hat{\bm{f}},\hat{\bm{u}})-\frac{\mu}{\rho}|\bm{k}|^2\hat{\bm{u}}(\bm k),\quad \bm k \in \mathcal{K}_{N},
\end{equation}
where we denote the first term as $\hat{V}(\hat{\boldsymbol f},\hat{\boldsymbol u})$ for the benefit of describing the temporal integrator in \cref{sec:temporal_discretization}.
Note that $|\bm{k}|^2$ appears in the denominator of the projection operator $\bm{I}-\frac{\bm{k} \bm{k}^T}{|\bm{k}|^2}$, so we need to avoid division by zero. In the code, $|\boldsymbol k|^2$ is stored as a three-dimensional array, and we simply set $|\boldsymbol k|^2=1$ artificially when $|\boldsymbol k|=0$, which is the first entry of the array; everything else then follows naturally as array operations. This is legitimate because the term comes from the pressure Poisson equation, and the pressure $p$ is only determined up to an additive constant, as discussed for \cref{eq:pressure_poisson_k}.
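The zero-mode fix and the projection can be written in a few lines of array code; the sketch below is a minimal NumPy illustration (with our own illustrative names and sizes, not the production code of this paper) and includes a quick check that the projected field satisfies $\boldsymbol k\cdot\hat{\boldsymbol u}(\boldsymbol k)=0$.
\begin{verbatim}
import numpy as np

L, N = 1.0, 16
k1d = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2
K2[0, 0, 0] = 1.0                    # zero-mode fix: avoid dividing by zero at k = 0

def project(vx, vy, vz):
    """Apply I - k k^T/|k|^2 to a spectral vector field (Leray projection)."""
    div = (KX * vx + KY * vy + KZ * vz) / K2
    return vx - KX * div, vy - KY * div, vz - KZ * div

rng = np.random.default_rng(2)
v = [rng.standard_normal((N, N, N)) + 1j * rng.standard_normal((N, N, N)) for _ in range(3)]
px, py, pz = project(*v)
print(np.max(np.abs(KX * px + KY * py + KZ * pz)))   # analytically zero; round-off in practice
\end{verbatim}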
\subsection{Temporal approximation}\label{sec:temporal_discretization}
We denote the time step as $\Delta t$. Thus, for a given time period $[0,T]$, we have the temporal grid $t_m=m \Delta t$, where $m=0,...,N_t$ and $\Delta t=T/N_t$.
Note that the FSIB method is gridless in physical space, and the system is initialized by given $\hat{\boldsymbol u}^0$ and $\boldsymbol X^0$. The velocity of the immersed boundary $\boldsymbol U^0$ is then easily obtained by a Fourier series evaluation, which is the velocity interpolation with the NUFFT. We may therefore assume that, at any time point $t_m$, $m=0,...,N_t-1$, the variables $\hat{\boldsymbol u}^m$, $\boldsymbol U^m$ and $\boldsymbol X^m$ are known inputs to the temporal integrator. Similar to the standard IB method, a Runge-Kutta-style temporal integrator, consisting of a half step followed by a full step, is used here to solve \cref{eq:fourier_spectral_sol}. First, a half-time step is carried out explicitly to update the location of the immersed boundary $\bm{X}^{m+1/2}$ in \cref{eq:spatial_1}. Using this geometry, the force density $\bm{F}^{m+1/2}$ is obtained, usually from elasticity. Thus, $\hat{\bm{f}}^{m+1/2}$ is easily available through the force spreading in \cref{eq:force_spreading_quad} with the NUFFT.
For improved numerical stability, the diffusion term is treated implicitly. As in the standard immersed boundary method, we first use backward Euler for a half time step with inputs $\hat{\bm{f}}^{m+1/2}$ and $\hat{\bm{u}}^m$ to get $\hat{\bm{u}}^{m+1/2}$ in \cref{eq:spatial_2}. Then, with inputs $\hat{\bm{u}}^{m+1/2}$ and $\hat{\boldsymbol f}^{m+1/2}$, we use the Crank-Nicolson method for a whole time step to solve for $\hat{\bm{u}}^{m+1}$ in \cref{eq:spatial_3}. Using the velocity interpolation in \cref{eq:velo_intp_fini}, the velocities on the Lagrangian grid $\boldsymbol U^{m+1/2}$ and $\boldsymbol U^{m+1}$ are easily computed. The location of the immersed boundary at the next time step, $\boldsymbol X^{m+1}$, is then updated in \cref{eq:spatial_4}.
\begin{flalign}
&\boldsymbol X^{m+1/2}_\theta=\boldsymbol X_{\boldsymbol \theta}^m+\boldsymbol U^m\frac{\Delta t}{2} \label{eq:spatial_1}\\
&\frac{\hat{\boldsymbol u}^{m+1/2}-\hat{\boldsymbol u}^m}{\Delta t/2}=\hat{V} (\hat{\boldsymbol f}^{m+1/2},\hat{\boldsymbol u}^m)-\frac{\mu|\boldsymbol k|^2}{\rho} \hat{\boldsymbol u}^{m+1/2} \label{eq:spatial_2} \\
&\frac{\hat{\boldsymbol u}^{m+1}-\hat{\boldsymbol u}^m}{\Delta t}= \hat{V} (\hat{\boldsymbol f}^{m+1/2},\hat{\boldsymbol u}^{m+1/2})-\frac{\mu|\boldsymbol k|^2(\hat{\boldsymbol u}^m+\hat{\boldsymbol u}^{m+1})}{2\rho} \label{eq:spatial_3} \\
&\boldsymbol X^{m+1}_{\boldsymbol \theta}=\boldsymbol X_{\boldsymbol \theta}^m+\boldsymbol U^{m+1/2} \Delta t \label{eq:spatial_4}
\end{flalign}
\subsection{Algorithm summary}\label{sec:algo_summary}
Up to now, we have introduced all the components of the FSIB algorithm, and we summarize the algorithm in this section. The overall framework of the FSIB method is stated in \cref{alg:FSIB}. First, the system is initialized by $\hat{\boldsymbol u}^0$, $\boldsymbol X^0$, and $\boldsymbol U^0$, where the velocity of the immersed boundary $\boldsymbol U^0$ is obtained from $\hat{\boldsymbol u}^0$ by the velocity interpolation with a type-2 NUFFT as shown in \cref{eq:velo_intp_fini}. In the temporal for-loop, everything is computed as array operations. Note that the implicit methods, the backward Euler and the Crank-Nicolson steps, become diagonal in Fourier space, so they are implemented by simple elementwise divisions; a short sketch of these diagonal solves follows \cref{alg:FSIB}.
\begin{algorithm}
\caption{The algorithm framework of FSIB method}\label{alg:FSIB}
\begin{algorithmic}
\Require $\hat{\boldsymbol u}^0$, $\boldsymbol X^0$ and $\boldsymbol U^0$ \Comment{Initialization}
\For {$m=0,...,N_t-1$}
\State $\boldsymbol X^{m+1/2}\gets \boldsymbol X^{m}+\frac{\boldsymbol U^m \Delta t}{2}$
\State Compute $\boldsymbol F^{m+1/2}$ from $\boldsymbol X^{m+1/2}$ \Comment{e.g. potential force or target points}
\State Compute $\hat{\boldsymbol f}^{m+1/2}$ from $\boldsymbol F^{m+1/2}$ \Comment{Type-1 NUFFT as \cref{eq:force_spreading_quad}}
\State $\hat{\boldsymbol u}^{m+1/2}\gets (\frac{\Delta t}{2}\hat{V} (\hat{\boldsymbol f}^{m+1/2},\hat{\boldsymbol u}^m)+\hat{\boldsymbol u}^{m})/(1+\frac{\mu \Delta t |\boldsymbol k|^2}{2\rho})$ \Comment{\cref{eq:spatial_2}}
\State $\hat{\boldsymbol u}^{m+1}= (\Delta t \hat{V} (\hat{\boldsymbol f}^{m+1/2},\hat{\boldsymbol u}^{m+1/2})+(1-\frac{\mu\Delta t |\boldsymbol k|^2}{2\rho})\hat{\boldsymbol u}^m)/(1+\frac{\mu\Delta t |\boldsymbol k|^2}{2\rho})$ \Comment{\cref{eq:spatial_3}}
\State Compute $\boldsymbol U^{m+1/2}$ and $\boldsymbol U^{m+1}$ from $\hat{\boldsymbol u}^{m+1/2}$ and $\hat{\boldsymbol u}^{m+1}$ \Comment{Type-2 NUFFT as \cref{eq:velo_intp_fini}}
\State $\boldsymbol X^{m+1}\gets \boldsymbol X^m+\boldsymbol U^{m+1/2} \Delta t $
\EndFor
\end{algorithmic}
\end{algorithm}
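As claimed above, the implicit steps in \cref{alg:FSIB} are diagonal in Fourier space. The fragment below is a minimal NumPy transcription of the two updates, with placeholder arrays standing in for the two evaluations of $\hat V$; it is an illustration under assumed names, not the code used for the experiments in this paper.
\begin{verbatim}
import numpy as np

N, dt, mu, rho, L = 16, 1e-3, 1.0, 1.0, 1.0
k1d = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2

u_hat = np.zeros((3, N, N, N), dtype=complex)    # placeholder for u-hat^m
Vhat_half = np.zeros_like(u_hat)                 # placeholder for V-hat(f^{m+1/2}, u^m)
Vhat_full = np.zeros_like(u_hat)                 # placeholder for V-hat(f^{m+1/2}, u^{m+1/2})

denom = 1 + mu * dt * K2 / (2 * rho)             # broadcasts over the vector components
u_hat_half = (0.5 * dt * Vhat_half + u_hat) / denom                      # backward Euler half step
u_hat_new = (dt * Vhat_full + (1 - mu * dt * K2 / (2 * rho)) * u_hat) / denom  # Crank-Nicolson step
\end{verbatim}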
Moreover, the function $\hat{V}(\hat{\boldsymbol f}, \hat{\boldsymbol u})$ is called twice at every time step, so we list its computation as \cref{alg:projector}. We first compute the nonlinear term using the ``three-halves" dealiasing rule. We then project $\hat{\boldsymbol f}/\rho-\hat{\boldsymbol s}$ onto the divergence-free space with the projection operator. Note that we need to set $|\boldsymbol k|^2=1$ when $\boldsymbol k=0$. It is worth noting that all steps are implemented as array operations to reach optimal efficiency.
\begin{algorithm}
\caption{Computation of $\hat{V}(\hat{\boldsymbol f}, \hat{\boldsymbol u})$}\label{alg:projector}
\begin{algorithmic}
\Procedure{$\hat{V}$}{$\hat{\boldsymbol f},\hat{\boldsymbol u}$}
\State Compute the nonlinear term $\hat{\boldsymbol s}(\hat{\boldsymbol u})$ as discussed in \cref{sec:spatial_discretization}
\State $P \gets \frac{\hat{\boldsymbol f}}{\rho}-\hat{\boldsymbol s}(\hat{\boldsymbol u})$ \Comment{The vector to be projected}
\State $Q\gets P-\frac{\boldsymbol k \boldsymbol k^T}{|\boldsymbol k|^2}P$, set $|\boldsymbol k|^2=1$ when $\boldsymbol k=0$
\State \textbf{return} $Q$
\EndProcedure
\end{algorithmic}
\end{algorithm}
The main calculations include the NUFFT algorithms and matrix operations of the fluid solver, both of which have complexity $O(N^3\log(N))$. Therefore, the overall computational complexity for the FSIB method is $O(N^3\log(N))$ per time step in three dimensions.
\section{Relationship to the standard IB method}\label{sec:relation}
It may appear that the FSIB method is \textit{not} an IB method
at all, since the hallmark of the IB method is the discretization
of \cref{eq:force_spreading} through regularization of the Dirac delta function,
but no such regularization appears explicitly in the FSIB method.
In this section, however, we will show that the FSIB method
implicitly does involve such a regularization, which comes from the
bandlimited representation of the velocity and force fields that
appear in \cref{eq:force_spreading_1,eq:velocity_interpolation_1}. This regularization is equivalent
to the use of a `$\mathtt{sinc}$' function instead of the kernels that are
normally used in the IB method. Note that the `$\mathtt{sinc}$' function has
unbounded support and decays slowly, so a naive implementation based on actual
use of the `$\mathtt{sinc}$' function would be expensive, but this expense
is avoided by using the NUFFT algorithm. Even though the `$\mathtt{sinc}$' function
is not used explicitly in the FSIB implementation, it is
instructive to see the close relationship between the standard
IB method and the FSIB method. Indeed, as we shall see, the `$\mathtt{sinc}$'
function satisfies an infinite number of moment conditions of the same
kind as are satisfied by some standard IB kernels (but the
standard kernels only satisfy a finite number of these conditions).
Moreover, the `$\mathtt{sinc}$' kernel satisfies a sum-of-squares condition
that is also satisfied by standard IB kernels.
For simplicity, we discuss the relationship between the standard
IB method and the FSIB method in a one-dimensional context.
Everything we say extends directly to the two-dimensional and
to the three-dimensional case, since the kernels used in two
or in three space dimensions are simply tensor products of
one-dimensional kernels.
In the standard IB method, the Dirac delta function that appears in \cref{eq:force_spreading} is approximated by regularized delta functions \citep{peskinImmersedBoundaryMethod2002}, which we denote by $\delta^{(n)}_h$. Here $h=L/N$ is the Eulerian mesh width. $N$ is the number of Eulerian grid points in each spatial dimension and we use the same notation as the number of Fourier modes in the FSIB method deliberately because we will demonstrate that these parameters play the same role in the regularization. The integer $n\geq 3$ represents the support of $\delta^{(n)}_h$ in terms of meshwidths. These kernels are constructed for each $n$ from a corresponding function $\phi$ that is independent of $h$ according to the recipe $\delta^{(n)}_h(x)=\phi(x/h)/h$, where $\phi$ is continuous and finitely supported as $\phi(r)=0$ for $|r|\geq n/2$. Note that $\phi$ depends on $n$ for the support and for the conditions (see below) but we leave this dependence understood. Then the functions $\phi(x)$ are uniquely determined by some particular selection of the following conditions depending on whether $n$ is odd or even.
\begin{equation}\label{eq:tradIBkernels}
\begin{split}
\sum_{j} \phi(r-j)=1,&\quad (\romannumeral 1)\\
\sum_{j \text { even }} \phi(r-j)=\sum_{j \text { odd }} \phi(r-j)=\frac{1}{2},\quad n \text{ even},&\quad (\romannumeral 2) \\
\sum_{j}(r-j)^\alpha \phi(r-j)=0,\quad \alpha = 1,\ldots,n ,&\quad (\romannumeral 3) \\
\sum_j \phi^2(r-j)=C,&\quad (\romannumeral 4) \\
\end{split}
\end{equation}
where $C$ is a positive constant to be determined\footnote{The idea of considering this whole family of IB delta functions parameterized by $n$ is due to John Stockie (unpublished communication), who wrote a Maple code to generate any one of them.}.
The purposes of these conditions are discussed in detail in \citep{peskinImmersedBoundaryMethod2002}. In general, the zeroth moment condition (\romannumeral 1) guarantees that the total force spread to the grid equals the total Lagrangian force. The even-odd condition (\romannumeral 2) is stronger than condition (\romannumeral 1) and prevents the decoupling of the two sub-grids consisting of the grid points with even or odd indices. The moment condition (\romannumeral 3) with $\alpha=1$ enforces the conservation of total torque. Higher-order moment conditions yield higher-order interpolation accuracy and help determine a kernel uniquely. The sum-of-squares condition (\romannumeral 4) provides an upper bound for the pair coupling of the Lagrangian points.
If $n$ is odd, we use the zeroth moment condition ($\romannumeral 1$), the moment conditions ($\romannumeral 3$) for $\alpha = 1,...,n$ and the sum-of-squares condition ($\romannumeral 4$). If $n$ is even, we use the even-odd condition ($\romannumeral 2$) which implies condition ($\romannumeral 1$), the moment condition ($\romannumeral 3$) for $\alpha=1,...,n$ and the sum-of-squares condition ($\romannumeral 4$). Either case determines a constant $C$ and a unique kernel, which is called the standard n-point kernel $\delta^{(n)}_h$.
With these kernels, the force spreading and the velocity interpolation approximations are given between the Eulerian grids and the Lagrangian grids as
\begin{equation}\label{eq:force_spreading_IB}
\left\{\begin{array}{l}
f(x_m)=\sum_{j=1}^{N_b} F(\theta_j) \delta_h^{(n)}\left(x_m-X(\theta_j)\right) \Delta \theta,\\
U( \theta_j ) =\sum_{m=1}^{N}u(x_m) \delta_h^{(n)}(x_m-X(\theta_j)) h ,
\end{array}\right.
\end{equation}
where an equispaced Lagrangian grid is used so that the quadrature weight is $\Delta \theta$. Unlike the IB method, the FSIB method performs these two steps in Fourier space. Nevertheless, we can recast them in physical space in a form analogous to the IB method. Consider the force spreading \cref{eq:force_spreading_quad} in one dimension with an equispaced Lagrangian grid, so that $\omega_j=\Delta \theta$ and, by the Fourier series, $f(x_m)=\sum_{k \in \mathcal{K}_N} \hat{f}(k)\exp(i\frac{2\pi}{L}kx_m)$. Interchanging the order of summation, we get
\begin{equation}\label{eq:forc_sprea_eule_FSIB}
f(x_m)=\sum_{j=1}^{N_b} F(\theta_j) \delta_{L,N}\left(x_m-X(\theta_j)\right) \Delta \theta,
\end{equation}
where $\delta_{L,N}(x)=\frac{1}{L}\sum_{k \in \mathcal{K}_N}\exp(i\frac{2\pi}{L}kx)$ is the equivalent kernel in physical space that we are looking for. Similarly, we find the velocity interpolation \cref{eq:velo_intp_fini} is written in physical space as
\begin{equation}\label{eq:velo_inte_eule_FSIB}
U( \theta_j ) =\sum_{m=1}^{N}u(x_m) \delta_{L,N}(x_m-X(\theta_j)) h.
\end{equation}
Thus, we conclude that the FSIB method is an analog of the IB method with a new kernel $\delta_{L,N}$, because the force spreading \cref{eq:forc_sprea_eule_FSIB} and the velocity interpolation \cref{eq:velo_inte_eule_FSIB} have the same form as those of the IB method \cref{eq:force_spreading_IB} but with a different kernel. For simplicity, we now assume that $N$ is odd, and the FSIB kernel is then computed as
\begin{equation}\label{eq:FSIB_kern_odd}
\begin{aligned}
\delta_{L, N}(x) &=\frac{1}{L} \sum_{k=-\frac{N-1}{2}}^{\frac{N-1}{2}} e^{i \frac{2 \pi}{L} k x} \\
&=\frac{1}{L} \frac{\sin \left(\frac{N \pi}{L} x\right)}{\sin \left(\frac{\pi}{L} x\right)}.
\end{aligned}
\end{equation}
Now it is easy to see that the FSIB kernel is real, even, and periodic with period $L$. Moreover, we will show that it satisfies the conditions of the standard kernels in \cref{eq:tradIBkernels} except for the odd-even condition (\romannumeral 2) which it does not need to satisfy. First, this continuous kernel has the property that its integral over any one period is equal to $1$. This is obvious from \cref{eq:FSIB_kern_odd} since only the term $k=0$ contributes to the integral. Besides the continuous integral, the discrete zeroth moment also gives
\begin{equation}
\sum_{j=-\frac{N-1}{2}}^{\frac{N-1}{2}} \delta_{L, N}\left(x-j h\right) \cdot h= \sum_{k=-\frac{N-1}{2}}^{\frac{N-1}{2}} e^{i \frac{2 \pi}{L} k x}\left(\frac{1}{N} \sum_{j=-\frac{N-1}{2}}^{\frac{N-1}{2}} e^{-i \frac{2 \pi}{N} j k}\right) =1,
\end{equation}
since the inner summation is equal to $0$ except for the term $k=0$, and then it is equal to $1$. Note that this identity holds for all real $x$ and thus it is the same as the zeroth condition (\romannumeral 1) of the IB method in \cref{eq:tradIBkernels} after normalization by $h$. The even-odd condition (\romannumeral 2) is not satisfied by the FSIB kernel. This is because the even-odd condition is for the `checkerboard' issue of the Eulerian grid but the FSIB method is gridless, which makes the even-odd condition not necessary.
An even more remarkable property of $\delta_{L,N}$ related to the translation invariance that is discussed in detail in \cref{sec:trans_invar} is the following:
\begin{equation}\label{eq:tran_inva_kern_FSIB}
\begin{aligned}
&\sum_{j=-\frac{N-1}{2}}^{\frac{N-1}{2}} \delta_{L, N}\left(x_1-j h\right) \delta_{L, N}\left(x_2-j h\right) h \\
&=\frac{1}{L} \sum_{k_1=-\frac{N-1}{2}}^{\frac{N-1}{2}} \sum_{k_2=-\frac{N-1}{2}}^{\frac{N-1}{2}} e^{i \frac{2 \pi}{L}\left(k_1 x_1+k_2 x_2\right)}\left(\frac{1}{N} \sum_{j=-\frac{N-1}{2}}^{\frac{N-1}{2}} e^{-i \frac{2 \pi}{N}\left(k_1+k_2\right) j}\right) \\
&=\frac{1}{L}\sum_{k=-\frac{N-1}{2}}^{\frac{N-1}{2}} e^{i \frac{2 \pi}{L} k\left(x_1-x_2\right)}\\
&=\delta_{L, N}\left(x_1-x_2\right),
\end{aligned}
\end{equation}
where we use the fact that the sum over $j$ on the second line is zero unless $k_1+k_2=0$. This identity essentially means that the pair interaction between two points $x_1$ and $x_2$ does not depend on the location of the Eulerian grid, which implies translation invariance. The sum-of-squares condition (\romannumeral 4) in \cref{eq:tradIBkernels} of the IB method only provides an upper bound on the pair coupling, which is a much weaker condition than translation invariance. Indeed, setting $x_1=x_2$ in \cref{eq:tran_inva_kern_FSIB}, we get
\begin{equation}\label{eq:sum_square_FSIB}
\sum_{j=-\frac{N-1}{2}}^{\frac{N-1}{2}} \delta_{L, N}^2\left(x-j h\right) h=\delta_{L, N}(0)=\frac{1}{h},
\end{equation}
which proves the FSIB kernel satisfies the sum-of-squares condition of the IB kernel as well.
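These identities are easy to check numerically. The short NumPy sketch below (illustrative values only, not part of the paper's code) evaluates $\delta_{L,N}$ via the closed form in \cref{eq:FSIB_kern_odd} and confirms the discrete zeroth moment and sum-of-squares identities at an arbitrary point $x$.
\begin{verbatim}
import numpy as np

L, N = 1.0, 9
h, half = L / N, (N - 1) // 2
j = np.arange(-half, half + 1)

def delta_LN(x):
    """Closed form (1/L) sin(N pi x / L) / sin(pi x / L), safe at multiples of L."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.full_like(x, N / L)                        # limit value at multiples of L
    mask = np.abs(np.sin(np.pi * x / L)) > 1e-14
    out[mask] = np.sin(N * np.pi * x[mask] / L) / (L * np.sin(np.pi * x[mask] / L))
    return out

x = 0.237                                              # arbitrary evaluation point
print(np.sum(delta_LN(x - j * h)) * h)                 # -> 1.0      (zeroth moment)
print(np.sum(delta_LN(x - j * h) ** 2) * h, 1 / h)     # both -> 1/h (sum of squares)
\end{verbatim}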
It is interesting to consider the limit in which the period $L$ goes to infinity with a fixed meshwidth $h=L/N$. This is a transition from the periodic boundary condition to free space, and we have
\begin{equation}\label{eq:limit_FSIB_kern}
\lim _{L, N \rightarrow \infty} \delta_{L, N}(x)=\frac{\sin (\pi x / h)}{\pi x}=\frac{1}{h} \frac{\sin (\pi x / h)}{(\pi x / h)}=\frac{1}{h} \mathtt{sinc}\left(\frac{\pi x}{h}\right),
\end{equation}
which we denote as the `$\mathtt{sinc}$' kernel $\delta_h(x)=\frac{1}{h}\mathtt{sinc}(\frac{\pi x}{h})$. This is not a coincidence but points to the essence of the FSIB method. Note that the force spreading \cref{eq:forc_sprea_eule_FSIB} and the velocity interpolation \cref{eq:velo_inte_eule_FSIB} were first written as summations over one period, which means the convolutions with the $\delta_{L,N}$ kernel are carried out over one period. One may also take a different point of view, in which periodic functions such as the force $f$ or the velocity $u$ are regarded as functions with repeated images in real space $\mathbb{R}$, so that the force spreading and the velocity interpolation become convolutions over $\mathbb{R}$ rather than over one period. We know that the truncation to $\mathcal{K}_{N}$ in Fourier space is equivalent to multiplication with a square (boxcar) function as a filter. This square function is centered at $k=0$ and, no matter whether $N$ is odd or even, takes the form
\begin{equation*}
\tilde{\delta}_h(k)= \left\{
\begin{array}{ll}
1 &\text{, if } -\frac{\pi}{h}\le k< \frac{\pi}{h}, \\
0 &\text{, else,}
\end{array}\right.
\end{equation*}
where the reason for the notation $\tilde{\delta}_h$ will become clear shortly. By the convolution theorem, multiplication in Fourier space is equivalent to convolution in physical space $\mathbb{R}$ with the inverse Fourier transform. The inverse transform of $\tilde{\delta}_h(k)$ is
$$\mathcal{F}^{-1}\circ \tilde{\delta}_h = \frac{1}{2\pi}\int_{-\pi/h}^{\pi/h} \exp(ikx)\,dk=\frac{\sin(x\pi/h)}{x\pi}=\frac{1}{h}\mathtt{sinc}(\frac{\pi x}{h})=\delta_h(x),$$
where $\mathcal{F}$ denotes the Fourier transform and $\mathcal{F}^{-1}$ denotes the inverse Fourier transform. Thus, it turns out the inverse Fourier transform of $\tilde{\delta}_h(k)$ is the `$\mathtt{sinc}$' kernel $\delta_h(x)$ and this is exactly why it is denoted as $\tilde{\delta}_h$ as the $\tilde{\cdot}$ represents the Fourier transform operator. Note that in the previous context we use $\hat{\cdot}$ to represent Fourier series coefficients. Therefore, the convolutions of the `$\mathtt{sinc}$' kernel should be an integral in the whole space as
\begin{align*}
&f(x)=\sum_{n=-\infty}^{\infty}\int_{\theta \in \Gamma}F(\theta)\delta_{h}(x-(X(\theta)+nL))d \theta,\\
&U( \theta ) = \int_{-\infty}^{\infty} u(x, t) \delta_h(x-X(\theta)) d x .\\
\end{align*}
Note that the `$\mathtt{sinc}$' kernel $\delta_h$ is non-periodic and defined on the whole space, while the $\delta_{L,N}$ kernel is the `periodized' version of $\delta_h$. In the standard IB method the difference before and after periodization is minor because the IB kernel $\delta_h^{(n)}$ is supported on only a few meshwidths. This is not the case for the global FSIB kernel, so it is important to specify whether an integral is taken over the whole space or over one period.
It is easy to show that the `$\mathtt{sinc}$' kernel $\delta_h$ satisfies the IB kernel moment conditions (\romannumeral 1) and (\romannumeral 3) in \cref{eq:tradIBkernels} in continuous-integral form, i.e. $\int_{\mathbb{R}} \delta_h(x) \,dx=1$ and $\int_{\mathbb{R}} x^{\alpha}\delta_h(x) \,dx=0$ for $\alpha=1,2,\ldots$. This is because the $\alpha$-th order moment in physical space is proportional to the $\alpha$-th order derivative of $\tilde{\delta}_h$ evaluated at $k=0$, and we know that $\tilde{\delta}_h(0)=1$ and $\tilde{\delta}^{(\alpha)}_h(0)=0$.
In fact, the `$\mathtt{sinc}$' kernel satisfies the discrete IB kernel conditions in \cref{eq:tradIBkernels} as well as the corresponding continuous conditions. Consider any function $f: \mathbb{R}\rightarrow \mathbb{R}$ and periodize it with an arbitrary period $a$ as $\sum_{j=-\infty}^{\infty}f(x-ja)$. Since the result is periodic, we can expand it in a Fourier series as
\begin{equation}
\sum_{j=-\infty}^{\infty} f(x-j a)=\sum_{k=-\infty}^{\infty} c_k e^{2 \pi i k \frac{x}{a}},
\end{equation}
where
\begin{equation}
\begin{aligned}
c_k&=\frac{1}{a} \int_0^{a} \sum_{j=-\infty}^{\infty} f(x-j a) e^{-2 \pi i k \frac{x}{a}} d x \\
&=\frac{1}{a} \sum_{j=-\infty}^{\infty} \int_0^a f(x-j a) e^{-2 \pi i k \frac{x-j a}{a}} d x \\
&=\frac{1}{a} \int_{-\infty}^{\infty} f(x) e^{-2 \pi i k \frac{x}{a}} d x=\frac{1}{a} \tilde{f}\left(\frac{2 \pi k}{a}\right).
\end{aligned}
\end{equation}
Thus, we know that the periodized function can be written as
\begin{equation}\label{eq:peri_four_coef}
\sum_{j=-\infty}^{\infty} f(x-j a)=\frac{1}{a} \sum_{k=-\infty}^{\infty} \tilde{f}\left(\frac{2 \pi k}{a}\right) e^{2 \pi i k \frac{x}{a}} .
\end{equation}
This identity helps us derive interesting properties of $\delta_h$. Most obviously, let $f=\delta_h$ and $a=L$, then
\begin{equation}
\sum_{j=-\infty}^{\infty} \delta_h(x-j L)=\frac{1}{L} \sum_{k=-\infty}^{\infty} \tilde{\delta}_h\left(\frac{2 \pi k}{L}\right) e^{2 \pi i k \frac{x}{L}}=\frac{1}{L}\sum_{k=-\frac{N-1}{2}}^{\frac{N-1}{2}}e^{2\pi i k \frac{x}{L}}=\delta_{L,N}(x),
\end{equation}
where the second identity comes from the fact that $\tilde{\delta}_h$ is a square function. Recall that $\delta_h$ can be obtained by the limit of $\delta_{L,N}$ as $L,N\rightarrow \infty$ with $L/N=h$. Here we prove that $\delta_{L,N}$ can be reconstructed from $\delta_h$ by periodization, which we stated before without proof.
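A quick numerical illustration of this periodization identity (our own sketch, with a truncated image sum) is the following: summing shifted copies of the `$\mathtt{sinc}$' kernel $\delta_h$ over many periods reproduces $\delta_{L,N}$ up to the truncation error of the image sum.
\begin{verbatim}
import numpy as np

L, N = 1.0, 9
h, half = L / N, (N - 1) // 2
x = np.linspace(0.01, L - 0.01, 400)

# delta_{L,N} from its Fourier-sum definition
k = np.arange(-half, half + 1)
delta_LN = np.real(np.exp(1j * 2 * np.pi / L * np.outer(x, k)).sum(axis=1)) / L

# periodized sinc kernel: sum_j delta_h(x - j*L), truncated to |j| <= 2000
images = np.arange(-2000, 2001)
delta_h_images = np.sinc((x[:, None] - images[None, :] * L) / h) / h  # np.sinc(z)=sin(pi z)/(pi z)
print(np.max(np.abs(delta_h_images.sum(axis=1) - delta_LN)))  # small; limited by the truncation
\end{verbatim}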
Another application of \cref{eq:peri_four_coef} is to prove the moment conditions. Let $a=h$ and let $f(x)=x^\alpha \delta_h(x)$ for any non-negative integer $\alpha$, then
\begin{equation}
\sum_{j=-\infty}^{\infty}(x-j h)^\alpha \delta_h(x-j h) h=\sum_{k=-\infty}^{\infty}\widetilde{\left(x^\alpha \delta_h\right)}\left(\frac{2 \pi k}{h}\right) e^{2 \pi i k \frac{x}{h}}=\left\{
\begin{aligned}
&1,\quad \alpha=0\\
&0,\quad \alpha=1,2,3,...
\end{aligned}
\right.
\end{equation}
When $\alpha=0$, the term in the summation is $1$ only when $k=0$ and is $0$ otherwise, so we prove the zeroth moment condition (\romannumeral 1). When $\alpha\ge 1$, it is always zero because the $\alpha$ order moment is proportional to the $\alpha$ order derivative of $\tilde{\delta}_h$ which is zero. So we prove that the moment conditions in \cref{eq:tradIBkernels} are all satisfied for the `$\mathtt{sinc}$' kernel $\delta_h$.
So we have checked all the IB conditions for odd $n$, i.e. conditions (\romannumeral 1), (\romannumeral 3), and (\romannumeral 4) are satisfied by the FSIB kernel. Recall that for every $n$, these conditions uniquely determine an IB kernel $\delta_{h}^{(n)}$. Note that the IB kernels are compactly supported while the FSIB kernel is global. Thus it is reasonable to conjecture that
\begin{equation}
\lim _{\substack{n \rightarrow \infty \\ n \text { odd }}} \delta_h^{(n)}(x)=\delta_h(x).
\end{equation}
We give a visual demonstration of this in \cref{fig:kernels_comp}, where the standard IB kernels with $n=3,7,11,15$ are shown along with the FSIB `$\mathtt{sinc}$' kernel. Note that we nondimensionalize by plotting $\delta_h(x)\cdot h$ vs $x/h$. As $n$ gets bigger, the standard IB kernel develops more wiggles, has wider support, and is closer to the FSIB `$\mathtt{sinc}$' kernel. Also, the central peak, which represents the effective width of the immersed boundary, becomes narrower and narrower. Note that the central peak of the `$\mathtt{sinc}$' kernel has a width of approximately one meshwidth $h=L/N$, which is the best we can achieve numerically. This is how the FSIB method improves the resolution of the singularity on the immersed boundary.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{fig/kernels_comp_5.eps}
\caption{$\delta_h(x)\cdot h$ vs $x/h$. The new `$\mathtt{sinc}$' kernel (the black dashed line) is compared with standard kernels (the solid lines) with an odd number of points: 3-point kernel (blue), 7-point kernel (red), 11-point kernel (yellow), and 15-point kernel (purple).}
\label{fig:kernels_comp}
\end{figure}
\vspace{6pt}
\textbf{Remark 1.} One may object that the quadrature makes \cref{eq:force_spreading_quad} equivalent to computing the Fourier transform of $N_b$ discrete singular point forces, i.e. $\sum_{q=1}^{N_b} \omega_q \boldsymbol F(\boldsymbol \theta_q)\delta (\boldsymbol x-\boldsymbol X(\boldsymbol \theta_q))$. This seems not only to make the force singular in the direction perpendicular to the immersed boundary, which is what we desire, but also to make it singular along the immersed boundary, which is not physical. For example, a singular point force generates infinite flow near the singularity in Stokes flow, and for Navier-Stokes flow, singular point forces on the immersed boundary cause the flow to leak between them. As discussed above, however, the new method replaces the regularized delta functions used in the IB method with the $\mathtt{sinc}$ function, and therefore it smooths the singular boundary. More importantly, the effective width of the singular immersed boundary in the FSIB method is approximately $L/N$, which is the best we can do numerically. This provides some insight into how to choose the number of Fourier modes $N$ and the number of Lagrangian grid points $N_b$ together. The number of Fourier modes determines the resolution $L/N$ in physical space, and it should match the Lagrangian meshwidth $h_b$. If $h_b\ll L/N$, the Fourier space is not well resolved and there is still a large aliasing error; as we add Fourier modes, the aliasing error decreases. Nevertheless, the refinement should not go past the matching point, because the regularized point force has an effective radius of about $L/N$, so that if $h_b\gg L/N$ the immersed boundary is no longer effectively continuous and severe leakage occurs. This will be shown in our numerical examples in \cref{sec:numerical}.
\section{Properties of the FSIB method} \label{sec:prop_FSIBM}
\subsection{Divergence-free condition and volume conservation.}
An advantage of the FSIB method is that the velocity field is analytically divergence-free. The Fourier spectral method enforces $\hat{\boldsymbol u}(\boldsymbol k)\cdot \boldsymbol k=0$ automatically in Fourier space, and the velocity $\boldsymbol U(\boldsymbol x)$ at any point $\boldsymbol x$ is interpolated by evaluating the Fourier series as in \cref{eq:velo_intp_fini}. Thus, $\nabla \cdot \boldsymbol U=0$ holds analytically everywhere in physical space.
As discussed in the introduction, Bao et al. \citep{baoImmersedBoundaryMethod2017} have proposed an IB method with a divergence-free velocity field. In that method, a vector potential is introduced and the velocity field $\boldsymbol U$ is computed by the curl of the vector potential, thus satisfying the analytical divergence-free condition. Our new Fourier spectral method achieves the same result without the use of a vector potential. A direct consequence of the divergence-free velocity is volume conservation. We observe that the volume leakage is eliminated in numerical tests in \cref{sec:numerical}.
\subsection{Translation invariance} \label{sec:trans_invar}
Exact translation invariance cannot be achieved in the standard IB method because the regularized delta function has finite support. Here, however, we have a kernel with unbounded support, see \cref{sec:relation}. We will prove the translation invariance for Stokes flow and for Navier-Stokes flow separately in the following sections.
\subsubsection{Exact translation invariance for Stokes flow}
Consider a force density $\boldsymbol f$ that is moved by displacement $\boldsymbol r$, then denote the new force density $\boldsymbol f_r(\boldsymbol x)=\boldsymbol f(\boldsymbol x-\boldsymbol r)$. The exact translation invariance means that the solution of our new method, denoted as $\boldsymbol u_r$, should also move the same displacement, i.e. $\boldsymbol u_r(\boldsymbol x)=\boldsymbol u(\boldsymbol x-\boldsymbol r)$.
Denote the truncation operator in Fourier space as $\mathcal{T}_{N}$. In Fourier space, it is convenient that the displacement simply becomes a phase shift as
\begin{equation*}
\hat{\boldsymbol f}_r(\boldsymbol k)=\hat{\boldsymbol f}(\boldsymbol k) \exp(-i\boldsymbol k \cdot \boldsymbol r).
\end{equation*}
Also, the spectral solver in Fourier space simply becomes the Stokeslet kernel in \cref{eq:stokeslet}, denoted as operator $\mathcal{M}$. Thus, we get the solution as
\begin{equation*}
\hat{\boldsymbol u}_r(\boldsymbol k) = \mathcal{T}_{N} \circ \mathcal{M} \circ \mathcal{T}_{N} \circ \hat{\boldsymbol f}_r (\boldsymbol k).
\end{equation*}
The mobility operator $\mathcal{M}$ is diagonalized in Fourier space and thus we have $\hat{\boldsymbol u}_r(\boldsymbol k)=\hat{\boldsymbol u}(\boldsymbol k) \exp(-i\boldsymbol k \cdot \boldsymbol r)$. This proves the desired exact translational invariance $\boldsymbol u_r(\boldsymbol x)=\boldsymbol u(\boldsymbol x-\boldsymbol r)$.
Note that the duality of the spreading and interpolation operators is shown vividly above, as both appear as the same truncation operator $\mathcal{T}_{N}$. The last truncation operator, however, does nothing in the case of Stokes flow, because the output of the $\mathcal{M}$ operator has the same Fourier modes as the input. This is not the case for Navier-Stokes flow.
\subsubsection{Exact translation invariance for Navier-Stokes flow}
For Navier-Stokes flow, at each time step we are given $\hat{\boldsymbol u}$ and $\hat{\boldsymbol f}$. The translation gives $\boldsymbol u_r(\boldsymbol x,t)=\boldsymbol u(\boldsymbol x-\boldsymbol r,t)$ and $\boldsymbol f_r(\boldsymbol x,t)=\boldsymbol f(\boldsymbol x-\boldsymbol r,t)$. Similarly, the phase-shifted functions in Fourier space are $\hat{\boldsymbol u}_r(\boldsymbol k,t)=\hat{\boldsymbol u}(\boldsymbol k,t) \exp(-i\boldsymbol k \cdot \boldsymbol r)$ and $\hat{\boldsymbol f}_r(\boldsymbol k,t)=\hat{\boldsymbol f}(\boldsymbol k,t) \exp(-i\boldsymbol k \cdot \boldsymbol r)$. We need to prove that the solution of the FSIB method, i.e. the time derivative $\partial \boldsymbol u/\partial t$, satisfies the translation condition at every location:
\begin{equation}\label{eq:trans_cond}
\frac{\partial \boldsymbol u_r(\boldsymbol x,t)}{\partial t}=\frac{\partial \boldsymbol u (\boldsymbol x-\boldsymbol r,t)}{\partial t}.
\end{equation}
We denote the spectral solver in \cref{eq:fourier_spectral_sol} as an operator $\mathcal{V}$. Thus,
\begin{equation*}
\frac{\partial \hat{\boldsymbol u}_r(\boldsymbol k,t)}{\partial t} = \mathcal{T}_{N} \circ \mathcal{V} \circ \mathcal{T}_{N} \circ \hat{\boldsymbol f}_r (\boldsymbol k).
\end{equation*}
Note that all the linear terms in the spectral solver preserve the factor $\exp(-i\boldsymbol k \cdot \boldsymbol r)$ and so do not compromise the identity \cref{eq:trans_cond}. To prove the translation condition in \cref{eq:trans_cond}, we therefore only need to show that the nonlinear term satisfies $\hat{\boldsymbol s}_r(\boldsymbol k) = \hat{\boldsymbol s}(\boldsymbol k) \exp (-i\boldsymbol k \cdot \boldsymbol r)$. By the convolution theorem, multiplication in physical space becomes convolution in Fourier space, so that
\begin{equation*}
\begin{array}{ll}
\hat{\boldsymbol s}_r(\boldsymbol k) &= \sum_{\boldsymbol k_{1}} \hat{\boldsymbol u}_r (\boldsymbol k- \boldsymbol k_1) \cdot i \boldsymbol k_{1} \hat{\boldsymbol u}_{r}\left(\boldsymbol k_{1}\right)\\
&=\sum_{\boldsymbol k_{1}} \hat{\boldsymbol u}(\boldsymbol k-\boldsymbol k_{1}) \cdot i \boldsymbol k_{1} \hat{\boldsymbol u} (\boldsymbol k_{1}) \cdot \exp \left(-i(\boldsymbol k-\boldsymbol k_{1} )\cdot \boldsymbol r \right)\cdot \exp \left(-i \boldsymbol k_{1} \cdot \boldsymbol r \right) \\
&=\sum_{\boldsymbol k_{1}} \hat{\boldsymbol u}\left(\boldsymbol k-\boldsymbol k_{1}\right) \cdot i \boldsymbol k_1 \hat{\boldsymbol u}\left(\boldsymbol k_{1}\right) \exp (-i \boldsymbol k \cdot \boldsymbol r)\\
&=\hat{\boldsymbol s}(\boldsymbol k) \cdot \exp (-i \boldsymbol k \cdot \boldsymbol r). \\
\end{array}
\end{equation*}
Thus, we prove the translation invariant condition in \cref{eq:trans_cond}.
The duality of the spreading and the interpolation operators is also satisfied. Moreover, the last truncation in Fourier space takes effect because the nonlinear term will create higher frequency modes and we leave them out by the last truncation operator. This is different from the Stokes case where it has no effect.
The above proof of translation invariance is in the setting of continuous time. Thus, the exact translation invariance for an arbitrary temporal integrator is not guaranteed, but at least the translation invariance holds in the limit $\Delta t \rightarrow 0$.
\subsection{Conservation of momentum}
Note that the average of a function over the periodic box is its zero-frequency Fourier coefficient, which gives
\begin{equation}
\begin{aligned}
&\hat{\boldsymbol f}(0, t)=\frac{1}{L^{3}} \iiint_{\Omega_L} \boldsymbol f(\boldsymbol x,t) d\, \boldsymbol x = \frac{1}{L^{3}} \sum_{j=1}^{N_b} \omega_j \boldsymbol F_{j} (t),\\
&\hat{\boldsymbol u}(0, t)=\frac{1}{L^{3}} \iiint_{\Omega_L} \boldsymbol u(\boldsymbol x,t) d\, \boldsymbol x .
\end{aligned}
\end{equation}
Therefore, to prove the conservation of momentum, we just need the equation satisfied by the zero-frequency modes of the velocity and the force. Setting $\boldsymbol k=0$ in \cref{eq:fourier_spectral_sol}, we immediately obtain the equation for the zero frequency:
\begin{equation}
\rho\left(\frac{\partial \hat{\boldsymbol u}}{\partial t}(0, t)+\hat{\boldsymbol s}(0, t)\right)=\hat{\boldsymbol f}(0, t).
\end{equation}
Note that the term $\boldsymbol k \boldsymbol k^T/|\boldsymbol k|^2$ drops out because, with our convention, its numerator vanishes at $\boldsymbol k=0$. It is then easy to prove that the nonlinear term vanishes, $\hat{\boldsymbol s}(0,t)=0$:
\begin{equation}
\begin{aligned}
\hat{\boldsymbol s}_{\alpha}(0, t) &= \frac{1}{L^{3}} \iiint_{\Omega_L} \boldsymbol u (\boldsymbol x, t) \cdot\left(\nabla \boldsymbol u_{\alpha}\right)(\boldsymbol x, t) d \boldsymbol x \\
&=\frac{1}{L^{3}} \iiint_{\Omega_L} \nabla \cdot\left(\boldsymbol u(\boldsymbol x, t) \boldsymbol u_{\alpha}(\boldsymbol x, t)\right) d \boldsymbol x=0,
\end{aligned}
\end{equation}
where $\alpha = 1,2,3$, we have used the divergence-free condition $\nabla \cdot \boldsymbol u = 0$ to rewrite the integrand in divergence form, and the last integral vanishes by the divergence theorem and periodicity.
Now, all ingredients are prepared to prove the conservation of momentum, which is to prove the statement that the rate of change in time of fluid momentum is equal to the total force applied to the fluid.
\begin{equation}
\frac{d}{d t} \left( \iiint_{\Omega_L} \rho \boldsymbol u (\boldsymbol x,t) d \boldsymbol x \right) = \frac{d}{d t} \left( \rho L^3 \hat{\boldsymbol u} (0,t) \right) = \sum_{j=1}^{N_b} \omega_j \boldsymbol F_{j} (t)
\end{equation}
\subsection{Conservation of energy}
First, the kinetic energy of the fluid in each period is given by
\begin{equation}
\begin{aligned}
E_K(t) &=\frac{1}{2} \rho \iiint_{\Omega_L} \boldsymbol u (\boldsymbol x, t) \cdot \boldsymbol u(\boldsymbol x, t) d \boldsymbol x \\
&=\frac{1}{2} \rho L^3 \sum_{\boldsymbol k \in \mathcal{K}_N} \hat{\boldsymbol u}(\boldsymbol k, t) \cdot \boldsymbol \hat{u}(-\boldsymbol k, t).
\end{aligned}
\end{equation}
So we know the rate of change of the kinetic energy is
\begin{equation}
\begin{aligned}
\frac{d E_K}{d t}(t)=& \frac{1}{2} \rho L^3 \sum_{\boldsymbol k \in \mathcal{K}_N}\left(\frac{\partial \hat{\boldsymbol u}}{\partial t}(\boldsymbol{k}, t) \cdot \hat{\boldsymbol u}(-\boldsymbol{k}, t)+\hat{\boldsymbol{u}}(\boldsymbol{k}, t) \cdot \frac{\partial \hat{\boldsymbol u}}{\partial t}(-\boldsymbol{k}, t)\right) \\
=& \rho L^3 \sum_{\boldsymbol k \in \mathcal{K}_N} \frac{\partial \hat{\boldsymbol u}}{\partial t}(\boldsymbol{k}, t) \cdot \hat{\boldsymbol u}(-\boldsymbol{k}, t).
\end{aligned}
\end{equation}
From the momentum equation of the Navier-Stokes system \cref{eq:navierstokes} in Fourier space, we know
\begin{equation}\label{eq:moment_k_space}
\rho\left(\frac{\partial \hat{\boldsymbol{u}}}{\partial t}(\boldsymbol{k}, t) \cdot \hat{\boldsymbol{u}}(-\boldsymbol{k}, t)+\hat{\boldsymbol{s}}(\boldsymbol k, t) \cdot \hat{\boldsymbol{u}}(-\boldsymbol{k}, t)\right)=-\mu\|\boldsymbol{k}\|^2 \hat{\boldsymbol{u}}(\boldsymbol{k}, t) \cdot \hat{\boldsymbol{u}}(-\boldsymbol{k}, t)+\hat{\boldsymbol{f}}(\boldsymbol{k}, t) \cdot \hat{\boldsymbol{u}}(-\boldsymbol{k}, t),
\end{equation}
where we multiply both sides by $\hat{\boldsymbol u}(-\boldsymbol k,t)$. If we sum this over $\boldsymbol k \in \mathcal{K}_N$, then the first term on the LHS is equal to $d E_K/d t$, and we will prove below that the second term, involving the nonlinear term, vanishes. Therefore, we have
\begin{equation}\label{eq:kine_energ_derivative}
\frac{d E_K}{d t}(t)=-\mu L^3 \sum_{\boldsymbol{k} \in \mathcal{K}_N}\|\boldsymbol{k}\|^2 \hat{\boldsymbol{u}}(\boldsymbol{k}, t) \cdot \hat{\boldsymbol{u}}(-\boldsymbol{k}, t)+L^3 \sum_{\boldsymbol{k} \in \mathcal{K}_N} \hat{\boldsymbol f}(\boldsymbol{k}, t) \cdot \hat{\boldsymbol{u}}(-\boldsymbol{k}, t).
\end{equation}
Note that the first term on the RHS is the rate of viscous energy dissipation. So, in order to prove the conservation of energy, we only need to prove the second term on the RHS is the negative rate of change of the potential energy $E_p$. By the force spreading (\cref{eq:force_spreading_quad}) and the velocity interpolation (\cref{eq:velo_intp_fini}), we have
\begin{equation}\label{eq:pote_energ_derivative}
\begin{aligned}
L^3 \sum_{\boldsymbol{k} \in \mathcal{K}_N} \hat{\boldsymbol{f}}(\boldsymbol{k}, t) \cdot \boldsymbol{\hat{u}}(-\boldsymbol{k}, t)&=\sum_{j=1}^{N_b} \omega_j \boldsymbol F_{j}(t) \cdot \sum_{\boldsymbol{k} \in \mathcal{K}_N} e^{-i \frac{2 \pi}{L} \boldsymbol{k} \cdot \boldsymbol X_{j}(t)} \hat{\boldsymbol{u}}(-\boldsymbol{k}, t)\\
&=\sum_{j=1}^{N_b} \omega_j \boldsymbol F_{j} (t) \sum_{\boldsymbol{k} \in \mathcal{K}_N} e^{i \frac{2 \pi}{L} \boldsymbol k \cdot \boldsymbol X_{j}(t)} \hat{\boldsymbol u}(\boldsymbol{k}, t)\\
&=\sum_{j=1}^{N_b} \omega_j \boldsymbol F_{j}(t) \cdot \boldsymbol U_j(t)\\
&=\sum_{j=1}^{N_b} -\frac{\partial E_p}{\partial \boldsymbol X_j} \cdot \frac{\partial X_j}{\partial t}\\
&=-\frac{d E_p}{dt}.
\end{aligned}
\end{equation}
Note that we pass from $\hat{\boldsymbol u}(-\boldsymbol k,t)$ to $\hat{\boldsymbol u}(\boldsymbol k,t)$ by relabeling $\boldsymbol k \rightarrow -\boldsymbol k$, which is allowed because the sum runs over a set of wavenumbers that is symmetric about the origin. We also use the fact that the force on the immersed boundary is given by the negative derivative of the potential, i.e. $\omega_j \boldsymbol F_{j}=-\partial E_p/\partial \boldsymbol X_j$. By substituting \cref{eq:pote_energ_derivative} into \cref{eq:kine_energ_derivative}, we get the energy conservation for the spatially discretized system in the FSIB method as
\begin{equation}
\frac{d}{d t}\left(E_K+E_p\right)=-\mu \sum_{\boldsymbol{k} \in \mathcal{K}_N}\|\boldsymbol{k}\|^2 \boldsymbol{\hat{u}}(\boldsymbol{k}, t) \cdot \boldsymbol{\hat{u}}(-\boldsymbol{k}, t).
\end{equation}
One last ingredient is needed to complete the proof: we must show that the nonlinear term in \cref{eq:moment_k_space} vanishes after summing over $\boldsymbol k \in \mathcal{K}_N$. Indeed,
\begin{equation}
\begin{aligned}
\sum_{\boldsymbol{k} \in \mathcal{K}_N} \hat{\boldsymbol{s}} (\boldsymbol{k}, t) \cdot \hat{\boldsymbol{u}}(-\boldsymbol{k}, t) & =\sum_{\boldsymbol{k} \in \mathcal{K}} \hat{\boldsymbol{s}} (\boldsymbol{k}, t) \cdot \hat{\boldsymbol{u}}(-\boldsymbol{k}, t)\\
&=\frac{1}{L^3} \iiint_{\Omega_L} \boldsymbol{u} \cdot(\boldsymbol{u} \cdot \nabla \boldsymbol{u}) d \boldsymbol{x}\\
&=\frac{1}{L^3} \iiint_{\Omega_L} \sum_{\alpha=1}^3 u_\alpha \boldsymbol{u} \cdot \nabla u_\alpha d \boldsymbol{x}\\
&=\frac{1}{L^3}\sum_{\alpha=1}^3 \iiint_{\Omega_L} u_\alpha \nabla \cdot(\boldsymbol u u_\alpha ) d \boldsymbol{x}\\
&=-\frac{1}{L^3}\sum_{\alpha=1}^3 \iiint_{\Omega_L} \nabla u_\alpha \cdot u_\alpha \boldsymbol{u} d \boldsymbol{x}\\
&=0,
\end{aligned}
\end{equation}
where we extend the sum from $\mathcal{K}_N$ to all Fourier modes $\mathcal{K}$ in the first line; this is allowed because $\hat{\boldsymbol u}(-\boldsymbol k,t)=0$ outside of $\mathcal{K}_N$ (the velocity is bandlimited) and $\hat{\boldsymbol s}$ agrees with the full convolution on $\mathcal{K}_N$. We use the fact that $\nabla \cdot \boldsymbol u=0$ in the fourth line, and integration by parts in the fifth line. The sum is zero because the third line and the fifth line are negatives of each other, so the common value must vanish. This completes the proof of the conservation of energy.
\section{Numerical results}\label{sec:numerical}
\subsection{Stokes equations}\label{sec:stokes}
A good starting point for our numerical experiments is the incompressible flow in two dimensions governed by the Stokes equations
\begin{equation}\label{eq:stokes}
\left\{\begin{array}{l}
-\nabla p+\mu \Delta \boldsymbol{u}+\boldsymbol{f}=0, \\
\nabla \cdot \boldsymbol{u}=0.
\end{array}\right.
\end{equation}
In this case, there is no nonlinear term and no time integrator. For comparison with the standard IB method, we are interested in the performance and properties of the FSIB method on a benchmark problem in which a closed, elastic, neutrally buoyant curve $\Gamma = \{\boldsymbol x: \boldsymbol x=\boldsymbol X(\theta)\}$, $\theta\in[0,2\pi)$, is immersed in the flow. As before, the domain is a $[0,L]\times[0,L]$ box, periodic in both directions. The force $F(\theta)$, $\theta\in[0,2\pi)$, is given on the closed boundary $\Gamma$ as a singular layer. Thus, the force field can be written as an integral against the Dirac delta function
\begin{equation}\label{eq:force}
f(x)=\int_{0}^{2 \pi} F(\theta) \delta(x-X(\theta)) d \theta.
\end{equation}
Moreover, for a Stokes solution to exist, the integral of the force over each periodic box must vanish:
\begin{equation}\label{eq:force_vanish}
\int_0^{2\pi}F(\theta)d\theta =0
\end{equation}
The Stokes equations (\ref{eq:stokes}) in a periodic box are easily solved by Fourier series. The solution in Fourier space is
\begin{equation}\label{eq:stokeslet}
\left\{\begin{array}{l}
\hat{u}=\frac{1}{\mu}\left(\frac{1}{|k|^{2}}-\frac{k k^T}{|k|^{4}}\right) \hat{f} \\
\hat{p}=\frac{k \cdot \hat{f}}{i|k|^{2}}
\end{array}\right. ,
\end{equation}
which is the term-by-term product of the Fourier series of the Stokeslet and the Fourier series of the applied force. This follows from the linearity of the Stokes equations and the convolution theorem for Fourier series.
We first consider an immersed boundary in the form of a circle (\cref{fig:circle}). The circle has a radius of $r=L/8$ and is located at the center of the square box with a length $L=1$. The viscosity coefficient is set to be $\mu=1$.
In the first `normal' case (\cref{fig:circle:normal}), there is an outward normal force, applied along the circular immersed boundary, and this results in a pressure difference across the boundary. In the second `tangent' case (\cref{fig:circle:tangent}), the applied force is tangent to the immersed boundary. In both cases, the magnitude of the force per unit $\theta$ is set to be constant $|\boldsymbol F|=1$. Thus, the net-zero force condition in \cref{eq:force_vanish} is automatically satisfied and solutions to the Stokes equations therefore exist. Our discretization of the Stokes equations is derived from the one used for the Navier-Stokes equations in \cref{sec:discretization} simply by removing the nonlinear and also the time-derivative terms. Since the fluid domain is now two-dimensional, the Lagrangian parameter $\theta$ is now one-dimensional and can be discretized as $\theta_j=j\cdot \Delta \theta$, where $j=0,...,N_b-1$ and $\Delta \theta=2\pi /N_b$. Therefore, the Lagrangian markers have coordinates $\boldsymbol X_j=(r\cos(\theta_j),r\sin(\theta_j))$.
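The whole `normal' test fits in a few lines of array code. The following NumPy sketch (an illustration under our own naming conventions, not the code used to produce the figures) spreads the outward normal force with the naive type-1 sum, applies the spectral Stokeslet of \cref{eq:stokeslet} with the zero-mode fix, and checks that the resulting velocity spectrum is at the level of round-off, consistent with the `trivial solution' behavior discussed below.
\begin{verbatim}
import numpy as np

L, mu, N, Nb = 1.0, 1.0, 32, 128
r = L / 8
dtheta = 2 * np.pi / Nb
theta = dtheta * np.arange(Nb)
X = np.stack([L / 2 + r * np.cos(theta), L / 2 + r * np.sin(theta)], axis=1)  # markers
F = np.stack([np.cos(theta), np.sin(theta)], axis=1)                          # outward normal, |F|=1

k1d = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k1d, k1d, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                                       # zero-mode fix

# naive type-1 sum (force spreading), the 2D analog of the quadrature formula
phase = np.exp(-1j * (np.multiply.outer(X[:, 0], KX) + np.multiply.outer(X[:, 1], KY)))
f_hat_x = dtheta * np.tensordot(F[:, 0], phase, axes=1) / L**2
f_hat_y = dtheta * np.tensordot(F[:, 1], phase, axes=1) / L**2

# spectral Stokeslet: u_hat = (f_hat - k (k . f_hat)/|k|^2) / (mu |k|^2)
kdotf = (KX * f_hat_x + KY * f_hat_y) / K2
u_hat_x = (f_hat_x - KX * kdotf) / (mu * K2)
u_hat_y = (f_hat_y - KY * kdotf) / (mu * K2)
u_hat_x[0, 0] = u_hat_y[0, 0] = 0.0                  # mean flow set to zero

print(np.max(np.abs(u_hat_x)), np.max(np.abs(u_hat_y)))  # expected: near round-off
\end{verbatim}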
\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/normal.eps}
\caption{Normal case.}\label{fig:circle:normal}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/tangent.eps}
\caption{Tangent case.}\label{fig:circle:tangent}
\end{subfigure}
\caption{Circle problem for Stokes flow. In the (a) `normal' case, a uniform force field is applied to the circle in the radially outward direction. In the (b) `tangent' case, a uniform force field is applied tangent to the circle in the clockwise direction.}\label{fig:circle}
\end{figure}
For the `normal' case, the analytical velocity is zero everywhere, despite the lack of radial symmetry on our periodic domain. The reason is that the singular force field considered here is a gradient (of a step function, with one value inside and another value outside of the circular immersed boundary), and therefore its Hodge projection onto the space of incompressible vector fields is zero, which we also verify in the numerical tests. There is a pressure jump across the circle since a uniform normal force is applied on it, and the pressure is constant inside and outside of the circle because the force vanishes there. Now assume that the integral of the pressure over the periodic box is zero, which we may do because the addition of a constant to the pressure makes no difference. The exact solution for the pressure is then given by
\begin{equation*}
\left\{
\begin{array}{l}
p_1=\frac{F}{r}\frac{\pi r^2}{L^2},\\
p_2=-\frac{F}{r}\frac{L^2-\pi r^2}{L^2},
\end{array}
\right.
\end{equation*}
where $p_1$ is the pressure outside of the circle, and $p_2$ is the pressure inside of the circle.
For the `tangent' case, the pressure should be constant, which is not a trivial result. The reason is that $\nabla \cdot \boldsymbol f=0$; a complete analysis of the jump conditions is given in \citep{laiRemarkJumpConditions2001}. There is a circulating flow both inside and outside of the circle. We plot the numerical result for the velocity $u_x$ on the vertical centerline, i.e. $x=L/2$, as a function of $y$ in \cref{fig:circle:u2_centerline}. We also plot the numerical result of the standard IB method with the standard 4-point kernel on the same grids, i.e. the same $N_b$ and $N$, for comparison, but there $N$ is the number of Eulerian grid points rather than the number of Fourier modes of the FSIB method. Remarkably, the sharpness of the boundary for the new FSIB method is visibly better than for the standard IB method. It is no surprise that the flow inside of the circle is almost a rigid body rotation, which means $u_x$ is linear in $y$ on the vertical centerline. But it is not exactly a rigid body rotation, since the periodic boundary condition breaks the rotational symmetry, and this has a small effect inside of the immersed boundary as well as outside. Moreover, we see no oscillations near the kinks, since the velocity itself has no jump discontinuity. Although the analytical solution for the velocity is not available, we know that the derivative $du_x/dy$ on the vertical centerline has a jump discontinuity across the circular boundary as a result of the uniform tangential force. To see how the new FSIB method behaves at a jump discontinuity, we plot $du_x/dy$ on the vertical centerline for both the new FSIB method and the standard IB method on the same grids in \cref{fig:circle:u2_p_centerline}. This figure shows that the new FSIB method resolves the jump discontinuity better. More quantitative comparisons of the boundary resolution will be made in \cref{sec:bdr_reso_stokes} for Stokes flow and in \cref{sec:poiseuille} for Navier-Stokes flow.
\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/ux.eps}
\caption{}\label{fig:circle:u2_centerline}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/dux_dy.eps}
\caption{}\label{fig:circle:u2_p_centerline}
\end{subfigure}
\caption{(a). Velocity $u_x$. (b). Derivative of the velocity $d u_x/dy$. Both are plotted on the vertical centerline, i.e $x=L/2$, as a function of $y$ for the `tangent' case. The red solid lines with markers `o' are for the FSIB method. The blue dashed lines with markers `+' are for the standard IB method.}
\end{figure}
As discussed in \cref{sec:spatial_discretization}, the spreading of the force to the uniform grid (\cref{eq:force_spreading_quad}) is exponentially accurate with a simple trapezoidal rule because the integrand is smooth and periodic in the Lagrangian variable $\theta$. Here we design a numerical experiment to verify this exponential accuracy. We fix the number of Fourier modes at $N=128$ and increase the number of Lagrangian grid points $N_b=4,8,16,32$. In \cref{fig:err_2_N}, we plot the errors of the velocity $u_N$ and of the pressure $p_N$ on the vertical centerline as functions of $N_b$. The errors of both the pressure and the velocity converge exponentially for the `normal' case and also for the `tangent' case.
\begin{figure}[!htbp]
\centering
\includegraphics[width=.7\textwidth]{fig/err_2_N.eps}
\caption{Log-linear scale plot. Error in norm 2 of numerical solutions of pressure $p(y)$ compared with analytical solutions and empirical error in norm 2 of velocity $u_x(y)$ on the vertical centerline, i.e. $x=L/2$. For fixed $N=128$, we increase $N_b=4,8,16,32$. Blue lines are for velocity $u_x$, red lines are for pressure $p$. Solid lines with markers `o' are for the `normal' case and dashed lines with markers `+' are for the `tangent' case.}
\label{fig:err_2_N}
\end{figure}
\subsubsection{Coupling of the refinement of the Fourier resolution and the Lagrangian meshwidth}
First, we need to decide how to choose the Fourier resolution $L/N$ relative to the Lagrangian meshwidth $h_b=\mathcal{L}(\Gamma)/N_b$, where $\mathcal{L}(\Gamma)$ is the length of the immersed boundary. Experience with the standard IB method suggests that they should be linearly coupled, and we expect the same coupling relation here. We fix $N_b=2^6$ and increase $N$ by powers of 2 from $N=2^2$ to $N=2^{10}$, and plot the errors of the pressure and the velocity in \cref{fig:circle:error_Nh}. Generally, two types of solutions are shown here. One type consists of the non-trivial solutions, namely the pressure for the `normal' case (blue lines with markers `o' in \cref{fig:circle:p_Nh_comp}) and the velocity for the `tangent' case (red lines with markers `+' in \cref{fig:circle:u_Nh_comp_bd,fig:circle:u_Nh_comp}). The other type consists of the trivial, i.e. identically zero, solutions, namely the velocity for the `normal' case (blue lines with markers `o' in \cref{fig:circle:u_Nh_comp_bd,fig:circle:u_Nh_comp}) and the pressure for the `tangent' case (red lines with markers `+' in \cref{fig:circle:p_Nh_comp}). For the non-trivial solutions, our new FSIB method outperforms the standard IB method by an order of magnitude, while showing the same first-order convergence, up to a transition point. After the transition point, both methods begin to accumulate error. It should be noticed that the transition point is where the Fourier resolution $h=L/N$ and the Lagrangian meshwidth $h_b=2\pi r/N_b$ are approximately equal, i.e. $h_b\approx h$. For the trivial solutions, the numerical solutions are pure error and thus provide a proxy for how errors accumulate. We observe that the new FSIB method is at machine precision before the transition point, as expected, while the standard IB method has decreasing errors comparable to those of the non-trivial solutions. After the transition point, the errors of both methods increase to the same level.
Before the transition point, the error decreases because the truncation error in Fourier space decreases as we use more Fourier modes. After the transition point, the boundary effectively develops holes, since the effective width of the regularized delta function becomes less than the distance between the Lagrangian boundary markers. It is interesting and instructive that this occurs for the FSIB method as well as for the standard IB method, since the transition illustrates the somewhat hidden role that the `$\mathtt{sinc}$' kernel is playing in the FSIB methodology.
Thus, in conclusion, we should couple the Fourier resolution and the Lagrangian meshwidth by requiring that $h_b\approx L/N$, i.e. the two meshwidths should match. In this way they are linearly coupled, just as in the standard IB method. Moreover, there is no benefit in making the Fourier resolution on the Eulerian grids finer than the Lagrangian resolution, since doing so destroys the machine accuracy of the FSIB method for the trivial solutions.
\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{fig/p_Nh_comp.eps}
\caption{}\label{fig:circle:p_Nh_comp}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{fig/u_Nh_comp_bd.eps}
\caption{}\label{fig:circle:u_Nh_comp_bd}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{fig/u_Nh_comp.eps}
\caption{}\label{fig:circle:u_Nh_comp}
\end{subfigure}
\caption{(a). The error of pressure on the Eulerian grids. (b). The error of velocity on the Lagrangian grids. (c). The error of velocity on the Eulerian grids. The blue lines with markers `o' are for the `normal' case. The red lines with markers `+' are for the `tangent' case. The solid lines are for the new FSIB method. The dashed lines are for the standard IB method for comparison. The points that fall below the lower limit of the y-axis (the red solid line in (a) and the blue solid lines in (b) \& (c), for $N<10^2$) are all at machine precision, i.e. approximately $10^{-16}$.}\label{fig:circle:error_Nh}
\end{figure}
\subsubsection{Spatial convergence of the FSIB method for the Stokes equations.}\label{sec:FSIB_Stokes_conv_2D}
Following the conclusion above, we couple the Fourier resolution and the Lagrangian meshwidth, which in two dimensions amounts to increasing $N$ and $N_b$ simultaneously, in order to measure the true spatial convergence rate of the FSIB method. We start with $N=8$ and $N_b=4$ and double both for each refinement. We plot the successive empirical error of the velocity on the Lagrangian grids for the `tangent' case, which is the most important non-trivial solution, in \cref{fig:u_Nh_h_comp}, for 8 successive refinements in total. The standard IB method shows first-order convergence, as expected, and as has been proved for the Stokes equations in \citep{moriConvergenceProofVelocity2008}. The FSIB method shows the same first-order convergence as the standard IB method but outperforms it by an order of magnitude for the Stokes problem in two dimensions.
\begin{figure}[!htbp]
\centering
\includegraphics[width=.4\textwidth]{fig/u_Nh_h_comp.eps}
\caption{Error of velocity on the Lagrangian grids for the `tangent' case vs $N$ in log-log scale. Lagrangian grid size $N_b$ and the number of Fourier modes $N$ increase simultaneously. The dash-dotted straight line is the first-order reference line. The solid lines with markers `o' are for the FSIB method. The dashed lines with markers `+' are for the standard IB method. }\label{fig:u_Nh_h_comp}
\end{figure}
\subsubsection{Boundary resolution in Stokes flow}\label{sec:bdr_reso_stokes}
Although we compare the `$\mathtt{sinc}$' kernel with the kernels used in the standard IB method in \cref{fig:kernels_comp} and argue that the FSIB method has sharper boundary resolution, we have not yet quantified that resolution. Analytically, the jump discontinuity has zero width, but numerical solutions always smear it over a finite width, and this provides a way to measure the boundary resolution. On the vertical centerline, the pressure for the `normal' case and the derivative of the velocity $du_x/dy$ for the `tangent' case have jump discontinuities at the boundary. We fit the slope of the numerical solution across the jump and divide the height of the jump by that slope to obtain the width of the jump, denoted by $\Delta$. We increase the grid sizes $N_b$ and $N$ simultaneously according to the coupling rule and plot $\Delta/h$ vs $\log_2(N)$ in \cref{fig:reso_stokes}. As the grid is refined by a factor of two each time, $\Delta/h$ converges to a fixed value for each of the two methods. This suggests that the ratio $\Delta/h$ is an intrinsic property of the FSIB method and of the standard IB method, regardless of how much the grids are refined. For the standard IB method, the boundary resolution converges to $3h$, i.e. three Eulerian meshwidths. For the FSIB method, the boundary resolution converges to about $2h$, roughly two thirds of that of the standard IB method. This shows that the FSIB method has improved boundary resolution for a singular force layer in Stokes flow, and we will verify this again in Navier-Stokes flow in \cref{sec:poiseuille}.
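The following Python sketch (illustrative only, not part of our implementation) shows the width-measurement procedure applied to a synthetic smeared jump; the $\tanh$ profile and its parameters are arbitrary choices used only to demonstrate the fit.
\begin{verbatim}
import numpy as np

def jump_width(y, f, y_jump, window):
    # estimate the width Delta of a smeared jump in the profile f(y):
    # divide the jump height by the maximum slope near y_jump
    mask = np.abs(y - y_jump) < window
    slope = np.max(np.abs(np.gradient(f[mask], y[mask])))
    height = abs(f[mask][-1] - f[mask][0])
    return height / slope

# synthetic example: a jump of height 2 smeared over a width of about 0.05
y = np.linspace(0.0, 1.0, 2001)
f = np.tanh((y - 0.5) / 0.025)
print(jump_width(y, f, y_jump=0.5, window=0.2))
\end{verbatim}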
\begin{figure}[!htbp]
\centering
\includegraphics[width=.7\textwidth]{fig/reso_stokes.eps}
\caption{$\Delta/h$ vs $\log_2(N)$, where $\Delta$ is the width of the numerical solutions on the jump discontinuity. Blue lines with markers `o' are for the pressure $p$ on the vertical centerline for the `normal' case, and red lines with markers `+' are for the velocity's derivatives $du_x/dy$ on the vertical centerline for the `tangent' case. Solid lines are for the FSIB method, dashed lines are for the standard IB method.}
\label{fig:reso_stokes}
\end{figure}
\subsection{Navier-Stokes equations}
In this section, we switch from the Stokes equations to the Navier-Stokes equations which involve nonlinearity and time dependence. We start with numerical experiments in two dimensions and then extend them to three-dimensional experiments.
\subsubsection{Circle with sine velocity in Navier-Stokes flow in two dimensions}\label{sec:circle_2D_NS}
We first consider an initially circular boundary immersed in Navier-Stokes flow, which is a classic problem \citep{linSolvabilityStokesImmersed2019,moriWellPosednessGlobalBehavior2019}. The circle with radius $r=L/4$ is placed at the center $[L/2,L/2]$ of the periodic box $\Omega_L=[0,L]^2$. The immersed boundary is uniformly discretized as $X(\theta_i)=[L/2+(L\cos(\theta_i))/4,L/2+(L\sin(\theta_i))/4]$, where $\theta_i=i\Delta \theta = i\cdot 2\pi/N_b$ and $i=0,...,N_b-1$. The Fourier modes and the temporal grids are the same as in \cref{sec:discretization}. Note that we should keep $\Delta t \sim h$ for the Runge-Kutta temporal solver to be stable.
The circle is initialized with a perturbation velocity in the y direction given by a sine function, $\boldsymbol u_0(x,y)= [0,\sin(2\pi x/L)]$. The force density on the immersed boundary is given by a simple elastic model:
\begin{equation}\label{eq:elstic_force_2d}
\boldsymbol F(\theta_i) = K \frac{\boldsymbol X(\theta_{i+1})+\boldsymbol X(\theta_{i-1})-2\boldsymbol X(\theta_i)}{\Delta \theta^2} .
\end{equation}
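For reference, the discrete force \cref{eq:elstic_force_2d} is just a centered second difference of the marker positions with respect to the Lagrangian parameter, with periodic indexing. A minimal Python sketch (not taken from our code) is:
\begin{verbatim}
import numpy as np

def elastic_force(X, K=1.0):
    # X has shape (Nb, 2); centered second difference with periodic indexing
    Nb = X.shape[0]
    dtheta = 2.0 * np.pi / Nb
    return K * (np.roll(X, -1, axis=0) + np.roll(X, 1, axis=0) - 2.0 * X) / dtheta**2

# example: markers on a circle of radius L/4; the force points toward the center
L, Nb = 1.0, 202
theta = 2.0 * np.pi * np.arange(Nb) / Nb
X = np.stack([L/2 + (L/4)*np.cos(theta), L/2 + (L/4)*np.sin(theta)], axis=1)
F = elastic_force(X)
\end{verbatim}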
First, we run a simple simulation with $L=1$, $N=64$, $N_b=202$ such that $h_b\approx L/N$, $\Delta t=10^{-3}$, $K=1$, $\rho=1$, $\mu=0.01$. The total simulation time $T=10$ is long enough that we observe the immersed boundary returning to its initial circular geometry, and we check how the volume enclosed by the immersed boundary evolves. We compare the results of our new FSIB method with the standard IB method in a movie \footnote{\url{https://www.math.nyu.edu/~zc1291/public/IBM/NUFFT/comp_N_64_spec_ns_T_32_colored.mp4}} and also plot representative snapshots with fluid velocity fields and vorticity contours in \cref{fig:snapshot_t32}. In the beginning, the sine perturbation flow drives the circle to deform, and the two methods give consistent results. Then the oscillation decays, the flow starts to settle down, and the immersed boundary returns to a circle. The discrepancy between the solutions of the IB method and the FSIB method grows with time. The IB method is more dissipative and develops defects in the vorticity contours near the boundary after the boundary returns to a circular configuration. Moreover, the immersed boundary force law, \cref{eq:elstic_force_2d}, that we are using would make the immersed boundary shrink to a point if this were not prevented by the incompressibility of the fluid. As the simulation evolves, the volume enclosed by the circle in the IB method shrinks gradually, and this effect is large enough to be visible in the movie. The volume leakage of the IB method can also be seen in the snapshots in \cref{fig:snapshot_t32}, whereas the circle in the FSIB method retains its volume from the beginning to the end. We quantify the change of volume by numerically calculating the area enclosed by the immersed boundary, denoted by $V(t)$, at each snapshot for both methods, and we plot $V(t)/V_0$, where $V_0$ is the initial volume, as a function of $t$ in \cref{fig:NS2Dvolume}. It shows that the new FSIB method conserves volume with no leakage at all within numerical accuracy, while the standard IB method leaks volume linearly in time.
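The enclosed area $V(t)$ can be computed from the marker positions by the shoelace formula; a minimal Python sketch (assuming the markers are ordered counterclockwise) is:
\begin{verbatim}
import numpy as np

def enclosed_area(X):
    # X has shape (Nb, 2); markers ordered counterclockwise (shoelace formula)
    x, y = X[:, 0], X[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)
\end{verbatim}
Monitoring the ratio $V(t)/V_0$ over the course of a run then gives the curves shown in \cref{fig:NS2Dvolume}.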
\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{.95\textwidth}
\includegraphics[width=\textwidth]{fig/snapshot_t03_colored.eps}
\end{subfigure}
\vfill
\begin{subfigure}[b]{.95\textwidth}
\includegraphics[width=\textwidth]{fig/snapshot_t06_colored.eps}
\end{subfigure}
\vfill
\begin{subfigure}[b]{.95\textwidth}
\includegraphics[width=\textwidth]{fig/snapshot_t5_colored.eps}
\end{subfigure}
\caption{Snapshots for a circle in Navier-Stokes flow in 2D. The red vectors represent the velocity field with fixed scaling. The contours are for the vorticity with the colormap scaled by a fixed interval. Colors on the immersed boundary encode the Lagrangian coordinate, so a given color stays with a corresponding material segment of the immersed boundary. Note that the material points of the immersed boundary are rotating clockwise even though there is a wave of deformation at early times that is propagating counterclockwise. The first row is at $t=0.3$, the second row is at $t=0.6$, and the third row is at $t=5.0$ after the immersed boundary returns to a circular shape and is nearly static. The left is for the IB method and the right is for the FSIB method.}
\label{fig:snapshot_t32}
\end{figure}
\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/NS2Dvolume.eps}
\caption{}\label{fig:NS2Dvolume}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/NS3DElasticVolume.eps}
\caption{}\label{fig:NS3DElasticvolume}
\end{subfigure}
\caption{Normalized volume $V/V_0$ vs t, where $V$ is the volume enclosed by the immersed boundary, $V_0=V(t=0)$. (a). The sine velocity circle problem in Navier-Stokes flow in 2D. (b). The sphere with elastic surface energy in Navier-Stokes flow in 3D. The blue solid lines are for the FSIB method. The red dashed lines are for the standard IB method.}\label{fig:volume_conv}
\end{figure}
We also verify the temporal second-order convergence of the FSIB method, which uses a second-order Runge-Kutta integrator. We start with the largest stable time step $\Delta t=0.002$ and refine the temporal grid by a factor of two at each step. The relative error of the solution at the elapsed time $t=0.1$ for the different time resolutions is plotted in \cref{fig:NS2DerrNt}. It shows clean second-order convergence, consistent with the theory. Note, however, that only the time step was refined in this study; the number of Fourier modes and the number of Lagrangian markers on the immersed boundary were kept constant.
\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/NS2Derr_Nt.eps}
\caption{Empirical relative norm-2 error of velocity at the final time $t=0.1$ for different refinements of temporal grids for a circle in Navier-Stokes flow in 2D. The solid line with markers `o' is for the FSIB method and the dash-dotted line is the reference for the second-order convergence.}\label{fig:NS2DerrNt}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/spatial_conv.eps}
\caption{Empirical relative norm-2 error of velocity at the final time $t=0.1$ for refinements of both spatial and temporal grids for a circle in Navier-Stokes flow in 2D. The solid line is for the FSIB method and the dashed line is for the IB method. The dash-dotted lines are for reference (3/2 order convergence for the FSIB and 1st order for the IB).}\label{fig:spatial_conv}
\end{subfigure}
\end{figure}
We also couple the spatial grids and the temporal grids to check the overall convergence rate. We start with $N=8$, $\Delta t=0.001$, $N_b=26$ and, for each refinement, we halve $\Delta t$ and double $N$ and $N_b$. The relative error of the velocity at $t=T=0.1$ is plotted in \cref{fig:spatial_conv}. The overall convergence rate is first-order for the IB method, while the new FSIB method outperforms it by half an order, exhibiting $3/2$-order convergence, in this two-dimensional Navier-Stokes problem.
\subsubsection{Poiseuille flow governed by Navier-Stokes equations in a two-dimensional channel}\label{sec:poiseuille}
Consider the classic problem of Navier-Stokes flow through a Poiseuille channel in two dimensions. The channel walls are placed at $y=0$ and $y=D$. The no-slip boundary condition holds on the channel walls, so $\boldsymbol u(x,y=0,t)=\boldsymbol u(x,y=D,t)=0$. The flow is initialized at rest, i.e. $\boldsymbol u(x,y,t=0)=0$. A uniform force field $f_0$ in the x direction is applied, $\boldsymbol f(x,y,t)=(f_0,0)$. One reason that we choose this problem is that, for comparison with our numerical results, we can derive the analytical solution as follows. In the laminar regime, the velocity field is simply in the x direction and depends only on $y$, so $\boldsymbol u=(u(y,t),0)$, which allows us to reduce the Navier-Stokes equations (\cref{eq:navierstokes}) to an equation for $u(y,t)$,
\begin{equation}\label{eq:poiseuille}
u_t=\frac{\mu}{\rho} u_{yy}+\frac{f_0}{\rho}
\end{equation}
with the no-slip boundary condition $u(0,t)=u(D,t)=0$. So we can write the solution as a linear combination of eigenfunctions
\begin{equation}
u(y,t)=\sum_{k=1}^{\infty}u_k(t)\sin(\lambda_k y),
\end{equation}
where the eigenvalues are $\lambda_k=\frac{k\pi}{D}$. Solving \cref{eq:poiseuille} mode by mode gives
\begin{equation}
u_k=\frac{f_k}{\mu \lambda_k^2}(1-\exp(-\frac{\mu}{\rho}\lambda_k^2t)).
\end{equation}
We also write the force field as $f_0=\sum_{k=1}^{\infty}f_k\sin(\lambda_k y)$, where $f_k=\frac{2}{D}\int_0^D f_0\sin(\lambda_k y) dy$. This gives us
\begin{equation}
\left\{
\begin{array}{ll}
f_{2k+1}&=\frac{4f_0}{(2k+1)\pi}, \\
f_{2k}&=0 ,
\end{array}
\right. \quad k\ge 0.
\end{equation}
Therefore, we have the analytical solution to the Poiseuille flow as
\begin{equation}\label{eq:poiseuille_velo}
u(y,t)=\sum_{k=0}^{\infty} \frac{4f_0}{\mu \lambda_{2k+1}^2(2k+1)\pi}\left(1-\exp(-\frac{\mu}{\rho}\lambda_{2k+1}^2 t)\right)\sin(\lambda_{2k+1}y).
\end{equation}
The flux through the Poiseuille channel $\Phi(t)$ is then given by
\begin{equation}\label{eq:poiseuille_flux}
\Phi(t)=\int_0^D u(y,t)dy=\sum_{k=0}^{\infty} \frac{8f_0}{\mu \lambda_{2k+1}^3(2k+1)\pi}\left(1-\exp(-\frac{\mu}{\rho}\lambda_{2k+1}^2 t)\right).
\end{equation}
It is worth noting that as $t\xrightarrow[]{}\infty$, the fluid reaches equilibrium and the sine series in \cref{eq:poiseuille_velo} converges to the well-known steady-state solution of Poiseuille flow,
\begin{equation}\label{eq:steady_poiseuille_velo}
u_{eq}(y)=\frac{f_0}{2\mu}y(D-y)
\end{equation}
with steady flux
\begin{equation}\label{eq:steady_poiseuille_flux}
\Phi_{eq}=\frac{f_0D^3}{12\mu}.
\end{equation}
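For reference, the truncated series \cref{eq:poiseuille_velo} and \cref{eq:poiseuille_flux} are easy to evaluate numerically and can be checked against these steady-state limits. The short Python sketch below (illustrative only) does so with the parameter values used later in this section; the agreement is limited only by the series truncation.
\begin{verbatim}
import numpy as np

def poiseuille_u(y, t, D, mu, rho, f0, kmax=200):
    # truncated series solution for the velocity profile u(y, t)
    u = np.zeros_like(y)
    for k in range(kmax):
        lam = (2*k + 1) * np.pi / D
        u += 4*f0 / (mu * lam**2 * (2*k + 1) * np.pi) \
             * (1.0 - np.exp(-mu/rho * lam**2 * t)) * np.sin(lam * y)
    return u

def poiseuille_flux(t, D, mu, rho, f0, kmax=200):
    # truncated series solution for the flux Phi(t)
    phi = 0.0
    for k in range(kmax):
        lam = (2*k + 1) * np.pi / D
        phi += 8*f0 / (mu * lam**3 * (2*k + 1) * np.pi) \
               * (1.0 - np.exp(-mu/rho * lam**2 * t))
    return phi

D, mu, rho, f0 = 0.5, 0.1, 1.0, 100.0     # parameter values used in this section
y = np.linspace(0.0, D, 101)
# at large t the series agrees with the steady profile and flux (up to truncation)
print(np.max(np.abs(poiseuille_u(y, 1e3, D, mu, rho, f0) - f0/(2*mu)*y*(D - y))))
print(poiseuille_flux(1e3, D, mu, rho, f0), f0 * D**3 / (12*mu))
\end{verbatim}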
We implement the FSIB method as follows. The Poiseuille channel is placed at the center of a periodic box $\Omega_L=[0,L]^2$ with walls at $y=L/4$ and $y=3L/4$, so the width of the channel is $D=L/2$. The Lagrangian meshwidth and the Fourier resolution are coupled as $L/N=h_b$. The no-slip boundary condition is imposed by modeling the two static walls with the target point method. The target point method introduces a fixed point, called the target point, for each Lagrangian marker on the wall and uses a spring force to keep each Lagrangian marker close to its target position. The idea of this method dates back to the original paper of the IB method on the simulation of a heart valve \citep{peskinNumericalAnalysisBlood1977}, in which the static structures to which the moving valve leaflets are attached are implemented by the target point method. The target point method was later developed into a complete theory for simulating moving or static immersed boundaries with or without additional mass \citep{kimPenaltyImmersedBoundary2016}. Most recently, the numerical stability of the target point method has been analyzed in \citep{huaAnalysisNumericalStability2022}. Here we put fixed target points on the boundary at $\tilde{\boldsymbol X}_i=(ih,L/4), \tilde{\boldsymbol X}_{i+N}=(ih,3L/4),\ i=0,...,N-1$, so that $N_b=2 N$. The Lagrangian points are initialized at the fixed target points, $\boldsymbol X_i(t=0)=\tilde{\boldsymbol X}_i$, and are attached to them by the spring force
\begin{equation}
\boldsymbol F_i(t) = K (\tilde{\boldsymbol X}_i-\boldsymbol X_i(t)),\ i=0,...,N_b-1.
\end{equation}
This model has fixed boundary walls, unlike the freely moving boundary in \cref{sec:circle_2D_NS}. Moreover, it provides a way to measure the effective width of the boundary quantitatively.
We set $L=1,\ \mu=0.1,\ \rho=1,\ f_0=100$ and start with a coarse grid where $N=8$. First, we compute the flux of the numerical solution of the FSIB method as a function of time until equilibrium and compare it with the analytical flux in \cref{eq:poiseuille_flux}. We plot $(\Phi(t)-\Phi_{eq})/\Phi_{eq}$ vs $t$ in \cref{fig:flux_comp}. The analytical result starts from $-1$, i.e. zero flux, and converges to $0$ as $t\rightarrow \infty$. The numerical result evolves similarly but converges to a smaller value than the analytical solution. This means that the numerical boundary slows the flow as a result of the finite resolution, and it provides a way to quantify the boundary resolution. From the steady-state flux formula in \cref{eq:steady_poiseuille_flux}, the effective width of the channel for a given equilibrium flux $\Phi_{eq}$ is
\begin{equation}
D_e = \left(\frac{12\mu\Phi_{eq}}{f_0}\right)^{1/3} .
\end{equation}
Therefore, $\Delta_e=D-D_e$ is a good measure of the resolution of the boundary, which we call the effective boundary width. We compare the effective boundary widths of the FSIB method and the standard IB method. We plot $\Delta_e/h$ vs $N$ in \cref{fig:effective_width} and observe that $\Delta_e/h$ converges to a fixed value for each method, similar to the Stokes flow case in \cref{sec:bdr_reso_stokes}. For the standard IB method, the boundary width converges approximately to $1.7h$. For the FSIB method, the boundary width converges to approximately $0.4h$, which is about four times smaller than that of the standard IB method. This result quantifies the improvement in boundary resolution achieved by the FSIB method.
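A minimal sketch of this post-processing step (illustrative Python; the measured flux value below is hypothetical and serves only to show the computation) is:
\begin{verbatim}
mu, f0, D, h = 0.1, 100.0, 0.5, 1.0 / 8   # parameters and Eulerian meshwidth (N = 8)
Phi_measured = 9.8                        # hypothetical measured equilibrium flux
D_e = (12.0 * mu * Phi_measured / f0) ** (1.0 / 3.0)
Delta_e = D - D_e
print(D_e, Delta_e, Delta_e / h)
\end{verbatim}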
\begin{figure}[!htbp]
\centering
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/flux_comp.eps}
\caption{Numerical flux (solid line with marker `o') on the coarse grid $N=8$ compared with analytical flux (dashed line with marker `+') as functions of time}\label{fig:flux_comp}
\end{minipage}
\hfill
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/effective_width.eps}
\caption{Effective width of the boundary with normalization $\Delta_e/h$ vs $N$. The solid line marked by `o' is the FSIB method, and the dashed line marked by `+' is the standard IB method}\label{fig:effective_width}
\end{minipage}
\end{figure}
The fixed boundary affects the convergence rate of the FSIB method. For simplicity, we set the channel width $D=L$, so that only one boundary wall at $y=0$ is needed because of periodicity, and we solve only the equilibrium problem, which is governed by the Stokes equations. We start with $N=N_b=8$, refine the grids 6 times, and plot the relative errors of the velocity field compared with the analytical solution in \cref{fig:err_2_steady_poiseuiile}. The FSIB method shows first-order convergence, which is not surprising, since it matches the convergence rate found for the previous Stokes problem in \cref{sec:FSIB_Stokes_conv_2D}. If, however, we compare the numerical solution with the analytical solution evaluated with the effective channel width, i.e. with $D$ replaced by $D_e$ in \cref{eq:steady_poiseuille_velo}, we see $3/2$-order convergence in \cref{fig:err_2_steady_poiseuiile_3_halves_effective_width}. This verifies that the static boundary is a first-order effect that limits the convergence rate of the FSIB method, and that the effective channel width captures this boundary effect.
\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/err_2_steady_poiseuiile.eps}
\caption{}\label{fig:err_2_steady_poiseuiile}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/err_2_steady_poiseuiile_3_halves_effective_width.eps}
\caption{ }\label{fig:err_2_steady_poiseuiile_3_halves_effective_width}
\end{subfigure}
\caption{Relative error in norm 2 of the velocity field in the channel at equilibrium vs $N$ in log-log scale. Compare the solution of the FSIB method with the analytical solution given by channel width (a) $D$ (b) $D_e$. The blue solid lines with markers `o' are for the FSIB method and the red dash-dotted lines are for reference (1st order for the left and 3/2 order for the right).}
\end{figure}
\subsubsection{Elastic Sphere in Navier-Stokes flow in three dimensions}\label{sec:sphere_energ}
Now we extend the numerical experiments to three dimensions. The analog of a circle in three dimensions is a sphere, so we start with the simplest case: a sphere with radius $r=L/3$ placed at the center of a periodic box $\Omega_L = [0,L]^3$. Unlike a circle in two dimensions, however, a sphere cannot be discretized uniformly. We therefore triangulate the sphere and use the nodes of the triangles as the discrete points $\boldsymbol X_i$, $i=0,...,N_b-1$. The edge lengths of the triangles should be as uniform as possible. The regular icosahedron with 12 vertices provides a natural starting point for the triangulation, and successive refinement provides whatever Lagrangian resolution is required. For each refinement, we split each triangle into four smaller triangles by adding new points at the midpoints of the three edges and projecting these midpoints onto the spherical surface to obtain the new vertices and triangles. The edge length is approximately halved and the number of triangles quadruples. We visualize the triangulation at refinement level four in \cref{fig:snapshot_t1}; one may refer to fig. 11 in \citep{baoImmersedBoundaryMethod2017} for a detailed visualization of such a triangulation. Although the triangulation looks regular, it is important to note that this is not quite the case. In particular, the refinement process produces triangles with a variety of shapes, although all of them are close to equilateral.
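One refinement step of this triangulation can be sketched as follows (illustrative Python, not our implementation); repeated application of this step to the 12 vertices and 20 faces of the regular icosahedron, scaled to radius $r$, produces the successively refined triangulations used below.
\begin{verbatim}
import numpy as np

def refine(verts, tris, r):
    # verts: (n, 3) array of points on the sphere of radius r;
    # tris: list of (i, j, k) vertex-index triples.
    # Each triangle is split into four; edge midpoints are projected to the sphere.
    verts = [np.asarray(v, dtype=float) for v in verts]
    cache = {}
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = 0.5 * (verts[i] + verts[j])
            verts.append(r * m / np.linalg.norm(m))   # project onto the sphere
            cache[key] = len(verts) - 1
        return cache[key]
    new_tris = []
    for (i, j, k) in tris:
        a, b, c = midpoint(i, j), midpoint(j, k), midpoint(k, i)
        new_tris += [(i, a, c), (a, j, b), (c, b, k), (a, b, c)]
    return np.array(verts), new_tris
\end{verbatim}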
The continuous elastic force field on the immersed boundary is modeled by springs on the edges of the triangles, so the force at any node $\boldsymbol X_i$ is obtained by summing over all edges attached to that node:
$$\boldsymbol F_i = \sum_{j \text{ linked with }i} K_{i,j}(\boldsymbol X_j- \boldsymbol X_i),$$
where $K_{i,j}$ is the spring constant of the edge that links nodes $\boldsymbol X_i$ and $\boldsymbol X_j$. The choice that immediately comes to mind is a constant $K_{i,j}$, but this is not a physical model, i.e., it has no continuum limit, even if the constant $K$ is made a function of the refinement level in an attempt to produce such a limit. The choice of constant $K_{i,j}$ should therefore be avoided. To derive a physically reasonable model, we consider the elastic energy density
$$\epsilon_0(t)=\frac{\sigma}{2}(\lambda_1^2(t)+\lambda_2^2(t)),$$
where the subscript $0$ means that this energy density is to be integrated over the spherical reference configuration. The constant $\sigma$ represents the strength of the elasticity, and $\lambda_1$ and $\lambda_2$ are the principal stretch ratios at time $t$. The elastic energy is minimized when the immersed boundary shrinks to a point, but the incompressibility of the fluid prevents this from happening, so this is an ideal experiment for verifying the volume conservation of the FSIB method. For any triangle $\Delta ABC$ of the triangulation, we denote the edge lengths by $a=|\boldsymbol{BC}|$, $b=|\boldsymbol{AC}|$, $c=|\boldsymbol{AB}|$, and the reference edge lengths at $t=0$ by $a_0,b_0,c_0$. At any time $t$, the stretch of the triangle $\Delta ABC$ is then given by
$$\lambda_1^2+\lambda_2^2=\frac{\sum_{a,b,c}a^2(b_0^2+c_0^2-a_0^2)}{8 S_0^2(\Delta ABC)},$$
where the subscript $a,b,c$ denotes the cyclic sum over the permutations of $a,b,c$, and $S_0(\Delta ABC)$ is the area of the triangle at $t=0$. Multiplying the energy density by the reference triangle area $S_0$ and summing over all of the triangles gives the total elastic energy on the sphere as
\begin{equation}\label{eq:total_elastic_energy}
E=\frac{\sigma}{2}\sum_{\Delta ABC}\frac{\sum_{a,b,c}a^2(b_0^2+c_0^2-a_0^2)}{8 S_0(\Delta ABC)}.
\end{equation}
Thus, the tension in the edge $|\boldsymbol{AB}|=c$ should be the negative derivative of the energy as
$$T_{AB}=-\frac{\partial E}{\partial c}=-\sigma\frac{\sum_{\Delta_c} c(a_0^2+b_0^2-c_0^2)}{8 S_0(\Delta_c)},$$
where the notation $\Delta_c$ represents the two triangles that edge $c$ attaches to. Note that the tension is given by a linear function of the edge length $c$ and thus satisfies the spring model. Therefore, the spring constant on any edge $\boldsymbol{AB}$ is
\begin{equation*}
K_{\boldsymbol{AB}} = \sum_{\Delta_c}\frac{\sigma}{8}\frac{a_0^2+b_0^2-c_0^2}{S_0(\Delta_c)} = \sum_{\Delta_c} \frac{\sigma}{8}\frac{2a_0b_0\cos(\angle C)}{\frac{1}{2}a_0b_0\sin(\angle C)}=\frac{\sigma}{2}(\cot(\angle C_1)+\cot(\angle C_2)),
\end{equation*}
where $\angle C$ denotes the angle at node $C$ in the triangle $ABC$ at time $t=0$, and $C_1$ and $C_2$ are the nodes opposite the edge $\boldsymbol{AB}$ in the two triangles that share that edge. Thus, when we initialize the simulation, we can compute the stiffness, i.e. the spring constant, of each edge $\boldsymbol{AB}$, store these spring constants, and use them to compute the force at each time step. In this way, the model of continuous elastic energy reduces to springs along the edges of the triangles with varying stiffnesses $K$.
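The precomputation of the edge stiffnesses from the reference triangulation can be sketched as follows (illustrative Python, not our implementation):
\begin{verbatim}
import numpy as np
from collections import defaultdict

def edge_stiffness(verts0, tris, sigma=1.0):
    # verts0: (n, 3) reference positions at t = 0; tris: list of (i, j, k) triples.
    # Returns a dict mapping each edge (i, j), i < j, to its spring constant
    # K = (sigma / 2) * (cot C1 + cot C2), accumulated one triangle at a time.
    K = defaultdict(float)
    for (i, j, k) in tris:
        for (a, b, c) in ((i, j, k), (j, k, i), (k, i, j)):
            u = verts0[a] - verts0[c]          # edge vectors meeting at vertex c
            v = verts0[b] - verts0[c]
            cot = np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            K[(min(a, b), max(a, b))] += 0.5 * sigma * cot
    return dict(K)
\end{verbatim}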
Another way to compute the discrete force of the elasticity model is to rewrite the total elastic energy in \cref{eq:total_elastic_energy} as a function of the coordinates of all the nodes, i.e. substitute $a=|\boldsymbol B-\boldsymbol C|$, $b=|\boldsymbol A-\boldsymbol C|$, $c=|\boldsymbol A-\boldsymbol B|$. Then take the negative derivative of the total energy with respect to the node coordinate, e.g. $\boldsymbol F_{\boldsymbol A}=-\partial E/ \partial \boldsymbol A$, to obtain the elastic force on the node. This way avoids the introduction of the spring model but is less intuitive.
At time $t=0$, the flow is initialized as a sine wave $\boldsymbol u(x,y,z)=[\sin(2\pi z/L),0,\sin(2\pi y/L)]$, similar to what was done in \cref{sec:circle_2D_NS}. We first present a numerical experiment on this problem with parameters $L=1$, $N=32$, $\rho=1$, $\mu=0.01$, $T=1$, $\Delta t=0.001$, and $\sigma=1$. For the discretization of the sphere, we perform four refinements of the regular icosahedron as described above, so that the Lagrangian meshwidth matches the Fourier resolution. We visualize the results of the standard IB method in comparison with our new FSIB method in a movie \footnote{\url{https://www.math.nyu.edu/~zc1291/public/IBM/NUFFT/comp_N_32_spec_ns_Elastic.mp4}} and also in the snapshots in \cref{fig:snapshot_t1}. The FSIB method gives a solution consistent with that of the IB method. The simulation ends at $t=1$, which is enough time for the elastic immersed boundary to return approximately to its initial spherical configuration.
\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=\textwidth]{fig/snapshot_t05.eps}
\end{subfigure}
\vfill
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=\textwidth]{fig/snapshot_t1.eps}
\end{subfigure}
\caption{Snapshots for the elastic sphere in Navier-Stokes flow in 3D. The blue vectors represent fluid velocity fields. The top is for the intermediate state when $t=0.5$ and the bottom is for the final state when $t=1.0$. The left is for the IB method and the right is for the FSIB method.}
\label{fig:snapshot_t1}
\end{figure}
A physical quantity that is worth observing is the volume enclosed by the immersed boundary. To quantify the error in volume conservation, we evaluate the volume enclosed by the triangulated surface, which is given by
\begin{equation*}
V=\frac{1}{6}\sum_{(i,j,k) \in \texttt{Tri} } (\boldsymbol X_i, \boldsymbol X_j, \boldsymbol X_k) ,
\end{equation*}
where $\texttt{Tri}$ denotes the set of vertex triads $(i,j,k)$ of all the triangles, and the triple product of three vectors $(\boldsymbol a, \boldsymbol b,\boldsymbol c)$ is defined as $(\boldsymbol a,\boldsymbol b,\boldsymbol c):=\boldsymbol a \cdot (\boldsymbol b \times \boldsymbol c)$. The order of each triad is taken to be counterclockwise when viewed from outside the surface, so that every edge is traversed in opposite directions in the two triangles to which it belongs. Note that this formula for the volume enclosed by the triangulated surface gives a result that is independent of the choice of origin, which can even be outside of the surface, and the formula remains correct even if the surface has a complicated shape in which a ray emanating from the origin intersects the surface more than once. These features are consequences of using the \textit{signed} volumes of the individual tetrahedra instead of summing the absolute values of those volumes. We plot the volume, normalized by its initial value, $V(t)/V_0$, as a function of time in \cref{fig:NS3DElasticvolume}. We reach the same conclusion for volume conservation in three dimensions as in two dimensions: the new FSIB method conserves volume, while the standard IB method leaks volume at a constant rate as time evolves.
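A minimal sketch of this volume computation (illustrative Python, assuming the triangles are consistently oriented) is:
\begin{verbatim}
import numpy as np

def enclosed_volume(X, tris):
    # X: (n, 3) marker positions; tris: (i, j, k) triples ordered counterclockwise
    # when viewed from outside, so each term is six times the signed volume of the
    # tetrahedron formed by the triangle and the origin.
    V = 0.0
    for (i, j, k) in tris:
        V += np.dot(X[i], np.cross(X[j], X[k]))
    return V / 6.0
\end{verbatim}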
The main purpose of this experiment with elastic surface energy is to check the convergence of the FSIB method in three dimensions. We start with the coarsest grid, where $N=4$, $\Delta t=0.004$, and the regular icosahedron is refined once. Then, for each successive refinement, we refine the Lagrangian grids, i.e. the triangulation of the immersed boundary, halve the time step, and double the number of Fourier modes in each direction at the same time. We then compute the empirical relative norm-2 error of the interpolated velocity field on the Eulerian grids at the final time $t=T$. Because the computational cost grows rapidly under grid refinement in three dimensions, we simulate only until $T=0.2$ and perform up to $5$ refinements. We plot the relative error in log scale vs the number of refinement steps in \cref{fig:err_elastic_sphere}. The result clearly shows that the FSIB method has second-order convergence while the IB method has only first-order convergence. The advantage of the FSIB method over the IB method with regard to the convergence rate is even greater in three dimensions than in the two-dimensional case, which is surprising. Although the standard IB method is generally first-order, there is an IB-like method, the immersed interface method \citep{levequeImmersedInterfaceMethod1994}, that achieves second-order convergence by modifying the finite difference operator near the immersed boundary. No such modification is needed here.
\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/err_elastic_sphere.eps}
\caption{}\label{fig:err_elastic_sphere}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/err_euler_1st_2nd.eps}
\caption{ }\label{fig:err_euler_1st_2nd}
\end{subfigure}
\caption{Relative error in norm 2 of the interpolated velocity field at the final snapshot $T=0.2$ in log scale vs the number of refinements. (a) An elastic sphere in Navier-Stokes flow in 3D. (b) A sphere with surface tension in Navier-Stokes flow in 3D. Solid lines with markers `o' are for the FSIB method. Dashed lines with markers `+' are for the IB method. Dash-dotted lines are for reference (second-order convergence for the FSIB method and first-order convergence for the IB method).}
\end{figure}
It is also of interest to consider the case of surface tension. The surface tension of a sphere is modeled by a total energy $E$ that is proportional to the surface area $S$ of the geometry as
$$E=\gamma S,$$
where the constant $\gamma$ is the surface tension. One may easily verify that the total surface energy equals the elastic energy of the spherical reference configuration at the initial state in \cref{eq:total_elastic_energy} if we set $\gamma=\sigma$. We make this choice, $\gamma=\sigma=1$, so that the stiffnesses in the two experiments are comparable. Unlike the elastic energy used previously, the surface tension does not prevent the surface from deforming, since the energy depends only on the surface area rather than on the shape. It follows that the surface tension does not oppose changes in the shape of the individual triangles. Therefore, it produces different dynamics. Moreover, the surface tension model produces a force in the direction normal to the immersed boundary, which means that there is no tangential jump in stress across the immersed boundary. It is also true in this case that the normal derivative of the tangential velocity has no jump across the immersed boundary \citep{laiRemarkJumpConditions2001}.
We use a triangulated surface as before and approximate the surface area by the sum of the areas of the triangles. Following a process similar to that used for the elastic model, we take the negative derivative of the energy with respect to the node coordinates and obtain the force on the node $\boldsymbol X_i$ as
$$\boldsymbol F_i=-\frac{\gamma}{2}\sum_{(i,j,k)\in \Delta_{i}} (\boldsymbol X_j-\boldsymbol X_k)\times \boldsymbol n_{i,j,k},$$
where $\Delta_i$ denotes the set of triangles, each given by the indices $(i,j,k)$ of its three vertices, to which node $\boldsymbol X_i$ belongs, and $\boldsymbol n_{i,j,k}$ is the unit normal vector of the triangle $(i,j,k)$, with the positive direction given by the right-hand rule applied to the ordering $\boldsymbol X_i$, $\boldsymbol X_j$, $\boldsymbol X_k$.
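A minimal sketch of this force assembly (illustrative Python, not our implementation; it accumulates the contribution of each triangle to each of its three vertices) is:
\begin{verbatim}
import numpy as np

def surface_tension_forces(X, tris, gamma=1.0):
    # X: (n, 3) node positions; tris: (i, j, k) triples ordered counterclockwise
    # when viewed from outside (right-hand-rule outward normal).
    F = np.zeros_like(X)
    for (i, j, k) in tris:
        n = np.cross(X[j] - X[i], X[k] - X[i])
        n /= np.linalg.norm(n)                       # unit normal of triangle (i,j,k)
        # each vertex receives the term built from the opposite edge vector
        F[i] -= 0.5 * gamma * np.cross(X[j] - X[k], n)
        F[j] -= 0.5 * gamma * np.cross(X[k] - X[i], n)
        F[k] -= 0.5 * gamma * np.cross(X[i] - X[j], n)
    return F
\end{verbatim}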
First, we run essentially the same simulation as for the elastic model and obtain similar results for volume conservation. For the convergence study, we start with $N=4$ and one refinement of the regular icosahedron. The time step is $\Delta t=0.004$, and we run until $T=0.2$ to compute the interpolated velocity on the Eulerian grids. We perform 6 successive refinements of the grids and plot the successive relative norm-2 error of the velocity field in \cref{fig:err_euler_1st_2nd}. The FSIB method shows second-order convergence while the IB method has only first-order convergence, the same result as for the elastic model.
\section{Conclusions}
In this paper, we present a new Fourier Spectral Immersed Boundary method. The new method is fully independent of any Eulerian grid and solves the fluid equations in Fourier space. The force spreading and the velocity interpolation of the standard IB method become an integral over the Lagrangian variables and a Fourier series evaluation, respectively, both of which are implemented by the NUFFT. We demonstrate that the FSIB method has an analog in the framework of the IB method by using a new `$\mathtt{sinc}$' kernel. The `$\mathtt{sinc}$' kernel, unlike the standard IB kernels, is not finitely supported, but it shares many similarities with them. It satisfies all of the conditions imposed on the standard kernels, both discretely and continuously, and may be viewed as the limit of the standard kernels as the width of their support goes to infinity. We also show that the FSIB method preserves the duality of the force spreading and the velocity interpolation. The conservation of momentum and the conservation of energy are proved. We implement the FSIB method efficiently with the help of the NUFFT with a complexity of $O(N^3\log(N))$ per time step, where $N$ is the number of Fourier modes in each dimension. Besides these properties that are shared with the IB method, the FSIB method substantially outperforms the IB method in the following ways. The interpolated velocity field of the FSIB method is analytically divergence-free and thus conserves volume with no leakage. Moreover, the FSIB method has the property of exact translation invariance, which is impossible to achieve with the finitely supported kernels of the standard IB method.
We verify these properties of the FSIB method in comparison with the IB method by a series of numerical experiments for the Stokes equations and the Navier-Stokes equations in two space dimensions and also in three space dimensions. The convergence rate of the standard IB method is first-order in all experiments. The FSIB method has the same first-order convergence for Stokes flow but with an empirical error that is 10 times smaller than that of the standard IB method. The FSIB method shows $3/2$-order accuracy for the circle problem in Navier-Stokes flow in two dimensions. In three-dimensional Navier-Stokes flow, which is by far the most important and also the most challenging of the cases that we have considered, the FSIB method shows second-order convergence with immersed boundaries that are topologically spherical and have mechanical properties derived from a surface elasticity model or from a surface tension model. We have no explanation for the different convergence rates observed in different settings, but in every case that we have tested, the FSIB method outperforms the standard IB method, either by having a higher order of convergence or by having a much smaller empirical error with the same order of convergence. Our quantitative estimates of effective boundary thickness also show that the FSIB method has improved boundary resolution in comparison to the standard IB method. Because of these advantages, which are achieved without increased computational cost, we believe that the FSIB method will turn out to be widely applicable.
\section{Acknowledgement}
The first author, Zhe Chen, is supported by the Henry MacCracken Fellowship at New York University.
\begin{appendices}\label{sec:append}
\section{Nonuniform Fast Fourier Transform (NUFFT)}\label{appendix:nufft}
The NUFFT algorithm generalizes the FFT to nonuniform data that lies off the grid. Two types of NUFFT are used in this paper, called the type-1 NUFFT and the type-2 NUFFT. For simplicity, we take the number of dimensions to be $d=3$ and the number of desired Fourier modes $N$ in each spatial dimension to be even.
\subsection{Type-1, nonuniform to uniform}
The goal of the type-1 NUFFT is to compute the nonuniform Fourier transform below efficiently and accurately.
\begin{equation}\label{eq:type1}
\hat{f}_{\boldsymbol k}:=\sum_{j=1}^{N_b} f_j \exp(-i \boldsymbol k \cdot \boldsymbol x_j),\quad \boldsymbol k \in \mathcal{K}_{N}
\end{equation}
Note that this definition of $\hat{f}$ is equivalent to the Fourier transform of the periodic function $f(\boldsymbol x)=\sum_{j=1}^{N_b} f_j \delta(\boldsymbol x- \boldsymbol x_j)$ on $[0,L]^3$, i.e. a sum of delta functions with weights $f_j$ at the locations $\boldsymbol x_j$. One can also view this as a generalization of the discrete Fourier transform (DFT): when the $\boldsymbol x_j$ happen to lie on the uniform grids, the type-1 NUFFT reduces to the DFT. We compute only the Fourier modes in $\mathcal{K}_{N}$, which is defined in \cref{sec:spatial_discretization}.
The first step is to spread the nonuniform sources to the uniform grid points $\boldsymbol x_{i,j,k},\ i,j,k=1,...,N$ by convolution with a kernel function $G_\xi$:
\begin{equation}\label{eq:type1_conv}
f_G(\boldsymbol x_{i,j,k}) := G_\xi * f (\bm{x_{i,j,k}})= \sum _{q=1}^{N_b} f_q G_\xi (\bm x_{i,j,k}-\bm x_q) .
\end{equation}
Due to the choice of $G_\xi$ that is discussed in \cref{sec:kern_choi}, we only need to spread to the nearest $\sigma$ grid points, i.e. the width of the kernel is truncated to $2\sigma h$.
Then we apply an FFT on the uniform grids to pass to Fourier space and cancel the effect of the convolution with the kernel by dividing by the Fourier transform of the kernel, $\hat{G}_\xi$, which is usually available analytically or can be computed efficiently. Thus we compute $\hat{f}$ by
\begin{equation}\label{eq:type1_deco}
\hat{f}(\bm{k})\approx\frac{\mathcal{F} \left( f_G(\boldsymbol x_{i,j,k})\right)}{\hat{G}_\xi(\boldsymbol k)}, \quad \bm k \in \mathcal{K}_{N},
\end{equation}
where $\mathcal{F}$ denotes FFT.
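As an illustration of the three steps above (spread, FFT, deconvolve), the following one-dimensional Python sketch implements a type-1 NUFFT with a Gaussian kernel instead of the Kaiser-Bessel kernel discussed in the next subsection, and without truncating the spreading, so that the structure of the algorithm is as transparent as possible. The parameter choices follow standard practice for Gaussian-based NUFFTs and are not those of our implementation.
\begin{verbatim}
import numpy as np

def nufft1d_type1(x, f, N):
    # Type-1 NUFFT in 1D on [0, 2*pi): fhat[k] = sum_j f[j] exp(-i k x[j]),
    # k = -N/2, ..., N/2 - 1, via Gaussian spreading + FFT + deconvolution.
    R, Msp = 2, 12                               # oversampling and spreading width
    Mr = R * N                                   # fine (oversampled) grid size
    tau = np.pi * Msp / (N**2 * R * (R - 0.5))   # Gaussian width parameter
    xg = 2 * np.pi * np.arange(Mr) / Mr
    ftau = np.zeros(Mr)
    for xj, fj in zip(x, f):                     # spread onto the periodic fine grid
        d = (xg - xj + np.pi) % (2 * np.pi) - np.pi
        ftau += fj * np.exp(-d**2 / (4 * tau))
    Ftau = np.fft.fftshift(np.fft.fft(ftau)) / Mr            # modes -Mr/2 .. Mr/2 - 1
    k = np.arange(-N // 2, N // 2)
    Ftau = Ftau[Mr // 2 - N // 2 : Mr // 2 + N // 2]          # keep the N lowest modes
    return np.sqrt(np.pi / tau) * np.exp(k**2 * tau) * Ftau   # deconvolve the Gaussian

rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 50)
f = rng.standard_normal(50)
N = 32
k = np.arange(-N // 2, N // 2)
direct = np.array([np.sum(f * np.exp(-1j * kk * x)) for kk in k])
print(np.max(np.abs(nufft1d_type1(x, f, N) - direct)))       # small
\end{verbatim}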
\subsection{Type-2, uniform to nonuniform}
The type-2 NUFFT evaluates a Fourier series at nonuniform points in physical space for given Fourier coefficients on the uniform modes in $\mathcal{K}_{N}$:
\begin{equation}\label{eq:type2}
f_j:=\sum_{\boldsymbol k \in \mathcal{K}_{N}} \hat{f}_{\boldsymbol k} \exp(i \boldsymbol k \cdot \boldsymbol x_j), \quad j=1,...,N_b.
\end{equation}
Therefore, $f_j$ is the value of the Fourier series at location $x_j$.
The algorithm for the type-2 NUFFT is essentially the reverse of the type-1 algorithm. We first divide $\hat{f}_{\boldsymbol k}$ by the Fourier transform of the same kernel, $\hat{G}_\xi$. Then we apply an inverse FFT to obtain the function $g(\boldsymbol x_{i,j,k})$ on the grids,
\begin{equation}\label{eq:type2_divi}
g(\boldsymbol x_{i,j,k}) := \mathcal{F}^{-1}\left(\frac{\hat{f}}{\hat{G}_\xi}\right) .
\end{equation}
Finally, we evaluate the convolution with the kernel $G_\xi$ at the nonuniform off-grid points $\boldsymbol x_q$:
\begin{equation}\label{eq:type2_conv}
f_q=\sum_{i,j,k} g(\boldsymbol x_{i,j,k}) G_\xi(\boldsymbol x_q- \boldsymbol x_{i,j,k})h^3,
\end{equation}
where $q=1,...,N_b$ and $\mathcal{F}^{-1}(\cdot)$ denotes the inverse FFT. By the convolution theorem, the division by $\hat{G}_\xi$ in \cref{eq:type2_divi} cancels the effect of the convolution with $G_\xi$ in \cref{eq:type2_conv}. Again, the kernel is truncated to the nearest $\sigma$ grid points.
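A matching one-dimensional sketch of the type-2 algorithm (deconvolve, inverse FFT, spread), again with a Gaussian kernel and without truncation, is given below; it is illustrative only and is not our implementation.
\begin{verbatim}
import numpy as np

def nufft1d_type2(x, fhat, N):
    # Type-2 NUFFT in 1D on [0, 2*pi): f[j] = sum_k fhat[k] exp(+i k x[j]),
    # k = -N/2, ..., N/2 - 1, via deconvolution + inverse FFT + Gaussian spreading.
    R, Msp = 2, 12
    Mr = R * N
    tau = np.pi * Msp / (N**2 * R * (R - 0.5))
    k = np.arange(-N // 2, N // 2)
    ghat = fhat * np.exp(k**2 * tau) / np.sqrt(4 * np.pi * tau)  # divide by Ghat
    pad = np.zeros(Mr, dtype=complex)
    pad[Mr // 2 - N // 2 : Mr // 2 + N // 2] = ghat
    g = np.fft.ifft(np.fft.ifftshift(pad)) * Mr                  # fine-grid values
    xg = 2 * np.pi * np.arange(Mr) / Mr
    out = np.empty(len(x), dtype=complex)
    for j, xj in enumerate(x):                                   # spread to targets
        d = (xg - xj + np.pi) % (2 * np.pi) - np.pi
        out[j] = np.sum(g * np.exp(-d**2 / (4 * tau))) * (2 * np.pi / Mr)
    return out

rng = np.random.default_rng(1)
N = 32
x = rng.uniform(0, 2 * np.pi, 40)
fhat = rng.standard_normal(N) + 1j * rng.standard_normal(N)
k = np.arange(-N // 2, N // 2)
direct = np.array([np.sum(fhat * np.exp(1j * k * xj)) for xj in x])
print(np.max(np.abs(nufft1d_type2(x, fhat, N) - direct)))        # small
\end{verbatim}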
\subsection{Choice of the kernels and its parameters}\label{sec:kern_choi}
The criteria for a good kernel $G_\xi$ are simple. First, $G_\xi$ should decay rapidly in physical space, so that it can be truncated to as small a support as possible. Second, the tail of $\hat{G}_\xi$ outside the truncation window $|k_j|<N/2$ should be as small as possible. Third, $\hat{G}_\xi$ should be easy to evaluate. Here we give a popular choice of $G_\xi$, the ``Kaiser-Bessel'' kernel $G_{\text{KB},\xi}$ \citep{barnettParallelNonuniformFast2019}
\begin{equation}\label{eq:kernel_KB}
G_{\text{KB}, \xi}(x):=\left\{\begin{array}{ll}
I_{0}\left(\xi \sqrt{1-x^{2}}\right) / I_{0}(\xi), & |x| \leq 1 ,\\
0, & \text { otherwise },
\end{array}\right.
\end{equation}
where $I_0$ is the regular modified Bessel function of order zero. Its Fourier transform $\hat{G}_{\text{KB},\xi}$ is analytically available as
\begin{equation}\label{eq:kernel_KB_hat}
\hat{G}_{\text{KB}, \xi}(k)=\frac{2}{I_{0}(\xi)} \frac{\sinh \sqrt{\xi^{2}-k^{2}}}{\sqrt{\xi^{2}-k^{2}}}.
\end{equation}
Because $G_\xi$ decays exponentially in physical space, we choose the truncation in physical space as $\sigma \sim |\log(\epsilon)|$ for any desired accuracy $\epsilon$ of \cref{eq:type1} and \cref{eq:type2} relative to the exact nonuniform Fourier transform. We then choose the parameter $\xi$ such that the tail of $\hat{G}_\xi$ contributes an error smaller than $\epsilon$. One may refer to \citep{barnettParallelNonuniformFast2019} for further details on the error analysis and the choice of parameters.
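The kernel \cref{eq:kernel_KB} and its transform \cref{eq:kernel_KB_hat} are easy to check numerically; the following short Python sketch (illustrative only) compares a direct quadrature of the Fourier integral with the analytical formula for an arbitrary choice of $\xi$.
\begin{verbatim}
import numpy as np

xi = 10.0                                         # arbitrary shape parameter
x = np.linspace(-1.0, 1.0, 20001)
G = np.i0(xi * np.sqrt(1.0 - x**2)) / np.i0(xi)   # Kaiser-Bessel kernel on [-1, 1]

def G_hat_analytic(k):
    s = np.sqrt(complex(xi**2 - k**2))            # imaginary for |k| > xi
    return float((2.0 / np.i0(xi)) * (np.sinh(s) / s).real)

for k in (0.0, 5.0, 15.0):
    numeric = np.trapz(G * np.cos(k * x), x)      # direct quadrature of the transform
    print(k, numeric, G_hat_analytic(k))
\end{verbatim}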
As for the computational cost, the direct computation of the nonuniform Fourier transform is $O(N_b\cdot N^3)$. Since $N_b$ is proportional to $N^2$ in three dimensions, this cost is prohibitive. By using the NUFFT, we reduce the cost to $O(N_b\sigma^3+N^3\log(N))$, where the first term is the cost of spreading and the second is the cost of the FFT.
\end{appendices}
\bibliographystyle{plain}
\section{Introduction}
{
Cellular mobile communication technology evolves as new services and new wireless channels emerge.
As user demand shifted from simple voice-only services to high speed data, wireless technology shifted from narrowband time-division in 2G systems to frequency-division in 4G and 5G systems \cite{WC3a}-\cite{WC3d}. To some degree, it is innovation at the physical layer that distinguishes one generation from the next.
Mobile communications systems serving billions of people all over the world
have been made possible by theoretical advances in coding \cite{inno1},\cite{inno2}, in methods that provide spatial diversity \cite{inno3}-\cite{inno8}, and in methods of user cooperation \cite{inno9},\cite{inno10}.
This flow of ideas has transformed the teaching of wireless communications
\cite{WC1}-\cite{WC4}.
We expect that innovation at the physical layer will continue as carrier frequencies increase (mmWave and THz), and new high-mobility use cases emerge (bullet trains, autonomous vehicles, airplanes). However, perhaps the most fundamental change in mobile communications is the relative importance of the physical layer, the network layer and the services layer. As the IP revolution transformed the wireless world, cell phones migrated to smartphones which are complex software platforms. Demand for capacity could not be met by circuit switched networks engineered to provide worst case coverage at the cell boundary, and the physical layer changed in response \cite{WC3}. We emphasize predictability in this article because a physical layer that is predictable simplifies network management and the provision of services.
Enhancements to mobile communication services across wireless generations have been enabled by judicious allocation of time and frequency resources. To date, these enhancements have been achieved primarily within the framework of time domain modulation (TDM) or frequency domain modulation (FDM). In TDM based schemes, information is carried by a narrow time domain (TD) pulse, and an information packet is a superposition of such TD pulses. In TDM, the signal is localized in the TD but not in the frequency domain (FD), and as a consequence, the interaction between a TD pulse and the environment leads to time selective fading. Likewise, in FDM based schemes, information is carried by a narrow FD pulse, which is essentially a sinusoid in the time domain. The FDM signal is localized in the FD but not in the TD, and this lack of localization in the TD causes frequency selective fading. The lack of time frequency (TF) localization in TDM/FDM signals also makes the relation between channel inputs and channel outputs less predictable. Fading and lack of predictability degrade the performance of TDM/FDM on doubly-spread channels, that is, channels that are selective in both time and frequency \cite{WC4}. As carrier frequencies increase and high-speed use cases emerge, we encounter doubly-spread channels that are more extreme. TDM/FDM have served well in wireless generations to date, but better TF localization may be required in the wireless generations to come.
TF localization is particularly important to emerging intelligent transportation systems, where accurate and high-resolution radar imaging can identify potential hazards in fast-moving environments and enable responses that enhance road safety \cite{RS1}. Waveforms with better TF localization enable more accurate location of objects like pedestrians and bicycles which have small radar cross sections. Looking back to 1953, only 5 years after Claude Shannon created information theory, Philip Woodward described how to think of radar in information theoretic terms \cite{RS2}. He suggested that we view the radar scene as an unknown operator parametrized by delay and Doppler, and that we view radar waveforms as questions that we ask the operator. Woodward proposed to define a good question in terms of lack of ambiguity in the answer, and he sought questions with good localization in delay and Doppler. By identifying a single waveform, good for both channel estimation and communication, we seek to decrease system complexity, reduce electromagnetic interference, and mitigate spectrum related concerns \cite{RS3}.
We describe wireless channels in terms of delay and Doppler operators, and we describe how to use certain geometric modes of these operators for communication and sensing. These special modes constitute the orthogonal time frequency space (OTFS) waveforms, introduced in \cite{RH1} (see also \cite{RH2,RH3}). When viewed in the time domain, an OTFS waveform is a pulse train modulated by a tone, a pattern that we refer to as a pulsone and we provide a comprehensive review of its mathematical foundation. The heart of this foundation is the Zak transform, which converts the TD signal into a quasi-periodic function in delay and Doppler. We describe how OTFS waveforms are pulses in the DD domain, that are engineered to mirror the dynamics of the wireless channel. We derive their TF localization properties from first principles, explaining why the relation between channel inputs and channel outputs is predictable, and why fading is eliminated. We also explain how this approach to TF localization provides a geometric interpretation of the Nyquist rate. In the context of radar, we explain why Woodward might have thought of OTFS waveforms as good questions. In the context of communications, we explain how OTFS waveforms give rise to a modulation scheme that multiplexes information in the DD domain. The wireless channel is determined by a small number of dominant reflectors, which means that in the DD domain, it admits a sparse representation. Moreover, this representation changes only at the speed of the underlying physics of the reflectors which renders it ``effectively" stationary. In this context, we explain how better TF localization of the carrier waveform takes advantage of this DD domain stationary representation of the channel, translating to better performance under doubly-spread channels.
This is the first of two papers, providing the mathematical foundation for signal processing in the delay-Doppler domain.} {It prepares the ground for the second paper which
studies the performance of OTFS in comparison to other modulation schemes such as TDM and FDM, and explores the utility of the OTFS waveform for radar sensing.}
\section{The Delay-Doppler Domain}
\label{secWirelessChannel}
\begin{figure*}[!h]
\centering
\includegraphics[width=16cm, height=8.0cm]{Fig1.jpg}
\caption{{
The delay-Doppler spreading function $h(\tau,\nu)$ of a doubly-spread wireless channel. Four dominant paths between the base station (BS) and the user equipment (UE) result in a function $h(\tau,\nu)$ comprising impulses at $(\tau,\nu) = (\tau_i,\nu_i)$ where $\tau_i$ is the $i$th path delay and $\nu_i$ is the $i$th path Doppler shift.
Note that the first and third reflectors do not introduce Doppler shifts since they are stationary. Also, the Doppler shift induced by the second and the fourth reflector have opposite
polarity as these mobile reflectors are travelling in opposite directions.
}}
\label{fig_1}
\end{figure*}
{A typical} wireless channel between a transmitter and a receiver is determined by a relatively small number of dominant propagation paths. Fig.~\ref{fig_1} shows a time-domain (TD) signal $x(t)$ transmitted by a base station (BS) and received at some user equipment (UE) through four propagation paths. Each path is characterized by the \emph{path delay}, which is the time taken by the signal to propagate along the path, and by the \emph{path Doppler shift}, which is the frequency shift induced
by the relative motion of transmitter, reflector, and receiver.
In Fig.~\ref{fig_1}, we assume the BS and UE are stationary so that only reflector motion determines path Doppler shift.
The signal copy received at UE along the $i$th path is given by $h_i x(t - \tau_i) \, e^{j 2 \pi \nu_i (t - \tau_i)}$, where
$h_i$ is the complex channel path gain, $\tau_i$ seconds is the path delay, and
$\nu_i$ Hz is the path Doppler shift.
We refer to such wireless channels as \emph{doubly-spread}; they are characterized by the delay, Doppler shift, and complex gain of each channel path. The action of the wireless channel on the transmitted signal $x(t)$ is specified by the
\emph{delay-Doppler (DD) spreading function}
$h(\tau,\nu)$, where $\tau \in {\mathbb R}$
and $\nu \in {\mathbb R}$ are the delay and Doppler variables, respectively. In Fig.~\ref{fig_1}
\begin{eqnarray}
\label{eqn1}
h(\tau, \nu) & = & \sum\limits_{i=1}^4 \, h_i \, \delta(\tau - \tau_i) \, \delta(\nu - \nu_i),
\end{eqnarray}
where $\delta(\cdot)$ denotes the Dirac-delta impulse function.
The noise-free TD signal received at the UE is given by \cite{Bello63}
\begin{eqnarray}
\label{eqn2}
y(t) & = & \int\int h(\tau,\nu) \, x(t - \tau) \, e^{j 2 \pi \nu (t - \tau)} \, d\tau \, d\nu.
\end{eqnarray}
In Fig.~\ref{fig_1},
\begin{eqnarray}
\label{eqn3}
y(t) & = & \sum\limits_{i=1}^4 h_i \, x(t - \tau_i) \, e^{j 2 \pi \nu_i (t - \tau_i)},
\end{eqnarray}
which is the sum of the signal copies received along the four paths.
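As an illustrative aside, the channel action in (\ref{eqn2})--(\ref{eqn3}) can be simulated directly on sampled signals. The following Python/NumPy sketch is a minimal example; the sampling rate, path gains, delays and Doppler shifts are assumed values chosen only for illustration and are not the parameters of Fig.~\ref{fig_1}.
\begin{verbatim}
import numpy as np

fs = 10e6                                  # assumed sampling rate: 10 MHz
t = np.arange(0, 1e-3, 1 / fs)             # 1 ms observation window

# assumed example paths: complex gains, delays (s), Doppler shifts (Hz)
h   = np.array([1.0, 0.7, 0.5, 0.3], dtype=complex)
tau = np.array([2e-6, 2e-6, 3e-6, 4e-6])
nu  = np.array([0.0, 800.0, 0.0, -400.0])

def apply_dd_channel(x, t, h, tau, nu, fs):
    """y(t) = sum_i h_i x(t - tau_i) exp(j 2 pi nu_i (t - tau_i)), cf. (3)."""
    y = np.zeros_like(t, dtype=complex)
    for hi, ti, vi in zip(h, tau, nu):
        d = int(round(ti * fs))                           # path delay in samples
        xd = np.concatenate([np.zeros(d, dtype=complex), x[:len(x) - d]])
        y += hi * xd * np.exp(2j * np.pi * vi * (t - ti))
    return y

# a single narrow TD pulse transmitted in the middle of the window
x = np.zeros(len(t), dtype=complex)
x[len(t) // 2] = 1.0
y = apply_dd_channel(x, t, h, tau, nu, fs)  # copies appear at the path delays
\end{verbatim}
Inspecting \texttt{np.abs(y)} shows received copies at the delayed positions; the two equal-delay paths superimpose into a single copy whose complex gain depends on the transmit time, a point revisited in Section \ref{secChannelInteractionTD}.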
\section{Time and Frequency Domain Modulation}
\label{sec2}
\begin{figure*}[!h]
\centering
\includegraphics[width=14cm, height=8.0cm]{Fig2.jpg}
\caption{Information carriers for TDM/FDM/OTFS in modulation domain and time domain. Traditional TDM and FDM carriers are narrow pulses in the TD and FD but spread in the FD and TD, respectively, manifesting the fundamental obstruction to TF localization. In contrast, the OTFS carrier is a quasi-periodic pulse in the DD domain, viewed as ``effectively'' localized jointly in time and frequency.}
\label{fig_2}
\end{figure*}
In time domain modulation (\emph{TDM}), information is carried by a \emph{narrow TD pulse}, and an information packet is the superposition of distinct TD pulses, each modulated by an information symbol.
In frequency domain modulation (\emph{FDM}), information is carried by a \emph{narrow FD pulse}, which is essentially a sinusoid in the time domain. An information packet is a TD signal, which is the inverse Fourier transform of a superposition of distinct FD pulses, each modulated by an information symbol.
The information carrier in TDM is localized in the TD but not in the FD. Similarly, the information carrier in FDM is localized in the FD but not in the TD (the sinusoid $e^{j 2 \pi f_0 t}$ shown in Fig.~\ref{fig_2} is spread in time, but localized at $f = f_0$ in frequency). {As will be explained in the sequel, the implication of this lack of localization is that the TDM/FDM input-output relation over a doubly-spread channel exhibits fading and non-predictability.
The fading attribute implies a degradation in BER performance, while the non-predictability attribute necessitates frequent acquisition of the effective channel response
in order to maintain good performance (see Section \ref{secTDMFDMIOrelation} and Section \ref{secFDMIOrelation}).}
{The Heisenberg uncertainty principle implies that a signal cannot be simultaneously localized jointly in time and frequency. However, as will be explained in the sequel, this obstruction can be ``effectively'' eliminated as long as a certain quasi-periodic condition is maintained (see Section \ref{secDDCarrier}).
In OTFS, information is carried by a \emph{DD domain} quasi-periodic pulse. The translation from a DD signal into a TD signal is carried by the \emph{inverse Zak transform} (see Section \ref{secDDCarrier}), just as the translation from an FD signal into a TD signal is carried by the inverse Fourier transform.
The bottom part of Fig.~\ref{fig_2} depicts the TD realization of a DD domain pulse. We refer to this TD signal as a \emph{pulsone}, since it essentially consists of a pulse train modulated by a frequency tone (see also \cite{RH3} and \cite{SKM1})\footnote{\footnotesize{{As we shall see later in Section \ref{secDDCarrier}, the ``effective'' localization of a pulsone (of approximate bandwidth $B$ and time duration $T$) is not exact: the corresponding DD domain pulse has most of its energy spread in a narrow region having width $B^{-1}$ and $T^{-1}$ along the delay and Doppler axis, respectively.} }}.
The main beneficial property of the pulsone is that its interaction with a doubly-spread channel is both non-fading and predictable, which translates to superior BER performance and more efficient channel acquisition (see Section \ref{secOTFSIOrelation}).}
\section{Delay-Doppler Modulation}
\label{secDDCarrier}
\begin{figure}[!h]
\centering
\includegraphics[width=7.5cm, height=5.5cm]{Fig4.jpg}
\caption{A DD domain pulse is localized only within the fundamental DD domain period ${\mathcal D}_0$, as it repeats infinitely many times in a quasi-periodic fashion.}
\label{fig_3}
\end{figure}
{As mentioned in the previous section, in OTFS, information is carried over a DD domain pulse. In this section, we give the formal definition of a DD domain pulse and study its local structure. In addition, we illustrate the impact of various DD domain pulse parameters on its TD and FD realizations. Finally, we derive the global properties of the OTFS modulation from the local properties of a DD domain pulse. The definition of a DD domain pulse depends on a choice of two
periods: the \emph{delay period}, denoted by $\tau_p \in {\mathbb R}_{\geq 0}$ and the \emph{Doppler period}, denoted by $\nu_p \in {\mathbb R}_{\geq 0}$. The two periods should be reciprocal, that is, $\nu_p = 1/\tau_p$. For any given choice of the periods, there is an associated transform, called the time Zak transform, denoted by ${\mathcal Z}_t$, that establishes a unitary equivalence between TD signals and a sub-class of quasi-periodic DD domain signals.\footnote{\footnotesize{{The definition of the time Zak transform depends on the values of the periods. For clarity, we omit the periods from the notation, assuming their values to be clear from the context.}}}} {The time Zak transform of a TD signal $x(t)$ is given by}
\begin{eqnarray}
\label{eqn7}
x_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu) & = & {\mathcal Z}_t( x(t)) \nonumber \\
& \Define & \sqrt{\tau_p} \sum\limits_{k=-\infty}^{\infty} x(\tau + k \tau_p) \, e^{-j 2 \pi \nu k \tau_p}.
\end{eqnarray}
{Observe from (\ref{eqn7}) that,
for any $n,m \in {\mathbb Z}$,
$x_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu)$ satisfies}
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{eqn4}
x_{_{\mbox{\footnotesize{dd}}}}(\tau + n \tau_p, \nu + m \nu_p) & & \nonumber \\
& \hspace{-24mm} = & \hspace{-13mm}
\sqrt{\tau_p} \hspace{-1mm} \sum\limits_{k=-\infty}^{\infty} \hspace{-1mm} x(\tau + (k + n) \tau_p) \, e^{-j 2 \pi (\nu + m \nu_p) k \tau_p} \nonumber \\
& \hspace{-24mm} = & \hspace{-13mm} \sqrt{\tau_p} \sum\limits_{k^\prime=-\infty}^{\infty} x(\tau + k^\prime \tau_p) \, e^{-j 2 \pi \nu k^\prime \tau_p} e^{j 2 \pi n \nu \tau_p} \nonumber \\
& \hspace{-24mm} = & \hspace{-13mm} e^{j 2 \pi n \nu \tau_p} \, x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu) \,\,,\,\, n,m \in {\mathbb Z}.
\end{eqnarray}\normalsize}
\hspace{-4mm}
{The condition in (\ref{eqn4}) is referred to as the quasi-periodicity condition.
Fig.~\ref{fig_2} depicts a quasi-periodic pulse which is localized at the point $(\tau,\nu) = (\tau_0,\nu_0)$
within the rectangular region}
\begin{eqnarray}
{\mathcal D}_0 & \Define & \{ (\tau,\nu) \, {\Big \vert} \, 0 \leq \tau < \tau_p, \, 0 \leq \nu < \nu_p \}.
\end{eqnarray}
{We refer to ${\mathcal D}_0$ as the \emph{fundamental period} of the DD domain.
Due to its quasi-periodicity (see (\ref{eqn4})), the pulse is present at all integer translates
$(\tau,\nu) = (\tau_0 + n \tau_p, \nu_0 + m \nu_p)$, where $m,n \in {\mathbb Z}$, as shown
in Fig.~\ref{fig_3}.}
Note that the phase of the pulse
changes when the pulse location shifts by an integer multiple of $\tau_p$ along the delay axis, but there is no change in phase when the pulse location shifts by an integer multiple of $\nu_p$ along the Doppler axis. In summary, the DD domain pulse consists of a configuration of infinitely many pulses which repeat at integer multiples of $\tau_p$ along the delay axis, and at integer multiples of $\nu_p = 1/\tau_p$ along the Doppler axis.
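A quick numerical check of quasi-periodicity is possible in a finite discrete setting. The following Python/NumPy sketch is a minimal discrete analogue of (\ref{eqn7}), with $M$ delay bins and $N$ Doppler bins per period; the values of $M$ and $N$ and the normalization are assumed for illustration. It verifies that a shift by one delay period multiplies the Zak-domain value by $e^{j 2 \pi n_0/N}$, the discrete counterpart of (\ref{eqn4}), while a shift by one Doppler period leaves it unchanged.
\begin{verbatim}
import numpy as np

M, N = 16, 12                          # assumed delay/Doppler bins per period
rng = np.random.default_rng(0)
x = rng.standard_normal(M * N) + 1j * rng.standard_normal(M * N)  # arbitrary TD vector

def zak(x, m, n):
    """Discrete Zak value: sum_k x[(m + k M) mod MN] exp(-j 2 pi n k / N) / sqrt(N)."""
    k = np.arange(N)
    return np.sum(x[(m + k * M) % (M * N)] * np.exp(-2j * np.pi * n * k / N)) / np.sqrt(N)

m0, n0 = 5, 7
z = zak(x, m0, n0)

# shift by one delay period (M bins): value picks up the phase exp(j 2 pi n0 / N), cf. (4)
print(np.isclose(zak(x, m0 + M, n0), np.exp(2j * np.pi * n0 / N) * z))   # True
# shift by one Doppler period (N bins): value is unchanged
print(np.isclose(zak(x, m0, n0 + N), z))                                  # True
\end{verbatim}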
{The center part of Fig.~\ref{fig_4} depicts a DD domain pulse $x_{_{\mbox{\footnotesize{dd}}}}(\tau , \nu )$ located at $(\tau,\nu) = (\tau_0 , \nu_0 )$
within the fundamental period ${\mathcal D}_0$. Along the delay axis, it is spread over a
characteristic length $1/B < \tau_p$, and along the Doppler axis, it is spread over a characteristic length $1/T < \nu_p$. Since the DD domain pulse is quasi-periodic, it repeats infinitely many times
as shown in Fig.~\ref{fig_3}.}
\begin{figure*}[!h]
\centering
\includegraphics[width=14cm, height=8.0cm]{Fig3.jpg}
\caption{{A DD domain pulse and its TD/FD realizations referred to as TD/FD pulsone. The TD pulsone consists of a finite duration pulse train modulated by a TD tone. The FD pulsone consists of a finite bandwidth pulse train modulated by a FD tone. The location of the pulses in the TD/FD pulse train and the frequency of the modulated TD/FD tone are determined by the location of the DD domain pulse $(\tau_0,\nu_0)$. The time duration and bandwidth of a pulsone are inversely proportional to the characteristic width of the DD domain pulse along the Doppler axis and the delay axis, respectively. The number of non-overlapping DD pulses, each spread over an area $B^{-1}T^{-1}$, inside the fundamental period ${\mathcal D}_0$ (which has unit area) is equal to the time-bandwidth product $BT$ and the corresponding pulsones are orthogonal to one another, rendering OTFS an orthogonal modulation that achieves the Nyquist rate. As $\nu_p \rightarrow \infty$, the FD pulsone approaches a single FD pulse which is the FDM carrier. Similarly, as $\tau_p \rightarrow \infty$, the TD pulsone approaches a single TD pulse which is the TDM carrier. OTFS is therefore a family of modulations parameterized by $\tau_p$ that interpolates between TDM and FDM.}}
\label{fig_4}
\end{figure*}
{We first analyze the structure of the TD realization,
which is obtained by applying the inverse time Zak transform \cite{Zak67, Janssen88}, i.e.,}
{\small
\vspace{-4mm}
\begin{eqnarray}
\label{eqn6}
x(t) & = & {\mathcal Z}_t^{-1}\left( x_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu) \right) \, \Define \, \sqrt{\tau_p} \int_{0}^{\nu_p} x_{_{\mbox{\footnotesize{dd}}}}(t,\nu) \, d\nu.
\end{eqnarray}\normalsize}
\hspace{-4mm}
The top part of Fig.~\ref{fig_4} shows that $x(t)$ is a pulse train of finite duration $T$. Each pulse in the train is spread over a time duration $1/B$, and consecutive pulses are separated by the delay period $\tau_p$. The pulses are located at time instances $t = n \tau_p + \tau_0$, $n \in {\mathbb Z}$, where $\tau_0$ is the delay coordinate of the underlying DD domain pulse. The pulse train is modulated by a sinusoid of frequency $\nu_0$, where $\nu_0$ is the Doppler coordinate of the underlying DD domain pulse.
{Next, we analyze the structure of the FD realization, which is obtained by applying the inverse frequency Zak transform ${\mathcal Z}_f^{-1}$ \cite{Zak67, Janssen88}, i.e.,}
{\small
\vspace{-4mm}
\begin{eqnarray}
\label{eqn8}
\hspace{-2mm}
X(f) & \hspace{-2mm} = & \hspace{-2mm} {\mathcal Z}_f^{-1}( x_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu)) \, \Define \, \sqrt{\nu_p} \int_{0}^{\tau_p} x_{_{\mbox{\footnotesize{dd}}}}(\tau,f) \, e^{-j 2 \pi f \tau} d\tau.
\end{eqnarray}\normalsize}
\hspace{-3mm}
The left part of Fig.~\ref{fig_4} shows that $X(f)$ is also a pulsone, comprising a pulse train which extends over a bandwidth $B$. Each pulse in the train is spread over a frequency interval $1/T$. Consecutive pulses are separated by the Doppler period $\nu_p$. The pulses are located at frequencies $f = m \nu_p + \nu_0$, $m \in {\mathbb Z}$, where $\nu_0$ is the Doppler coordinate of the underlying DD domain pulse.
The pulse train is modulated by a FD sinusoid $e^{-j 2 \pi \tau_0 f }$ where $\tau_0$ is the delay coordinate of the underlying DD domain pulse.
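As an illustrative aside, a TD pulsone can be synthesized directly from this description and its FD realization checked with an FFT. In the following Python/NumPy sketch (all numerical values are assumed for illustration), a train of sinc pulses of width $1/B$, spaced $\tau_p$ apart and offset by $\tau_0$, is modulated by the tone $e^{j 2 \pi \nu_0 (t-\tau_0)}$; the magnitude spectrum then shows pulses at $f = \nu_0 + m \nu_p$, spaced $\nu_p$ apart.
\begin{verbatim}
import numpy as np

B, T = 8e3, 0.25                 # assumed bandwidth (Hz) and time duration (s)
tau_p = 5e-3                     # assumed delay period (s); nu_p = 1/tau_p = 200 Hz
nu_p = 1 / tau_p
tau0, nu0 = 1.2e-3, 60.0         # assumed DD location of the pulse

fs = 4 * B
t = np.arange(0, T, 1 / fs)

# pulse train: sinc pulses of width ~1/B located at t = tau0 + n tau_p
train = sum(np.sinc(B * (t - tau0 - n * tau_p)) for n in range(int(round(T / tau_p))))
pulsone = train * np.exp(2j * np.pi * nu0 * (t - tau0))   # modulate by the TD tone

# FD realization: pulses at f = nu0 + m nu_p, spread over a bandwidth of about B
X = np.fft.fftshift(np.fft.fft(pulsone))
f = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))
f_peak = f[np.argmax(np.abs(X))]
print((f_peak - nu0) / nu_p)     # close to an integer: the peak sits at nu0 + m nu_p
\end{verbatim}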
Fig.~\ref{fig_5a} depicts the effect of shifting the location of the DD domain pulse on the structure of the corresponding TD and FD pulsones. In a nutshell, a shift of the DD domain pulse along the delay axis translates to a time displacement of the TD pulsone, and a shift of the DD domain pulse along the Doppler axis translates to a frequency displacement of the FD pulsone.
Fig.~\ref{fig_5b} and Fig.~\ref{fig_5c} depict the effect of increasing the width of the DD domain pulse on the structure of the corresponding TD and FD pulsones. In a nutshell, Fig.~\ref{fig_5b} shows that an increase of the width of the DD domain pulse along the delay axis translates to a reduction of the FD pulsone bandwidth, and Fig.~\ref{fig_5c} shows that an increase of the width of the DD domain pulse along the Doppler axis translates to a reduction of the TD pulsone time duration.
\begin{figure*}[!h]
\centering
\includegraphics[width=15cm, height=7.0cm]{Fig5.jpg}
\caption{{Impact of DD domain pulse location on the TD/FD pulsone characteristics. The figure consists of six plots organized into three rows and two columns (one column for TD and the other for FD). The TD plot in the second row of the figure shows that a shift in the delay axis location of a DD domain pulse from $\tau_0$ to $\tau_0'$ translates to a time displacement of the TD pulsone by $(\tau_0' - \tau_0)$ seconds. The effect of this shift on the corresponding FD pulsone (shown on the right in the same row) is that the modulating FD tone changes from $e^{-j 2 \pi \tau_0 f}$ to $e^{-j 2 \pi \tau_0' f}$ (only the real part of the TD/FD pulsones is plotted). The FD plot in the third row of the figure shows that a shift in the Doppler axis location of a DD domain pulse from $\nu_0$ to $\nu_0'$ translates to a frequency displacement of the FD pulsone by $(\nu_0' - \nu_0)$ Hz. The effect of this shift on the corresponding TD pulsone (shown on the left in the same row) is that the modulating TD tone changes from $e^{j 2 \pi \nu_0 (t - \tau_0)}$ to $e^{j 2 \pi \nu_0' (t - \tau_0)}.$}}
\label{fig_5a}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=14cm, height=6.0cm]{Fig6.jpg}
\caption{{Impact of DD domain pulse width along delay axis on the TD/FD pulsone characteristic. Increasing the width of the DD domain pulse along the delay axis from $\frac{1}{B}$ to $\frac{1}{B'}$ translates to a reduction of the FD pulsone bandwidth from $B$ to $B'$ and an increase in the width of each TD pulse in the corresponding TD pulsone from $\frac{1}{B}$ to $\frac{1}{B'}$.}}
\label{fig_5b}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=14cm, height=6.0cm]{Fig7.jpg}
\caption{{Impact of DD domain pulse width along the Doppler axis on TD/FD pulsone characteristic. Increasing the width of the DD domain pulse along the Doppler axis from $\frac{1}{T}$ to $\frac{1}{T'} $ translates to a reduction of the TD pulsone time duration from $T$ to $T'$ and an increase in the width of each FD pulse in the corresponding FD pulsone from $\frac{1}{T}$ to $\frac{1}{T'}$.}}
\label{fig_5c}
\end{figure*}
We now derive global properties of the OTFS modulation
from the local properties described above.
\underline{Orthogonality of pulsones:}
{From Fig.~\ref{fig_4} we see that
a DD domain pulse is localized within a rectangular DD domain region which
is $1/B$ seconds wide along the delay axis and $1/T$ Hz wide along the Doppler axis.
Geometrically, this implies almost no overlap between two DD pulses whose delay domain locations differ by $1/B$ or whose Doppler domain locations differ by $1/T$, i.e., such pulses
are almost orthogonal.}
{The same observation can be made by separately observing the TD and FD realizations of a DD pulse.
We know from Fig.~\ref{fig_5a} that shifting the location of the DD domain pulse by $1/B$ along the delay axis induces a time displacement of the TD pulsone by $1/B$ seconds. From Fig.~\ref{fig_4}, we also know that each pulse in the TD pulsone has time duration $1/B$ and that consecutive pulses are separated by $\tau_p$. When a DD domain pulse is shifted by $1/B$ along the delay axis, the corresponding TD pulsones do not overlap.
Similarly, when a DD domain pulse is shifted by $1/T$ Hz along the Doppler axis, the corresponding FD pulsones do not overlap. Delay shifts by integer multiples of $1/B$ and Doppler shifts by integer multiples of $1/T$ lead to TD/FD
pulsones that are almost orthogonal. Here, the impact of delay and Doppler shifts on the orthogonality between pulsones is understood \emph{separately} through
their TD and FD realizations, respectively. The geometric interpretation of
orthogonality in the previous paragraph is however much \emph{simpler} since we
view the pulsone as a pulse in the DD domain.}
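This near-orthogonality can be checked numerically with the same construction used above. The following Python/NumPy sketch is a minimal illustration (all parameter values are assumed): pulsones whose DD locations differ by $1/B$ in delay or by $1/T$ in Doppler have normalized inner products close to zero.
\begin{verbatim}
import numpy as np

B, T, tau_p = 8e3, 0.25, 5e-3        # assumed values; nu_p = 1/tau_p = 200 Hz
fs = 4 * B
t = np.arange(0, T, 1 / fs)

def pulsone(tau0, nu0):
    """TD pulsone at DD location (tau0, nu0): sinc pulse train modulated by a tone."""
    train = sum(np.sinc(B * (t - tau0 - n * tau_p))
                for n in range(int(round(T / tau_p))))
    return train * np.exp(2j * np.pi * nu0 * (t - tau0))

def corr(a, b):
    return np.abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

p = pulsone(1.0e-3, 60.0)
print(corr(p, pulsone(1.0e-3 + 1 / B, 60.0)))    # delay offset 1/B:   close to 0
print(corr(p, pulsone(1.0e-3, 60.0 + 1 / T)))    # Doppler offset 1/T: close to 0
print(corr(p, p))                                 # same DD location:   1.0
\end{verbatim}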
\underline{Optimality as time- and bandlimited signals:}
Slepian, Landau and Pollak \cite{Slepian62, Landau62}
introduced the family of prolate spheroidal wave functions
to characterize the space of essentially time- and bandlimited
signals. For signals limited approximately to a time duration of $T$ seconds and bandwidth $B$ Hz, they showed that the number of orthogonal carrier waveforms is approximately
equal to the time-bandwidth product $B T$. We use pulsones
with time-duration $T$ and bandwidth $B$ to span this same space. Recall that pulsones are almost orthogonal if the location of the corresponding DD domain pulses differ by integer multiples
of $1/B$ along the delay axis and by integer multiples of
$1/T$ along the Doppler axis. The number of approximately
orthogonal carrier waveforms is essentially $\frac{\tau_p \nu_p}{\frac{1}{B} \frac{1}{T}} = BT$, which is the time-bandwidth product.
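For example, with $B = 10$ MHz and $T = 1$ ms, there are approximately $BT = 10^{4}$ such almost orthogonal pulsones spanning the space of signals essentially limited to this time duration and bandwidth.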
\underline{TDM as a limiting case:}
In the TD pulsone shown in Fig.~\ref{fig_4}, as $\tau_p \rightarrow \infty$,
the TD pulses located at $t = \tau_0 + n \tau_p, n \in {\mathbb Z}, n \ne 0,$
move towards $\pm \infty$ and only the TD pulse at $t = \tau_0$ remains, i.e.,
the TD pulsone approaches a single TD pulse which is the TDM carrier. In other words, as the Doppler domain collapses, the OTFS carrier tends to the TDM carrier.
\underline{FDM as a limiting case:}
In the FD pulsone, as $\nu_p \rightarrow \infty$ (i.e., $\tau_p = (1/\nu_p) \rightarrow 0$), the pulses located at $f = \nu_0 + m \nu_p, m \in {\mathbb Z}, m \ne 0,$ move towards $\pm \infty$ and only the FD pulse at $f=\nu_0$ remains, i.e., the FD pulsone approaches a single FD pulse which is the FDM carrier.
In other words,
as the delay domain collapses, the OTFS carrier tends to the FDM carrier.
\emph{OTFS is therefore a family of modulations parameterized by $\tau_p$ that interpolates between TDM and FDM} (see the hyperbola $\tau_p \, \nu_p = 1$ in the top right corner of Fig.~\ref{fig_4}).\\
\underline{TD pulsones encode wireless channel dynamics:}
TD pulsones are engineered to mirror the dynamics of the wireless channel. The effect of channel path delay on a DD domain pulse is to simply shift the pulse along the delay axis by an amount equal to the path delay. The effect of a channel path Doppler shift on a DD domain pulse is to simply
shift the pulse along the Doppler axis by an amount equal to the path Doppler shift.
\underline{The Fourier transform as a composition:}
Fig.~\ref{fig_4} illustrates that we can map a TD signal to
its FD realization by first applying the Zak transform ${\mathcal Z}_t$ from the TD to the DD domain, then applying
the inverse frequency Zak transform ${\mathcal Z}_f^{-1}$ from the
DD domain to the FD. In other words, the Fourier transform
is the composition of ${\mathcal Z}_t$ and ${\mathcal Z}_f^{-1}$. Fig.~\ref{fig_6} represents the three signal representations (TD, FD and DD domain) as the three vertices of a triangle, and labels edges between vertices by the transforms between signal representations.
\begin{figure}[!h]
\centering
\includegraphics[width=9cm, height=6.0cm]{Fig8.jpg}
\vspace{-1mm}
\caption{Three different basic signal realizations, TD, FD and DD domains. Signal representations in these
domains are related through transforms. The well known Fourier transform is in fact a composition of the Zak transform ${\mathcal Z}_t$ and the inverse Zak transform ${\mathcal Z}_f^{-1}$ from DD domain to FD.}
\label{fig_6}
\end{figure}
\section{Interaction of Channel Paths and Carrier Waveforms}
\label{secChannelInteraction}
In this section, we use the example shown in Fig.~\ref{fig_7} to illustrate how a doubly-spread wireless channel interacts with the carrier waveforms for TDM, FDM and OTFS. {The doubly-spread wireless channel considered in Fig.~\ref{fig_7} comprises four paths. The first and the third paths are due to reflection from stationary buildings and therefore do not result in any Doppler shift. The second and the fourth paths are due to reflection from moving vehicles and therefore induce Doppler shifts. To highlight the
phenomenon of fading and non-predictability of channel interaction in TDM, we consider the first and the second path to have the same delay so that the TD pulses received along these two paths superimpose in a time-varying manner (due to the Doppler shift of the second path). Similarly, choosing the first and the third path to have the same Doppler shift (i.e., zero) but different delays highlights the phenomenon of fading and non-predictability of channel interaction in FDM. However,
since any two paths differ in either delay or Doppler shift, in OTFS there is no superposition of DD domain pulses received along distinct paths. For this reason, for suitable values of $(\tau_p,\nu_p)$, the interaction of a DD pulse with the channel is non-fading and predictable. The exact values of the path gains, delays and Doppler shifts are listed inside the parameter box in Fig.~\ref{fig_7}.}
\subsection{Interaction with a TD pulse}
\label{secChannelInteractionTD}
\begin{figure*}[!h]
\centering
\includegraphics[width=16cm, height=8.0cm]{Fig9.jpg}
\caption{Interaction of a doubly-spread channel with TDM/FDM carrier waveform. Two TD pulses are transmitted at $t=t_0$ and $t= t_1$. Due to the first and the second path (which have the same propagation delay), pulses are received at $t = t_0 + 2 \mu s$ and $t = t_1 + 2 \mu s$. Due to the different Doppler shifts of these two paths, the magnitude and phase of these received pulses are dependent on the time instances $t_0$ and $t_1$ in a non-simple manner. Therefore, the magnitude and phase
of the pulse received at $t= t_1 + 2 \mu s$ \emph{cannot} be simply predicted given the knowledge of the pulse received at $t = t_0 + 2 \mu s$. As the magnitude and phase of these two received pulses are different, this interaction is \emph{non-predictable, fading and non-stationary}. Similarly,
two FD pulses are transmitted at $f=f_0$ and $f=f_1$. Due to the first and the third path which induce the same zero Doppler shift and have different path delays, the magnitude and phase of the received FD pulses along these two paths depend on the FD location of the transmitted FD pulse. This dependence is non-simple and therefore the magnitude and phase of the FD pulse received at $f=f_1$ cannot be predicted based on the knowledge of the magnitude and phase of the FD pulse received at $f=f_0$. The dependence on the FD pulse location is due to the non-zero path delays of these two paths, which results
in a \emph{non-predictable, fading and non-stationary} interaction between the channel and the FD pulse. The \emph{non-predictable, fading and non-stationary} interaction between the TD/FD pulse and the channel is not because of the fixed underlying channel (i.e., path gains, delays and Doppler shifts), but is because these pulses are not simultaneously localized in TD and FD.}
\label{fig_7}
\end{figure*}
Fig.~\ref{fig_7} illustrates the TD response to a transmitted signal
\begin{eqnarray}
\label{eqn9}
x(t) & = & \delta(t - t_0) \, + \, \delta(t - t_1)
\end{eqnarray}
comprising two narrow TD pulses\footnote{\footnotesize{We represent the narrow pulses by Dirac-delta impulses since the width of a pulse is much smaller than the channel path delays and the inverse of the maximum Doppler shift.}}
transmitted at $t=t_0 = 1$ ms and
$t=t_1 = 1.5$ ms. We consider a stationary doubly-spread channel
comprising four channel paths with
parameters $h_i, \tau_i, \nu_i$ listed in the channel parameter box.
We let $h(t; t_i), i=0,1$ denote the TD impulse response of the channel to a TD pulse transmitted at time $t_i$. Then
\begin{eqnarray}
\label{eqn13}
h(t;t_0) & \hspace{-2mm} = & \hspace{-2mm} (h_1 + h_2 e^{j 2 \pi \nu_2 t_0}) \delta(t - 2 \mu s) \nonumber \\
& & + \, h_3 \delta(t - 3 \mu s) \, + \, h_4 e^{j 2 \pi \nu_4 t_0}\delta(t - 4 \mu s),
\end{eqnarray}
and
\begin{eqnarray}
\label{eqn14}
h(t;t_1) & \hspace{-2mm} = & \hspace{-2mm} (h_1 + h_2 e^{j 2 \pi \nu_2 t_1}) \delta(t - 2 \mu s) \nonumber \\
& & + \, h_3 \delta(t - 3 \mu s) \, + \, h_4 e^{j 2 \pi \nu_4 t_1}\delta(t - 4 \mu s).
\end{eqnarray}
We look to predict the received TD signal
$y_1(t) = h(t - t_1 ; t_1)$
from the previous signal $y_0(t) = h(t - t_0 ; t_0)$.
After estimating the path delays from $y_0(t)$, we can predict that $y_1(t)$ will involve path delays of
$2 \, \mu s$, $3 \, \mu s$ and $4 \, \mu s$.
{$\boldsymbol{h_3 \delta(t - 3 \, \mu s)}$}: This term due to the third channel path is
common to $h(t ; t_0)$ and $h(t ; t_1)$.
\emph{Channel paths which do not induce Doppler shift and whose delay is distinct from that of other paths result in a predictable channel interaction.} The complex gain $h_3$ in this term does not depend on when the pulse was transmitted, so the interaction of the pulse with this
path is stationary and non-fading (received pulse power is independent of when the pulse was transmitted). The response of such a channel path to any transmitted pulse can therefore be \emph{predicted}.
{$\boldsymbol{h_4 e^{j 2 \pi \nu_4 t_0} \delta(t - 4 \, \mu s)}$ and $\boldsymbol{h_4 e^{j 2 \pi \nu_4 t_1} \delta(t - 4 \, \mu s)}$}:
These terms are due to the fourth channel path, whose Doppler shift $\nu_4$ interacts with the time at which the pulse was transmitted. As it is not possible to simultaneously estimate $h_4$ and $\nu_4$ from $h_4 e^{j 2 \pi \nu_4 t_0}$, we cannot predict the complex gain $h_4 e^{j 2 \pi \nu_4 t_1}$
of the corresponding term in $h(t; t_1)$.
The received signal power is time-independent since $\vert h_4 e^{j 2 \pi \nu_4 t_i} \vert^2 = \vert h_4 \vert^2$ but the phase is time-dependent.
\emph{Channel paths, which induce Doppler shift and whose delay is different from that of other paths, result in a non-predictable channel interaction which is non-stationary and non-fading.}
${\boldsymbol{\left(h_1 + h_2 e^{j 2 \pi \nu_2 t_0} \right) \delta(t - 2 \, \mu s)}}$ and ${\boldsymbol{\left(h_1 + h_2 e^{j 2 \pi \nu_2 t_1} \right) \delta(t - 2 \, \mu s)}}$: The first and second paths have the same delay, but only the second path introduces a Doppler shift. It is not possible to predict
${\left(h_1 + h_2 e^{j 2 \pi \nu_2 t_1} \right) \delta(t - 2 \, \mu s)}$ since it is not possible to separately estimate $h_1$, $h_2$ and $\nu_2$ from ${\left(h_1 + h_2 e^{j 2 \pi \nu_2 t_0} \right) \delta(t - 2 \, \mu s)}$. Note that in this case both the complex gain and received signal power are time-dependent (i.e., they depend on the time at which the pulse is transmitted).
\emph{Two or more channel paths which have the same path delay and at least one of which induces a Doppler shift which is different from that induced by the other paths, result in
non-predictable channel interaction which is fading and non-stationary}.
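As an illustrative aside, the fading of the equal-delay term and the constant magnitude of the isolated-path term are easy to visualize numerically. The following Python/NumPy sketch sweeps the transmit time $t_0$; the gains and Doppler shifts are assumed example values and are not those in the parameter box of Fig.~\ref{fig_7}.
\begin{verbatim}
import numpy as np

# assumed example values, for illustration only
h1, h2, h4 = 0.8, 0.6 + 0.2j, 0.5
nu2, nu4 = 800.0, -400.0               # Doppler shifts in Hz

t0 = np.linspace(0, 5e-3, 1001)        # transmit times over 5 ms

gain_2us = np.abs(h1 + h2 * np.exp(2j * np.pi * nu2 * t0))  # equal-delay paths: fades with t0
gain_4us = np.abs(h4 * np.exp(2j * np.pi * nu4 * t0))       # isolated path: constant |h4|

print(gain_2us.min(), gain_2us.max())  # approximately |h1| - |h2| and |h1| + |h2|
print(gain_4us.min(), gain_4us.max())  # both equal |h4| = 0.5
\end{verbatim}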
\subsection{Interaction with a FD pulse}
\label{secChannelInteractionFD}
We consider the same stationary doubly-spread channel
illustrated in Fig.~\ref{fig_7}. Here we consider
the FD response to a transmitted FD signal given by
\begin{eqnarray}
\label{eqn19}
X(f) & = & \delta(f - f_0) \, + \, \delta(f - f_1)
\end{eqnarray}
comprising two FD pulses transmitted at $f = f_0 = 15$ kHz
and $f = f_1 = 450$ kHz.
We let $H(f; f_i)$ denote the FD impulse response of the channel to a FD impulse transmitted at frequency $f_i$. Then
\begin{eqnarray}
\label{eqn20}
H(f; f_0) & \hspace{-2mm} = & \hspace{-2mm} {\Big [} h_1 \, e^{- j 2 \pi f_0 \tau_1} \, + \, h_3 \, e^{- j 2 \pi f_0 \tau_3} {\Big ]} \delta(f) \nonumber \\
& & + \, h_2 \, e^{-j 2 \pi (f_0 + \nu_2) \tau_2 } \, \delta(f + 950) \nonumber \\
& & + \, h_4 \, e^{-j 2 \pi (f_0 + \nu_4) \tau_4 } \, \delta(f - 750)
\end{eqnarray}
and
\begin{eqnarray}
\label{eqn21}
H(f; f_1) & \hspace{-2mm} = & \hspace{-2mm} {\Big [} h_1 \, e^{- j 2 \pi f_1 \tau_1} \, + \, h_3 \, e^{- j 2 \pi f_1 \tau_3} {\Big ]} \delta(f) \nonumber \\
& & + \, h_2 \, e^{-j 2 \pi (f_1 + \nu_2) \tau_2 } \, \delta(f + 950) \nonumber \\
& & + \, h_4 \, e^{-j 2 \pi (f_1 + \nu_4) \tau_4 } \, \delta(f - 750).
\end{eqnarray}
Again we look to predict the received FD signal $Y_1(f) = H(f - f_1; f_1)$ due to a pulse transmitted at $f=f_1$ from the received signal $Y_0(f) = H(f - f_0; f_0)$ due to the pulse transmitted at $f=f_0$.
Since the underlying channel is stationary, we can accurately predict that $H(f; f_1)$ will comprise impulses at $0$ Hz, $-950$ Hz and $750$ Hz.
{$\boldsymbol{h_2 \, e^{- j 2 \pi (f_0 + \nu_2) \tau_2} \, \delta(f + 950 \, Hz)}$}: The second path interacts with the FD pulse transmitted at $f=f_0$ resulting in a received pulse at $f= f_0 + \nu_2$ having complex gain $h_2 \, e^{- j 2 \pi (f_0 + \nu_2) \tau_2}$. From this complex gain we cannot predict the complex gain $h_2 \, e^{- j 2 \pi (f_1 + \nu_2) \tau_2}$ of the corresponding term in $Y_1(f)$
since it is not possible to estimate $h_2$ and $\tau_2$
from $h_2 \, e^{- j 2 \pi (f_0 + \nu_2) \tau_2}$. Since there is no other path with the same Doppler shift, the magnitude of the
received FD pulse is $\vert h_2 \, e^{-j 2 \pi (f_i + \nu_2) \tau_2 } \vert = \vert h_2 \vert$. There is no FD fading
since this magnitude does not depend on the FD location of the transmitted pulse. The same analysis applies to the interaction
of the $4$th channel path with the FD pulses. \emph{Channel paths whose Doppler shift is distinct from that of other paths result in a non-predictable, non-fading and non-stationary
interaction between the channel and the FD pulse}.
{$\boldsymbol{\left( h_1 e^{- j 2 \pi f_0 \tau_1 } \, + \, h_3 e^{- j 2 \pi f_0 \tau_3 } \right) \delta(f)}$}: The $1$st and $3$rd paths do not induce any Doppler shift and have different path delays. The interaction of these two paths with the pulse transmitted at $f=f_0$ results in a received pulse at $f=f_0$ having complex gain $\left( h_1 e^{- j 2 \pi f_0 \tau_1 } \, + \, h_3 e^{- j 2 \pi f_0 \tau_3 } \right)$. From this complex gain, it is not possible to predict the complex gain $\left( h_1 e^{- j 2 \pi f_1 \tau_1 } \, + \, h_3 e^{- j 2 \pi f_1 \tau_3 }\right)$ of the pulse received due to the interaction of the 1st and 3rd paths with the pulse transmitted at $f=f_1$.
This is because it is not possible to separately estimate $h_1, h_3, \tau_1$ and $\tau_3$ from $\left( h_1 e^{- j 2 \pi f_0 \tau_1 } \, + \, h_3 e^{- j 2 \pi f_0 \tau_3 }\right)$.
This interaction is non-stationary and fading. \emph{Two or more channel paths which induce the same Doppler shift, and at least one of which has a path delay which is different from that of the other paths result in a non-predictable
channel interaction which is fading and non-stationary.}
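The dual behaviour in the FD can be visualized in the same way. The following Python/NumPy sketch (assumed example gains and delays, for illustration only) sweeps the FD pulse location $f$: the equal-Doppler term fades with $f$, while a path with a distinct Doppler shift keeps a constant magnitude.
\begin{verbatim}
import numpy as np

# assumed example values, for illustration only
h1, h2, h3 = 0.8, 0.6, 0.5
tau1, tau2, tau3 = 2e-6, 2.5e-6, 3e-6      # path delays (s)
nu2 = -950.0                                # Doppler shift of the isolated path (Hz)

f = np.linspace(0, 2e6, 2001)               # FD pulse locations over 2 MHz

gain_zero_doppler = np.abs(h1 * np.exp(-2j*np.pi*f*tau1) + h3 * np.exp(-2j*np.pi*f*tau3))
gain_nu2 = np.abs(h2 * np.exp(-2j*np.pi*(f + nu2)*tau2))

print(gain_zero_doppler.min(), gain_zero_doppler.max())  # about |h1 - h3| and |h1 + h3|
print(gain_nu2.min(), gain_nu2.max())                    # both equal |h2| = 0.6
\end{verbatim}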
\subsection{Interaction with a DD domain pulse}
\label{secChannelInteractionDD}
\begin{figure*}[!h]
\centering
\includegraphics[width=16cm, height=8.0cm]{Fig10.jpg}
\caption{The transmit DD pulse at $(\tau_a, \nu_a)$ is ``effectively'' localized along both delay and Doppler domain, and therefore DD pulses are received along each path at distinct DD locations $(\tau_a + \tau_i, \nu_a + \nu_i), i=1,2,3,4$ which allows for accurate estimation of the channel DD spreading function $h(\tau,\nu)$ (see (\ref{eqn1})). Due to quasi-periodicity, the transmit DD pulse is repeated along the delay and Doppler domain with period $\tau_p$ and $\nu_p$, respectively. These repetitions also interact with the channel resulting in received DD pulses. If $\tau_p$ is less than the channel delay spread and/or if $\nu_p$ is less than the channel Doppler spread (i.e., $(\tau_p, \nu_p)$ does not satisfy (\ref{eqn28})), then the received DD pulses corresponding to the quasi-periodic pulse repetition could overlap/alias with the pulses received due to the DD pulse transmitted at $(\tau_a, \nu_a) \in {\mathcal D}_0$, thereby making channel prediction difficult.
However, if $(\tau_p, \nu_p)$ is properly chosen to satisfy (\ref{eqn28}) then the channel response
to a DD pulse transmitted at some other DD location (e.g., $(\tau_b, \nu_b)$) can be predicted from perfect knowledge of the channel response to the DD pulse transmitted at $(\tau_a, \nu_a)$. Also, when $(\tau_p,\nu_p)$ satisfies (\ref{eqn28}), the energy of the received DD domain pulses is independent of the location of the transmitted DD pulse. Therefore, the interaction of a DD pulse with the channel is \emph{predictable and non-fading} if (\ref{eqn28}) is satisfied.}
\label{fig_8}
\end{figure*}
We continue to study the stationary doubly-spread channel illustrated in Fig.~\ref{fig_7}.
In this section, we analyze the interaction of this channel with two DD domain pulses
transmitted at $(\tau,\nu) = (\tau_a, \nu_a)$ and $(\tau,\nu) = (\tau_b, \nu_b)$, $0 \leq \tau_a, \tau_b < \tau_p$, $0 \leq \nu_a, \nu_b < \nu_p$. From (\ref{eqn4}) it follows that the
corresponding quasi-periodic DD domain signal is given by
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{eqn22}
x_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu) & \hspace{-2mm} = & \hspace{-2mm} \sum\limits_{m \in {\mathbb Z}}\sum\limits_{n \in {\mathbb Z}} e^{j 2 \pi n \nu \tau_p } {\Big [} \delta(\tau - n \tau_p - \tau_a) \delta (\nu - m \nu_p - \nu_a) \nonumber \\
& & \,\,\,\,\,\,\,\, + \, \delta(\tau - n \tau_p - \tau_b) \delta (\nu - m \nu_p - \nu_b) {\Big ]}.
\end{eqnarray}
\normalsize}
We obtain the transmitted TD signal $x(t)$ by applying the inverse Zak transform
${\mathcal Z}_t^{-1}$ as in (\ref{eqn6}).
\begin{eqnarray}
\label{eqn23}
x(t) & = & \sqrt{\tau_p} \int_{0}^{\nu_p} x_{_{\mbox{\footnotesize{dd}}}}(t, \nu) \, d\nu \nonumber \\
& = & \sqrt{\tau_p} \sum\limits_{n \in {\mathbb Z}} e^{j 2 \pi n \nu_a \tau_p}
\, \delta(t - \tau_a - n \tau_p) \nonumber \\
& & \, + \, \sqrt{\tau_p} \sum\limits_{n \in {\mathbb Z}} e^{j 2 \pi n \nu_b \tau_p}
\, \delta(t - \tau_b - n \tau_p).
\end{eqnarray}
We obtain the received TD signal $y(t)$ by substituting (\ref{eqn23}) in (\ref{eqn3}).
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{eqn24}
y(t) & \hspace{-3mm} = & \hspace{-3mm} \sqrt{\tau_p} \sum\limits_{i=1}^{4} h_i {\Big [} e^{j 2 \pi \nu_i \tau_a} \sum\limits_{n \in {\mathbb Z}} e^{j2 \pi n (\nu_a + \nu_i) \tau_p} \delta(t - (\tau_a + \tau_i) - n \tau_p) \nonumber \\
& & \,\,\,\, + \, e^{j 2 \pi \nu_i \tau_b} \sum\limits_{n \in {\mathbb Z}} e^{j2 \pi n (\nu_b + \nu_i) \tau_p} \delta(t - (\tau_b + \tau_i) - n \tau_p) {\Big ]} .
\end{eqnarray}
\normalsize}
We obtain $y_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu)$, the DD domain representation of
$y(t)$ by applying the Zak transform ${\mathcal Z}_t$ as in (\ref{eqn7}). We write
\begin{eqnarray}
\label{eqn25}
y_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu) & = & y_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu ; \tau_a, \nu_a) \, + \, y_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu ; \tau_b, \nu_b),
\end{eqnarray}
where
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{eqn26}
y_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu ; \tau_a, \nu_a) & \hspace{-3mm} = & \hspace{-3mm} \sum\limits_{i=1}^4 h_i e^{j 2 \pi \nu_i \tau_a} {\Big [} \hspace{-2mm} \sum\limits_{m,n \in {\mathbb Z}} \hspace{-2mm} e^{j 2 \pi n \nu \tau_p} \delta(\tau - \tau_a - \tau_i - n \tau_p) \nonumber \\
& & \hspace{19mm} \delta(\nu - \nu_a - \nu_i - m \nu_p) {\Big ]}
\end{eqnarray}\normalsize}
\hspace{-2.5mm}
{is the response associated with the pulse transmitted at $(\tau_a, \nu_a)$, referred to as the $a$-response, and}
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{eqn27}
y_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu ; \tau_b, \nu_b) & \hspace{-3mm} = & \hspace{-3mm} \sum\limits_{i=1}^4 h_i e^{j 2 \pi \nu_i \tau_b} {\Big [} \hspace{-2mm} \sum\limits_{m,n \in {\mathbb Z}} \hspace{-2mm} e^{j 2 \pi n \nu \tau_p} \delta(\tau - \tau_b - \tau_i - n \tau_p) \nonumber \\
& & \hspace{19mm} \delta(\nu - \nu_b - \nu_i - m \nu_p) {\Big ]}
\end{eqnarray}\normalsize}
\hspace{-2.5mm}
{is the response associated with the pulse transmitted at $(\tau_b, \nu_b)$, referred to as the $b$-response.}
Recall from Section \ref{secDDCarrier} that a channel path delay shifts the DD domain pulse
along the delay axis, and that a channel path Doppler shift translates the DD domain pulse
along the Doppler axis.
{From (\ref{eqn26}), we see that
the $a$-response includes four distinct pulses received at $(\tau_a + \tau_i, \nu_a + \nu_i)$, and from (\ref{eqn27}), we see that the $b$-response includes four distinct pulses received at $(\tau_b + \tau_i, \nu_b + \nu_i)$, where $i=1,2,3,4$.} Fig.~\ref{fig_8} illustrates that the received pulse along each path
is observed \emph{separately} in the DD domain.
Again we look to predict the received DD domain signal $y_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu ; \tau_b, \nu_b)$ from the signal $y_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu ; \tau_a, \nu_a)$.\\
\underline{The channel interaction is predictable}:
{\emph{The channel interaction is predictable when the delay period $\tau_p$ is greater than the channel delay spread and the Doppler period $\nu_p$ is greater than the channel Doppler spread,}} i.e.,
\begin{eqnarray}
\label{eqn28}
\tau_p & > & (\max_i \tau_i \, - \, \min_i \tau_i), \nonumber \\
\mbox{\small{and}} \,\,\,\, \nu_p & > & (\max_i \nu_i \, - \, \min_i \nu_i). \end{eqnarray}
{Condition (\ref{eqn28}) is called the \emph{crystallization condition}. In our example, when the crystallization condition holds, the $b$-response can be predicted from the $a$-response.
Fig.~\ref{fig_8} illustrates this fact. It shows that the complex gain of the pulse received along the $i$th path is $h_i e^{j 2 \pi \nu_i \tau_a}$ for the $a$-response, and $h_i e^{j 2 \pi \nu_i \tau_b}$ for the $b$-response. The latter can be predicted from the former via}
\begin{eqnarray}
\label{eqn28p5}
\hspace{-4mm}
h_i e^{j 2 \pi \nu_i \tau_b} & \hspace{-2mm} = & \hspace{-2mm} h_i e^{j 2 \pi \nu_i \tau_a} \, e^{j 2 \pi \nu_i (\tau_b - \tau_a)}, \ \ i=1,2,3,4.
\end{eqnarray}
{The predictive relation (\ref{eqn28p5}) breaks down when responses
associated with replicas outside the fundamental period overlap with responses associated with the pulse inside the fundamental period. This phenomenon is referred to as DD domain aliasing.
The point is that when the crystallization condition holds, DD domain aliasing is precluded. Here is an example of an aliasing situation: say $\tau_p = \tau_4 - \tau_1$ and $\nu_p = \nu_4 - \nu_1$. In this situation, the responses associated with the pulse transmitted at $(\tau_a,\nu_a)$ and its replica at $(\tau_a-\tau_p, \nu_a - \nu_p)$ are both received at the same location $(\tau_a + \tau_1, \nu_a + \nu_1)$
along the $1$st and the $4$th channel paths, respectively.
At this location, the complex gain of the $a$-response is
$\left( h_1 e^{j 2 \pi \nu_1 \tau_a} + h_4 \, e^{j 2 \pi \nu_4 \tau_a} \, e^{-j 2 \pi (\nu_a + \nu_1)(\tau_4 - \tau_1)}\right)$,
from which it is difficult to separately estimate the terms corresponding to $h_1$ and $h_4$.
Therefore, the complex gain $\left( h_1 e^{j 2 \pi \nu_1 \tau_b} + h_4 e^{j 2 \pi \nu_4 \tau_b} e^{-j 2 \pi (\nu_b + \nu_1)(\tau_4 - \tau_1)}\right)$ of the received pulse at $(\tau_b + \tau_1, \nu_b + \nu_1)$ in
the $b$-response cannot be simply predicted from the complex gain of the received pulse at $(\tau_a + \tau_1, \nu_a + \nu_1)$ in the $a$-response. Fig.~\ref{fig_ddalias} depicts the phenomenon of DD domain aliasing.}
\begin{figure*}[!h]
\centering
\includegraphics[width=16cm, height=8.0cm]{Fig11.jpg}
\caption{
A single DD domain pulse is transmitted in the fundamental DD domain period ${\mathcal D}_0$.
Rectangular DD domain regions containing the received DD domain pulses corresponding to the transmitted DD pulse in ${\mathcal D}_0$ and its quasi-periodic repetitions are depicted as colour filled rectangles.
The size of each such rectangular region is independent of the delay and Doppler domain period, i.e., each has length equal to the channel delay spread $\left( \max_i \tau_i \, - \, \min_i \tau_i \right)$ along the delay axis and equal to the channel Doppler spread $\left( \max_i \nu_i \, - \, \min_i \nu_i \right)$ along the Doppler axis. Two choices of the delay and Doppler domain period are considered, choice A: the delay and Doppler domain period satisfy the condition in (\ref{eqn28}), and
choice B: Doppler domain period is greater than the channel Doppler spread but the delay domain period is \emph{less} than the channel delay spread.
The fundamental DD domain period ${\mathcal D}_0$ is depicted by a blue filled rectangle. For choice A, DD pulses received in ${\mathcal D}_0$ are within the green and the red coloured rectangles which do not overlap and therefore there is no aliasing.
However, in choice B, due to small $\tau_p$ and the invariance of the size of the coloured rectangles, the green rectangle overlaps with the red and the yellow coloured rectangles resulting in delay domain aliasing.
Similarly, there would be aliasing along the Doppler domain if the Doppler domain period is chosen to be smaller than the channel Doppler spread.}
\label{fig_ddalias}
\end{figure*}
Conditions (\ref{eqn28}) preclude aliasing from the replicas of the DD domain pulse
transmitted in ${\mathcal D}_0$.
Since $\tau_p \, \nu_p = 1$, we require
\begin{eqnarray}
\label{eqn31}
(\max_i \tau_i \, - \, \min_i \tau_i) \, \times \, (\max_i \nu_i \, - \, \min_i \nu_i) & < & 1,
\end{eqnarray}
that is, the product of the channel delay spread and the Doppler spread is less than one.
This condition is generally satisfied for most doubly-spread channels of practical interest. For example, in a typical cellular wireless system with channel delay spread $5 \mu s$ and Doppler spread $1000$ Hz, the product of the delay and Doppler spread is only $5 \times 10^{-3}$.\\
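As a numerical illustration, the crystallization condition and the prediction rule (\ref{eqn28p5}) can be exercised directly. In the following Python/NumPy sketch, the channel parameters and the choice of $(\tau_p,\nu_p)$ are assumed example values; the sketch checks (\ref{eqn28}) and (\ref{eqn31}) and then predicts the $b$-response gains from the $a$-response gains.
\begin{verbatim}
import numpy as np

# assumed example channel: delays (s), Doppler shifts (Hz), complex gains
tau = np.array([0.0, 1.0e-6, 3.0e-6, 5.0e-6])
nu  = np.array([0.0, 400.0, -300.0, 800.0])
h   = np.array([1.0, 0.6 - 0.2j, 0.4j, 0.3])

delay_spread = tau.max() - tau.min()             # 5 us
doppler_spread = nu.max() - nu.min()             # 1100 Hz
print(delay_spread * doppler_spread)             # 5.5e-3 < 1, so a valid choice exists

# a choice of periods satisfying the crystallization condition
tau_p = 20e-6                                    # > delay spread
nu_p = 1.0 / tau_p                               # 50 kHz > Doppler spread

# predict the b-response gains from the a-response gains
# (phase rotation by exp(j 2 pi nu_i (tau_b - tau_a)))
tau_a, tau_b = 2e-6, 9e-6                        # DD locations of the two transmitted pulses
gains_a = h * np.exp(2j * np.pi * nu * tau_a)
gains_b_pred = gains_a * np.exp(2j * np.pi * nu * (tau_b - tau_a))
gains_b_true = h * np.exp(2j * np.pi * nu * tau_b)
print(np.allclose(gains_b_pred, gains_b_true))   # True
\end{verbatim}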
\underline{The channel interaction is non-fading}:
{\emph{The channel interaction is non-fading when the crystallization condition (\ref{eqn28}) holds, in the sense that the amplitude of the response does not depend on the location of the transmitted pulse.} Specifically, in our example, when the crystallization condition holds, the power distribution of the $b$-response is equal to the power distribution of the $a$-response. Fig.~\ref{fig_8} illustrates this fact.
It shows that in both distributions the amplitude of the pulse received along the $i$-th path is $\vert h_i \vert$. When the
crystallization condition does not hold, due to aliasing effect,
the power distribution in general depends on the location of the
transmit pulse. We demonstrate this using the aliasing example considered in the predictability discussion. In this example, the
amplitude of the pulse received at $(\tau_a + \tau_1 , \nu_a + \nu_1)$ is $\vert \left( h_1 e^{j 2 \pi \nu_1 \tau_a} + h_4 \, e^{j 2 \pi \nu_4 \tau_a} \, e^{-j 2 \pi (\nu_a + \nu_1)(\tau_4 - \tau_1)}\right) \vert$. It can be verified that this expression
in general depends on the location $(\tau_a, \nu_a)$ of the transmitted pulse.}
\underline{The channel interaction is non-stationary}:
{It should be noted that even when the crystallization condition holds, the actual phases in the response depend on the location of the transmitted pulse. In our example,
the complex gain of the pulse received along the $i$th path is
$h_i e^{j 2 \pi \nu_i \tau_a}$ in the $a$-response and is $h_i e^{j 2 \pi \nu_i \tau_b}$ in the $b$-response. The point is that
non-stationarity is not a major issue as long as predictability
(\ref{eqn28p5}) is maintained.}
{
\begin{figure*}[!h]
\centering
\vspace{-7mm}
\includegraphics[width=19cm, height=5.4cm]{Fig12.jpg}
\caption{Transceiver signal processing.}
\label{fig_8b}
\end{figure*}}
\section{Transceiver Signal Processing}
\label{iorelation}
In this section, we show that OTFS
is similar to TDM and FDM in that it
involves a similar sequence of signal processing operations at the transmitter and receiver.
In TDM and FDM, information is embedded in the TD and the FD, respectively, whereas
in OTFS it is embedded in the DD domain.
Throughout this section, we consider transmitting a packet of $BT$ information symbols $x[k], k=0,1,\cdots, (BT-1)$ using
a TD signal limited to time $T$ and bandwidth $B$.
The information symbols are first converted to a discrete information signal $x_{_{\mathcal M}}[\cdot]$ in the \emph{modulation domain} ${\mathcal M}$.
The subscripts \emph{td}, \emph{fd} and \emph{dd} identify the TD, FD and DD modulation domains.
Within the modulation domain, we then
apply a filter $w_{tx}(\cdot)$ to ensure that the transmit TD information signal $s_{\mbox{\footnotesize{td}}}(t)$ satisfies the time and bandwidth constraints.
The signal $s_{\mbox{\footnotesize{td}}}(t)$ interacts with the doubly-spread channel to provide a received TD signal $r_{\mbox{\footnotesize{td}}}(t)$.
This signal is then converted to the appropriate modulation domain, using the inverse of the transform used at the transmitter, to provide a signal $y_{_{\mathcal M}}(\cdot)$.
Matched filtering at the receiver with the receive filter $w_{rx}(\cdot)$ then optimizes the
signal to noise ratio (SNR) in the post-filtered signal to produce a modulation domain analog signal $y_{_{\mathcal M}}^{w_{rx}}(\cdot)$.
This signal is then sampled to provide the discrete received modulation domain signal $y_{_{\mathcal M}}[\cdot]$ which is processed to recover the information symbols.
These steps are shown in Fig.~\ref{fig_8b}
and are made explicit in Table-\ref{tabTDMFDM} for TDM and FDM. Table-\ref{tab1} lists the modulation domain, transform, and filtering operation of Fig.~\ref{fig_8b} for TDM/FDM/OTFS.
\begin{table*}
\caption{Signal processing steps in TDM and FDM.}
\centering
\begin{tabular}{ | c || c | c | }
\hline
Transceiver operation & TDM & FDM \\
\hline
Generating the discrete &
$\begin{aligned}
x_{\mbox{\footnotesize{td}}}[k] & \Define \begin{cases}
x[k] &, k=0,1,\cdots,(BT-1) \\
0 &, \mbox{\small{otherwise}}
\end{cases}. \nonumber
\end{aligned}$ & $\begin{aligned}
x_{\mbox{\footnotesize{fd}}}[k] & \Define \begin{cases}
x[k] &, k=0,1,\cdots,(BT-1) \\
0 &, \mbox{\small{otherwise}}
\end{cases}. \nonumber
\end{aligned}$ \\
information signal & & \\
\hline
Generating the analog signal & ${\Lambda}_{\mbox{\footnotesize{td}}} = \{ q/B \, | \, q \in {\mathbb Z}\}$ & ${\Lambda}_{\mbox{\footnotesize{fd}}} = \{ q/T \, | \, q \in {\mathbb Z}\}$ \\
on the information grid & $\begin{aligned}
x_{\mbox{\footnotesize{td}}}(t) & = & \sum\limits_{k \in {\mathbb Z}} x_{\mbox{\footnotesize{td}}}[k] \, \delta(t - k/B)
\end{aligned}$
& $\begin{aligned}
x_{\mbox{\footnotesize{fd}}}(f) = \sum\limits_{k \in {\mathbb Z}} x_{\mbox{\footnotesize{fd}}}[k] \, \delta(f - k/T)
\end{aligned}$ \\
\hline
Shaping the pulse &
$\begin{aligned}
x_{\mbox{\footnotesize{td}}}^{w_{tx}}(t) = w_{tx}(t) \, * \, x_{\mbox{\footnotesize{td}}}(t) \nonumber
\end{aligned}$
& $\begin{aligned}
x_{\mbox{\footnotesize{fd}}}^{w_{tx}}(f) = w_{tx}(f) \, * \, x_{\mbox{\footnotesize{fd}}}(f) \nonumber
\end{aligned}$ \\
at the transmitter & & \\
\hline
Converting from the modulation & $s_{\mbox{\footnotesize{td}}}(t) = x_{\mbox{\footnotesize{td}}}^{w_{tx}}(t)$ & $s_{\mbox{\footnotesize{td}}}(t) = \int x_{\mbox{\footnotesize{fd}}}^{w_{tx}}(f) \, e^{j 2 \pi f t} \, df$\\
domain to the time domain & & \\
\hline
Applying the channel $h(\tau,\nu)$ & $ r_{\mbox{\footnotesize{td}}}(t) = \iint h(\tau,\nu) s_{\mbox{\footnotesize{td}}}(t-\tau) \, e^{j 2 \pi \nu (t - \tau)} \, d\tau \, d\nu$ & $ r_{\mbox{\footnotesize{td}}}(t) = \iint h(\tau,\nu) s_{\mbox{\footnotesize{td}}}(t-\tau) \, e^{j 2 \pi \nu (t - \tau)} \, d\tau \, d\nu$ \\
\hline
Converting from the time domain & $y_{\mbox{\footnotesize{td}}}(t) = r_{\mbox{\footnotesize{td}}}(t)$ & $y_{\mbox{\footnotesize{fd}}}(f) = \int r_{\mbox{\footnotesize{td}}}(t) \, e^{-j 2 \pi f t} \, dt$\\
to modulation domain & & \\
\hline
Shaping the pulse & $y_{\mbox{\footnotesize{td}}}^{w_{rx}}(t) = w_{rx}(t) * y_{\mbox{\footnotesize{td}}}(t)$ & {$y_{\mbox{\footnotesize{fd}}}^{w_{rx}}(f) = w_{rx}(f) * y_{\mbox{\footnotesize{fd}}}(f)$} \\
at the receiver & & \\
\hline
Sampling on the & $\begin{aligned}
y_{\mbox{\footnotesize{td}}}[k] = y_{\mbox{\footnotesize{td}}}^{w_{rx}}(t = {k}/{B})\,,\, k \in {\mathbb Z}.
\end{aligned}$
& $\begin{aligned}
y_{\mbox{\footnotesize{fd}}}[k] = y_{\mbox{\footnotesize{fd}}}^{w_{rx}}(f = {k}/{T} )\,,\, k \in {\mathbb Z}.
\end{aligned}$\\
information grid & & \\
\hline
\end{tabular}
\label{tabTDMFDM}
\end{table*}
\begin{table}
\caption{Signal processing operations for TDM/FDM/OTFS in Fig.~\ref{fig_8b}.}
\centering
\begin{tabular}{ | c || c | c | c | }
\hline
& TDM & FDM & OTFS \\
\hline
Modulation & TD & FD & DD \\
domain & & & \\
\hline
Transform & Identity & Fourier & Zak \\
\hline
Filtering & Linear & Linear & Twisted \\
& convolution & convolution & convolution \\
& in TD & in FD & in DD \\
\hline
\end{tabular}
\label{tab1}
\end{table}
\subsection{TDM input-output relation}
\label{secTDMFDMIOrelation}
It follows from Table-\ref{tabTDMFDM} that the output signal
$y_{\mbox{\footnotesize{td}}}[k], k \in {\mathbb Z}$ is obtained from the input signal $x_{\mbox{\footnotesize{td}}}[k], k \in {\mathbb Z}$ by a discrete time convolution
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{eqn38}
y_{\mbox{\footnotesize{td}}}[k'] & = & \sum\limits_{k \in {\mathbb Z}} x_{\mbox{\footnotesize{td}}}[k] \, h_{_{\mbox{\footnotesize{td}}}}[k' - k \, ; \, k], \nonumber \\
& & \hspace{-25mm} \mbox{\small{where, for}} \, n \in {\mathbb Z}, \nonumber \\
h_{_{\mbox{\footnotesize{td}}}}[n \, ; \, k] & \hspace{-3mm} \Define & \hspace{-3mm} \iiint {\Big [} e^{j 2 \pi \nu \frac{k}{B}} h(\tau,\nu) w_{rx}(\tau') e^{j 2 \pi \nu (\frac{n}{B} - \tau - \tau')} \nonumber \\
& & \hspace{6mm} w_{tx}\left(\frac{n}{B} - \tau - \tau'\right) {\Big ]} \, d\tau \, d\nu \, d\tau'.
\end{eqnarray}
\normalsize}
The discrete TD filter $h_{_{\mbox{\footnotesize{td}}}}[ \cdot \, ; \, k]$ represents the effective discrete TD channel response
to the $k$th discrete input symbol $x_{\mbox{\footnotesize{td}}}[k]$.
The dependence of $h_{_{\mbox{\footnotesize{td}}}}[ \cdot \, ; \, k]$ on $k$ is not simple, and knowledge of this filter
$h_{_{\mbox{\footnotesize{td}}}}[ \cdot \, ; \, q]$ for some integer $q$ is not sufficient to accurately predict $h_{_{\mbox{\footnotesize{td}}}}[ \cdot \, ; \, k]$
for all $k \in {\mathbb Z}$. Also, the expected received signal power ${\mathbb E}\left[ \left \vert y_{\mbox{\footnotesize{td}}}[k'] \right \vert^2\right]$ varies with $k'$. Therefore, for a generic doubly-spread channel, the TDM input-output relation is \emph{non-predictable},
\emph{fading} and \emph{non-stationary} (cf. Section \ref{secChannelInteractionTD}).
If there is no Doppler shift, then $h(\tau, \nu) = g(\tau) \delta(\nu)$. Let $h_{_{\mbox{\footnotesize{td}}}}(t) \Define w_{rx}(t) * g(t) * w_{tx}(t)$; then, substituting $h(\tau, \nu) = g(\tau) \delta(\nu)$ in (\ref{eqn38}), we get
$h_{_{\mbox{\footnotesize{td}}}}[ n \, ; \, k]= h_{_{\mbox{\footnotesize{td}}}}(t = n/B)$, which is independent of $k$. In this case, the TDM input-output relation is predictable, non-fading and stationary.
These facts are summarized in Table-\ref{tab2}.
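For a computational reading of (\ref{eqn38}), the following minimal Python sketch (our own illustration; the taps of $h_{_{\mbox{\footnotesize{td}}}}[\,\cdot\,;k]$ are made up and are not derived from a physical channel) evaluates a relation of the form $y[k'] = \sum_k x[k]\, h[k'-k\,;\,k]$ for a finite block of symbols. The FDM relation (\ref{eqn45}) has exactly the same algebraic form, with the FD filter in place of the TD one. When the symbol-dependence of the taps is removed, the loop collapses to an ordinary stationary convolution, as in the delay-only special case discussed above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

K_SYM = 64        # number of input symbols x[0..K_SYM-1]
L_TAP = 4         # filter support: h[n; k] nonzero only for n = 0..L_TAP-1

x = rng.choice([-1.0, 1.0], size=K_SYM)      # BPSK input symbols

h_static = np.array([1.0, 0.5, 0.25, 0.1])   # illustrative delay profile
def h(n, k):
    # Symbol-dependent taps: a static profile times a Doppler-like
    # phase progression in k (purely for demonstration).
    return h_static[n] * np.exp(2j * np.pi * 0.01 * k)

# Time-varying discrete convolution: y[k'] = sum_k x[k] * h[k'-k; k].
y = np.zeros(K_SYM + L_TAP - 1, dtype=complex)
for k in range(K_SYM):
    for n in range(L_TAP):
        y[k + n] += x[k] * h(n, k)

# Without the phase term the same loop reduces to an ordinary LTI
# convolution (predictable, non-fading, stationary):
y_lti = np.convolve(x, h_static)
\end{verbatim}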
\subsection{FDM input-output relation}
\label{secFDMIOrelation}
We see from Table-\ref{tabTDMFDM} that
the output signal $y_{\mbox{\footnotesize{fd}}}[k], k \in {\mathbb Z}$
is obtained from the input signal $x_{\mbox{\footnotesize{fd}}}[k], k \in {\mathbb Z}$ by the discrete FD convolution
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{eqn45}
y_{\mbox{\footnotesize{fd}}}[k'] & = & \sum\limits_{k \in {\mathbb Z}} x_{\mbox{\footnotesize{fd}}}[k] \, h_{_{\mbox{\footnotesize{fd}}}}[ k' - k \, ; \, k] \nonumber \\
& & \hspace{-25mm} \mbox{\small{where, for}} \, n \in {\mathbb Z}, \nonumber \\
h_{_{\mbox{\footnotesize{fd}}}}[ n \, ; \, k] & \hspace{-3mm} = & \hspace{-3mm} \iiint {\Big [} e^{-j 2 \pi \tau \frac{k}{T}} h(\tau,\nu) w_{rx}(f') e^{-j 2 \pi \tau (\frac{n}{T} - f')} \nonumber \\
& & \hspace{5mm} w_{tx}\left(\frac{n}{T} - \nu - f'\right) {\Big ]} \, d\tau \, d\nu \, df'.
\end{eqnarray}
\normalsize}
The discrete FD filter $h_{_{\mbox{\footnotesize{fd}}}}[ \cdot \, ; \, k]$ represents the effective discrete FD channel response
to the $k$th discrete input symbol $x_{\mbox{\footnotesize{fd}}}[k]$.
The dependence of $h_{_{\mbox{\footnotesize{fd}}}}[ \cdot \, ; \, k]$ on $k$ is not simple, and knowledge of this filter
$h_{_{\mbox{\footnotesize{fd}}}}[ \cdot \, ; \, q]$ for some integer $q$ is not sufficient to accurately predict $h_{_{\mbox{\footnotesize{fd}}}}[ \cdot \, ; \, k]$
for all $k \in {\mathbb Z}$. Also, the expected received signal power ${\mathbb E}\left[ \left \vert y_{\mbox{\footnotesize{fd}}}[k'] \right \vert^2\right]$ varies with $k'$. Therefore, for a generic doubly-spread channel, the FDM input-output relation is \emph{non-predictable},
\emph{fading} and \emph{non-stationary} (cf. Section \ref{secChannelInteractionFD}).
If there are no path delays (Doppler-only channel), then $h(\tau, \nu) = \delta(\tau) g(\nu)$. Let $h_{_{\mbox{\footnotesize{fd}}}}(f) \Define w_{rx}(f) * g(f) * w_{tx}(f)$; then, substituting $h(\tau, \nu) = \delta(\tau) g(\nu)$ in (\ref{eqn45}), we get
$h_{_{\mbox{\footnotesize{fd}}}}[ n \, ; \, k]= h_{_{\mbox{\footnotesize{fd}}}}(f = n/T)$, which is independent of $k$. In this special case, the FDM input-output relation is predictable, non-fading and stationary. These facts regarding FDM are summarized in Table-\ref{tab2}.
\begin{table*}
\caption{Signal processing steps in OTFS.}
\centering
\begin{tabular}{ | c || c | }
\hline
Transceiver operation & OTFS \\
\hline
Generating the discrete information signal &
$\begin{aligned}
x_{_{\mbox{\footnotesize{dd}}}}[k+nM,l+mN] & \Define \begin{cases}
x[k,l] & \hspace{-3mm} , m = n=0 \\
x[k,l] \, e^{j 2 \pi n \frac{l}{N}} & \hspace{-3mm} , \mbox{\small{otherwise}}
\end{cases}.
\end{aligned}$ \\
\hline
Generating the analog signal on the information grid &
${\Lambda}_{_{\mbox{\footnotesize{dd}}}} \, = \, \left\{ \left( k \frac{\tau_p}{M} , l \frac{\nu_p}{N} \right) \, {\Big |} \, k, l \in {\mathbb Z} \right\}$\\
& $\begin{aligned}
x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu) = \sum\limits_{k,l \in {\mathbb Z}} x_{_{\mbox{\footnotesize{dd}}}}[k,l] \, \delta\left(\tau - k \frac{\tau_p}{M} \right) \delta\left(\nu - l \frac{\nu_p}{N} \right)
\end{aligned}$ \\
\hline
Shaping the pulse at the transmitter & $x_{_{\mbox{\footnotesize{dd}}}}^{w_{tx}}(\tau, \nu) \, = \, w_{tx}(\tau,\nu) \, *_{\sigma} \, x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)$ \\
\hline
Converting from the modulation domain to the time domain & $s_{_{\mbox{\footnotesize{td}}}}(t) \, = \, {\mathcal Z}_t^{-1}\left( x_{_{\mbox{\footnotesize{dd}}}}^{w_{tx}}(\tau, \nu) \right)$ \\
\hline
Applying the channel $h(\tau,\nu)$ & $ r_{\mbox{\footnotesize{td}}}(t) = \iint h(\tau,\nu) s_{\mbox{\footnotesize{td}}}(t-\tau) \, e^{j 2 \pi \nu (t - \tau)} \, d\tau \, d\nu$ \\
\hline
Converting from the time domain to modulation domain & $y_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu) = {\mathcal Z}_t\left( r_{_{\mbox{\footnotesize{td}}}}(t)\right)$ \\
\hline
Shaping the pulse at the receiver & $y_{_{\mbox{\footnotesize{dd}}}}^{w_{rx}}(\tau, \nu) = w_{rx}(\tau,\nu) \, *_{\sigma} \, y_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)$ \\
\hline
Sampling on the information grid & $y_{_{\mbox{\footnotesize{dd}}}}[k',l'] = y_{_{\mbox{\footnotesize{dd}}}}^{w_{rx}}\left(\tau = k' \frac{\tau_p}{M}, \nu = l' \frac{\nu_p}{N} \right), \,\, k', l' \in {\mathbb Z}$ \\
\hline
\end{tabular}
\label{tabOTFS}
\end{table*}
\subsection{OTFS input-output relation}
\label{secOTFSIOrelation}
OTFS modulation is parameterized by integers $M \approx B \tau_p$ and $N \approx T \nu_p$. Since OTFS is a 2-D modulation, the information symbols are usually
arranged as a 2-D finite array $x[k,l], k=0,1,\cdots,M-1, l=0,1,\cdots, N-1$.
The discrete DD domain information signal $x_{_{\mbox{\footnotesize{dd}}}}[k',l'], k',l' \in {\mathbb Z}$ is then
defined as follows. For all $k=0,1,\cdots, M-1$ and all $l=0,1,\cdots, N-1$ we have
\begin{eqnarray}
\label{eqn46}
x_{_{\mbox{\footnotesize{dd}}}}[k+nM,l+mN] & \hspace{-2mm} \Define \begin{cases}
x[k,l] & \hspace{-3mm} , m = n=0 \\
x[k,l] \, e^{j 2 \pi n \frac{l}{N}} & \hspace{-3mm} , \mbox{\small{otherwise}}
\end{cases}
\end{eqnarray}
where $m,n \in {\mathbb Z}$.
The DD domain information grid is given by
\begin{eqnarray}
\label{eqn465}
{\Lambda}_{_{\mbox{\footnotesize{dd}}}} & \Define & \left\{ \left( k \frac{\tau_p}{M} , l \frac{\nu_p}{N} \right) \, {\Big |} \, k, l \in {\mathbb Z} \right\}.
\end{eqnarray}
The discrete DD domain information signal is then lifted to the information
grid ${\Lambda}_{_{\mbox{\footnotesize{dd}}}}$, resulting in the continuous DD domain
analog information signal $x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)$ which is given by
\begin{eqnarray}
\label{eqn47}
x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu) & \hspace{-3mm} \Define & \hspace{-3mm} \sum\limits_{k,l \in {\mathbb Z}} x_{_{\mbox{\footnotesize{dd}}}}[k,l] \, \delta\left(\tau - k \frac{\tau_p}{M} \right) \delta\left(\nu - l \frac{\nu_p}{N} \right).
\end{eqnarray}
Since $x_{_{\mbox{\footnotesize{dd}}}}[\cdot,\cdot]$ satisfies (\ref{eqn46}), it follows that $x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)$ is quasi-periodic.
In order to satisfy the time and bandwidth constraints for the transmit TD signal, $x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)$ is filtered (twisted convolution) in the DD domain with a DD domain filter $w_{tx}(\tau,\nu)$ resulting in the
quasi-periodic DD domain signal\footnote{\footnotesize{Twisted convolution between two DD domain functions $a(\tau,\nu)$ and $b(\tau,\nu)$ is given by $a(\tau,\nu) *_{\sigma} b(\tau,\nu) \Define \iint a(\tau',\nu') \, b(\tau - \tau', \nu - \nu') \, e^{j 2 \pi \nu'(\tau - \tau')} \, d\tau'd\nu'$. Unlike linear convolution, twisted convolution is
non-commutative, i.e., $a(\tau,\nu) *_{\sigma} b(\tau,\nu) \ne b(\tau,\nu) *_{\sigma} a(\tau,\nu)$. It is however associative, i.e., $a(\tau,\nu) *_{\sigma} [ b(\tau,\nu) *_{\sigma} c(\tau,\nu) ] = [ a(\tau,\nu) *_{\sigma} b(\tau,\nu) ] *_{\sigma} c(\tau,\nu) $.}}
\begin{eqnarray}
\label{eqn48}
x_{_{\mbox{\footnotesize{dd}}}}^{w_{tx}}(\tau, \nu) & \Define & w_{tx}(\tau,\nu) \, *_{\sigma} \, x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu).
\end{eqnarray}
The inverse Zak transform (see (\ref{eqn6})) of this filtered signal gives the transmit TD signal $s_{_{\mbox{\footnotesize{td}}}}(t)$, i.e.,
\begin{eqnarray}
\label{eqn49}
s_{_{\mbox{\footnotesize{td}}}}(t) & \Define & {\mathcal Z}_t^{-1}\left( x_{_{\mbox{\footnotesize{dd}}}}^{w_{tx}}(\tau, \nu) \right).
\end{eqnarray}
The received TD signal $r_{_{\mbox{\footnotesize{td}}}}(t)$ is then given by
\begin{eqnarray}
\label{eqn35}
r_{_{\mbox{\footnotesize{td}}}}(t) & \hspace{-2mm} = & \hspace{-2mm} \iint h(\tau,\nu) s_{\mbox{\footnotesize{td}}}(t-\tau) \, e^{j 2 \pi \nu (t - \tau)} \, d\tau \, d\nu.
\end{eqnarray}
At the receiver, Zak transform (\ref{eqn7}) converts the received TD signal to a DD domain signal $y_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)$, i.e.,
\begin{eqnarray}
\label{eqn50}
y_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu) & \Define & {\mathcal Z}_t\left( r_{_{\mbox{\footnotesize{td}}}}(t)\right).
\end{eqnarray}
This received DD domain signal is then match-filtered with a DD domain receive filter $w_{rx}(\tau,\nu)$
resulting in the filtered DD domain signal
\begin{eqnarray}
\label{eqn51}
y_{_{\mbox{\footnotesize{dd}}}}^{w_{rx}}(\tau, \nu) & \Define & w_{rx}(\tau,\nu) \, *_{\sigma} \, y_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu).
\end{eqnarray}
This quasi-periodic signal is then sampled on the information grid, resulting in the discrete DD
domain received signal
\begin{eqnarray}
\label{eqn52}
y_{_{\mbox{\footnotesize{dd}}}}[k',l'] & \hspace{-3mm} \Define & \hspace{-3mm} y_{_{\mbox{\footnotesize{dd}}}}^{w_{rx}}\left(\tau = k' \frac{\tau_p}{M}, \nu = l' \frac{\nu_p}{N} \right), \,\, k', l' \in {\mathbb Z}.
\end{eqnarray}
The signal processing steps from (\ref{eqn46})-(\ref{eqn52}) are summarized in Table-\ref{tabOTFS}.\footnote{\footnotesize{{The receiver processing, i.e., Zak transform of the received TD signal followed by twisted convolution with the receive DD filter and subsequent DD domain sampling can be implemented efficiently in the discrete DD domain using the Discrete Zak Transform (DZT \cite{Lampel2022}) on the sampled received TD signal (sampled at integer multiples of $1/B$). For a given $(M,N)$, the complexity of DZT is $O(MN \log N)$ when compared to the $O(MN \log (MN))$ complexity of DFT/IDFT processing in a FDM based system with the same frame duration $T$ and bandwidth $B$.}}}
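The footnote above attributes the $O(MN \log N)$ receiver complexity to the Discrete Zak Transform. As an illustration (our own sketch, not the implementation of \cite{Lampel2022}; normalisation conventions for the DZT vary across the literature), the transform of a length-$MN$ sampled TD sequence can be computed with one $N$-point FFT per delay bin:
\begin{verbatim}
import numpy as np

def dzt(s, M, N):
    """Discrete Zak transform of a length M*N sequence s.

    Returns an (M, N) array Z with
        Z[k, l] = sum_{n=0}^{N-1} s[k + n*M] * exp(-2j*pi*n*l/N),
    i.e. an N-point FFT for each delay index k, which costs
    O(M*N*log N) operations overall.
    """
    assert len(s) == M * N
    # Row k of `grid` collects the samples s[k], s[k+M], ..., s[k+(N-1)M].
    grid = np.asarray(s, dtype=complex).reshape(N, M).T
    return np.fft.fft(grid, axis=1)

# Example: M = N = 4 with random TD samples.
rng = np.random.default_rng(1)
s = rng.standard_normal(16) + 1j * rng.standard_normal(16)
Z = dzt(s, M=4, N=4)
\end{verbatim}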
{ From (\ref{eqn46})-(\ref{eqn52}), it follows that the OTFS input-output relation can be expressed as a discrete twisted convolution}
\begin{eqnarray}
\label{eqn53}
\hspace{-6mm}
y_{_{\mbox{\footnotesize{dd}}}}[k',l'] & \hspace{-3mm} = & \hspace{-3mm} \sum\limits_{k, l \in {\mathbb Z}} h_{_{\mbox{\footnotesize{dd}}}}[k, l] \, x_{_{\mbox{\footnotesize{dd}}}}[k' - k ,l' - l]
\, e^{j 2 \pi \frac{(k' - k)}{M} \frac{l}{N} } \nonumber \\
& \hspace{-3mm} = & \hspace{-3mm} \sum\limits_{k, l \in {\mathbb Z}} h_{_{\mbox{\footnotesize{dd}}}}[k' - k, l' - l] \, x_{_{\mbox{\footnotesize{dd}}}}[k ,l]
\, e^{j 2 \pi \frac{(l' - l)}{N} \frac{k}{M}},
\end{eqnarray}
{where $h_{_{\mbox{\footnotesize{dd}}}}[k, l]$ is the discrete effective DD domain channel filter, given by sampling the continuous effective DD domain channel filter $h_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)$, i.e.,}
\begin{eqnarray}
\label{eqn53_b}
h_{_{\mbox{\footnotesize{dd}}}}[k, l] & \Define & h_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu){\Big \vert}_{\left( \tau = \frac{k \tau_p}{M} \,,\, \nu = \frac{l \nu_p}{N} \right)}, \nonumber \\
h_{_{\mbox{\footnotesize{dd}}}}(\tau , \nu) & \Define & w_{rx}(\tau, \nu) *_{\sigma} h(\tau,\nu) *_{\sigma} w_{tx}(\tau, \nu).
\end{eqnarray}
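The relation (\ref{eqn53}) together with the quasi-periodic extension (\ref{eqn46}) is easy to evaluate numerically. The sketch below (our own illustration; the two DD taps are arbitrary and are not taken from the paper) computes $y_{_{\mbox{\footnotesize{dd}}}}[k',l']$ on one fundamental period.
\begin{verbatim}
import numpy as np

def xdd(x, k, l, M, N):
    # Quasi-periodic extension (eqn. (46)):
    #   x_dd[k0 + n*M, l0 + m*N] = x[k0, l0] * exp(2j*pi*n*l0/N).
    n, k0 = divmod(k, M)
    m, l0 = divmod(l, N)    # the phase does not depend on m
    return x[k0, l0] * np.exp(2j * np.pi * n * l0 / N)

def otfs_output(x, h_taps, M, N):
    # Discrete twisted convolution (eqn. (53)), evaluated for
    # 0 <= k' < M and 0 <= l' < N.
    y = np.zeros((M, N), dtype=complex)
    for kp in range(M):
        for lp in range(N):
            acc = 0j
            for (k, l), hval in h_taps.items():
                acc += (hval * xdd(x, kp - k, lp - l, M, N)
                        * np.exp(2j * np.pi * (kp - k) * l / (M * N)))
            y[kp, lp] = acc
    return y

M, N = 4, 4
rng = np.random.default_rng(2)
x = rng.choice([-1.0, 1.0], size=(M, N)).astype(complex)  # BPSK symbols on the DD grid
h_taps = {(0, 0): 1.0, (1, 1): 0.5}   # two illustrative (delay, Doppler) taps
y = otfs_output(x, h_taps, M, N)
\end{verbatim}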
{Typically, the transmit and receive filters are localized and the channel admits bounded delay and Doppler spreads, and hence, in this case, $h_{_{\mbox{\footnotesize{dd}}}}(\tau , \nu)$ is zero whenever $\tau > \tau_{max}, \tau < 0$ or $\vert \nu \vert > \nu_{max}$. We refer to $\tau_{max}$ and $2\nu_{max}$ as the effective delay and Doppler spreads, respectively. Consequently, if the periods satisfy the crystallization condition (\ref{eqn28}) with respect to the effective spreads, that is, $\tau_p > \tau_{max}$ and $\nu_p > 2\nu_{max}$, the input-output relation becomes non-fading and predictable (cf. Section \ref{secChannelInteractionDD}).}
\begin{figure*}[!h]
\centering
\includegraphics[width=15cm, height=9.0cm]{Fig13.jpg}
\caption{For a fixed
$M = N = 4$, the average power of the
received discrete DD domain signal is plotted for varying delay and Doppler domain period $(\tau_p,\nu_p)$
under the constraint $\tau_p \, \nu_p = 1$.
As $\tau_p \rightarrow 0$, the delay period becomes smaller than the effective channel delay spread due to which
there is aliasing along the delay domain resulting in variation in the received power i.e., fading along the Doppler domain.
We know that OTFS $\rightarrow$ FDM as $\tau_p \rightarrow 0$, which explains frequency selectivity in FDM.
On the other hand, as $\tau_p \rightarrow \infty$, $\nu_p = (1/\tau_p) \rightarrow 0$ and the Doppler period
becomes smaller than the effective channel Doppler spread
due to which there is aliasing along the Doppler domain resulting in fading along the delay domain. We know that OTFS $\rightarrow$ TDM as $\tau_p \rightarrow \infty$, which explains time selectivity in TDM.
However, when the delay and Doppler periods are greater than the effective channel delay and Doppler spread respectively, there is no aliasing and the OTFS input-output
relation is predictable and exhibits no fading (the average received power profile is flat and appears like the surface of crystalline solids).}
\label{fig_9b}
\end{figure*}
\underline{Crystallization of the OTFS input-output relation}:
{Fig.~\ref{fig_9b} depicts the non-fading and predictability attributes of the OTFS input-output relation for the channel example from Fig.~\ref{fig_7}. Recall that the delay and Doppler spreads of this channel are $2 \mu s$ and $1700$ Hz, respectively. The main point is that despite this channel being doubly-spread, the OTFS input-output relation is non-fading and predictable whenever the DD periods satisfy the crystallization condition with respect to the channel spreads. For this example, this means that $\tau_p > 2 \mu s$ and $\nu_p > 1700$ Hz. Moreover, the constraint $\tau_p \, \nu_p = 1$ implies that the \emph{crystalline regime} is $2 \mu s < \tau_p < 588 \mu s$ and $1700 < \nu_p < 5 \times 10^5$ Hz (see the green rectangle in Fig.~\ref{fig_9b}). Operating the system in the crystalline regime makes it possible to maintain uniform performance over a wide range of channel spreads and a multitude of use cases (e.g., LEO satellite/UAV communication, mmWave/THz communication). These practical aspects will be discussed in more detail in the second part of this paper.} Table \ref{tab2} summarizes the attributes of the input-output relations for TDM, FDM and OTFS.
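The end points of the crystalline regime quoted above follow directly from the crystallization condition and the constraint $\tau_p \nu_p = 1$; a back-of-the-envelope check (our own illustration, using the channel spreads of the example) is:
\begin{verbatim}
# Crystalline regime for the example channel: delay spread 2 us,
# Doppler spread 1700 Hz, under the constraint tau_p * nu_p = 1.
delay_spread = 2e-6       # seconds
doppler_spread = 1700.0   # Hz

tau_p_min = delay_spread           # tau_p must exceed the delay spread
tau_p_max = 1.0 / doppler_spread   # since nu_p = 1/tau_p must exceed the Doppler spread
nu_p_min = doppler_spread
nu_p_max = 1.0 / delay_spread

print(f"{tau_p_min*1e6:.0f} us < tau_p < {tau_p_max*1e6:.0f} us")  # 2 us < tau_p < 588 us
print(f"{nu_p_min:.0f} Hz < nu_p < {nu_p_max:.0f} Hz")             # 1700 Hz < nu_p < 500000 Hz
\end{verbatim}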
\begin{table}
\caption{Attributes of input-output relation.}
\centering
\begin{tabular}{ | c || c | c | c | c | }
\hline
Attribute & Channel type & TDM & FDM & OTFS \\
\hline
\hline
\multirow{3}{5em}{Non-fading} & Delay spread only & $\checkmark$ & $\times$ & $\checkmark$ \\
\cline{2-5}
& Doppler spread only & $\times$ & $\checkmark$ & $\checkmark$ \\
\cline{2-5}
& Doubly-spread & $\times$ & $\times$ & $\checkmark$ \\
\hline
\multirow{3}{5em}{Predictable} & Delay spread only & $\checkmark$ & $\times$ & $\checkmark$ \\
\cline{2-5}
& Doppler spread only & $\times$ & $\checkmark$ & $\checkmark$ \\
\cline{2-5}
& Doubly-spread & $\times$ & $\times$ & $\checkmark$ \\
\hline
\end{tabular}
\label{tab2}
\end{table}
\section{Conclusions}
{\color{black}
The roots of OTFS modulation go back to the Erlangen Program, introduced by Felix Klein in 1872, which sought to understand mathematical structures, like channels, through the symmetries which leave them invariant. It provides an umbrella for several results in information theory and coding, beginning with the fact that it is Gaussian inputs that achieve capacity on Gaussian channels \cite{Shannon}. It suggests that we should take advantage of situations in which channel errors form a group. Quantum computing provides an example, since bit flips provide discrete analogs of delay operators, and phase flips provide discrete analogs of Doppler operators. A commutative subgroup of the Pauli group determines a quantum error correcting code, which stores information on common eigenmodes of the subgroup \cite{CSS}. The parallels with OTFS are very clear.}
{ We have described the OTFS modulation within a mathematical framework of Zak theory. Within this framework, the OTFS carrier is a quasi-periodic DD domain pulse which when converted to time via the inverse Zak transform is realized by a pulsone. The main technical message of this paper is that whenever the DD periods of the pulse are taken to be large compared with the channel spreads, the OTFS input-output relation is non-fading and predictable. When this constraint holds, we say that one operates in the crystalline regime. The follow-up of this paper will demonstrate in detail the performance advantages of operating in the crystalline regime.}
{ Compatibility with contemporary multi-carrier signaling motivated an approximation \cite{RH1}, which we refer to as MC-OTFS, to the Zak theoretic variant of OTFS described in this paper, which we refer to as Zak-OTFS. In MC-OTFS, DD domain signals are periodic functions of two variables (instead of quasi-periodic functions) and the conversion to the TD is carried out in two steps (instead of the one-step Zak transform): the first step is conversion to the TF domain using the inverse symplectic finite Fourier transform, and the second step is conversion from the TF domain to the TD using the Heisenberg transform. In the follow-up of this paper, we will provide a detailed performance comparison between the Zak-OTFS and MC-OTFS variants.
The first wave of OTFS research mainly focused on MC-OTFS. We expect the next wave of OTFS research to be focused on Zak-OTFS, which can offer performance and complexity advantages.}
\section{Acknowledgements}
The first author would like to thank Mr. Imran Ali Khan (Research Scholar at I.I.T. Delhi) for his help in making some of the figures in this paper. The work of Saif Khan Mohammed was supported by the Prof. Kishan and Pramila Gupta Chair at I.I.T. Delhi. The second author, Ronny Hadani, would like to thank Shlomo Rakib and Shachar Kons from Cohere Technologies, as many of the concepts described in this paper are inspired by several years of development conducted in collaboration with them. A. Chockalingam acknowledges the support from the J. C. Bose National Fellowship, Science and Engineering Research Board, Department of Science and Technology, Government of India. The work of Robert Calderbank was supported in part by the Air Force Office of Scientific Research under grants FA 8750-20-2-0504 and FA 9550-20-1-0266.
|
{
"arxiv_id": "2302.08602",
"language": "en",
"timestamp": "2023-02-20T02:02:55",
"url": "https://arxiv.org/abs/2302.08602",
"yymm": "2302"
} | \section{Introduction}
Finding sharp norm estimates of Fourier multipliers is a central and intriguingly delicate theme in Euclidean harmonic analysis. Nowadays several multiplier theorems are known including the celebrated H\"ormander-Mikhlin multiplier theorem that estimates the bound of an $L^p$-multiplier in terms of differentiability and regularity properties of the symbol. This is just one of the many theorems in harmonic analysis and we refer to the monographs \cite{GrafakosClassical}, \cite{GrafakosModern}, \cite{Stein} for a broader treatment of the subject.
Over the past decade there has been an increasing interest in the construction of $L^p$-multipliers on a non-abelian locally compact group $G$. Here the group plays the role of the frequency side. For a symbol $m \in L^\infty( G)$ the central question is for which $1 < p < \infty$ the Fourier multiplier
\[
T_m: L^2( \widehat{G} ) \rightarrow L^2( \widehat{G} ): \lambda_G(f) \mapsto \lambda_G(mf),
\]
extends to a bounded map $L^p( \widehat{G} ) \rightarrow L^p( \widehat{G} )$. Here $\lambda_G$ is the left regular representation and $L^p(\widehat{G})$ is the non-commutative $L^p$-space of the group von Neumann algebra $L^\infty(\widehat{G})$ of $G$. Moreover, ideally one would have sharp bounds on the norms of such multipliers and understand their regularity properties.
Recently, a number of such multiplier theorems have been obtained. In the realm of discrete groups Ricard and Mei \cite{MeiRicard} gave a free analogue of the Hilbert transform through Cotlar's identity yielding multipliers on free groups. The techniques were exploited further in \cite{MeiRicardXu}, \cite{GonzalezCotlarPaper} for free (amalgamated) products of groups and groups acting on tree-like structures. In \cite{JungeMeiParcet} (see further \cite{GonzalezSpectralPaper}, \cite{JungeMeiParcetJEMS}) a very effective method based on cocycles was introduced to construct a wide class of multipliers. Some of the results we mentioned so far also yield multipliers for non-discrete groups.
In the case of semi-simple Lie groups the main achievements that have been made are contained in \cite{PRS} and \cite{Tablate}. In particular for ${\rm SL}(n, \mathbb{R})$, the group of $n \times n$-matrices over $\mathbb{R}$ with determinant 1, an analogue of the classical H\"ormander-Mikhlin multiplier theorem is obtained \cite{PRS}. The differentiability properties of the symbol are then described in terms of Lie derivatives. The methods from \cite{PRS}, \cite{Tablate} rely on a local transference from Schur multipliers and local approximations of the Lie group with Euclidean spaces. The main theorem \cite[Theorem A]{PRS} is extremely effective and close to sharp for symbols that are supported on a small enough neighbourhood of the identity. Moreover, these H\"ormander-Mikhlin type conditions automatically imply certain integrability of the symbol and therefore the theorem in fact extends from a small neighbourhood to symbols on the entire Lie group. Such symbols thus have a fast decay; fast enough to assure integrability properties of the symbol \cite[Remark 3.8]{PRS}. Concerning the behavior of (radial) multipliers away from the identity the rigidity theorem \cite[Theorem B]{PRS} shows that symbols of $L^p$-multipliers in fact must necessarily have a sufficient amount of decay. The fundament of this phenomenon stems from \cite{LafforgueDeLaSalle}, \cite{DeLaat}. In the degree of decay there is a gap between the multiplier theorems \cite[Theorem A]{PRS}, \cite[Theorem A2]{Tablate} and the rigidity theorem \cite[Theorem B]{PRS} in case $p$ is close to 2. The current paper provides a new viewpoint on this gap.
We also mention that connections between discrete and locally compact groups have been made through noncommutative versions of De Leeuw theorems \cite{CPPR}, \cite{CJKM}.
\vspace{0.3cm}
The main result of this paper obtains a new multiplier theorem that is applicable to a natural class of semi-simple Lie groups. Our main result is the following theorem as announced in the abstract. The theorem provides a Sobolev type estimate for Fourier multipliers. The regularity properties are formulated in terms of the distance of $p$ from 2.
\begin{theorem} \label{Thm=MainIntro}
Let $G$ be a semi-simple Lie group in the Harish-Chandra class with maximal compact subgroup $K$. Let $\Omega_K$ be minus the radial Casimir operator. Let $S_G > \frac{1}{4} \dim(G/K), s \in (0, S_G]$ and $p \in (1,\infty)$ be such that
\[
\left| \frac{1}{p} - \frac{1}{2} \right| < \frac{s}{2 S_G}.
\]
Then, there exists a constant $C_{G,s,p} >0$ such that for every $m \in L^\infty(G) \cap L^2(G)$ bi-$K$-invariant with $m \in {\rm Dom}(\Omega_K^s)$ and $\Omega_K^s(m) \in L^{2S_G/s}(G)$ we have,
\begin{equation}\label{Eqn=IntroMain}
\Vert T_m: L^p(\widehat{G}) \rightarrow L^p(\widehat{G}) \Vert \leq C_{G, s,p} \Vert \Omega_K^s(m) \Vert_{L^{2S_G/s}(G)}.
\end{equation}
\end{theorem}
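To illustrate the admissible range of exponents (this numerical example is ours and is not part of the statement above): if $\dim(G/K) = 5$, any $S_G > \frac{5}{4}$ is allowed; choosing full regularity $s = S_G$ the condition $\vert \frac{1}{p} - \frac{1}{2} \vert < \frac{1}{2}$ admits every $p \in (1, \infty)$, whereas the choice $s = \frac{1}{2} S_G$ gives $\vert \frac{1}{p} - \frac{1}{2} \vert < \frac{1}{4}$, i.e. $\frac{4}{3} < p < 4$.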
There are several novelties in our approach compared to earlier multiplier theorems on non-abelian groups, in particular Lie groups. Firstly it is the first time that differentiability properties with respect to the Casimir operator are used in estimates on $L^p$-multipliers. As the Casimir operator equals the Laplace-Beltrami operator on the homogeneous space $G \slash K$ the estimate \eqref{Eqn=IntroMain} should be understood as a Sobolev norm estimate.
Secondly, our proof uses the representation theory of $G$ to construct $L^p$-multipliers. In this case we use the spherical dual of the group to construct bi-$K$-invariant multipliers. This makes a link to Harish-Chandra's Plancherel theorem for spherical functions. Most notably, we establish a link between the construction of $L^p$-multipliers and Heat kernel estimates for the Casimir operator, which are well-studied in the literature. For the Heat kernel estimates and the consequent estimates for Bessel-Green-Riesz potentials we shall mostly rely on Anker-Ji \cite{AnkerJi} (see Section \ref{Sect=HeatKernel}); though in many cases such estimates were obtained earlier, see \cite{GangolliActa}, \cite{AnkerOldPaper}, or for ${\rm SL}(n, \mathbb{R})$, see \cite{Sawyer}.
Theorem \ref{Thm=MainIntro} is very much in the spirit of the Calder\'on-Torchinsky theorem \cite{CT1}, \cite{CT2} in Euclidean analysis, see also the more recent papers \cite{GrafakosIMRN}, \cite{GrafakosCT}, but there are several fundamental differences too. For instance we do not have a Littlewood-Paley theory at our disposal, nor are we able to control the volumes of translations of areas. Secondly, the classical results on Bessel potentials that are used in \cite{GrafakosCT} (see \cite[Section 6.1.2]{GrafakosModern}) need to be replaced by the much deeper results of Anker and Ji.
We shall show in Section \ref{Sect=Examples} that Theorem \ref{Thm=MainIntro} leads to several new classes of Fourier multipliers that are fundamentally beyond the reach of the earlier theorems from \cite{PRS}, \cite{Tablate}. For instance, the answer to the following question seems to be unknown. Suppose we have a symbol $m$ that is smooth in a neighbourhood of the origin of $G$ and of the form $m(k_1 \exp(H) k_2) = e^{-A_p \Vert H \Vert}, k_1, k_2 \in K, H \in \fraka$ outside that neighbourhood for some constant $A_p > 0$. Determine for which $A_p > 0$ and which $1 < p < \infty$ such a symbol can be an $L^p$-Fourier multiplier. The rigidity theorem \cite[Theorem B]{PRS} puts a lower bound on $A_p$ whereas
\cite[Remark 3.8]{PRS}, which is fundamental to \cite[Theorem A]{PRS}, yields an upper bound on $A_p$. This paper improves on this upper bound as explained in Section \ref{Sect=Examples}, where we also explain that our theorem reaches beyond \cite[Theorem A2]{Tablate}. This gives a negative answer to the question at the end of \cite[Remark 4.2]{Tablate} if one is allowed to differentiate in $p$.
\vspace{0.3cm}
\noindent {\bf Acknowledgements.} The author wishes to thank Jordy van Velthoven for several useful discussions and references to the literature.
\section{Preliminaries}\label{Sect=Prelim}
For expressions $A$ and $B$ we write $A \approx B$ if there exist two absolute constants $c_1, c_2 > 0$ such that $c_1 A \leq B \leq c_2 A$. We write $A \preceq B$ if only the inequality $c_1 A \leq B$ holds. The symbol $\otimes$ will denote the tensor product. In case we take a tensor product of von Neumann algebras we mean the von Neumann algebraic tensor product, i.e. the strong operator topology closure of their vector space tensor product. In case of a tensor product of $L^p$-spaces we mean the $L^p$-norm closure of that vector space tensor product.
\subsection{Lie groups and Lie algebras} For standard references on Lie groups and Lie algebras we refer to \cite{Helgason}, \cite{Knapp}, \cite{Humphreys} and for spherical functions to \cite{HelgasonSpherical} and \cite{JorgensonLang}.
This paper crucially relies on \cite[Section 4]{AnkerJi} and therefore from this point onwards we assume that {\it $G$ is semi-simple and in the Harish-Chandra class}, see \cite{Knapp}. This includes all semi-simple, connected, linear Lie groups.
In particular $G$ has finite center and finitely many connected components. Let $K$ be a maximal compact subgroup of $G$ which is unique up to conjugation.
Let $G = KAN$ be the Iwasawa decomposition where $A$ is abelian and $N$ is nilpotent. Let $G = KAK$ be the Cartan decomposition of $G$. Let $\frakg$, $\fraka$ and $\frakn$ be the Lie algebras of respectively $G$, $A$ and $N$.
We let $\Sigma \in \fraka^\ast$ be the set of roots and by fixing a positive Weyl chamber $\fraka^+$ we let $\Sigma^+$ be the set of positive roots. We let $\Sigma^{++}$ be the set of positive indivisible roots; recall that this means that $\alpha \in \Sigma^{++}$ if $\alpha$ is a positive root and $\frac{1}{2} \alpha$ is not a root.
If $G$ is complex then $\Sigma^+ = \Sigma^{++}$ (see \cite[Section II.8.4]{Humphreys}); otherwise, if $\alpha \in \Sigma^{++}$, the only scalar multiples of $\alpha$ that are possibly roots are $-2 \alpha, -\alpha, \alpha$, and $2 \alpha$ (see \cite[Exercise III.9]{Humphreys}).
Let $m_\alpha$ be the multiplicity of a root.
Recall that $\dim(N) = \sum_{\alpha \in \Sigma^+} m_\alpha$.
We also set $\rho = \frac{1}{2} \sum_{\alpha \in \Sigma^+} m_\alpha \alpha$.
\subsection{Killing form}
Let $\langle \cdot, \cdot \rangle$ be the Killing form on $\frakg$ which is a non-degenerate bilinear form. The Killing form restricts to a positive definite non-degenerate form on $\fraka$. For $H \in \fraka$ we set $\Vert H \Vert = \langle H, H \rangle^{\frac{1}{2}}$. The Killing form linearly identifies the dual $\fraka^\ast$ with $\fraka$ by identifying $H \in \fraka$ with $\alpha_H( \: \cdot \:) := \langle H, \cdot \rangle \in \fraka^\ast$. Under this identification the pairing $\langle \: \cdot \:, \: \cdot \: \rangle$ and corresponding norm are thus defined on $\fraka^\ast$ as well; this also defines the pairing between $\fraka$ and $\fraka^\ast$.
\subsection{Haar measure}
We denote by $\mu_G$ the Haar measure of $G$, which is both left- and right-invariant as $G$ is semi-simple and hence unimodular.
The Haar measure decomposes with respect to the Cartan decomposition (see \cite[Theorem I.5.8]{Helgason} or \cite[Eqn. (2.1.5)]{AnkerJi}) as
\begin{equation}\label{Eqn=CartanIntegral}
\int_G f(g) d\mu_G(g) = \vert K \slash M \vert \int_{K} \int_{\fraka^+} \int_K f(k_1 \exp(H) k_2) \delta(H) dk_1 dH dk_2,
\end{equation}
where $M$ is the group of elements in $K$ that commute with $A$ (i.e. the centralizer of $A$ in $K$) and
\begin{equation}\label{Eqn=CartanIntegral2}
\delta(H) = \prod_{\alpha \in \Sigma^+} \sinh^{m_\alpha} \langle \alpha, H \rangle \approx \prod_{\alpha \in \Sigma^+} \left( \frac{ \langle \alpha, H \rangle }{ 1 + \langle \alpha, H \rangle } \right)^{m_\alpha} e^{2 \langle \rho, H \rangle}.
\end{equation}
The volume of the quotient $ \vert K \slash M \vert$ will not play a very significant role in our paper and can be regarded as a constant.
We let $(f_1 \ast f_2)(g) = \int_G f_1(h) f_2(h^{-1} g) d\mu_G(h)$ be the convolution product for suitable $\mathbb{C}$-valued functions $f_1$ and $f_2$ on $G$.
\subsection{Function spaces and $K$-invariance}
Let $F$ be either $C^\infty, C^\infty_c, C, L^p$ so that we mean by $F(G)$ either $C^\infty(G), C^\infty_c(G), C(G), L^p(G)$ which are respectively the functions $G \rightarrow \mathbb{C}$ that are smooth, smooth with compact support, continuous and $p$-integrable with respect to $\mu_G$.
We use the notation $F(K \backslash G), F(G \slash K), F(K \backslash G \slash K)$ to denote the space of functions in $F(G)$ that are $K$-invariant from the left, right or both left and right respectively.
\subsection{Casimir operator}\label{Sect=Casimir}
Let $\Omega \in U(\frakg)$ be the Casimir element of $G$ where $U(\frakg)$ is the universal enveloping algebra of $\frakg$. $\Omega$ acts as a second order differential operator on $C^\infty(G)$ (see \cite[Section II.1]{JorgensonLang}) and we let $\Omega_K^0$ be the restriction of $-\Omega$ to $C^\infty(K \backslash G \slash K)$. On the domain
\[
{\rm Dom}(\Omega_K^0) = \{ f \in C^\infty( K \backslash G \slash K) \cap L^2( K \backslash G \slash K) \mid \Omega_K^0(f) \in L^2( K \backslash G \slash K) \},
\]
the operator $\Omega_K^0$ is essentially self-adjoint; in fact its restriction to the Harish-Chandra Schwartz space is well-known to be essentially self-adjoint. So the closure $\Omega_K$ of $\Omega_K^0$ is an unbounded self-adjoint operator on $L^2( K \backslash G \slash K )$. Then $\Omega_K$ is {\it positive} and the spectrum of $\Omega_K$ is the interval $[ \langle \rho, \rho \rangle, \infty)$. We may therefore consider fractional powers $\Omega_K^s$ with $s \in \mathbb{R}$; these fractional powers are bounded operators in case $s \leq 0$.
\subsection{Von Neumann algebras} We denote $B(H)$ for the bounded operators on a Hilbert space $H$ and for $B \subseteq B(H)$ we set $B'= \{ x \in B(H) \mid \forall b \in B: xb = bx \}$ for the commutant. For von Neumann algebras we refer to \cite{Tak1} as a standard work and for non-commutative $L^p$-spaces to \cite{Nelson}, \cite{PisierXu}.
Let $M$ be a semi-finite von Neumann algebra with normal semi-finite faithful trace $\tau$. By $L^p(M, \tau), 1 \leq p < \infty$ we denote the Banach space consisting of all closed densely defined operators $x$ affiliated with $M$ such that $\Vert x \Vert_p := \tau( \vert x \vert^p )^{1/p} < \infty$. The set $M \cap L^p(M, \tau)$ is dense in $L^p(M, \tau)$, so that alternatively $L^p(M, \tau)$ can be defined as the abstract completion of this intersection. We set $L^\infty(M, \tau) = M$.
\subsection{Group von Neumann algebras} Let $G$ be a locally compact unimodular group; in particular any semi-simple Lie group. We let $(\lambda_G(g) f)( h) = f(g^{-1} h)$ and $(\lambda_G'(g) f)( h) = f(hg), g,h \in G$ be the left- and right regular representation of $G$ on $L^2(G)$. Let $L^\infty(\widehat{G}) = \{ \lambda_G(g) \mid g \in G \}''$ (double commutant of the set) be the left group von Neumann algebra of $G$. For $f \in L^1(G)$ set $\lambda_G(f) = \int_G f(g) \lambda_G(g) d\mu_G(g) \in L^\infty(\widehat{G})$. $L^\infty(\widehat{G})$ can be equipped with the Plancherel trace $\tau_{\widehat{G}}: L^\infty(\widehat{G})^+ \rightarrow [0, \infty]$ given by
\[
\tau_{\widehat{G}}(x^\ast x) = \Vert f \Vert_2^2,
\]
in case there is $f \in L^2(G)$ such that $x h = f \ast h$ for all $h \in C_c(G)$. We set $\tau_{\widehat{G}}(x^\ast x) = \infty$ otherwise. Briefly set $L^p(\widehat{G}) = L^p( L^\infty(\widehat{G}), \tau_{\widehat{G}})$.
Then, by definition, $\lambda_G$ extends to an isometry $L^2(G) \rightarrow L^2(\widehat{G})$ (Plancherel identity).
There exists a normal $\ast$-homomorphism called the comultiplication
\[
\Delta_{\widehat{G}}: L^\infty(\widehat{G}) \rightarrow L^\infty(\widehat{G}) \otimes L^\infty(\widehat{G}),
\]
that is determined by $\Delta_{\widehat{G}}(\lambda_G(g)) = \lambda_G(g) \otimes \lambda_G(g), g \in G$.
\subsection{Fourier multipliers} Let $1 < p < \infty$. We call a function $m \in L^\infty(G)$ an $L^p$-Fourier multiplier if there is a bounded map $T_m: L^p(\widehat{G}) \rightarrow L^p(\widehat{G})$ that is determined by $\lambda_G(f) \mapsto \lambda_G(mf)$ for $f \in C_c(G) \ast C_c(G)$.
\subsection{Vector-valued $L^2$-spaces}\label{Sect=VecVal} Let $(X, \mu_X)$ be a regular Borel measure space and let $\mathcal{X}$ be a Banach space. We write $L^2(X; \mathcal{X})$ for the Banach space of locally Bochner integrable functions $f: X \rightarrow \mathcal{X}$ such that $\Vert f \Vert_{L^2(X; \mathcal{X})} := \left( \int_X \Vert f(x) \Vert_{\mathcal{X}}^2 \, d\mu_X(x) \right)^{\frac{1}{2}} < \infty$.
\begin{remark}\label{Rmk=VectorFunctional}
Let $\varphi \in L^2(X, \mu_X)$. Then the map $(\varphi \otimes {\rm id}): L^2(X; \mathcal{X}) \rightarrow \mathcal{X}: f \mapsto \int_X f(x) \varphi(x) d\mu_X(x)$ is bounded with the same norm as $\varphi$, as follows easily from the Cauchy-Schwarz inequality.
\end{remark}
\section{Kernel estimates}\label{Sect=HeatKernel}
In \cite{AnkerJi} Anker and Ji showed that the fractional powers of the Casimir operator are represented by convolution kernels and they determined their asymptotic behavior. Consequently we can easily determine when these kernels are contained in $L^p(G)$. These results from \cite{AnkerJi} follow from rather deep estimates on the Heat kernel associated with the Casimir operator that were conjectured in \cite{AnkerOldPaper} and proved in several special cases before \cite{AnkerJi}.
We give some precise definitions borrowing some notation from \cite{AnkerJi}; but what really matters for us is \eqref{Eqn=Conv}. Define the Heat kernel, for $g \in G, t >0$,
\[
k_t(g) = \frac{C_{SF}}{ \vert W \vert} \int_{\fraka} \vert c(\lambda) \vert^{-2} e^{-t ( \Vert \lambda \Vert^2 + \Vert \rho \Vert^2 ) } \varphi_\lambda(g) d\lambda.
\]
Here $W$ is the Weyl group, $C_{SF} > 0$ a normalisation constant in the Spherical Fourier transform, $c$ the Harish-Chandra $c$-function and $\varphi_\lambda$ the spherical function indexed by $\lambda \in \fraka$ as in \cite{AnkerJi}.
Then for $s >0$ we set the Bessel-Green-Riesz kernel using the $\Gamma$-function identity
\[
\kappa_s := \Gamma(s)^{-1} \int_0^\infty t^{s-1} k_t dt.
\]
These kernels are bi-$K$-invariant, as is the spherical function $\varphi_\lambda$, and satisfy
\begin{equation}\label{Eqn=Conv}
e^{ - t \Omega_K} f = k_t \ast f, \qquad \Omega_K^{-s} f = \kappa_s \ast f \qquad f \in C_c(K \backslash G \slash K).
\end{equation}
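For orientation we record the elementary computation behind the normalisation by $\Gamma(s)^{-1}$ (this remark is ours): since the spherical Fourier transform of $k_t$ is $e^{-t(\Vert \lambda \Vert^2 + \Vert \rho \Vert^2)}$ by construction, the identity $\int_0^\infty t^{s-1} e^{-tc} \, dt = \Gamma(s) c^{-s}$ for $c > 0$ gives
\[
\widehat{\kappa_s}(\lambda) = \Gamma(s)^{-1} \int_0^\infty t^{s-1} e^{-t (\Vert \lambda \Vert^2 + \Vert \rho \Vert^2)} \, dt = \left( \Vert \lambda \Vert^2 + \Vert \rho \Vert^2 \right)^{-s},
\]
which is consistent with \eqref{Eqn=Conv} and with the expression for $\widehat{\kappa_s}$ recalled after Corollary \ref{Cor=L2Case}.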
The following theorem is essentially proved in \cite{AnkerJi}.
\begin{theorem}\label{Thm=AnkerJi}
For $0 < 2s < \dim(G \slash K) $ and
\[
1 < q < \frac{ \dim( G/K ) }{ \dim( G/K ) - 2s},
\]
with moreover $q \leq 2$ we have that $\kappa_s$ is contained in $L^{q}( K \backslash G \slash K)$.
\end{theorem}
\begin{proof}
By the integral decomposition \eqref{Eqn=CartanIntegral} it suffices to show that $\kappa_s^{\fraka} := \kappa_s \circ \exp\vert_{\fraka^+}$ is contained in $L^{q}( \fraka^+, \delta(H) dH)$. We first consider the behavior of $\kappa_s^{\fraka}$ on $B_{\geq 1}^+ := \{ H \in \fraka^+ \mid \Vert H \Vert \geq 1 \}$, i.e. away from the origin.
By \cite[Theorem 4.2.2]{AnkerJi} combined with \cite[Proposition 2.2.12 or Remark 4.2.3.(i)]{AnkerJi} for $H \in B_{\geq 1}^+$,
\[
\begin{split}
\vert \kappa_s^{\fraka}( H ) \vert \preceq & \Vert H \Vert^{s - \frac{ l + 1}{2} - \vert \Sigma^{++} \vert} e^{- \Vert \rho \Vert \Vert H \Vert - \langle \rho, H \rangle } \prod_{\alpha \in \Sigma^{++}} \left( 1 + \langle \alpha, H \rangle \right). \\
\end{split}
\]
Recall the asymptotic behavior of the Haar measure $\delta(H) dH$ from \eqref{Eqn=CartanIntegral2}.
It follows that there exists a polynomial $P$ in $H$ such that
\begin{equation}\label{Eqn=AwayFromZero}
\int_{B_{\geq 1}^+} \vert \kappa_s^{\fraka}( H ) \vert^q \delta(H) d H \leq
\int_{B_{\geq 1}^+}
e^{-q ( \Vert \rho \Vert \Vert H \Vert + \langle \rho, H \rangle ) + 2 \langle \rho, H \rangle } P( \Vert H \Vert )
dH.
\end{equation}
As long as $q \leq 2$ we have for the exponent of the exponential function in the latter expression that
\[
-q ( \Vert \rho \Vert \Vert H \Vert + \langle \rho, H \rangle ) + 2 \langle \rho, H \rangle
\leq (2 - 2q) \Vert \rho \Vert \Vert H \Vert.
\]
And therefore as long as $1 < q \leq 2$ we have that the integral \eqref{Eqn=AwayFromZero} is finite.
Next we consider the behavior of $\kappa_s^{\fraka}$ on the region $B_{\leq 1}^+ := \{ H \in \fraka^+ \mid \Vert H \Vert \leq 1 \}$ in order to show that $\kappa_s^\fraka \in L^{q}( B_{\leq 1}^+ , \delta(H) dH)$. By \cite[Remark 4.2.3.(iii)]{AnkerJi} for $H \in B_{\leq 1}^+$ we have
\[
\vert \kappa_s(\exp(H)) \vert \approx \Vert H \Vert^{2s - \dim(G \slash K)}, \qquad 0 < 2 s < \dim(G \slash K).
\]
It therefore follows, using again the expression of the Haar measure \eqref{Eqn=CartanIntegral}, that $\kappa_s^{\fraka}$ is in $L^q(B_{\leq 1}^+, \delta(H) dH)$ if and only if
\[
q (2s - \dim(G \slash K)) + \dim(N) > - \dim(A).
\]
This is equivalent to $q < \frac{\dim(G \slash K) }{ \dim(G \slash K) - 2s}$.
\end{proof}
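For instance (a simple illustration of the exponent count, not taken from \cite{AnkerJi}): for $G = {\rm SL}(2,\mathbb{R})$ we have $\dim(G \slash K) = 2$ and $\dim(N) = \dim(A) = 1$, so for $s = \frac{1}{2}$ the condition near the origin reads $q(1 - 2) + 1 > -1$, i.e. $q < 2$, which indeed agrees with $q < \frac{\dim(G \slash K)}{\dim(G \slash K) - 2s} = 2$.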
Recall that $\Omega_K^{-s}$ with $s >0$ is a bounded operator on $L^2(K \backslash G \slash K)$. The next corollary concerns its extension to $L^p$-spaces of bi-$K$-invariant functions.
\begin{corollary}\label{Cor=CasimirBound}
For every $0 < 2s < \dim(G \slash K)$ and $\frac{\dim(G \slash K)}{ 2 s } < p < \infty$ with moreover $2 \leq p$, we have
\begin{equation}\label{Eqn=YoungPlus}
\Vert \Omega_K^{-s}: L^{p}(K \backslash G \slash K ) \rightarrow L^{\infty}(K \backslash G \slash K ) \Vert < \infty.
\end{equation}
\end{corollary}
\begin{proof}
By Theorem \ref{Thm=AnkerJi} the operator $\Omega_K^{-s}$ is a convolution operator with kernel $\kappa_s \in L^q(K \backslash G \slash K)$ where $\frac{1}{p} + \frac{1}{q} = 1$.
By Young's inequality for convolutions,
\[
\Vert \kappa_s \ast f \Vert_{ L^{\infty}( G )} \leq
\Vert \kappa_s \Vert_{ L^{q}( G ) } \Vert f \Vert_{ L^{p}( G ) },
\]
for any $f \in L^{p}( G )$. So the corollary follows from Theorem \ref{Thm=AnkerJi}.
\end{proof}
The following corollary is a special case of Theorem \ref{Thm=AnkerJi}.
\begin{corollary}\label{Cor=L2Case}
For $s > \frac{1}{4} \dim(G \slash K)$ we have that $\kappa_s \in L^{2}( K \backslash G \slash K )$.
\end{corollary}
\begin{remark}
Corollary \ref{Cor=L2Case} does not necessarily require the results from \cite{AnkerJi} but may also be derived more directly from the asymptotic behavior of the Harish-Chandra $c$-function as obtained in \cite[Eqn (3.44)]{Duistermaat} by showing that the spherical Fourier transform of $\kappa_s$, given by $\widehat{\kappa_s}(\lambda) = (\Vert \rho \Vert^2 + \Vert \lambda\Vert^2)^{-s}, \lambda \in \fraka$, is in $L^2(\fraka, c^{-2}(\lambda) d\lambda)$.
\end{remark}
\begin{remark}\label{Rmk=CowlingApplies}
As observed in \cite[Proposition 3.1, Eqn. (3.12)]{GangolliActa} the Heat kernel $k_t$ is the fundamental solution of the parabolic differential equation $-\Omega_K u = \frac{\partial}{\partial t} u$ and as such $\Vert k_t \Vert_{ L^1(K \backslash G \slash K) } = 1$ for $t >0$ \cite{Ito}; note that \cite{GangolliActa} assumes $G$ is complex, but this assumption is not relevant here. Therefore by Young's inequality
\[
\Vert k_t \ast f \Vert_{L^p(G)} \leq \Vert f \Vert_{L^p(G)} \qquad f \in L^p(G), 1 \leq p \leq \infty, t > 0.
\]
We already noted that $\Omega_K$ is a positive (unbounded) operator on $L^2(K \backslash G/K)$.
In particular we are in the setting of \cite{CowlingAnnals} and we will apply a result from \cite{CowlingAnnals} below.
\end{remark}
\section{$L^p$-estimates for radial multipliers}
In this section we prove Theorem \ref{Thm=Lp} which gives a first estimate for the norm of Fourier multipliers in terms of smoothness and regularity of the symbol with respect to the Casimir operator.
We set
\[
P_K = \int_K \lambda_G(k) d\mu_K(k), \qquad P_K' = \int_K \lambda'_G(k) d\mu_K(k),
\]
as the orthogonal projection of $L^2(G)$ onto $L^2(K \backslash G)$ and $L^2(G \slash K)$, respectively.
We define
\[
L^\infty(\widehat{ K \backslash G \slash K}) := \{ \lambda_G(f) \mid f \in L^1(K \backslash G \slash K) \}'',
\]
which is a von Neumann algebra acting on $L^2(G)$.
As $(G,K)$ forms a Gelfand pair, $L^\infty(\widehat{ K \backslash G \slash K})$ is commutative.
Set further
\[
\mathbb{E}_K(x) = P_K x P_K, \qquad x \in L^\infty(\widehat{G}),
\]
which we view as an operator on $L^2(G)$. Note that for $ f \in L^1(G)$ we have
\[
\mathbb{E}_K( \lambda_G(f) ) = \lambda_G( \widetilde{f}), \quad \textrm{ where } \quad \widetilde{f}(g) = \int_K \int_K f(k_1 g k_2 ) dk_1 dk_2,
\]
and hence we see that $\mathbb{E}_K$ is a normal $\tau_{\widehat{G}}$-preserving conditional expectation of $L^\infty(\widehat{G})$ onto $L^\infty(\widehat{ K \backslash G \slash K})$. We denote by $L^p(\widehat{ K \backslash G \slash K})$ the $L^p$-space constructed from $L^\infty(\widehat{ K \backslash G \slash K})$ with trace $\tau_{\widehat{G}}$. Then $L^p(\widehat{ K \backslash G \slash K})$ is a closed subspace of $L^p(\widehat{G})$. All von Neumann algebras we defined above act on $L^2(G)$ whereas the Casimir operator acts on $L^2(K \backslash G \slash K)$. Through the following lemma we show that it naturally acts on $L^2(G)$ as well.
\begin{lemma}\label{Lem=PolarVNA}
The map $\pi: \lambda_G(f) \mapsto \lambda_G(f) P_K'$ extends to an isomorphism of von Neumann algebras $M_1 \rightarrow M_2$ where
\[
M_1 := L^\infty(\widehat{ K \backslash G \slash K}), \qquad \textrm{ and } \qquad M_2 = M_1 P_K'.
\]
\end{lemma}
\begin{proof}
As $P_K'$ commutes with $M_1$ we have that $\pi$ is a normal $\ast$-homomorphism and it remains to show that it is injective. Set $A^+ = \exp(\fraka^+)$ and consider
\[
\varphi: K \times A^+ \times K \rightarrow G: (k_1, a, k_2) \mapsto k_1 a k_2.
\]
The complement of the range of $\varphi$ in $G$ has measure 0 \cite[Section xix]{JorgensonLang}; note that the map is generally not injective as $M$ (the centralizer of $A$ in $K$, see Section \ref{Sect=Prelim}) is usually non-trivial.
Then the pullback map
\[
U := \varphi^\ast: L^2(G) \rightarrow L^2(K \times A^+ \times K) \simeq L^2(K \times A^+ ) \otimes L^2(K )
\]
is isometric by \eqref{Eqn=CartanIntegral} if we equip $K$ with its Haar measure and $A^+$ with the pullback along $\exp: \fraka^+ \rightarrow A^+$ of the measure $\vert K \slash M \vert \delta(H) dH$ on $\fraka^+$. Let $P_U = U U^\ast$ be the range projection of $U$. We have $(1 \otimes \lambda_K'(k)) U = U \lambda_G'(k)$ for all $k \in K$. It follows that
\[
U P_K' = (1 \otimes P_K') U \quad \textrm{ and } \quad (1 \otimes P_K') P_U = P_U (1 \otimes P_K').
\]
Take $x \in M_1$. As $x$ commutes with every $\lambda'_G(k), k \in K$ we find that
\begin{equation}\label{Eqn=Inclusion}
U x U^\ast \in B( L^2(K \times A^+ ) ) \otimes L^\infty(\widehat{K})'.
\end{equation}
If for $x \in M_1$ we have that $x P_K' = 0$ then it follows that
\[
0 = Ux P_K' = U x U^\ast U P_K' = U x U^\ast (1 \otimes P_K') U.
\]
But this means that
\[
0 = U x U^\ast (1 \otimes P_K') P_U = U x U^\ast P_U (1 \otimes P_K') = U x U^\ast (1 \otimes P_K').
\]
So the range of $1 \otimes P_K'$ given by $L^2(K \times A^+ ) \otimes 1$ is in the kernel of $Ux U^\ast$ and hence, by \eqref{Eqn=Inclusion}, we have $Ux U^\ast = 0$. This yields that $x = U^\ast U x U^\ast U = 0$. This shows that $\pi$ is injective.
\end{proof}
It follows in particular that $\pi$ in Lemma \ref{Lem=PolarVNA} restricts to an isomorphism $\pi'$ of $L^\infty(K \backslash G \slash K)$ onto its image. As this image consists of bi-$K$-invariant functions we see that $\pi'(x) = x P_K P_K'$ for $x \in L^\infty(K \backslash G \slash K)$. Thus the restriction map $\pi_K: x \mapsto x\vert_{L^2(K \backslash G \slash K) }$ yields an isomorphism of $L^\infty(\widehat{K \backslash G \slash K})$ onto its image. Now the Casimir operator $\Omega_K$ is affiliated with $\pi_K(L^\infty(\widehat{K \backslash G \slash K}))$; indeed all its finite spectral projections are in this von Neumann algebra, see \cite[Eqn. (4.2.1)]{AnkerJi}, \cite{HelgasonSpherical}. Therefore we may see $\Omega_K$ as an operator affiliated with $L^\infty(\widehat{K \backslash G \slash K})$ and thus acting on $L^2(G)$ as well. We note that formally $\Omega_K$ is equal to minus the Casimir operator {\it restricted} to the bi-$K$-invariant functions.
The following is now a consequence of Corollary \ref{Cor=L2Case}, the spectral theorem for the Casimir operator \cite[Eqn. (4.2.1)]{AnkerJi} and the Plancherel identity/unitarity of the spherical Fourier transform.
\begin{corollary}\label{Cor=L2CaseCasimir}
For $s > \frac{1}{4} \dim(G \slash K)$ we have that $\Omega_K^{-s} \in L^{2}( \widehat{ K \backslash G \slash K } )$.
\end{corollary}
Now for $s>0$ consider the following completely positive map,
\begin{equation}\label{Eqn=TMap}
T_{s} = ( \Omega_K^{-s} \otimes 1) ( \mathbb{E}_K \otimes \id) \circ \Delta_{\widehat{G}}: L^\infty(\widehat{G}) \rightarrow L^\infty(\widehat{K \backslash G \slash K}) \otimes L^\infty(\widehat{G}).
\end{equation}
We will need vector valued $L^p$-spaces in case the measure space is the underlying measure space of the commutative von Neumann algebra $L^\infty(\widehat{K \backslash G \slash K})$. The following definition is rather explicit for $p =1$ and $p=2$ and all our other results will follow from complex interpolation between these cases. It agrees with the one in Section \ref{Sect=VecVal} if one identifies $L^\infty(\widehat{K \backslash G \slash K})$ with $L^\infty(X, \mu_X)$ for suitable $X$; in fact one may take $(X, \mu_X) = (\fraka, c(\lambda)^{-2} d\lambda)$ where $c$ is the Harish-Chandra $c$-function.
\begin{definition}
We define the $L^p(\widehat{G} )$-valued $L^2$-space $L^2( \widehat{K \backslash G \slash K}; L^p(\widehat{G} ))$ as follows. For $p=1,2$ it is the completion of
\[
L^\infty(\widehat{K \backslash G \slash K}) \otimes L^\infty(\widehat{G}) \cap L^1(\widehat{K \backslash G \slash K}) \otimes L^1(\widehat{G})
\]
with respect to the respective norms:
\begin{equation}\label{Eqn=MixedNorm}
\begin{split}
\Vert y \Vert_{ L^2( \widehat{K \backslash G \slash K}; L^1(\widehat{G} )) } = &
\tau_{\widehat{G}} ( \vert (\id \otimes \tau_{\widehat{G}})(\vert y\vert) \vert^2)^{\frac{1}{2}}. \\
\Vert y \Vert_{ L^2( \widehat{K \backslash G \slash K}; L^2(\widehat{G} )) } = &
(\tau_{\widehat{G}} \otimes \tau_{\widehat{G}} ) (y^\ast y)^{\frac{1}{2}} = \Vert y \Vert_{ L^2( \widehat{K \backslash G \slash K}) \otimes L^2(\widehat{G}) }.
\end{split}
\end{equation}
Further, through complex interpolation we may isometrically identify
\[
L^2( \widehat{K \backslash G \slash K}; L^p(\widehat{G} )) \simeq [L^2( \widehat{K \backslash G \slash K}; L^1(\widehat{G} )), L^2( \widehat{K \backslash G \slash K}; L^2(\widehat{G} ))]_{\theta},
\]
where $\theta$ is such that $\frac{1}{p} = \frac{1-\theta}{2} + \frac{\theta}{1}$.
\end{definition}
Typically we want $S_G$ in the next lemma to be very close to $\frac{1}{4} \dim(G \slash K)$ to get sharp estimates.
\begin{lemma}\label{Lem=TSEstimate}
Let $S_G > \frac{1}{4} \dim(G \slash K)$. Then, for $1 \leq p \leq 2$,
\begin{equation}\label{Eqn=TS}
\begin{split}
\Vert T_{S_G}: L^p(\widehat{G}) \rightarrow L^2( \widehat{K \backslash G \slash K}; L^p(\widehat{G} )) \Vert \leq & 2 \Vert \Omega_K^{-S_G} \Vert_{ L^2(\widehat{K \backslash G \slash K}) } < \infty.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
That the right hand side of \eqref{Eqn=TS} is finite follows from Corollary \ref{Cor=L2CaseCasimir}.
We prove this for $p=1$ and $p=2$ so that the lemma follows from complex interpolation.
For $p=2$ we find for $x \in L^2(\widehat{G}) \cap L^\infty(\widehat{G})$ by the Kadison-Schwarz inequality,
\[
\begin{split}
\Vert T_{S_G}(x) \Vert_{ L^2( \widehat{K \backslash G \slash K}; L^2(\widehat{G} )) } ^2 = & (\tau_{\widehat{G}} \otimes \tau_{\widehat{G}})(T_{S_G}(x)^\ast T_{S_G}(x)) \\
= & (\tau_{\widehat{G}} \otimes \tau_{\widehat{G}}) \left( (\Omega_K^{-S_G} \otimes 1) (\mathbb{E}_K \otimes \id) (\Delta_{\widehat{G}}(x))^\ast (\mathbb{E}_K \otimes \id) (\Delta_{\widehat{G}}(x)) (\Omega_K^{-S_G} \otimes 1) \right) \\
\leq & (\tau_{\widehat{G}} \otimes \tau_{\widehat{G}}) \left( (\Omega_K^{-S_G} \otimes 1) (\mathbb{E}_K \otimes \id) (\Delta_{\widehat{G}}(x^\ast x)) (\Omega_K^{-S_G} \otimes 1) \right).
\end{split}
\]
Then by left invariance of the Plancherel trace $\tau_{\widehat{G}}$ on $L^\infty(\widehat{G})$ (see \cite[Definition 1.1]{KustermansVaes}),
\[
\Vert T_{S_G}(x) \Vert_{ L^2( \widehat{K \backslash G \slash K}; L^2(\widehat{G} )) } ^2 \leq \tau_{\widehat{G}}( \Omega_K^{-2S_G} \mathbb{E}_K (1) ) \tau_{\widehat{G}}(x^\ast x) = \tau_{\widehat{G}}( \Omega_K^{-2S_G } ) \tau_{\widehat{G}}(x^\ast x) =
\tau_{\widehat{G}}( \Omega_K^{-2S_G} ) \Vert x \Vert_{L^2(\widehat{G} )}^2.
\]
This proves the $L^2$-estimate.
For the $p=1$ estimate, take $x \in L^1(\widehat{G})$ positive so that $T_{S_G}(x)$ is positive.
By \eqref{Eqn=MixedNorm} and using left-invariance of $\tau_{\widehat{G}}$ twice,
\begin{equation}\label{Eqn=L12Est}
\begin{split}
\Vert T_{S_G}(x) \Vert_{ L^2( \widehat{K \backslash G \slash K}; L^1(\widehat{G} )) }^2 = &
\tau_{\widehat{G}}\left( \Omega_K^{-S_G} \vert (\mathbb{E}_K \otimes \tau_{\widehat{G}}) \Delta_{\widehat{G}}(x) \vert^2 \Omega_K^{-S_G} \right)\\
\leq &
\tau_{\widehat{G}}\left( \Omega_K^{-S_G}
\mathbb{E}_K\left(
(\id \otimes \tau_{\widehat{G}} ) ( \Delta_{\widehat{G}}(x^\ast) )
(\id \otimes \tau_{\widehat{G}} ) ( \Delta_{\widehat{G}}(x) )
\right) \Omega_K^{-S_G} \right) \\
= & \tau_{\widehat{G}}\left( \Omega_K^{-S_G} (\mathbb{E}_K \otimes \tau_{\widehat{G}} )( \Delta(x^\ast) ) \Omega_K^{-S_G} \right)
\tau_{\widehat{G}}(x) \\
= & \tau_{\widehat{G}}( \Omega_K^{-2S_G})
\vert \tau_{\widehat{G}}(x)\vert^2 = \tau_{\widehat{G}}( \Omega_K^{-2S_G}) \Vert x \Vert_1^2.
\end{split}
\end{equation}
For general (non-positive) self-adjoint $x \in L^1(\widehat{G})$ we use the fact that it can be written as $x = x_1 - x_2$ with $x_i$ positive and $\Vert x_1 \Vert_1 + \Vert x_2 \Vert_1 = \Vert x \Vert_1$ to conclude the same estimate \eqref{Eqn=L12Est}. For general $x \in L^1(\widehat{G})$ we use the fact that it can be written as $x = x_1 + ix_2$ with $x_i$ self-adjoint and $\Vert x_i \Vert \leq \Vert x \Vert$ and conclude
\[
\Vert T_{S_G}(x) \Vert_{ L^2( \widehat{K \backslash G \slash K}; L^1(\widehat{G} )) } \leq 2 \tau_{\widehat{G}}( \Omega_K^{-2S_G})^{\frac{1}{2}} \Vert x \Vert_1.
\]
This concludes the proof.
\end{proof}
\begin{lemma} \label{Lem=Differential}
Let $s > 0$. For $m \in L^1(K \backslash G \slash K) \cap L^2(K \backslash G \slash K)$ in the domain of $\Omega_K^{s}$ such that $\Omega_K^{s}(m) \in L^1(K \backslash G \slash K)$ we have
\[
\Omega_K^{s} \lambda_G(m) = \lambda_G( \Omega_K^s (m)),
\]
and in particular for every $\xi \in L^2(G)$ we have that $\lambda_G(m) \xi$ is in the domain of $\Omega_K^{s}$.
\end{lemma}
\begin{proof}
Take $f \in C_c(G)$. As $\Omega_K^{s}$ is affiliated with $L^\infty(\widehat{K \backslash G \slash K})$ it commutes with right convolutions and so we have
\[
\Omega_K^{s} \lambda_G(m) f = \Omega_K^{s} (m \ast f) = \Omega_K^{s}(m) \ast f = \lambda_G( \Omega_K^s (m) ) f.
\]
Now $\Omega_K^{s} \lambda_G(m)$ is a closed operator; this follows from the general fact that if $d$ is closed and $x$ is bounded then $dx$ with domain $\{ \xi \mid x \xi \in {\rm Dom}(d)\}$ is closed. The assumption that $\Omega_K^s (m) \in L^1(K \backslash G \slash K)$ assures that
$\lambda_G( \Omega_K^s (m))$ is bounded. Further, as we have shown that $\lambda_G( \Omega_K^s (m))$ equals $\Omega_K^{s} \lambda_G(m)$ on the domain $C_c( G)$ it follows that $\Omega_K^{s} \lambda_G(m) = \lambda_G( \Omega_K^s (m))$ as operators on $L^2( G)$.
\end{proof}
\begin{theorem}\label{Thm=Lp}
Let $1 \leq p \leq 2$. Let $m \in L^2(K \backslash G \slash K) \cap L^\infty(K \backslash G \slash K)$ with $m \in {\rm Dom}(\Omega_K^{S_G})$. Then, for $S_G > \frac{1}{4} \dim(G \slash K)$,
\[
\Vert T_m: L^p(\widehat{G}) \rightarrow L^p(\widehat{G}) \Vert \leq 2 \Vert \Omega_K^{S_G}( m) \Vert_{L^2(G)} \Vert \Omega_K^{-S_G} \Vert_{L^2(\widehat{K \backslash G \slash K})}.
\]
\end{theorem}
\begin{proof}
Let $(U_i)_i$ be shrinking bi-$K$-invariant neighbourhoods of the identity of $G$ such that $\cap_i U_i = K$ and $U_i^{-1} = U_i$.
Let $m_i = m \ast I_i$ where $I_i = \vert U_i \vert^{-1} 1_{U_i}$ is an $L^1(G)$-normalisation of the indicator function on $U_i$. As
\begin{equation}\label{Eqn=FourierAlgebra}
\Omega_K^{S_G}(m_i) =
\Omega_K^{S_G}(m \ast I_i) = \Omega_K^{S_G}(m) \ast I_i,
\end{equation}
it follows that $m_i$ satisfies the same assumptions as made on $m$ in the statement of the theorem. Suppose that we have proved the theorem for $m_i$; then by taking limits in $i$ it also follows for $m$. Now $m_i$ has the additional property that it is contained in the Fourier algebra $A(G)$ of $G$ (see Eymard \cite{Eymard}), meaning that
\begin{equation}\label{Eqn=VarphiM}
\varphi_{m_i}: \lambda_G(f) \mapsto \int_G m_i(g) f(g) d\mu_G(g),
\end{equation}
extends to a normal bounded functional on $L^\infty(\widehat{G})$. The equation \eqref{Eqn=FourierAlgebra} similarly shows that $\Omega_K^{S_G}(m_i) \in A(G)$ and hence $\varphi_{\Omega_K^{S_G}(m_i)}$ may also be defined as a normal bounded functional on $L^\infty(\widehat{G})$ by replacing $m_i$ by $\Omega_K^{S_G}(m_i)$ in \eqref{Eqn=VarphiM}.
We first derive a number of properties for our setup that shall be used in the core of our proof.
\vspace{0.3cm}
\noindent (1) Note that as $m_i$ is bi-$K$-invariant, for $f \in L^1(G)$ we have, using the left and right invariance of the Haar measure,
\[
\begin{split}
\varphi_{m_i}(\mathbb{E}_K( \lambda_G(f)) ) = & \int_G m_i(g) f(k_1 g k_2) d\mu_G(g) d\mu_K(k_1) d \mu_K(k_2)\\ = &
\int_G m_i( g ) f(g) d\mu_G(g)
=
\varphi_{m_i}( \lambda_G(f) ).
\end{split}
\]
So $\varphi_{m_i} \circ \mathbb{E}_K = \varphi_{m_i}$.
\noindent (2) We have by Lemma \ref{Lem=Differential} for $f \in L^1(K \backslash G \slash K) \cap L^2(K \backslash G \slash K)$ in the domain of $\Omega_K^{s}$ such that $\Omega_K^{s}(f) \in L^1(K \backslash G \slash K)$ that
\[
\begin{split}
\varphi_{m_i}( \Omega_K^s \lambda_G(f) ) = &
\varphi_{m_i}( \lambda_G( \Omega_K^s f) ) =
\int_G m_i( g ) (\Omega_K^s f)(g) d\mu_G(g) \\
= &
\int_G (\Omega_K^s m_i)( g ) f(g) d\mu_G(g) =
\varphi_{\Omega_K^s (m_i)} ( \lambda_G(f) ).
\end{split}
\]
In particular, $\lambda_G(f) \mapsto \varphi_{m_i}( \Omega_K^s \lambda_G(f) )$ extends to a normal bounded functional on $L^\infty(\widehat{K \backslash G \slash K})$.
\noindent (3) As $m_i \in L^2(K \backslash G \slash K)$ it follows directly from \eqref{Eqn=FourierAlgebra} that
\begin{equation}\label{Eqn=L2VecEst}
\Vert \varphi_{ \Omega_K^s (m_i) } \Vert_{L^2( \widehat{K \backslash G \slash K} )^\ast} \leq \Vert \Omega_K^s (m_i) \Vert_{ L^2(G) }.
\end{equation}
\vspace{0.3cm}
We now come to the main part of the proof. Take $f \in C_c(G)^{\ast 2}$. Then,
\begin{equation}\label{Eqn=TmComp}
\begin{split}
T_{m_i}(\lambda_G(f)) = & \int_G m_i(g) f(g) \lambda_G(g) d\mu_G(g)
= (\varphi_{m_i} \otimes \id) \left( \int_G f(g) \lambda_G(g) \otimes \lambda_G(g) d\mu_G(g) \right) \\
= & ( \varphi_{m_i} \otimes \id) \circ \Delta_{\widehat{G}}(\lambda_G(f))
= ( \varphi_{m_i} \circ \mathbb{E}_K \otimes \id) \circ \Delta_{\widehat{G}}(\lambda_G(f)).
\end{split}
\end{equation}
Note that $(\Omega_K^{S_G} \otimes 1) (\Omega_K^{-S_G} \otimes 1)$ equals the unit of $L^\infty(\widehat{K \backslash G \slash K})$, in particular with equality of domains.
Therefore we get, and this is the most crucial equality in this paper,
\begin{equation}\label{Eqn=TmComp2}
\begin{split}
T_{m_i} = & ( \varphi_{m_i} \otimes \id) \left( (\Omega_K^{S_G} \otimes 1) (\Omega_K^{-S_G} \otimes 1) (\mathbb{E}_K \otimes \id) \Delta_{\widehat{G}} \right) \\
= & ( \varphi_{\Omega_K^{S_G} (m_i)} \otimes \id) \left( (\Omega_K^{-S_G} \otimes 1) (\mathbb{E}_K \otimes \id) \Delta_{\widehat{G}} \right)
= ( \varphi_{\Omega_K^{S_G} (m_i)} \otimes \id) \circ T_{S_G}.
\end{split}
\end{equation}
It follows by this equation, Remark \ref{Rmk=VectorFunctional}, \eqref{Eqn=L2VecEst} and Lemma \ref{Lem=TSEstimate} that
\[
\begin{split}
\Vert T_{m_i}: L^p(\widehat{G}) \rightarrow L^p(\widehat{G}) \Vert \leq & \Vert \varphi_{ \Omega_K^{S_G} (m_i) } \Vert_{L^2(\widehat{K \backslash G \slash K})^\ast }
\Vert T_{S_G}: L^p(\widehat{G}) \rightarrow L^2(\widehat{K \backslash G \slash K} ; L^p(\widehat{G}) ) \Vert \\
\leq & 2 \Vert \Omega_K^{S_G} (m_i) \Vert_{L^2(G)} \Vert \Omega_K^{-S_G} \Vert_{ L^2(\widehat{K \backslash G \slash K}) }.
\end{split}
\]
The theorem now follows by taking limits in $i$ as justified in the beginning of the proof.
\end{proof}
\section{Interpolation between $L^p$ and $L^2$ and conclusion of the main theorem}
The result in this section should be seen as a complex interpolation result between the estimate from Theorem \ref{Thm=Lp} and the bound obtained in Corollary \ref{Cor=CasimirBound} that followed from the analysis by Anker and Ji \cite{AnkerJi}. Similar results can be found in the literature (see \cite{Triebel}, \cite{GrafakosIMRN}) but we have not found a theorem that was directly applicable and therefore we provide a self-contained proof.
We use the following variation of the three lines lemma which can be found in \cite{GrafakosClassical} or \cite{Hirschman}. We define the usual strip
\[
\mathcal{S} = \{ z \in \mathbb{C} \mid 0 \leq \Re(z) \leq 1 \}.
\]
\begin{lemma}\label{Lem=ThreeLines}
Let $F: \mathcal{S} \rightarrow \mathbb{C}$ be continuous and analytic on the interior of $\mathcal{S}$. Assume that for every $0 \leq \beta \leq 1$ there exists a function $A_\beta: \mathbb{R} \rightarrow \mathbb{R}_{> 0}$ and scalars $A >0$ and $0<a<\pi$ such that for all $t \in \mathbb{R}$ we have
\[
\vert F(\beta + it) \vert \leq A_\beta(t) \leq e^{ A e^{a \vert t \vert} }.
\]
Then for $0 < \beta < 1$ we have $\vert F(\beta) \vert \leq e^{D_\beta}$ where
\[
D_\beta = \frac{\sin(\pi \beta)}{2} \int_{-\infty}^\infty \frac{\log(\vert A_0(t) \vert) }{ \cosh(\pi t) - \cos(\pi \beta) } + \frac{\log \vert A_1(t) \vert}{ \cosh(\pi t) + \cos(\pi \beta) } \, dt.
\]
\end{lemma}
To apply this lemma it is crucial to realize that
\[
\frac{\sin(\pi \beta)}{2} \int_{-\infty}^\infty \frac{ dt }{ \cosh(\pi t) - \cos(\pi \beta) } = 1 - \beta, \qquad
\frac{\sin(\pi \beta)}{2} \int_{-\infty}^\infty \frac{ dt }{ \cosh(\pi t) + \cos(\pi \beta) } = \beta.
\]
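For the reader's convenience we recall how these values arise: using the elementary integral $\int_{-\infty}^\infty \frac{du}{\cosh(u) + \cos(a)} = \frac{2a}{\sin(a)}$ for $0 < a < \pi$, the substitution $u = \pi t$ with $a = \pi(1-\beta)$, respectively $a = \pi\beta$, gives
\[
\int_{-\infty}^\infty \frac{ dt }{ \cosh(\pi t) - \cos(\pi \beta) } = \frac{2(1-\beta)}{\sin(\pi\beta)}, \qquad
\int_{-\infty}^\infty \frac{ dt }{ \cosh(\pi t) + \cos(\pi \beta) } = \frac{2\beta}{\sin(\pi\beta)}.
\]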
The following is now the main theorem of this paper.
\begin{theorem}\label{Thm=Main}
Let $S_G > \frac{1}{4} \dim(G/K)$.
Let $p \in (1,\infty)$. Let $s \in (0, S_G]$ be such that
\[
\left| \frac{1}{p} - \frac{1}{2} \right| < \frac{s}{2 S_G}.
\]
Then, there exists a constant $C_{G, s,p} >0 $ only depending on the group $G$ and the exponents $s$ and $p$, such that for every $m \in L^2(K \backslash G \slash K) \cap L^\infty(K \backslash G \slash K)$ with $m \in {\rm Dom}(\Omega_K^s)$ and $\Omega_K^s(m) \in L^{2S_G/s}(G)$ we have,
\begin{equation}\label{Eqn=MainTheorem}
\Vert T_m: L^p(\widehat{G}) \rightarrow L^p(\widehat{G}) \Vert \leq C_{G,s,p} \Vert \Omega_K^s(m) \Vert_{L^{2S_G/s}(G)}.
\end{equation}
\end{theorem}
\begin{proof}
For completeness we mention that for $p=2$ this result follows from Theorem \ref{Thm=Lp} in combination with Corollary \ref{Cor=L2Case}; of course for $p=2$ the estimate \eqref{Eqn=MainTheorem} is very crude, since the norm of $T_m$ on $L^2(\widehat{G})$ equals $\Vert m \Vert_{L^\infty(G)}$.
Assume $p \not = 2$. By duality it suffices to treat the case $p \in (1,2)$.
Take $\alpha \in (0,1)$ such that
\[
\frac{1}{p} - \frac{1}{2} < \frac{\alpha s}{2 S_G}.
\]
Set $p_1 = \frac{2}{\alpha +1}$ and $s_1 = S_G$.
Set $p_0 = 2$. Set
\begin{equation}\label{Eqn=SGEstimate}
\theta := \left(\frac{1}{p} - \frac{1}{2} \right) \frac{2}{\alpha} < \frac{ s}{S_G} \leq 1.
\end{equation}
Hence $\theta \in [0,1]$ and further $\frac{1}{p} = \frac{1-\theta}{2} + \frac{\theta}{p_1}$; in particular $p_1 < p < 2$. In \eqref{Eqn=SGEstimate} we have already noted that $\theta S_G < s \leq S_G$. Therefore we may pick $s_0 \in (0, s)$ such that
\[
s = (1-\theta) s_0 + \theta S_G = (1-\theta) s_0 + \theta s_1.
\]
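For the reader's convenience we verify the relation $\frac{1}{p} = \frac{1-\theta}{2} + \frac{\theta}{p_1}$ claimed above: since $p_1 = \frac{2}{\alpha+1}$,
\[
\frac{1-\theta}{2} + \frac{\theta}{p_1} = \frac{1-\theta}{2} + \frac{(\alpha+1)\theta}{2} = \frac{1}{2} + \frac{\alpha\theta}{2} = \frac{1}{2} + \left( \frac{1}{p} - \frac{1}{2} \right) = \frac{1}{p}.
\]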
The idea of the rest of the proof is to interpolate between $(p_0,s_0)$ and $(p_1, s_1)$ by means of Lemma \ref{Lem=ThreeLines}.
\vspace{0.3cm}
\noindent {\it Step 1: Defining the function $F$.}
Recall that $s$ was fixed in the statement of the theorem and set,
\[
m_s = \Omega_K^s(m) \in L^{2S_G/s}(K \backslash G \slash K).
\]
For $z \in \mathcal{S}$ set
\[
M_z = \Omega_K^{-(1-z) s_0 - z s_1 }( m_s \vert m_s \vert^{\frac{-s + (1-z) s_0 + z s_1 }{s} } );
\]
we need to argue how the application of $\Omega_K^{-(1-z) s_0 - z s_1 }$ is interpreted, and we shall do that in Step 1a where we show that it is a bounded operator from
$L^{q_\beta}$ to $L^\infty$ (notation below) and at the same time we show that $M_z$ is a function in $L^\infty(G)$. At this point we observe already that
\begin{equation}\label{Eqn=Mtheta}
M_\theta = \Omega_K^{- s }( m_s ) = m.
\end{equation}
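Indeed, at $z = \theta$ the exponent of $\Omega_K$ equals $-(1-\theta)s_0 - \theta s_1 = -s$ while the exponent of $\vert m_s \vert$ equals $\frac{-s + (1-\theta)s_0 + \theta s_1}{s} = 0$, by the choice of $s_0$.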
Let $p', p_0'$ and $p_1'$ be the conjugate exponents of respectively $p, p_0$ and $p_1$.
Take $f_1 \in C_c(G)^{\ast 2}$ and set $f = f_1^\ast \ast f_1$. Similarly, take $g_1 \in C_c(G)$ and set $g = g_1^\ast \ast g_1 \ast \ldots \ast g_1^\ast \ast g_1$ with $k\in \mathbb{N}$ occurrences of $g_1^\ast \ast g_1$ where $k \geq \frac{p'}{p_1'}$. Set $a = \lambda_G(f), b = \lambda_G(g)$, which are positive and
contained in $L^\infty(\widehat{G}) \cap L^1(\widehat{G})$. Our assumptions moreover imply that $a^z \in L^\infty(\widehat{G}) \cap L^2(\widehat{G})$ as long as $\Re(z) \geq \frac{1}{2}$. Further, $b^z \in L^\infty(\widehat{G}) \cap L^2(\widehat{G})$ as long as $\Re(z) \geq \frac{1}{2k}$. So surely all complex powers of $a$ and $b$ in the expression \eqref{Eqn=Fz} below are contained in $L^\infty(\widehat{G}) \cap L^2(\widehat{G})$. Further, the application of $T_{M_z}$ in \eqref{Eqn=Fz} is justified as it is a bounded map on $L^2(\widehat{G})$.
So we define,
\begin{equation}\label{Eqn=Fz}
F(z) = \tau_{\widehat{G}}( T_{M_z}( a^{(1-z) \frac{p}{p_0} + z \frac{p}{p_1} } ) b^{ (1-z) \frac{p'}{p_0'} + z \frac{p'}{p_1'} } ).
\end{equation}
Then $F$ is continuous on $\mathcal{S}$ and analytic on the interior of $\mathcal{S}$. We now require 3 estimates on $F$.
\vspace{0.3cm}
\noindent {\it Step 1a: Estimating $F$ on the strip $\mathcal{S}$.}
For any $z \in \mathcal{S}$ we have,
\[
\vert F(z) \vert \leq \Vert M_z \Vert_{L^\infty(G)} \Vert a^{(1-z) \frac{p}{p_0} + z \frac{p}{p_1} } \Vert_{L^2(\widehat{G})} \Vert b^{ (1-z) \frac{p'}{p_0'} + z \frac{p'}{p_1'} } \Vert_{L^2(\widehat{G})}.
\]
Here the terms
\[
\Vert a^{(1-z) \frac{p}{p_0} + z \frac{p}{p_1} } \Vert_{L^2(\widehat{G})}, \quad \textrm{ and } \quad \Vert b^{ (1-z) \frac{p'}{p_0'} + z \frac{p'}{p_1'} } \Vert_{L^2(\widehat{G})},
\]
are uniformly bounded in $z \in \mathcal{S}$. Now write $z = \beta + it, \beta \in [0,1], t \in \mathbb{R}$. Set $s_\beta = (1-\beta)s_0 + \beta s_1$ and then $q_\beta = 2S_G/s_\beta$. Set $q = 2S_G/s$. So $s = s_\theta$ and $q = q_\theta$.
We estimate,
\begin{equation}\label{Eqn=StripEstimate}
\begin{split}
\Vert M_z \Vert_{L^\infty(G)} = & \Vert \Omega_K^{-s_\beta +it (s_0 - s_1)}( \Omega_K^s(m) \vert\Omega_K^s(m)\vert^{\frac{-s + (1-z) s_0 + z s_1 }{s} } ) \Vert_{L^\infty(G)} \\
\leq & \Vert \Omega_K^{- s_\beta}: L^{q_\beta}(K \backslash G \slash K) \rightarrow L^\infty(K \backslash G \slash K) \Vert \\
& \: \times \: \Vert \Omega_K^{ it (s_0 - s_1) }: L^{q_\beta}(K \backslash G \slash K) \rightarrow L^{q_\beta}(K \backslash G \slash K) \Vert \Vert \vert\Omega_K^s(m)\vert^{ \frac{s_\beta}{s } } \Vert_{ L^{q_\beta}(K \backslash G \slash K) }.
\end{split}
\end{equation}
By Corollary \ref{Cor=CasimirBound} and using that $q_\beta = 2S_G/s_\beta > \dim(G \slash K)/(2 s_\beta)$, we have
\begin{equation}\label{Eqn=ConvEstimate}
C_{\beta} := \Vert \Omega_K^{- s_\beta}: L^{q_\beta}(K \backslash G \slash K) \rightarrow L^\infty(K \backslash G \slash K) \Vert < \infty.
\end{equation}
By Remark \ref{Rmk=CowlingApplies} the heat semigroup is a contractive semigroup with positive generator and hence falls within the setting of \cite{CowlingAnnals}. By \cite[Corollary 1]{CowlingAnnals} there exists a constant $C_\beta' > 0$ depending only on $\beta$ such that
\begin{equation}\label{Eqn=CowlingEstimate}
\Vert \Omega_K^{ it (s_0 - s_1) }: L^{q_\beta}(K \backslash G \slash K) \rightarrow L^{q_\beta}(K \backslash G \slash K) \Vert
\leq C_\beta' (1 + \vert t \vert^3 \log^2(\vert t \vert) )^{\vert \frac{1}{q_\beta} - \frac{1}{2} \vert}.
\end{equation}
Finally note that
\begin{equation}\label{Eqn=SwitchingSpace}
\Vert \vert\Omega_K^s(m)\vert^{ \frac{s_\beta}{s } } \Vert_{ L^{q_\beta}(K \backslash G \slash K) } =
\Vert \Omega_K^s(m) \Vert_{ L^{q}(K \backslash G \slash K) }^{\frac{q}{q_\beta}}.
\end{equation}
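Equality \eqref{Eqn=SwitchingSpace} is simply the observation that $\frac{s_\beta}{s} q_\beta = \frac{s_\beta}{s} \cdot \frac{2S_G}{s_\beta} = \frac{2S_G}{s} = q$, so that raising both sides to the power $q_\beta$ yields $\int_G \vert \Omega_K^s(m) \vert^{q} \, d\mu_G$ in each case.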
Combining \eqref{Eqn=StripEstimate} with \eqref{Eqn=ConvEstimate}, \eqref{Eqn=CowlingEstimate} and \eqref{Eqn=SwitchingSpace} yields
\begin{equation}\label{Eqn=MasterEstimate}
\begin{split}
\vert F(z) \vert \leq & C_\beta C_\beta' (1 + \vert t \vert^3 \log^2(\vert t \vert) )^{\vert \frac{1}{q_\beta} - \frac{1}{2} \vert} \Vert \Omega_K^s(m) \Vert_{ L^{q}(K \backslash G \slash K) }^{\frac{q}{q_\beta}} \\
& \: \times \: \Vert a^{(1-z) \frac{p}{p_0} + z \frac{p}{p_1} } \Vert_{L^2(\widehat{G})} \Vert b^{ (1-z) \frac{p'}{p_0'} + z \frac{p'}{p_1'} } \Vert_{L^2(\widehat{G})}.
\end{split}
\end{equation}
We see that for any $z \in \mathcal{S}$ we have $\vert F(z) \vert \leq e^{A e^{B \vert t \vert}}$ for suitable constants $A >0$ and $0 < B < \pi$.
\vspace{0.3cm}
\noindent {\it Step 1b: Estimating $F$ on $i \mathbb{R}$.}
By \eqref{Eqn=MasterEstimate} and recalling that $p_0=p_0'=2$ we have in particular that
\[
\begin{split}
\vert F(it) \vert \leq &
C_0 C_0' (1 + \vert t \vert^3 \log^2(\vert t \vert) )^{\vert \frac{1}{q_0} - \frac{1}{2} \vert} \Vert \Omega_K^s(m) \Vert_{ L^{q}(K \backslash G \slash K) }^{\frac{q}{q_0}} \Vert a \Vert_{L^p(\widehat{G})}^{\frac{ p }{2}} \Vert b \Vert_{L^{p'}(\widehat{G})}^{\frac{ p' }{2}}.
\end{split}
\]
\vspace{0.3cm}
\noindent {\it Step 1c: Estimating $F$ on $1+ i \mathbb{R}$.} We apply Theorem \ref{Thm=Lp} to the symbol $M_{1 +it}$. This is possible as we have $m_s = \Omega_K^s(m) \in L^{2S_G/s}(K \backslash G \slash K)$ and therefore, recalling that $s_1 = S_G$,
\[
m_s \vert m_s \vert^{\frac{-s - it s_0 + (1 + it) s_1 }{s} } = m_s \vert m_s \vert^{-1 + \frac{ it(s_1 - s_0) + s_1 }{s} } \in L^{2}(K \backslash G \slash K).
\]
So that $M_{1 + it} = \Omega_K^{ it s_0 - (1+it) s_1 } ( m_s \vert m_s \vert^{-1 + \frac{ it(s_1 - s_0) + s_1 }{s} } )$ lies in $L^{2}(K \backslash G \slash K)$ as negative powers of $\Omega_K$ are bounded operators. Further, in Step 1a we already justified that $M_{1+it}$ also lies in $L^{\infty}(K \backslash G \slash K)$. Hence we can apply Theorem \ref{Thm=Lp}. Together with Corollary \ref{Cor=L2CaseCasimir} it gives that there exists a constant $C_G > 0$ such that
\[
\begin{split}
\vert F(1+it) \vert \leq & C_G \Vert \Omega_K^{s_1} ( M_{1+it} ) \Vert_{L^{2}( G)} \Vert a^{ -it \frac{p}{p_0} + (1+it) \frac{p}{p_1} } \Vert_{L^{p_1}(\widehat{G})} \Vert b^{ it \frac{p'}{p_0'} + (1+it) \frac{p'}{p_1'} } \Vert_{L^{p_1'}(\widehat{G})} \\
= & C_G \Vert \Omega_K^{s_1} ( M_{1+it} ) \Vert_{L^{2}(G)} \Vert a \Vert_{L^{p}(\widehat{G})}^{ \frac{p}{p_1} } \Vert b \Vert_{L^{p'}(\widehat{G})}^{ \frac{p'}{p_1'} }.
\end{split}
\]
Further, recalling that $s_1 = S_G$,
\[
\begin{split}
\Vert \Omega_K^{s_1} ( M_{1+it} ) \Vert_{L^{2}( G)} = &
\Vert
\Omega_K^{ it ( s_0 - s_1) }( \Omega_K^s(m) \vert\Omega_K^s(m)\vert^{-1 + \frac{ it(s_1 - s_0) + s_1 }{s}} ) \Vert_{L^{2}(G) } \\
= & \Vert \vert\Omega_K^s(m)\vert^{ \frac{ s_1}{s} } \Vert_{L^{2}( G )}
= \Vert \Omega_K^s(m) \Vert_{L^{\frac{2S_G}{s}}( G )}^{\frac{S_G}{s}} = \Vert \Omega_K^s(m) \Vert_{L^{q }( G )}^{\frac{q}{2}}.
\end{split}
\]
\vspace{0.3cm}
\noindent {\it Step 2: Remainder of the proof.} We apply Lemma \ref{Lem=ThreeLines}. The assumptions are met by Steps 1a, 1b and 1c. Further we find that
\[
\begin{split}
\frac{2}{\sin(\pi \beta)} D_\beta \leq & \int_{-\infty}^\infty \frac{F_0(t) +
\frac{q}{q_0} \log(\Vert \Omega_K^s(m) \Vert_{ L^{q}( G ) }) + \frac{p}{2} \log( \Vert a \Vert_{L^p(\widehat{G})}) + \frac{p'}{2} \log( \Vert b \Vert_{L^{p'}(\widehat{G})} )
}{ \cosh(\pi t) - \cos(\pi \beta) } dt \\
& +
\int_{-\infty}^\infty \frac{
\frac{q}{2} \log( \Vert \Omega_K^s(m) \Vert_{L^{q }(G)} ) + \frac{p}{p_1} \log(\Vert a \Vert_{L^{p}(\widehat{G})}) + \frac{p'}{p'_1} \log( \Vert b \Vert_{L^{p'}(\widehat{G} )} )
}{ \cosh(\pi t) + \cos(\pi \beta) } dt
\end{split}
\]
where
\[
F_0(t) = \log(C_0 C_0') + \left| \frac{1}{q_0} - \frac{1}{2} \right| \log( 1 + \vert t \vert^3 \log^2(\vert t \vert) ).
\]
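Note that the integrals defining $D_\beta$ are indeed finite: the numerators grow at most logarithmically in $\vert t \vert$ while $\cosh(\pi t)$ grows exponentially, so both integrands are absolutely integrable and Lemma \ref{Lem=ThreeLines} applies.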
Recall from \eqref{Eqn=Mtheta} that $M_\theta = m$. It follows by the remarks after Lemma \ref{Lem=ThreeLines} that for some constant $C_{G, p, p_0,p_1} >0$ we have
\[
\vert \tau_{\widehat{G}}( T_{m}( a ) b ) \vert =
\vert F(\theta) \vert \leq \exp(D_\theta) \leq C_{G, p, p_0,p_1} \Vert \Omega_K^s(m) \Vert_{ L^{q}( G ) } \Vert a \Vert_{L^p(\widehat{G})}
\Vert b \Vert_{L^{p'}(\widehat{G})}.
\]
Since this holds for all possible $a$ and $b$ as defined in the beginning of the proof, a density argument concludes the proof.
\end{proof}
\section{Examples}\label{Sect=Examples}
The aim of this section is to illustrate that Theorem \ref{Thm=Main} provides new examples of Fourier multipliers on a wide class of Lie groups.
To the knowledge of the author the only examples of $L^p$-multipliers on classes of semi-simple Lie groups come from the two papers \cite{PRS}, \cite{Tablate}.
The method of proof of \cite{PRS} and \cite{Tablate} is very effective for finding bounds on $L^p$-multipliers for symbols $m$ that are supported on some neighbourhood of the identity of $G$. In \cite{PRS} a patching argument \cite[Proof of Theorem A]{PRS} is then used to provide bounds for sums of translates of such multipliers. Due to this patching argument, or a simple and crude triangle inequality, the norms grow with the $L^1$-norm of such a multiplier; hence these multipliers have a local behavior. The method was improved upon in \cite{Tablate}, yielding also multipliers without such an integrability property. Here we show that for $p$ closer to 2 even fewer conditions are needed and we obtain multipliers with an even slower decay rate, see Remark \ref{Rmk=Examples}.
\subsection{Multipliers with slow decay}\label{Sect=Decay}
For $f: \fraka \rightarrow [0,1]$ a $C^\infty$-function that is invariant under the action of the Weyl group we define
\[
(\Psi f)( k_1 \exp(H) k_2 ) = f( H ), \qquad H \in \fraka, k_1, k_2 \in K.
\]
The Weyl group invariance assures that this function is well-defined.
Then $\Psi f$ is a bi-$K$-invariant smooth function on $G = K \exp(\fraka) K$. By \cite[Proposition II.3.9]{HelgasonSpherical} there exists a second order linear differential operator $\mathcal{D}$ acting on $C^\infty( \fraka )$ such that
\[
\Omega_K (\Psi f) = \Psi ( \mathcal{D} f).
\]
We emphasize that $\mathcal{D}$ contains differential operators of lower order as well.
Now set $S_G > \dim(G \slash K)$ and define
\[
f( H ) = e^{-A \Vert H \Vert}, \qquad \textrm{ in case } \Vert H \Vert \geq 1,
\]
where $A > \Vert \rho \Vert S_G^{-1}$. Assume further that $f$ is smooth everywhere.
Then, as $\mathcal{D}$ is a second order differential operator we still have for every $H \in \fraka$ with $\Vert H \Vert \geq 1$ that $\vert (\mathcal{D} f)(H) \vert \leq D e^{-A \Vert H \Vert}$ for some constant $D>0$.
We consider $s =1$. We have,
\[
\Vert \Omega_K (\Psi f) \Vert_{L^{2 S_G}(G) } = \Vert \Psi ( \mathcal{D} f) \Vert_{L^{2 S_G}(G) } \leq
D \vert K \slash M \vert
\left( \int_{\fraka^+} e^{-\Vert H \Vert A 2 S_G} \delta(H) dH \right)^{\frac{1}{2S_G}}
< \infty.
\]
As by \eqref{Eqn=CartanIntegral2} we have $\delta(H) \leq e^{2\Vert H \Vert \Vert \rho \Vert}$, we see that this integral is finite by our choice of $A$.
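Concretely, since $A S_G > \Vert \rho \Vert$,
\[
\int_{\fraka^+} e^{-2 S_G A \Vert H \Vert}\, \delta(H)\, dH \leq \int_{\fraka^+} e^{-2( A S_G - \Vert \rho \Vert) \Vert H \Vert}\, dH < \infty.
\]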
This shows that for every $p \in (1, \infty)$ with $\vert \frac{1}{2} - \frac{1}{p} \vert < \frac{1}{2 S_G}$ we have by Theorem \ref{Thm=Main},
\begin{equation}\label{Eqn=Bound}
\Vert T_{\Psi f}: L^p( \widehat{G} ) \rightarrow L^p( \widehat{G} ) \Vert \leq C_{G, p} \Vert \Omega_K( \Psi f ) \Vert_{L^{2 S_G}(G) } < \infty
\end{equation}
for a constant $C_{G, p}$.
\begin{remark}
Suppose that $G = {\rm SL}(n, \mathbb{R})$ so that $K = {\rm SO}(n, \mathbb{R})$ and $A$ consists of positive diagonal matrices with determinant 1. Then $\fraka$ consists of the diagonal matrices with trace 0. We have that
\[
\dim(G \slash K) = \frac{1}{2} n (n+1) - 1.
\]
And we recall that we typically chose $S_G = \dim(G \slash K) + \epsilon$ for some $\epsilon > 0$ small. The Killing form is given by $\langle X, Y \rangle = 2n {\rm Tr}(XY)$ and $\rho = \sum_{1 \leq i < j \leq n} \alpha_{i,j}$ where $\alpha_{i,j}(H) = H_{i} - H_{j}$ for $H = {\rm Diag}(H_1, \ldots, H_n) \in \fraka$ the diagonal trace 0 matrix with diagonal entries $H_i \in \mathbb{R}$. Then $\alpha_{i,j} \in \fraka^\ast$ is identified with $\frac{1}{2n}(E_{ii} - E_{jj}) \in \fraka$ through the Killing form.
Therefore,
\[
\Vert \rho \Vert^2 = \langle \rho, \rho \rangle = \frac{1}{n} \left( \sum_{k=1}^n (n-k)^2 - \sum_{k=1}^n (n-k) (k-1) \right) \approx
\frac{1}{n} \int_0^n (n-x)(n-2x+1) dx =
\frac{1}{6} n (n+3).
\]
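For the record, the sums can also be evaluated exactly: $\sum_{k=1}^n (n-k)^2 = \frac{(n-1)n(2n-1)}{6}$ and $\sum_{k=1}^n (n-k)(k-1) = \frac{n(n-1)(n-2)}{6}$, so that $\Vert \rho \Vert^2 = \frac{(n-1)(n+1)}{6} = \frac{n^2-1}{6}$, which agrees with the approximation $\frac{1}{6} n (n+3)$ to leading order in $n$.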
Therefore, for $n$ large enough
\[
\frac{\Vert \rho \Vert^2}{\dim(G \slash K)^2} \approx \frac{\frac{1}{6} n (n+3) }{ \frac{1}{4} n^2 (n+1)^2 } \approx \frac{2}{3 n^2}.
\]
Therefore, we find that for $S_G > \frac{1}{2} n (n+1) - 1$ and
\[
A > \Vert \rho \Vert S_G^{-1} \qquad ( \approx \sqrt{ \frac{2}{3} } \frac{1}{n} \textrm{ for } n \textrm{ large and } S_G \searrow \frac{1}{2} n (n+1) - 1),
\]
every smooth $f: \fraka \rightarrow \mathbb{C}$ with
\[
f( H ) = e^{- A \Vert H \Vert}, \qquad \textrm{ in case } \Vert H \Vert \geq 1,
\]
yields a symbol $\Psi f$ of an $L^p$-Fourier multiplier for $\vert \frac{1}{2} - \frac{1}{p} \vert < \frac{1}{2 S_G}$.
\end{remark}
\begin{remark}\label{Rmk=Examples}
Let us argue that the multipliers we have constructed here for $p$ close to 2 are new compared to what is known from the results in \cite{PRS} and \cite{Tablate}.
The remark \cite[Remark 3.8]{PRS} excludes the symbol $m = \Psi f$ we have constructed from the class of multipliers obtained in \cite[Theorem A]{PRS}, as \cite[Remark 3.8]{PRS} implies that those symbols are integrable. Our symbols are not necessarily integrable, as can easily be seen from \eqref{Eqn=CartanIntegral}. The multipliers we construct here are also out of reach of \cite[Theorem A2]{Tablate}. Indeed, assume $G = {\rm SL}(n, \mathbb{R}), n \geq 3$; if $n =2$ then \cite[Theorem A2]{Tablate} is not applicable in the first place, as the discussion following that theorem shows. Then for $H = {\rm Diag}(H_1, \ldots, H_n) \in \fraka$ set $\Vert H \Vert_\infty = \max_{1 \leq i \leq n} \vert H_i \vert$ and $\Vert H \Vert_2 = {\rm Tr}(H^2)^{\frac{1}{2}} = (2n)^{- \frac{1}{2}} \Vert H \Vert$. We have (see \cite{Tablate} for the adjoint representation and its norm),
\[
\Vert {\rm Ad}_{\exp(H)} \Vert \geq \exp( \Vert H \Vert_\infty) \geq \exp( n^{-\frac{1}{2}} \Vert H \Vert_{2} ) \geq \exp( 2^{-1/2} n^{-1} \Vert H \Vert ).
\]
So that \cite[Equation following Theorem A2]{Tablate} yields that
\[
\vert (\Psi f)(\exp( H )) \vert \preceq \Vert {\rm Ad}_{\exp(H)} \Vert^{-d_G} \leq \exp(- 2^{-1/2} d_G n^{-1} \Vert H \Vert ),
\]
where $d_G = \lfloor n^2/4 \rfloor$ (see \cite[Example 3.14]{CJKM}, \cite{Maucourant}).
Since $d_G$ increases as $n$ increases, we see that the decay of the multipliers covered by \cite{Tablate} is faster than in our examples. Of course \cite{Tablate} considers symbols that are multipliers in the full range $p \in (1,\infty)$, whereas our methods yield the sharper estimates only when $p$ is close to 2.
\end{remark}
\begin{comment}
\subsection{Growing balls}
For $f: \mathbb{R} \rightarrow [0,1]$ be a $C^\infty$-function we set $f_R(x) = f(x -R), R >0$.
Now suppose that $f(x) =1$ for $x \leq 0$, $f(x) = 0$ for $x \geq 1$. Let $s =1$, $S_G > \dim(G \backslash K)$ and also adopt the other notation from Section \ref{Sect=Decay}. We find that
\[
\Vert \Omega_K (\Psi f_R) \Vert_{L^{2 S_G}(G) } = \Vert \Psi ( \mathcal{D} f_R) \Vert_{L^{2 S_G}(G) } \leq C_{G} \vert B_R \vert^{\frac{1}{2 S_G}},
\]
for some constant $C_G >0$.
This shows that for every $p \in (1, \infty)$ with $\vert \frac{1}{2} - \frac{1}{p} \vert < \frac{1}{2 S_G}$ we have
\begin{equation}\label{Eqn=Bound}
\Vert T_{f_R}: L^p( \widehat{G} ) \rightarrow L^p( \widehat{G} ) \Vert \leq C_{G, p} \vert B_R \vert^{\frac{1}{2 S_G}},
\end{equation}
for a constant $C_{G, p}$ that is independent of $R$.
\begin{remark}
We compare our estimate to \cite[Theorem A]{PRS}. This theorem is concluded from the last floating equation of \cite[p. 1275]{PRS} which in our situation says that there exists a constant $D_{G, p} >0$ such that for every $R >0$ we have
\[
\Vert T_{m_R}: L^p( \widehat{G} ) \rightarrow L^p( \widehat{G} ) \Vert \leq D_{G, p} \vert B_R \vert.
\]
Since this estimate holds for all $p \in (1, \infty)$ we may try to use complex interpolation between large $L^q$ and $L^2$ to obtain a better bound in terms of $R$.
Assume therefore $p \in (2, \infty)$. Let $q \in (p, \infty)$, typically very close to $\infty$. Set $\theta_q \in (0,1)$ by
\[
\frac{1-\theta_q}{2} + \frac{\theta_q}{q} = \frac{1}{p}.
\]
By complex interpolation
\[
\Vert T_{m_R}: L^p( \widehat{G} ) \rightarrow L^p( \widehat{G} ) \Vert \leq (D_{G, p} \vert B_R \vert)^{\theta_q} \Vert m \Vert_{L^\infty(G)}^{1-\theta_q} = (D_{G, p} \vert B_R \vert)^{\theta_q} .
\]
Note that if $q \rightarrow \infty$ we have that $\theta_q$ converges to
\[
\theta = 2 \left( \frac{1}{2} - \frac{1}{p}\right).
\]
We see that the bound \eqref{Eqn=Bound} has a better dependence on $R>0$ in case $p$ is chosen such that $\vert \frac{1}{2} - \frac{1}{p}\vert$ is close to $\frac{1}{2 S_G}$.
\end{remark}
\end{comment}
|
{
"arxiv_id": "2302.08616",
"language": "en",
"timestamp": "2023-02-21T02:20:19",
"url": "https://arxiv.org/abs/2302.08616",
"yymm": "2302"
} | \section{Introduction}\label{intro}
In this paper we consider existence and regularity for Poiseuille flow of nematic liquid crystals. Liquid crystals have many forms and a particular form is the nematic whose molecules can be viewed as
rod-like/thread-like. Macroscopically, the state of a nematic liquid crystal is characterized by its velocity field ${\bf u}$ for the flow and its director field ${\bf n}\in \mathbb S^2$ for the alignment of the rod-like feature. These two characteristics interact with each other so that any distortion of the director ${\bf n}$ causes
a motion ${\bf u}$ and, likewise, any flow ${\bf u}$ affects the alignment ${\bf n}$. Using the convention that $\dot{f}=f_t+{\bf u}\cdot \nabla f$ denotes the material derivative, the full Ericksen-Leslie model for nematics is given as follows
\begin{equation}\label{wlce}
\begin{cases}
\rho\dot {\bf u}+\nabla P=\nabla\cdot\sigma-\nabla\cdot\left(\frac{\partial W}{\partial\nabla {\bf n}}\otimes\nabla {\bf n}\right),
\\
\nabla\cdot {\bf u}=0,\\
\nu{\ddot {\bf n}}=\lambda{\bf n}-\frac{\partial W}{\partial {\bf n}}-{\bf g}+\nabla\cdot\left(\frac{\partial W}{\partial\nabla {\bf n}}\right), \\
|{\bf n}|=1.\\
\end{cases}
\end{equation}
In (\ref{wlce}), $P$ is the pressure, $\lambda$ is the Lagrangian multiplier of the constraint $|{\bf n}|=1$, $\rho$ is the density, $\nu$ is the inertial coefficient of the director ${\bf n}$, and $W$,
${\bf g}$ and $\sigma$ are the Oseen-Frank energy, the kinematic transport and the viscous stress tensor, respectively (see, e.g., \cites{DeGP,ericksen62, frank58, liucalderer00,leslie68,Les, lin89} for details).
\subsection{Poiseuille flow of nematics} For Poiseuille flows of nematics with a suitable choice of coordinate system, ${\bf u}$ and ${\bf n}$ take the form
\[{\bf u}(x,t)= (0,0, u(x,t))^T \;\mbox{ and }\;{\bf n}(x,t)=(\sin\theta(x,t),0,\cos\theta(x,t))^T,\]
where the motion ${\bf u}$ is along the $z$-axis and the director ${\bf n}$ lies in the $xz$-plane with angle $\theta$ made from the $z$-axis. For this case of Poiseuille flows, taking $\rho=\nu=1$ for simplicity, the Ericksen-Leslie model is reduced to system (\ref{sysf}) and (\ref{sysw}) below, whose derivation is available in the literature (see, e.g., \cite{CHL20}),
\begin{align}\label{sysf}
\displaystyle u_t&=\left(g(\theta)u_x+h(\theta)\theta_t\right)_x,\\\label{sysw}
\theta_{tt}+\gamma_1\theta_t&=c(\theta)\big(c(\theta)\theta_x\big)_x-h(\theta)u_x.
\end{align}
We will be interested in Cauchy problem of the Poiseulle flow
with initial data
\beq\label{initial}
u(x,0)=u_0(x)\in H^1(\mathbb {R}),\; \theta(x,0)=\theta_0(x)\in H^1(\mathbb {R}),
\; \theta_t(x,0)=\theta_1(x)\in L^2(\mathbb {R}).
\eeq
We further impose an assumption
\beq\label{tecinitial}\begin{split}
u'_0(x)+\frac{h(\theta_0(x))}{g(\theta_0(x))}&\theta_1(x)\in H^1(\mathbb R)\cap L^1(\mathbb {R}).
\end{split}
\eeq
The functions $c(\theta)$, $g(\theta)$ and $h(\theta)$ in the model are given by
\begin{align}\label{fgh}\begin{split}
g(\theta):=&\alpha_1\sin^2\theta\cos^2\theta+\frac{\alpha_5-\alpha_2}{2}\sin^2\theta+\frac{\alpha_3+\alpha_6}{2}\cos^2\theta+\frac{\alpha_4}{2},\\
h(\theta):=&\alpha_3\cos^2\theta-\alpha_2\sin^2\theta=\frac{\gamma_1+\gamma_2\cos(2\theta) }{2},\\
c^2(\theta):=&K_1\cos^2\theta+K_3\sin^2\theta,
\end{split}
\end{align}
where, depending on the nematic material and temperature, $K_1>0$ and $K_3>0$ are Frank's coefficients for splay and bending energies, and $\alpha_j$'s $(1 \leq j \leq 6)$ are the Leslie dynamic coefficients. The following relations are assumed in the literature.
\begin{align}\label{a2g}
\gamma_1 =\alpha_3-\alpha_2,\quad \gamma_2 =\alpha_6 -\alpha_5,\quad \alpha_2+ \alpha_3 =\alpha_6-\alpha_5.
\end{align}
The first two relations are compatibility conditions, while the third relation is called Parodi's relation; it is derived from the Onsager reciprocal relations expressing the equality of certain relations between flows and forces in thermodynamic systems out of equilibrium (cf. \cite{Parodi70}).
They also satisfy the following empirical relations (p.13, \cite{Les})
\begin{align}\label{alphas}
&\alpha_4>0,\quad 2\alpha_1+3\alpha_4+2\alpha_5+2\alpha_6>0,\quad \gamma_1=\alpha_3-\alpha_2>0,\\
& 2\alpha_4+\alpha_5+\alpha_6>0,\quad 4\gamma_1(2\alpha_4+\alpha_5+\alpha_6)>(\alpha_2+\alpha_3+\gamma_2)^2\notag.
\end{align}
Note that the fourth relation is implied by the third together with the last relation and the last can be rewritten as
$\gamma_1(2\alpha_4+\alpha_5+\alpha_6)> \gamma_2^2$.
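Indeed, by the Parodi relation in \eqref{a2g} we have $\alpha_2+\alpha_3=\alpha_6-\alpha_5=\gamma_2$, so $(\alpha_2+\alpha_3+\gamma_2)^2=4\gamma_2^2$ and the last inequality in \eqref{alphas} is exactly $4\gamma_1(2\alpha_4+\alpha_5+\alpha_6)>4\gamma_2^2$.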
{Very importantly, relations (\ref{a2g}) and (\ref{alphas}) imply that (see formula (2.4) in \cite{CHL20}), for some constant $ \overline C>0$,
\begin{align}\label{positiveDamping}
g(\theta)\ge g(\theta)-\frac{h^2(\theta)}{\gamma_1}>\overline C.
\end{align}
}
It is known that solutions of system \eqref{sysf} and \eqref{sysw} generically form finite time cusp singularities \cites{CS22,CHL20}.{ The goal of this paper is to establish the global existence of H\"older continuous solutions for the Cauchy problem \eqref{sysf}-\eqref{tecinitial} beyond singularity formation, as stated in Theorem \ref{main} below.}
\subsection{Directly relevant results and our results.}
In \cite{CHL20}, a special case of Poiseuille flow was treated. More precisely, the authors chose the parameters as
\[\rho=\nu=1,\; \alpha_1=\alpha_5=\alpha_6=0,\;\alpha_2=-1,\;\alpha_3=\alpha_4=1,\]
which result in $\gamma_1=2$, $\gamma_2=0$, and $g=h=1$. With this special choice of parameters, system (\ref{sysf}) and (\ref{sysw}) becomes
\begin{align}\label{simeqn0}
\begin{split}
u_t=&(u_x+\theta_t)_x,\\
\theta_{tt}+2\theta_t=&c(\theta)(c(\theta)\theta_{x})_x
-u_x.
\end{split}
\end{align}
In \cite{CHL20}, on one hand, the authors constructed solutions of \eqref{simeqn0} with smooth initial data that produce, in finite time, cusp singularities, that is, blowups of $u_x$ and $\theta_t$. The method directly extends that of \cites{CZ12, GHZ} for variational wave equations. On the other hand, the global existence of weak solutions, which are H\"older continuous, of system \eqref{simeqn0} was established for general initial data similar to (\ref{initial}).
The latter satisfactorily resolved, from the point of view of applications, the physical concern about what happens after the finite time singularity formation.
A crucial ingredient for existence beyond singularity formation is the identification of the quantity
\[J(x,t)=u_x(x,t)+\theta_t(x,t)\]
and the discovery of a singularity cancellation: the quantity $J$ remains bounded and H\"older continuous while its components $u_x$ and $\theta_t$ approach infinity at the formation of cusp singularities.
The change of coordinates framework in \cite{BZ} for the variational wave equations was used to cope with the wave part $\eqref{simeqn0}_2$, and will be used in this paper too for \eqref{sysw}.
The detailed idea will be given in Section \ref{idea}. See other works on the global well-posedness of H\"older continuous solutions for variational wave equations \cites{BC,BC2015,BH,BHY,BCZ, CCD, ZZ03,ZZ10,ZZ11,GHZ}.
In a recent paper \cite{CS22}, the singularity formation for the general system \eqref{sysf} and \eqref{sysw} was established.
As mentioned above, we are concerned with the global existence for the Cauchy problem of the general system \eqref{sysf} and \eqref{sysw}. {It should be pointed out that the generalization is far from straightforward.
One apparent trouble is that the diffusion coefficient $g(\theta(x,t))$ in the parabolic equation \eqref{sysf} is only H\"older continuous, which creates difficulties in handling the quantity $J$ for the singularity (see system (\ref{J_eq}) and the discussion that follows for details). This leads us to introduce and work with the potential $A$ of $J$ in (\ref{VarA}). Another difficulty is caused by rough (worse than H\"older continuous) non-homogeneous terms in the parabolic equation for $A$, in addition to the diffusion coefficient $g(\theta(x,t))$ being only H\"older continuous. This difficulty is overcome with a careful analysis in \cite{MS} that relies on, but goes beyond, treatments in \cite{Fri}. The work in \cite{MS} has a much broader interest beyond its direct application to the present work.}
For the statement of our result, we need the following definition of weak solutions.
\begin{definition}\label{def1}
For any fixed time $T\in (0,\infty)$, we say that $(u(x,t),\theta(x,t))$ is a weak solution of \eqref{sysf}, \eqref{sysw}, and \eqref{initial} over $\mathbb {R}\times [0,T]$ if
\begin{itemize}
\item
For any test function $\phi\in H^1_0(\mathbb {R}\times (0,T)),$
\begin{align}\label{thetaweak1}
\int_0^T\int_\mathbb {R}\theta_t\phi_t-\gamma_1\theta_t\phi\,dx\,dt=\int_0^T\int_\mathbb {R} (c(\theta)\theta_x)(c(\theta)\phi)_x+hu_x\phi\,dx\,dt,
\end{align}
\begin{align}\label{uweak}
\int_0^T\int_R u\phi_t-(gu_x+h\theta_t)\phi_x\,dx\,dt=0.
\end{align}
\item The initial data
\begin{align}\label{data}
u(x,0)=u_0(x),\; \theta(x,0)=\theta_0(x),\; \text{and}\; \theta_t(x,0)=\theta_1(x)
\end{align}
hold point-wise for $u(x,0)$ and $\theta(x,0),$ and in $L^p$ sense for $\theta_t(x,0)$ for any $p\in[1,2).$
\end{itemize}
\end{definition}
Throughout the paper, we will always assume relations \eqref{a2g} and \eqref{alphas} for the Leslie coefficients $\alpha_j$'s and refer to $g$, $h$ and $c$ as the functions given in \eqref{fgh}.
\begin{thm}[Global Existence]\label{main} For any fixed time $T\in (0,\infty)$, the Cauchy problem of system \eqref{sysf} and \eqref{sysw} with the initial conditions $u_0(x)$, $\theta_0(x)$ and $\theta_1(x)$ given in \eqref{initial} and \eqref{tecinitial} has a weak solution $(u(x,t),\theta(x,t))$ defined on $\mathbb {R}\times [0,T]$ in the sense of Definition \ref{def1}.
Moreover,
\[u(x,t)\in L^2([0,T],H^1(\mathbb {R}))\cap L^\infty(\mathbb {R}\times [0,T])\] and
\[\theta(x,t)\in C^{1/2}(\mathbb {R}\times [0,T])\cap L^2([0,T],H^1(\mathbb {R})),\]
and, for any $t\in[0,T]$, the associated energy
\begin{align}\label{calEdef}
\mathcal{E}(t):=\int_\mathbb {R}\theta_t^2+c^2(\theta)\theta_x^2+u^2\,dx
\end{align}
satisfies
\begin{align}
\mathcal{E}(t)\leq \mathcal{E}(0)-\int_0^t\int_\mathbb {R} (u_x+\frac{h}{g}\theta_t)^2+\theta_t^2\,dx\,dt.
\end{align}
\end{thm}
\medskip
The rest of this paper is organized as follows. In Section \ref{idea}, we introduce the main idea of this work. In Section \ref{J2theta}, as the first step in carrying out the main idea, we analyze the wave equation for $\theta$ with a prescribed forcing term.
In Section \ref{HK}, we recall some basic properties from \cite{Fri} on parabolic differential operators with only H\"older continuous diffusion coefficients. In Section \ref{J2v}, we apply the formulation in Section \ref{HK} and results in \cite{MS} to analyze the $u$-component. In Sections \ref{FixedPoint}-\ref{bdE} we prove the existence of a weak solution for the Cauchy problem \eqref{sysf} and \eqref{sysw}.
\section{Main idea of this work}\label{idea}
The approach developed in \cite{CHL20} for the special case with $g=h=1$ provides a framework for the general system \eqref{sysf} and \eqref{sysw}. There are, however, a number of crucial issues in this generalization.
Note that singularity formation is unavoidable in general \cite{CS22}. Similar to \cite{CHL20}, we introduce a new variable
\[v(x,t)=\int_{-\infty}^x u(y,t)\,dy\]
and obtain, from \eqref{sysf},
\beq\label{veq}
v_t=g(\theta)v_{xx}+h(\theta) \theta_t=g(\theta) u_x+h(\theta) \theta_t.
\eeq
Motivated by singularity cancellation revealed in \cite{CHL20}, we also introduce
\begin{align}\label{VarJ}
J=\frac{v_t}{g(\theta)}=u_x+\frac{h(\theta)}{g(\theta)}\theta_t,
\end{align}
which agrees with the function $J$ in \cite{CHL20} for the special case. It follows from \eqref{veq} and \eqref{VarJ} that $J$ satisfies
\beq\label{J_eq}
(g(\theta)J)_t=g(\theta) (g(\theta)J)_{xx}+h(\theta) \theta_{tt}+h'(\theta) \theta_t^2+g'(\theta)\theta_t
J-g'(\theta)\frac{h(\theta)}{g(\theta)}\theta_t^2.\eeq
However, it turns out the coefficient $g(\theta)$ in $(g(\theta) J)_{xx}$ is only H\"older continuous in general.
And for the general case, we do not have an explicit formula using the heat kernel as in the special case.
The work in \cite{Fri} provides an implicit expression for the kernel which can be used to treat the nonhomogeneous terms in (\ref{J_eq}) in an indirect way (see Section \ref{HK} for more details). In order to follow the framework in \cite{Fri} for the nonhomogeneous parabolic equation \eqref{J_eq},
we introduce a new auxiliary function $A,$
\begin{align}\label{VarA}
A(x,t)=\hat A(x,t)-A_0(x)=\int_{-\infty}^xJ(z,t)\,dz-\int_{-\infty}^xJ(z,0)\,dz,
\end{align}
that is $A_x=J-J_0,$ with $J_0=J(x,0)$ determined by \eqref{tecinitial}.
By comparing equation (\ref{J_eq}) with equation (\ref{AAeq}) for $A$ given below, we avoid some complication in handling the rough term $\theta_{xx}$.
Instead of working directly on system \eqref{sysf} and \eqref{sysw}, we will treat the equivalent system in terms of the quantities $(v, \theta,A)$ as
\begin{align}
v_t=g(\theta)v_{xx}&+h(\theta) \theta_t,\label{ueq}\\
\theta_{tt}+\big(\gamma_1-\frac{h^2(\theta)}{g(\theta)}\big)\theta_t&=c(\theta)(c(\theta)\theta_x)_x-h(\theta)\hat A_x,\label{thetaeq}\\
A_t=g(\theta)A_{xx}-&\gamma_1A+g'(\theta)\theta_xA_x+g'(\theta)\theta_xJ_0+F(\theta, v), \label{AAeq}
\end{align}
where
\begin{align}\label{Fun}\begin{split}
F&=G+f,\\
f&=[\gamma_1-\frac{h^2}{g}]v_x+\frac{h(\theta)c^2(\theta)}{g}\theta_x+g(\theta)J'_0,\\
G&=\int_{-\infty}^x[\frac{h'}{g}-\frac{g'h}{g^2}]\theta_t^2-[\gamma_1-\frac{h^2}{g}]'\theta_zv_z-(\frac{h(\theta)c(\theta)}{g})'c(\theta)\theta_z^2\,dz-\gamma_1 A_0,
\end{split}
\end{align}
and
\beq\label{a0a0x}
A_0(x)=\int_{-\infty}^xJ_0(z)\,dz=\int_{-\infty}^xJ(z,0)\,dz.
\eeq
Note that $A(x,0)=0.$
The splitting of $F=f+G$ is based on different regularities of each term, as we will see later.
A derivation for (\ref{AAeq}) from \eqref{J_eq} is provided in Appendix \ref{App-A}.\\
Roughly, Theorem \ref{main} will be proved in the following steps.
\bigskip
\paragraph{\bf Step 1:} For any given $J(x,t)\in C^\alpha\cap L^2\cap L^\infty$ for some $\alpha>0$, we consider the wave equation (\ref{thetaeq}) with $\hat A_x$ replaced by $J$
\begin{align}\label{waveJ}
\theta_{tt}+\big(\gamma_1-\frac{h^2(\theta)}{g(\theta)}\big)\theta_t&=c(\theta)(c(\theta)\theta_x)_x-h(\theta)J.
\end{align}\\
Using a method very similar to that in \cite{CHL20}, the existence of a $C^{1/2}$ solution $\theta^J$ of (\ref{waveJ}) will be shown in Section \ref{J2theta}.
\bigskip
\paragraph{\bf Step 2:} With $\theta^J$ obtained from Step 1, we then solve $v^J$ from the equation
\begin{align}\label{vJ}
v_t=g(\theta^J)v_{xx}+h(\theta^J)\theta_t^J,
\end{align}
and show that both $v^J$ and $u^J=v_x^J$ are in $ C^\alpha\cap L^2\cap L^\infty$ in Section \ref{J2v}.
\bigskip
\paragraph{\bf Step 3:} With $(v^J,\theta^J)$ from the above steps, we will solve for $A^J$ from
\[A_t=g(\theta^J)A_{xx}-\gamma_1A+g'\theta_x^JJ+F(\theta^J,v^J).\]
An expression of $A^J=\mathcal{N}(J)$ by the so-called parametrix method in \cite{Fri} is very helpful. Recall that $A_x+J_0=J$. After setting $\mathcal{M}(J)=\left(\mathcal{N}(J)\right)_x+J_0$, we then have a fixed point problem for the map $J\mapsto \mathcal{M}(J)$ that will be analyzed by the Schauder fixed point theorem in Section \ref{FixedPoint}.
\begin{center}
\begin{figure}[h]
\begin{tikzcd}
J \arrow[dr,leftrightarrow] \arrow[r, mapsto,"A=\mathcal{N}(J)"] & A \arrow[d, "\mathcal{M}(J)=A_x+J_0"] \\
& \mathcal{M}(J)
\end{tikzcd}
\caption{A diagram explaining the relation between the quantities $A,J$ and the map $\mathcal{M}.$}
\label{diagram}
\end{figure}
\end{center}
\section{The solution $\theta$ of (\ref{waveJ}) with a fixed $J$}\label{J2theta}
Given any function $J(x,t)\in L^2\cap L^\infty\cap C^\alpha$ for some $\alpha>0,$ we consider (\ref{waveJ}) recast below
\begin{align}\label{waveAx}
\theta_{tt}+\big[\gamma_1-\frac{h^2(\theta)}{g(\theta)}\big]\theta_t&=c(\theta)(c(\theta)\theta_x)_x-h(\theta)J.
\end{align}
It is crucial that $\gamma_1-\frac{h^2(\theta)}{g(\theta)}>C_*$ for some constant $C_*>0$ (see (\ref{positiveDamping})).
Using the change of coordinates method in \cites{BZ,CHL20}, we can prove the global existence of a weak solution for \eqref{waveAx} in a way very similar to that for the simplified system. To make this paper self-contained, we include the proof.\\
Given a point $(x_0,t_0)$, we define the characteristic curves
\[x^\pm(s)\equiv x^\pm(s;\, x_0,\, t_0)\]
by the solutions $x^\pm(s)$ of
\[\frac{d x^\pm(s)}{d s}=\pm c\bigl(\theta(x^\pm(s),s)\bigr)\;\mbox{ with }\; x^\pm(t_0)=x_0.\]
Note that $x^\pm(0)$ are the intersections of the characteristic curves $x^\pm(s)$ with the $x$-axis.
With the help of the following gradient variables
\begin{align}\label{RS}\begin{split}
S(x,t)= &\theta_t(x,t) - c(\theta(x,t)) \theta_x(x,t),\\
R(x,t)=& \theta_t(x,t)+c(\theta(x,t)) \theta_x(x,t),
\end{split}
\end{align}
we make the change of coordinates $(x,t)\mapsto (X,Y)$:
\begin{align}\label{charaXY}\begin{split}
X\equiv X(x,t):=& \int_1^{x^-(0;\, x,\, t)} (1+ R^2(x',0))\,d x',\\
Y\equiv Y(x,t):=& \int_{x^+(0;\, x,\, t)}^1 (1+ S^2(x',0))\,d x'.
\end{split}
\end{align}
Note that, for any differentiable function $f$, one has
\begin{align}\label{forward}\begin{split}
f_t(X,Y) +c(\theta) f_x(X,Y)&=2c X_x f_X,\\
f_t(X,Y)-c(\theta) f_x(X,Y)&=-2c Y_x f_Y.
\end{split}
\end{align}
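These identities follow from the chain rule: $X$ is constant along the characteristics $x^-$, so $X_t - c X_x = 0$, while $Y$ is constant along the characteristics $x^+$, so $Y_t + c Y_x = 0$; hence
\[
f_t + c f_x = (X_t + cX_x) f_X + (Y_t + cY_x) f_Y = 2c X_x f_X,
\]
and similarly for the second identity.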
In order to complete the system, we introduce several variables:
\beq\label{defwz}
w:=2\arctan R,\quad z:=2\arctan S,
\eeq
and
\beq\label{defpq}
p:=\frac{1+R^2}{X_x},\quad q:=\frac{1+S^2}{-Y_x}.
\eeq
We write the system with respect to the new coordinates to obtain
\begin{align}\label{semisys11}
&\theta_X=\frac{\sin w}{4c(\theta)}p,\quad \theta_Y=\frac{\sin z}{4c(\theta)}q
\end{align}
and to close the system we derive the equations for $z, w, p$ and $q$
\begin{align}
&\textstyle z_X=p
\Big\{\frac{c'}{4c^2}(\cos^2\frac{w}{2}-\cos^2\frac{z}{2})+\frac{b(\theta)}{4c}(\sin w\cos^2\frac{z}{2}+\sin z\cos^2\frac{w}{2})-\frac{h(\theta)}{c}J\cos^2\frac{z}{2}\cos^2\frac{w}{2} \Big\},\\
&\textstyle w_Y=q
\Big\{\frac{c'}{4c^2}(\cos^2\frac{z}{2}-\cos^2\frac{w}{2})+\frac{b(\theta)}{4c}(\sin w\cos^2\frac{z}{2}+\sin z\cos^2\frac{w}{2})-\frac{h(\theta)}{c}J\cos^2\frac{z}{2}\cos^2\frac{w}{2}\Big\}, \\
&\textstyle p_Y=pq\Big\{\frac{c'}{8c^2}(\sin z-\sin w)
+\frac{b(\theta)}{2c}(\frac{1}{4}\sin w\sin z+\sin^2 \frac{w}{2}\cos^2\frac{z}{2})
-\frac{h(\theta)}{2c}J\sin w\cos^2\frac{z}{2}\Big\},\\\label{semisys55}
&\textstyle q_X=pq\Big\{\frac{c'}{8c^2}(\sin w-\sin z)
+\frac{b(\theta)}{2c}(\frac{1}{4}\sin w\sin z+\sin^2 \frac{z}{2}\cos^2\frac{w}{2})
-\frac{h(\theta)}{2c}J\sin z\cos^2\frac{w}{2}\Big\},
\end{align}
and
\beq
\left\{
\begin{array}{ll}\label{eqnxt}
\displaystyle x_X=\frac{1}{2X_x}=\frac{1+\cos w}{4}p,\\
\\
\displaystyle x_Y=\frac{1}{2Y_x}=-\frac{1+\cos z}{4} q,
\end{array}
\right.
\qquad
\left\{
\begin{array}{ll}
\displaystyle t_X=\frac{1}{2cX_t}=\frac{1+\cos w}{4c}p,\\
\\
\displaystyle t_Y=-\frac{1}{2cY_t}=\frac{1+\cos z}{4c} q,
\end{array}
\right.
\eeq
where $b(\theta)=\frac{h^2(\theta)}{g(\theta)}-\gamma_1.$ See \cite{CHL20} for the derivation of \eqref{semisys11}-\eqref{eqnxt}.\\
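As an illustration of how these equations arise, the first identity in \eqref{semisys11} follows from \eqref{forward}, \eqref{defwz} and \eqref{defpq}: since $R=\theta_t+c(\theta)\theta_x=2cX_x\theta_X$ and $X_x=\frac{1+R^2}{p}$,
\[
\theta_X=\frac{R}{2cX_x}=\frac{p}{2c}\,\frac{R}{1+R^2}=\frac{\sin w}{4c(\theta)}\,p,
\]
where we used $\frac{R}{1+R^2}=\sin\tfrac{w}{2}\cos\tfrac{w}{2}=\tfrac{1}{2}\sin w$ for $w=2\arctan R$.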
Comparing this system with that in \cite{CHL20} for the special case $g=h=1$ and $\gamma_1=2$, one can see that the appearance of the bounded and smooth functions $g$ and $h$ does not create any difficulties applying the process in \cite{CHL20}.
\begin{prop}[Local Existence]\label{localwave}
There exists $ T>0$ sufficiently small such that system \eqref{semisys11}-\eqref{semisys55} has a solution $(\theta,z,\omega, p,q)(X,Y)$ defined on
\[\Omega_ T:=\{(X,Y)\in\mathbb {R}^2: d\big((X,Y),\gamma\big)\leq T\},\] where $\gamma$ is the curve in the $(X,Y)$ plane corresponding to the line $t=0$ on the $(x,t)$ plane and $d(\cdot,\gamma)$ is the distance between the curve and a point.
\end{prop}
\begin{proof}
The proof is similar to that in \cite{CHL20} and is outlined in Appendix \ref{app-semilinear}.
\end{proof}
Now, to extend the solution globally, meaning to any arbitrary time $0<T<\infty,$ we need an a priori uniform bound on $p$ and $q$.
\begin{lemma}\label{lemma3.3}Consider any solution of \eqref{semisys11}-\eqref{semisys55} constructed in the local existence result with $t\in [0, T]$. Then, we have
\beq\label{bdspq}
0<A_1\leq \min_{(X,Y)\in \Omega_ T }\left\{p(X,Y),\, q(X,Y)\right\}\leq \max_{(X,Y)\in \Omega_ T }\left\{p(X,Y),\, q(X,Y)\right\}\leq A_2,
\eeq
for some constants $A_1$ and $A_2$ independent of $ T.$
\end{lemma}
\begin{proof}
We skip the proof since it is entirely similar to that for Lemma 6.2 in \cite{CHL20}.
\end{proof}
Now we need to transfer the solution back to the original coordinate system $(x,t)$ using \eqref{eqnxt}. We note that in general, due to the lack of enough regularity of the solution $\theta,$ we might well lose the uniqueness of the characteristic curves, so the map $(X,Y)\mapsto (x(X,Y),t(X,Y))$ might fail to be injective and $X(x,t),Y(x,t)$ might not be well defined. Instead, we will show that $\theta(x,t)$ is always well defined. In fact, a possible scenario in this case is that we might have the following
\[x_1:=x(X_1,Y_1)=x(X_2,Y_2)=:x_2\] and \[t_1:=t(X_1,Y_1)=t(X_2,Y_2)=:t_2\] for two distinct points $(X_1,Y_1)\neq (X_2,Y_2)$. But, one can show that $\theta(X_1,Y_1)=\theta(X_2,Y_2)$ and hence $\theta(x,t)$ remains well defined (\cite{BZ}).\\
Finally, we can prove the existence of weak solution in $(x,t)$ coordinates.
\begin{prop}\label{propth} Let $J(x,t)\in (L^2\cap L^\infty\cap C^\alpha)([0,T],\mathbb {R})$ be given, and let the initial data $\theta_0(x)$ and $\theta_1(x)$ be as in \eqref{data}. Then
\eqref{waveAx} has a weak solution $\theta(x,t)=\theta^J(x,t)$ in the following sense
\begin{align}
\int_0^T\int_\mathbb {R}\theta_t\phi_t-\big[\gamma_1-\frac{h^2(\theta)}{g(\theta)}\big]\theta_t\phi\,dx\,dt=\int_0^T\int_\mathbb {R} (c(\theta)\theta_x)(c(\theta)\phi)_x+h(\theta)J\phi\,dx\,dt,
\end{align}
for any function $\phi\in H^1_0(\mathbb {R}\times (0,T))$. Moreover, we have
\[\theta(x,t)\in C^{1/2}([0,T],\mathbb {R})\cap L^\infty([0,T],H^1(\mathbb {R})).\]
The initial data is satisfied in the sense that $\displaystyle\theta(x,0)=\theta_0(x)$ point-wise and $\displaystyle \theta_t(x,0)=\theta_1(x)$ in $L^p_{loc}$ for any $p\in[1,2).$
\end{prop}
\begin{proof}
We show that the solution constructed as a fixed point of the map in Appendix \ref{app-semilinear} satisfies the weak formulation,
\begin{align*}
\int_0^T\int_\mathbb {R}\theta_t\phi_t+b(\theta)\theta_t\phi\,dx\,dt=\int_0^T\int_\mathbb {R} (c(\theta)\theta_x)(c(\theta)\phi)_x+hJ\phi\,dx\,dt,
\end{align*}
where $b(\theta)=\frac{h^2(\theta)}{g(\theta)}-\gamma_1.$
The calculations are similar to those in \cite{BZ}. Rewrite the above equation in terms of the variables $R$ and $S$ defined in \eqref{RS} to get
\begin{align*}
\int_0^T\int_\mathbb {R}\big[\phi_t-c\phi_x\big]R+\big[\phi_t+c\phi_x\big]S+c'\theta_x(S-R)\phi+b(\theta)(R+S)\phi-2hJ\phi\,dx\,dt=0.
\end{align*}
Using \eqref{forward} and $\displaystyle dxdt=\frac{pq}{2c(1+R^2)(1+S^2)}dXdY,$
we obtain
\begin{align*}
& \int_0^T\int_\mathbb {R}\bigg\{\big(-2cY_x\phi_Y\big)R+\big(2cX_x\phi_X\big)S+c'\big[\theta_XX_x+\theta_YY_x\big](S-R)\phi\\
&\qquad -\big[\gamma_1-\frac{h^2(\theta)}{g(\theta)}\big](R+S)\phi-2hJ\phi\bigg\}\frac{pq}{2c(1+R^2)(1+S^2)}dXdY=0.
\end{align*}
Apply (\ref{defpq}) and (\ref{semisys11}) to get
\begin{align*}
& \int_0^T\int_\mathbb {R}\frac{R}{1+R^2}p\phi_Y+\frac{S}{1+S^2}q\phi_X+\frac{c'pq}{8c^2}\bigg(\frac{\sin\omega}{1+S^2}-\frac{\sin z}{1+R^2}\bigg)(S-R)\phi\\
&\qquad -\frac{pq}{2c}\big[\gamma_1-\frac{h^2(\theta)}{g(\theta)}\big]\frac{R+S}{(1+R^2)(1+S^2)}\phi-\frac{pqhJ}{c(1+R^2)(1+S^2)}\phi\,dX\,dY=0.
\end{align*}
Noticing $\displaystyle \frac{R}{1+R^2}=\frac{\sin \omega}{2}\;\mbox{and}\; \frac{S}{1+S^2}=\frac{\sin z}{2}$ from (\ref{defwz}), we get
\begin{align*
& \int_0^T\int_\mathbb {R}\frac{\sin\omega}{2}p\phi_Y+\frac{\sin z}{2}q\phi_X\\
& \qquad +\frac{c'pq}{8c^2}\big(\sin\omega\sin z-\sin\omega \cos^2\frac{z}{2}\tan \frac{\omega}{2}-\sin z \cos^2\frac{\omega}{2}\tan \frac{z}{2}\big)\phi\\
&\qquad -\frac{pq}{2c}\big[\gamma_1-\frac{h^2(\theta)}{g(\theta)}\big]\big(\cos\frac{\omega}{2}\cos^2\frac{z}{2}\sin \frac{\omega}{2}+\cos^2\frac{\omega}{2}\cos\frac{z}{2}\sin \frac{z}{2}\big)\phi\\
&\qquad -\frac{pq}{c}h\cos^2\frac{\omega}{2}\cos^2\frac{z}{2}J\phi\,dX\,dY=0.
\end{align*}
Integrating the first two terms by parts and using the equations for $p,q,\omega$ and $z$, we prove that the weak formulation \eqref{thetaweak1} is satisfied.\\
It remains to show the H\"older continuity of $\theta(x,t)$ with exponent $1/2.$ This follows from the fact that
\begin{align*}
\int_0^t[\theta_t\pm c(\theta)\theta_x]^2\,dt\leq C
\end{align*}
for any $t\in [0,T],$ where the constant $C$ depends only on $t.$ Using the change of coordinates \eqref{forward} (See Appendix \ref{app-semilinear}), we obtain
\begin{align*}
\int_0^t[\theta_t+c(\theta)\theta_x]^2\,dt&=
\int_{X_0}^{X_t}(2cX_x\theta_X)^2\frac{1}{2cX_t}\,dX\\
&=\int_{X_0}^{X_t}\Big(\frac{4c}{p(1+\cos \omega)}\frac{\sin \omega}{4c}p\Big)^2\frac{1+\cos \omega}{4c}p\,dX\\
&=\int_{X_0}^{X_t}\frac{\sin^2\omega}{4c(1+\cos \omega)}p\,dX\leq C.
\end{align*}
Similar calculations for $\theta_t-c(\theta)\theta_x$ give a similar bound. The two bounds together imply the square integrability of $\theta_t$ and $\theta_x$; hence the Sobolev embedding implies the H\"older continuity of $\theta(x,t)$ with exponent $1/2.$\\
Finally we show a bound for the energy $E(t)$ defined as
\begin{align}
E(t):=\frac{1}{2}\int_\mathbb {R}\theta_t^2+c(\theta)^2\theta_x^2\,dx.
\end{align}
For any fixed $0<T<\infty,$ let $\Omega_T:=\mathbb {R}\times [0,T].$ For any function $J(x,t)\in L^\infty\cap L^2\cap C^\alpha(\Omega_T),$ the energy of a weak solution $\theta$ of \eqref{waveAx} satisfies an a priori bound. More precisely,
the energy satisfies the following bound
\begin{align}
E(t)\leq C_E,
\end{align}
for some $C_E$ depending on $E(0)$ and $J.$ The proof is similar to the one in \cite{CHL20} and is provided in Appendix \ref{Appenergy}.
The estimate obtained is
\beq\label{ET}
\frac{1}{2}\max_{0\leq t
\leq T}E(t)\leq E(0)+C\int_0^T\int_{-\infty}^\infty|J|^2\,dxdt
\eeq
for some constant $C$. This implies that $\theta_t(\cdot, t)$ and $\theta_x(\cdot, t)$ are both square integrable functions in $x$, so are $R$ and $S$.
The proof that the solution satisfies the initial condition follows by the same argument as in \cite[Theorem 1]{BZ}. We omit the proof here.
\end{proof}
\section{A brief review of parabolic differential operators with non-constant H\"older coefficients.}\label{HK}
In this section we summarise relevant results from the first chapter of \cite{Fri} in terms of the specific form of the equations appearing in this paper, for direct future use.
Let the differential operator $\mathcal{L}$ be defined as
\[\mathcal{L}:=\partial_t-g(\theta)\partial_{xx}+\gamma_1,\]
where $g(\theta)$ is a strictly positive smooth function and $\theta$ is H\"older continuous with exponent $1/2$ with respect to $x$ and $t,$ and $\gamma_1$ is a positive constant. Consider the differential equation
\begin{align}\label{diff}
\mathcal{L}\,\omega&=0.
\end{align}
\par
Note that, for any fixed $(\xi,\tau)$, the heat kernel of the operator
\[\mathcal{L}_0^{\xi,\tau}:=\partial_t-g(\theta(\xi,\tau))\partial_{xx}\]
is
\begin{align}\label{heat}
H^{\xi,\tau}(x-\xi,t-\tau)=\frac{1}{2\sqrt{\pi g(\theta(\xi,\tau))(t-\tau)}}e^{-\frac{(x-\xi)^2}{4g(\theta(\xi,\tau))(t-\tau)}}.
\end{align}
\begin{remark}
The superscripts in $\mathcal{L}_0^{\xi,\tau}$ and $H^{\xi,\tau}(x-\xi,t-\tau)$ indicate the dependence on $(\xi,\tau)$ via $g(\theta(\xi,\tau)).$
\end{remark}
Several results established in Chapter 1 of \cite{Fri} will be recalled and used. These include Theorems 8-11, displays (4.15), (6.12) and (6.13) in \cite{Fri}.
\begin{prop}
There exists a function $\Phi$ such that $\Gamma$ given by
\begin{align}\label{GGGamma}
\Gamma(x,t,\xi,\tau)&=H^{\xi,\tau}(x-\xi,t-\tau)
+\int_\tau^t\int_\mathbb{R}H^{y,s}(x-y,t-s)\Phi(y,s;\xi,\tau)\,dy\,ds
\end{align}
satisfies \eqref{diff}. Moreover, one has
\begin{align}\label{phiest}
|\Phi(y,s;\xi,\tau)|\leq \frac{const}{(s-\tau)^{5/4}}e^{\frac{-d(y-\xi)^2}{4(s-\tau)}},
\end{align}
where $d$ is a constant depending on $\|g\|_{L^\infty(\mathbb {R})}$ and $\gamma_1.$
\end{prop}
Set $\displaystyle \Omega_T:=\mathbb {R}\times (0,T]$ for some $T>0$ and consider the Cauchy problem
\begin{align}\label{chy}
&\mathcal{L}\,\omega(x,t)=f(x,t),\quad \text{on}\quad\Omega_T\\\label{chyinit}
&\omega(x,0)=\phi(x),\quad \text{at} \quad t=0
\end{align}
where $f$ is H\"older continuous on $ \overline\Omega_T$ and $\phi$ is continuous on $\mathbb {R}$. It is shown in \cite[Theorem 12]{Fri} that the function
\begin{align}
\omega(x,t)=\int_\mathbb {R}\Gamma(x,t;\xi,0)\phi(\xi)\,d\xi+\int_0^t\int_\mathbb {R}\Gamma(x,t;\xi,\tau)f(\xi,\tau)\,d\xi\,d\tau
\end{align}
is a {\em classical} solution of the Cauchy problem \eqref{chy} and \eqref{chyinit}. Moreover,
\begin{align}\label{Gammaest}
|\Gamma(x,t;\xi,\tau)|\lesssim \frac{1}{\sqrt{t-\tau}}e^{-\frac{d(x-\xi)^2}{4(t-\tau)}}\approx H(x-\xi,t-\tau),
\end{align}
\begin{align}\label{Gamma_xest}
|\Gamma_x(x,t;\xi,\tau)|\lesssim\frac{1}{{t-\tau}}e^{-\frac{d(x-\xi)^2}{4(t-\tau)}}\approx\frac{1}{\sqrt{t-\tau}}H(x-\xi,t-\tau),
\end{align}
where $d$ is a constant depending on $g$ and $\lesssim$ and $\approx$ mean $\leq$ up to a constant and $=$ up to a constant, respectively. In both cases the constant is uniform in $(x,t,\xi,\tau).$ \\
We now apply the above results from \cite{Fri} to get a preliminary result for later usage.
Set
\begin{align*}
M_f(x,t)&:=\int_0^t\int_\mathbb {R}\Gamma(x,t;\xi,\tau)f(\xi,\tau)\,d\xi\,d\tau,\\
M_{f,x}(x,t)&:=\int_0^t\int_\mathbb {R}\Gamma_x(x,t;\xi,\tau)f(\xi,\tau)\,d\xi\,d\tau.
\end{align*}
\begin{prop}\label{GammaL2}
If $f(x,t)\in L^2(\Omega_T),$
then
\begin{align}\label{LinfMf}
\|M_f\|_{L^\infty(\Omega_T)}\lesssim T^{1/4}\|f\|_{L^2(\Omega_T)},\;&\; \|M_{f,x}\|_{L^\infty(\Omega_T)}\lesssim T^{1/4}\|f\|_{L^\infty((0,T),L^2(\mathbb {R}))},\\\label{L2Mf}
\|M_f\|_{L^2(\Omega_T)}\lesssim T\|f\|_{L^2(\Omega_T)},\;&\mbox{ and }\; \|M_{f,x}\|_{L^2(\Omega_T)}\lesssim \sqrt{T}\|f\|_{L^2(\Omega_T)}.
\end{align}
\end{prop}
\begin{proof}
To estimate the $L^\infty$ norm we use \eqref{Gammaest}, \eqref{Gamma_xest} and the Cauchy–Schwarz inequality,
\begin{align*}
|M_f|&\lesssim \bigg(\int_0^t\int_\mathbb {R} \frac{1}{{t-\tau}}e^{-\frac{d(x-\xi)^2}{2(t-\tau)}}\,d\xi\,d\tau\bigg)^{1/2}\|f\|_{L^2(\Omega_T)}\lesssim T^{1/4}\, \|f\|_{L^2(\Omega_T)}.
\end{align*}
Taking the sup over $\Omega_T,$ we get the first estimate in \eqref{LinfMf}. Similarly,
\begin{align*}
|M_{f,x}|&\lesssim \bigg(\int_0^t\int_\mathbb {R} \frac{1}{{|t-\tau|}^{2-2r}}e^{-\frac{d(x-\xi)^2}{2(t-\tau)}}\,d\xi\,d\tau\bigg)^{1/2}\bigg(\int_0^t\int_\mathbb {R} \frac{1}{{|t-\tau|}^{2r}}f^2\,d\xi\,d\tau\bigg)^{1/2}\\
&\lesssim \bigg(\int_0^t \frac{1}{{(t-\tau)}^{\frac{3}{2}-2r}}\,d\tau\bigg)^{1/2} \bigg(\int_0^t \frac{1}{{|t-\tau|}^{2r}}\,d\tau\bigg)^{1/2}\|f\|_{L^\infty((0,T),L^2(\mathbb {R}))}.
\end{align*}
For $r=\frac{3}{8},$ we get the second estimate in \eqref{LinfMf}.\\
To estimate the $L^2$ norm we use \eqref{Gammaest}, \eqref{Gamma_xest} and Young's convolution inequality with $r=2$, $p=1$, and $q=2$. On $\Omega_T,$
\begin{align*}
\|M_f\|_{L^2}\lesssim & \bigg\|\int_0^T\int_\mathbb {R}\frac{1}{\sqrt{t-\tau}}e^{-\frac{d(x-\xi)^2}{4(t-\tau)}}|f(\xi,\tau)|\,d\xi\,d\tau\bigg\|_{L^2}\\
&\lesssim\|H\ast f\|_{L^2}\leq \|H\|_{L^{1}}\|f\|_{L^2}=C\, T\,\|f\|_{L^2},\\
\|M_{f,x}\|_{L^2}\lesssim & \bigg\|\int_0^T\int_\mathbb {R}\frac{1}{{t-\tau}}e^{-\frac{d(x-\xi)^2}{4(t-\tau)}}|f(\xi,\tau)|\,d\xi\,d\tau\bigg\|_{L^2}\\
&\lesssim\|\frac{1}{\sqrt{t}}H\ast f\|_{L^2}\leq \|\frac{1}{\sqrt{t}} H\|_{L^{1}}\|f\|_{L^2}=C\, \sqrt{T}\,\|f\|_{L^2}.
\end{align*}
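Here the two $L^1$ norms are evaluated directly from the Gaussian kernel:
\begin{align*}
\|H\|_{L^1(\Omega_T)}\lesssim\int_0^T\frac{1}{\sqrt t}\int_\mathbb {R} e^{-\frac{d x^2}{4t}}\,dx\,dt=\sqrt{\tfrac{4\pi}{d}}\,T,\qquad
\Big\|\tfrac{1}{\sqrt t}H\Big\|_{L^1(\Omega_T)}\lesssim\sqrt{\tfrac{4\pi}{d}}\int_0^T\frac{dt}{\sqrt t}=2\sqrt{\tfrac{4\pi}{d}}\,\sqrt T.
\end{align*}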
This completes the proof.
\end{proof}
\section{Existence of a solution $v^J$ for $\eqref{veq}$.}\label{J2v}
Recall that $\theta=\theta^J$ is the solution of wave equation (\ref{waveJ}) depending on $J$.
We now consider Cauchy problem of \eqref{vJ}
\begin{align}\label{vveq}
& v_t=g(\theta^J)v_{xx}+h(\theta^J)\theta^J_t,\\
\label{stvinit}
& v(x,0)=v_0(x)
\end{align}
and denote the solution by $v^J$.
\begin{prop}\label{propv}
Let $v_0(x)$ be defined as $v'_0(x)=u_0(x).$ For any $T\in (0,\infty),$ there exists a function $$v^J(x,t)\in L^2((0,T),H^1(\mathbb {R}))$$ that satisfies \eqref{vveq} and \eqref{stvinit} in the sense that
\begin{align}\label{vweak}
\int_0^T\int_\mathbb {R} v^J\phi_t-v^J_x(g(\theta^J)\phi)_x+h(\theta^J)\theta^J_t\phi\,dx\,dt=0
\end{align}
for any $\phi\in H^1_0((0,T],\mathbb {R})$ and, as $t\to 0^+$,
\begin{align}\label{limv}
v^J(x,t)&\to v_0(x) \mbox{ point-wise, }\\\label{limv_x}
v_x^J(x,t)&\to v'_{0}(x)=u_0(x) \mbox{ almost everywhere. }
\end{align}
Moreover,
\[v^J, v^J_x\in C^\alpha((0,T],\mathbb {R})\cap L^\infty((0,T],\mathbb {R}) \mbox{ for any } 0<\alpha<1/4.\]
\end{prop}
\begin{proof} { For simplicity, we will drop the subscript $J$ in the proof.\\
Since $\theta_t$ is generally not H\"older continuous, to apply the results in Section \ref{HK}, we let $\theta_t^\epsilon$ be the mollification of $\theta_t$ for $\epsilon>0$ small.} It is known that $\theta_t^\epsilon\in C^\infty_c(\Omega_T)$ and, as $\epsilon\to 0$, $\theta_t^\epsilon\to \theta_t$ in $L^2(\Omega_T).$ Denote the solution of
\begin{align}\label{stv}
&v_t=g(\theta)v_{xx}+h(\theta)\theta_t^\epsilon
\end{align}
with the same initial condition (\ref{stvinit})
by $v^\epsilon(x,t).$ As discussed in Section \ref{HK}, $v^\epsilon(x,t)$ is
a classical solution and can be written explicitly as
\begin{align}\label{vepsilon}
v^\epsilon(x,t)=\int_\mathbb {R} \Gamma^0(x,t,\xi,0)v_0(\xi)\,d\xi+\int_0^t\int_\mathbb {R}\Gamma^0(x,t;\xi,\tau)h(\theta)\theta_\tau^\epsilon(\xi,\tau)\,d\xi\,d\tau,
\end{align}
{where $\Gamma^0$ is the kernel of the operator ${\mathcal L}_0:=\partial_t-g(\theta(x,t))\partial_{xx}$.
Note that the operator ${\mathcal L}_0$ is the same as ${\mathcal L}$ in (\ref{diff}) with $\gamma_1=0$. We comment that all estimates in Section \ref{HK} for $\Gamma$ still hold true for $\Gamma^0$ with possibly different constants.}\\
Clearly, $v^\epsilon$ in \eqref{vepsilon} satisfies the weak formulation \eqref{vweak} since it is a classical solution to \eqref{stv} and \eqref{stvinit}. We have
\begin{align}\label{epsilonweak}
\int_0^T\int_\mathbb {R} v^\epsilon\phi_t-v^\epsilon_x(g\phi)_x+h\theta_t^\epsilon\phi\,dx\,dt=0
\end{align}
for all $\phi\in H^1_0(\mathbb {R}\times (0,T)).$
At this point, we claim that the expressions
\begin{align}\label{compv}
v(x,t)=\int_\mathbb {R} \Gamma^0(x,t,\xi,0)v_0(\xi)\,d\xi+\int_0^t\int_\mathbb {R}\Gamma^0(x,t;\xi,\tau)h(\theta)\theta_\tau(\xi,\tau)\,d\xi\,d\tau
\end{align}
and
\begin{align}\label{compv_x}
v_x(x,t)=\int_\mathbb {R} \Gamma^0_x(x,t,\xi,0)v_0(\xi)\,d\xi+\int_0^t\int_\mathbb {R}\Gamma^0_x(x,t;\xi,\tau)h(\theta)\theta_\tau(\xi,\tau)\,d\xi\,d\tau
\end{align}
are the limits of $v^\epsilon(x,t)$ and $v_x^\epsilon(x,t),$ respectively, in $L^2(\Omega_T)$ sense, and hence, $v(x,t)$
is a weak solution of \eqref{vveq}.\\
Subtract (\ref{compv}) from (\ref{vepsilon}) and apply Proposition \ref{GammaL2} and estimate \eqref{Gammaest} to get
\begin{align*}
|v^\epsilon-v|\leq \int_0^T\int_\mathbb {R}|\Gamma^0||h||\theta^\epsilon_\tau-\theta_\tau|\,d\xi\,d\tau
\end{align*}
and
\begin{align*}
\|v^\epsilon-v\|_{L^2(\Omega_T)}&\lesssim \|H\ast (|h||\theta^\epsilon_\tau-\theta_\tau|)\|_{L^2(\Omega_T)}\\
&\lesssim \|H\|_{L^1(\Omega_T)}\|\theta^\epsilon_\tau-\theta_\tau\|_{L^2(\Omega_T)}\to 0\;\mbox{ as }\; \epsilon\to 0.
\end{align*}
Similarly, by Proposition \ref{GammaL2} and estimate \eqref{Gamma_xest} we have
\begin{align*}\begin{split}
\|v_x^\epsilon-v_x\|_{L^2(\Omega_T)}&\lesssim \|\frac{1}{\sqrt{t}}H\ast (|h||\theta^\epsilon_\tau-\theta_\tau|)\|_{L^2(\Omega_T)}\\
&\lesssim
\|\frac{1}{\sqrt{t-\tau}}H\|_{L^1(\Omega_T)}\|\theta^\epsilon_\tau-\theta_\tau\|_{L^2(\Omega_T)}\to 0\;\mbox{ as }\; \epsilon\to 0.
\end{split}
\end{align*}
Taking $\epsilon\to 0$ in \eqref{epsilonweak} we obtain \eqref{vweak}. Hence, the weak formulation \eqref{vweak} is satisfied as a limit of the classical solution $v^\ve$ to the initial value problem \eqref{stv},\eqref{stvinit}.\\
For the initial data, the first limit \eqref{limv} follows from \cite{Fri}.
The second limit \eqref{limv_x} can be shown by considering the equation satisfied by the first term of \eqref{compv_x} that is by letting $u^0(x,t)=v^0_x(x,t):=\int_\mathbb {R} \Gamma^0_x(x,t,\xi,0)v_0(\xi)\,d\xi.$ Then
\[u^0_t-(g(\theta)u^0_x)_x=0\]
\[u^0(x,0)=u_0(x)\in H^1(\mathbb {R}).\]
It is known that the solution $u^0\in C([0,T],L^2(\mathbb {R}))$ and $u^0(x,0)=u_0(x)$ almost everywhere \cite{Evans}. Hence, we obtain
\[v_x(x,0)=v'_0(x) \mbox{ almost everywhere.}\]
Finally, to show $v, v_x\in L^\infty\cap C^\alpha(\mathbb {R}\times (0,T]),$ we use Propositions \ref{GammaL2} and \ref{mainlemma}, which are established in \cite{MS}, where a more general equation was studied and H\"older estimates were established.
\end{proof}
\section{Existence of a solution $J.$}\label{FixedPoint}
{ Recall that in Sections \ref{J2theta} and \ref{J2v}, for any $J(x,t)\in C^\alpha\cap L^2\cap L^\infty$ for some $\alpha>0$, we solved $\theta^J$ from system (\ref{thetaeq}) with $A_x$ replaced by $J$, then solved $v^J$ (and hence $u^J$) from system (\ref{ueq}) with $\theta$ replaced by $\theta^J$, and we showed that
\[
v^J, u^J \in L^2\cap L^\infty\cap C^\alpha(\mathbb {R}\times (0,T))\;\mbox{ and }\;
\theta^J_t, \theta^J_x\in L^\infty((0,T),L^2(\mathbb {R})).\]
We now solve $A^J$ from system (\ref{AAeq}) with $(\theta,v,A_x+A_{0,x})$ replaced by $(\theta^J,v^J,J)$. With all these preparations, we will then define a mapping $\mathcal M(J)$ so that its fixed point gives rise to a solution of our original Cauchy problem.}
In view of system (\ref{AAeq}), define the operator
\[\mathcal{L}^J:=\partial_t-g(\theta^J)\partial_{xx}+\gamma_1.\]
The function $A$, introduced in (\ref{VarA}), satisfies \eqref{AAeq} recast below
\begin{align}\label{Aeq}
\mathcal{L}^J A=&F(\theta^J,\theta^J_t,\theta^J_x, v^J)+g'(\theta^J)\theta^J_x J,
\end{align}
where $F(\theta,\theta_t,\theta_x, v)=f(\theta,\theta_x, v_x)+G(\theta,\theta_t,\theta_x, v_x)$ with
$f$ and $G$ given in \eqref{Fun}, along with the initial data\begin{align}\label{A_0}
A(x,0)=0.
\end{align}
Formally, following the discussion in Section \ref{HK}, $A$ can be expressed as
\begin{align}\label{Aexpression}
A(x,t)=\int_0^t\int_\mathbb {R}\Gamma(x,t;\xi,\tau)\big[F(\theta^J,v^J)+g'(\theta^J)\theta^J_{\xi}J\big](\xi,\tau)\,d\xi\,d\tau.
\end{align}
We now define a mapping $\mathcal{M}$ by
\begin{align}\begin{split}\label{5.6}
&\mathcal{M}( J)(x,t):=A_x(x,t)+J_{0}\\
&=
\int_0^t\int_\mathbb {R}\Gamma_x(x,t;\xi,\tau)\big[F(\theta^J,v^J)+g'(\theta^J)\theta^J_\xi J\big]\,d\xi\,d\tau+J_0.
\end{split}
\end{align}
The goal is to show the existence of a fixed point $J^*=\mathcal{M}(J^*)$ in a suitable space. This will lead to a weak solution $(\theta^{J^*},u^{J^*})$ for \eqref{sysf} and \eqref{sysw}.
We first give a uniform a priori energy estimate for $\mathcal{E}(t)$ defined in \eqref{calEdef}.
\begin{thm}\label{lemma3.2} For any fixed $T>0$ and for any weak solution $(u(x,t),\theta(x,t))$ of system \eqref{sysf} and \eqref{sysw}, we have, for $t\in[0,T]$,
\beq\label{engineq2}
\mathcal{E}(t)\leq \mathcal{E}(0) -\iint_{\mathbb {R}\times [0,t]}(\frac{v_t^2}{g^2}+\theta^2_t)\,dxdt.
\eeq
\end{thm}
The proof is the same as the one in \cite{CHL20}. One can find it in Appendix \ref{appE}.
\begin{cor}
For any weak solution of system \eqref{sysf} and \eqref{sysw}, there exists a constant $C_0$ depending on $\mathcal{E}(0),\|J'_0\|_{L^2(\mathbb {R})},$ and $\|J_0\|_{L^1(\mathbb {R})}$, such that
\beq\label{Gfbound}
\|G\|_{L^\infty(\Omega_T)}\leq C_0,\qquad \|f\|_{L^\infty([0,T],L^2(R))}\leq C_0.\eeq
\end{cor}
The proof is straightforward using \eqref{calEdef}, \eqref{tecinitial}, \eqref{positiveDamping},
and definition of $G$ and $f$ in \eqref{Fun}.
Now we fix an arbitrary $T>1$ once and for all and
consider the following spaces over $\overline{\Omega}_T:=\mathbb {R}\times [0,T].$ Denote
\begin{align*}
L_*:=\cap_{p\in[a+2,\infty)}L_*^p,
\end{align*}
for fixed $a>0$, and
\begin{align*}
N_T:=C_*^\alpha\cap L_*^2\cap L_*^\infty(\overline{\Omega}_T),
\end{align*}
with
\begin{align*}
\|S\|_{N_T(\overline\Omega_T)}=\max\left\{\|S\|_{L_*^\infty(\overline\Omega_T)}, \|S\|_{L_*^2(\overline\Omega_T)}, \|S\|_{C_*^\alpha(\overline\Omega_T)}\right\},
\end{align*}
where $\alpha\in (0,1/4)$ and
\[\|S\|_{L_*^\infty(\overline\Omega_T)}=\|e^{-\lambda t}S(x,t)\|_{L^\infty(\overline\Omega_T)},\]
\[\|S\|_{L_*^p(\overline\Omega_T)}=\|e^{-\lambda t}S(x,t)\|_{L^p(\overline\Omega_T)},\]
and
\[\|S\|_{C_*^\alpha(\overline\Omega_T)}=\sup_{\|(h_1,h_2)\|>0}e^{-\lambda t}\frac{|S(x+h_1,t+h_2)-S(x,t)|}{\|(h_1,h_2)\|^\alpha},\]
for some $\lambda=\lambda(\mathcal{E}(0),T)>0$ sufficiently large that will be determined later. Let
\beq\label{kT}
k_T=2(\|J_0\|_{C^\alpha\cap L^2\cap L^\infty}+\max\{C_0,C_0^2\}T^2).
\eeq
We define
\begin{align*}
K_T&:=\big\{\mathcal{S}(x,t)\in N_T:\|\mathcal{S}\|_{N_T}\leq k_T,\, \mathcal{S}(x,0)=J_0(x)\big\}.
\end{align*}
By \eqref{tecinitial} and the Sobolev embedding, it is clear that $\|J_0(x)\|_{N_T(\mathbb {R})}<\|J_0(x)\|_{C^\alpha\cap L^2\cap L^\infty}.$
Furthermore, for any fixed $T$, it is easy to show that the $N_T$ norm and the $C^\alpha\cap L^2\cap L^\infty$ norm are equivalent. In \cite{BZ}, a similar norm was used to prove the existence.
\begin{cor}
$K_T$ is compact in $L_*$ on any $\overline\Omega_T.$
\end{cor}
\begin{proof} This can be proved using a very similar method as in Section 6.3 of \cite{CHL20}.
\end{proof}
We now recall the Schauder Fixed Point Theorem that will be applied to complete our analysis.
\begin{thm}[Schauder Fixed Point Theorem]\label{sch}
Let $E$ be a Banach space, and let $K$ be a convex set in $E$. Let $\mathcal T: K\to K$ be a continuous map such that $\mathcal T(K) \subset K$, where
$K$ is a compact subset of $E$. Then $\mathcal T$ has a fixed point in $K$.
\end{thm}
The main step is to verify the two assumptions of Theorem \ref{sch}, that is
\bigskip
\begin{enumerate}[i.]
\item \emph{The continuity of the map $\mathcal{M}:K_T\to K_T.$}\smallskip\\
This can be verified using the same argument as in \cite{CHL20}. The idea is to use the change of coordinates and the semi-linear system introduced previously along with the regularity of the transformation that preserves the continuity of the map. We refer the reader to \cite{CHL20}.
\bigskip
\item \emph{The inclusion $\mathcal{M}(K_T)\subset K_T.$
}
\end{enumerate}
Now we prove (ii).\\
The following proposition, with the help of \cite[Propositions 1.1 and 1.2]{MS}, is the key estimate to show that for any $T>0$ we have the inclusion \[\mathcal{M}:K_T\to K_T.\]
To state the proposition, let $G\in L^\infty(\Omega_T)$ and $f\in L^\infty((0,T),L^2(\mathbb {R}))$ and define
\begin{align*}
M_{G}(x,t):=\int_0^t\int_{\mathbb {R}}\Gamma(x,t;\xi,\tau)G(\xi,\tau)\,d\xi\,d\tau,
\end{align*}
\begin{align*}
M_{f}(x,t):=\int_0^t\int_{\mathbb {R}}\Gamma(x,t;\xi,\tau)f(\xi,\tau)\,d\xi\,d\tau.
\end{align*}
and
\begin{align*}
M_{G,x}(x,t):=\int_0^t\int_{\mathbb {R}}\Gamma_x(x,t;\xi,\tau)G(\xi,\tau)\,d\xi\,d\tau,
\end{align*}
\begin{align*}
M_{f,x}(x,t):=\int_0^t\int_{\mathbb {R}}\Gamma_x(x,t;\xi,\tau)f(\xi,\tau)\,d\xi\,d\tau.
\end{align*}
By the upper bounds established in \cite[Sections 2 and 4]{MS} and then by \eqref{Gfbound},
\begin{align}\label{infbound}
\begin{split}
\|M_{G,x}\|_{L^\infty(\Omega_ T)}&\lesssim T^{1/2}\|G\|_{L^\infty(\Omega_T)}\leq C_0T^{1/2},\\
\|M_{f,x}\|_{L^\infty(\Omega_ T)}&\lesssim T^{1/4}\|f\|_{L^\infty((0,T),L^2(R))}\leq C_0T^{1/4},
\end{split}
\end{align}
\begin{align}\label{alphbound}
\begin{split}
\|M_{G,x}\|_{C^\alpha(\Omega_ T)}&\lesssim T^{r}\|G\|_{L^\infty(\Omega_T)}\leq C_0 T^r,\\
\|M_{f,x}\|_{C^\alpha(\Omega_ T)}&\lesssim T^{s}\|f\|_{L^\infty((0,T),L^2(R))}\leq C_0 T^s,
\end{split}
\end{align}
for some fixed $0<r,s<1.$ Moreover,
\begin{align}\label{L2bound}
\begin{split}
\|M_{G,x}\|_{L^2(\Omega_ T)}&\leq \frac{1}{2} T^{3/2} \|G\|_{L^\infty(\Omega_T)}\|G_x\|_{L^\infty((0,T),L^1(\mathbb {R}))}\leq C^2_0 T^{3/2},\\
\|M_{f,x}\|_{L^2(\Omega_ T)}&\leq T^{1/2}\|f\|_{L^\infty((0,T),L^2(R))}\leq C_0T^{1/2}.
\end{split}
\end{align}
Using these estimates, \eqref{Gfbound} and the equivalence of $N_T$ norm and $C^\alpha\cap L^2\cap L^\infty$ norm when $T$ is given, it is easy to prove the following proposition.
\begin{prop}\label{mainlemma} Assume $f$ and $G$ satisfy the bounds in \eqref{Gfbound}. For any given $T>1$, we have
$$\displaystyle \max\{\|M_{G,x}(x,t)\|_{N_ T},\|M_{f,x}(x,t)\|_{N_ T}\}\leq \max\{C_0,C_0^2\}T^2.$$
\end{prop}
The map $\mathcal{M}$ in \eqref{5.6} contains two terms in the integration. The first term is $F^J:=G+f$ and the second term is $\displaystyle g'(\theta^J)\theta^J_\xi J\in L^\infty((0,T),L^2(\mathbb {R})).$ Proposition \ref{mainlemma} gives a uniform bound on the first term, where the bound is less than $k_T,$ chosen in \eqref{kT}.
To control the second term, a special treatment is needed due to the extra explicit dependence on $J.$
Denote
\[Q(x,t):=\int_0^t\int_\mathbb {R} \Gamma_x(x,t,\xi,\tau)g'\theta_\xi(\xi,\tau)J(\xi,\tau)\,d\xi d\tau.\]
We write
\begin{align*}
\big|\int_0^t\int_\mathbb {R}& e^{\lambda\tau} \Gamma_x(x,t,\xi,\tau)g'\theta_\xi(\xi,\tau)e^{-\lambda\tau}J(\xi,\tau)\,d\xi d\tau\big|\\
&\lesssim \|e^{-\lambda\tau} J\|_{L^\infty(\Omega_T)}\,\mathcal{E}(0) \bigg[\int_0^t\int_\mathbb {R}(t-\tau)^{3/4} \Gamma_x^2(x,t,\xi,\tau)\,d\xi\,d\tau\bigg]^{1/2}\bigg[\int_0^t\frac{e^{2\lambda\tau}}{(t-\tau)^{3/4}}\,d\tau\bigg]^{1/2}\\
&\leq \|e^{-\lambda\tau} J\|_{L^\infty(\Omega_T)}\,\mathcal{E}(0)\,t^{1/8}\bigg[\int_0^t\frac{e^{2\lambda\tau}}{(t-\tau)^{3/4}}\,d\tau\bigg]^{1/2}\\
&=\|e^{-\lambda\tau} J\|_{L^\infty(\Omega_T)}\,\mathcal{E}(0)\,t^{1/8}\bigg[\int_0^t\frac{e^{2\lambda(t-\tau)}}{\tau^{3/4}}\,d\tau\bigg]^{1/2},
\end{align*}
where we used \eqref{Gamma_xest}.
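In particular, \eqref{Gamma_xest} and the Gaussian integration in $\xi$ give
\begin{align*}
\int_0^t\int_\mathbb {R}(t-\tau)^{3/4}\,\Gamma^2_x(x,t,\xi,\tau)\,d\xi\,d\tau\lesssim\int_0^t (t-\tau)^{\frac34-2+\frac12}\,d\tau=\int_0^t\frac{d\tau}{(t-\tau)^{3/4}}=4\,t^{1/4},
\end{align*}
which accounts for the factor $t^{1/8}$ in the second line.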
Multiplying by $e^{-\lambda t},$ we get
\begin{align*}
\big|e^{-\lambda t}\int_0^t\int_\mathbb {R}& e^{\lambda\tau} \Gamma_x(x,t,\xi,\tau)g'\theta_\xi(\xi,\tau)e^{-\lambda\tau}J(\xi,\tau)\,d\xi d\tau\big|\\
&\leq \|e^{-\lambda\tau} J\|_{L^\infty(\Omega_T)}\mathcal{E}(0)\,t^{1/8}\bigg[\int_0^t\frac{e^{-2\lambda\tau}}{\tau^{3/4}}\,d\tau\bigg]^{1/2}\\
&=\|e^{-\lambda\tau} J\|_{L^\infty(\Omega_T)}\mathcal{E}(0)\,t^{1/8}\bigg[\int_0^{1/\lambda}\frac{e^{-2\lambda\tau}}{\tau^{3/4}}\,d\tau+\int_{1/\lambda}^{t}\frac{e^{-2\lambda\tau}}{\tau^{3/4}}\,d\tau\bigg]^{1/2}\\
&\leq \|e^{-\lambda\tau} J\|_{L^\infty(\Omega_T)}\mathcal{E}(0)\,t^{1/8}\bigg[\frac{1}{\lambda^{1/4}}+\lambda^{3/4}\frac{1}{\lambda}(e^{-1}-e^{-\lambda T})\bigg]^{1/2}\\
&\leq \sqrt{2} \|e^{-\lambda\tau} J\|_{L^\infty(\Omega_T)}\mathcal{E}(0)\,T^{1/8}\frac{1}{\lambda^{1/8}},
\end{align*}
which yields, for $J\in K_T,$
\begin{align}
\|Q\|_{L_*^\infty(\Omega_T)}\leq C_T \frac{1}{\lambda^{1/8}} \|J\|_{L_*^\infty(\Omega_T)}\mathcal{E}(0)\leq \sqrt{2}\,T^{1/8} \frac{1}{\lambda^{1/8}} k_T\mathcal{E}(0).
\end{align}
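The right-hand side is at most $k_T/2$ precisely when
\[
\sqrt{2}\,T^{1/8}\,\lambda^{-1/8}\,\mathcal{E}(0)\leq \tfrac12,\qquad\mbox{that is,}\qquad \lambda\geq (2\sqrt{2})^8\,T\,\mathcal{E}^8(0).
\]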
Hence, choosing $\lambda>(2\sqrt{2})^8T\mathcal{E}^8(0),$ we obtain the bound
\begin{align}\label{Linf*}
\|Q\|_{L_*^\infty(\Omega_T)}\leq k_T/2.
\end{align}
For the $L^2_*$ estimate, using \eqref{Gamma_xest} and \eqref{L2bound}
\begin{align*}
e^{-\lambda t}|Q(x,t)|&\leq e^{-\lambda t}\int_0^t\int_\mathbb {R} \frac{1}{{t-\tau}}e^{-\frac{d(x-\xi)^2}{4(t-\tau)}}|g'||\theta_\xi(\xi,\tau)||J(\xi,\tau)|\,d\xi d\tau\\
&\leq \|g'\|_{L^\infty} \|J\|_{L_*^\infty}\int_0^t\int_\mathbb {R} \frac{1}{{t-\tau}}e^{-\frac{d(x-\xi)^2}{4(t-\tau)}}|\theta_\xi(\xi,\tau)|e^{\lambda(\tau-t)}\,d\xi d\tau\\
&=\|g'\|_{L^\infty}\|J\|_{L_*^\infty}\int_0^t\int_\mathbb {R} \frac{1}{{\tau}}e^{-\frac{d\,\xi^2}{4\tau}}|\theta_\xi(x-\xi,t-\tau)|e^{-\lambda\tau}\,d\xi d\tau.
\end{align*}
Taking the $L^2$ norm in $x,$
\begin{align*}
\|e^{-\lambda t}Q(x,t)\|_{L^2(\mathbb {R})}&\leq \|g'\|_{L^\infty} \|J\|_{L_*^\infty(\Omega_T)} \|\theta_\xi\|_{L^\infty((0,T),L^2(\mathbb {R}))}\int_0^t\frac{1}{\sqrt{\tau}}e^{-\lambda \tau}\,d\tau\\
&\leq \|g'\|_{L^\infty}\|J\|_{L_*^\infty} \mathcal{E}(0)\frac{3}{\sqrt{\lambda}}.
\end{align*}
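Here we used that
\[
\int_0^t\frac{e^{-\lambda \tau}}{\sqrt{\tau}}\,d\tau\leq\int_0^\infty\frac{e^{-\lambda \tau}}{\sqrt{\tau}}\,d\tau=\frac{\sqrt{\pi}}{\sqrt{\lambda}}\leq\frac{3}{\sqrt{\lambda}}.
\]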
This gives
\begin{align}
\|Q\|_{L^2_*(\Omega_T)}\leq \|g'\|_{L^\infty}k_T \mathcal{E}(0)\frac{3}{\sqrt{\lambda}}\,\sqrt{T}.
\end{align}
Hence, for $\lambda>36\,\mathcal{E}(0)^2\|g'\|_{L^\infty}^2T$ we have
\begin{align}\label{L2*}
\|Q\|_{L^2_*(\Omega_T)}\leq k_T/2.
\end{align}
By a similar argument one can show the existence of $\lambda>0$ depending only on $\mathcal{E}(0),\,T$ and the bounds of $g$ and $g'$ such that
\begin{align}\label{alpha*}
\|Q\|_{C^\alpha_*(\Omega_T)}\leq k_T/2.
\end{align}
\begin{remark}
The weight $e^{-\lambda t}$ introduced in the norm $N_T$ helps to obtain the inclusion property of the map $\mathcal{M}$, particularly for the term $Q$ treated above. Consider, for example, the $L^\infty_*$ estimate: we obtained \[\|Q\|_{L^\infty_*}\leq C_T\frac{1}{\lambda^{1/8}} \|J\|_{L^\infty_*}\mathcal{E}(0).\]
This allows us to choose $\lambda=\lambda(T,\mathcal{E}(0))$ such that $\|Q\|_{L^\infty_*}<\frac{1}{2} \|J\|_{L^\infty_*}.$ In other words, the size of $\|Q\|_{N_T}$ shrinks faster than that of $\|J\|_{N_T}$ as $\lambda$ gets large.
\end{remark}
Now, Proposition \ref{mainlemma} with \eqref{Linf*},\eqref{L2*} and \eqref{alpha*} show that for any fixed $T>1,$
there exists $\lambda>0$ large enough (depending only on $T, \mathcal{E}(0), \|g\|_{L^\infty}$ and $\|g'\|_{L^\infty}$)
such that $$\mathcal{M}(K_T)\subset K_T.$$ Hence, by the Schauder fixed point theorem, we have a fixed point $J^*=\mathcal{M}(J^*)\in K_T.$ Since for a fixed finite time $T$ the $N_T$ norm and the $L^2\cap L^\infty\cap C^\alpha(\Omega_T)$ norm are equivalent, $J^*\in L^2\cap L^\infty\cap C^\alpha.$ Therefore, we have the existence of $A(x,t)$ by \eqref{Aexpression}.
The next proposition shows that the weak formulation is well-defined. Before stating it, we introduce $A^\ve(x,t)$, the solution to
\begin{align}\label{Aepsilon}
A^\ve_t-g(\theta)A^\ve_{xx}+\gamma_1A^\ve=F^{\epsilon}+g'\theta_x^\epsilon J.
\end{align}
Here $F^\epsilon$, defined in \eqref{Fun}, and $\theta_x^\epsilon$ are smooth mollifications, where the mollification of $F$ acts only on $\theta_x$ and $\theta_t$; precisely, $F^\epsilon=F(\theta,\theta_t^\epsilon,\theta_x^\epsilon,v_x)$. The mollification is not needed for $v_x$ since we have shown $\theta\in C^{1/2}(\Omega_T)$ and $v_x\in C^\alpha(\Omega_T)$. Hence, the classical theory discussed in Section \ref{HK} applies.
\begin{prop}\label{propA}
For any $T\in (0,\infty),$ there exists a function $A(x,t)$ such that $$A_x\in L^2\cap L^\infty\cap C^\alpha([0,T],\mathbb {R})$$
and
\begin{align}\label{Aweak}
\int_0^T\int_\mathbb {R} A\phi_t-g(\theta) A_x\phi_x-\gamma_1 A\phi\,dx\,dt=-\int_0^T\int_\mathbb {R} (F+g'(\theta)\theta_x J_0)\phi\,dx\,dt,
\end{align}
for any $\phi\in C^\infty_c(\mathbb {R}\times (0,T)).$ More precisely, the weak formulation \eqref{Aweak} is satisfied as a limit of $A^\ve.$
\end{prop}
\begin{proof}
First, for short, we denote $F^{J^*}$ by $F$ and $\theta^{J^*}$ by $\theta.$
The fixed point $J^*$ gives
\begin{align}\label{Express4A}
A(x,t)=\int_0^t\int_\mathbb {R}\Gamma(x,t;\xi,\tau)\big[F+g'(\theta)\theta_\xi J^*\big]\,d\xi\,d\tau.
\end{align}
Now we show that $A$ satisfies \eqref{Aweak}.
We can write the classical solution $A^\epsilon$ of \eqref{Aepsilon} explicitly as
\begin{align}
A^\epsilon(x,t)=\int_0^t\int_\mathbb {R}\Gamma(x,t;\xi,\tau)\big[F^{\epsilon}+g'\theta_\xi^{\epsilon}J^*\big]\,d\xi\,d\tau.
\end{align}
The weak formulation of the solution to \eqref{Aepsilon} is defined as follows:
\begin{align}\label{eweak}
\int_0^T\int_\mathbb {R} A^\epsilon \phi_t- A^\epsilon_x(g\phi)_x-\gamma_1 A^\epsilon\phi\,dx\,dt=-\int_0^T\int_\mathbb {R} \big[F^{\epsilon}+g'\theta_x^{\epsilon}J^*\big]\phi\,dx\,dt
\end{align}
since $A^\epsilon$ is a classical solution.
The same argument as in Section \ref{J2v} can be applied here to show that $A^\ve, A_x^\ve\to A,A_x$ in the $L^2_{loc}$ sense. Thus,
taking $\epsilon\to 0$ in \eqref{eweak} and using $J^*-J_0=A_x$, we obtain \eqref{Aweak}.
An application of Proposition \ref{mainlemma} shows
$$\displaystyle A_x\in L^\infty\cap L^2\cap C^\alpha([0,T],\mathbb {R}).$$
\end{proof}
\begin{cor}\label{hatweak}
The function $\hat A(x,t):=A(x,t)+A_0(x)$ satisfies the following identity
\begin{align}
\int_0^T\int_\mathbb {R} \hat A \phi_t-g(\theta)\hat A_x\phi_x-\gamma_1 \hat A\phi\,dx\,dt=&-\int_0^T\int_\mathbb {R} \hat F\phi\,dx\,dt
\end{align}
where
\begin{align}\begin{split}
\hat F&=\hat f+\hat G,\\
\hat f&=[\gamma_1-\frac{h^2}{g}]v_x+\frac{h(\theta)c^2(\theta)}{g}\theta_x,\\
\hat G&=\int_{-\infty}^x[\frac{h'}{g}-\frac{g'h}{g^2}]\theta_t^2-[\gamma_1-\frac{h^2}{g}]'\theta_zv_z-(\frac{h(\theta)c(\theta)}{g})'c(\theta)\theta_z^2\,dz.
\end{split}
\end{align}
\end{cor}
\begin{proof}
Using $A=\hat A(x,t)-A_0$ and \eqref{Aweak}, we have
\begin{align}
\int_0^T\int_\mathbb {R} \hat A \phi_t-g(\theta)\hat A_x\phi_x-\gamma_1 \hat A\phi\,dx\,dt=&\int_0^T\int_\mathbb {R} g(\theta) J'_0\phi-\gamma_1 A_0\phi-F\phi\,dx\,dt\nonumber\\
=&-\int_0^T\int_\mathbb {R} \hat F\phi\,dx\,dt\nonumber
\end{align}
\end{proof}
\section{Weak formulations}\label{bdE}
To summarize, from Propositions \ref{propth}, \ref{propv}, \ref{propA} and Corollary \ref{hatweak} we have proved that $(\theta,v,A)$ defines a weak solution in the sense that, for any test function $\phi\in H_0^1((0,T)\times\mathbb {R})$,
\begin{align}\label{thetaweakk}
\int_0^T\int_\mathbb {R}\theta_t\phi_t-\big(\gamma_1-\frac{h^2}{g}\big)\theta_t\phi\,dx\,dt=\int_0^T\int_\mathbb {R} (c(\theta)\theta_x)(c(\theta)\phi)_x+h\hat A_x\phi\,dx\,dt,
\end{align}
\begin{align}\label{vvweak}
\int_0^T\int_\mathbb {R} v\phi_t-v_x(g\phi)_x+h\theta_t\phi\,dx\,dt=0,
\end{align}
\begin{align}\label{AAweak}
\int_0^T\int_\mathbb {R} \hat A \phi_t-g(\theta)\hat A_x\phi_x-\gamma_1 \hat A\phi\,dx\,dt=-\int_0^T\int_\mathbb {R} \hat F\phi\,dx\,dt.
\end{align}
Recall,
\begin{align}\begin{split}
\hat F&=\hat f+\hat G,\\
\hat f&=[\gamma_1-\frac{h^2}{g}]v_x+\frac{h(\theta)c^2(\theta)}{g}\theta_x,\\
\hat G&=\int_{-\infty}^x[\frac{h'}{g}-\frac{g'h}{g^2}]\theta_t^2-[\gamma_1-\frac{h^2}{g}]'\theta_zv_z-(\frac{h(\theta)c(\theta)}{g})'c(\theta)\theta_z^2\,dz.
\end{split}
\end{align}
Now we show that $(u,\theta)$ with $u=v_x$ satisfies the requirement in Definition \ref{def1} for a weak solution.
By \eqref{vvweak}, for any test function $\eta\in C_c^\infty(\Omega_T)$, choosing $\phi=\eta_x$, we have
\begin{align}
\int_0^T\int_\mathbb {R} u\eta_t-(gu_x+h\theta_t)\eta_x\,dx\,dt=0.
\end{align}
Next we establish the relation between $v_t$ and $J$. Precisely, we show that
\begin{align}
J-\frac{v_t}{g}=0\;\mbox{ almost everywhere.}
\end{align}
Taking $\displaystyle\phi=\frac{\psi_t}{g}$ in \eqref{vweak}, we have
\begin{align}\label{0}
\int_0^T\int_\mathbb {R} -v_t\frac{\psi_t}{g}-v_t \psi_{xx}+h\theta_t \frac{\psi_t}{g}\,dx\,dt=0,
\end{align}
which is
\begin{align}\label{1}
\int_0^T\int_\mathbb {R} \frac{v_t}{g}(\psi_t+g\psi_{xx})-(\frac{h\theta_t }{g}) \psi_t\,dx\,dt=0.
\end{align}
By \eqref{thetaweakk}, if we choose $\displaystyle \phi=\frac{h}{g}\psi,$ then
\begin{align}\label{2}\begin{split}
& \int_0^T\int_\mathbb {R}\theta_t\frac{h}{g}\psi_t+(\frac{h}{g})'\theta^2_t\psi-\big(\gamma_1-\frac{h^2}{g}\big)\theta_t\frac{h}{g}\psi\,dx\,dt\\
&\qquad =\int_0^T\int_\mathbb {R} (c\theta_x)(c\frac{h}{g}\psi)_x+hJ\frac{h}{g}\psi\,dx\,dt.
\end{split}
\end{align}
Finally, for \eqref{AAweak}, we choose $\phi=\psi_x$ to get
\begin{align}\label{3}
&\iint -J\psi_t-gJ\psi_{xx}+\gamma_1J\psi\,dx\,dt\\
&\qquad =\iint (\frac{h}{g})'\theta_t^2\psi-[\gamma_1-\frac{h^2}{g}] v_{xx}\psi
-(\frac{hc}{g})'c\theta_x^2\psi-\frac{hc^2}{g}\theta_x\,dx\,dt.\nonumber
\end{align}
Adding \eqref{1}-\eqref{3}, and using that
\[
\iint (\frac{h^2}{g}-\gamma_1)\frac{1}{g}\psi(v_t-gv_{xx}-\theta_t h)\,dx\,dt=0,
\]
we obtain
\begin{align}\label{identity}
\iint (J-\frac{v_t}{g})(\psi_t+g\psi_{xx}+(\frac{h^2}{g}-\gamma_1)\psi)\,dx\,dt=0.
\end{align}
Denote $\displaystyle a:=J-\frac{v_t}{g}.$ Now we show that the weak solution of
\[
a_t-(ag)_{xx}-\beta a=0\; \mbox{ where }\; \beta=\frac{h^2}{g}-\gamma_1
\]
with $a(x,0)=0$, has only the zero solution almost everywhere. To prove this, let $\displaystyle m=ga$, so the problem becomes
\[
m_t-g m_{xx}-(\beta+\frac{g_t}{g})m=0
\]
with $m(x,0)=0$. We need to prove that $m=0$ almost everywhere is the only solution.
Applying the same arguments as in Section \ref{HK}, we obtain an integral formula for the solution and write the solution map as
\[\hat m(x,t)=\int_0^t\int_\mathbb {R}\Gamma(x,t;\xi,\tau)\big(\big[\beta+\frac{g_t}{g}\big]m\big)(\xi,\tau)\,d\xi\,d\tau.\]
Following the same fixed point arguments in Section \ref{FixedPoint}, we have the following estimate for some short time $t$ and some constant $r>0$
\[\|m\|_{L^\infty}<t^r \|m\|_{L^\infty}\]
since $m(x,0)=0$. It follows that $m=0$ a.e.\ when $t$ is small enough, and then for any $t$. Hence,
\[
J=\frac{v_t}{g} \quad \hbox{almost everywhere.}
\]
Furthermore, by \eqref{vvweak} and choosing $\phi=\frac{\varphi}{g}$, we have
\begin{align}\label{4}
\int_0^T\int_\mathbb {R} \frac{v_t}{g}\varphi+u_x\varphi+\frac{h\theta_t }{g}\varphi\,dx\,dt=0.
\end{align}
So
\[
J=\frac{v_t}{g}=u_x+\frac{h\theta_t}{g} \quad \hbox{almost everywhere.}
\]
This also shows that $u_x\in L^\infty([0,T],L^2(\mathbb {R}))$.
|
{
"arxiv_id": "2302.08689",
"language": "en",
"timestamp": "2023-02-20T02:06:43",
"url": "https://arxiv.org/abs/2302.08689",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
Action recognition is an important task in computer vision, widely used in human-robot interaction, video understanding, and intelligent detection. In recent years, skeleton-based action recognition has attracted increasing attention due to its computational efficiency and robustness to background or environmental changes.
\begin{figure}[t]
\centering
\includegraphics[height=1.2\linewidth,width=0.8\linewidth]{fig19.pdf}
\caption{Skeleton representation based on hypergraph. (a) is the spatial topological hypergraph generated by k-means method and different colors indicate different hyperedges. (b) is the time-point hypergraph generated by k-NN and the depth of the colour indicates the importance of time.}
\label{fig01}
\end{figure}
Graph convolution methods are widely used to handle the spatial-temporal topology of the human skeleton. ST-GCN~\cite{DBLP:conf/aaai/YanXL18} proposed a joint segmentation strategy to delineate the joint points. However, a static segmentation strategy of the human body is insufficient to adapt to the wide variety of actions. Therefore, Chen et al.~\cite{2021Channel} proposed CTR-GCN, which dynamically generates new topologies based on sample features. However, the spatial topology used in these methods~\cite{DBLP:conf/aaai/YanXL18,2020Semantics,2021Channel} hardly reflects the unnatural dependencies among multiple joints and does not consider time-point dependencies. Hypergraph convolution-based approaches~\cite{DBLP:journals/tip/HaoLGJY21,wei2021dynamic,zhu2022selective} consider unnatural dependencies beyond the natural topology, but these methods still largely ignore the temporal dependencies between actions. In addition, both hypergraph and graph convolution methods~\cite{2020Semantics,DBLP:conf/icmcs/ChenXJS21,2021Channel} learn spatial and temporal information separately, so the spatial-temporal parallel information of action sequences is ignored.
To solve the above problems, we construct a dynamic spatial-temporal hypergraph convolutional network (DST-HCN) that consists of dynamic spatial-temporal hypergraph convolution (DST-HC) and high-order information fusion (HIF). In DST-HC, we use the k-NN method to generate a dynamic time-point hypergraph (TPH) based on sample features. For the spatial topology, we construct three hypergraph topologies using methods such as k-means. An overview is shown in Fig.~\ref{fig01}. The TPH and the constructed spatial hypergraph topologies are passed through convolution operations to obtain high-order spatial-temporal feature information. In HIF, action modeling is done by aggregating and concatenating the high-order information through multi-feature channel operations. As shown in Fig.~\ref{fig002}, we introduce the TPH to solve the adjacent topological similarity problem. In addition, we use spatial-temporal parallel learning to fuse features efficiently.
In summary, our contributions are mainly in the following areas:
\begin{itemize}
\item We propose the DST-HCN that captures important spatial-temporal feature information based on the constructed multiple hypergraphs. And an effective fusion is done through parallel spatial-temporal modeling to finalize the action recognition task.
\item We introduce the time-point hypergraph. By combining spatial hypergraphs, we construct completable spatial-temporal dependencies.
\item Extensive experiments show that DST-HCN achieves state-of-the-art on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets.
\end{itemize}
\section{RELATED WORK}
\subsection{Skeleton-based Action Recognition}
CNNs have achieved remarkable results in processing Euclidean data such as images, and RNNs have significant advantages in processing time series, so CNN- and RNN-based methods began to emerge~\cite{2016NTU,DBLP:conf/icmcs/ZhangSZ19,xu2022topology}. The advent of graph convolutional networks (GCNs) has significantly improved the processing of non-Euclidean data. Yan et al.~\cite{DBLP:conf/aaai/YanXL18} proposed the ST-GCN model, which better handles the topological relationships of skeletons and achieves higher effectiveness. Chen et al.~\cite{2021Channel} proposed the CTR-GCN method, which dynamically learns joint information in space to achieve channel topology modeling. Song et al.~\cite{2021Constructing} proposed a family of efficient GCN baselines with high accuracy and small numbers of trainable parameters. These methods adopt a strategy of learning spatial and temporal information separately in their model configurations.
\begin{figure}[t]
\centering
\includegraphics[height=1\linewidth,width=0.9\linewidth]{121401.pdf}
\caption{Comparison of our proposed framework with other skeleton recognition methods. (a) is graph convolution method. (b) is hypergraph convolution method. (c) is our proposed method.}
\label{fig002}
\end{figure}
\subsection{Hypergraph Neural Networks}
The graph structure used in traditional graph convolutional networks establishes correspondences between pairs of objects. Hyperedges, in contrast, can connect an arbitrary number of objects to form hypergraphs and thus capture high-order dependencies. Feng et al.~\cite{DBLP:conf/aaai/FengYZJG19} proposed the hypergraph neural network (HGNN) to learn hidden-layer representations that account for high-order data structure. Jo et al.~\cite{DBLP:conf/nips/JoBLKKH21} proposed a pairwise hypergraph transformation method that makes the network focus on the feature information of the edges. Jiang et al.~\cite{jiang2019dynamic} proposed a dynamic hypergraph network (DHNN) to update the hypergraph structure dynamically. In this paper, we use the generic hypergraph convolution described in HGNN~\cite{DBLP:conf/aaai/FengYZJG19}.
\section{METHOD}
\begin{figure*}[htbp]
\centerline{\includegraphics[width=16cm,height = 6cm]{121901.pdf}}
\caption{The framework for our dynamic spatial-temporal hypergraph neural network. DST-HC constructs multiple hypergraphs and then obtains high-order spatial-temporal feature information by convolution operations. The HIF aims at performing spatial-temporal parallel modeling in the channel dimension.}
\label{fig02}
\end{figure*}
\subsection{Preliminaries}
\noindent
{\bf Graph Convolutional Network.} A skeleton sequence consists of $T$ single-frame skeleton graphs, each represented by $G=(V, E)$~\cite{DBLP:conf/aaai/YanXL18}, where $V$ is the set of joints and $E$ is the set of skeletal edges. The skeleton graph is denoted by $A\in R^{N\times N}$, the adjacency matrix over the joints. A conventional graph convolution on skeletal data is:
\begin{align}
X_{i+1}=\sum _{A_{i}\in A_{k}}X_{i}\cdot \widetilde{A_{i}}\cdot M_{i},
\end{align}
where $X_{i}\in R^{C_{in}\times T\times V}$ denotes the input features of the $i$-th layer of the network, $C_{in}$ is the number of channels, $\widetilde{A_{i}}$ denotes the normalized adjacency matrix of one spatial partition, $A_{k}$ denotes the spatial skeleton segmentation strategy~\cite{DBLP:conf/aaai/YanXL18}, and $M_{i}$ denotes the learnable weights.
\subsection{Dynamic Spatial-temporal Hypergraph Convolution (DST-HC)}
\label{BB}
\noindent
{\bf Hypergraph Construction.} Unlike the graph convolution method, the hypergraph structure is represented by $g=(v,d,w)$, where $v$ denotes the set of joints, $d$ is the set of hyperedges represented by the incidence matrix, and $w$ is the set of hyperedge weights. For the incidence matrix $H\in R^{V\times N} $, $V$ is the set of joints and $N$ denotes the number of hyperedges. For each hyperedge, $h(v,e)=1$ means the joint belongs to this hyperedge, and $h(v,e)=0$ means it does not.
In the dynamic time-point hypergraph construction, we apply the k-NN method to the features after a linear transformation and dimensionality reduction, which reduces the computational cost. As shown in Fig.~\ref{fig02}, the hypergraph $H_{T}\in R^{T\times T}$ is formulated as:
\begin{align}
H_{T} =TH(\delta(\varepsilon(X)),k),
\end{align}
where $k$ denotes the number of neighbors of each hyperedge, $X\in R^{C_{in}\times T\times V} $ denotes the input features, $\varepsilon(\cdot)$ is the linear transformation dimensionality reduction function, $\delta(\cdot)$ is the dimensionality transformation function, and $TH(\cdot)$ denotes the k-NN function.
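For concreteness, a minimal sketch of this construction for a single sample is given below; the function name, the reduced feature dimension, and the random projection standing in for the learned reduction $\varepsilon(\cdot)$ are illustrative assumptions only.
\begin{verbatim}
import numpy as np

def time_point_hypergraph(x, k):
    # x: (C, T, V) features of one sample; k: neighbours per hyperedge.
    C, T, V = x.shape
    # epsilon(.): linear dimensionality reduction; a fixed random projection
    # is used here purely for illustration (the model learns this mapping).
    W = np.random.randn(C * V, 8) / np.sqrt(C * V)
    feat = x.transpose(1, 0, 2).reshape(T, C * V) @ W   # delta(.): (T, d)
    # TH(.): k-NN in the reduced space; column t of H_T is the hyperedge
    # centred at frame t and contains its k nearest frames.
    dist = np.linalg.norm(feat[:, None] - feat[None, :], axis=-1)
    H_T = np.zeros((T, T))
    for t in range(T):
        H_T[np.argsort(dist[t])[:k], t] = 1.0
    return H_T
\end{verbatim}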
We use k-NN, k-means, and the combination of centrifugal and centripetal joints~\cite{DBLP:conf/aaai/YanXL18} to construct the spatial hypergraph matrix, as shown in Fig.~\ref{fig02}. The spatial hypergraph method is formulated as:
\begin{align}
H_{N} =SH(A),
\end{align}
where $SH(\cdot)$ are the three methods mentioned above, and $A$ is the topological adjacency matrix.
Further, we construct two spatial-temporal dependencies $H_{ST}\in R^{V\times T}, H_{TS}\in R^{T\times V}$ based on the above hypergraph. As shown in the following equation:
\begin{align}
H_{ST}/H_{TS}=\Phi ((H_{N},H_{T})/(H_{T},H_{N})),
\end{align}
where $\Phi(a,b)=tanh (\mu(a)-\varphi(b))$, and $\mu(\cdot)$, $\varphi(\cdot)$ denotes two dimensional transformation operation functions.
\noindent
{\bf Hypergraph Convolution.} We use a transposition method so that the constructed hypergraph structures can additionally learn hyperedge features~\cite{DBLP:conf/nips/JoBLKKH21}. We use the following hypergraph convolution to capture important high-order feature information~\cite{DBLP:journals/tip/HaoLGJY21}:
\begin{align}
HC^{T}(H_{i})=X_{i}\cdot (\widetilde{H_{i}}+\widetilde{H^{T} _{i}})\cdot \Theta _{i},
\end{align}
where $\Theta$ represents the learnable weights, implemented in this paper by different convolution methods, $H_{i}$ is a hypergraph from one of the constructions above, $\widetilde{H_{i}}=D_{v}^{-1/2} H_{i}WD_{e}^{-1}H_{i}^{T} D_{v}^{-1/2}$~\cite{DBLP:conf/aaai/FengYZJG19}, $D_{v},D_{e}$ are the diagonal matrices of node degrees and hyperedge degrees respectively, $W$ is a diagonal matrix initialized to the identity, and $X_{i}\in R^{C_{in}\times T\times V}$ is the input feature.
In addition, $HC(H_{i})$ denotes the case where the hyperedge features are not considered:
\begin{align}
HC(H_{i})=X_{i}\cdot \widetilde{H_{i}}\cdot \Theta _{i}.
\end{align}
Finally, we obtain the following features: $HC^{T}(H_{N})$, $HC^{T}(H_{T})$, $HC(A)$, $HC(H_{ST})$, $HC(H_{TS})$, denoted by $A^{'}-E^{'}$, respectively. The implementation process is shown in Fig.~\ref{fig02}.
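To make the operation concrete, a simplified sketch of the convolution in Eq.~(5) is given below for a square spatial incidence matrix; reducing the learnable weights $\Theta$ to a single channel-mixing matrix is an illustrative simplification of the convolutional implementation.
\begin{verbatim}
import numpy as np

def hypergraph_conv(X, H, Theta, use_transpose=True):
    # X: (C_in, T, V) input features; H: (V, N) incidence matrix;
    # Theta: (C_in, C_out) stand-in for the learnable convolution weights.
    V, N = H.shape
    W = np.eye(N)                                        # hyperedge weights (identity init)
    Dv = np.diag(1.0 / np.sqrt((H @ W).sum(1) + 1e-6))   # node degrees
    De = np.diag(1.0 / (H.sum(0) + 1e-6))                # hyperedge degrees
    H_tilde = Dv @ H @ W @ De @ H.T @ Dv                 # normalised hypergraph (V, V)
    A = H_tilde + H_tilde.T if use_transpose else H_tilde
    Y = np.einsum('ctv,vu->ctu', X, A)                   # aggregate over joints
    return np.einsum('ctv,cd->dtv', Y, Theta)            # channel mixing by Theta
\end{verbatim}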
\subsection{High-order Information Fusion (HIF)}
\label{CC}
With the multiple pieces of high-order information obtained from the convolution operations, we construct the HIF module for parallel modeling. The implementation is shown in Fig.~\ref{fig02}. First, we use temporal features to refine spatial features and obtain high-order joint information. Then, spatial-temporal aggregation in the channel dimension and channel concatenation operations capture the relationships between features. The final output is $F_{out}\in R^{C_{out}\times T\times V}$.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth,height = 0.9\linewidth]{ke1.png}
\caption{(a), (c) Spatial-temporal features of the TPH and k-means spatial topology. (b), (d) Spatial-temporal features of the TPH and k-NN spatial topology.}
\label{fig11}
\end{figure}
Specifically, to avoid the loss of the original topology information, we use the original natural joint topology as an auxiliary learning component. It is combined with spatial hypergraph features to learn more connected joint features of the action. As indicated below:
\begin{align}
Y_{2}=(HC^{T}(H_{N})+HC(A))/\beta,
\end{align}
where $\beta$ is a learnable weight.
\subsection{Model Architecture}
\label{DD}
We use a multi-scale convolution, TF, with an attention mechanism~\cite{wang2022skeleton,2021Channel} to extract information at different scales. As expressed in the following equation:
\begin{align}
Z_{out} =TF\left ( F_{out1}+F_{out2}+F_{out3} \right ).
\end{align}
$Z_{out}\in R^{C_{out}\times T\times V}$ is the output of one block of the dynamic spatial-temporal hypergraph convolutional network. The network consists of ten such blocks. $F_{out1}$, $F_{out2}$, and $F_{out3}$ differ in that they are computed from three different spatial hypergraph topologies.
\section{Experiments}
\subsection{Datasets}
\label{11}
\noindent
{\bf Northwestern-UCLA.} The dataset~\cite{DBLP:journals/corr/WangNXWZ14} is extracted from 1494 videos with ten types of actions. The training set comes from the first two Kinect cameras and the test set from the remaining camera.
\noindent
{\bf NTU RGB+D}. NTU RGB+D~\cite{2016NTU} is a large and widely used action recognition dataset obtained from three different views using the Kinect v2 sensor and contains 60 action classes. Evaluation benchmarks: Cross-Subject (X-Sub) and Cross-View (X-View).
\noindent
{\bf NTU RGB+D 120.} NTU RGB+D 120~\cite{2020NTU} is an enhanced version of the NTU RGB+D dataset, with the number of action categories increased to 120. Evaluated using two division criteria: Cross-Subject (X-Sub) and Cross-Setup (X-Setup).
\subsection{Implementation Details}
\label{22}
Our experiments use cross entropy as the loss function and run on three Tesla M40 GPUs. In addition, we use dual correlation channel data, obtained by subtracting two channels, to aid fusion learning. The experiments use stochastic gradient descent (SGD). The weight decay is set to 0.0004 and the Nesterov momentum is 0.9. Training lasts 90 epochs; the first five epochs use a warm-up strategy, and we apply cosine annealing for decay~\cite{loshchilov2016sgdr}. For the NTU-RGB+D~\cite{2016NTU} and NTU-RGB+D 120 datasets~\cite{2020NTU}, the batch size is set to 64. For the Northwestern-UCLA dataset~\cite{DBLP:journals/corr/WangNXWZ14}, we set the batch size to 16.
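For clarity, the learning rate schedule can be sketched as follows; the base learning rate is an assumed value, as it is not specified above.
\begin{verbatim}
import math

def learning_rate(epoch, base_lr=0.1, warmup_epochs=5, total_epochs=90):
    # Linear warm-up for the first five epochs, then cosine annealing decay.
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
\end{verbatim}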
\begin{table}[t]
\centering
\caption{The ablation study of components.}
\setlength{\tabcolsep}{1.5mm}
\label{tab:34}
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
Methods &FLOPs&Param. & Acc($\%$)\\
\noalign{\smallskip}\hline\noalign{\smallskip}
DST-HCN w/o C,H,G&3.05G&2.71M&89.5\\
DST-HCN w/o C,H&3.50G&2.93M&90.1 ($\uparrow$ 0.6$\%$)\\
DST-HCN w/o C&3.50G&2.93M&90.4 ($\uparrow$ 0.9$\%$)&\\
DST-HCN&3.50G&2.93M&90.9 ($\uparrow$ 1.4$\%$)\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsection{Ablation Study}
Unless otherwise stated, we analyze our method on the X-sub benchmark of the NTU RGB+D dataset.
\begin{table*}[ht]
\centering
\caption{Comparison of recognition accuracy between the hypergraph and the graph network on confusing actions. The experiments are conducted on the X-sub benchmark of the NTU RGB+D 120 dataset, where ``wield knife'' stands for ``wield knife towards other person'' and ``hit other'' stands for ``hit other person with something''.}
\label{confu}
\setlength{\tabcolsep}{1.5mm}{
\begin{tabular}{llllll}
\hline\noalign{\smallskip}
Action &Ens Acc($\%$)&Similar Action &Ours Acc($\%$)&Acc($\%$)$\uparrow$&Similar Action\\
\noalign{\smallskip}\hline\noalign{\smallskip}
writing&48.90&typing on a keyboard&60.66&11.76&typing on a keyboard\\
wield knife&62.50&hit other &65.80&3.30&hit other \\
blow nose&59.65&yawn&64.52&4.87&yawn\\
fold paper &62.78&ball up paper &66.96&4.18&counting money\\
yawn &67.65&hush (quite)&70.26&2.61&hush (quite)\\
snapping fingers &65.51&shake fist&71.43&5.92&make victory sign\\
sneeze/cough&71.74&touch head (headache) &75.72&3.98&touch head (headache) \\
apply cream on hand back&72.30& open bottle &75.26&2.96& rub two hands together \\
\noalign{\smallskip}\hline
\end{tabular}}
\end{table*}
\begin{table*}[ht]
\centering
\caption{ Classification accuracy comparison with state-of-the-art methods on the three benchmarks.}
\setlength{\tabcolsep}{3mm}
\label{1}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{Type}&
\multirow{2}{*}{Method} &
\multicolumn{2}{c|}{NTU RGB+D} &
\multicolumn{2}{c|}{NTU RGB+D 120} &
\multicolumn{1}{c}{NW-UCLA} \\
\cline{3-7}
& &X-Sub ($\%$) &X-View ($\%$)&X-Sub ($\%$)&X-Set ($\%$) &- ($\%$)\\
\hline
\multirow{8}{*}{GCN,CNN}&
ST-GCN~\cite{DBLP:conf/aaai/YanXL18}&81.5&88.3&70.7&73.2&-\\
&2s-AGCN~\cite{2018Two}&88.5&95.1&82.5&84.2&-\\
&Shift-GCN~\cite{DBLP:conf/cvpr/Cheng0HC0L20}&90.7&96.5&85.9&87.6&94.6\\
&SGN~\cite{2020Semantics}&89.0&94.5&79.2&81.5&92.5\\
&CTR-GCN~\cite{2021Channel}&{\bf 92.4}& {\bf96.8}&{\bf88.9}&\underline{90.6}&\underline{96.5}\\
&Ta-CNN~\cite{xu2022topology}&90.4&94.8&85.4&86.8&96.1\\
&EfficientGCN-B4~\cite{2021Constructing}&92.1&96.1&88.7&88.9&-\\
&AGE-Ens~\cite{DBLP:journals/corr/abs-2105-01563}&92.1&96.1&88.2&89.2&-\\
\hline
\multirow{4}{*}{HypGCN}&
Hyper-GNN~\cite{DBLP:journals/tip/HaoLGJY21} &89.5&95.7&-&-&-\\
&DHGCN~\cite{wei2021dynamic}&90.7&96.0&86.0&87.9&-\\
&Selective-HCN~\cite{zhu2022selective}&90.8&\underline{96.6}&-&-&-\\
\cline{2-7}
&DST-HCN(Ours)&\underline{92.3}&{\bf 96.8}& \underline{88.8}&{\bf 90.7}&{\bf 96.6}\\
\hline
\end{tabular}
\end{table*}
\noindent
{\bf Visualization experiments.} We demonstrate the spatial-temporal features of the two topologies of an action sample "putting on glasses" in Fig.~\ref{fig11}. (a), (b) are the output of the HIF module "$C^{'}$" in Fig.~\ref{fig02}. (c), (d) are the output of HIF module "$F_{out}$". The coordinates are time and joints respectively.
In Fig.~\ref{fig11} (a), (b), we observe that different combinations of hypergraphs are able to learn different spatial-temporal information. In addition, (a) focuses more on information from the elbow joint and (b) focuses on feature information of the hand, suggesting that these joints are more discriminative overall. In Fig.~\ref{fig11} (c), (d), further changes in the joint features occur and the joints are differentiated in the temporal dimension. This shows that the channel learning method can effectively perform information fusion.
\noindent
{\bf Ablation experiments of different components.} We experimentally validate the importance of the components, where w/o G, H, and C denote the lack of the original natural topology, the lack of the original topology as a hypergraph structure, and the lack of cosine annealing, respectively. Table~\ref{tab:34} shows that the model without cosine annealing suffers a performance drop of 0.5$\%$. The result drops by 0.6$\%$ without the original graph structure, which shows that learning unnatural joint connections requires certain prior knowledge of the natural topology. In addition, using the original topology as a hypergraph structure improves the result by a margin of 0.3$\%$, reflecting the advantage of the hypergraph structure for action recognition.
\noindent
{\bf The comparison of confusing actions.} We compare DST-HCN with the fine-grained recognition graph convolution method~\cite{DBLP:journals/corr/abs-2105-01563} in terms of fine-grained action recognition performance. As shown in Table~\ref{confu}, the recognition accuracy of all relevant actions is improved, especially for the writing and snapping-fingers actions, by margins of 11.76$\%$ and 5.92$\%$, respectively. This indicates that our model can handle unnaturally dependent spatial-temporal variation features well.
\subsection{Comparisons With the State-of-the-Art Methods}
\label{33}
This section compares our dynamic spatial-temporal hypergraph neural network with state-of-the-art methods. For a fair comparison, we use four streams of fusion results, considering joint, bone, and motion data stream forms~\cite{2021Channel, DBLP:conf/cvpr/Cheng0HC0L20,2018Two}. We use different fusion parameters~\cite{wang2022skeleton} for different data to accommodate the diversity of varying stream forms.
The results are shown in Table~\ref{1}. It can be seen that our model exhibits comparable or better performance than the state-of-the-art approach for the three datasets.
Compared to the hypergraph methods, our proposed method reaches state-of-the-art accuracy among hypergraph approaches, surpassing the best previous ones by margins of 1.5$\%$ and 2.8$\%$ on the X-Sub benchmarks of NTU RGB+D and NTU RGB+D 120, respectively, which fully illustrates the significant advantage of our method.
\section{Conclusion}
This paper proposes a dynamic spatial-temporal hypergraph neural network that contains dynamic spatial-temporal hypergraph convolution and high-order information fusion modules. The method presents a variety of hypergraph structures, such as a time-point hypergraph. The method simultaneously implements parallel spatial-temporal modeling. Finally, we validate the model on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets.
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.08662",
"language": "en",
"timestamp": "2023-02-20T02:05:47",
"url": "https://arxiv.org/abs/2302.08662",
"yymm": "2302"
} | \section{Introduction}
Automatic image cropping is an effective technology to improve the composition quality of input images. As one of the key components of computational imaging, it has attracted wide research interest~\cite{chen2003visual,islam2017survey}.
Provided human-annotated cropping boxes~\cite{wang2018deep,zeng2019reliable}, automatic cropping can be formalized as a cropping box regression problem. However, as regression models are easily affected by the bias of the training set~\cite{yang2021delving}, recent regression-based approaches~\cite{hong2021composing,pan2021robust} are dominated by the normal samples and tend to render quasi-trivial cropping boxes (Fig.~\ref{fig:fig1}(a)). The quasi-trivial cropping boxes can be described as central cropping boxes whose boundaries locate near the average location of the training samples. Since the quasi-trivial cropping boxes merely inherit the location bias of the annotated cropping boundaries and ignore the composition patterns of the input images, the improvement in composition quality is small, which limits real-world application. Trying to generate nontrivial cropping boxes, previous methods take extra data with additional annotations, {\emph{e.g.}}, a composition classification dataset~\cite{lee2018photographic}, into account. However, the predicament is hardly addressed. Therefore, we try to reveal the intrinsic cause of the quasi-trivial solution. As shown in Fig.~\ref{fig:motivation_analysis}, by analysing the CACNet baseline~\cite{hong2021composing}, we illustrate the cause of the quasi-trivial solution: \emph{dominated by the normal samples, the composition patterns of rare samples are barely exploited}. Hence, we argue to find the beauty of composition by taking full advantage of rare samples rather than by introducing extra data.
In this paper, we probe into the problem: \emph{how to better exploit the composition patterns of the rare training samples?} Similar problems have been studied for decades by imbalanced learning methods~\cite{branco2017smogn, lin2017focal,yang2021delving}, whose main idea is re-weighting. However, directly adapting these methods to the cropping box regression task can barely improve the composition quality. Therefore, we approach this problem from another view, which is to enhance the composition representations of the rare samples by looking for connections between samples. As illustrated in Fig.~\ref{fig:fig1}(c), within the same row, where the annotated top cropping boundaries lie at similar relative locations, the visual focuses are also located similarly; across different rows, the focuses are located differently. Based on this observation, we make a hypothesis: \emph{cropping boundaries with close relative location annotations share similar composition patterns and vice versa}. Based on this, we make full use of the relationships between samples, which especially benefits the rare ones.
Inspired by contrastive learning~\cite{hadsell2006dimensionality,chen2020simple,he2020momentum} and deep clustering works~\cite{zhou2022comprehensive,10.1007/978-3-031-19818-2_14}, we propose Contrastive Composition Clustering (\textbf{C2C}) to regularize the composition features based on their regression target distance. Since our hypothesis is formulated on cropping boundaries, each image has four composition features, one depicting each cropping boundary. The optimization targets of C2C are: (1) drawing features of samples with similar annotations closer, and (2) widening the feature distance of negative pairs whose annotated boundaries are located far away from each other. When C2C is performed within a mini-batch, we calculate a distance map to select positive and negative pairs dynamically. Based on the distance map, after breaking the contrastive loss into alignment and uniformity losses~\cite{wang2020understanding}, the contrastive clustering procedure can successfully work in our regression scenario. As illustrated in Fig.~\ref{fig:fig1}(a) and (b), benefiting from the proposed C2C, our solution can render nontrivial cropping boxes.
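To illustrate the idea, the following is a schematic sketch of how such a batch-wise objective could be instantiated for a single boundary; the function name, thresholds, temperature, and exact loss form are illustrative assumptions and do not reproduce the precise formulation of C2C.
\begin{verbatim}
import torch
import torch.nn.functional as F

def c2c_loss_sketch(feats, targets, pos_thr=0.05, neg_thr=0.2, t=0.5):
    # feats: (B, D) composition features of one boundary within a mini-batch;
    # targets: (B,) normalized boundary locations (the regression targets).
    feats = F.normalize(feats, dim=1)
    dist = (targets[:, None] - targets[None, :]).abs()     # target distance map
    eye = torch.eye(len(targets), device=feats.device)
    pos = (dist < pos_thr).float() - eye                   # positive pairs
    neg = (dist > neg_thr).float()                         # negative pairs
    sim = feats @ feats.t()                                # cosine similarity
    align = ((1.0 - sim) * pos).sum() / pos.sum().clamp(min=1.0)        # pull
    unif = (torch.exp(sim / t) * neg).sum() / neg.sum().clamp(min=1.0)  # push
    return align + torch.log(unif.clamp(min=1e-6))
\end{verbatim}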
Extensive experiments on public benchmarks show that our solution outperforms state-of-the-art regression-based approaches even without extra data. Visualizations of the cropping results show that, with C2C, our model hardly collapses into quasi-trivial solutions. Compared with other imbalanced learning baselines, C2C also shows superiority in both performance and time efficiency. For the first time, without external data, we demonstrate state-of-the-art image cropping with only a deep clustering procedure.
\section{Related Work}
\subsection{Image Cropping}
Conventional image cropping algorithms~\cite{chen2003visual,suh2003automatic,marchesotti2009framework,cheng2010learning,greco2013saliency} formulate the composition patterns with predefined rules~\cite{zhang2005auto,nishiyama2009sensation,yan2013learning,fang2014automatic}. However, the performance of these classic algorithms is far from satisfactory. Recently, data-driven algorithms have achieved significant improvements. They are driven by two main ideas: candidate box selection and cropping box regression.
\textbf{Candidate box selection algorithms} follow a two-stage pipeline. First, candidate boxes are generated according to prior knowledge. Then, the candidate boxes are ranked based on
the learned composition aesthetic knowledge. The knowledge can be modeled by saliency detection~\cite{wang2018deep,tu2020image}, teacher-student architecture~\cite{wei2018good}, region of interest and discard~\cite{zeng2019reliable,zeng2020grid}, view-wise mutual relation~\cite{li2020composing}, visual elements dependencies~\cite{pan2021transview,9745054} and view-wise difference\cite{pan2022discriminate}. But the results
are dependent on the candidates. To overcome this limitation, some works attempt to find end-to-end solutions.
\textbf{Cropping box regression algorithms} are end-to-end solutions that imitate the process of human cropping. Reinforcement learning algorithms estimate the final cropping box step by step~\cite{li2018a2,li2019fast}. Other algorithms predict the cropping results according to a salient cluster~\cite{pan2021robust}, or a composition class~\cite{hong2021composing}. The common deficiencies of these algorithms are the need for extra data and the problem of degenerating into quasi-trivial solutions. In contrast to these approaches, our work can render nontrivial cropping boxes without extra data.
\vspace{-5pt}
\subsection{Contrastive Learning and Deep Clustering}
Motivated by the InfoMax principle~\cite{linsker1988self}, contrastive learning aims to learn representations by contrasting positive pairs against negative pairs~\cite{hadsell2006dimensionality,dosovitskiy2014discriminative}. The pioneering works introduce a memory bank to store representation vectors and update them by contrasting~\cite{wu2018unsupervised,zhuang2019local,he2020momentum}. Some other works contrast within batches~\cite{doersch2017multi,ji2019invariant,ye2019unsupervised,chen2020simple}. Recently, Wang~{\emph{et al.}}~\cite{wang2020understanding} characterized contrastive learning by alignment and uniformity.
While deep clustering~\cite{zhou2022comprehensive} concerns the joint optimization of clustering and deep representation learning, some works~\cite{ling2022vision,deng2022strongly,chen2022design} introduce the idea of contrasting into deep clustering.
In this work, we also conduct composition feature clustering based on the idea of contrasting and combine the clustering procedure into regular training process, which helps to get a better composition representation.
\vspace{-5pt}
\subsection{Imbalanced Learning}
Confronted with training data bias, a naive idea is to re-weight the loss with the frequency of samples~\cite{lin2017focal}. Some other works try to create new samples to balance the original distribution~\cite{torgo2013smote,branco2017smogn} or to decouple the encoder and decoder~\cite{kang2019decoupling}. Recently, Yang~{\emph{et al.}}~\cite{yang2021delving} proposed label and feature distribution smoothing to train the regressor unbiasedly. We directly adapt these methods to our problem as baselines and compare our C2C with them in the experiment section.
\label{sec:motivation}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth]{Left_analyse_1.pdf}
\vspace{-10pt}
\caption{Statistical analysis of the left cropping boundaries generated from CACNet vanilla regression baseline. (a) The test results on the training set. (b) The test results on the validation set. (c) The error distribution on the training set. (d) The error distribution on the validation set. (e) The location distribution on the training set. (f) The location distribution on the validation set. ``prd" means the prediction, ``tgt" denotes the target. Best viewed in color.}
\vspace{-10pt}
\label{fig:motivation_analysis}
\end{figure*}
\section{Problem and Approach}
\subsection{Quasi-Trivial Solution}
We explain what the quasi-trivial cropping solution is and illustrate its cause by analyzing the recomposed results of the vanilla regression baseline CACNet~\cite{hong2021composing}. To keep the analysis unaffected by extra data, the composition classification module is dropped.
Statistics of the left boundary are shown in Fig.~\ref{fig:motivation_analysis} (results of the other boundaries follow similar distributions).
The red and blue points in Fig.~\ref{fig:motivation_analysis}(a-b) show the predicted and target left boundary locations, respectively. The location is normalized into $[0,1]$, where $0$ represents the leftmost location and $1$ represents the rightmost location. The ordinal numbers of samples are indicated by the horizontal coordinate. The error bar on each target point, colored in light blue, indicates the absolute value of the prediction error on this sample. From Fig.~\ref{fig:motivation_analysis}(b), on the validation set, we observe that samples whose targets are far away from the mean location tend to have larger prediction errors, which means that the baseline tends to locate the boundary at an approximately average location for an unseen image. Therefore, we compare the baseline with the mean location, which serves as a trivial solution. As illustrated in Fig.~\ref{fig:motivation_analysis}(c-d), when the vertical and horizontal coordinates of each point represent the prediction error and the corresponding target location respectively, the trivial solution behaves as two symmetrical lines with the mean location as the axis of symmetry. From Fig.~\ref{fig:motivation_analysis}(d), on the validation set, we find that the baseline behaves extremely similarly to the trivial solution. We call this the quasi-trivial solution of this boundary. By plotting the location distribution of the targets and predicted results, we find that the normal training samples, which account for most of the data, locate near the mean location (Fig.~\ref{fig:motivation_analysis}(e)); hence the baseline tends to predict around the mean location for an unseen image (Fig.~\ref{fig:motivation_analysis}(f)). This means that the quasi-trivial solution comes from the fact that the normal samples overwhelm the rare samples. Hence, we try to better exploit the composition patterns of the rare samples.
\subsection{Cropping Boundary Locating Network}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth]{pipeline.pdf}
\vspace{-10pt}
\caption{The pipeline of the cropping boundary locating network. The multi-scale features extracted from the input image are used to encode the horizontal and vertical features. Then, the horizontal and vertical features are split into boundary features. The final cropping boxes are predicted based on the four boundary features.}
\vspace{-10pt}
\label{fig:pipeline}
\end{figure*}
As the imbalanced distributions of the four cropping boundaries have different levels and forms, the whole cropping box regression problem is divided into four boundary locating subtasks. Inspired by SABL~\cite{wang2020side}, we build the cropping boundary locating network (CBLNet). As shown in Fig.~\ref{fig:pipeline}, CBLNet contains three main components: a convolutional backbone, a boundary feature encoder that builds the boundary features, and a group of regression heads that predict the locations of the four cropping boundaries.
\textbf{Backbone.} A MobileNetV2-based~\cite{sandler2018mobilenetv2} backbone that fuses multi-scale features~\cite{zeng2019reliable,zeng2020grid} is employed. It generates the multi-scale feature map $\boldmath{\mathcal{F}}\in\vmathbb{R}^{C\times H\times W}$ from the original image, where $C$ is the channel dimensionality, and $H$ and $W$ represent the height and width of the feature map.
\textbf{Boundary feature encoder.} The extracted features $\boldmath\mathcal{F}$ are processed by two multi-head attention modules to encode the horizontal and vertical information. The multi-head attention module is implemented by $m$ $1\times1$ convolutions, where $m$ is the number of heads. The horizontal feature map $\boldmath{\mathcal{H}} \in \vmathbb{R}^{mC \times H \times W}$ and vertical feature map $\boldmath{\mathcal{V}} \in \vmathbb{R}^{mC \times H \times W}$ can be computed by
\begin{gather}
\label{equ:attention}
\boldmath{\mathcal{H}} = \sigma_{y}(\xi(\tau_{0}^{h}(\boldmath{\mathcal{F}})*\boldmath{\mathcal{F}}, \ldots, \tau_{m-1}^{h}(\boldmath{\mathcal{F}})*\boldmath{\mathcal{F}}))\,,\\
\boldmath{\mathcal{V}} = \sigma_{x}(\xi(\tau_{0}^{v}(\boldmath{\mathcal{F}})*\boldmath{\mathcal{F}}, \ldots, \tau_{m-1}^{v}(\boldmath{\mathcal{F}})*\boldmath{\mathcal{F}}))\,,
\end{gather}
where
$\sigma_y(\cdot)$ and $\sigma_x(\cdot)$ are the normalizing operators along Y and X axis, respectively, $\xi$ is the concatenating operator along the channel axis,
$\tau_{i}^{h}(\cdot)$ and $\tau_{i}^{v}(\cdot)$ represent the $i$-th head $1\times1$ convolution of the horizontal and vertical attention modules, respectively.
After the dimension reduction and the sum operations along the Y and X axes, $\boldmath{\mathcal{H}}$ and $\boldmath{\mathcal{V}}$ are transformed into the horizontal vector $\vect h \in \vmathbb{R}^{C \times W}$ and the vertical vector $\vect v \in \vmathbb{R}^{C \times H}$, respectively. Then, $\vect h$ and $\vect v$ are split into boundary features $\vect x^l,\vect x^r \in \vmathbb{R}^{C\times W/2}$ and $\vect x^t,\vect x^b \in \vmathbb{R}^{C\times H/2}$, which are the composition features for the left, right, top, and bottom cropping boundaries, respectively.
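To make the data flow of the encoder concrete, a minimal PyTorch sketch is given below. The module and variable names, the use of softmax for the normalizing operators $\sigma_y$/$\sigma_x$, and the $1\times1$ dimension-reduction convolutions are assumptions; only the overall structure follows Eq.~(\ref{equ:attention}) and the splitting described above.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryFeatureEncoder(nn.Module):
    def __init__(self, C=256, m=6):
        super().__init__()
        # m attention heads per direction, each a 1x1 convolution (tau_i^h / tau_i^v)
        self.heads_h = nn.ModuleList([nn.Conv2d(C, C, 1) for _ in range(m)])
        self.heads_v = nn.ModuleList([nn.Conv2d(C, C, 1) for _ in range(m)])
        # assumed 1x1 convolutions reducing m*C channels back to C
        self.reduce_h = nn.Conv2d(m * C, C, 1)
        self.reduce_v = nn.Conv2d(m * C, C, 1)

    def forward(self, feat):                       # feat: (B, C, H, W)
        # xi(.): concatenate the m attended maps along the channel axis
        H_map = torch.cat([t(feat) * feat for t in self.heads_h], dim=1)
        V_map = torch.cat([t(feat) * feat for t in self.heads_v], dim=1)
        # sigma_y / sigma_x: normalize along the Y and X axes (softmax assumed)
        H_map = F.softmax(H_map, dim=2)
        V_map = F.softmax(V_map, dim=3)
        # dimension reduction, then sum along Y (for h) and along X (for v)
        h = self.reduce_h(H_map).sum(dim=2)        # (B, C, W) horizontal vector
        v = self.reduce_v(V_map).sum(dim=3)        # (B, C, H) vertical vector
        # split into left/right and top/bottom boundary features
        x_l, x_r = h.chunk(2, dim=2)               # (B, C, W/2) each
        x_t, x_b = v.chunk(2, dim=2)               # (B, C, H/2) each
        return x_l, x_r, x_t, x_b
\end{verbatim}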
\textbf{Regression head.}
The normalized locations of the four cropping boundaries w.r.t.\ the input image are predicted by a pooling operator and a $3$-layer fully-connected network with the $\tt sigmoid$ activation function. All regression heads are supervised with the $\ell_1$ loss.
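A minimal sketch of one regression head follows; average pooling over the spatial axis and the hidden layer width are assumptions, as the exact pooling operator and layer sizes are not specified here.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryRegressionHead(nn.Module):
    def __init__(self, C=256, hidden=256):
        super().__init__()
        # 3-layer fully-connected network ending in a scalar location
        self.mlp = nn.Sequential(
            nn.Linear(C, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):              # x: (B, C, W/2) or (B, C, H/2)
        pooled = x.mean(dim=2)         # pooling operator (average pooling assumed)
        return torch.sigmoid(self.mlp(pooled)).squeeze(-1)  # location in [0, 1]

# Each head is supervised with the l1 loss against the normalized target,
# e.g. loss = F.l1_loss(head(x_l), target_left)
\end{verbatim}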
\subsection{Contrastive Composition Clustering}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{contrastivepipeline.pdf}
\vspace{-5pt}
\caption{Contrastive composition clustering. C2C regularizes the features of the cropping boundaries. Taking the learning process of bottom boundary features as an example: the boundary features are $\ell_2$-normalized to the unit hypersphere. The pairwise distances on the unit hypersphere are maximized or minimized conditioned on the pairwise distances of their ground truth boundaries. After training, $\ell_2$ normalization is removed.}
\vspace{-10pt}
\label{fig:contrast}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth]{QualitativeComparison.pdf}
\vspace{-8pt}
\caption{Qualitative comparison with image cropping models. Redundancy in blue boxes violates the composition.}
\vspace{-10pt}
\label{fig:comparison}
\end{figure*}
After dividing the whole cropping box regression into four sub-regressions, composition features w.r.t.\ the four cropping boundaries are obtained. Based on our hypothesis that similar cropping boundaries share common knowledge while dissimilar ones tend to have different composition patterns, it is intuitive to minimize the distance between similar samples and to maximize the distance between dissimilar samples in the composition feature space. As illustrated in Fig.~\ref{fig:contrast}, inspired by contrastive learning algorithms~\cite{wu2018unsupervised,zhuang2019local,he2020momentum}, we propose Contrastive Composition Clustering (C2C).
C2C introduces the idea of contrastive learning into our regression scenario by setting positive and negative pair thresholds, {\emph{i.e.}}, $z_p$ and $z_n$. A positive (negative) pair refers to a composition feature pair whose boundary annotations are located near (far away from) each other. In this setting, the positive and negative pairs are not fixed across batches. Hence, we structure the pairs with a distance map, which is used to calculate the positive and negative pair matrices $\mathcal{P}$ and $\mathcal{N}$. Inspired by \cite{wang2020understanding}, which identifies two properties of the contrastive loss, alignment and uniformity, we adapt the alignment and uniformity losses to our C2C. The alignment loss clusters positive pairs, and the uniformity loss discriminates between the elements of negative pairs. In our approach, the regression alignment loss and uniformity loss take the form
\begin{gather}
\label{equ:contrastiveloss}
L_{align} = \sum_{d=l,r,t,b}\underset{0\le i < j\le N}{\vmathbb{E}}\mathcal{P}_{i,j}^d\max(||\overline{\vect x_i^d}-\overline{\vect x^d_j}||_{2}^{\alpha}-\epsilon,0)\\
L_{uniform} = \sum_{d=l,r,t,b}\underset{0\le i < j\le N}{\vmathbb{E}}\mathcal{N}_{i,j}^{d}e^{-t||\overline{\vect x_i^d}-\overline{\vect x^d_j}||_2^2}\,,
\end{gather}
where $\overline{\vect x}$ denotes the $\ell_2$-normalized boundary composition features, $l,r,t,b$ indicate left, right, top, and bottom respectively, $N$ denotes the batch size, and $\epsilon$ controls the expected similarity level of the features with similar regression targets. $\mathcal{P}_{i,j}^d$ and $\mathcal{N}_{i,j}^d$ are calculated as
\begin{gather}
\mathcal{P}_{i,j}^d = \vmathbb{1}(|y_i^d-y_j^d| < z_p)\\
\mathcal{N}_{i,j}^d = \vmathbb{1}(|y_i^d-y_j^d| > z_n)\,,
\end{gather}
where $y$ is the normalized ground truth location of the cropping boundary, and $\vmathbb{1}(\cdot)$ is the indicator function.
Details about the derivation of the optimization target and the pseudo-code of C2C are provided in the supplementary material. The alignment and uniformity losses are jointly optimized with the $\ell_1$ loss for cropping boundary locating. The total loss of the cropping boundary regression pipeline is computed as
\begin{equation}
\label{equ:totalloss}
L = L_1 + \beta L_{align} + \gamma L_{uniform}\,,
\end{equation}
where $\beta$ and $\gamma$ are hyper-parameters that control the relative importance of the alignment and uniformity losses. With C2C, we obtain more informative and discriminative composition features from the rare samples, which prevents the model from collapsing into the quasi-trivial solution.
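The following is a minimal sketch of the C2C losses for a single boundary; the use of the absolute annotation gap to build $\mathcal{P}$ and $\mathcal{N}$, the restriction to the upper triangle ($i<j$), and taking the mean over the selected pairs as the expectation are assumed readings of the equations above.
\begin{verbatim}
import torch
import torch.nn.functional as F

def c2c_losses(feats, targets, z_p=0.05, z_n=0.7, alpha=1.0, eps=0.5, t=1.0):
    # feats: (N, D) composition features for one boundary (e.g. left)
    # targets: (N,) normalized ground-truth locations of that boundary
    x = F.normalize(feats, p=2, dim=1)              # l2-normalize to the unit hypersphere
    dist = torch.cdist(x, x, p=2)                   # pairwise feature distances
    gap = (targets[:, None] - targets[None, :]).abs()
    iu = torch.triu(torch.ones_like(dist, dtype=torch.bool), diagonal=1)
    P = (gap < z_p) & iu                            # positive pair matrix
    N = (gap > z_n) & iu                            # negative pair matrix
    zero = dist.new_zeros(())
    l_align = torch.clamp(dist[P] ** alpha - eps, min=0).mean() if P.any() else zero
    l_uniform = torch.exp(-t * dist[N] ** 2).mean() if N.any() else zero
    return l_align, l_uniform

# Summing over the four boundaries and combining with the l1 regression loss
# gives the total objective: L = L1 + beta * L_align + gamma * L_uniform.
\end{verbatim}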
\section{Experiments}
Here we demonstrate that CBLNet with C2C achieves better performance than other cropping box regression algorithms that rely on extra data. Since imbalanced learning approaches can serve as naive baselines for exploiting the rare samples, we also compare C2C with these baselines. The results illustrate the superiority of C2C. A detailed ablation study is conducted to illustrate the effect of each component. Further analyses and visualizations also support the rationale of our hypothesis and design.
\subsection{Implementation Details}
The input images are resized to $224\times224$.
After being processed by the backbone pre-trained on ImageNet, the multi-scale feature map $\boldmath{\mathcal{F}}$ of size $H=W=14$ has its channel dimensionality reduced to $C=256$. The number of heads in the multi-head spatial attention module is set to $m=6$. In the training stage, $32$ images are batched as the input. In the alignment loss, $z_p$ is set to $0.05$, $\alpha$ is set to $1$, and $\epsilon$ is set to $0.5$. In the uniformity loss, $z_n$ is set to $0.7$ and $t$ is set to $1$. Only random cropping is used for data augmentation. With $\beta=\gamma=0.025$, the network is optimized by Adam with a learning rate of $5\times10^{-4}$ for $80$ epochs.
\begin{table*}[!t]
\begin{center}
\begin{tabular*}{0.9\textwidth}{@{}@{\extracolsep{\fill}}lcccccccc@{}}
\toprule
\multirow{2}*{\diagbox{Algorithm}{Metric}} &\multicolumn{4}{c}{IoU$\uparrow$} &\multicolumn{4}{c}{BDE$\downarrow$}\\
&All &Many &Med. &Few &All &Many &Med. &Few\\
\hline
\multicolumn{9}{l}{candidate box selection algorithms}\\
\hline
VPN$^\dag$ &0.665 &\textbf{0.735} &0.631 &\textbf{0.435} &0.085 &\textbf{0.068} &0.092 &\textbf{0.147}\\
GAIC$^\dag$ &0.666 &0.721 &\textbf{0.655} &0.408 &0.084 &0.072 &\textbf{0.086} &\textbf{0.147}\\
TransView &0.682 &- &- &- &0.080 &- &- &-\\
VEN &\textbf{0.735} &- &- &- &\textbf{0.072} &- &- &-\\
\hline
\multicolumn{9}{l}{cropping box regression algorithms}\\
\hline
A2RL$^\dag$ &0.636 &0.734 &0.577 &0.349 &0.097 &0.070 &0.111 &0.186\\
A3RL &0.696 &- &- &- &0.077 &- &- &-\\
vanilla-CACNet$^\ddag$ &0.700 &0.799 &0.650 &0.369 &0.075 &0.050 &0.086 &0.167\\
CACNet$^\ddag$ &0.716 &\textbf{0.809} &0.673 &0.386 &0.070 &\textbf{0.047} &0.079 &0.159\\
regression baseline &0.692 &0.776 &0.655 &0.389 &0.077 &0.056 &0.085 &0.158\\
CBLNet - Ours &0.700 &0.787 &0.659 &0.391 &0.075 &0.052 &0.083 &0.158\\
CBLNet+C2C - Ours &\textbf{0.718} &0.805 &\textbf{0.680} &\textbf{0.418} &\textbf{0.069} &\textbf{0.047} &\textbf{0.078} &\textbf{0.146}\\
\bottomrule
\end{tabular*}
\caption{Quantitative comparison with other state-of-the-art image cropping models on the FCDB dataset. Best performance is in boldface. The performances tagged by $\dag$ are our reproduced results, and $\ddag$ denotes the results from the original authors.}
\label{table:comp_w_crop}
\end{center}
\end{table*}
\begin{table*}[!t]
\begin{center}
\begin{tabular*}{0.9\textwidth}{@{}@{\extracolsep{\fill}}lccccccccc@{}}
\toprule
\multirow{2}*{\diagbox{Algorithm}{Metric}} &\multicolumn{4}{c}{IoU$\uparrow$} &\multicolumn{4}{c}{BDE$\downarrow$} &\multirow{2}*{\makecell[c]{Training\\Time$\downarrow$}}\\
&All &Many &Med. &Few &All &Many &Med. &Few\\
\hline
CBLNet baseline &0.700 &0.787 &0.659 &0.391 &0.075 &0.052 &0.083 &0.158 &0.720\\
\hline
SMOGN &0.702 &0.796 &0.658 &0.374 &0.075 &0.051 &0.085 &0.166 &0.956\\
RRT &0.701 &0.791 &0.653 &0.416 &0.074 &0.051 &0.086 &0.149 &0.730$\times2$\\
FOCAL-R &0.708 &0.797 &0.663 &0.408 &0.072 &0.050 &0.082 &0.151 &0.747\\
INV &0.707 &0.798 &0.660 &0.404 &0.073 &0.050 &0.083 &0.153 &\textbf{0.734}\\
LDS\&FDS &0.709 &0.798 &0.666 &0.408 &0.072 &0.049 &0.082 &0.151 &1.187\\
C2C &\textbf{0.718} &\textbf{0.805} &\textbf{0.680} &\textbf{0.418} &\textbf{0.069} &\textbf{0.047} &\textbf{0.078} &\textbf{0.146} &0.788\\
\bottomrule
\end{tabular*}
\caption{Quantitative comparison of imbalanced learning algorithms on the FCDB dataset. Best performance is in boldface. The metric of training time shows how many seconds an algorithm needs for training a batch of data. Note that the regressor re-training (RRT) algorithm needs an extra training round, hence the training time is doubled.}
\label{table:comp_w_DIR}
\vspace{-10pt}
\end{center}
\end{table*}
\subsection{Datasets and Evaluation Metrics}
The FCDB~\cite{chen2017quantitative} dataset is used for evaluation. FCDB contains $1743$ images, each with a ground truth cropping box. In the performance comparison, $1395$ images of FCDB are used for training and $348$ images for testing. Following the setting of \cite{hong2021composing}, we randomly choose $200$ images as the validation set and use the remaining images for training.
To examine whether an approach can render nontrivial cropping boxes, the test set or the validation set is divided, according to sample frequency, into three disjoint subsets: many-shot, medium-shot, and few-shot samples. The frequency of a sample is related to the size ratio of its ground truth box w.r.t.\ the original image. In our setting, the ground truth box size ratios of the many-shot, medium-shot, and few-shot samples are in the ranges of $65\% \sim 100\%$, $40\% \sim 65\%$, and $0\% \sim 40\%$, respectively. The intersection over union (IoU) and the boundary displacement error (BDE) are reported.
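As an illustration, the grouping can be implemented with a hypothetical helper like the one below; the threshold values simply encode the ratio ranges stated above.
\begin{verbatim}
def shot_group(gt_box, img_w, img_h):
    # gt_box = (x1, y1, x2, y2) in pixels; returns the frequency subset
    x1, y1, x2, y2 = gt_box
    ratio = (x2 - x1) * (y2 - y1) / float(img_w * img_h)
    if ratio >= 0.65:
        return "many-shot"
    if ratio >= 0.40:
        return "medium-shot"
    return "few-shot"
\end{verbatim}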
\subsection{Performance Comparison}
\subsubsection{Comparison with image cropping models.} Quantitative results on the FCDB dataset are illustrated in Table~\ref{table:comp_w_crop}. For the selection-based algorithms, the recalled top-$1$ cropping boxes are used to calculate the IoU and BDE metrics. Based on the results, we can make the following observations:
(a) \textit{The gap between two types of algorithms lies in the few-shot samples.} This might be a manifestation of the quasi-trivial solution problem. The CBLNet trained with the C2C outperforms the previous regression algorithms especially on the few-shot samples without extra data, which bridges the performance gap between regression-based and selection-based algorithms.
(b) \textit{C2C is more helpful than extra data.} Comparing the vanilla CACNet without the composition classification module to the final CACNet, it is obvious that the performance boost of CACNet comes from the extra data annotated with composition classes. While the performance of CBLNet without C2C is comparable with that of vanilla CACNet, after training with C2C, CBLNet performs better than CACNet, which shows the superiority of the proposed C2C. A qualitative comparison is shown in Fig.~\ref{fig:comparison}. We observe that the selection-based algorithms, {\emph{i.e.}}, VPN, can generate nontrivial cropping results but cannot obey the symmetry (first row) or visual balance (third row) composition rules well, which shows the limitation of selection-based algorithms. While previous regression-based algorithms tend to render quasi-trivial cropping results, our approach renders nontrivial cropping results that align better with the composition rules.
\begin{table*}[t]
\begin{center}
\begin{tabular*}{0.9\textwidth}{@{}@{\extracolsep{\fill}}lcccccccc@{}}
\toprule
\multirow{2}*{\diagbox{Batch Size}{Metric}} &\multicolumn{4}{c}{IoU$\uparrow$} &\multicolumn{4}{c}{BDE$\downarrow$}\\
&All &Many &Med. &Few &All &Many &Med. &Few\\
\hline
4 &0.734 &0.811 &0.685 &0.424 &0.064 &\textbf{0.041} &0.076 &0.161\\
8 &0.738 &0.805 &0.698 &0.436 &0.061 &0.043 &0.071 &0.145\\
16 &0.741 &0.801 &0.710 &0.438 &0.060 &0.043 &0.066 &0.144\\
32 &\textbf{0.753} &0.809 &0.715 &0.460 &\textbf{0.059} &0.042 &\textbf{0.065} &0.139\\
64 &0.751 &0.809 &0.714 &0.463 &\textbf{0.059} &0.042 &0.066 &0.142\\
128 &0.749 &\textbf{0.813} &0.704 &0.465 &0.062 &0.042 &0.073 &0.142\\
256 &0.750 &0.799 &\textbf{0.718} &\textbf{0.471} &0.061 &0.046 &0.068 &\textbf{0.128}\\
\bottomrule
\end{tabular*}
\caption{Ablation study on the batch size. Best performance is in boldface.}
\label{table:ablation}
\vspace{-10pt}
\end{center}
\end{table*}
\subsubsection{Comparison with imbalanced learning baselines.} The imbalanced learning baselines to be compared are:
\begin{itemize}[leftmargin=*]
\item \textit{SMOGN}~\cite{branco2017smogn} defines the rare samples and creates synthetic samples by interpolating the features and the regression targets. During the training stage, Gaussian noise is added to the target; hence, the annotated boundary may shift within $10\%$ of the original length or width.
\item \textit{FOCAL-R} is the regression version of the focal loss~\cite{lin2017focal}. Following \cite{yang2021delving}, we rewrite the focal loss as $1/n\sum_{i=1}^{n}\rho(|\mu e_i|)^{\psi}e_i$, where $e_i$ is the $\ell_1$ error for $i$-th sample, $\rho(\cdot)$ is the $\tt sigmoid$ function, and $\mu$ and $\psi$ are set to $\mu=\psi=2$.
\item \textit{Inverse reweighting (INV)} is a traditional method that re-weights the loss according to the inverse frequency of the classes. \cite{yang2021delving} adapts this scheme to the regression scenario based on the regression target distribution.
\item \textit{Regressor re-training (RRT)}~\cite{kang2019decoupling} decouples the training process of feature and classifier. \cite{yang2021delving} transforms it into the regression version, re-training the regression head with inverse reweighting.
\item \textit{LDS\&FDS}~\cite{yang2021delving} proposes to smooth the distribution of the regression targets to reweight the loss and to recalibrate the statistics of the feature distribution.
\end{itemize}
The results are illustrated in Table~\ref{table:comp_w_DIR}. It shows that previous methods have notable limitations: (a) RRT can boost the performance on the few-shot samples but harms that on the medium-shot samples. (b) LDS\&FDS can improve the performance on many-, medium-, few-shot samples, but at a cost of additional time consumption.
C2C obtains significant gains on all groups of samples with low time cost.
\subsection{Ablation Study}
\begin{figure}[!t]
\centering
\includegraphics[width=0.98\columnwidth]{distribution_.pdf}
\vspace{-7pt}
\caption{Distributions of the boundary features from the CBLNet with/without C2C on the validation set. The features are reduced to two dimensions by t-SNE. The color of a point represents the location of its ground truth boundary: the lighter the color, the closer the ground truth boundary is to the original image boundary. Best viewed in color.}
\vspace{-10pt}
\label{fig:distribution}
\end{figure}
\subsubsection{Boundary feature encoder and C2C.} The performance contributions of the boundary feature encoder and C2C are shown in Table~\ref{table:comp_w_crop}. We observe that the regression baseline tends to output quasi-trivial cropping results. Adding only the boundary feature encoder to build the CBLNet does not significantly improve performance. After training with C2C, the cropping results align much better with the ground truth. This shows that the main contribution of this work comes from C2C rather than from the CBLNet design.
\subsubsection{Batch size.} The proposed C2C contrasts the samples within each batch without a memory bank. We vary the batch size from $4$ to $256$; a larger batch size provides more positive and negative pairs. From the results in Table~\ref{table:ablation}, the performance on the many-shot samples changes relatively smoothly, while the performance on the few-shot samples improves with larger batch sizes. The results reveal that C2C benefits from large batch sizes, which is consistent with the conclusions in \cite{chen2020simple,he2020momentum}. For better overall performance, we adopt a batch size of $32$.
\subsection{Analysis}
\label{exp:analysis}
As illustrated in Fig.~\ref{fig:distribution}, we visualize the distributions of the composition features of the left and right boundaries (the top and bottom boundaries follow similar distributions) from the CBLNet trained with and without C2C, reducing the features to two dimensions with t-SNE~\cite{van2008visualizing}. The location of the corresponding ground truth boundary is denoted by color: a darker color means that the annotated boundary is located farther from the original image boundary.
Without C2C, the features of the rare samples, whose ground truth is far away from the input image boundary, are mixed up with those of the normal samples. This means that the model without C2C cannot properly represent unseen rare samples and seems to project them almost randomly in the feature space. Hence, the predictions for these samples follow the dominant distribution of the training set, which is the cause of the quasi-trivial solution.
With C2C, firstly, the rare samples with darker colors are clustered and the feature distribution aligns well with the location distribution of the regression targets, which shows the success of C2C. Secondly, the expected feature distribution leads to better performance, which supports the rationale of our observation and hypothesis that cropping boundaries located near each other share similar composition patterns.
\section{Conclusions}
In this work, we study why existing cropping box regression algorithms tend to render quasi-trivial solutions. We find that the crux lies in the limitation of representational capacity. Based on our observation, we hypothesize that cropping boundaries with similar annotations share similar composition patterns. Therefore, we propose to conduct composition feature clustering to enhance the composition representation. Firstly, by presenting CBLNet, the composition features of the four cropping boundaries are obtained. Inspired by the ideas of contrastive learning and deep clustering, we propose C2C to cluster samples with similar annotations jointly with the normal optimization procedure. Trained with C2C, our CBLNet outperforms other state-of-the-art regression-based image cropping models on the FCDB dataset even without extra data. Compared with other unbiased training baselines, C2C also achieves superior performance at little additional time cost. Our work reveals that cropping box regression networks can in fact learn directly from the cropping box annotations, but how the network is trained matters. It provides a new perspective on the task of cropping box regression and shows that composition feature clustering can make a difference.
\section{Acknowledgements}
This work was funded by the DigiX Joint Innovation Center of Huawei-HUST.
|
{
"arxiv_id": "2302.08596",
"language": "en",
"timestamp": "2023-02-21T02:31:31",
"url": "https://arxiv.org/abs/2302.08596",
"yymm": "2302"
} |
\section{Introduction} \label{sec:intro}
With increasing detection of exoplanets and spectroscopic characterization of exoplanet atmospheres, our knowledge of other stellar systems and the planets around these stars is expanding significantly. Many of the spectral exoplanet transmission data come from the atmospheres of hot Jupiters and giant planets with short orbital periods and small semi-major axes, which indicates that both thermal equilibrium chemistry and UV photochemistry are at work in these hot-Jupiter atmospheres (\cite{Moses-2014, Madhusudhan-2016}); hence, simultaneous thermal and photochemical reaction pathways need to be considered when modeling the atmospheric composition of these UV-rich, hot-Jupiter-like exoplanets. Previous theoretical studies have suggested that thermal chemistry dominates hot-Jupiter atmospheres with temperatures above 1500 K, while disequilibrium chemistry such as photochemistry could play an important role in exoplanet atmospheres whose temperatures are lower than 1500 K (\cite{Moses-2011, Venot_2012, Moses-2014}).
The chemistry of an exoplanetary atmosphere largely depends on its C/O ratio, where C/O $\geq$ 1 indicates a carbon-rich atmosphere and C/O $<$ 1 indicates an oxygen-rich atmosphere (Solar C/O = 0.5). Along with temperature, these C/O ratios can be used to classify exoplanetary atmospheres and identify the major chemistry products that occur in each case. Several groups have explored the connection between C/O ratio, temperature, and chemistry in hot-Jupiter atmospheres using modeling, observations, and experiments, including \cite{Madhusudhan_2012, Moses-2013, Venot_2015, Drummond-2019} and \cite{Fleury_2019, Fleury_2020}. \cite{Madhusudhan_2012} combined observations with an atmospheric chemical model (photochemistry not included) to classify how the C/O ratio affects the chemical compositions of the atmospheres, and \cite{Venot_2015} explored with a 1D thermo-photochemical model how various parameters, including the C/O ratio, affect the chemistry in hot-Jupiter atmospheres. \cite{Fleury_2019,Fleury_2020} experimentally explored the influence of the C/O ratio on the formation of trace species and photochemical aerosols in hot-Jupiter atmospheres.
Among these, the experimental studies by \cite{Fleury_2019, Fleury_2020} clearly showed enhanced formation of \ce{CH4}, \ce{H2O}, and \ce{CO2} through simultaneous thermal- and UV photochemistry compared to the product formation with thermal-only chemistry at the temperature conditions below 1500 K. They also observed the formation of non-volatile hydrocarbon aerosols as solid thin films after the UV irradiation of carbon-enriched (C/O = 1) gas mixture at the temperature of 1473 K (\cite{Fleury_2019}).
These two previous experimental studies provided the first experimental insights into the potential importance of photochemistry in hot-Jupiter-like exoplanet atmospheres, as well as interesting questions regarding the formation pathways of both the observed gaseous products (i.e. \ce{CH4}, \ce{H2O}, and \ce{CO2}) and the aerosol products. By tracking these formation pathways for both gaseous and condensed-phase products from laboratory experimental data points, we can identify key chemical species, including intermediates, that are formed during UV irradiation of laboratory analogs of exoplanet atmospheres. Eventually, these reactions can be implemented into 1D T-P profile supported atmospheric photochemical models (also known as atmospheric photochemical models) that address various physical (e.g. vertical diffusion of molecules, dry and wet deposition, atmospheric escape, condensation and sedimentation of species, etc.) and chemical (e.g. photochemistry, kinetics of the reactions between atmospheric components, etc.) principles (\cite{Hu_2012}). Adding a more complete picture of reaction chemistry to such atmospheric photochemical models will benefit the analysis and interpretation of the vast amount of current and future spectral data from exoplanets, as well as the design of future space missions.
For example, the combination of the recent observational data of the atmosphere of WASP-39b by the James Webb Space Telescope (JWST) and atmospheric chemical modeling work has provided the first evidence of photochemistry in an exoplanet atmosphere (\cite{Tsai_2022}). Indeed, the JWST observational data of the atmosphere of WASP-39b show a peak at 4.05 $\mu$m, which has been attributed to \ce{SO2} molecules. Using multiple atmospheric photochemical model results and tracking the \ce{SO2} formation pathways with reaction kinetic modeling, \cite{Tsai_2022} strongly suggest that \ce{SO2} is formed by photochemistry in the atmosphere of WASP-39b. Therefore, it is clear that photochemistry plays an important role in the composition and spectra of observed exoplanet atmospheres, which provides a rationale that precise reaction kinetic modeling is essential for atmospheric photochemical modeling. When this precise reaction kinetic modeling is coupled with experimental work, as presented in this study, it can significantly benefit the astronomical community by decreasing the uncertainties in the chemical reactions that need to be implemented in the atmospheric chemical models used to interpret observational data.
Tracking the chemical formation pathways of laboratory experiments is very challenging without the aid of reaction kinetic modeling. This is because the timescales of the initial chemical reactions are too short, even for well-controlled pseudo-first-order radical chemistry (e.g. $\sim$30 $\mu$s) (\cite{Golan-2013}), so that the products observed in gas-phase static cell experiments (e.g. \cite{Fleury_2019, Fleury_2020}) are formed far beyond tertiary or even higher-order reaction chemistry (including surface chemistry on the wall). For this reason, reaction kinetic modeling has been actively utilized in a variety of fields (e.g. astrochemistry, combustion research, etc.) to interpret various observational and experimental data. In particular, the advancement of computer-aided automatic construction of reaction kinetic models in combustion research is impressive. For example, \cite{Liu_2020} used the Reaction Mechanism Generator (RMG, \cite{Gao_2016, Liu-2021}) to automatically construct an acetylene pyrolysis model (with a temperature range of 1000--1500 K, a pressure of 0.2 atm, and a reaction time of 0.5 s) that contains 1594 species and 8924 reactions and successfully described product formation up to pyrene, as observed in the previous acetylene (\ce{C2H2}) pyrolysis experiment by \cite{Norinaga_2008}.
RMG is an open-source software package that automatically constructs networks of relevant core reactions based on its own algorithm for choosing reaction rates (e.g. experimentally measured rates are prioritized, followed by less reliable sources in order) and a rate-based iterative algorithm for model generation (\cite{Gao_2016}). This approach has a few advantages over the traditional way (i.e. manually choosing reaction rates from previous references) of building chemical reaction networks: (i) since reaction rates are chosen based on a rate-based iterative algorithm, it is less likely than with manual selection that the reaction mechanism will miss important (i.e. relevant) reactions, as long as the reaction libraries are solid; (ii) the approach has multiple self-feedback and refinement steps that are based on a solid algorithm, thus providing the model with more reliability; and (iii) this automatic approach enables us to describe larger and more complex chemical systems that cannot be constructed in the traditional way.
As we can see from \cite{Liu_2020}'s \ce{C2H2} pyrolysis model, reasonably describing the chemistry of exoplanet atmosphere-like conditions (i.e. T = 1000--1500 K and P = 0.2 atm), even with a relatively simple starting material (i.e. \ce{C2H2}), requires a model size that is too large to construct manually. This enormous model size gives us a rationale to utilize this computer-aided modeling technique to precisely interpret experimental studies of hot-Jupiter atmospheric chemistry.
On top of this, including photochemistry in reaction kinetic modeling is also important. Although the temperature, pressure, and chemical conditions of hot-Jupiter atmospheres are similar to those of flames (i.e. combustion), the existence of UV photons (mainly Lyman-$\alpha$ from their parent stars) in hot-Jupiter atmospheres is the major difference that distinguishes hot-Jupiter atmospheres from combustion (\cite{France-2013, Miguel-2015}). Molecules in the top layers of hot-Jupiter atmospheres will interact with the accessible UV photons and contribute to the overall reaction chemistry of the system. Among the variety of molecules available in hot-Jupiter atmospheres, carbon monoxide (CO) is of major interest to astrochemists since CO is predicted to be one of the most abundant species in hot-Jupiter-type exoplanet atmospheres whose temperatures are higher than 1000 K (\cite{Moses-2013, Venot_2015, Drummond-2019}).
In addition to this, the recent laboratory experimental results by \cite{Fleury_2019,Fleury_2020} strongly suggest CO as a possible photochemical precursor driven by CO photoexcitation, but a detailed reaction kinetic model to explain the reaction pathways to the observed products was lacking. Indeed, CO can be electronically excited to stay chemically reactive with a relatively long lifetime through UV irradiation (\cite{Fournier_1980, Gudipati_1998, Lee_2020}). Although photochemically excited CO has been raised as an important precursor to photochemical pathways (\cite{Fleury_2019, Roudier-2021}), as far as we know, no previous study has assessed the reaction kinetic role of this electronically excited CO in these hot-Jupiter-like atmospheres in detail.
With this background, in this paper, we will utilize an automatic chemical reaction mechanism generator to build a high-fidelity chemical network that can assess the chemical importance of photoexcited carbon monoxide and qualitatively rationalize the increase of the production yields of \ce{CH4}, \ce{H2O}, \ce{CO2}, and aerosols during UV irradiation compared to thermal-only chemistry, which were observed in the previous studies by \cite{Fleury_2019, Fleury_2020}. Our work reported here is the first of its kind to incorporate simultaneous thermally and photochemically excited CO-induced chemical reaction pathways, which will provide not only a better insight into reaction mechanisms in hot-Jupiter like exoplanet atmospheres, but also provide a tool to confidently predict major and minor atmospheric molecular species under different conditions.
\section{Methods} \label{sec:methods}
\subsection{Kinetic and thermodynamic parameter libraries}\label{rmg_library}
In order to consider the conversion of carbon monoxide into methane, part of the kinetic and thermodynamic parameters were taken from a recent combined experimental and kinetic modeling study of methane oxidation by \cite{Hashemi_2016} and chosen as the seed mechanism's kinetic parameters, which are included in the reaction kinetic model as the main chemistry. All the other kinetic and thermodynamic parameters were taken from the libraries included in the previous acetylene pyrolysis model by \cite{Liu_2020}, which successfully described product formation up to pyrene (a 4-ring polycyclic aromatic hydrocarbon) observed in the previous experiment by \cite{Norinaga_2008}, starting from acetylene, for a residence time of 0.5 s, at temperatures of 1073--1373 K and a pressure of 80 mbar. In doing so, we can describe any larger molecules that might have been formed but not observed in the previous experiments by \cite{Fleury_2019} and \cite{Fleury_2020}. All these kinetic and thermodynamic parameters can be found in CHEMKIN format in the Supplementary Materials.
With regard to the thermodynamic parameters of triplet carbon monoxide (i.e. CO(a\textsuperscript{3}$\Pi$)), which are not available in the library (since RMG's library usually contains the thermodynamic parameters of molecules in the ground state), we first carried out electronic structure calculations at the CBS-QB3 level of theory using Gaussian 09 (\cite{g09}) to determine the geometric conformation, energy, and vibrational frequencies of triplet carbon monoxide. The thermodynamic parameters of this molecule were then calculated by Arkane (\cite{Allen_2012}), a package included in the open-source software RMG v3.1.0 (\cite{Gao_2016, Liu-2021}), with atomic energy corrections, bond corrections, and spin-orbit corrections, based on the CBS-QB3 level of theory as the model chemistry. These thermodynamic parameters of triplet carbon monoxide are given in CHEMKIN format in the Supplementary Materials, along with the molecular parameter outputs (Gaussian 09 output file) and the Arkane input file.
\subsection{Automatic thermochemical reaction model generation}\label{rmg_generation}
RMG was used to generate the thermal chemistry model that can simulate the experiments of \cite{Fleury_2019, Fleury_2020}. An initial molar composition of 99.7 \% of hydrogen and 0.3 \% of carbon monoxide was used for the experimental condition of \cite{Fleury_2019}, while an initial molar composition of 99.26 \% of hydrogen, 0.26 \% of carbon monoxide, and 0.48 \% of water was used for the experimental condition of \cite{Fleury_2020}. Batch reactor conditions were set with a temperature range of 300--1800 K and a pressure of 15 mbar for both models. These physical conditions are relevant to the atmosphere of hot Jupiters and gas giant exoplanets. For example, according to \cite{Tsai_2022}, the temperature profile of the atmosphere of WASP-39b (which is a hot-Jupiter type exoplanet) at 10 mbar ranges from 700 K to 1300 K, which is very similar to the physical conditions simulated in this study.
The pressure dependence feature of RMG was enabled to automatically construct pressure-dependent networks for species with up to 10 heavy atoms. Species constraints were set to limit the maximum number of carbon atoms in any molecule to 16 and the maximum number of radicals to 3, in order to keep the model generation realistic while also conserving computing time. After model generation was completed, the final model contained 475 species and 1284 reactions (forward--reverse reaction pairs), which can be found in the Supplementary Materials along with the RMG input file.
\subsection{Rate coefficients of photochemical reactions and phosphorescence quenching of CO(a\textsuperscript{3}$\Pi$)} \label{photoexcit_rate_calc}
Since RMG has been developed to simulate combustion chemistry, it does not include any photochemical reactions in its library. However, it is essential to include photochemical reactions in order to assess the importance of CO photochemistry in the reaction kinetic model. For this reason, we selectively calculated the photochemical reaction rate coefficients (\textit{k}\textsubscript{\textit{j}}) of the major species (those observed in the previous experiments) as a function of the path length (\textit{l}) and wavelength ($\lambda$) under the given experimental total gas pressure (i.e. 15 mbar in \cite{Fleury_2019,Fleury_2020}):
\begin{equation}
k_{j}(\lambda,l)=\Phi_{j}(\lambda)\,\sigma_{i}(\lambda)\,F_{0}(\lambda)\,e^{-\sum_{i}\sigma_{i}(\lambda)\,n_{i}\,l}
\label{eq:photodissrate}
\end{equation}
where \textit{i} represents each gas species (i.e. \ce{CO}, \ce{H2}, \ce{H2O}, \ce{CH4}, or \ce{CO2}), $\Phi\textsubscript{\textit{j}}(\lambda)$ is the quantum yield of the photochemical reaction \textit{j}, $\sigma$\textsubscript{\textit{i}} is the photoabsorption cross-section of the gas species \textit{i} (see Figure \ref{fig:photo_rxns}c), and \textit{F}\textsubscript{0}($\lambda$) is the photon flux at the wavelength $\lambda$ at the photochemical reaction cell window (i.e. \textit{l}=0 or zero optical depth). Unfortunately, the exact VUV photon flux profile \textit{F}\textsubscript{0}($\lambda$) used in the experiments conducted by \cite{Fleury_2019, Fleury_2020} was not available. Instead, the \textit{F}\textsubscript{0}($\lambda$) profile was taken from \cite{Ligterink_2015}, which used F-type microwave discharge hydrogen-flow lamps (MDHLs) similar to the system used in the previous experiments, and scaled down for an input power of 70 W (simulating stellar UV photons) instead of 100 W and a \ce{H2} pressure of 1.2 mbar instead of 0.41 mbar (refer to Fig. 4 in \cite{Ligterink_2015}). The resulting VUV spectrum is shown in Figure \ref{fig:photo_rxns}a. The exponential term accounts for the optical depth caused by the gas species absorbing photons emitted from the UV light source, where \textit{n\textsubscript{\textit{i}}} is the number density of the corresponding gas species \textit{i}. As shown in Figure \ref{fig:photo_rxns}b, under the experimental condition of 99.7 \% \ce{H2} and 0.3 \% CO, 15 mbar, and 1473 K, more than 40 \% of the Ly-$\alpha$ photons are still available even at the end of the path length (i.e. \textit{l} = 48 cm). Equation (\ref{eq:photodissrate}) is then integrated over the photochemical cell length (i.e. \textit{l} = 0 to 48 cm), as shown in Figure \ref{fig:photo_rxns}d, followed by integration over the wavelength range (i.e. $\lambda$ = 112 to 180 nm) to obtain the corresponding photochemical reaction rate coefficient (\textit{k}\textsubscript{\textit{j}}) listed in Table \ref{tab:photochem_rxns}.
A few assumptions have been made with respect to calculating the carbon monoxide photoexcitation (i.e. X\textsuperscript{1}$\Sigma^+$ $\rightarrow$ a\textsuperscript{3}$\Pi$) rate coefficient and the phosphorescence (i.e. a\textsuperscript{3}$\Pi$ $\rightarrow$ X\textsuperscript{1}$\Sigma^+$; note that there is a spin change) rate coefficient, which are: (i) every carbon monoxide molecule in the ground state (X\textsuperscript{1}$\Sigma^+$) populates the a\textsuperscript{3}$\Pi$ state after UV photoexcitation into the spin- and dipole-allowed A\textsuperscript{1}$\Sigma^+$ state; (ii) phosphorescence of CO(a\textsuperscript{3}$\Pi$) down to the ground state CO(X\textsuperscript{1}$\Sigma^+$) follows an exponential decay pattern determined by the radiative lifetime of the a\textsuperscript{3}$\Pi$ state; and (iii) there are no significant changes in the radiative rate coefficients for the molecules at different temperatures. In order to implement assumption (i), we assumed the quantum yield ($\Phi_2$) of CO(a\textsuperscript{3}$\Pi$) after photoexcitation from X\textsuperscript{1}$\Sigma^+$ to A\textsuperscript{1}$\Sigma^+$ (i.e. X\textsuperscript{1}$\Sigma^+$ $\rightarrow$ a\textsuperscript{3}$\Pi$) to be unity. With regard to assumptions (ii) and (iii), the mean lifetime of CO(a\textsuperscript{3}$\Pi$) was taken from \cite{Lee_2020} to be 3 milliseconds.
All these calculated photochemistry-related rate coefficients were later added in CHEMKIN format at the bottom of the CHEMKIN input files, indicated as 'Newly Added Photochemistry' (see Supplementary Materials).
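The numerical evaluation of Equation (\ref{eq:photodissrate}) can be sketched as follows. This is an illustrative implementation only: averaging over the path length (rather than taking the bare path integral) is an assumption made here so that the resulting \textit{k}\textsubscript{\textit{j}} keeps units of s\textsuperscript{-1}, and all array inputs are placeholders for the tabulated cross-sections and the MDHL flux. For the phosphorescence channel (\textit{j}=1), the rate coefficient follows directly from the 3 ms lifetime, \textit{k}\textsubscript{1} = 1/$\tau$ $\approx$ 3.3$\times$10\textsuperscript{2} s\textsuperscript{-1}, as listed in Table \ref{tab:photochem_rxns}.
\begin{verbatim}
import numpy as np

def photo_rate_coefficient(wl_nm, phi, sigma_i, flux0,
                           sigma_all, n_all, cell_len_cm=48.0, n_path=200):
    # wl_nm    : (W,) wavelength grid [nm]
    # phi      : (W,) quantum yield of photochemical reaction j
    # sigma_i  : (W,) photoabsorption cross-section of species i [cm^2]
    # flux0    : (W,) photon flux at the cell window [photons cm^-2 s^-1 nm^-1]
    # sigma_all, n_all: cross-sections [cm^2] and number densities [cm^-3]
    #                   of every absorbing species in the cell
    l = np.linspace(0.0, cell_len_cm, n_path)                 # path grid [cm]
    tau = sum(s[:, None] * n for s, n in zip(sigma_all, n_all)) * l[None, :]
    k_wl_l = phi[:, None] * sigma_i[:, None] * flux0[:, None] * np.exp(-tau)
    k_wl = np.trapz(k_wl_l, l, axis=1) / cell_len_cm          # path-averaged k_j(lambda)
    return np.trapz(k_wl, wl_nm)                              # integrate over wavelength [1/s]
\end{verbatim}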
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{figures/photochemistry_rate_calc.png}
\caption{\footnotesize Various parameters necessary to estimate the photochemical reaction rate coefficients of the major species used in this study. Details of how we obtained these parameters are described in the section \ref{photoexcit_rate_calc}. (a) VUV spectrum in the 112--180 nm range of the F-type MDHL after scaling down the spectral profile taken from \cite{Ligterink_2015} for a \ce{H2} pressure of 1.2 mbar, a flow rate of 0.5 sccm, and an input power of 70 W; (b) the proportion of photons of the corresponding $\lambda$ left at 12, 24, 36, and 48 cm of the path length, calculated at 1473 K and 15 mbar total gas pressure of 99.7 \% \ce{H2} and 0.3 \% CO using Equation \ref{eq:photodissrate}; (c) photoabsorption cross-section (\textit{$\sigma$}) profiles of the corresponding gas species between 110--180 nm. The \textit{$\sigma$} of (i) \ce{CH4} was taken from \cite{Laufer_2011, Mount_1977}, (ii) \ce{CO} from \cite{MyerSamson_1970, Thompson_1963}, (iii) \ce{H2} from \cite{Backx_1976}, (iv) \ce{H2O} from \cite{Mota_2005}, and (v) \ce{CO2} from \cite{Venot_2018}; (d) wavelength-dependent rate coefficients \textit{k}$_j(\lambda)$ of the corresponding photochemical reactions defined in Table \ref{tab:photochem_rxns}.}
\label{fig:photo_rxns}
\end{figure}
\begin{deluxetable}{ccccc}
\tabletypesize{\footnotesize}
\tablewidth{1pt}
\tablenum{1}
\tablecaption{Rate coefficients of various photochemistry included in the reaction kinetic model (\textit{j}=1 corresponds to phosphorescence, \textit{j}=2 corresponds to photoexcitation, and \textit{j}=3--8 correspond to photodissociation) \label{tab:photochem_rxns}}
\tablehead{
\colhead{\textit{j}} & \colhead{Reactions} & \colhead{\textit{T} [K]}& \colhead{\textit{k}\textsubscript{\textit{j}} [s\textsuperscript{-1}] \textsuperscript{\textit{a}}} & \colhead{Note}
}
\startdata
1&{CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ CO(X\textsuperscript{1}$\Sigma^+$)}&-&3.33$\times$10\textsuperscript{2}& mean lifetime of 3 ms taken from \cite{Lee_2020}\\
\hline
{}&{}&573 &4.74$\times$10\textsuperscript{-5}&\\
2&{CO(X\textsuperscript{1}$\Sigma^+$) $\rightarrow$ CO(a\textsuperscript{3}$\Pi$)}&873&5.18$\times$10\textsuperscript{-5}&calculated using the Equation (\ref{eq:photodissrate}),\\
{}&{}&1173 &5.45(5.19)$\times$10\textsuperscript{-5}& $\Phi_2$ of 1 assumed to be unity\\
{}&{}&1473 &5.64(5.41)$\times$10\textsuperscript{-5}&\\
\hline
{}&{}&573 &1.10$\times$10\textsuperscript{-3}&\\
3&{\ce{H2O} $\rightarrow$ \ce{H} + \ce{OH}}&873&1.20$\times$10\textsuperscript{-3}&calculated using the Equation (\ref{eq:photodissrate}),\\
{}&{}&1173 &1.30(1.20)$\times$10\textsuperscript{-3}&$\Phi_3$ of 1 assumed to be unity\\
{}&{}&1473 &1.40(1.30)$\times$10\textsuperscript{-3}&\\
\hline
{}&{}&573 &5.46$\times$10\textsuperscript{-4}&\\
4&{\ce{CH4} $\rightarrow$ \ce{CH3} + \ce{H}}&873&6.72$\times$10\textsuperscript{-4}&calculated using the Equation (\ref{eq:photodissrate}),\\
{}&{}&1173&7.14(6.72)$\times$10\textsuperscript{-4}&$\Phi_4$ of 0.42 taken from \cite{Gans_2011}\\
{}&{}&1473&7.98(7.14)$\times$10\textsuperscript{-4}&\\
\hline
{}&{}&573 &6.24$\times$10\textsuperscript{-4}&\\
5&{\ce{CH4 } $\rightarrow$ \ce{CH2} + \ce{H2}}&873&7.68$\times$10\textsuperscript{-4}&calculated using the Equation (\ref{eq:photodissrate}),\\
{}&{}&1173&8.16(7.68)$\times$10\textsuperscript{-4}&$\Phi_5$ of 0.48 taken from \cite{Gans_2011}\\
{}&{}&1473&9.12(8.16)$\times$10\textsuperscript{-4}&\\
\hline
{}&{}&573 &9.10$\times$10\textsuperscript{-5}&\\
6&{\ce{CH4 } $\rightarrow$ \ce{CH} + \ce{H2} + \ce{H}}&873&1.12$\times$10\textsuperscript{-4}&calculated using the Equation (\ref{eq:photodissrate}),\\
{}&{}&1173&1.19(1.12)$\times$10\textsuperscript{-4}&$\Phi_6$ of 0.07 taken from \cite{Gans_2011}\\
{}&{}&1473&1.33(1.19)$\times$10\textsuperscript{-4}&\\
\hline
{}&{}&573 &3.90$\times$10\textsuperscript{-5}&\\
7&{\ce{CH4} $\rightarrow$ \ce{CH2} + \ce{H} + \ce{H}}&873&4.80$\times$10\textsuperscript{-5}&calculated using the Equation (\ref{eq:photodissrate}),\\
{}&{}&1173&5.10(4.80)$\times$10\textsuperscript{-5}&$\Phi_7$ of 0.03 taken from \cite{Gans_2011}\\
{}&{}&1473&5.70(5.10)$\times$10\textsuperscript{-5}&\\
\hline
{}&{}&573 &1.11$\times$10\textsuperscript{-4}&\\
8&{\ce{CO2} $\rightarrow$ \ce{CO} + \ce{O}}&873&1.23$\times$10\textsuperscript{-4}&calculated using the Equation (\ref{eq:photodissrate}),\\
{}&{}&1173&1.31(1.25)$\times$10\textsuperscript{-4}&$\Phi_8$ of 1 taken from \cite{Venot_2018}\\
{}&{}&1473&1.37(1.32)$\times$10\textsuperscript{-4}&\\
\hline
\enddata
\tablecomments{\footnotesize\textsuperscript{\textit{a}} Numbers in parentheses refer to calculated photochemical reaction rate coefficients under the experimental condition of \cite{Fleury_2020} (i.e. 99.26 \% of \ce{H2}, 0.48 \% of \ce{H2O}, and 0.26 \% of CO) whose optical depth is slightly larger compare to the experimental condition of \cite{Fleury_2019} (i.e. 99.7 \% of \ce{H2} and 0.3 \% of CO).}
\end{deluxetable}
\subsection{Temperature- and pressure-dependent rate coefficients of \ce{H2} + CO(a\textsuperscript{3}$\Pi$) reactions} \label{photochem_rate_calc}
Since the goal of this study is to gain a better insight into simultaneous thermally and photochemically driven reaction pathways involving electronically excited CO in its metastable state (a\textsuperscript{3}$\Pi$), we need to assess to what extent these metastable, reactive carbon monoxide molecules (i.e. CO in the a\textsuperscript{3}$\Pi$ state) affect the overall chemistry observed in the experiments by \cite{Fleury_2019,Fleury_2020}. In order to achieve this, it is critical to estimate reactions between the most dominant gas species (i.e. \ce{H2}) and the excited carbon monoxide and to include them in the reaction kinetic modeling. For this reason, we first carried out potential energy surface (PES) calculations of the \ce{H2} + \ce{CO} system previously explored by \cite{Euclides_2019}. CBS-QB3 calculations were performed on the stationary points and transition states reported by \cite{Euclides_2019} using Gaussian 09 (\cite{g09}), and all the molecular parameter outputs are available in the Supplementary Materials. Then, as shown in Figure \ref{fig:H2CO_PES}, instead of connecting \ce{H2} + CO(X\textsuperscript{1}$\Sigma^+$) to the entire PES, we connected \ce{H2} + CO(a\textsuperscript{3}$\Pi$) to the entire PES via Transition State 1 (TS1) and TS4, with the assumption that the entrance barriers of TS1 and TS4 are zero (i.e. barrier-less reactions).
As the next step after the PES calculation, Arkane (\cite{Allen_2012}) was used to calculate the temperature- and pressure-dependent rate coefficients \emph{k}(\emph{T,P}) of the \ce{H2} + CO(a\textsuperscript{3}$\Pi$) channels based on the \ce{CH2O} PES mentioned above. Briefly, Arkane is a tool that calculates pressure-dependent phenomenological rate coefficients \emph{k}(\emph{T,P}) for unimolecular reaction networks based on the results of quantum chemistry calculations (i.e. the PES) and Rice--Ramsperger--Kassel--Marcus (RRKM) theory (\cite{Marcus_1952}). Arkane first generates a detailed model of the reaction network using one-dimensional master equation methods (\cite{Miller_2006}) and then applies one of several available model reduction methods (e.g. the modified strong collision approximation) to simplify the detailed model into a set of phenomenological rate coefficients \emph{k}(\emph{T,P}) that are suitable for use in chemical reaction mechanisms.
The reservoir state method was used in calculating \emph{k(T,P)} from the \ce{CH2O} pressure-dependent networks. Lennard-Jones parameters of \ce{H2} and the \ce{CH2O} isomers (approximated to be the same as those for \ce{C2H6}) in helium bath gas were taken from \cite{Jasper_2015}. The collisional energy-transfer parameters, $\langle\Delta E_{\mathrm{down}}\rangle=\alpha_{300}\left(\frac{T}{T_{0}}\right)^{n}$ cm\textsuperscript{-1}, were also taken from \cite{Jasper_2015}. There are four reaction channels from \ce{H2} + CO(a\textsuperscript{3}$\Pi$), and their rate coefficients under the experimental conditions of \cite{Fleury_2019, Fleury_2020} are listed in Table \ref{tab:additional_rxns}, together with the rate coefficients of the corresponding \ce{H2} + CO(X\textsuperscript{1}$\Sigma^+$) reactions for comparison. These rate coefficients can also be found in CHEMKIN format in the Supplementary Materials. Since the RMG-generated thermochemical reaction model does not distinguish the HCOH isomers (\textit{cis-} and \textit{trans-}HCOH), the rate coefficients of \ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ \textit{cis}-HCOH and \ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ \textit{trans}-HCOH were summed and appended as the rate coefficient of \ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ HCOH in the CHEMKIN format file (see the Supplementary Materials), as sketched below.
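The lumping of the two HCOH channels can be sketched as follows: the two channel rate coefficients are summed on a temperature grid and refit to a single modified Arrhenius expression for the CHEMKIN file. The Arrhenius parameters below are placeholders, not the fitted values used in this work.
\begin{verbatim}
import numpy as np

R = 8.314462618  # J mol^-1 K^-1

def arrhenius(T, A, n, Ea):
    # Modified Arrhenius form k(T) = A * T^n * exp(-Ea / (R*T)), Ea in J/mol
    return A * T**n * np.exp(-Ea / (R * T))

T = np.linspace(500.0, 1500.0, 50)
# Placeholder parameters for the cis- and trans-HCOH channels
k_cis = arrhenius(T, 1.0e-20, 3.0, 8.0e4)
k_trans = arrhenius(T, 5.0e-21, 3.1, 8.5e4)
k_sum = k_cis + k_trans                       # lumped HCOH channel

# Refit ln k = ln A + n ln T - Ea/(R*T) by linear least squares
X = np.column_stack([np.ones_like(T), np.log(T), -1.0 / (R * T)])
beta, *_ = np.linalg.lstsq(X, np.log(k_sum), rcond=None)
A_fit, n_fit, Ea_fit = np.exp(beta[0]), beta[1], beta[2]
\end{verbatim}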
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{figures/H2CO_PES.png}
\caption{\footnotesize The \ce{CH2O} potential energy surface calculated at the CBS-QB3 level of theory. The calculated energies of Transition State 1 (TS1) and TS4 were originally 81.8 and 104.9 kcal/mol at the CBS-QB3 level of theory, but both are assumed here to be barrier-less reactions with respect to \ce{H2} + CO(a\textsuperscript{3}$\Pi$). For the definition of "Well-skipping pathway", please refer to section \ref{subsec:H2_CO_rxn}.}
\label{fig:H2CO_PES}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/rate_coefficients_list_2.png}
\caption{\footnotesize Calculated temperature- and pressure-dependent rate coefficients \emph{k}(\emph{T,P}) of \ce{H2} + CO(a\textsuperscript{3}$\Pi$) channels based on the \ce{CH2O} PES in Figure \ref{fig:H2CO_PES}. Note that \ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ H + HCO pressure-dependent rate coefficients overlap with each other, thus showing only one plot. Individual behavior of each reaction shown in these plots is described in detail in the section \ref{subsec:H2_CO_rxn}.}
\label{fig:rate_coefficients_list}
\end{figure}
\begin{deluxetable}{ccccc}
\tablenum{2}
\tablecaption{Reaction rate coefficients of \ce{H2} + CO(a\textsuperscript{3}$\Pi$) and CO(X\textsuperscript{1}$\Sigma^+$) + CO(a\textsuperscript{3}$\Pi$) at the different temperature conditions of \cite{Fleury_2019,Fleury_2020}\textsuperscript{\textit{a,b,c}}. The method to estimate the rate-coefficients of \ce{H2} + CO(a\textsuperscript{3}$\Pi$) is described in detail in the section \ref{photochem_rate_calc}, while individual behavior of each reaction is described in detail in the section \ref{subsec:H2_CO_rxn}. The rate-coefficient of the CO(X\textsuperscript{1}$\Sigma^+$) + CO(a\textsuperscript{3}$\Pi$) reaction was estimated following the method described in the section \ref{CO_CO_rate_calc} and its behavior is described in detail in the section \ref{subsec:COt_CO_rxn}. \label{tab:additional_rxns}}
\tablehead{
\colhead{Reactions} & \colhead{573 K}& \colhead{873 K} & \colhead{1173 K} & \colhead{1473 K}
}
\startdata
\ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ H + HCO&6.83$\times$10\textsuperscript{12}(1.00$\times$10\textsuperscript{-26})&6.44$\times$10\textsuperscript{12}(4.47$\times$10\textsuperscript{-13})&7.32$\times$10\textsuperscript{12}(2.43$\times$10\textsuperscript{-6})&8.94$\times$10\textsuperscript{12}(2.55$\times$10\textsuperscript{-2})\\
\ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ \ce{H2CO}&1.82$\times$10\textsuperscript{-15}(1.14$\times$10\textsuperscript{-19}) &2.77$\times$10\textsuperscript{-9}(1.25$\times$10\textsuperscript{-9})&6.94$\times$10\textsuperscript{-6}(7.09$\times$10\textsuperscript{-5})&8.21$\times$10\textsuperscript{-4}($3.60\times$10\textsuperscript{-2})\\
\ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ \textit{cis}-HCOH&8.42$\times$10\textsuperscript{-19}(4.52$\times$10\textsuperscript{-23}) &8.78$\times$10\textsuperscript{-13}(3.50$\times$10\textsuperscript{-13})&1.15$\times$10\textsuperscript{-9}(1.13
$\times$10\textsuperscript{-8})&7.26$\times$10\textsuperscript{-8}(3.31$\times$10\textsuperscript{-6})\\
\ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ \textit{trans}-HCOH&5.65$\times$10\textsuperscript{-19}(3.09$\times$10\textsuperscript{-23}) &3.65$\times$10\textsuperscript{-13}(1.49$\times$10\textsuperscript{-13})&3.53$\times$10\textsuperscript{-10}(3.58$\times$10\textsuperscript{-9})&1.93$\times$10\textsuperscript{-8}(9.03$\times$10\textsuperscript{-7})\\
CO(X\textsuperscript{1}$\Sigma^+$) + CO(a\textsuperscript{3}$\Pi$) &2.07$\times$10\textsuperscript{6}(3.44$\times$10\textsuperscript{-50}) &2.60$\times$10\textsuperscript{8}(9.19$\times$10\textsuperscript{-29})&3.42$\times$10\textsuperscript{9}(3.34$\times$10\textsuperscript{-18})&1.79$\times$10\textsuperscript{10}(6.88$\times$10\textsuperscript{-12})\\
$\rightarrow$ C(\textsuperscript{3}P) + \ce{CO2}&&&&\\
\hline
\enddata
\tablecomments{\footnotesize\textsuperscript{\textit{a}}Pressure condition is 15 mbar; \textsuperscript{\textit{b}}Unit is [cm\textsuperscript{3}/mol/s]; \textsuperscript{\textit{c}}Numbers in parentheses refer to calculated rate coefficients of corresponding reactions when CO is in the ground state (X\textsuperscript{1}$\Sigma^+$) }
\end{deluxetable}
\subsection{Temperature-dependent rate coefficients of CO(X\textsuperscript{1}$\Sigma^+$) + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ C(\textsuperscript{3}P) + \ce{CO2} reaction} \label{CO_CO_rate_calc}
In the previous study by \cite{Fleury_2019}, the reaction between the excited CO and the ground-state CO was suggested as an important reaction that might rationalize the formation of \ce{CO2} (along with C(\textsuperscript{3}P)) observed in the experiments. Since carbon monoxide was the second most abundant species in the system, it is important to estimate this reaction and include it in the kinetic modeling. Since the potential energy surface of this system was already explored at the CCSD(T)/def2-qZVP level of theory (an even higher level of theory than CBS-QB3) with $\omega$B97M-V/6-311+G* zero-point corrections by \cite{DeVine-2022} (shown in Figure \ref{fig:C2O2_PES}), with all the required parameters (e.g. potential energy differences, rotational constants, vibrational frequencies, symmetry numbers, etc.) available, these parameters were provided manually as an Arkane input rather than carrying out ab initio calculations at the CBS-QB3 level of theory from scratch.
As long as each potential energy is calculated at the same level of theory, the uncertainty of the estimated reaction rate coefficients is acceptable for reaction kinetic modeling purposes. Arkane then generated temperature-dependent rate coefficients using conventional transition state theory (\cite{Allen_2012}); the Arkane input and output files are available in the Supplementary Materials. Temperature-dependent rate coefficients of the CO(X\textsuperscript{1}$\Sigma^+$) + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ C(\textsuperscript{3}P) + \ce{CO2} reaction at various temperatures are given in Table \ref{tab:additional_rxns}.
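For reference, the conventional transition state theory expression applied by Arkane can be written as a one-line sketch; the partition-function ratio and the barrier height in the example are placeholders, not values computed in this study.
\begin{verbatim}
import numpy as np

KB = 1.380649e-23    # Boltzmann constant [J/K]
H = 6.62607015e-34   # Planck constant [J s]
R = 8.314462618      # gas constant [J/mol/K]

def tst_rate(T, q_ratio, e0_kj_mol):
    # Conventional TST: k(T) = (kB*T/h) * (Q_TS / Q_reactants) * exp(-E0 / (R*T))
    return (KB * T / H) * q_ratio * np.exp(-e0_kj_mol * 1.0e3 / (R * T))
\end{verbatim}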
\begin{figure}
\centering
\includegraphics[width=0.43\textwidth]{figures/C2O2_PES.png}
\includegraphics[width=0.56\textwidth]{figures/COCO_kinetics.png}
\caption{\footnotesize (Left) The triplet \ce{C2O2} potential energy surface calculated at the CCSD(T)/def2-QZVP level of theory with $\omega$B97M-V/6-311+G* zero-point correction by \cite{DeVine-2022}. (Right) Temperature-dependent rate coefficients of the CO(X\textsuperscript{1}$\Sigma^+$) + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ C(\textsuperscript{3}P) + \ce{CO2} reaction calculated by Arkane (\cite{Allen_2012}) using conventional transition state theory.}
\label{fig:C2O2_PES}
\end{figure}
\subsection{Model simulation and analysis}\label{model_simulation}
With the reaction mechanism built following the procedure described above, simulations (i.e. solving the system of differential equations) were performed for a reaction time of 18 hours (i.e. 64800 seconds) for the eight experimental conditions of \cite{Fleury_2019} (i.e. with and without UV at T = 573, 873, 1173, and 1473 K, with an initial composition of 99.7\% \ce{H2} and 0.3\% CO) and the four experimental conditions of \cite{Fleury_2020} (i.e. with and without UV at T = 1173 and 1473 K, with an initial composition of 99.26\% \ce{H2}, 0.26\% CO, and 0.48\% \ce{H2O}) using the Reaction Mechanism Simulator (RMS, \cite{rms}), a package included in the RMG suite (\cite{Gao_2016}).
The model output is a set of temperature-dependent molecular mixing ratio profiles for each species, summarized in Figures \ref{fig:Fleury_2019_wUV_woUV}, \ref{fig:C2H2_pathway}a, and \ref{fig:Fleury_2020_wUV_woUV}; the mixing ratios add up to 1 (e.g. a molecular mixing ratio of 10\textsuperscript{-6} in Figure \ref{fig:Fleury_2019_wUV_woUV} corresponds to 1 ppm).
Additional model simulations were performed for the same experimental conditions of \cite{Fleury_2019}, except at higher temperatures (i.e. T = 2000 and 2500 K). A model simulation for a longer reaction time (206 hours), a higher pressure (81 mbar), and a temperature of 1473 K was also performed using RMS. The reactor was assumed to be isobaric, isothermal, and homogeneous. Rate of production (ROP) analysis was performed with RMS, and the ROP analysis of the kinetic model describing \cite{Fleury_2019} is available in Appendix \ref{sec:ROP}. Sensitivity analysis was performed using the RMG suite (\cite{Gao_2016}).
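Although the full mechanism was integrated with RMS, the underlying task is simply an initial value problem for the species number densities. Purely for illustration, a minimal Python/SciPy sketch of such an integration for a hypothetical three-reaction sub-mechanism is shown below; the species list, rate constants, and densities are placeholders and are not taken from our model.
\begin{verbatim}
# Illustrative sketch only: integrating a toy isothermal, constant-volume
# sub-mechanism in Python/SciPy. The actual simulations in this work were
# performed with the Reaction Mechanism Simulator (RMS, Julia) on the full
# mechanism; all numbers below are placeholders, not model values.
import numpy as np
from scipy.integrate import solve_ivp

k_exc = 1.0e-6      # s^-1, hypothetical CO(X) -> CO(a3Pi) photoexcitation rate
k_rad = 1.0 / 3e-3  # s^-1, radiative decay CO(a3Pi) -> CO(X) (~3 ms lifetime)
k_rxn = 1.0e-12     # cm^3 s^-1, hypothetical H2 + CO(a3Pi) -> H + HCO

def rhs(t, n):
    h2, co, co_star, h, hco = n
    r_exc = k_exc * co            # CO(X) -> CO(a3Pi)
    r_rad = k_rad * co_star       # CO(a3Pi) -> CO(X)
    r_rxn = k_rxn * h2 * co_star  # H2 + CO(a3Pi) -> H + HCO
    return [-r_rxn,                  # d[H2]/dt
            -r_exc + r_rad,          # d[CO(X)]/dt
             r_exc - r_rad - r_rxn,  # d[CO(a3Pi)]/dt
             r_rxn,                  # d[H]/dt
             r_rxn]                  # d[HCO]/dt

n_tot = 1.0e17                                    # cm^-3, illustrative density
n0 = [0.997 * n_tot, 0.003 * n_tot, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 64800.0), n0, method="LSODA",
                rtol=1e-8, atol=1e-20 * n_tot)
x = sol.y[:, -1] / sol.y[:, -1].sum()             # mixing ratios after 18 hours
print(dict(zip(["H2", "CO(X)", "CO(a3Pi)", "H", "HCO"], x)))
\end{verbatim}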
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{figures/Modeling_schematic_diagram.pdf}
\caption{\footnotesize A schematic diagram of the flow of the reaction kinetic modeling carried out in this study. The red text indicates the software tools used in this study. The blue text indicates the inputs and outputs required in this study (some of which are mentioned in the main text). The black text in the boxes indicates the major steps described in the main text. For example, 'Automatic reaction mechanism generation' refers to Sections \ref{rmg_library} and \ref{rmg_generation}, 'Photochemical reaction rate coefficient calculation' refers to Section \ref{photoexcit_rate_calc}, and 'Potential energy surfaces' refers to Section \ref{photochem_rate_calc}. The solid lines indicate the procedure for generating the thermochemical reaction model. The dashed lines indicate the procedure for estimating the photochemical reaction rate coefficients, or any rate coefficients that need to be estimated with first-principles (i.e. ab initio) methods for various reasons (e.g. no reaction rate coefficient is available from previous studies).}
\label{fig:modeling_scheme}
\end{figure}
\section{Results and Discussion} \label{sec:resultsanddiscussions}
\subsection{\ce{H2} + CO(a\textsuperscript{3}$\Pi$) reaction rate coefficients}\label{subsec:H2_CO_rxn}
As shown in Table \ref{tab:additional_rxns} and Figure \ref{fig:rate_coefficients_list}, the rate coefficient of the \ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ H + HCO reaction is at least 10 orders of magnitude larger than the rate coefficients of any other channel, which indicates that the reaction between UV-excited CO and \ce{H2} will dominantly proceed to H and HCO under all temperature and pressure conditions. The \ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ H + HCO reaction is also called a ``well-skipping reaction'', which refers to an elementary reaction that traverses more than one transition-state dividing surface in a single elementary step (see Figure \ref{fig:H2CO_PES}). This well-skipping behavior is mainly attributed to the excessively high potential energy of UV-excited CO compared to the potential energies of all the other species and transition states. As shown in Figure \ref{fig:H2CO_PES}, the potential energy of the (photo)chemically activated reactants (i.e. \ce{H2} + CO(a\textsuperscript{3}$\Pi$)) is already $\sim$40 kcal/mol above the second highest potential energy (i.e. the potential energy of TS4, 104.9 kcal/mol). For this reason, even at a higher pressure of 100 bar, the increased collisional stabilization into other stable molecules (i.e. the HCOH isomers and \ce{H2CO}) is still not efficient enough to outcompete this well-skipping reaction forming HCO and H. For a similar reason (i.e. the excess energy provided by UV photons), the UV-excited well-skipping channel is less sensitive to temperature changes than the other reaction channels. However, this is not the case when CO is in the ground state. As can be seen from Table \ref{tab:additional_rxns}, for the reaction between \ce{H2} and ground-state CO (i.e. X\textsuperscript{1}$\Sigma^+$), the \ce{H2} + CO(X\textsuperscript{1}$\Sigma^+$) $\rightarrow$ \ce{H2CO} channel remains the main channel up to 1173 K, while the well-skipping reaction (i.e. \ce{H2} + CO(X\textsuperscript{1}$\Sigma^+$) $\rightarrow$ H + HCO) takes over as the main channel at 1473 K (see numbers in parentheses in Table \ref{tab:additional_rxns}). This is because the only energy source allowing the reactants (\ce{H2} + CO(X\textsuperscript{1}$\Sigma^+$)) to overcome the reaction barrier of 103.4 kcal/mol (i.e. TS5 in Figure \ref{fig:H2CO_PES}) to form HCO and H is thermal energy. Up to 873 K, the thermal energy is insufficient, so the intermediates that overcome the reaction barrier of 81.8 kcal/mol (i.e. TS1) stabilize into \ce{H2CO} rather than skipping all the other wells to form HCO and H. At temperatures above 1173 K, the reactants are thermally energized enough that they are more likely to proceed through the well-skipping channel (i.e. \ce{H2} + CO(X\textsuperscript{1}$\Sigma^+$) $\rightarrow$ H + HCO), which competes with the stabilization channel down to \ce{H2CO}. The major difference between the \ce{H2} + CO chemistry with and without UV photons is therefore whether this well-skipping channel (i.e. \ce{H2} + CO $\rightarrow$ H + HCO) is dominant (i.e. with UV) or not (i.e. thermal-only). These results demonstrate the importance of rigorous reaction kinetics modeling, including photochemical reaction pathways.
In the previous study by \cite{Fleury_2019}, the CO(X\textsuperscript{1}$\Sigma^+$) + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ C(\textsuperscript{3}P) + \ce{CO2} reaction was considered the only important photochemical reaction, despite \ce{H2} being the dominant species in the system. It was not immediately evident that the \ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ H + HCO reaction is the dominant channel, with a rate coefficient several orders of magnitude higher than those of the other reaction channels (e.g. \ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ \ce{H2CO}). This has the important consequence that the H atoms generated are highly reactive and initiate radical chain reactions.
\subsection{CO(X\textsuperscript{1}$\Sigma^+$) + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ C(\textsuperscript{3}P) + \ce{CO2} reaction rate coefficients}\label{subsec:COt_CO_rxn}
As shown in Figure \ref{fig:C2O2_PES}, the CO(X\textsuperscript{1}$\Sigma^+$) + CO(a\textsuperscript{3}$\Pi$) reaction has a non-zero reaction barrier (i.e. 14.24 kcal/mol) even after photoexcitation of the ground-state CO, unlike the barrierless \ce{H2} + CO(a\textsuperscript{3}$\Pi$) reactions. For this reason, the calculated reaction rate coefficient shows a significant positive temperature dependence, as shown in Figure \ref{fig:C2O2_PES}. The last row of Table \ref{tab:additional_rxns} shows a $\sim$4 orders of magnitude difference between the rate coefficient at the lowest temperature (i.e. 573 K) and that at the highest temperature (i.e. 1473 K), so this reaction is expected to play a more significant role at elevated temperatures. It is also clear from Table \ref{tab:additional_rxns} that this reaction would be insignificant without UV photons (even at 1473 K, the rate coefficient is still smaller than 10\textsuperscript{-13} cm\textsuperscript{3}/mol/s), which makes a major difference between the chemistry with and without UV photons (i.e. thermal-only).
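As a rough consistency check (an illustrative Arrhenius estimate using only the 14.24 kcal/mol barrier and neglecting the temperature dependence of the pre-exponential factor), the exponential factor alone gives
\begin{equation}
\frac{k(1473\,\mathrm{K})}{k(573\,\mathrm{K})} \sim \exp\!\left[-\frac{E_\mathrm{a}}{R}\left(\frac{1}{1473\,\mathrm{K}}-\frac{1}{573\,\mathrm{K}}\right)\right] \approx \exp(7.6) \approx 2\times10^{3},
\end{equation}
with $E_\mathrm{a} = 14.24$ kcal/mol and $R = 1.987\times10^{-3}$ kcal/mol/K, i.e. roughly three of the $\sim$4 orders of magnitude quoted above; the remainder comes from the temperature dependence of the pre-exponential factor.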
\subsection{Modeling of the \ce{H2}/CO exoplanet atmosphere analogue of \cite{Fleury_2019}}
\subsubsection{Reaction kinetics of thermal-only chemistry}
As shown in Figure \ref{fig:Fleury_2019_wUV_woUV}, the formation of all major species (i.e. \ce{CH4}, \ce{H2O}, \ce{CO2}, and H radicals) under thermal-only conditions is predicted to be extremely temperature dependent. This is mainly attributed to reaction rate coefficients that increase with temperature, leading to the formation of the major species. This predicted behavior qualitatively matches the previous experimental results of \cite{Fleury_2019} obtained with the quartz cell. For example, the infrared (IR) absorption features of \ce{CO2} and \ce{H2O} appear at 573 K and grow with temperature. The model predicts that the molecular mixing ratios of these molecules might be too low ($\sim$10\textsuperscript{-28} or lower) to be observed experimentally, but it has to be noted that surface chemistry on the wall of the quartz cell might have affected the reaction chemistry. Compared to \ce{H2O} and \ce{CO2}, \ce{CH4} has much lower IR absorption cross-sections; thus the IR peaks of \ce{CH4} only start showing up at 1173 K in the quartz cell used by \cite{Fleury_2019}. The predicted molecular mixing ratios of \ce{CH4} and \ce{H2O} after the reaction time of 18 hours are almost the same. This behavior is due to their formation pathways, shown in Figure \ref{fig:reactionpathwayswithandwithoutUV}a. According to the ROP analysis (see Appendix \ref{sec:ROP}), the predicted major precursors of \ce{CH4} and \ce{H2O} over the whole reaction time (i.e. 18 hours) are \ce{CH3} and \ce{OH}, respectively. These radical species (i.e. \ce{CH3} and \ce{OH}) are formed simultaneously through the reaction HCOH + \ce{H2} $\rightarrow$ \ce{CH3} + \ce{OH} at all temperature conditions (and additionally through the reaction \ce{CH2OH} + H $\rightarrow$ \ce{CH3} + \ce{OH} at temperatures above 1173 K), and each reacts with \ce{H2} to form the corresponding stable species along with an H atom. Up to 1173 K, HCOH is formed directly through the reaction \ce{H2} + CO $\rightarrow$ HCOH, followed either by HCOH + \ce{H2} $\rightarrow$ \ce{CH3} + \ce{OH} or by unimolecular isomerization into \ce{CH2O}. However, at temperatures above 1173 K, \ce{CH2O} starts to isomerize back to HCOH, which eventually leads to increased formation of \ce{CH4} and \ce{H2O}. \ce{CH2O} can also be formed through either \ce{H2} + CO $\rightarrow$ \ce{CH2O} at all temperature conditions or \ce{HCO} + \ce{H2} $\rightarrow$ \ce{CH2O} + H (only at T $\geq$ 873 K). HCO forms through \ce{H2} + CO $\rightarrow$ H + HCO, but readily dissociates back to CO + H. However, as mentioned previously, at T $\geq$ 873 K, HCO can proceed to \ce{CH2O} by reacting with \ce{H2}, and at T $\geq$ 1473 K, HCO can be produced through the additional reaction H + CO $\rightarrow$ HCO.
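For clarity, the ROP analysis referred to throughout this section decomposes the net production rate of each species reaction by reaction; in the usual formulation,
\begin{equation}
\mathrm{ROP}_{n}(t) = \sum_{j} \nu_{n,j}\, r_{j}(t),
\end{equation}
where $\nu_{n,j}$ is the net stoichiometric coefficient of species $n$ in reaction $j$ and $r_{j}(t)$ is the net rate of that reaction; the largest positive and negative contributions identify the dominant formation and consumption pathways quoted here.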
H radicals are formed in significant amounts at T $\geq$ 873 K through the thermal dissociation of \ce{H2} (i.e. \ce{H2} $\rightarrow$ 2H), while recombination back to \ce{H2} (i.e. H + H $\rightarrow$ \ce{H2}) becomes significant at T $\geq$ 1473 K, once enough H radicals have formed in the system.
\ce{CO2} is formed through CO + OH $\rightarrow$ H + \ce{CO2} at T $\leq$ 1173 K. At T $\geq$ 1473 K, however, \ce{CO2} is formed through \ce{CH2O} + CO $\rightarrow$ \ce{CH2} + \ce{CO2}, while \ce{CO2} also reacts with H radicals to form CO and OH (not indicated in Figure \ref{fig:reactionpathwayswithandwithoutUV}a). This reaction (i.e. \ce{CO2} + H $\rightarrow$ CO + OH) is attributed to the increased amount of H radicals produced by the enhanced thermal dissociation of \ce{H2} at elevated temperatures. Overall, as can be seen from Figure \ref{fig:reactionpathwayswithandwithoutUV}a, with increasing temperature additional reaction pathways are added to the system, pushing the chemistry toward the right side of the figure, i.e. toward the formation of \ce{CH4}, \ce{H2O}, and \ce{CO2}.
It is also interesting that the model-predicted molecular mixing ratios of the species under thermal-only conditions become similar to those under UV irradiation at around 2000 K. At T = 1970 K, the amounts of \ce{CH4}, \ce{H2O}, and \ce{CO2} formed in the system with UV photons differ by less than 10\% from those in the thermal-only system. In this paper, we call this temperature the ``transition temperature'' with regard to these species (i.e. \ce{CH4}, \ce{H2O}, and \ce{CO2}). This indicates that at temperatures higher than 2000 K, thermally driven reactions are fast enough that the system becomes less sensitive to UV photons than at lower temperatures. This feature has already been suggested by previous theoretical studies (\cite{Moses-2011, Venot_2012, Moses-2014}). However, it has to be noted that multiple factors (e.g. other photochemical reactions not considered in this study, temperature-dependent UV photoabsorption cross-sections of carbon monoxide, etc.) can change the transition temperature; this is discussed later. The model also predicts that the molecular mixing ratios of all major species (i.e. \ce{CH4}, \ce{H2O}, and \ce{CO2}) decrease above this transition temperature, which means that all these major species have reached quasi-equilibrium and are no longer favored at these temperatures. In the case of \ce{CH4}, the decrease in molecular mixing ratio at temperatures above 2000 K is more significant than for \ce{H2O} and \ce{CO2}. This behavior is due to the thermal conversion of \ce{CH4} into \ce{C2} species (mainly \ce{C2H2}), and is discussed in detail in a later section.
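For illustration only, the transition temperature defined this way can be located by interpolating the relative difference between the UV-irradiated and thermal-only predictions onto the 10\% threshold; the sketch below uses placeholder mixing ratios, not the actual model output.
\begin{verbatim}
# Illustrative sketch only: estimating the "transition temperature" at which
# the UV-irradiated and thermal-only predictions for a species differ by
# less than 10 %. The mixing ratios below are placeholders, not model output.
import numpy as np

T = np.array([573.0, 873.0, 1173.0, 1473.0, 2000.0, 2500.0])       # K
x_uv      = np.array([1e-9, 5e-9, 2e-8, 5e-8, 1.4e-7, 9.0e-8])     # with UV
x_thermal = np.array([1e-29, 1e-16, 1e-11, 1e-9, 1.3e-7, 8.9e-8])  # thermal only

rel_diff = np.abs(x_uv - x_thermal) / x_uv      # relative difference
# interpolate T where the relative difference first drops below 10 %
# (np.interp needs increasing abscissae, hence the reversed arrays)
T_transition = np.interp(0.10, rel_diff[::-1], T[::-1])
print(f"transition temperature ~ {T_transition:.0f} K")
\end{verbatim}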
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/Fluery_2019_wUV_woUV.png}
\caption{\footnotesize Temperature-dependent molecular mixing ratio profiles predicted by the reaction kinetic model for the experimental conditions described in \cite{Fleury_2019}, together with additional simulations up to 2500 K. Solid symbols indicate the predicted mixing ratios of molecular species with both thermal energy and UV photons, while open symbols indicate the predicted mixing ratios at the thermal-only condition (i.e. without UV photons). Note that several open symbols (i.e. thermal-only chemistry) overlap with solid symbols. The solid lines mark the transition temperature at which the molecular mixing ratios of each species formed in the two systems (i.e. thermal + UV vs thermal-only) differ by less than 10\%: (a) Red circles indicate \ce{CO2}; blue circles indicate \ce{H2O}; green circles indicate \ce{CH4}. The transition temperature with regard to \ce{CH4}, \ce{H2O}, and \ce{CO2} is calculated to be 1970 K. The \ce{CO2}, \ce{CH4}, and \ce{H2O} mixing ratios at the thermal-only condition at 573 K are predicted to be lower than 10\textsuperscript{-20} (i.e. the absolute tolerance of the differential equation solver) and are therefore not shown in panel (a). (b) Black circles indicate H atoms (radical species); asterisks indicate UV-excited CO (i.e. carbon monoxide in the a\textsuperscript{3}$\Pi$ state). The transition temperature with regard to H radicals is calculated to be 1650 K.}
\label{fig:Fleury_2019_wUV_woUV}
\end{figure}
\begin{figure*}
\centering\textbf{(a)}
\gridline{\includegraphics[width=1\textwidth]{figures/without_UV_pathways.png}
}
\centering\textbf{(b)}
\gridline{\includegraphics[width=1\textwidth]{figures/with_UV_pathways.png}
}
\caption{Major reaction pathways predicted by the model on the basis of the ROP analysis, describing the previous experimental studies of \cite{Fleury_2019,Fleury_2020} at various temperature conditions indicated by the line colors: (a) without UV photons, and (b) with UV photons. The rates of the reactions inside the yellow highlighted region in Figure \ref{fig:reactionpathwayswithandwithoutUV}b are at least about three orders of magnitude faster than those of any other reactions in the model.
\label{fig:reactionpathwayswithandwithoutUV}}
\end{figure*}
\subsubsection{Reaction kinetics of thermal- and UV-photochemistry}
The most notable feature of the reaction kinetic modeling of simultaneous thermal- and UV-photochemistry is the significant increase of the molecular mixing ratios of all species at all temperatures compared to the thermal-only case (see Figure \ref{fig:Fleury_2019_wUV_woUV}). The species \ce{CH4}, \ce{H2O}, \ce{CO2}, and H radicals are predicted to be produced at least $\sim$20, $\sim$7, and $\sim$3 orders of magnitude more in the thermal- and UV-photochemistry condition than in the thermal-only condition at 573 K, 873 K, and 1173 K, respectively. At 1473 K, all these species are predicted to be formed $\sim$40 times more in the thermal- and UV-photochemistry condition than in the thermal-only condition. Figure \ref{fig:Fleury_2019_wUV_woUV}b shows that the molecular mixing ratio of CO(a\textsuperscript{3}$\Pi$) is not sensitive to temperature, which indicates that only the photochemical cycle (discussed below) determines its abundance in the system. Overall, the model prediction can explain the significantly augmented formation of \ce{CH4}, \ce{H2O}, and \ce{CO2} at all temperature conditions (T $\leq$ 1473 K) under simultaneous thermal- and UV-photochemistry, as shown in Figure 6 of \cite{Fleury_2019}.
If we look at the major reaction pathways based on the ROP analysis, shown in Figure \ref{fig:reactionpathwayswithandwithoutUV}b, we can see that the chemistry of the whole system at all temperature conditions is dominated by one photochemical cycle, indicated by the yellow highlighted region: (i) carbon monoxide in the ground state is electronically excited to the a\textsuperscript{3}$\Pi$ state by UV photons; (ii) excited CO can either relax to the ground state within its radiative lifetime of 3 ms (\cite{Lee_2020}) or react with \ce{H2} to form H and HCO; and (iii) HCO can either dissociate into H + CO or react with an H radical and disproportionate to \ce{H2} and CO. At elevated temperatures, H radical recombination (i.e. H + H $\rightarrow$ \ce{H2} at T $\geq$ 873 K) and \ce{H2} dissociation (\ce{H2} $\rightarrow$ H + H at T $\geq$ 1473 K) become dominant as well. Based on the ROP analysis, these major reaction pathways of the highlighted cycle are at least $\sim$3 orders of magnitude faster than any other reactions shown (or not shown, if they are minor reactions) in Figure \ref{fig:reactionpathwayswithandwithoutUV}b. For this reason, a significant amount of HCO becomes available with the aid of this photochemical cycle even at a temperature as low as 573 K, which makes the chemistry of the system very different from thermal-only chemistry. As a result, every chemical reaction involving HCO becomes much more efficient with UV photons than in thermal-only chemistry, which in turn pushes the whole chemistry toward the right side of Figure \ref{fig:reactionpathwayswithandwithoutUV}b and eventually leads to the augmented formation of \ce{CH4}, \ce{H2O}, and \ce{CO2}, in agreement with the experimental observations of \cite{Fleury_2019}.
The reaction between the photoexcited CO and the ground-state CO, which forms a triplet C atom and \ce{CO2}, turns out to be very important in the kinetic model, especially at temperatures above 873 K. As shown in Figure \ref{fig:reactionpathwayswithandwithoutUV}b, this reaction first efficiently forms \ce{CO2} and the triplet C atom. The C atom is then hydrogenated step by step by reacting with \ce{H2} into CH, \ce{CH2}, and \ce{CH3}, eventually forming \ce{CH4}. With every hydrogenation step (i.e. CH\textsubscript{n} + \ce{H2} $\rightarrow$ CH\textsubscript{n+1} + H), H radicals are efficiently formed; these H radicals also attack \ce{CO2} to form OH radicals, which go on to form \ce{H2O}. As mentioned earlier in Section \ref{subsec:COt_CO_rxn}, owing to the temperature dependence of the CO(X\textsuperscript{1}$\Sigma^+$) + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ C(\textsuperscript{3}P) + \ce{CO2} reaction, this reaction is more important at elevated temperatures. According to our kinetic model results, the significantly augmented formation of the major species (\ce{CH4}, \ce{H2O}, and \ce{CO2}) is attributed to the \ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ H + HCO reaction up to 873 K, while above 873 K it is attributed to the CO(X\textsuperscript{1}$\Sigma^+$) + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ C(\textsuperscript{3}P) + \ce{CO2} reaction. In short, these two photochemical schemes (i.e. \ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ H + HCO vs CO(X\textsuperscript{1}$\Sigma^+$) + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ C(\textsuperscript{3}P) + \ce{CO2}) determine the mixing ratios of \ce{CH4}, \ce{H2O}, and \ce{CO2} in different temperature regions below the transition temperature (i.e. 1970 K for \ce{CH4}, \ce{H2O}, and \ce{CO2}). With regard to H radicals, however, it has to be noted that the \ce{H2} + CO(a\textsuperscript{3}$\Pi$) reaction mainly determines the H-radical mixing ratio in the system at temperatures below the corresponding transition temperature (i.e. 1650 K for H radicals).
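Summing the stepwise hydrogenation sequence described above (each step CH\textsubscript{n} + \ce{H2} $\rightarrow$ CH\textsubscript{n+1} + H, starting from the C atom) gives the net stoichiometry
\begin{equation}
\mathrm{C(\textsuperscript{3}P)} + 4\,\ce{H2} \rightarrow \ce{CH4} + 4\,\mathrm{H},
\end{equation}
so each \ce{CH4} produced through this route also releases four H atoms, part of which are then available to convert \ce{CO2} into OH and, ultimately, \ce{H2O}.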
In contrast to the thermal-only chemistry, the molecular mixing ratios of \ce{CH4}, \ce{H2O}, and \ce{CO2} vary non-monotonically with temperature, as shown in Figure \ref{fig:Fleury_2019_wUV_woUV}a. These molecules are efficiently formed as a result of UV photochemistry enhanced by metastable a\textsuperscript{3}$\Pi$-state carbon monoxide and remain relatively stable at temperatures below 873 K (i.e. UV photochemistry determines their mixing ratios after the reaction time of 18 hours). However, at 1173 K, thermal chemistry starts to partially determine their mixing ratios, and this is clearly shown at 2000 K in Figure \ref{fig:Fleury_2019_wUV_woUV}a, where the molecular mixing ratios of \ce{CH4}, \ce{H2O}, and \ce{CO2} formed with UV photons are indistinguishable from those formed without UV photons (i.e. thermal-only). We further simulated the molecular mixing ratios of \ce{CH4}, \ce{H2O}, and \ce{CO2} at 2000 and 2500 K after 18 hours while keeping the other conditions the same as at 1473 K and, as expected, there was no significant difference in the molecular mixing ratios of \ce{CH4}, \ce{H2O}, and \ce{CO2} regardless of whether UV photons were available (see Figure \ref{fig:Fleury_2019_wUV_woUV}). Based on the model prediction (and as mentioned earlier in the text), the transition temperature at which thermal chemistry becomes dominant over photochemistry is around 1970 K, and it has to be noted that this transition temperature can be shifted by multiple factors, including (i) errors in the UV photon fluxes, (ii) errors in the (e.g. temperature-dependent) photoabsorption cross-sections of the species, (iii) errors in the quantum yields of any photochemical reactions, (iv) any missing reaction species or reactions in the system, (v) any errors in the included thermal reactions, and (vi) any errors in the included thermodynamic parameters of the related species. The sensitivity of the model in this regard is discussed later.
\subsubsection{The potential importance of acetylene (\ce{C2H2}) formation for refractory organic aerosol formation in hot-Jupiter exoplanet atmospheres}
Although no detectable amount of solid-phase product was observed in any of the experiments at temperatures up to 1473 K conducted by \cite{Fleury_2019} (i.e. 15 mbar; 573--1473 K; with and without UV; reaction time of 18 hours), an observable amount of condensed-phase organic aerosol product, deposited as a thin film on the sapphire windows, was reported by \cite{Fleury_2019} after 204 hours of experiments with an increased starting total gas pressure of 81 mbar under UV irradiation at 1473 K. Since there was no direct information about the molecular structure of this aerosol, it is hard to say definitively whether the observed organic aerosol is composed of aromatic (if not polycyclic aromatic) hydrocarbons or not. However, it is reasonable to assume that \ce{C1} species must first pass through \ce{C2} species before growing into aerosol particles, which gives us a rationale to look into any \ce{C2} or larger species predicted to form in our model simulations. The model simulation of the corresponding experimental condition (i.e. 81 mbar, 1473 K, with UV photons, and a reaction time of 206 hours) predicted the following molecular mixing ratios: [\ce{C2H6}] = 6.62$\times$10\textsuperscript{-15}, [\ce{C2H4}] = 7.32$\times$10\textsuperscript{-12}, and [\ce{C2H2}] = 1.95$\times$10\textsuperscript{-10}. Any species larger than \ce{C2} (e.g. \ce{C3H3} or larger) were predicted to be around or below 1.00$\times$10\textsuperscript{-20} (the absolute tolerance of the differential equation solver), indicating that no significant amount of species larger than \ce{C2} is predicted to form in the simulated systems, even at a temperature of 2500 K.
Among these \ce{C2} species, we focused on acetylene formation in the model for the following reasons: (i) previous studies have suggested benzene (a one-ring aromatic hydrocarbon) and naphthalene (a two-ring aromatic hydrocarbon) as precursors to refractory hydrocarbon aerosols (\cite{Trainer-2013,Brem-2015}); (ii) one of the prevailing mechanisms that rationalizes the formation of up to two-ring aromatics (including benzene and naphthalene) is the hydrogen-abstraction-acetylene-addition (HACA) mechanism, in which an aromatic radical formed via hydrogen abstraction adds to acetylene (\ce{C2H2}) to form a larger vinylic radical adduct (\cite{Bittner-1981, Chu_2019, Frenklach_1985, Parker_2014, Yang_2016, Smith_2020}); and (iii) a previous photochemical experiment by \cite{Franklin_2018-thesis} using a \ce{D2} lamp (115--170 nm) at 25$^{\circ}$C under oxygen-free conditions showed that VUV irradiation of \ce{C2H2} is the most efficient at forming organic particles, compared to VUV irradiation of the other \ce{C2} species (i.e. \ce{C2H6} and \ce{C2H4}). Although it is not clear exactly how acetylene formation is related to the formation of the organic aerosols observed by \cite{Fleury_2019}, as mentioned above, many previous studies indicate a positive relationship between acetylene formation and aerosol formation.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/Fluery_2019_C2H2.png}
\caption{\footnotesize (a) Temperature-dependent mixing ratio profiles of acetylene (\ce{C2H2}) predicted by the reaction kinetic model for the experimental conditions described in \cite{Fleury_2019}. Solid symbols indicate the predicted \ce{C2H2} mixing ratio with UV photons, while open symbols indicate the predicted \ce{C2H2} mixing ratio at the thermal-only condition (i.e. without UV photons). The solid line marks the transition temperature at which the molecular mixing ratios of \ce{C2H2} formed in the two systems (i.e. thermal + UV vs thermal-only) differ by less than 10\%. The transition temperature with regard to \ce{C2H2} is 1985 K. (b) A schematic diagram of the reaction pathways toward \ce{C2H2} formation, based on the ROP analysis.}
\label{fig:C2H2_pathway}
\end{figure}
\begin{deluxetable}{lccccc}
\tablenum{3}
\tablecaption{Comparison of the model-predicted \ce{C2H2} mixing ratios among various conditions (initial molecular composition of \ce{H2} : CO = 99.7 : 0.3) with UV photons available\textsuperscript{\textit{a}} \label{tab:C2H2_comparison}}
\tablehead{
\colhead{} & \colhead{1473 K, 15 mbar}{\textsuperscript{\textit{b}}}& \colhead{1473 K, 81 mbar} & \colhead{1473 K, 81 mbar}\textsuperscript{\textit{b},{\textit{c}}} & \colhead{2000 K, 15 mbar}& \colhead{2500 K, 15 mbar}\\
\colhead{} & \colhead{(18 hr)}& \colhead{(18 hr)} & \colhead{(206 hr)} & \colhead{(18 hr)}& \colhead{(18 hr)}
}
\startdata
[\ce{C2H2}]&1.10$\times$10\textsuperscript{-11} (2.68$\times$10\textsuperscript{-14})&1.57$\times$10\textsuperscript{-12} (2.18$\times$10\textsuperscript{-13})&1.95$\times$10\textsuperscript{-10} (2.43$\times$10\textsuperscript{-11})&1.37$\times$10\textsuperscript{-7} (1.25$\times$10\textsuperscript{-7})&8.55$\times$10\textsuperscript{-8} (8.55$\times$10\textsuperscript{-8})\\
\hline
\enddata
\tablecomments{\footnotesize\textsuperscript{\textit{a}}Numbers in parentheses refer to calculated molecular mixing ratio of \ce{C2H2} at thermal-only condition; \textsuperscript{\textit{b}}Experimental conditions conducted by \cite{Fleury_2019}; \textsuperscript{\textit{c}}Experimental condition from which a detectable amount of solid-phase product was observed in \cite{Fleury_2019}}
\end{deluxetable}
With this background, looking at the temperature-dependent mixing ratio profiles of acetylene predicted by the reaction kinetic model for the experimental conditions described in \cite{Fleury_2019}, we can see that UV-excited metastable carbon monoxide significantly enhances the formation of acetylene compared to thermal-only chemistry at temperatures up to 1473 K, as shown in Figure \ref{fig:C2H2_pathway}a. Up to 1173 K, acetylene (\ce{C2H2}) is predicted to be produced at least $\sim$4 orders of magnitude more in the thermal- and UV-photochemistry condition than in the thermal-only condition (Figure \ref{fig:C2H2_pathway}a). At 1473 K, acetylene is predicted to be formed $\sim$410 times more in the thermal- and UV-photochemistry condition than in the thermal-only condition. At temperatures above 2000 K, there is no significant difference in the predicted molecular mixing ratio of \ce{C2H2} regardless of whether UV photons are available, which indicates that the transition temperature is around 2000 K (more precisely, 1985 K). Figure \ref{fig:C2H2_pathway}b shows the major reaction pathways from \ce{C1} species to \ce{C2H2}. As can be seen from Figures \ref{fgr:ROP_573K_3}--\ref{fgr:ROP_1473K_3} in Appendix \ref{sec:ROP}, at early times (i.e. up to $\sim$10 ms), \ce{CH3} + CO $\rightarrow$ \ce{C2H2} + OH is the major pathway, after which the major \ce{C2H2} formation pathway changes to others such as \ce{HCCOH} + H $\rightarrow$ \ce{C2H2} + OH or \ce{CH2CO} + H $\rightarrow$ \ce{C2H2} + OH. With UV photons available in the system, the UV-elevated amount of HCO results in an elevated formation of HCOH and \ce{CH2OH}, which in turn results in an elevated formation of \ce{C2H2}. However, at temperatures above the transition temperature (i.e. 1985 K), thermal chemistry determines the \ce{C2H2} mixing ratio, as it does for the molecular mixing ratios of \ce{CH4}, \ce{H2O}, and \ce{CO2}. Just as the calculated rate coefficient of the reaction between excited CO and \ce{H2} forming HCO and H is much larger than that of the corresponding ground-state reaction, the rate coefficient of the reaction between excited CO and \ce{CH3} forming \ce{C2H2} and OH might affect the formation of \ce{C2H2} in a significant way. To test this, we recalculated the potential energy surfaces of \ce{C2H2} + OH taken from \cite{Miller_1989} at the CBS-QB3 level of theory using Gaussian 09 (\cite{g09}) and calculated the temperature- and pressure-dependent rate coefficients of the \ce{CH3} + CO(a\textsuperscript{3}$\Pi$) reactions following a method similar to that described in Section \ref{photochem_rate_calc} (the rate coefficients and the corresponding PES are available in the Supplementary Materials). However, regardless of whether the \ce{CH3} + CO(a\textsuperscript{3}$\Pi$) chemistry is included in the reaction kinetic model, no significant change was observed in the formation of \ce{C2H2}. This might indicate that the molecular mixing ratio of \ce{CH3} is still too low to drive any significant increase in acetylene formation through the \ce{CH3} + CO(a\textsuperscript{3}$\Pi$) chemistry.
Table \ref{tab:C2H2_comparison} shows the \ce{C2H2} mixing ratios predicted by the model (with thermal- and UV-photochemistry incorporated) at different conditions (starting from \ce{H2} : CO = 99.7 : 0.3). As can be seen from Table \ref{tab:C2H2_comparison}, the model simulation indicates that the formation of acetylene (\ce{C2H2}) is favored with increasing temperature. The model also shows that \ce{C2H2} formation has not reached a quasi-equilibrium state even after 206 hrs (81 mbar) and 18 hrs (15 mbar) at 1473 K, respectively (see Figure \ref{fig:C2H2_formation_profiles}a--b in Appendix \ref{sec:C2H2_mole_fraction_profile}). However, at 2000 K, acetylene formation reaches its quasi-equilibrium state after 18 hrs with a molecular mixing ratio of $\sim$1.37$\times$10\textsuperscript{-7} (see Figure \ref{fig:C2H2_formation_profiles}c in Appendix \ref{sec:C2H2_mole_fraction_profile}), and no significant amount of any species larger than \ce{C3} molecules was predicted in the given system. This mainly indicates that (i) the amount of acetylene formed via thermal-only chemistry (up to 2500 K) from the condition of \ce{H2} : CO = 99.7 : 0.3 is not enough to push the chemistry toward larger species such as polycyclic aromatic hydrocarbons (PAHs), which are accepted as precursors to organic aerosols in the majority of combustion research (\cite{Frenklach_2020}), and (ii) some major photochemical reactions can bring the chemistry into disequilibrium to form larger hydrocarbon species (e.g. one-ring aromatic hydrocarbons or PAHs) or even organic aerosols. With regard to (ii), the results of previous experimental studies (\cite{Franklin_2018-thesis} and \cite{Fleury_2019}), in combination with the current study, might indicate that a reaction kinetic model including \ce{C2H2} photochemistry is the key to explaining the experimentally observed organic aerosol formation of \cite{Fleury_2019}. Note that our reaction kinetic model including CO(a\textsuperscript{3}$\Pi$) photochemistry was successful in qualitatively explaining the significantly augmented formation of the major species (i.e. \ce{CH4}, \ce{H2O}, and \ce{CO2}) observed in \cite{Fleury_2019} at all temperature conditions. As can be seen from Table \ref{tab:C2H2_comparison}, our thermal- and UV-photochemistry model predicts that $\sim$3 orders of magnitude more \ce{C2H2} is formed at the 2000 K and 15 mbar condition, even with an order of magnitude shorter time scale (i.e. 18 hrs), than at the 1473 K and 81 mbar condition with a reaction time of 206 hrs. Thus, if our hypothesis (i.e. that the amount of \ce{C2H2} is closely related to aerosol formation) is correct, we should be able to observe aerosol formation even at temperatures lower than 2000 K, within reaction times shorter than 18 hrs, using the same experimental device as \cite{Fleury_2019}. Conducting this experiment and including \ce{C2H2}-related photochemistry in our current model would be interesting future studies.
\subsubsection{Sensitivity analysis of the kinetic model}
One of the major features of this study is the assessment of the transition temperature at which thermal chemistry starts to become dominant over photochemistry. However, this transition temperature can be affected by multiple factors, as mentioned earlier: (i) errors in the UV light sources, (ii) errors in photoabsorption cross-sections, (iii) errors in quantum yields of any photochemical reactions, (iv) any missing reaction species or reactions (including surface chemistry), (v) any errors in the thermochemical reactions included in the model, and (vi) any errors in the thermodynamic parameters of the related species. Although (iv) is important when it comes to model improvements, it was beyond the scope of the current work. Factors (i)--(iii) directly affect the calculated carbon monoxide photoexcitation rate coefficient (i.e. X\textsuperscript{1}$\Sigma^+$ $\rightarrow$ a\textsuperscript{3}$\Pi$) described in Section \ref{photoexcit_rate_calc}. The errors in (v)--(vi) originate from the uncertainties of the rate coefficients and thermochemistry imported from the references described in Section \ref{rmg_library}. We computed the sensitivity of the major species (i.e. \ce{CH4}, \ce{H2O}, and \ce{CO2}) to all of the rate coefficients in the model under the condition of 1473 K, 15 mbar, [\ce{H2}] = 0.997, and [CO] = 0.003, and the three most sensitive parameters affecting the predicted formation of the major species turn out to be the following photochemical reactions:
\begin{equation}
\mathrm{CO(X\textsuperscript{1}\Sigma^+)} \to \mathrm{CO(a\textsuperscript{3}\Pi)}
\label{eq:rxn1}
\end{equation}
\begin{equation}
\mathrm{CO(X\textsuperscript{1}\Sigma^+)} + \mathrm{CO(a\textsuperscript{3}\Pi)} \to \ce{CO2} + \mathrm{C(\textsuperscript{3}P)}
\label{eq:rxn2}
\end{equation}
\begin{equation}
\ce{H2} + \mathrm{CO(a\textsuperscript{3}\Pi)} + \mathrm{M} \to \mathrm{H} + \mathrm{HCO} + \mathrm{M}
\label{eq:rxn3}
\end{equation}
They have normalized sensitivities $\frac{\partial \ln C_n}{\partial \ln k_i}$ of around 1, 0.95, and $-0.75$, respectively (see the sensitivity analysis figures in Appendix \ref{sec:sensitivity_analysis}). This means that, for example, if one increases the rate coefficient of reaction \ref{eq:rxn1} by a factor of 10, the predicted molecular mixing ratios of the major species (i.e. \ce{CH4}, \ce{H2O}, and \ce{CO2}) would increase by a factor of 10, and the transition temperature would be pushed to a much higher temperature. On the contrary, if one reduces this carbon monoxide photoexcitation rate coefficient (i.e. reaction \ref{eq:rxn1}, \textit{k}\textsubscript{2}) by a factor of 2, the predicted molecular mixing ratios of the major species (i.e. \ce{CH4}, \ce{H2O}, and \ce{CO2}) would decrease by $\sim$50\%, and the transition temperature would shift to a lower temperature. The former case is possible if the photoabsorption cross-sections of carbon monoxide are underestimated. It has to be noted that the photoabsorption cross-sections of CO used to calculate \textit{k}\textsubscript{2} were taken from \cite{MyerSamson_1970}, who experimentally measured the VUV absorption cross-sections at 298 K. VUV absorption cross-sections usually increase with increasing temperature owing to thermally increased populations of higher vibrational states, which can result in increased access to upper electronic states; \cite{Venot_2018} experimentally showed that the photoabsorption cross-sections of \ce{CO2} around Lyman-$\alpha$ can vary by two orders of magnitude between 150 and 800 K. It might therefore be the case that the photoabsorption cross-sections of CO around Lyman-$\alpha$ increase by more than an order of magnitude between 298 K and 1473 K, which would increase the carbon monoxide photoexcitation rate coefficient (i.e. \textit{k}\textsubscript{2}) by a factor of 10, meaning that the calculated \textit{k}\textsubscript{2} is underestimated by a factor of 10. However, the latter case (i.e. reducing \textit{k}\textsubscript{2} by a factor of 2) is also possible. It has to be noted that the quantum yield (i.e. $\Phi_2$) used in the calculation of \textit{k}\textsubscript{2} in Section \ref{photoexcit_rate_calc} is assumed to be unity, since we could not find any theoretically or experimentally determined quantum yield for this photoexcitation reaction. It would not be surprising, however, if $\Phi_2$ were 0.5, in which case the model-predicted molecular mixing ratios would be overestimated by $\sim$50\%. These cases suggest interesting future studies (i.e. a measurement of the temperature-dependent VUV photoabsorption cross-sections of carbon monoxide or a determination of $\Phi_2$).
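As an added clarification, these normalized sensitivities imply a local, first-order power-law response of a mixing ratio to a change in a rate coefficient (strictly valid only for modest perturbations):
\begin{equation}
S_{n,i} \equiv \frac{\partial \ln C_n}{\partial \ln k_i}, \qquad \frac{C_n(f\,k_i)}{C_n(k_i)} \approx f^{\,S_{n,i}},
\end{equation}
so with $S_{n,i} \approx 1$ for reaction \ref{eq:rxn1}, a factor of $f = 10$ gives a $\sim$10-fold increase and $f = 0.5$ gives a $\sim$50\% decrease in the major species, consistent with the two cases discussed above.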
It has to be noted that there are too many model parameters and too few experimental data (e.g. a measurement of the CO absorption cross-section at various temperatures or a measurement of $\Phi_2$) to allow a perfect determination of the model parameters (and keep in mind that there are several other moderately sensitive rate coefficients and thermochemistry parameters in the model, not just these three most sensitive reactions). However, even considering these limitations, our kinetic modeling work indicates that electronically excited CO in its metastable state (a\textsuperscript{3}$\Pi$) pushes the whole chemistry toward the augmented formation of \ce{CH4}, \ce{CO2}, and \ce{H2O} in an obvious way, which qualitatively matches the experimental observations of \cite{Fleury_2019}. The quantitative discrepancy does not indicate a fundamental issue with the reaction kinetic model, but rather reflects our imperfect knowledge of the values of the model parameters.
\subsection{Modeling of the \ce{H2}/CO/\ce{H2O} exoplanet atmosphere analogue of \cite{Fleury_2020}}
When \ce{H2O} is added to the \ce{H2}/CO mixture to change the C/O ratio to a more realistic solar elemental abundance, both the model-predicted thermal-only chemistry and the UV-incorporated thermal chemistry of \cite{Fleury_2020} are very similar to those of \cite{Fleury_2019}. The major reaction pathways forming \ce{CH4} (note that \ce{H2O} was included from the beginning of the experiments at a much higher mixing ratio and is therefore not considered a product here) in \cite{Fleury_2020} are predicted to be almost the same as those described in Figure \ref{fig:reactionpathwayswithandwithoutUV} and in the previous section describing the modeling results for \cite{Fleury_2019}. The major reaction pathway forming \ce{CO2} was exclusively the oxidation of CO by OH radicals, forming \ce{CO2} and H radicals. This behavior is due to the large amount of \ce{H2O} included in the system from the beginning, which can either thermally or photochemically dissociate into H and OH radicals. Although \ce{H2O} formation is predicted in our model simulation of \cite{Fleury_2019}, the amount of \ce{H2O} predicted there is still far smaller than the amount of \ce{H2O} introduced into the system at the beginning of the simulation of \cite{Fleury_2020}. For this reason, the simulated system of \cite{Fleury_2020} contains much more OH radicals than the simulated system of \cite{Fleury_2019}. This leads to the notable difference between the two model simulations in the form of a reversed ratio of the \ce{CH4} and \ce{CO2} mixing ratios. As shown in Figure \ref{fig:Fleury_2020_wUV_woUV}, the molecular mixing ratio of \ce{CO2} is $\sim$3 and $\sim$4 orders of magnitude larger than that of \ce{CH4} at 1173 and 1473 K, respectively. In the case of \cite{Fleury_2019}, the molecular mixing ratio of \ce{CH4} is always at least 2 orders of magnitude higher than that of \ce{CO2} at temperatures of 1173 K and above, regardless of whether UV photons are available. As mentioned above, this reversed ratio is attributed to the inclusion of \ce{H2O} from the beginning of the kinetic simulation of \cite{Fleury_2020}. According to the kinetic model, \ce{H2O} already dissociates thermally to form both H and OH radicals at 1173 K. These OH radicals reach a quasi-equilibrium state within $\sim$11 hours when the energy source is thermal-only, while they reach quasi-equilibrium much faster, within 200 seconds, with the aid of UV photons (i.e. photodissociation of \ce{H2O} into H + OH). This tendency becomes more pronounced at the higher temperature of 1473 K, at which thermal dissociation is much more efficient (since the thermal dissociation rate coefficient increases with temperature). This can be observed in the form of an increasing OH mixing ratio in Figure \ref{fig:Fleury_2020_wUV_woUV}. Compared to 1173 K, the condition of \cite{Fleury_2020} at 1473 K is much more oxidizing, so that the predicted \ce{CH4} mixing ratio decreases rapidly, by a factor of $\sim$20 (i.e. from [\ce{CH4}] = 1.97$\times$10\textsuperscript{-8} at 1173 K to [\ce{CH4}] = 9.55$\times$10\textsuperscript{-10} at 1473 K). It also has to be noted that the \ce{CO2} mixing ratio does not differ depending on whether UV irradiation is available (see the red symbols and lines in Figure \ref{fig:Fleury_2020_wUV_woUV}). This feature indicates that the \ce{CO2} formation has already reached its thermal quasi-equilibrium after 18 hours.
As shown in Figure \ref{fig:CO2_t_profile} in Appendix \ref{sec:CO2_mole_fraction_profile}, the \ce{CO2} mixing ratio at 1173 K reaches equilibrium within 7 hours in the thermal-only condition and within 50 seconds with UV irradiation, while the \ce{CO2} mixing ratio at 1473 K reaches equilibrium within 200 seconds in the thermal-only condition and within 30 seconds with UV irradiation. According to our model prediction, the predicted methane molecular mixing ratio at 1173 K is large enough to be detected (i.e. [\ce{CH4}] = 1.97$\times$10\textsuperscript{-8}, which is around the same as the predicted \ce{CH4} mixing ratio plotted in Figure \ref{fig:Fleury_2019_wUV_woUV}a at 1173 K with UV irradiation). It was therefore surprising that no methane formation was observed in any of the experiments of \cite{Fleury_2020}. Two plausible explanations are that (i) errors in the current model lead to an overestimate of the amount of \ce{CH4} formed, or (ii) the significant amount of \ce{H2O} included in the system from the beginning disturbed the detection of the IR or mass-spectrometry peaks of \ce{CH4} at 1173 K. With regard to (i), we can assess this through sensitivity analysis. According to the sensitivity analysis, the most sensitive reaction and the most sensitive thermochemistry affecting the predicted formation of \ce{CH4} are the reaction \ce{CH3} + OH $\rightarrow$ \ce{CH2OH} + H and the thermochemistry of \ce{H2}, with normalized sensitivities of 0.5 and 1.25, respectively (see Figure \ref{fig:SA_1173K}c and d in Appendix \ref{sec:sensitivity_analysis}). This means that we would have to reduce the rate coefficient of the most sensitive reaction (i.e. \ce{CH3} + OH $\rightarrow$ \ce{CH2OH} + H) by a factor of 20, or reduce the Gibbs free energy of \ce{H2} by 72\%, in order to decrease the predicted mixing ratio of \ce{CH4} by a factor of 5. Although these parameters obviously carry a certain amount of error, it is unlikely that such errors alone can explain the non-detection of \ce{CH4} in \cite{Fleury_2020}. Rather, it seems more reasonable to lean toward explanation (ii) (i.e. an interference with the IR and mass-spectrometry peaks of \ce{CH4} due to the significant amount of pre-existing \ce{H2O} in the experimental condition of \cite{Fleury_2020}). Other than that, the current model's reaction kinetic interpretation qualitatively matches the experimental observations of \cite{Fleury_2020}.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{figures/Fluery_2020_wUV_woUV.png}
\caption{\footnotesize Temperature-dependent molecular mixing ratio profiles predicted by the reaction kinetic model for the experimental conditions described in \cite{Fleury_2020}. Solid symbols indicate the predicted mixing ratios of molecular species with both thermal energy and UV photons, while open symbols indicate the predicted mixing ratios at the thermal-only condition (i.e. without UV photons): (a) Red circles indicate \ce{CO2}; the predicted \ce{CO2} mixing ratio at the thermal-only condition overlaps with that obtained with UV photons and is therefore not shown separately. Blue circles indicate \ce{OH}; green circles indicate \ce{CH4}; black circles indicate H atoms (radical species); asterisks indicate UV-excited CO (i.e. carbon monoxide in the a\textsuperscript{3}$\Pi$ state).}
\label{fig:Fleury_2020_wUV_woUV}
\end{figure}
\subsection{Quantitative comparison of the \ce{CO2} mixing ratios between the model simulation results and the experimental results of \cite{Fleury_2019} and \cite{Fleury_2020}}
Since \cite{Fleury_2019, Fleury_2020} quantified the mixing ratios of \ce{CO2} at various experimental conditions by applying the Beer-Lambert law to the IR spectroscopy results, we derived quantitative comparisons of \ce{CO2} between the model simulation results and the experimental results at the corresponding conditions. Although the discrepancy between the model simulation results and the experimental results for the C/O = 0.35 condition is within 2 orders of magnitude, as shown in Table \ref{tab:mixing_ratio_comparison}, the discrepancy for the C/O = 1 condition is considerably larger. There are multiple factors that can cause this discrepancy. First, as mentioned in the main text, the products observed in the gas-phase static cell experiments conducted by \cite{Fleury_2019,Fleury_2020} are formed well beyond tertiary (or even higher-order) reaction chemistry, including surface chemistry on the cell walls, since these experiments were conducted for more than 18 hrs. Surface chemistry is considered to be more efficient and faster than gas-phase chemistry and is very difficult to simulate in detail. Although surface chemistry is indeed very important for precisely describing the experiments, it was beyond the scope of the current work, whose aim is to gain insight into simultaneous thermally and photochemically driven reaction pathways involving electronically excited CO in its metastable state (a\textsuperscript{3}$\Pi$). Natural isotopic contamination when using enriched isotopic gases was mentioned in \cite{Fleury_2019} and may have affected the chemistry, contributing to the discrepancy as well. The calculation of the absorption cross-section of \ce{CO2} described in Section 2.5 of \cite{Fleury_2019} can also contribute to the discrepancy. We conclude that the discrepancy is most likely due to the combined effects of the multiple factors mentioned above, as well as experimental uncertainties. As mentioned earlier, the quantitative discrepancy does not indicate a fundamental issue with the reaction kinetic model, but rather reflects our imperfect knowledge of the values of the model parameters.
Setting aside the quantitative comparison, which can be influenced by the experimental factors discussed above, our modeling work clearly reproduces the experimental observations: (a) for C/O = 1, where only \ce{H2} and CO were present at the beginning, the thermochemical formation of \ce{CO2} is significantly less efficient than the thermally augmented photochemical production of \ce{CO2}, as observed by \cite{Fleury_2019}; (b) for C/O = 0.35, which was achieved through the addition of \ce{H2O} to the \ce{H2} + CO starting composition, the chemistry is mainly driven by \ce{H2O} dissociation (both thermal and photochemical), and as a result no significant difference is observed between the thermal and the thermally augmented photochemical reaction pathways. Our model again confirms the experimental observations of \cite{Fleury_2020}.
This analysis also highlights the importance of combining experimental work with rigorous theoretical reaction kinetics modeling. Further, this work confirms that, although experimental conditions have unavoidable limitations (such as reaction cell boundaries), the relative equilibrium mixing ratios obtained from the experiments are similar to the predicted ones.
\begin{deluxetable}{ccccccccc}
\tablenum{4}
\tablecaption{The molecular mixing ratio comparison of \ce{CO2} at various conditions between the model simulation results and the experimental results of \cite{Fleury_2019} and \cite{Fleury_2020}. \label{tab:mixing_ratio_comparison}}
\tablehead{
\colhead{} & \multicolumn{4}{c}{C/O=1}&\multicolumn{4}{c}{C/O=0.35}\\
\colhead{} & \multicolumn{2}{c}{Thermal only}&\multicolumn{2}{c}{Thermal + UV}& \multicolumn{2}{c}{Thermal only}&\multicolumn{2}{c}{Thermal + UV}\\
\colhead{} & \colhead{This study}&\colhead{Exp. 2019\textsuperscript{\textit{a}}}& \colhead{This study}&\colhead{Exp. 2019\textsuperscript{\textit{b}}}&\colhead{This study}&\colhead{Exp. 2020\textsuperscript{\textit{c}}}&\colhead{This study}&\colhead{Exp. 2020\textsuperscript{\textit{c}}}}
\startdata
573 K&1.83$\times$10\textsuperscript{-31}&1.50$\times$10\textsuperscript{-5}&7.60$\times$10\textsuperscript{-10}&1.00$\times$10\textsuperscript{-3}& \multicolumn{4}{c}{N/A}\\
\hline
873 K&4.96$\times$10\textsuperscript{-17}&6.40$\times$10\textsuperscript{-5}&4.56$\times$10\textsuperscript{-10}&8.50$\times$10\textsuperscript{-4}& \multicolumn{4}{c}{N/A}\\
\hline
1173 K&6.88$\times$10\textsuperscript{-14}&3.80$\times$10\textsuperscript{-5}&5.33$\times$10\textsuperscript{-11}&1.20$\times$10\textsuperscript{-3}&1.08$\times$10\textsuperscript{-5}&3.50$\times$10\textsuperscript{-4}&1.08$\times$10\textsuperscript{-5}&8.80$\times$10\textsuperscript{-4}\\
\hline
1473 K&1.68$\times$10\textsuperscript{-12}&3.40$\times$10\textsuperscript{-5}&7.21$\times$10\textsuperscript{-11}&4.30$\times$10\textsuperscript{-4}&5.56$\times$10\textsuperscript{-6}&3.60$\times$10\textsuperscript{-4}&5.56$\times$10\textsuperscript{-6}&5.60$\times$10\textsuperscript{-4}\\
\hline
\enddata
\tablecomments{\footnotesize\textsuperscript{\textit{a}}Mixing ratios of \textsuperscript{13}\ce{CO2} taken from Table 2 in \cite{Fleury_2019}; \textsuperscript{\textit{b}}Mixing ratios of \textsuperscript{13}\ce{CO2} taken from Table 3 in \cite{Fleury_2019}; \textsuperscript{\textit{c}}Mixing ratios of \textsuperscript{13}\ce{CO2} taken from Table 3 in \cite{Fleury_2020}}
\end{deluxetable}
\subsection{Impact of the current study on the field of astrochemistry}
In this work, we have implemented state-of-the-art computer-aided automatic construction of an astrochemical reaction network and successfully demonstrated how such a computer-aided reaction kinetic model can help us interpret previous photochemical experiments in detail. Not only have we identified the photochemically important role of the UV-excited state of carbon monoxide, but we also suggest future studies such as the measurement of the temperature dependence of the photoabsorption cross-sections of carbon monoxide, and a combined experimental and modeling kinetic study of \ce{C2H2} UV photochemistry. Our work clearly demonstrates the importance of coupling photochemical reaction pathways to thermochemical models to better understand exoplanet atmospheres. Only then will we be able to realistically describe the atmospheric chemistry of exoplanets that receive significant amounts of UV photons from their parent stars. JWST transmission spectroscopy will continue to provide a deluge of exoplanetary data that need to be efficiently and precisely interpreted by atmospheric chemical models. As the recent study by \cite{Tsai_2022} shows, it is important to properly implement photochemical reactions in chemical models in order to correctly interpret observational data and extract valuable insight into exoplanet atmospheres. We hope that our study will serve as a benchmark for future exoplanet atmospheric photochemical modeling studies and will consequently contribute to rapid innovation in the field of astrochemistry.
\section{Conclusions} \label{sec:conclusions}
In this work, we utilized an automatic chemical reaction mechanism generator to build a large and complex thermo- and photochemical network that can qualitatively rationalize the augmented chemistry observed in the previous experimental works of \cite{Fleury_2019, Fleury_2020}. Our model demonstrates that Lyman-$\alpha$-excited carbon monoxide in its metastable electronic state (a\textsuperscript{3}$\Pi$) can significantly enhance the chemistry under exoplanet-atmosphere-like conditions through two different reactions, namely (i) \ce{H2} + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ H + HCO and (ii) CO(X\textsuperscript{1}$\Sigma^+$) + CO(a\textsuperscript{3}$\Pi$) $\rightarrow$ \ce{CO2} + C(\textsuperscript{3}P). The first reaction leads to significantly enhanced formation of HCO radicals in the \ce{H2}-dominated system, which pushes the overall chemistry toward further \ce{CH4}, \ce{H2O}, and \ce{CO2} formation even at a temperature as low as 573 K. At temperatures above 873 K, the second reaction forms \ce{CO2} and triplet C radicals that rapidly hydrogenate to \ce{CH4}; this reaction is the main contributor to the significantly enhanced formation of major species. At temperatures above 2000 K, however, thermal chemistry becomes efficient enough to dominate the overall chemistry. Given the experimental conditions of \cite{Fleury_2019, Fleury_2020}, the transition temperature at which thermal chemistry begins to dominate photochemistry is $\sim$1970 K, and this transition temperature can be shifted by multiple factors. Finally, under the experimental conditions of \cite{Fleury_2019, Fleury_2020}, our model suggests that thermal-only chemistry up to 2500 K may not be able to push the chemistry toward larger species such as PAHs or organic aerosols; instead, \ce{C2H2} photochemistry and the photochemistry of the higher-carbon species produced from it may be the key to explaining the hydrocarbon aerosol formation observed in the previous experiments of \cite{Fleury_2019}. The model results demonstrate the importance of electronically excited metastable carbon monoxide in exoplanet atmospheres and show that adding more photochemical reactions and species to current 1D photochemical models would provide a more comprehensive understanding of exoplanet atmospheres.
\vspace{5mm}
This research work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This research work was funded by the NASA Exoplanet Research Program grant to MSG. BF thanks the Universit\'e Paris-Est Cr\'eteil (UPEC) for funding support (postdoctoral grant).
\section{Introduction}\label{sec:intro}
Bernd Sturmfels has a knack for neologisms, minting memorable mathematical terms that pithily portray their essence
and pass into general use.
Nonlinear algebra~\cite{NLA_book} is a Sturmfelian neologism expressing the focus on computation in applications of algebraic geometry, the
objects that appear in applications, and the theoretical underpinnings this inquiry requires.
Numerical nonlinear algebra is numerical computation supporting nonlinear algebra.
It is complementary to symbolic computation (also a key input to nonlinear algebra), and its development has opened up new
vistas to explore and challenges to overcome.
Sturmfels did not create this field, but his work with Huber introducing the polyhedral homotopy algorithm~\cite{HuSt95} catalyzed it.
This algorithm exemplifies Sturmfels' mathematical contributions, exploiting links between algebra and geometric combinatorics
to address problems in other areas of mathematics, in this case the ancient problem of solving equations.
He was also important for its development through his encouragement of researchers, his early and decisive use of its methods~\cite{Sturmfels2002,BHORS}, and his
popularization of it~\cite{3264}.
In Section~\ref{S:phc} we describe polynomial homotopy continuation and its basic use to solve systems of polynomial equations.
We develop the background and present some details of the polyhedral homotopy algorithm in Section~\ref{polyhedral-homotopy}.
Numerical algebraic geometry, which uses these tools to represent algebraic varieties on a computer, is presented in Section~\ref{sec:nag},
along with new methods for solving equations that this perspective affords.
A welcome and perhaps surprising feature is that there are often methods to certify the approximate solutions these algorithms provide,
which is sketched in Section~\ref{sec:cert}.
We close this survey by presenting three domains in which numerical nonlinear algebra has been applied in Section~\ref{sec:apps}.\newpage
\section{What is Polynomial Homotopy Continuation?}\label{S:phc}
\demph{Polynomial Homotopy Continuation} is a numerical method to compute complex-number solutions to systems of polynomial equations,
\begin{equation}\label{system}
F(x_1,\ldots,x_n)\ =\ \begin{bmatrix}
\ f_1(x_1,\ldots,x_n)\ \\
\vdots \\
\ f_m(x_1,\ldots,x_n)\
\end{bmatrix}\ =\ 0\,,
\end{equation}
where $f_i(x_1,\ldots,x_n)\in \mathbb{C}[x_1,\ldots,x_n]$ for $1\leq i\leq m$.
A point $\mathbf{z}\in\mathbb{C}^n$ is a \demph{regular zero} of $F$ if $F(\mathbf{z}) = 0$ and the Jacobian
matrix \defcolor{$JF$} of $F$ at $\mathbf{z}$ has rank $n$.
Necessarily, $m\geq n$.
When $m=n$, the system is \demph{square}
and the corresponding Jacobian matrix is a square $n\times n$ matrix.
The underlying idea is simple: to solve $F(\mathbf{x})=0$, we construct another system $G(\mathbf{x})=0$ of polynomial equations with known zeroes,
together with a \demph{homotopy}.
This is a family of systems $H(\mathbf{x},t)$ for $t\in\mathbb{C}$ interpolating between $F$ and $G$ in that
$H(\mathbf{x},0)=F(\mathbf{x})$ and $H(\mathbf{x},1)=G(\mathbf{x})$.
Considering one zero, $\mathbf{y}$, of $G(\mathbf{x})$ and restricting to $t\in[0,1]$, $H(\mathbf{x},t)=0$ defines a \demph{solution path}
$\mathbf{x}(t)\subset\mathbb{C}^n$ such that $H(\mathbf{x}(t),t)=0$ for $t\in[0,1]$ and $\mathbf{x}(1)=\mathbf{y}$.
The path is followed from $t=1$ to $t=0$ to compute the solution $\mathbf{z}=\mathbf{x}(0)$.
This is equivalent to solving the initial value problem
\begin{equation}
\label{Davidenko}
\frac{\partial}{\partial \mathbf{x}} H(\mathbf{x},t) \, \Big(\, \frac{\mathrm d}{\mathrm d t}\mathbf{x}(t)\, \Big)
+ \frac{\partial}{\partial t} H(\mathbf{x},t)\ =\ 0\,,\qquad \mathbf{x}(1)\ =\ \mathbf{y}\,.
\end{equation}
This \demph{Davidenko differential equation}~\cite{Davidenko,Davidenko_full} is typically solved using a standard predictor-corrector scheme (see
Section~\ref{sec:tracking}).
We say that $\mathbf{x}(1)=\mathbf{y}$ gets \demph{tracked} towards $\mathbf{x}(0)$.
For this to work, $\mathbf{x}(t)$ must be a regular zero of $H(\mathbf{x},t)=0$ for every $t\in(0,1]$.
Nonregular solutions at $t=0$ are handled with specialized numerical methods called \demph{endgames}~\cite{Morgan:Sommese:Wampler:1992a}.
So far, there is nothing special about polynomials---all we need is for $F$,~$G$, and $H$ to be analytic.
However, when $F$ is a system of polynomials, we can construct a \demph{start system} $G$ with known zeroes such that for every
isolated zero $\mathbf{z}$ of~$F$, there is at least one zero of $G$ that gets tracked towards $\mathbf{z}$.
That is, we may compute all isolated zeros of $F$.
Garc\'ia and Zangwill~\cite{GZ79} proposed polynomial homotopy continuation and a classic reference is Morgan's book~\cite{Morgan}.
The textbook by Sommese and Wampler~\cite{Sommese:Wampler:2005} is now a standard reference.
Historically, the first implementation with wide acceptance was \texttt{PHCpack} \cite{PHCpack}, followed a decade later by
\texttt{Bertini} \cite{Bertini}, which is also widely used.
Later came the \texttt{HOM4PS} family \cite{HOM4PSArticle,LeeLiTsai2008HOM4PS},
\texttt{NAG4M2}~\cite{NumericalAlgebraicGeometryArticle}, and \texttt{HomotopyContinuation.jl}~\cite{HC.jl}.
\texttt{NAG4M2} implements interfaces to many of these other packages, for example, see~\cite{NAG4M2_PHCpack}.
We will now explain polynomial homotopy continuation in more detail.
We first discuss what is meant by \demph{numerical method} and \demph{solution} (a synonym for zero) of a system.
\subsection{The solution to a system of polynomial equations}
A solution to the system~\eqref{system} is a point $\mathbf{z}\in\mathbb{C}^n$ satisfying~\eqref{system}.
The collection of all such points is an \demph{algebraic variety},
\begin{equation}\label{eq:AlgVariety}
V\ =\ \{\mathbf{z}\in\mathbb{C}^n \mid f_1(\mathbf{z})=\cdots = f_m(\mathbf{z})=0\}\,.
\end{equation}
This defines solutions $\mathbf{z}$ \demph{implicitly}, using just the definition of $F$.
It is hard to extract any information other than ``$\mathbf{z}$ is a solution to $F$'' from this representation.
A more useful representation of $V$ is given by a \demph{Gr\"obner basis} \cite{Sturmfels2002, Sturmfels2005}.
Consider a simple example of two polynomial equations in two variables,
\begin{equation}\label{Eq:Ex2.1}
x^2+y^2-1\ =\ x^2-y^3-y-1\ =\ 0\,,
\end{equation}
describing the intersection of two plane curves.
A Gr\"obner basis is $\{y^3+y^2+y,\ x^2+y^2-1\}$.
Its triangularity facilitates solving.
The first equation, $y^3+y^2+y=0$, has the three solutions $0, (-1\pm \sqrt{-3})/2$.
Substituting each value of $y$ into the second equation gives two values of $x$, for six solutions altogether.
While these equations can be solved exactly, one cannot do this in general. A Gr\"obner basis is an equivalent
implicit representation of $V$ from which we may transparently extract numerical invariants such as the number of
solutions or the dimension and degree of $V$.
Finer questions about individual solutions may require computing them numerically.
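
For illustration, the back-substitution can be carried out numerically. The following short Python sketch (an illustration added here, not taken from any of the packages cited above) computes the roots of the univariate Gr\"obner basis element $y^3+y^2+y$ and substitutes them into $x^2+y^2-1$, recovering numerical approximations of all six solutions of~\eqref{Eq:Ex2.1}.
\begin{verbatim}
import numpy as np

# Roots of the univariate Groebner basis element y^3 + y^2 + y.
y_roots = np.roots([1.0, 1.0, 1.0, 0.0])

solutions = []
for y in y_roots:
    # Back-substitute into x^2 + y^2 - 1 = 0; the "+ 0j" forces a complex sqrt.
    for x in (np.sqrt(1 - y**2 + 0j), -np.sqrt(1 - y**2 + 0j)):
        solutions.append((x, y))

for x, y in solutions:
    residual = max(abs(x**2 + y**2 - 1), abs(x**2 - y**3 - y - 1))
    print(np.round(x, 4), np.round(y, 4), f"residual = {residual:.1e}")

print(len(solutions), "solutions in total")   # six solutions altogether
\end{verbatim}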
Numerical methods only compute numerical approximations of solutions to a system~\eqref{system}.
Thus $(1.271+ .341\sqrt{-1}\,,\,-.500+.866\sqrt{-1})$ is an approximation of a solution to~\eqref{Eq:Ex2.1}.
A numerical approximation of a point $\mathbf{z}\in V$ is any point $\mathbf{y}\in\mathbb{C}^n$ which is in some sense close to~$\mathbf{z}$.
For example, we could require that $\mathbf{y}$ is within some tolerance $\epsilon>0$ of $\mathbf{z}$, i.e., $|\mathbf{y}-\mathbf{z}|<\epsilon$.
Consequently, the concept of zero of (or solution to) a polynomial system is replaced by an \demph{approximate zero} (defined in
Section~\ref{sec:tracking}).
This is fundamentally different than using exact methods like Gr\"obner bases, where the goal is to handle the true exact zeros of
polynomial systems.
As an approximate zero is not a true zero, a numerical computation does not yield all the information obtained in an exact
computation.
On the other hand, a numerical computation is often less costly than a symbolic computation.
Other advantages are that the architecture of modern computers is optimized for floating point arithmetic and that numerical
continuation is readily parallelized (see Remark~\ref{R:parallel}).
Despite not containing all the information of true zeroes, we discuss in Section~\ref{sec:cert} how to use approximate zeroes to obtain
precise and provable results.
\subsection{The Parameter Continuation Theorem}\label{SS:parameterContinuation}
Our discussion of homotopy continuation assumed that solution paths exist.
The \demph{Parameter Continuation Theorem} by Morgan and Sommese~\cite{MS1989} asserts this when the homotopy arises from
a path in \demph{parameter} space.
Suppose the system of polynomials~\eqref{system} depends on $k$ parameters $\mathbf{p}=(p_1,\ldots,p_k)\in\mathbb{C}^k$.
Write $F(\mathbf{x};\mathbf{p})$ for the
polynomial system corresponding to a particular choice of $\mathbf{p}$, and further suppose that the map $\mathbf{p}\mapsto F(\mathbf{x};\mathbf{p})$ is smooth.
For example, the parameters may be the coefficients in $F$.
Consider the incidence variety
\begin{equation}\label{Eq:Incidence_I}
Z\ =\ \{ (\mathbf{x}, \mathbf{p}) \in \mathbb{C}^n \times \mathbb{C}^k \mid F(\mathbf{x};\mathbf{p}) = 0\}\ \subseteq\ \mathbb{C}^n \times \mathbb{C}^k\,.
\end{equation}
Let $\pi_1\colon Z \rightarrow \mathbb{C}^n$ and $\pi_2\colon Z \rightarrow \mathbb{C}^k$ be the projections onto the first and second
factors.
The map $\pi_1$ identifies points in the fiber $\pi_2^{-1}(\mathbf{p})$ with solutions to $F(\mathbf{x};\mathbf{p})=0$.
\begin{theorem}[Parameter Continuation Theorem]\label{parameter_theorem}
For $\mathbf{p}\in\mathbb{C}^k$, let $N(\mathbf{p})$ be the number of regular zeroes of $F(\mathbf{x};\mathbf{p})=0$.
There exists a proper algebraic subvariety $B \subset \mathbb{C}^k$ and a number $N$, such that $N(\mathbf{p})\leq N$ for $\mathbf{p}\in\mathbb{C}^k$
and $N(\mathbf{p})=N$ when $\mathbf{p}\not\in B$.
Set $U\vcentcolon=\mathbb{C}^k \smallsetminus B$ and suppose that $\gamma(t)\colon [0, 1] \rightarrow \mathbb{C}^k$ is a continuous path.
Write $\mathbf{p}_0\vcentcolon=\gamma(0)$.
\begin{enumerate}
\item If $\gamma([0, 1])\subset U$, then the homotopy $F(\mathbf{x}; \gamma(t))$ defines $N$ continuous, isolated smooth solution paths
$\mathbf{x}(t)$.
\item If $\gamma((0, 1])\subset U$, then as $t \rightarrow 0$, the limits of the solution paths, if they exist, include all the isolated
solutions to $F(\mathbf{x}; \mathbf{p}_0)=0$.
This includes both regular solutions and solutions with multiplicity greater than one.
\end{enumerate}
At points $t\in[0,1]$ with $\gamma(t)\in U$ where $\gamma$ is differentiable, $\mathbf{x}(t)$ is differentiable.
\end{theorem}
The point of this theorem is that any path satisfying $\gamma((0,1])\subset U$ can be used for homotopy continuation, so that
$G(\mathbf{x})=F(\mathbf{x};\gamma(1))$ is the start system.
Since the \demph{branch locus} $B = \mathbb{C}^k{\smallsetminus}U$ is a subvariety, it has real codimension at least two and typical
paths in the parameter space $\mathbb{C}^k$ do not meet $B$.
Call $\pi_2\colon Z\to\mathbb{C}^k$ a \demph{branched cover}.
Theorem~\ref{parameter_theorem} can be generalized, replacing the parameter space $\mathbb{C}^k$ by an irreducible
variety~\cite[Theorem 7.1.4]{Sommese:Wampler:2005}.
A \demph{parameter homotopy} is one arising from a path $\gamma$ such as in Theorem~\ref{parameter_theorem}(2).
The Parameter Continuation Theorem follows from Bertini's Theorem, other standard results in algebraic
geometry, and the implicit function theorem.
A proof is given in~\cite{MS1989}.
\begin{example}\label{ex:paths}
Figure~\ref{F:paths} shows possibilities for homotopy paths $\mathbf{x}(t)$, when
Theorem~\ref{parameter_theorem}(2) holds.
\begin{figure}[htb]
\centering
\begin{picture}(200,120)(0,-5)
\put(0,0){\includegraphics{pictures/paths}}
\put(8,-10){$0$} \put(98,-10){$t$} \put(190,-10){$1$}
\end{picture}
\caption{Homotopy Paths.}\label{F:paths}
\end{figure}
The start system $G(\mathbf{x})$ at $t=1$ has $N=5$ regular zeros, and each lies on a unique path $\mathbf{x}(t)$ for $t\in(0,1]$.
One path has no finite limit as $t\to 0$, while the other four have limits.
Two have unique limits; the endpoint of one at $t=0$ is the regular zero of the target system $F(\mathbf{x})$,
while the endpoint of the other is not an isolated zero of $F(\mathbf{x})$.
Two paths have the same limit, and their common endpoint is an isolated zero of~$F(\mathbf{x})$ of multiplicity
two.\hfill$\diamond$
\end{example}
\subsection{The total degree homotopy}\label{sec:totalDegree}
To reach \demph{all}
isolated zeros of the target system,
the start system must have at least as many zeros as the target system.
Thus, an upper bound on the number of isolated zeros is often needed to choose a homotopy.
One such upper bound is provided by B\'{e}zout's Theorem:
The number of isolated zeros of the system $F = (f_1,\ldots,f_n)$ is at most $d_1 d_2 \cdots d_n$,
where $d_i = \deg f_i$ for $i=1,\ldots,n$.
It inspires start systems of the form
\begin{equation}\label{Eq:BezoutStart}
\begin{bmatrix}
\ b_0 x_1^{d_1}\ -\ b_1\ \\
\vdots \\
\ b_0 x_n^{d_n}\ -\ b_n\
\end{bmatrix}\ .
\end{equation}
For nonzero complex numbers $b_0,b_1,\ldots,b_n \in \mathbb{C}{\smallsetminus}\{0\}$, this start system lies outside the branch locus $B$ and has
$d = d_1 d_2 \cdots d_n$ solutions, all of which are easily computed.
This gives the \demph{total degree homotopy},
\begin{equation}\label{E:TDH}
H(x_1,\ldots,x_n,t) =
t \cdot
\begin{bmatrix}
\ b_0 x_1^{d_1}\ -\ b_1\ \\
\vdots \\
\ b_0 x_n^{d_n}\ -\ b_n\
\end{bmatrix}
\ +\
(1-t) \cdot
\begin{bmatrix}
\ f_1(x_1,\ldots,x_n)\ \\
\vdots \\
\ f_n(x_1,\ldots,x_n)\
\end{bmatrix}\ .
\end{equation}
Such a convex combination of two systems is called a \demph{straight-line homotopy}; the straight-line homotopy is a particular case of a parameter homotopy.
For general choices of the parameters $b_i$ the smoothness conditions of Theorem~\ref{parameter_theorem}(2)
hold~\cite[Thm.~8.4.1]{Sommese:Wampler:2005}.
\begin{example}\label{Ex:TD_simpleSystem}
The total degree homotopy for the system~\eqref{Eq:Ex2.1} has the form
%
\begin{equation}\label{equ: total degree homotopy example}
H(x,y,t) =
t\cdot
\begin{bmatrix}
\ b_0 x^{2} - b_1\ \\
\ b_0 y^{3} - b_2\
\end{bmatrix}
+
(1-t)\cdot
\begin{bmatrix}
\ x^2 + y^2 - 1\ \\
\ x^2 - y^3 - y -1\
\end{bmatrix}
.
\end{equation}
%
For $b_0,b_1,b_2 \in \mathbb{C}{\smallsetminus}\{0\}$, the start system
$H(x,y,1) = (b_0 x^2 - b_1, b_0 y^3 - b_2)$ has six distinct complex zeros, all of which are regular,
and the zero set of $H(x,y,t)$ consists of six paths
in $\mathbb{C}^2 \times [0,1]$, each smoothly parameterized by $t$ for {\em almost all} choices of $b_0,b_1,b_2$.
The parameter $b_0$ is used to avoid cancellation of the highest degree terms for $t\in(0,1]$.\hfill$\diamond$
\end{example}
The phrase ``almost all'' in this example refers to the fact that the set of choices of $b_i$ for which the paths are singular
(they meet the branch locus $B$ of Theorem~\ref{parameter_theorem})
has measure zero.
Such measure-zero exceptional sets occur frequently in this field; one says that the desired behavior holds with \demph{probability one}.
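
For illustration, the start system in~\eqref{equ: total degree homotopy example} can be solved in closed form: its zeros are obtained from the second roots of $b_1/b_0$ and the third roots of $b_2/b_0$. The following Python sketch (our illustration; the random constants stand in for a generic choice of the $b_i$) enumerates all $6=2\cdot 3$ start zeros.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(0)
b0, b1, b2 = rng.standard_normal(3) + 1j * rng.standard_normal(3)  # generic constants

def all_roots(c, d):
    """All d-th roots of the complex number c."""
    return [c ** (1.0 / d) * np.exp(2j * np.pi * k / d) for k in range(d)]

x_start = all_roots(b1 / b0, 2)          # zeros of b0*x^2 - b1
y_start = all_roots(b2 / b0, 3)          # zeros of b0*y^3 - b2
start_zeros = list(itertools.product(x_start, y_start))

for x, y in start_zeros:
    print(abs(b0 * x**2 - b1), abs(b0 * y**3 - b2))   # residuals near machine precision
print(len(start_zeros), "start zeros")                # 6, the Bezout number
\end{verbatim}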
\hypertarget{path-tracking}{%
\subsection{Path-tracking}\label{sec:tracking}}
Path-tracking is the numerical core of homotopy continuation.
Suppose that $H(\mathbf{x},t)$ for $\mathbf{x}\in\mathbb{C}^n$ and $t\in\mathbb{C}$ is a homotopy with target system
$F(\mathbf{x})=H(\mathbf{x},0)$ and start system $G(\mathbf{x})=H(\mathbf{x},1)$.
Further suppose that for $t\in (0,1]$, $H(\mathbf{x},t)=0$ defines smooth paths $\mathbf{x}(t)\colon(0,1]\to\mathbb{C}^n$ such that each
isolated solution to $F$ is connected to at least one regular solution to $G$ through some path, as in Theorem~\ref{parameter_theorem}(2).
By the Implicit Function Theorem, each isolated solution to $G$ is the endpoint~$\mathbf{x}(1)$ of a unique path $\mathbf{x}(t)$.
Lastly, we assume that all regular solutions to $G$ are known.
Given this, the isolated solutions to $F$ may be computed as follows:
For each regular solution $\mathbf{x}(1)$ to $G$, track the path $\mathbf{x}(t)$ from $t=1$ towards $t=0$.
If it converges, then $\mathbf{x}(0)=\lim_{t\to 0} \mathbf{x}(t)$ satisfies $F(\mathbf{x}(0))= 0$, and this will find all isolated solutions to $F$.
The path $\mathbf{x}(t)$ satisfies the Davidenko differential equation, and thus we may compute values $\mathbf{x}(t)$ by solving
the initial value problem~\eqref{Davidenko}.
Consequently, we may use any numerical scheme for solving initial value problems.
On its own, this is not satisfactory for solving nonlinear polynomial systems, due to the propagation of error.
However, because the solution paths $\mathbf{x}(t)$ are defined implicitly by $H(\mathbf{x},t)=0$, there are standard methods to mitigate error propagation.
Let $E$ be a system of $n$ polynomials.
Given a point $\mathbf{z}_0$ where the Jacobian $JE$ of $E$ is invertible, we may apply the
\demph{Newton operator} \defcolor{$N_E$} to $\mathbf{z}_0$, obtaining $\mathbf{z}_1$,
\begin{equation}\label{Newton}
\mathbf{z}_1\ \vcentcolon=\ N_E(\mathbf{z}_0)\ \vcentcolon=\
\mathbf{z}_0 - \left( JE(\mathbf{z}_0)\right)^{-1} E(\mathbf{z}_0)\,.
\end{equation}
We explain this: if we approximate the graph of the function $E$ by its tangent plane at $(\mathbf{z}_0,E(\mathbf{z}_0))$, then
$\mathbf{z}_1\in\mathbb{C}^n$ is the unique zero of this linear approximation.
There exists a constant $0<c<1$ such that when $\mathbf{z}_0$ is sufficiently close to a regular zero $\mathbf{z}$ of $E$,
we have \demph{quadratic convergence} in that
\[
\|\mathbf{z}_1-\mathbf{z}\|\ \leq\ c\|\mathbf{z}_0-\mathbf{z}\|^2\,.
\]
This is because $\mathbf{z}$ is a fixed point of $N_E$ at which the derivative of $N_E$ vanishes.
The inequality follows from standard error estimates from Taylor's Theorem for $N_E$ in a neighborhood of $\mathbf{z}$.
A consequence is that when $\mathbf{z}_0$ is sufficiently close to a regular zero $\mathbf{z}$, each Newton iterate starting from $\mathbf{z}_0$
doubles the number of accurate digits.
Such a point $\mathbf{z}_0$ is an \demph{approximate zero} of $F$.
This leads to algorithms to certify numerical output as explained in Section~\ref{sec:cert}.
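
The doubling of correct digits is easy to observe in practice. The sketch below (an illustration added here) applies the Newton operator~\eqref{Newton} to the square system~\eqref{Eq:Ex2.1}, starting near the solution $(1.271+ .341\sqrt{-1}\,,\,-.500+.866\sqrt{-1})$ mentioned above.
\begin{verbatim}
import numpy as np

def F(z):
    x, y = z
    return np.array([x**2 + y**2 - 1, x**2 - y**3 - y - 1])

def JF(z):
    x, y = z
    return np.array([[2*x, 2*y], [2*x, -3*y**2 - 1]])

def newton(z):
    """One step of the Newton operator N_F(z) = z - JF(z)^{-1} F(z)."""
    return z - np.linalg.solve(JF(z), F(z))

# An exact solution: y is a root of y^2 + y + 1, and x^2 = 1 - y^2.
y_exact = (-1 + np.sqrt(-3 + 0j)) / 2
z_exact = np.array([np.sqrt(1 - y_exact**2), y_exact])

z = z_exact + 0.05 * np.array([1 + 1j, 1 - 1j])   # an approximate zero nearby
for k in range(6):
    print(k, np.linalg.norm(z - z_exact))          # error roughly squares each step
    z = newton(z)                                  # (until machine precision)
\end{verbatim}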
\demph{Predictor-corrector algorithms} for solving the initial value problem for homotopy paths
take a discretization $1=t_0>t_1>\dotsb>t_m=0$ of the interval $[0,1]$ and iteratively compute
approximations $\mathbf{x}(1)=\mathbf{x}(t_0),\mathbf{x}(t_1),\dotsc,\mathbf{x}(t_m)=\mathbf{x}(0)$ to points on the solution path $\mathbf{x}(t)$.
This requires an initial approximation $\mathbf{x}_0$ to $\mathbf{x}(t_0) = \mathbf{x}(1)$.
Then, for $k=0,\dotsc,m{-}1$, given an approximation $\mathbf{x}_k$ to $\mathbf{x}(t_k)$, a prediction $\hat{\mathbf{x}}_{k+1}$ for $\mathbf{x}(t_{k+1})$ is
computed.
This typically uses one step in an iterative method for solving the initial value problem (a local solver).
This is the \demph{predictor step}.
Next, one or more Newton iterations $N_E$ for $E(\mathbf{x})=H(\mathbf{x},t_{k+1})$ are applied to $\hat{\mathbf{x}}_{k+1}$ to obtain a new
approximation $\mathbf{x}_{k+1}$ to $\mathbf{x}(t_{k+1})$.
This is the \demph{corrector step}.
Predictor steps generally cause us to leave the proximity of the path being tracked; corrector steps bring us back.
The process repeats until $k=m{-}1$.
There are a number of efficient local solvers for solving initial value problems.
They typically use approximations to the Taylor series for the trajectory $\mathbf{x}(t)$ for $t$ near $t_k$.
For example, the Euler predictor uses the tangent line approximation,
\[
\hat{\mathbf{x}}_{k+1}\ =\ \mathbf{x}_k+\Delta t_k \Delta \mathbf{x}_k
\qquad\mbox{where}\qquad
\frac{\partial H}{\partial \mathbf{x}}(\mathbf{x}_k,t_k) \cdot \Delta \mathbf{x}_k + \frac{\partial H}{\partial t}(\mathbf{x}_k,t_k)\ =\ 0\,.
\]
Here, $\Delta t_k=t_{k+1}-t_k$ and $(\Delta \mathbf{x}_k,1)$ spans the kernel of the Jacobian $JH(\mathbf{x}_k,t_k)$.
Figure~\ref{Fig:Predictor-Corrector} illustrates an Euler prediction followed by Newton corrections.
\begin{figure}[htb]
\centering
\begin{picture}(330,166)(0,-6)
\put(0,0){\includegraphics{pictures/predictor-corrector}}
\put(155,142){predictor} \put(157,137){\vector(-1,-1){15}} \put(40,138){{\color{Plum}$\hat{\mathbf{x}}_{k+1}$}}
\put(91,104){corrector} \put(89,106.5){\vector(-1,0){21}}
\put(96,85){{\color{blue}$\mathbf{x}_{k+1}$}}\put(94,87.5){{\color{blue}\vector(-3,-1){25}}}
\put(284,109){{\color{blue}$\mathbf{x}_{k}$}}\put(285,105.5){{\color{blue}\vector(-1,-1){16}}}
\put(6,40){{\color{mymaroon}$\mathbf{x}(t_{k+1})$}} \put(42,49){{\color{mymaroon}\vector(1,1){22}}}
\put(214,45){{\color{mymaroon}$\mathbf{x}(t_{k})$}} \put(240.7,54){{\color{mymaroon}\vector(1,1){22}}}
\put(100,24){$\epsilon$-neighborhood} \put(135,34){\vector(0,1){15}}
\put(43,13){$t_{k+1}$} \put(157,-9){$\Delta t_k$} \put(270,13){$t_{k}$}
\end{picture}
\caption{Euler prediction followed by Newton corrections.
The image is adapted from \cite{BT_Intro} (we thank Sascha Timme for allowing us to use his figure).}
\label{Fig:Predictor-Corrector}
\end{figure}
It suggests a stopping criterion for Newton iterations based on a fixed tolerance $\epsilon$.
Another is to apply Newton iterations until quadratic convergence is observed.
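
Putting the pieces together, the following naive Python tracker (an illustration added here, with fixed step length, a fixed number of corrector steps, and no adaptive step control or endgame) follows the six paths of the total degree homotopy~\eqref{equ: total degree homotopy example} from $t=1$ to $t=0$; for a generic complex choice of the constants $b_i$, the endpoints are the six zeros of~\eqref{Eq:Ex2.1}.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(1)
b = rng.standard_normal(3) + 1j * rng.standard_normal(3)   # generic b0, b1, b2

def F(z):                                  # target: x^2+y^2-1 = x^2-y^3-y-1 = 0
    x, y = z
    return np.array([x**2 + y**2 - 1, x**2 - y**3 - y - 1])

def G(z):                                  # start system: b0*x^2 - b1, b0*y^3 - b2
    x, y = z
    return np.array([b[0]*x**2 - b[1], b[0]*y**3 - b[2]])

def H(z, t):                               # straight-line homotopy t*G + (1-t)*F
    return t * G(z) + (1 - t) * F(z)

def JH(z, t):                              # Jacobian of H with respect to z
    x, y = z
    JF = np.array([[2*x, 2*y], [2*x, -3*y**2 - 1]])
    JG = np.array([[2*b[0]*x, 0], [0, 3*b[0]*y**2]])
    return t * JG + (1 - t) * JF

def track(z, steps=400, corrections=3):
    """Euler predictor plus Newton corrector, from t = 1 down to t = 0."""
    ts = np.linspace(1.0, 0.0, steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        dz = np.linalg.solve(JH(z, t0), -(G(z) - F(z)))   # Davidenko: JH*dz = -dH/dt
        z = z + (t1 - t0) * dz                            # predictor step
        for _ in range(corrections):                      # corrector steps
            z = z - np.linalg.solve(JH(z, t1), H(z, t1))
    return z

roots = lambda c, d: [c**(1/d) * np.exp(2j*np.pi*k/d) for k in range(d)]
starts = [np.array(s) for s in itertools.product(roots(b[1]/b[0], 2),
                                                 roots(b[2]/b[0], 3))]
for z1 in starts:
    z0 = track(z1)
    print(np.round(z0, 6), "residual", np.linalg.norm(F(z0)))  # endpoints of six paths
\end{verbatim}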
\begin{remark}\label{R:parallel}
Since each solution path defined by $H(\mathbf{x},t) = 0$ may be tracked independently,
path-tracking is (in the words of Jan Verschelde) pleasingly parallelizable,
%
%
which is a strength of polynomial homotopy continuation.\hfill$\diamond$
\end{remark}
We have barely touched on
the state of the art in path-tracking methods.
There is a significant literature on other predictor-corrector schemes (see \cite[Sections 15--18]{BC2013} for an overview), practical
path-tracking heuristics~\cite{BertiniBook,Timme21}, and endgames for dealing with issues that arise near $t=0$,
such as divergent~\cite{Morgan:1986} or singular paths~\cite{Morgan:Sommese:Wampler:1992a}.
Indeed, the methods we describe suffice only for the most
well-conditioned paths ending at regular solutions at $t=0$.
\subsection{Squaring up}\label{SS:SquaringUp}
It is common to need to solve \demph{overdetermined systems}, which have more equations than variables.
This presents a challenge as both the total degree homotopy of Section~\ref{sec:totalDegree} and the polyhedral homotopy from the next
section enable us to find all isolated solutions to a {\it square} system $F(\mathbf{x})=0$ of polynomial equations.
Let us discuss an example, and then explain the method of squaring up, which reduces the problem of solving overdetermined systems to that of solving square
systems.
\begin{example}\label{Ex:squareUp}
Let $A,B,C$ be the following $2\times 3$ matrices,
\[
A\ :=\ \left(\begin{matrix}1&3&5\\2&4&6\end{matrix}\right)\ ,
\qquad
B\ :=\ \left(\begin{matrix}2&3&7\\2&5&-11\end{matrix}\right)\
\quad\mbox{ and }\quad
C\ :=\ \left(\begin{matrix}1&-1&1\\-2&3&-7\end{matrix}\right)\ .
\]
We ask for the matrices of the form $D(x,y):=A+Bx+Cy$ that have rank 1.
This is given by the vanishing of the three $2\times 2$ minors of $D(x,y)$.
This overdetermined system of equations in $x,y$ has three solutions:
\begin{equation}\label{ThreePoints}
(-\tfrac{4}{5}, \tfrac{3}{5})\,,\ (-0.15019, 0.16729)\,,\, (-0.95120, 2.8373)\,.
\end{equation}
We may find these using the total degree homotopy as follows.
The determinants of the first two columns and the last two columns of $D(x,y)$ give a square subsystem of the system of three minors,
and these have $4=2\cdot 2$ solutions (the B\'ezout number).
In addition to the three solutions in~\eqref{ThreePoints}, the fourth is $(-13/14, 3/14)$.
Let $f(x,y)$ be the remaining minor.
Then $f(-13/14, 3/14)=-963/98$, while $f$ evaluates to (approximately) zero at the three solutions in~\eqref{ThreePoints}.
This is a simplification of the general scheme. \hfill$\diamond$
\end{example}
Let $F$ be an overdetermined system consisting of $m$ polynomials in $n$ variables, where $m>n$.
\demph{Squaring up} $F$ replaces it by a square system $G(\mathbf{x}):=M F(\mathbf{x})$ as follows:
Let $M$ be a (randomly chosen) $n\times m$ complex matrix, so that $G(\mathbf{x})$ consists of $n$ polynomials, each of which is a linear combination of
polynomials in $F(\mathbf{x})$.
Next, find all isolated solutions to $G(\mathbf{x})=0$.
Since the solutions of $F(\mathbf{x})=0$ are among those of $G(\mathbf{x})=0$, we need only determine the zeros of $G$ which are not zeros of
$F$.
A simple way is to evaluate $F$ at each of the zeros of $G$ and discard those that do not evaluate to zero (according to some heuristic).
It is numerically more stable to apply the Newton operator for the overdetermined system $F$ \cite{DS2000} to the zeros of $G$ and retain
those which converge quadratically.
Example~\ref{Ex:squareUp} is a simplification, using a very specific matrix $M$ rather than a randomly chosen matrix.
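
The filtering step is easy to illustrate on Example~\ref{Ex:squareUp}. The Python sketch below (our illustration) evaluates all three $2\times 2$ minors of $D(x,y)$ at the four zeros of the square subsystem; the spurious zero $(-13/14,3/14)$ is detected because the remaining minor does not vanish there. The tolerance is deliberately loose because the candidate points are copied here with only a few digits.
\begin{verbatim}
import numpy as np

A = np.array([[1, 3, 5], [2, 4, 6]], dtype=float)
B = np.array([[2, 3, 7], [2, 5, -11]], dtype=float)
C = np.array([[1, -1, 1], [-2, 3, -7]], dtype=float)

def minors(x, y):
    """The three 2x2 minors of D(x,y) = A + B*x + C*y."""
    D = A + B * x + C * y
    return np.array([np.linalg.det(D[:, [0, 1]]),
                     np.linalg.det(D[:, [1, 2]]),
                     np.linalg.det(D[:, [0, 2]])])   # the last one is the minor f

# zeros of the square subsystem (minors of the first two and the last two columns)
candidates = [(-4/5, 3/5), (-0.15019, 0.16729), (-0.95120, 2.8373), (-13/14, 3/14)]

for x, y in candidates:
    residual = np.max(np.abs(minors(x, y)))
    print(f"({x:+.5f}, {y:+.5f})  residual = {residual:.2e}  keep = {residual < 1e-2}")
\end{verbatim}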
\begin{remark}
Suppose that the overdetermined system $F(\mathbf{x})$ depends on a parameter $\mathbf{p}\in\mathbb C^k$, i.e., we have
$F(\mathbf{x})=F(\mathbf{x};\mathbf{p})$, and that for a general parameter $\mathbf{p}\in\mathbb C^k$ the system of equations $F(\mathbf{x};\mathbf{p})$ has $N>0$ isolated
solutions (as in the Parameter Continuation Theorem~\ref{parameter_theorem}). Suppose further that we have already computed all the
solutions of $F(\mathbf{x};\mathbf{p}_0)=0$ for a fixed parameter $\mathbf{p}_0\in\mathbb C^k$ (either by squaring up or by using another method).
Then, we can use the Newton operator for overdetermined systems from \cite{DS2000} for homotopy continuation along the path~$F(\mathbf{x};t\mathbf{p}_0 +(1-t)\mathbf{p})$ for any other parameter $\mathbf{p}$.\hfill$\diamond$
\end{remark}
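
For completeness, here is a minimal sketch (our illustration, not code from~\cite{DS2000}) of such a Newton step for an overdetermined system, based on the Moore--Penrose pseudoinverse of the Jacobian; it refines the rounded candidate $(-0.15019, 0.16729)$ of Example~\ref{Ex:squareUp} against all three minors. A finite-difference Jacobian is used only to keep the sketch short.
\begin{verbatim}
import numpy as np

A = np.array([[1, 3, 5], [2, 4, 6]], dtype=float)
B = np.array([[2, 3, 7], [2, 5, -11]], dtype=float)
C = np.array([[1, -1, 1], [-2, 3, -7]], dtype=float)

def F(z):
    """Overdetermined system: the three 2x2 minors of D = A + B*x + C*y."""
    x, y = z
    D = A + B * x + C * y
    return np.array([np.linalg.det(D[:, c]) for c in ([0, 1], [1, 2], [0, 2])])

def JF(z, h=1e-7):
    """3x2 Jacobian of F by forward differences (adequate for this illustration)."""
    f0 = F(z)
    return np.column_stack([(F(z + h * e) - f0) / h for e in np.eye(2)])

def newton_overdetermined(z):
    """Newton step for overdetermined systems: z - JF(z)^+ F(z)."""
    return z - np.linalg.pinv(JF(z)) @ F(z)

z = np.array([-0.15019, 0.16729])       # rounded candidate zero from the example
for k in range(5):
    print(k, np.linalg.norm(F(z)))       # the residual drops rapidly
    z = newton_overdetermined(z)
print("refined zero:", z)
\end{verbatim}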
\section{Polyhedral homotopy}
\label{polyhedral-homotopy}
The total degree homotopy from \Cref{sec:totalDegree} computes all isolated zeroes of any square system of polynomial equations.
Its main flaw is that it is based on the B\'ezout bound.
Many polynomial systems arising in nature have a B\'ezout bound dramatically larger than their number of zeroes; for these,
the total degree homotopy will track many excess paths.
\begin{example}\label{Ex:biquadratic}
Consider the following problem, posed in~\cite{DHOST2016}:
Find the distance from a point~$\mathbf{x}^*\in\mathbb{R}^d$ to a hypersurface given by the vanishing of a single polynomial $f$.
A first step is to compute all critical points of the distance function $\|\mathbf{x}-\mathbf{x}^*\|$ for $f(\mathbf{x})=0$.
We formulate this using a Lagrange multiplier $\lambda$,
\[
f(\mathbf{x})\ =\ 0\ \quad\mbox{ and }\quad
\lambda(\mathbf{x}-\mathbf{x}^*)\ =\ \nabla f(\mathbf{x})\,.
\]
When $d=2$, $f=5-3x_2^2-3x_1^2+x_1^2x_2^2$, and $x^*=(0.025,0.2)$, these become
%
\begin{equation}\label{Eq:biquadratic}
5-3x_2^2-3x_1^2+x_1^2x_2^2\ =\ 0\quad\mbox{ and }\quad
\lambda\left[\begin{array}{c} x_1-0.025\\x_2-0.2\end{array}\right]
\ =\ \left[\begin{array}{c}
-6x_1+2x_1x_2^2\\-6x_2+2x_1^2x_2\end{array}\right]\ ,
\end{equation}
%
which are polynomials in $x_1,x_2,\lambda$ of degrees $4,3,3$, respectively.
The system \eqref{Eq:biquadratic} has 12 solutions.
We show the corresponding critical points below.
\[
\includegraphics[height=100pt]{pictures/Biquadratic}
\]
Note that a total degree homotopy for solving \eqref{Eq:biquadratic} follows $36>12$ homotopy paths. \hfill{$\diamond$}
\end{example}
An alternative general-purpose homotopy is the \demph{polyhedral homotopy} of Sturmfels and Huber.
This is based on \demph{Bernstein's bound}, which is at most the B\'ezout bound and at least the actual number of isolated zeros.
Bernstein's bound is often significantly smaller than the B\'ezout bound, which makes the polyhedral homotopy an efficient tool for polynomial
homotopy continuation because it produces fewer excess paths to track.
The \demph{Polyhedral Homotopy Algorithm} is summarized in Algorithm \ref{polyhedral_algo} below.
It is implemented in \texttt{PHCpack} \cite{PHCpack}, the \texttt{HOM4PS} family \cite{HOM4PSArticle,LeeLiTsai2008HOM4PS}, and
\texttt{HomotopyContinuation.jl}~\cite{HC.jl}.
To understand how it works we first develop some theory.
We begin with Bernstein's bound.
\subsection{Bernstein's bound}\label{S:Bernstein}
The polyhedral homotopy takes place on the complex torus~\defcolor{$(\mathbb{C}^\times)^n$}, where
$\defcolor{\mathbb{C}^\times}\vcentcolon=\mathbb{C}\smallsetminus\{0\}$ is the set of invertible complex numbers.
Each integer vector $\mathbf{a}\in\mathbb{Z}^n$ gives a \demph{Laurent monomial} $\defcolor{\mathbf{x}^\mathbf{a}}\vcentcolon=x_1^{a_1}\dotsb x_n^{a_n}$, which is a function on the
torus $(\mathbb{C}^\times)^n$.
A linear combination of Laurent monomials,
\[
f\ \vcentcolon=\ \sum_{\mathbf{a}\in\mathcal{A}} c_{\mathbf{a}} \mathbf{x}^\mathbf{a}\qquad\quad c_\mathbf{a}\in\mathbb{C}^\times\,,
\]
is a \demph{Laurent polynomial}.
The (finite) index set $\defcolor{\mathcal{A}}\subset\mathbb{Z}^n$ is the \demph{support} of $f$.
The convex hull of $\mathcal{A}$ is the \demph{Newton polytope} of $f$.
The polynomial $f$ in Example~\ref{Ex:biquadratic} has support the columns of the matrix
$\left(\begin{smallmatrix} 0&0&2&2\\0&2&0&2\end{smallmatrix}\right)$ and its Newton polytope is the $2\times 2$ square, $[0,2]\times[0,2]$.
Bernstein's bound concerns square systems of Laurent polynomials, and it is in terms of mixed volume~\cite[pp.~116--118]{Ewald}.
The \demph{Minkowski sum} of polytopes $P$ and $Q$ in $\mathbb{R}^n$ is
\[
\defcolor{P+Q}\ \vcentcolon=\ \{\mathbf{x} + \mathbf{y} \mid \mathbf{x}\in P\mbox{ and }\mathbf{y}\in Q\}\,.
\]
Given polytopes $P_1,\dotsc,P_n$ in $\mathbb{R}^n$ and positive scalars $t_1,\dotsc,t_n$, Minkowski proved that the volume
${\mathrm{vol}}( t_1 P_1 + \dotsb + t_n P_n)$
is a homogeneous polynomial in $t_1,\dotsc,t_n$ of degree $n$.
He defined the \demph{mixed volume}
\defcolor{${\mathrm{MV}}(P_1,\dotsc,P_n)$} to be the coefficient of $t_1\dotsb t_n$ in that polynomial.
While mixed volume is in general hard to compute, when $n=2$, we have the formula
\begin{equation}\label{polarization}
{\mathrm{MV}}(P,Q)\ =\ {\mathrm{vol}}(P+Q)-{\mathrm{vol}}(P)-{\mathrm{vol}}(Q)\,.
\end{equation}
This formula and its generalizations to $n> 2$ are the \demph{polarization identities}.
Consider \eqref{polarization} for the $2\times 2$ square and triangle below.
\begin{equation}\label{Eq:MinkowskiSum}
P\ =\ \raisebox{-15pt}{\includegraphics{pictures/square}}
\qquad
Q\ =\ \raisebox{-15pt}{\includegraphics{pictures/triangle}}
\qquad
P+Q\ =\ \raisebox{-30pt}{\includegraphics{pictures/mixedSubdivision}}
\end{equation}
The final shape is the Minkowski sum $P+Q$, and the mixed volume ${\mathrm{MV}}(P,Q)$ is the sum of the areas of the two (non-square) parallelograms,
which is 8.
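
This computation is easily reproduced. The Python sketch below (our illustration, using \texttt{scipy}) evaluates the polarization identity~\eqref{polarization} for the square $P$ and the triangle $Q$ shown in~\eqref{Eq:MinkowskiSum} (the vertex sets below are those that reappear in Example~\ref{Ex:phtpy}); the Minkowski sum is formed as the set of pairwise sums of vertices.
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

def area(points):
    """Area of the convex hull of a finite set of points in the plane."""
    return ConvexHull(np.asarray(points, dtype=float)).volume  # in 2D, .volume is the area

P = [(0, 0), (0, 2), (2, 0), (2, 2)]          # the 2x2 square
Q = [(0, 0), (1, 1), (1, 2), (2, 1)]          # the triangle

PQ = [(p[0] + q[0], p[1] + q[1]) for p in P for q in Q]   # Minkowski sum P + Q

mv = area(PQ) - area(P) - area(Q)              # polarization identity
print(area(P), area(Q), area(PQ), "MV =", mv)  # 4.0 1.5 13.5 MV = 8.0
\end{verbatim}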
We give a version of Bernstein's Theorem~\cite[Thm.~A]{Bernshtein1975Number}.
\begin{theorem}[Bernstein]\label{thm: Bernstein}
Let $F=(f_1,\dotsc,f_n)$ be a system of Laurent polynomials.
The number of isolated solutions to $F$ in $(\mathbb{C}^\times)^n$ is at most ${\mathrm{MV}}(P_1,\dotsc,P_n)$, where for each $i=1,\dotsc,n$,
$P_i$ is the Newton polytope of the polynomial $f_i$.
\end{theorem}
\begin{example}\label{Ex:more EDD}
We show the supports and Newton polytopes of the three polynomials from \Cref{Ex:biquadratic},
$f$, $\lambda(x_1-x_1^*)-\partial f/\partial x_1$, and $\lambda(x_2-x_2^*)-\partial f/\partial x_2$ (see Equation~\eqref{Eq:biquadratic}).
\[
\begin{picture}(95,90)(-7,0)
\put(0,0){\includegraphics{pictures/22}}
\put(-7,80){\small$\lambda$} \put(15,2){\small$x_1$} \put(61,62){\small$x_2$}
\end{picture}
\qquad
\begin{picture}(95,90)(-7,0)
\put(0,0){\includegraphics{pictures/lx}}
\put(-7,80){\small$\lambda$} \put(15,2){\small$x_1$} \put(61,62){\small$x_2$}
\end{picture}
\qquad
\begin{picture}(95,90)(-7,0)
\put(0,0){\includegraphics{pictures/ly}}
\put(-7,80){\small$\lambda$} \put(15,2){\small$x_1$} \put(61,62){\small$x_2$}
\end{picture}
\]
Their mixed volume is twelve, so the system~\eqref{Eq:biquadratic} achieves the Bernstein bound. \hfill$\diamond$
\end{example}
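
The mixed volume in this example can be checked numerically with the general polarization identity
${\mathrm{MV}}(P_1,\dotsc,P_n)=\sum_{\emptyset\neq S\subseteq\{1,\dotsc,n\}}(-1)^{n-|S|}\,{\mathrm{vol}}\bigl(\sum_{i\in S}P_i\bigr)$.
The Python sketch below (our illustration) applies it to the three supports read off from~\eqref{Eq:biquadratic}; it should reproduce the value of twelve quoted above.
\begin{verbatim}
from itertools import combinations
import numpy as np
from scipy.spatial import ConvexHull

def vol(points):
    """Volume of the convex hull; 0 if the points are not full-dimensional."""
    pts = np.asarray(points, dtype=float)
    if np.linalg.matrix_rank(pts - pts[0]) < pts.shape[1]:
        return 0.0
    return ConvexHull(pts).volume

def minkowski_sum(point_sets):
    pts = point_sets[0]
    for Q in point_sets[1:]:
        pts = [tuple(np.add(p, q)) for p in pts for q in Q]
    return pts

def mixed_volume(supports):
    """Coefficient of t_1...t_n in vol(t_1 P_1 + ... + t_n P_n), by inclusion-exclusion."""
    n = len(supports)
    return sum((-1) ** (n - len(S)) * vol(minkowski_sum([supports[i] for i in S]))
               for k in range(1, n + 1) for S in combinations(range(n), k))

# supports in the variables (x1, x2, lambda), read off from the Lagrange system
A1 = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (2, 2, 0)]   # f
A2 = [(1, 0, 1), (0, 0, 1), (1, 0, 0), (1, 2, 0)]   # lambda*(x1 - x1*) + 6*x1 - 2*x1*x2^2
A3 = [(0, 1, 1), (0, 0, 1), (0, 1, 0), (2, 1, 0)]   # lambda*(x2 - x2*) + 6*x2 - 2*x1^2*x2
print(mixed_volume([A1, A2, A3]))                   # the text reports the value 12
\end{verbatim}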
Bernstein proved that this bound is typically achieved in the following sense:
For each $i=1,\dotsc,n$, let \defcolor{$\mathcal{A}_i$} be the support of polynomial $f_i$ and \defcolor{$P_i$} its Newton polytope.
The set of polynomial systems $G=(g_1,\dotsc,g_n)$ where each $g_i$ has support a subset of $\mathcal{A}_i$ is a vector space \defcolor{$V$} of
dimension $|\mathcal{A}_1|+\dotsb+|\mathcal{A}_n|$ whose coordinates are given by the coefficients of the polynomials $g_i$.
Bernstein showed that there is a nonempty Zariski open subset $U\subset V$ consisting of systems $G$ with exactly ${\mathrm{MV}}(P_1,\dotsc,P_n)$ regular
zeroes.
Also, if \defcolor{$U'$} is the (larger) set of systems $G$ with exactly ${\mathrm{MV}}(P_1,\dotsc,P_n)$ solutions, counted with multiplicity, then
$U'$ is open, and Bernstein~\cite[Thm.~B]{Bernshtein1975Number} gave a criterion for when $G\in V\smallsetminus U'$.
We remark that Bernstein's bound and Bernstein's Theorem are often called the Bernstein-Kushnirenko-Khovanskii (BKK) bound and BKK Theorem
due to their joint paper~\cite{BKK} and the circle of closely related
work~\cite{Bernshtein1975Number,Kushnirenko1976Polyedres,Khovanskii1978Newton}.
\subsection{Polyhedral homotopy of Huber and Sturmfels}
In their seminal work~\cite{HuSt95}, Huber and Sturmfels developed a homotopy continuation method for solving systems of Laurent polynomials
that takes advantage of Bernstein's bound in that it tracks only ${\mathrm{MV}}(P_1,\dotsc,P_n)$ paths.
Their work also provided a new interpretation for mixed volume in terms of mixed cells
(the parallelograms in~\eqref{Eq:MinkowskiSum}) and a new proof of Bernstein's Theorem~\cite[Thm.~A]{Bernshtein1975Number}.
We sketch its main ideas.
This is also described in Sturmfels' award-winning Monthly article~\cite{StMon}.
Suppose that $F=(f_1,\dotsc,f_n)$ is a system of Laurent polynomials and that $\mathcal{A}_i$ is the support of $f_i$ for $i=1,\dotsc,n$, so that
\begin{equation}\label{Eq:target}
f_i\ =\ \sum_{\mathbf{a}\in\mathcal{A}_i} c_{i,\mathbf{a}} \mathbf{x}^\mathbf{a}\,,\quad\mbox{with }c_{i,\mathbf{a}}\in \mathbb{C}^\times\, .
\end{equation}
For now, assume that $F$ is sufficiently generic, in that $F$ has ${\mathrm{MV}}(P_1,\dotsc,P_n)$ regular zeroes.
In the polyhedral homotopy, the continuation parameter $t$ appears in a very different way than in the total degree homotopy~\eqref{E:TDH}.
First, the start system is at $t=0$ and the target system $F$ is at $t=1$, but this is not the primary substantive difference.
The polyhedral homotopy depends upon a choice of \demph{lifting functions}, $\defcolor{\omega_i}\colon\mathcal{A}_i\to\mathbb{Z}$, for $i=1,\dotsc,n$.
That is, a choice of an integer $\omega_i(\mathbf{a})$ for each monomial $\mathbf{a}$ in $\mathcal{A}_i$.
We address this choice later.
Given lifting functions, define the homotopy $H(\mathbf{x},t)\vcentcolon=(h_1,\dotsc,h_n)$ by
\[
h_i(\mathbf{x},t)\ \vcentcolon=\
\sum_{\mathbf{a}\in\mathcal{A}_i} c_{i,\mathbf{a}}\, \mathbf{x}^{\mathbf{a}}\, t^{\omega_i(\mathbf{a})}\,.
\]
By the Implicit Function Theorem and our assumption on $F$, over $t\in(0,1]$ the system of equations $H(\mathbf{x},t)=0$ defines
${\mathrm{MV}}(P_1,\dotsc,P_n)$ smooth paths.
It is however not at all clear what happens as $t\to 0$.
For example, $H(\mathbf{x},0)$ is undefined if some $\omega_i(\mathbf{a})<0$, and if $\omega_i(\mathbf{a})>0$ for all $i$ and $\mathbf{a}$, then $H(\mathbf{x},0)$ is
identically zero.
The key idea is to use an invertible linear change of coordinates to study the homotopy paths as $t\to 0$.
This coordinate change depends upon a \demph{weight} $\defcolor{{\boldsymbol{\alpha}}}\in\mathbb{Z}^n$ and a positive integer \defcolor{$r$}.
The weight gives a \demph{cocharacter} of the torus, for $s\in\mathbb{C}^\times$, $\defcolor{s^{{\boldsymbol{\alpha}}}}\vcentcolon=(s^{\alpha_1},\dotsc,s^{\alpha_n})$.
Set
\[
\defcolor{\mathbf{y}}\ =\ \mathbf{x}\circ s^{-{\boldsymbol{\alpha}}} \ \vcentcolon=\ (x_1 s^{-\alpha_1}, \dotsc, x_n s^{-\alpha_n})\,.
\]
Then $\mathbf{x}=\mathbf{y}\circ s^{\boldsymbol{\alpha}}$, and we have $H^{({\boldsymbol{\alpha}})}(\mathbf{y},s)\vcentcolon=(h_1^{({\boldsymbol{\alpha}})},\dotsc,h_n^{({\boldsymbol{\alpha}})})$,
where for $i=1,\dotsc,n$,
\begin{equation}\label{Eq:alpha-system}
\defcolor{h_i^{({\boldsymbol{\alpha}})}(\mathbf{y},s)}\ \vcentcolon=\ s^{-\beta_i} h_i(\mathbf{y}\circ s^{{\boldsymbol{\alpha}}}, s^r)\ =\
\sum_{\mathbf{a}\in\mathcal{A}_i} c_{i,\mathbf{a}}\, \mathbf{y}^{\mathbf{a}}\, s^{\langle {\boldsymbol{\alpha}}, \mathbf{a}\rangle + r\omega_i(\mathbf{a})-\beta_i}\,,
\end{equation}
where $\defcolor{\beta_i}\vcentcolon=\min\{ \langle {\boldsymbol{\alpha}}, \mathbf{a}\rangle + r\omega_i(\mathbf{a})\mid \mathbf{a}\in\mathcal{A}_i\}$.
The purpose of $\beta_i$ is to ensure that $s$ appears in $h_i^{({\boldsymbol{\alpha}})}(\mathbf{y},s)$ with only non-negative exponents, and that
$h_i^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ is defined and not identically zero.
Specifically, if
$\defcolor{\mathcal{A}_i^{({\boldsymbol{\alpha}})}}\vcentcolon=\{\mathbf{a}\in\mathcal{A}_i\mid \langle {\boldsymbol{\alpha}}, \mathbf{a}\rangle + r\omega_i(\mathbf{a})=\beta_i\}$, then
\begin{equation}\label{Eq:alpha-facial-system}
h_i^{({\boldsymbol{\alpha}})}(\mathbf{y},0)\ =\
\sum_{\mathbf{a}\in\mathcal{A}_i^{({\boldsymbol{\alpha}})}} c_{i,\mathbf{a}} \mathbf{y}^\mathbf{a}
\qquad\mbox{for } i=1,\dotsc,n\,.
\end{equation}
The purpose of the positive integer $r$ is to keep the exponents integral.
As $r>0$, we have that for $s\in[0,1]$, $t=s^r\to 0$ if and only if $s\to 0$.
Thus the role of $r$ and $s$ is to parameterize the homotopy path.
We remark on this later.
We will see that for almost all $({\boldsymbol{\alpha}},r)$, $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ has no zeroes, but for appropriately chosen $({\boldsymbol{\alpha}},r)$,
the system~$H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ has easily computed zeroes, each defining a homotopy path to a solution of
$H^{({\boldsymbol{\alpha}})}(\mathbf{y},1)=H(\mathbf{x},1)$.
For such an $({\boldsymbol{\alpha}},r)$, $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ is a \demph{start subsystem}.
The polyhedral homotopy algorithm consists of determining those $({\boldsymbol{\alpha}},r)$, solving
the start subsystems $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$, and then
tracking the homotopy paths from $t=0$ to $t=1$.
Before discussing this in more detail, including the role of the choices of lifting functions~$\omega_i$, cocharacter~${\boldsymbol{\alpha}}$, and
positive integer $r$, let us consider an example.
\begin{example}\label{Ex:phtpy}
Let $f_1$ be the biquadratic from Example~\ref{Ex:biquadratic} and let
$f_2= 1 + 2x_1x_2 - 5x_1x_2^2 - 3x_1^2 x_2$.
Here, $\mathcal{A}_1=\left(\begin{smallmatrix} 0&0&2&2\\0&2&0&2\end{smallmatrix}\right)$ and
$\mathcal{A}_2=\left(\begin{smallmatrix} 0&1&1&2\\0&1&2&1\end{smallmatrix}\right)$.
Their Newton polygons are the square $P$ and triangle $Q$ in~\eqref{Eq:MinkowskiSum}.
Figure~\ref{F:Eight_Sols} shows their $8={\mathrm{MV}}(P,Q)$ common zeroes.
\begin{figure}[htb]
\centering
\begin{picture}(169,120)(-3.5,0)
\put(0,0){\includegraphics[height=120pt]{pictures/TwoCurves}}
\put( 37.5, 88.5){{\color{myred}\small$f_2$}} \put(108, 67){{\color{myred}\small$f_2$}}
\put(141, 9){{\color{myred}\small$f_2$}}
\put(-3.5,90){{\color{myblue}\small$f_1$}} \put(158, 90){{\color{myblue}\small$f_1$}}
\put(88,80){{\color{myblue}\small$f_1$}}
\put(-3.5,26){{\color{myblue}\small$f_1$}} \put(158,26){{\color{myblue}\small$f_1$}}
\put(22,48){\small$-2$} \put(100,48){\small$2$} \put(135,48){\small$4$}
\put(72,23){\small$-2$} \put(72,92){\small$2$}
\end{picture}
\caption{Common zeroes of $f_1$ and $f_2$.}\label{F:Eight_Sols}
\end{figure}
Define $\omega_1$ to be identically zero and set $\omega_2\left(\begin{smallmatrix}a\\b\end{smallmatrix}\right)\vcentcolon=a+b$.
Then $h_1(\mathbf{x},t)=f_1(\mathbf{x})$ and
\[
h_2(\mathbf{x},t)\ =\ 1 + 2 x_1 x_2 t^2 - 5x_1x_2^2 t^3 - 3x_1^2 x_2 t^3\,.
\]
Let ${\boldsymbol{\alpha}}=(0,-3)$ and $r=2$, so that $x_1=y_1$ and $x_2=y_2 s^{-3}$.
Then we may check that $\beta_1=-6$ and $\beta_2=\min\{0-0,4-3,6-6,6-3\}=0$, so that
\begin{eqnarray*}
h_1^{({\boldsymbol{\alpha}})}(\mathbf{y},s) &=& \underline{- 3 y_2^2 + y_1^2y_2^2}\ +\ s^6(5 - 3y_1^2)\,,\\
h_2^{({\boldsymbol{\alpha}})}(\mathbf{y},s) &=& \underline{1 - 5 y_1y_2^2}\ +\ s(2y_1y_2 - 3y_1^2 y_2 s^2)\,.
\end{eqnarray*}
The underlined terms are the binomials $h_1^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ and $h_2^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$.
They have four solutions,
\[
( \sqrt{3}, 75^{-\frac14})\,,\
( \sqrt{3},-75^{-\frac14})\,,\
(-\sqrt{3}, \sqrt{-1}\,75^{-\frac14})\,,\
(-\sqrt{3},-\sqrt{-1}\,75^{-\frac14})\,.
\]
These four solutions lead to four homotopy paths, which is fewer than the $8={\mathrm{MV}}(P,Q)$ paths defined by $H(\mathbf{x},t)=0$ over
$t\in(0,1]$.
If we let ${\boldsymbol{\gamma}}=(-3,0)$ and $r=2$, then
\begin{eqnarray*}
h_1^{({\boldsymbol{\gamma}})}(\mathbf{y},s) &=& - 3 y_1^2 + y_1^2y_2^2\ +\ s^6(5 - 3y_2^2)\,,\\
h_2^{({\boldsymbol{\gamma}})}(\mathbf{y},s) &=& 1 - 3y_1^2 y_2 \ +\ s(2y_1y_2 - 5 y_1y_2^2 s^2)\,,
\end{eqnarray*}
so that $h_1^{({\boldsymbol{\gamma}})}(\mathbf{y},0)$ and $h_2^{({\boldsymbol{\gamma}})}(\mathbf{y},0)$ are again binomials and they have four solutions
\[
( 27^{-\frac14}, \sqrt{3})\,,\
(-27^{-\frac14}, \sqrt{3})\,,\
( \sqrt{-1}\,27^{-\frac14},-\sqrt{3})\,,\
(-\sqrt{-1}\,27^{-\frac14},-\sqrt{3})\,.
\]
These lead to the other four homotopy paths.
The partition $4+4=8$ is seen in the decomposition of $P+Q$ in~\eqref{Eq:MinkowskiSum}.
The weight ${\boldsymbol{\alpha}}$ corresponds to the parallelogram on the upper left, which is the Minkowski sum of the supports of
the components $h_1^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ and $h_2^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ of the start subsystem, and the weight ${\boldsymbol{\gamma}}$ corresponds to
the parallelogram on the lower right.
The only weights and positive integers for which the start subsystem has solutions are positive multiples of $({\boldsymbol{\alpha}},2)$ and
$({\boldsymbol{\gamma}},2)$.\hfill$\diamond$
\end{example}
\subsection{Computation of mixed cells}
In \Cref{Ex:phtpy} only two choices of $({\boldsymbol{\alpha}}, r)$ gave start subsystems $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ with solutions.
We now address the problem of computing the pairs $({\boldsymbol{\alpha}}, r)$, such that $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ has solutions.
This leads to an algorithm that computes these pairs given the start system $F=(f_1,\ldots,f_n)$.
We will show that the pairs $({\boldsymbol{\alpha}}, r)$ which give zeros at $t=0$ correspond to certain \demph{mixed cells}
in a decomposition of the Minkowski sum $P_1+\cdots+P_n$, where $P_i$ is the Newton polytope of $f_i$.
We examine the geometric combinatorics of the lifting functions $\omega_i$, weights ${\boldsymbol{\alpha}}$, and positive integer $r$.
Let $P\subset\mathbb{R}^{n+1}$ be a polytope.
If $P$ has the same dimension as its projection to $\mathbb{R}^n$, then it and all of its faces are \demph{lower faces}.
Otherwise, replace $\mathbb{R}^{n+1}$ by the affine span of $P$ and assume that $P$ has dimension $n+1$.
A \demph{lower facet} of $P$ is a \demph{facet} $Q$ of $P$ ($\dim Q=n$) whose inward-pointing normal vector has positive last coordinate.
A \demph{lower face} of $P$ is any face lying in a lower facet.
The union of lower faces forms the \demph{lower hull} of $P$.
Let $\mathcal{A}\subset\mathbb{Z}^n$ be a finite set and $\omega\colon\mathcal{A}\to\mathbb{Z}$ be a lifting function.
The \demph{lift} of $\mathcal{A}$ is the set
\[
\defcolor{\widehat{\mathcal{A}}}\ \vcentcolon=\ \{(\mathbf{a},\omega(\mathbf{a}))\mid\mathbf{a}\in\mathcal{A}\}\subset\mathbb{Z}^{n+1}\,.
\]
Let $\defcolor{\widehat{P}}\vcentcolon={\mathrm{conv}}(\widehat{\mathcal{A}})$ be its convex hull.
Given a lower face $Q$ of $\widehat{P}$, the projection to $\mathbb{Z}^n$ of $Q\cap\widehat{\mathcal{A}}$ is a subset $\defcolor{\mathcal{C}(Q)}$ of $\mathcal{A}$
whose convex hull is the projection to $\mathbb{R}^n$ of $Q$.
If $({\boldsymbol{\alpha}},r)$ is upward-pointing ($r>0$) and $\mathbf{a}\mapsto \langle{\boldsymbol{\alpha}},\mathbf{a}\rangle + r\omega(\mathbf{a})$ achieves its minimum on $Q$, then
$\mathcal{C}(Q)= \mathcal{A}^{({\boldsymbol{\alpha}})}$.
For each $i=1,\dotsc,n$, let $\mathcal{A}_i\subset\mathbb{Z}^n$ be a finite set, $\omega_i\colon\mathcal{A}_i\to\mathbb{Z}$ be a lifting function, and set
$\widehat{P}_i\vcentcolon={\mathrm{conv}}(\widehat{\mathcal{A}}_i)$.
Let $\defcolor{\widehat{P}}\vcentcolon=\widehat{P}_1+\dotsb+\widehat{P}_n$ be their Minkowski sum.
As $\widehat{P}$ is a Minkowski sum, if $Q$ is a lower face of $\widehat{P}$, for each $i=1,\dotsc,n$ there is a lower face
$Q_i$ of $\widehat{P}_i$ with
\begin{equation}
\label{Eq:lowerSum}
Q\ =\ Q_1+\dotsb+Q_n\,.
\end{equation}
\begin{definition}\label{def_generic_lifting_function}
Lifting functions $\omega_i\colon\mathcal{A}_i\to\mathbb{Z}$ for $i=1,\dotsc,n$ are
\demph{generic} if, for each lower facet $Q$ of $\widehat{P}$ with lower faces $Q_1,\dotsc,Q_n$ as in~\eqref{Eq:lowerSum}, we have
\begin{equation}
\label{Eq:dimSum}
\dim Q\ =\ n\ =\ \dim Q_1 + \dotsb + \dim Q_n\,,
\end{equation}
and when $\dim Q_i=1$, then $\# Q_i\cap\widehat{\mathcal{A}}_i=2$ and thus $\#\mathcal{C}(Q_i)=2$.
A lower facet $Q$ for which every $Q_i$ in~\eqref{Eq:dimSum} has dimension 1 (and thus $\# Q_i\cap\widehat{\mathcal{A}}_i=2$) is a
\demph{mixed facet} and its projection to $\mathbb{R}^n$ is a \demph{mixed cell}.
Mixed facets and mixed cells are parallelepipeds (Minkowski sums of independent line segments).
\hfill$\diamond$
\end{definition}
Huber and Sturmfels show that almost all real lifting functions are generic and the density of rational numbers
implies that there exist generic integral lifting functions.
Setting $P_i\vcentcolon={\mathrm{conv}}(\mathcal{A}_i)$ for $i=1,\dotsc,n$, then the projection to $\mathbb{R}^n$ of the lower faces of $\widehat{P}$ forms a polyhedral
subdivision of the Minkowski sum $P_1+\dotsb+P_n$, called a \demph{mixed decomposition}.
This leads to a new interpretation for mixed volume.
\begin{theorem}[Huber-Sturmfels]
Suppose that $\omega_i\colon\mathcal{A}_i\to\mathbb{Z}$ for $i=1,\dotsc,n$ are generic lifting functions.
Then the mixed volume ${\mathrm{MV}}(P_1,\dotsc,P_n)$ is the sum of the volumes of the mixed cells in the
induced polyhedral decomposition of the Minkowski sum $P_1+\dotsb+P_n$.
\end{theorem}
\begin{proof}
These constructions---the lifts $\widehat{P}_i$, lower faces, and the mixed subdiv\-ision---scale multilinearly with
positive $t_1,\dotsc,t_n\in\mathbb{R}$.
For example, a lower face $Q=Q_1+\dotsb+Q_n$~\eqref{Eq:lowerSum} of $\widehat{P}_1+\dotsb+\widehat{P}_n$ corresponds to a lower face
$t_1Q_1+\dotsb+t_nQ_n$ of $t_1\widehat{P}_1+\dotsb+t_n\widehat{P}_n$.
Let $\pi\colon\mathbb{R}^{n+1}\to\mathbb{R}^n$ be the projection.
This shows
\[
{\mathrm{vol}}(t_1P_1+\cdots + t_n P_n)\ =\ \sum_{Q} {\mathrm{vol}}(\pi(t_1Q_1+\dotsb+t_nQ_n))\,,
\]
the sum over all lower facets $Q$.
By Condition~\eqref{Eq:dimSum}, $n=\dim(Q_1)+\dotsb+\dim(Q_n)$, and thus
\[
{\mathrm{vol}}(\pi(t_1Q_1+\dotsb+t_nQ_n))\ =\
t_1^{\dim(Q_1)}\dotsb t_n^{\dim(Q_n)} {\mathrm{vol}}(\pi(Q))\,.
\]
Hence the coefficient of $t_1\dotsb t_n$ in ${\mathrm{vol}}(t_1P_1+\dotsb+t_nP_n)$ is the sum of the volumes of the mixed cells.
\end{proof}
\begin{example}\label{Ex:MixedDecomposition}
Let us consider this on our running example, using the lifts from Example~\ref{Ex:phtpy}.
Figure~\ref{Fi:mixed} shows two views of the lower hull of the Minkowski sum $\widehat{P}+\widehat{Q}$,
along with the mixed decomposition.
\begin{figure}[htb]
\centering
\begin{picture}(140,120)(-12,0)
\put(0,0){\includegraphics[height=120pt]{pictures/Lift1}}
\put(-4,112){\small$\omega$} \put(-12,55){\small$x_2$} \put(111,6){\small$x_1$}
\put(30,30){\small${\boldsymbol{\alpha}}$} \put(70,16){\small${\boldsymbol{\gamma}}$}
\put(108,90){\vector(0,-1){60}} \put(111,57){\small$\pi$}
\put(25, 8){\small$P$} \put(25, 52){\small$\widehat{P}$}
\put(70,31){\small$Q$} \put(72,101){\small$\widehat{Q}$}
\end{picture}
\qquad\qquad
\begin{picture}(150,120)(-3,0)
\put(0,0){\includegraphics[height=120pt]{pictures/Lift2}}
\put(128,112){\small$\omega$} \put(0,50){\small$x_1$} \put(122,2){\small$x_2$}
\put(78,10){\small${\boldsymbol{\alpha}}$} \put(55,30){\small${\boldsymbol{\gamma}}$}
\put(135,80){\vector(0,-1){40}} \put(138,57){\small$\pi$}
\thicklines
\put(114.5,81){{\color{white}\vector(0,-1){16}}}
\put(114 ,81){{\color{white}\vector(0,-1){16}}}
\put(113.5,81){{\color{white}\vector(0,-1){16}}}
\put(15,84.5){{\color{white}\vector(1,0){27}}}
\put(15,84 ){{\color{white}\vector(1,0){27}}}
\put(15,83.5){{\color{white}\vector(1,0){27}}}
\put(15,6.5){{\color{white}\vector(4,1){27}}}
\put(15,6 ){{\color{white}\vector(4,1){27}}}
\put(15,5.5){{\color{white}\vector(4,1){27}}}
\thinlines
\put(101,32){\small$P$} \put(111, 83){\small$\widehat{P}$} \put(114,81){\vector(0,-1){15}}
\put(4,2){\small$Q$} \put(15,6){\vector(4,1){26}}
\put(4,81){\small$\widehat{Q}$} \put(15,84){\vector(1,0){26}}
\end{picture}
\caption{Two views of the lower hull of lift and mixed subdivision with mixed cells labeled by corresponding cocharacter.}
\label{Fi:mixed}
\end{figure}
Note that $\widehat{P}=P$ as the lifting function is 0 and $\widehat{Q}$ is affinely equivalent to $Q$.
There are two mixed lower facets, whose corresponding mixed cells are the parallelograms of~\eqref{Eq:MinkowskiSum}, showing them to be mixed
cells of the mixed subdivision induced by these lifts.
The dot product with $({\boldsymbol{\alpha}},2)=(0,-3,2)$ is minimized along the mixed lower facet
${\mathrm{conv}}\{(0,2,0), (2,2,0), (3,4,3), (1,4,3)\}$ with minimal value $-6$ and the dot product with $({\boldsymbol{\gamma}},2)=(-3,0,2)$ is minimized along
the mixed lower facet ${\mathrm{conv}}\{(2,0,0), (2,2,0), (4,3,3), (4,1,3)\}$ with minimal value $-6$. \hfill$\diamond$
\end{example}
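The pairs $({\boldsymbol{\alpha}},r)$ in Example~\ref{Ex:MixedDecomposition} may also be found computationally.
The following small Python sketch (our own illustration, independent of the software packages cited below) tests, for every pair of candidate edges, one from each lifted support, whether some upward-pointing linear functional is minimized exactly along both edges; this is a small linear program, here solved with \texttt{scipy}.
We normalize $r=1$, so the primitive integer vector $({\boldsymbol{\alpha}},r)$ is recovered afterwards by clearing denominators.

\begin{verbatim}
# Sketch: find the mixed cells of the running example by linear programming.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

A1 = [(0, 0), (0, 2), (2, 0), (2, 2)]               # support of f1
A2 = [(0, 0), (1, 1), (1, 2), (2, 1)]               # support of f2
lift1 = [np.array([*a, 0]) for a in A1]             # omega_1 = 0
lift2 = [np.array([*a, a[0] + a[1]]) for a in A2]   # omega_2(a,b) = a+b

def exposure(edge, points):
    """<(alpha,1),.> equal on the edge, smaller by eps on the other points."""
    p, q = edge
    A_eq, b_eq = [[p[0]-q[0], p[1]-q[1], 0.0]], [float(q[2]-p[2])]
    A_ub, b_ub = [], []
    for a in points:
        if any((a == e).all() for e in edge):
            continue
        A_ub.append([p[0]-a[0], p[1]-a[1], 1.0])     # variables (a1, a2, eps)
        b_ub.append(float(a[2]-p[2]))
    return A_eq, b_eq, A_ub, b_ub

for e1 in combinations(lift1, 2):
    for e2 in combinations(lift2, 2):
        d1, d2 = (e1[0]-e1[1])[:2], (e2[0]-e2[1])[:2]
        vol = abs(int(round(np.linalg.det(np.array([d1, d2], dtype=float)))))
        if vol == 0:                                 # parallel edges: skip
            continue
        Ae1, be1, Au1, bu1 = exposure(e1, lift1)
        Ae2, be2, Au2, bu2 = exposure(e2, lift2)
        res = linprog([0, 0, -1],                    # maximize the slack eps
                      A_ub=Au1 + Au2, b_ub=bu1 + bu2,
                      A_eq=Ae1 + Ae2, b_eq=be1 + be2,
                      bounds=[(None, None), (None, None), (0, 1)])
        if res.status == 0 and res.x[2] > 1e-7:
            print("mixed cell, volume", vol, ", alpha =", res.x[:2], ", r = 1")
\end{verbatim}

For the lifts of Example~\ref{Ex:phtpy} this reports two mixed cells of volume $4$, exposed by $(0,-1.5,1)$ and $(-1.5,0,1)$; clearing denominators recovers $({\boldsymbol{\alpha}},2)=(0,-3,2)$ and $({\boldsymbol{\gamma}},2)=(-3,0,2)$, and the two cell volumes sum to ${\mathrm{MV}}(P,Q)=8$.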
Keeping Example~\ref{Ex:MixedDecomposition} in mind, we return to our problem of studying the homotopy given by generic lifting functions
$\omega_i\colon\mathcal{A}_i\to\mathbb{Z}$ for $i=1,\dotsc,n$,
for the supports of our target system~\eqref{Eq:target}.
Let $\widehat{P}\vcentcolon=\widehat{P}_1+\dotsb+\widehat{P}_n$ be the Minkowski sum of the convex hulls $\widehat{P}_i$ of the lifted
supports $\widehat{\mathcal{A}}_i$.
A vector $({\boldsymbol{\alpha}},r)\in\mathbb{Z}^{n+1}$ with $r>0$ is \demph{upward-pointing}, and the linear function $\langle({\boldsymbol{\alpha}},r),-\rangle$ it defines
achieves its minimum on $\widehat{P}$ along a lower face $Q$---the lower face of $\widehat{P}$ \demph{exposed} by $({\boldsymbol{\alpha}},r)$.
When $Q$ has the form~\eqref{Eq:lowerSum}, then for each $i=1,\dotsc,n$, $Q_i$ is the lower face of $\widehat{P}_i$ exposed by
$({\boldsymbol{\alpha}},r)$, and the minimum value of $\langle({\boldsymbol{\alpha}},r),-\rangle$ along $Q_i$ is
\begin{equation}\label{Eq:beta}
\min \{ \langle({\boldsymbol{\alpha}},r), (\mathbf{a},\omega(\mathbf{a}))\rangle = \langle {\boldsymbol{\alpha}},\mathbf{a}\rangle + r\omega(\mathbf{a})
\mid \mathbf{a}\in\mathcal{A}_i\}\ =\ \beta_i\,,
\end{equation}
which explains the geometric significance of $({\boldsymbol{\alpha}},r)$ and of $\beta_i$.
When $Q$ is a facet, there is a unique primitive (components have no common factor) upward-pointing integer vector $({\boldsymbol{\alpha}},r)$
that exposes $Q$.
In this case, $\mathcal{A}_i^{({\boldsymbol{\alpha}})}=\pi(Q_i\cap\widehat{\mathcal{A}}_i)=\mathcal{C}(Q_i)$ is the support of $h_i^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$.
We explain the algebraic consequences.
Suppose that $H^{({\boldsymbol{\alpha}})}(\mathbf{y},s)$ is the system of polynomials $h_i^{({\boldsymbol{\alpha}})}(\mathbf{y},s)$ defined by~\eqref{Eq:alpha-system}.
Then $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ is given by the polynomials $h_i^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ of~\eqref{Eq:alpha-facial-system}.
If, for some $i$, $\#\mathcal{C}(Q_i)=1$, so that $\dim Q_i=0$, then $h_i^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ is a monomial and therefore
$H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ has no solutions in $(\mathbb{C}^\times)^n$.
Suppose that $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ has solutions in $(\mathbb{C}^\times)^n$.
Necessarily, $\dim Q_i\geq 1$ for all $i$.
By~\eqref{Eq:dimSum}, $\dim Q_i=1$ and $\#\mathcal{C}(Q_i)=2$ for all $i$, and thus $Q$ is a mixed facet.
Consequently, each $h_i^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ is a binomial and $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ is a system of independent binomials, which may be
solved by inspection.
Thus the only start subsystems $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ with solutions are those for which $({\boldsymbol{\alpha}},r)$ exposes a mixed facet
$Q$ of $\widehat{P}$.
The following proposition, whose proof is sketched in Section~\ref{S:binomial}, records the number of solutions to such a mixed system.
\begin{proposition}\label{P:binomialSystem}
The number of solutions to the system of binomials $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ is the volume of the mixed cell
$\pi(Q)={\mathrm{conv}}(\mathcal{C}(Q_1)+\dotsb+\mathcal{C}(Q_n))$.
\end{proposition}
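To illustrate Proposition~\ref{P:binomialSystem} on the running example, recall the start subsystem for ${\boldsymbol{\alpha}}$ from Example~\ref{Ex:phtpy},
\[
 h_1^{({\boldsymbol{\alpha}})}(\mathbf{y},0)\ =\ y_2^2\,(y_1^2-3)\qquad\mbox{and}\qquad
 h_2^{({\boldsymbol{\alpha}})}(\mathbf{y},0)\ =\ 1-5\,y_1y_2^2\,.
\]
The exponent differences of the two binomials are $(2,0)$ and $(1,2)$, so the mixed cell $\pi(Q)$ is a parallelogram of area
$\bigl|\det\left(\begin{smallmatrix}2&0\\1&2\end{smallmatrix}\right)\bigr|=4$, matching the four start solutions found in that example.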
\subsection{The Polyhedral Homotopy Algorithm}
We sketch this algorithm and give a brief argument for its correctness.
\begin{algorithm}
\caption{The Polyhedral Homotopy Algorithm\label{polyhedral_algo}}
\SetAlgoLined
\KwIn{A system $F=(f_1,\dotsc,f_n)$ of $n$ polynomials in $n$ variables, where $f_i$ has support $\mathcal{A}_i$ and Newton polytope $P_i$.
The system $F$ is assumed general and has ${\mathrm{MV}}(P_1,\dotsc,P_n)$ regular solutions.}
\KwOut{All complex zeros of $F$.}
Compute generic lifting functions $\omega_i\colon\mathcal{A}_i\to\mathbb{Z}$ (see Definition \ref{def_generic_lifting_function}). They define a notion of
mixed cell in the Minkowski sum $P = P_1+\cdots+P_n$;
\For{each mixed cell $Q$ of $P$}{
Compute the pair $({\boldsymbol{\alpha}},r)$ given as the primitive upward pointing normal of the mixed facet of $\widehat P$ that corresponds to $Q$.
Solve the start subsystem $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ and then use homotopy
continuation to track those solutions along the homotopy $H^{({\boldsymbol{\alpha}})}(\mathbf{y},s)$ from $s=0$ to $s=1$, giving solutions to
$H^{({\boldsymbol{\alpha}})}(\mathbf{y},1)$.
}
The solutions computed in (2) to $H^{({\boldsymbol{\alpha}})}(\mathbf{y},1)=H(\mathbf{x},1)=F(\mathbf{x})$ for all mixed cells are all the
solutions to $F(\mathbf{x})$.
\end{algorithm}
\begin{proof}[Sketch of Proof of Correctness]
The system of equations $H(\mathbf{x},t)=0$ defines an algebraic curve $C$ in $(\mathbb{C}^\times)^n\times\mathbb{C}^\times_t$ whose projection onto $\mathbb{C}^\times_t$
has degree equal to~$\defcolor{{\mathrm{MV}}}\vcentcolon={\mathrm{MV}}(P_1,\dotsc,P_n)$ with the fiber over $t=1$ having ${\mathrm{MV}}$ points.
This curve has ${\mathrm{MV}}$ branches near $t=0$, each of which is a point $\mathbf{z}(t)$ in $\mathbb{C}\{t\}^n$.
Here, $\mathbb{C}\{t\}$ is the field of Puiseux series, which contains the algebraic closure of the field $\mathbb{C}(t)$ of rational functions in
$t$~\cite[Sect.\ 2.5.3]{Shafarevich}.
Elements of $\mathbb{C}\{t\}$ may be represented by fractional power series of the form
\[
\sum_{m\geq N} b_m t^{m/r}\,,
\]
where $m,N,r\in\mathbb{Z}$ with $r>0$, and $b_m\in\mathbb{C}$.
Observe that the exponents of $t$ are bounded below and that they have a common denominator.
Fix a branch $\mathbf{z}(t)$ of $C$ and let $r$ be the least common denominator of all exponents of coordinates of $\mathbf{z}(t)$.
Consider the lowest order terms of the coordinates in $\mathbf{z}(t)$,
\[
( c_1 t^{\alpha_1/r},\dotsc, c_n t^{\alpha_n/r})\,,
\]
where $\alpha_i\in\mathbb{Z}$ and $r\in\mathbb{N}$.
Set ${\boldsymbol{\alpha}}\vcentcolon=(\alpha_1,\dotsc,\alpha_n)$.
The substitution $t=s^r$ clears the denominators, converting $\mathbf{z}(t)$ to a vector $\mathbf{z}(s^r)$ of Laurent series in $s$.
The coordinate change $\mathbf{z}(s^r)\circ s^{-{\boldsymbol{\alpha}}}$ converts these Laurent series to ordinary power series whose constant terms are
$\mathbf{c}\vcentcolon=(c_1,\dotsc,c_n)$.
Finally, $\mathbf{c}$ is a solution to the start subsystem $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$.
The point is that for each branch $\mathbf{z}(t)$ of $C$ near $t=0$, there is a weight ${\boldsymbol{\alpha}}$ and positive integer $r$
such that the vector $\mathbf{c}$ of lowest order coefficients of $\mathbf{z}(t)$ is a solution to $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$.
The discussion preceding the statement of the Polyhedral Homotopy Algorithm shows that $({\boldsymbol{\alpha}},r)$ exposes a mixed lower facet $Q$ of
$\widehat{P}$, and that $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ has ${\mathrm{vol}}(\pi(Q))$ solutions.
Furthermore, each solution $\mathbf{c}$ to $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ may be developed into a power series solution $\mathbf{y}(s)$ to
$H^{({\boldsymbol{\alpha}})}(\mathbf{y},s)$.
Reversing the coordinate changes and reparameterization, this gives a solution $\mathbf{z}(t)$ to $H(\mathbf{x},t)=0$ in $(\mathbb{C}\{t\})^n$ and thus a
branch of the curve $C$ near $t=0$.
Thus the homotopy paths for $H(\mathbf{x},t)$
correspond to the ${\mathrm{MV}}$ distinct branches $\mathbf{z}(t)$ of $C$ near $t=0$
and the solutions computed in (3) give all ${\mathrm{MV}}$ solutions to $F(\mathbf{x})$.
\end{proof}
\begin{remark}\label{R:generalPHA}
The assumption that $F(\mathbf{x})$ is general in the Polyhedral Homotopy Algorithm ensures that $F(\mathbf{x})$ has ${\mathrm{MV}}$ regular solutions
{\sl and} that $H(\mathbf{x},t)|_{t\in(0,1]}$ consists of ${\mathrm{MV}}$ smooth arcs.
Thus, to solve a given system $F(\mathbf{x})=(f_1,\dotsc,f_n)$ where $f_i$ has support $\mathcal{A}_i$, one first
generates a general system $G=(g_1,\dotsc,g_n)$ where $g_i$ has support $\mathcal{A}_i$.
In practice, this is done by choosing random complex numbers as coefficients, and then with probability one, $G(\mathbf{x})$ is general and
satisfies the hypotheses of the Polyhedral Homotopy Algorithm.
The Polyhedral Homotopy Algorithm is used to solve $G(\mathbf{x})=0$, and then a parameter homotopy with start system $G$ and target system $F$
is used to compute the solutions to $F(\mathbf{x})=0$.\hfill$\diamond$
\end{remark}
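For instance, for the supports of the running example, generating such a general system $G$ amounts to a few lines of code; the following is a small sketch of ours, purely illustrative and not taken from the cited implementations.

\begin{verbatim}
# Random complex coefficients on prescribed supports give a general system G.
import numpy as np
rng = np.random.default_rng()

A1 = [(0, 0), (0, 2), (2, 0), (2, 2)]
A2 = [(0, 0), (1, 1), (1, 2), (2, 1)]

def random_poly(support):
    c = rng.normal(size=len(support)) + 1j * rng.normal(size=len(support))
    return lambda x1, x2: sum(ci * x1**a * x2**b
                              for ci, (a, b) in zip(c, support))

g1, g2 = random_poly(A1), random_poly(A2)   # general with probability one
\end{verbatim}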
\subsection{Solving binomial systems}\label{S:binomial}
To complete the discussion, we take a brief look at the step of Algorithm \ref{polyhedral_algo} in which the start subsystems are solved.
By construction, these start subsystems $H^{({\boldsymbol{\alpha}})}(\mathbf{y},0)$ are binomial systems.
We explain how to solve such a system.
Suppose that $H(\mathbf{y})$ is a system of binomials
\[
H(\mathbf{y}) = \begin{bmatrix}
p_1\mathbf{y}^{\mathbf{u}^{(1)}}-q_1\mathbf{y}^{\mathbf{v}^{(1)}}\ &\
\dotsc\ &\
p_n\mathbf{y}^{\mathbf{u}^{(n)}}-q_n\mathbf{y}^{\mathbf{v}^{(n)}}\ \end{bmatrix}
\]
where each $p_i,q_i\neq 0$ and $\mathbf{u}^{(1)}-\mathbf{v}^{(1)},\dotsc,\mathbf{u}^{(n)}-\mathbf{v}^{(n)}$ are linearly independent.
This is equivalent to the assertion that the Minkowski sum of the supports of the binomials is a parallelepiped $\pi(Q)$ of dimension $n$.
Then for $\mathbf{y}\in(\mathbb{C}^\times)^n$, $H(\mathbf{y})=0$ becomes
\begin{equation}\label{Eq:binomial}
\mathbf{y}^{\mathbf{u}^{(i)}-\mathbf{v}^{(i)}}\ =\ q_i/p_i\qquad\mbox{ for }i=1,\dotsc,n\,.
\end{equation}
Let \defcolor{$A$} be the $n\times n$ matrix with rows $\mathbf{u}^{(1)}{-}\mathbf{v}^{(1)},\dotsc,\mathbf{u}^{(n)}{-}\mathbf{v}^{(n)}$.
Then $|\det A| = {\mathrm{vol}}(\pi(Q))$.
The \demph{Smith normal form} of $A$ consists of unimodular integer matrices~$X,Y$ (integer matrices with determinant $\pm 1$) and a diagonal matrix
$D=\mbox{diag}(d_1,\dotsc,d_n)$ such that $XAY=D$, and thus $|\det A|=\det D = d_1\cdot d_2\dotsb d_n$.
The unimodular matrices $X$ and $Y$ give coordinate changes on $(\mathbb{C}^\times)^n$ which convert the system~\eqref{Eq:binomial} into a diagonal
system of the form
\[
x_i^{d_i}\ =\ b_i\qquad\mbox{ for }i=1,\dotsc,n\,.
\]
All $d_1\dotsb d_n$ solutions may be found by inspection, and then the coordinate changes may be reversed to obtain all solutions to the
original system $H(\mathbf{y})$.
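For instance, the start subsystem for ${\boldsymbol{\alpha}}$ in Example~\ref{Ex:phtpy} has the triangular exponent matrix $\left(\begin{smallmatrix}2&0\\1&2\end{smallmatrix}\right)$, so no coordinate change is needed and back-substitution suffices.
The following Python lines (our own sketch) compute its four solutions numerically.

\begin{verbatim}
# Solve y1^2 = 3 and y1*y2^2 = 1/5 by back-substitution, taking all roots.
import numpy as np

def all_roots(value, d):
    """All complex d-th roots of a nonzero complex number."""
    r, theta = abs(value), np.angle(value)
    return [r**(1/d) * np.exp(1j*(theta + 2*np.pi*k)/d) for k in range(d)]

solutions = [(y1, y2)
             for y1 in all_roots(3, 2)              # y1^2 = 3
             for y2 in all_roots(1/(5*y1), 2)]      # y2^2 = 1/(5*y1)
for s in solutions:                                 # 4 = vol(pi(Q)) solutions
    print(np.round(s, 5))
\end{verbatim}

The output agrees with the four start solutions listed in Example~\ref{Ex:phtpy}.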
\section{Numerical Algebraic Geometry}\label{sec:nag}
We have described methods to compute all isolated solutions to a system of polynomial equations.
\demph{Numerical algebraic geometry} uses this ability to compute zero-dimensional algebraic varieties to represent and manipulate
higher-dimensional algebraic varieties on a computer.
This is an essential component of numerical nonlinear algebra.
Besides expanding the reach of numerical methods, the geometric ideas behind numerical algebraic geometry have led to new
methods for solving systems of polynomial equations, including regeneration and monodromy.
While the term was coined in~\cite{Sommese96numericalalgebraic}, the fundamental ideas were developed in a series of papers
including~\cite{SOMMESE2000572,OriginalTraceTest}, and a more thorough treatment is
in~\cite[Part III]{Sommese:Wampler:2005}.
\begin{example}\label{Ex:Reducible}
Consider the following square system of polynomials in the variables $x,y,z$:
\begin{equation}\label{ex:nag}
F(x,y,z)\ =\ \begin{bmatrix} f(x,y,z)g(x,y,z)(x-4)(x-6)\\ f(x,y,z)g(x,y,z)(y-3)(y-5)\\ f(x,y,z)(z-2)(z-5) \end{bmatrix},
\end{equation}
where
\[
f(x,y,z)\ =\ \tfrac{1}{40}(2xy-x^2) - z -1\qquad \text{and}\qquad
g(x,y,z)\ =\ x^4 - 4x^2 - y - 1\,.
\]
Figure~\ref{F:NID} shows the real part of the variety $V$ of $F(x,y,z)=0$, consisting of a quadric (degree~2) surface, two quartic (degree 4) curves
(at $z=2$ and $z=5$, respectively), and eight points.
The surface is in blue, the two curves in red, and the eight points in green.\hfill$\diamond$
\begin{figure}[htb]
\centering
\begin{picture}(237,165)(-10,-13)
\put(0,0){\includegraphics[width = 200pt]{pictures/new_NID}}
\put( 5,116){\small$6$}
\put( 5, 94){\small$3$}
\put( 5, 72){\small$0$}
\put( -3, 50){\small$-3$}
\put( -3, 28){\small$-6$}
\put(-10, 62){\small$z$}
\put(16, 1){\small$-9$} \put( 37.5, -0.4){\small$-6$} \put( 59, -1.8){\small$-3$}
\put(90,-3.2){\small$0$} \put(111.5,-4.6){\small$3$} \put(133,-6){\small$6$}
\put(75,-13){\small$x$}
\put(149, -4){\small$-10$} \put(164, 7){\small$-5$} \put(181, 17){\small$0$} \put(190, 24){\small$5$}
\put(190,3){\small$y$}
\end{picture}
\caption{A reducible variety, defined implicitly by (\ref{ex:nag}).}
\label{F:NID}
\end{figure}
\end{example}
Given any system $F(\mathbf{x})$ defining a reducible variety $V$, implemented symbolic algorithms (primary decomposition and computing
radicals) will decompose the variety $V$ as follows.
These methods will compute a list $I_1,\dotsc,I_r$, where each $I_i$ is the ideal of an irreducible component $V_i$
of $V$.
Each ideal $I_i$ is represented by a Gr\"obner basis, which is a finite set of generators, and thus serves as a data structure encoding information about
$V_i$.
For example, the dimension and degree of a component $V_i$ may be computed from the data $I_i$.
In numerical algebraic geometry, the data structure to represent a positive-dimensional component of a variety is a witness set.
Suppose that \defcolor{$F(\mathbf{x})$} is a system of polynomials, and let \defcolor{$V$} be an irreducible component of the variety defined by
$F(\mathbf{x})=0$.
A \demph{witness set} \defcolor{$W$} for the component~$V$ is a triple $W=(F, L, L \cap V)$, where $L$ is a general linear subspace complementary to
$V$ in that ${\mathrm{codim}}(L)=\dim(V)$ and $L \cap V$ consists of numerical approximations of the points in the intersection of $L$ and $V$.
Generality (see Section \ref{ssec:witness}) ensures that the \demph{linear slice} $L\cap V$ is transverse and consists of $\deg(V)$ points.
In practice, $L$ is represented by $\dim(V)$ randomly chosen polynomials of degree one.
The simple algorithm of \demph{membership testing} illustrates the utility of this data structure.
Given a witness set $W=(F, L, L \cap V)$ for an irreducible variety $V\subset\mathbb{C}^n$ and a point $\mathbf{x}_0\in\mathbb{C}^n$, we would
like to determine if $\mathbf{x}_0\in V$.
That $F(\mathbf{x}_0)$ is (numerically) zero only implies that $\mathbf{x}_0$ lies near the variety defined by $F$, not that it lies near the irreducible component $V$.
We instead choose a general linear subspace $L'$ with the same codimension as $L$, but for which $\mathbf{x}_0\in L'$ (that is, $L'$ is otherwise
general, given that $\mathbf{x}_0\in L'$).
Next, form the \demph{linear slice homotopy},
\begin{equation}\label{Eq:moveWS}
\defcolor{H(\mathbf{x},t)}\ :=\ (F(\mathbf{x}),\, tL(\mathbf{x}) + (1-t) L'(\mathbf{x}))\,,
\end{equation}
and use it to track the points of $L \cap V$ from $t=1$ to
$t=0$, obtaining the points of $L'\cap V$.
As the intersection of $V$ with the other components of the variety of $F$ has lower dimension than $V$, and its complement in $V$
is path-connected, $\mathbf{x}_0$ lies in $L'\cap V$ if and only if $\mathbf{x}_0\in V$.
The core of this membership test reveals another algorithm involving witness sets.
Given a witness set $W=(F, L, L \cap V)$ and a general linear subspace $L'$ with the same codimension as $L$,
the step of following the points of $L\cap V$ along the homotopy~\eqref{Eq:moveWS} to obtain the points $L'\cap V$ is called
\demph{moving a witness set}.
This is because $W'=(F, L', L'\cap V)$ is a new witness set for $V$.
This may also be considered to be an algorithm for sampling points of $V$.
The rest of this section discusses algorithms for computing a witness set and the corresponding numerical irreducible decomposition of a
variety $V$.
It concludes with a summary of regeneration and monodromy, two new methods for solving systems of polynomials.
\begin{remark}
The set of points in the linear slice $L\cap V$ is considered a concrete version of Andr\'e Weil's generic
points of a variety~\cite{Weil_FAG}.
We call it a \demph{witness point set}.
A witness point set is related to Chow groups from intersection theory~\cite{Fulton}.
Indeed, a witness set for an irreducible variety $V$ may be interpreted as a specific way to represent the class of $V$ in the Chow
ring of $\mathbb{P}^n$.
In \cite{Sottile_Witness_Sets_2020} this point of view was used to extend witness sets to represent subvarieties of varieties other than
$\mathbb{P}^n$.
\end{remark}
\subsection{More on linear slices}\label{ssec:witness}
An irreducible algebraic subvariety $V$ of affine or projective space has two fundamental invariants---its \demph{dimension},
\defcolor{$\dim(V)$}, and its \demph{degree}, \defcolor{$\deg(V)$}.
The dimension of $V$ is the dimension of its dense subset of smooth points, regarded as a complex manifold.
Equivalently, this is the dimension of its tangent space at any smooth point.
By Bertini's theorem~\cite[Thm.\ 2,\ \S6.2]{Shafarevich}, there is a dense Zariski-open subset of (affine) linear spaces $L$
of codimension $\dim(V)$ such that the linear slice $L\cap V$ is transverse.
Here, a codimension $d$ linear subspace is defined by $d$ independent degree one polynomials.
The degree of $V$ is the maximal number of points in such an intersection.
By Bertini's theorem again, this maximum is achieved by linear spaces lying in a Zariski-open subset of linear subspaces.
In practice, $L$ is represented by $\dim(V)$
random degree one polynomials (their coefficients are chosen randomly).
By the nature of Zariski open sets, for most reasonable probability distributions on these coefficients, a suitably
general $L$ will be found with probability one.
When the variety $V$ defined by the vanishing of $F(\mathbf{x})$ is reducible and the maximum dimension of an irreducible component is $d$, then a
randomly-chosen linear subspace $L$ of codimension $d$ will meet each irreducible component $V'$ of $V$ of dimension $d$ in $\deg(V')$
points $L\cap V'$, and it will not intersect any components of $V$ of dimension less than $d$.
If $V'$ is the unique component of dimension $d$, then $(F, L, L\cap V')$ is a witness set for $V'$.
\begin{example}\label{example_witness}
We continue Example~\ref{Ex:Reducible}.
To compute the linear slice $L\cap V$ with the line $L$ parameterized by $(t,-t-2,-3+t/4)$,
we add to $F$ two degree one polynomials $x+y+2$ and $z+3-x/4$. The augmented system defines the intersection $L\cap V$.
It has two solutions $(10/3, -16/3, -13/6)$ and $(-8, 6, -5)$.
The line $L$ is sufficiently general so that it only meets the
two-dimensional surface defined by $f(x,y,z)$, and neither of the
curves nor any isolated points.
Figure~\ref{F:Surface_section} shows this configuration. \hfill$\diamond$
\begin{figure}[htb]
\centering \begin{picture}(237,165)(-10,-15)
\put(0,0){\includegraphics[width = 200pt]{pictures/NID_surface_slice}}
\put( 5,116){\small$6$}
\put( 5, 94){\small$3$}
\put( 5, 72){\small$0$}
\put( -3, 50){\small$-3$}
\put( -3, 28){\small$-6$}
\put(-10, 62){\small$z$}
\put(16, 1){\small$-9$} \put( 37.5, -0.4){\small$-6$} \put( 59, -1.8){\small$-3$}
\put(90,-3.2){\small$0$} \put(111.5,-4.6){\small$3$} \put(133,-6){\small$6$}
\put(75,-13){\small$x$}
\put(149, -4){\small$-10$} \put(164, 7){\small$-5$} \put(181, 17){\small$0$} \put(190, 24){\small$5$}
\put(190,3){\small$y$}
\put(190,62){\small$L$}
\end{picture}
\caption{Slice of $V$ by a line $L$.}
\label{F:Surface_section}
\end{figure}
\end{example}
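The two witness points are easily checked numerically; the following short Python sketch (ours, using only \texttt{numpy}) substitutes the parameterization of $L$ into $f$ and solves the resulting univariate quadratic.

\begin{verbatim}
# Verify the two points of the linear slice in Example "example_witness".
import numpy as np

point_on_L = lambda t: (t, -t - 2, -3 + t / 4)       # parameterization of L
f = lambda x, y, z: (2*x*y - x**2) / 40 - z - 1      # the quadric surface

# f restricted to L expands (by hand) to -(3/40) t^2 - (7/20) t + 2
for t in np.roots([-3/40, -7/20, 2]):
    p = point_on_L(t)
    print(p, "residual:", f(*p))   # (10/3,-16/3,-13/6) and (-8,6,-5), ~0
\end{verbatim}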
While finding a witness point set for the top-dimensional component of $V$ in Example~\ref{example_witness} was straightforward, finding
witness point sets for the other components is not as simple.
To find points on curves, we intersect $V$ with the vertical plane \defcolor{$P$}
defined by $x+y+2=0$, finding eight isolated solutions.
These come from the two curves of degree four, each contributing four points.
This number eight does not yet tell us that there are two curves; there may be a single curve of degree eight or some other
configuration.
Furthermore, the plane intersects the surface in a curve \defcolor{$C$}, and we may have found additional non-isolated points
on~$C$.
This is displayed in Figure \ref{F:Surface_section_2}.
\begin{figure}[htb]
\centering \begin{picture}(237,165)(-10,-13)
\put(0,0){\includegraphics[width = 200pt]{pictures/NID_curve_slice}}
\put( 5,116){\small$6$}
\put( 5, 94){\small$3$}
\put( 5, 72){\small$0$}
\put( -3, 50){\small$-3$}
\put( -3, 28){\small$-6$}
\put(-10, 62){\small$z$}
\put(16, 1){\small$-9$} \put( 37.5, -0.4){\small$-6$} \put( 59, -1.8){\small$-3$}
\put(90,-3.2){\small$0$} \put(111.5,-4.6){\small$3$} \put(133,-6){\small$6$}
\put(75,-13){\small$x$}
\put(149, -4){\small$-10$} \put(164, 7){\small$-5$} \put(181, 17){\small$0$} \put(190, 24){\small$5$}
\put(190,3){\small$y$}
\put(73,117){\small$P$}
\thicklines
\put(180,64){{\color{white}\vector(-4,-1){40}}}
\put(180,64.5){{\color{white}\line(-4,-1){38}}}
\put(180,63.5){{\color{white}\line(-4,-1){38}}}
\thinlines
\put(183,61){\small$C$}\put(180,64){\vector(-4,-1){39}}
\end{picture} \caption{The plane $P$ intersects $V$ in eight isolated points and a curve $C$.}
\label{F:Surface_section_2}
\end{figure}
Methods to remove points on higher-dimensional components and to determine which points lie on which components of the same
dimension are described in the next subsection.
\subsection{Numerical irreducible decomposition}\label{ssec:NID}
A system $F(\mathbf{x})$ of polynomials in $n$ variables defines the algebraic variety
$\defcolor{V}\vcentcolon=\{\mathbf{x}\in\mathbb C^n\mid F(\mathbf{x})=0\}$.
Were $V$ irreducible, a witness set would be an acceptable representation for $V$.
An analog when $V$ is reducible is a \demph{numerical irreducible decomposition} of $V$.
This data structure for representing $V$ consists of a collection of witness sets $(F, L', L'\cap V')$, one for each
irreducible component $V'$ of $V$.
We present a numerical irreducible decomposition for our running example:
\begin{gather*}
\left(F, [x{+}y{+}2, z{+}3{-}\tfrac{x}{4}], \{(\tfrac{10}{3}, -\tfrac{16}{3}, -\tfrac{13}{6}), (-8, 6, -5)\}\right)
\\
\left(F, [x{+}y{+}2], \{
(-2.06 , 0.06,2), (-0.40 , -1.60 , 2), (0.69 , -2.69 , 2), (1.76 , -3.76 , 2)\}\right)\,,
\\
\left(F, [x{+}y{+}2], \{
(-2.06 , 0.06,5), (-0.40 , -1.60 , 5), (0.69 , -2.69 , 5), (1.76 , -3.76 , 5)\}\right)\,,
\\
\left(F,[],\{(4,3,2)\}\right)\,,\ \left(F,[],\{(4,3,5)\}\right)\,,\ \left(F,[],\{(4,5,2)\}\right)\,,\ \left(F,[],\{(4,5,5)\}\right)\,,
\\
\left(F,[],\{(6,3,2)\}\right)\,,\ \left(F,[],\{(6,3,5)\}\right)\,,\ \left(F,[],\{(6,5,2)\}\right)\,,\ \left(F,[],\{(6,5,5)\}\right)\,.
\end{gather*}
We later present the \demph{Cascade Algorithm}~\ref{NID_algo} to compute a numerical irreducible
decomposition.
We first explain its constituents.
\subsubsection{Witness point supersets}\label{SSS:WPS}
A starting point is to compute, for each $i$, a set of points $U_i$ in a linear slice $L\cap V$ of $V$ with a codimension $i$ linear space $L$,
where $U_i$ contains all witness point sets $L\cap V'$ for $V'$ an irreducible component of dimension $i$.
For this, let $\ell_1,\dotsc,\ell_{n-1}$ be randomly chosen (and hence independent) degree one polynomials.
For each $i$, let \defcolor{$L^i$} be defined by $\ell_1,\dotsc,\ell_i$, and let \defcolor{$F_i$} be a subsystem of $F$ consisting of $n-i$
randomly chosen linear combinations of elements of $F$.
Then $(F_i,\ell_1,\dotsc,\ell_i)$ is a square subsystem of $(F,L^i)$, and we may use it to compute points \defcolor{$U_i$}
that lie in $L^i\cap V$, as explained in Section~\ref{SS:SquaringUp}.
By the generality of $\ell_1,\dotsc,\ell_{n-1}$, there will be no solutions to $(F_i,\ell_1,\dotsc,\ell_i)$ when $i$ exceeds the dimension
$d$ of $V$.
(This is another application of Bertini's theorem.)
By the same generality, the set $U_i$ contains witness point sets for each irreducible component of $V$ of dimension $i$, and perhaps
some points on irreducible components of $V$ of larger dimension.
The next two sections describe how to remove points of $U_i$ that lie on components of $V$ of dimension exceeding $i$, and then how to
decompose such an equidimensional slice into witness point sets for the irreducible components of $V$ of dimension $i$.
\subsubsection{Removing points on higher-dimensional components}\label{NID_step1}
Suppose that we have computed $U_0,U_1,\dotsc,U_d$, where \demph{$d$} is the dimension of $V$ as in Section~\ref{SSS:WPS}.
By the generality of $\ell_1,\dotsc,\ell_d$, $U_d$ is equal to the linear slice $L^d\cap V$, and thus is the union
of witness point sets for the irreducible components of $V$ of dimension $d$.
For each $i=0,\dotsc,d$, let \defcolor{$V_i$} be the union of all irreducible components of $V$ of dimension $i$.
Then $\defcolor{W_i}\vcentcolon= L^i\cap V_i\subset U_i$ consists of points in $U_i$ lying on some component of $V$ of
dimension $i$.
This union of the witness point sets of $i$-dimensional components of $V$ is an \demph{equidimensional slice}.
Also, note that $W_d=U_d$.
Since points of $U_i\smallsetminus W_i$ lie on $V_{i+1},\dotsc,V_d$, downward induction on $i$ and the membership test computes
$U_i\smallsetminus W_i$ and thus $W_i$.
This uses the observation that the membership test, starting with $W_j=L^j\cap V_j$, may be used to determine if a point
$\mathbf{x}_0\in U_i$ lies on $V_j$, for any $j>i$.
This is invoked in Step~\ref{Step_MT} in Algorithm~\ref{Alg:EDS} for computing such equidimensional slices.
\begin{algorithm}
\caption{Computing equidimensional slices\label{Alg:EDS}}
\SetAlgoLined
\KwIn{$F(\mathbf{x})$, $\ell_1,\dotsc,\ell_d$, $U_1,\dotsc,U_d$ as above.}
\KwOut{Equidimensional slices $W_0,\dotsc,W_d$ of $V$.}
Set $W_d\vcentcolon=U_d$.
\For{ $i$ from $d-1$ down to $0$}{
Set $W_i\vcentcolon=\{\}$.
\For{each point $\mathbf{x}_0\in U_i$}{
If $\mathbf{x}_0\not\in V_{i+1}\cup\dotsb\cup V_d$, then $W_i\vcentcolon= W_i\cup\{\mathbf{x}_0\}$.\label{Step_MT}
}
}
Return $W_d,\dotsc,W_1,W_0$.
\end{algorithm}
\begin{remark}\label{Rem:alternative}
An alternative to Algorithm~\ref{Alg:EDS} is a \demph{local dimension test}~\cite{BHPS_local_dim},
which can
determine if a point $\mathbf{x}_0\in U_i$ lies on a component of dimension exceeding $i$.\hfill
\end{remark}
\subsubsection{Decomposing equidimensional slices}
\label{NID_step2}
Suppose that we have the equidimensional slices $W_0,\dotsc,W_d$ of $V$, where $W_i=L^i\cap V_i$ for each $i$, as computed in
Algorithm~\ref{Alg:EDS}.
Fix $i$ and suppose that the irreducible decomposition of $V_i$ is
\[
V_i\ =\ X_1 \cup X_2 \cup \dotsb \cup X_r\,,
\]
so that $X_1,\dotsc, X_r$ are all of the irreducible components of $V$ of dimension $i$.
Then
\begin{equation}\label{Eq:WitnessSetPartition}
W_i\ =\ L^i\cap V_i\ =\ \left(L^i\cap X_1\right) \sqcup \left(L^i\cap X_2\right) \sqcup \dotsb \sqcup \left(L^i\cap X_r\right)\,.
\end{equation}
This union is disjoint by the generality of $\ell_1,\dotsc,\ell_d$, as the intersection $X_j\cap X_k$ for $j\neq k$
has dimension less than $i$.
Call~\eqref{Eq:WitnessSetPartition} the \demph{witness set partition} of equidimensional slice $W_i$.
Each part $L^i\cap X_j$ is a witness point set for $X_j$.
Computing a witness set partition of $W_i$ is tantamount to computing a numerical irreducible decomposition of $V_i$.
Suppose that $H(\mathbf{x},t):=(F(\mathbf{x}), tL^i(\mathbf{x})+(1{-}t)L'(\mathbf{x}))$ is a linear slice homotopy~\eqref{Eq:moveWS}.
As with moving a witness set, if we track a point
$\mathbf{x}\in L^i\cap X_j$ to a point $\mathbf{x}'\in L'\cap V$, then all points of the homotopy path, including its endpoint $\mathbf{x}'$, lie on
$X_j$.
Suppose that we combine linear slice homotopies together, moving points of $W_i=L^i\cap V_i$ to $L'\cap V_i$ on to $L''\cap V_i$,
and then back to $L^i\cap V_i$.
The three convex combinations,
\[
tL^i(\mathbf{x})+(1-t)L'(\mathbf{x})\,,\ \ tL'(\mathbf{x})+(1-t)L''(\mathbf{x})\,,\ \mbox{ and }\ tL''(\mathbf{x})+(1-t)L^i(\mathbf{x})\,,
\]
%
for $t\in[0,1]$ together
form a based loop in the space of codimension $i$ affine linear subspaces.
Tracking each $\mathbf{x}_0\in W_i$ along the three homotopies gives another point $\sigma(\mathbf{x}_0)\in W_i$.
This computes a \demph{monodromy permutation} \defcolor{$\sigma$} of $W_i$.
This has the property that the partition of~$W_i$ into the cycles of $\sigma$ refines the witness set
partition~\eqref{Eq:WitnessSetPartition}.
Following additional based loops may lead to other partitions of $W_i$ into cycles of monodromy permutations.
The common coarsening of these monodromy cycle partitions is an \demph{empirical partition} of $W_i$.
Every empirical partition is a refinement of the witness set partition.
Since the smooth locus of $X_j\smallsetminus (\bigcup_{k\neq j}X_k)$ is path-connected,
the common coarsening of all empirical partitions is the witness set partition.
Thus computing monodromy permutations will eventually give the witness set partition.
The problem with this approach to numerical irreducible decomposition is that only when~$V_i$ is irreducible is there a stopping
criterion.
Namely, if we discover an empirical partition consisting of a single part, then we conclude that $V_i$ is irreducible,
and $(F,L^i,W_i)$ is a numerical irreducible decomposition of $V_i$.
All other cases lack a stopping criterion.
A common heuristic stopping criterion is the trace test~\cite{OriginalTraceTest}.
To begin, form a linear slice homotopy~\eqref{Eq:moveWS} using a linear subspace $L'$ such that $L^i\cap L'$ has codimension $i{+}1$.
Then the convex combination $tL^i(\mathbf{x})+(1-t)L'(\mathbf{x})$ forms a \demph{pencil}.
The \demph{trace test} follows from the observation that while each homotopy path $\mathbf{x}(t)$ for $t\in[0,1]$ tracked from a point
$\mathbf{x}\in W_i$ is nonlinear, if we sum over all points $\mathbf{x}\in L^i\cap X_j$ in a single part of the witness set partition, then that sum or
its average is an affine-linear function of the homotopy parameter $t$.
Given a subset $S\subset W_i$, the average of the points tracked from $S$ is the \demph{trace} of $S$.
The trace is an affine-linear function of $t$ if and only if $S$ is a union of witness point sets~\cite[Theorem 15.5.1]{Sommese:Wampler:2005}.
\begin{example}\label{Ex:Trace}
Consider the folium of Descartes which is defined by $f=x^3+y^3-3xy$.
A general line $\ell$ meets the folium in three points $W$ with the triple $(f,\ell,W)$ forming a witness set for
the folium.
Figure~\ref{F:folium} shows these witness sets on four parallel lines, which lie in a pencil.
\begin{figure}[htb]
\[
\begin{picture}(232,92)(-30,0)
\put(0,0){\includegraphics{pictures/Folium}}
\put(-30,72){$x^3+y^3-3xy$}
\put(163,53){$\ell$}
\put(125,7){collinear traces}
\put(123,10){\vector(-1,0){16}}
\end{picture}
\]
\caption{The trace test for the folium of Descartes.}
\label{F:folium}
\end{figure}
Note that the four traces are collinear.\hfill$\diamond$
\end{example}
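The collinearity in Figure~\ref{F:folium} is easy to reproduce numerically without any path tracking: slicing the folium with a pencil of parallel lines reduces to solving univariate cubics.
The following Python sketch (ours; the slope $2$ of the pencil is an arbitrary choice) computes the traces and checks that they depend affinely on the pencil parameter.

\begin{verbatim}
# Traces of the folium x^3 + y^3 - 3xy = 0 along the pencil of lines y = 2x+c.
import numpy as np

def trace(c):
    # substituting y = 2x + c gives 9x^3 + (12c-6)x^2 + (6c^2-3c)x + c^3 = 0
    xs = np.roots([9, 12*c - 6, 6*c**2 - 3*c, c**3])
    return np.array([[x, 2*x + c] for x in xs]).mean(axis=0)

traces = np.array([trace(c) for c in (1.0, 2.0, 3.0, 4.0)])
print(np.diff(traces, n=2, axis=0))   # second differences ~ 0: collinear
\end{verbatim}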
The collinearity of traces may be seen as a consequence of Vi\`{e}ta's formula: the sum of
the roots of a monic polynomial of degree $\delta$ in $y$ equals the negative of its coefficient of $y^{\delta-1}$~\cite{traceTest}.
This gives the following stopping criterion for computing a numerical irreducible decomposition.
Given a part $S$ of an empirical partition of $W_i$, track all points of $S$ along a linear slice homotopy given by a
pencil containing $L^i$.
If the traces are collinear, then $S$ is a witness point set for some component of $V_i$.
Otherwise, either compute more monodromy permutations to coarsen the empirical partition or check the collinearity of the
trace for the union of $S$ with other parts of the empirical partition.
This is called the \demph{trace test}.
\begin{example}\label{Ex:Trace_Decomposition}
Suppose that $V$ is the union of the ellipse $8(x+1)^2+ 3(2y+x+1)^2 = 8$ and the folium, as in
Figure~\ref{F:folium_ellipse}.
\begin{figure}[htb]
\[
\begin{picture}(192,108
\put(0,0){\includegraphics{pictures/Folium_Ellipse}}
\put(184,74){$\ell$}
\end{picture}
\]
\caption{Numerical irreducible decomposition for the ellipse and folium.}
\label{F:folium_ellipse}
\end{figure}
A witness set for $V$ consists of the five points $W=V\cap\ell$.
Tracking points of $W$ as $\ell$ varies over several loops in the space of lines gives an empirical
partition of $W$ into two sets, of cardinalities two and three, respectively.
Applying the trace test to each subset verifies that each is a witness set of a component of $V$.\hfill$\diamond$
\end{example}
The methods described in this and the previous subsections combine to give the \demph{Cascade Algorithm} for computing a numerical
irreducible decomposition.
This was introduced in~\cite{SOMMESE2000572} and is implemented in \texttt{PHCpack}~\cite{PHCpack}, \texttt{Bertini}~\cite{Bertini} and
\texttt{HomotopyContinuation.jl}~\cite{HC.jl}, and in \texttt{NAG4M2}~\cite{NumericalAlgebraicGeometryArticle} through its interfaces.
We present a simplified form in Algorithm~\ref{NID_algo}.
\begin{algorithm}
\caption{The Cascade Algorithm\label{NID_algo}}
\SetAlgoLined
\KwIn{A system $F(\mathbf{x})$ of polynomials in $n$ variables defining a variety $V$ of dimension $d$.}
\KwOut{A numerical irreducible decomposition of $V$.}
\For{each dimension $i$ from $d$ down to $0$}{
Choose a codimension $i$ linear space $L^i$ and compute $L^i\cap V$, yielding points $U_i$.
Remove from $U_i$ any points lying on higher-dimensional components as in Algorithm~\ref{Alg:EDS}. Call the remaining points $W_i$.
Compute the witness set partition $W_i=S_1\sqcup\cdots\sqcup S_r$ using monodromy and the trace test as explained in~\Cref{NID_step2}.
Return $(F,L^i,S_j)$ for $j=1,\ldots,r$. These are witness sets for the irreducible components of $V$ of dimension $i$.
}
\end{algorithm}
\subsection{Advanced methods for solving}\label{ssec:advanced}
The perspective afforded by numerical algebraic geometry and its tools---witness sets and monodromy---leads to new algorithms for solving
systems of equations.
We describe two such algorithms.
Regeneration~\cite{Regen} is a bootstrap method that constructs a numerical irreducible decomposition one equation at a time.
Monodromy solving~\cite{Monodromy_Solve} exploits that it is often easier to find a system of equations for which a given point is a
solution than to find a solution to a given system.
\subsubsection{Regeneration}\label{ssec:regen}
Let $F\vcentcolon=(f_1,\dotsc,f_m)$ be a system of polynomials in $n$ variables.
Rather than solve all equations at once as in~\eqref{eq:AlgVariety}, we instead consider
the sequence of varieties
\[
\mathbb{C}^n\ =\ V_0 \supset V_1 \supset V_2 \supset \dotsb \supset V_m = V\,,
\]
where $\defcolor{V_i}$
is defined by the first $i$ polynomials in $F$.
The approach of \demph{equation-by-equation solvers}~\cite{EqByEq,Regen,RegenCascade} is to iteratively compute
$V_i$ given $V_{i-1}$ for each $i = 1,\dotsc,m$.
Let $X\subset V_{i-1}$ be an irreducible component of dimension $d$.
Then
\[
X\cap V_i\ =\ \{\mathbf{x}\in X\mid f_i(\mathbf{x})=0\}\,.
\]
If $f_i$ vanishes identically on $X$, then $X$ is an irreducible component of $V_i$.
Otherwise $X\cap V_i$, if nonempty, is a union of irreducible components of $V_i$ of dimension $d{-}1$.
We explain how to obtain a numerical irreducible decomposition of $X\cap V_i$ given a witness set for $X$.
Let $((f_1,\dotsc, f_{i-1}), L^d, L^d\cap X)$ be a witness set for $X$.
By the generality of $L^d$, with probability one a given polynomial $f$ vanishes on $X$ if and only if $f$ vanishes
at each point of $W\vcentcolon=L^d\cap X$.
\demph{Regeneration} is a method to use $W$ to compute a witness point superset for $X\cap V_i$.
Let $\ell_1,\dots,\ell_d$ be $d$ linear polynomials defining $L^d$, and suppose that \defcolor{$\delta$} is the degree of $f_i$.
Form the convex combination $\defcolor{\ell(t)}\vcentcolon= t \ell_d + (1-t)\ell'$ of $\ell_d$ with a new general degree one polynomial,
\defcolor{$\ell'$}.
Use the straight-line homotopy
\[
(f_1,\dotsc, f_{i-1}\,,\ \ell(t)\,,\ \ell_1,\dotsc,\ell_{d-1})
\]
to move the witness point set $W_1=W=L^d\cap X$ at $t=1$ to witness point sets $W_2,\dotsc,W_\delta$ at distinct points
$t=t_2,\dotsc,t_\delta$, respectively.
Then the product $f\vcentcolon= \ell_d\cdot \ell(t_2)\dotsb \ell(t_\delta)$ has degree $\delta$ in $\mathbf{x}$ and we have
\[
\defcolor{U'}\ =\ W_1\cup W_2 \cup\dotsb\cup W_\delta
\ =\ L^{d-1}\cap \left(X \cap\{\mathbf{x} \mid f(\mathbf{x})=0\}\right)\,,
\]
where $L^{d-1}$ is defined by $\ell_1,\dotsc,\ell_{d-1}$.
Use the straight-line homotopy
\[
(f_1,\dotsc,f_{i-1}\,,\ t f + (1-t) f_i\,,\ \ell_1,\dotsc,\ell_{d-1})
\]
to track the points of $U'$ at $t=1$ to the set $U$ at $t=0$.
Then
\[
U\ =\ L^{d-1}\ \cap\ \bigl(X\ \cap\ V_i\bigr)
\]
is a witness point superset for $X\cap V_i$.
Finally, use monodromy and the trace test of Section~\ref{NID_step2} to decompose $U$ into witness point sets for the irreducible components of $X\cap V_i$.
As regeneration computes a numerical irreducible decomposition of the variety $V$ of $F$, it will also compute all isolated solutions to
$F$.
\subsubsection{Solving by monodromy}\label{SS:Monodromy}
Suppose that we wish to solve a system $F=F(\mathbf{x};\mathbf{p})$ of polynomials that lies in a parameter space of polynomial systems as in
Section~\ref{SS:parameterContinuation}, and that evaluation at a general point $\mathbf{x}_0\in\mathbb{C}^n$ gives $n$ independent linear equations in
the parameters $\mathbf{p}\in\mathbb{C}^k$.
For example, $F(\mathbf{x};\mathbf{p})$ could be the family of all polynomial systems $f_1(\mathbf{x}),\dotsc,f_d(\mathbf{x})$ where the degree of $f_i$ is $d_i$
and $\mathbf{p}\in\mathbb{C}^k$ is the vector of coefficients.
More generally, each $f_i(\mathbf{x})$ could be a sparse polynomial of support $\mathcal{A}_i$.\smallskip
Consider the incidence variety~\eqref{Eq:Incidence_I} with projections $\pi_1$ to $\mathbb{C}^n$ and $\pi_2$ to $\mathbb{C}^k$.
\begin{equation}\label{Eq:Incidence_II}
\raisebox{-27pt}{\begin{picture}(290,61)(3,0)
\put(20,49){$Z\ =\ \{ (\mathbf{x}, \mathbf{p}) \in \mathbb{C}^n \times \mathbb{C}^k \mid F(\mathbf{x};\mathbf{p}) = 0\}\ \subseteq\ \mathbb{C}^n \times \mathbb{C}^k$}
\put(21,45){\vector(-1,-4){8}} \put( 3,30){\small$\pi_1$} \put( 5,0){$\mathbb{C}^n$}
\put(26,45){\vector( 1,-4){8}} \put(34,30){\small$\pi_2$} \put(31,0){$\mathbb{C}^k$}
\end{picture}}
\end{equation}
For any parameters $\mathbf{p}\in\mathbb{C}^k$, $\pi_2^{-1}(\mathbf{p})$ is the set of solutions $\mathbf{x}\in\mathbb{C}^n$ to $F(\mathbf{x};\mathbf{p})=0$.
On the other hand, if we fix a general $\mathbf{x}_0\in\mathbb{C}^n$, then $\pi_1^{-1}(\mathbf{x}_0)\subset\mathbb{C}^k$ is defined by $n$ independent
linear equations on $\mathbb{C}^k$, and is thus a linear subspace of dimension $k{-}n$.
(This implies that $Z$ is irreducible and has dimension $k$, which explains why the general fiber $\pi_2^{-1}(\mathbf{p})$ is finite and
$\pi_2\colon Z\to\mathbb{C}^k$ is a branched cover.)
Imposing $k{-}n$ additional general degree one equations on $\pi_1^{-1}(\mathbf{x}_0)$ gives a single parameter value $\mathbf{p}_0\in\mathbb{C}^k$ such that
$F(\mathbf{x}_0;\mathbf{p}_0)=0$, that is, a system of polynomials $F(\mathbf{x};\mathbf{p}_0)$ in the family $Z$ for which $\mathbf{x}_0$ is a solution.
The underlying idea of monodromy solving~\cite{Monodromy_Solve} is to use monodromy to discover all solutions to
$F(\mathbf{x};\mathbf{p}_0)=0$ and then use a parameter homotopy to find solutions to any desired system of polynomials in the family.
Similar to the description of monodromy in Section~\ref{NID_step2}, if we choose general points $\mathbf{p}_1,\mathbf{p}_2\in\mathbb{C}^k$, we may form the
trio of parameter homotopies
$F(\mathbf{x}; t\mathbf{p}_0+(1-t)\mathbf{p}_1)$,
$F(\mathbf{x}; t\mathbf{p}_1+(1-t)\mathbf{p}_2)$, and
$F(\mathbf{x}; t\mathbf{p}_2+(1-t)\mathbf{p}_0)$.
For $t\in[0,1]$, these form a loop in the parameter space based at $\mathbf{p}_0$, and we may track the point $\mathbf{x}_0$ along this loop to obtain a
possibly new point $\mathbf{x}'\in\pi_2^{-1}(\mathbf{p}_0)$ so that $F(\mathbf{x}';\mathbf{p}_0)=0$.
More generally, given a subset $S\subset\pi_2^{-1}(\mathbf{p}_0)$ of computed points, we may track it along a possibly new loop in $\mathbb{C}^k$ based
at $\mathbf{p}_0$ to obtain a subset $S'\subset\pi_2^{-1}(\mathbf{p}_0)$.
Thus we may discover additional solutions to $F(\mathbf{x};\mathbf{p}_0)=0$.
When the number $N$ of solutions to a general system in the family is known (e.g.,\ via Bernstein's bound for the sparse systems of
Section~\ref{S:Bernstein}), this method has a stopping criterion.
Otherwise, some heuristic may be used once sufficiently many solutions are known.
The technique of solving using monodromy was introduced in~\cite{Monodromy_Solve}, where a more complete description may be found.
It is implemented in \texttt{HomotopyContinuation.jl}~\cite{HC.jl} and widely used, in particular when it is not necessary to compute all
solutions to a given system.
Suppose that we have a branched cover $\pi\colon Z\to Y$ with $Y$ rational (e.g.\ as in~\eqref{Eq:Incidence_II} where $Y=\mathbb{C}^k$), and we know all
solutions $\pi^{-1}(\mathbf{y}_0)$ for a parameter value $\mathbf{y}_0$ not in the branch locus, $B$.
As in Section~\ref{NID_step2}, tracking all points in $\pi^{-1}(\mathbf{y})$ as $\mathbf{y}$ varies along a loop in $Y\smallsetminus B$ based at
$\mathbf{y}_0$ gives a monodromy permutation $\sigma$ of $\pi^{-1}(\mathbf{y}_0)$, which we regard as an element of the symmetric group $S_N$, where
$N=|\pi^{-1}(\mathbf{y}_0)|$.
The set of all monodromy permutations forms the \demph{monodromy group} of the branched cover $Z$.
This is in fact a Galois group~\cite{Harris,J1870,GGEGA}:
Let $\mathbb{K}=\mathbb{C}(Y)$ be the field of rational functions on the parameter space $Y$ and let $\mathbb{L}=\mathbb{C}(Z)$ be the function field of the incidence
variety $Z$.
As $\pi$ is dominant, we may regard $\mathbb{K}$ as a subfield of $\mathbb{L}$ via $\pi^{-1}$, and $\mathbb{L}/\mathbb{K}$ is a field extension of degree $N$.
The Galois group of the normal closure of $\mathbb{L}/\mathbb{K}$ is equal to the monodromy group of
$Z$, and we call it the Galois group of the branched cover $Z$, $\mathcal{G}(Z)$.
There are several approaches to computing Galois groups using methods from numerical nonlinear algebra.
In~\cite{Galois_LS}, monodromy permutations were computed and used to show some Galois groups were the full symmetric group (see
Section~\ref{ssec:enum}).
Other approaches, including methods guaranteed to compute generators of Galois groups, were developed in~\cite{Ngalois}.
Yahl~\cite{Yahl_Fano} introduced a method to certify that a Galois group contains a simple transposition, using ideas from this
section and from Section~\ref{sec:cert}.
The Galois group being imprimitive (preserving a nontrivial partition of the solutions) is equivalent to the branched cover decomposing as a composition
of branched covers, and this may be exploited for solving (computing points in a fiber).
This is explained in~\cite{amendola2016solving,brysiewicz2021solving}.
\section{Certification}\label{sec:cert}
Let $F$ be a square system of polynomials and $\mathbf{z}_0$ be a point presumed to be an approximation of a solution to $F$.
We discuss methods that can give a computational proof
that Newton iterates starting from $\mathbf{z}_0$ converge to a nearby regular zero $\mathbf{z}$ of $F$.
Such methods \demph{certify} the numerical solution $\mathbf{z}_0$ to $F$.
Certification methods can also prove that two numerical solutions correspond to two distinct zeroes, and are thus a
useful tool in both theoretical and applied problems in numerical nonlinear algebra.
There are two main strategies to certify solutions to square polynomial systems, \demph{Smale's $\alpha$-theory} and
\demph{Krawczyk's method}.
A difference is that Smale's $\alpha$-theory uses exact arithmetic, while Krawczyk's method uses floating-point arithmetic.
\begin{remark}
There are other approaches.
In \cite{DUFF2022367} the authors develop methods to certify overdetermined systems which require global information.
In~\cite{HHS}, overdetermined systems are reformulated as square systems to enable certification.\hfill$\diamond$
\end{remark}
\subsection{Smale's $\alpha$-theory}
Smale's $\alpha$-theory certifies approximate zeroes of a square system~$F$.
An approximate zero of $F$ is a data structure representing a solution to $F$.
Approximate zeroes were mentioned in Section~\ref{sec:tracking}; we now give a more formal definition.
\begin{definition}\label{def_approx_zero}
Let $F(\mathbf{x})$ be a square system of $n$ polynomials in $n$ variables.
Writing $\defcolor{JF}:=\frac{\partial F}{\partial \mathbf{x}}$ for its Jacobian matrix, its
Newton operator $N_F$~\eqref{Newton} is
$N_F(\mathbf{x}):=\mathbf{x} - \left(JF(\mathbf{x})\right)^{-1} F(\mathbf{x})$.
A point $\mathbf{z}_0 \in \mathbb{C}^n$ is an \demph{approximate zero} of $F$ if there exists
a regular zero $\mathbf{z}\in \mathbb{C}^n$ of $F$ such that the sequence $\{ \mathbf{z}_k\mid k\geq 0\}$ of Newton iterates
defined by $\mathbf{z}_{k+1} = N_F(\mathbf{z}_k)$ for $k\geq 0$ converges quadratically to $\mathbf{z}$ in that
\[
\| \mathbf{z}_{k+1} - \mathbf{z}\|\ \leq \ \frac{1}{2} \| \mathbf{z}_k-\mathbf{z}\|^2
\qquad \forall k\,\geq\,0\,.
\]
We call $\mathbf{z}$ the \demph{associated zero} of $\mathbf{z}_0$.\hfill$\diamond$
\end{definition}
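As a toy illustration of this definition (our own; the univariate polynomial $x^2-2$ and the starting point $1.4$ are arbitrary choices), the error in the Newton iterates roughly squares at each step:

\begin{verbatim}
# Newton iterates for f(x) = x^2 - 2 starting at z0 = 1.4.
from math import sqrt

f, df = lambda x: x*x - 2, lambda x: 2*x
z = 1.4
for k in range(5):
    print(k, abs(z - sqrt(2)))      # errors: ~1e-2, 7e-5, 2e-9, ...
    z = z - f(z) / df(z)            # the Newton operator N_f
\end{verbatim}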
Smale's $\alpha$-theory certifies that a point $\mathbf{x}$ is an approximate zero using only local information encoded in
two functions of $F$ and $\mathbf{x}$,
\[
\defcolor{\beta(F,\mathbf{x})}\ \vcentcolon =\ \| JF(\mathbf{x})^{-1}F(\mathbf{x})\|
\ \ \text{ and }\ \
\defcolor{\gamma(F,\mathbf{x})}\ \vcentcolon=\
\max_{k\geq 2}\big\| \frac{1}{k!}\, JF(\mathbf{x})^{-1} \mathrm D^kF(\mathbf{x})\big\|^\frac{1}{k-1}\,.
\]
Here, $\beta$ is the size of a Newton step,
$\mathrm D^kF(\mathbf{x})$ is the tensor of derivatives of order $k$ at $\mathbf{x}$, and $JF(\mathbf{x})^{-1} \mathrm D^kF(\mathbf{x})$ is the corresponding
multilinear map $(\mathbb{C}^n)^k\to \mathbb{C}^n$.
The norm is the operator norm $\| A\| \vcentcolon= \max_{\| v \| = 1} \| A(v,\ldots,v)\|$.
Let $\defcolor{\alpha(F,\mathbf{x})}\vcentcolon=\beta(F,\mathbf{x}) \cdot \gamma(F,\mathbf{x})$ be the product of these two functions.
We state two results of Smale \cite[Theorem 4 and Remark 6 in Chapter 8]{BCSS}.
\begin{theorem}\label{alpha_theorem}
Let $\mathbf{x}\in\mathbb{C}^n$ and $F$ be a system of $n$ polynomials in $n$ variables.
\begin{enumerate}
\item If $\alpha(F,\mathbf{x})<\frac{13-3\sqrt{17}}{4}\approx 0.15767$, then $\mathbf{x}$ is an approximate zero of $F$
whose associated zero $\mathbf{z}$ satisfies $\|\mathbf{x}-\mathbf{z}\|\leq 2\beta(F,\mathbf{x})$.
\item If $\mathbf{x}$ is an approximate zero of $F$ and $\mathbf{y}\in\mathbb{C}^n$ satisfies $\Vert \mathbf{x}-\mathbf{y}\Vert < \tfrac{1}{20\,\gamma(F,\mathbf{x})}$,
then $\mathbf{y}$ is also an approximate zero of $F$ with the same associated zero as $\mathbf{x}$.
\end{enumerate}
\end{theorem}
Shub and Smale \cite{SS99} derived an upper bound for $\gamma(F,\mathbf{x})$ which can be computed using exact arithmetic, and thus one
may decide algorithmically if $\mathbf{x}$ is an approximate zero of $F$, using only data of $F$ and the point $\mathbf{x}$ itself.
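In one variable the quantities $\beta$, $\gamma$, and $\alpha$ can be computed directly from the derivatives, so no such bound is needed; the following Python sketch (ours, continuing the toy example $f=x^2-2$ and $\mathbf{x}=1.4$) checks the hypothesis of Theorem~\ref{alpha_theorem}(1).

\begin{verbatim}
# Certify x0 = 1.4 as an approximate zero of f(x) = x^2 - 2 via alpha-theory.
from math import factorial, sqrt
import numpy as np
P = np.polynomial.polynomial

coeffs, x0 = [-2, 0, 1], 1.4                     # f in ascending coefficients
fx  = P.polyval(x0, coeffs)
dfx = P.polyval(x0, P.polyder(coeffs, 1))
beta  = abs(fx / dfx)                            # length of the Newton step
gamma = max(abs(P.polyval(x0, P.polyder(coeffs, k)) / (factorial(k) * dfx))
            ** (1 / (k - 1)) for k in range(2, len(coeffs)))
alpha = beta * gamma                             # here about 0.0051
print(alpha < (13 - 3 * sqrt(17)) / 4)           # True: x0 is certified
\end{verbatim}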
The software \texttt{alphaCertified}~\cite{alpha_certify} uses this theorem in an algorithm.
An implementation is publicly available for
download\footnote{\url{https://www.math.tamu.edu/~sottile/research/stories/alphaCertified/index.html}}.
If the polynomial system $F$ has only real coefficients, then \texttt{alphaCertified} can decide if an associated zero is real.
The idea is as follows.
Let $\mathbf{x}\in \mathbb{C}^n$ be an approximate zero of $F$ with associated zero $\mathbf{z}$.
Since the Newton operator has real coefficients, $N_F(\overline{\mathbf{x}}) =\overline{ N_F(\mathbf{x})}$, we see that $\overline{\mathbf{x}}$ is an
approximate zero of $F$ with associated zero $\overline{\mathbf{z}}$.
Consequently, if $\| \mathbf{x}-\overline{\mathbf{x}}\| < \tfrac{1}{20\,\gamma(F,\mathbf{x})}$, then $\mathbf{z}=\overline{\mathbf{z}}$ by \Cref{alpha_theorem}(2).
\subsection{Krawczyk's method}
Interval arithmetic certifies computations using floating-point arithmetic.
\demph{Krawczyk's method} \cite{Krawczyk:1969} adapts Newton's method to interval arithmetic and can certify zeros of
analytic functions $\mathbb{C}^n\to\mathbb{C}^n$.
This is explained in \cite{Rump83}.
Real interval arithmetic involves the set of compact real intervals,
\[
\defcolor{\mathbb{IR}}\ \vcentcolon=\ \{[x,y]\mid x,y\in\mathbb R, x\leq y\}\,.
\]
For $X, Y \in \mathbb{IR}$ and $\circ \in \{ +,-, \cdot, \div \}$, we define
$\defcolor{X \circ Y} \vcentcolon= \{ x \circ y \,|\, x\in X,y\in Y\}$.
(For $\div$ we require that $0\not\in Y$.)
For intervals $I,J,K\in \mathbb{IR}$ we have $I\cdot (J + K) \subseteq I\cdot J + I\cdot K$, but the inclusion may be strict.
Indeed,
\begin{align*}
[0,1] \cdot ( [-1,0] + [1,1] )\ &=\ [0,1]\cdot [0,1] \ =\ [0,1]\,\ \ \mbox{ but}\\
[0,1] \cdot [-1,0] + [0,1] \cdot [1,1] \ &=\ [-1,0] + [0,1]\ =\ [-1,1]\,.
\end{align*}
Thus there is no distributive law in interval arithmetic.
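The failure of distributivity is easy to observe numerically.
The following Python sketch (a bare-bones model of $\mathbb{IR}$ with endpoint arithmetic and no directed rounding, so not a rigorous interval library) evaluates the two expressions above.
\begin{verbatim}
# Sketch: real interval arithmetic on pairs (lo, hi), just enough to see
# that x*(y+z) and x*y + x*z may evaluate to different intervals.
def add(X, Y):
    return (X[0] + Y[0], X[1] + Y[1])

def mul(X, Y):
    p = [a * b for a in X for b in Y]        # products of the endpoints
    return (min(p), max(p))

X, Y, Z = (0.0, 1.0), (-1.0, 0.0), (1.0, 1.0)
print(mul(X, add(Y, Z)))            # (0.0, 1.0)
print(add(mul(X, Y), mul(X, Z)))    # (-1.0, 1.0)
\end{verbatim}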
\demph{Complex intervals} are rectangles in the complex plane of the form
\[
X+\sqrt{-1}Y\ =\ \{x+\sqrt{-1}y\mid x\in X,y\in Y\}\,, \quad
\mbox{where}\ X,Y\in\mathbb{IR}\,.
\]
Let \defcolor{$\mathbb{IC}$} be the set of all complex intervals.
Writing $\frac{X}{Y}$ for $X\div Y$,
we define arithmetic for complex intervals $I=X+\sqrt{-1}\ Y$ and $J=W+\sqrt{-1}Z$ as follows.
\begin{alignat*}{2}
I + J &\vcentcolon=\ (X + W) + \sqrt{-1}\ (Y+Z)
\qquad I \cdot J &&\vcentcolon=\ (X\cdot W - Y \cdot Z) + \sqrt{-1}\ (X\cdot Z + Y\cdot W)\\
I - J &\vcentcolon=\ (X - W) + \sqrt{-1}\ (Y-Z)\qquad
\;\, \frac{I}{J} &&\vcentcolon=\
\frac{X \cdot W + Y\cdot Z}{W\cdot W + Z\cdot Z} + \sqrt{-1}\ \frac{Y \cdot W - X\cdot Z}{W\cdot W + Z\cdot Z}
\end{alignat*}
As before, for $\tfrac{I}{J}$ we assume that $0\not\in (W\cdot W + Z\cdot Z)$.
As with real intervals, there is no distributive law for complex intervals.
Consequently, evaluating a polynomial at intervals is not well-defined.
Evaluation at intervals is well-defined for expressions of a polynomial as a straight-line program,
which is an evaluation of the polynomial via a sequence of arithmetic operations that does not involve distributivity.
\begin{example}\label{example_IA}
Consider the polynomial $f(x,y,z)=x(y+z) = xy+xz$.
These two expressions of the distributive law are different straight-line programs for $f$, and we have shown that they have distinct evaluations
on the triple $( [0,1], [-1,0], [1,1])$.\hfill$\diamond$
\end{example}
We sidestep this issue with the notion of an interval enclosure.
\begin{definition}
Let $F$ be a system of $n$ polynomials in $n$ variables. We call a map $$\defcolor{\square F}\colon (\mathbb{IC})^n \rightarrow (\mathbb{IC})^{n}$$
such that
$\{F(\mathbf{x}) \mid \mathbf{x} \in \mathbf{I}\} \subseteq \square F(\mathbf{I})$ for every $\mathbf{I} \in (\mathbb{IC})^n$ an \demph{interval enclosure} of
$F$.\hfill$\diamond$
\end{definition}
Let~$\square F$ be an interval enclosure of a square polynomial system $F$ and $\square JF$ be an interval enclosure of its Jacobian map
$JF\colon\mathbb{C}^n\to\mathbb{C}^{n\times n}$.
Furthermore, let $\mathbf{I}\in (\mathbb{IC})^n$, $\mathbf{x} \in \mathbb{C}^n$, and let $Y \in \mathbb{C}^{n\times n}$ be an invertible matrix.
The \demph{Krawczyk operator} these define is
\[\defcolor{K_{\mathbf{x},Y}(\mathbf{I})} \vcentcolon=\
\mathbf{x} - Y \cdot \square F(\mathbf{x}) + (\mathbf{1}_n - Y \cdot \square JF(\mathbf{I}))(\mathbf{I}-\mathbf{x})\,.
\]
The norm of a matrix interval $A\in(\mathbb{IC})^{n\times n}$ is
$\| A\|_\infty \vcentcolon= \max\limits_{B\in A} \max\limits_{\mathbf{v}\in\mathbb{C}^n} {\| B\mathbf{v}\|_\infty} / {\| \mathbf{v}\|_\infty}$,
where $\Vert (v_1,\ldots,v_n) \Vert_\infty = \max_{1\leq i\leq n}\vert v_i\vert$ for $\mathbf{v}\in\mathbb{C}^n$.
We state the main theorem underlying Krawczyk's method, which is proven in~\cite{Rump83}.
\begin{theorem}\label{main_theorem_IA}
Let $F=(f_1,\ldots,f_n)$ be a system of $n$ polynomials in $n$ variables, $\mathbf{I}\in(\mathbb{IC})^n$, $\mathbf{x}\in \mathbf{I}$,
and let $Y\in\mathbb{C}^{n\times n}$ be invertible.
\begin{enumerate}
\item If $K_{\mathbf{x},Y}(\mathbf{I}) \subset \mathbf{I}$, there is a zero of $F$ in $\mathbf{I}$.
\item If $\sqrt{2} \, \| \mathbf{1}_n - Y\cdot \square JF(\mathbf{I}) \|_\infty < 1$, then $F$ has a unique zero in $\mathbf{I}$.
\end{enumerate}
\end{theorem}
Several choices have to be made to implement Krawczyk's method.
For instance, we have to choose interval enclosures of both $F$ and its Jacobian $JF$.
\Cref{example_IA} shows that this is nontrivial as different straight-line programs for the same polynomial system can produce different
results in interval arithmetic.
Furthermore, choosing $\mathbf{I}$ in \Cref{main_theorem_IA} too small might cause the true zero not to lie in $\mathbf{I}$, while choosing $\mathbf{I}$ too
large can be an obstacle for the contraction property in (1).
Heuristics are usually implemented to address these issues.
Krawczyk's method is implemented in the commercial \texttt{MATLAB} package \texttt{INTLAB} \cite{Rump1999}, the
\texttt{Macaulay2} package \texttt{NumericalCertification}~\cite{LeeM2},
and in \texttt{HomotopyContinuation.jl} \cite{HC.jl,BKT2020}.
Krawczyk's method can also certify the reality of a zero:
Assume that $F$ has real coefficients.
Suppose that we have found an interval $\mathbf{I} \in (\mathbb{IC})^n$ and a matrix $Y\in \mathbb{C}^{n\times n}$ such that
$K_{\mathbf{x},Y}(\mathbf{I}) \subset \mathbf{I}$ and $\sqrt{2} \, \| \mathbf{1}_n - Y\cdot \square JF(\mathbf{I}) \|_\infty < 1$.
By Theorem~\ref{main_theorem_IA}, $F$ has a unique zero $\mathbf{z}$ in $\mathbf{I}$.
Since $\overline{\mathbf{z}}$ is also a zero of $F$, if $\{ \overline{\mathbf{y}} \mid \mathbf{y}\in K_{\mathbf{x},Y}(\mathbf{I}) \} \subset \mathbf{I}$, then
$\mathbf{z} = \overline{\mathbf{z}}$.
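To make \Cref{main_theorem_IA} concrete, here is a minimal one-variable sketch in Python (real intervals only, without directed rounding, so an illustration rather than a rigorous implementation such as \texttt{INTLAB} or \texttt{HomotopyContinuation.jl}) applying the Krawczyk test to $f(x)=x^2-2$ on the interval $\mathbf{I}=[1.40,1.43]$.
\begin{verbatim}
# Sketch: Krawczyk's test in one variable for f(x) = x^2 - 2 on I = [1.40, 1.43].
import math

def add(X, Y): return (X[0] + Y[0], X[1] + Y[1])
def sub(X, Y): return (X[0] - Y[1], X[1] - Y[0])
def mul(X, Y):
    p = [a * b for a in X for b in Y]
    return (min(p), max(p))

I  = (1.40, 1.43)
x  = 1.414                      # a point of I
Y  = 1.0 / (2.0 * x)            # approximate inverse of the Jacobian f'(x)
fx = x * x - 2.0
JI = mul((2.0, 2.0), I)         # interval enclosure of f' on I
E  = sub((1.0, 1.0), mul((Y, Y), JI))
# Krawczyk operator K = x - Y*f(x) + (1 - Y*JI)*(I - x)
K  = add((x - Y * fx, x - Y * fx), mul(E, sub(I, (x, x))))

print(K, I[0] <= K[0] and K[1] <= I[1])              # K inside I: a zero exists
print(math.sqrt(2) * max(abs(E[0]), abs(E[1])) < 1)  # uniqueness criterion
\end{verbatim}
Both criteria of \Cref{main_theorem_IA} hold, so $f$ has a unique zero in $\mathbf{I}$ (namely $\sqrt{2}$).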
\section{Applications}\label{sec:apps}
While we have largely discussed the theory and many aspects, methods, and some implementations of numerical nonlinear algebra, these were all
motivated by its applications to questions within and from outside of mathematics.
Many of these are well-known and may be found in other contributions in this volume.
We present three such here, involving synchronization of oscillators, enumerative geometry, and computer vision.
\subsection{The Kuramoto model}\label{ssec:kuramoto}
In his 1673 treatise on pendulums and horology~\cite{Huygens1673Horologium}, Christiaan Huygens observed an ``odd kind of sympathy'' between
pendulum clocks, which was one of the earliest observations of synchronization among coupled oscillators.
Other examples range from pacemaker cells in the heart to the formation of circadian rhythm in the brain
to synchronized flashing of fireflies.
The Kuramoto model emerged from the study of such coupled oscillators and has many interesting applications
that have fueled several decades of active research~\cite{Acebron2005}.
A network of oscillators can be modeled as a swarm of points circling the origin which pull on each other.
For weakly coupled and nearly identical oscillators, the natural separation of timescales \cite{Winfree1967Biological,Kuramoto1975Self}
allows a simple description of the long-term behavior
in terms of phases of the oscillators.
Kuramoto singled out the simplest case governed by equations
\begin{equation}\label{equ: generalized kuramoto}
\dot{\theta}_i\ =\
\omega_i -
\sum_{j \sim i} k_{ij} \sin(\theta_i - \theta_j)
\quad\text{ for } i = 0,\ldots,n\,.
\end{equation}
Here, $\defcolor{\theta_0},\ldots,\defcolor{\theta_n} \in [0,2\pi)$ are the phases of the oscillators,
\defcolor{$\omega_i$} are their natural frequencies, $\defcolor{k_{ij}} = k_{ji}$ are symmetric coupling coefficients,
and $\defcolor{j\sim i}$ is adjacency in the graph $G$ underlying the network.
This is the Kuramoto model \cite{Kuramoto1975Self}.
It is simple enough to be analyzed yet it exhibits interesting emergent behaviors,
and has initiated an active research field \cite{Strogatz2000From}.
One core problem that can be studied algebraically is \demph{frequency synchronization}.
This occurs when the tendency of oscillators to relax to their limit cycles
and the influences of their neighbors reach equilibrium,
and the oscillators are all tuned to their mean frequency.
Such synchronized configurations correspond to
equilibria of \eqref{equ: generalized kuramoto},
which are solutions to a nonlinear system of equations.
Even though this system is derived from a simplification of the oscillator model,
its utility extends far beyond this narrow setting.
For example, in electrical engineering,
it coincides with a special case of the power flow equations,
derived from laws of alternating current circuits~\cite{DorflerBullo2014Synchronization}.
The equilibrium equations become algebraic after a change of variables.
Numerical nonlinear algebra has been used to solve these and related families of equations,
finding synchronized configurations that cannot be found by simulation or symbolic computation.
For example, the IEEE 14-bus system from electrical engineering is a well-studied test case,
yet its full set of solutions remained unknown until it was computed using
total degree and polyhedral homotopy methods by Mehta, et al.~\cite{MehtaNguyenTuritsyn2016Numerical},
using \texttt{Bertini} \cite{Bertini} and \texttt{HOM4PS-2.0} \cite{LeeLiTsai2008HOM4PS}.
For rank one coupling, Coss, et al.~\cite{CossHauensteinHongMolzahn2018Locating}
showed that the equilibrium equation of \eqref{equ: generalized kuramoto}
may be reformulated as a set of decoupled univariate radical equations,
which are easy to solve.
Determining the number of complex equilibria (solutions to the equilibrium equations of~\eqref{equ: generalized kuramoto})
is another line of research that has used numerical nonlinear algebra.
In the 1980s, Baillieul and Byrnes \cite{BaillieulByrnes1982Geometric}
showed that a complete network of three oscillators has at most six complex equilibria, and all may be real.
For a complete network of four oscillators, they constructed 14 real equilibria.
There are 20 complex equilibria.
In the 2010s, Molzahn, et al.~\cite{MNMH} showed there could be 16 real equilibria
and in 2020, Lindberg et al.~\cite{LZBL} improved this to 18.
It remains unknown if all 20 complex equilibria can be real.
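As a purely numerical illustration (a heuristic search, not a certified count), the following Python sketch looks for real equilibria of \eqref{equ: generalized kuramoto} on a complete network of three oscillators with unit coupling and frequencies summing to zero, fixing $\theta_0=0$ to remove the rotational symmetry.
\begin{verbatim}
# Sketch: search for real equilibria of the Kuramoto model on K_3.
# theta_0 is fixed to 0; since the frequencies sum to zero, the
# equilibrium equations for i = 1, 2 suffice.
import numpy as np
from scipy.optimize import fsolve

k = np.ones((3, 3)) - np.eye(3)          # complete graph, unit coupling
omega = np.array([0.3, -0.5, 0.2])       # generic, sums to zero

def residual(t):
    theta = np.array([0.0, t[0], t[1]])
    return [omega[i] - sum(k[i, j] * np.sin(theta[i] - theta[j])
                           for j in range(3)) for i in (1, 2)]

found, rng = set(), np.random.default_rng(0)
for _ in range(200):
    sol, _, ok, _ = fsolve(residual, rng.uniform(0, 2*np.pi, 2), full_output=True)
    if ok == 1 and np.allclose(residual(sol), 0, atol=1e-9):
        found.add(tuple(np.round(np.mod(sol, 2*np.pi), 5)))
print(len(found), "real equilibria found")
\end{verbatim}
Any count obtained this way is only a lower bound for the number of real equilibria, which in turn is at most the six complex equilibria recalled above.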
We have a more complete answer for the enumeration of complex equilibria.
Using the bihomogeneous B\'ezout bound of an algebraic formulation of the equilibrium equations of \eqref{equ: generalized kuramoto},
Baillieul and Byrnes showed that a network of $n{+}1$ oscillators has at most $\binom{2n}{n}$ complex equilibria.
This upper bound is attained for generic parameters $\{\omega_i\}$ and $\{ k_{ij} \}$ when the underlying network is a complete graph.
For sparse networks whose underlying graph is not complete, the bihomogeneous B\'ezout bound is not sharp,
as the equations are sparse in the sense of Section~\ref{polyhedral-homotopy}.
Bernstein's Theorem~\ref{thm: Bernstein} is used in \cite{MehtaNguyenTuritsyn2016Numerical} to give a bound that depends upon the underlying
graph.
This is elegantly expressed in terms of the normalized volumes of symmetric edge polytopes.
To a connected graph $G$ we associate its \demph{symmetric edge polytope}, which is defined by
\begin{equation}\label{equ: AP bound}
\defcolor{\Delta_G}\ \vcentcolon=\ {\mathrm{conv}} \{ e_i - e_j \mid i\sim j \mbox{ in }G \}\,.
\end{equation}
For a network of $n{+}1$ oscillators this has dimension $n$.
Figure~\ref{F:SEP} shows symmetric edge polytopes for connected graphs on three vertices.
\begin{figure}[htb]
\centering
\begin{picture}(63,54)
\put(6.5,0.5){\includegraphics{pictures/SEP_triangle}}
\put(43,33){\small$K_3$}
\put(0,0){\small 1} \put(26,43){\small 2} \put(53,0){\small 3}
\end{picture}
%
\begin{picture}(63,54)
\put(6.5,0.5){\includegraphics{pictures/SEP_tree}}
\put(43,33){\small$T_3$}
\put(0,0){\small 1} \put(26,43){\small 2} \put(53,0){\small 3}
\end{picture}
%
\
%
\begin{picture}(77,54)(-2,0)
\put(0,0){\includegraphics{pictures/SEP_hex}}
\put(54,42){\small$\Delta_{K_3}$}
\end{picture}
%
%
\begin{picture}(77,54)(-2,0)
\put(0,0){\includegraphics{pictures/SEP_pgram}}
\put(54,42){\small$\Delta_{T_3}$}
\end{picture}
%
\
%
\begin{picture}(96,54)(-2,0)
\put(0,0){\includegraphics{pictures/SEP_coords}}
\put(63,8){\small $e_1{-}e_2$}
\put(49,40){\small $e_1{-}e_3$}
\put(-2,46){\small $e_2{-}e_3$}
\end{picture}
%
\caption{Connected graphs on three vertices, their symmetric edge polytopes, and the coordinates.}
\label{F:SEP}
\end{figure}
A result in~\cite{ChenKorchevskaiaLindberg2022Typical} is that for a connected graph $G$ and generic parameters,
there are exactly $n!{\mathrm{vol}}(\Delta_G)$ complex equilibria.
We may see the bound of six from \cite{BaillieulByrnes1982Geometric} for the network $K_3$ in Figure~\ref{F:SEP}; the hexagon $\Delta_{K_3}$
is composed of six primitive triangles.
This symmetric edge polytope $\Delta_G$ is quite natural and has been independently studied in geometry and number
theory~\cite{DAliDelucchiMichalek2022Many}.
Table \ref{tab: kuramoto root count} shows examples of the numbers of complex equilibria obtained through this connection.
\begin{table}[htb]
\caption{Known results for the generic and maximum complex equilibria of \eqref{equ: generalized kuramoto}}
\label{tab: kuramoto root count}
\small
\centering
\begin{tabular}{ll}\toprule
A tree network with $n+1$ nodes \cite{ChenDavisMehta2018Counting} & $2^{n}$
\\ \midrule
A cycle network with $n+1$ nodes \cite{ChenDavisMehta2018Counting} & $ (n+1) \binom{n}{ \lfloor n / 2 \rfloor } $
\\ \midrule
Cycles of lengths
$2m_1, \ldots, 2m_k$ joined along an edge \cite{DAliDelucchiMichalek2022Many} &
$\frac{1}{2^{k-1}} \prod_{i=1}^k m_i \binom{2m_i}{m_i}$
\\ \midrule
Cycles of lengths $2m_1{+}1$ and $2m_2{+}1$ joined along an edge \cite{DAliDelucchiMichalek2022Many} &
$(m_1 {+} m_2 {+} 2m_1m_2) \binom{2m_1}{m_1} \binom{2m_2}{m_2}$
\\ \midrule
A wheel graph with $n+1$ nodes for odd $n$ \cite{DAliDelucchiMichalek2022Many}&
$(1-\sqrt{3})^{n} + (1+\sqrt{3})^{n}$
\\ \midrule
A wheel graph with $n+1$ nodes for even $n$ \cite{DAliDelucchiMichalek2022Many}&
$(1-\sqrt{3})^{n} + (1+\sqrt{3})^{n} - 2$
\\
\bottomrule
\end{tabular}
\end{table}
Finding exact formulas for other families of networks remains an active topic.
For tree and cycle networks, it is possible for all complex equilibria to be real.
It is still unknown whether the same holds for other families of networks.
\subsection{Numerical nonlinear algebra in enumerative geometry}\label{ssec:enum}
Paraphrasing Schubert \cite{Schubert1879}, enumerative geometry is the art of counting geometric figures satisfying conditions imposed by
other, fixed, geometric figures.
Traditionally, these counting problems are solved by understanding the structure of the space of figures we are to count well enough to
construct their cohomology or Chow rings~\cite{Fulton}, where the computations are carried out.
Numerical nonlinear algebra allows us to actually compute the solutions to a given instance of an enumerative problem and then glean
information about the problem that is not attainable by other means.
While the polyhedral homotopy of Section~\ref{polyhedral-homotopy} based on Bernstein's Theorem may be viewed as a numerical homotopy method
to solve a class of enumerative problems,
perhaps the first systematic foray in this direction was in~\cite{NSC_HSS} by Sturmfels and coauthors, who exploited structures in the
Grassmannian to give three homotopy methods for solving simple Schubert problems.
These are enumerative problems that ask for the $k$-planes in $\mathbb{C}^n$ that meet a collection of linear subspaces non-trivially, such as
finding all (462) 3-planes in $\mathbb{C}^7$ that meet twelve 4-planes~\cite{Sch1886b}.
Their number may be computed using Pieri's formula.
The Pieri homotopy algorithm from~\cite{NSC_HSS} was later used~\cite{Galois_LS} to study Galois groups in Schubert calculus.
This included showing that a particular Schubert problem with 17589 solutions had Galois group the full symmetric group $S_{17589}$.
One of the most famous and historically important enumerative problems is the problem of five conics:
How many plane conics are simultaneously tangent to five given plane conics?
This was posed by Steiner~\cite{St1848}, who gave the answer $7776$.
Briefly, a conic $ax^2+bxy+cy^2+dxz+eyz+fz^2=0$ in $\mathbb{P}^2$ is given by the point $[a,b,c,d,e,f]$ in $\mathbb{P}^5$, and the condition to be tangent
to a given conic is a sextic in $a,b,\dotsc,f$.
By B\'ezout's Theorem, Steiner expected $6^5=7776$.
The only problem with this approach is that every ``doubled-line conic'', one of the form $(\alpha x+\beta y+\gamma z)^2$, is tangent to
every conic, and thus the B\'ezout count of $7776$ includes a contribution from the doubled-line conics.
Chasles~\cite{Ch1864} essentially introduced the Chow ring of smooth conics to give the correct answer of $3264$~\cite{Kl80}.
Fulton~\cite[p.~55]{Fu96} asked how many of the 3264 conics could be real, later determining that all can be real, but he did not publish
his argument.
His argument involves deforming an asymmetric pentagonal arrangement of lines and points to prove
the {\it existence} of five real conics having all 3264 tangent conics real.
Ronga, Tognoli, and Vust~\cite{RTV} published a different proof of existence via a delicate local computation near a symmetric arrangement
that had 102 tangent conics, each of multiplicity 32.
Fulton's argument is sketched in~\cite[Ch.~9]{IHP}, and Sturmfels and coauthors wrote a delightful article
``3264 conics in a second''~\cite{3264} in which they give an explicit instance of five real conics with 3264 real tangent conics, together with a proof
by certification with numerical nonlinear algebra, as in Section~\ref{sec:cert}.
Figure~\ref{F:3264} shows a picture.
\begin{figure}[htb]
\centering
\includegraphics[height=220pt]{pictures/3264}
\caption{3264 real conics tangent to five (nearly degenerate) conics.}
\label{F:3264}
\end{figure}
\subsection{Computer vision}\label{ssec:comp_vision}
Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world.
Several types of problems in computer vision are amenable to algebraic computational methods.
We shall focus on one type---minimal problems---and one method---homotopy continuation.
Minimal problems are the backbone of the \demph{structure from motion} pipeline that is used for three-dimensional (3D) reconstruction in
applications from medical imaging to autonomous vehicles.
The chapter ``Snapshot of Algebraic Vision'' in this volume treats other types of problems and other algebraic methods, including symbolic
computation.
All problems amenable to this algebraic analysis share a purely geometric problem at their core.
For computer vision, this often begins with basic projective geometry.
We consider the projective space $\mathbb{P}^3$ as the 3D world, as it compactifies the Euclidean space $\mathbb{R}^3$.
A mathematical model of a pin-hole camera \defcolor{$C$} is a projective linear map given by a matrix
\[
C\ \vcentcolon=\ [R\mid t]\,, \quad R\in\mathbb{R}^{3\times 3}\ \text{ and }\ t \in \mathbb{R}^{3\times 1}\,,
\]
which captures the images of world points in the \demph{image plane} $\mathbb{P}^2$.
A \demph{calibrated} camera $C$ has $R \in \mathrm{SO}(3)$.
While this is formulated in the $\mathbb{P}^3$ compactifying $\mathbb{R}^3$, for computations we extend scalars to the complex
numbers.
We may also interpret a calibrated camera as an element of the special Euclidean group $\mathrm{SE}(3)$ acting on $\mathbb{R}^3$,
the rotation $R$ followed by the translation $t$.
It is convenient to operate in a fixed affine chart on $\mathbb{P}^3$ and consider the (affine) camera plane as a plane of points with the third
coordinate equal to $1$ in $\mathbb{R}^3$ (the local affine coordinates of the camera).
The image of a point is obtained by intersecting the image plane with the line through the point and the center of the camera.
Figure~\ref{fig:5-points} illustrates this as well as the definition of \demph{depth}.
\begin{figure}[htb]
\centering
\begin{picture}(250,192)
\put(0,0){\includegraphics[width=240pt]{pictures/hand-5pts.png}}
\put(180,152){\small$\begin{pmatrix}\lambda x\\\lambda y\\ \lambda \end{pmatrix}$}
\put( 64,48){\small$\begin{pmatrix}x\\y\\1\end{pmatrix}$}
\end{picture}
\caption{\emph{5pt problem}: given images of five points in two views, find the relative pose $[R\mid t]$ of the two cameras
with $t= (t_1,t_2,1)^T$.}
\label{fig:5-points}
\end{figure}
The relative pose of the two cameras is encoded by an element $[R\mid t]$ of the special Euclidean group $\mathrm{SE}(3)$.
The third coordinate of $t$ is set equal to 1 to remove a scaling ambiguity.
In the (3D) coordinate frame of the first camera the image of the tip of the little finger lies in the camera plane and the actual tip is
the point obtained by scaling this by the \demph{depth} $\lambda$.
\subsubsection{Minimal problems}
A reconstruction problem is \demph{minimal} if it has a finite number of (complex) solutions for general values of the parameters.
As in Sections~\ref{SS:parameterContinuation} and~\ref{SS:Monodromy}, a minimal problem gives rise to a
branched cover $\pi\colon\mathcal{M}\to\mathcal{P}$, where the base space $\mathcal{P}$ is the \demph{problem space} and the total space $\mathcal{M}$ (an incidence
variety) is the \demph{problem-solution} manifold.
The number of solutions, which is the degree of the branched cover, is the \demph{degree} of the problem.
\begin{example}\label{ex:3-points}
A classical minimal problem is computing the calibrated camera~\cite{HZ-2003} from three points in space and their image projections.
A classical formulation~\cite{Grunert-1841} is as a system of three quadratic polynomial equations
\begin{eqnarray*}
\|X_1-X_2\|^2\ =\ \|\lambda_1 x_1-\lambda_2 x_2\|^2 \\
\|X_2-X_3\|^2\ =\ \|\lambda_2 x_2-\lambda_3 x_3\|^2 \\
\|X_3-X_1\|^2\ =\ \|\lambda_3 x_3-\lambda_1 x_1\|^2
\end{eqnarray*}
in three unknown \demph{depths} $\lambda_1, \lambda_2,\lambda_3$.
The parameters are the three ($i=1,2,3$) world points $X_i \in \mathbb{R}^3$ and the points $x_i = (x_{i1},x_{i2},1)^T \in \mathbb{R}^3$
representing their three images in $\mathbb{P}^2$.
This formulation implicitly uses that $R$ is orthogonal and preserves the norm.
Recovery of the camera $C=[R\mid t]$ from the depths is an exercise in linear algebra.
This problem has degree eight.
That is, for generic $X_i$ and $x_i$, $i=1,2,3$, the system has eight complex solutions~\cite{DBLP:conf/issac/FaugereMRD08}.
In practice, there are fewer solutions with positive depths $\lambda_i$.
This gives a branched cover of degree eight over the problem space $\mathcal{P}\cong\mathbb{C}^{15}$.\hfill$\diamond$
\end{example}
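As a sanity check on this formulation (synthetic data with a known camera, not a solver), the following Python sketch generates a calibrated camera and three world points, forms the corresponding depths and image points, and verifies Grunert's three quadratic equations.
\begin{verbatim}
# Sketch: verify Grunert's equations on synthetic data (not a P3P solver).
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q      # a random rotation R in SO(3)
t = rng.normal(size=3)

P = rng.uniform(-1, 1, size=(3, 3))        # three points in the camera frame,
P[:, 2] += 5                               # pushed to positive depth
X = (P - t) @ R                            # world points: X_i = R^T (P_i - t)

lam = P[:, 2]                              # the depths lambda_i
x = P / lam[:, None]                       # image points x_i = (x_{i1}, x_{i2}, 1)

for i, j in [(0, 1), (1, 2), (2, 0)]:
    lhs = np.linalg.norm(X[i] - X[j]) ** 2
    rhs = np.linalg.norm(lam[i] * x[i] - lam[j] * x[j]) ** 2
    print(np.isclose(lhs, rhs))            # True: the rotation preserves norms
\end{verbatim}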
We formulate perhaps the most consequential of all 3D reconstruction problems.
\begin{example}
\label{ex:5-points}
The \demph{5pt problem} of computing the relative pose of two calibrated cameras from 5 point correspondences in two images is featured in
Figure~\ref{fig:5-points}.
Consider (paired) images $x_i = (x_{i1},x_{i2},1)^T$, $y_i = (y_{i1},y_{i2},1)^T$ and depths $\lambda_i$ and $\mu_i$, where $i=1,\dots,5$,
in the first and second cameras, respectively.
Write down all, or sufficiently many, of the $\binom{5}{2}$ same-distance equations
\[
\|\lambda_{i} x_{i}-\lambda_{j} x_{j}\|^2\ =\ \|\mu_{i} y_{i}-\mu_{j} y_{j}\|^2, \quad(1 \leq i < j \leq 5)\,,
\]
between the image points, and one same-orientation equation
\begin{multline*}
\qquad\det[
\lambda_{1} x_{1}-\lambda_{2} x_{2}
\mid
\lambda_{1} x_{1}-\lambda_{3} x_{3}
\mid
\lambda_{1} x_{1}-\lambda_{4} x_{4}
]\\
=\ \det[
\mu_{1} y_{1}-\mu_{2} y_{2}
\mid
\mu_{1} y_{1}-\mu_{3} y_{3}
\mid
\mu_{1} y_{1}-\mu_{4} y_{4}
]\,.\qquad
\end{multline*}
These determinants are the signed volume of the same tetrahedron (formed by the world points $X_1,\dotsc,X_4$ in different coordinate
frames).
The equality of the volumes is implied by the same-distance equations but not the equality of signs.
Fix one depth, $\lambda_{1} = 1$, to fix the ambiguity in scaling.
This gives a system of equations in the remaining nine unknown depths $\lambda$ and $\mu$.
The solutions are thus vectors of non-fixed depths in $\mathbb{C}^{9}$, while the image coordinates give the problem space $\mathcal{P}$.
The projection from the problem-solution manifold (the incidence variety) to $\mathcal{P}$ is a branched cover of degree $20$.\hfill$\diamond$
\end{example}
As in Section~\ref{SS:Monodromy}, we may analyze the Galois group of the branched cover.
Decomposing a monodromy group of a minimal problem as shown in \cite{duff2022galois} may lead to an easier 3D reconstruction.
The classical \demph{epipolar geometry} approach to the 5pt problem~\cite[Sect.~9]{HZ-2003} is realized in this way.
This gives a two-stage procedure for the relative pose recovery with the essential stage being a problem of degree $10$.
\subsubsection{Engineering meets mathematics}
The 5pt problem of Example~\ref{ex:5-points} plays a practical role in solvers for geometric optimization problems in vision based on
RANSAC~\cite{Fischler-Bolles-ACM-1981,Raguram-USAC-PAMI-2013}.
This problem has many practical solutions based on or inspired by Gr\"obner basis techniques that also use the epipolar geometry
formulation~\cite{Nister-5pt-PAMI-2004}.
Recently, homotopy continuation has found practical use for minimal problems whose degrees are too high for efficient symbolic computation.
The first step toward practical fast computation was a special solver MINUS~\cite{MINUSwww} based on Macaulay2 core C++ code for homotopy
continuation and optimized for performance on modern hardware.
Featured in~\cite{TRPLP}, it is deployed on two minimal problems of degrees 216 and 312 involving point as well as line correspondences.
This computes \emph{all} complex solutions and uses postprocessing to filter out irrelevant solutions.
Unlike~\cite{TRPLP}, the work in~\cite{Hard-problems-CVPR2022} combines a neural network classifier with homotopy continuation.
Rather than compute all solutions and then pick a relevant one, it follows only one continuation path (over $\mathbb{R}$).
That strategy finds the relevant solution with high probability in several practical scenarios.
It is comparable to state-of-the-art algorithms for the 5pt problem and exceeds their performance on a 4pt problem.
While matching four points in three calibrated views is not a minimal problem, there is a relaxation of degree 272 that is minimal, and the
solution of the relaxation may be verified by using the original (overdetermined) formulation.
\bibliographystyle{amsplain}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\section{Introduction}
\label{sec:introduction}
Let $X$ be a complex projective $K3$ surface and let $T\colon X\to X$ be an automorphism with positive topological entropy $h>0$.
Thanks to foundational results of Cantat \cite{Cantat}, there are closed positive currents $\eta_{\pm}$ which satisfy
\begin{equation*}
T^*\eta_{\pm}=e^{\pm h}\eta_{\pm},
\end{equation*}
and are normalized so that their cohomology classes satisfy $[\eta_+]\cdot[\eta_-]=1$. The classes $[\eta_{\pm}]$ belong to the boundary of the ample cone of $X$ and have vanishing self-intersection.
These eigencurrents have H\"older continuous local potentials \cite{DS}, and their wedge product $\mu=\eta_+\wedge\eta_-$ is well-defined by Bedford--Taylor theory, and is the unique $T$-invariant probability measure with maximal entropy.
When $(X,T)$ is not a Kummer example, it was shown by Cantat-Dupont \cite{CD} (with a new proof by the authors \cite{FT2} that also covers the nonprojective case) that $\mu$ is singular with respect to the Lebesgue measure $\dVol$; therefore there exists a Borel set of zero Lebesgue measure carrying the entire mass of $\mu$.
The authors conjectured (see \cite[Conjecture 7.3]{Tosatti_survey}) that the topological support $\supp \mu$ should nonetheless be equal to all of $X$, see also Cantat's \cite[Question 3.4]{Cantat2018_Automorphisms-and-dynamics:-a-list-of-open-problems}.
If this were true, it would also imply the same for each of the currents: $\supp \eta_{\pm}=X$.
In \cite{FilipTosatti2021_Canonical-currents-and-heights-for-K3-surfaces} the authors showed that, under mild assumptions on $X$, the eigencurrents $\eta_{\pm}$ fit into a continuous family of closed positive currents with continuous local potentials whose cohomology classes sweep out the boundary of the ample cone, perhaps after blowing up the boundary at the rational rays.
We called the corresponding closed positive currents the \emph{canonical currents}.
It is then natural to wonder whether \emph{all} such canonical currents are fully supported on $X$.
In this note we show that this is in fact not the case.
Namely, we show in \autoref{thm:gaps_in_the_support_of_canonical_currents} below:
\begin{theoremintro}[Gaps in the support]
\label{thm_intro:gaps_in_the_support}
There exists a projective K3 surface $X$ of type $(2,2,2)$, and an uncountable dense $F_{\sigma}$ set of rays $F\subset \partial \Amp(X)$ in the boundary of its ample cone, such that for every $f\in F$ the topological support of the unique canonical current $\eta_f$ is not all of $X$.
\end{theoremintro}
\noindent Note that because the rank of the Picard group of a very general K3 surface of type $(2,2,2)$ is $3$, there is no need to blow up the rational directions on the boundary.
Moreover, the canonical currents in the rational directions (i.e. those where the ray spanned by $f$ intersects $H^2(X,\mathbb{Q})$ nontrivially) have full support, see \autoref{rmk:avoidance_of_parabolic_points}.
The above result can be strengthened to show that there exist K3 surfaces defined over $\bR$, such that the supports of certain canonical currents are disjoint from the (nonempty) real locus (see \autoref{thm:full_gaps_in_the_real_locus}):
\begin{theoremintro}[Full gaps in the real locus]
\label{thm_intro:full_gaps_real_locus}
There exists a projective K3 surface $X$ of type $(2,2,2)$ defined over $\bR$ with $X(\bR)\neq \emptyset$, and an uncountable dense $F_{\sigma}$ set of rays $F\subset \partial \Amp(X)$ in the boundary of its ample cone, such that for every $f\in F$ the topological support of the unique canonical current $\eta_f$ is disjoint from $X(\bR)$.
\end{theoremintro}
In the examples we construct, $X(\bR)$ is homeomorphic to a $2$-sphere.
McMullen \cite[Thm.~1.1]{McMullen2002_Dynamics-on-K3-surfaces:-Salem-numbers-and-Siegel-disks} constructed \emph{nonprojective} K3 surfaces with automorphisms whose eigencurrents $\eta_{\pm}$ are not fully supported.
In fact, his examples have a Siegel disc: an invariant neighborhood of a fixed point on which the dynamics is holomorphically conjugate to a rotation, and where $\eta_{\pm}$ thus vanish.
Let us also note that Moncet \cite[Thm.~A]{Moncet2013_Sur-la-dynamique-des-diffeomorphismes-birationnels-des-surfaces-algebriques-reelles:-ensemble} constructed a birational automorphism of a rational surface $X$ defined over $\bR$, with positive dynamical degree and Fatou set containing $X(\bR)$.
Despite our \autoref{thm_intro:gaps_in_the_support} above, we do maintain hope that on projective K3 surfaces, the measure of maximal entropy (and therefore also $\eta_{\pm}$) is fully supported.
\subsubsection*{Acknowledgments}
\label{sssec:acknowledgements}
We are grateful to Roland Roeder for conversations on his work with Rebelo \cite{RebeloRoeder2021_Dynamics-of-groups-of-birational-automorphisms-of-cubic-surfaces-and-Fatou/Julia} that inspired this note, and to Serge Cantat for detailed feedback that improved our exposition.
We are also grateful to Serge Cantat for suggesting to combine our methods with an example of Moncet that led to \autoref{thm_intro:full_gaps_real_locus}.
This research was partially conducted during the period the first-named author served as a Clay Research Fellow, and was partially supported by the National Science Foundation under Grant No. DMS-2005470. The second-named author was partially supported by NSF grant DMS-2231783, and part of this work was conducted during his visit to the Center for Mathematical Sciences and Applications at Harvard University, which he would like to thank for the hospitality. This note is dedicated to the memory of Nessim Sibony, a dear colleague and friend, whose contributions to holomorphic dynamics and several complex variables remain an inspiration to us. He is greatly missed.
\section{Gaps in the support of canonical currents}
\label{sec:gaps_in_the_support_of_canonical_currents}
\paragraph{Outline}
We recall some constructions and estimates, originally based on an idea of Ghys \cite{Ghys1993_Sur-les-groupes-engendres-par-des-diffeomorphismes-proches-de-lidentite}, itself inspired by the Zassenhaus lemma on commutators of small elements in Lie groups.
In brief, the idea is that if two germs of holomorphic maps near the origin are close to the identity, then their commutator is even closer, and the estimates are strong enough to allow for an iteration argument.
The precise estimates that we need for \autoref{thm_intro:gaps_in_the_support} are contained in \autoref{prop:fixed_points_with_small_derivative}, and we follow Rebelo--Roeder \cite{RebeloRoeder2021_Dynamics-of-groups-of-birational-automorphisms-of-cubic-surfaces-and-Fatou/Julia} to establish the needed bounds.
We then recall some basic facts concerning the geometry of K3 surfaces in \autoref{ssec:2_2_2_surfaces_and_canonical_currents}, and establish the existence of gaps in the support of some of their canonical currents in \autoref{ssec:an_example_with_slow_commutators}.
\subsection{Commutator estimates}
\label{ssec:commutator_estimates}
In this section we introduce notation and collect some results, stated and proved by Rebelo--Roeder \cite{RebeloRoeder2021_Dynamics-of-groups-of-birational-automorphisms-of-cubic-surfaces-and-Fatou/Julia} but which have also been known and used in earlier contexts, e.g. by Ghys \cite{Ghys1993_Sur-les-groupes-engendres-par-des-diffeomorphismes-proches-de-lidentite}.
The results are concerned with commutators of germs of holomorphic maps in a neighborhood of $0\in \bC^d$.
\subsubsection{Derived series and commutators}
\label{sssec:derived_series}
Fix a set $S$, whose elements we regard as formal symbols which can be juxtaposed to form words.
Assume that $S$ is equipped with a fixed-point-free involution $s\mapsto s^{-1}$, i.e. any element has a unique corresponding ``inverse'' in the set.
Define the ``derived series'' of sets by
\[
S^{(0)}:= S\,, \qquad S^{(n+1)}:= \left[S^{(n)},S^{(n)}\right]
\]
where $[A,B]$ denotes the set of commutators $[a,b]:=aba^{-1}b^{-1}$ with $a\in A,b\in B$, and we omit the trivial commutators $[a,a^{-1}]$.
Denote the disjoint union by $S^{\bullet}:=\coprod_{n\geq 0} S^{(n)}$.
We will use the same notation in the case of a pseudogroup.
We also collect the next elementary result:
\begin{proposition}[Fast ramification]
\label{prop:fast_ramification}
Let $F_k$ denote the free group on $k$ generators $a_1,\ldots, a_k$.
Set $S^{(0)}:=\{a_1,\ldots, a_k,a_1^{-1},\ldots, a_k^{-1}\}$.
Then the ${\binom{k}{2}}$ elements $[a_i,a_j]\in S^{(1)}$ with $i<j$, generate a free subgroup of rank $\binom{k}{2}$ inside $F_k$.
\end{proposition}
\begin{proof}
Observe that $[a_i,a_j]^{-1}=[a_j,a_i]$.
Therefore, it suffices to check that any word of the form
\[
[a_{i_1},a_{j_1}]\cdots [a_{i_l},a_{j_l}] \cdots [a_{i_N},a_{j_N}]
\]
is never trivial, subject to the condition that for any $l$ the consecutive pairs don't obviously cancel out.
Equivalently, we assume that for any $l$ either $a_{i_l}\neq a_{j_{l+1}}$ or $a_{j_l}\neq a_{i_{l+1}}$.
But this can be immediately verified by writing out the expression in the generators $a_{\bullet}$, and observing that the only cancellations can occur if $a_{j_l}=a_{i_{l+1}}$.
However, the next cancellation is excluded by assumption, so the reduced word has at least $4N-2(N-1)=2N+2$ letters and is nonempty.
\end{proof}
Later on, we will apply this proposition iteratively, starting with $k\geq 4$, an inequality which is preserved by $k\mapsto {\binom{k}{2}}$.
\subsubsection{Pseudogroup of transformations}
\label{sssec:pseudo_groups_of_transformations}
Let $B_0(\ve)\subset \bC^d$ denote the ball of radius $\ve>0$ centered at the origin in $\bC^d$.
Let $\gamma_1,\ldots, \gamma_k$ be injective holomorphic maps $\gamma_i\colon B_0(\ve)\to \bC^d$, which are thus biholomorphisms onto their ranges $\cR_{\gamma_i}:=\gamma_i(B_0(\ve))$.
Let $S$ denote the set with $2k$ symbols $\gamma_1,\ldots,\gamma_k,\gamma_1^{-1},\ldots,\gamma_k^{-1}$.
With $S^{\bullet}$ as in \autoref{sssec:derived_series}, assign to any element $\gamma\in S^{\bullet}$, whenever possible, the holomorphic map also denoted by $\gamma\colon \cD_{\gamma}\to \cR_{\gamma}$ with open sets $\cD_{\gamma},\cR_{\gamma}\subset \bC^d$ by expressing $\gamma$ in reduced form in the letters from $S$, and shrinking the domains/ranges according to the word.
For certain elements $\gamma$, these might well be empty sets.
Denote by $\id$ the identity transformation and by $\norm{f}_{C^0(K)}$ the supremum norm of a function or map $f$ on a set $K$.
\begin{theorem}[Common domain of definition]
\label{thm:common_domain_of_definition}
For any given $0<\ve\leq 1$, if
\[
\norm{\gamma_i^{\pm 1}-\id}_{C^0(B_{0}(\ve))} \leq \frac{\ve}{32},
\text{ for }i=1,\ldots,k
\]
then for every $n\geq 0$ and every $\gamma\in S^{(n)}$, its domain $\cD_{\gamma}$ contains $B_0(\ve/2)$ and furthermore it satisfies
\[
\norm{\gamma-\id}_{C^0(B_0(\ve/2))} \leq \frac{\ve}{2^n\cdot 32}.
\]
\end{theorem}
\noindent This result is proved as in \cite[Prop.~7.1]{RebeloRoeder2021_Dynamics-of-groups-of-birational-automorphisms-of-cubic-surfaces-and-Fatou/Julia} or \cite[Prop.~3.1]{Rebelo_Reis}, which state it for $k=2$.
Indeed, the estimates in the proof only involve the estimates on the ``seed'' transformations $\gamma_i$, and not their cardinality. We include the proof for the reader's convenience.
\begin{proof}We will show by induction on $n\geq 0$ that for every $\gamma\in S^{(n)}$ its domain $\cD_{\gamma}$ contains $B_0(\ve_n)$ where
$$\ve_n:=\ve-\frac{\ve}{4}\sum_{j=0}^{n-1}2^{-j}\geq \frac{\ve}{2},$$
and that
$$\norm{\gamma-\id}_{C^0(B_0(\ve_n))} \leq \frac{\ve}{2^n\cdot 32}.$$
The base case $n=0$ is obvious, and for the induction step the key result that we need is the following improvement \cite[Lemma 3.0]{Loray_Rebelo} of a result of Ghys \cite[Prop.~2.1]{Ghys1993_Sur-les-groupes-engendres-par-des-diffeomorphismes-proches-de-lidentite}: given constants $0<r,\delta,\tau<1$ with $4\delta+\tau<r$, if $f,g:B_0(r)\to \mathbb{C}^d$ are two injective holomorphic maps which satisfy
\begin{equation}\label{a}
\|f-\id\|_{C^0(B_{0}(r))}\leq \delta, \quad \|g-\id\|_{C^0(B_{0}(r))}\leq \delta,
\end{equation}
then their commutator $[f,g]$ is defined on $B_0(r-4\delta-\tau)$ and satisfies
\begin{equation}\label{b}
\|[f,g]-\id\|_{C^0(B_{0}(r-4\delta-\tau))}\leq \frac{2}{\tau}\|f-\id\|_{C^0(B_{0}(r))}\|g-\id\|_{C^0(B_{0}(r))}.
\end{equation}
We use this to prove the case $n+1$ of the induction by taking $$r:=\ve_n,\quad \delta:=\frac{\ve}{2^n\cdot 32},\quad \tau:=\frac{\ve}{2^n\cdot 8},$$
and applying it to two arbitrary $f,g\in S^{(n)}$. These satisfy \eqref{a} by induction hypothesis, and so $[f,g]$ is defined on the ball centered at the origin of radius
$$\ve_n-4\frac{\ve}{2^n\cdot 32}-\frac{\ve}{2^n\cdot 8}=\ve_{n+1},$$
and by \eqref{b} it satisfies
$$\|[f,g]-\id\|_{C^0(B_{0}(\ve_{n+1}))}\leq\frac{2}{\tau}\delta^2=\frac{\delta}{2}=\frac{\ve}{2^{n+1}\cdot 32},$$
as desired.
\end{proof}
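The contraction phenomenon behind \autoref{thm:common_domain_of_definition} is already visible at the level of matrices.
The following Python sketch (a finite-dimensional illustration in the spirit of the Zassenhaus lemma, not the holomorphic setting of the theorem) shows that commutators of matrices that are $\ve$-close to the identity are quadratically closer to it.
\begin{verbatim}
# Sketch: commutators of near-identity matrices are much closer to the
# identity; the distance scales like eps**2 rather than eps.
import numpy as np

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
I = np.eye(3)

for eps in [1e-1, 1e-2, 1e-3]:
    A, B = I + eps * X, I + eps * Y
    C = A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)   # the commutator [A, B]
    print(eps, np.linalg.norm(A - I), np.linalg.norm(C - I))
\end{verbatim}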
The next result, appearing in \cite[Lemma~7.2]{RebeloRoeder2021_Dynamics-of-groups-of-birational-automorphisms-of-cubic-surfaces-and-Fatou/Julia}, will be useful in exhibiting explicit examples satisfying the assumptions of \autoref{thm:common_domain_of_definition}.
We will denote by $\id$ both the identity map and the identity matrix acting on $\bC^d$, and by $\norm{-}_{\rm Mat}$ the matrix norm on $d\times d$ matrices.
\begin{proposition}[Fixed points with small derivative]
\label{prop:fixed_points_with_small_derivative}
For any $0<\ve_0\leq 1$ and holomorphic map $\gamma:B_0(\ve_0)\to\bC^d$ satisfying
\[
\gamma(0)=0 \text{ and }\norm{D\gamma(0)-\id}_{\rm Mat} \leq \frac{1}{64},
\]
there exists $\ve_1$, depending on $\gamma$, with the following property.
For any $\ve\in (0,\ve_1)$, the map restricted to $B_0(\ve)$ satisfies
\begin{equation}\label{s}
\norm{\gamma -\id}_{C^0(B_0(\ve))}\leq \frac{\ve}{32}.
\end{equation}
\end{proposition}
\begin{proof}
For $0<\ve<\ve_1$ (where $\ve_1$ is to be determined), let $\Lambda_\ve(z_1,\dots,z_d)=(\ve z_1,\dots,\ve z_d)$ be the scaling map, and let $\gamma_\ve:=\Lambda_\ve^{-1}\circ\gamma\circ\Lambda_\ve$. This is a holomorphic map on $B_0(1)$ that satisfies
\[
\gamma_\ve(0)=0 \text{ and }\norm{D\gamma_\ve(0)-\id}_{\rm Mat} \leq \frac{1}{64}.
\]
An application of the Taylor formula gives
$$\norm{\gamma_\ve -\id}_{C^0(B_0(1))}\leq \norm{D\gamma_\ve(0)-\id}_{\rm Mat}+C_\gamma \ve\leq\frac{1}{64}+C_\gamma \ve,$$
for some constant $C_\gamma$ that depends on the size of the Hessian of $\gamma$. Thus, it suffices to choose $\ve_1=\frac{1}{64 C_\gamma},$ and we have
$$\norm{\gamma_\ve -\id}_{C^0(B_0(1))}\leq\frac{1}{32},$$
which is equivalent to \eqref{s}.
\end{proof}
\subsection{\texorpdfstring{$(2,2,2)$}{(2,2,2)}-surfaces and canonical currents}
\label{ssec:2_2_2_surfaces_and_canonical_currents}
For basic background on K3 surfaces, see \cite{BeauvilleBourguignonDemazure1985_Geometrie-des-surfaces-K3:-modules-et-periodes,Huybrechts2016_Lectures-on-K3-surfaces}, and for an introduction to complex automorphisms of K3 surfaces, see \cite{Filip_notes_K3}.
Our main examples, the $(2,2,2)$-surfaces, were first noted by Wehler \cite{Wehler1988_K3-surfaces-with-Picard-number-2}.
\subsubsection{Setup}
\label{sssec:setup_2_2_2_surfaces_and_canonical_currents}
We work over the complex numbers.
Consider the $3$-fold $(\bP^1)^3$, with its family of smooth anticanonical divisors given by degree $(2,2,2)$-surfaces, i.e. let $\cU\subset \bC^{27}$ denote the parameter space of coefficients of an equation
\[
\sum_{0\leq i,j,k\leq 2}c_{ijk}x^iy^j z^k = 0 \quad \text{ in }(\bA^1)^3
\]
that yield smooth surfaces when compactified in $(\bP^1)^{3}$.
We will call these $(2,2,2)$-surfaces.
We consider for simplicity the full set of equations, without identifying surfaces equivalent under the action of $(\PGL_2)^{3}$.
\begin{definition}[Strict $(2,2,2)$ example]
\label{def:strict_222_example}
We will say that a $(2,2,2)$-surface is \emph{strict} if the rank of its \Neron--Severi group (over $\bC$) is the minimal possible, i.e. $3$.
\end{definition}
Note that the non-strict $(2,2,2)$-surfaces form a countable, dense union of codimension-one subsets of $\cU$.
For strict $(2,2,2)$-surfaces, the \Neron--Severi group equipped with its intersection form is isometric to $\bR^{1,2}$ (after extension of scalars to $\bR$).
\subsubsection{Some recollections from topology}
\label{sssec:some_recollections_from_topology}
Recall that $F_\sigma$-sets are countable unions of closed sets, while $G_\delta$-sets are countable intersections of open ones.
It follows from standard results in the moduli theory of K3 surfaces that strict $(2,2,2)$-surfaces form a dense $G_\delta$-set in $\cU$, which in fact has full Lebesgue measure.
Indeed, parameters giving strict $(2,2,2)$-surfaces are the complement of countably many divisors in the full parameter space, see e.g. \cite{oguiso}.
\subsubsection{Involutions}
\label{sssec:involutions}
For any $u\in \cU$, denote the associated surface by $X_u\subset (\bP^1)^3$.
The projection onto one of the coordinate planes $X_u\to (\bP^1)^2$ is two-to-one and so $X_u$ admits an involution exchanging the two sheets.
Denote by $\sigma_x,\sigma_y,\sigma_z$ the three involutions obtained in this manner.
\subsubsection{Canonical currents}
\label{sssec:canonical_currents}
We can apply \cite[Thm.~1]{FilipTosatti2021_Canonical-currents-and-heights-for-K3-surfaces} to any strict $(2,2,2)$-surface $X_u$ with $u\in \cU$.
In that theorem a certain space $\partial^{\circ}\Amp_c(X_u)$ appears, which on strict $(2,2,2)$-surfaces reduces to the boundary of the ample cone $\partial\Amp(X_u)$, so it consists of nef cohomology classes $[\eta]\in \NS_\bR(X_u)\subset H^{1,1}(X_u)$ satisfying $[\eta]^2=0$.
Since $\NS_\bR(X_u)$ equipped with the intersection pairing is isometric to $\bR^{1,2}$, the space $\partial\Amp(X_u)$ is isomorphic to one component of the null-cone in this Minkowski space.
Note that in the general form of the result, one needs to replace the rational rays in $\partial\Amp(X_u)$ by their blowups; since in the case of a rank $3$ \Neron--Severi group it would mean blowing up rays on a surface, no extra points need to be added.
Next, \cite[Thm.~1]{FilipTosatti2021_Canonical-currents-and-heights-for-K3-surfaces} shows that each cohomology class $[\eta]\in \partial \Amp(X_u)$ has a canonical positive representative $\eta$, which additionally has $C^0$ potentials.
The representative is unique when the class is irrational, and a preferred representative in the rational (also called parabolic) classes exists that makes the entire family of currents continuous in the $C^0$-topology of the potentials for the currents.
We will show in \autoref{thm:gaps_in_the_support_of_canonical_currents} below that some of the canonical representatives do not have full support in $X_u$.
Specifically, we will show that there exists an open set $\cU_0\subset \cU$ and a dense $G_\delta$ set of $u\in \cU_0$ for which some of the canonical currents $\eta$ do not have full support in $X_u$.
But first, we will show that the set of cohomology classes $[\eta]$ for which gaps in the support are constructed contains, after projectivization, a closed uncountable set.
\subsubsection{Free subgroups of automorphisms}
\label{sssec:free_subgroups_of_automorphisms}
We will consider subgroups of automorphisms of $X_u$ freely generated by five elements.
Specifically, $\sigma_x,\sigma_y,\sigma_z$ generate a group $\Gamma_{\sigma}\subseteq\Aut(X_u)$ isomorphic to $\left(\bZ/2\right)*\left(\bZ/2\right)*\left(\bZ/2\right)$, in other words there are no relations between them except that $\sigma_{i}^2=\id$ for $i=x,y,z$.
This can be verified by considering the action on the hyperbolic space inside the \Neron--Severi group of $X_u$ (see for instance \cite[Prop.6.1]{Filip2019_Tropical-dynamics-of-area-preserving-maps} for the explicit matrices corresponding to the action in the upper half-space model).
\begin{proposition}[Free group on five generators]
\label{prop:free_group_on_five_generators}
Consider the surjective homomorphism $\Gamma_{\sigma}\onto (\bZ/2)^{\oplus 3}$ sending $\sigma_x,\sigma_y,\sigma_z$ to $(1,0,0),(0,1,0),(0,0,1)$ respectively.
Then its kernel $K_{\sigma}$ is a free group on five generators.
\end{proposition}
The above homomorphism corresponds to evaluating the derivatives of the transformations at their common fixed point, described in \autoref{sssec:fixed_point_and_derivatives}.
\begin{proof}
We will divide our analysis by looking at the homomorphisms $\Gamma_{\sigma}\onto (\bZ/2)^{\oplus 3}\onto \bZ/2$ where the last map sends each generator of a summand to the unique nonzero element.
Now the kernel of $\Gamma_{\sigma}\onto \bZ/2$ sending each $\sigma_i$ to $1\in \bZ/2$ is the free group on two letters, generated by $a:=\sigma_x\sigma_y$ and $b:=\sigma_y\sigma_z$.
Indeed this kernel is the fundamental group of the Riemann sphere with $3$ points removed.
Now $K_{\sigma}$ is contained with finite index in the free group on $a,b$, and is visibly given as the kernel of the surjection onto $(\bZ/2)^{\oplus 2}$ sending $a\mapsto (1,0)$ and $b\mapsto (0,1)$.
One can then work out the associated covering space and rank of free group, using the techniques in \cite[\S1.A]{Hatcher2002_Algebraic-topology}, and determine that $K_{\sigma}$ is a free group on $5$ generators.
Alternatively, the corresponding $(\bZ/2)^{\oplus 2}$-covering space of the triply punctured Riemann sphere can be visualized as a square-shaped ``pillowcase'' with four punctures at the corners, and two additional punctures in the center of the two faces.
The involutive automorphisms are rotations by $180^{\circ}$ through an axis that goes across two opposite punctures.
\end{proof}
\subsubsection{Largeness of the set of currents with gaps}
\label{sssec:largeness_of_the_set_currents_with_gaps}
To continue, we select five elements $\gamma_1,\gamma_2,\gamma_3,\gamma_4,\gamma_5\in K_{\sigma}$ that freely generate the group.
Next, the construction of \autoref{sssec:derived_series} applies with $S:=\{\gamma_1,\ldots,\gamma_5,\gamma_1^{-1},\ldots,\gamma_5^{-1}\}$ and yields a subset $S^{\bullet}\subset \Aut(X_u)$ consisting of iterated commutators.
Fix a \Kahler metric $\omega_0$ on $X_u$, with volume normalized to $[\omega_0]^2=1$, and let $\bH^2(X_u)$ denote the hyperbolic plane of all nef cohomology classes satisfying $[\omega]^2=1$.
\begin{proposition}[Uncountably many currents with gaps]
\label{prop:uncountably_many_currents_with_gaps}
The intersection of the closure of the set $S^{\bullet}\cdot [\omega_0]\subset \bH^2(X_u)$ with the boundary $\partial \bH^2(X_u)$ is an uncountable closed set.
\end{proposition}
\begin{proof}
That the set is closed follows from its definition.
To show that the set is uncountable, we will argue on the boundary of the free group on the five initial generators, and use that the natural map from the boundary of the free group to the boundary of the hyperbolic space is injective, except perhaps at the countably many parabolic points.
For this, let $\cT$ denote the Cayley graph of the free group on five generators; it is a $10$-valent infinite regular tree.
Define the sequence of finite subtrees $\cT_k$, where $\cT_0$ consists of the identity vertex, and $\cT_{k+1}$ is obtained from $\cT_k$ by connecting the leaves of $\cT_k$ with the elements in $S^{(k+1)}$.
From \autoref{prop:fast_ramification} it follows that the number of new edges added to the leaves at each step is at least $3$.
Therefore, the number of infinite paths starting at the origin in $\cT_{\infty}:=\cup_{k\geq 0}\cT_k$ is uncountable, and the claim follows.
\end{proof}
\subsection{An example with slow commutators}
\label{ssec:an_example_with_slow_commutators}
\subsubsection{Setup}
\label{sssec:setup_an_example_with_slow_commutators}
To show that the assumptions of \autoref{prop:fixed_points_with_small_derivative} are satisfied in practice, we start with an explicit equation:
\begin{align}
\label{eqn:simplest_example}
(1+x^2)(1+y^2)(1+z^2) + xyz = 1
\end{align}
Let us note that \autoref{eqn:simplest_example} determines a \emph{singular} $(2,2,2)$-surface, with a singularity at the origin $0\in \bC^3$.
We will construct an open set $\cU_0$ of smooth $(2,2,2)$-surfaces by taking perturbations of the above equation.
\subsubsection{Automorphisms of ambient space}
\label{sssec:automorphisms_of_ambient_space}
Let $u_0\in \bC^{27}$ denote the point corresponding to the choice of parameters as in \autoref{eqn:simplest_example}; it lies outside $\cU$, but any analytic neighborhood of $u_0$ in $\bC^{27}$ intersects $\cU$ in a nonempty open set.
We have three explicit involutions $\sigma_{u_0,x},\sigma_{u_0,y},\sigma_{u_0,z}$:
\[
\sigma_{u_0,x}(x,y,z) = \left(\frac{-yz}{(1+y^2)(1+z^2)} - x, y, z\right)
\]
and similarly for $\sigma_{u_0,y},\sigma_{u_0,z}$, which we view as holomorphic maps defined in a neighborhood of $0\in \bC^3$.
\subsubsection{Fixed point and derivatives}
\label{sssec:fixed_point_and_derivatives}
It is immediate from the explicit formulas that all three involutions preserve the point $0\in \bC^3$.
Furthermore, their derivatives at that point are matrices of order two:
\[
D\sigma_{u_0,x}(0,0,0) = \begin{bmatrix}
-1 & & \\
& 1 & \\
& & 1
\end{bmatrix}
\text{ and analogously for }\sigma_{u_0,y},\sigma_{u_0,z}.
\]
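These facts can also be checked symbolically.
The following Python/\texttt{sympy} sketch (an informal verification, not part of the argument) confirms that $\sigma_{u_0,x}$ preserves the polynomial defining \autoref{eqn:simplest_example}, is an involution, and has derivative $\mathrm{diag}(-1,1,1)$ at the origin.
\begin{verbatim}
# Sketch: symbolic checks for sigma_{u_0,x} on the singular surface
# (1+x^2)(1+y^2)(1+z^2) + xyz = 1.
import sympy as sp

x, y, z = sp.symbols('x y z')
F  = (1 + x**2) * (1 + y**2) * (1 + z**2) + x*y*z - 1
sx = (-y*z / ((1 + y**2) * (1 + z**2)) - x, y, z)

# F is invariant: as a quadratic in x, sigma_x swaps its two roots
print(sp.simplify(F.subs(x, sx[0]) - F))               # 0

# sigma_x is an involution
print(sp.simplify(sx[0].subs(x, sx[0]) - x))           # 0

# derivative at the fixed point (0, 0, 0)
print(sp.Matrix(sx).jacobian([x, y, z]).subs({x: 0, y: 0, z: 0}))
\end{verbatim}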
We now consider $\sigma_{u,x},\sigma_{u,y},\sigma_{u,z}$ for $u\in \bC^{27}$ in a sufficiently small neighborhood of $u_0$.
Then we can regard the $\sigma$'s as holomorphic maps defined in a neighborhood of $0\in \bC^3$, preserving the intersection of $X_u$ with the fixed neighborhood.
We can now use these observations to establish:
\begin{theorem}[Gaps in the support of canonical currents]
\label{thm:gaps_in_the_support_of_canonical_currents}
There exists a nonempty open set $\cU_0$ in the analytic topology of smooth $(2,2,2)$-surfaces with the following property.
For each strict K3 surface $X_u$ with $u\in \cU_0$, there exists a dense $F_\sigma$-set of rays $F$ on the boundary of the ample cone of $X_u$ such that for any $[\eta]\in F$, the canonical current $\eta$ provided by \cite[Thm.~1]{FilipTosatti2021_Canonical-currents-and-heights-for-K3-surfaces} is supported on a proper closed subset of $X_u$.
Furthermore $F$ determines an uncountable set of rays.
\end{theorem}
\noindent By a ``ray'' we mean one orbit of the $\bR_{>0}$-action by scaling, so that the ``set of rays'' is the projectivization of $\partial \Amp(X_u)$.
It is implicit in the statement above that the set $F$ is disjoint from the countably many parabolic rays.
This is justified by \autoref{rmk:avoidance_of_parabolic_points} below.
\begin{proof}
We keep the notation as before the statement of the theorem and will consider $u\in \cU$ in a sufficiently small neighborhood of $u_0$.
Consider the subgroup $K_{\sigma}\subset \Aut(X_u)$ obtained by applying \autoref{prop:free_group_on_five_generators} to the group generated by the three involutions.
At the parameter $u_0$ all elements in $K_{\sigma}$ preserve the point with coordinate $(0,0,0)$ and have derivative equal to the identity there, see \autoref{sssec:fixed_point_and_derivatives}.
Fix now the five free generators $\gamma_{u,i} \in K_{\sigma}$ with $i=1,\ldots,5$, as per \autoref{prop:free_group_on_five_generators}.
Let $S^{(n)}$ denote the set of iterated commutators, as per \autoref{sssec:derived_series}.
\autoref{prop:uncountably_many_currents_with_gaps} yields for any strict $X_u$ an uncountable closed set $F_0\subset \partial \Amp(X_u)$ with the following property. Fixing $\omega_0$ a reference K\"ahler metric on $X_u$, for any $f\in F_0$ there exists a sequence $\{s_n\}$ of automorphisms of $X_u$, with $s_n\in S^{(n)}$, and a sequence of positive scalars $\lambda_n\to+\infty$ such that
\[ f = \lim_{n\to +\infty} \frac{1}{\lambda_n}(s_n)_{*}[\omega_0].\]
Note that $\lambda_n\to +\infty$ since the self-intersection of $(s_n)_*[\omega_0]$ is $1$, while the self-intersection of $f$ is zero.
Applying \cite[Thm.~4.2.2, pts. 4,5]{FilipTosatti2021_Canonical-currents-and-heights-for-K3-surfaces} then shows that in the weak sense of currents we have
\[
\eta_f = \lim_{n\to +\infty} \frac{1}{\lambda_n}(s_n)_{*}\omega_0,
\]
where $\eta_f$ is a canonical positive representative of the cohomology class $f$.
Furthermore, at this stage of the argument the cohomology class $f$ might be rational, but its canonical representative is in fact unique since we consider strict $(2,2,2)$-surfaces.
Nonetheless, see \autoref{rmk:avoidance_of_parabolic_points} below for why, in fact, this case does not occur.
\autoref{prop:fixed_points_with_small_derivative} applies to the finitely many generators $\gamma_{u_0,i}$, so \autoref{thm:common_domain_of_definition} applies to them as well on a fixed ball $B_0(\ve)$ around $0\in \bC^3$.
However, the assumptions of \autoref{thm:common_domain_of_definition} are stable under a small perturbation, so they hold for $\gamma_{u,i}$ for $u$ in a sufficiently small neighborhood of $u_0$.
Therefore, by \autoref{thm:common_domain_of_definition} \emph{all} the maps $s_n$ are within a bounded distance of the identity when restricted to $B_0(\ve/2)$.
However, the maps preserve the intersection of $X_u$ with $B_0(\ve/2)$ so the weak limit of $\frac{1}{\lambda_n}(s_n)_* \omega_0$ vanishes in $B_0(\ve/2)\cap X_u$.
We conclude that the support of $\eta_f$ avoids $B_0(\ve/2)$.
Finally, the action of $\Aut(X_u)$ on the (projectivized) boundary of the ample cone is minimal, i.e. every orbit is dense, and clearly the property of having a gap in the support is preserved by applying an automorphism.
It follows that the set $F:=\Aut(X_u)\cdot F_0$ is a dense $F_\sigma$-set with the required properties.
\end{proof}
\begin{remark}[Avoidance of parabolic points]
\label{rmk:avoidance_of_parabolic_points}
The set $F$ provided by \autoref{thm:gaps_in_the_support_of_canonical_currents} is disjoint from the countably many parabolic points.
The reason is that the canonical currents at the parabolic points have full support, since they are obtained as the pullback of currents from the base $\bP^1(\bC)$ of an elliptic fibration, but the corresponding currents on $\bP^1(\bC)$ have real-analytic potentials away from the finitely many points under the singular fibers.
The last assertion can be seen from following through the proof of \cite[Thm.~3.2.14]{FilipTosatti2021_Canonical-currents-and-heights-for-K3-surfaces} with real-analytic data.
\end{remark}
\begin{remark}[Zassenhausian points]
\label{rmk:zassenhausian_points}
Recall that relative to a lattice $\Gamma\subset \Isom(\bH^n)$ of isometries of a hyperbolic space, the boundary points in $\partial \bH^n$ can be called ``Liouvillian'' or ``Diophantine''.
Specifically, a Liouvillian point is one for which the geodesic ray with the point as its limit on the boundary makes very long excursions into the cusps of $\Gamma\backslash \bH^n$, while Diophantine points are ones for which the excursions into the cusps are controlled.
Both situations involve quantitative bounds.
The boundary points constructed using iterated commutators as in \autoref{sssec:derived_series}, with group elements lying deeper and deeper in the derived series of $\Gamma$, could then be called ``Zassenhausian''.
Note that in principle, geodesics with Zassenhausian boundary points will have good recurrence properties and will also be Diophantine.
It would be interesting to see if canonical currents corresponding to Liouvillian boundary points have full support or not.
\end{remark}
\subsection{An example with no support on the real locus}
\label{ssec:an_example_with_no_support_on_the_real_locus}
The above methods can be strengthened to construct an example of a current with no support on the real locus of a real projective K3 surface.
The starting point is a construction due to Moncet \cite[\S9.3]{Moncet2012_Geometrie-et-dynamique-sur-les-surfaces-algebriques-reelles}, who constructed real K3 surfaces with arbitrarily small entropy on the real locus.
We use some minor modifications for notational convenience, and emphasize that many different choices are possible for the initial singular real K3 surface.
Let us also note that these examples have a ``tropical'' analogue given by PL actions on the sphere, and the analogue of the finite-order action at the singular parameter corresponds to a finite order action by reflections on the cube, see \cite[\S6.2]{Filip2019_Tropical-dynamics-of-area-preserving-maps}.
\subsubsection{Setup}
\label{sssec:setup_an_example_with_no_support_on_the_real_locus}
Let $X_0$ denote the (singular) surface
\[
x^2 + y^2 + z^2 = 1
\]
compactified in $(\bP^1)^3$.
Its real locus $X_0(\bR)$ is a real $2$-dimensional sphere.
As before let $\cU\subset \bR^{27}$ be the subset of smooth $(2,2,2)$-surfaces, parametrized by the possible coefficients, and normalized such that the parameter $0\in \bR^{27}$ corresponds to $X_0$.
Note that $0\notin \cU$.
Let next $\cU'\subset \cU$ denote the subset of strict $(2,2,2)$-surfaces.
By the discussion in \autoref{sssec:some_recollections_from_topology} the set $\cU'$ is the complement of countably many divisors in $\cU$, and thus forms a dense $G_{\delta}$ set.
\begin{theorem}[Full gaps in the real locus]
\label{thm:full_gaps_in_the_real_locus}
There exists a nonempty open set $\cU_0\subset \cU\subset \bR^{27}$ in the analytic topology of smooth real $(2,2,2)$-surfaces with the following property.
For each strict K3 surface $X_u$ with $u\in \cU_0$, there exists a dense $F_\sigma$-set of rays $F$ on the boundary of the ample cone of $X_u$ such that for any $[\eta]\in F$, the support of the canonical current $\eta$ provided by \cite[Thm.~1]{FilipTosatti2021_Canonical-currents-and-heights-for-K3-surfaces} is disjoint from the real locus $X_u(\bR)$.
Furthermore $F$ determines an uncountable set of rays.
\end{theorem}
\subsubsection{Subgroup of slow automorphisms}
\label{sssec:subgroup_of_slow_automorphisms}
Let us first observe that the involution $\sigma_x$ acting on the surface $X_0$ in \autoref{sssec:setup_an_example_with_no_support_on_the_real_locus} is given by $\sigma_x(x,y,z)=(-x,y,z)$, and analogously for $\sigma_y,\sigma_z$.
Therefore, let $K_{\sigma}\subset \Gamma_{\sigma}$ be the group from \autoref{prop:free_group_on_five_generators} obtained as the kernel of this action; it is a free group on five generators $\gamma_i$ and acts nontrivially on any smooth and strict $(2,2,2)$-surface.
Even for smooth surfaces $X_u\subset (\bP^1)^{3}$, we will be interested only in their intersection with the affine chart $\bC^{3}$, and specifically a neighborhood of $X_0(\bR)$.
We will thus restrict to the locus where no additional real components arise.
\subsubsection{Good cover}
\label{sssec:good_cover}
Choose a finite cover of $X_0(\bR)\subset \bR^3$ by open sets $V_i\subset \bC^3$ such that we have biholomorphisms $\phi_i\colon V_i \to B_0(1)\subset \bC^3$ to a ball of radius $1$ around $0$, and the preimages of the smaller balls $V_i':=\phi_i^{-1}\left(B_0(\tfrac 14)\right)$ still cover $X_0(\bR)$.
Choose now a sufficiently small open neighborhood of the origin $\cU_0\subset \bR^{27}$ such that the following property is satisfied: For each of the five generators $\gamma_j$ of $K_{\sigma}$ and their inverses, we have for every chart $V_i$ that $\gamma_{ij}':=\phi_i \circ \gamma_j \circ \phi^{-1}_i$ satisfies:
\[
\gamma_{ij}'\colon B_0\left(\tfrac 12\right) \to B_0(1) \text{ is well-defined and }
\norm{\gamma_{ij}'-\id}_{B_{0}\left(\tfrac 12\right)}\leq \tfrac{1}{64}.
\]
Require also that for any $u\in \cU_0$ the real locus $X_u(\bR)$ is nonempty and still covered by the sets $\{V_i'\}$.
\begin{proof}[Proof of \autoref{thm:full_gaps_in_the_real_locus}]
By \autoref{thm:common_domain_of_definition} all the commutators in the set $S^{(n)}$ as defined in \autoref{sssec:derived_series} are well-defined when conjugated to any of the charts $\phi_i$, and furthermore their distance to the identity transformation goes to zero as $n\to +\infty$.
As in the proof of \autoref{thm:gaps_in_the_support_of_canonical_currents}, let $s_n\in S^{(n)}$ be any sequence of such commutators such that the cohomology class $\tfrac{1}{\lambda_n}(s_n)_*[\omega_0]$ converges to some class $f$.
Then the canonical current $\eta_f$ has no support in the neighborhoods $V_i'$.
Since these still cover $X_u(\bR)$ for $u\in \cU_0$, the result follows.
\end{proof}
\bibliographystyle{sfilip_bibstyle}
|
{
"arxiv_id": "2302.08692",
"language": "en",
"timestamp": "2023-02-20T02:06:57",
"url": "https://arxiv.org/abs/2302.08692",
"yymm": "2302"
} |
\section{Introduction}
\label{sec:introduction}
Since the dawn of optimization, much effort has gone into developing algorithms which
use geometric information about the loss landscape to make optimization more
efficient and stable
\cite{nocedal_updating_1980, duchi_adaptive_2011, lewis_nonsmooth_2013}. In more modern machine
learning,
control of the large curvature eigenvalues of the loss landscape has been
a goal in and of itself
\cite{hochreiter_flat_1997, chaudhari_entropysgd_2019}. There is empirical
and theoretical evidence that
controlling curvature of the training landscape leads to benefits for generalization
\cite{keskar_largebatch_2017, neyshabur_exploring_2017},
although in general the relationship between the two is
complex \cite{dinh_sharp_2017}.
Recently the \emph{sharpness aware minimization} (SAM)
algorithm has emerged as a popular choice for
regularizing the curvature during training \cite{foret_sharpnessaware_2022}. SAM has the
advantage of being a tractable first-order method; for the cost of a single extra
gradient evaluation, SAM can control the large eigenvalues of the loss Hessian
and often leads to improved optimization and generalization \cite{bahri_sharpnessaware_2022}.
However, understanding the mechanisms behind the effectiveness of SAM is an open question. The SAM algorithm itself
is a first-order approximation of SGD on a modified loss function
$\tilde{\mathcal{L}}(\th) = \max_{||\delta\th||<\rho}\mathcal{L}(\th+\delta\th)$. Part of the original
motivation was that $\tilde{\mathcal{L}}$ explicitly penalizes sharp minima over flatter ones. However, the approximation
performs as well as or better than running gradient descent on $\tilde{\mathcal{L}}$ directly.
SAM often works better with small batch sizes than with larger ones
\cite{foret_sharpnessaware_2022, andriushchenko_understanding_2022}.
These stochastic effects suggest that studying the deterministic gradient flow dynamics on $\tilde{\mathcal{L}}$
will not capture key features of SAM, since small batch size induces non-trivial
differences from gradient flow \cite{paquette_sgd_2021}.
In parallel to the development of SAM, experimental and theoretical work has uncovered
some of the curvature-controlling properties of first-order methods due to finite step
size - particularly in the full batch setting. At intermediate learning rates, a wide
variety of models and optimizers show a tendency for the largest Hessian eigenvalues
to stabilize near the \emph{edge of stability} (EOS) for long times
\cite{lewkowycz_large_2020, cohen_gradient_2022, cohen_adaptive_2022}. The EOS is the
largest Hessian eigenvalue for which gradient descent would still converge on a quadratic loss landscape.
This effect can be explained in terms of a non-linear feedback between the
large eigenvalue and changes in the parameters in that eigendirection
\cite{ damian_selfstabilization_2022, agarwala_secondorder_2022}.
We will show that these two areas of research are in fact intimately linked: under
a variety of conditions, SAM displays a modified EOS behavior, which leads to stabilization
of the largest eigenvalues at a lower magnitude via non-linear, discrete dynamics.
These effects highlight the dynamical nature of eigenvalue regularization, and demonstrate
that SAM can have strong effects throughout a training trajectory.
\subsection{Related work}
Previous experimental work suggested that decreasing batch size causes
SAM to display
both stronger regularization and better generalization
\cite{andriushchenko_understanding_2022}. This analysis also suggested
that SAM may induce more sparsity.
A recent theoretical approach studied SAM close to a minimum, where the trajectory
oscillates about the minima and provably decreases
the largest eigenvalue \cite{bartlett_dynamics_2022}. A contemporaneous approach
studied the SAM algorithm in the limit of small learning rate and SAM radius, and quantified
how the implicit and explicit regularization of SAM differs between full batch and
batch size $1$ dynamics \cite{wen_how_2023}.
\subsection{Our contributions}
In contrast to other theoretical approaches, we study the behavior of SAM far from minima. Our high-level finding is
that SAM regularizes the eigenvalues throughout training via a dynamical phenomenon, so that an analysis restricted
to the neighborhood of convergence cannot capture the full picture. In particular, in simplified models we
show:
\begin{itemize}
\setlength\itemsep{0em}
\item Near initialization, full batch SAM provides limited suppression of large eigenvalues (Theorem
\ref{thm:j_dyn_gd}).
\item SAM induces a modified edge of stability (EOS) (Theorem \ref{thm:eos_SAM}).
\item For full batch training, the largest eigenvalues stabilize at the SAM-EOS, at a smaller
value than pure gradient descent (Section \ref{sec:basic_experiments}).
\item As batch size decreases, the effect of SAM is stronger and the
dynamics is no longer controlled by the Hessian alone (Theorem \ref{thm:zj_dyn_sgd}).
\end{itemize}
We then present experimental results on realistic models which show:
\begin{itemize}
\item The SAM-EOS predicts the largest eigenvalue for WideResnet 28-10 on CIFAR10.
\end{itemize}
Taken together, our results suggest that SAM can operate throughout the learning trajectory,
far from minima,
and that it can use non-linear, discrete dynamical effects to stabilize
large curvatures of the loss function.
\section{Quadratic regression model}
\label{sec:quad_model}
\subsection{Basic model}
We consider a \emph{quadratic regression model} \cite{agarwala_secondorder_2022}
which extends a linear regression
model to second order in the parameters. Given a $\P$-dimensional parameter vector $\th$,
the $D$-dimensional output is given by $\m{f}(\th)$:
\begin{equation}
\m{f}(\th) = \m{y}+\m{G}^\top\th+\frac{1}{2}\tens{Q}(\th, \th)\,.
\end{equation}
Here, $\m{y}$ is a $D$-dimensional vector,
$\m{G}$ is a $D\times\P$-dimensional matrix, and $\tens{Q}$ is a $D\times\P\times\P$-
dimensional
tensor symmetric in the last two indices - that is, $\tens{Q}(\cdot, \cdot)$ takes two
$\P$-dimensional vectors as input, and outputs a $D$-dimensional vector $\tens{Q}(\th, \th)_\alpha = \th^\top \tens{Q}_{\alpha} \th$.
If $\tens{Q} = \boldsymbol{0}$, the model corresponds to linear regression. $\m{y}$, $\m{G}$, and $\tens{Q}$
are all fixed at initialization.
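As a concrete illustration (a minimal sketch for exposition, not code from the experiments; the sizes and random draws are arbitrary), the model and its Jacobian can be instantiated directly. For dimensional clarity we store the $D\times\P$ matrix that multiplies $\th$ as \texttt{G}, so that the Jacobian below is \texttt{G} plus the contraction of \texttt{Q} with \texttt{theta}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D, P = 10, 20                            # arbitrary sizes for illustration
y = rng.normal(size=D)
G = rng.normal(size=(D, P))              # D x P matrix multiplying theta
Q = rng.normal(size=(D, P, P))
Q = 0.5 * (Q + np.swapaxes(Q, 1, 2))     # symmetric in the last two indices

def f(theta):
    # f(theta) = y + G theta + 0.5 * Q(theta, theta)
    return y + G @ theta + 0.5 * np.einsum('apq,p,q->a', Q, theta, theta)

def jacobian(theta):
    # J(theta) = G + Q(theta, .), a D x P matrix
    return G + np.einsum('apq,p->aq', Q, theta)
\end{verbatim}
Setting \texttt{Q} to zero recovers a linear regression model, as noted above.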
Consider optimizing the model under a squared loss. More
concretely, let $\m{y}_{tr}$ be a $D$-dimensional vector of
training targets. We focus on the MSE loss
\begin{equation}
\mathcal{L}(\th) = \frac{1}{2}||\m{f}(\th)-\m{y}_{tr}||^{2}
\end{equation}
We can write the dynamics in terms of the residuals $\m{z}$ and the Jacobian
$\m{J}$ defined by
\begin{equation}
\m{z} \equiv \m{f}(\th)-\m{y}_{tr},~\m{J} \equiv \frac{\partial\m{f}}{\partial\th} = \m{G}+\tens{Q}(\th,\cdot)\,.
\end{equation}
The loss can be written as $\mathcal{L}(\th) = \frac{1}{2}\m{z}\cdot\m{z}$.
The full batch gradient descent (GD) dynamics of the parameters are given by
\begin{equation}
\th_{t+1} = \th_{t}-\alpha\m{J}^{\top}_{t}\m{z}_{t}
\end{equation}
which leads to
\begin{equation}
\begin{split}
\m{z}_{t+1}-\m{z}_{t} & = -\alpha\m{J}_{t} \m{J}_{t}^{\top}\m{z}_{t} +\frac{1}{2}\alpha^2 \tens{Q}(\m{J}_{t}^{\top}\m{z}_{t},\m{J}_{t}^{\top}\m{z}_{t})\\
\m{J}_{t+1} -\m{J}_{t} & = -\alpha \tens{Q}(\m{J}_{t}^{\top}\m{z}_{t}, \cdot)\,.
\label{eq:GD_ingeneral}
\end{split}
\end{equation}
The $D\times D$-dimensional matrix $\m{J}\J^{\top}$ is known as the
\emph{neural tangent kernel} (NTK) \cite{jacot_neural_2018},
and controls the dynamics for small $\alpha||\m{J}^{\top}\m{z}||$ \cite{lee_wide_2019}.
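For reference, the update in Equation \ref{eq:GD_ingeneral} can be transcribed directly into code (again an illustrative sketch, with $\tens{Q}$ stored as a $D\times\P\times\P$ array as in the snippet above):
\begin{verbatim}
import numpy as np

def gd_step_zJ(z, J, Q, alpha):
    """One full-batch GD step written in (z, J) coordinates."""
    g = J.T @ z                                        # parameter-space gradient J^T z
    z_new = (z - alpha * (J @ g)
             + 0.5 * alpha**2 * np.einsum('apq,p,q->a', Q, g, g))
    J_new = J - alpha * np.einsum('apq,p->aq', Q, g)   # -alpha * Q(J^T z, .)
    return z_new, J_new
\end{verbatim}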
We now consider the dynamics of un-normalized SAM \cite{andriushchenko_understanding_2022}.
That is, given a loss function
$\mathcal{L}$ we study the update rule
\begin{equation}
\th_{t+1}-\th_{t} = -\alpha\nabla\mathcal{L}(\th_{t}+\rho\nabla\mathcal{L}(\th_{t}))
\label{eq:unnorm_sam}
\end{equation}
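Written out as code, one step of Equation \ref{eq:unnorm_sam} reads as follows (an illustrative sketch, not the training code used in the experiments; \texttt{grad\_loss} is a placeholder for an arbitrary gradient oracle):
\begin{verbatim}
import numpy as np

def unnormalized_sam_step(theta, grad_loss, alpha, rho):
    """theta <- theta - alpha * grad L(theta + rho * grad L(theta))."""
    g_inner = grad_loss(theta)                           # gradient at the current iterate
    return theta - alpha * grad_loss(theta + rho * g_inner)
\end{verbatim}
The standard (normalized) SAM algorithm would instead perturb by $\rho\nabla\mathcal{L}/\|\nabla\mathcal{L}\|$; the un-normalized variant above is the one analyzed here.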
We are particularly interested
in small learning rate and small SAM radius. The dynamics in $\m{z}-\m{J}$ space
are given by
\begin{equation}
\begin{split}
\m{z}_{t+1}-\m{z}_{t} & = -\alpha\m{J}\J^{\top}(1+\rho\m{J}\J^{\top})\m{z}-\alpha\rho \m{z}\cdot\tens{Q}(\m{J}^{\top}\m{z}, \m{J}^{\top}\cdot)\\
&+\alpha^2\frac{1}{2}\tens{Q}(\m{J}^{\top}\m{z}, \m{J}^{\top}\m{z})+O(\alpha\rho(\alpha+\rho)||\m{z}||^{2})
\end{split}
\label{eq:z_dyn_expansion}
\end{equation}
\begin{equation}
\begin{split}
\m{J}_{t+1}-\m{J}_{t} & = -\alpha\left[\tens{Q}((1+\rho\m{J}^{\top}\m{J})\m{J}^{\top}\m{z}, \cdot)+\right.\\
&\left.\rho\tens{Q}(\m{z}\cdot\tens{Q}(\m{J}^{\top}\m{z}, \cdot), \cdot)\right]+O(\alpha\rho^2||\m{z}||^{2})
\end{split}
\label{eq:J_dyn_expansion}
\end{equation}
to lowest order in $\alpha$ and $\rho$.
From Equation \ref{eq:z_dyn_expansion} we see that for small $\alpha||\m{z}||$ and $\rho||\m{z}||$,
the dynamics of $\m{z}$ is controlled by the modified NTK $(1+\rho\m{J}\J^{\top})\m{J}\J^{\top}$.
The factor $1+\rho\m{J}\J^{\top}$ shows up in the dynamics of $\m{J}$ as well, and we will show
that this effective NTK can lead to dynamical stabilization of large eigenvalues. Note that when $\rho=0$, these dynamics coincide with those of gradient descent.
\subsection{Gradient descent theory}
\subsubsection{Eigenvalue dynamics at initialization}
A basic question is: how does SAM affect the eigenvalues of the NTK?
We can study this directly for early learning
dynamics by using random initializations. We have the following theorem
(proof in Appendix \ref{app:quad_average}):
\begin{theorem}
\label{thm:j_dyn_gd}
Consider a second-order regression model, with $\tens{Q}$ initialized randomly with
i.i.d. components with $0$ mean and variance $1$.
For a model trained with full batch gradient descent, with unnormalized SAM,
the change in $\m{J}$ at the
first step of the dynamics, averaged over $\tens{Q}$ is
\begin{equation}
\expect_{\tens{Q}}[\m{J}_{1} -\m{J}_{0}] = -\rho\alpha\P\m{z}_{0}\m{z}_{0}^{\top}\m{J}_{0}+O(\rho^2\alpha^2||\m{z}_{0}||^2)+O(\alpha^3||\m{z}_{0}||^3)
\end{equation}
The change in the $\alpha$th
singular value $\sigma_{\alpha}$ of $\m{J}_{0}$, associated
with left and right singular vectors
$\m{w}_{\alpha}$ and $\v_{\alpha}$, can be approximated as
\begin{equation}
\begin{split}
&(\sigma_{\alpha})_{1}-(\sigma_{\alpha})_{0} =
\m{w}_{\alpha}^{\top}\expect_{\tens{Q}}[\m{J}_{1} -\m{J}_{0}]\v_{\alpha} +O(\alpha^2)\\
& = -\rho\alpha\P (\m{z}_{0}\cdot\m{w}_{\alpha})^{2}\sigma_{\alpha}+O(\alpha^2)
\end{split}
\end{equation}
for small $\alpha$.
\end{theorem}
Note that the singular vector $\m{w}_{\alpha}$ is an eigenvector of $\m{J}\J^{\tpose}$
associated with the eigenvalue $\sigma_{\alpha}^{2}$.
This analysis suggests that on average, at early times, the change in the
singular value is negative. However, the change also depends linearly
on $(\m{w}_{\alpha}\cdot\m{z}_{0})^{2}$. This suggests that if the component of $\m{z}$
in the direction of the singular vector becomes small, the stabilizing effect of SAM
becomes small as well. For large batch size/small learning rate with MSE loss,
we in fact expect $\m{z}\cdot\m{w}_{\alpha}$ to decrease rapidly early in training
\cite{cohen_gradient_2022, agarwala_secondorder_2022}.
Therefore the relative regularizing effect can be \emph{weaker} for larger
modes in the GD setting.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{plots/sam_eos_schematic.pdf}
\caption{Schematic of SAM-modified EOS. Gradient descent decreases loss
until a high-curvature area is reached, where large eigenmode is non-linearly
stabilized (orange, solid). SAM causes stabilization to happen earlier, at a smaller
value of the curvature (green, dashed).}
\label{fig:eos_schematic}
\end{figure}
\begin{figure*}[t]
\centering
\begin{tabular}{c c c}
\includegraphics[width=0.3\linewidth]{plots/lam_max_small_lr} &
\includegraphics[width=0.3\linewidth]{plots/lam_max_large_lr} &
\includegraphics[width=0.3\linewidth]{plots/lam_max_eff_large_lr}
\end{tabular}
\caption{Trajectories of largest eigenvalue $\lambda_{max}$ of $\m{J}\J^{\top}$
for quadratic regression model, $5$ independent initializations.
For gradient descent with small learning rate ($\alpha = 3\cdot10^{-3}$),
SAM ($\rho = 4\cdot10^{-2}$) does not regularize the large NTK eigenvalues (left).
For larger learning rate ($\alpha = 8\cdot10^{-2}$), SAM controls large eigenvalues (middle).
Largest eigenvalue can be predicted by SAM edge of stability $\alpha(\lambda_{max}+\rho\lambda_{max}^{2}) = 2$ (right).}
\label{fig:sam_quad_model}
\end{figure*}
\subsubsection{Edge of stability and SAM}
One of the most dramatic consequences of SAM for full batch training
is the shift of the \emph{edge of stability}.
We begin by reviewing the EOS phenomenology.
Consider full-batch gradient descent
training with respect to a twice-differentiable loss. Near a minimum of the loss,
the dynamics of the displacement $\m{x}$ from the minimum (in parameter space)
are well-approximated by
\begin{equation}
\m{x}_{t+1}-\m{x}_{t} = -\alpha\H\m{x}_{t}
\end{equation}
where $\H$ is the positive semi-definite Hessian at the minimum
$\m{x} = 0$. The dynamics converges
exponentially iff the largest eigenvalue of $\H$ is bounded by
$\alpha\lambda_{max} < 2$. We refer to $\alpha\lambda_{max}$ as the
\emph{normalized eigenvalue}.
Otherwise, there is at least one component
of $\m{x}$ which is non-decreasing. The value $2/\alpha$ is often referred to as the
\emph{edge of stability} (EOS) for the dynamics.
Previous work has shown that for many non-linear models,
there is a range of learning rates
where the largest eigenvalue of the Hessian stabilizes
around the edge of stability \cite{cohen_gradient_2022}.
Equivalent phenomenology exists for other gradient-based methods \cite{cohen_adaptive_2022}.
The stabilization effect is due to feedback between the largest curvature eigenvalue and
the displacement in the largest eigendirection \cite{agarwala_secondorder_2022, damian_selfstabilization_2022}. For MSE loss, EOS behavior occurs for the large
NTK eigenvalues as well \cite{agarwala_secondorder_2022}.
We will show that SAM also induces
an EOS stabilization effect, but at a smaller eigenvalue than GD.
We can understand the shift intuitively by analyzing
un-normalized SAM on a loss $\frac{1}{2}\m{x}^{\top}\H\m{x}$. Direct
calculation gives the update rule:
\begin{equation}
\m{x}_{t+1}-\m{x}_{t} = -\alpha(\H+\rho\H^{2})\m{x}_{t}
\label{eq:low_order_dyn}
\end{equation}
For positive definite $\H$, $\m{x}_{t}$ converges exponentially to $0$
iff $\alpha(\lambda_{max}+\rho\lambda_{max}^2)<2$.
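As a quick sanity check (a toy example for illustration, not one of the experiments reported below), the same largest eigenvalue can be stable for gradient descent yet unstable for un-normalized SAM under the update rule in Equation \ref{eq:low_order_dyn}:
\begin{verbatim}
import numpy as np

alpha, rho = 0.05, 0.05
H = np.diag([30.0, 1.0])                 # lambda_max = 30
# GD:  alpha * lam_max = 1.5 < 2                          -> converges
# SAM: alpha * (lam_max + rho * lam_max**2) = 3.75 > 2    -> diverges
for label, M in [("GD", H), ("SAM", H + rho * H @ H)]:
    x = np.ones(2)
    for _ in range(200):
        x = x - alpha * (M @ x)
    print(label, np.linalg.norm(x))      # GD: ~3.5e-5, SAM: ~1e88
\end{verbatim}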
Recall from Section 2.1 that the SAM NTK is $(1 +\rho \m{J}\J^{\top})\m{J}\J^{\top}> \m{J}\J^{\top}$.
This suggests that
$\alpha(\lambda_{max}+\rho\lambda_{max}^2)$ is the \emph{SAM normalized eigenvalue}.
This bound gives a critical $\lambda_{max}$ which is smaller than that
in the GD case. This leads to the hypothesis that SAM can cause
a stabilization at the EOS in a flatter region of the loss, as schematically
illustrated in Figure \ref{fig:eos_schematic}.
We can formalize the \emph{SAM edge of stability} (SAM EOS) for any differentiable model
trained on MSE loss. Equation
\ref{eq:z_dyn_expansion} suggests the matrix
$\m{J}\J^{\top}(1+\rho\m{J}\J^{\top})$ - which has larger eigenvalues for
larger $\rho$ - controls the low-order
dynamics. We can formalize this intuition in the following theorem
(proof in Appendix \ref{app:sam_eos_proof}):
\begin{theorem}
\label{thm:eos_SAM}
Consider a $\mathcal{C}^{\infty}$ model $\m{f}(\th)$
trained using Equation \ref{eq:unnorm_sam} with MSE loss. Suppose that there
exists a point $\th^*$ where $\m{z}(\th^*) = 0$. Suppose that for some
${\epsilon}>0$, we have the lower bound
${\epsilon} < \alpha\lambda_{i}(1+\rho\lambda_{i})$ for the eigenvalues of the
positive definite symmetric matrix
$\m{J}(\th^*)\m{J}(\th^*)^{\top}$. Given a bound on the largest eigenvalue,
there are two regimes:
\textbf{Convergent regime.} If $\alpha\lambda_{i}(1+\rho\lambda_{i})<2-{\epsilon}$ for all
eigenvalues $\lambda_{i}$ of $\m{J}(\th^*)\m{J}(\th^*)^{\top}$, there exists a neighborhood
$U$ of $\th^*$ such that $\lim_{t\to\infty}\m{z}_{t} = 0$ with exponential convergence for
any trajectory initialized at $\th_{0}\in U$.
\textbf{Divergent regime.} If $\alpha\lambda_{i}(1+\rho\lambda_{i})>2+{\epsilon}$ for some eigenvalue
$\lambda_{i}$ of $\m{J}(\th^*)\m{J}(\th^*)^{\top}$, then there exists some $q_{min}$
such that for any $q<q_{min}$, given $B_{q}(\th^*)$, the ball of radius $q$
around $\th^*$,
there exists some initialization $\th_{0}\in B_{q}(\th^*)$ such that the trajectory
$\{\th_{t}\}$ leaves $B_{q}(\th^*)$ at some time $t$.
\end{theorem}
Note that the theorem is proven for the NTK eigenvalues, which also show
EOS behavior for MSE loss in the GD setting \cite{agarwala_secondorder_2022}.
This theorem gives us the modified edge of stability
condition:
\begin{equation}
\alpha \lambda_{max}(1+\rho\lambda_{max}) \approx 2
\label{eq:eos_sam_con}
\end{equation}
For larger $\rho$, a smaller $\lambda_{max}$ is needed to meet the edge
of stability condition.
In terms of the
normalized eigenvalue $\tilde{\lambda} = \alpha\lambda$, the modified EOS can be written as
$\tilde{\lambda}(1+r\tilde{\lambda}) = 2$ with the ratio $r = \rho/\alpha$.
Larger values of $r$ lead to stronger regularization effects, and
for the quadratic regression model specifically
$\alpha$ can be factored out leaving $r$ as
the key dimensionless parameter (Appendix \ref{app:rescaled_dynamics}).
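When comparing against measured eigenvalues (as in Figure \ref{fig:sam_quad_model}, right), it is convenient to solve Equation \ref{eq:eos_sam_con} for $\lambda_{max}$ explicitly. A small helper for this (an illustrative sketch using the quadratic formula) is:
\begin{verbatim}
import numpy as np

def sam_eos_lambda_max(alpha, rho):
    """Positive root of alpha * lam * (1 + rho * lam) = 2."""
    if rho == 0:
        return 2.0 / alpha                  # ordinary GD edge of stability
    return (-1.0 + np.sqrt(1.0 + 8.0 * rho / alpha)) / (2.0 * rho)

print(sam_eos_lambda_max(alpha=8e-2, rho=0.0))    # 25.0  (GD EOS)
print(sam_eos_lambda_max(alpha=8e-2, rho=4e-2))   # ~15.5 (SAM EOS)
\end{verbatim}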
\subsection{SGD theory}
It has been noted that the effects of SAM have a strong dependence on
batch size \cite{andriushchenko_understanding_2022}. While a full analysis of
SGD is beyond the scope of this work, we can see
some evidence of stronger regularization for SGD in the quadratic regression
model.
Consider SGD dynamics, where a random fraction $\beta = B/D$ of the training
residuals $\m{z}$ are used to generate the dynamics at each step. We can represent
the sampling at each step with a random projection matrix $\m{P}_{t}$, and replacing
all instances of $\m{z}_{t}$ with $\m{P}_{t}\m{z}_{t}$. Under these dynamics, we can
can prove the following:
\begin{theorem}
\label{thm:zj_dyn_sgd}
Consider a second-order regression model, with $\tens{Q}$ initialized randomly with
i.i.d. components with $0$ mean and variance $1$.
For a model trained with SGD, sampling $B$ datapoints independently
at each step, the change in $\m{z}$ and $\m{J}$ at the first step, averaged
over $\tens{Q}$ and the sampling matrix $\m{P}_{t}$, is given by
\begin{equation}
\begin{split}
&\expect[\m{z}_{1}-\m{z}_{0}]_{\tens{Q},\m{P}} = -\alpha\beta\m{J}_{0}\m{J}_{0}^{\top}(1+\rho[\beta(\m{J}_{0}\m{J}_{0}^{\top})\\
&+(1-\beta){\rm diag}(\m{J}_{0}\m{J}_{0}^{\top})])\m{z}_{0}+O(\alpha^2||\m{z}||^{2})+O(D^{-1})
\end{split}
\end{equation}
\begin{equation}
\begin{split}
& \expect_{\tens{Q},\m{P}}[\m{J}_{1} -\m{J}_{0}] = -\rho\alpha\P (\beta^{2}\m{z}_{0}\m{z}_{0}^{\top}+\beta(1-\beta){\rm diag}(\m{z}_{0}\m{z}_{0}^{\top})) \m{J}_{0}\\
&+O(\rho^2\alpha^2||\m{z}||^2)+O(\alpha^3||\m{z}||^{3})
\end{split}
\end{equation}
where $\beta \equiv B/D$ is the batch fraction.
\end{theorem}
The calculations are detailed in Appendix \ref{app:quad_average}.
This suggests that there are two possible sources of increased regularization for
SGD: the first being the additional terms proportional to $\beta(1-\beta)$.
In addition to the fact that $\beta(1-\beta)>\beta^2$ for $\beta<\frac{1}{2}$, we have
\begin{equation}
\v_{\alpha}^{\top}{\rm diag}(\m{z}_{0}\m{z}_{0}^{\top}) \m{J}_{0}\m{w}_{\alpha} = \sigma_{\alpha} (\v_{\alpha}\circ\m{z}_{0})\cdot(\v_{\alpha}\circ\m{z}_{0})
\end{equation}
for left and right singular vectors $\v_{\alpha}$ and $\m{w}_{\alpha}$ of $\m{J}_{0}$, where
$\circ$ is the Hadamard (elementwise) product. This term can be large even if
$\v_{\alpha}$ and $\m{z}_{t}$ have small dot product.
This is in contrast to
$\beta^2(\v_{\alpha}\cdot\m{z}_{0})^{2}$, which is small if $\m{z}_{0}$ does not have a large
component in the $\v_{\alpha}$ direction. This suggests that at short times, where
the large eigenmodes decay quickly, the SGD term can still be large.
Additionally, the projection of $\m{z}$ onto the
largest eigenmode itself decreases more
slowly in the SGD setting \cite{paquette_sgd_2021}, which also suggests stronger
early-time regularization for small batch size.
\begin{figure}[t]
\centering
\includegraphics[height=0.7\linewidth]{plots/cifar2_eigs}
\caption{Largest eigenvalues of $\m{J}\J^{\top}$ for a fully-connected network trained using
MSE loss on 2-class CIFAR. For gradient descent ($\alpha = 4\cdot10^{-3}$) largest eigenvalue
stabilizes according to the GD EOS $\alpha\lambda_{max} = 2$ (solid line, blue). SAM ($\rho = 10^{-2}$)
stabilizes to a
lower value (dashed line, blue), which is well-predicted by the SAM EOS
$\alpha(\lambda_{max}+\rho\lambda_{max}^2) = 2$ (dashed line, orange).}
\label{fig:cifar2_eigs}
\end{figure}
\section{Experiments on basic models}
\label{sec:basic_experiments}
\subsection{Quadratic regression model}
We can explore the effects of SAM and show the SAM EOS behavior
via numerical experiments on the quadratic
regression model. We use the update rule in Equation \ref{eq:unnorm_sam}, working
directly in $\m{z}$ and $\m{J}$ space as in \cite{agarwala_secondorder_2022}. Experimental details can be found in
Appendix \ref{app:quad_model_numerics}.
For small learning rates, we
see that SAM does not reduce the large eigenvalues of $\m{J}\J^{\top}$ in the dynamics
(Figure \ref{fig:sam_quad_model}, left). In fact in some cases the final eigenvalue
is \emph{larger} with SAM turned on. The projection onto
the largest eigenmodes of $\m{J}\J^{\top}$ decreases exponentially to $0$ more quickly than
any other mode; as suggested by Theorem \ref{thm:j_dyn_gd}, this leads to only a small
decreasing pressure from SAM. The primary dynamics of the large eigenvalues is due
to the progressive sharpening phenomenology studied in
\cite{agarwala_secondorder_2022}, which tends to increase
the eigenmodes.
However, for larger learning rates, SAM has a strong suppressing effect on the
largest eigenvalues (Figure \ref{fig:sam_quad_model}, middle). The overall dynamics are
more non-linear than in the small learning rate case. The eigenvalues stabilize at the
modified EOS boundary $\alpha(\lambda_{max}+\rho\lambda_{max}^2) = 2$
(Figure \ref{fig:sam_quad_model}, right), suggesting non-linear stabilization
of the eigenvalues. In Appendix \ref{app:quad_model_numerics}
we conduct additional experiments which confirm that the boundary predicts
the largest eigenvalue for a range of $\rho$, and that consequently
generally increasing $\rho$
leads to decreased $\lambda_{max}$.
\subsection{CIFAR-$2$ with MSE loss}
We can see this phenomenology in more general non-linear models as well. We trained a
fully-connected network on the first $2$ classes of CIFAR with MSE loss, with both
full batch gradient descent and SAM.
We then computed the largest eigenvalues of $\m{J}\J^{\top}$ along the trajectory. We
can see that in both GD and SAM the largest eigenvalues stabilize, and the stabilization
threshold is smaller for SAM (Figure \ref{fig:cifar2_eigs}). The threshold is once again well predicted by
the SAM EOS.
\section{Connection to realistic models}
\begin{figure*}[t]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.31\linewidth]{plots/lam_max_mse.pdf} &
\includegraphics[width=0.31\linewidth]{plots/gd_eos_mse.pdf} &
\includegraphics[width=0.31\linewidth]{plots/sam_eos_mse.pdf}
\end{tabular}
\caption{Largest Hessian eigenvalues for CIFAR10 trained with MSE loss. Left: largest eigenvalues
increase at late times. Larger SAM radius mitigates eigenvalue increase.
Middle: eigenvalues normalized by learning rate decrease at late times,
and SGD shows edge of stability (EOS) behavior. Right: For larger $\rho$, SAM-normalized eigenvalues show modified EOS behavior.}
\label{fig:cifar10_mse_eos}
\end{figure*}
\label{sec:experiments}
In this section, we show that our analysis of quadratic models can bring insights
into the behavior of more realistic models.
\subsection{Setup}
{\bf Sharpness} For MSE loss, edge of stability dynamics can be shown in terms of either the NTK eigenvalues
\emph{or} the Hessian eigenvalues \cite{agarwala_secondorder_2022}. For more general loss
functions, EOS dynamics takes place with respect to the largest Hessian eigenvalues
\cite{cohen_gradient_2022, damian_selfstabilization_2022}.
Following these results and the analysis in Equation \ref{eq:low_order_dyn}, we chose
to measure the largest eigenvalue of the Hessian rather than the NTK.
We used a Lanczos method \cite{ghorbani_investigation_2019} to approximately compute
$\lambda_{max}$.
Any reference to $\lambda_{max}$ in this section refers to
eigenvalues computed in this way.
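As an illustration of this measurement (a sketch for exposition; the experiments used a dedicated Lanczos implementation following \cite{ghorbani_investigation_2019}), the largest Hessian eigenvalue can be estimated from a Hessian-vector-product oracle alone, for example with SciPy's Lanczos-based sparse eigensolver:
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def top_hessian_eigenvalue(hvp, dim):
    """Largest eigenvalue from a Hessian-vector-product oracle hvp: v -> H v."""
    op = LinearOperator((dim, dim), matvec=hvp)
    vals = eigsh(op, k=1, which='LA', return_eigenvectors=False)
    return float(vals[-1])

# Toy usage with an explicit matrix standing in for the Hessian
H = np.diag(np.arange(1.0, 11.0))
print(top_hessian_eigenvalue(lambda v: H @ v, dim=10))   # ~10.0
\end{verbatim}
In practice \texttt{hvp} would be built with automatic differentiation rather than an explicit Hessian.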
{\bf CIFAR-10} We conducted experiments on the popular CIFAR-10 dataset \citep{krizhevsky2009learning} using the WideResnet 28-10 architecture \citep{zagoruyko2016wide}. We report results for both the MSE loss and the cross-entropy loss. In the case of the MSE loss, we replace the softmax non-linearity with Tanh and rescale the one-hot labels ${\bf y}\in \{0,1\}$ to $\{-1, 1\}$. In both cases, the loss is averaged across the number of elements in the batch and the number of classes. For each setting, we report results for a single configuration of the learning rate $\eta$ and weight decay $\mu$ found from an initial cross-validation sweep. For MSE, we use $\eta=0.3, \mu=0.005$ and $\eta=0.4, \mu=0.005$ for cross-entropy. We use the cosine learning rate schedule \citep{loshchilov2016sgdr} and SGD instead of Nesterov momentum \citep{sutskever2013importance} to better match the theoretical setup. Despite the changes to the optimizer and the loss, the test error for the models remains in a reasonable range (4.4\% for SAM regularized models with MSE and 5.3\% with SGD). In accordance with the theory, we use unnormalized SAM in these experiments.
We keep all other hyper-parameters to the default values described in the original WideResnet paper.
\subsection{Results}
As shown in Figure \ref{fig:cifar10_mse_eos} (left), the maximum eigenvalue increases significantly throughout training for all approaches considered.
However, the normalized curvature
$\alpha\lambda_{max}$, which sets the edge of stability in GD,
remains relatively stable early on
in training when the learning rate is high, but necessarily decreases as the cosine schedule drives the
learning rate to $0$ (Figure \ref{fig:cifar10_mse_eos}, middle).
{\bf SAM radius drives curvature below GD EOS.}\quad As we increase the SAM radius, the largest
eigenvalue is more controlled (Figure \ref{fig:cifar10_mse_eos}, left) - falling below the
gradient descent
edge of stability (Figure \ref{fig:cifar10_mse_eos}, middle). The stabilizing effect
of SAM on the large eigenvalues is evident even early on in training.
{\bf Eigenvalues stabilize around SAM-EOS.}\quad If we instead plot the SAM-normalized eigenvalue
$\alpha(\lambda_{max}+\rho\lambda_{max}^{2})$, we see that the eigenvalues stay close to
(and often slightly above) the critical value of $2$, as predicted by theory
(Figure \ref{fig:cifar10_mse_eos}, right).
This suggests that there are settings where the control that SAM has
on the large eigenvalues of the Hessian comes, in part, from a modified
EOS stabilization effect.
\begin{figure*}[t]
\centering
\begin{tabular}{cc}
\includegraphics[height=0.35\linewidth]{plots/sam_sched_lam_max_mse.pdf} &
\includegraphics[height=0.35\linewidth]{plots/sam_sched_mse.pdf}
\end{tabular}
\caption{Maximum eigenvalues for CIFAR-10 model trained on MSE loss with a SAM
schedule. Starting out with SAM ($\rho = 0.05$, solid lines) and turning it off at $2500$
steps leads to initial suppression and eventual increase of $\lambda_{max}$; starting out with SGD
and turning SAM on after $2500$ steps leads to the opposite behavior (left). Eigenvalues
cross over quickly after the switch. Plotting GD normalized eigenvalues (blue, right) shows
GD EOS behavior in SGD phase; plotting SAM normalized eigenvalues (orange, right) shows SAM
EOS behavior in SAM phase.}
\label{fig:sam_sched}
\end{figure*}
{\bf Altering SAM radius during training can successfully move us between GD-EOS and SAM-EOS.}\quad Further evidence for EOS stabilization comes from using a \emph{SAM schedule}. We trained
the model with two settings: early SAM, where SAM is used for the first $2500$ steps ($50$ epochs),
after which the training proceeds with SGD ($\rho=0$), and late SAM, where SGD is used for the first
$2500$ steps, after which SAM is used for the remainder of training. The maximum eigenvalue is lower
for early SAM before $2500$ steps, after which there is a quick crossover and
late SAM gives better control (Figure \ref{fig:sam_sched}).
Both SAM schedules give improvement over SGD-only training. Generally, turning SAM on later or for the
full trajectory gave better generalization than turning SAM on early, consistent with the earlier
work of \citet{andriushchenko_understanding_2022}.
Plotting the eigenvalues for the early SAM and late SAM schedules, we see that when SAM is turned
off, the normalized eigenvalues lie above the gradient descent EOS (Figure \ref{fig:sam_sched},
right, blue curves). However when SAM is turned on, $\alpha\lambda_{max}$ is usually below the edge of stability
value of $2$; instead, the SAM-normalized value $\alpha(\lambda_{max}+\rho\lambda_{max}^{2})$ lies
close to the critical value of $2$ (Figure \ref{fig:sam_sched}, right,
orange curves). This suggests that turning SAM on or off during the intermediate part of training
causes the dynamics to quickly reach the appropriate edge of stability.
{\bf Networks with cross-entropy loss behave similarly.}\quad We found similar results for cross-entropy loss as well, which we detail in Appendix
\ref{app:x_ent_loss}.
The mini-batch gradient magnitude and eigenvalues vary more over the learning trajectories;
this may be related to effects of logit magnitudes which have been previously shown to affect
curvature and general training dynamics \cite{agarwala_temperature_2020, cohen_gradient_2022}.
{\bf Minibatch gradient norm varies little.}\quad Another quantity of interest is the magnitude of the mini-batch gradients. For SGD, the
gradient magnitudes were steady during the first half of training and dropped by a factor
of $4$ at late times
(Figure \ref{fig:mb_grad_mse}). Gradient magnitudes were very stable for SAM, particularly for larger
$\rho$. This suggests that in practice, there may not be much
difference between the normalized and un-normalized SAM algorithms.
This is consistent with previous work which
showed that the generalization of the two approaches is similar
\cite{andriushchenko_understanding_2022}.
\begin{figure}[ht]
\centering
\includegraphics[height=0.6\linewidth]{plots/mb_grad_mse.pdf}
\caption{Minibatch gradient magnitudes for CIFAR-10 model trained on MSE loss. Magnitudes
are steady early on in SGD training, but decrease at the end of training. Variation in the gradient magnitude
is smaller for larger SAM radius $\rho$.}
\label{fig:mb_grad_mse}
\end{figure}
\section{Discussion}
\subsection{SAM as a dynamical phenomenon}
Much like the study of EOS before it, our analysis of SAM suggests that
sharpness dynamics near minima are insufficient to capture relevant phenomenology.
Our analysis of the quadratic regression model suggests that SAM already
regularizes the large eigenmodes at early times, and the EOS analysis shows how
SAM can have strong effects even in the large-batch setting. Our theory also
suggested that SGD has additional mechanisms to control curvature early on in
training as compared to full batch gradient descent.
The SAM schedule experiments provided further evidence that multiple phases of the optimization
trajectory are important for understanding the relationship between SAM and generalization.
If the important effect was the convergence to a particular minimum, then only late SAM
would improve generalization. If instead some form of ``basin selection'' was key,
then only early SAM would improve generalization. The fact that both are important \cite{andriushchenko_understanding_2022}
suggests that the entire optimization trajectory matters.
We note that while EOS effects are \emph{necessary} to understand some aspects of
SAM, they are certainly not \emph{sufficient}. As shown in Appendix \ref{app:quad_model_numerics}, the details of the behavior near the EOS have
a complex dependence on $\rho$ (and the model).
Later on in learning, especially
with a loss like cross entropy, the largest eigenvalues may decrease even without
SAM \cite{cohen_gradient_2022} - potentially leading the dynamics away from the EOS.
Small batch size may add other effects, and EOS effects become harder to understand
if multiple eigenvalues are at the EOS. Nonetheless, even in more complicated cases
the SAM EOS gives a good approximation to the control SAM has on the eigenvalues,
particularly at early times.
\subsection{Optimization and regularization are deeply linked}
This work provides additional evidence that understanding some regularization
methods may in fact require analysis of the optimization dynamics - especially
those at early or intermediate times. This is in contrast to approaches which seek
to understand learning by characterizing minima, or analyzing behavior
near convergence only.
A similar phenomenology has been observed in evolutionary dynamics - the basic
$0$th order optimization method - where the details of optimization trajectories
are often more important than the statistics of the minima to understand long-term
dynamics \cite{nowak_analysis_2015, park_dexceedance_2016, agarwala_adaptive_2019a}.
\section{Future work}
Our main theoretical analysis focused on the dynamics $\m{z}$ and $\m{J}$ under squared loss;
additional complications arise for non-squared losses like cross-entropy. Providing a
detailed quantitative characterization of the EOS dynamics under these more general
conditions is an important next step.
Another important open question is the analysis of SAM (and the EOS effect
more generally) under SGD.
While Theorem \ref{thm:zj_dyn_sgd} provides some insight
into the differences, a full understanding would require an analysis of
$\expect_{\m{P}}[(\m{z}\cdot\v_{i})^{2}]$ for the different eigenmodes $\v_{i}$ -
which has only recently been analyzed for a quadratic loss function \cite{paquette_sgd_2021, paquette_homogenization_2022, paquette_implicit_2022, lee_trajectory_2022}.
Our analysis of the CIFAR10 models showed that the SGD gradient magnitude does not change
much over training. Further characterization of the SGD gradient statistics will also
be useful in understanding the interaction of SAM and SGD.
More detailed theoretical and experimental analysis of more complex settings may allow
for improvements to the SAM algorithm and its implementation in practice. A more detailed
theoretical understanding could lead to proposals for $\rho$-schedules, or improvements
to the core algorithm itself - already a
field of active research \cite{zhuang_surrogate_2022}.
Finally, our work focuses on optimization and training dynamics; linking these properties
to generalization remains a key goal of any further research into SAM and other
optimization methods.
|
{
"arxiv_id": "2302.08580",
"language": "en",
"timestamp": "2023-02-20T02:02:02",
"url": "https://arxiv.org/abs/2302.08580",
"yymm": "2302"
} | \section{Introduction}
In this paper, we study
quasi-Newton methods to solve unconstrained optimization problems. This class of algorithms can be viewed as a modification of Newton's method, where the objective function Hessian is approximated using the gradient information. Specifically, a general template of quasi-Newton methods to minimize a continuously differentiable function $f$ is
\begin{equation}\label{eq:quasi_newton}
\vx_{k+1} = \vx_k - \rho_k \mB_k^{-1} \nabla f(\vx_k), \qquad k \geq 0,
\end{equation}
where $\rho_k$ is the step size and $\mB_k$ is a matrix that aims to approximate $\nabla^2{f(\vx_k)}$. Several rules for updating $\mB_k$ have been proposed in the literature, and the most prominent include the Davidon-Fletcher-Powell (DFP) method \citep{davidon1959variable,fletcher1963rapidly}, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method \citep{broyden1970convergence,fletcher1970new,goldfarb1970family,shanno1970conditioning}, and the symmetric rank-one (SR1) method \citep{conn1991convergence,khalfan1993theoretical}. %
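For readers less familiar with this template, a minimal sketch of \eqref{eq:quasi_newton} with the classical BFGS update of $\mB_k$ and a fixed unit step size (an illustration of the classical approach, as opposed to the method proposed in this paper) is:
\begin{verbatim}
import numpy as np

def quasi_newton_bfgs(grad, x0, B0, rho=1.0, n_steps=50):
    """x_{k+1} = x_k - rho * B_k^{-1} grad f(x_k), with BFGS updates of B_k."""
    x, B = x0.astype(float).copy(), B0.astype(float).copy()
    for _ in range(n_steps):
        g = grad(x)
        x_new = x - rho * np.linalg.solve(B, g)
        s, y = x_new - x, grad(x_new) - g
        if s @ y > 1e-12:                 # curvature condition; otherwise skip update
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
        x = x_new
    return x
\end{verbatim}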
The main advantage of quasi-Newton methods is their ability to achieve Q-superlinear convergence under suitable conditions on $f$, i.e., $\lim_{k\rightarrow \infty }\frac{\|\vx_{k+1}-\vx^*\|}{\|\vx_{k}-\vx^*\|} = 0$ where $\vx^*$ is the optimal solution of $f$.
\citet{broyden1973local,dennis1974characterization} established that DFP and BFGS are locally and Q-superlinearly convergent with unit step size (i.e., $\rho_k=1$ in \eqref{eq:quasi_newton}).
To ensure global convergence, it is necessary to incorporate quasi-Newton updates with a line search or a trust-region method. \citet{powell1971convergence,dixon1972variable} proved that DFP and BFGS converge globally and Q-superlinearly with an exact line search, which can be computationally prohibitive. Subsequently, \citet{powell1976some} showed that BFGS with an inexact line search retains global and superlinear convergence, and \citet{byrd1987global} later extended the result to the restricted Broyden class except for DFP. Along another line of research,
\citet{conn1991convergence,khalfan1993theoretical,byrd1996analysis} studied the SR1 method in a trust region context and also proved its global and superlinear convergence.
However, the above results are all of \textit{asymptotic} nature and they fail to provide an explicit upper bound on the distance to the optimal solution after a finite number of iterations.
To address this shortcoming, several recent papers \citep{rodomanov2021greedy,Rodomanov2021,rodomanov2021new,jin2022non,jin2022sharpened,lin2021greedy,ye2022towards} have studied \textit{local nonasymptotic} superlinear convergence rates of classic quasi-Newton methods or their greedy variants. In particular, \citet{rodomanov2021new} proved that in a local neighborhood of the optimal solution, if the initial Hessian approximation is set as $L_1\mI$, BFGS with unit step size converges at a superlinear rate of
$[e^{\frac{d}{k}\log\frac{L_1}{\mu}}-1]^{k/2}$, where $k$ is the number of iterations, $d$ is the problem dimension, $L_1$ is the smoothness parameter and $\mu$ is the strong convexity parameter. In a concurrent work, \citet{jin2022non} showed that if the initial Hessian approximation is set as the exact Hessian at the initial iterate, BFGS with unit step size can achieve a local superlinear convergence rate of $\left({1}/{k}\right)^{{k}/{2}}$. However, all these results are crucially based on \emph{local analysis}: they require the initial point $\vx_0$ to be close enough to the optimal solution $\vx^*$, and also impose certain restrictive conditions on the initial Hessian approximation matrix $\mB_0$.
Moreover, there is no clear way of combining these local non-asymptotic superlinear rates with global convergence guarantees for quasi-Newton methods. Specifically, if we use a line search scheme to ensure global convergence, it cannot be guaranteed that once the iterate reaches a local neighborhood of the solution, the Hessian approximation matrix $\mB_k$ will satisfy the specified conditions in \citep{rodomanov2021new,jin2022non} to achieve the local {non-asymptotic} superlinear convergence rates.
Hence, the following question remains open:
\begin{quote}
\textit{Can we design a globally convergent quasi-Newton method with an explicit superlinear convergence rate?}
\end{quote}
%
In this paper, we answer the above question in the affirmative. We propose a novel quasi-Newton proximal extragradient (QNPE) method that achieves an explicit non-asymptotic superlinear convergence rate. Unlike prior works that use a \textit{local analysis}
requiring specific conditions on the initial iterate and the initial Hessian approximation,
our \emph{global superlinear} convergence guarantee holds for an arbitrary initialization of the iterate and Hessian approximation.
More precisely, for a $\mu$-strongly convex function $f$ with $L_1$-Lipschitz gradient and $L_2$-Lipschitz Hessian, the iterates $\{\vx_k\}_{k\geq 0}$ generated by our QNPE method satisfy the following guarantees:
\vspace{1mm}
\noindent\textbf{(i) Global convergence rates.} We have $
\frac{\|\vx_{k}-\vx^*\|^2}{\|\vx_0-\vx^*\|^2} \!\leq\! \min\bigl\{\big(1+\frac{\mu}{4 L_1}\big)^{-k},\bigl(1+ \frac{\mu}{4L_1} \sqrt{{k}/{C}}\bigr)^{-k}\bigr\}$, where $C = \bigO\Bigl(\frac{\|\mB_0-\nabla^2 f(\vx^*)\|^2_F}{L_1^2}+\frac{L_2^2\|\vx_0-\vx^*\|^2}{\mu L_1}\Bigr)$. Note that the two convergence bounds correspond to a linear rate and a superlinear rate, respectively.
In particular, the superlinear rate outperforms the linear rate when $k \geq C$ and is faster than any linear rate when $k$ is sufficiently large.
\vspace{2mm}
\noindent\textbf{(ii) Iteration complexity.}
Let $N_\epsilon$ denote the number of iterations required to reach $\epsilon$-accuracy. Then we have $N_{\epsilon} = \bigO\Bigl(\min\Bigl\{\frac{L_1}{\mu}\log\frac{1}{\epsilon},\, {\log\frac{1}{\epsilon}}\Big/{{ \log \Bigl(1+\frac{\mu}{L_1}\left({\frac{L_1}{ C\mu}\log\frac{1}{\epsilon}}\right)^{\nicefrac{1}{3}}\Bigr)}}\Bigr\}\Bigr)$. In particular, in the regime where $\epsilon$ is sufficiently small, we obtain $N_{\epsilon} = \bigO\left(\log\frac{1}{\epsilon}/\log\log\frac{1}{\epsilon}\right)$.
%
\vspace{2mm}
\noindent
\textbf{(iii) Computational complexity.} To achieve $\epsilon$-accuracy, the total number of gradient evaluations and matrix-vector products is bounded by $3N_{\epsilon}-1$ and $\bigO(N_{\epsilon}\sqrt{\frac{L_1}{\mu}}\log\frac{L_1 N_\epsilon^2d}{\mu\epsilon})$, respectively.
\vspace{1mm}
We obtain these results by taking a quite different route from the existing quasi-Newton literature. Instead of considering an update of the form \eqref{eq:quasi_newton}, we build our method based on the \textit{hybrid proximal extragradient} (HPE) framework \citep{solodov1999hybrid}, which can be interpreted as an inexact variant of the proximal point method \citep{Martinet1970,Rockafellar1976}. The general HPE method consists of two steps: an inexact proximal point update where $\hat{\vx}_k \approx \vx_k-\eta_k \nabla f(\hat{\vx}_k)$, and an extragradient step where $\vx_{k+1} = \vx_k - \eta_k \nabla f(\hat{\vx}_k)$. In our QNPE method, we implement the first step by using the linear approximation $\nabla f({\vx}_k)+\mB_k(\hat{\vx}_k-\vx_k)$ as a surrogate of $\nabla f(\hat{\vx}_k)$. Considering this approximation and by exploiting strong convexity, the QNPE update is given by
\begin{equation}\label{eq:HPE_update}
\hat{\vx}_{k} = \vx_k - \eta_k (\mI+\eta_k \mB_k)^{-1} \nabla f(\vx_k), \quad \vx_{k+1} = \frac{1}{1+2\eta_k\mu}(\vx_k - \eta_k \nabla f(\hat{\vx}_k))+\frac{2\eta_k\mu}{1+2\eta_k\mu} \hat{\vx}_k,
\end{equation}
where $\eta_k$ is the step size and $\mu$ is the strong convexity parameter.
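For intuition, a literal transcription of the update in \eqref{eq:HPE_update} (a simplified sketch that solves the linear system exactly and ignores the inexactness tolerated by the actual method) reads:
\begin{verbatim}
import numpy as np

def qnpe_step(x, grad_f, B, eta, mu):
    """One idealized QNPE step: approximate prox step, then extragradient step."""
    d = x.shape[0]
    x_hat = x - eta * np.linalg.solve(np.eye(d) + eta * B, grad_f(x))
    w = 2.0 * eta * mu
    x_next = (x - eta * grad_f(x_hat)) / (1.0 + w) + (w / (1.0 + w)) * x_hat
    return x_hat, x_next
\end{verbatim}
The choice of the step size $\eta_k$ and the update of $\mB_k$ are the crux of the method and are discussed in Section~\ref{sec:QNPE}.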
Moreover, to ensure that QNPE preserves the fast convergence rate of HPE, we develop a novel scheme for the update of $\mB_k$ to control the error caused by the linear approximation.
As a result, unlike traditional quasi-Newton methods (such as BFGS and DFP) that update $\mB_k$ by mimicking some property of the Hessian such as the secant condition, our update rule is directly motivated by the convergence analysis of the HPE framework. %
Specifically, according to our analysis, it is sufficient to ensure that $\sum_{k} {1}/{\eta_k^2}<+\infty$ in order to guarantee a superlinear convergence rate for the QNPE method. As we discuss later, this sum can be explicitly bounded above by the cumulative loss $ \sum_k \ell_k(\mB_k)$,
where $\ell_k : \semiS^d \rightarrow \reals_+$ is a loss function that in some sense measures the approximation error. As a result, the update of $\mB_k$ boils down to running an online algorithm for solving an \emph{online convex optimization problem} in the space of positive definite matrices with bounded eigenvalues.
Finally, we address the challenge of computational efficiency by presenting a projection-free online learning algorithm for the update of $\mB_k$.
Note that most online learning algorithms for constrained problems are based on a projection oracle, but in our specific setting, such projection requires expensive eigendecomposition to ensure that the eigenvalues of $\mB_k$ are within a specific range. In contrast, our projection-free online learning algorithm is based on an approximate separation oracle (see Definition~\ref{def:gauge}) that can be efficiently constructed using matrix-vector products.
\section{Preliminaries}\label{sec:pre}
In this paper, we focus on the unconstrained minimization problem
\begin{equation}\label{eq:main_problem}
\min_{\vx \in \mathbb{R}^d}\;\;\; f(\vx),%
\end{equation}
where $f:\reals^d \rightarrow \reals$ is convex and twice differentiable and satisfies the following assumptions.
\begin{assumption}\label{assum:smooth_SC} There exist positive constants $\mu$ and $L_1$ such that
%
$\mu \mI \preceq \nabla^2 f(\vx) \preceq L_1 \mI$ for any $\vx\in \reals^d$,
%
where $\mI \in \reals^{d\times d}$ is the identity matrix. That is, $f$ is $\mu$-strongly convex and $L_1$-smooth.
\end{assumption}
\begin{assumption}\label{assum:Hessian_Lips} There exists $L_2>0$ such that
%
$\|\nabla^2 f(\vx) - \nabla^2 f(\vx^*)\|_{\op} \leq L_2 \|\vx-\vx^*\|_2$ for any $\vx\in \reals^d$, where $\vx^*$ is the optimal solution and $\|\mA\|_{\op} \triangleq \sup_{\vx:\|\vx\|_2 = 1} {\|\mA\vx\|_2}$.
%
\end{assumption}
We note that both assumptions are standard; Assumption~\ref{assum:smooth_SC} is common in the study of first-order methods, while Assumption~\ref{assum:Hessian_Lips} is also used in various papers on the superlinear convergence of classical quasi-Newton methods (e.g., see \citep{byrd1987global,jin2022non}).
Also, unless otherwise specified, throughout the paper we use $\|\cdot\|$ to denote the Euclidean norm.
\textbf{Hybrid Proximal Extragradient Framework.}
To set the stage for our algorithm, we briefly recap the hybrid proximal extragradient (HPE) framework in \citep{solodov1999hybrid,monteiro2010complexity,monteiro2012iteration}.
When specialized to the minimization problem in \eqref{eq:main_problem}, it can be described by the following two steps:
First, we perform an inexact proximal point update $\hat{\vx}_k \approx \vx_k-\eta_k \nabla f(\hat{\vx}_k)$ with step size $\eta_k$. More precisely, for a given parameter $\sigma \in [0,1)$, we find $\hat{\vx}_k$ that satisfies
\begin{equation}\label{eq:approx_pp}
\|{\hat{\vx}_k} - \vx_k+{\eta_k} \nabla f( \hat{\vx}_{k})\| \leq \sigma \|\hat{\vx}_k-\vx_k\|.
\end{equation}
Then, we perform an extragradient step and compute $\vx_{k+1}$ by
\begin{equation}\label{eq:extragradient}
\vx_{k+1} = \vx_k - \eta_k \nabla f(\hat{\vx}_k).
\end{equation}
While the proximal point update is only computed inexactly in \eqref{eq:approx_pp},
\citet{monteiro2010complexity} proved that the HPE method can achieve a similar convergence guarantee as the proximal point method. Specifically, when $f$ is convex, it holds that
$f(\bar{\vx}_{N-1})-f(\vx^*) \leq \frac{1}{2}\|\vx_0-\vx^*\|^2(\sum_{k=0}^{N-1}\eta_k)^{-1}$, where $\bar{\vx}_{N-1} \triangleq \nicefrac{\sum_{k=0}^{N-1}\eta_k\hat{\vx}_k}{\sum_{k=0}^{N-1} \eta_k} $ is the averaged iterate.
It is worth noting that the HPE method is not directly implementable, but rather a useful conceptual tool, as we still need to specify how to find $\hat{\vx}$ satisfying the condition in~\eqref{eq:approx_pp}. One systematic approach is to approximate the gradient operator $\nabla f$ by a simpler local model $P(\vx;\vx_k)$, and then compute $\hat{\vx}_k$ by solving the equation
\begin{equation}\label{eq:equation_model}
\hat{\vx}_k - \vx_k + \eta_k P(\hat{\vx}_k;\vx_k) = 0.
\end{equation}
Furthermore, we can see that the condition in \eqref{eq:approx_pp} becomes
\begin{equation}\label{eq:stepsize_model}
\eta_k\|\nabla f(\hat{\vx}_k) - P(\hat{\vx}_k;\vx_k)\| \leq \sigma \|\hat{\vx}_k-\vx_k\|,
\end{equation}
which imposes an upper bound on the step size depending on the approximation error.
For instance, if we take $P(\vx;\vx_k) = \nabla f(\vx_k)$, the update in \eqref{eq:equation_model} reads $\hat{\vx}_k = \vx_k-\eta_k \nabla f(\vx_k)$, leading to the classic extragradient method by \citet{Korpelevich1976}.
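In code, this instance of the framework is the familiar two-step update (a minimal sketch):
\begin{verbatim}
import numpy as np

def extragradient_step(x, grad_f, eta):
    """HPE with the constant model P(x; x_k) = grad f(x_k)."""
    x_hat = x - eta * grad_f(x)         # inexact proximal point step
    return x - eta * grad_f(x_hat)      # extragradient step
\end{verbatim}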
If we use $P(\vx;\vx_k) = \nabla f(\vx_k)+ \nabla^2 f(\vx_k)(\vx-\vx_k)$,
we obtain the Newton proximal extragradient (NPE) method by \citet{monteiro2010complexity,monteiro2012iteration},
which has a faster convergence rate in the convex setting. %
However, the NPE method requires access to the objective function Hessian, which could be computationally costly. In this paper, we propose a quasi-Newton proximal extragradient method that only requires access to gradients. Surprisingly, our update rule for the Hessian approximation matrix does not follow traditional update rules such as the ones in BFGS or DFP, but is instead guided by an online learning approach,
where we aim to minimize the regret associated with a certain approximation error.
More details are in Section~\ref{sec:QNPE}.
\section{Quasi-Newton Proximal Extragradient Method}\label{sec:QNPE}
In this section, we propose the quasi-Newton proximal extragradient (QNPE) method.
An informal description is provided in Algorithm~\ref{alg:Full_EQN}.
On a high level, our method falls into the HPE framework described in Section~\ref{sec:pre}. In particular, we choose the local model in \eqref{eq:equation_model} and \eqref{eq:stepsize_model} as $P(\vx;\vx_k) = \nabla f(\vx_k) + \mB_k(\vx-\vx_k)$, where $\mB_k\in \semiS^d$ is the Hessian approximation matrix.
Specifically, the update at the $k$-th iteration consists of three major stages, which we describe in the following paragraphs.
%
\begin{algorithm}[!t]\small
\caption{Quasi-Newton Proximal Extragradient (QNPE) Method (informal)}\label{alg:Full_EQN}
\begin{algorithmic}[1]
\STATE \textbf{Input:} strong convexity parameter $\mu$, smoothness parameter $L_1$,
%
line search parameters $\alpha_1\geq 0$ and $\alpha_2>0$ such that $\alpha_1+\alpha_2 < 1$, and initial trial step size $\sigma_0>0$
\STATE \textbf{Initialization:} initial point $\vx_0\in \mathbb{R}^d$ and initial Hessian approximation $\mB_0$ such that $\mu \mI \preceq \mB_0 \preceq L_1 \mI$
%
\FOR{iteration $k=0,\ldots,N-1$}
%
\STATE Let $\eta_k$ be the largest possible step size in $\{\sigma_k\beta^i:i\geq 0\}$ such that \label{line:LS_begin} \tikzmark{top}
\vspace{-3mm}
\begin{align*}
\vspace{-6mm}
& \hspace{-7em}\hat{\vx}_{k} \approx_{\alpha_1} \vx_k - \eta_k (\mI+\eta_k \mB_k)^{-1} \nabla f(\vx_k), \qquad (\text{see Eq. \eqref{eq:inexact_linear_solver}})\tikzmark{right}\\ %
& \hspace{-7em}
{ \eta_k} \|\nabla f({\hat{\vx}_k})-\nabla f(\vx_k)-\mB_k({\hat{\vx}_k}-\vx_k)\| \leq {\alpha_2 \|{\hat{\vx}_k}-\vx_k\|}. %
\end{align*}%
\vspace{-6mm}
\STATE Set $\sigma_{k+1} \leftarrow \eta_{k}/\beta$ \label{line:intial_stepsize}\tikzmark{bottom}
\STATE Update
$
\vx_{k+1} \leftarrow \frac{1}{1+2\eta_k\mu}(\vx_k - \eta_k \nabla f(\hat{\vx}_k))+\frac{2\eta_k\mu}{1+2\eta_k\mu} \hat{\vx}_k
$ \label{line:extragradient_step}
\vspace{.3em}
%
\IF[Line search accepted the initial trial step size ]{$\eta_k = \sigma_k$ \tikzmark{top2} }
\STATE Set $\mB_{k+1} \leftarrow \mB_k$ \label{line:Hessian_unchanged}
\ELSE[Line search backtracked]
\STATE Let $\tilde{\vx}_k$ be the last rejected iterate in the line search
%
\STATE Set $\vy_k \leftarrow \nabla f(\tilde{\vx}_k) -\nabla f({\vx_k})$ and $\vs_k \leftarrow \tilde{\vx}_k-\vx_k$
\STATE Define the loss function $\ell_k(\mB) = \frac{\|\vy_k-\mB\vs_k\|^2}{2\|\vs_k\|^2}$
\STATE Feed $\ell_k(\mB)$ to an online learning algorithm and obtain $\mB_{k+1}$ \label{line:online_update} \phantom{\quad\quad}\tikzmark{right2}
\ENDIF \tikzmark{bottom2}
\ENDFOR
\AddNote{top}{bottom}{right}{\color{comment}\textit{\quad Line search subroutine;\\
\quad see Section~\ref{sec:line_search}}}
\AddNote{top2}{bottom2}{right2}{\color{comment}\textit{\quad Hessian approximation\\
\quad update subroutine; see\\ \quad Section~\ref{sec:no_regret}}}
\end{algorithmic}
\end{algorithm}
In the \textbf{first stage}, given the Hessian approximation matrix $\mB_k$ and the current iterate $\vx_k$, we select the step size $\eta_k$ and the point $\hat{\vx}_k$ such that
%
\begin{align}
\|{\hat{\vx}_k}-\vx_k+{\eta_k}(\nabla f(\vx_k)+\mB_k({\hat{\vx}_k}-\vx_k))\| &\leq {\alpha_1} \|\hat{\vx}_k-\vx_k\|, \label{eq:inexact_linear_solver} \\
{ \eta_k} \|\nabla f(\hat{\vx}_k)-\nabla f(\vx_k)-\mB_k(\hat{\vx}_k-\vx_k)\| &\leq {\alpha_2 \|{\hat{\vx}_k}-\vx_k\|} \label{eq:step size_requirement},
%
\end{align}
where $\alpha_1\in [0,1)$ and $\alpha_2\in (0,1)$ are user-specified parameters with $\alpha_1+\alpha_2<1$. The first condition in \eqref{eq:inexact_linear_solver}
requires $\hat{\vx}_k$ to be
an inexact solution of the linear system of equations $(\mI+{\eta_k}\mB_k)({\vx}-\vx_k) = -\eta_k\nabla f(\vx_k)$, where $\alpha_1$ controls the error of solving the linear system. In particular, when $\alpha_1=0$, it reduces to the update $\hat{\vx}_k=\vx_k -\eta_k(\mI+{\eta_k}\mB_k)^{-1}\nabla f(\vx_k)$ as in \eqref{eq:HPE_update}. %
%
The second condition in \eqref{eq:step size_requirement} ensures that
%
the approximation error between the gradient $\nabla f(\hat{\vx}_k)$ and its quasi-Newton approximation $\nabla f(\vx_k)+\mB_k(\hat{\vx}_k-\vx_k)$ is sufficiently small.
%
%
To satisfy the conditions in \eqref{eq:inexact_linear_solver} and \eqref{eq:step size_requirement} simultaneously, we need to determine the step size $\eta_k$ and the point $\hat{\vx}_k$ by a \textit{line search subroutine} (cf. Lines~\ref{line:LS_begin}-\ref{line:intial_stepsize} in Algorithm~\ref{alg:Full_EQN}). Specifically, for a given parameter $\beta \in (0,1)$, we choose the largest admissible step size %
from the set $\{\sigma_k \beta^i:i\geq 0\}$, where $\sigma_k = \eta_{k-1}/\beta$ for $k\geq 1$. This can be implemented by a backtracking line search scheme and we present the details in Section~\ref{sec:line_search}. %
In the \textbf{second stage}, we compute $\vx_{k+1}$ using the gradient at $\hat{\vx}_k$ (cf. Line~\ref{line:extragradient_step} in Algorithm~\ref{alg:Full_EQN}), but our update is slightly different from the one in \eqref{eq:extragradient} as we focus on the strongly-convex setting, while the update in \eqref{eq:extragradient} is designed for the convex setting. More precisely, we compute $\vx_{k+1}$ according to
%
\begin{equation}\label{eq:extragradient_mixing}
\vx_{k+1} = \frac{1}{1+2\eta_k \mu}(\vx_k - \eta_k \nabla f(\hat{\vx}_k))+\frac{2\eta_k\mu}{1+2\eta_k\mu} \hat{\vx}_k,
\end{equation}
where we choose the coefficients based on our analysis to obtain the best convergence rate.
Note that the above update in \eqref{eq:extragradient_mixing} reduces to \eqref{eq:extragradient} when $\mu=0$, and thus it can be viewed as an extension of HPE to the strongly-convex setting, which appears to be novel and of independent interest.
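In code, this step is a one-liner; the sketch below simply rearranges \eqref{eq:extragradient_mixing} over a common denominator, and the variable names (\texttt{x}, \texttt{x\_hat}, \texttt{eta}, \texttt{mu}, \texttt{grad\_f}) are illustrative.
\begin{verbatim}
# Strongly-convex extragradient step; setting mu = 0 recovers the standard update.
x_next = (x - eta * grad_f(x_hat) + 2 * eta * mu * x_hat) / (1 + 2 * eta * mu)
\end{verbatim}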
%
%
In the \textbf{third stage}, we update the Hessian approximation matrix $\mB_{k}$.
Here, we take a different approach from the classical quasi-Newton methods (such as BFGS and DFP) and let the convergence analysis guide our choice of $\mB_{k+1}$. As will be evident later, the convergence rate of Algorithm~\ref{alg:Full_EQN} is closely related to the cumulative loss $\sum_{k\in \mathcal{B}} \ell_k(\mB_k)$, where $\mathcal{B}$ denotes the set of iteration indices where the line search procedure backtracks. Here, the loss function is given by $\ell_k(\mB_k) \triangleq \frac{\|\vy_k-\mB_k\vs_k\|^2}{2\|\vs_k\|^2}$, where $\vy_k = \nabla f(\tilde{\vx}_k)-\nabla f({\vx}_k)$, $\vs_k = \tilde{\vx}_k-\vx_k $, and $\tilde{\vx}_k$ is an auxiliary iterate returned by our line search scheme.
%
Thus, the update of the Hessian approximation matrix naturally fits into the framework of \textit{online learning}.
%
More precisely,
if the line search accepts the initial trial step size (i.e., $k\notin \mathcal{B}$), we keep the Hessian approximation matrix unchanged (cf. Line~\ref{line:Hessian_unchanged} in Algorithm~\ref{alg:Full_EQN}). Otherwise, we follow a tailored projection-free online learning algorithm in the space of matrices (cf. Line~\ref{line:online_update} in Algorithm~\ref{alg:Full_EQN}). %
The details of the update of $\mB_k$ are in Section~\ref{sec:no_regret}.
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
Finally, we provide a convergence guarantee for QNPE in Proposition~\ref{prop:IEQN}, which serves as a cornerstone for our convergence analysis.
We note that the following result does not require additional conditions on $\mB_k$, other than the ones in \eqref{eq:inexact_linear_solver} and \eqref{eq:step size_requirement}.
The proof is available in Appendix~\ref{appen:IEG}.
\begin{proposition}\label{prop:IEQN}
Let $\{\vx_k\}_{k\geq 0}$ be the iterates generated by \eqref{eq:inexact_linear_solver}, \eqref{eq:step size_requirement}, and \eqref{eq:extragradient_mixing} where $\alpha_1+\alpha_2 <1$. If $f$ is $\mu$-strongly convex, then
$\|\vx_{k+1}-\vx^*\|^2 \leq \|\vx_k-\vx^*\|^2 (1+2\eta_k\mu)^{-1}$.
%
%
%
\end{proposition}
Proposition~\ref{prop:IEQN} highlights the pivotal role of $\eta_k$ in the convergence rate: the larger the step size, the faster the convergence. On the other hand, $\eta_k$ is constrained by the condition in \eqref{eq:step size_requirement}, which, in turn, depends on the Hessian approximation matrix $\mB_k$. Thus, the central goal of our line search scheme and the Hessian approximation update is to make our step size $\eta_k$ as large as possible.
\subsection{Backtracking line search}\label{sec:line_search}
In this section, we describe a backtracking line search scheme for selecting the step size $\eta_k$ and the iterate $\hat{\vx}_k$ in the first stage of QNPE. For simplicity, we denote $\nabla f(\vx_k)$ by $\vg$ and drop the subscript $k$ in $\vx_k$ and $\mB_k$. %
Recall that at the $k$-th iteration, our goal is to find a pair $(\eta_{+}, \hat{\vx}_{+})$ such that
\begin{align}
%
\|\hat{\vx}_{+}-\vx+{\eta_{+}}(\vg+\mB({\hat{\vx}_{+}}-\vx))\| &\leq {\alpha_1} \|{\hat{\vx}_{+}}-\vx\|,
\label{eq:x_plus_update} \\
{ \eta_{+}} \|\nabla f({\hat{\vx}_{+}})-\vg-\mB({\hat{\vx}_{+}}-\vx)\| &\leq \alpha_2 \|{\hat{\vx}_{+}}-\vx\| \label{eq:step size_condition}.
\end{align}
As mentioned in the previous section, the condition in \eqref{eq:x_plus_update} can be satisfied if we solve the linear system $(\mI+\eta_{+}\mB)(\hat{\vx}_+-\vx) = -\eta_+\vg$ to a desired accuracy.
To formalize, we let
\begin{equation}\label{eq:linear_solver_update}
\vs_{+} = \mathsf{LinearSolver}(\mI+\eta_{+}\mB, -\eta_{+}\vg; \alpha_1) \quad \text{and} \quad \hat{\vx}_{+} = \vx+\vs_{+},
\end{equation}
where the oracle $\mathsf{LinearSolver}$ is defined as follows.
\begin{definition}\label{def:linear_solver}
The oracle $\mathsf{LinearSolver}(\mA,\vb; \alpha)$ takes a matrix $\mA \in \semiS^d$, a vector $\vb\in \reals^d$ and $\alpha \in (0,1)$ as input, and returns an approximate solution $\vs_{+}$
%
satisfying $\|\mA\vs_{+}-\vb\| \leq \alpha \|\vs_{+}\|$.
%
%
%
%
%
%
\end{definition}
By Definition~\ref{def:linear_solver}, the pair $(\eta_+,\hat{\vx}_+)$ is guaranteed to satisfy \eqref{eq:x_plus_update} when $\hat{\vx}_+$ is computed based on~ \eqref{eq:linear_solver_update}.
To implement the oracle $\mathsf{LinearSolver}(\mA,\vb; \alpha)$, the most direct way is to compute the exact solution $\vs_{+} = \mA^{-1}\vb$. In Appendix~\ref{appen:CR}, we will describe a more efficient implementation via the conjugate residual method \citep{saad2003iterative}, which only requires computing matrix-vector products.
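For concreteness, a minimal sketch of the direct implementation (exact solve) is given below; the names are illustrative, and the conjugate residual variant from the appendix is not shown.
\begin{verbatim}
import numpy as np

def linear_solver(A, b, alpha):
    # Direct implementation of the LinearSolver oracle: the exact solution
    # s = A^{-1} b satisfies ||A s - b|| <= alpha * ||s|| trivially.
    s = np.linalg.solve(A, b)
    assert np.linalg.norm(A @ s - b) <= alpha * np.linalg.norm(s) + 1e-12
    return s
\end{verbatim}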
\begin{subroutine}[!t]\small
\caption{Backtracking line search}\label{alg:ls}
\begin{algorithmic}[1]
\STATE \textbf{Input:} iterate $\vx \in \mathbb{R}^d$, gradient $\vg\in \reals^d$, Hessian approximation $\mB\in \semiS^{d}$, initial trial step size $\sigma>0$
\STATE \textbf{Parameters:} line search parameters $\beta\in (0,1)$, $\alpha_1\geq 0$ and $\alpha_2>0$ such that $\alpha_1+\alpha_2<1$
%
\STATE Set ${\eta}_{+} \leftarrow \sigma$
\WHILE{$\eta_{+} \|\nabla f(\hat{\vx}_{+})-\vg-\mB(\hat{\vx}_{+}-\vx)\|_2 > \alpha_2 \|\hat{\vx}_{+}-\vx\|_2$}
\STATE Set $\tilde{\vx} \leftarrow \hat{\vx}_{+}$ %
and $\eta_{+} \leftarrow \beta\eta_+ $
\STATE Compute $\vs_{+} \leftarrow \mathsf{LinearSolver}(\mI+\eta_{+}\mB, -\eta_{+}\vg; \alpha_1)$ and $\hat{\vx}_{+} \leftarrow \vx+\vs_{+}$
\ENDWHILE
\IF{$\eta_+ = \sigma$}
\STATE \textbf{Return} $\eta_{+}$ and $\hat{\vx}_{+}$ \label{line:return_1}
\ELSE
\STATE \textbf{Return} $\eta_{+}$, $\hat{\vx}_{+}$ and $\tilde{\vx}$ \label{line:return_2}
\ENDIF
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\end{algorithmic}
\end{subroutine}
Now we are ready to describe our backtracking line search scheme in Subroutine~\ref{alg:ls} assuming access to the $\mathsf{LinearSolver}$ oracle.
Specifically, given a user-defined parameter $\beta\in (0,1)$ and initial trial step size $\sigma>0$, we try the step sizes in $\{\sigma \beta^i: i\geq 0\}$ in decreasing order and compute $\hat{\vx}_+$ according to \eqref{eq:linear_solver_update}, until we find one pair $(\eta_+,\hat{\vx}_{+})$ that satisfies \eqref{eq:step size_condition}. Since \eqref{eq:x_plus_update} already holds true by following the update rule in \eqref{eq:linear_solver_update}, the line search scheme will return a pair $(\eta_+,\hat{\vx}_{+})$ satisfying both conditions in \eqref{eq:x_plus_update} and \eqref{eq:step size_condition}.
Regarding the output, we distinguish two cases. If we pass the test in \eqref{eq:step size_condition} on our first attempt, we accept the initial step size $\sigma$ and the corresponding iterate $\hat{\vx}_+$ (cf. Line~\ref{line:return_1}). Otherwise,
if $\sigma$ fails the test and we go through the backtracking procedure, along with the pair $(\eta_+,\hat{\vx}_{+})$, we also return an auxiliary iterate $\tilde{\vx}$, which is the last rejected point we compute from \eqref{eq:linear_solver_update} using the step size $\eta_+/\beta$ (cf. Line~\ref{line:return_2}). As we shall see in Lemma~\ref{lem:step size_lb}, the iterate $\tilde{\vx}$ is used to construct a lower bound on $\eta_+$, which will guide our update of the Hessian approximation matrix.
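The backtracking scheme translates almost line by line into code. The sketch below is only illustrative: it uses an exact linear solve in place of the $\mathsf{LinearSolver}$ oracle (which trivially meets the $\alpha_1$ tolerance), and the function and variable names are assumptions.
\begin{verbatim}
import numpy as np

def line_search(grad_f, x, g, B, sigma, beta=0.5, alpha1=0.25, alpha2=0.25):
    # g = grad_f(x); returns (eta, x_hat) if sigma is accepted,
    # and (eta, x_hat, x_tilde) if the scheme backtracked.
    d = x.shape[0]
    eta = sigma
    x_hat = x + np.linalg.solve(np.eye(d) + eta * B, -eta * g)  # exact solve
    x_tilde = None
    while (eta * np.linalg.norm(grad_f(x_hat) - g - B @ (x_hat - x))
           > alpha2 * np.linalg.norm(x_hat - x)):
        x_tilde, eta = x_hat, beta * eta        # remember the last rejected point
        x_hat = x + np.linalg.solve(np.eye(d) + eta * B, -eta * g)
    return (eta, x_hat) if x_tilde is None else (eta, x_hat, x_tilde)
\end{verbatim}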
For ease of notation, let $\mathcal{B}$ be the set of iteration indices where the line search scheme backtracks, i.e., $\mathcal{B}\triangleq \{k:\eta_k < \sigma_{k}\}$. For these iterations in $\mathcal{B}$, the next lemma provides a lower bound on the step size $\eta_k$ returned by our line search scheme, which will be the key to our convergence analysis and the update of the Hessian approximation matrices. The proof can be found in Appendix~\ref{appen:step size_lb}.
\begin{lemma}\label{lem:step size_lb}
%
%
%
For $k\notin \mathcal{B}$ we have $\eta_k = \sigma_k$, while for $k\in \mathcal{B}$ we have
\begin{equation}\label{eq:step size_lower_bound}
\eta_k > \frac{\alpha_2 \beta\|{\tilde{\vx}_k}-\vx_k\|}{\|\nabla f({\tilde{\vx}_k})-\nabla f(\vx_k)-\mB_k({\tilde{\vx}_k}-\vx_k)\|} \quad \text{and} \quad \|{\tilde{\vx}_k}-\vx_k\| \leq \frac{1+\alpha_1}{\beta(1-\alpha_1)} \|{\hat{\vx}_k}-\vx_k\|.
\end{equation}
%
%
\end{lemma}
\vspace{0mm}
%
%
%
%
In Lemma~\ref{lem:step size_lb}, we lower bound the step size $\eta_k$ in terms of the ratio between $\|\tilde{\vx}_k-\vx_k\|$ and the approximation error $\|\nabla f({\tilde{\vx}_k})-\nabla f(\vx_k)-\mB_k({\tilde{\vx}_k}-\vx_k)\|$.
Hence, a better Hessian approximation matrix $\mB_k$ leads to a larger step size, which in turn implies faster convergence.
Also, we note that the lower bound depends on the auxiliary iterate $\tilde{\vx}_k$ that is not accepted as the actual iterate. As such, we will use the second inequality in \eqref{eq:step size_lower_bound} to relate $\|\tilde{\vx}_k-\vx_k\|$ with $\|\hat{\vx}_k-\vx_k\|$.
Finally, we remark that to fully characterize the computational cost of our method, we need to upper bound the total number of line search steps, each of which requires a call to $\mathsf{LinearSolver}$ and a call to the gradient oracle. This will be discussed later in Section~\ref{subsec:computational}.
\subsection{Hessian Approximation Update via Online Learning}\label{sec:no_regret}
In this section, we focus on the update rule for the Hessian approximation matrix $\mB_k$. Our goal is to develop a policy that leads to an explicit superlinear convergence rate for our proposed QNPE method. As mentioned earlier,
our new policy %
differs greatly from the traditional quasi-Newton updates
and is solely guided by the convergence analysis of our method.
Our starting point is Proposition~\ref{prop:IEQN},
which characterizes the convergence rate of QNPE in terms of the step size $\eta_k$.
It implies that if we can show $\eta_k \rightarrow \infty$, then a superlinear convergence rate follows immediately. Specifically, by repeatedly applying the result of Proposition~\ref{prop:IEQN}, we obtain
\begin{equation}\label{eq:superlinear_jensen}
\frac{\|\vx_{N}-\vx^*\|^2}{\|\vx_{0}-\vx^*\|^2} \leq \prod_{k=0}^{N-1} (1+2\eta_k\mu)^{-1} \leq \left(1+2\mu \sqrt{\frac{N}{\sum_{k=0}^{N-1} 1/\eta_k^2}}\right)^{-N},
\end{equation}
where the second inequality follows from Jensen's inequality. Hence, if we upper bound $\sum_{k=0}^{N-1} 1/\eta_k^2$ by a constant independent of $N$, it implies a global superlinear convergence rate of $\bigO\bigl(({1}/\sqrt{N})^N\bigr)$. Moreover, Lemma~\ref{lem:step size_lb} gives us the tool to control the step sizes and establish an upper bound on $\sum_{k=0}^{N-1} 1/\eta_k^2$, as shown in the following lemma. The proof is given in Appendix~\ref{appen:stepsize_bnd}.
\begin{lemma}\label{lem:stepsize_bnd}
%
%
Let $\{\eta_k\}_{k=0}^{N-1}$ be the step sizes in Algorithm~\ref{alg:Full_EQN} using the line search in Subroutine~\ref{alg:ls}. Then,
%
\begin{equation}\label{eq:goal}
\sum_{k=0}^{N-1}\frac{1}{\eta_k^2} \leq \frac{1}{(1-\beta^2)\sigma_0^2}+ \frac{1}{(1-\beta^2)\alpha_2^2\beta^2}\sum_{k\in \mathcal{B}} \frac{\|\vy_k-\mB_k\vs_k\|^2}{\|\vs_k\|^2},
%
\end{equation}
where $\vy_k \triangleq \nabla f(\tilde{\vx}_k) -\nabla f({\vx_k})$ and $\vs_k \triangleq \tilde{\vx}_k-\vx_k$.
\end{lemma}
Our key idea is to interpret the right-hand side of \eqref{eq:goal} as the cumulative loss incurred by our choice of $\mB_k$, and to update the Hessian approximation matrix by an online learning algorithm.
More formally, define the loss function at iteration $k$ as
\begin{equation}\label{eq:loss_of_Hessian}
\ell_k(\mB) \triangleq
\begin{cases}
0, & \text{if } k\notin \mathcal{B}, \\
\frac{\|\vy_k-\mB\vs_k\|^2}{2\|\vs_k\|^2}, & \text{otherwise}.
\end{cases}
\end{equation}
Then the online learning protocol works as follows:
(i) At the $k$-th iteration, we choose $\mB_k\in \mathcal{Z}'$, where $\mathcal{Z}' \triangleq \{\mB\in \semiS^d: \frac{\mu}{2}\mI \preceq \mB \preceq (L_1+\frac{\mu}{2}) \mI\}$;
%
%
%
(ii) We receive the loss function $\ell_k(\mB)$ defined in \eqref{eq:loss_of_Hessian};
(iii) We update our Hessian approximation to $\mB_{k+1}$.
Hence, minimizing the sum in \eqref{eq:goal} is equivalent to minimizing the cumulative loss $\sum_{k=0}^{N-1} \ell_k(\mB_k)$, which is exactly what online learning algorithms are designed for. In particular, we will show in Lemma~\ref{lem:small_loss} that the cumulative loss $\sum_{k=0}^{N-1} \ell_k(\mB_k)$ incurred by our online learning algorithm is comparable to $\sum_{k=0}^{N-1} \ell_k(\mH^*)$, where $\mH^* \triangleq \nabla^2 f(\vx^*)$ is the exact Hessian at the optimal solution $\vx^*$.
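To make the learner's feedback concrete, the per-round loss and its gradient (the quantity a gradient-based online learner would use) can be sketched as follows; the symmetrization reflects that $\mB$ ranges over symmetric matrices, and all names are illustrative.
\begin{verbatim}
import numpy as np

def hessian_approx_loss(B, s, y):
    # loss: ||y - B s||^2 / (2 ||s||^2); returns the loss and its gradient in B.
    r = y - B @ s
    loss = (r @ r) / (2.0 * (s @ s))
    G = np.outer(B @ s - y, s) / (s @ s)
    grad = 0.5 * (G + G.T)   # symmetrized, since B is a symmetric matrix
    return loss, grad
\end{verbatim}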
\begin{remark}\label{rem:constraint_on_Hessian}
By Assumption~\ref{assum:smooth_SC}, we know that $\mu\mI \preceq \nabla^2 f(\vx) \preceq L_1\mI$. Thus, it is natural to restrict $\mB_k$ to the set $\mathcal{Z} \triangleq \{\mB\in \semiS^d: {\mu}\mI \preceq \mB \preceq L_1 \mI\}$. On the other hand, this constraint is by no means mandatory, and looser bounds on the eigenvalues of $\mB_k$ would also suffice for our analysis. Hence, we exploit this flexibility and allow our algorithm to pick $\mB_k$ from a larger set $\mathcal{Z}'$, as it is easier to enforce such a constraint. %
We discuss this point in detail in Section~\ref{subsec:projection_free}.
\end{remark}
\begin{remark}
Since we have $\ell_k(\mB)=0$ when $k\notin \mathcal{B}$, we can simply keep $\mB_{k+1}$ unchanged for these iterations (cf. Line~\ref{line:Hessian_unchanged} in Algorithm~\ref{alg:Full_EQN}). With a slight abuse of notation, in the following, we relabel the indices in $\mathcal{B}$ by $t=0,1,\dots,T-1$ with $T \leq N$.
\end{remark}
Now that we have formulated the Hessian approximation update as an online learning problem, one can update $\mB_k$ by an online learning method, such as projected online gradient descent~\citep{zinkevich2003online}. This approach would indeed serve our purpose and lead to an explicit superlinear convergence rate. However, in our setting, implementing any projection-based online learning algorithm could be computationally expensive: the Euclidean projection onto the set $\mathcal{Z}'$ requires performing a full $d\times d$ matrix eigendecomposition, which typically incurs a complexity of $\mathcal{O}(d^3)$; please see Appendix~\ref{appen:ogd_regret} for more discussion.
In the following, we instead build upon a projection-free online learning algorithm proposed by~\citet{mhammedi2022efficient}.
\subsubsection{Online Learning with an Approximate Separation Oracle}\label{subsubsec:projection_free_online_learning}
To better illustrate our key idea, we take a step back and consider a general online learning problem. %
For $T$ consecutive rounds $t=0,1,\dots,T-1$, a learner chooses an action $\vx_t \in \reals^n$ from an action set and then observes a loss function $\ell_t:\reals^n \rightarrow \reals$. The goal is to minimize the regret defined by $\mathrm{Reg}_T(\vx) \triangleq \sum_{t=0}^{T-1} \ell_t(\vx_t) - \sum_{t=0}^{T-1} \ell_t(\vx)$, which is the difference between the cumulative loss of the learner and that of a fixed competitor $\vx$.
This is a standard online learning problem, but with a slight modification:
we restrict the competitor $\vx$ to be in a given competitor set~$\mathcal{C}$, while allowing the learner to choose the action $\vx_t$ from a larger set $(1+\delta)\mathcal{C} \triangleq \{ (1+\delta)\vx: \vx\in \mathcal{C} \}$ for some given $\delta>0$.
As mentioned in Remark~\ref{rem:constraint_on_Hessian}, this setup is more suitable for our Hessian approximation update framework, where the constraint on $\mB_t$ is more flexible %
(note that $\vx_t$ and $\vx$ correspond to $\mB_t$ and $\mH^*$, respectively).
Finally, without loss of generality, we can assume that $0\in \mathcal{C}$. %
We also assume that the convex set $\mathcal{C}$ is bounded and contained in the Euclidean ball $\mathcal{B}_R(0)$ for some $R>0$.
%
To solve the above online learning problem, most existing algorithms require access to an oracle that computes the Euclidean projection on the action set. However, computing the projection is computationally costly in our setting (see Appendix~\ref{appen:ogd_regret}). Unlike these projection-based methods, here
we rely on an approximate separation oracle defined below.
As we discuss in Appendix~\ref{appen:SEP}, the following $\mathsf{SEP}$ oracle can be implemented much more efficiently than the Euclidean projection.
%
%
\begin{definition}\label{def:gauge}
The oracle $\mathsf{SEP}(\vw; \delta)$ takes $\vw\in \mathcal{B}_{R}(0)$ and $\delta>0$ as input and returns a scalar $\gamma>0$ and a vector $\vs\in \reals^n$ with one of the following possible outcomes:
%
\vspace{-2mm}
\begin{itemize}
\item Case I: $\gamma \leq 1$ which implies that $\vw \in (1+\delta)\mathcal{C}$;
%
\item Case II: $\gamma>1$ which implies that $\vw/\gamma \in (1+\delta)\mathcal{C} \ $ and $\ \langle \vs, \vw-\vx \rangle \geq {\gamma-1}$ $\quad\! \forall\vx\in \mathcal{C}$.
\end{itemize}
%
%
%
%
%
\end{definition}
%
\vspace{-0mm}
In summary, the oracle $\mathsf{SEP}(\vw;\delta)$ has two possible outcomes: it either certifies that $\vw$ is approximately feasible, i.e., $\vw\in (1+\delta)\mathcal{C}$, or it produces a scaled version of $\vw$ that is in $(1+\delta)\mathcal{C}$ and gives a strict separating hyperplane between $\vw$ and the set $\mathcal{C}$. {We also note that the method in \citep{mhammedi2022efficient} relies on a different yet related oracle, and we discuss this point in detail in Appendix~\ref{appen:comparison_projection_free}. }
%
The key idea here is to introduce an auxiliary online learning problem on the larger set $\mathcal{B}_R(0)$ with surrogate loss functions $\tilde{\ell}_t (\vw) = \langle \tilde{\vg}_t,\vw\rangle$ for $0\leq t \leq T-1$, where $\tilde{\vg}_t$ is the surrogate gradient to be defined later. On a high level, we will run online projected gradient descent on this auxiliary problem to update the iterates $\{\vw_t\}_{t\geq 0}$ (note that the projection on $\mathcal{B}_R(0)$ is easy to compute), and then produce the actions $\{\vx_t\}_{t\geq 0}$ for the original problem by calling
$\mathsf{SEP}(\vw_t;\delta)$ in Definition~\ref{def:gauge}. Specifically, given $\vw_t$ at round $t$, we let $\gamma_t>0$ and $\vs_t\in \reals^n$ be the output of $\mathsf{SEP}(\vw_t;\delta)$.
%
%
If $\gamma_t\leq 1$, we are in \textbf{Case I}, where we set $\vx_t = \vw_t$, compute $\vg_t = \nabla \ell_t(\vx_t)$, and define the surrogate gradient by $\tilde{\vg}_t = \vg_t$.
Otherwise, if $\gamma_t> 1$, we are in \textbf{Case II}, where we set $\vx_t = \vw_t/\gamma_t$, compute $\vg_t = \nabla \ell_t(\vx_t)$, and define the surrogate gradient by $\tilde{\vg}_t = \vg_t+\max\{0, -\langle \vg_t, \vx_t \rangle\} \vs_t$. Note that Definition~\ref{def:gauge} guarantees $\vx_t\in (1+\delta)\mathcal{C}$ in both cases.
Finally,
%
we update $\vw_{t+1}$ using the standard online projected gradient descent with respect to the surrogate loss $\tilde{\ell}_t(\vw)$ and the set $\mathcal{B}_R(0)$:
\begin{equation*}
\vw_{t+1} = \Pi_{\mathcal{B}_R(0)}\left(\vw_t-\rho \tilde{\vg}_t\right) = \frac{R}{\max\{\|\vw_t-\rho \tilde{\vg}_t\|_2,R\}} (\vw_t-\rho \tilde{\vg}_t),
\end{equation*}
where $\rho$ is the step size.
%
%
%
%
To give some intuition, the surrogate loss functions $\{\tilde{\ell}_t (\vw)\}_{t=0}^{T-1}$ are constructed in such a way that the immediate regret $\tilde{\ell}_t{(\vw_t)}-\tilde{\ell}_t(\vx)$ serves as an upper bound on $\ell_t(\vx_t)-\ell_t(\vx)$ for any $\vx \in \mathcal{C}$.
%
%
%
%
%
%
%
%
%
Therefore, we can upper bound the regret of the original problem by that of the auxiliary problem, which
%
%
%
can be further bounded using the standard analysis for online projected gradient descent. The full algorithm is described in Algorithm~\ref{alg:projection_free_online_learning} in Appendix~\ref{appen:small_loss}.
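A minimal sketch of one round of this scheme is given below, with \texttt{sep\_oracle} and \texttt{loss\_grad} standing in for the $\mathsf{SEP}$ oracle of Definition~\ref{def:gauge} and the gradient of the current loss; both callables and all names are assumptions.
\begin{verbatim}
import numpy as np

def projection_free_round(w, loss_grad, sep_oracle, rho, R, delta):
    gamma, s = sep_oracle(w, delta)
    if gamma <= 1.0:                      # Case I: w is approximately feasible
        x = w
        g_tilde = loss_grad(x)
    else:                                 # Case II: rescale and add separation term
        x = w / gamma
        g = loss_grad(x)
        g_tilde = g + max(0.0, -float(g @ x)) * s
    w_plus = w - rho * g_tilde            # gradient step on the surrogate loss
    w_plus *= R / max(np.linalg.norm(w_plus), R)   # project onto the ball B_R(0)
    return x, w_plus
\end{verbatim}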
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\subsubsection{Projection-free Hessian Approximation Update}\label{subsec:projection_free}
Now we are ready to describe our projection-free online learning algorithm for updating $\mB_k$, which is a special case of the algorithm described in the previous section.
Recall that in our online learning problem in Section~\ref{sec:no_regret}, the competitor set is $\mathcal{Z} = \{\mB\in \semiS^d: \mu \mI \preceq \mB \preceq L_1 \mI\}$. Since the discussed projection-free scheme requires the competitor set $\mathcal{C}$ to contain the origin, we first translate and rescale $\mB$ via
the transform $\hat{\mB} \triangleq \frac{2}{L_1-\mu}\bigl(\mB-\frac{L_1+\mu}{2}\mI\bigr)$ and define $\mathcal{C} \triangleq \{\hat{\mB}\in \sS^d: -\mI \preceq \hat{\mB} \preceq \mI\} = \{\hat{\mB}\in \sS^d: \|\hat{\mB}\|_{\op}\leq 1\}$, which satisfies $0\in \mathcal{C}$ and $\mathcal{C} \subset \mathcal{B}_{\sqrt{d}}(0) = \{\mW \in \mathbb{S}^d:\|\mW\|_F \leq \sqrt{d}\}$.
It can be verified that $\mB\! \in\! \mathcal{Z} \!\iff\! \hat{\mB} \in \mathcal{C}$, and also $\mB \in \mathcal{Z}' \!\iff \!\hat{\mB} \in (1+\delta)\mathcal{C}$ with $\delta= \mu/(L_1-\mu)$.
\begin{subroutine}[!t]\small
\caption{Online Learning Guided Hessian Approximation Update}\label{alg:hessian_approx}
\begin{algorithmic}[1]
\STATE \textbf{Input:} Initial matrix $\mB_0\in \sS^d$ s.t. $\mu \mI \preceq \mB_0 \preceq L_1\mI$, step size $\rho>0$, $\delta>0$, $\{q_t\}_{t=1}^{T-1}$
\STATE \textbf{Initialize:} set $\mW_0 \leftarrow \frac{2}{L_1-\mu}(\mB_0-\frac{L_1+\mu}{2}\mI)$, $\mG_0 \leftarrow \frac{2}{L_1+\mu}\nabla \ell_0(\mB_0)$ and $\tilde{\mG}_0 \leftarrow \mG_0$
\FOR{$t=1,\dots,T-1$}
\STATE Query the oracle $(\gamma_t,\mS_t) \leftarrow \mathsf{ExtEvec}(\mW_t;\delta, q_t)$ %
\IF[Case I]{$\gamma_t \leq 1$}
%
\STATE Set $\hat{\mB}_t \leftarrow \mW_t$ and $\mB_t \leftarrow \frac{L_1-\mu}{2}\hat{\mB}_t+\frac{L_1+\mu}{2}\mI$
\STATE Set $\mG_t \leftarrow \frac{2}{L_1-\mu}\nabla \ell_t(\mB_t)$ and $\tilde{\mG}_t \leftarrow \mG_t$
%
\vspace{.2em}
\ELSE[Case II]
\vspace{.2em}
\STATE Set $\hat{\mB}_t \leftarrow \mW_t/\gamma_t$ and $\mB_t \leftarrow \frac{L_1-\mu}{2}\hat{\mB}_t+\frac{L_1+\mu}{2}\mI$
\STATE Set $\mG_t \leftarrow \frac{2}{L_1-\mu}\nabla \ell_t(\mB_t)$ and $\tilde{\mG}_t \leftarrow \mG_t+\max\{0,-\langle \mG_t, \mB_t \rangle\} \mS_t$
%
\ENDIF
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\STATE %
%
Update
$\mW_{t+1} \leftarrow \frac{\sqrt{d}}{\max\{\sqrt{d},\|\mW_t - \rho \tilde{\mG}_t\|_F\}}(\mW_t - \rho \tilde{\mG}_t)$ \COMMENT{Euclidean projection onto $\mathcal{B}_{\sqrt{d}}(0)$}
%
%
%
%
%
\ENDFOR
\end{algorithmic}
\end{subroutine}
The only remaining question is how we can build the $\mathsf{SEP}$ oracle in Definition~\ref{def:gauge} for our specific set $\mathcal{C}$.
To begin with, we observe that this is closely related to
computing the extreme eigenvalues and the associated eigenvectors of a given matrix $\mW$. In fact, let $\lambda_{\max}$ and $\vv_{\max}\in \reals^d$ be the largest magnitude eigenvalue of $\mW$ and its associated unit eigenvector, respectively. Since $\|\mW\|_\op = |\lambda_{\max}|$, it is easy to see that: (i) If $|\lambda_{\max}| \leq 1$, then $\mW \in \mathcal{C}$; (ii) Otherwise, if $|\lambda_{\max}| > 1$, then we can let $\gamma = |\lambda_{\max}|$, which satisfies $\mW/\gamma \in \mathcal{C}$, and $\mS = \sign(\lambda_{\max}) \vv_{\max}\vv_{\max}^\top \in \mathbb{S}^d$, which defines a separating hyperplane between $\mW$ and $\mathcal{C}$. Indeed, note that we have $\langle \mS,\mW\rangle = |\lambda_{\max}| = \gamma$ and $\langle \mS,\hat{\mB}\rangle \leq |\vv_{\max}^\top \hat{\mB}\vv_{\max}| \leq 1$ for any $\hat{\mB}\in \mathcal{C}$, which implies $\langle \mS,\mW-\hat{\mB}\rangle \geq \gamma -1$. Hence, we can build the separation oracle in Definition~\ref{def:gauge} if we compute $\lambda_{\max}$ and $\vv_{\max}$ for the given matrix $\mW$.
However, the exact computation of $\lambda_{\max}$ and $\vv_{\max}$ could be costly. Thus, we propose to compute the extreme eigenvalues and the corresponding eigenvectors inexactly by the randomized Lanczos method~\citep{kuczynski1992estimating}, which leads to the randomized oracle $\mathsf{ExtEvec}$ defined below. We defer the specific implementation details of $\mathsf{ExtEvec}$ to Section~\ref{appen:SEP}.
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\vspace{1mm}
\begin{definition}\label{def:extevec}
The oracle $\mathsf{ExtEvec}(\mW;\delta,q)$ takes $\mW \in \mathbb{S}^d$, $\delta>0$, and $q\in (0,1)$ as input and returns a scalar $\gamma>0$ and a matrix $\mS\in \mathbb{S}^d$ with one of the following possible outcomes:
\vspace{-4mm}
\begin{itemize}
\item Case I: $\gamma \leq 1$, which implies that, with probability at least $1-q$, $\|\mW\|_{\op} \leq 1+\delta$;
%
\item Case II: $\gamma>1$, which implies that, with probability at least $1-q$, $\|\mW/\gamma\|_{\op} \leq 1+\delta$, $\|\mS\|_F = 1$ and $\langle \mS,\mW-\hat{\mB}\rangle \geq \gamma -1$ for any $\hat{\mB}$ such that $\|\hat{\mB}\|_{\op} \leq 1$.
\end{itemize}
%
%
%
%
%
%
\end{definition}
Note that $\mathsf{ExtEvec}$ is an approximate separation oracle for the set $\mathcal{C}$ in the sense of Definition~\ref{def:gauge} (with success probability at least $1-q$), and it also guarantees that $\|\mS\|_F = 1$ in Case II. Equipped with this oracle, we describe the complete Hessian approximation update in Subroutine~\ref{alg:hessian_approx}.
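As an illustration, a stand-in for $\mathsf{ExtEvec}$ can be built from SciPy's Lanczos-based eigensolver; the sketch below ignores the $(1+\delta)$ slack and the failure probability $q$ of the inexact computation, and the costly exact alternative would use a full eigendecomposition instead.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import eigsh

def ext_evec(W):
    # Largest-magnitude eigenpair of the symmetric matrix W via Lanczos.
    lam, v = eigsh(W, k=1, which='LM')
    lam, v = float(lam[0]), v[:, 0]
    gamma = abs(lam)
    if gamma <= 1.0:                      # Case I: ||W||_op <= 1
        return gamma, None
    S = np.sign(lam) * np.outer(v, v)     # Case II: ||S||_F = 1, <S, W> = gamma
    return gamma, S
\end{verbatim}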
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\section{Complexity Analysis of QNPE}\label{sec:convergence}
By now, we have fully described our QNPE method in Algorithm~\ref{alg:Full_EQN}, where we select the step size $\eta_k$ by Subroutine~\ref{alg:ls} and update the Hessian approximation matrix $\mB_k$ by Subroutine~\ref{alg:hessian_approx}.
In the following, we shall establish the convergence rate and characterize the computational cost of QNPE.
Next, we state our main convergence result. Our results hold for any $\alpha_1, \alpha_2 \in (0,\frac{1}{2})$ and $\beta\in(0,1)$, but to simplify our expressions we report the results for $\alpha_1 = \alpha_2 = \frac{1}{4}$ and $\beta = \frac{1}{2}$.
\begin{theorem}[Main Theorem]\label{thm:main}
Let $\{\vx_k\}_{k\geq 0}$ be the iterates generated by Algorithm~\ref{alg:Full_EQN} using the line search scheme in Subroutine~\ref{alg:ls}, where $\alpha_1 = \alpha_2 = \frac{1}{4}$, $\beta = \frac{1}{2}$, and $\sigma_0 \geq \alpha_2\beta/L_1$, and the Hessian approximation update in Subroutine~\ref{alg:hessian_approx}, where $\rho = \frac{1}{18}$, $\delta = \min\{\frac{\mu}{L_1-\mu},1\}$, and $q_t = \nicefrac{p}{2.5(t+1)\log^2(t+1)}$ for $t \geq 1$. Then with probability at least $1-p$, the following statements hold:
\begin{enumerate}[(a)]
\vspace{-1mm}
\item (Linear convergence) For any $k\geq 0$, we have
%
$ \frac{\|\vx_{k+1}-\vx^*\|^2}{\|\vx_k-\vx^*\|^2 }\leq \left(1+\frac{\mu}{4L_1}\right)^{-1}$.
%
\item (Superlinear convergence) We have $\lim_{k\rightarrow \infty}\frac{\|\vx_{k+1}-\vx^*\|^2}{\|\vx_k-\vx^*\|^2 } = 0$. Furthermore, for any $k\geq 0$,
%
\begin{equation*}
\!\!\!\!\!\frac{\|\vx_k-\vx^*\|^2}{\|\vx_0-\vx^*\|^2} \leq \Biggl(1+ \frac{\sqrt{3}}{8}\mu\sqrt{{\frac{k}{L_1^2+{36}\|\mB_0-\nabla^2 f(\vx^*)\|^2_F + \left(27+
\!\frac{16L_1}{\mu}\right)\!L_2^2\|\vx_0-\vx^*\|^2}}}\Biggr)^{\!\!-k}\!\!\!\!.
\end{equation*}
%
\end{enumerate}
\end{theorem}
%
%
\noindent \textbf{Proof sketch.}
By using a simple union bound, we can show that
the $\mathsf{ExtEvec}$ oracle in Subroutine~\ref{alg:hessian_approx} is successful in all rounds with probability at least $1-p$.
%
Thus, throughout the proof, we assume that every call of $\mathsf{ExtEvec}$ is successful.
We first prove the linear convergence rate in (a). As we discussed in Section~\ref{subsec:projection_free}, Subroutine~\ref{alg:hessian_approx} ensures that $\frac{\mu}{2}\mI \preceq \mB_k \preceq (L_1+\frac{\mu}{2})\mI$ for any $k\geq 0$. Combining this with Lemma~\ref{lem:step size_lb}, we obtain the following universal lower bound on the step size $\eta_k$.
\begin{lemma}\label{lem:stepsize_const_bound}
%
For any $k\geq 0$, we have $\eta_k \geq 1/(8L_1)$.
\end{lemma}
In light of Lemma~\ref{lem:stepsize_const_bound}, the linear convergence result in (a) now follows directly from Proposition~\ref{prop:IEQN}.
Next, we prove the superlinear convergence rate in (b) by considering the following steps.
\vspace{0em}\noindent\textbf{Step 1:} Using regret analysis, we bound the cumulative loss $\sum_{t=0}^{T-1}\ell_t(\mB_t)$ incurred by our online learning algorithm in Subroutine~\ref{alg:hessian_approx}. In particular, by exploiting the smoothness of the loss function $\ell_t$, we prove a small-loss bound in the following lemma, where the cumulative loss of the learner is bounded by that of a fixed action in the competitor set \citep{srebro2010smoothness}.
\begin{lemma}
\label{lem:small_loss}
%
%
For any ${\mH}\in \mathcal{{Z}}$, we have
%
$\sum_{t=0}^{T-1} \ell_t({\mB}_t) \leq 18\|\mB_0-{\mH}\|_F^2+ 2 \sum_{t=0}^{T-1} \ell_t({\mH})$.
%
\end{lemma}
Note that in Lemma~\ref{lem:small_loss}, we have the freedom to choose any competitor $\mH$ in the set $\mathcal{Z}$. To further obtain an explicit bound, a natural choice would be $\mH^* \triangleq \nabla^2 f(\vx^*)$, which leads to our next step.
\vspace{0em}\noindent \textbf{Step 2:} We upper bound the cumulative loss $\sum_{t=0}^{T-1} \ell_t({\mH}^*)$ in the following lemma. The proof relies crucially on Assumption~\ref{assum:Hessian_Lips} as well as the linear convergence result we proved in (a).
\begin{lemma}\label{lem:comparator}
We have
%
%
$\sum_{t=0}^{T-1} \ell_t(\mH^*) \leq \left(\frac{27}{4}+\frac{4L_1}{ \mu}\right)L_2^2\|\vx_0-\vx^*\|^2$.
%
\end{lemma}
%
%
\noindent\textbf{Step 3}: Combining Lemma~\ref{lem:comparator} and Lemma~\ref{lem:small_loss}, we obtain a constant upper bound on the cumulative loss $ \sum_{t=0}^{T-1} \ell_t({\mB}_t)$. By Lemma~\ref{lem:stepsize_bnd}, this further implies an upper bound on $\sum_{k=0}^{N-1} 1/\eta_k^2$, which leads to the superlinear convergence result in (b) by Proposition~\ref{prop:IEQN} and the observation in \eqref{eq:superlinear_jensen}. $\hfill\blacksquare$
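Spelled out for the reader's convenience, the chain of bounds in Step 3 reads
\begin{align*}
\sum_{k=0}^{N-1}\frac{1}{\eta_k^2}
&\leq \frac{1}{(1-\beta^2)\sigma_0^2} + \frac{2}{(1-\beta^2)\alpha_2^2\beta^2}\sum_{t=0}^{T-1}\ell_t(\mB_t)\\
&\leq \frac{1}{(1-\beta^2)\sigma_0^2} + \frac{2}{(1-\beta^2)\alpha_2^2\beta^2}\Bigl(18\|\mB_0-\mH^*\|_F^2 + 2\sum_{t=0}^{T-1}\ell_t(\mH^*)\Bigr)\\
&\leq \frac{1}{(1-\beta^2)\sigma_0^2} + \frac{2}{(1-\beta^2)\alpha_2^2\beta^2}\Bigl(18\|\mB_0-\mH^*\|_F^2 + \Bigl(\tfrac{27}{2}+\tfrac{8L_1}{\mu}\Bigr)L_2^2\|\vx_0-\vx^*\|^2\Bigr),
\end{align*}
where the three inequalities use Lemma~\ref{lem:stepsize_bnd}, Lemma~\ref{lem:small_loss}, and Lemma~\ref{lem:comparator}, respectively; plugging the resulting constant into \eqref{eq:superlinear_jensen} (and substituting $\alpha_2=\frac{1}{4}$, $\beta=\frac{1}{2}$, $\sigma_0\geq \frac{1}{8L_1}$) yields Part (b).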
\noindent \textbf{Discussions.}
To begin with, Part (a) of Theorem~\ref{thm:main} guarantees that QNPE converges linearly and is at least as fast as gradient descent. Moreover, in Part (b) we prove Q-superlinear convergence of QNPE, where the explicit global superlinear rate is faster than the linear rate for sufficiently large~$k$. Specifically, if we define $N_\mathrm{tr} \triangleq \frac{4}{3}+\frac{48}{L_1^2}\|\mB_0-\nabla^2 f(\vx^*)\|^2_F + \left(\frac{36}{L_1^2}+\frac{64}{3\mu L_1}\right)\!L_2^2\|\vx_0-\vx^*\|^2$, then the superlinear rate can be written as $(1+\frac{\mu}{4L_1}\sqrt{\frac{k}{N_\mathrm{tr}}})^{-k}$, which is superior to the linear rate when $k \geq N_{\mathrm{tr}}$.
Moreover, we can also derive an explicit complexity bound from Theorem~\ref{thm:main}. Let $N_{\epsilon}$ denote the number of iterations required by QNPE to achieve $\epsilon$-accurate solution, i.e., $\|\vx_k-\vx^*\|^2 \leq \epsilon$. As we show in Appendix~\ref{appen:complexity},
if the error tolerance $\epsilon$ is in the regime where ${\epsilon} > \exp(-\frac{\mu}{L_1} N_\mathrm{tr})$, the linear rate in Part (a) is faster and we have $N_{\epsilon}= \bigO(\frac{L_1}{\mu}\log\frac{1}{\epsilon})$. Otherwise, if ${\epsilon} < \exp(-\frac{\mu}{L_1} N_\mathrm{tr})$, the superlinear rate in Part (b) excels and we have $N_{\epsilon}= \bigO\Bigl( \Bigl[{ \log \Bigl(1+\frac{\mu}{L_1}\left({\frac{L_1}{ N_\mathrm{tr}\mu}\log\frac{1}{\epsilon}}\right)^{\nicefrac{1}{3}}\Bigr)}\Bigr]^{-1}\log\frac{1}{\epsilon}\Bigr)$.
A couple of additional remarks about Theorem~\ref{thm:main} follow. First, the expression $\|\mB_0-\nabla^2 f(\vx^*)\|^2_F$ is bounded above by $L_1^2 d$ in the worst-case, showing that at worst $N_\mathrm{tr}$ scales linearly with the dimension $d$. On the other hand, $N_\mathrm{tr}$ could be much smaller if the initial Hessian approximation matrix $\mB_0$ is close to $\nabla^2 f(\vx^*)$. Second, Theorem~\ref{thm:main} provides a global result, as both bounds hold for any initial point $\vx_0$ and any initial Hessian approximation $\mB_0$. %
On the contrary, the existing non-asymptotic results on quasi-Newton methods in \citep{rodomanov2021new,jin2022non} require special initialization for $\mB_0$ and closeness of $\vx_0$ to the optimal solution $\vx^*$.
\subsection{Characterizing the Computational Cost}\label{subsec:computational}
As for most optimization algorithms, we measure the computational cost of our QNPE method in two aspects: the number of gradient evaluations and the number of matrix-vector product evaluations.
In particular, each backtracking step of the line search scheme in Subroutine~\ref{alg:ls} requires one call to the gradient oracle, while the implementation of $\mathsf{LinearSolver}$ in Definition~\ref{def:linear_solver} and $\mathsf{ExtEvec}$ in Definition~\ref{def:extevec} requires multiple matrix-vector products. Due to space limitations, we defer the details to Appendix~\ref{appen:computational_cost} and summarize the complexity results in the following theorem.
\begin{theorem}\label{thm:computational_cost}
Let $N_\epsilon$ denote the minimum number of iterations required by Algorithm~\ref{alg:Full_EQN} to find an $\epsilon$-accurate solution according to Theorem~\ref{thm:main}. Then, with probability at least $1-p$:
\begin{enumerate}[(a)]
\item The total number of gradient evaluations is bounded by $3N_\epsilon+\log_{1/\beta}(4\sigma_0 L_1)$.
%
%
\item The total number of matrix-vector products in $\mathsf{ExtEvec}$ and $\mathsf{LinearSolver}$ are bounded by\\ $\bigO \left(N_\epsilon \sqrt{\frac{L_1}{\mu}} \log \left(\frac{dN_{\epsilon}^2}{p^2}\right)\right)$~and~$\bigO\left(N_{\epsilon}\sqrt{\frac{L_1}{\mu}}\log \left(\frac{L_1 \|\vx_0-\vx^*\|^2}{\mu \epsilon}\right)\right)$, respectively.
%
%
%
%
%
\end{enumerate}
\end{theorem}
As a direct corollary, on average QNPE requires at most $3$ gradient evaluations per iteration if we set $\sigma_0 = 1/(4L_1)$.
%
Moreover, by summing the complexity of both $\mathsf{ExtEvec}$ and $\mathsf{LinearSolver}$, we can bound the total number of matrix-vector products by $\bigO\left(N_{\epsilon}\sqrt{\frac{L_1}{\mu}}\log\frac{L_1 N_\epsilon^2d}{\mu\epsilon}\right)$.
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\section{Conclusion}
We proposed the quasi-Newton proximal extragradient (QNPE) method for unconstrained minimization problems. We showed that QNPE converges at an explicit non-asymptotic superlinear rate of $(1+\Omega(\sqrt{k}))^{-k}$.
Moreover, if $N_{\epsilon}$ denotes the number of iterations to find an $\epsilon$-accurate solution, we showed that the number of gradient evaluations is bounded by $3N_{\epsilon}$, while the number of matrix-vector product evaluations is bounded by $\bigO(N_{\epsilon}\sqrt{\frac{L_1}{\mu}}\log\frac{L_1 N_\epsilon^2d}{\mu\epsilon})$. To the best of our knowledge, this is the first quasi-Newton method with an explicit global superlinear convergence rate.
\section*{Acknowledgements}
The research of R. Jiang, Q. Jin and A. Mokhtari is supported in part by NSF Grants 2007668, 2019844, and 2112471, ARO Grant W911NF2110226, the Machine Learning Lab (MLL) at UT Austin, and the Wireless Networking and Communications Group (WNCG) Industrial Affiliates Program.
\newpage
\printbibliography
\newpage
|
{
"arxiv_id": "2302.08671",
"language": "en",
"timestamp": "2023-02-20T02:06:08",
"url": "https://arxiv.org/abs/2302.08671",
"yymm": "2302"
} | \section{Introduction}
\label{sec-intro}
In recent years, Graph Neural Networks (GNNs) have been the state-of-the-art (SOTA) method on graph classification~\cite{zhang2018end,wang2021curgraph}, a popular task which can be applied into various domains, e.g., chemistry~\cite{gilmer2017neural}, bioinformatics~\cite{ying2018hierarchical,wang2022graph}, text~\cite{zhang2020every} and social networks~\cite{xu2018powerful,wang2021mixup,yoo2022model}.
In general, they encode the graph structure to learn node embeddings with the help of the message-passing scheme~\cite{gilmer2017neural}, i.e., aggregating messages from the connected nodes. Then, the graph representation vector can be generated with one readout operation, e.g., take the mean or summation of all node embeddings.
In the literature, two- or three-layer GNNs are widely used since the performance decreases as the network goes deeper~\cite{li2018deeper}.
However, it limits the ability of GNNs to capture the \textit{long-range dependencies},
i.e., incorporate information from long-distant neighbors~\cite{lukovnikov2021improving}, which may be useful in the graph classification task~\cite{jain2021representing,dwivedi2022long}.
For the graph classification task, the majority of methods focus on designing the pooling operations to extract the hierarchical information in the graph~\cite{lee2019self,ying2018hierarchical}.
We review the literature
from the perspective of long-range dependencies, and then observe that not all pooling operations are helpful in extracting the long-range dependencies.
The grouping-based pooling operations~\cite{ying2018hierarchical,bianchi2020spectral,yuan2019structpool} group nodes into clusters and then re-design the edges between these clusters.
These newly added edges may shorten the distance between node pairs and thus speed up feature propagation in the graph.
On the contrary, the selection-based ones~\cite{lee2019self,gao2019graph} construct the coarse graph by removing the low-ranked nodes and edges connected with them,
which means the distance between nodes will not be shortened; hence, these operations are not helpful in incorporating long-distant neighbors.
Apart from pooling operations, several methods tend to incorporate the higher-order neighbors directly.
For example,
GraphTrans~\cite{jain2021representing} adopts the transformer module which allows interactions between all node pairs. Similarly, \cite{pham2017graph} introduces a virtual node that is connected to all nodes, and \cite{alon2020bottleneck} designs one fully adjacent (FA) aggregation layer in which all nodes are connected directly.
Despite the popularity of these methods in graph classification, they are deficient when capturing long-range dependencies on graphs. The aforementioned methods update the graph structure and may result in information loss in graph structure learning, either by generating the coarse graph or connecting to higher-order neighbors.
Considering this, we revisit the stacking-based GNNs, which is a direct approach to obtaining the long-range dependencies while keeping the graph structures unchanged.
In general, scaling the model depth is the most common way to improve the model performance~\cite{tan2019efficientnet,krizhevsky2017imagenet,simonyan2014very,he2016deep}, and deeper stacking-based GNNs further bring a larger receptive field to incorporate longer-distant neighbors.
Nevertheless, the major concern to utilize the deeper GNNs is the over-smoothing problem~\cite{li2018deeper,xu2018representation},
i.e., the connected nodes tend to have similar features as the network goes deeper, which results in a performance drop in the node classification task.
Yet, in this paper, we show through both theoretical analysis and experimental results that this problem has a smaller influence on the graph classification task than on the node classification task. Therefore, designing stacking-based GNNs is indeed a feasible solution to capture the long-range dependencies.
Motivated by this, we propose a novel method LRGNN (\underline{L}ong-\underline{R}ange \underline{G}raph \underline{N}eural \underline{N}etworks), and employ the stacking-based GNNs to capture the long-range dependencies in graphs.
There are two aspects that will affect the utilization of the long-distant neighbors:
(a) sufficient model depth is required to incorporate the longer-distant neighbors; (b) adaptive skip-connection schemes are required considering the information mixing from different ranges of neighbors.
The former is correlated with the inter-layer connections between consecutive layers, while the latter is related to the connections between non-consecutive layers.
Therefore, designing effective stacking-based GNNs to capture the long-range dependencies can be achieved by designing the inter-layer connections adaptively.
We adopt the widely used neural architecture search (NAS) methods to achieve this.
To be specific, we provide one framework to design the inter-layer connections, which contains a set of learnable connections in the directed acyclic graphs (DAG). Two candidate choices are provided to represent the ``used'' and ``unused'' states in learnable connections, and then designing the inter-layer connections is transformed into deciding which choice should be adopted in each learnable connection.
Then, the search space of LRGNN can be constructed based on the combinations of those connection choices, whose number grows exponentially as the network goes deeper.
Considering the search efficiency, we further provide one cell-based framework that only enables the connection designs in each cell, on top of which two variants LRGNN-Diverse and LRGNN-Repeat can be constructed.
The differentiable search algorithm is adopted in this paper to enable the adaptive architecture design.
In the experiments, we first evaluate the rationality of LRGNN by showing the higher performance achieved with sufficient model depth and various skip-connection schemes.
Then, the extensive experimental results demonstrate that the proposed LRGNN can achieve the SOTA performance by designing GNNs with the inter-layer connections adaptively, on top of which the effectiveness of the proposed method can be verified.
To summarize, our contributions are as follows:
\begin{itemize}
\item We evoke the importance of stacking-based GNNs in extracting the long-range dependencies, and verify the smaller influence of the over-smoothing problem on graph classification, on top of which two design needs are provided for designing stacking-based GNNs, i.e., sufficient GNN depth and adaptive skip-connection schemes.
\item To meet these two design needs, we first unify them into the design of inter-layer connections in stacking-based GNNs, and then achieve this design with the help of NAS.
\item We conduct extensive experiments on five datasets from different domains, and the proposed LRGNN achieves the SOTA performance by designing GNNs with sufficient model depth and adaptive skip-connection schemes.
\end{itemize}
\noindent\textbf{Notations.}
We represent a graph as $\mathcal{G} =(\mathcal{V}, \mathcal{E}) $,where $\mathcal{V}$ and $\mathcal{E}$ represent the node and edge sets. $\textbf{A} \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}|}$ is the adjacency matrix of this graph where $|\mathcal{V}|$ is the node number. $\mathcal{N}(u)$ is the neighbors of node $u$. $\textbf{H} \in \mathbb{R}^{|\mathcal{V}| \times d}$ is the node feature matrix and $d$ is the feature dimension, and $\textbf{h}_u$ is the feature representation of node $u$.
\section{Related Work: Graph neural network~(GNN)}
\label{sec-pre-mpnn}
GNNs have advantages in encoding the graph structure information with the help of the message-passing scheme~\cite{gilmer2017neural},
i.e., aggregating the messages from connected nodes. It can be represented as $\textbf{h}_v = \textbf{W}(\textbf{h}_v, \text{AGGR}\{\textbf{h}_u, u \in \mathcal{N}(v)\})$,
where $\textbf{W}$ is the learnable parameter, \text{AGGR} is the aggregation function used in this aggregation operation. Diverse aggregation operations are proposed and widely used in graph representation learning~\cite{kipf2016semi,hamilton2017inductive,velivckovic2017graph,chen2020simple}.
Based on these aggregation operations, one GNN can be constructed by stacking these aggregation operations.
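As a small illustration of this scheme, a single mean-aggregation layer can be sketched as below (a toy NumPy version with illustrative names; the combine function $\textbf{W}$ is instantiated as a linear map over the concatenated self and neighbor features, which is one possible choice).
\begin{verbatim}
import numpy as np

def gnn_layer(A, H, W):
    # A: (n, n) adjacency matrix, H: (n, d) node features, W: (2d, d') weights.
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    msg = (A @ H) / deg                          # mean over connected neighbors
    return np.maximum(np.concatenate([H, msg], axis=1) @ W, 0.0)   # ReLU
\end{verbatim}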
\subsection{Long-range Dependency Extraction in Graph Classification }
\label{sec-related-longrange}
In the graph classification task, existing literature can be grouped into three categories according to their methods in extracting the long-range dependencies, i.e., designing pooling operations, incorporating the higher-order neighbors, and stacking GNN layers.
\subsubsection{Designing pooling operations}
\label{sec-related-pooling}
Pooling operations are widely used in the graph classification task, and they aim to generate one coarse graph to extract the hierarchical information.
They can be classified into two groups, i.e., the selection-based ones and the grouping-based ones.
The selection-based methods~\cite{lee2019self,gao2019graph} focus on evaluating the node importance.
The top-ranked nodes are preserved, and then they construct the coarse graph by dropping the edges connected with the un-selected nodes.
On the contrary, the grouping-based methods~\cite{ying2018hierarchical,bianchi2020spectral,yuan2019structpool} aim to group nodes into several clusters based on their similarities.
They first design one assignment matrix, on top of which the cluster features and new edges are constructed.
Despite the success of these methods in the graph classification task, not all pooling operations help obtain the long-range dependencies in the graph.
The selection-based operations drop the edges connected with the un-selected nodes, and then the shortest path between node pairs will not decrease. Therefore, these pooling operations cannot speed up feature propagation in the graph.
The grouping-based pooling operations reconstruct the edges in the coarse graph, and these edges may shorten the distance between the node pairs.
\subsubsection{Incorporating higher-order neighbors}
\label{sec-related-higherorder}
The general aggregation operation only propagates messages from the connected neighbors.
By connecting nodes with higher-order neighbors,
the long-range dependencies can be obtained with fewer aggregation operations.
In the graph classification task, \cite{pham2017graph} provides the virtual node which is connected with all the other nodes, on top of which the distance between any node pair is at most two.
\cite{alon2020bottleneck} provides a fully-adjacent layer at the end of GNNs, in which every pair of nodes are connected with an edge.
Transformer modules~\cite{vaswani2017attention} allow each node to communicate with all other nodes, and existing methods design diverse positional encoders to learn graph structures.
For example, GraphTrans~\cite{jain2021representing} uses the stacked GNNs to encode the graph structure.
Graphormer~\cite{ying2021transformers} designs three encoders to embed the node centrality, node pairs, and edge features separately, and then applies the Transformer to these node sets.
\subsubsection{Stacking GNN layers}
\label{sec-related-stacking}
Apart from designing specific operations to obtain the long-range dependencies, stacking more GNN layers can incorporate the message from longer-distant neighbors. To be specific, each aggregation operation aggregates messages from the connected neighbors, on top of which the receptive field of each node can be expanded one hop away. Therefore, for one $k$-layer stacking GNN, each node can incorporate the messages from its $k$-hop neighbors.
Thus, to extract the long-range dependencies which may be important in the graph classification task, one GNN can stack more aggregation operations.
In the graph classification task, GIN~\cite{xu2018powerful} aims to design powerful aggregation operations, and then five layers are applied in the GNN. DGCNN~\cite{zhang2018end} stacks three graph convolution layers and then ranks nodes based on the node features, on top of which the graph representations can be generated from the top-ranked ones.
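As a toy check of the receptive-field argument above (an illustrative sketch, not taken from any of the cited methods), planting a feature at node 0 of a path graph shows that node $k$ is reached only after $k$ aggregation layers.
\begin{verbatim}
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0      # path graph 0-1-2-3-4-5
H = np.zeros((n, 1))
H[0] = 1.0                                # signal only at node 0
for k in range(1, n):
    H = H + A @ H                         # one aggregation per stacked layer
    print(k, "layers -> farthest reached node:", np.flatnonzero(H[:, 0]).max())
\end{verbatim}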
\subsection{Graph Neural Architecture Search}
\label{sec-related-gnas}
Researchers have tried to automatically design GNN architectures via neural architecture search (NAS)~\cite{wang2022automated}.
The majority of these methods focus on designing the aggregation layers in GNNs, e.g., GraphNAS~\cite{gao2019graphnas} and GraphGym~\cite{you2020design} provide diverse dimensions and candidate operations to design the GNN layers. Besides, SANE~\cite{zhao2021search}, SNAG~\cite{zhao2020simplifying}, F2GNN~\cite{wei2022designing} and AutoGraph~\cite{li2020autograph} design the skip-connections based on the stacked aggregation operations.
Policy-GNN~\cite{lai2020policy} and NWGNN~\cite{wang2022nwgnn} aim to design the GNN depth.
Apart from these methods designed for the node classification task, NAS-GCN~\cite{jiang2020graph} learns adaptive global pooling functions additionally, and PAS~\cite{wei2021pooling} is proposed to design global and hierarchical pooling methods adaptively for the graph classification task.
Despite the success of these methods, they are usually shallow, e.g., they generally use two- or three-layer GNNs.
DeepGNAS~\cite{feng2021search} designs deep architectures in the block and architecture stages for the node classification task.
As to the search algorithm,
differentiable search algorithms are preferred in recent years~\cite{liu2018darts} considering the search efficiency. The discrete search space is relaxed into continuous with one relaxation function~\cite{liu2018darts,xie2018snas}, on top of which the gradient descent can be applied.
More graph neural architecture search methods can be found in~\cite{zhang2021automated,wang2022automated,wang2021autogel,wang2022profiling}. Compared with existing methods which use shallow GNNs or only design deep GNNs in the node classification task, LRGNN evokes the importance of deep stacking-based GNNs in the graph classification task, and the rationality has been justified in Section~\ref{sec-method-twoneeds}.
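To illustrate the differentiable relaxation mentioned above, the sketch below mixes the candidate choices of a learnable connection with softmax weights, so that the architecture parameters can be optimized by gradient descent; the identity/zero candidates are only stand-ins for the ``used''/``unused'' states, and all names are hypothetical.
\begin{verbatim}
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def mixed_edge(x, candidate_ops, alpha):
    # Continuous relaxation: output = softmax-weighted sum of candidate choices.
    weights = softmax(alpha)
    return sum(w * op(x) for w, op in zip(weights, candidate_ops))

candidates = [lambda x: x, lambda x: np.zeros_like(x)]   # "used" / "unused"
out = mixed_edge(np.ones(4), candidates, alpha=np.array([0.3, -0.1]))
\end{verbatim}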
\section{Method}
In this section, we first show the feasibility of employing the stacking-based GNNs to extract the long-range dependencies, on top of which two design needs for these GNNs are provided. Then, we will introduce the proposed method LRGNN.
\subsection{The Feasibility of Stacking-based GNNs}
\label{sec-method-twoneeds}
For the graph classification task, one GNN aims to learn one representation vector for each input graph to predict the graph label, and this prediction is related to the interactions between long-distance node pairs~\cite{jain2021representing}, e.g., counting local substructures~\cite{dwivedi2022long}. Therefore, how to effectively embrace long-range dependencies is the key factor in designing neural networks.
\label{sec:method}
However, existing methods mentioned in Section~\ref{sec-related-pooling} and \ref{sec-related-higherorder} are deficient in capturing long-range dependencies. These methods update the graph structures either by generating the coarse graph or connecting to higher-order neighbors, which may result in insufficient discriminability, i.e., graph structures that were discriminative may no longer be distinguishable. For example, the pooling operations may generate the same coarse graph based on two distinguishable graphs, and the instances are provided in Appendix~\ref{sec-appendix-pooling-illustrations}.
Considering the deficiency of these methods, we turn to the stacking-based GNNs and search for better solutions to capture long-range dependencies.
In general, scaling the model depth is the common way to improve the model performance in computer vision~\cite{tan2019efficientnet,he2016deep}, and deeper GNNs can enlarge the receptive field, which enables the extraction of longer-range dependencies.
However, the over-smoothing problem is hard to evade in deep GNNs~\cite{li2018deeper,xu2018representation}, i.e., the connected nodes will have similar representations as the network becomes deeper, which results in a performance drop on the node classification task. This problem hinders the development of deeper GNNs, and two- or three-layer shallow GNNs are widely used.
However, as shown in Proposition~\ref{prop-smooth}, we theoretically justify that the over-smoothing problem has a smaller influence on the graph classification task than on the node classification task. The proof is provided in Appendix~\ref{sec-appendix-prop2}.
Therefore, it is one potential solution to obtaining the long-range dependencies by stacking sufficient GNN layers.
In this paper, we extract the long-range dependencies by stacking sufficient GNN layers, as mentioned in Section~\ref{sec-related-stacking}.
Compared with the aforementioned methods, this choice preserves the graph structure information in the computational graph without modifying the graph structure.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\linewidth]{./fig/example}
\vspace{-10pt}
\caption{(a): The illustrations of two molecule graphs ($C_{12}H_{26}$) where the green node is \textbf{carbon} and the orange node is \textbf{hydrogen}.
(b): We visualize the computation graph used in the 2-layer GNNs for each node in the table.
The two graphs have the same subtree distribution and thus lead to the same graph representation.
(c): Especially, we visualize the computation graph for node 2 in $\mathcal{G}_2$ used in the 4-layer GNN. It contains $S1$ and $S3$ simultaneously, which is unique in $\mathcal{G}_2$. }
\label{fig-long-range-example}
\vspace{-10pt}
\end{figure}
\begin{prop}
Let $\mathcal{G}_1$ and $\mathcal{G}_2$ be two different graphs that satisfy: (a) they have no bipartite components, and (b) at least one of their eigenvalues or eigenvectors differs.
Then, for any learnable parameter $\textbf{W}$, $(I-L_1^{sym})^{\infty}\textbf{W} \neq (I-L_2^{sym})^{\infty}\textbf{W}$, where $L_1^{sym}$ and $L_2^{sym}$ are the symmetric Laplacian matrices of $\mathcal{G}_1$ and $\mathcal{G}_2$, respectively.
\label{prop-smooth}
\end{prop}
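As a purely illustrative sketch of Proposition~\ref{prop-smooth} (our own toy example; the two graphs, the number of propagation steps, and all variable names are hypothetical and not part of the proposed method), one can propagate a shared weight matrix with $(I-L^{sym})=D^{-1/2}AD^{-1/2}$ many times and observe that the limits of two non-bipartite graphs with different spectra remain distinct:
\begin{verbatim}
import numpy as np

def norm_adj(A):                     # I - L_sym = D^{-1/2} A D^{-1/2}
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

# Two small connected, non-bipartite toy graphs with different spectra:
# G1: triangle with a pendant node; G2: K4 minus one edge.
A1 = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], float)
A2 = np.array([[0,1,1,1],[1,0,1,1],[1,1,0,0],[1,1,0,0]], float)

P1 = np.linalg.matrix_power(norm_adj(A1), 100)   # approximates the infinite limit
P2 = np.linalg.matrix_power(norm_adj(A2), 100)

W = np.random.RandomState(0).randn(4, 2)         # shared learnable parameters W
print(np.allclose(P1 @ W, P2 @ W))               # False: the limits still differ
\end{verbatim}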
\subsection{Design Needs for Stacking-based GNNs}
When using the stacking-based GNNs to capture the long-range dependencies, two aspects affect the utilization of long-distant neighbors, i.e., the GNN depth and the skip-connection schemes. The former reflects the longest distance one node can access, and the latter is related to
the information mixing from different ranges of neighbors.
Therefore, to effectively capture the long-range dependencies, these two design needs should be considered when designing the stacking GNNs in this paper.
\subsubsection{Depth needs to be sufficient.}
\label{sec-sufficient-depth}
By stacking more aggregation operations, deeper GNNs can incorporate interactions from more distant neighbors, which affect the prediction results. We theoretically characterize the discriminative power, i.e., how well GNNs of different depths can distinguish non-isomorphic graphs, in Proposition~\ref{prop-long-range}.
The proof is provided in Appendix~\ref{sec-appendix-prop1}. Taking the graphs shown in Figure~\ref{fig-long-range-example} (a) as an example,
the key difference between these two graphs lies in the graph structures around nodes 0 and 11, and the two graphs can be distinguished by the 1-WL test in the $4$-th iteration.
As shown in Figure~\ref{fig-long-range-example} (b), a two-layer GNN cannot distinguish the two graphs since the same computation graphs are obtained in both molecular graphs, which leads to identical graph representation vectors.
However, as shown in Figure~\ref{fig-long-range-example} (c), the visualized computation graph of node $2$ in $\mathcal{G}_2$ contains $S1$ and $S3$ simultaneously, which cannot be achieved in $\mathcal{G}_1$. Consequently, a four-layer GNN can distinguish these two graphs.
In line with the 1-WL test, deeper GNNs are more expressive than shallower ones, yet two- or three-layer GNNs are widely used in the graph classification task. Therefore, it is necessary to enlarge the model depth when capturing the long-range dependencies in graph classification.
\noindent\begin{prop}
For any two non-isomorphic graphs $\mathcal{G}_1$ and $\mathcal{G}_2$ that can be distinguished by the first-order Weisfeiler-Lehman test in the $k$-th iteration, an aggregation-based $L$-layer GNN $\mathcal{A}_L : \mathcal{G}\rightarrow\mathbb{R}^d$ satisfies the following:
\begin{align}
& \mathcal{A}_L(\mathcal{G}_1) \neq \mathcal{A}_L(\mathcal{G}_2), L \geq k, \nonumber \\
& \mathcal{A}_l(\mathcal{G}_1) = \mathcal{A}_l(\mathcal{G}_2), \forall \, 0 \leq l< k. \nonumber
\end{align}
\label{prop-long-range}
\vspace{-10pt}
\end{prop}
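The following minimal sketch (our own illustration, not code from the paper) runs the 1-WL color refinement behind Proposition~\ref{prop-long-range} and reports the first iteration at which the color histograms of two graphs differ, i.e., the depth an aggregation-based GNN needs to reach to tell them apart:
\begin{verbatim}
from collections import Counter

def wl_distinguish_iteration(adj1, adj2, max_iter=10):
    """First 1-WL iteration whose color histograms differ, or None."""
    graphs = (adj1, adj2)
    colors = [{v: 0 for v in g} for g in graphs]
    for it in range(1, max_iter + 1):
        # signature = (own color, sorted multiset of neighbor colors)
        sigs = [{v: (c[v], tuple(sorted(c[u] for u in g[v]))) for v in g}
                for g, c in zip(graphs, colors)]
        palette = {s: i for i, s in enumerate(
            sorted(set(sigs[0].values()) | set(sigs[1].values())))}
        colors = [{v: palette[s[v]] for v in g} for g, s in zip(graphs, sigs)]
        if Counter(colors[0].values()) != Counter(colors[1].values()):
            return it
    return None

# Hypothetical toy graphs with identical degree sequences:
# a path on five nodes vs. a triangle plus a disjoint edge.
path5 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
tri_edge = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]}
print(wl_distinguish_iteration(path5, tri_edge))  # 2: a 2-layer GNN suffices here
\end{verbatim}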
\begin{figure}[ht]
\centering
\includegraphics[width=0.85\linewidth]{./fig/diameter_connection3}
\vspace{-10pt}
\caption{The performance comparisons with GNNs in different model depth and skip-connection schemes. Each dataset has a different number of graphs in diameter of $5$ and $14$. Graph label is determined by its diameter. The details are provided in Appendix~\ref{sec-appendix-adaptive-exp}.}
\vspace{-10pt}
\label{fig-diameter-connection}
\end{figure}
\subsubsection{Skip-connection needs to be adaptive.}
\label{sec-adaptive-connection}
Based on Proposition~\ref{prop-long-range}, different graph pairs may require different numbers of 1-WL iterations $k$ to be distinguished. Therefore, it is unreasonable to incorporate distant neighbors in only one way, i.e., only from $k$ hops away, when handling the graphs in a dataset.
In stacking-based GNNs, each node expands its receptive field as the network goes deeper, and designing skip-connection schemes can make up for this deficiency since information extracted from different ranges of neighbors can be mixed through the skip-connections, which enriches the extracted information.
As shown in Figure~\ref{fig-diameter-connection}, we design six GNNs with different model depths and skip-connection schemes, and then evaluate them on three datasets that have different graph diameter distributions. We observe that: (a) deeper GNNs achieve higher performance in general, which demonstrates the importance of sufficient large layer numbers in the graph classification task; (b) different skip-connection schemes result in diverse performances in each dataset, and they also have different ranks. Therefore, it is a necessity to design the skip-connections data-specifically~\cite{zhao2020simplifying,zhao2021search}.
\subsection{Designing Inter-layer Connections in Stacking-based GNNs}
\label{sec-design-space}
When using the stacking-based GNNs to extract the long-range dependencies, the model depth and the skip-connection schemes should be considered. These two design needs can be unified as designing inter-layer connections: the depth corresponds to the connections between consecutive layers, and the skip-connections correspond to the connections between inconsecutive layers.
To effectively extract the long-range dependencies, we adopt NAS to design the inter-layer connections in GNNs, on top of which the GNN depth and skip-connection schemes will be obtained adaptively.
\begin{figure}[ht]
\centering
\vspace{-10pt}
\includegraphics[width=0.95\linewidth]{./fig/cell_framework}
\vspace{-10pt}
\caption{(a) GNN framework used to design the inter-layer connections.
(b) Cell-based GNN architecture. Each cell contains $B/C$ ($B=8 $ and $ C=2$ for example) aggregation operations and a post-processing operation.
(c) One architecture design example.
For aggregation operation 3, the connection from operation $0$ is unused,
and the two inputs from operations $1$ and $2$ are merged with the \texttt{SUM} operation, after which aggregation operation $3$ is applied.}
\label{fig-cell-framework}
\vspace{-10pt}
\end{figure}
\subsubsection{Architecture framework}
As shown in Figure~\ref{fig-cell-framework} (a), we provide a unified framework to design the inter-layer connections in GNNs,
which is constructed with an ordered sequence of $B$ aggregation operations (e.g., $B=4$ with the GCN~\cite{kipf2016semi} operation).
Motivated by GraphGym~\cite{you2020design}, we provide one pre-processing operation $0$ and one post-processing operation $B+1$, each of which is one two-layer MLP (Multilayer Perceptron) to support the integration in the following procedures.
At the end of this framework, one readout operation is provided to learn the graph representations.
By deciding whether to use these inter-layer connections, i.e., the dashed lines shown in the framework, we can obtain GNNs with different depths and skip-connections to extract the long-range dependencies.
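To make the framework concrete, the following is a minimal PyTorch-style sketch (our own simplification: dense normalized adjacency, a plain GCN-style aggregation, summation as the only merge, and a global mean readout; all names are hypothetical):
\begin{verbatim}
import torch
import torch.nn as nn

class StackFramework(nn.Module):
    """Pre-process -> B aggregation ops with inter-layer connections
       -> post-process -> readout (a simplified sketch)."""
    def __init__(self, in_dim, hid, num_classes, B=4):
        super().__init__()
        self.B = B
        self.pre  = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(), nn.Linear(hid, hid))
        self.aggs = nn.ModuleList([nn.Linear(hid, hid) for _ in range(B)])
        self.post = nn.Sequential(nn.Linear(hid, hid), nn.ReLU(), nn.Linear(hid, hid))
        self.cls  = nn.Linear(hid, num_classes)
        # conn[j-1][i] in {0,1}: operation j uses the output of operation i (i < j);
        # all "ON" here, whereas LRGNN learns these switches.
        self.conn = [[1] * j for j in range(1, B + 2)]

    def forward(self, A_hat, X):        # A_hat: normalized adjacency, X: node features
        outs = [self.pre(X)]            # output of operation 0 (pre-processing)
        for j in range(1, self.B + 1):
            h_in = sum(c * h for c, h in zip(self.conn[j - 1], outs))   # SUM merge
            outs.append(torch.relu(A_hat @ self.aggs[j - 1](h_in)))    # GCN-style AGG
        h = self.post(sum(c * h for c, h in zip(self.conn[self.B], outs)))
        return self.cls(h.mean(dim=0))  # global mean readout -> graph logits
\end{verbatim}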
\subsubsection{Designing search space based on the framework.}
We provide a set of candidates to construct the search space.
As shown in Figure~\ref{fig-cell-framework} (a), the connections among different operations (the dashed lines) are learnable, and each of them acts as an ``on-off switch'' when utilizing features, i.e., only ``used'' and ``unused'' states exist. Therefore, we provide two candidate choices to represent these two states on each learnable connection in the framework, i.e., $\mathcal{O}_c=\{$\texttt{ON}, \texttt{OFF}$\}$, which can be represented as $o(\textbf{H})=\textbf{H}$ and $o(\textbf{H})=\textbf{0}$, respectively.
For aggregation operations and the post-processing operation, more than one connection may be used.
A straightforward way is to add up these selected features. To further improve the model expressiveness, we provide one merge operation to incorporate the selected features.
Based on the literature, we provide six candidate merge operations: concatenation, LSTM cell, attention mechanism, summation, average and maximum, denoted as $\mathcal{O}_m=\{$\texttt{CONCAT}, \texttt{LSTM}, \texttt{ATT}, \texttt{SUM}, \texttt{MEAN}, \texttt{MAX}$\}$.
For the readout operation, we provide seven candidates to obtain the graph representation vector, denoted as $\mathcal{O}_r=\{$\texttt{GMEAN}, \texttt{GMAX}, \texttt{GSUM}, \texttt{GSORT}, \texttt{GATT}, \texttt{SET2SET}, \texttt{MEMA}$\}$. These candidates represent three simple global mean, max and sum functions, and four readout operations derived from \cite{zhang2018end}, \cite{li2015gated}, \cite{vinyals2015order} and \cite{lee2019self}, respectively.
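For illustration, simplified (parameter-free) versions of several merge and readout candidates are sketched below (our own code; the \texttt{LSTM} and \texttt{ATT} candidates additionally require learnable modules and are omitted):
\begin{verbatim}
import torch

def merge(name, feats):          # feats: list of [num_nodes, dim] tensors
    stacked = torch.stack(feats, dim=0)
    if name == "SUM":    return stacked.sum(dim=0)
    if name == "MEAN":   return stacked.mean(dim=0)
    if name == "MAX":    return stacked.max(dim=0).values
    if name == "CONCAT": return torch.cat(feats, dim=-1)  # usually followed by a linear layer
    raise ValueError(name)

def readout(name, H):            # H: [num_nodes, dim] -> graph-level vector
    return {"GSUM": H.sum(0), "GMEAN": H.mean(0), "GMAX": H.max(0).values}[name]
\end{verbatim}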
As shown in Figure~\ref{fig-cell-framework} (a), the operations used in each learnable connection are designed independently, so GNNs with different depths and skip-connection schemes can be obtained. Meanwhile, designing the merge and readout operations can further improve the expressiveness of GNNs~\cite{xu2018powerful}. This variant is denoted as LRGNN-Full in this paper.
As mentioned in Section~\ref{sec-sufficient-depth}, the GNN depth should be sufficiently large, which implies a large number of aggregation operations in the framework. The search space size then grows exponentially with the number of aggregation operations.
Considering the search efficiency, we adopt the cells which are widely used in NAS~\cite{liu2018darts,pham2018efficient} and provide one cell-based framework as shown in Figure~\ref{fig-cell-framework} (b).
Each cell can be treated as one basic unit to design the stacking-based GNNs.
$C$ cells are provided and then these $B$ aggregation operations are assigned to the cells equally.
The operations in each cell can be learned independently, in which case we obtain different cells with diverse connection schemes and merge operations in the searched GNN; this variant is dubbed LRGNN-Diverse. On the contrary, we can share the operations among cells, so that the same cell is reused in the searched GNN; this variant is dubbed LRGNN-Repeat in this paper.
\subsubsection{Architecture search from the search space.}
The architecture design can be formulated as the bi-level optimization problem as shown in the following:
\begin{align}
\min\nolimits_{\bm{\alpha} \in \mathcal{A}} & \;
\mathcal{L}_{\text{val}} (\textbf{W}^*(\bm{\alpha}), \bm{\alpha}),
\nonumber
\\
\text{\;s.t.\;} \textbf{W}^*(\bm{\alpha})
& = \argmin\nolimits_\textbf{W} \mathcal{L}_{\text{train}}(\textbf{W}, \bm{\alpha}),
\nonumber
\end{align}
$\mathcal{A}$ represents the search space, and $\bm{\alpha}$ represents a neural architecture in $\mathcal{A}$. $\textbf{W}$ represents the parameters of a model from the search space, and $\textbf{W}^*(\bm{\alpha})$ represents the corresponding operation parameters after training. $\mathcal{L}_{\text{train}}$ and $\mathcal{L}_{\text{val}}$ represent the training and validation loss, respectively.
In this paper, $\bm{\alpha}=\{\bm{\alpha}_c, \bm{\alpha}_m, \bm{\alpha}_r\}$ has three components that need to be designed.
As shown in Figure~\ref{fig-cell-framework} (c), $\bm{\alpha}^{(c,i,j)}_c \in \mathbb{R}^{|\mathcal{O}_c|}$ represents the learnable connection between operations $i$ and $j$ in the $c$-th cell,
and $\bm{\alpha}^{(c,i)}_m\in \mathbb{R}^{|\mathcal{O}_m|}$ represents the merge operation used in aggregation operation $i$ of the $c$-th cell.
The LRGNN-Full variant adopts $c=1$ only, while LRGNN-Repeat shares the parameters $\bm{\alpha}^{(i,j)}_c$ and $\bm{\alpha}^{(i)}_m$ across cells.
We employ the differentiable search method, which is widely used in NAS~\cite{liu2018darts,xie2018snas}, and adopt the Gumbel-Softmax to relax the search space directly, as shown in the following:
\begin{align}
\label{eq-gumble-softmax}
c_k &=g(\mathcal{O},\bm{\alpha}, k)=\frac{\exp((\log\bm{\alpha}_k + \textbf{G}_k)/\lambda)}{\sum_{j=1}^{\left| \mathcal{O}\right|} \exp((\log\bm{\alpha}_j + \textbf{G}_j)/\lambda)},
\end{align}
$\textbf{G}_k=-\log(-\log(\textbf{U}_k ))$ is a Gumbel random variable, $\textbf{U}_k$ is a uniform random variable, and $\lambda$ is the softmax temperature.
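A minimal sketch of this relaxation (our own code; the built-in \texttt{torch.nn.functional.gumbel\_softmax} offers similar functionality on logits):
\begin{verbatim}
import torch

def gumbel_softmax_weights(alpha, lam=1.0):
    """Relaxed one-hot weights c_k; alpha holds positive architecture weights."""
    U = torch.rand_like(alpha)
    G = -torch.log(-torch.log(U))                 # Gumbel(0, 1) noise
    return torch.softmax((torch.log(alpha) + G) / lam, dim=-1)

alpha_c = torch.tensor([0.7, 0.3])                # e.g., weights for {ON, OFF}
print(gumbel_softmax_weights(alpha_c, lam=0.5))   # sums to 1; sharpens as lam -> 0
\end{verbatim}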
As shown in Figure~\ref{fig-cell-framework} (c), taking aggregation operation $3$ as an example, the output can be represented as
\begin{align}
\label{eq-connection-results}
\textbf{H}^{(i,3)} &= \sum_{k=1}^{|\mathcal{O}_c|} c_ko_k(\textbf{H}^{i}), c_k = g(\mathcal{O}_c, \bm{\alpha}^{(1,i,3)}_c, k), \\
\label{eq-merge-results}
\textbf{H}^3_{in} &= \sum_{k=1}^{|\mathcal{O}_m|} c_ko_k(\{\textbf{H}^{(i,3)}| 0\leq i \leq 2\}), c_k = g(\mathcal{O}_m, \bm{\alpha}^{(1,3)}_m,k),\\
\label{eq-agg-results}
\textbf{H}^3 &= \text{AGG}(\textbf{A}, \textbf{H}^3_{in}).
\end{align}
$\textbf{H}^i$ is the output feature matrix of operation $i$, and $\textbf{H}^{(i,3)}$ represents the result collected through the connection from operation $i$ to operation $3$. These selected features are then merged into $\textbf{H}^3_{in}$, on which one aggregation operation \text{AGG} is applied to generate the output of operation $3$, i.e., $\textbf{H}^3$.
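A sketch of this mixed computation (our own simplification that keeps only \texttt{SUM}, \texttt{MEAN} and \texttt{MAX} as merge candidates and uses a dense GCN-style aggregation; all names are hypothetical):
\begin{verbatim}
import torch

def gs_weights(alpha, lam=0.5):                   # Gumbel-Softmax relaxation
    G = -torch.log(-torch.log(torch.rand_like(alpha)))
    return torch.softmax((torch.log(alpha) + G) / lam, dim=-1)

def mixed_op(H_list, alpha_c_list, alpha_m, A_hat, W_agg, lam=0.5):
    """H_list: outputs of previous operations; alpha_c_list: {ON, OFF} weights
    per incoming connection; alpha_m: weights over 3 merge candidates."""
    selected = []
    for H_i, a_c in zip(H_list, alpha_c_list):    # weighted ON/OFF mixture
        c = gs_weights(a_c, lam)
        selected.append(c[0] * H_i + c[1] * torch.zeros_like(H_i))
    stacked = torch.stack(selected, 0)            # mixture over merge candidates
    merged = [stacked.sum(0), stacked.mean(0), stacked.max(0).values]
    c_m = gs_weights(alpha_m, lam)
    H_in = sum(w * m for w, m in zip(c_m, merged))
    return torch.relu(A_hat @ H_in @ W_agg)       # GCN-style aggregation
\end{verbatim}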
The graph representation can be generated as shown in the Alg.~\ref{algo-graph-representation} in Appendix~\ref{sec-appendix-algos}.
We optimize the parameters $\bm{\alpha}$ and $\textbf{W}$ with gradient descent, and the details are provided in Alg.~\ref{algo-optimization} in Appendix~\ref{sec-appendix-algos}. After the training is finished, we obtain the searched GNNs by preserving the operations with the largest weights.
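For completeness, the alternating (first-order) updates behind this bi-level optimization can be sketched as follows (our own self-contained toy; the quadratic loss is only a stand-in for the super-network losses on the training and validation splits):
\begin{verbatim}
import torch

W     = torch.randn(4, requires_grad=True)     # stand-in for operation parameters W
alpha = torch.zeros(3, requires_grad=True)     # stand-in for architecture parameters
w_opt = torch.optim.SGD([W], lr=0.1)
a_opt = torch.optim.Adam([alpha], lr=0.01)

def loss_fn(shift):                            # placeholder for L_train / L_val
    mix = torch.softmax(alpha, dim=0)          # relaxed architecture weights
    return (mix[0] * W.sum() + mix[1] * (W ** 2).sum() - shift) ** 2

for step in range(100):
    w_opt.zero_grad(); loss_fn(1.0).backward(); w_opt.step()   # W on the training split
    a_opt.zero_grad(); loss_fn(1.2).backward(); a_opt.step()   # alpha on the validation split

# after training, keep the candidate with the largest weight
print(torch.softmax(alpha, dim=0).argmax().item())
\end{verbatim}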
In summary, LRGNN aims to design the inter-layer connections in deep stacking-based GNNs to capture the long-range dependencies in the graph classification task. Compared with existing methods, which use shallow GNNs or design deep GNNs only for the node classification task, LRGNN highlights the importance of deep stacking-based GNNs in the graph classification task, and the rationality has been justified in Section~\ref{sec-method-twoneeds}.
Besides, the methods that aim to design adaptive aggregation operations are orthogonal to LRGNN, and they can be incorporated into our method directly.
In the following, we empirically evaluate the rationality and effectiveness of LRGNN.
\section{Experiments}
We evaluate the proposed LRGNN against a number of SOTA methods and widely used GNNs, with the goal of answering the following research questions:
\noindent\textbf{Q1:} How do the over-smoothing problem and the two design needs empirically affect the performance of stacking-based GNNs in the graph classification task (Section~\ref{sec-oversmooth})?
\noindent\textbf{Q2:} How does the proposed LRGNN compare to other methods when extracting the long-range dependencies (Section~\ref{sec-performance-comparisons})?
\noindent\textbf{Q3:} How does each component affect the performance, i.e., the merge and readout operations used in the search space, and the cell-based search space designed for search efficiency (Section~\ref{sec-ablation})?
\subsection{Experimental Settings}
\subsubsection{Datasets}
As shown in Tab.~\ref{tb-graph-dataset},
we use five datasets with different graph diameters from different domains.
NCI1 and NCI109 are datasets of chemical compounds~\cite{wale2006comparison}.
DD and PROTEINS are both protein graph datasets~\cite{dobsondistinguishing}.
IMDB-BINARY is a movie-collaboration dataset~\cite{yanardag2015deep}.
\begin{table}[ht]
\setlength\tabcolsep{1pt}
\centering
\footnotesize
\caption{Statistics of the datasets from three domains.}
\vspace{-10pt}
\label{tb-graph-dataset}
\begin{tabular}{c|C{28pt}|C{28pt}|C{28pt}|C{32pt}|C{30pt}|c}
\hline
Dataset & \# of Graphs & \# of Feature & \# of Classes & Avg.\# of Nodes & Avg.\# of Edges & Domain \\ \hline
NCI1 & 4,110 & 89 & 2 & 29.87 & 32.3 & Chemistry \\ \hline
NCI109 & 4,127 & 38 & 2 & 29.69 & 32.13 & Chemistry \\ \hline
DD & 1,178 & 89 & 2 & 384.3 & 715.7 & Bioinfo \\ \hline
PROTEINS & 1,113 & 3 & 2 & 39.1 & 72.8 & Bioinfo \\ \hline
IMDB-BINARY & 1,000 & 0 & 2 & 19.8 & 96.5 & Social \\ \hline
\end{tabular}
\vspace{-20pt}
\end{table}
\subsubsection{Baselines}
We provide three kinds of baselines in this paper:
\noindent$\bullet$ For stacking-based GNNs, we adopt three baselines in this paper which use different skip-connection schemes: GCN~\cite{kipf2016semi}, ResGCN~\cite{li2019deepgcns} and GCNJK~\cite{xu2018representation}. We vary the GNN layers in $\{4,8,12,16\}$, and then report the best methods in Tab.~\ref{tb-performance} (More results can be found in Appendix~\ref{sec-appendix-setting}).
\noindent $\bullet$ We provide three pooling methods used in this task: DGCNN~\cite{zhang2018end} baseline with $8$-layer stacked GCN operations and the designed readout function; $2$-layer SAGPool~\cite{lee2019self} and DiffPool~\cite{ying2018hierarchical} baselines in which each layer has one GCN operation and one pooling operation.
\noindent $\bullet$ Considering the methods which incorporate the higher-order neighbors, we provide four baselines. First, we use the 4-layer stacked GCN in which, at each layer, nodes aggregate messages from neighbors exactly one and two hops away.
This baseline is denoted as TwoHop(L4) in this paper~\cite{abu2019mixhop}.
For the GCNFA(L8)~\cite{alon2020bottleneck} baseline, we use the 8-layer stacked GCN in which the $8$-th layer uses the full adjacency matrix instead. For GCNVIR(L8)~\cite{pham2017graph}, we use the $8$-layer stacked GNN and add one virtual node in each layer. For GraphTrans~\cite{jain2021representing}, we adopt the small variant, which uses three GCN layers and four transformer layers.
Compared with these baselines, we provide four LRGNN variants: LRGNN-Full with 8 and 12 aggregation operations, denoted as B8C1 and B12C1, respectively; and LRGNN-Repeat and LRGNN-Diverse with 12 aggregation operations and 3 cells, denoted as Repeat B12C3 and Diverse B12C3, respectively.
\subsubsection{Implementation details}
For all datasets, we perform 10-fold cross-validation to evaluate the model performance, with an inner holdout of 80\%/10\%/10\% for training/validation/test during model training and selection.
In this paper, we use a training stage and a finetuning stage to obtain the performance of the proposed LRGNN.
In the training stage, we derive the candidate GNNs from the corresponding search space.
For the searched GNNs and the baselines, we fine-tune their hyper-parameters in the finetuning stage.
The details are provided in Appendix~\ref{sec-appendix-setting}.
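A sketch of this evaluation protocol (our own simplification using scikit-learn splitters; the inner holdout takes $1/9$ of the remaining $90\%$ so that the overall split is $80\%/10\%/10\%$):
\begin{verbatim}
import numpy as np
from sklearn.model_selection import KFold, train_test_split

graph_ids = np.arange(1000)                      # placeholder graph indices
outer = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_val_idx, test_idx) in enumerate(outer.split(graph_ids)):
    train_idx, val_idx = train_test_split(train_val_idx, test_size=1/9,
                                          random_state=0)
    # training stage: search the architecture on (train_idx, val_idx)
    # finetuning stage: tune hyper-parameters, then evaluate on test_idx
\end{verbatim}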
\subsection{The Rationality of LRGNN}
\label{sec-oversmooth}
\begin{table*}[ht]
\small
\centering
\caption{Performance comparisons. We first show the average diameters of the datasets.
Then, we report the mean test accuracy and the standard deviations based on 10-fold cross-validation. For the proposed method, B$i$C$j$ denotes an architecture containing $i$ aggregation operations and $j$ cells, and [L$i$] indicates that the model depth is $i$. ``OOM'' means out of memory. The best result in each group is underlined, and the best result on each dataset is highlighted in gray. The average rank over all datasets is provided, and the Top-3 methods are highlighted. For GCNFA and GraphTrans, the average rank is calculated based on the other four datasets.
}
\vspace{-10pt}
\label{tb-performance}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
& Method & NCI1 & NCI109 & DD & PROTEINS & IMDB-BINARY & Avg. rank \\ \hline
&Diameter & 13.33 & 13.13 & 19.90 & 11.57 & 1.86 & \\ \hline
& GCN & 78.98(3.24) [L16] & 76.64(1.04) [L16] & 75.63(2.95) [L16] & 75.11(4.51) [L8] & 73.80(5.13) [L4] & 7.4 \\
& ResGCN & 77.88(2.24) [L12] & {\ul 76.71(1.83) [L16]} & {\ul 76.65(2.73) [L8]} & 75.11(3.22) [L8] & 73.70(5.70) [L12] & 7.2 \\
\multirow{-3}{*}{Stacking} & GCNJK & {\ul 79.24(2.11) [L12] } & 75.91(3.61) [L12] & 73.16(5.12) [L8] & {\ul 75.24(4.15) [L8]} & {\ul 74.20(3.76) [L16]} & 6.6 \\ \hline
& DGCNN(L8) & {\ul 76.08(1.03)} & {\ul 74.58(3.99)} & 61.63(5.33) & 73.95(3.04) & 71.70(3.65) & 11.6 \\
& SAGPool & 72.23(3.68) & 69.78(2.26) & 70.52(5.48) & 71.89(4.03) & {\ul 73.40(3.02)} & 12 \\
\multirow{-3}{*}{Pooling}
& DiffPool & 75.04(1.98) & 71.48(2.46) & {\ul 77.85(3.53)} & {\ul 75.11(2.14)} & 72.27(4.47) & 9.2 \\ \hline
& TwoHop(L4) & 76.42(3.83) & 75.33(2.15) & {\ul 74.53(5.24)} & {\ul 75.30(4.27)} & 72.40(5.61) & 8 \\
& GCNVIR(L8) & 70.32(4.32) & 67.99(2.97) & 71.98(4.34) & 71.08(3.14) & 71.50(4.90) & 13.2 \\
& GCNFA(L8) & 76.39(3.72) & 73.93(2.87) & OOM & 74.31(4.16) & 73.60(5.62) & 10.25 \\
& GraphTrans & {\ul 82.04(1.43)} & {\ul 80.18(1.97)} & OOM & 75.12(4.89) & {\ul 74.00(5.74)} & 4.75 \\ \hline
\multirow{-5}{*}{\begin{tabular}[c]{@{}c@{}}Higher-\\ order\end{tabular}}
& Full B8C1 & 81.82(1.74) [L4] & \cellcolor[HTML]{C0C0C0}{\ul 81.39(1.92) [L6]} & 78.01(3.69) [L6] & \cellcolor[HTML]{C0C0C0}{\ul 75.39(4.40) [L6]} & \cellcolor[HTML]{C0C0C0}{\ul 76.20(5.18) [L4]} & \cellcolor[HTML]{C0C0C0}1.6 \\
& Full B12C1 & \cellcolor[HTML]{C0C0C0}{\ul 82.51(1.37) [L7]} & 80.64(2.46) [L7] & \cellcolor[HTML]{C0C0C0}{\ul 78.18(2.02) [L9]} & 75.29(4.51) [L3] & 74.50(3.31) [L5] & \cellcolor[HTML]{C0C0C0}2.4 \\
& Repeat B12C3 & 81.31(1.68) [L6] & 80.79(1.24) [L6] & 77.25(2.90) [L6] & 75.30(4.77) [L3] & 74.20(5.41) [L6] & \cellcolor[HTML]{C0C0C0}3.6 \\
\multirow{-4}{*}{LRGNN} & Diverse B12C3 & 80.97(2.73) [L6] & 80.98(1.96) [L9] & 77.67(3.35) [L5] & 74.93(5.15) [L7] & 74.20(3.35) [L5] & 4.8 \\ \hline
\end{tabular}
\end{table*}
\subsubsection{The importance of sufficient model depth}
In this paper, we use the averaged node-pair distance, calculated based on $D(h_i, h_j) = \Vert \frac{h_i}{\Vert h_i \Vert_1} - \frac{h_j}{\Vert h_j \Vert_1} \Vert_1$, to evaluate the over-smoothing problem.
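This metric can be computed with the following sketch (our own code; the node representations are hypothetical):
\begin{verbatim}
import numpy as np

def avg_pair_distance(H):
    """Average of D(h_i, h_j) over all node pairs (smaller = smoother)."""
    P = H / np.abs(H).sum(axis=1, keepdims=True)   # L1-normalize each row
    n, total = len(P), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += np.abs(P[i] - P[j]).sum()
    return total / (n * (n - 1) / 2)

H = np.random.rand(5, 8)                           # hypothetical node representations
print(avg_pair_distance(H))
\end{verbatim}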
As shown in Figure~\ref{fig-smooth-acc}, lower distance values are observed on both datasets as the network goes deeper, which shows that node-pair features become smoother and the over-smoothing problem appears in both the node classification and graph classification tasks.
However, an upward trend of model performance is apparent on the NCI1 dataset, while the opposite trend is observed on the Cora dataset. This indicates that the over-smoothing problem has a smaller influence on the graph classification task than on the node classification task. Therefore, stacking sufficient layers to obtain the long-range dependencies is a feasible approach.
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\linewidth]{./fig/depth_cora}
\includegraphics[width=0.45\linewidth]{./fig/depth_nci1}
\vspace{-10pt}
\caption{The test accuracy and the averaged distance comparisons on the node classification dataset Cora (Left) and graph classification dataset NCI1 (Right). }
\label{fig-smooth-acc}
\vspace{-15pt}
\end{figure}
\subsubsection{The necessity of adaptive skip-connections}
As mentioned in Section~\ref{sec-adaptive-connection}, designing adaptive skip-connections can enhance the information utilization from different ranges of neighbors.
In this part, we empirically evaluate the skip-connection schemes on two real-world datasets.
As shown in Figure~\ref{fig-skipconn-acc}, GNNs with different skip-connection schemes have diverse performances on each dataset. Besides, the SOTA performance is achieved by designing the skip-connections, i.e., ResGCN achieves the SOTA performance on the NCI109 dataset and GCNJK ranks first on the NCI1 dataset.
Given a fixed GNN depth, these three baselines have different ranks on the two datasets.
All these results demonstrate the necessity of data-specific skip-connection schemes.
\begin{figure}[ht]
\vspace{-10pt}
\centering
\includegraphics[width=0.4\linewidth]{./fig/connection_nci1}
\includegraphics[width=0.4\linewidth]{./fig/connection_nci109}
\vspace{-10pt}
\caption{The influence of different skip-connection schemes on NCI1 (Left) and NCI109 (Right) datasets.}
\label{fig-skipconn-acc}
\vspace{-15pt}
\end{figure}
In summary, sufficient GNN depth and adaptive skip-connection schemes are empirically important for stacking-based GNNs in extracting the long-range dependencies. In the following, we compare the proposed LRGNN with the SOTA methods to show its effectiveness.
\subsection{The Effectiveness of LRGNN}
\label{sec-performance-comparisons}
As shown in Table~\ref{tb-performance}, the variants B8C1, B12C1 and LRGNN-Repeat B12C3 can outperform all baselines, which demonstrates the effectiveness of the LRGNN method by designing the inter-layer connections in GNNs to utilize the long-range dependencies adaptively.
Besides, for the methods that achieve the best results on these datasets (the results highlighted in gray in Tab.~\ref{tb-performance}), we observe that their model depth is very close to the graph radius (diameter/$2$).
This observation highlights the significance of sufficient model depth in GNNs.
The LRGNN-Repeat and LRGNN-Diverse variants constrain the search space by grouping the aggregation operations into different cells.
Compared with LRGNN-Full, these two variants have fewer parameters for the same number of aggregation operations (see Tab.~\ref{tb-cell-memory}), which enables the exploration of much deeper GNNs. However, they have a smaller search space, which may filter out expressive architectures and lead to a performance decrease, as shown in Tab.~\ref{tb-performance}.
We visualize the searched inter-layer connections of LRGNN-Full B8C1 in Figure~\ref{fig-searched}, and more results can be found in Appendix~\ref{sec-appendix-searched}. It is obvious that different inter-layer connection schemes are obtained on different datasets, and each operation has its own preference for turning on the connections from previous operations.
The connections between consecutive layers are emphasized on the NCI109 dataset, while the connections between inconsecutive layers are widely used on the NCI1 dataset. We highlight one of the longest paths in the searched GNNs; the searched architecture on the NCI109 dataset is deeper than that on the NCI1 dataset.
Combined with the SOTA performance in Tab.~\ref{tb-performance}, the effectiveness of designing inter-layer connections is evident.
\begin{figure}[ht]
\centering
\includegraphics[width=0.55\linewidth]{./fig/searched}
\vspace{-10pt}
\caption{The searched inter-layer connections of LRGNN-Full B8C1 on the NCI1 (Top) and NCI109 (Bottom) datasets. We highlight one of the longest paths in the GNNs, from which the model depth can be obtained.}
\label{fig-searched}
\vspace{-10pt}
\end{figure}
Compared with the stacking-based methods, we observe that the three baselines achieve higher performance with more GNN layers. Besides, by utilizing different skip-connection schemes, ResGCN and GCNJK achieve higher performance than the GCN baseline, which has no skip-connections. The performance thus benefits considerably from the inter-layer connections, and the proposed LRGNN achieves better performance by designing these connections adaptively.
As to the other two kinds of baselines, GraphTrans has the strongest ability in modeling the long-range information with the help of the Transformer module, as demonstrated in related methods~\cite{jain2021representing,ying2021transformers}.
Besides, the global pooling method DGCNN and the two hierarchical pooling methods, i.e., SAGPool and DiffPool, have different performances on these datasets, and there is no absolute winner between these two kinds of pooling methods across all datasets, which is consistent with existing findings~\cite{degio2020rethinking,errica2019fair,wei2021pooling}. The grouping-based methods, e.g., DiffPool,
achieve higher ranks than the selection-based method SAGPool in general.
Compared with these methods, LRGNN has better performance due to the strong ability of the stacking-based GNNs in preserving the graph structures.
\vspace{-5pt}
\subsection{Ablation Study}
\label{sec-ablation}
To evaluate the contributions of different components, in this section, we conduct ablation studies on the cell-based search space, and two operations added to improve the model expressiveness.
Considering the limited space, we show the ablation study experiments on aggregation operation and the search algorithm in Appendix~\ref{sec-appendix-ablation}.
\begin{table}[ht]
\centering
\footnotesize
\vspace{-5pt}
\caption{Evaluations on the cell-based architecture.
We show the comparisons of the performance, the parameter numbers(M) in the supernet and the search cost (GPU second) in the training stage. }
\setlength\tabcolsep{3pt}
\vspace{-10pt}
\begin{tabular}{c|c|c}
\hline
& DD & PROTEINS \\ \hline
Full B4C1 & 77.16(2.98) / 0.23M / 469s & 75.30(4.67) / 1.01M / 247s \\ \hline
Full B8C1 & {\ul 78.01(3.69) / 0.92M / 1306s} & {\ul 75.39(4.40) / 3.62M / 394s} \\ \hline
Full B12C1 & \cellcolor[HTML]{C0C0C0}78.18(2.02) / 2.28M / 1458s & 75.29(4.51) / 9.06M / 573s \\
Repeat B12C3 & 77.25(2.90) / 0.24M / 1226s & 75.30(4.77) / 1.05M / 500s \\
Diverse B12C3 & 77.67(3.35) / 0.68M / 1286s & 74.93(5.15) / 2.79M / 545s \\ \hline
Full B16C1 & OOM & 74.48(3.89) / 18.39M / 879s \\
Repeat B16C4 & 77.24(3.59) / 0.25M / 1697s & 74.48(4.75) / 1.07M / 622s \\
Diverse B16C4 & 77.59(3.24) / 0.94M /1404s & \cellcolor[HTML]{C0C0C0}75.56(4.12) / 3.68M / 665s \\ \hline
\end{tabular}
\label{tb-cell-memory}
\vspace{-10pt}
\end{table}
\subsubsection{The evaluation on cell-based search space}
Considering the search efficiency, we provide one cell-based framework which constrains the search space by designing the connections within cells. To evaluate the efficiency and effectiveness of the cell-based architectures, we vary the cell numbers, and the results are shown in Tab.~\ref{tb-cell-memory}.
It is obvious that LRGNN-Full cannot work on the DD dataset given $16$ aggregation operations in one cell, while it works when these 16 aggregation operations are assigned to $4$ cells.
With the same GPU, the cell-based LRGNN-Repeat and LRGNN-Diverse raise the ceiling of the GNN depth compared with LRGNN-Full. Besides, the Repeat and Diverse variants have fewer parameters and lower search costs, which demonstrates the search efficiency of using cells. As to the performance, these variants have a smaller search space and may filter out expressive architectures; therefore, they may be outperformed by LRGNN-Full in some cases. In conclusion, LRGNN-Full can achieve higher performance, while the other two variants have advantages in training efficiency.
\subsubsection{The evaluation of the merge and readout operations.}
In the search space, we provide the merge operation and readout operation to improve the model expressiveness as mentioned in Section~\ref{sec-design-space}. Then, we conduct ablation studies on these two operations to show their effectiveness in improving the model performance.
In this paper, we provide a set of merge operations that are expected to improve the model expressiveness.
As shown in Figure~\ref{fig-ablation-readoutmerge} (a), compared with LRGNN, which designs the merge operations adaptively, using pre-defined merge operations does not achieve higher performance in general. The searched results are visualized in Appendix~\ref{sec-appendix-searched}, from which we can observe that different merge operations are utilized. Combined with the better performance, the effectiveness of designing the merge operations adaptively is evident.
\begin{figure}[ht]
\centering
\vspace{-10pt}
\includegraphics[width=0.9\linewidth]{./fig/ablation_mergereadout}
\vspace{-10pt}
\caption{Evaluations on the merge and readout operation. These methods are constructed based on LRGNN-Full B12C1 variant and different merge or readout operations.}
\label{fig-ablation-readoutmerge}
\vspace{-10pt}
\end{figure}
The readout operation is responsible for generating the graph representation vector for the given graph.
In this paper, we provide seven candidates and learn to design this operation adaptively.
As shown in Figure~\ref{fig-ablation-readoutmerge} (b),
LRGNN achieves the SOTA performance on three datasets, and all of the searched architectures utilize the \texttt{GSUM} readout operation.
Besides, the baseline that uses the predefined readout operation \texttt{GSUM} achieves comparable performance as well.
Nevertheless, using the other two readout operations leads to a performance drop, so it is important to design the readout operation adaptively.
\vspace{-5pt}
\section{Conclusion}
In this paper, we provide a novel method LRGNN to capture the long-range dependencies with stacking GNNs in the graph classification task.
We justify that the over-smoothing problem has a smaller influence on the graph classification task, and then employ the stacking-based GNNs to extract the long-range dependencies. Two design needs, i.e., sufficient model depth and adaptive skip-connections, are provided when designing the stacking-based GNNs.
To meet these two design needs, we unify them into inter-layer connections, and then design these connections with the help of NAS.
Extensive experiments demonstrate the rationality and effectiveness of the proposed LRGNN.
For future work, we will evaluate the importance of different hops' neighbors in different graph pairs, and then extract this information for each graph independently.
\section*{Acknowledgment}
Q. Yao was in part sponsored by NSFC (No. 92270106) and CCF-Tencent Open Research Fund.
\clearpage
\balance
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.08673",
"language": "en",
"timestamp": "2023-02-20T02:06:14",
"url": "https://arxiv.org/abs/2302.08673",
"yymm": "2302"
} |
\section{Introduction}\label{sec:org477f836}
Recently, the number of online learning students has continually increased with the development of intelligent online education, such as Massive Open Online Courses (MOOCs) \cite{DBLP:conf/kdd/LiuTLZCMW19} and Online Judging (OJ) \cite{DBLP:conf/ijcai/WuLLCSCH15} systems.
The student-teacher ratio is thus getting higher and higher.
A teacher may serve thousands of learners simultaneously and cannot provide personalized service for each of them. Therefore, there is an increasing demand for recommendation applications based on online intelligent education, including exercise and learning path recommendations \cite{DBLP:conf/edm/AiCGZWFW19}. A key issue in such applications is knowledge tracing, i.e., capturing students' knowledge status over time \cite{DBLP:conf/nips/PiechBHGSGS15}.
As shown in Fig. \ref{fig:ExampleKnowledgeTracing}, knowledge tracing models track the change in students' past knowledge status over time utilizing problem information (knowledge concepts contained in problems), students' historical assignment problems and their responses. On this basis, knowledge tracing models predict students' future knowledge status and learning performance. Such knowledge tracing models can help teachers adjust teaching workloads and future teaching plans.
\begin{figure}[!htbp]
\centering{
\includegraphics[width=0.45\textwidth]{./figure/FCS-ToyExample-fig1.pdf}}
\caption{\label{fig:ExampleKnowledgeTracing} A toy example of the knowledge tracing task.}
\end{figure}
In a real environment, teachers generally want to obtain interpretable results of students' knowledge status through knowledge tracing models.
Markov chain-based knowledge tracing (MCKT) models
\cite{DBLP:conf/ijcai/WuLLCSCH15,DBLP:conf/cikm/ChenLHWCWSH17,Sun2022,Wong2021,Liu2022,Zhang2018}
can provide teachers with interpretable results of students' knowledge status (knowledge concept mastery probability). MCKT models regard the observed student exercise performance and the unobservable student knowledge status as observed and latent random variables, respectively. They then model the relationships between random variables based on prior knowledge and track the knowledge concept mastery probability over time.
However, as the number of knowledge concepts increases, the number of latent variables keeps increasing. These latent variables may all relate to an observed variable (student assignment performance), which exponentially increases the time complexity of MCKT in calculating the student performance probability.
It is also known as the \textit{explaining away problem} \cite{204911}.
In addition, the existing MCKT models only consider the relationship between students' knowledge status and their performance when modeling students' exercise response but ignore the relationship between knowledge concepts involved in a learning task.
To address these challenges, we propose an inTerpretable pRobAbilistiC gEnerative moDel (TRACED) for tracking knowledge concept mastery probability over time.
A reasonable prior is critical to the performance of probabilistic graphical models.
Therefore, to better model the change of knowledge state over time, we adopt the learning curve and the forgetting curve as priors to track students' knowledge concept mastery probabilities over time.
To better model the students' exercise response (i.e., students' performance), we consider the relationship among students' knowledge status, knowledge concept and problems.
Specifically, we design a logarithmic linear module with three interactive strategies, which model students' exercise responses by considering the relationship among students' knowledge status, knowledge concept and problems.
Our designed logarithmic linear module can also acquire distributed representation of students' knowledge status, knowledge concepts and problems.
To solve the \textit{explaining away} problem (i.e., the high time complexity of posterior estimation and of predicting students' future performance), we utilize two LSTM networks to predict students' future performance and approximate the posterior distribution in model inference, respectively.
Furthermore, we propose a heuristic algorithm to train probabilistic generative models and LSTMs jointly.
The joint training of the LSTMs and the probabilistic generative model is divided into two phases, i.e., the Wake Phase and the Sleep Phase.
During the Wake Phase, we utilize an LSTM network to approximate the posterior distribution and then sample the latent variables for each training sample based on the LSTM network. Then we learn the parameters of the probabilistic generative model by maximizing the Evidence Lower BOund (ELBO).
During the Sleep Phase, we first generate new data with complete latent variables by using our probabilistic generative model.
Then these new data are used to train the LSTM network for approximating the posterior distribution.
By repeating the above two phases, we jointly train the probabilistic generative model and the LSTMs.
To verify the effectiveness of our model, we conduct experiments with four real-world datasets in three knowledge-driven tasks.
Three knowledge-driven tasks are future student performance prediction, knowledge concept relation prediction, and knowledge concept prediction for problems.
The experimental results in future student performance prediction show that TRACED outperforms other knowledge tracing methods in predicting students' future performance.
The experimental results in knowledge concept relation prediction and knowledge concept prediction for problems show that our model can learn the relationship among students' knowledge status, knowledge concept and problem from students' exercise sequences.
Moreover, the distributed representation of knowledge concepts and problems learned by TRACED is effective.
Further, we perform several case studies to show that TRACED exhibits excellent interpretability and has the potential for personalized automatic feedback in a real-world educational environment.
The main contributions are outlined as follows:
\begin{itemize}
\item To the best of our knowledge, this is the first comprehensive attempt to track students' mastery probabilities over numerous knowledge concepts over time, instead of modeling the mastery of knowledge concepts as unexplainable model parameters or hidden states.
\item In order to better model students' exercise responses, we design a logarithmic linear module with three interactive strategies, which model students' exercise responses by considering the relationship among students' knowledge status, knowledge concept and problems.
\item To solve the \emph{explaining away problem} (i.e., the high time complexity of posterior estimation and students' future performance prediction), we utilize two LSTM-based networks to approximate the posterior distribution and predict students' future performance, respectively. Then we propose a heuristic algorithm to train LSTMs and probabilistic generative model jointly.
\item Experiments show that TRACED is efficient on four real-world datasets for three knowledge driven tasks (future performance prediction, knowledge concept relation prediction and concept prediction for problems) and exhibits excellent interpretability.
\end{itemize}
\section{Conclusions}
\label{sec:org1dcb972}
In this paper, we proposed an inTerpretable pRobAbilistiC gEnerative moDel (TRACED), which can track numerous students' knowledge concept mastery probabilities over time.
To better model students' learning process, we adopted the learning and forgetting curves as priors to capture the dynamic changes in students' knowledge concept mastery over time.
Second, we designed a logarithmic linear model with three interactive strategies to model students' exercise responses by considering the interactions among knowledge concepts, problems, and students.
To solve the \emph{explaining away} problem, we design LSTM-based networks to approximate the posterior distribution and propose a heuristic algorithm to learn the model parameters.
Lastly, we conduct experiments with four real-world datasets on three knowledge-driven tasks.
The experimental results for predicting students' future performance demonstrate the effectiveness of TRACED as a knowledge tracing model.
The experimental results for predicting the relationship between knowledge concepts and the relationship between knowledge concepts and problems demonstrate the effectiveness of the distributed representation of knowledge concepts and problems.
Besides, we conduct several case studies, which show that TRACED exhibits excellent interpretability and has the potential for personalized automatic feedback in a real-world educational environment.
\section{Acknowledgments}
The work described in this paper was supported by National Natural Science Foundation of China (Nos.62272093, 62137001, U1811261, 61902055).
\section{Problem Definition and Study \\ Overview}
\label{sec:org55cd7f2}
In this section, we first formally introduce knowledge tracing and three knowledge-driven tasks. Then we introduce our study overview.
Assume that there are \(N\) students, \(M\) problems, and \(K\) knowledge concepts in a learning system.
In this system, student exercise logs record the results of exercises performed by the students at different times.
Student \(i\)'s exercise sequence is represented as \(S_i = \left\{ S_{i,1}, S_{i,2}, S_{i,3}..., \right\}\), and
\(S_{i,t} = \left\{ e_{i,t}, r_{i,t}, \tau_{i,t} \right\}\) represents that student \(i\) submitted problem \(e_{i,t}\) as part of the \emph{t}-th exercise record. The submission time was \(\tau_{i,t}\), and the result was \(r_{i,t}\) (either ``True'' or ``False'', and we code True=1 and False=0).
Additionally, we have a Q-matrix, which is represented as a binary matrix \(Q \in \mathbb{R}^{M*K}\). \(Q_{j,k}=1\) means that problem \(j\) is related to the knowledge concept \(k\), where a value of 0 indicates that the corresponding problem and concept are unrelated.
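For illustration, this notation could be represented in code as in the following sketch (all field names and values are hypothetical and only mirror the definitions above):
\begin{verbatim}
from dataclasses import dataclass
import numpy as np

@dataclass
class ExerciseRecord:          # one record S_{i,t} = (e_{i,t}, r_{i,t}, tau_{i,t})
    problem_id: int            # e_{i,t}: submitted problem
    response: int              # r_{i,t}: 1 = True (accepted), 0 = False
    timestamp: float           # tau_{i,t}: submission time

# student i's exercise sequence S_i, ordered by submission time
S_i = [ExerciseRecord(12, 1, 1654012800.0), ExerciseRecord(7, 0, 1654016400.0)]

# Q-matrix: Q[j, k] = 1 iff problem j involves knowledge concept k
M, K = 970, 146                # hypothetical numbers of problems and concepts
Q = np.zeros((M, K), dtype=np.int8)
Q[12, 3] = 1                   # hypothetical label: problem 12 involves concept 3
\end{verbatim}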
Without loss of generality, knowledge tracing and the three knowledge-driven tasks are formulated as follows:
\noindent \textbf{Knowledge Tracing.} Given students' exercise sequence \(S\), and Q-matrix labeled by educational experts, our goal is three-fold: 1) modeling students' knowledge concepts mastery probabilities from the 1-st to $t$-th exercise; 2) predicting students' knowledge concepts mastery probabilities at the ($t+1$)-th exercise; 3) calculating the distributed representation of students' exercise records from the 1-st to ($t+1$)-th exercise, the distributed representation of knowledge concepts, and the distributed representation of problems.
\noindent \textbf{Future Performance Prediction Task.}
Given the distributed representations of students' exercise records from the 1st to ($t+1$)-th exercise, our goal is to predict students' responses at the ($t+1$)-th exercise.
\noindent \textbf{Knowledge Concept Relation Prediction Task.} Given distributed representations of a set of knowledge concepts, the goal is to predict the relationships among knowledge concepts. The relationships between knowledge concepts include superior relationships, subordinate relationships, and disparate relationships.
\noindent \textbf{Concept Prediction for Problems Task.} Given distributed representations of knowledge concepts and problems, the goal is to predict the knowledge concepts of the given problems.
As shown in Figure \ref{fig:IKTFramework}, our solution is a two-stage framework, which contains a modeling stage and a predicting stage: 1) In the modeling stage, given students' exercise records and the Q-matrix,
we first model students' learning and forgetting behaviors to track students' knowledge concept mastery probabilities over time.
Then, we propose a logarithmic linear model to model students' exercise responses by analyzing the interactions among students, problems, and knowledge concepts. After that, we can obtain students' knowledge concept mastery probabilities \(U^{1}, ..., U^{t}\) at different times as well as students' learning trajectories. 2) In the predicting stage, TRACED predicts students' future responses \(R^{t+1}\) and future knowledge concept mastery probabilities \(U^{t+1}\).
Besides, we also provide the learned distributed representations of knowledge concepts and problems as pre-trained parameters to the fully connected neural network for knowledge concept relation prediction and concept prediction for problems.
\section{Related Work}
\label{sec:orgb8b682f}
We summarize existing knowledge tracing methods in the following three categories: Factor Analysis-based Knowledge Tracing (FAKT) model, Probabilities Graph-based Knowledge Tracing (PGKT) model and Deep Learning-based Knowledge Tracing (DLKT) model.
\subsection{Factor Analysis-based Knowledge Tracing}
FAKT models tend to learn common factors (e.g., students' ability and problem difficulty) from data (students' exercise history) to predict students' performance.
Moreover, these models consider time factors by counting temporal features (such as the numbers of wrong and correct attempts, and so on). The simplest FAKT model is the 1-parameter logistic Item Response Theory (IRT) \cite{van2013handbook}, also known as the Rasch model; it does not consider learning across several attempts. The additive factor model (AFM) \cite{cen2006learning} considers the number of attempts a student has made on a problem. The performance factor analysis (PFA) model \cite{pavlik2009performance} separately counts positive and negative attempts as temporal features. Vie and Kashima proposed a family of models named Knowledge Tracing Machines (KTM) \cite{DBLP:conf/aaai/VieK19}, which encompasses IRT, the AFM and PFA as special cases. DAS3H \cite{DBLP:conf/edm/ChoffinPBV19} takes into account both memory decay and multiple knowledge concept tagging by including a representation of the temporal distribution of past exercises on the knowledge concepts involved in a problem. However, FAKT models ignore the order of students' historical exercises. Although FAKT can complement the data with temporal features such as simple counters, it can only obtain a global estimate of students' knowledge status and cannot track knowledge status over time.
\subsection{Probabilistic Graph-based Knowledge Tracing}
PGKT models regard the observed students' exercise performance and the unobservable students' knowledge status as observed and latent random variables, respectively. They then model the relationships between random variables based on prior knowledge and predict students' future performance by analyzing and modeling students' exercise process.
Probabilistic graph-based knowledge tracing can be divided into two categories, real-time knowledge tracing (also called Markov chain-based Knowledge Tracing, MCKT) and non-real-time probabilistic graph-based knowledge tracing.
\noindent \textbf{Markov Chain-based Knowledge Tracing.} The representative of real-time knowledge tracing is Bayesian Knowledge Tracing (BKT).
BKT \cite{corbett1994knowledge} assumes that a student's knowledge mastery can be represented as a set of binary variables. Each binary variable indicates whether a student has mastered a particular knowledge concept, and a student's knowledge mastery is modeled as a latent variable in a hidden Markov model.
Gorgun et al. \cite{Gorgun2022} analyzed the influence of student disengagement on prediction accuracy in BKT.
Zhang et al. \cite{Zhang2018} propose a Bayesian knowledge tracing model with three learning states. They divide a learning process into three sections by using an evaluation function for three-way decisions.
FBKT \cite{Liu2022} is proposed to address continuous score scenarios (e.g., subjective examinations) so that the applicability of BKT models may be broadened.
\noindent \textbf{Non-real-time probabilistic graph-based knowledge tracing.} The representative of non-real-time knowledge tracing is the Deterministic Input Noisy-and-gAte (DINA) model. Although such models can diagnose the knowledge states of learners at a certain moment, they cannot efficiently track the knowledge states of learners over time or consider factors such as learning and forgetting in the learners' learning process.
DINA \cite{de2009dina} is a parsimonious and interpretable model that models knowledge mastery as a multidimensional binary latent vector and requires only two parameters (i.e., slipping parameter and guessing parameter) for each problem regardless of the number of knowledge concepts considered.
Fuzzy Cognitive Diagnosis Framework \cite{DBLP:conf/ijcai/WuLLCSCH15} combines fuzzy set theory and educational hypotheses to model a student's knowledge proficiency and predicts a student's performance by considering both the slipping factor and guessing factor.
Knowledge Proficiency Tracing \cite{DBLP:conf/cikm/ChenLHWCWSH17} is an explanatory probabilistic method that tracks the knowledge mastery of students over time by leveraging educational priors (i.e., Q-matrix).
Knowledge Tracing model with Learning Transfer \cite{liu2020tracking} is a probabilistic graphical model which tracks a student's knowledge proficiency, abstract principle mastery level, and knowledge structure by applying cognitive structure migration theory \cite{ausubel1968educational} as a prior.
However, these models rely on strong assumptions since the partition function is difficult to calculate and because of the \emph{explaining away} problem. They cannot track students' numerous knowledge concept mastery probabilities over time.
\subsection{Deep Learning-based Knowledge Tracing}
With the development of deep learning in recent years, LSTM, MANN, and Attention Mechanisms have been applied to the knowledge tracing task. Deep Knowledge Tracing (DKT) \cite{DBLP:conf/nips/PiechBHGSGS15} was the first model to apply deep learning algorithms to knowledge tracing. DKT uses flexible recurrent neural networks that are `deep' in time to track students' knowledge mastery. Subsequently, the Exercise-Enhanced Recurrent Neural Network model (EERNN) \cite{DBLP:conf/aaai/SuLLHYCDWH18} was proposed based on DKT to take advantage of students' learning records and the text of each problem. Exercise-aware Knowledge Tracing \cite{huang2019ekt} is a framework extended from the EERNN by incorporating knowledge concept information, where the student's integrated state vector is extended to a knowledge state matrix. However, the knowledge state matrix remains hard to interpret.
Lu et al. \cite{lu2020towards} applying the layer-wise relevance propagation method to interpret the RNN-based DLKT model by backpropagating the relevance from the model's output layer to its input layer.
However, this method can only solve the interpretability of the model, not the interpretability of the tracking results of knowledge concept mastery.
DKVMN \cite{DBLP:conf/www/ZhangSKY17} applies key-value memory networks to exploit the relationships among the underlying knowledge and directly outputs a student's knowledge proficiency. DKVMN-Context Aware \cite{DBLP:conf/edm/AiCGZWFW19} modifies the DKVMN to design its memory structure based on the course's concept list and explicitly considers the knowledge concept mapping relationship during knowledge tracing.
DKVMN Decision Tree \cite{sun2019muti} improves the performance of the DKVMN model by incorporating additional features to the input, which applies a DT classifier to preprocess the behavior features.
Self-Attentive Knowledge Tracing \cite{DBLP:conf/edm/PandeyK19} is an approach that uses an attention mechanism to identify a problem from the student's past activities that are relevant to the given problem and predicts students' exercise responses.
Relation Aware Knowledge Tracing \cite{DBLP:conf/cikm/PandeyS20} is a relation-aware self-attention model that uses a relation-aware self-attention layer to incorporate contextual information. The contextual information integrates exercise relation information via their textual content as well as students' performance data and forgotten behavior information.
Attentive Knowledge Tracing \cite{ghosh2020context} employs a monotonic attention mechanism, which relates students' future responses to assessment problems to their past responses to predict students' future performance.
Zhu et al. \cite{zhu2020learning} propose an effective attention-based model for tracking knowledge state, which captures the relationships among each item of the input regardless of the length of the input sequence.
And Yu et al. \cite{yu2022contextkt} propose a context-based knowledge tracing model, which combines students’ historical performance and their studying contexts during knowledge mastery.
However, since the internal logic used to achieve the desired output or result is not understandable or explainable, DLKT is less interpretable in tracing students' knowledge status.
Although some works \cite{DBLP:conf/kdd/0001HL20} attempt to address the \textit{black box problem} of DLKT, they only explain how DLKT predicts student exercise feedback by showing the impact weight of students' historical exercise records on predicting students' current exercise responses.
\section{Experiment}
\label{sec:orgf5eee29}
\subsection{Datasets}
\label{sec:org033968f}
We employ four real-world datasets: POJ, HDU, algebra06 and algebra08. The HDU and POJ datasets were obtained from the Hangzhou Dianzi University Online Judge platform (\url{http://acm.hdu.edu.cn}, accessed on October 22 2021) and the Peking University Online Judge platform (\url{http://poj.org}, accessed on 2 January 2023). Moreover, the algebra06 and algebra08 datasets were obtained from the KDD Cup 2010 EDM Challenge (\url{https://pslcdatashop.web.cmu.edu/KDDCup/downloads.jsp}, accessed on 2 January 2023).
\begin{table}[!tbp]
\centering{
\caption{Statistics of the Datasets\label{table:statisticsofdatasets} }
\begin{tabular}{c|c|c|c|c}
\hline
Dataset & HDU & POJ & algebra06 & algebra08 \\
\hline
\#Student & 9,859 & 3,507 & 1,072 & 2,385 \\
\#Problem & 2,101 & 970 & 1,218 & 736 \\
\#Records & 1,042,661 & 288,594 & 1,199,438 & 1,895,107 \\
\#Concepts & 193 & 146 & 319 & 304 \\
Avg.rec & 105 & 82 & 1,118 & 795 \\
\hline
\end{tabular}}
\end{table}
The HDU and POJ datasets include records submitted from June 2018 to November 2018, the algebra06 dataset includes records submitted from October 2006 to February 2007, and the algebra08 dataset includes records submitted from September 2008 to January 2009.
Students are allowed to resubmit their codes for a problem until they pass.
We filter out students with fewer than 30 records and an acceptance rate of less than 10\%, as well as problems with fewer than 30 records.
After filtering, the statistics of the datasets are shown in Table \ref{table:statisticsofdatasets}, where Avg.rec denotes the average number of exercise records per student.
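For concreteness, the following is a minimal sketch of this filtering step, assuming the raw logs are stored in a pandas DataFrame with hypothetical columns \texttt{student\_id}, \texttt{problem\_id} and \texttt{result} (1 for an accepted submission); the thresholds follow the rule stated above.
\begin{verbatim}
import pandas as pd

def filter_logs(df: pd.DataFrame) -> pd.DataFrame:
    """Drop students with fewer than 30 records or an acceptance
    rate below 10%, then drop problems with fewer than 30 records."""
    stu = df.groupby("student_id")["result"].agg(
        records="count", acc_rate="mean")
    keep_stu = stu[(stu["records"] >= 30) &
                   (stu["acc_rate"] >= 0.10)].index
    df = df[df["student_id"].isin(keep_stu)]

    prob_counts = df.groupby("problem_id")["result"].count()
    keep_prob = prob_counts[prob_counts >= 30].index
    return df[df["problem_id"].isin(keep_prob)]
\end{verbatim}
In practice, such filtering could be iterated until stable, since removing problems may push some students back below the thresholds.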
Specifically, numerous problems in the HDU and POJ datasets lacked knowledge concept labels. The knowledge concepts of the problems in these two datasets are highly consistent with those used in the ICPC (\url{https://icpc.global}, accessed on 2 January 2023): they cover a wide range, and their granularity is coarse (e.g., dynamic programming, graph theory, and so on). Therefore, we gathered five experts who have won medals at the ICPC Asian regional contest to label the problems with knowledge concepts. If more than three experts mark a knowledge concept label for a certain problem, we add that label to the problem.
\subsection{Evaluation Metrics}
\label{sec:orgcc7a275}
We evaluate the models for predicting students' future performance from regression and classification perspectives \cite{fogarty2005case,DBLP:conf/aaai/SuLLHYCDWH18}.
For regression, we use the \emph{Mean Absolute Error} (MAE) and the \emph{Root-Mean-Square Error} (RMSE) to quantify the distance between the predicted scores and the actual ones.
For classification, we select the \emph{Area Under the Receiver Operating Characteristic Curve} (AUC), \emph{Prediction Accuracy} (ACC), \emph{Precision} (PRE) and \emph{Recall} (REC) as measures of performance. The larger the values of these metrics are, the better the results are. The threshold adopted by the classification task in this paper is 0.5.
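As a minimal sketch (not the authors' evaluation code), these metrics can be computed from predicted probabilities and binary labels with scikit-learn, using the 0.5 threshold stated above:
\begin{verbatim}
import numpy as np
from sklearn.metrics import (roc_auc_score, accuracy_score,
                             precision_score, recall_score,
                             mean_squared_error, mean_absolute_error)

def evaluate(y_true, y_prob, threshold=0.5):
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    return {
        "AUC": roc_auc_score(y_true, y_prob),
        "ACC": accuracy_score(y_true, y_pred),
        "PRE": precision_score(y_true, y_pred),
        "REC": recall_score(y_true, y_pred),
        "RMSE": mean_squared_error(y_true, y_prob) ** 0.5,
        "MAE": mean_absolute_error(y_true, y_prob),
    }
\end{verbatim}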
Liu et al. \cite{liu2022learning} noted that different learning rate schedules can have a notable effect on performance. To ensure the fairness of the experiments, a constant learning rate was used for all models.
All models are implemented in Python, and all experiments are run on a Linux server with eight 3.6 GHz Intel W-2123 CPUs, 64 GB of memory and an NVIDIA RTX-2080Ti GPU.
\begin{table*}[!htbp]
\small
\caption{Results for Predicting Future Student Performance on the HDU and POJ datasets}
\label{table:FuturePerformanceResultHDU}
\centering
\begin{tabular}{c|cccccc|cccccc}
\hline
& \multicolumn{6}{c|}{HDU} & \multicolumn{6}{c}{POJ}\\
\hline
Model & AUC & ACC & PRE & REC & RMSE & MAE & AUC & ACC & PRE & REC & RMSE & MAE\\
\hline
IRT & 0.6329 & 0.6407 & 0.5652 & 0.3007 & 0.4741 & 0.4398 & 0.6067 & 0.6594 & 0.5642 & 0.1294 & 0.2206 & 0.4303 \\
MIRT & 0.6376 & 0.6410 & 0.5596 & 0.3285 & 0.4731 & 0.4493 & 0.6099 & 0.6602 & 0.5593 & 0.1486 & 0.2193 & 0.4403 \\
AFM & 0.5669 & 0.6155 & 0.5276 & 0.0426 & 0.4840 & 0.4669 & 0.5154 & 0.6488 & 0.3269 & 0.0108 & 0.2275 & 0.4546 \\
PFA & 0.6394 & 0.6349 & 0.6169 & 0.1417 & 0.4738 & 0.4488 & 0.5337 & 0.6506 & 0.5536 & 0.0215 & 0.2262 & 0.4523 \\
KTM & 0.6760 & 0.6619 & 0.6104 & 0.3423 & 0.4639 & 0.4291 & 0.6149 & 0.6603 & 0.5525 & 0.1683 & 0.2194 & 0.4340 \\
DASH & 0.6808 & 0.6644 & 0.6068 & 0.3705 & 0.4621 & 0.4223 & 0.6149 & 0.6603 & 0.5525 & 0.1683 & 0.2194 & 0.4340 \\
DAS3H & 0.6794 & 0.6633 & 0.5957 & 0.3966 & 0.4627 & 0.4236 & 0.6084 & 0.6528 & 0.5148 & 0.1815 & 0.2210 & 0.4409 \\
DKT & 0.6986 & 0.6752 & 0.6224 & 0.4327 & 0.4136 & 0.4581 & 0.6601 & 0.6757 & 0.5627 & 0.2762 & 0.2012 & 0.4123 \\
DKVMN & 0.6959 & 0.6761 & 0.6304 & 0.4126 & 0.4134 & 0.4589 & 0.6578 & 0.6804 & 0.5814 & 0.2642 & 0.2094 & 0.4121 \\
AKT & 0.7019 & 0.6805 & 0.6201 & 0.3715 & 0.4136 & 0.4544 & 0.5913 & 0.6618 & 0.5627 & 0.0894 & 0.2213 & 0.4392 \\
TRACED & \textbf{0.7328} & \textbf{0.7096} & \textbf{0.6412} & \textbf{0.4346} & \textbf{0.4074} & \textbf{0.4489} & \textbf{0.6674} & \textbf{0.6962} & \textbf{0.5884} & \textbf{0.2846} & \textbf{0.2011} & \textbf{0.4094}\\
\hline
\end{tabular}
\end{table*}
\subsection{Baselines for Comparison}
\label{sec:org47282c8}
We compare TRACED with the following ten state-of-the-art methods with well-tuned parameters.
\begin{itemize}
\item \emph{IRT} and \emph{MIRT} \cite{embretson2013item}: popular cognitive diagnostic models, which discover students' knowledge levels with a logistic-like function. When the latent dimension d $>$ 0, IRT becomes MIRT, a multidimensional variant of IRT that also considers a user bias.
\item \emph{AFM} and \emph{PFA} \cite{pavlik2009performance,cen2006learning}: factor analysis models, which take into account the number of attempts (AFM) and the numbers of positive and negative attempts (PFA).
\item \emph{KTM} \cite{DBLP:conf/aaai/VieK19}: a factorization machine model, which encompasses IRT, AFM, and PFA as special cases.
\item \emph{DASH} \cite{lindsey2014improving,mozer2016predicting}: a knowledge tracing model that bridges the gap between factor analysis and memory models. It stands for difficulty, ability, and student history.
\item \emph{DAS3H} \cite{DBLP:conf/edm/ChoffinPBV19}: a framework based on factor analysis, which takes both memory decay and multiple knowledge concept tagging into account.
\item \emph{DKT} \cite{DBLP:conf/nips/PiechBHGSGS15}: the first model to apply deep learning algorithms for knowledge tracing, which uses LSTM to track students’ knowledge mastery over time.
\item \emph{DKVMN} \cite{DBLP:conf/www/ZhangSKY17}: a key-value memory network, which exploits the relationships among the underlying knowledge concepts and directly outputs a student's knowledge proficiency.
\item \emph{AKT} \cite{DBLP:conf/kdd/0001HL20}: a monotonic attention mechanism model, which predicts students' future performance by relating their future responses to assessment problems to their past responses.
\end{itemize}
We do not compare with FuzzyCDF \cite{DBLP:conf/ijcai/WuLLCSCH15}, DINA \cite{de2009dina}, or KPT \cite{DBLP:conf/cikm/ChenLHWCWSH17}.
We regard a student's exercise records as complete data that appear either in the training set or in the test set, whereas FuzzyCDF and DINA contain student-specific parameters that need to be trained, so a student's exercise records would have to appear in both the training and test sets. The KPT model cannot predict student performance in real time.
\subsection{Predicting Future Student Performance}
\label{sec:org15c97a2}
In this section, we compare the predictions of our model and other baseline models in students' future performance. The experimental parameters of TRACED are \(\Delta \hat{\tau} = 86400s\), \(d_e = 20\), \(d_{h} = 2 * K\), \(d_{p} = 80 + 4 * K\) and the number of exercise record embedding dimensions is \(40 + 2 * K\). Besides, we randomly initialize all parameters in the TRACED to the Glorot normal \cite{pmlr-v9-glorot10a}.
To prevent overfitting, we add L2 regularization for TRACED. We perform 5-fold cross-validation for each dataset and average the results over five folds. For each fold, we use 80\% of the data for training and use the remaining 20\% for testing.
The experimental results are shown in Table \ref{table:FuturePerformanceResultHDU} and \ref{table:FuturePerformanceResultKDD}.
It can be seen that the results of all models on the algebra06 and algebra08 datasets are much better than those on the HDU and POJ datasets, because the average number of exercise submissions per student in algebra06 and algebra08 is much higher than in POJ and HDU. Besides, learners in the POJ and HDU datasets mostly learn to program independently and often resubmit their code to test its accuracy and efficiency, whereas the records in the algebra06 and algebra08 datasets mostly come from after-school math exercises; although those learners also submit many times, the number of repeated submissions is smaller.
Most models also perform better on the algebra06 and HDU datasets than on the algebra08 and POJ datasets, because most models perform better on larger datasets than on smaller datasets of the same type.
The performance of TRACED is significantly better than that of the other models.
TRACED outperforms the FAKT models (DAS3H, KTM, etc.) because TRACED can be regarded as a kind of DLKT model for the task of predicting students' future performance, and DLKT is significantly better than FAKT at this task.
TRACED is better than the other DLKT models because it does not directly use the original exercise records but rather the distributed representations of these records to predict students' future performance. Compared with the original records, the distributed representations of exercise records contain a substantial amount of contextual information.
\begin{table*}[htbp]
\small
\caption{Results for Predicting Future Student Performance on the algebra06 and algebra08 datasets}
\label{table:FuturePerformanceResultKDD}
\centering
\begin{tabular}{c|cccccc|cccccc}
\hline
& \multicolumn{6}{c|}{algebra06} & \multicolumn{6}{c}{algebra08}\\
\hline
Model & AUC & ACC & PRE & REC & RMSE & MAE & AUC & ACC & PRE & REC & RMSE & MAE\\
\hline
IRT & 0.6663 & 0.8451 & 0.8477 & 0.9957 & 0.1244 & 0.2397 & 0.6668 & 0.8123 & 0.8148 & 0.9948 & 0.3798 & 0.2904 \\
MIRT & 0.6625 & 0.8455 & 0.8467 & \textbf{0.9979} & 0.1247 & 0.2577 & 0.6656 & 0.8123 & 0.8144 & \textbf{0.9956} & 0.3802 & 0.2998 \\
AFM & 0.6663 & 0.8451 & 0.8477 & 0.9957 & 0.1244 & 0.2597 & 0.6737 & 0.8288 & 0.8190 & 0.9862 & 0.3820 & 0.2927 \\
PFA & 0.7120 & 0.8418 & 0.8567 & 0.9761 & 0.1220 & 0.2319 & 0.7040 & 0.8143 & 0.8179 & 0.9918 & 0.3746 & 0.2806 \\
KTM & 0.7440 & 0.8484 & 0.8546 & 0.9890 & 0.1155 & 0.2298 & 0.7173 & 0.8161 & 0.8214 & 0.9883 & 0.3717 & 0.2762 \\
DASH & 0.7464 & 0.8512 & 0.8548 & 0.9927 & 0.1143 & 0.2425 & 0.7090 & 0.8142 & 0.8172 & 0.9930 & 0.3742 & 0.2934 \\
DAS3H & 0.7328 & 0.8419 & 0.8580 & 0.9743 & 0.1227 & 0.2790 & 0.7234 & 0.8164 & 0.8214 & 0.9887 & 0.3704 & 0.2738 \\
DKT & 0.7513 & 0.8536 & 0.8497 & 0.9826 & 0.1124 & 0.2310 & 0.7462 & 0.8182 & 0.8315 & 0.9728 & 0.2638 & 0.3663 \\
DKVMN & 0.7564 & 0.8579 & 0.8592 & 0.9910 & 0.1117 & 0.2284 & 0.7453 & 0.8188 & 0.8288 & 0.9785 & 0.2662 & 0.3662 \\
AKT & 0.7573 & 0.8621 & 0.8588 & 0.9954 & 0.1106 & 0.2193 & 0.7173 & 0.8090 & 0.8158 & 0.9857 & 0.2750 & 0.3769 \\
TRACED & \textbf{0.7604} & \textbf{0.8623} & \textbf{0.8596} & 0.9957 & \textbf{0.1098} & \textbf{0.2154} & \textbf{0.7724} & \textbf{0.8336} & \textbf{0.8496} & 0.9894 & \textbf{0.2539} & \textbf{0.3659}\\
\hline
\end{tabular}
\end{table*}
The performance of FAKT models steadily improves as more hand-crafted temporal features are incorporated, as reflected in the comparison of DAS3H, DASH, KTM, PFA, AFM, MIRT and IRT (DAS3H $>$ DASH $>$ KTM $>$ PFA, AFM $>$ MIRT $>$ IRT).
Specifically, AFM and PFA are factor analysis models that consider the number of attempts and the numbers of positive and negative attempts, respectively. Since the number of attempts in the HDU and POJ datasets does not accurately reflect whether learners have mastered the corresponding knowledge concepts, the performance of AFM and PFA on these datasets is poor, even worse than that of IRT and MIRT. On the algebra06 and algebra08 datasets, however, AFM and PFA perform better than IRT and MIRT.
The performance gap among the three DLKT models is small.
The reason why DKVMN does not differ much from DKT may be that not many memory slots are needed to obtain good performance on these four datasets.
The unsatisfactory performance of AKT may stem from the fact that AKT is designed for problems with a single knowledge concept, whereas problems in the HDU, POJ, algebra06 and algebra08 datasets involve a substantial number of knowledge concepts.
Besides, AKT performs poorly on the POJ dataset because the numbers of learners and submissions in POJ are too small.
\subsection{Predicting Relationships Between Concepts}
\label{sec:org9798341}
Since we do not have experts in the field of algebra, we did not construct a knowledge graph for that domain; we therefore conduct the experiment on predicting relationships between knowledge concepts on the HDU and POJ datasets. We use a knowledge concept graph to train a fully connected neural network to identify the inclusion relationships between knowledge concepts. The fully connected neural network has 30 hidden units, the number of dimensions of the knowledge concept embeddings is 40, and the network structure is shown in Eq. \ref{equ:PredictingRelationshipsbetweenKnowledgeConcepts}.
The sparsity of the knowledge concept graph leads to an extreme imbalance between the number of positive and negative samples in the data.
To overcome this problem, we sample 2 negative samples for each positive sample, following the negative sampling approach proposed in \cite{mikolov2013distributed}.
Finally, each data point is represented as \((k_1, k_2, r)\), where \(r\) represents the relationship between knowledge concepts \(k_1\) and \(k_2\) (\(r=1\) means that knowledge concept \(k_1\) includes knowledge concept \(k_2\); otherwise, \(r=0\)).
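A minimal sketch of constructing these triples with two sampled negatives per positive is given below; the positive inclusion pairs are assumed to be available as (parent, child) tuples, and all names are illustrative.
\begin{verbatim}
import random

def build_relation_samples(positive_pairs, num_concepts,
                           neg_per_pos=2, seed=0):
    """Return (k1, k2, r) triples: r=1 for inclusion pairs,
    r=0 for sampled negatives."""
    rng = random.Random(seed)
    pos = set(positive_pairs)
    data = [(k1, k2, 1) for k1, k2 in pos]
    for k1, _ in pos:
        drawn = 0
        while drawn < neg_per_pos:
            k_neg = rng.randrange(num_concepts)
            if k_neg != k1 and (k1, k_neg) not in pos:
                data.append((k1, k_neg, 0))
                drawn += 1
    rng.shuffle(data)
    return data
\end{verbatim}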
For each fold, we use 80\% of the data as training data and the remaining 20\% as test data. The experimental results are shown in Table \ref{table:RelationshipResult}.
The performance of the fully connected neural networks that consider the distributed representations of the knowledge concepts is much better than that of the networks that do not consider the distributed representations of the knowledge concepts.
Moreover, the distributed representations based on both dynamic and static interactions yield better performance than the other methods for predicting the relationships between knowledge concepts.
\begin{table}[!tbp]
\small
\caption{Results of Predicting Relationships Between Concepts}
\label{table:RelationshipResult}
\centering
\begin{tabular}{m{3.1cm} m{0.7cm} m{0.7cm} m{0.7cm} m{0.7cm}}
\hline
Model & AUC & ACC & RMSE & MAE \\
\hline
\multicolumn{5}{c}{HDU}\\
\hline
NN & 0.780 & 0.683 & 0.390 & 0.282 \\
NN + EK,KK & 0.829 & 0.715 & 0.371 & 0.302 \\
NN + EK,UE & 0.807 & 0.656 & 0.371 & 0.305 \\
NN + KK,UE & 0.812 & 0.698 & 0.385 & 0.308 \\
NN + EK,KK,UE & \textbf{0.848} & \textbf{0.746} & \textbf{0.360} & \textbf{0.300} \\
\hline
\multicolumn{5}{c}{POJ}\\
\hline
NN & 0.713 & 0.619 & 0.463 & 0.353 \\
NN + EK,KK & 0.751 & 0.609 & 0.412 & 0.352 \\
NN + EK,UE & 0.732 & 0.566 & 0.432 & 0.373 \\
NN + KK,UE & 0.751 & 0.604 & 0.413 & 0.354 \\
NN + EK,KK,UE & \textbf{0.812} & \textbf{0.768} & \textbf{0.393} & \textbf{0.349} \\
\hline
\end{tabular}
\end{table}
\begin{table}[!tbp]
\small
\caption{Results of Predicting Concepts of Problems}
\label{table:PredictConceptResult}
\centering
\begin{tabular}{m{3.1cm} m{0.7cm} m{0.7cm} m{0.7cm} m{0.7cm}}
\hline
Model & AUC & ACC & RMSE & MAE \\
\hline
\multicolumn{5}{c}{HDU}\\
\hline
NN & 0.688 & 0.530 & 0.458 & 0.377 \\
NN + EK,KK & 0.759 & 0.683 & 0.417 & 0.363 \\
NN + EK,UE & 0.756 & 0.682 & 0.418 & 0.363 \\
NN + KK,UE & 0.753 & 0.668 & 0.422 & 0.378 \\
NN + EK,KK,UE & \textbf{0.764} & \textbf{0.670} & \textbf{0.416} & \textbf{0.365} \\
\hline
\multicolumn{5}{c}{POJ}\\
\hline
NN & 0.678 & 0.511 & 0.463 & 0.379 \\
NN + EK,KK & 0.769 & 0.706 & 0.416 & 0.375 \\
NN + EK,UE & 0.767 & 0.684 & 0.418 & 0.379 \\
NN + KK,UE & 0.763 & 0.707 & 0.427 & 0.399 \\
NN + EK,KK,UE & \textbf{0.772} & \textbf{0.717} & \textbf{0.413} & \textbf{0.366} \\
\hline
\end{tabular}
\end{table}
The distributed representations that consider KK interactions show better performance for this task than those that consider EK interactions.
In summary, the UE, EK, and KK interaction strategies can all capture relational information between knowledge concepts, and the KK interaction strategy plays a more important role than the EK and UE strategies in this task.
The experimental results show that the distributed representation of knowledge concepts learned by TRACED contains effective information about the relationship between knowledge concepts.
\subsection{Predicting Concepts Contained by Problems}
\label{sec:org7a625c6}
We use the Q-matrix to train a fully connected neural network to judge whether a certain problem contains a certain knowledge concept.
The fully connected neural network has 30 hidden units, and the number of dimensions of the knowledge concept embeddings is 40; the network structure is shown in Eq. \ref{eq:PredictingConceptsInProblems}.
Due to the sparsity of the Q-matrix, we sample 2 negative samples for each positive sample. Finally, each data point is represented as \((e, k, r)\), where \(r = 1\) indicates that problem \(e\) is related to knowledge concept \(k\).
For each fold, we use 80\% of the data for training and the remaining 20\% for testing.
The experimental results are shown in Table \ref{table:PredictConceptResult}.
Again, the performance of the fully connected neural networks that consider the distributed representations of the knowledge concepts is much better than that of the networks that do not consider the distributed representations of the knowledge concepts.
The distributed representations based on both dynamic and static interactions yield better performance than the other methods for predicting the knowledge concepts contained by problems.
Besides, the distributed representations that consider EK interactions show better performance for this task than those that consider KK interactions.
In summary, the UE, EK, and KK interaction strategies can all capture relational information between knowledge concepts and problems, and the EK interaction strategy plays a more important role than the KK and UE strategies in this task.
The experimental results show that the distributed representations of knowledge concepts and problems learned by TRACED contain effective information about the relationships between knowledge concepts and problems.
\begin{figure}[!ht]
\centering
\subfigure[the loss in wake phase]{\includegraphics[width=0.24\textwidth]{figure/FCS-wakeloss-fig4.pdf}}
\subfigure[the loss in sleep phase]{\includegraphics[width=0.24\textwidth]{figure/FCS-sleeploss-fig5.pdf}}
\caption{\label{fig:LossHDU}\textbf{The loss values of TRACED}}
\end{figure}
\subsection{Convergence of TRACED}
\label{sec:org1227ebd}
As previously stated, TRACED is optimized by a heuristic algorithm. We demonstrate the convergence of this algorithm empirically.
We have conducted experiments on four datasets, and all the experimental results show that TRACED can converge.
As shown in Figure \ref{fig:LossHDU}, TRACED converges in both the Wake and Sleep phases on all four datasets.
In the Wake phase, TRACED converges smoothly.
In the Sleep phase, although there are some fluctuations in the convergence process, the overall trend is convergent.
The fluctuation in the Sleep phase is caused by the unstable data generated in the initial stage of training.
All model parameters are randomly initialized. The Wake-phase training uses both the latent variables sampled from the approximate posterior and the real data, whereas the Sleep-phase training data are entirely generated from the model learned in the Wake phase. In the initial stage of training, the model parameters change quickly, so the Sleep-phase training data are unstable and may push the optimization in the wrong direction; hence, the sleep loss shows an upward trend at the beginning. Because the Wake phase also uses the real data, the wake loss does not show such an upward trend.
The experimental results prove that the heuristic algorithm we proposed for the training model is a suitable and effective optimization method for learning TRACED parameters.
\begin{figure*}[!ht]
\centering
\subfigure[HDU]{\includegraphics[width=0.48\textwidth]{./figure/FCS-heatMapHDU-fig6.pdf}}
\subfigure[POJ]{\includegraphics[width=0.48\textwidth]{./figure/FCS-heatMapPOJ-fig7.pdf}}
\caption{\label{fig:Visualization}\textbf{The visualization of prior and posterior of TRACED on the HDU and POJ datasets}}
\end{figure*}
\subsection{Visualization of Mastery Probability}
\label{sec:orgcb40507}
Figure \ref{fig:Visualization} visualizes the prior and posterior probabilities of students' knowledge concept mastery on the HDU and POJ datasets.
The figure shows the dynamic changes in the prior and posterior probability of a student's knowledge concept mastery as students continuously practice the problem over time.
Because the number of knowledge concepts and students is vast, we only show the dynamic change in the probability of mastering the three relevant knowledge concepts of the problem during the practice of a certain problem by a student.
With the increase in the number of exercises, the prior and posterior probability of students mastering knowledge concepts is also steadily improving.
The continuous and slow changes in the prior and posterior probabilities are consistent with our empirical understanding of the learning process.
The calculation of the prior and posterior probabilities uses the student’s exercise problem sequence \(e_i\) and exercise time sequence \(\tau_i\), and the posterior probability can be regarded as the prior probability that is corrected based on the student’s exercise response \(r_i\). Thus, there is not much difference between the prior and posterior probability.
The learners in the HDU and POJ datasets are of the same type: they all program independently in an Online Judge system, so Figure \ref{fig:Visualization}(a) and Figure \ref{fig:Visualization}(b) can be compared horizontally. Figure \ref{fig:Visualization}(a) shows a learner who repeatedly submits correct code, possibly to improve its efficiency; accordingly, the probability of mastering the knowledge concepts increases significantly after the first correct submission but only slowly in subsequent submissions. Figure \ref{fig:Visualization}(b) shows a learner who keeps practicing: over the initial six incorrect submissions, the probability of mastering the knowledge concepts increases slowly, and after the seventh, correct submission it reaches its peak, so that further correct submissions no longer increase it.
The above experimental results show that TRACED can provide learners and teachers with an interpretable cognitive diagnosis that changes dynamically over time.
\begin{figure}[!tbp]
\centering
\includegraphics[width=0.48\textwidth]{./figure/FCS-Embedding-fig8.pdf}
\caption{\label{fig:ExperimentFigure1} Visualization of the learned distributed representations of students, knowledge concept and problem for the HDU dataset, where the learned representations have been reduced to 2 dimensions by means of PCA}
\end{figure}
\subsection{Visualization of Learning Trajectory}
\label{sec:orgb6ac773}
The distributed representations of the students, knowledge concepts and problems acquired by TRACED are shown in Figure \ref{fig:ExperimentFigure1}, where the acquired representations have been reduced to 2 dimensions by means of Principal Component Analysis (PCA).
We have drawn the learning trajectory of a learner; each point on the learning trajectory corresponds to the distributed representation of a learner after a certain exercise.
We also constructed distributed representations of problems and knowledge concepts involved in the learner's exercises.
With continuous practice, the distributed representation of the learner gradually approaches the distributed representation of knowledge concepts involved in the problem.
Besides, the learner's distributed representations cluster spontaneously, with a certain distance between clusters, and within each cluster the representations roughly follow a straight line. In addition to practicing on the online education platform, students also learn in other ways (offline teaching, paper textbooks, and so on) whose data we cannot collect; thus, there are significant jumps between certain clusters on the student's learning trajectory.
The learning trajectory within a cluster is roughly linear because the representations in a cluster are generally produced by a phase of exercises involving similar knowledge concepts, so the learner's embedding shifts in a consistent direction.
By examining students' learning trajectories, teachers can quickly understand students' learning habits and processes and conveniently customize personalized learning plans for them.
\section{The Interpretable Probabilistic Generative Model}
\label{sec:org556a3dd}
In this section, we introduce the technical details of TRACED and how to apply it to three knowledge-driven tasks. For better illustration, key notations in TRACED, which can be divided into four categories, are summarized in Table \ref{table:somekeynotationsinkcrl}, namely, dataset description, model parameters, random variable, and hyperparameter notation.
\begin{figure}[!tbp]
\centering{
\includegraphics[width=0.48\textwidth]{./figure/FCS-TRACEDFramework-fig2.pdf}}
\caption{\label{fig:IKTFramework} The Framework of the TRACED model.}
\end{figure}
\subsection{Modeling Students' Exercise Responses}
\label{sec:orgb5a2624}
Students' performance is affected by many factors, such as the student's knowledge concept mastery, the knowledge concepts contained in the problem, and the similarity between different knowledge concepts.
To model these different factors, inspired by factor analysis-based knowledge tracing models \cite{DBLP:conf/aaai/VieK19,DBLP:conf/edm/ChoffinPBV19},
we design three interactive strategies for modeling students' exercise responses, namely,
strategies based on student--problem interactions (UE), problem--knowledge interactions (EK), and knowledge--knowledge interactions (KK). The UE, EK, and KK interactions are shown in Eq.\ref{eq:UEKK}. With these three interactive strategies, we predict students' exercise responses, and the distributed representations of knowledge concepts and problems can be learned by exploring three types of connections: a) connections between students and problems; b) connections between problems and concepts; and c) connections among knowledge concepts.
The proposed logarithmic linear model, which incorporates the three interactive strategies, is defined as follows:
\begin{equation}
\label{eq:UEKK}
\begin{aligned}
&logit{(p(\hat{r}_{i,t} = 1 | u_i^t, e_{i,t}))} = \mu + w_{e, e_{i,t}} \\
& + \sum\limits_{k=1}^K Q_{e_{i,t}, k} w_{c, k} + \underbrace{\sum\limits_{k_1=1}^K \sum\limits_{k_2=1}^K Q_{e_{i,t},k_1} Q_{e_{i,t},k_2} E_{c,k_1} E_{c,k_2}}_{\text{KK interactions}}\\
& + \underbrace{(\sum\limits_{k=1}^{K} u_{i,k}^t E_{c,k}) E_{e, e_{i,t}}}_{\text{UE interactions}} + \underbrace{\sum\limits_{k=1}^K Q_{e_{i,t},k} E_{e,e_{i,t}} E_{c,k}}_{\text{EK interactions}} \\
\end{aligned}
\end{equation}
where $p(\hat{r}_{i,t} = 1 | u_i^t, e_{i,t})$ represents the probability that a student with knowledge concept mastery $u_i^t$ correctly answers problem $e_{i,t}$ without considering slipping and guessing factors; \(\mathrm{logit}(x) = \log \frac{x}{1-x}\); \(\hat{r}_{i,t}\) is student \(i\)'s exercise response in the \emph{t}-th exercise record; \(u_i^t\) is student \(i\)'s knowledge concept mastery at the time of the \emph{t}-th exercise record; \(\sum\limits_{k=1}^K u_{i,k}^t E_{c,k}\) is the distributed representation of student \(i\) in the \emph{t}-th exercise record; \(E_{c,k}, E_{e,e'} \in \mathbb{R}^{d_e}\) are the distributed representations of knowledge concept \(k\) and problem \(e'\); \(\mu\) is a global bias; \(w_e\) and \(w_{c}\) are the biases for the problems and knowledge concepts, respectively; and \(Q_{e,k}\) indicates whether problem \(e\) is related to knowledge concept \(k\).
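To make the interaction terms concrete, the following NumPy sketch evaluates the logit of Eq. \ref{eq:UEKK} for a single exercise record; variable names mirror the notation above, and the vectorized shapes are our own assumptions.
\begin{verbatim}
import numpy as np

def response_logit(u_t, q_e, E_c, E_e, mu, w_e, w_c):
    """u_t: (K,) mastery; q_e: (K,) Q-matrix row of problem e;
    E_c: (K, d_e) concept embeddings; E_e: (d_e,) problem embedding;
    mu, w_e: global and problem biases; w_c: (K,) concept biases."""
    base = mu + w_e + q_e @ w_c
    c_sum = q_e @ E_c              # sum of concept embeddings of e
    kk = c_sum @ c_sum             # KK interactions
    ue = (u_t @ E_c) @ E_e         # UE interactions
    ek = c_sum @ E_e               # EK interactions
    return base + kk + ue + ek
\end{verbatim}
The probability \(p(\hat{r}_{i,t}=1|u_i^t,e_{i,t})\) is then the sigmoid of this logit.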
\begin{figure}[!tbp]
\centering{
\includegraphics[width=0.48\textwidth]{./figure/FCS-TRACED-fig3.pdf}}
\caption{\label{fig:IKTFigure} Graphical representation of TRACED.}
\end{figure}
In addition, we consider the students' guessing and slipping behaviors when modeling the students' exercise responses as follows:
\begin{equation}
\begin{aligned}
&p \left( r_{i,t} = 1 | u_i^t \right) = p(\hat{r}_{i,t} = 1 | u_i^t) (1 - p(s_{e_{i,t}} = 1))\\
& \qquad \qquad \qquad+ p(\hat{r}_{i,t} = 0 | u_i^t) p(g_{e_{i,t}} = 1)\\
\end{aligned}
\end{equation}
The probability of making a mistake with problem \(e\) \(p(s_{e}=1)\) and the probability of correctly answering problem \(e\) by guessing \(p(g_{e}=1)\) are defined as:
\begin{equation}
\begin{aligned}
p \left( s_{e} = 1 \right) &= \frac{1}{1+e^{-\theta_{s,e}}}\\
p \left( g_{e} = 1 \right) &= \frac{1}{1+e^{-\theta_{g,e}}}\\
\end{aligned}
\end{equation}
where \(\theta_{s,e}\) and \(\theta_{g,e}\) are the slipping and guessing parameters, respectively, for problem \(e\).
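A small sketch of folding the slipping and guessing factors into the final response probability, continuing the notation above (parameter names are illustrative):
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def response_prob(logit, theta_s_e, theta_g_e):
    """p(r=1 | u) for problem e with slipping and guessing."""
    p_hat = sigmoid(logit)        # correct answer from mastery
    p_slip = sigmoid(theta_s_e)   # probability of slipping
    p_guess = sigmoid(theta_g_e)  # probability of a lucky guess
    return p_hat * (1.0 - p_slip) + (1.0 - p_hat) * p_guess
\end{verbatim}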
\subsection{Modeling Students' Knowledge Concept Mastery Probabilities Over Time}
\label{sec:orgb2040d8}
As students learn new knowledge concepts and forget the original knowledge concepts, the students' mastery of each knowledge concept varies over time.
To better track students' knowledge concept mastery over time, we use a combination of learning and forgetting factors to model students' knowledge concept mastery. Student \(i\)'s mastery of knowledge concept \(k\) at the time $t$ is given as:
\begin{equation}
\begin{aligned}
p(u_{i,k}^t = 1) = &p(u_{i,k}^{t-1} = 1) (1 - p(F_k = 1)) \\
&\quad + p(u_{i,k}^{t-1} = 0) p(L_{k} = 1)\\
\end{aligned}
\end{equation}
where \(p(u_{i,k}^t = 1)\) is the probability that student \(i\) has mastered knowledge concept \(k\) at the time of the \emph{t}-th exercise; \(p(F_k=1)\) is the probability that a student forgets knowledge concept \(k\); and \(p(L_k=1)\) is the probability that a student masters knowledge concept \(k\) via learning.
In this formula, the first component $p(u_{i,k}^{t-1} = 1) (1 - p(F_k = 1))$ represents the case in which the student previously mastered knowledge concept $k$ and does not forget it, and the second component $p(u_{i,k}^{t-1} = 0) p(L_{k} = 1)$ represents the case in which the student had not previously mastered knowledge concept $k$ but has now learned it.
To facilitate the subsequent derivation, we define students' knowledge concept mastery in matrix form as follows:
\begin{equation}
\label{eq:pu}
p\left(u_{i, k}^t\right)=\left\{\begin{array}{lll}p\left(u_{i, k}^{t-1}\right) p\left(u_{i, k}^t \mid u_{i, k}^{t-1}\right) & \text { if } & t>1 \\ \pi(k) & \text { if } & t=1\end{array}\right .
\end{equation}
where \(p(u_{i,k}^t) = [p(u_{i,k}^t = 1), p(u_{i,k}^t = 0)]\) represents the probability of whether student $i$ masters knowledge concept $k$ at timestamp $t$, $\pi(k) = [\pi(k, 1), \pi(k, 0)]$ represents the probability of whether student masters knowledge concept $k$ when timestamp $t=1$, and \(p(u_{i,k}^t | u_{i,k}^{t-1})\) is the transfer matrix.
Specifically, \(p(u_{i,k}^t | u_{i,k}^{t-1})\) is defined as:
\begin{equation}
\label{eq:pu2u}
\begin{aligned}
&p(u_{i,k}^t | u_{i,k}^{t-1}) =
\left [
\begin{array}{cc}
1 - p(F_k = 1) & p(F_k = 1) \\
p(L_{k} = 1) & 1 - p(L_{k} = 1) \\
\end{array}
\right ]
\end{aligned}
\end{equation}
Inspired by the learning curve and forgetting curve \cite{anzanello2011learning,von2007understanding}, we design the forgetting probability \(p(F_k = 1)\) and learning probability \(p(L_k = 1)\) for knowledge concept \(k\) based on the following principles:
First, the longer the interval between two exercises on a particular knowledge concept, the higher the probability of forgetting that knowledge concept.
Second, the larger the number of exercises for a particular knowledge concept within a short time, the higher the probability of learning that knowledge concept.
\(p(F_k = 1)\) and \(p(L_k = 1)\) are defined as follows:
\begin{align}
& p(F_k = 1) = \sigma (\frac{\Delta \tau_k}{\theta_{f,k}} + b_{f,k}) \label{eq:Fk} \\
& p(L_k = 1) = \sigma (\frac{\theta_{l1,k}*f_k}{f_k+\theta_{l2,k}} + b_{l,k}) \label{eq:Tk}
\end{align}
where \(\sigma\) is the sigmoid activation function; \(\Delta \tau_k\) is the time interval between the current exercise record for knowledge concept \(k\) and the previous record for knowledge concept \(k\); \(f_k\) denotes the exercise frequency for knowledge concept \(k\) within a specified time \(\Delta \hat{\tau}\); \(\theta_{f,k}\), \(\theta_{l1,k}\), and \(\theta_{l2,k}\) are the forgetting and learning parameters for knowledge concept \(k\); and
\(b_{f,k}\) and \(b_{l,k}\) are the forgetting and learning biases for knowledge concept \(k\).
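The per-concept mastery update defined by Eqs. \ref{eq:pu}, \ref{eq:pu2u}, \ref{eq:Fk} and \ref{eq:Tk} can be sketched as follows, where \texttt{delta\_tau} is the time since the last exercise on concept \(k\) and \texttt{freq\_k} the exercise frequency within \(\Delta \hat{\tau}\); this is an illustrative sketch, not the authors' implementation.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mastery_update(p_prev, delta_tau, freq_k,
                   theta_f, b_f, theta_l1, theta_l2, b_l):
    """One-step update of p(u_k^t = 1) from p(u_k^{t-1} = 1)."""
    p_forget = sigmoid(delta_tau / theta_f + b_f)
    p_learn = sigmoid(theta_l1 * freq_k / (freq_k + theta_l2) + b_l)
    return p_prev * (1.0 - p_forget) + (1.0 - p_prev) * p_learn
\end{verbatim}
Applied elementwise to all \(K\) concepts, this update yields the prior \(p(u_i^t)\) used below; the same transition is reused later when predicting future mastery from the approximate posterior.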
To better understand our proposed TRACED, we summarize the graphical representation of our model in Figure \ref{fig:IKTFigure}, where the shaded variables and unshaded variables indicate the observed variables and latent variables, respectively. Here, what we can observe are students' responses \(R\) with \(M\) problems and \(T\) exercises and the Q-matrix with \(K\) knowledge concepts.
The mastery \(U_{k}^{t}\) of knowledge concept \(k\) at the \emph{t}-th exercise depends on the mastery \(U_{k}^{t-1}\) of knowledge concept \(k\) at the \emph{(t-1)}-th exercise, the time interval \(\Delta \tau_k\) between \(U_k^t\) and \(U_k^{t-1}\), and the exercise frequency \(f_{k}\) within \(\Delta \hat{\tau}\).
The response \(R_{e}^{t}\) about problem \(e\) at the \emph{t}-th exercise is influenced by problem information \(Q_{e}\), guessing factor \(g_{e}\), slipping factor \(s_{e}\) and knowledge concept mastery \(U^t\) at the \emph{t}-th exercise.
\subsection{Model Inference}
\label{sec:org18a4951}
In this section, we first infer the joint probability \(p(u_i, r_i)\) and marginal probability \(p(r_{i})\) and then explain the technical challenges encountered in model learning. Then, we detail the solutions proposed to address these technical challenges.
\subsubsection{$p(u_i, r_i)$ and $p(r_i)$ Inference}
\label{sec:orgc78cd47}
We obtain the joint distribution of the observed and hidden variables by combining Eqs. \ref{eq:pu} and \ref{eq:pu2u}.
\begin{equation}
\label{eq:pur}
\begin{aligned}
&p(u_i, r_i) = p(u_i^1) \prod\limits_{t=2}^{T} p(u_i^t | u_i^{t-1}) \prod\limits_{ t=1 }^{ T } p(r_{i,t} | u_i^t) \\
&= \prod\limits_{k=1}^{K} p(u_{i,k}^1) \prod\limits_{t=2}^{T} \prod\limits_{k=1}^{K} p(u_{i,k}^t | u_{i,k}^{t-1}) \prod\limits_{ t=1 }^{T} p(r_{i,t} | u_{i,1}^t,...,u_{i,K}^t )
\end{aligned}
\end{equation}
By using maximum likelihood estimation to learn model parameters, we need to maximize the probability of observations (maximize the marginal probability distribution \(p(r_i)\)). The marginal probability distribution \(p(r_i)\) can be obtained as:
\begin{equation}
\label{eq:pr}
\begin{aligned}
p(r_i) &= \sum\limits_{u_i} p(u_i, r_i ) \\
&= \sum\limits_{u_i} (p(u_{i}^1 ) \prod\limits_{ t=2 }^{T} p(u_i^t | u_i^{t-1}) \prod\limits_{ t=1 }^{ T } p(r_{i,t} | u_i^t))\\
&= \sum\limits_{u_i} (p(u_{i}^1) p(r_{i,1} | u_i^1) \prod\limits_{ t=2 }^{ T } p(u_i^t | u_i^{t-1}) p(r_{i,t} | u_i^{t}))\\
&=(\sum\limits_{u_i^{1}} p(u_i^1) p(r_{i,1}|u_i^1))*...* (\sum\limits_{u_i^{T}} p(u_i^T | u_i^{T-1}) p(r_{i,T}|u_i^T))\\
\end{aligned}
\end{equation}
where \(\sum\limits_{u_i^{t}} p(u_i^t) p(r_{i,t}|u_i^t)\) is defined as follows:
\begin{equation}
\begin{aligned}
&\sum\limits_{u_i^{t}} p(u_i^t) p(r_{i,t}|u_i^t) = \sum\limits_{u_{i}^t} p(u_{i,1}^t) ... p(u_{i,K}^t) p(r_i | u_{i,1}^t,...,u_{i,K}^{t})
\end{aligned}
\end{equation}
Due to the \emph{explaining away problem} (the observation \(r_{i,t}\) depends on multiple hidden variables \(u_{i,1}^t, u_{i,2}^t ,..., u_{i,K}^{t}\)) \cite{204911}, we cannot use the forward algorithm \cite{rabiner1989tutorial} to reduce the time complexity; hence, the time complexity of calculating \(p(r_i)\) is \(O(2^K*T)\), which is unacceptably high. Therefore, we cannot use maximum likelihood estimation for parameter optimization.
By using the Expectation-Maximization (EM) algorithm to learn model parameters, we need to maximize \(\sum\limits_{u} p(u | r) \log p(u, r)\). The posterior distribution \(p(u_i | r_i)\) can be obtained by combining Eq.\ref{eq:pur} and Eq.\ref{eq:pr} as follows:
\begin{equation}
\label{eq:101}
\begin{aligned}
p (u_i| r_i) = \frac{p(u_i, r_i)}{p(r_i)}
\end{aligned}
\end{equation}
Since the time complexity of calculating the posterior distribution \(p(u_i | r_i)\) is also \(O(2^K * T)\),
we cannot quickly sample the latent variables \(u_i\). Therefore, we cannot use the EM algorithm for parameter optimization.
To solve this problem, we design an LSTM-based network to approximate the posterior distribution and propose a heuristic algorithm to learn the model parameters.
\subsubsection{Approximate Inference By Using LSTM}
\label{sec:orgc92de86}
We propose an LSTM-based network to approximate the posterior distribution \(p(u_i | r_i)\) as:
\begin{equation}
\label{eq:qphi}
\begin{aligned}
&q_{\phi}(u_{i}^{t}) = \sigma(W_q*LSTM(\tilde{x}_{i,t})+b_{q})\\
\end{aligned}
\end{equation}
where \(q_{\phi}(u_i^t) \in \mathbb{R}^{K}\) denotes the approximate result of the posterior distribution \([p(u_{i,1}^t|r_{i,t}),\) \(...,p(u_{i,K}^{t}|r_{i,t})]\),
\(\phi\) denotes all the training parameters in \(q_{\phi}\), and \(\sigma\) is the sigmoid activation function.
To better utilize and preserve the information in the exercise response $r_{i,t}$, we pair the exercise embedding \(x_{i,t}\) with a zero vector \(\mathbf{0} = (0,0,...,0)\) of the same \(2d_e\) dimensions, placed according to the response, and then form the combined input vector \(\tilde{x}_{i,t} \in \mathbb{R}^{4d_e+K}\) as:
\begin{equation}
\begin{aligned}
&\widetilde{x}_{i,t}=\left\{\begin{array}{ll}
{\left[x_{i,t} \oplus \mathbf{0} \oplus p(u_{i}^t)\right]} & \text { if } r_{i,t}=1 \\
{\left[\mathbf{0} \oplus x_{i,t} \oplus p(u_{i}^t)\right]} & \text { if } r_{i,t}=0 \\
\end{array}\right. \\
\end{aligned}
\end{equation}
where \(\oplus\) is the operation of concatenating two vectors, and \(p(u_{i}^t) = [p(u_{i,1}^{t}),...,p(u_{i,K}^{t})]\) can be obtained from Eq.\ref{eq:pu}. In this way, the LSTM can better identify and utilize the exercise response information $r_{i,t}$ from the embedding vector $\widetilde{x}_{i,t}$. The exercise embedding \(x_{i,t}\) is defined as:
\begin{align}
& x_{i,t} = [E_{e,e_{i,t}}, \hat{E}_{e,e_{i,t}}] \label{eq:17_1}\\
& \hat{E}_{e,e_{i,t}} = \sum\limits_{j=1}^K Q_{e_{i,t}, j} E_{k,j} \label{eq:17_2}
\end{align}
where \(E_{k,j}, E_{e, e_{i,t}} \in \mathbb{R}^{d_e}\) are the distributed representations of the knowledge concept $j$ and problem $e_{i,t}$, and $Q_{e_{i,t},j}$ represents whether problem $e_{i,t}$ contains the knowledge concept $j$.
Given student \(i\)'s combined input sequence \({\tilde{x}_{i,1},...,\tilde{x}_{i,T}}\), the hidden state \(h_{i,t} \in \mathbb{R}^{d_h}\) at the \emph{t}-th exercise is updated based on the current combined input \(\tilde{x}_{i,t}\) and the previous student state \(h_{i,t-1}\) as follows.
\begin{equation}
\begin{aligned}
i_{i,t} &=\sigma\left(\mathbf{Z}_{\mathbf{\tilde { x }i}} \tilde{x}_{i,t}+\mathbf{Z}_{\mathbf{h} \mathbf{i}} h_{i,t-1}+\mathbf{b}_{\mathbf{i}}\right) \\
f_{i,t} &=\sigma\left(\mathbf{Z}_{\widetilde{\mathbf{x}} \mathbf{f}} \tilde{x}_{i,t}+\mathbf{Z}_{\mathbf{h} \mathbf{f}} h_{i,t-1}+\mathbf{b}_{\mathbf{f}}\right) \\
o_{i,t} &=\sigma\left(\mathbf{Z}_{\tilde{\mathbf{x}} \mathbf{o}} \tilde{x}_{i,t}+\mathbf{Z}_{\mathbf{h} \mathbf{o}} h_{i,t-1}+\mathbf{b}_{\mathbf{o}}\right) \\
c_{i,t} &=f_{i,t} \cdot c_{i,t-1}+i_{i,t} \cdot \tanh \left(\mathbf{Z}_{\tilde{\mathbf{x}} \mathbf{c}} \tilde{x}_{i,t}+\mathbf{Z}_{\mathbf{h} \mathbf{c}} h_{i,t-1}+\mathbf{b}_{\mathbf{c}}\right) \\
h_{i,t} &=o_{i,t} \cdot \tanh \left(c_{i,t}\right)
\end{aligned}
\end{equation}
where \(\mathbf{Z_{\tilde{x}*}} \in \mathbb{R}^{d_h \times (4d_e+K)}\), \(\mathbf{Z}_{h*} \in \mathbb{R}^{d_h \times d_h}\) and \(\mathbf{b_{* }} \in \mathbb{R}^{d_h}\) are the parameters in LSTM.
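A minimal PyTorch sketch of this approximate posterior network is given below: the combined input \(\tilde{x}_{i,t}\) places the exercise embedding in the first or second slot depending on the response and appends the prior \(p(u_i^t)\). Class and argument names are our own assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class PosteriorLSTM(nn.Module):
    """Approximates q_phi(u_i^t | r_{i,1..t}) with an LSTM."""
    def __init__(self, d_e, K, d_h):
        super().__init__()
        self.lstm = nn.LSTM(input_size=4 * d_e + K,
                            hidden_size=d_h, batch_first=True)
        self.out = nn.Linear(d_h, K)

    @staticmethod
    def combine(x_t, r_t, p_u_t):
        """tilde_x = [x_t, 0, p(u^t)] if r=1 else [0, x_t, p(u^t)]."""
        zeros = torch.zeros_like(x_t)
        correct = torch.cat([x_t, zeros, p_u_t], dim=-1)
        wrong = torch.cat([zeros, x_t, p_u_t], dim=-1)
        return torch.where(r_t.unsqueeze(-1).bool(), correct, wrong)

    def forward(self, tilde_x_seq):          # (batch, T, 4*d_e + K)
        h_seq, _ = self.lstm(tilde_x_seq)
        return torch.sigmoid(self.out(h_seq))  # (batch, T, K)
\end{verbatim}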
\begin{algorithm}[htbp]
\caption{The training algorithm of TRACED}
\label{algorithm:KCRLESTraining}
\begin{algorithmic}[1]
\Require
Student exercise sequences $S = \left\{ e, t, r \right\}$.
\Ensure
The trained parameters $\phi$, and $\theta$.
\For {iteration = 1 to IterNum}
\For {i = 1 to N}
\For {t = 1 to T}
\State Sample ${u_{i}^{t}}' \sim q_{\phi}(u_i^t | r_{i,t})$;
\EndFor
\EndFor
\State Update $\theta = \arg \min\limits_{\theta} - \frac{1}{N} \sum\limits_{i=1}^{N} \log p(u_i',r_{i})$;
\For {i = 1 to N}
\For {t = 1 to T}
\State Sample ${u_i^t}' \sim p(u_i^t | {u_i^{t-1}}')$;
\State Sample ${r_{i,t}}' \sim p(r_{i,t}|{u_i^t}')$;
\EndFor
\EndFor
\State Update $\phi = \arg \min\limits_{\phi} - \frac{1}{N} \sum\limits_{i=1}^N \log q_{\phi}({u_i}' | {r_i}')$;\
\EndFor
\end{algorithmic}
\end{algorithm}
\subsection{Model Learning}
\label{sec:orgf38d0ea}
This section explains in detail how the model's parameters are learned. The process of TRACED parameter learning is shown in Algorithm \ref{algorithm:KCRLESTraining}.
Inspired by \cite{hinton1995wake}, parameter learning is divided into the wake phase and the sleep phase.
In the wake phase, we use \(q_{\phi}(u_i|r_i)\) to sample the hidden variables \(u_i\) and then optimize the parameters \(\theta\) of \(p(u_i, r_i)\) by using the \(u_i\) obtained via sampling.
In the sleep phase, we use \(p(u_i, r_i)\) to sample the hidden variables \(u_i\) and observed variables \(r_i\) and then optimize the parameters \(\phi\) of \(q_{\phi}(u_i | r_i)\) with the sampled \(u_i\) and \(r_i\). In addition, both the Wake and Sleep Phases rely on real data: observed and hidden variables are sampled based on the real data, and both the real data and the sampled data are used to train the model.
\subsubsection{The loss function in the Wake Phase.}
\label{sec:org0ba821f}
In the Wake Phase, we utilize an LSTM-based network to approximate the posterior distribution \(p(u | r, \theta)\) and learn the parameters of TRACED by maximizing Evidence Lower BOund (ELBO).
The derivation process of ELBO is as follows.
\begin{equation}
\begin{aligned}
&\log p(r | \theta) = \log \frac{p(u, r| \theta)}{p(u|r, \theta)}\\
&= \sum \limits_u q_\phi (u | r) \log \frac{p(u, r| \theta)}{q_{\phi}(u | r)} \frac{q_{\phi}(u | r)}{p(u|r, \theta)}\\
&= \underbrace{\sum\limits_{u} q_{\phi} (u | r) \log p(u, r| \theta) + H[q_{\phi}]}_{\text{Evidence Lower Bound (ELBO)}} \\
& \qquad + KL (q_{\phi} (u | r) || p(u| r, \theta))\\
\end{aligned}
\end{equation}
where KL represents Kullback-Leibler divergence, and \(H[q_{\phi}] = - \sum\limits_u q_\phi(u | r) \log q_\phi(u | r)\).
The derivation process of the loss function in the Wake Phase is expressed as follows:
\begin{equation}
\begin{aligned}
\theta &= \arg \max_{\theta} ELBO\\
&= \arg \max_{\theta} \sum\limits_{u} q_{\phi} (u | r) \log p(u, r| \theta) + H[q_{\phi}]\\
&\propto \arg \max_{\theta} \sum\limits_{u} q_{\phi} (u | r) \log p \left( u, r| \theta \right)\\
&= \arg \min_{\theta} - \frac{1}{N} \sum\limits_{i=1}^N \log p \left( u_i', r_i| \theta \right)
\end{aligned}
\end{equation}
where \(\theta\) represents all parameters of \(p(u,r|\theta)\), \(u_i'\) is sampled by \(q_{\phi}(u_i | r_i)\), and \(r_i\) is the real data.
\subsubsection{The loss function in the Sleep Phase.}
\label{sec:org5042490}
In the Sleep Phase, we learn the parameters \(\phi\) of the LSTM network. We minimize \(KL(p(u|r, \theta) || q_{\phi} (u|r, \phi))\) to better approximate the posterior distribution \(p(u|r, \theta)\). The derivation process of the loss function in the Sleep Phase is defined as follows.
\begin{equation}
\begin{aligned}
\phi &= \arg \min_{\phi} KL(p( u | r, \theta) || q_{\phi} (u | r, \phi) )\\
&\propto \arg \max_{\phi} \sum\limits_{u} p(r | \theta) p(u | r, \theta) \log \frac { q_{\phi} (u | r)} {p(u | r, \theta)}\\
& \quad- \sum\limits_{u} p(u,r| \theta) \log p(u | r, \theta) \\
&\propto \arg \max_{\phi} E_{u,r \sim p(u, r | \theta)} (q_{\phi} (u | r))\\
&= \arg \min_{\phi} -\frac{1}{N} \sum\limits_{i=1}^N \log q_{\phi}(u_i' | r_i')\\
\end{aligned}
\end{equation}
where \(\phi\) represents all parameters of \(q_{\phi}(u_i | r_i)\), and \(u_i'\) and \(r_i'\) are sampled by \(p(u_i, r_i | \theta)\).
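Putting the two losses together, a highly simplified version of the training loop in Algorithm \ref{algorithm:KCRLESTraining} might look as follows; \texttt{p\_theta} and \texttt{q\_phi} stand for the generative model and the approximate posterior, and all method names are assumptions about an implementation rather than the authors' code.
\begin{verbatim}
import torch

def train_wake_sleep(p_theta, q_phi, opt_theta, opt_phi,
                     data_loader, iters):
    for _ in range(iters):
        # Wake phase: sample u' ~ q_phi(u | r) for real sequences,
        # then maximize log p(u', r | theta).
        for batch in data_loader:
            with torch.no_grad():
                u_sampled = q_phi.sample(batch)
            wake_loss = -p_theta.log_joint(u_sampled, batch).mean()
            opt_theta.zero_grad(); wake_loss.backward(); opt_theta.step()

        # Sleep phase: generate (u', r') ~ p_theta,
        # then maximize log q_phi(u' | r').
        for batch in data_loader:
            with torch.no_grad():
                u_gen, r_gen = p_theta.sample(batch)
            sleep_loss = -q_phi.log_prob(u_gen, r_gen).mean()
            opt_phi.zero_grad(); sleep_loss.backward(); opt_phi.step()
\end{verbatim}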
\subsection{Predicting Student Future Knowledge Mastery Probabilities and Performance}
\label{sec:orgc51de35}
Before predicting students' future performance, we must predict their future knowledge concept mastery probabilities. Given the previous \(t\) exercise records of student \(i\), we predict student \(i\)'s knowledge concept mastery when the (t+1)-th exercise is performed at time \(\tau_{i,t+1}\) as:
\begin{equation}
\begin{aligned}
&p(u_i^{t+1} | r_{i,1}, r_{i,2}, ..., r_{i,t}, e_i, \tau_i)\\
&\approx \sum\limits_{u_i^t} q_{\phi} (u_i^t | r_{i,1}, ..., r_{i,t}, e_i, \tau_i) p(u_i^{t+1} | u_i^t, \tau_i)\\
&= (\sum\limits_{u_{i,1}^{t}} q_{\phi} (u_{i,1}^t | r_{i,1}, ..., r_{i,t}, e_i, \tau_i) p(u_{i,1}^{t+1} | u_{i,1}^t, \tau_i))* ... *\\
& \quad (\sum\limits_{u_{i,K}^{t}} q_{\phi} (u_{i,K}^t | r_{i,1}, ..., r_{i,t}, e_i, \tau_i) p(u_{i,K}^{t+1} | u_{i,K}^t, \tau_i))
\end{aligned}
\end{equation}
where student \(i\)'s exercise time \(\tau_i = \left\{ \tau_{i,1},...,\tau_{i,t+1}\right\}\) and student \(i\)'s exercise problem \(e_i = \left\{ e_{i,1}, ..., e_{i,t} \right\}\).
Furthermore, we can predict the probability that student \(i\) correctly answers problem \(e_{i,t+1}\) at time \(\tau_{i,t+1}\) as:
\begin{equation}
\begin{aligned}
&p(r_{i,T+1}| r_{i,1}, r_{i,2}, ... , r_{i,T}, e_{i}, \tau_i)\\
&= \frac{p(r_{i,T+1}, u_i^{T+1}| r_{i,1}, r_{i,2}, ... , r_{i,T}, e_i, \tau_i)}{p(u_i^{T+1}| r_{i,1}, r_{i,2}, ..., r_{i,T}, r_{i, T+1}, e_i, \tau_i)}\\
&= \frac{p(u_i^{T+1}| r_{i,1}, r_{i,2}, ... , r_{i,T}, e_i, \tau_i) p(r_{i, T+1} | u_i^{T+1}, \tau_i)}{p(u_i^{T+1}| r_{i,1}, r_{i,2}, ...,r_{i,T}, r_{i,T+1}, \tau_i)} \\
&\approx \frac{ p(r_{i,T+1} | u_i^{T+1}, e_i) (\sum\limits_{u_i^T} q_{\phi} (u_i^T | r_{i,1\sim T}, e_i, \tau_i) p(u_i^{T+1} | u_i^T, \tau_i))}{q_{\phi} (u_i^{T+1} | r_{i,1 \sim T+1}, e_i, \tau_i)}
\end{aligned}
\end{equation}
Since the time complexity of \(\sum\limits_{u_i^T} q_{\phi} (u_i^T | r_{i,1\sim T}, e_i, \tau_i)\) \(p(u_i^{T+1} | u_i^T, \tau_i)\) is \(O(2^K)\), we employ a second LSTM network to approximate \(p(r_{i,T+1}| r_{i,1}, r_{i,2}, ... \) \(, r_{i,T}, e_{i}, \tau_i)\) using the distributed representations of students' exercise records. Finally, we predict students' future exercise responses as follows.
\begin{align}
& y_p(i,t+1) = \sigma(W_n*LSTM(\hat{x}_{i,t+1})+b_{n})\\
& \hat{x}_{i,t+1}= \left[x_{i,t+1} \oplus q_{\phi}(u_{i}^{t}) \oplus p(u_{i}^{t+1})\right]
\end{align}
where \(\hat{x}_{i,t+1}\) represents the distributed representation of student \emph{i}'s \emph{(t+1)}-th exercise record without the exercise response, which contains all the information needed to calculate \(p(r_{i,T+1}| r_{i,1}, r_{i,2}, ... , r_{i,T}, e_{i}, \tau_i)\) (prior, posterior, and distributed representation); \(x_{i,t+1}\), \(q_{\phi}(u_i^{t})\), and \(p(u_i^{t+1})\) can be obtained via Eqs. \ref{eq:17_1}, \ref{eq:17_2}, \ref{eq:qphi}, and \ref{eq:pu}, respectively; and \(y_{p}(i,t+1)\) represents the probability that student \(i\) answers correctly in the \emph{(t+1)}-th exercise.
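As a small sketch of this prediction head (dimensions and names are assumptions), a second LSTM consumes \(\hat{x}_{i,t+1} = [x_{i,t+1} \oplus q_{\phi}(u_i^t) \oplus p(u_i^{t+1})]\) and outputs the response probability:
\begin{verbatim}
import torch
import torch.nn as nn

class PerformancePredictor(nn.Module):
    """Predicts p(r_{i,t+1}=1) from hat_x_{i,t+1}."""
    def __init__(self, d_e, K, d_p):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2 * d_e + 2 * K,
                            hidden_size=d_p, batch_first=True)
        self.out = nn.Linear(d_p, 1)

    def forward(self, x_next, q_post, p_prior):
        # x_next: (batch, T, 2*d_e); q_post, p_prior: (batch, T, K)
        hat_x = torch.cat([x_next, q_post, p_prior], dim=-1)
        h, _ = self.lstm(hat_x)
        return torch.sigmoid(self.out(h)).squeeze(-1)
\end{verbatim}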
\subsection{Predicting Relationships between Concepts}
\label{sec:org1ea799b}
Given part of a knowledge concept graph, we can build a supervised model to fill in the gaps in the graph. In this paper, this goal is simplified to the prediction of only the inclusion relationships between knowledge concepts.
The inclusion relationship between knowledge concepts \(i\) and \(j\) is predicted as follows:
\begin{equation}
\label{equ:PredictingRelationshipsbetweenKnowledgeConcepts}
\begin{aligned}
&h_r = tanh(W_{r,h} * E_r + b_{r,h})\\
&y_{r} (i,j) = \sigma(W_{r,o} * h_r + b_{r,o})
\end{aligned}
\end{equation}
where \(E_r = [E_{k,i}, E_{k,j}, E_{k,i} - E_{k,j}]\) is built from the distributed representations of knowledge concepts \(i\) and \(j\), and \(W_{r,h}\), \(W_{r,o}\), \(b_{r,h}\), and \(b_{r,o}\) are the training parameters of the fully connected neural network.
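For reference, a minimal PyTorch sketch of this relation classifier with 30 hidden units is shown below; the same two-layer structure also serves the Q-matrix predictor of Eq. \ref{eq:PredictingConceptsInProblems}, with a problem embedding and a concept embedding as inputs instead. Names are illustrative.
\begin{verbatim}
import torch
import torch.nn as nn

class RelationMLP(nn.Module):
    """Scores whether embedding i 'includes' embedding j."""
    def __init__(self, d, hidden=30):
        super().__init__()
        self.hidden = nn.Linear(3 * d, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, emb_i, emb_j):
        feats = torch.cat([emb_i, emb_j, emb_i - emb_j], dim=-1)
        h = torch.tanh(self.hidden(feats))
        return torch.sigmoid(self.out(h)).squeeze(-1)
\end{verbatim}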
\subsection{Predicting Concepts contained by Problems}
\label{sec:orga2021d1}
Given part of a Q-matrix, we can use it to train a supervised model to judge whether a certain problem contains a certain knowledge concept. This model can be used to fill in the missing values of the Q-matrix and to fuzzify the Q-matrix by replacing the original 0--1 matrix with a continuous probability matrix.
We predict the inclusion relationships between knowledge concepts and problems as follows:
\begin{equation}
\label{eq:PredictingConceptsInProblems}
\begin{aligned}
&h_Q = tanh(W_{Q,h}*E_Q + b_{Q,h})\\
&y_{Q} (i,j) = \sigma(W_{Q,o} * h_Q + b_{Q,o})
\end{aligned}
\end{equation}
where \(E_Q = [E_{e,i}, E_{k,j}, E_{e,i} - E_{k,j}]\) and \(W_{Q,h}\), \(W_{Q,o}\), \(b_{Q,h}\), and \(b_{Q,o}\) are the training parameters of the fully connected neural network.
\section{Introduction}\label{sec:org477f836}
Recently, the number of online learning students has continually increased with the development of intelligent online education, such as Massive Open Online Courses (MOOCs) \cite{DBLP:conf/kdd/LiuTLZCMW19} and Online Judging (OJ) \cite{DBLP:conf/ijcai/WuLLCSCH15} systems.
The student-teacher ratio is getting higher and higher:
a teacher may serve thousands of learners simultaneously and cannot provide personalized service for each of them. Therefore, there is an increasing demand for recommendation applications based on online intelligent education, including exercise and learning path recommendations \cite{DBLP:conf/edm/AiCGZWFW19}. A key issue in such applications is knowledge tracing, i.e., capturing students' knowledge status over time \cite{DBLP:conf/nips/PiechBHGSGS15}.
As shown in Fig. \ref{fig:ExampleKnowledgeTracing}, knowledge tracing models track the change in students' past knowledge status over time utilizing problem information (knowledge concepts contained in problems), students' historical assignment problems and their responses. On this basis, knowledge tracing models predict students' future knowledge status and learning performance. Such knowledge tracing models can help teachers adjust teaching workloads and future teaching plans.
\begin{figure}[!htbp]
\centering{
\includegraphics[width=0.45\textwidth]{./figure/FCS-ToyExample-fig1.pdf}}
\caption{\label{fig:ExampleKnowledgeTracing} A toy example of the knowledge tracing task.}
\end{figure}
In a real environment, teachers generally want to obtain interpretable results of students' knowledge status through knowledge tracing models.
Markov chain-based knowledge tracing (MCKT) models
\cite{DBLP:conf/ijcai/WuLLCSCH15,DBLP:conf/cikm/ChenLHWCWSH17,Sun2022,Wong2021,Liu2022,Zhang2018}
can provide interpretable results of students' knowledge status (knowledge concept mastery probability) for teachers. MCKT models regard the observed student's exercise performance and the unobservable student's knowledge status as observed and latent random variables, respectively. Then, MCKT models the relationship between random variables based on prior knowledge and then tracks knowledge concept mastery probability over time.
However, as the number of knowledge concepts increases, the number of latent variables keeps increasing. These latent variables may all relate to an observed variable (student assignment performance), which exponentially increases the time complexity of MCKT in calculating the student performance probability.
It is also known as the \textit{explaining away problem} \cite{204911}.
In addition, the existing MCKT models only consider the relationship between students' knowledge status and their performance when modeling students' exercise response but ignore the relationship between knowledge concepts involved in a learning task.
To address these challenges, we propose an inTerpretable pRobAbilistiC gEnerative moDel (TRACED) for tracking knowledge concept mastery probabilities over time.
A reasonable prior is critical to the performance of probabilistic graphical models.
Therefore, to better model the change of knowledge state over time, we adopt the learning curve and forgetting curve as priors to track students' knowledge concept mastery probabilities over time.
To better model students' exercise responses (i.e., students' performance), we consider the relationships among students' knowledge status, knowledge concepts and problems.
Specifically, we design a logarithmic linear module with three interactive strategies, which models students' exercise responses by considering these relationships.
Our logarithmic linear module can also acquire distributed representations of students' knowledge status, knowledge concepts and problems.
To solve \textit{explaining away problem} (i.e., the high time complexity of posterior estimation and students' future performance prediction), we utilize two LSTM networks to predict students' future performance and approximate the posterior distribution in model inference, respectively.
Furthermore, we propose a heuristic algorithm to train probabilistic generative models and LSTMs jointly.
The joint training of the LSTMs and the probabilistic generative model is divided into two phases, i.e., the Wake Phase and the Sleep Phase.
During the Wake Phase, we utilize an LSTM network to approximate the posterior distribution and then sample the latent variables for each training sample based on the LSTM network. Then we learn the parameters of the probabilistic generative model by maximizing the Evidence Lower BOund (ELBO).
During the Sleep Phase, we first generate new data with completed latent variables by using our probabilistic generative model.
These new data are then used to train the LSTM network that approximates the posterior distribution.
By repeating the above two stages, we jointly train the probabilistic generative model and the LSTMs.
To verify the effectiveness of our model, we conduct experiments with four real-world datasets in three knowledge-driven tasks.
The three knowledge-driven tasks are future student performance prediction, knowledge concept relation prediction, and knowledge concept prediction for problems.
The experimental results in future student performance prediction show that TRACED outperforms other knowledge tracing methods in predicting students' future performance.
The experimental results in knowledge concept relation prediction and knowledge concept prediction for problems show that our model can learn the relationship among students' knowledge status, knowledge concept and problem from students' exercise sequences.
Moreover, the distributed representation of knowledge concepts and problems learned by TRACED is effective.
Further, we perform several case studies to show that TRACED exhibits excellent interpretability and has the potential for personalized automatic feedback in a real-world educational environment.
The main contributions are outlined as follows:
\begin{itemize}
\item To the best of our knowledge, this is the first comprehensive attempt to track students' numerous knowledge concept mastery probabilities over time instead of tracking knowledge state by modeling the mastery of knowledge concepts as unexplainable model parameters or hidden states.
\item In order to better model students' exercise responses, we design a logarithmic linear module with three interactive strategies, which model students' exercise responses by considering the relationship among students' knowledge status, knowledge concept and problems.
\item To solve the \emph{explaining away problem} (i.e., the high time complexity of posterior estimation and students' future performance prediction), we utilize two LSTM-based networks to approximate the posterior distribution and predict students' future performance, respectively. Then we propose a heuristic algorithm to train LSTMs and probabilistic generative model jointly.
\item Experiments show that TRACED is effective on four real-world datasets for three knowledge-driven tasks (future performance prediction, knowledge concept relation prediction and concept prediction for problems) and exhibits excellent interpretability.
\end{itemize}
\section{Conclusions}
\label{sec:org1dcb972}
In this paper, we proposed an inTerpretable pRobAbilistiC gEnerative moDel (TRACED), which can track numerous students' knowledge concept mastery probabilities over time.
To better model students' learning process, we adopted the learning and forgetting curves as priors to capture the dynamic changes in students' knowledge concept mastery over time.
Second, we designed a logarithmic linear model with three interactive strategies to model students' exercise responses by considering the interactions among knowledge concepts, problems, and students.
To solve the \emph{explaining away} problem, we designed LSTM-based networks to approximate the posterior distribution and proposed a heuristic algorithm to learn the model parameters.
Lastly, we conducted experiments with four real-world datasets on three knowledge-driven tasks.
The experimental results for predicting students' future performance demonstrate the effectiveness of TRACED as a knowledge tracing model.
The experimental results for predicting the relationship between knowledge concepts and the relationship between knowledge concepts and problems demonstrate the effectiveness of the distributed representation of knowledge concepts and problems.
Besides, we conducted several case studies, which show that TRACED exhibits excellent interpretability and has the potential for personalized automatic feedback in a real-world educational environment.
\section{Acknowledgments}
The work described in this paper was supported by National Natural Science Foundation of China (Nos.62272093, 62137001, U1811261, 61902055).
\section{Problem Definition and Study \\ Overview}
\label{sec:org55cd7f2}
In this section, we first formally introduce knowledge tracing and three knowledge-driven tasks. Then we introduce our study overview.
Assume that there are \(N\) students, \(M\) problems, and \(K\) knowledge concepts in a learning system.
In this system, student exercise logs record the results of exercises performed by the students at different times.
Student \(i\)'s exercise sequence is represented as \(S_i = \left\{ S_{i,1}, S_{i,2}, S_{i,3}, ... \right\}\), and
\(S_{i,t} = \left\{ e_{i,t}, r_{i,t}, \tau_{i,t} \right\}\) indicates that student \(i\) submitted problem \(e_{i,t}\) in the \emph{t}-th exercise record; the submission time was \(\tau_{i,t}\), and the result was \(r_{i,t}\) (either ``True'' or ``False'', coded as 1 and 0, respectively).
Additionally, we have a Q-matrix, represented as a binary matrix \(Q \in \mathbb{R}^{M \times K}\); \(Q_{j,k}=1\) means that problem \(j\) is related to knowledge concept \(k\), whereas a value of 0 indicates that the corresponding problem and concept are unrelated.
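For concreteness, the following minimal Python sketch (with hypothetical toy values) illustrates one possible in-memory representation of an exercise sequence \(S_i\) and the Q-matrix; the field layout is an assumption for illustration, not the datasets' actual schema.
\begin{verbatim}
# Minimal sketch (hypothetical toy values) of the structures above:
# one exercise record S_{i,t} = (e, r, tau) and a binary M x K Q-matrix.
import numpy as np

# one student's exercise sequence: (problem id, response, timestamp)
S_i = [
    (3, 1, 1609459200),   # solved problem 3 at this Unix time
    (7, 0, 1609462800),   # failed problem 7 one hour later
]

M, K = 10, 4               # 10 problems, 4 knowledge concepts
Q = np.zeros((M, K), dtype=int)
Q[3, 1] = 1                # problem 3 involves concept 1
Q[7, [0, 2]] = 1           # problem 7 involves concepts 0 and 2
\end{verbatim}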
Without loss of generality, knowledge tracing and the three knowledge-driven tasks are formulated as follows:
\noindent \textbf{Knowledge Tracing.} Given students' exercise sequence \(S\), and Q-matrix labeled by educational experts, our goal is three-fold: 1) modeling students' knowledge concepts mastery probabilities from the 1-st to $t$-th exercise; 2) predicting students' knowledge concepts mastery probabilities at the ($t+1$)-th exercise; 3) calculating the distributed representation of students' exercise records from the 1-st to ($t+1$)-th exercise, the distributed representation of knowledge concepts, and the distributed representation of problems.
\noindent \textbf{Future Performance Prediction Task.}
Given the distributed representations of students' exercise records from the 1st to ($t+1$)-th exercise, our goal is to predict students' responses at the ($t+1$)-th exercise.
\noindent \textbf{Knowledge Concept Relation Prediction Task.} Given distributed representations of a set of knowledge concepts, the goal is to predict the relationships among knowledge concepts. The relationships between knowledge concepts include superior relationships, subordinate relationships, and disparate relationships.
\noindent \textbf{Concept Prediction for Problems Task.} Given distributed representations of knowledge concepts and problems, the goal is to predict the knowledge concepts of the given problems.
As shown in Figure \ref{fig:IKTFramework}, our solution is a two-stage framework consisting of a modeling stage and a predicting stage: 1) In the modeling stage, given students' exercise records and the Q-matrix,
we first model students' learning and forgetting behavior to track students' knowledge concept mastery probabilities over time.
Then, we propose a logarithmic linear model that captures students' exercise responses by analyzing the interactions among students, problems, and knowledge concepts. After that, we can obtain students' knowledge concept mastery probabilities \(U^{1}, ..., U^{t}\) at different times and students' learning trajectories. 2) In the predicting stage, TRACED predicts students' future responses \(R^{t+1}\) and future knowledge concept mastery probabilities \(U^{t+1}\).
Besides, we also provide the learned distributed representations of knowledge concepts and problems as pre-trained parameters to the fully connected neural network for knowledge concept relation prediction and concept prediction for problems.
\section{Related Work}
\label{sec:orgb8b682f}
We summarize existing knowledge tracing methods in the following three categories: Factor Analysis-based Knowledge Tracing (FAKT) models, Probabilistic Graph-based Knowledge Tracing (PGKT) models and Deep Learning-based Knowledge Tracing (DLKT) models.
\subsection{Factor Analysis-based Knowledge Tracing}
FAKT models tend to learn common factors (such as students' abilities and problem difficulty) from data (students' historical exercises) to predict students' performance.
Moreover, these models consider time factors by counting temporal features (such as the numbers of wrong and correct attempts, and so on). The simplest FAKT model is the 1-parameter logistic Item Response Theory (IRT) \cite{van2013handbook}, also known as the Rasch model; it does not consider learning across several attempts. The additive factor model (AFM) \cite{cen2006learning} considers the number of attempts a student has made on a problem. The performance factor analysis (PFA) model \cite{pavlik2009performance} separately counts positive and negative attempts as temporal features. Vie and Kashima proposed a family of models named Knowledge Tracing Machines (KTM) \cite{DBLP:conf/aaai/VieK19}, which encompasses IRT, AFM and PFA as special cases. DAS3H \cite{DBLP:conf/edm/ChoffinPBV19} takes into account both memory decay and multiple knowledge concept tagging by including a representation of the temporal distribution of past exercises on the knowledge concepts involved in a problem. However, FAKT models ignore the order of students' historical exercises. Although FAKT can complement the data with temporal features such as simple counters, it can only obtain a student's global knowledge status and cannot track that status over time.
\subsection{Probabilistic Graph-based Knowledge Tracing}
PGKT models regard the observed students' exercise performance and the unobservable students' knowledge status as observed and latent random variables. Then PGKT models the relationship between random variables based on prior knowledge and predicts students' future performance by analyzing and modeling students' exercise process.
Probabilistic graph-based knowledge tracing can be divided into two categories, real-time knowledge tracing (also called Markov chain-based Knowledge Tracing, MCKT) and non-real-time probabilistic graph-based knowledge tracing.
\noindent \textbf{Markov Chain-based Knowledge Tracing.} The representative of real-time knowledge tracing is Bayesian Knowledge Tracing (BKT).
BKT \cite{corbett1994knowledge} assumes that a student's knowledge mastery can be represented as a set of binary variables. Each binary variable indicates whether a student has mastered a particular knowledge concept, and a student's knowledge mastery is modeled as a latent variable in a hidden Markov model.
Gorgun et al. \cite{Gorgun2022} analyzed the influence of student disengagement on prediction accuracy in BKT.
Zhang et al. \cite{Zhang2018} propose a Bayesian knowledge tracing model with three learning states, dividing the learning process into three sections by using an evaluation function for three-way decisions.
FBKT \cite{Liu2022} is proposed to address continuous-score scenarios (e.g., subjective examinations) so that the applicability of BKT models may be broadened.
\noindent \textbf{Non-real-time probabilistic graph-based knowledge tracing.} The representative of non-real-time knowledge tracing is the Deterministic Input Noisy-and-gAte (DINA) model. Although such models can diagnose the knowledge states of learners at a certain moment, they cannot efficiently track the knowledge states of learners over time, nor do they consider factors such as learning and forgetting in the learners' learning process.
DINA \cite{de2009dina} is a parsimonious and interpretable model that models knowledge mastery as a multidimensional binary latent vector and requires only two parameters (i.e., slipping parameter and guessing parameter) for each problem regardless of the number of knowledge concepts considered.
Fuzzy Cognitive Diagnosis Framework \cite{DBLP:conf/ijcai/WuLLCSCH15} combines fuzzy set theory and educational hypotheses to model a student's knowledge proficiency and predicts a student's performance by considering both the slipping factor and guessing factor.
Knowledge Proficiency Tracing \cite{DBLP:conf/cikm/ChenLHWCWSH17} is an explanatory probabilistic method that tracks the knowledge mastery of students over time by leveraging educational priors (i.e., Q-matrix).
Knowledge Tracing model with Learning Transfer \cite{liu2020tracking} proposed a probabilistic gra-phical model which tracks a student's knowledge proficiency, abstract principle mastery level, and knowledge structure by applying the cognitive structure migration theory \cite{ausubel1968educational} as priors.
However, these models rely on strong assumptions because the partition function is difficult to calculate and they suffer from the \emph{explaining away problem}. They cannot track students' numerous knowledge concept mastery probabilities over time.
\subsection{Deep Learning-based Knowledge Tracing}
With the development of deep learning in recent years, LSTMs, memory-augmented neural networks (MANN), and attention mechanisms have been applied to the knowledge tracing task. Deep Knowledge Tracing (DKT) \cite{DBLP:conf/nips/PiechBHGSGS15} was the first model to apply deep learning algorithms to knowledge tracing. DKT uses flexible recurrent neural networks that are `deep' in time to track students' knowledge mastery. Subsequently, the Exercise-Enhanced Recurrent Neural Network (EERNN) model \cite{DBLP:conf/aaai/SuLLHYCDWH18} was proposed based on DKT to take advantage of students' learning records and the text of each problem. Exercise-aware Knowledge Tracing \cite{huang2019ekt} is a framework extended from the EERNN by incorporating knowledge concept information, where the student's integrated state vector is extended to a knowledge state matrix. However, the knowledge state matrix still lacks interpretation.
Lu et al. \cite{lu2020towards} applied the layer-wise relevance propagation method to interpret RNN-based DLKT models by backpropagating relevance from the model's output layer to its input layer.
However, this method only addresses the interpretability of the model itself, not the interpretability of the traced knowledge concept mastery.
DKVMN \cite{DBLP:conf/www/ZhangSKY17} applies key-value memory networks to exploit the relationships among the underlying knowledge and directly outputs a student's knowledge proficiency. DKVMN-Context Aware \cite{DBLP:conf/edm/AiCGZWFW19} modifies the DKVMN to design its memory structure based on the course's concept list and explicitly considers the knowledge concept mapping relationship during knowledge tracing.
DKVMN Decision Tree \cite{sun2019muti} improves the performance of the DKVMN model by incorporating additional features to the input, which applies a DT classifier to preprocess the behavior features.
Self-Attentive Knowledge Tracing \cite{DBLP:conf/edm/PandeyK19} is an approach that uses an attention mechanism to identify a problem from the student's past activities that are relevant to the given problem and predicts students' exercise responses.
Relation Aware Knowledge Tracing \cite{DBLP:conf/cikm/PandeyS20} is a relation-aware self-attention model that uses a relation-aware self-attention layer to incorporate contextual information. The contextual information integrates exercise relation information via their textual content as well as students' performance data and forgotten behavior information.
Attentive Knowledge Tracing \cite{ghosh2020context} employs a monotonic attention mechanism, which relates students' future responses to assessment problems to their past responses to predict students' future performance.
Zhu et al. \cite{zhu2020learning} propose an effective attention-based model for tracking knowledge state, which captures the relationships among each item of the input regardless of the length of the input sequence.
And Yu et al. \cite{yu2022contextkt} propose a context-based knowledge tracing model, which combines students’ historical performance and their studying contexts during knowledge mastery.
However, since the internal logic used to achieve the desired output is neither understandable nor explainable, DLKT is less interpretable in tracing students' knowledge status.
Although some works \cite{DBLP:conf/kdd/0001HL20} attempt to address the \textit{black box problem} of DLKT, they only explain how DLKT predicts students' exercise feedback by showing the impact weights of students' historical exercise records on the prediction of their current exercise responses.
\section{Experiment}
\label{sec:orgf5eee29}
\subsection{Datasets}
\label{sec:org033968f}
We employ four real-world datasets: POJ, HDU, algebra06 and algebra08. The HDU and POJ datasets were obtained from the Hangzhou Dianzi University Online Judge platform (\url{http://acm.hdu.edu.cn}, accessed on 22 October 2021) and the Peking University Online Judge platform (\url{http://poj.org}, accessed on 2 January 2023). The algebra06 and algebra08 datasets were obtained from the KDD Cup 2010 EDM Challenge (\url{https://pslcdatashop.web.cmu.edu/KDDCup/downloads.jsp}, accessed on 2 January 2023).
\begin{table}[!tbp]
\centering{
\caption{Statistics of the Datasets\label{table:statisticsofdatasets} }
\begin{tabular}{c|c|c|c|c}
\hline
Dataset & HDU & POJ & algebra06 & algebra08 \\
\hline
\#Student & 9,859 & 3,507 & 1,072 & 2,385 \\
\#Problem & 2,101 & 970 & 1,218 & 736 \\
\#Records & 1,042,661 & 288,594 & 1,199,438 & 1,895,107 \\
\#Concepts & 193 & 146 & 319 & 304 \\
Avg.rec & 105 & 82 & 1,118 & 795 \\
\hline
\end{tabular}}
\end{table}
The HDU and POJ datasets include records submitted from June 2018 to November 2018, the algebra06 dataset includes records submitted from October 2006 to February 2007, and the algebra08 dataset includes records submitted from September 2008 to January 2009.
Students are allowed to resubmit their codes for a problem until they pass.
We filter out students with fewer than 30 records and an acceptance rate of less than 10\%, as well as problems with fewer than 30 records.
After filtering, the statistics of the datasets are shown in Table \ref{table:statisticsofdatasets}, where Avg.rec represents the average number of exercise records per student.
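As a rough illustration of this preprocessing step, the following pandas sketch applies the same thresholds to a hypothetical log table with columns \texttt{student\_id}, \texttt{problem\_id}, and \texttt{result}; the column names are assumptions for illustration, not the datasets' actual schema.
\begin{verbatim}
import pandas as pd

def filter_logs(df: pd.DataFrame) -> pd.DataFrame:
    """Keep students with >= 30 records and >= 10% acceptance rate,
    and problems with >= 30 records (thresholds stated in the text)."""
    stu = df.groupby("student_id")["result"].agg(["count", "mean"])
    keep_students = stu[(stu["count"] >= 30) & (stu["mean"] >= 0.10)].index
    df = df[df["student_id"].isin(keep_students)]

    prob_counts = df.groupby("problem_id")["result"].count()
    keep_problems = prob_counts[prob_counts >= 30].index
    return df[df["problem_id"].isin(keep_problems)]
\end{verbatim}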
Specifically, numerous problems in the HDU and POJ datasets lacked knowledge concept labels. The knowledge concepts of the problems in the HDU and POJ datasets are highly consistent with those of problems in the ICPC (\url{https://icpc.global}, accessed on 2 January 2023); they cover a wide range, and their granularity is coarse (e.g., dynamic programming, graph theory, and so on). Therefore, we gathered five experts who have won medals in the ICPC Asian regional contest to mark the problems with knowledge concept labels. If more than three experts mark a knowledge concept label for a certain problem, we add the label to that problem.
\subsection{Evaluation Metrics}
\label{sec:orgcc7a275}
We evaluate the models for predicting students' future performance from regression and classification perspectives \cite{fogarty2005case,DBLP:conf/aaai/SuLLHYCDWH18}.
For regression, we use the \emph{Mean Absolute Error} (MAE) and the \emph{Root-Mean-Square Error} (RMSE) to quantify the distance between the predicted scores and the actual ones.
For classification, we select the \emph{Area Under the Receiver Operating Characteristic Curve} (AUC), \emph{Prediction Accuracy} (ACC), \emph{Precision} (PRE) and \emph{Recall} (REC) as measures of performance. The larger the values of these metrics are, the better the results are. The threshold adopted by the classification task in this paper is 0.5.
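These metrics can be computed, for example, with scikit-learn and NumPy; the sketch below assumes \texttt{y\_true} holds binary responses and \texttt{y\_prob} holds predicted probabilities, and uses the 0.5 threshold mentioned above.
\begin{verbatim}
import numpy as np
from sklearn.metrics import (roc_auc_score, accuracy_score,
                             precision_score, recall_score)

def evaluate(y_true, y_prob, threshold=0.5):
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    y_pred = (y_prob >= threshold).astype(int)   # classification threshold
    return {
        "AUC": roc_auc_score(y_true, y_prob),
        "ACC": accuracy_score(y_true, y_pred),
        "PRE": precision_score(y_true, y_pred),
        "REC": recall_score(y_true, y_pred),
        "RMSE": float(np.sqrt(np.mean((y_true - y_prob) ** 2))),
        "MAE": float(np.mean(np.abs(y_true - y_prob))),
    }
\end{verbatim}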
Liu et al. \cite{liu2022learning} noted that various learning rate schedules can have a notable effect on performance. To ensure the fairness of the experiments, a constant learning rate was utilized for all models.
All models are implemented in Python, and all experiments are run on a Linux server with eight 3.6 GHz Intel W-2123 CPUs, 64 GB of memory and an NVIDIA RTX-2080Ti GPU.
\begin{table*}[!htbp]
\small
\caption{Results for Predicting Future Student Performance on the HDU and POJ datasets}
\label{table:FuturePerformanceResultHDU}
\centering
\begin{tabular}{c|cccccc|cccccc}
\hline
& \multicolumn{6}{c|}{HDU} & \multicolumn{6}{c}{POJ}\\
\hline
Model & AUC & ACC & PRE & REC & RMSE & MAE & AUC & ACC & PRE & REC & RMSE & MAE\\
\hline
IRT & 0.6329 & 0.6407 & 0.5652 & 0.3007 & 0.4741 & 0.4398 & 0.6067 & 0.6594 & 0.5642 & 0.1294 & 0.2206 & 0.4303 \\
MIRT & 0.6376 & 0.6410 & 0.5596 & 0.3285 & 0.4731 & 0.4493 & 0.6099 & 0.6602 & 0.5593 & 0.1486 & 0.2193 & 0.4403 \\
AFM & 0.5669 & 0.6155 & 0.5276 & 0.0426 & 0.4840 & 0.4669 & 0.5154 & 0.6488 & 0.3269 & 0.0108 & 0.2275 & 0.4546 \\
PFA & 0.6394 & 0.6349 & 0.6169 & 0.1417 & 0.4738 & 0.4488 & 0.5337 & 0.6506 & 0.5536 & 0.0215 & 0.2262 & 0.4523 \\
KTM & 0.6760 & 0.6619 & 0.6104 & 0.3423 & 0.4639 & 0.4291 & 0.6149 & 0.6603 & 0.5525 & 0.1683 & 0.2194 & 0.4340 \\
DASH & 0.6808 & 0.6644 & 0.6068 & 0.3705 & 0.4621 & 0.4223 & 0.6149 & 0.6603 & 0.5525 & 0.1683 & 0.2194 & 0.4340 \\
DAS3H & 0.6794 & 0.6633 & 0.5957 & 0.3966 & 0.4627 & 0.4236 & 0.6084 & 0.6528 & 0.5148 & 0.1815 & 0.2210 & 0.4409 \\
DKT & 0.6986 & 0.6752 & 0.6224 & 0.4327 & 0.4136 & 0.4581 & 0.6601 & 0.6757 & 0.5627 & 0.2762 & 0.2012 & 0.4123 \\
DKVMN & 0.6959 & 0.6761 & 0.6304 & 0.4126 & 0.4134 & 0.4589 & 0.6578 & 0.6804 & 0.5814 & 0.2642 & 0.2094 & 0.4121 \\
AKT & 0.7019 & 0.6805 & 0.6201 & 0.3715 & 0.4136 & 0.4544 & 0.5913 & 0.6618 & 0.5627 & 0.0894 & 0.2213 & 0.4392 \\
TRACED & \textbf{0.7328} & \textbf{0.7096} & \textbf{0.6412} & \textbf{0.4346} & \textbf{0.4074} & \textbf{0.4489} & \textbf{0.6674} & \textbf{0.6962} & \textbf{0.5884} & \textbf{0.2846} & \textbf{0.2011} & \textbf{0.4094}\\
\hline
\end{tabular}
\end{table*}
\subsection{Baselines for Comparison}
\label{sec:org47282c8}
We compare TRACED with the following ten state-of-the-art methods with well-tuned parameters.
\begin{itemize}
\item \emph{IRT} and \emph{MIRT} \cite{embretson2013item}: popular cognitive diagnostic models, which discover students' knowledge levels through ranking with a logistic-like function. When d $>$ 0, IRT becomes MIRT, a variant of IRT that considers a user bias.
\item \emph{AFM} and \emph{PFA} \cite{pavlik2009performance,cen2006learning}: factor analysis models, which take into account the number of attempts (AFM) and the numbers of positive and negative attempts (PFA).
\item \emph{KTM} \cite{DBLP:conf/aaai/VieK19}: a factorization machine model, which encompasses IRT, AFM, and PFA as special cases.
\item \emph{DASH} \cite{lindsey2014improving,mozer2016predicting}: a knowledge tracing model that bridges the gap between factor analysis and memory models. It stands for difficulty, ability, and student history.
\item \emph{DAS3H} \cite{DBLP:conf/edm/ChoffinPBV19}: a framework based on factor analysis, which takes both memory decay and multiple knowledge concept tagging into account.
\item \emph{DKT} \cite{DBLP:conf/nips/PiechBHGSGS15}: the first model to apply deep learning algorithms for knowledge tracing, which uses LSTM to track students’ knowledge mastery over time.
\item \emph{DKVMN} \cite{DBLP:conf/www/ZhangSKY17}: a key-value memory network, which exploits the relationships among the underlying knowledge concepts and directly outputs a student's knowledge proficiency.
\item \emph{AKT} \cite{DBLP:conf/kdd/0001HL20}: a monotonic attention mechanism model, which predicts students' future performance by relating their future responses to assessment problems to their past responses.
\end{itemize}
We do not perform a comparison with FuzzyCDF \cite{DBLP:conf/ijcai/WuLLCSCH15}, DINA \cite{de2009dina}, and KPT \cite{DBLP:conf/cikm/ChenLHWCWSH17}.
We regard a student's exercise records as complete data that appear either in the training set or in the test set, whereas FuzzyCDF and DINA contain student-specific parameters that need to be trained, so a student's exercise records would have to appear in both the training and test sets. Moreover, the KPT model cannot predict student performance in real time.
\subsection{Predicting Future Student Performance}
\label{sec:org15c97a2}
In this section, we compare the predictions of our model and other baseline models in students' future performance. The experimental parameters of TRACED are \(\Delta \hat{\tau} = 86400s\), \(d_e = 20\), \(d_{h} = 2 * K\), \(d_{p} = 80 + 4 * K\) and the number of exercise record embedding dimensions is \(40 + 2 * K\). Besides, we randomly initialize all parameters in the TRACED to the Glorot normal \cite{pmlr-v9-glorot10a}.
To prevent overfitting, we add L2 regularization for TRACED. We perform 5-fold cross-validation for each dataset and average the results over five folds. For each fold, we use 80\% of the data for training and use the remaining 20\% for testing.
The experimental results are shown in Table \ref{table:FuturePerformanceResultHDU} and \ref{table:FuturePerformanceResultKDD}.
It can be seen that the results of all models on the algebra06 and algebra08 datasets are much better than the results on the HDU and POJ datasets, because the average number of exercise submissions per student in the algebra06 and algebra08 datasets is much higher than in the POJ and HDU datasets. Besides, learners in the POJ and HDU datasets mostly learn to program independently and often resubmit to test the accuracy and efficiency of their code. Records in the algebra06 and algebra08 datasets mostly come from after-school math exercises; although learners submit many times, the number of repeated submissions is smaller.
Most models perform much better on the algebra06 and HDU datasets than on the algebra08 and POJ datasets, because most models perform better on larger datasets than on smaller datasets of the same type.
The performance of TRACED is significantly better than that of other models.
The reason why TRACED outperforms FAKT models (DAS3H, KTM, etc.) is that, for the task of predicting students' future performance, TRACED can be regarded as a kind of DLKT model,
and DLKT models are significantly better than FAKT models at predicting students' future performance.
The TRACED model is better than other DLKT models because TRACED does not directly use the original exercise records but uses the distributed representations of records to predict students' future performance. Compared with the original records, the distributed representations of exercise records contain a substantial amount of contextual information.
\begin{table*}[htbp]
\small
\caption{Results for Predicting Future Student Performance on the algebra06 and algebra08 datasets}
\label{table:FuturePerformanceResultKDD}
\centering
\begin{tabular}{c|cccccc|cccccc}
\hline
& \multicolumn{6}{c|}{algebra06} & \multicolumn{6}{c}{algebra08}\\
\hline
Model & AUC & ACC & PRE & REC & RMSE & MAE & AUC & ACC & PRE & REC & RMSE & MAE\\
\hline
IRT & 0.6663 & 0.8451 & 0.8477 & 0.9957 & 0.1244 & 0.2397 & 0.6668 & 0.8123 & 0.8148 & 0.9948 & 0.3798 & 0.2904 \\
MIRT & 0.6625 & 0.8455 & 0.8467 & \textbf{0.9979} & 0.1247 & 0.2577 & 0.6656 & 0.8123 & 0.8144 & \textbf{0.9956} & 0.3802 & 0.2998 \\
AFM & 0.6663 & 0.8451 & 0.8477 & 0.9957 & 0.1244 & 0.2597 & 0.6737 & 0.8288 & 0.8190 & 0.9862 & 0.3820 & 0.2927 \\
PFA & 0.7120 & 0.8418 & 0.8567 & 0.9761 & 0.1220 & 0.2319 & 0.7040 & 0.8143 & 0.8179 & 0.9918 & 0.3746 & 0.2806 \\
KTM & 0.7440 & 0.8484 & 0.8546 & 0.9890 & 0.1155 & 0.2298 & 0.7173 & 0.8161 & 0.8214 & 0.9883 & 0.3717 & 0.2762 \\
DASH & 0.7464 & 0.8512 & 0.8548 & 0.9927 & 0.1143 & 0.2425 & 0.7090 & 0.8142 & 0.8172 & 0.9930 & 0.3742 & 0.2934 \\
DAS3H & 0.7328 & 0.8419 & 0.8580 & 0.9743 & 0.1227 & 0.2790 & 0.7234 & 0.8164 & 0.8214 & 0.9887 & 0.3704 & 0.2738 \\
DKT & 0.7513 & 0.8536 & 0.8497 & 0.9826 & 0.1124 & 0.2310 & 0.7462 & 0.8182 & 0.8315 & 0.9728 & 0.2638 & 0.3663 \\
DKVMN & 0.7564 & 0.8579 & 0.8592 & 0.9910 & 0.1117 & 0.2284 & 0.7453 & 0.8188 & 0.8288 & 0.9785 & 0.2662 & 0.3662 \\
AKT & 0.7573 & 0.8621 & 0.8588 & 0.9954 & 0.1106 & 0.2193 & 0.7173 & 0.8090 & 0.8158 & 0.9857 & 0.2750 & 0.3769 \\
TRACED & \textbf{0.7604} & \textbf{0.8623} & \textbf{0.8596} & 0.9957 & \textbf{0.1098} & \textbf{0.2154} & \textbf{0.7724} & \textbf{0.8336} & \textbf{0.8496} & 0.9894 & \textbf{0.2539} & \textbf{0.3659}\\
\hline
\end{tabular}
\end{table*}
The performance of FAKT models improves steadily as more hand-crafted temporal features are included, as reflected in the comparison of DAS3H, DASH, KTM, PFA, AFM, MIRT and IRT (DAS3H $>$ DASH $>$ KTM $>$ PFA, AFM $>$ MIRT $>$ IRT).
Specifically, AFM and PFA are factor analysis models considering the number of attempts and the numbers of positive and negative attempts, respectively. Since the number of attempts in the HDU and POJ datasets cannot accurately reflect whether learners have mastered knowledge concepts, the performance of AFM and PFA on the HDU and POJ datasets is poor, even worse than that of IRT and MIRT. However, AFM and PFA outperform IRT and MIRT on the algebra06 and algebra08 datasets.
The performance gap among the three types of DLKT is tiny.
The reason why the performance of DKVMN is not much different from that of DKT may be that we may not need too many storage units to obtain better performance in these four datasets.
The reason for the unsatisfactory performance of AKT is that the AKT model is designed for problems with a single knowledge concept, whereas problems in the HDU, POJ, algebra06 and algebra08 datasets involve a substantial number of knowledge concepts.
Besides, the reason for the poor performance of AKT in the POJ dataset is that the number of learners and submissions in the POJ dataset is too small.
\subsection{Predicting Relationships Between Concepts}
\label{sec:org9798341}
Since we do not have experts in the field of algebra, we did not construct a knowledge graph in the algebra domain. We therefore conduct the experiment of predicting relationships between knowledge concepts on the HDU and POJ datasets. We use a knowledge concept graph to train a fully connected neural network to identify the inclusion relationships between knowledge concepts. The fully connected neural network has 30 hidden units, and the number of dimensions of the knowledge concept embeddings is 40; the network structure is shown in Eq. \ref{equ:PredictingRelationshipsbetweenKnowledgeConcepts}.
The sparsity of the knowledge concept graph leads to an extreme imbalance between the number of positive and negative samples in the data.
To overcome this problem, we sample 2 negative samples for each positive sample, following the negative sampling approach proposed in \cite{mikolov2013distributed}.
Finally, each data point is represented as \((k_1, k_2, r)\), where \(r\) represents the relationship between knowledge concepts \(k_1\) and \(k_2\) (\(r=1\) means that knowledge concept \(k_1\) includes knowledge concept \(k_2\); otherwise, \(r=0\)).
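A minimal sketch of this dataset construction is given below; it assumes \texttt{edges} is the set of known \((k_1, k_2)\) inclusion pairs and \texttt{K} is the number of concepts, and samples two random non-edges per positive pair, matching the 1:2 ratio described above.
\begin{verbatim}
import random

def build_relation_dataset(edges, K, neg_per_pos=2, seed=0):
    """Return (k1, k2, r) triples: r=1 for known inclusion pairs,
    r=0 for randomly sampled non-edges (2 negatives per positive)."""
    rng = random.Random(seed)
    edge_set = set(edges)
    data = [(k1, k2, 1) for (k1, k2) in edges]
    for _ in range(neg_per_pos * len(edges)):
        while True:
            k1, k2 = rng.randrange(K), rng.randrange(K)
            if k1 != k2 and (k1, k2) not in edge_set:
                data.append((k1, k2, 0))
                break
    return data
\end{verbatim}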
For each fold, we use 80\% of the data as training data and the remaining 20\% as test data. The experimental results are shown in Table \ref{table:RelationshipResult}.
The performance of the fully connected neural networks that consider the distributed representations of the knowledge concepts is much better than that of the networks that do not consider the distributed representations of the knowledge concepts.
Moreover, the distributed representations based on both dynamic and static interactions yield better performance than the other methods for predicting the relationships between knowledge concepts.
\begin{table}[!tbp]
\small
\caption{Results of Predicting Relationships Between Concepts}
\label{table:RelationshipResult}
\centering
\begin{tabular}{m{3.1cm} m{0.7cm} m{0.7cm} m{0.7cm} m{0.7cm}}
\hline
Model & AUC & ACC & RMSE & MAE \\
\hline
\multicolumn{5}{c}{HDU}\\
\hline
NN & 0.780 & 0.683 & 0.390 & 0.282 \\
NN + EK,KK & 0.829 & 0.715 & 0.371 & 0.302 \\
NN + EK,UE & 0.807 & 0.656 & 0.371 & 0.305 \\
NN + KK,UE & 0.812 & 0.698 & 0.385 & 0.308 \\
NN + EK,KK,UE & \textbf{0.848} & \textbf{0.746} & \textbf{0.360} & \textbf{0.300} \\
\hline
\multicolumn{5}{c}{POJ}\\
\hline
NN & 0.713 & 0.619 & 0.463 & 0.353 \\
NN + EK,KK & 0.751 & 0.609 & 0.412 & 0.352 \\
NN + EK,UE & 0.732 & 0.566 & 0.432 & 0.373 \\
NN + KK,UE & 0.751 & 0.604 & 0.413 & 0.354 \\
NN + EK,KK,UE & \textbf{0.812} & \textbf{0.768} & \textbf{0.393} & \textbf{0.349} \\
\hline
\end{tabular}
\end{table}
\begin{table}[!tbp]
\small
\caption{Results of Predicting Concepts of Problems}
\label{table:PredictConceptResult}
\centering
\begin{tabular}{m{3.1cm} m{0.7cm} m{0.7cm} m{0.7cm} m{0.7cm}}
\hline
Model & AUC & ACC & RMSE & MAE \\
\hline
\multicolumn{5}{c}{HDU}\\
\hline
NN & 0.688 & 0.530 & 0.458 & 0.377 \\
NN + EK,KK & 0.759 & 0.683 & 0.417 & 0.363 \\
NN + EK,UE & 0.756 & 0.682 & 0.418 & 0.363 \\
NN + KK,UE & 0.753 & 0.668 & 0.422 & 0.378 \\
NN + EK,KK,UE & \textbf{0.764} & \textbf{0.670} & \textbf{0.416} & \textbf{0.365} \\
\hline
\multicolumn{5}{c}{POJ}\\
\hline
NN & 0.678 & 0.511 & 0.463 & 0.379 \\
NN + EK,KK & 0.769 & 0.706 & 0.416 & 0.375 \\
NN + EK,UE & 0.767 & 0.684 & 0.418 & 0.379 \\
NN + KK,UE & 0.763 & 0.707 & 0.427 & 0.399 \\
NN + EK,KK,UE & \textbf{0.772} & \textbf{0.717} & \textbf{0.413} & \textbf{0.366} \\
\hline
\end{tabular}
\end{table}
The distributed representations that consider KK interactions show better performance for this task than those that consider EK interactions.
In summary, the UE, EK, and KK interaction strategies can all learn information about the relationships between knowledge concepts, and the KK strategy plays a more important role than the EK and UE strategies in this task.
The experimental results show that the distributed representations of knowledge concepts learned by TRACED contain effective information about the relationships between knowledge concepts.
\subsection{Predicting Concepts Contained by Problems}
\label{sec:org7a625c6}
We use the Q-matrix to train a fully connected neural network to judge whether a certain problem contains a certain knowledge concept.
The fully connected neural network has 30 hidden units, and the number of dimensions of the knowledge concept embeddings is 40; the network structure is shown in Eq. \ref{eq:PredictingConceptsInProblems}.
Due to the sparsity of the Q-matrix, we sample 2 negative samples for each positive sample. Finally, each data point is represented as \((e, k, r)\), where \(r = 1\) indicates that problem \(e\) is related to knowledge concept \(k\).
For each fold, we use 80\% of the data for training and the remaining 20\% for testing.
The experimental results are shown in Table \ref{table:PredictConceptResult}.
Again, the performance of the fully connected neural networks that consider the distributed representations of the knowledge concepts is much better than that of the networks that do not consider the distributed representations of the knowledge concepts.
The distributed representations based on both dynamic and static interactions yield better performance than the other methods for predicting the knowledge concepts contained by problems.
Besides, the distributed representations that consider EK interactions show better performance for this task than those that consider KK interactions.
In summary, the UE, EK, and KK interaction strategies can all learn information about the relationships between knowledge concepts and problems, and the EK strategy plays a more important role than the KK and UE strategies in this task.
The experimental results show that the distributed representations of knowledge concepts and problems learned by TRACED contain effective information about the relationships between knowledge concepts and problems.
\begin{figure}[!ht]
\centering
\subfigure[the loss in wake phase]{\includegraphics[width=0.24\textwidth]{figure/FCS-wakeloss-fig4.pdf}}
\subfigure[the loss in sleep phase]{\includegraphics[width=0.24\textwidth]{figure/FCS-sleeploss-fig5.pdf}}
\caption{\label{fig:LossHDU}\textbf{The loss values of TRACED}}
\end{figure}
\subsection{Convergence of TRACED}
\label{sec:org1227ebd}
As previously stated, TRACED is optimized by a heuristic algorithm. We demonstrate the convergence of the proposed heuristic training algorithm experimentally.
We have conducted experiments on four datasets, and all the experimental results show that TRACED can converge.
As shown in Figure \ref{fig:LossHDU}, TRACED converges in the Wake and Sleep phases on all four datasets.
In the Wake phase, TRACED converges smoothly.
In the Sleep phase, although there are some fluctuations in the convergence process, the overall trend is convergent.
The fluctuation in the Sleep phase is caused by the unstable data generated in the Wake phase in the initial situation.
The model parameters are all obtained by random initialization. Training in the Wake phase uses both the data sampled in the Sleep phase and the real data, whereas the training data in the Sleep phase are completely generated by the Wake phase. In the initial stage of training, the model parameters change quickly, so the training data for the Sleep phase are unstable and may push the optimization in the wrong direction; hence, the Sleep loss shows an upward trend at the beginning. Since the Wake phase uses both the sampled data and the real data, the Wake loss does not show an upward trend at the beginning.
The experimental results prove that the heuristic algorithm we proposed for the training model is a suitable and effective optimization method for learning TRACED parameters.
\begin{figure*}[!ht]
\centering
\subfigure[HDU]{\includegraphics[width=0.48\textwidth]{./figure/FCS-heatMapHDU-fig6.pdf}}
\subfigure[POJ]{\includegraphics[width=0.48\textwidth]{./figure/FCS-heatMapPOJ-fig7.pdf}}
\caption{\label{fig:Visualization}\textbf{The visualization of prior and posterior of TRACED on the HDU and POJ datasets}}
\end{figure*}
\subsection{Visualization of Mastery Probability}
\label{sec:orgcb40507}
Figure \ref{fig:Visualization} visualizes the prior and posterior probabilities of students' knowledge concept mastery on the HDU and POJ datasets.
The figure shows the dynamic changes in the prior and posterior probability of a student's knowledge concept mastery as students continuously practice the problem over time.
Because the number of knowledge concepts and students is vast, we only show the dynamic change in the probability of mastering the three relevant knowledge concepts of the problem during the practice of a certain problem by a student.
With the increase in the number of exercises, the prior and posterior probability of students mastering knowledge concepts is also steadily improving.
The continuous and slow changes in the prior and posterior probabilities are consistent with our empirical understanding of the learning process.
The calculation of the prior and posterior probabilities uses the student’s exercise problem sequence \(e_i\) and exercise time sequence \(\tau_i\), and the posterior probability can be regarded as the prior probability that is corrected based on the student’s exercise response \(r_i\). Thus, there is not much difference between the prior and posterior probability.
The learners in the HDU and POJ datasets are the same type of learners: they all program independently in an Online Judge system, so Figure \ref{fig:Visualization} (a) and Figure \ref{fig:Visualization} (b) can be compared horizontally. Figure \ref{fig:Visualization} (a) shows a learner who repeatedly submits correct code, possibly to tune the efficiency of the code; therefore, the probability of mastering the knowledge concepts increases significantly after the first correct submission but rises only slowly in subsequent submissions. Figure \ref{fig:Visualization} (b) shows a learner who is constantly practicing. Over the initial six incorrect submissions, the probability of mastering the knowledge concepts slowly increases. After the seventh, correct, submission, the learner's probability of mastering the knowledge concepts reaches its peak, and further correct submissions no longer increase it.
The above experimental results show that TRACED can provide learners and teachers with an interpretable cognitive diagnosis that changes dynamically over time.
\begin{figure}[!tbp]
\centering
\includegraphics[width=0.48\textwidth]{./figure/FCS-Embedding-fig8.pdf}
\caption{\label{fig:ExperimentFigure1} Visualization of the learned distributed representations of students, knowledge concept and problem for the HDU dataset, where the learned representations have been reduced to 2 dimensions by means of PCA}
\end{figure}
\subsection{Visualization of Learning Trajectory}
\label{sec:orgb6ac773}
The distributed representations of the students, knowledge concepts and problems acquired by TRACED are shown in Figure \ref{fig:ExperimentFigure1}, where the acquired representations have been reduced to 2 dimensions by means of Principal Component Analysis (PCA).
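The dimensionality reduction used for this figure can be reproduced along the following lines; \texttt{student\_reprs} is assumed to be the sequence of a learner's per-exercise distributed representations produced by TRACED.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

def plot_trajectory(student_reprs):
    """Project per-exercise student representations to 2-D with PCA
    and draw the resulting learning trajectory."""
    pts = PCA(n_components=2).fit_transform(np.asarray(student_reprs))
    plt.plot(pts[:, 0], pts[:, 1], marker="o")
    plt.xlabel("PC 1"); plt.ylabel("PC 2")
    plt.show()
\end{verbatim}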
We have drawn the learning trajectory of a learner; each point on the learning trajectory corresponds to the distributed representation of a learner after a certain exercise.
We also constructed distributed representations of problems and knowledge concepts involved in the learner's exercises.
With continuous practice, the distributed representation of the learner gradually approaches the distributed representation of knowledge concepts involved in the problem.
Besides, the learner's distributed representations cluster spontaneously, with a certain distance between clusters, and the representations within a cluster lie roughly on a straight line. In addition to practicing on the online education platform, students can also learn in other ways (offline teaching, reading paper textbooks, and so on), and we cannot collect the data generated by these learning processes. Thus, there are significant jumps between certain clusters on the student's learning trajectory.
The reason why the learner's trajectory within a cluster forms a straight line is that the representations within a cluster are generally produced by a phase of exercises; the knowledge concepts involved in such phased exercises are relatively similar, so the learner's embedding shifts in a specific direction.
By using the learning trajectory of students, teachers can quickly understand the learning habits and learning process of students and then can customize personalized learning plans for students conveniently and quickly.
\section{The Interpretable Probabilistic Generative Model}
\label{sec:org556a3dd}
In this section, we introduce the technical details of TRACED and how to apply it to three knowledge-driven tasks. For better illustration, key notations in TRACED, which can be divided into four categories, are summarized in Table \ref{table:somekeynotationsinkcrl}, namely, dataset description, model parameters, random variable, and hyperparameter notation.
\begin{figure}[!tbp]
\centering{
\includegraphics[width=0.48\textwidth]{./figure/FCS-TRACEDFramework-fig2.pdf}}
\caption{\label{fig:IKTFramework} The Framework of the TRACED model.}
\end{figure}
\subsection{Modeling Students' Exercise Responses}
\label{sec:orgb5a2624}
Students' performance is affected by many aspects, such as students' knowledge concept mastery, the knowledge concepts contained in a problem, and the similarity between different knowledge concepts.
To model these different factors, inspired by factor analysis-based knowledge tracing models \cite{DBLP:conf/aaai/VieK19,DBLP:conf/edm/ChoffinPBV19},
we design three interactive strategies for modeling students' exercise responses, namely,
strategies based on student-problem interactions (UE), problem-knowledge interactions (EK), and knowledge-knowledge interactions (KK). The UE, EK, and KK interactions are shown in Eq. \ref{eq:UEKK}. With these three interactive strategies, we predict students' exercise responses, and distributed representations of knowledge concepts and problems can be learned by exploring three kinds of connections: a) connections between students and problems; b) connections between problems and concepts; and c) connections among knowledge concepts.
The proposed logarithmic linear model, which incorporates the three interactive strategies, is defined as follows:
\begin{equation}
\label{eq:UEKK}
\begin{aligned}
&logit{(p(\hat{r}_{i,t} = 1 | u_i^t, e_{i,t}))} = \mu + w_{e, e_{i,t}} \\
& + \sum\limits_{k=1}^K Q_{e_{i,t}, k} w_{c, k} + \underbrace{\sum\limits_{k_1=1}^K \sum\limits_{k_2=1}^K Q_{e_{i,t},k_1} Q_{e_{i,t},k_2} E_{c,k_1} E_{c,k_2}}_{\text{KK interactions}}\\
& + \underbrace{(\sum\limits_{k=1}^{K} u_{i,k}^t E_{c,k}) E_{e, e_{i,t}}}_{\text{UE interactions}} + \underbrace{\sum\limits_{k=1}^K Q_{e_{i,t},k} E_{e,e_{i,t}} E_{c,k}}_{\text{EK interactions}} \\
\end{aligned}
\end{equation}
where $p(\hat{r}_{i,t} = 1 | u_i^t, e_{i,t})$ represents the probability that a student with knowledge concept mastery $u_i^t$ correctly answers problem $e_{i,t}$ without considering slipping and guessing factors, \(logit (x) = \log \frac{x}{1-x}\), \(\hat{r}_{i,t}\) is student \(i\)'s exercise response in the \emph{t}-th exercise record; \(u_i^t\) is student \(i\)'s knowledge concept mastery at the time of the \emph{t}-th exercise record; \(\sum\limits_{k=1}^K u_{i,k}^t E_{c,k}\) is the distributed representation of student \(i\) in the \emph{t}-th exercise record; \(E_{c,k}, E_{e,e'} \in \mathbb{R}^{d_e}\) are the distributed representations of knowledge concept \(k\) and problem \(e'\); \(\mu\) is a global bias; \(w_e\) and \(w_{c}\) are the biases for the problems and knowledge concepts, respectively; and \(Q_{e,k}\) indicates whether problem \(e\) is related to knowledge concept \(k\).
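For illustration, a direct NumPy transcription of Eq. \ref{eq:UEKK} for a single exercise record might look as follows; the variable names mirror the symbols in the equation, all arrays are assumed to be given, and the products in the interaction terms are taken to be inner products of the embedding vectors.
\begin{verbatim}
import numpy as np

def response_logit(u_t, e, Q, E_c, E_e, w_c, w_e, mu):
    """logit of p(r=1 | u_t, e) following Eq. (UEKK).
    u_t: (K,) mastery probabilities; Q: (M, K) binary Q-matrix;
    E_c: (K, d_e) concept embeddings; E_e: (M, d_e) problem embeddings;
    w_c: (K,) concept biases; w_e: (M,) problem biases; mu: global bias."""
    q = Q[e]                          # (K,) concepts of problem e
    bias = mu + w_e[e] + q @ w_c      # global, problem, and concept biases
    kk = (q @ E_c) @ (q @ E_c)        # KK interactions
    ue = (u_t @ E_c) @ E_e[e]         # UE interactions
    ek = (q @ E_c) @ E_e[e]           # EK interactions
    return bias + kk + ue + ek
\end{verbatim}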
\begin{figure}[!tbp]
\centering{
\includegraphics[width=0.48\textwidth]{./figure/FCS-TRACED-fig3.pdf}}
\caption{\label{fig:IKTFigure} Graphical representation of TRACED.}
\end{figure}
In addition, we consider the students' guessing and slipping behaviors when modeling the students' exercise responses as follows:
\begin{equation}
\begin{aligned}
&p \left( r_{i,t} = 1 | u_i^t \right) = p(\hat{r}_{i,t} = 1 | u_i^t) (1 - p(s_{e_{i,t}} = 1))\\
& \qquad \qquad \qquad+ p(\hat{r}_{i,t} = 0 | u_i^t) p(g_{e_{i,t}} = 1)\\
\end{aligned}
\end{equation}
The probability of slipping on problem \(e\), \(p(s_{e}=1)\), and the probability of correctly answering problem \(e\) by guessing, \(p(g_{e}=1)\), are defined as:
\begin{equation}
\begin{aligned}
p \left( s_{e} = 1 \right) &= \frac{1}{1+e^{-\theta_{s,e}}}\\
p \left( g_{e} = 1 \right) &= \frac{1}{1+e^{-\theta_{g,e}}}\\
\end{aligned}
\end{equation}
where \(\theta_{s,e}\) and \(\theta_{g,e}\) are the slipping and guessing parameters, respectively, for problem \(e\).
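Combining the logit above with the slipping and guessing factors, the final response probability can be sketched as follows; \texttt{response\_logit} refers to the illustrative sketch given earlier and \texttt{theta\_s}, \texttt{theta\_g} are the per-problem slip/guess parameters.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def response_prob(u_t, e, Q, E_c, E_e, w_c, w_e, mu, theta_s, theta_g):
    """p(r=1 | u_t) with slipping and guessing factors."""
    p_hat = sigmoid(response_logit(u_t, e, Q, E_c, E_e, w_c, w_e, mu))
    p_slip = sigmoid(theta_s[e])    # probability of slipping on problem e
    p_guess = sigmoid(theta_g[e])   # probability of guessing problem e
    return p_hat * (1 - p_slip) + (1 - p_hat) * p_guess
\end{verbatim}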
\subsection{Modeling Students' Knowledge Concept Mastery Probabilities Over Time}
\label{sec:orgb2040d8}
As students learn new knowledge concepts and forget the original knowledge concepts, the students' mastery of each knowledge concept varies over time.
To better track students' knowledge concept mastery over time, we use a combination of learning and forgetting factors to model students' knowledge concept mastery. Student \(i\)'s mastery of knowledge concept \(k\) at the time $t$ is given as:
\begin{equation}
\begin{aligned}
p(u_{i,k}^t = 1) = &p(u_{i,k}^{t-1} = 1) (1 - p(F_k = 1)) \\
&\quad + p(u_{i,k}^{t-1} = 0) p(L_{k} = 1)\\
\end{aligned}
\end{equation}
where \(p(u_{i,k}^t = 1)\) is the probability that student \(i\) has mastered knowledge concept \(k\) at the time of the \emph{t}-th exercise; \(p(F_k=1)\) is the probability that a student will forget knowledge concept \(k\), and \(p(L_k=1)\) is the probability that a student will master knowledge concept \(k\) via learning.
In this formula, the first component $p(u_{i,k}^{t-1} = 1) (1 - p(F_k = 1))$ represents that the student has previously mastered knowledge concept $k$ and does not forget it, and the second component $p(u_{i,k}^{t-1} = 0) p(L_{k} = 1)$ represents that the student has not previously mastered knowledge concept $k$ but has learned it.
To facilitate the subsequent derivation, we define students' knowledge concept mastery in matrix form as follows:
\begin{equation}
\label{eq:pu}
p\left(u_{i, k}^t\right)=\left\{\begin{array}{lll}p\left(u_{i, k}^{t-1}\right) p\left(u_{i, k}^t \mid u_{i, k}^{t-1}\right) & \text { if } & t>1 \\ \pi(k) & \text { if } & t=1\end{array}\right .
\end{equation}
where \(p(u_{i,k}^t) = [p(u_{i,k}^t = 1), p(u_{i,k}^t = 0)]\) represents the probability of whether student $i$ masters knowledge concept $k$ at timestamp $t$, $\pi(k) = [\pi(k, 1), \pi(k, 0)]$ represents the probability of whether student masters knowledge concept $k$ when timestamp $t=1$, and \(p(u_{i,k}^t | u_{i,k}^{t-1})\) is the transfer matrix.
Specifically, \(p(u_{i,k}^t | u_{i,k}^{t-1})\) is defined as:
\begin{equation}
\label{eq:pu2u}
\begin{aligned}
&p(u_{i,k}^t | u_{i,k}^{t-1}) =
\left [
\begin{array}{cc}
1 - p(F_k = 1) & p(F_k = 1) \\
p(L_{k} = 1) & 1 - p(L_{k} = 1) \\
\end{array}
\right ]
\end{aligned}
\end{equation}
Inspired by the learning curve and forgetting curve \cite{anzanello2011learning,von2007understanding}, we design the forgetting probability \(p(F_k = 1)\) and learning probability \(p(L_k = 1)\) for knowledge concept \(k\) based on the following principles:
First, the longer the interval between two exercises on a particular knowledge concept, the higher the probability of forgetting that knowledge concept.
Second, the larger the number of exercises for a particular knowledge concept within a short time, the higher the probability of learning that knowledge concept.
\(p(F_k = 1)\) and \(p(L_k = 1)\) are defined as follows:
\begin{align}
& p(F_k = 1) = \sigma (\frac{\Delta \tau_k}{\theta_{f,k}} + b_{f,k}) \label{eq:Fk} \\
& p(L_k = 1) = \sigma (\frac{\theta_{l1,k}*f_k}{f_k+\theta_{l2,k}} + b_{l,k}) \label{eq:Tk}
\end{align}
where \(\sigma\) is sigmoid activation function; \(\Delta \tau_k\) is the time interval between the current exercise record for knowledge concept \(k\) and the previous record for knowledge concept \(k\), \(f_k\) denotes the exercise frequency for knowledge concept \(k\) within a specified time \(\Delta \hat{\tau}\); \(\theta_{f,k}\), \(\theta_{l1,k}\), and \(\theta_{l2,k}\) are the forgetting and learning parameters for knowledge concept \(k\);
\(b_{f,k}\) and \(b_{l,k}\) are the forgetting and learning biases for knowledge concept \(k\).
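The per-concept mastery update in Eqs. \ref{eq:pu}, \ref{eq:Fk} and \ref{eq:Tk} amounts to a two-state Markov transition; a compact sketch is shown below, where \texttt{delta\_tau} is the time gap and \texttt{f\_k} the recent exercise frequency for concept \(k\), and the remaining arguments are the learning/forgetting parameters and biases.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_mastery(p_prev, delta_tau, f_k,
                   theta_f, b_f, theta_l1, theta_l2, b_l):
    """One step of p(u_k^t = 1) given p(u_k^{t-1} = 1) = p_prev,
    following the learning/forgetting transition described above."""
    p_forget = sigmoid(delta_tau / theta_f + b_f)                # p(F_k = 1)
    p_learn = sigmoid(theta_l1 * f_k / (f_k + theta_l2) + b_l)   # p(L_k = 1)
    return p_prev * (1 - p_forget) + (1 - p_prev) * p_learn
\end{verbatim}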
To better understand our proposed TRACED, we summarize the graphical representation of our model in Figure \ref{fig:IKTFigure}, where the shaded variables and unshaded variables indicate the observed variables and latent variables, respectively. Here, what we can observe are students' responses \(R\) with \(M\) problems and \(T\) exercises and the Q-matrix with \(K\) knowledge concepts.
The mastery \(U_{k}^{t}\) of knowledge concept \(k\) at the \emph{t}-th exercise depends on the mastery \(U_{k}^{t-1}\) of knowledge concept \(k\) at the \emph{(t-1)}-th exercise, the time interval \(\Delta \tau_k\) between \(U_k^t\) and \(U_k^{t-1}\), and the exercise frequency \(f_{k}\) within \(\Delta \hat{\tau}\).
The response \(R_{e}^{t}\) about problem \(e\) at the \emph{t}-th exercise is influenced by problem information \(Q_{e}\), guessing factor \(g_{e}\), slipping factor \(s_{e}\) and knowledge concept mastery \(U^t\) at the \emph{t}-th exercise.
\subsection{Model Inference}
\label{sec:org18a4951}
In this section, we first infer the joint probability \(p(u_i, r_i)\) and marginal probability \(p(r_{i})\) and then explain the technical challenges encountered in model learning. Then, we detail the solutions proposed to address these technical challenges.
\subsubsection{$p(u_i, r_i)$ and $p(r_i)$ Inference}
\label{sec:orgc78cd47}
We obtain the joint distribution of the observed and hidden variables by combining Eqs. \ref{eq:pu} and \ref{eq:pu2u}.
\begin{equation}
\label{eq:pur}
\begin{aligned}
&p(u_i, r_i) = p(u_i^1) \prod\limits_{t=2}^{T} p(u_i^t | u_i^{t-1}) \prod\limits_{ t=1 }^{ T } p(r_{i,t} | u_i^t) \\
&= \prod\limits_{k=1}^{K} p(u_{i,k}^1) \prod\limits_{t=2}^{T} \prod\limits_{k=1}^{K} p(u_{i,k}^t | u_{i,k}^{t-1}) \prod\limits_{ t=1 }^{T} p(r_{i,t} | u_{i,1}^t,...,u_{i,K}^t )
\end{aligned}
\end{equation}
By using maximum likelihood estimation to learn model parameters, we need to maximize the probability of observations (maximize the marginal probability distribution \(p(r_i)\)). The marginal probability distribution \(p(r_i)\) can be obtained as:
\begin{equation}
\label{eq:pr}
\begin{aligned}
p(r_i) &= \sum\limits_{u_i} p(u_i, r_i ) \\
&= \sum\limits_{u_i} (p(u_{i}^1 ) \prod\limits_{ t=2 }^{T} p(u_i^t | u_i^{t-1}) \prod\limits_{ t=1 }^{ T } p(r_{i,t} | u_i^t))\\
&= \sum\limits_{u_i} (p(u_{i}^1) p(r_{i,1} | u_i^1) \prod\limits_{ t=2 }^{ T } p(u_i^t | u_i^{t-1}) p(r_{i,t} | u_i^{t}))\\
&=(\sum\limits_{u_i^{1}} p(u_i^1) p(r_{i,1}|u_i^1))*...* (\sum\limits_{u_i^{T}} p(u_i^T | u_i^{T-1}) p(r_{i,T}|u_i^T))\\
\end{aligned}
\end{equation}
where \(\sum\limits_{u_i^{t}} p(u_i^t) p(r_{i,t}|u_i^t)\) is defined as follows:
\begin{equation}
\begin{aligned}
&\sum\limits_{u_i^{t}} p(u_i^t) p(r_{i,t}|u_i^t) = \sum\limits_{u_{i}^t} p(u_{i,1}^t) ... p(u_{i,K}^t) p(r_i | u_{i,1}^t,...,u_{i,K}^{t})
\end{aligned}
\end{equation}
Due to the \emph{explaining away problem} (observation \(r_{i,t}\) depends on multiple hidden variables \(u_{i,1}^t, u_{i,2}^t, ..., u_{i,K}^{t}\)) \cite{204911}, we cannot use the forward algorithm \cite{rabiner1989tutorial} to reduce the time complexity; hence, the time complexity of calculating \(p(r_i)\) is \(O(2^K*T)\), which is unacceptably high. Therefore, we cannot use maximum likelihood estimation for parameter optimization.
By using the Expectation-Maximization (EM) algorithm to learn model parameters, we need to maximize \(\sum\limits_{u} p(u | r) \log p(u, r)\). The posterior distribution \(p(u_i | r_i)\) can be obtained by combining Eq.\ref{eq:pur} and Eq.\ref{eq:pr} as follows:
\begin{equation}
\label{eq:101}
\begin{aligned}
p (u_i| r_i) = \frac{p(u_i, r_i)}{p(r_i)}
\end{aligned}
\end{equation}
Since the time complexity of calculating the posterior distribution \(p(u_i | r_i)\) is also \(O(2^K * T)\),
we cannot quickly sample the latent variables \(u_i\). Therefore, we cannot use the EM algorithm for parameter optimization.
To solve this problem, we adapt an LSTM network to approximate the posterior distribution and propose a heuristic algorithm to learn the model parameters.
\subsubsection{Approximate Inference By Using LSTM}
\label{sec:orgc92de86}
We propose an LSTM-based network to approximate the posterior distribution \(p(u_i | r_i)\) as:
\begin{equation}
\label{eq:qphi}
\begin{aligned}
&q_{\phi}(u_{i}^{t}) = \sigma(W_q*LSTM(\tilde{x}_{i,t})+b_{q})\\
\end{aligned}
\end{equation}
where \(q_{\phi}(u_i^t) \in \mathbb{R}^{K}\) denotes the approximate result of the posterior distribution \([p(u_{i,1}^t|r_{i,t}),\) \(...,p(u_{i,K}^{t}|r_{i,t})]\),
\(\phi\) denotes all the training parameters in \(q_{\phi}\), and \(\sigma\) is the sigmoid activation function.
To better utilize and preserve the information of the exercise response $r_{i,t}$, we introduce a zero vector \(\mathbf{0} = (0,0,...,0)\) with the same \(2d_e\) dimensions as the exercise embedding \(x_{i,t}\) and then learn the combined input vector \(\tilde{x}_{i,t} \in \mathbb{R}^{4d_e+K}\) as:
\begin{equation}
\begin{aligned}
&\widetilde{x}_{i,t}=\left\{\begin{array}{ll}
{\left[x_{i,t} \oplus \mathbf{0} \oplus p(u_{i}^t)\right]} & \text { if } r_{i,t}=1 \\
{\left[\mathbf{0} \oplus x_{i,t} \oplus p(u_{i}^t)\right]} & \text { if } r_{i,t}=0 \\
\end{array}\right. \\
\end{aligned}
\end{equation}
where \(\oplus\) is the operation of concatenating two vectors, and \(p(u_{i}^t) = [p(u_{i,1}^{t}),...,p(u_{i,K}^{t})]\) can be obtained from Eq.\ref{eq:pu}. In this way, the LSTM can better identify and utilize the exercise response information $r_{i,t}$ from the embedding vector $\widetilde{x}_{i,t}$. The exercise embedding \(x_{i,t}\) is defined as:
\begin{align}
& x_{i,t} = [E_{e,e_{i,t}}, \hat{E}_{e,e_{i,t}}] \label{eq:17_1}\\
& \hat{E}_{e,e_{i,t}} = \sum\limits_{j=1}^K Q_{e_{i,t}, j} E_{k,j} \label{eq:17_2}
\end{align}
where \(E_{k,j}, E_{e, e_{i,t}} \in \mathbb{R}^{d_e}\) are the distributed representations of the knowledge concept $j$ and problem $e_{i,t}$, and $Q_{e_{i,t},j}$ represents whether problem $e_{i,t}$ contains the knowledge concept $j$.
Given student \(i\)'s combined input sequence \({\tilde{x}_{i,1},...,\tilde{x}_{i,T}}\), the hidden state \(h_{i,t} \in \mathbb{R}^{d_h}\) at the \emph{t}-th exercise is updated based on the current combined input \(\tilde{x}_{i,t}\) and the previous student state \(h_{i,t-1}\) as follows.
\begin{equation}
\begin{aligned}
i_{i,t} &=\sigma\left(\mathbf{Z}_{\mathbf{\tilde { x }i}} \tilde{x}_{i,t}+\mathbf{Z}_{\mathbf{h} \mathbf{i}} h_{i,t-1}+\mathbf{b}_{\mathbf{i}}\right) \\
f_{i,t} &=\sigma\left(\mathbf{Z}_{\widetilde{\mathbf{x}} \mathbf{f}} \tilde{x}_{i,t}+\mathbf{Z}_{\mathbf{h} \mathbf{f}} h_{i,t-1}+\mathbf{b}_{\mathbf{f}}\right) \\
o_{i,t} &=\sigma\left(\mathbf{Z}_{\tilde{\mathbf{x}} \mathbf{o}} \tilde{x}_{i,t}+\mathbf{Z}_{\mathbf{h} \mathbf{o}} h_{i,t-1}+\mathbf{b}_{\mathbf{o}}\right) \\
c_{i,t} &=f_{i,t} \cdot c_{i,t-1}+i_{i,t} \cdot \tanh \left(\mathbf{Z}_{\tilde{\mathbf{x}} \mathbf{c}} \tilde{x}_{i,t}+\mathbf{Z}_{\mathbf{h} \mathbf{c}} h_{i,t-1}+\mathbf{b}_{\mathbf{c}}\right) \\
h_{i,t} &=o_{i,t} \cdot \tanh \left(c_{i,t}\right)
\end{aligned}
\end{equation}
where \(\mathbf{Z_{\tilde{x}*}} \in \mathbb{R}^{d_h \times (4d_e+K)}\), \(\mathbf{Z}_{h*} \in \mathbb{R}^{d_h \times d_h}\) and \(\mathbf{b_{* }} \in \mathbb{R}^{d_h}\) are the parameters in LSTM.
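A PyTorch sketch of the posterior network \(q_{\phi}\) described in Eq. \ref{eq:qphi} is given below (an \texttt{nn.LSTM} followed by a linear layer and a sigmoid over the \(K\) concepts); the tensor shapes are assumptions consistent with the text, not the authors' released implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class PosteriorNet(nn.Module):
    """Approximates p(u | r) with an LSTM, in the spirit of Eq. (qphi)."""
    def __init__(self, d_e, K, d_h):
        super().__init__()
        self.lstm = nn.LSTM(input_size=4 * d_e + K, hidden_size=d_h,
                            batch_first=True)
        self.out = nn.Linear(d_h, K)

    def forward(self, x_tilde):            # x_tilde: (batch, T, 4*d_e + K)
        h, _ = self.lstm(x_tilde)          # hidden states: (batch, T, d_h)
        return torch.sigmoid(self.out(h))  # q_phi(u^t): (batch, T, K)
\end{verbatim}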
\begin{algorithm}[htbp]
\caption{The training algorithm of TRACED}
\label{algorithm:KCRLESTraining}
\begin{algorithmic}[1]
\Require
Student exercise sequences $S = \left\{ e, t, r \right\}$.
\Ensure
The trained parameters $\phi$, and $\theta$.
\For {iteration = 1 to IterNum}
\For {i = 1 to N}
\For {t = 1 to T}
\State Sample ${u_{i}^{t}}' \sim q_{\phi}(u_i^t | r_{i,t})$;
\EndFor
\EndFor
\State Update $\theta = \arg \min\limits_{\theta} - \frac{1}{N} \sum\limits_{i=1}^{N} \log p(u_i',r_{i})$;
\For {i = 1 to N}
\For {t = 1 to T}
\State Sample ${u_i^t}' \sim p(u_i^t | {u_i^{t-1}}'))$;
\State Sample ${r_{i,t}}' \sim p(r_{i,t}|{u_i^t}')$;
\EndFor
\EndFor
\State Update $\phi = \arg \min\limits_{\phi} - \frac{1}{N} \sum\limits_{i=1}^N \log q_{\phi}({u_i}' | {r_i}')$;\
\EndFor
\end{algorithmic}
\end{algorithm}
\subsection{Model Learning}
\label{sec:orgf38d0ea}
This section explains in detail how the model's parameters are learned. The process of TRACED parameter learning is shown in Algorithm \ref{algorithm:KCRLESTraining}.
Inspired by \cite{hinton1995wake}, parameter learning is divided into the wake phase and the sleep phase.
In the wake phase, we use \(q_{\phi}(u_i|r_i)\) to sample the hidden variables \(u_i\) and then optimize the parameters \(\theta\) of \(p(u_i, r_i)\) by using the \(u_i\) obtained via sampling.
In the sleep phase, we use \(p(u_i, r_i)\) to sample the hidden variables \(u_i\) and the observed variables \(r_i\), and then optimize the parameters \(\phi\) of \(q_{\phi}(u_i | r_i)\) using the sampled \(u_i\) and \(r_i\). Both phases rely on real data: the observed and hidden variables are sampled conditioned on real data, and the model is trained on a combination of real and sampled data.
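The wake--sleep alternation of Algorithm~\ref{algorithm:KCRLESTraining} can be summarized by the following Python skeleton; the \texttt{sample\_*} and \texttt{update\_*} functions are placeholders (random Bernoulli draws and no-ops) standing in for $q_{\phi}$, $p_{\theta}$ and their optimizers, so only the control flow is meant to mirror the algorithm.
\begin{verbatim}
# Schematic wake-sleep loop mirroring Algorithm 1 (placeholder samplers/updates).
import numpy as np

rng = np.random.default_rng(2)
N, T, K = 4, 6, 5                                    # students, steps, concepts
R = (rng.random((N, T)) < 0.6).astype(int)           # observed responses r_{i,t}

def sample_q(r_it):        # u' ~ q_phi(u^t | responses)        (placeholder)
    return (rng.random(K) < 0.5).astype(int)

def sample_p_u(u_prev):    # u' ~ p(u^t | u^{t-1})              (placeholder)
    return (rng.random(K) < 0.5).astype(int)

def sample_p_r(u):         # r' ~ p(r | u)                      (placeholder)
    return int(rng.random() < u.mean())

def update_theta(U_s, R_s):  # arg min_theta -1/N sum log p(u', r)      (placeholder)
    pass

def update_phi(U_s, R_s):    # arg min_phi  -1/N sum log q_phi(u' | r') (placeholder)
    pass

for iteration in range(3):
    # Wake phase: sample hidden states from q_phi given real responses, fit theta.
    U_wake = np.array([[sample_q(R[i, t]) for t in range(T)] for i in range(N)])
    update_theta(U_wake, R)
    # Sleep phase: sample (u', r') from the generative model p, fit phi.
    U_sleep = np.zeros((N, T, K), dtype=int)
    R_sleep = np.zeros((N, T), dtype=int)
    for i in range(N):
        u = np.zeros(K, dtype=int)
        for t in range(T):
            u = sample_p_u(u)
            U_sleep[i, t], R_sleep[i, t] = u, sample_p_r(u)
    update_phi(U_sleep, R_sleep)
\end{verbatim}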
\subsubsection{The loss function in the Wake Phase.}
\label{sec:org0ba821f}
In the Wake Phase, we utilize an LSTM-based network to approximate the posterior distribution \(p(u | r, \theta)\) and learn the parameters of TRACED by maximizing the Evidence Lower Bound (ELBO).
The derivation process of ELBO is as follows.
\begin{equation}
\begin{aligned}
&\log p(r | \theta) = \log \frac{p(u, r| \theta)}{p(u|r, \theta)}\\
&= \sum \limits_u q_\phi (u | r) \log \frac{p(u, r| \theta)}{q_{\phi}(u | r)} \frac{q_{\phi}(u | r)}{p(u|r, \theta)}\\
&= \underbrace{\sum\limits_{u} q_{\phi} (u | r) \log p(u, r| \theta) + H[q_{\phi}]}_{\text{Evidence Lower Bound (ELBO)}} \\
& \qquad + KL (q_{\phi} (u | r) || p(u| r, \theta))\\
\end{aligned}
\end{equation}
where KL represents Kullback-Leibler divergence, and \(H[q_{\phi}] = - \sum\limits_u q_\phi(u | r) \log q_\phi(u | r)\).
The derivation process of the loss function in the Wake Phase is expressed as follows:
\begin{equation}
\begin{aligned}
\theta &= \arg \max_{\theta} ELBO\\
&= \arg \max_{\theta} \sum\limits_{u} q_{\phi} (u | r) \log p(u, r| \theta) + H[q_{\phi}]\\
&\propto \arg \max_{\theta} \sum\limits_{u} q_{\phi} (u | r) \log p \left( u, r| \theta \right)\\
&= \arg \min_{\theta} - \frac{1}{N} \sum\limits_{i=1}^N \log p \left( u_i', r_i| \theta \right)
\end{aligned}
\end{equation}
where \(\theta\) represents all parameters of \(p(u,r|\theta)\), \(u_i'\) is sampled by \(q_{\phi}(u_i | r_i)\), and \(r_i\) is the real data.
\subsubsection{The loss function in the Sleep Phase.}
\label{sec:org5042490}
In the Sleep Phase, we learn the parameters \(\phi\) of the LSTM network. We minimize \(KL(p(u|r, \theta) || q_{\phi} (u|r, \phi))\) to better approximate the posterior distribution \(p(u|r, \theta)\). The derivation process of the loss function in the Sleep Phase is defined as follows.
\begin{equation}
\begin{aligned}
\phi &= \arg \min_{\phi} KL(p( u | r, \theta) \,||\, q_{\phi} (u | r) )\\
&\propto \arg \max_{\phi} \sum\limits_{r} p(r | \theta) \sum\limits_{u} p(u | r, \theta) \log q_{\phi} (u | r)\\
&= \arg \max_{\phi} E_{u,r \sim p(u, r | \theta)} \left[ \log q_{\phi} (u | r) \right]\\
&\approx \arg \min_{\phi} -\frac{1}{N} \sum\limits_{i=1}^N \log q_{\phi}(u_i' | r_i')\\
\end{aligned}
\end{equation}
where \(\phi\) represents all parameters of \(q_{\phi}(u_i | r_i)\), and \(u_i'\) and \(r_i'\) are sampled by \(p(u_i, r_i | \theta)\).
\subsection{Predicting Student Future Knowledge Mastery Probabilities and Performance}
\label{sec:orgc51de35}
Before predicting students' future performance, we must predict their future knowledge concept mastery probabilities. Given the previous \(t\) exercise records of student \(i\), we predict student \(i\)'s knowledge concept mastery when the $(t+1)$-th exercise is performed at time \(\tau_{i,t+1}\) as:
\begin{equation}
\begin{aligned}
&p(u_i^{t+1} | r_{i,1}, r_{i,2}, ..., r_{i,t}, e_i, \tau_i)\\
&\approx \sum\limits_{u_i^t} q_{\phi} (u_i^t | r_{i,1}, ..., r_{i,t}, e_i, \tau_i) p(u_i^{t+1} | u_i^t, \tau_i)\\
&= (\sum\limits_{u_{i,1}^{t}} q_{\phi} (u_{i,1}^t | r_{i,1}, ..., r_{i,t}, e_i, \tau_i) p(u_{i,1}^{t+1} | u_{i,1}^t, \tau_i))* ... *\\
& \quad (\sum\limits_{u_{i,K}^{t}} q_{\phi} (u_{i,K}^t | r_{i,1}, ..., r_{i,t}, e_i, \tau_i) p(u_{i,K}^{t+1} | u_{i,K}^t, \tau_i))
\end{aligned}
\end{equation}
where student \(i\)'s exercise time \(\tau_i = \left\{ \tau_{i,1},...,\tau_{i,t+1}\right\}\) and student \(i\)'s exercise problem \(e_i = \left\{ e_{i,1}, ..., e_{i,t} \right\}\).
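Because the distribution factorizes over concepts, the one-step-ahead mastery prediction reduces to a per-concept two-state marginalization. The short sketch below illustrates this with made-up posterior and transition probabilities (not learned values).
\begin{verbatim}
# Per-concept two-state marginalisation:
# p(u_k^{t+1}=1) = q(u_k^t=1) p(1|1) + q(u_k^t=0) p(1|0).
import numpy as np

q_post  = np.array([0.9, 0.4, 0.7, 0.2])    # q_phi(u_{i,k}^t = 1 | history)
p_learn = np.array([0.3, 0.3, 0.2, 0.4])    # p(u_k^{t+1}=1 | u_k^t=0)
p_keep  = np.array([0.95, 0.9, 0.9, 0.85])  # p(u_k^{t+1}=1 | u_k^t=1)

p_next = q_post * p_keep + (1.0 - q_post) * p_learn   # p(u_{i,k}^{t+1} = 1)
print(np.round(p_next, 3))
\end{verbatim}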
Furthermore, we can predict the probability that student \(i\) correctly answers problem \(e_{i,t+1}\) at time \(\tau_{i,t+1}\) as:
\begin{equation}
\begin{aligned}
&p(r_{i,T+1}| r_{i,1}, r_{i,2}, ... , r_{i,T}, e_{i}, \tau_i)\\
&= \frac{p(r_{i,T+1}, u_i^{T+1}| r_{i,1}, r_{i,2}, ... , r_{i,T}, e_i, \tau_i)}{p(u_i^{T+1}| r_{i,1}, r_{i,2}, ..., r_{i,T}, r_{i, T+1}, e_i, \tau_i)}\\
&= \frac{p(u_i^{T+1}| r_{i,1}, r_{i,2}, ... , r_{i,T}, e_i, \tau_i) p(r_{i, T+1} | u_i^{T+1}, \tau_i)}{p(u_i^{T+1}| r_{i,1}, r_{i,2}, ...,r_{i,T}, r_{i,T+1}, \tau_i)} \\
&\approx \frac{ p(r_{i,T+1} | u_i^{T+1}, e_i) (\sum\limits_{u_i^T} q_{\phi} (u_i^T | r_{i,1\sim T}, e_i, \tau_i) p(u_i^{T+1} | u_i^T, \tau_i))}{q_{\phi} (u_i^{T+1} | r_{i,1 \sim T+1}, e_i, \tau_i)}
\end{aligned}
\end{equation}
Since the time complexity of \(\sum\limits_{u_i^T} q_{\phi} (u_i^T | r_{i,1\sim T}, e_i, \tau_i)\) \(p(u_i^{T+1} | u_i^T, \tau_i)\) is \(O(2^K)\), we instead employ an LSTM network to approximate \(p(r_{i,T+1}| r_{i,1}, r_{i,2}, ... \) \(, r_{i,T}, e_{i}, \tau_i)\) from the distributed representations of students' exercise records. Finally, we predict students' future exercise responses as follows.
\begin{align}
& y_p(i,t+1) = \sigma(W_n*LSTM(\hat{x}_{i,t+1})+b_{n})\\
& \hat{x}_{i,t+1}= \left[x_{i,t+1} \oplus q_{\phi}(u_{i}^{t}) \oplus p(u_{i}^{t+1})\right]
\end{align}
where \(\hat{x}_{i,t+1}\) denotes the distributed representation of student \emph{i}'s $(t+1)$-th exercise record without the exercise response, which contains all the information needed to compute \(p(r_{i,T+1}| r_{i,1}, r_{i,2}, ... , r_{i,T}, e_{i}, \tau_i)\) (prior, posterior, and distributed representation); \(x_{i,t+1}\), \(q_{\phi}(u_i^{t})\), and \(p(u_i^{t+1})\) can be obtained via Eqs.~\ref{eq:17_1}, \ref{eq:17_2}, \ref{eq:qphi}, and \ref{eq:pu}, respectively; and \(y_{p}(i,t+1)\) is the probability that student \(i\) answers the $(t+1)$-th exercise correctly.
\subsection{Predicting Relationships between Concepts}
\label{sec:org1ea799b}
Given part of a knowledge concept graph, we can build a supervised model to fill in the gaps in the graph. In this paper, this goal is simplified to the prediction of only the inclusion relationships between knowledge concepts.
The inclusion relationship between knowledge concepts \(i\) and \(j\) is predicted as follows:
\begin{equation}
\label{equ:PredictingRelationshipsbetweenKnowledgeConcepts}
\begin{aligned}
&h_r = \tanh(W_{r,h} * E_r + b_{r,h})\\
&y_{r} (i,j) = \sigma(W_{r,o} * h_r + b_{r,o})
\end{aligned}
\end{equation}
where \(E_r = [E_{e,i}, E_{e,j}, E_{e,i} - E_{e,j}]\) and \(W_{r,h}\), \(W_{r,o}\), \(b_{r,h}\), and \(b_{r,o}\) are the training parameters of the fully connected neural network.
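A minimal sketch of this predictor is shown below with random, untrained weights; the Q-matrix predictor of the next subsection has the same structure, with $E_Q = [E_{e,i}, E_{k,j}, E_{e,i} - E_{k,j}]$ as input.
\begin{verbatim}
# One-hidden-layer scorer for the inclusion relationship between two concepts,
# fed with [E_i, E_j, E_i - E_j]. Weights are random here; in the paper they are
# trained on the known part of the knowledge concept graph.
import numpy as np

rng = np.random.default_rng(3)
d_e, d_h = 8, 16
W_rh = rng.normal(scale=0.1, size=(d_h, 3 * d_e)); b_rh = np.zeros(d_h)
w_ro = rng.normal(scale=0.1, size=d_h);            b_ro = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relation_score(E_i, E_j):
    E_r = np.concatenate([E_i, E_j, E_i - E_j])
    h_r = np.tanh(W_rh @ E_r + b_rh)
    return float(sigmoid(w_ro @ h_r + b_ro))       # y_r(i, j) in (0, 1)

E_i, E_j = rng.normal(size=d_e), rng.normal(size=d_e)
print(relation_score(E_i, E_j))
\end{verbatim}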
\subsection{Predicting Concepts contained by Problems}
\label{sec:orga2021d1}
Given part of a Q-matrix, we can use it to train a supervised model that judges whether a given problem contains a given knowledge concept. This model can be used to fill in missing values of the Q-matrix and to fuzzify it by replacing the original 0--1 matrix with a continuous probability matrix.
We predict the inclusion relationships between knowledge concept and problem as follows:
\begin{equation}
\label{eq:PredictingConceptsInProblems}
\begin{aligned}
&h_Q = \tanh(W_{Q,h}*E_Q + b_{Q,h})\\
&y_{Q} (i,j) = \sigma(W_{Q,o} * h_Q + b_{Q,o})
\end{aligned}
\end{equation}
where \(E_Q = [E_{e,i}, E_{k,j}, E_{e,i} - E_{k,j}]\) and \(W_{Q,h}\), \(W_{Q,o}\), \(b_{Q,h}\), and \(b_{Q,o}\) are the training parameters of the fully connected neural network.
|
{
"arxiv_id": "2302.08712",
"language": "en",
"timestamp": "2023-02-20T02:07:36",
"url": "https://arxiv.org/abs/2302.08712",
"yymm": "2302"
} | \section{Introduction}\label{sec:introduction}
Anomalies indicate a departure of a system from its normal behaviour. In industrial systems, they often lead to failures. By definition, anomalies are rare events. As a result, from a Machine Learning standpoint, collecting and classifying anomalies pose significant challenges. For example, when anomaly detection is posed as a classification problem, it leads to extreme class imbalance (the data paucity problem). Though several current approaches use semi-supervised neural networks to detect anomalies~\cite{Forero:2019,Sperl2020}, these approaches still require some labeled data. More recently, approaches have emerged that model the normal dataset and treat any deviation from it as an anomaly. For instance, the autoencoder-based family of models~\cite{Jinwon:2015} uses some form of threshold to detect anomalies. Another class of approaches relies on reconstruction errors~\cite{Sakurada:2019} as an anomaly score: if the reconstruction error of a datapoint is higher than a threshold, the datapoint is declared an anomaly. However, the threshold value can be specific to the domain and the model, and deciding the threshold on the reconstruction error can be cumbersome.
In this paper, we introduce the notion of {\em quantiles} in multiple versions of the LSTM-based anomaly detector. Our approach is built on the following principles:
\begin{itemize}
\item training models on a normal dataset
\item modeling temporal dependency
\item proposing an adaptive solution that does not require manual tuning of the activation function
\end{itemize}
Since our proposed model tries to capture the normal behavior of an industrial device, it does not require any expensive dataset labeling. Our approach also does not require re-tuning of threshold values across multiple domains and datasets. We show through empirical results later in the paper (see Table \ref{table:datasetschar} of Appendix \ref{appendix:datasetchar}) that the distributional variance does not impact the prediction quality.
Our contributions are threefold: \\
\textbf{(1)} Introduction of {\em quantiles}, free from the assumptions on data distributions, in design of quantile-based LSTM techniques and their application in anomaly identification.\\
\textbf{(2)} Proposal of the {\em Parameterized Elliot} as a 'flexible-form, adaptive, learnable' activation function in LSTM, where the parameter is learned from the dataset. Therefore, it does not require any manual retuning when the nature of the dataset changes. We have shown empirically that the modified LSTM architecture with \ac{pef} performed better than the \ac{ef} and showed that such behavior might be attributed to the slower saturation rate of \ac{pef}.\\
\textbf{(3)} Demonstration of {\em superior performance} of the proposed \ac{lstm} methods over state-of-the-art (SoTA) deep learning algorithms (Autoencoder~\cite{Yin}, DAGMM~\cite{zong2018deep}, DevNet~\cite{pang2019deep}) and non-deep learning algorithms (\ac{if}~\cite{iforest}, Elliptic envelope~\cite{envelope}).
The rest of the paper is organized as follows. The proposal and discussion of various LSTM-based algorithms are presented in section \ref{sec:varlstm}. Section \ref{sec:background} describes the LSTM structure and introduces the \ac{pef}. This section also explains the intuition behind choosing a parameterized version of the AF and better variability due to it. Experimental results are presented in section \ref{sec:experiment}. Section \ref{related} discusses relevant literature in anomaly detection. We conclude the paper in section \ref{sec:conclusion}.
\section{Anomaly detection with Quantile LSTMs}\label{sec:varlstm}
Since {\em distribution-independent} and {\em domain-independent} anomaly detection are the two key motivations behind this work, we borrow the concept of quantiles from Descriptive and Inferential Statistics to address this challenge.
\subsection{Why Quantile based approach?}
Quantiles are used as a robust alternative to classical conditional means in Econometrics and Statistics~\cite{koenker}. In a previous work, Tambwekar et al.~\cite{Tambwekar:2022} extended the notion of conditional quantiles to the binary classification setting, allowing the uncertainty in predictions to be quantified and providing interpretations of the functions learnt by the models via a new loss called the binary quantile regression loss (sBQC). The estimated quantiles are leveraged to obtain individualized confidence scores that provide an accurate measure of a prediction being misclassified. Since quantiles are a natural way to quantify uncertainty, they are a natural candidate for anomaly detection. To the best of our knowledge, however, quantile-based methods have not previously been used for anomaly detection, natural as the idea seems.
\par Empirically, if the data being analyzed are not actually distributed according to an assumed distribution, or if there are other potential sources for anomalies that are far removed from the mean, then quantiles may be more useful descriptive statistics than means and other moment-related statistics. Quantiles can be used to identify probabilities of the range of normal data instances such that data lying outside the defined range are conveniently identified as anomalies.
The important aspect of distribution-free anomaly detection is the anomaly threshold being agnostic to the data from different domains.
Simply stated, once a threshold is set (in our case, the 0.1 and 0.9 quantiles), we do not need to tune it to detect anomalous instances for different datasets. Quantiles allow distributions to be used for many practical purposes, including constructing confidence intervals. A quantile divides a probability distribution into areas of equal probability, i.e., quantiles let us quantify the chance that a given quantity lies inside a specified range of values. This allows us to determine the confidence level of an event (an anomaly) actually occurring.
Though the mean of a distribution is a useful measure when the distribution is symmetric, there is no guarantee that actual data distributions are symmetric. If potential sources of anomalies are far removed from the mean, then medians are more robust than means, particularly for skewed and heavy-tailed data. It is well known that quantiles minimize the check loss~\cite{Horowitz:1992}, a generalized version of the Mean Absolute Error (MAE) arising from medians rather than means. Thus, quantiles are less susceptible to long-tailed distributions and outliers than the mean~\cite{Dunning2021}.
Therefore, it makes practical sense to investigate the power of quantiles in detecting anomalies in data distributions. Unlike the methods for anomaly detection in the literature, our proposed quantile-based thresholds applied in the quantile-LSTM are generic and not specific to the domain or dataset. The need to isolate anomalies from the underlying distribution is significant since it allows us to detect anomalies irrespective of the assumptions on the underlying data distribution. We have introduced the notion of quantiles in multiple versions of the LSTM-based anomaly detector in this paper, namely (i) quantile-LSTM (ii) iqr-LSTM
and (iii) Median-LSTM.
All the LSTM versions are based on estimating quantiles instead of the mean behaviour of an industrial device. Note that the median is the $50\%$ quantile.
\subsection{Various quantile-LSTM Algorithms}
Before we discuss quantile-based anomaly detection, we describe the data structure and processing setup, with some notation. Let $x_i, i=1,2,\ldots,n$ be the $n$ time-series training datapoints. Let $T_k = \{ x_i: i=k,\cdots,k+t-1 \}$ be a set of $t$ datapoints, split into $w$ disjoint windows, each of integer size $m=\frac{t}{w}$, so that $T_k = \{ T_k^1,\cdots,T_k^w \}$ with $T_k^j = \{ x_{k+m (j-1)},...,x_{k+ mj-1}\}$. In Figure~\ref{fig:movement}, we show the sliding characteristics of the proposed algorithm on a hypothetical dataset, with $t=9,m=3$. Let $Q_{\tau}(D)$ be the sample $\tau$-quantile of the datapoints in the set $D$. The training data consists of, for every $T_k$, the predictors $X_{k,\tau} \equiv \{Q_{\tau}(T_k^j)\}, j=1,\cdots,w$ and the label (response) $y_{k,\tau} \equiv Q_{\tau}(T_{k+1})$, the sample quantile at a future time-step. Let $\hat{y}_{k,\tau}$ be the value predicted by an LSTM model.
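The construction of the training pairs $(X_{k,\tau}, y_{k,\tau})$ can be sketched as follows; the function below only builds the arrays (which would then be fed to an LSTM), and the sine-plus-noise series is purely an illustrative stand-in for a real sensor signal.
\begin{verbatim}
# Build (X_{k,tau}, y_{k,tau}): per-window quantiles of the current period as
# predictors, the quantile of the period shifted by one as the label.
import numpy as np

def make_quantile_pairs(x, t=9, w=3, tau=0.9):
    m = t // w                                   # window size
    X, y = [], []
    for k in range(len(x) - t):                  # the last period has no label
        period = x[k:k + t]
        feats = [np.quantile(period[j * m:(j + 1) * m], tau) for j in range(w)]
        X.append(feats)
        y.append(np.quantile(x[k + 1:k + 1 + t], tau))  # quantile of shifted period
    return np.array(X), np.array(y)

x = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.default_rng(4).normal(size=200)
X, y = make_quantile_pairs(x)
print(X.shape, y.shape)                          # (191, 3) (191,)
\end{verbatim}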
\begin{figure*}[!htp]
\centering
\includegraphics[width=0.8\textwidth]{Figures/wtwme.png}
\caption{Sliding movement of a time period}
\label{fig:movement}
\end{figure*}
A general recipe we are proposing to detect anomalies is to: (i) estimate quantile $Q_{\tau}(x_{k+t+1})$ with $\tau \in (0,1)$ and (ii) define a statistic that measures the outlier-ness of the data, given the observation $x_{k+t+1}$. Instead of using global thresholds, thresholds are adaptive i.e. they change at every time-point depending on quantiles.
\subsubsection{quantile-LSTM} As the name suggests, in quantile-LSTM, we forecast two quantiles $q_{low}$ and $q_{high}$ to detect the anomalies present in a dataset. We assume the next quantile values of the time period after sliding the time period by one position are dependent on the quantile values of the current time period.
\begin{figure*}[!htb]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.85\linewidth]{Figures/quantile-lstm_v2.png}
\caption{Anomaly detection process using quantile-LSTM}
\label{fig:quantilelstm}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.85\linewidth]{Figures/median-lstm_v1.png}
\caption{Anomaly detection process using median-LSTM}
\label{fig:mediumlstm}
\end{subfigure}
\caption{The sigmoid function is applied as the recurrent activation in the forget gate ($f_t=\sigma(W_f*[h_{t-1},x_t]+b_f)$) and the input gate ($ i_t=\sigma(W_i*[h_{t-1},x_t]+b_i)$). \ac{pef} decides the information to store in the cell: $\hat{c}_{t}=PEF(W_c*[h_{t-1},x_t]+b_c)$.}
\end{figure*}
It is further expected that the nominal range of the data can be gleaned from $q_{low}$ and $q_{high}$. Using the $q_{low}$ and $q_{high}$ values of the current time windows, we can forecast the $q_{low}$ and $q_{high}$ values of the next time period after sliding by one position. This requires building two LSTM models, one for $q_{low}$ (LSTM$_{qlow}$) and another for $q_{high}$ (LSTM$_{qhigh}$). Consider the hypothetical training set from Figure \ref{fig:quantilelstm}. It has three time windows in the time period $x_1\cdots x_9$. Table \ref{table:firsttp} lists the three time windows of the time period $x_1\cdots x_9$ and the corresponding $q_{low}$ and $q_{high}$ values for each window.
\begin{table}[!htpb]
\centering
\begin{tabular}{c c c }
\hline
TW &$q_{low}$ & $q_{high}$ \\
\hline
$x_1,x_2,x_3$ & $X_{1,low} \equiv Q_{low}(T_1^1)$ & $X_{1,high} \equiv Q_{high}(T_1^1)$\\
\hline
$x_4,x_5,x_6$ & $X_{2,low} \equiv Q_{low}(T_1^2)$ & $X_{2,high} \equiv Q_{high}(T_1^2)$\\
\hline
$x_7,x_8,x_9$ & $X_{3,low} \equiv Q_{low}(T_1^3)$ & $X_{3,high} \equiv Q_{high}(T_1^3)$\\
\hline
\end{tabular}
\caption{The first time period and its corresponding time windows}
\label{table:firsttp}
\end{table}
The number of inputs to the LSTM equals the number of time windows $w$, with one output. Since three time windows per time period are considered in this example, both LSTM models have three inputs and one output. For example, the LSTM predicting the lower quantile has $X_{1,low}$, $X_{2,low}$, $X_{3,low}$ as its inputs and $y_{1,low}$ as its output for one time period. A total of $n-t+1$ instances are available for training the LSTM models, assuming no missing values.
After building the LSTM models, for each time period the models predict the corresponding quantile values and then slide one position to the next time period on the test dataset. quantile-LSTM applies the following anomaly identification rule: if the observed value $x_{k+t+1}$ falls outside the predicted interval $(q_{low}, q_{high})$, the observation is declared an anomaly. For example, the observed value $x_{10}$ is detected as an anomaly if $x_{10}< \hat{y}_{1,low}$ or $x_{10} > \hat{y}_{1,high}$. Figure \ref{fig:quantilelstm} illustrates the anomaly identification technique of quantile-LSTM on a hypothetical test dataset.
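The decision rule itself is straightforward; in the sketch below the two LSTM predictions are replaced by empirical window quantiles purely to make the snippet self-contained.
\begin{verbatim}
# quantile-LSTM decision rule: flag a point that falls outside the predicted band.
import numpy as np

def flag_anomaly(x_next, q_low_pred, q_high_pred):
    return x_next < q_low_pred or x_next > q_high_pred

rng = np.random.default_rng(5)
window = rng.normal(size=50)
q_low_pred, q_high_pred = np.quantile(window, 0.1), np.quantile(window, 0.9)
print(flag_anomaly(4.0, q_low_pred, q_high_pred))   # True: far above the band
print(flag_anomaly(0.0, q_low_pred, q_high_pred))   # False: inside the band
\end{verbatim}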
\par
\subsubsection{IQR-LSTM} iqr-LSTM is a special case of quantile-LSTM where $q_{low}$ is the 0.25 quantile and $q_{high}$ the 0.75 quantile. In addition, a third LSTM model predicts the median $q_{0.5}$. Effectively, at every time index $k$, three predictions are made: $\hat{y}_{k,0.25}$, $\hat{y}_{k,0.5}$, $\hat{y}_{k,0.75}$. Based on these, we define the Inter-Quartile Range (IQR) as $\hat{y}_{k,0.75} - \hat{y}_{k,0.25}$. Using the IQR, the following rule identifies an anomaly when
$x_{t+k+1} > \hat{y}_{k,0.5} + \alpha (\hat{y}_{k,0.75} - \hat{y}_{k,0.25})$ or
$x_{t+k+1} < \hat{y}_{k,0.5} - \alpha (\hat{y}_{k,0.75} - \hat{y}_{k,0.25})$, where $\alpha$ is a multiplier on the predicted IQR.
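A compact sketch of this rule is given below; the value $\alpha=1.5$ and the quantile values are illustrative choices, not prescribed settings.
\begin{verbatim}
# iqr-LSTM rule with the three predicted quantiles.
def iqr_flag(x_next, q25, q50, q75, alpha=1.5):
    iqr = q75 - q25
    return x_next > q50 + alpha * iqr or x_next < q50 - alpha * iqr

print(iqr_flag(5.0, q25=-0.7, q50=0.0, q75=0.7))   # True
print(iqr_flag(0.5, q25=-0.7, q50=0.0, q75=0.7))   # False
\end{verbatim}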
\subsubsection{Median-LSTM} Median-LSTM, unlike quantile-LSTM, does not identify the range of the normal datapoints; rather, based on a single LSTM, the distance between the observed value and the predicted median ($x_{t+k+1}-\hat{y}_{k,0.5}$) is computed, as depicted in Figure \ref{fig:mediumlstm}, and running statistics are computed on this derived data stream. The training set preparation is similar to quantile-LSTM.
To detect anomalies, Median-LSTM uses an implicit adaptive threshold: a single threshold value for the entire time series is not reasonable when the dataset exhibits seasonality and trends. We introduce some notation to make the description concrete. Adopting the conventions introduced before, define $d_k \equiv x_{t+k+1}-\hat{y}_{k,0.5}, k=1,2,\hdots,n-t$ and partition the difference series into $s$ sets of size $t$ each, i.e., $D \equiv \{ D_p, p=1,\hdots,s \}$, where $D_p = \{ d_i: i=(p-1)t+1,\hdots,pt \}$. After computing the differences over the entire dataset, the mean ($\mu_p$) and standard deviation ($\sigma_p$) are computed for each individual time period $D_p$. As a result, $\mu_p$ and $\sigma_p$ differ from one time period to another. Median-LSTM detects anomalies using the upper and lower threshold parameters of a particular time period $D_p$, computed as follows:
$$T_{p,higher}=\mu_p+w\sigma_p; \qquad T_{p,lower}=\mu_p-w\sigma_p$$
An anomaly is flagged for $d_k \in D_p$ when either $d_k > T_{p,higher}$ or $d_k < T_{p,lower}$. Now, what is a suitable value for $w$? With $w=2$, any datapoint more than two standard deviations away from the mean on either side is considered an anomaly. The intuition is that the differences of normal datapoints should be close to the mean value, whereas anomalous differences will be far from it.
Under an approximately Gaussian assumption, 95.45\% of the datapoints lie within two standard deviations of the mean, so anomalies are likely to fall among the remaining 4.55\%. We could also consider $w=3$, where 99.7\% of the datapoints lie within three standard deviations; however, this may miss borderline anomalies that are relatively close to the normal datapoints and detect only the most prominent ones. Therefore we have used $w=2$ across the experiments.
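The post-processing step of Median-LSTM can be sketched as follows; the difference series below is synthetic with one injected anomaly, and the predicted medians are assumed to have been computed already.
\begin{verbatim}
# Median-LSTM post-processing: per-period mean/std of the differences between the
# observations and the predicted medians, flagging points beyond w standard deviations.
import numpy as np

def median_lstm_flags(diffs, period_len, w=2.0):
    flags = np.zeros(len(diffs), dtype=bool)
    for start in range(0, len(diffs), period_len):
        d = diffs[start:start + period_len]
        mu, sd = d.mean(), d.std()
        flags[start:start + period_len] = np.abs(d - mu) > w * sd
    return flags

rng = np.random.default_rng(6)
diffs = rng.normal(scale=0.2, size=120)
diffs[37] = 3.0                                   # inject an anomalous difference
print(np.where(median_lstm_flags(diffs, period_len=30))[0])
# index 37 is flagged; borderline noise points may occasionally appear as well
\end{verbatim}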
\subsection{Probability Bound}
In this subsection, we analyze different datasets by computing the probability of occurrence of anomalies using the quantile approach. We consider the 0.1, 0.25, 0.75, 0.9, and 0.95 quantiles and compute the probability of anomalies beyond these values, as shown in Table \ref{table:probabilitybound} of the appendix.
The multivariate datasets are not considered since every feature may follow a different quantile threshold, so a single quantile threshold cannot be derived for all features. It is evident from Table \ref{table:probabilitybound} of Appendix \ref{appendix:pobabilitybound} that the probability of a datapoint being an anomaly is high if the datapoint's quantile value is either higher than 0.9 or lower than 0.1. However, if we increase the threshold to 0.95, the probability becomes 0 across the datasets. This emphasizes that an overly high quantile threshold fails to detect anomalies; the appropriate threshold must be identified, and it is apparent from the table that most anomalies lie near the 0.9 and 0.1 quantile values. Table \ref{table:probabilitybound}
also demonstrates the different nature of the anomalies present in the datasets. For instance, the anomalies of Yahoo Dataset$_1$ to Yahoo Dataset$_6$ are present near the quantile value 0.9, whereas the anomalies in Yahoo Dataset$_7$ to Yahoo Dataset$_9$ are close to both quantile values 0.9 and 0.1. Therefore, it is possible to detect anomalies with two extreme quantile values. We can consider these extreme quantile values as higher and lower quantile thresholds and derive a lemma, a proof of which is provided in the appendix.
\textbf{Lemma 1:} For an univariate dataset $\mathcal{D}$,
the probability of an anomaly $\mathcal{P(A)}=\mathcal{P}(\mathcal{E} > \alpha_{high}) +\mathcal{P(F}<\alpha_{low})$, where $\alpha_{high}, \alpha_{low}$ are the higher and lower level quantile thresholds respectively.
\begin{comment}
\begin{proof}
A datapoint is declared an anomaly if its quantile value is higher than $\alpha_{high}$ or lower than $\alpha_{low}$. Here $\alpha_{high}$, $\alpha_{low}$ are the higher and lower quantile threshold value.
$\mathcal{P}(\mathcal{E} > \alpha_{high})$ is the probability of an anomaly whose quantile value is higher than $\alpha_{high}$. On the other side, $\mathcal{P(F}<\alpha_{low})$ is the probability of quantile value of anomalous datapoint lower than $\alpha_{low}$. Hence the presence of an anomaly in a dataset is possible if one of the events is true. Therefore
\begin{align*}
&\mathcal{P(A)}=P(\mathcal{E} > \alpha_{high} \cup \mathcal{F}<\alpha_{low})\\
&\mathcal{P(A)}=\mathcal{P}(\mathcal{E} > \alpha_{high})+\mathcal{P(F}<\alpha_{low})-P(\mathcal{E} > \alpha_{high} \cap \mathcal{F}<\alpha_{low})
\end{align*}
\
Both the events $\mathcal{E, F}$are mutually exclusive. Hence the above Equation can be written as
\begin{equation}\label{eq:quantilethreshold}
\mathcal{P(A)}=\mathcal{P}(\mathcal{E} > \alpha_{high})+\mathcal{P(F}<\alpha_{low})
\end{equation}
\end{proof}
\end{comment}
The lemma entails that anomalies are trapped outside the high and low quantile threshold values. The bound is independent of the data distribution, since quantiles make only nominal distributional assumptions.
\section{LSTM with Parameterized Elliot Activation (\ac{pef})}\label{sec:background}
We introduce the novel parameterized Elliot activation function
(\ac{pef}), an adaptive variant of the usual Elliot activation, and modify the LSTM architecture by replacing the activation function of the LSTM gates with \ac{pef} as follows.
A single LSTM block is composed of four major components: an input gate, a forget gate, an output gate, and a cell state. We apply the parameterized Elliot function (PEF) as the activation.
\subsection{Parameterized Elliot Function \ac{pef}}
\ac{pef} is represented by
\begin{equation}\label{eq:pef}
f(x)= \frac{\alpha x}{1+|x|}
\end{equation}
with the first-order derivative of \ac{pef} given by $f'(x)=\frac{\alpha}{(|x|+1)^2}$. At the origin, the function equals 0 and its derivative equals the parameter $\alpha$. After introducing the PEF, the hidden state equation becomes $h_t=O_t \cdot PEF(C_t) = O_t\,\alpha_c\,\mathrm{Elliot}(C_t)$. By the chain rule, $$\frac{\partial J}{\partial \alpha_c}=\frac{\partial J}{\partial h_t}\,O_t\,\mathrm{Elliot}(C_t).$$ After each iteration, $\alpha_c$ is updated by gradient descent: $\alpha_c^{(n+1)}=\alpha_c^{(n)}-\delta\,\frac{\partial J}{\partial \alpha_c}$ (see Appendix \ref{appendix:backpropa} for the back-propagation of the \ac{lstm} with \ac{pef}).
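The following sketch implements the PEF, its derivatives, and a plain gradient-descent update of $\alpha$ on a toy squared-error objective; the data, target, learning rate, and initial value of $\alpha$ are arbitrary choices for illustration, whereas in the paper $\alpha$ is learned jointly with the LSTM weights via back-propagation.
\begin{verbatim}
# PEF, its derivatives, and a gradient step on alpha for a toy loss
# J = (mean(PEF(x)) - target)^2 / 2.
import numpy as np

def pef(x, alpha):
    return alpha * x / (1.0 + np.abs(x))

def dpef_dx(x, alpha):
    return alpha / (1.0 + np.abs(x)) ** 2

def dpef_dalpha(x):
    return x / (1.0 + np.abs(x))              # the unparameterised Elliot function

rng = np.random.default_rng(7)
x, target = rng.normal(loc=1.0, size=100), 0.3
alpha, lr = 0.1, 0.1
for _ in range(1000):
    err = pef(x, alpha).mean() - target       # dJ/d(mean) term
    grad_alpha = err * dpef_dalpha(x).mean()  # chain rule: dJ/dalpha
    alpha -= lr * grad_alpha
print(round(alpha, 3))                        # alpha has adapted to the toy data
\end{verbatim}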
Salient features of the PEF are:
\begin{enumerate}
\item The $\alpha$ in equation \ref{eq:pef} is learned during the back-propagation like other weight parameters of the LSTM model. Hence, this parameter, which controls the shape of the activation, is learned from data. Thus, if the dataset changes, so does the final form of the activation, which saves the ``parameter tuning'' effort.
\item The saturation of standard activation functions impedes training and prediction, and is an important barrier to overcome. While the PEF derivative also saturates as $|x|$ increases, its saturation rate is lower than that of other activation functions such as $\tanh$ and the sigmoid.
\item \ac{pef} further decreases the rate of saturation in comparison to the non-parameterized Elliot function.
\end{enumerate}
To the best of our knowledge, insights on ``learning'' the parameters of an activation function are not available in the literature, beyond the standard smoothness or saturation properties activation functions are expected to possess. It is, therefore, worthwhile to investigate the possibility of learning an activation function within a framework or architecture, using the inherent patterns and variances of the data.
\begin{comment}
Let us discuss on the major components and the relevant forward propagation equation.
\ac{pef} combined with the output $o_t$ to produce the $h_t$. The final output $h_t$ is $o_t*PEF(C_t)$ (See additional file for back-propagation of LSTM with \ac{pef}).
\end{comment}
\begin{figure*}[!htb]
\centering
\begin{subfigure}{.45\textwidth}
\includegraphics[width=\textwidth]{Figures/activationcomp_v1.png}
\caption{Derivatives comparisons of various activation functions. }
\label{fig:activationcomp}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\includegraphics[width=\textwidth]{Figures/pef_plot_v2.png}
\caption{LSTM values for 4 layers and 50 epochs using PEF as the activation function on AWS2.}
\label{fig:pefplot}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\includegraphics[width=\textwidth]{Figures/sigmoid_v2.jpg}
\caption{LSTM values for 4 layers and 50 epochs using Sigmoid as the activation function on AWS2.}
\label{fig:sigmoidplot}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\includegraphics[width=\textwidth]{Figures/tanh_v2.jpg}
\caption{LSTM values for 4 layers and 50 epochs using Tanh as the activation function on AWS2.}
\label{fig:tanhplot}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\includegraphics[width=\linewidth]{Figures/tablediffalpha.png}
\label{fig:sub-first}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\includegraphics[width=\textwidth]{Figures/plot_beta_v1.png}
\caption{The final $\alpha$ values learned on each dataset. The final $\alpha$ value differs across datasets.}
\label{fig:betaplot}
\end{subfigure}
\caption{Slow saturation rate and behavioral comparison of the different layers of the \ac{lstm} model after the introduction of \ac{pef}, compared with other activation functions. The final value of the learned parameter $\alpha$ on various datasets is also shown.
}
\end{figure*}
\setlength\belowcaptionskip{-1ex}
\subsection{PEF saturation} The derivative of the PEF can be written as $f'(x)=\frac{\alpha}{x^2}EF(x)^2 = \frac{\alpha}{(1+|x|)^2}$, where $EF(x)=\frac{x}{1+|x|}$ is the unparameterized Elliot function. While the derivatives of the sigmoid and tanh depend only on $x$, the PEF derivative depends on both $\alpha$ and $x$. Even if $\frac{EF^2(x)}{x^2}$ saturates, the learned parameter $\alpha$ helps the \ac{pef} escape saturation. The derivatives of the sigmoid and tanh saturate when $x>5$ or $x<-5$; this is not true for the PEF, as is evident from Fig.~\ref{fig:activationcomp}.
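The saturation comparison can be checked numerically; the snippet below evaluates the derivative magnitudes of the sigmoid, tanh, and PEF (with $\alpha=1.5$ chosen for illustration) at a few points away from the origin.
\begin{verbatim}
# Numerical check: derivative magnitudes of sigmoid, tanh and PEF away from 0.
import numpy as np

def d_sigmoid(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

def d_tanh(x):
    return 1.0 - np.tanh(x) ** 2

def d_pef(x, alpha=1.5):
    return alpha / (1.0 + np.abs(x)) ** 2

for x in [0.0, 2.0, 5.0, 10.0]:
    print(f"x={x:5.1f}  sigmoid'={d_sigmoid(x):.6f}  "
          f"tanh'={d_tanh(x):.6f}  PEF'={d_pef(x):.6f}")
\end{verbatim}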
As empirical evidence, the layer values at every epoch of the model are captured using various activation functions (\ac{pef}, sigmoid, and tanh). After about 10 epochs, the values of the layers become more or less constant for sigmoid and tanh (Fig.~\ref{fig:sigmoidplot} and Fig.~\ref{fig:tanhplot}), indicating that their values have saturated, whereas for PEF variation can be seen up to 50 epochs (Fig.~\ref{fig:pefplot}). This shows that, in comparison to sigmoid and tanh, PEF escapes saturation due to its learned parameter $\alpha$. \textit{The parameter $\alpha$ in \ac{pef}} changes its value as the model trains over the training dataset. Since it is a self-trained parameter, it takes different values for different datasets at the end of training. These values are documented in Table 2 and plotted in Fig.~\ref{fig:betaplot}.
Table 2 demonstrates the variation in $\alpha$ values across multiple datasets as these values get updated.
\begin{comment}
\cite{Glorot2010UnderstandingTD} hypothesized that in neural networks, the logistic layer output softmax(b+Wh) might initially rely more on the biases b and hence push the activation value h towards 0, thus resulting in error gradients of smaller values. They referred to this as the saturation property of neural networks. This results in slower training and prevents the gradients from propagating backward until the layers close to the input learns. This saturation property is observed in the sigmoid. The sigmoid is non-symmetric around zero and obtains smaller error gradients when the sigmoid outputs a value close to 0. Similarly, tanh in all layers tends to saturate towards 1, which leads to layer saturation. All the layers attain a particular value, which is detrimental to the propagation of gradients.
However, this issue of attaining saturation would be less pronounced in cases where two different activation functions are used. Since each activation function behaves differently in terms of gradients, i.e., sigmoid outputs are in the range [0,1], and the gradients are minimum at the maximum and minimum values of the function. The $\tanh$ on the other hand, has minimum gradients at -1 and 1 and reaches its maximum at 0. Therefore, even if the layers begin to saturate to a common value, some of the layers would escape the saturation regime of their activations and would still be able to learn essential features. As an outcome, this might result in fewer instances of vanishing gradients. This assumption would mean that networks with two different activations would learn faster and converge faster to a minima, and the same premise is supported by a Convergence study (details in section V). As demonstrated by Glorot and Bengio, if the saturation ratio of layers is less pronounced, it leads to better results in terms of accuracy. A standard neural network with N layers is given by $ h^{l} = \sigma(h^{l-1}W^{l}+b)$ and $s^{l} = h^{l-1}W^{l}+b$. Here $h^{l}$ is the output of the first hidden layer, $\sigma$ is a non-linear activation function, and b is the bias. We compute the gradients as $\frac{\partial Cost}{\partial s^{l}_{k}} = f'(s^{l}_{k}) W^{l}_{k,\cdot}\frac{\partial Cost}{\partial s^{l+1}}$; $ \frac{\partial Cost}{\partial W^{l}_{m,n}} = z^{i}_{l}\frac{\partial Cost}{\partial s^{l}_{k}}$. Now, we find the variances of these values. As the network propagates, we must ensure that the variances are equal to keep the information flowing. Essentially, when $\forall(l,l'), Var[h^l] = Var[h^{l^{'}}]$, it ensures that forward propagation does not saturate, and when $\forall(l,l'), Var[\frac{\partial Cost}{\partial s^{l}}] = Var[\frac{\partial Cost}{\partial s^{l^{'}}}]$, it ensures that backward propagation flows at a constant rate. Now, what remains is to calculate these variance values. Let us consider an elaborate example.
Firstly, we attempt to find variance for two sigmoid activations in a network. The derivative of each activation output is approximately 0.25($\sigma'(0) = 0.25$), as the weights are uniformly initialized, and the input features are assumed to have the same variance. Hence,
\begin{equation*}
f'(s^{l}_{k}) = 0.25
\end{equation*}
\begin{equation*}
Var[z^{2}] = Var[x]((0.25)^{2}n_{1}Var[W^{1'}]*(0.25)^{2}n_{2}Var[W^{2'}])
\end{equation*}
We see that this diminishing factor of $0.25^{N}$ steeply drops the variance during the forward pass. Similarly, we observe that the gradient,
\begin{equation*}
\frac{\partial Cost}{\partial s^{l}_{k}} = f'(s^{l}_{k}) W^{l}_{k,\cdot}\frac{\partial Cost}{\partial s^{l+1}}
\end{equation*}
has $f'(s^{l}_{k})$ as one of the factors, and thus the diminishing factor is tied to the variance. Even when $N=2$ the variance reduces by a factor of $4^{4} = 256$. \\
Let's compute variance for neural network with two hidden layers using sigmoid and tanh activations. For tanh, if the initial values are uniformly distributed around 0, the derivative is $f'(s^{l}_{k}) = 1$. Therefore, the variance for the second layer output is,
$Var[z^{2}] = Var[x]*((0.25)^{2}*n_{1}*Var[W^{1'}]*n_{2}*Var[W^{2'}])$.
We see that the diminishing factor is just $4^{2} = 16$, and this results in a much better variance when compared to the previous case. Therefore, using different AFs instead of the same implies a reduction in vanishing gradients and results in a much better flow of information because the variance value is preserved for longer.
\end{comment}
\begin{comment}
\begin{table}[!htpb]
\centering
\begin{tabular}{|c|c|c|}
\hline
Dataset & $\alpha$ after training &$\alpha$ initial value\\
\hline
AWS Dataset$_1$ & 1.612&0.1 \\
\hline
AWS Dataset$_2$ & 0.895&0.1 \\
\hline
AWS Dataset$_3$ & 1.554 &0.1\\
\hline
AWS DatasetSyn$_1$ & 1.537&0.1 \\
\hline
AWS DatasetSyn$_2$ & 0.680&0.1 \\
\hline
AWS DatasetSyn$_3$ & 1.516&0.1 \\
\hline
Yahoo Dataset$_1$ & 1.432&0.1 \\
\hline
Yahoo Dataset$_2$ & 1.470&0.1 \\
\hline
Yahoo Dataset$_3$ & 1.658&0.1 \\
\hline
Yahoo Dataset$_5$ & 1.686&0.1 \\
\hline
Yahoo Dataset$_6$ & 1.698&0.1 \\
\hline
Yahoo Dataset$_7$ & 1.725&0.1 \\
\hline
Yahoo Dataset$_8$ & 1.850&0.1 \\
\hline
Yahoo Dataset$_9$ & 1.640&0.1 \\
\hline
\end{tabular}
\caption{Different $\alpha$ values for each Dataset after the training.}
\label{table:betaval}
\end{table}
\begin{figure}[!tbp]
\centering
\includegraphics[width=0.5\textwidth]{Figures/plot_beta_v1.png}
\caption{The final $\alpha$ values learn on each dataset. We can see the final $\alpha$ value is different for different datasets.}
\label{fig:betaplot}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{.45\textwidth}
\includegraphics[width=\linewidth]{Figures/tablediffalpha.png}
\label{fig:sub-first}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\includegraphics[width=\textwidth]{Figures/plot_beta_v1.png}
\caption{The final $\alpha$ values learn on each dataset. We can see the final $\alpha$ value is different for different datasets.}
\label{fig:betaplot}
\end{subfigure}
\caption{}
\end{figure}
\end{comment}
\section{Experiment}\label{sec:experiment}
In this section, we evaluate the performance of the quantile-LSTM techniques on multiple datasets. We have identified multiple baseline methods, such as \ac{if}, Elliptic envelope, and Autoencoder, as well as several deep learning based approaches, for comparison purposes (see Section \ref{related} for more details on the baseline methods).
\footnote{\ac{lstm} code: https://github.com/PreyankM/Quantile-LSTM}
\subsection{Datasets}
The dataset properties are shown in Table \ref{table:datasetschar} of Appendix \ref{appendix:datasetchar}. A total of 29 datasets, including real industrial datasets and synthetic datasets, are considered in the experiments. The industrial datasets include Yahoo webscope \footnote{https://webscope.sandbox.yahoo.com/}, AWS cloudwatch \footnote{https://github.com/numenta/NAB/tree/master/data}, and GE. A couple of datasets contain only one or a few anomalies, such as AWS$_1$ and AWS$_2$. We have injected anomalies into the AWS, Yahoo, and GE datasets to produce synthetic data for fair comparison purposes. The datasets are univariate, unimodal or bimodal, and mostly follow Weibull, Gamma, and Log-normal distributions. The highest anomaly percentage is 1.47 (GE Dataset$_2$), whereas AWS Dataset$_2$ has the lowest percentage of anomalies, i.e., 0.08 (for more details see Table \ref{table:datasetschar} of Appendix \ref{appendix:datasetchar}).
\subsection{Results-Industrial Datasets}
Table \ref{table:quantilecomp} demonstrates the performance comparison of the various LSTM techniques using two performance metrics, Precision and Recall. Median-LSTM has achieved a Recall of 1 on most datasets (10 out of 15). In comparison to existing benchmarks, the LSTM methods are state of the art on most of the datasets in terms of Recall. For comparison purposes, we first compare Recall; if the Recall is the same for two different methods, we then compare Precision. The method with the higher Recall and Precision is considered the better performer. On the AWS datasets, most techniques achieve the highest Recall, apart from DAGMM and DevNet. DevNet needs a minimum of two anomalies, so it is not applicable to AWS1 and AWS2. In terms of Precision, however, iqr-LSTM performs better than the other methods. In the case of GE$_1$, DevNet produces a better result, whereas the quantile-based LSTM techniques outperform the others on GE$_2$. Median-LSTM demonstrates better results on Ambient temperature. On the Yahoo datasets, Median-LSTM achieves the highest Recall on four datasets; however, quantile-LSTM and iqr-LSTM produce better results on several datasets. For example, Median-LSTM and iqr-LSTM both achieve a Recall of 1 on Yahoo$_1$, but comparing Precision, iqr-LSTM shows the better result. It is evident from Table \ref{table:quantilecomp} that all these LSTM versions perform very well on these industrial datasets. We also compared our method with a recent anomaly detection method based on Graph Neural Networks (GNN)~\cite{deng2021graph}.
We observe that GNN does not show superior performance in comparison to the quantile-based techniques. For example, GNN's recall is lower than the recall of 1 produced by the quantile-based techniques on AWS2, AWS3, Yahoo1, Yahoo2, and Yahoo9. In terms of precision, GNN produces better results than quantile-LSTM on only two datasets, namely Yahoo1 and Yahoo9.
\begin{table*}[!htb]
\renewcommand\thetable{3}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Dataset&Anomaly& \multicolumn{2}{c|}{iqr-LSTM}&\multicolumn{2}{c|}{Median-LSTM}&\multicolumn{2}{c|}{quantile-LSTM}&\multicolumn{2}{c|}{Autoencoder}&\multicolumn{2}{c|}{GAN} &\multicolumn{2}{c|}{DAGMM} &\multicolumn{2}{c|}{DevNet} &\multicolumn{2}{c|}{\ac{if}} &\multicolumn{2}{c|}{Envelope} \\
\hline
& & Precision &Recall& Precision &Recall&Precision &Recall&Precision &Recall&Precision &Recall&Precision &Recall&Precision &Recall &Precision &Recall&Precision &Recall\\
\hline
AWS1&1& \B 0.5& \B 1&0.052& 1&0.041&1&0.045&1&0.047&1&0.125&1 &NA&NA&0.0087&1&0.009&1\\
\hline
AWS2&2 &0.13&1& \B 0.22& \B 1& 0.0042&1&0.1&0.5&0.18&1&0.11&1 &NA&NA&0.0062&1&0.04&1\\
\hline
AWS3&1& \B 1& \B 1& 0.37& 1&0.0181&1&0.0344&1&0.055&1&0&0 &NA&NA&0.005&1&0.006&1\\
\hline
Ambient temperature&1& 0.03&1& \B 0.0769& \B 1&0.02&1&0.055&1&0&0&0&0 &NA&NA&0.01&1&0.02&1\\
\hline
GE1&3& 0.019&1& 0.048& 1&0.0357&1& \B 0.093& \B 1&0.041&0.33&0&0 &0.12&1&0.004&1&0.2&1\\
\hline
GE2&8& \B 1& \B 1& 0.66& 1& \B 1& \B 1& \B 1& \B 1&0&0&0.8&1 &0.8&1&0.16&1&0.034&1\\
\hline
Yahoo1&2& \B 0.076& \B 1& 0.0363& 1&0.0465&1&1&0.5&0.066&1&0.07&0.5 &0&0&0.005&1&0.009&1\\
\hline
Yahoo2&8& 0.75&0.375& \B 0.8& \B 1&1&0.375&1&0.25&0.19&0.625&0.10&0.25 &0&0&0.04&0.875&0.055&1\\
\hline
Yahoo3&8& \B 0.615& \B 1& 0.114& 0.675&0.088&1&0.023&0.25&0.11&0.875&0.15&0.62 &0.39&0.5&0.04&0.875&0.032&0.875\\
\hline
Yahoo5&9& 0.048&0.33& 0.1& 0.33&0.022&0.66&0.05&0.33&0&0&0.23&0.33 &\B 0.67& \B 1&0.029&0.66&0.029&0.66\\
\hline
Yahoo6&4& 0.12&1& 0.222& 1&0.0275&1&0.048&1&0&0&0.041&1 &\B 1& \B 1&0.0073&1&0.0075&1\\
\hline
Yahoo7&11& 0.096&0.54& \B 0.16& \B 0.63&0.066&0.54&0.083&0.45&0.035&0.54&0.058&0.09 &0.33&0.29&0.0082&0.33&0.017&0.54\\
\hline
Yahoo8&10&0.053& 0.7& \B 0.142& \B 0.8& 0.028& 0.3&0&0&0&0&0&0 &0.063 & 0.11 &0.01&0.6&0.010&0.6\\
\hline
Yahoo9&8&1& 0.75& \B 0.333& \B 1& 0.0208& 0.75&1&0.37&0&0&0.5&0.375 &0.07&0.8&0.04&1&0.047&1\\
\hline
\end{tabular}
}
\caption{Performance comparison of various quantile LSTM techniques with other state of the art algorithms.}
\label{table:quantilecomp}
\end{table*}
\begin{table*}[!htb]
\renewcommand\thetable{4}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Dataset&Anomaly& \multicolumn{2}{c|}{iqr-LSTM}&\multicolumn{2}{c|}{Median-LSTM}&\multicolumn{2}{c|}{quantile-LSTM}&\multicolumn{2}{c|}{iForest} &\multicolumn{2}{c|}{Envelope} &\multicolumn{2}{c|}{Autoencoder} &\multicolumn{2}{c|}{GAN}&\multicolumn{2}{c|}{DAGMM} &\multicolumn{2}{c|}{DevNet}\\
\hline
& & Precision &Recall& Precision &Recall&Precision &Recall&Precision &Recall&Precision &Recall&Precision &Recall&Precision &Recall&Precision &Recall &Precision &Recall\\
\hline
AWS\_syn1&11&0.769& 0.909& 0.687& 1& 1& 0.909& 0.034& 1& 0.10& 1&1&0.63& \B 0.84& \B 1&0.71&0.90 &0.09&0.73\\
\hline
AWS\_syn2&22 &0.7& 1& \B 0.733& \B 1& 0.6875& 1& 0.065& 1& 0.33& 1&0.5&0.63&0.70&1&0.56&1 &0.44&0.27\\
\hline
AWS\_syn3&11& 1& 0.9 & 0.47& 1& \B 1& \B 1& 0.025& 1& 0.072& 1&0.64&0.5&0.68&1&0&0 &0.2&0.45\\
\hline
GE\_syn1&13& 0.0093& 1& 0.203& 1& 0.071& 0.769& 0.0208& 1& 0.135& 1&0.23&0.11&0.25&0.61&0&0 & \B 0.33& \B 1\\
\hline
GE\_syn2&18& 0.0446& 1& \B 1& \B 1& \B 1& \B 1& 0.3& 1& 0.409 &1&1&0.38&0.9&1&0.9&1 &0.9&1\\
\hline
Yahoo\_syn1&12& \B 1& \B 1& 0.217& 0.833& 0.375& 1& 0.027&1& 0.056& 1&1&0.83& 0.31&1 &0.29&0.41 &0&0\\
\hline
Yahoo\_syn2&18& 0.181& 0.55& 0.653& 0.944& 1& 0.611& \B 0.233& \B 1& 0.124& 1&1&0.42& 1&0.61& 0.55&0.61 &0&0\\
\hline
Yahoo\_syn3&18& 0.89& 0.94& 0.3333& 0.555& \B 0.6& \B 1& 0.0410& 1& 0.0762& 0.944&1&0.88& 0.81&0.71& 0.3&0.66 &0.17&0.63\\
\hline
Yahoo\_syn5&19& 0.081& 0.52& 0.521& 0.631& 0.0625& 0.578& 0.03125& 0.842& 0.0784 &0.842&0.15&0.47& 0.42&0.53& 0.52&0.52 & \B 0.73& \B 0.92\\
\hline
Yahoo\_syn6&14& 0.065& 0.85& 0.65& 0.928& 0.764& 0.928& \B 0.01825& \B 1& 0.00761& 0.285&0.05&0.28& 0.8&0.29 &0.041&0.28 &0&0\\
\hline
Yahoo\_syn7&21& 0.61& 0.59& 0.375& 0.714& 0.411& 0.66& \B 0.032& \B 0.952& 0.052& 0.85&0.18&0.42& 0.14&0.38& 0.058&0.047 &0.11&0.64\\
\hline
Yahoo\_syn8&20&0.32& 0.65& \B 0.482& \B 0.823& 0.197& 0.7& 0.0192&0.75& 0.023& 0.7&0.009&0.05& 0.25&0.1& 0&0 &0.23&0.64\\
\hline
Yahoo\_syn9&18&1& 0.77& \B 1& \B 1& 1& 0.94& 0.053& 1& 0.048& 1&0.875&0.388& 0.72& 1& 0.57&0.22 &0.03&0.29\\
\hline
\end{tabular}
}
\caption{Performance comparison of various quantile LSTM techniques on synthetic datasets with other state of the art algorithms.}
\label{table:quantilesyncomp}
\end{table*}
Table \ref{table:quantilesyncomp} shows the comparison with other baseline algorithms on multiple synthetic datasets. As in the previous table, Recall and Precision are the performance metrics. By these metrics, the quantile-based approaches outperform \ac{if} and the other deep learning based algorithms on 7 out of 13 datasets. Considering Precision alone, the quantile-LSTM based techniques demonstrate better performance on 10 synthetic datasets.
There are multiple reasons for the better performance of the quantile-based LSTM approaches. The first is the efficacy of the LSTM, which is well documented. Median-LSTM detects the anomalies for each time period utilizing the mean and standard deviation, which also helps capture trend and seasonality. quantile-LSTM does not rely on any predefined threshold, which improves its performance. Additionally, the flexibility of the parameter $\alpha$ in determining the shape of the activation helps isolate the anomalies. This is evident from Fig.~\ref{fig:betaplot}, which shows the variation in the $\alpha$ values of the \ac{pef} across the datasets. $\alpha$ was initialized to $1.5$ for all the datasets.
\subsection{Results-Non-Industrial Datasets}\label{appendix:nonindustrail}
We have tested our approach on the non-industrial datasets shown in Table \ref{table:nonindustrial}. Here, Deviation Network (DevNet) reports NA because it does not work for datasets containing a single anomaly.
On analysis of the results, we find that the quantile-based technique is better on three of the seven datasets, while Autoencoder is better on two of the seven.
\begin{table}[!htb]
\renewcommand\thetable{5}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Dataset&Anomaly& \multicolumn{2}{c|}{quantile-LSTM}&\multicolumn{2}{c|}{Autoencoder}&\multicolumn{2}{c|}{GAN} &\multicolumn{2}{c|}{DevNet} &\multicolumn{2}{c|}{\ac{if}} &\multicolumn{2}{c|}{Envelope} \\
\hline
& & Precision &Recall& Precision &Recall&Precision &Recall&Precision &Recall &Precision &Recall&Precision &Recall\\
\hline
TravelTime$_{387}$ &3 & \B 0.011& \B 0.67 & 1&0.33 &0.024&0.33 & 0.01&0.33 & 0.0039&0.6667 & 0.0107&0.6667\\
\hline
TravelTime$_{451}$ &1 &0.006&1 & 0&0 & \B 0.016& \B 1 & NA&NA & 0.0028&1 & 0.0062&1\\
\hline
Occupancy$_{6005}$ &1 & \B 0.03& \B 1 & 0&0 &0.007&1 & NA&NA & 0.0019&1 & 0.0042&1\\
\hline
Occupancy$_{t4013}$ &2 & \B 0.06& \B 1 & 0.438&0.5 &0.014&0.5 & 0.02&1 & 0.0038&1 & 0.0078&1\\
\hline
Speed$_{6005}$ &1 &0.014&1 & \B 0.103& \B 1 &0.009&1 & NA&NA & 0.002&1 & 0.0038&1\\
\hline
Speed$_{7578}$ &4 &0.086&1 & \B 0.792& \B 1 &0.2&0.9 & 0.16&0.75 & 0.0153&1 & 0.0247&1\\
\hline
Speed$_{t4013}$ &2 &0.053&1 & 0.75&0.5 &0.043&1 & \B 0.1& \B 1 & 0.0036&1 & 0.007&1\\
\hline
\end{tabular}
}
\caption{Performance comparison of quantile LSTM techniques on various non-industrial datasets}
\label{table:nonindustrial}
\end{table}
\begin{comment}
\begin{table}[!htp]
\centering
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Dataset&\multicolumn{2}{c|}{\ac{uvmgbtai}(K-means)}&\multicolumn{2}{c|}{\ac{mgbtai}(K-means)}& \multicolumn{2}{c|}{quantile-LSTM}\\
\hline
& Precision& Recall&Precision& Recall &Precision& Recall\\
\hline
AWS Dataset$_1$ & 100\% & 100\%&100\% &100\%&5.2\%&100\%\\
\hline
AWS Dataset$_2$& 18\%&100\%&100\%&50\%&2.2\%&100\%\\
\hline
Yahoo Dataset$_1$&15\%&100\%&50\%&50\%&3.6\%&100\%\\
\hline
Yahoo Dataset$_2$&66\%& 22\%& 100\%&12\%&61\%&100\%\\
\hline
Yahoo Dataset$_3$&87\%&87\%&100\%&12\%&11\%&87\%\\
\hline
Yahoo Dataset$_5$&33\%&33\%&100\%&33\%&10\%&33\%\\
\hline
Yahoo Dataset$_6$&100\%&100\%&100\%&100\%&22\%&100\%\\
\hline
Yahoo Dataset$_7$&77\%&63\%&50\%&27\%&16\%&63\%\\
\hline
Yahoo Dataset$_8$&60\%&60\%&100\%&10\%&14\%&80\%\\
\hline
Yahoo Dataset$_9$&100\%&100\%&100\%&62\%&33\%&100\%\\
\hline
\end{tabular}
}
\caption{Performance comparison of \ac{uvmgbtai} with other standard anomaly identifier algorithms}
\label{table:treebasedcomp}
\end{table}
\end{comment}
\subsection{Comparison between Elliot Function and \ac{pef}}\label{appendix:pefvsef}
In order to compare the performance of the Elliot function and the parameterized Elliot function (PEF) as activation functions, we used each of them as the activation function in the LSTM layer of the models and compared the results on multiple datasets. The results are shown in Table \ref{table:pefvsef}.
\begin{table}[!htpb]
\centering
\renewcommand\thetable{6}
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
Dataset & \multicolumn{2}{c|}{Elliot Function}&\multicolumn{2}{c|}{Parameterized Elliot Function}\\
\hline
& Precision & Recall & Precision & Recall\\
\hline
AWS Dataset$_1$ & 0&0 & \B 0.041& \B1\\
\hline
AWS Dataset$_2$ & 0.002&1 & \B 0.0042& \B 1\\
\hline
AWS Dataset$_3$ & \B 0.04 & \B 1 & 0.0181&1\\
\hline
AWS DatasetSyn$_1$ & 0.02& 0.73 & \B 1& \B 0.909\\
\hline
AWS DatasetSyn$_2$ & 0.39&0.77 & \B 0.6875&\B 1\\
\hline
AWS DatasetSyn$_3$ & 0.06&0.73 & \B 1& \B 1\\
\hline
Yahoo Dataset$_1$ & 0.006&0.25 & \B 0.0465& \B 1\\
\hline
Yahoo Dataset$_2$ & \B 0.02& \B 1 & 1&0.375\\
\hline
Yahoo Dataset$_3$ & 0.05&1 & \B 0.088& \B 1\\
\hline
Yahoo Dataset$_5$ & 0.001&0.33 & \B 0.022& \B 0.66\\
\hline
Yahoo Dataset$_6$ & 0.002&0.17 & \B 0.0275&\B 1\\
\hline
Yahoo Dataset$_7$ & 0.03&0.09 & \B 0.066& \B 0.54\\
\hline
Yahoo Dataset$_8$ & \B 0.017& \B 0.4 & 0.028&0.3\\
\hline
Yahoo Dataset$_9$ & \B 0.43& \B 0.75 & 0.0208&0.75\\
\hline
Yahoo DatasetSyn$_1$ & 0.14&0.86 & \B 0.375&\B 1\\
\hline
Yahoo DatasetSyn$_2$ & \B 0.04& \B 0.72 & 1&0.611\\
\hline
Yahoo DatasetSyn$_3$ & 0.1&0.78 & \B 0.6& \B 1\\
\hline
Yahoo DatasetSyn$_5$ & 0.004&0.31 & \B 0.0625& \B 0.578\\
\hline
Yahoo DatasetSyn$_6$ & 0.015&0.69 & \B 0.764& \B 0.928\\
\hline
Yahoo DatasetSyn$_7$ & 0.35&0.43 & \B 0.411& \B 0.66\\
\hline
Yahoo DatasetSyn$_8$ & 0.024&0.5 & \B 0.197& \B 0.7\\
\hline
Yahoo DatasetSyn$_9$ & 0.27&0.67 & \B 1& \B0.94\\
\hline
\end{tabular}
}
\caption{Comparison of Precision and Recall score for LSTM with Elliot Function and PEF as Activation Function}
\label{table:pefvsef}
\end{table}
According to the data gathered after running the models, we find that the parameterized Elliot function has better Precision and Recall on all except four of the datasets. Thus, we conclude that using the parameterized Elliot function as the activation function gives better performance for quantile-LSTM.
\subsection{Impact of Varying Thresholds}\label{sub:thresholds}
Deep-learning based algorithms such as Autoencoder~\cite{Sakurada:2019}, GAN~\cite{Zenati2018},
DAGMM~\cite{Zong2018} and DevNet~\cite{pang2019deep} consider upper and lower thresholds on reconstruction errors or predicted values. To understand the impact of different thresholds on performance, we consider three baseline algorithms: GAN, Autoencoder, and DevNet. The baseline methods are evaluated with three different pairs of upper and lower thresholds, shown in the column heads of Tables \ref{table:gan_percentile}, \ref{table:autoenc_percentile}, and \ref{table:devnet_percentile}, where the first threshold is the upper percentile and the second is the lower percentile. In contrast, q-LSTM is robust against thresholds as datasets vary, i.e., it captures all anomalies successfully within the $0.1$ and $0.9$ quantile thresholds.
\begin{table}[!htpb]
\centering
\renewcommand\thetable{7}
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
GAN & \multicolumn{2}{c|}{99.25 and 0.75}&\multicolumn{2}{c|}{99.75 and 0.25}&\multicolumn{2}{c|}{99.9 and 0.1}\\
\hline
Dataset & Precision & Recall & Precision & Recall &Precision & Recall\\
\hline
Yahoo Dataset$_1$ & 0.09&1 & 0.25&1 &0.5&1\\
\hline
Yahoo Dataset$_2$ & 0.348&1 & 0.333&0.375 &0.4&0.25\\
\hline
Yahoo Dataset$_3$ & 0.28&0.5 & 0.444&0.286 &0.28&0.5\\
\hline
Yahoo Dataset$_5$ & 0&0 & 0.375&0.333 &0.6&0.333\\
\hline
Yahoo Dataset$_6$ & 0.5&0.5 & 0.5&1 &0.182&1\\
\hline
Yahoo Dataset$_7$ & 0.154&0.364 & 0.3&0.273 &0.5&0.182\\
\hline
Yahoo Dataset$_8$ & 0.038&0.1 & 0.1&0.1 &0.25&0.1\\
\hline
Yahoo Dataset$_9$ & 0.192&0.625 & 0.5&0.625 &0.5&0.25\\
\hline
\end{tabular}
}
\caption{Comparison of Precision and Recall score for GAN with varying thresholds for anomaly Upper Bound and Lower Bound}
\label{table:gan_percentile}
\end{table}
\begin{table}[!htpb]
\centering
\renewcommand\thetable{8}
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Autoencoders & \multicolumn{2}{c|}{99.25 and 0.75}&\multicolumn{2}{c|}{99.75 and 0.25}&\multicolumn{2}{c|}{99.9 and 0.1}\\
\hline
Dataset & Precision & Recall & Precision & Recall &Precision & Recall\\
\hline
Yahoo Dataset$_1$ & 0.5&0.07 & 0.5&0.036 &0.5&0.019\\
\hline
Yahoo Dataset$_2$ & 0.5&0.4 & 0.333&0.5 &0.2&0.5\\
\hline
Yahoo Dataset$_3$ & 0.44&0.5 & 0.4&0.5 &0.25&0.333\\
\hline
Yahoo Dataset$_5$ & 0.5&0.5 & 0.5&0.5 &0.5&0.5\\
\hline
Yahoo Dataset$_6$ & 0.5&1 & 1&1 &0.25&1\\
\hline
Yahoo Dataset$_7$ & 0.5&0.5 & 0.5&0.5 &0.5&0.5\\
\hline
Yahoo Dataset$_8$ & 0.875&0.875 & 0.375&0.375 &0.5&0.75\\
\hline
Yahoo Dataset$_9$ & 0.75&0.5 & 0.25&0.5 &0.5&0.5\\
\hline
\end{tabular}
}
\caption{Comparison of Precision and Recall score for Autoencoders with varying thresholds for anomaly Upper Bound and Lower Bound}
\label{table:autoenc_percentile}
\end{table}
\begin{table}[!htpb]
\centering
\renewcommand\thetable{9}
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Devnet & \multicolumn{2}{c|}{99.25 and 0.75}&\multicolumn{2}{c|}{99.75 and 0.25}&\multicolumn{2}{c|}{99.9 and 0.1}\\
\hline
Dataset & Precision & Recall & Precision & Recall &Precision & Recall\\
\hline
Yahoo Dataset$_1$ & 0.002&1 & 0.002&1 &0.001&1\\
\hline
Yahoo Dataset$_2$ & 0.005&1 & 0.005&1 &0.005&1\\
\hline
Yahoo Dataset$_3$ & 0.0078&1 & 0.0078&1 &0.0078&1\\
\hline
Yahoo Dataset$_5$ & 0.111&0.5 & 0.333&0.5 &0.333&0.5\\
\hline
Yahoo Dataset$_6$ & 0.167&1 & 0.5&1 &0.5&0.667\\
\hline
Yahoo Dataset$_7$ & 0.054&0.2 & 0.125&0.2 &0.25&0.2\\
\hline
Yahoo Dataset$_8$ & 0&0 & 0&0 &0&0\\
\hline
Yahoo Dataset$_9$ & 0&0 & 0&0 &0&0\\
\hline
\end{tabular}
}
\caption{Comparison of Precision and Recall score for Devnet with varying thresholds for anomaly Upper Bound and Lower Bound}
\label{table:devnet_percentile}
\end{table}
It is evident from the above tables that performance varies significantly with the thresholds chosen for the algorithm. Therefore, it is very important to choose a threshold that can identify all the probable anomalies in the dataset.
\subsection{Experiments on Normal Instances}\label{sub:Normal}
A relevant question to ask is: how would the anomaly detection methods perform on normal data instances that do not contain any anomalies?
We investigate this by removing anomalies from some data sets. We observe that on these data sets (AWS1, AWS2, AWS3, Yahoo1, Yahoo2, Yahoo3), q-LSTM and its variants report very few false alarms (40 on average), while other state-of-the-art methods, such as \ac{if} and Elliptic Envelope, produce considerably more false positives. Elliptic Envelope reports 137 false alarms on average, whereas \ac{if} reports an average of 209 false alarms across the datasets. Autoencoder and GAN report average false alarm counts of 46 and 123 respectively, both higher than that of q-LSTM. This establishes the robustness of the proposed method.
\section{Related Work}\label{related}
Well-known supervised machine learning approaches such as Linear Support Vector Machines (SVM), Random Forest (RF), and Random Survival Forest (RSF)~\cite{Voronov, Verma} have been explored for fault diagnosis and the lifetime prediction of industrial systems. \cite{Anton} have explored SVM and RF to detect intrusions based on anomalies in industrial data.
Popular unsupervised approaches, such as Anomaly Detection Forest \cite{Sternby2020AnomalyDF} and K-means based Isolation Forest \cite{Karczmarek}, try to isolate the anomalies from the normal data. These methods do not require labeled data. \cite{Karczmarek} considered K-means based anomaly isolation, but the approach is tightly coupled with a clustering algorithm. Anomaly Detection Forest, like K-means based \ac{if}, requires a training phase with a subsample of the dataset under consideration. A wrong selection of the training subsample can cause too many false alarms.
The notion of ``likely invariants'' uses operational data to identify a set of invariants that characterize the normal behavior of a system, which is similar to our strategy. Such an approach has been attempted to discover anomalies in cloud-based systems~\cite{Russo}. However, it requires labeled data and retuning of parameters when the nature of the datasets varies.
Recently, \ac{dl} models based on auto-encoders and long short-term memory \cite{Erfani,RobustAutoencoder} have been gaining attention for anomaly detection. \cite{Yin} have proposed an integrated model of a \ac{cnn} and an \ac{lstm}-based auto-encoder for Yahoo Webscope time-series anomaly detection. For reasons unknown,~\cite{Yin} have used only one Yahoo Webscope dataset to demonstrate their approach's efficacy. The DeepAnT \cite{Munir} approach employs \ac{dl} methods and uses unlabeled data for training; however, it is meant for time-series data sets such as Yahoo Webscope, Real traffic, and AWS cloudwatch. A stacked \ac{lstm} \cite{Malhotra} is used for time-series anomaly prediction, where the network is trained on a normal dataset. The Hierarchical Temporal Memory (HTM) method has recently been applied to sequential streamed data and compared with other time-series forecasting models \cite{Osegi:2021}. The authors in \cite{Saurav:2018} have performed online time-series anomaly detection using a deep RNN; the incremental retraining of the neural network allows adaptation to concept drift across multiple datasets. There are various works~\cite{Forero:2019,Sperl2020} that attempt to address the data imbalance issue of anomaly datasets, since anomalies are very rare; hence they propose semi-supervised approaches. However, the semi-supervised approach cannot avoid the expensive dataset labeling. Some approaches~\cite{zong2018deep} apply predefined thresholds, such as fixed percentile values, to detect the anomalies. However, a fixed threshold value may not be equally effective on datasets from different domains. Deep Autoencoding Gaussian Mixture Model (DAGMM) is an unsupervised DL-based anomaly detection algorithm~\cite{zong2018deep} that utilizes a deep autoencoder to generate a low-dimensional representation and a reconstruction error for each input data point, which are then fed into a Gaussian Mixture Model (GMM).
Deviation Network (DevNet)~\cite{pang2019deep} is a novel method that harnesses an anomaly scoring network, a Z-score based deviation loss, and a Gaussian prior together to increase efficiency in anomaly detection.
\section{Discussion and Conclusion }\label{sec:conclusion}
In this paper, we have proposed multiple versions of SoTA anomaly detection algorithms along with a forecasting-based LSTM method. We have demonstrated that combining the quantile technique with LSTM can successfully detect anomalies in industrial and non-industrial datasets without labels being available for training. We have also exploited the parameterized Elliot activation function and shown the anomaly distribution against quantile values, which helps in deciding the quantile anomaly threshold. The design of a flexible-form activation, i.e., \ac{pef}, also helps in accommodating variance in the unseen data, as the shape of the activation is learned from data. PEF, as seen in Table \ref{table:pefvsef}, captures anomalies better than the vanilla Elliot function. The quantile thresholds are generic and do not differ across datasets. The proposed techniques address the data imbalance issue and the expensive labeling of training datasets in anomaly detection. These methods are useful where data is abundant. Traditional deep learning-based methods use classical conditional means and assume random normal distributions as the underlying structure of the data. These assumptions make it difficult for such methods to capture the uncertainty in prediction and render them incapable of modeling tail behaviors. Quantiles in LSTM (for time series data) are a robust alternative that we leveraged to isolate anomalies successfully. This is fortified by the fact that quantiles make very few distributional assumptions.
The distribution-agnostic behavior of quantiles turned out to be a useful tool in modeling tail behavior and detecting anomalies. Anomalous instances, by definition, are rare and could be as rare as a single anomaly in the entire data set. Our method detects such instances (singleton anomalies), while some recent state-of-the-art algorithms, such as DAGMM, require at least two anomalies to be effective. Extensive experiments on multiple industrial time-series datasets (Yahoo, AWS, GE, machine sensors, Numenta and VLDB Benchmark data) and non-time-series data
show evidence of the effectiveness and superior performance of LSTM-based quantile
techniques in identifying anomalies. The proposed methods have a few drawbacks: \begin{enumerate*}
\item quantile-based \ac{lstm} techniques are applicable only to univariate datasets;
\item a few of the methods, such as quantile-LSTM and iqr-LSTM, depend on multiple thresholds.
\end{enumerate*}
We intend to introduce the notion of multiple dimensions in our quantile-based approaches to detect anomalies in multivariate time series data in the future.
|
{
"arxiv_id": "2302.08717",
"language": "en",
"timestamp": "2023-02-20T02:07:47",
"url": "https://arxiv.org/abs/2302.08717",
"yymm": "2302"
} | \section{Introduction}
With the development of quantum metrology, its theoretical and experimental progress has recently attracted increasing attention from scientists. The optical interferometer, a common tool for high-precision measurement, possesses a wide array of potential applications, including gravitational wave detection \cite{barsotti2018,mehmet2018}, optical lithography \cite{dowling2000}, and optical gyroscopes \cite{lefere1997fundamentals,fink2019entanglement,wu2020}. As depicted in Fig. \ref{fig1}(a), a typical Mach-Zehnder interferometer (MZI) consisting of two beam splitters works as follows: (i) an incident coherent-state beam is split into two modes by the first beam splitter; (ii) one mode experiences a relative phase shift $\phi$ while the other serves as a reference; (iii) these two modes are then recombined at the second beam splitter. Finally, one extracts the expected information about the parameter $\phi$ by monitoring the output modes. Mathematically, the measurement quantity is regarded as an observable $A$. Accordingly, the phase sensitivity can be inferred from the variance of the observable $A$ via the linear error propagation formula $\Delta \phi = \langle \Delta A \rangle / |\partial \langle A \rangle/ \partial \phi|$.
As a matter of fact, the phase sensitivity in the traditional MZI is restricted by the shot-noise limit (SNL) \cite{caves1981}, $1/\sqrt{n}$, due to the shot noise (the quantum nature of light) where $n$ is the total incident mean photon number. To improve the phase sensitivity (even beyond the SNL), there usually exist two pathways: (i) decreasing $\langle \Delta A \rangle$ (i.e. reducing the quantum noise), which could be realized by injecting a beam with a sub-shot noise, for instance, squeezed states \cite{walls1983,walls2007,barnett2002,aasi2013}; (ii) increasing $|\partial \langle A \rangle/ \partial \phi|$ (i.e., enhancing the effective detection signal), which could be achieved by various methods, such as utilizing nonlinear beam splitter \cite{yurke1986,zhang2018super,zhang2021phase,ma2020enhanced,you2019conclusive,ma2018sub,gong2017intramode,gong2017phase,jiao2020,hu2018phase,jing2021} or using nonlinear phase shift \cite{woolley2008nonlinear,joo2012quantum,berrada2013quantum,cheng2014quantum,jiao2020nonlinear}.
In experiments, one can choose various measurement strategies, such as intensity detection \cite{marino2012effect,fritschel1992,regehr1995,mckenzie2002}, parity detection \cite{gerry2000heisenberg,seshadreesan2011parity,li2016phase,li2018effects}, and homodyne detection \cite{chen2016effects,kong2013experimental,hudelist2014quantum,li2014phase}. In particular, we mainly focus on homodyne detection (a measurement technique used to monitor the quadratures of light). It is worth noting that homodyne detection is usually performed only on one output mode, while the other one is ignored in the traditional scheme \cite{hudelist2014quantum,li2014phase,pra2019}, as shown in Fig. \ref{fig1}(a). Nevertheless, the discarded mode $b$ also contains information about the parameter $\phi$. Naturally, we wonder whether it is possible to enhance the phase sensitivity by reusing the ignored beam. Following this route, we propose an alternative scheme where the output mode $b$ is re-injected into the input port $b$ (so-called photon recycling), as shown in Fig. \ref{fig1}(b).
In fact, the technique of photon recycling has been theoretically proposed and experimentally realized in the Michelson interferometer (MI) \cite{a0prd1988,a0cqg2004,a0prd2003,a0ao1992,a0lrr2011,a0oe2021}. It has been shown that photon recycling is an efficient technique for further signal increase, capitalizing both on an increase in the circulating mean photon number and an increase in the phase shift \cite{a0ao1992}. Nevertheless, photon recycling is rarely applied in an MZI. From this point of view, it is desirable to explore the performance of a photon-recycled Mach-Zehnder interferometer.
In this work, we investigate the performance of this photon-recycled scheme from three related but distinct aspects: (i) the phase sensitivity via homodyne detection; (ii) the quantum Cram\'er-Rao bound (QCRB); (iii) the total mean photon number inside the interferometer (the photons experiencing and sensing the phase shift). First of all, it is demonstrated that this modified interferometer can realize an enhanced phase sensitivity compared with the traditional MZI. Second, we illustrate that this modified scheme can achieve a QCRB beyond the traditional one. In addition, unlike the traditional scheme where the QCRB is independent of the phase shift $\phi$, the QCRB of our scheme depends on $\phi$. Finally, we discuss the underlying mechanisms behind the enhanced phase sensitivity by analyzing the total mean photon number inside the interferometer.
This manuscript is organized as follows. In Sec. II, we briefly introduce the theoretical model of propagation of photons through this modified optical circuit. In Sec. III, we analyze the phase sensitivity via homodyne detection and QCRB in our scheme, and compare the performance between our scheme and the traditional one. In Sec. IV, we present the underlying mechanisms behind the enhanced phase sensitivity. Final remarks are given in Sec. V.
\section{Theoretical model}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.5\textwidth]{01_fig.eps}
\end{center}
\caption{Schemes for phase estimation: (a) the traditional MZI with the output beam $b$ being discarded, (b) the modified scheme with the output mode $b$ being re-injected into the input mode $b$. We have assumed that the output beam $b$ experiences a phase shift $\theta_0$ and photon loss $L$ before the re-injection. HD: homodyne detection, LO: local oscillator.}
\label{fig1}
\end{figure}
In contrast to the traditional scheme where the output beam $b$ is discarded (see Fig. \ref{fig1}(a)), we propose an alternative one where the output mode $b$ is reused via photon recycling (the output beam $b$ is re-injected into the input port $b$ as illustrated in Fig. \ref{fig1}(b)). In practical experiments, photon loss is inevitable. Therefore, we consider the effect of photon loss induced by the light propagation in the recycling arm. In addition, we assume that the output beam $b$ experiences a phase shift $\theta_0$ before it is re-injected into the input port $b$.
In our scheme as depicted in Fig. \ref{fig1}(b), the input-output relation is found to be
\begin{align}
\begin{pmatrix}
\hat{a}^{\rm{out}} \\
\hat{b}^{\rm{out}}
\end{pmatrix} &= S_{\rm{MZI}}
\begin{pmatrix}
\hat{a}^{\rm{in}} \\
\hat{b}^{\rm{in}}
\end{pmatrix},
\label{eq001}
\end{align}
where $\hat{a}^{\rm{in}}$ ($\hat{a}^{\rm{out}}$) and $\hat{b}^{\rm{in}}$ ($\hat{b}^{\rm{out}}$) are the annihilation operators of input (output) modes. $S_{\rm{MZI}} = S_{\rm{BS}_2}S_{\phi}S_{\rm{BS}_1}$ represents the scattering matrix of a conventional MZI with
\begin{align}
S_{\rm{BS}_1} = S_{\rm{BS}_2} = \frac{1}{\sqrt{2}}\begin{pmatrix}
1 & i \\
i & 1 \\
\end{pmatrix},
\end{align} and
\begin{align}
S_{\phi} = \begin{pmatrix}
e^{-i\phi} & 0 \\
0 & 1 \\
\end{pmatrix}.
\end{align}
Generally, the photon loss could be modeled by adding a fictitious beam splitter as shown in Fig. \ref{fig1}(b). Assume the photon-recycling arm with a loss rate $L$. After passing through the fictitious beam splitter, the mode transform of field $\hat{b}^{\rm{out}}$ is given by $\hat{b}^{\rm{out}\prime} = \sqrt{1-L} \hat{b}^{\rm{out}} + \sqrt{L} \hat{v}_{b}$ ($\hat{v}_{b}$ corresponds to the vacuum state). In the presence of phase shift $\theta_0$ and photon loss $L$, the re-injected mode $b$ could be expressed as
\begin{align}
\hat{b}^{\rm{in}} = \sqrt{1-L} \hat{b}^{\rm{out}} e^{-i \theta_0} + \sqrt{L} \hat{v}_{b}.
\label{eqn1a}
\end{align}
Based on Eqs. (\ref{eq001}) and (\ref{eqn1a}), the output annihilation operators in Fig. \ref{fig1}(b) arrive at
\begin{align}
\hat{a}^{\rm{out}} = &(s_{11} + \frac{s_{12} s_{21} \sqrt{1-L} }{e^{i\theta_0} - s_{22}\sqrt{1-L} }) \hat{a}^{\rm{in}} \nonumber\\
& + (1+\frac{s_{22}\sqrt{1-L} }{e^{i\theta_0} - s_{22} \sqrt{1-L} })s_{12}\sqrt{L}\hat{v}_b,
\label{aouteqn}\\
\hat{b}^{\rm{out}} = & \frac{s_{21}}{1 - s_{22} \sqrt{1-L} e^{-i\theta_0} } \hat{a}^{\rm{in}} +\frac{s_{22} \sqrt{L}}{1 - s_{22} \sqrt{1-L} e^{-i\theta_0} } \hat{v}_b,
\label{bouteqn}
\end{align}
where we have set
\begin{align}
S_{\rm{MZI}} = \begin{pmatrix}
s_{11} & s_{12} \\
s_{21} & s_{22} \\
\end{pmatrix},
\label{eq3}
\end{align}
with
\begin{align}
s_{11}&=\frac{1}{2}(e^{-i \phi} -1), \quad
s_{12}=\frac{i}{2}(e^{-i \phi} +1), \nonumber\\
s_{21}&=\frac{i}{2}(e^{-i \phi} +1), \quad
s_{22}=\frac{1}{2}(1-e^{-i \phi}).
\end{align}
Eq. (\ref{aouteqn}) is the output in our scheme in Fig. \ref{fig1}(b). To verify this result, we analyze our scheme via an alternative method and obtain the same expression. For the sake of clarity, the detailed analysis is shown in Appendix \ref{app000}. Since we monitor the final output mode $a$ in our scheme, Eq. (\ref{aouteqn}) would be used to estimate the phase shift $\phi$.
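As an illustrative numerical cross-check of Eq. (\ref{aouteqn}) (a sketch of our own, not part of the derivation; all parameter values below are arbitrary examples), one may solve the recycling condition directly for the coherent amplitudes, dropping the vacuum term since it does not contribute to the mean field, and compare the resulting coefficient of $\hat{a}^{\rm{in}}$ with the closed form:
\begin{verbatim}
import numpy as np

def s_matrix(phi):
    """Scattering matrix S_MZI = S_BS2 S_phi S_BS1 of the bare MZI."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
    sp = np.array([[np.exp(-1j * phi), 0], [0, 1]])
    return bs @ sp @ bs

def coeff_recycled(phi, theta0, L):
    """Coefficient of a_in in a_out from the self-consistent mean-field relation
    b_in = sqrt(1-L) * exp(-i*theta0) * b_out."""
    s = s_matrix(phi)
    r = np.sqrt(1 - L) * np.exp(-1j * theta0)
    b_out = s[1, 0] / (1 - s[1, 1] * r)        # b_out = s21 + s22 * r * b_out (a_in = 1)
    return s[0, 0] + s[0, 1] * r * b_out

def coeff_closed_form(phi, theta0, L):
    """Coefficient of a_in in Eq. (aouteqn)."""
    s = s_matrix(phi)
    t = np.sqrt(1 - L)
    return s[0, 0] + s[0, 1] * s[1, 0] * t / (np.exp(1j * theta0) - s[1, 1] * t)

phi, theta0, L = 2.57, 0.35, 0.10
print(np.isclose(coeff_recycled(phi, theta0, L), coeff_closed_form(phi, theta0, L)))  # True
\end{verbatim}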
\section{phase sensitivity}
\subsection{Homodyne detection}
To extract the information about the parameter $\phi$, we measure the quadrature of the output mode $a$ via homodyne detection, as shown in Fig. \ref{fig1}(b). Mathematically, the output quadrature is defined as
\begin{eqnarray}
\label{eqn_xa}
\hat{x}_{a} = \hat{a}^{\rm{out}\dagger} + \hat{a}^{\rm{out}},
\label{xaeqn}
\end{eqnarray}
where $\hat{a}^{\rm{out} \dagger}$ is the Hermitian conjugate of $\hat{a}^{\rm{out}}$. According to the linear error propagation formula, the phase sensitivity is given by
\begin{eqnarray}
\Delta \phi = \frac{\langle \Delta \hat{O} \rangle}{\left|\frac{\partial \langle\hat{O} \rangle}{\partial \phi}\right|},
\label{delta2}
\end{eqnarray}
where $\hat{O}$ is the operator of observable quantity and $\langle \Delta \hat{O} \rangle = \sqrt{\langle (\hat{O})^2 \rangle -\langle \hat{O} \rangle^2}$ is the corresponding variance. In homodyne detection, the measurement quantity is quadrature $\hat{O} = \hat{x}_a$. Accordingly, the phase sensitivity can be obtained by
\begin{align}
\Delta \phi = \frac{\langle \Delta \hat{x}_a \rangle}{\left|\frac{\partial \langle\hat{x}_a \rangle}{\partial \phi}\right|},
\label{delta3}
\end{align}
where it requires to calculate $\langle \hat{x}_a \rangle$ and $\langle \Delta \hat{x}_a \rangle$.
Consider a coherent state $|\alpha\rangle$ as input as shown in Fig. \ref{fig1}(b) (the complex number $\alpha = |\alpha|e^{i\theta_{\alpha}}$ denotes the amplitude). Without loss of generality, we set the irrelevant phase $\theta_{\alpha} = 0$. In this situation, by combining Eqs. (\ref{aouteqn}) and (\ref{xaeqn}), it is easily found that
\begin{align}
\langle \hat{x}_a \rangle = \Upsilon \alpha + \Upsilon^{\ast} \alpha^{\ast}, \quad
\langle \Delta \hat{x}_a \rangle = 1,
\label{eqdelta}
\end{align}
where
\begin{align}
\Upsilon=&\frac{e^{i\theta_0}(1- e^{i \phi}) - 2 \sqrt{1-L}}{2 e^{i (\phi+\theta_0)} + \sqrt{1-L} - e^{i \phi} \sqrt{1-L}},
\end{align}
$\alpha^{\ast}$ ($\Upsilon^{\ast}$) is the complex conjugate of $\alpha$ ($\Upsilon$), and the corresponding derivation is presented in Appendix \ref{app001}. If we suppose that $L=1$ (namely, the photon-recycling arm is {\emph{blocked}}), then $\Upsilon=(e^{-i\phi}-1)/2$ is independent of $\theta_0$ (the phase shift on the photon-recycling arm), as expected.
By inserting Eq. (\ref{eqdelta}) into Eq. (\ref{delta3}), the phase sensitivity yields
\begin{align}
\Delta \phi^{\rm{PR}} = \frac{1}{\Lambda_1} \Delta \phi^{\rm{Con}}_{\rm{SNL}},
\label{deltam}
\end{align}
where the superscript PR (Con) stands for the photon-recycling (conventional) scheme, $\Delta \phi^{\rm{Con}}_{\rm{SNL}} = 1/|\alpha|$ is the so-called SNL in the conventional scheme, and the explicit expression of $\Lambda_1$ is shown in Appendix \ref{appb}.
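Independently of the closed-form $\Lambda_1$ given in Appendix \ref{appb}, the enhancement factor can be evaluated numerically: for a real $\alpha$, Eq. (\ref{eqdelta}) gives $\langle \hat{x}_a \rangle = 2\alpha\,\mathrm{Re}(\Upsilon)$ and $\langle \Delta \hat{x}_a \rangle = 1$, so $\Lambda_1 = \Delta\phi^{\rm{Con}}_{\rm{SNL}}/\Delta\phi^{\rm{PR}} = 2\,|\partial\,\mathrm{Re}(\Upsilon)/\partial \phi|$. The sketch below (our illustration; grid resolution and parameter values are arbitrary) estimates this by a finite difference:
\begin{verbatim}
import numpy as np

def upsilon(phi, theta0, L):
    """Closed-form Upsilon entering <x_a> = 2*alpha*Re(Upsilon)."""
    t = np.sqrt(1 - L)
    num = np.exp(1j * theta0) * (1 - np.exp(1j * phi)) - 2 * t
    den = 2 * np.exp(1j * (phi + theta0)) + t - np.exp(1j * phi) * t
    return num / den

def lambda1(phi, theta0, L, h=1e-6):
    """Lambda_1 = 2 |d Re(Upsilon)/d phi| (valid for real alpha)."""
    d = (upsilon(phi + h, theta0, L).real - upsilon(phi - h, theta0, L).real) / (2 * h)
    return 2 * abs(d)

L = 0.10
phis = np.linspace(0.0, 2 * np.pi, 300)
thetas = np.linspace(0.0, 2 * np.pi, 300)
grid = np.array([[lambda1(p, t, L) for t in thetas] for p in phis])
print(grid.max())   # maximum enhancement over the SNL for this loss (cf. Figs. 2(b) and 3)
\end{verbatim}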
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.5\textwidth]{02_fig.eps}
\end{center}
\caption{The enhancement factor $\Lambda_{1}$ as a function of $\phi$ and $\theta_0$ in the presence of photon loss (a) $L=0.05$, (b) $L=0.10$, (c) $L=0.15$, and (d) $L=0.20$.}
\label{fig2}
\end{figure}
In fact, $\Lambda_1$ can be regarded as the enhancement factor of phase sensitivity. Fig. \ref{fig2} depicts the factor $\Lambda_1$ as a function of $\phi$ and $\theta_0$ with various losses $L=0.05, 0.10, 0.15,$ and $0.20$. It is shown that in the presence of moderate loss, the enhancement factor can reach above unity, $\Lambda_1>1$, in a {\emph{horn-shaped}} region which indicates that our scheme could achieve a phase sensitivity beyond the traditional one. It is worth pointing out that the maximum $\Lambda_1$ is achieved around the tip of the ``horn".
In an attempt to gain insight into the effect of loss on the minimum phase sensitivity, we quantitatively analyze the maximum $\Lambda_{1,\rm{max}}(= \Delta \phi_{\rm{SNL}}^{\rm{Con}}/\Delta \phi_{\rm{min}}^{\rm{PR}})$ from Eq. (\ref{eqn15}). Fig. \ref{fig3} plots $\Lambda_{1,\rm{max}}$ as a function of loss. It indicates that the minimum phase sensitivity increases with the increase of loss. Although the factor $\Lambda_{1,\rm{max}}$ decreases, it is still larger than unity with a moderate photon loss. In particular, when $L=0.10$, the maximum enhancement of phase sensitivity is roughly equal to $\Lambda_{1,\rm{max}} \simeq 9.32$ with $\theta_0\simeq0.3524$ rad and $\phi\simeq2.5702$ rad.
\begin{figure}[bt]
\begin{center}
\includegraphics[width=0.4\textwidth]{03_fig.eps}
\end{center}
\caption{The maximum enhancement factor $\Lambda_{1,\rm{max}}$ as a function of photon loss $L$. QCRB: quantum Cram\'er-Rao bound, HD: homodyne detection.}
\label{fig3}
\end{figure}
\subsection{Quantum Cram\'er-Rao bound}
\label{sec3}
Since the quantum Cram\'er-Rao bound (QCRB) \cite{helstrom1967minimum,helstrom1976quantum,braunstein1994statistical} gives the ultimate limit to precision, it is regarded as a reliable figure of merit when quantifying the performance of phase-estimation measurement scheme, which motivates us to investigate the QCRB in our scheme.
Theoretically, the output mode $a$ is always a pure Gaussian state in our analysis model. This is due to the facts: (i) in our proposal, the incident beam is a coherent state which is a pure Gaussian state; (ii) since the beam splitter acts as a Gaussian operation, the output is still a pure Gaussian state when a pure Gaussian state propagates through beam splitters (including the fictitious BS corresponding to the photon loss).
According to Ref. \cite{pinel2012}, the QCRB for a pure Gaussian state can be written as
\begin{align}
\Delta \phi_{\rm{QCRB}} = \left(\overline{X}'^{\top} \Gamma^{-1} \overline{X}' + \frac{\rm{tr(\Gamma' \Gamma^{-1})^2}}{4} \right)^{-1/2},
\label{eq10}
\end{align}
where the column vector of the expectation values of the quadratures $\overline{X}(= (\langle \hat{x}_{a}\rangle, \langle \hat{p}_{a}\rangle)^{\top})$ with $\hat{p}_{a} = i(\hat{a}^{\rm{out}\dagger} - \hat{a}^{\rm{out}})$, and the symmetrized covariance matrix
\begin{align}
\Gamma = \begin{pmatrix}\langle (\Delta \hat{x}_{a})^2\rangle&\langle\Delta(\hat{x}_a,\hat{p}_a)\rangle\\\langle\Delta(\hat{p}_a,\hat{x}_a)\rangle&\langle (\Delta \hat{p}_{a})^2\rangle \end{pmatrix},
\end{align}
with $\langle\Delta(\hat{O}_1,\hat{O}_2)\rangle = \frac{1}{2}\langle \hat{O}_{1} \hat{O}_{2} + \hat{O}_{2} \hat{O}_{1}\rangle - \langle\hat{O}_{1} \rangle \langle\hat{O}_{2} \rangle$ ($\hat{O}_1,\hat{O}_2 = \hat{x}_a,\hat{p}_a$), $O'=\partial O/\partial \phi$, and $O^{\top}$ ($O^{-1}$) is the transpose (inverse) of $O$.
In this modified scheme, $\overline{X}$ and $\Gamma$ are found to be
\begin{align}
\overline{X} = \big(2{\rm{Re}}(\Upsilon \alpha), 2{\rm{Im}}(\Upsilon \alpha)\big)^{\top}, \quad
\Gamma =
\begin{pmatrix}
1&0\\
0&1
\end{pmatrix},
\label{eq11}
\end{align}
where Re($\Upsilon\alpha$) (Im($\Upsilon\alpha$)) is the real (imaginary) part of $\Upsilon\alpha$.
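Because $\Gamma$ here is the identity matrix and independent of $\phi$, Eq. (\ref{eq10}) reduces to $\Delta \phi_{\rm{QCRB}} = 1/|\overline{X}'|$, which is easy to evaluate numerically. The following sketch (our illustration; the chosen $\phi$, $\theta_0$, $L$ and $\alpha$ are arbitrary examples) uses a finite difference of $\overline{X}$:
\begin{verbatim}
import numpy as np

def upsilon(phi, theta0, L):
    t = np.sqrt(1 - L)
    num = np.exp(1j * theta0) * (1 - np.exp(1j * phi)) - 2 * t
    den = 2 * np.exp(1j * (phi + theta0)) + t - np.exp(1j * phi) * t
    return num / den

def xbar(phi, theta0, L, alpha):
    """Mean quadrature vector (<x_a>, <p_a>) = 2 (Re(Upsilon*alpha), Im(Upsilon*alpha))."""
    u = upsilon(phi, theta0, L) * alpha
    return 2 * np.array([u.real, u.imag])

def qcrb(phi, theta0, L, alpha, h=1e-6):
    """QCRB of Eq. (eq10) with Gamma = identity (phi-independent): 1/|dXbar/dphi|."""
    dX = (xbar(phi + h, theta0, L, alpha) - xbar(phi - h, theta0, L, alpha)) / (2 * h)
    return 1.0 / np.linalg.norm(dX)

alpha = 10.0
print(qcrb(2.57, 0.35, 0.10, alpha) * alpha)  # = 1/Lambda_2; values < 1 beat the conventional QCRB 1/alpha
\end{verbatim}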
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.5\textwidth]{04_fig.eps}
\end{center}
\caption{The enhancement factor $\Lambda_2$ as a function of $\phi$ and $\theta_0$ with photon loss (a) $L=0.05$, (b) $L=0.10$, (c) $L=0.15$, and (d) $L=0.20$.}
\label{fig4}
\end{figure}
By substituting Eq. (\ref{eq11}) into Eq. (\ref{eq10}), the QCRB is cast into
\begin{align}
\Delta \phi_{\rm{QCRB}}^{\rm{PR}} = \frac{1}{\Lambda_2} \Delta \phi_{\rm{QCRB}}^{\rm{Con}},
\end{align}
where $\Delta \phi_{\rm{QCRB}}^{\rm{Con}} = 1/|\alpha|$ is the QCRB of the traditional scheme and the explicit expression of $\Lambda_2$ is present in Appendix \ref{appb}. Fig. \ref{fig4} shows $\Lambda_2$ as a function of $\phi$ and $\theta_0$ with different loss $L=0.05,0.10,0.15$ and $0.20$. Similar to $\Lambda_1$, $\Lambda_2$ also reaches above unity in a {\emph{horn-shaped}} region and the maximum $\Lambda_2$ is achieved around the tip of the ``horn" as well.
According to Eq. (\ref{eqn20}), one can numerically compute the maximum $\Lambda_{2,\rm{max}}$. As shown in Fig. \ref{fig3}, the factor $\Lambda_{2,\rm{max}}$ decreases with increasing loss, which means that as the loss increases, the minimum QCRB becomes worse. Although $\Lambda_{2,\rm{max}}$ decreases, it remains larger than one. That is to say, with a moderate loss, our scheme can still achieve a QCRB beyond that of the traditional scheme. It is worth noting that, in contrast to the traditional MZI where the QCRB is independent of $\phi$, the QCRB in our scheme depends on $\phi$.
\section{discussion}
We would like to discuss the underlying mechanisms behind the enhanced phase sensitivity. Let us consider the total mean photon number, $\hat{n}_{\rm{T}}$, inside the interferometer where $\hat{n}_{\rm{T}}$ is defined as
\begin{align}
\langle \hat{n}_{\rm{T}} \rangle = \langle \hat{n}_{a}^{\rm{inside}} \rangle + \langle \hat{n}_{b}^{\rm{inside}} \rangle,
\label{eqn23}
\end{align}
with $\hat{n}_{j}^{\rm{inside}} = \hat{j}^{\rm{inside} \dagger} \hat{j}^{\rm{inside}}$ ($j = a,b$) ($\hat{j}^{\rm{inside}}$ corresponds to the mode $j$ inside the interferometer as shown in Fig. \ref{fig1}(b)). The total mean photon number is found to be
\begin{align}
\langle \hat{n}_{\rm{T}} \rangle & =\Lambda_3|\alpha|^2,
\label{eq17}
\end{align}
where the explicit expression of $\Lambda_3$ is shown in Appendix \ref{app002}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.5\textwidth]{05_fig.eps}
\end{center}
\caption{The factor $\Lambda_3$ as a function of $\phi$ and $\theta_0$ with loss (a) $L=0.05$, (b) $L=0.10$, (c) $L=0.15$, and (d) $L=0.20$.}
\label{fig5}
\end{figure}
Fig. \ref{fig5} illustrates $\Lambda_3$ as a function of $\phi$ and $\theta_0$ with various losses $L=0.05$, $0.10$, $0.15$ and $0.20$. It is easy to check that $\Lambda_3 \ge 1$ is always valid, which indicates that our scheme possesses more photons inside the interferometer than the conventional one. In fact, photons inside the interferometer are usually regarded as the essential resource that experiences and senses the relative phase shift. Intuitively, the more of this resource we use, the better the phase sensitivity we can achieve \cite{taylor2016quantum}. Therefore, the phase sensitivity is improved in this modified scheme. It is worth noting that the mean photon number inside the interferometer, $\langle \hat{n}_{\rm{T}}\rangle$, depends on $\phi$, which leads to the QCRB being dependent on $\phi$ in our scheme. Photon-recycled Michelson interferometers have been proposed previously; Appendix \ref{appcomp} compares our scheme with these light-recycled ones. It is found that our scheme possesses a different structure of the optical circuit.
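For a coherent input the vacuum port contributes no mean photons, so $\langle \hat{n}_{\rm{T}} \rangle$ can be estimated directly from the mean amplitudes of the two modes after the first beam splitter. The short sketch below is our own illustrative evaluation (parameter values arbitrary) and is meant only as a numerical counterpart of Eq. (\ref{eq17}) and Fig. \ref{fig5}:
\begin{verbatim}
import numpy as np

def mean_photons_inside(phi, theta0, L, alpha):
    """Total mean photon number of the two modes inside the interferometer
    for a coherent input alpha (vacuum port adds no mean photons)."""
    s21 = 1j * (np.exp(-1j * phi) + 1) / 2
    s22 = (1 - np.exp(-1j * phi)) / 2
    r = np.sqrt(1 - L) * np.exp(-1j * theta0)
    beta = r * s21 / (1 - s22 * r)          # mean amplitude of the re-injected mode b per unit alpha
    a_in, b_in = alpha, beta * alpha
    a1 = (a_in + 1j * b_in) / np.sqrt(2)    # internal modes right after the first beam splitter
    b1 = (1j * a_in + b_in) / np.sqrt(2)
    return abs(a1) ** 2 + abs(b1) ** 2

alpha = 1.0
print(mean_photons_inside(2.57, 0.35, 0.10, alpha) / abs(alpha) ** 2)  # estimate of Lambda_3 (>= 1)
\end{verbatim}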
\section{conclusion}
In summary we propose an alternative scheme for phase estimation in a MZI with photon recycling. We investigate the performances of our scheme, including the phase sensitivity via homodyne detection and quantum Cram\'er-Rao bound. It is demonstrated that this modified scheme is able to achieve an enhanced performance in contrast to the traditional MZI. Moreover, we present the physical mechanisms behind the improved phase sensitivity. We believe that our scheme may offer another route to implement a high-precision phase estimation exploiting photon recycling.
\section*{acknowledgments}
This research is supported by the National Natural Science Foundation of China (NSFC) (Grants No. 12104423, No. 12175204, No. 61875178, No. 62004180).
|
{
"arxiv_id": "2302.08707",
"language": "en",
"timestamp": "2023-02-20T02:07:27",
"url": "https://arxiv.org/abs/2302.08707",
"yymm": "2302"
} | \section{Introduction}\label{Sec1}
Let $\mathcal{I}_{d,g,r}$ denote the union of irreducible components of the Hilbert scheme whose general points correspond to smooth irreducible non-degenerate curves of degree $d$ and genus $g$ in $\mathbb{P}^r$. The minimal possible dimension that a component of $\mathcal{I}_{d,g,r}$ can have is $\lambda_{d,g,r} := (r+1)d - (r-3)(g-1)$. Recall that this number is called the \emph{expected dimension}. Note that $\lambda_{d,g,r} = h^0 (C, N_{C/\mathbb{P}^r}) - h^1 (C, N_{C/\mathbb{P}^r})$, where $N_{C/\mathbb{P}^r}$ is the normal bundle to a curve $C$ represented by a point of $\mathcal{I}_{d,g,r}$. For the tangent space to a component $\mathcal{H}$ of $\mathcal{I}_{d,g,r}$ at a point $[C] \in \mathcal{H}$ we have
\[
\dim \mathcal{H} \leq \dim T_{[C]} \mathcal{H} = h^0 (C, N_{C/\mathbb{P}^r}) \, .
\]
If for a general $[C] \in \mathcal{H}$ we have equality, then the component is generically smooth. Whenever $\dim \mathcal{H} < \dim T_{[C]} \mathcal{H} $ at a general $[C] \in \mathcal{H}$, the component is non-reduced.
After obtaining two series of components of the Hilbert scheme $\mathcal{I}_{d,g,r}$ of curves in \cite[Theorem 4.3, Theorem 4.4]{CIK17}, we realized that the curves parametrized by them are found on cones, which allowed us to compute $h^0 (C, N_{C/\mathbb{P}^r})$ for a curve $C$ represented by a general point of such a component. In this way we were able to strengthen some of the results proved in \cite{CIK17} and to describe the corresponding curves in a more geometric fashion. In \cite[Theorem A]{CIK21} we identified a series of generically smooth components of $\mathcal{I}_{2g-4\gamma+2, g, r}$ for every $\gamma \geq 10$ and $\gamma \leq r \leq g - 3\gamma + 2$, which extended \cite[Theorem 4.3]{CIK17}. In our paper \cite{CIK22} we found a series of non-reduced components of $\mathcal{I}_{2g-4\gamma+1, g, g-3\gamma+1}$ for every $\gamma \geq 7$ and $g \geq 6\gamma + 5$. We proved that the corresponding non-reduced components parametrize curves that lie on cones, pass through the vertex of the corresponding cone and are double covers of its general hyperplane section, which is a linearly normal nonspecial curve of genus $\gamma$. We remark that the non-reduced components from \cite{CIK22} are related to those in \cite[Theorem 4.4]{CIK17}.
In the present work we continue our study of smooth curves on cones that pass through the vertex of a cone and are $m$-covers, $m \geq 3$, of the hyperplane section of the cone. The main result in the paper concerns the case $m = 3$ and says that under suitable numerical assumptions such families of curves give rise to non-reduced components of the Hilbert scheme of curves. It is formulated in the next theorem.
\begin{theoremM}\label{TheoremB3}
Assume that $e$ and $\gamma$ are integers such that $e \geq 4 \gamma +5$ and $\gamma \geq 3$. Let
\[
g := 3\gamma + 3e \, , \qquad d:= 3e + 1 \quad \mbox{ and } \quad r:= e - \gamma + 1 \, .
\]
Then the Hilbert scheme $\mathcal{I}_{d, g, r}$ possesses a \emph{non-reduced} component $\mathcal{H}$ such that
\begin{enumerate}[label=(\roman*), leftmargin=*, font=\rmfamily]
\item $\dim \mathcal{H} = r^2 + 7e + 4$;
\item at a general point $[X] \in \mathcal{H}$ we have $\dim T_{[X]} \mathcal{H} = \dim \mathcal{H} + 1$;
\item a general point $[X] \in \mathcal{H}$ represents a curve $X$ lying on a cone $F$ over a smooth curve $Y$ of genus $\gamma$ and degree $e$ in $\mathbb{P}^{r-1}$ such that
\begin{enumerate}[label=\theenumi.\arabic*.]
\item $X \subset \mathbb{P}^{r}$ is projectively normal and passes through the vertex $P$ of the cone $F$;
\item there is a line $l$ from the ruling of $F$ that is tangent to $X$ at $P$ as the intersection multiplicity is two;
\item the projection from $P$ to the hyperplane in $\mathbb{P}^{r}$ containing the curve $Y$ induces a morphism $\varphi : X \to Y$ of degree three;
\item the ramification divisor $R_{\varphi}$ is linearly equivalent to the divisor cut on $X$ by a quadric hypersurface together with $Q_1 + Q_2$, where $Q_1$ and $Q_2$ are the remaining two points in which $l$ intersects $X$ besides $P$.
\end{enumerate}
\end{enumerate}
\end{theoremM}
Although our main result fits in the context of \cite{CIK21}, \cite{CIK22} and \cite{FS21}, it is independent of them. To obtain it, we develop the approach used in \cite{CIK22}, use the characterization of smooth curves on a cone that pass through its vertex given in \cite{CG99}, and apply arguments similar to those used in \cite{CLM96} and \cite{Cil87} to deduce that every deformation of a curve from the family of curves constructed in the theorem yields a curve from the same family. We remark that the technique used in the proof of our {\rm Main Theorem} cannot be applied in the proof of \cite[Theorem B]{CIK22}; we explain the reasons for this in {\rm Remark \ref{Sec3Remark2-3case}}. On the other hand, the possibility for curves on cones, which are algebraically equivalent to a high degree hypersurface intersection plus a line, to yield a non-reduced component of the Hilbert scheme of curves has already been suggested in \cite[Remark 4.12]{CLM96}. In this sense our work was inspired by \cite{CLM96}.
The free resolution of the ideal of a smooth curve on a cone passing through its vertex, obtained by Catalisano and Gimigliano in \cite{CG99}, plays an essential role in the proof of our main result. For this reason we describe their result in section \ref{Sec2} using a setup that fits the framework of the Main Theorem. Further in the same section we prove several results about smooth curves on cones that are $m:1$ covers of the base of the cone and pass through its vertex. Also, for $m = 3$ we prove a technical result, namely {\rm Proposition \ref{Sec2PropPushForwardStrSheaf}}, that plays an important role in the proof of our Main Theorem, which is given in section \ref{Sec3}.
We work over the field $\mathbb{C}$. By \emph{curve} we understand a smooth integral projective algebraic curve. Given a line bundle $L$ on a smooth projective variety $X$, or a divisor $\Delta$ associated to $L$, we denote by $|L|$ or $|\Delta|$ the complete linear series $\mathbb P\left(H^0(X,L)\right)$ on $X$. For a line bundle $L$ and a divisor $\Delta$ on a variety $X$, we abbreviate, occasionally, the notation of the line bundle $L \otimes \mathcal{O}_X (\Delta)$ to simply $L(\Delta)$. We use $\sim$ to denote linear equivalence of divisors. Given a finite morphism $\varphi : X \to Y$ of curves and a divisor $\Delta = \sum n_i P_i$ on $X$, we denote by $\varphi ( \Delta )$ the divisor $\sum n_i \varphi (P_i)$ on $Y$.
When $X$ is an object of a family, we denote by $[X]$ the corresponding point of the Hilbert scheme representing the family. For all other definitions and properties of objects not explicitly introduced in the paper the reader can refer to \cite{Hart77} and \cite{ACGH}.
\section{ Preliminary results }\label{Sec2}
In our paper \cite{CIK22} we constructed a series of non-reduced components of the Hilbert scheme of curves using curves that lie on cones as each curve passes through the vertex of the corresponding cone. There, we considered only curves that are double covers of the base of the cone. On the other hand, curves on cones that are $m:1$ covers of the base, $m \geq 2$, and pass through the vertex have been studied by Catalisano and Gimigliano in \cite{CG99} with a different aim. Motivated by an earlier work of Jaffe about smooth curves on a cone that pass through its vertex, see \cite{Jaf91}, Catalisano and Gimigliano showed in \cite{CG99} that such curves are projectively normal, provided that the base curve of cone is, and gave a resolution of the ideal of such a curve in terms of a resolution of the ideal of the base curve. We will formulate below the main result of \cite{CG99}. For this assume that:
\begin{itemize}[font=\sffamily, leftmargin=1.0cm,
style=nextline]
\item[$\Gamma$] is a smooth integral curve of genus $\gamma$,
\item[$E$] is a divisor of degree $e \geq 2\gamma + 1$ on $\Gamma$,
\item[$q$] is a point on $\Gamma$,
\item[$S$] is the ruled surface $S = \mathbb{P} (\mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma} (-E))$,
\item[$f$] is the natural projection morphism $f : S \to \Gamma$,
\item[$\Gamma_0$] is the section of minimal self-intersection of $f : S \to \Gamma$, that is, the one that corresponds to the exact sequence
\[
0 \to \mathcal{O}_{\Gamma} \to \mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma} (-E) \to \mathcal{O}_{\Gamma} (-E) \to 0
\]
with $\Gamma_0^2 = \deg \mathcal{O}_{\Gamma} (-E) = -e$,
\item[$\Psi$] is the morphism determined by the linear series $|\Gamma_0 + E\mathfrak{f}|$ on $S$.
\end{itemize}
\noindent We remark that $\Psi$ is isomorphism away from $\Gamma_0$ and contracts $\Gamma_0$ to a point, see \cite{FP05NM} for more details. Thus, $\Psi$ maps $S$ into a cone, so we denote by
\begin{itemize}[font=\sffamily, leftmargin=1.0cm,
style=nextline]
\item[$F$] the image of $S$ under $\Psi$, that is, $F = \Psi (S)$, and
\item[$P$] the vertex of $F$, that is, $P = \Psi (\Gamma_0)$.
\end{itemize}
\noindent Set $r := \dim |\Gamma_0 + E\mathfrak{f}|$. Then the embedding $F \subset \mathbb{P}^r$ is complete and the hyperplane sections of $F$ are the images, under $\Psi$, of the divisors from the linear series $|\Gamma_0 + E\mathfrak{f}|$ on $S$. Let
\begin{itemize}[font=\sffamily, leftmargin=1.0cm,
style=nextline]
\item[$\sigma_D$] be a section of $f : S \to \Gamma$ for whose image $\sigma_D (\Gamma) =: D$ we have that
\item[$D$] is a smooth curve in the linear series $|\Gamma_0 + E \mathfrak{f}|$ on $S$, and let
\item[$Y$] be the image of $D$ under $\Psi$, that is, $Y = \Psi (D)$.
\end{itemize}
\noindent The curves $\Gamma$, $D$ and $Y$ are isomorphic to one another since $\Psi$ is an isomorphism away from $\Gamma_0$ and $D \cdot \Gamma_0 = (\Gamma_0 + E \mathfrak{f}) \cdot \Gamma_0 = 0$. Also, by \cite[Proposition 1]{CIK21}, $r = e-\gamma+1$, and $Y$ is a smooth, linearly normal curve of genus $\gamma$ and degree $e$ in $\mathbb{P}^{r-1}$. In fact, due to $e \geq 2\gamma + 1$, it follows by \cite{Mum1969} that $Y$ is projectively normal. Thus, we can consider $F$ as a cone in $\mathbb{P}^r$ over the projectively normal curve $Y \subset \mathbb{P}^{r-1}$.
\begin{enumerate}[label=\normalfont(\alph*), leftmargin=*, font=\rmfamily]
\item[(MS)] \emph{We call the above assortment of assumptions about $\Gamma$, $E$, $q$, $S$, $f$, $\Gamma_0$, $\Psi$, $F$, $P$, $r$, $D$, $\sigma_D$ and $Y$, and the properties we described, the \emph{Main Setup}, and we abbreviate it as {\rm (MS)}}.
\end{enumerate}
Catalisano-Gimigliano's result can now be formulated as follows.
\begin{prop}(\cite[Proposition 2]{CG99})\label{Sec2PropCatGim}
Assume the conditions and notations of {\rm (MS)}. Let $C_m \in |m\Gamma_0 + (mE + q) \mathfrak{f}|$ be general and $X_m = \Psi (C_m)$ be the image of $C_m$ on $F$, where $m \geq 2$ is an integer. Then
\begin{enumerate}[label=\normalfont(\alph*), leftmargin=*, font=\rmfamily]
\item $X_m$ is a smooth integral projectively normal curve that passes through the vertex $P$;
\item given a free resolution of the ideal sheaf $\mathcal{I}_{Y}$ of $Y$
\begin{equation}\label{Sec2ResolY}
0 \to \mathcal{F}_{r-2} \to \mathcal{F}_{r-3} \to \cdots \to \mathcal{F}_{1} \to \mathcal{I}_Y \to 0 \, ,
\end{equation}
with $\mathcal{F}_{i} = \bigoplus\limits^{\beta_i}_{j=1} \mathcal{O}_{\mathbb{P}^r} (-\beta_{i,j})$, $i = 1, \ldots , r-2$, the ideal sheaf $\mathcal{I}_{X_m}$ of $X_m$ has a free resolution
\begin{equation}\label{Sec2ResolXm}
0 \to \mathcal{P}_{r-1} \to \mathcal{P}_{r-2} \to \cdots \to \mathcal{P}_{1} \to \mathcal{I}_{X_m} \to 0 \, ,
\end{equation}
where
\begin{itemize}[leftmargin=*, font=\rmfamily]
\item[] $\mathcal{P}_{1} = \bigoplus\limits^{r-1}_{1} \mathcal{O}_{\mathbb{P}^r} (-m-1) \oplus \bigoplus\limits^{\beta_1}_{j=1} \mathcal{O}_{\mathbb{P}^{r}} (-\beta_{1,j})$
\item[] $\mathcal{P}_k = \bigoplus\limits^{\binom{r-1}{k}}_1 \mathcal{O}_{\mathbb{P}^{r}}(-m-k) \oplus \bigoplus\limits^{\beta_k}_{j=1} \mathcal{O}_{\mathbb{P}^{r}} (-\beta_{k,j}) \oplus \bigoplus\limits^{\beta_{k-1}}_1 \mathcal{O}_{\mathbb{P}^{r}}(-m-\beta_{k-1,j})$, for $2 \leq k \leq r-2$
\item[] $\mathcal{P}_{r-1} = \mathcal{O}_{\mathbb{P}^{r}}(-m-r+1) \oplus \bigoplus\limits^{\beta_{r-2}}_1 \mathcal{O}_{\mathbb{P}^{r}}(-m-\beta_{r-2,j})$.
\end{itemize}
\end{enumerate}
\end{prop}
\begin{remark}\label{Sec2RemarkCatGim}
Note that for any point $z \in \Gamma$ the morphism $\Psi$ maps the fiber $z\mathfrak{f}$ to a line from the ruling of $F$ passing through the point $\Psi (\sigma_D (z))$ on $Y$. Let $l_q \subset F$ be the line corresponding to $q$.
As it is pointed out in \cite[section 1]{CG99}, the curve $X_m$, together with $(e-1)$ lines $L_1, \ldots , L_{e-1}$ from the ruling of $F$, is cut on $F$ by a degree $(m+1)$ hypersurface $G_{m+1} \subset \mathbb{P}^r$, where $L_1, \ldots , L_{e-1}$ are the residual lines on $F$ cut by a hyperplane that contains the line $l_q$. We remark also that the smoothness of a general $C_m \in |m\Gamma_0 + (mE+q)\mathfrak{f}|$ follows by \cite{Jaf91} and \cite{CG99}.
\end{remark}
Note that since the curve $C_m$ is in the linear equivalence class of $m\Gamma_0 + (mE + q)\mathfrak{f}$, the adjunction formula gives for its genus $g$
\[
\begin{aligned}
2g-2 & = (-2\Gamma_0 + (K_{\Gamma} - E)\mathfrak{f} + m\Gamma_0 + (mE + q)\mathfrak{f}) \cdot (m\Gamma_0 + (mE + q)\mathfrak{f}) \\
& = m(m-1)e + 2m\gamma - 2 \, ,
\end{aligned}
\]
hence $g = \binom{m}{2}e+m\gamma$. Likewise, $(\Gamma_0 + E\mathfrak{f}) \cdot C_m = me + 1$, so $X_m$ is a smooth curve of degree $d = me + 1$ and same genus $g$. We remark also that if $q_0$ is the point in which the fiber $q \mathfrak{f}$ meets $\Gamma_0$, then it follows by
\cite[Proposition 36]{FP05NM} that the linear series $|m\Gamma_0 + (mE + q)\mathfrak{f}|$ has a unique base point at $q_0$. This allows us to make the following observation about $X_m$.
\begin{prop}
Assume the conditions and notations of {\rm (MS)}. Let $l_q$ be as in {\rm Remark \ref{Sec2RemarkCatGim}} and $X_m$ be as above. The line $l_q$ is tangent to $X_m$ at the point $P$ as their intersection multiplicity at $P$ is exactly two.
\end{prop}
\begin{proof}
The morphism $\Psi : S \to F$ is in fact the resolution of the singularity of $F$ at the vertex $P$. Since $C_m$ is the proper transform of $X_m$ and $q\mathfrak{f}$ is the proper transform of $l_q$, and since they do meet on $\Gamma_0 = \Psi^{-1} (P)$, namely at the base point $q_0$, the intersection of $X_m$ and $l_q$ at $P$ must be of multiplicity at least two. On the other hand, $C_m \in |m\Gamma_0 + (mE + q)\mathfrak{f}|$ is general, hence $q\mathfrak{f}$ meets $C_m$ in $m-1$ additional points, all of which are distinct and away from $\Gamma_0$. Since $\Psi$ is an isomorphism away from $\Gamma_0$, the images of those $m-1$ points will be distinct points on $l_q$ away from $P$. The inner projection with center $P$ to the hyperplane containing $Y$ yields an $m:1$ covering $X_m \to Y$, therefore the intersection multiplicity of $X_m$ and $l_q$ at $P$ can only be two.
\end{proof}
It is convenient to have an explicit notation for the morphism mentioned in the proof of the lemma, so denote by $\varphi : X_m \to Y$ the $m:1$ covering morphism induced by the inner projection with center $P$ to the hyperplane containing the curve $Y$. We remark that the image $\varphi (P)$ of the point $P$ is by definition the point in which the tangent line $l_q$ to $X_m$ at $P$ meets the hyperplane, which is the point $\Psi (\sigma_D (q)) =: Q$. Consider also the morphism $\phi : C_m \to D$ defined as the composition $\phi := \sigma_D \circ (f_{|_{C_m}})$. Note that the morphism $\phi$ coincides with the composition $(\Psi^{-1})_{|_Y} \circ \varphi \circ (\Psi_{|_{C_m}})$. Next we will derive a few facts involving the ramification divisor of $\varphi$ but before that we summarize, for convenience of the reader, the \emph{additional notations}. We will refer to them as {\rm (AN)}.
\begin{itemize}[font=\sffamily, leftmargin=1.0cm, style=nextline]
\item[(AN)]
\begin{itemize}[font=\sffamily, leftmargin=1.0cm, style=nextline]
\item[$C_m$] is a general curve in the linear series $|m\Gamma_0 + (mE + q)\mathfrak{f}|$,
\item[$q_0$\phantom{.}] is the unique base point of $|m\Gamma_0 + (mE + q)\mathfrak{f}|$; note that $q_0 \in \Gamma_0$,
\item[$X_m$] is the image $\Psi (C_m) \subset F \subset \mathbb{P}^r$ of $C_m$, which is smooth according to {\rm Remark \ref{Sec2RemarkCatGim}},
\item[$\varphi$\phantom{.}] is the $m:1$ covering morphism $\varphi : X_m \to Y$ induced by the projection with center $P$ to the hyperplane in $\mathbb{P}^r$ containing $Y$,
\item[$\phi$\phantom{.}] is the $m:1$ covering morphism $\phi : C_m \to D$ defined as $\phi := \sigma_D \circ (f_{|_{C_m}})$,
\item[$Q$\phantom{.}] is the point on $Y$ defined as the image $\Psi (\sigma_D (q))$ of $q \in \Gamma$.
\end{itemize}
\end{itemize}
\begin{prop}\label{Sec2PropRamBranch}
Assume the conditions and notations {\rm (MS)} and {\rm (AN)}. Denote by $R_{\varphi}$ the ramification divisor of $\varphi$ and let $B_{\varphi}$ be a ``one-half-the-branch-divisor'' of $\varphi$, that is, $B_{\varphi}$ is such that $2B_{\varphi} \sim \varphi (R_{\varphi})$. Then
\begin{enumerate}[label=\normalfont(\alph*), leftmargin=*, font=\rmfamily]
\item $R_{\varphi}$ is linearly equivalent to the divisor cut on $X_m$ by a hypersurface of degree $(m-1)$ together with the $(m-1)$ points, besides $P$, in which the line $l_q$ meets $X_m$;
\item $\deg R_{\varphi} = (m-1)(me + 2)$ ;
\item $\varphi^{\ast} \mathcal{O}_Y (1) \cong \mathcal{O}_{X_m} (1) \otimes \mathcal{O}_{X_m} (-P) $ ;
\item
$
\mathcal{O}_Y (B_{\varphi}) \cong \mathcal{O}_Y (m(m-1)/2) \otimes \mathcal{O}_Y ((m-1)Q)
$ .
\end{enumerate}
\end{prop}
\begin{proof}
Since $C_m$ and $X_m$ are isomorphic, we can transform some of the claims about $R_{\varphi}$ into claims about the ramification divisor $R_{\phi}$ of the morphism $\phi$, which are easier to prove.
\begin{enumerate}[label=(\alph*), leftmargin=*, font=\rmfamily]
\item For $R_{\phi}$ we have
\[
\begin{aligned}
R_{\phi} & \sim K_{C_m} - \phi^{\ast} K_D \\
& \sim (-2\Gamma_0 + (K_{\Gamma}-E)\mathfrak{f} + m\Gamma_0 + (mE + q)\mathfrak{f})_{|_{C_m}} - \phi^{\ast} K_D \\
& \sim ((m-2)\Gamma_0 + ((m-1) E + q)\mathfrak{f})_{|_{C_m}} + K_{\Gamma} \mathfrak{f}_{|_{C_m}} - \phi^{\ast} K_D \, .
\end{aligned}
\]
The divisor $K_D$ is the restriction of $K_S + D \sim -2\Gamma_0 + (K_{\Gamma}-E)\mathfrak{f} + \Gamma_0 + E \mathfrak{f} \sim -\Gamma_0 + K_{\Gamma}\mathfrak{f}$ on the curve $D$. However, $D$ doesn't meet $\Gamma_0$, therefore $\phi^{\ast} K_D \sim \phi^{\ast} ((-\Gamma_0 + K_{\Gamma}\mathfrak{f})_{|_D}) = \phi^{\ast} (K_{\Gamma}\mathfrak{f}_{|_D}) \sim K_{\Gamma} \mathfrak{f}_{|_{C_m}}$. Therefore
\[
R_{\phi} \sim ((m-2)\Gamma_0 + ((m-1) E + q)\mathfrak{f})_{|_{C_m}} \, .
\]
By the commutativity of the diagram
\begin{equation}\label{Sec2PropCommutDiag}
\begin{tikzcd
C_m \ar[r, "{\Psi_{|_{C_m}}}", "{\cong}"'] \ar[d, "{\phi}"'] & X_m \ar[d, "{\varphi}"] \\
D \ar[r, "{\Psi_{|_{D}}}", "{\cong}"']& Y
\end{tikzcd}
\end{equation}
we have that the restriction of $\Psi$ on $C_m$ takes a divisor that is linearly equivalent to the ramification divisor of $\phi$ into a divisor that is linearly equivalent to the ramification divisor of $\varphi$. Consider
\[
R_{\phi} \sim ((m-2)\Gamma_0 + ((m-1) E + q)\mathfrak{f})_{|_{C_m}} \sim ((m-1)(\Gamma_0 + E \mathfrak{f}) + (q \mathfrak{f} - \Gamma_0))_{|_{C_m}} \, .
\]
Since $\Gamma_0$ and $C_m$ meet exactly at the point $q_0$ in which the fiber $q\mathfrak{f}$ meets $\Gamma_0$, it follows that $(q \mathfrak{f} - \Gamma_0))_{|_{C_m}}$ is an effective divisor on $C_m$ that consists of $m-1$ points, say $q_1, \ldots , q_{m-1}$ on $C_m$, in which $q\mathfrak{f}$ intersects $C_m$ besides $q_0$. Hence,
\[
R_{\phi} \sim ((m-1)(\Gamma_0 + E \mathfrak{f}))_{|_{C_m}} + q_1 + \cdots + q_{m-1} \, .
\]
The morphism $\Psi : S \to \mathbb{P}^r$ is defined by the linear series $|\Gamma_0 + E \mathfrak{f}|$ on $S$, so $\Psi$ maps the restriction $((m-1)(\Gamma_0 + E \mathfrak{f}))_{|_{C_m}}$ to the divisor on $X_m$ cut by a hypersurface of degree $m-1$. Also, $\Psi$ maps the fiber $q \mathfrak{f}$ into the line $l_q$. The images of the points $q_1, \ldots , q_{m-1}$ under $\Psi$ will be the $m-1$ points in which $l_q$ meets $X_m$ besides $P$. Therefore, $R_{\varphi}$ is linearly equivalent to the divisor cut on $X_m$ by a hypersurface of degree $(m-1)$ together with the images of the points $q_1, \ldots , q_{m-1}$, which lie on $l_q$ as claimed.
\item Since $\deg X_m = (\Gamma_0 + E\mathfrak{f}) \cdot (m\Gamma_0 + (mE+q)\mathfrak{f}) = me+1$, it follows by {\rm (a)} that
\[
\begin{aligned}
\deg R_{\varphi} & = (m-1) \deg X_m + (m-1)
= (m-1)(me + 2) \, .
\end{aligned}
\]
\item The claim and its proof are contained in the proof of \cite[Proposition 2]{CIK22}.
\item To verify the last claim we show for the branch divisor $\phi (R_{\phi})$ of $\phi : C_m \to D$ that
\[
\phi (R_{\phi}) \sim ((m(m-1)E + 2(m-1)q)\mathfrak{f})_{|_{D}} \, .
\]
\noindent Recall first that the map $\phi : \Div (C_m) \to \Div (D)$ is linear in the sense that $\phi(\sum n_{P_j} P_j) = \sum n_{P_j} \phi(P_j)$. Note also that according to \cite[Ex. IV.2.6, p. 306]{Hart77}, $\phi(\sum n_{P_j} P_j)$ depends only on the linear equivalence class of $\sum n_{P_j} P_j$. As we claim just linear equivalence, it is sufficient to verify that
\begin{enumerate}[label=(\arabic*), leftmargin=*, font=\rmfamily]
\item $\phi \left( (m-2)(\Gamma_0 + E\mathfrak{f})_{|_{C_m}} \right) \sim ((m(m-2)E + (m-2)q)\mathfrak{f})_{|_{D}}$, and
\item $\phi \left( (E+q)\mathfrak{f} )_{|_{C_m}} \right) \sim (( m(E+q))\mathfrak{f})_{|_{D}}$.
\end{enumerate}
The first claim follows easily from the facts that $\Gamma_0$ and $C_m$ intersect exactly at $q_0$, that $\phi (q_0) = q\mathfrak{f}_{|_D}$, and that $\phi : C_m \to D$ is an $m:1$ covering. The second claim follows similarly. For the branch divisor on $D$ this implies that
\[
\begin{aligned}
\phi (R_{\phi}) & \sim \phi ((m-2)(\Gamma_0 + E\mathfrak{f})_{|_{C_m}}) + \phi \left( (E+q)\mathfrak{f} )_{|_{C_m}} \right) \\
& \sim ((m(m-2)E + (m-2)q)\mathfrak{f})_{|_{D}} + (( m(E+q))\mathfrak{f})_{|_{D}} \\
& \sim ((m(m-1)E + 2(m-1)q)\mathfrak{f})_{|_{D}} \, .
\end{aligned}
\]
By the commutativity of diagram {\rm (\ref{Sec2PropCommutDiag})} we have that
\[
\varphi (R_{\varphi}) \sim \Psi_{|_D} (\phi (R_{\phi})) \sim \Psi_{|_D} \big( ((m(m-1)E + 2(m-1)q)\mathfrak{f})_{|_{D}} \big) \, .
\]
Recall that $\Gamma_0$ and $D$ do not intersect and $E\mathfrak{f}_{|_{D}} \equiv (\Gamma_0 + E\mathfrak{f})_{|_{D}}$. Since the divisors from $|\Gamma_0 + E\mathfrak{f}|$ are mapped by $\Psi$ into hyperplane sections of $F$, it follows that the branch divisor $\varphi (R_{\varphi})$ is linearly equivalent to the divisor cut on $Y$ by a hypersurface of degree $m(m-1)$ together with the divisor $2(m-1)Q$, where $Q$ is the point in which the line $l_q$ meets $Y$.
\[
\mathcal{O}_Y (B_{\varphi}) \cong \mathcal{O}_Y \left( \frac{m(m-1)}{2} \right) \otimes \mathcal{O}_Y ((m-1)Q) \, .
\]
\end{enumerate}
\end{proof}
We remark explicitly the following fact which is derived easily from the proof of {\rm Proposition \ref{Sec2PropRamBranch}(d)}.
\begin{remark}\label{Sec2RemBnu}
If $\nu : C_m \to \Gamma$ is the morphism defined as the restriction of $f$ on $C_m$ and $B_{\nu}$ is the ``one-half-the-branch-divisor'' of $\nu$, that is, $2 B_{\nu} \sim \nu (R_{\nu})$, where $R_{\nu}$ is the ramification divisor of $\nu$, then $B_{\nu}$ is linearly equivalent to the divisor $\frac{m(m-1)}{2} E + (m-1)q$ on $\Gamma$.
\end{remark}
The proposition that follows will be used in the proof of the {\rm Main Theorem} to identify the curves of given degree and genus that lie a cone in terms of the linear equivalence class of a specific divisor on the desingularization of the cone.
\begin{prop}\label{Sec2PropIdentifyCurves}
Suppose that $F \subset \mathbb{P}^r$ is a cone over a smooth integral linearly normal curve $Y$ of genus $\gamma$ and degree $e \geq 2\gamma + 1$ in $\mathbb{P}^{r-1}$. Let $S$ be the ruled surface defined as the blow-up of $F$ at its vertex, and let $f : S \to Y$ be the natural surjective morphism with a section $Y_0$ of minimal self-intersection. If $X$ is a smooth integral curve of degree $d = me + 1$ and genus $g = \binom{m}{2}e + m\gamma$ on $F$, then its proper transform $C$ on $S$ is linearly equivalent to $m Y_0 + (mE+q)\mathfrak{f}$, where $E$ is an effective divisor of degree $e$ on $Y$ such that $S \cong \mathbb{P} (\mathcal{O}_Y \oplus \mathcal{O}_Y (-E))$ and $q$ is a point on $Y$.
\end{prop}
\begin{proof}
Since $S$ is the blow-up of $F$ at its vertex, it must be a decomposable ruled surface over $Y$. Since $Y_0$ is the section of minimal self-intersection of $f : S \to Y$, we must have that $\deg E = -Y^2_0 = \deg Y = e$. The Picard group of $S$ is generated by $Y_0$ and the pull-backs via $f^{\ast}$ of the divisors on $Y$. Hence, $C \sim a Y_0 + B \mathfrak{f}$ for a divisor $B$ on $Y$. For the degree of $X$ we have
\[
me + 1 = \deg X = (Y_0 + E\mathfrak{f}) \cdot (aY_0 + B\mathfrak{f}) = -ae + ae + \deg B \, ,
\]
so $\deg B = me + 1$. Applying the adjunction theorem for $C$ we get
\[
\begin{aligned}
2g-2 & = (K_{S} + C) \cdot C \\
& = (-2Y_0 + (K_{Y} - E) \mathfrak{f} + aY_0 + B \mathfrak{f}) \cdot (a Y_0 + B \mathfrak{f}) \\
& = ((a-2)Y_0 + (K_{Y} - E + B)\mathfrak{f}) \cdot (a Y_0 + B \mathfrak{f}) \\
& = a(a-2)(-e) + (me+1)(a-2) + a(2\gamma-2 -e + me+1) \\
& = -ea^2 + 2ae + (me+1)a - 2me-2 + (2\gamma-1+(m-1)e)a \, .
\end{aligned}
\]
Since $2g-2 = m(m-1)e + 2m\gamma - 2$, we obtain
\begin{equation}\label{Sec2EqFora}
ea^2 - ((2m+1)e+2\gamma)a + m(m+1)e + 2m\gamma = 0 \, .
\end{equation}
Solving {\rm (\ref{Sec2EqFora})} for $a$ we obtain solutions $a = m$ and $a = m+1 + \frac{2\gamma}{e}$. Since $e \geq 2\gamma + 1$, the second number is not an integer, so $a = m$ is the only solution.
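For the reader's convenience we record the straightforward verification that $a = m$ is a root of {\rm (\ref{Sec2EqFora})}:
\[
e m^2 - ((2m+1)e + 2\gamma)m + m(m+1)e + 2m\gamma = em^2 - 2m^2 e - me - 2m\gamma + m^2 e + me + 2m\gamma = 0 \, ,
\]
and, since the product of the two roots equals $\frac{m(m+1)e + 2m\gamma}{e}$, the second root is indeed $m+1+\frac{2\gamma}{e}$.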
It remains to prove the claim about $B$, that is, $B \sim mE + q$ for some point $q \in Y$. An argument similar to that in \cite[Prop. V.2.6, p.371]{Hart77} shows that $j_{\ast} \mathcal{O}_{Y_0} (Y_0) \cong \mathcal{O}_{Y} (-E)$, where $j$ is the isomorphism $j : Y_0 \to Y$. Namely, consider the exact sequence
\[
0 \to \mathcal{O}_{S} \to \mathcal{O}_{S}(Y_0) \to \mathcal{O}_{Y_0} \otimes \mathcal{O}_{S} (Y_0) \to 0 \, ,
\]
and push it down to $Y$. By Grauert's theorem we have
\[
0 \to f_{\ast} \mathcal{O}_{S} \to f_{\ast} \mathcal{O}_{S}(Y_0) \to j_{\ast} ( \mathcal{O}_{Y_0} (Y_0) ) \to 0 \, .
\]
Since $f_{\ast} \mathcal{O}_{S} \equiv \mathcal{O}_{Y}$ and $f_{\ast} \mathcal{O}_{S}(Y_0) \cong \mathcal{O}_{Y} \oplus \mathcal{O}_{Y} (-E)$, we deduce that $j_{\ast} \mathcal{O}_{Y_0} (Y_0) \cong \mathcal{O}_{Y} (-E)$. Further, $C$ is a smooth curve on $S$ and $C \cdot Y_0 = (m Y_0 + B \mathfrak{f}) \cdot Y_0 = 1$, so $C$ intersects $Y_0$ in a single point, say $z = C \cap Y_0$. Since $C \sim m Y_0 + B \mathfrak{f}$, the restrictions $C_{|_{Y_0}}$ and $(m Y_0 + B \mathfrak{f})_{|_{Y_0}}$ must be linearly equivalent too. Hence,
\[
z \sim (m Y_0 + B \mathfrak{f})_{|_{Y_0}} \, ,
\]
or equivalently, $j (z) \sim -mE + B$ on $Y_0$. Taking $q := j (z)$, we obtain $B \sim mE + q$.
\end{proof}
In the proof of the main theorem in section \ref{Sec3} we will use $\varphi_{\ast} \mathcal{O}_{X_m}$ and $\varphi_{\ast} (\mathcal{O}_{X_m} (P))$, or equivalently $\phi_{\ast} \mathcal{O}_{C_m}$ and $\phi_{\ast} (\mathcal{O}_{C_m} (\Gamma_0))$. The statement giving explicit expressions of those bundles develops an idea encountered in \cite[Proposition 2.2]{FP05MN}. However, due to obvious reasons, we give a formulation and a proof only in the case $m = 3$, which is sufficient for our purposes.
\begin{prop}\label{Sec2PropPushForwardStrSheaf}
Assume the conditions and notations {\rm (MS)} and {\rm (AN)}. Fix $m = 3$ and denote $C_3 =: C$ and $X_3 =: X$. Then
\begin{enumerate}[label=\normalfont(\alph*), leftmargin=*, font=\rmfamily]
\item $\varphi_{\ast} (\mathcal{O}_{X} (P)) \cong \mathcal{O}_{Y} \oplus \mathcal{O}_{Y} (-1) \oplus ( \mathcal{O}_{Y} (-2) \otimes \mathcal{O}_{Y} (-Q) )$
\item $\varphi_{\ast} \mathcal{O}_{X} \cong \mathcal{O}_{Y} \oplus (\mathcal{O}_{Y} (-1) \otimes \mathcal{O}_{Y} (-Q) )\oplus ( \mathcal{O}_{Y} (-2) \otimes \mathcal{O}_{Y} (-Q) )$
\end{enumerate}
\end{prop}
\begin{proof}
The corresponding statements for $\phi : C \to D$ appear as
\begin{enumerate}[label=(\alph*'), leftmargin=*, font=\rmfamily]
\item $\phi_{\ast} (\mathcal{O}_{C} (\Gamma_0)) \cong \mathcal{O}_{D} \oplus \mathcal{O}_{D} (-E\mathfrak{f}) \oplus \mathcal{O}_{D} (-(2E+q)\mathfrak{f} )$,
\item $\phi_{\ast} \mathcal{O}_{C} \cong \mathcal{O}_{D} \oplus \mathcal{O}_{D} (-(E+q)\mathfrak{f}) \oplus \mathcal{O}_{D} (-(2E+q)\mathfrak{f} )$.
\end{enumerate}
If we denote by $\nu$ the morphism $f_{|_C} : C \to \Gamma$, or equivalently, if $\iota$ is the embedding $\iota : C \hookrightarrow S$ and $\nu$ is the composition $f \circ \iota$, the claims translate into
\begin{enumerate}[label=(\alph*''), leftmargin=*, font=\rmfamily]
\item $\nu_{\ast} (\mathcal{O}_{C} (\Gamma_0)) \cong \mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma} (-E ) \oplus \mathcal{O}_{\Gamma} (-2E-q)$,
\item $\nu_{\ast} \mathcal{O}_{C} \cong \mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma} (-E-q) \oplus \mathcal{O}_{\Gamma} (-2E-q)$.
\end{enumerate}
It is sufficient to prove claims {\rm (a'')} and {\rm (b'')}, which we will do next. We remark that claim {\rm (b'')} has been proved by Miranda for varieties of arbitrary dimension, see \cite[Proposition 8.1, p.1150]{Mir85}. For the convenience of the reader we give a proof of it as well, since it is easy to do in our context.
Since $C \in |3\Gamma_0 + (3E + q)\mathfrak{f}|$, there is an exact sequence
\[
0 \to \mathcal{O}_{S} (-2\Gamma_0 -(3E +q)\mathfrak{f})
\to \mathcal{O}_{S} ( \Gamma_0)
\to \iota_{\ast} \mathcal{O}_{C} ( \Gamma_0) \equiv \iota_{\ast} \mathcal{O}_{C} ( q_0 )
\to 0 \, .
\]
Pushing it down to $\Gamma$ via $f_{\ast}$, we get the exact sequence
\begin{equation}\label{PushForward}
\begin{tikzpicture}[descr/.style={fill=white,inner sep=1.5pt}, transform canvas={scale=0.95}]
\matrix (m) [
matrix of math nodes,
row sep=1em,
column sep=2.5em,
text height=1.5ex, text depth=0.25ex
]
{ 0 & f_{\ast} \mathcal{O}_{S} (-2\Gamma_0 -(3E +q)\mathfrak{f}) & f_{\ast} \mathcal{O}_{S} ( \Gamma_0) & f_{\ast} \iota_{\ast} \mathcal{O}_{C} ( q_0 ) \\
& R^1 f_{\ast} \mathcal{O}_{S} (-2\Gamma_0 -(3E +q)\mathfrak{f}) & R^1 f_{\ast} \mathcal{O}_{S} ( \Gamma_0 ) & R^1 f_{\ast} \iota_{\ast} \mathcal{O}_{C} ( q_0) \\
};
\path[overlay,->, font=\scriptsize,>=latex]
(m-1-1) edge (m-1-2)
(m-1-2) edge (m-1-3)
(m-1-3) edge (m-1-4)
(m-1-4) edge[out=355,in=175] node[descr,yshift=0.3ex] {} (m-2-2)
(m-2-2) edge (m-2-3)
(m-2-3) edge (m-2-4);
\end{tikzpicture}
\end{equation}
\medskip
\noindent For every point $z \in \Gamma$ we have that $z\mathfrak{f} \cong \mathbb{P}^1$ and $\deg \left( \mathcal{O}_{S} (-2\Gamma_0 -(3E +q)\mathfrak{f})_{|_{z\mathfrak{f}}} \right) = (-2\Gamma_0 -(3E +q)\mathfrak{f}) \cdot z\mathfrak{f} = -2$, hence
\[
h^i (z\mathfrak{f}, \mathcal{O}_{S} (-2\Gamma_0 -(3E +q)\mathfrak{f})_{|_{z\mathfrak{f}}}) = h^i (\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1} (-2)) =
\begin{cases}
0 & \mbox{ if } i = 0 \\
1 & \mbox{ if } i = 1
\end{cases}
\]
By Grauert's theorem, see \cite[Theorem III.12.9]{Hart77}, it follows that $f_{\ast} \mathcal{O}_{S} (-2\Gamma_0 -(3E +q)\mathfrak{f})$ vanishes, while $R^1 f_{\ast} \mathcal{O}_{S} (-2\Gamma_0 -(3E +q)\mathfrak{f})$ must be a locally free sheaf of rank one, that is, a line bundle on $\Gamma$. From the definition of $S$ we have $f_{\ast} \mathcal{O}_{S} ( \Gamma_0 ) \cong \mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma} (-E)$, and since $h^1 (z\mathfrak{f}, \mathcal{O}_{S} (\Gamma_0)_{|_{z\mathfrak{f}}}) = h^1 (\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1} (1)) = 0$, Grauert's theorem implies that {\rm (\ref{PushForward})} reduces to
\begin{equation}\label{PushForwardRed}
0 \to \mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma} (-E) \to \nu_{\ast} \mathcal{O}_{C} ( q_0 ) \to R^1 f_{\ast} \mathcal{O}_{S} (-2\Gamma_0 -(3E +q)\mathfrak{f}) \to 0 \, .
\end{equation}
Since $\nu : C \to \Gamma$ is a triple covering morphism, the push-forward $\nu_{\ast} \mathcal{O}_{C}$ must split as
\[
\nu_{\ast} \mathcal{O}_{C} = \mathcal{O}_{\Gamma} \oplus \mathcal{E}^{\vee} \, ,
\]
where $\mathcal{E}$ is a rank two vector bundle on $\Gamma$ whose dual bundle $\mathcal{E}^{\vee}$ is the Tschirnhausen module of $\nu$. By \cite[Ex. IV.2.6(d), p.306]{Hart77} and {\rm Remark \ref{Sec2RemBnu}}, it follows that $\det (\nu_{\ast} \mathcal{O}_{C}) = \det \mathcal{E}^{\vee} = \mathcal{O}_{\Gamma} (-3E-2q)$. Since $\Gamma_0$ and $C$ meet exactly at the point $q_0$, which is mapped by $\nu$ to $q$ on $\Gamma$, we obtain by \cite[Ex. IV.2.6(a), p.306]{Hart77} that
\[
\det (\nu_{\ast} \mathcal{O}_{C} ( q_0)) \cong \det (\nu_{\ast} \mathcal{O}_{C}) \otimes \mathcal{O}_{\Gamma} (q) \cong \mathcal{O}_{\Gamma} (-3E-q) \, .
\]
Therefore
\[
\begin{aligned}
R^1 f_{\ast} \mathcal{O}_{S} (-2\Gamma_0 -(3E +q)\mathfrak{f})
& \cong \det (\nu_{\ast} \mathcal{O}_{C} ( q_0)) \otimes \det (\mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma} (-E))^{-1} \\
& \cong \mathcal{O}_{\Gamma} (-3E-q) \otimes \mathcal{O}_{\Gamma} (E) \\
& \cong \mathcal{O}_{\Gamma} (-2E-q) \, .
\end{aligned}
\]
Since
\[
\begin{aligned}
\Ext^1 (R^1 f_{\ast} \mathcal{O}_{S} (-2\Gamma_0 -(3E +q)\mathfrak{f}), \mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma} (-E))
& = \Ext^1 (\mathcal{O}_{\Gamma} (-2E-q), \mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma} (-E)) \\
& = H^1 (\Gamma , \mathcal{O}_{\Gamma} (2E+q) \oplus \mathcal{O}_{\Gamma} (E+q)) \\
& = 0 \, ,
\end{aligned}
\]
the exact sequence {\rm (\ref{PushForwardRed})} splits, the vanishing above holding because $\deg (2E+q) = 2e+1$ and $\deg (E+q) = e+1$ both exceed $2\gamma - 2$, and we obtain $\nu_{\ast} \mathcal{O}_{C} (q_0) \cong \mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma} (-E) \oplus \mathcal{O}_{\Gamma} (-2E-q)$. This completes the proof of {\rm (a)}.
In order to prove {\rm (b)}, we recall that $\nu : C \to \Gamma$ is a triple covering for which the rank two vector bundle $\mathcal{E}$ on $\Gamma$ is its Tschirnhausen module, that is, $\nu_{\ast} \mathcal{O}_{C} \cong \mathcal{O}_{\Gamma} \oplus \mathcal{E}^{\vee}$. Since the Tschirnhausen module of the triple cover $\nu : C \to \Gamma$ is determined uniquely up to a twist by a line bundle on $\Gamma$ and here $\iota : C \hookrightarrow S = \mathbb{P} (\mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma}(-E))$, it follows by \cite[Theorem 1.3, p.439]{CE96} that
\[
\nu_{\ast} \mathcal{O}_{C} \cong \mathcal{O}_{\Gamma} \oplus ((\mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma}(-E)) \otimes \mathcal{L} ) \, ,
\]
for some line bundle $\mathcal{L}$ on $\Gamma$. As was seen in the proof of {\rm (a)}, $\det (\nu_{\ast} \mathcal{O}_C) \cong \mathcal{O}_{\Gamma} (-3E-2q)$, from which we obtain $\mathcal{L} \cong \mathcal{O}_{\Gamma} (-E-q)$. Therefore
\[
\nu_{\ast} \mathcal{O}_{C} \cong \mathcal{O}_{\Gamma} \oplus ((\mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma}(-E)) \otimes \mathcal{L} ) \cong \mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma}(-E-q) \oplus \mathcal{O}_{\Gamma}(-2E-q) \, .
\]
\end{proof}
\medskip
Finally, we recall one more result that will be used in the proof of the {\rm Main Theorem} in section \ref{Sec3}.
\begin{prop}\label{PropInnerProjSESNormBund}(\cite[Proposition 2]{CIK22})
Let $X$ be a non-degenerate smooth integral curve in $\mathbb{P}^r$, where $r \geq 3$. Let $H$ be a hyperplane in $\mathbb{P}^r$ and $P$ be a point on $X$. Suppose that the inner projection $\varphi \, : \, X \to H \cong \mathbb{P}^{r-1}$ with center $P$ maps $X$ to a non-degenerate smooth integral curve $Y$ in $H$. Denote by $R_{\varphi}$ the ramification divisor of $\varphi$. Then there is an exact sequence
\[
0 \to \mathcal{O}_X (R_{\varphi}) \otimes \mathcal{O}_X (1) \otimes \mathcal{O}_X (2P)
\to N_{X / \mathbb{P}^r}
\to \varphi^{\ast} N_{Y / \mathbb{P}^{r-1}} \otimes \mathcal{O}_X (P)
\to 0 \, ,
\]
where $N_{X / \mathbb{P}^r}$ is the normal bundle of $X$ in $\mathbb{P}^r$ and $N_{Y / \mathbb{P}^{r-1}}$ is the normal bundle of $Y$ in $H \cong \mathbb{P}^{r-1}$.
\end{prop}
\bigskip
\section{ Proof of the main theorem }\label{Sec3}
Recall the basic numerical assumptions in the theorem: $\gamma \geq 3$ and $e \geq 4\gamma + 5$. Throughout this section we also fix
\[
g := 3e + 3 \gamma \, , \quad d := 3e+1 = g - 3\gamma + 1 \quad \mbox{ and } \quad r := e-\gamma + 1 = \frac{g}{3} - 2\gamma + 1 \, .
\]
The technique used in the proof is derived from \cite{CIK22}, \cite{CLM96} and \cite{Cil87}.
The proof itself proceeds in three main steps:
\begin{itemize}[font=\sffamily, leftmargin=1.8cm, style=nextline]
\item[{\rm \bf Step I.}] We construct a family $\mathcal{F}$ of curves satisfying the condition of the theorem, then we consider the closure $\mathcal{H}$ of the subset of $\mathcal{I}_{d, g, r}$ parametrizing
the family $\mathcal F$ and show that
\[
\dim \mathcal{H} = r^2 + 7e + 4 \, .
\]
\item[{\rm \bf Step II.}] For a general curve $X$ from the family $\mathcal{F}$ we show that
\[
\dim T_{[X]} \mathcal{H} = h^0 (X, N_{X / \mathbb{P}^{r}})
= r^2 + 7e + 5 = \dim \mathcal{H} + 1 \, .
\]
\item[{\rm \bf Step III.}] We show that $\mathcal{H}$ forms an irreducible component of $\mathcal{I}_{d, g, r}$.
\end{itemize}
\bigskip
{\bf Step I.} Construction of the family.
Let $\Gamma \in \mathcal{M}_{\gamma}$ be a general curve of genus $\gamma$ and $E$ be a general divisor of degree $e \geq 4\gamma + 5$ on $\Gamma$. Let $q \in \Gamma$. Consider the ruled surface $S := \mathbb{P} (\mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma}(-E))$ with natural projection $f : S \to \Gamma$. Denote by $\Gamma_0$ the section of minimal self-intersection on $S$, that is, $\Gamma^2_0 = -e$. As mentioned in section \ref{Sec2}, $\Pic (S) \cong \mathbb{Z}[\Gamma_0] \oplus {f}^{\ast} (\Pic (\Gamma))$. Just as there, for a divisor $\Delta \in \Div (\Gamma)$ we denote by $\Delta \mathfrak{f}$ the divisor ${f}^{\ast} (\Delta)$ on $S$. Consider the morphism $\Psi := \Psi_{|\Gamma_0 + E\mathfrak{f}|} : S \to \mathbb{P}^r$ determined by the complete linear series $|\Gamma_0 + E\mathfrak{f}|$ on $S$. Define $\mathcal{F}$ as the family of curves that are images of the divisors from the linear series $|3{\Gamma}_0 + (3E + q){\mathfrak{f}}|$ on $S$ under the morphism $\Psi$, obtained by varying $\Gamma$ in $\mathcal{M}_{\gamma}$, running $E$ through the set of all effective divisors of degree $e$ on $\Gamma$, and letting $q$ vary in $\Gamma$. For the dimension of $ \mathcal{F}$ we have
\noindent $\dim \mathcal{F} = $
\begin{itemize}[font=\sffamily, leftmargin=1.3cm, style=nextline]
\item[$ + $] $3\gamma - 3$ \ : \ number of parameters of curves $\Gamma \in \mathcal{M}_{ \gamma }$
\item[$ + $] $\gamma$ \ : \ number of parameters of line bundles $\mathcal{O}_{\Gamma} (E) \in \Pic (\Gamma)$ of degree $e \geq 4\gamma + 5$
necessary to fix the geometrically ruled surface $\mathbb{P} (\mathcal{O}_{\Gamma} \oplus \mathcal{O}_{\Gamma} (-E))$
\item[$ + $] $(r+1)^2 - 1 = \dim (\Aut (\mathbb{P}^{r}))$
\item[$ + $] $1$ \ : \ number of parameters necessary to fix $q \in \Gamma$
\item[$ - $] $(e- \gamma + 2) = \dim G_{F}$, where $G_{F}$ is the subgroup of $\Aut (\mathbb{P}^{r})$ fixing the scroll $F$, see \cite[Lemma 6.4, p. 148]{CCFM2009}
\item[$ + $] $6e - 3\gamma + 6 = \dim |3{\Gamma}_0 + (3E + q){\mathfrak{f}}|$ \ : \ number of parameters to choose a curve in the linear equivalence class of $3{\Gamma}_0 + (3E + q){\mathfrak{f}}$ on $S$.
\end{itemize}
\noindent Define $\mathcal{H}$ as the closure in $\mathcal{I}_{d, g, r}$ of the set parametrizing $\mathcal{F}$. Adding up the above contributions we get
\[
\dim \mathcal{H} = \dim \mathcal{F}
= r^2 + 7e + 4 \, .
\]
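Indeed, substituting $r = e - \gamma + 1$, the contributions listed above sum to
\[
(3\gamma - 3) + \gamma + \left( (r+1)^2 - 1 \right) + 1 - (e - \gamma + 2) + (6e - 3\gamma + 6)
= r^2 + 2r + 2\gamma + 5e + 2 = r^2 + 7e + 4 \, .
\]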
This completes {\rm Step I} of the proof.
\medskip
{\bf Step II.} Computation of the tangent space to $\mathcal{H}$.
Let $X \in \mathcal{F}$ be a general curve from the family, that is, $X$ is the image $\Psi (C)$ of a general $C \in |3\Gamma_0 + (3E + q)\mathfrak{f}|$ on $S$, the base curve $\Gamma \in \mathcal{M}_{\gamma}$ is general, and $E \in \Div^e (\Gamma)$ and $q \in \Gamma$ are also general. Also, $X$ lies on the cone $F := \Psi (S)$ over a curve $Y \subset \mathbb{P}^{r-1}$ that is the image $Y := \Psi (D)$ of a general $D \in |\Gamma_0 + E\mathfrak{f}|$. Let $l_q$ be the line from the ruling of $F$ that is the image of $q\mathfrak{f}$ and $Q = l_q \cap Y$. Denote by $\varphi : X \to Y$ the projection with center $P$ of $X$ to the hyperplane containing $Y$. It is a $3:1$ covering morphism. By {\rm Proposition \ref{Sec2PropRamBranch}} its ramification divisor $R_{\varphi}$ is linearly equivalent to the sum of the divisor cut on $X$ by a quadric hypersurface and the two points, say $Q_1$ and $Q_2$, other than $P$, in which the line $l_q$ meets $X$. Applying {\rm Proposition \ref{PropInnerProjSESNormBund}}, we obtain the short exact sequence
\begin{equation}\label{Sec3InnerProjSESNormBund}
0 \to \mathcal{O}_X (3) \otimes \mathcal{O}_X (Q_1 + Q_2 + 2P)
\to N_{X / \mathbb{P}^r}
\to \varphi^{\ast} N_{Y / \mathbb{P}^{r-1}} \otimes \mathcal{O}_X (P)
\to 0 \, ,
\end{equation}
in which $N_{X / \mathbb{P}^r}$ is the normal bundle of $X$ in $\mathbb{P}^r$ and $N_{Y / \mathbb{P}^{r-1}}$ is the normal bundle of $Y$ in $\mathbb{P}^{r-1}$. Due to $e \geq 2\gamma + 1$, the Hilbert scheme $\mathcal{I}_{e, \gamma, e-\gamma}$ is irreducible and generically smooth of the expected dimension $\dim \mathcal{I}_{e, \gamma, e-\gamma} = \lambda_{e, \gamma, r-1} = er - (r-4)(\gamma - 1)$. Since $\Gamma \in \mathcal{M}_{\gamma}$ is general and $Y$ is isomorphic to $\Gamma$ and $\deg Y = e$, it follows that
\[
h^0 (Y, N_{Y / \mathbb{P}^{r-1}}) = er - (r-4)(\gamma - 1) \, .
\]
\noindent For the degree of the line bundle $\mathcal{O}_X (3) \otimes \mathcal{O}_X (Q_1 + Q_2 + 2P)$ in {\rm (\ref{Sec3InnerProjSESNormBund})} we have
\[
\deg \left( \mathcal{O}_X (3) \otimes \mathcal{O}_X (Q_1 + Q_2 + 2P) \right)
= 3 \deg X + 4 = 3(3e+1) + 4 = 9e + 7 \, .
\]
According to the assumptions in the theorem, $g = 3e + 3\gamma$ and $e \geq 4\gamma + 5$, so for the degree of the line bundle $\mathcal{O}_X (3) \otimes \mathcal{O}_X (Q_1 + Q_2 + 2P)$ we obtain $9e + 7 > 6e + 6\gamma - 2 = 2g-2$. Thus, $\mathcal{O}_X (3) \otimes \mathcal{O}_X (Q_1 + Q_2 + 2P)$ is nonspecial, hence
\begin{equation}\label{h0NXinPr}
h^0 (X, N_{X / \mathbb{P}^r}) = h^0 (X, \mathcal{O}_X (3) \otimes \mathcal{O}_X (Q_1 + Q_2 + 2P)) + h^0 (X, \varphi^{\ast} N_{Y / \mathbb{P}^{r-1}} \otimes \mathcal{O}_X (P)) \, .
\end{equation}
By the Riemann-Roch theorem
\[
h^0 (X, \mathcal{O}_X (3) \otimes \mathcal{O}_X (Q_1 + Q_2 + 2P))
= (9e + 7) - g + 1 = 6e - 3\gamma + 8 \, .
\]
To compute $h^0 (X, \varphi^{\ast} N_{Y / \mathbb{P}^{r-1}} \otimes \mathcal{O}_X (P) )$ we use the projection formula, that is,
\[
\begin{aligned}
h^0 (X, \varphi^{\ast} N_{Y / \mathbb{P}^{r-1}} \otimes \mathcal{O}_X (P) )
& = h^0 (Y, {\varphi}_{\ast} ( \, {{\varphi}}^{\ast} N_{Y / \mathbb{P}^{r-1}} \otimes \mathcal{O}_{X} (P) \, ) ) \\
& = h^0 (Y, N_{Y / \mathbb{P}^{r-1}} \otimes {\varphi}_{\ast} \mathcal{O}_{X} (P) ) \, .
\end{aligned}
\]
By {\rm Proposition \ref{Sec2PropPushForwardStrSheaf}} we have $\varphi_{\ast} (\mathcal{O}_{X} (P)) \cong \mathcal{O}_{Y} \oplus \mathcal{O}_{Y} (-1) \oplus ( \mathcal{O}_{Y} (-2) \otimes \mathcal{O}_{Y} (-Q) )$, so it follows that
\begin{multline*}
h^0 (Y, N_{Y / \mathbb{P}^{r-1}} \otimes {\varphi}_{\ast} \mathcal{O}_{X} (P) ) \\
= h^0 (Y, N_{Y / \mathbb{P}^{r-1}}) + h^0 (Y, N_{Y / \mathbb{P}^{r-1}}(-1)) + h^0 (Y, N_{Y / \mathbb{P}^{r-1}}(-2) \otimes \mathcal{O}_{Y} (-Q) ) \, .
\end{multline*}
Since $Y \cong \Gamma$ is general in $\mathcal{M}_{\gamma}$, $\gamma \geq 3$, and $E$ is a general divisor of degree $e \geq 4\gamma+5$ on $\Gamma$, it follows by \cite[Proposition 2.1 and Proposition 2.12]{CLM96} that $h^0 (Y, N_{Y / \mathbb{P}^{r-1}} (-1)) = r$ and $h^0 (Y, N_{Y / \mathbb{P}^{r-1}} (-2)) = 0$. The last implies $h^0 (Y, N_{Y / \mathbb{P}^{r-1}} (-2) \otimes \mathcal{O}_{Y} (-Q) ) = 0$. Using that $r - 1 = e-\gamma$, we find
\[
\begin{aligned}
h^0 (X, {{\varphi}}^{\ast} N_{Y / \mathbb{P}^{r-1}} \otimes \mathcal{O}_{X} (P))
& = h^0 (Y, N_{Y / \mathbb{P}^{r-1}}) + h^0 (Y, N_{Y / \mathbb{P}^{r-1}}(-1)) \\
& = er - (r-4)(\gamma - 1) + r \\
& = r^2 + r + 4\gamma - 4 \, .
\end{aligned}
\]
Equation {\rm (\ref{h0NXinPr})} then gives $h^0 (X, N_{X / \mathbb{P}^r}) = (6e - 3\gamma + 8) + (r^2 + r + 4\gamma - 4) = r^2 + r + \gamma + 6e + 4 = r^2 + (e - \gamma + 1) + \gamma + 6e + 4 = r^2 + 7e + 5$. Therefore,
\begin{equation}\label{Sec3DimTH}
\dim T_{[X]} \mathcal{H} = \dim \mathcal{H}+1 = r^2 + 7e + 5 \, .
\end{equation}
This completes {\rm Step II}.
\medskip
{\bf Step III.} Showing that $\mathcal{H}$ forms an irreducible component of $\mathcal{I}_{d, g, r}$.
By definition, $\mathcal{H} \subset \mathcal{I}_{d,g,r}$ is the closure of the set parametrizing smooth integral curves of degree $d$ and genus $g$ on cones in $\mathbb{P}^{r}$ over the curves parametrized by $\mathcal{I}_{e, \gamma, r-1}$, as a general $[X] \in \mathcal{H}$ is in the linear equivalence class $3\Gamma_0 + (3E+q)\mathfrak{f}$ on the desingularization $S$ of a cone $F \subset \mathbb{P}^r$ over $Y \subset \mathbb{P}^{r-1}$ for a general $[Y] \in \mathcal{I}_{e, \gamma, r-1}$. The set $\mathcal{H}$ is clearly irreducible. To show that it is a component, we use that every flat deformation of a curve from $\mathcal{F}$ is again a curve on a cone in $\mathbb{P}^r$ over a curve from $\mathcal{I}_{e, \gamma, r-1}$.
\begin{lemma}\label{Sec3DeformLemma}
Let $p_{\mathcal{X}} : \mathcal{X} \to T$ be a flat family of projective curves in $\mathbb{P}^r$ for which there exists a closed point $t_0 \in T$ such that:
\begin{enumerate}[label=(\roman*), leftmargin=*, font=\rmfamily]
\item $\mathcal{X}_{t_0}$ is a smooth integral projectively normal curve of genus $g = 3e + 3\gamma$ and degree $3e + 1$;
\item $\mathcal{X}_{t_0}$ is contained in a cone $F$ over a curve $Y$ corresponding to a general point of $\mathcal{I}_{e, \gamma, r-1}$.
\end{enumerate}
Then there is a neighborhood $U$ of $t_0$ in $T$ such that, for all closed points $t \in U$, $\mathcal{X}_t$ is again a curve on a cone over a smooth integral projectively normal curve of genus $\gamma$ and degree $e$ in $\mathbb{P}^{r-1}$.
\end{lemma}
Assuming the validity of {\rm Lemma \ref{Sec3DeformLemma}}, the proof proceeds as follows. Suppose that $\tilde{X}$ is a flat deformation of $X$. {\rm Lemma \ref{Sec3DeformLemma}} implies that $\tilde{X}$ is contained in a cone $\tilde{F} \subset \mathbb{P}^r$ over a curve $\tilde{Y}$, where $[\tilde{Y}] \in \mathcal{I}_{e, \gamma, r-1}$ is general. Let $\tilde{S}$ be the desingularization of $\tilde{F}$ and $\tilde{C}$ be the proper transform of $\tilde{X}$ on $\tilde{S}$. {\rm Proposition \ref{Sec2PropIdentifyCurves}} implies that $\tilde{C} \sim 3\tilde{Y}_0 + (3 \tilde{E} + \tilde{q})\tilde{\mathfrak{f}}$, where $\tilde{f} : \tilde{S} \to \tilde{Y}$ is the corresponding surjective morphism, $\tilde{Y}_0$ is the section of minimal self-intersection, $\tilde{Y}^2_0 = -e$, $\tilde{E}$ is a divisor on $\tilde{Y}$ of degree $e$ such that $\tilde{S} \cong \mathbb{P} (\mathcal{O}_{\tilde{Y}} \oplus \mathcal{O}_{\tilde{Y}} (-\tilde{E}))$ and $\tilde{q}$ is a point on $\tilde{Y}$. Also, $\tilde{X}$ is the image of a curve in the linear series $|3\tilde{Y}_0 + (3 \tilde{E} + \tilde{q})\tilde{\mathfrak{f}}|$ under the morphism associated to $|\tilde{Y}_0 + \tilde{E} \tilde{\mathfrak{f}}|$. By the definition of $\mathcal{F}$, the above means that $\tilde{X}$ is a curve from the same family. Therefore $\mathcal{H}$ is a component of $\mathcal{I}_{3e+1, g, e-\gamma+1}$. The other statements formulated in the theorem are clear due to the properties of the curves in the family $\mathcal{F}$ and {\rm Proposition \ref{Sec2PropRamBranch}}.
To complete the proof of the theorem it remains to prove the lemma.
\begin{proof}[Proof of Lemma \ref{Sec3DeformLemma}]
To a large extent our proof repeats the steps of the proofs of similar statements given in \cite[Proposition 1.6, p.354--356]{Cil87} and \cite[Proposition 4.1, p.176--178]{CLM96}. For this reason, we refer, whenever possible, to the statements formulated and proved there.
The statement is local, so we can assume that $T = \Spec A$ for a Noetherian ring $A$. Thus, we have a flat family
\[
\mathcal{X} \subset \Proj A[x_0, x_1, \ldots , x_r] =: \mathbb{P}^r_A \, .
\]
Since projective normality is an ``open property'' and $\mathcal{X}_{t_0}$ is supposed to be projectively normal, we can assume further that the family $\mathcal{X}$ consists of projectively normal curves. By \cite[Ex. III.9.5, p.267]{Hart77} the family $\mathcal{X}$ must be \emph{very flat}. In particular, the number of generators in any degree $n$ of the ideal $I(\mathcal{X}_t)$ of a curve $\mathcal{X}_t \subset \mathbb{P}^r$ from the family is the same for all $t \in T$. Consider the homogeneous ideal $I(\mathcal{X})$ of $\mathcal{X}$ in the ring
\[
R := A[x_0, x_1, \ldots , x_r]
\]
and let $I(\mathcal{X})_2$ be the vector space of its elements of degree two, that is,
\[
I(\mathcal{X})_2 := H^0 (\mathbb{P}^r_A, \mathcal{I}_{\mathcal{X}} (2)) \, ,
\]
where $\mathcal{I}_{\mathcal{X}}$ is the ideal sheaf of $\mathcal{X}$. Take $J \subset R$ to be the ideal
\[
J := \langle I(\mathcal{X})_2 \rangle \,
\]
generated by the elements of degree two. Consider the closed subscheme $\mathcal{W} \subset \mathbb{P}^r_A$ defined as
$
\mathcal{W} := \Proj (R / J) \subset \mathbb{P}^r_A \, .
$
It is indeed a family $p_\mathcal{W} : \mathcal{W} \to T$ parametrized by $T = \Spec (A)$ and we have a commutative diagram
\[
\begin{tikzcd}[cells={anchor=west}]
\mathcal{X} \subset \mathcal{W} \subset \mathbb{P}^r_A
\ar[d, "{p_\mathcal{X}}"', start anchor={[shift={(12pt,-6pt)}]west}, end anchor={[shift={(18pt,-4pt)}]north west}]
\ar[d, "{p_\mathcal{W}}", start anchor={[shift={(37pt,-6pt)}]west}, end anchor={[shift={(-28pt,-4pt)}]north east}] \\
\Spec (A)
\end{tikzcd}
\]
The goal is to show that $p_\mathcal{W} : \mathcal{W} \to T = \Spec (A)$ is a flat family.
By assumption $\mathcal{X}_{t_0}$ is a smooth curve of genus $g = 3e + 3\gamma$ and degree $3e + 1$ contained in a cone $F$ over a smooth integral projectively normal curve $Y$ of genus $\gamma$ and degree $e$ in $\mathbb{P}^{r-1}$. By {\rm Proposition \ref{Sec2PropIdentifyCurves}} this means that the proper transform of $\mathcal{X}_{t_0}$ on the desingularization $S$ of $F$ is in the linear equivalence class of $3Y_0 + (3E + q)\mathfrak{f}$, where, just as before, $f : S \to Y$ is the surjective morphism for the decomposable ruled surface $S$, $Y_0$ is the section of minimal self-intersection, $E$ is a divisor of degree $e$ on $Y$ such that $S \cong \mathbb{P}(\mathcal{O}_Y \oplus \mathcal{O}_Y (-E))$ and $q \in Y$ is a point. Since $Y$ is a general curve of genus $\gamma \geq 3$ and degree $e \geq 4\gamma + 5$ in $\mathbb{P}^{r-1}$, it follows by \cite{GL86} that the first several terms of the minimal free resolution of its ideal sheaf $\mathcal{I}_{Y}$ appear as
\[
\begin{tikzcd}[row sep=0pt, cells={anchor=west}]
\cdots \ar[r] & \bigoplus\limits^{\beta_3}_{j=1} \mathcal{O}_{\mathbb{P}^{r-1}} (-4) \ar[r] & \bigoplus\limits^{\beta_2}_{j=1} \mathcal{O}_{\mathbb{P}^{r-1}} (-3) \ar[r] & \bigoplus\limits^{\beta_1}_{j=1} \mathcal{O}_{\mathbb{P}^{r-1}} (-2) \ar[r] & \mathcal{I}_{Y} \ar[r] & 0 \,
\end{tikzcd}
\]
where $\beta_1, \beta_2, \ldots $ are the \emph{Betti numbers}. By \cite[Proposition 2, p. 232]{CG99} it follows that the first several terms of the minimal free resolution of the ideal sheaf $\mathcal{I}_{\mathcal{X}_{t_0}}$ of $\mathcal{X}_{t_0} \subset \mathbb{P}^r$ are
\begin{equation}\label{Sec3ResolIdSheafX}
\begin{tikzcd}[row sep=0pt, cells={anchor=west}]
\cdots \ar[r] & \mathcal{P}_3 \ar[r] & \mathcal{P}_2 \ar[r] & \mathcal{P}_1 \ar[r] & \mathcal{I}_{\mathcal{X}_{t_0}} \ar[r] & 0 \, ,
\end{tikzcd}
\end{equation}
where
\begin{itemize}
\item $\mathcal{P}_1 = \bigoplus\limits^{r-1}_1 \mathcal{O}_{\mathbb{P}^{r}}(-4) \oplus \bigoplus\limits^{\beta_1}_{j=1} \mathcal{O}_{\mathbb{P}^{r}} (-2)$
\item $\mathcal{P}_2 = \bigoplus\limits^{\binom{r-1}{2}}_1 \mathcal{O}_{\mathbb{P}^{r}}(-5) \oplus \bigoplus\limits^{\beta_2}_{j=1} \mathcal{O}_{\mathbb{P}^{r}} (-3) \oplus \bigoplus\limits^{\beta_1}_1 \mathcal{O}_{\mathbb{P}^{r}}(-5)$
\item $\mathcal{P}_3 = \bigoplus\limits^{\binom{r-1}{3}}_1 \mathcal{O}_{\mathbb{P}^{r}}(-6) \oplus \bigoplus\limits^{\beta_3}_{j=1} \mathcal{O}_{\mathbb{P}^{r}} (-4) \oplus \bigoplus\limits^{\beta_2}_1 \mathcal{O}_{\mathbb{P}^{r}}(-6)$
\end{itemize}
To deduce the flatness of the family $p_{\mathcal{W}} : \mathcal{W} \to T$ we make use of resolutions of the ideal $I(\mathcal{X}) \subset R$ of $\mathcal{X}$, the ideal $I(\mathcal{X}_{t_0})$ of $\mathcal{X}_{t_0}$ in the localization $R_{t_0}$ of $R$ at $t_0$, and also of the ideal $J$ of $\mathcal{W} \subset \mathbb{P}^r_A$. Remark that due to {\rm (\ref{Sec3ResolIdSheafX})}, the ideal $I(\mathcal{X}_{t_0})$ has a presentation
\begin{equation}\label{Sec3ResolIdealX0}
\begin{tikzcd}[row sep=0pt, cells={anchor=west}]
P_2 \ar[r] & P_1 \ar[r] & I(\mathcal{X}_{t_0}) \ar[r] & 0 \, ,
\end{tikzcd}
\end{equation}
where
$P_1 = \bigoplus\limits^{\beta_1}_{j=1} R_{t_0} (-2) \oplus \bigoplus\limits^{r-1}_1 R_{t_0} (-4)$
and
$P_2 = \bigoplus\limits^{\beta_2}_{j=1} R_{t_0} (-3) \oplus \bigoplus\limits^{\binom{r-1}{2} + \beta_1}_1 R_{t_0} (-5)$. By the result of Catalisano and Gimigliano explained in section \ref{Sec2}, the zero locus of the degree two generators of $I(\mathcal{X}_{t_0})$ is precisely the cone $F$ containing the curve $\mathcal{X}_{t_0}$, that is, $V (J_{t_0}) \equiv F$, where $J_{t_0}$ is the ideal of the fiber $\mathcal{W}_{t_0}$ of $\mathcal{W}$ at the point $t_0$. Just as in the proof of \cite[Proposition 1.6]{Cil87}, one obtains a commutative diagram
\begin{equation}\label{Sec3CommuDiagramX0W0}
\begin{tikzcd}[row sep=25pt, cells={anchor=west}]
Q_2 \ar[r, "{\delta}"] \ar[d, hook] & Q_1 \ar[r] \ar[d, hook] & I(\mathcal{W}_{t_0}) \ar[r] \ar[d, hook] & 0 \\
P_2 \ar[r, "{\theta}"] & P_1 \ar[r] & I(\mathcal{X}_{t_0}) \ar[r] & 0
\end{tikzcd}
\end{equation}
\noindent with exact rows, where
$Q_1 = \bigoplus\limits^{\beta_1}_{j=1} R_{t_0} (-2)$
and
$Q_2 = \bigoplus\limits^{\beta_2}_{j=1} R_{t_0} (-3)$, $\delta$ is represented by a $\beta_1 \times \beta_2$ matrix of linear forms and $\theta$ is represented by a matrix $M$ of the form
\[
M = \begin{pmatrix}
M_{1,1} & M_{1,2} \\
0 & M_{2,2}
\end{pmatrix} \, ,
\]
for which $M_{1,1}$ is a $\beta_1 \times \beta_2$ matrix of linear forms, $M_{1,2}$ is a $\beta_1 \times \left( \binom{r-1}{2} + \beta_1 \right)$ matrix and $M_{2,2}$ is a $(r-1) \times \left( \binom{r-1}{2} + \beta_1 \right)$ matrix. As it is explained in \cite[Proposition 1.6]{Cil87}, because of the very flatness of the family $p_{\mathcal{X}} : \mathcal{X} \to T$ there is a presentation
\begin{equation}\label{Sec3ResolIdealX}
\begin{tikzcd}[row sep=0pt, cells={anchor=west}]
\mathscr{P}_2 \ar[r, "{\Theta}"] & \mathscr{P}_1 \ar[r] & I(\mathcal{X}) \ar[r] & 0 \, ,
\end{tikzcd}
\end{equation}
of $I(\mathcal{X})$ by free $R$-modules such that the localization of the sequence {\rm (\ref{Sec3ResolIdealX})} at $k(t_0):= A/m_{t_0}$ gives {\rm (\ref{Sec3ResolIdealX0})}, where $m_{t_0} \subset A$ is the ideal corresponding to the point $t_0 \in T$ and $\Theta$ is a homogeneous map represented by a matrix $\mathscr{M}$ of the form
\[
\mathscr{M} = \begin{pmatrix}
\mathscr{M}_{1,1} & \mathscr{M}_{1,2} \\
\mathscr{M}_{2,1} & \mathscr{M}_{2,2}
\end{pmatrix} \, ,
\]
which modulo the ideal $m_{t_0}$ of $t_0$ gives the matrix $M$. The same reasoning on degrees as in \cite[Proposition 1.6]{Cil87} gives that $\mathscr{M}_{2,1} = 0$, and one can ``chop off'' from {\rm (\ref{Sec3ResolIdealX})} an exact sequence
\begin{equation}\label{Sec3ResolIdealW}
\begin{tikzcd}[row sep=0pt, cells={anchor=west}]
\mathscr{Q}_2 \ar[r, "{\Delta}"] & \mathscr{Q}_1 \ar[r] & J \ar[r] & 0 \, ,
\end{tikzcd}
\end{equation}
where $\Delta$ is a homogeneous map such that, tensoring {\rm (\ref{Sec3ResolIdealW})} with $k(t_0)$, we get the first row of {\rm (\ref{Sec3CommuDiagramX0W0})}. This means that the corank of the map $\Delta$ at each localization at $k(t) = A/m_t$, where $m_t$ is the ideal corresponding to $t \in T$, is the same for all $t$, or equivalently, that $\dim (J_t)_d$ is the same for all $t \in T$. This implies that the family $p_{\mathcal{W}} : \mathcal{W} \to T$ is (very) flat. In particular, it is a family of surfaces in $\mathbb{P}^r$ one of whose fibers, namely $\mathcal{W}_{t_0}$, is a cone over a smooth integral projectively normal curve in $\mathbb{P}^{r-1}$ of genus $\gamma \geq 3$ and degree $e \geq 4\gamma + 5$.
For the remaining part of the proof of the lemma we refer to \cite[Proposition 4.1, p.176--178]{CLM96}. It is proven there that if $p_{\mathcal{W}} : \mathcal{W} \to T$ is a flat family of surfaces in $\mathbb{P}^r$, one of whose fibers is a cone like $\mathcal{W}_{t_0}$ above, then the remaining fibers of the family are also cones over smooth curves of the same genus and degree in $\mathbb{P}^{r-1}$. We remark that the proof uses a result of Pinkham, namely \cite[Theorem 7.5, p.45]{Pin74Ast} about cones over curves of genus $\gamma$ and degree $e$, in which it is required that $e \geq 4\gamma + 5$. Thus the lemma is proved.\end{proof}
This completes Step III and thus the proof of the {\rm Main Theorem}.
\begin{remark}\label{Sec3Remark2-3case}
The technique used in {\rm Step III} of the proof cannot be applied to prove \cite[Theorem B]{CIK22}. In that paper we considered a family of curves that also lie on cones, such that each curve is a double cover of the base of the cone that contains it and also passes through its vertex. {\rm Proposition \ref{Sec2PropCatGim}} can be applied to obtain a resolution of the ideal of a curve from the family. However, the ideal is generated by polynomials of degree two and three, which is insufficient to deduce the existence of a presentation like {\rm (\ref{Sec3ResolIdealW})} of the ideal of a similarly defined variety like $\mathcal{W}$ here. That is, one cannot conclude there that $\mathscr{M}_{2,1} = 0$, as we were able to do here because $I(\mathcal{X}_t)$ is generated by polynomials of degree two and four. In a sense, our present work grew out of our failure to apply some techniques used in \cite{CLM96} and \cite{Cil87} in the proof of \cite[Theorem B, step III]{CIK22}.
\end{remark}
\begin{remark}
For a component $\mathcal{D}$ of the Hilbert scheme $\mathcal{I}_{d,g,r}$ the difference $\sigma(\mathcal{D}) := \dim \mathcal{D} -\lambda_{d,g,r}$ is called \emph{superabundance}. It is not difficult to compute that for our $\mathcal{H} \subset \mathcal{I}_{3e+1, 3e+3\gamma, e-\gamma+1}$ we have $\sigma (\mathcal{H}) = (r-4)e+2(r-5)(e-r)-3$, and using the numerical assumptions in our {\rm Main Theorem}, $\sigma (\mathcal{H}) \geq 224$.
\end{remark}
|
{
"arxiv_id": "2302.08714",
"language": "en",
"timestamp": "2023-02-20T02:07:39",
"url": "https://arxiv.org/abs/2302.08714",
"yymm": "2302"
} | \section{Introduction}
With the development of deep learning, embedding-based retrieval (EBR) achieves great advances in real-world applications, such as web search \cite{wu2020zero}, social search~\cite{huang2020embedding}, e-commerce search~\cite{li2021embedding}, \textit{etc}.
Generally speaking, a typical industrial search-related system is composed of a ``recall-rerank'' architecture (as demonstrated in Figure \ref{Fig.overview}), in which the efficiency of the recall module with EBR algorithms is the bottleneck of the whole system as it needs to process massive documents.
Unlike conventional inverted index-based term matching~\cite{robertson2009probabilistic} that measures similarity through lexical analysis, EBR represents queries and documents as dense feature vectors.
Given a query, EBR retrieves a set of relevant documents according to their embedding similarities in the latent space.
The enormous scale of documents and high concurrent queries pose great challenges to an industrial EBR system, including retrieval latency, computation cost, storage consumption, and embedding upgrades.
There are previous attempts to develop more efficient EBR systems with advanced ANN (Approximate Nearest Neighbor) algorithms, \textit{e.g.}, HNSW \cite{malkov2018efficient}.
Despite their achievements in saving computation, they require elaborate designs to be adapted and plugged into existing systems.
Given the large number and variety of EBR systems for Tencent products, the development costs of upgrading all the existing ANN algorithms are non-negligible and even unaffordable.
Toward this end, we focus on the most fundamental component of EBR, that is, embedding, also known as representation learning in the deep learning community.
Properly compressed embeddings are compatible with mainstream ANN algorithms and can be seamlessly integrated into existing EBR systems.
In this work, we propose a binary embedding-based retrieval (BEBR) engine that has several appealing benefits:
(i) customizable embedding compression rates to strike a trade-off between accuracy and costs;
(ii) a task-agnostic and modal-agnostic efficient training paradigm for easy generalization and data security protection;
(iii) a free embedding upgrading mechanism with backward compatibility, \textit{i.e.}, no need to refresh the index.
BEBR has been well deployed on multiple Tencent products equipped with various ANN algorithms (\textit{e.g.}, IVF \cite{moffat1996self}, HNSW \cite{malkov2018efficient}) with almost no accuracy loss and 30$\sim$50\% cost savings at the system level.
Specifically, inspired by recurrent binary embeddings~\cite{shan2018recurrent} that progressively refine a base binary vector with binary residual vectors to meet task accuracy, BEBR develops a universal binarization algorithm with state-of-the-art performances across modalities. Rather than the simple linear transformations used in \cite{shan2018recurrent}, BEBR adopts multilayer perceptron (MLP) blocks with non-linear layers (\textit{i.e.}, ReLU \cite{agarap2018deep}) for both binarization (float$\to$binary) and reconstruction (binary$\to$float) in the recurrent learning paradigm.
As illustrated in Figure \ref{Fig.RBE}, the binarization, reconstruction, and residual blocks together form the recurrent binarization module with a customized number of loops, \textit{i.e.}, bits per dimension.
The recurrent binary embeddings with richer representations are much more discriminative than ordinary hash vectors \cite{cao2017hashnet}.
In previous works, the binarization (or hashing) module is usually optimized end-to-end with the backbone network, \textit{e.g.}, CNNs \cite{su2018greedy} for vision, and Transformers \cite{ou2021refining} for text. The training is expensive considering the heavy backbone models for accurate retrieval.
We, therefore, introduce an efficient training paradigm that requires only floating-point vectors as input.
The lightweight binarization module is trained individually without accessing the backbone models, forming a universal training procedure for all the modalities and tasks.
To enable effective representation learning in such an embedding-to-embedding paradigm, we use contrastive learning with queue-based hard negative mining as the training objectives.
Besides the index costs, large-scale EBR systems heavily suffer from the computational overhead required by embedding model upgrades. In particular, all the embeddings in the index need to be re-extracted before the deployment of a new model, which may take weeks or even months for industrial applications. Thanks to the pioneering work in compatible representation learning \cite{shen2020towards,hu2022learning}, we take the first step to investigate compatible training of binary embeddings. Equipped with backward-compatible learning, our BEBR engine is able to harvest the benefit of the new model immediately, \textit{i.e.}, the queries encoded by the new model can be directly indexed among the old index.
We further propose Symmetric Distance Calculation (SDC) of recurrent binary embeddings, a novel technique that achieves significant speedup over
the conventional Hamming-based distance calculation
in \cite{shan2018recurrent}.
SDC leverages the in-register cache to perform fast SIMD (Single Instruction Multiple Data) look-up instructions and is especially in favor of CPU-based computation platforms.
Comprehensive experiments on public benchmarks, internal datasets, and online A/B tests on Tencent products fully demonstrate the effectiveness of our BEBR engine. It has been successfully deployed on almost all ranges of index-based applications in Tencent PCG, including web search (Sogou), video search, copyright detection, video recommendation, \textit{etc}.
The contributions are four-fold.
\begin{itemize}
\item[$\bullet$] We propose a binary embedding-based retrieval (BEBR) engine that efficiently indexes among tens of billions of documents in Tencent products. The proposed method can be equipped with various ANN algorithms and integrated into existing systems seamlessly.
\item[$\bullet$] BEBR drastically reduces both the memory and disk consumption while achieving superior retrieval performance with the benefit of tailored recurrent binarization and symmetric distance calculation.
\item[$\bullet$] BEBR develops a universal training paradigm for all modalities without accessing the raw data and backbone networks, \textit{i.e.}, the binary embeddings are trained efficiently in a task-agnostic embedding-to-embedding manner.
\item[$\bullet$] BEBR enables backward-compatible upgrades of embedding models, that is, the new model can be immediately deployed without refreshing the index embeddings. We for the first time study compatible learning on binary embeddings.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/overview1.pdf}
\vspace{-5pt}
\caption{A brief structure of the Tencent search system, composed of the introduced binary embedding-based retrieval (BEBR) engine for recall and a re-rank post-process layer.
}\vspace{-5pt}
\label{Fig.overview}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{figures/framework6.pdf}
\caption{Our binary embedding-based retrieval (BEBR) framework. The full-precision float embeddings, extracted by the backbone networks, are transformed to recurrent binary vectors using a parametric binarization module $\varphi$ in a task-agnostic embedding-to-embedding manner. BEBR enables backfill-free upgrades for the binarization model, that is, the new model can be immediately deployed for encoding better query embeddings without refreshing the index.
}
\label{Fig.framework}
\end{figure*}
\section{Related work}
\subsection{Embedding-based Retrieval in Search}
Representation learning for embedding-based retrieval has attracted much attention in academia and industry given its great success in various domains.
For example, as a social search engine, Facebook learns semantic embeddings for personalized search, which serves an EBR system with ANN parameter tuning \cite{huang2020embedding}.
For e-commerce search, Taobao proposes a Multi-Grained Deep Semantic Product Retrieval (MGDSPR) \cite{li2021embedding} system to capture the relation between user query semantics and his/her personalized behaviors. JD proposes Deep Personalized and Semantic Retrieval (DPSR)\cite{zhang2020towards} to combine text semantics and user behaviors. Amazon develops a Siamese network to address the semantic gap problem for semantic product retrieval \cite{nigam2019semantic}. For web search, Google adopts zero-shot heterogeneous transfer learning in the recommendation system to improve search performance \cite{wu2020zero}.
While none of the aforementioned methods studies the trade-off between performance and costs in EBR implementation, this paper discusses the binary embedding-based retrieval system, which can achieve near-lossless performance with significant cost reduction.
\subsection{ANN Methods}
Plenty of research has been devoted to developing efficient ANN algorithms.
Some of them build graphs from datasets to avoid the exhaustive search where each vertex in the graph is associated with a data point. Others encode the embeddings into compact codes to reduce memory consumption and speed up distance calculation.
Specifically, the graph-based methods generally leverage the k-Nearest Neighbor graph to allow fast navigation in the index. For example, ~\cite{malkov2012scalable} proposes a proximity graph algorithm called Navigable Small World (NSW), which utilizes navigable graphs. Hierarchical NSW ~\cite{malkov2018efficient} offers a much better logarithmic complexity scaling with a controllable hierarchy. More recently, Navigating Spreading-out Graph (NSG) ~\cite{fu2017fast} proposes a novel graph structure that guarantees very low search complexity. It strongly outperforms previous state-of-the-art approaches. Although these graph-based algorithms achieve high search performance at high precision, they need more memory space and data-preprocessing time than product quantization and hashing-based methods. Therefore, in frequent updating scenarios, building a graph-based index on a large dataset is impractical.
Product quantization (PQ) ~\cite{jegou2010product} decomposes the space into a Cartesian product of low dimensional subspaces and quantizes each subspace separately. Cartesian K-means (CKM)~\cite{norouzi2013cartesian} and Optimized Product Quantizers (OPQ)~\cite{ge2013optimized} extends the idea of space decomposition and optimizes the sub-space decomposition by arbitrarily rotating and permutating vector components. In addition, variants of quantization models ~\cite{babenko2014additive,babenko2015tree,zhang2014composite} inspired by PQ have been proposed. These models offer a lower quantization error than PQ or OPQ. Recently, PQ Fast Scan~\cite{andre2019derived} pioneers the use of SIMD for Asymmetrical Distance Calculation (ADC) evaluation, and later works \cite{andre2019quicker} \cite{blalock2017bolt} \cite{andre2017accelerated} are proposed to optimize the quantization scheme for achieving lower search latency for indexed databases. Inspired by the ADC techniques, we propose a symmetrical distance calculation (SDC) to enable efficient search in BEBR retrieval.
Hashing-based algorithms have recently gained popularity due to their advantages in computing and storage.
Existing hashing-based algorithms can be organized into two categories: locality-sensitive hashing ~\cite{charikar2002similarity, indyk1998approximate} and learning to hash. Locality-sensitive hashing (LSH) maps the original data into several hash buckets where similar items are more likely to fall into the same bucket. Despite the tremendous efforts~\cite{indyk1998approximate, broder1997syntactic, gan2012locality, ji2012super, li2006very} that have been exerted to improve the performance of LSH, it still requires multiple hash tables to maintain the recall rate of search, which limits its application on large-scale datasets. Learning to hash is a data-dependent approach that learns hash functions from a specific dataset. With the development of deep learning, many methods ~\cite{cao2017hashnet, fan2020deep, li2015feature, liu2018deep, su2018greedy} adopt the powerful capabilities of deep neural networks (DNNs) to learn complex hash functions and obtain binary codes in an end-to-end manner. Instead of converting data into normal binary vectors, ~\cite{shan2018recurrent} proposes recurrent binary embedding to achieve a balanced goal of retrieval performance, speed, and memory requirement. Specifically, it progressively adds a residual binary vector to the base binary vector. A GPU-based k-nearest neighbor (K-NN) selection algorithm is also implemented, enabling exhaustive real-time search on billion-scale data sets. In this paper, towards the goal of efficient and low-cost embedding quantization, we use off-the-shelf floating-point embeddings as input to learn recurrent binary embeddings. Furthermore, we provide an efficient method to calculate the distance between recurrent binary embeddings using CPUs, the most common computing devices in industrial retrieval systems.
\subsection{Compatibility of Deep Neural Networks}
Compatible representation learning aims at making embeddings comparable across models.
It has attracted increasingly extensive attention in industry and academia due to its ability to reduce computation costs in embedding upgrades. Specifically, there are two types of compatibility: cross-model and backward compatibility. Cross-model compatibility learning usually trains transformation modules to map the embeddings from different models into a common space. $R^3AN$ ~\cite{chen2019r3} firstly introduces the problem of cross-model compatibility in face recognition and tackles it by learning a transformation that transforms source features into target features through a process of reconstruction, representation, and regression. ~\cite{wang2020unified} strikingly ameliorates the cross-model compatibility performance by coming up with a unified representation learning framework. In detail, they design a lightweight Residual Bottleneck Transformation (RBT) module and optimize it with a classification loss, a similarity loss, and a KL-divergence loss.
While cross-model compatibility handles embeddings from different models, backward compatibility fixes its attention on model updates where new models are trained with additional compatibility constraints. In a sense, it enables compatibility between new embeddings and old embeddings without any extra transformation processes. ~\cite{shen2020towards} is the first work that utilizes backward compatibility to conduct model upgrades. It introduces an influence loss when training the new model to enable direct comparison between new embeddings and old embeddings. Under this backward compatibility framework, several works make attempts to ameliorate the performance of backward compatibility by leveraging hot-refresh backward-compatible model upgrades ~\cite{zhang2022hot}, asymmetric retrieval constraints ~\cite{budnik2021asymmetric}, embedding cluster alignment loss ~\cite{meng2021learning}, and neural architecture search ~\cite{duggal2021compatibility}. In this paper, we adopt backward compatibility training to learn backward compatible binary embeddings. To our best knowledge, this is the first time compatible learning has been applied to binary embedding.
\section{Binary Embedding-based Retrieval}
\subsection{Preliminary}
Given a query $q$ (generally a text in a web search or a video in copyright detection), an embedding-based retrieval (EBR) system aims to rank the documents $\left \{ d_0, d_1,\cdots,d_n \right \}$ according to their similarities.
There are two key factors in EBR, the embedding model(s) and the distance calculation metric $\mathcal{D}(\cdot, \cdot)$.
The cosine similarity is widely used as $\mathcal{D}(\cdot, \cdot)$ for full precision embeddings (float vectors).
Formally, the similarity between a certain query and a document is
\begin{equation}
\mathcal{S}_\text{EBR}(q, d_k) = \mathcal{D}\left(\psi(q), \phi(d_k)\right), ~~~\forall k\in\{1,\cdots,n\},
\end{equation}
where $\psi$ and $\phi$ are embedding models for queries and documents. $\psi$ and $\phi$ can be designed to be identical or distinct to handle different retrieval tasks with homogeneous or heterogeneous input~\cite{guo2022semantic}.
For example, we may have ResNet~\cite{he2016deep} for image data and Transformer~\cite{devlin2018bert} for text.
Without loss of generality, we consider homogeneous architecture (\textit{i.e.}, $\psi$ and $\phi$ are identical and both denoted as $\phi$) in the following cases.
To tackle the billion-level indexing at a moderate cost, we introduce binary embedding-based retrieval (BEBR) engine with much more efficient similarity calculation between queries and documents, such as
\begin{equation}\label{eq:sim_bebr}
\mathcal{S}_\text{BEBR}(q, d_k) = \mathcal{D}\left(\phi\circ\varphi(q), \phi\circ\varphi(d_k)\right), ~~~\forall k\in\{1,\cdots,n\},
\end{equation}
where $\varphi(\cdot)$ is the binarization process and is generally realized by a parametric network.
In the following sections, we will introduce the detailed designs of $\varphi(\cdot)$ in Section \ref{sec:rb} and $\mathcal{D}(\cdot, \cdot)$ in Section \ref{sec:deploy}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/rbe_flow.pdf}
\vspace{-10pt}
\caption{The architecture of recurrent binary embedding model $\varphi$. $\ominus$ and $\oplus$ denote minus and plus operations between two input embeddings, respectively. }\vspace{-5pt}
\label{Fig.RBE}
\end{figure}
\subsection{Recurrent Binarization}\label{sec:rb}
\subsubsection{Architecture}
To tackle the problem of learning to binarize, a straightforward solution is to adopt hashing networks \cite{zheng2020deep} ending with a binarization function $\rho$, which plays the key role of converting float vectors into binary ones composed of either $-1$ or $+1$. In the forward pass, $\rho$ is formulated as
$$ \rho(x)\equiv\text{sign}(x)=\left\{
\begin{aligned}
-1 &, ~~~ x\le0 \\
1 &, ~~~ x>0
\end{aligned}
\right.
$$
Since the gradient of the sign function vanishes and thus cannot be back-propagated directly, \cite{courbariaux2016binarized} introduced a straight-through estimator (STE) that takes the gradient of the identity function instead, that is, $\rho'(x)=1$ when $|x|\le1$ and $\rho'(x)=0$ otherwise.
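As an illustration, a minimal PyTorch-style sketch of the sign binarization with the straight-through estimator could look as follows; the class and helper names are our own and do not refer to any released BEBR code:
\begin{verbatim}
import torch

class BinarizeSTE(torch.autograd.Function):
    """sign(x) in the forward pass, straight-through estimator in backward."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Map x <= 0 to -1 and x > 0 to +1, as in the definition of rho above.
        return torch.where(x > 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass the gradient through where |x| <= 1, block it elsewhere.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

def rho(x):
    return BinarizeSTE.apply(x)
\end{verbatim}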
Conventional hashing methods generally convert the float vectors to binary embeddings once with some learnable layers and a binarization function introduced above. However, such methods suffer from unsatisfactory performance due to the limited representation ability of the hash codes that only have $-1$ or $+1$ values, \textit{i.e.}, 1 bit per dimension.
Thanks to the pioneering work \cite{shan2018recurrent}, binary embeddings are able to be progressively refined with customized bits per dimension.
Specifically, following the insight of using residual operations to gradually narrow the gap between original float vectors and the learned binary ones, we introduce a recurrent binarization module with customized loops, as demonstrated in Figure \ref{Fig.RBE}.
There are three main components, including binarization block, reconstruction block, and residual block. The binarization block performs almost the same as conventional hashing networks, where a binary embedding $b_0$ is encoded from the float vector $f$ as input: $b_0 = \rho(\boldsymbol{W}_0(f))\in \{-1, 1\}^{m}$,
where $\rho$ is the binarization function, and $\boldsymbol{W}_0$ is the multi-layer perceptron (MLP) that consists of linear, batch normalization, and ReLU layers.
The encoded binary embedding $b_0$ is then reconstructed back to float vectors, such as $\hat{f}_0 = \|\boldsymbol{R}_0(b_0)\|$,
where $\boldsymbol{R}_0$ is also a multi-layer perceptron (MLP).
The residual between the original $f$ and reconstructed $\hat{f}_0$, therefore, reflects the representation loss of the binarization process, which can be further narrowed by repeating the above steps to binarize the residual parts.
The residual binary vector can be formulated as $r_0=\rho(\boldsymbol{W}_1(f-\hat{f}_0))$, which is further added to the base binary vector $b_0$ via $b_1=b_0+\frac{1}{2}r_0$. The weight $\frac{1}{2}$ is chosen to ease the similarity calculation with only $xor$ and $popcount$ (see \cite{shan2018recurrent} for the detailed derivation).
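To make the role of $xor$ and $popcount$ concrete, the following plain-Python sketch (all helper names are ours; it is written for an arbitrary number of residual components, matching the general recurrence given below) computes the inner product of two recurrent binary embeddings whose $\pm1$ components are packed as bit masks, using the identity $\langle x, y\rangle = m - 2 \cdot popcount(x \oplus y)$ for $\pm1$ vectors of dimension $m$:
\begin{verbatim}
def pm1_dot(x_bits: int, y_bits: int, m: int) -> int:
    # Inner product of two {-1, +1}^m vectors packed as m-bit integers
    # (bit 1 <-> +1): equal bits contribute +1, differing bits contribute -1.
    return m - 2 * bin(x_bits ^ y_bits).count("1")

def rbe_dot(query, doc, m: int) -> float:
    # Inner product of two recurrent binary embeddings given as lists of
    # bit-packed components [b_0, r_0, r_1, ...]; component i has weight 2^-i.
    total = 0.0
    for i, qc in enumerate(query):
        for j, dc in enumerate(doc):
            total += 2.0 ** (-(i + j)) * pm1_dot(qc, dc, m)
    return total
\end{verbatim}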
So far, we have introduced the process of recurrent binarization with the number of loops set to $1$, \textit{i.e.}, the refinement is repeated once. In real-world applications, the number of loops can be customized to trade off accuracy and efficiency. Formally, the whole process of recurrent binarization with $u \geq 1$ loops can be defined as
\begin{equation}
\text{base binarization:~~~}b_0 = \rho(\boldsymbol{W}_0(f)), \nonumber
\end{equation}
\begin{equation}
\text{residual binarization loops:~~~}\left\{
\begin{aligned}
\hat{f}_{u-1} &= \|\boldsymbol{R}_{u-1}(b_{u-1})\|, \\
r_{u-1} &= \rho(\boldsymbol{W}_{u}(f - \hat{f}_{u-1})), \\
b_u &= b_{u-1} + 2^{-u} \boldsymbol{r}_{u-1}. \nonumber
\end{aligned}
\right.
\end{equation}
The recurrent binary embedding $b_u$ is the output of the binarization process in Eq. (\ref{eq:sim_bebr}), \textit{i.e.}, $b_u=\varphi(f)$ given $f=\phi(q)$ or $f=\phi(d_k)$. Given the output dimension of $\boldsymbol{W}$ as $m$, the overall bits of $b_u$ can be calculated as $m\times(u+1)$.
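For illustration, a minimal PyTorch-style sketch of the recurrent binarization module with $u$ residual loops is given below. It reuses the helper \texttt{rho} sketched above; the exact MLP layer sizes are our own assumptions rather than the production configuration:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentBinarizer(nn.Module):
    """Recurrent binarization: a base binary code progressively refined by
    u residual binary codes (bits per dimension = u + 1)."""

    def __init__(self, in_dim: int, m: int, u: int = 1):
        super().__init__()
        # Binarization blocks W_0, ..., W_u (float -> pre-binarization logits).
        self.W = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, m), nn.BatchNorm1d(m),
                          nn.ReLU(), nn.Linear(m, m))
            for _ in range(u + 1)
        ])
        # Reconstruction blocks R_0, ..., R_{u-1} (binary -> float).
        self.R = nn.ModuleList([
            nn.Sequential(nn.Linear(m, m), nn.BatchNorm1d(m),
                          nn.ReLU(), nn.Linear(m, in_dim))
            for _ in range(u)
        ])
        self.u = u

    def forward(self, f):
        b = rho(self.W[0](f))                      # base binary code b_0
        for i in range(1, self.u + 1):
            f_hat = F.normalize(self.R[i - 1](b), dim=-1)  # reconstructed float
            r = rho(self.W[i](f - f_hat))          # residual binary code r_{i-1}
            b = b + (2.0 ** -i) * r                # b_i = b_{i-1} + 2^{-i} * r_{i-1}
        return b
\end{verbatim}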
\subsubsection{Task-agnostic training}
As shown in Eq. (\ref{eq:sim_bebr}), the backbone network $\phi$ and the binarization module $\varphi$ are commonly optimized jointly in an end-to-end manner in previous learning to hash methods \cite{fan2020deep}. Though feasible, such training is inefficient given the heavy backbone networks required for accurate representation learning, and it is task-dependent because the raw data (\textit{e.g.}, text, images) must be accessed for end-to-end training, rendering the solution inflexible, especially for data-sensitive applications.
To tackle the challenge, we introduce a universal training solution that only requires the float vectors as input, \textit{i.e.}, extracting the embeddings using off-the-shelf backbone networks $\phi$. The binarization module $\varphi$ is therefore trained alone in a task-agnostic and modality-agnostic manner. The objective function of such embedding-to-embedding training can be formulated as
\begin{align}
\mathop{\arg\min}_{\varphi} \ \ \mathcal{L}(\mathcal{F}; \varphi),
\end{align}
where $\mathcal{F}$ is the set of all float vectors for training.
Given the great success of contrastive loss \cite{chen2020simple} in representation learning research, we adopt an NCE-form contrastive objective to regularize the binarization module,
\begin{align}\label{Eq.binary_loss}
\mathcal{L}(\mathcal{F}; \varphi)=\frac{1}{|\mathcal{F}|}\sum_{f\in\mathcal{F}}-\log\frac{\exp\langle\varphi(f),\varphi(k_+)\rangle}{\sum_{k\in\mathcal{B}}\exp\langle\varphi(f),\varphi(k)\rangle},
\end{align}
where $k$ is the float features within the same batch $\mathcal{B}$ as the anchor $f$. $k_+$ is the positive sample constructed from another augmented view of an image or the query-document pair from the web. $\langle\cdot,\cdot\rangle$ is cosine similarity between recurrent binary embeddings.
Besides the positive pairs collected by manual annotations or user behaviors, we employ hard negative mining to further improve the discriminativeness of the learned binary embeddings.
Hard negative mining has proven to be a useful technique for improving the accuracy of classification tasks~\cite{shrivastava2016training, devlin2018bert, schroff2015facenet} in the deep learning community.
Recent work~\cite{huang2020embedding, li2021embedding} on semantic retrieval has also successfully applied this technique to improve the retrieval accuracies.
There are online and offline hard negative mining approaches
to collect hard enough negative samples and improve the model's ability to identify similar but irrelevant query-document pairs.
Online hard negative mining is efficient as it is conducted on the fly within mini-batches. Offline hard mining is performed ahead of time before each training epoch and is extremely time-consuming, even with the help of ANN algorithms. However, offline mining has proven to be more effective as it can search the whole training set and discover the most difficult samples.
How to enable global hard mining, as in the offline method, while maintaining the efficiency of online methods turns out to be a challenging but critical problem.
Inspired by \citet{he2020momentum}, we tackle this problem by maintaining a queue $\mathcal{Q}\in\mathbb{R}^{L\times m}$ of negative sample embeddings. Specifically, we extend the mini-batches with a fixed-length (\textit{i.e.}, $L$) queue (about 16$\times$ larger than the mini-batch) and mine hard samples in the queue on the fly.
At each training step, the binary embeddings of the current mini-batch are added to the queue, and the oldest mini-batch in the queue is removed if the maximum capacity is reached. Note that we perform momentum updates of the binarization module to encode embeddings for the queue in order to keep latent consistency among different batches, following the practice in \cite{he2020momentum}.
We select the top-$k$ hardest negative samples in the queue for contrastive objectives in Eq. (\ref{Eq.binary_loss}), \textit{i.e.}, the samples that receive the highest similarity scores with the anchor feature.
Therefore, the set of training samples $\mathcal{B}$ in Eq. (\ref{Eq.binary_loss}) becomes
\begin{equation}
\mathcal{B} = \{ k_+, \kappa(\mathcal{Q})\},
\end{equation}
where $\kappa(\mathcal{Q})$ denotes the operation for selecting top-$k$ hardest negative samples from $\mathcal{Q}$.
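The following sketch illustrates one way to realize the queue-based global hard mining described above; the class and function names are hypothetical, and only the queue bookkeeping, the top-$k$ hardest-negative selection, and the MoCo-style momentum update of the key binarizer are shown.
\begin{verbatim}
# Sketch of queue-based hard negative mining (hypothetical names).
import torch
import torch.nn.functional as F

class HardNegativeQueue:
    def __init__(self, dim, length):
        # Fixed-length FIFO of momentum-encoded binary embeddings.
        self.queue = F.normalize(torch.randn(length, dim), dim=1)
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, keys):                      # keys: (B, dim) from the
        b = keys.size(0)                          # momentum-updated binarizer
        idx = (self.ptr + torch.arange(b)) % self.queue.size(0)
        self.queue[idx] = F.normalize(keys, dim=1)
        self.ptr = (self.ptr + b) % self.queue.size(0)

    def topk_hardest(self, anchors, k):
        # Hardest negatives = highest cosine similarity to the anchor.
        sims = F.normalize(anchors, dim=1) @ self.queue.t()   # (B, L)
        _, idx = sims.topk(k, dim=1)
        return self.queue[idx]                                 # (B, k, dim)

@torch.no_grad()
def momentum_update(varphi_query, varphi_key, m=0.999):
    # Momentum update of the key binarizer, as in MoCo-style training.
    for p_q, p_k in zip(varphi_query.parameters(), varphi_key.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)
\end{verbatim}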
Once $\varphi$ is learned, the recurrent binary embeddings for queries and documents can be produced in an efficient embedding-to-embedding paradigm. Both training and deployment processes are task-agnostic since only full-precision embeddings are needed as input, which enables universal embedding binarization across all the modalities and tasks.
\subsubsection{Backward-compatible training}
As illustrated in Figure \ref{Fig.overview},
the retrieval stage needs to process billions or trillions of documents. The huge scale of data poses challenges in embedding upgrades since all the index embeddings need to be re-extracted before the deployment of a new model. Such a process is quite time-consuming and computationally expensive.
In this paper, we investigate for the first time the potential of backward-compatible learning \cite{shen2020towards} with binary embeddings. To be specific, compatible learning requires the embeddings encoded by the old model and the new model to be interchangeable in a consistent latent space. With backward compatibility, an embedding model upgrade can deploy the new model immediately without refreshing the index; that is, the new query embeddings can be directly compared with the old document embeddings. The upgrading objective can be formulated as
\begin{align}
\mathcal{S}_\text{BEBR-BC}(q_\text{new}, d^+_\text{old}) & \geq \mathcal{S}_\text{BEBR}(q_\text{old}, d^+_\text{old}), \\
\mathcal{S}_\text{BEBR-BC}(q_\text{new}, d^-_\text{old}) & \leq \mathcal{S}_\text{BEBR}(q_\text{old}, d^-_\text{old}),
\end{align}
where $d^+$ denotes relevant documents to user query $q$, and $d^-$ denotes the irrelevant ones. $\mathcal{S}_\text{BEBR-BC}(\cdot, \cdot)$ calculates the similarity between the new binary embedding of query and old binary embedding of the document, which is formulated as:
\begin{equation}
\begin{split}
\mathcal{S}_\text{BEBR-BC}(q, d_k) = \mathcal{D}\left(\tilde{\phi}\circ\varphi_{\text{new}}(q), \phi\circ\varphi_{\text{old}}(d_k)\right)& \\
\forall k\in\{1,\cdots,n\}&,
\end{split}
\end{equation}
where $\varphi_{\text{new}}(\cdot)$ is the new version of the recurrent binary transformation module and $\varphi_{\text{old}}(\cdot)$ is the old one, and $\tilde{\phi}$ denotes a new or identical float backbone model, determined by the specific application. BEBR-BC stands for the backward-compatible BEBR system.
The training objective can be formulated as
\begin{align}
\mathop{\arg\min}_{\varphi_\text{new}} \ \ \mathcal{L}(\mathcal{F}; \varphi_\text{new})+\mathcal{L}_\text{BC}(\mathcal{F}; \varphi_\text{new}, \varphi_\text{old}),
\end{align}
where $\mathcal{L}$ is the same as Eq. (\ref{Eq.binary_loss}), and
$\mathcal{L}_\text{BC}$ is also in the form of an NCE loss but across old and new models, \textit{i.e.},
\begin{align}\label{Eq.binary_bc_loss}
\mathcal{L}_\text{BC}(\mathcal{F};& ~~\varphi_\text{new}, \varphi_\text{old})\nonumber\\
&=\frac{1}{|\mathcal{F}|}\sum_{f\in\mathcal{F}}-\log\frac{\exp\langle\varphi_\text{new}(\tilde{f}),{\varphi}_\text{old}(k_+)\rangle}{\sum_{k\in\mathcal{B}}\exp\langle\varphi_\text{new}(\tilde{f}),{\varphi}_\text{old}(k)\rangle}.
\end{align}
$\tilde{f}$ is encoded by $\tilde{\phi}(\cdot)$. $\varphi_\text{new}$ is optimized individually with the other parametric modules fixed.
$\mathcal{L}$ maintains self-discrimination of the new binarization model while $\mathcal{L}_\text{BC}$ regularizes the cross-model compatibility.
Queue-based hard mining is also applied for $\mathcal{L}_\text{BC}$.
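The combined objective can be sketched as follows. This is an illustrative implementation under simplifying assumptions (row-aligned positives, hypothetical names): the old binarization module $\varphi_\text{old}$ is kept frozen, and the self-discrimination and compatibility terms are both plain in-batch NCE losses.
\begin{verbatim}
# Sketch of the backward-compatible training objective (hypothetical names).
import torch
import torch.nn.functional as F

def nce(za, zk, tau=0.07):
    # In-batch NCE with row-aligned positives.
    logits = za @ zk.t() / tau
    labels = torch.arange(za.size(0), device=za.device)
    return F.cross_entropy(logits, labels)

def bebr_bc_loss(phi_new, phi_old, f_new, f_new_pos, f_old_pos, tau=0.07):
    # f_new / f_new_pos: anchor and positive float features from the new
    # (or identical) backbone; f_old_pos: the same positives as indexed by
    # the old pipeline, to be binarized by the frozen old module.
    za = F.normalize(phi_new(f_new), dim=1)
    zp = F.normalize(phi_new(f_new_pos), dim=1)
    with torch.no_grad():                          # varphi_old stays fixed
        zk_old = F.normalize(phi_old(f_old_pos), dim=1)
    l_self = nce(za, zp, tau)     # self-discrimination, Eq. (binary_loss)
    l_bc = nce(za, zk_old, tau)   # cross-model term, Eq. (binary_bc_loss)
    return l_self + l_bc
\end{verbatim}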
\subsection{Deployment}\label{sec:deploy}
\subsubsection{Dot product of recurrent binary embeddings: A revisit} \label{SSec.binary_and_quantization}
In \citet{shan2018recurrent}, the cosine similarity of recurrent binary embeddings is decomposed into dot products of hash codes as in Eq. (\ref{eq18}), where the subscripts $q$ and $d$ denote the query and the document. The dot product of hash codes can thus be implemented efficiently with bit-wise operations as in Eq. (\ref{eq19}), where ${x}$ and ${y}$ are binary vectors in $\{1,-1\}^m$, and $popc$, $\wedge$, and $>>$ denote the population count, XOR, and logical right shift operations, respectively.
\begin{equation} \label{eq18}
\begin{aligned}
\mathcal{D}(b^q_u,b^d_u)\propto &\frac{1}{||b^d||}(b^q_0\cdot b^d_0 +
\sum_{j=0}^{u-1}\sum_{i=0}^{u-1} (\frac{1}{2})^{j+i+2} r^q_j\cdot r^d_i \\
& + \sum_{j=0}^{u-1}(\frac{1}{2})^{j+1}b^q_0\cdot r^d_j+\sum_{i=0}^{u-1}(\frac{1}{2})^{i+1}b^d_0\cdot r^q_i)
\end{aligned}
\end{equation}
\begin{equation} \label{eq19}
x\cdot y = (popc(x\wedge y) >> 1) + m
\end{equation}
Although the bit-wise operation is fast thanks to the population count, the computational complexity grows rapidly as ${u}$ increases. Hence, the method in \cite{shan2018recurrent} relies on GPUs to offer high performance, together with an optimized k-NN selection algorithm.
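For intuition, the bit-wise dot product of $\{1,-1\}^m$ vectors can be sketched with the standard XOR-and-popcount identity $x\cdot y = m - 2\,popc(x\oplus y)$ once the vectors are packed as bit strings; the exact shift-based bookkeeping in Eq. (\ref{eq19}) depends on the encoding conventions of \cite{shan2018recurrent}, so the snippet below only illustrates the underlying idea.
\begin{verbatim}
# Illustration of a bit-wise dot product for {+1,-1}^m vectors.
def pack_pm1(v):
    # Pack a {+1,-1} vector into an integer bit string (+1 -> 1, -1 -> 0).
    bits = 0
    for x in v:
        bits = (bits << 1) | (1 if x > 0 else 0)
    return bits

def pm1_dot(xb, yb, m):
    # x . y = m - 2 * Hamming(x, y): agreeing bits add +1, differing bits -1.
    return m - 2 * bin(xb ^ yb).count("1")

x = [1, -1, 1, 1, -1, -1, 1, -1]
y = [1, 1, -1, 1, -1, 1, 1, -1]
assert pm1_dot(pack_pm1(x), pack_pm1(y), 8) == sum(a * b for a, b in zip(x, y))
\end{verbatim}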
\subsubsection{Symmetric distance calculation (SDC)}
Unfortunately, the reliance on GPU-enabled NN search limits the usefulness and applicability of this approach in practical cases. In this paper, we develop a Symmetric Distance Calculation (SDC) of recurrent binary embeddings for the CPU platform, which is applicable to most scenarios. Specifically, SDC computes the distance between uncompressed recurrent binary features. It relies on SIMD in-register shuffle operations to provide a high-performance calculation procedure, which can be combined with inverted indexes. For simplicity, we explain SDC using 128-bit SIMD in the following.
Similar to \cite{andre2017accelerated}, \cite{andre2019quicker}, and \cite{blalock2017bolt}, SIMD registers and in-register shuffles are used to store lookup tables and perform lookups. However, those methods use sub-quantizers to obtain different centroids and do not normalize during calculation. Therefore, algorithmic changes are required to obtain fixed centroids and the magnitudes of embeddings for normalization. More specifically, SDC relies on 4-bit codes as the basic unit and uses 8-bit integers to store lookup tables. Each resulting lookup table comprises 16 8-bit integers (128 bits). Once the lookup tables are stored in SIMD registers, in-register shuffles can perform 16 lookups in one cycle, enabling large performance gains.
\textbf{Memory layout.} Setting $u\in\{2,4\}$, we first generate the recurrent binary vectors and organize an inverted list of features with the standard memory layout, as shown in the upper part of Figure \ref{figure_rbe_adc_calculate}, where $a_i$ is a 4-bit code and $a_{norm}$ is the quantized magnitude value of the vector appended at the end. Notably, for $u=2$, each $a_i$ represents two adjacent dimensions of the feature. To shuffle lookup tables efficiently, the standard memory layout of inverted lists needs to be transposed, because the transposed data are contiguous in memory and an SIMD register can be loaded in a single memory read. This transposition is performed offline and does not influence the search speed.
\textbf{Lookup tables.} As mentioned earlier, the centroids in SDC are fixed by the setting of $u$, so the recurrent binary vectors can be represented without compression. When $u=4$, the distance table in 128-bit registers can be constructed directly because the centroids of SDC are represented as 4-bit integers and the inner-product distances fit within the range of 8-bit integers. When $u=2$, we use two adjacent dimensions of the recurrent binary vector to form a 4-bit code, and the distance is calculated by adding the inner-product results of the two 2-bit parts.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/rbe_calculation.png}
\vspace{-5pt}
\caption{Symmetric distance calculation (SDC) using SIMD calculation.} \vspace{-5pt}
\label{figure_rbe_adc_calculate}
\end{figure}
\textbf{SIMD computation.} Once the lookup tables and inverted lists are prepared, each inverted list is scanned block by block. We depict this process in Figure \ref{figure_rbe_adc_calculate}. First, the index codes are packed as 8-bit cells of 128-bit registers, and the subcodes are unpacked using shifts and masks. For each set of subcodes, the partial distances are produced by lookups implemented through a combination of shuffles and blends. This process is repeated $u \cdot m/4$ times, and the distances are obtained by summing the partial distances with saturated arithmetic. Lastly, each distance is normalized by dividing by its magnitude value. In practice, we multiply the distance by the reciprocal of the magnitude value since multiplication is fast in SIMD.
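The production kernel is written in C++ with SIMD intrinsics; as a language-neutral illustration of the same data flow, the sketch below emulates the lookup-table scan in NumPy: every 4-bit subcode indexes a 16-entry table of partial inner products (the table an in-register shuffle would hold), partial distances are accumulated per database point, and the final score is normalized by the stored magnitude. The array names and shapes are assumptions for illustration only, not the production interface.
\begin{verbatim}
# NumPy emulation of the SDC lookup-table scan (not the SIMD kernel itself).
import numpy as np

def sdc_scan(codes, norms, tables):
    # codes:  (n, S) uint8, each entry a 4-bit subcode in 0..15
    # norms:  (n,)   float, stored magnitude of each database vector
    # tables: (S, 16) int,  per-subcode lookup tables of partial inner
    #         products for the current query (built once per query)
    n, S = codes.shape
    acc = np.zeros(n, dtype=np.int64)
    for s in range(S):                 # the SIMD kernel performs 16 of these
        acc += tables[s][codes[:, s]]  # lookups per shuffle instruction
    return acc / norms                 # normalize by magnitude (cosine)
\end{verbatim}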
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{figures/ann_architecture.png}
\vspace{-5pt}
\caption{Overview of ANN systems equipped with BEBR.}
\vspace{-5pt}
\label{Fig.rbe_system_overview}
\end{figure}
\subsubsection{ANN systems}
We deploy a distributed search system based on ANN search algorithms, as in Figure \ref{Fig.rbe_system_overview}. At run time, a query embedding is generated on the fly by the embedding learning model. Then, the proxy module dispatches the query to the leaf modules, where the main search process happens. Each leaf module is equipped with various ANN indexes supporting symmetric distance calculation, since our work is orthogonal to ANN algorithms and compatible with any type of index. Therefore, we can choose different algorithms according to the requirements of each product. For instance, the inverted index (IVF) has two layers for embedding search: a coarse layer that quantizes embedding vectors into coarse clusters, typically via the $K$-means algorithm, and a fine-grained layer that performs the efficient calculation of embedding distances. Both layers can be supported by symmetric distance calculation when recurrent binary embeddings are used. Lastly, the results from all leaves are merged through a selection process to produce the top-$N$ result.
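A simplified sketch of the two-layer IVF flow inside a leaf is given below; the coarse layer routes the query to a few clusters and the fine layer ranks their candidates with a pluggable similarity function (e.g., SDC over recurrent binary codes). All names and the inverted-list layout are hypothetical.
\begin{verbatim}
# Sketch of two-layer IVF search inside a leaf (hypothetical interfaces).
import numpy as np

def ivf_search(query, centroids, inverted_lists, fine_score, nprobe=8, topn=100):
    # Layer 1: route the query to the nprobe closest coarse centroids.
    coarse = np.argsort(np.linalg.norm(centroids - query, axis=1))[:nprobe]
    # Layer 2: score candidates in the selected lists with the fine
    # similarity (e.g., SDC over binary codes), then merge and keep top-N.
    cand_ids, cand_scores = [], []
    for c in coarse:
        ids, codes = inverted_lists[c]            # (doc ids, packed codes)
        cand_ids.append(ids)
        cand_scores.append(fine_score(query, codes))
    ids = np.concatenate(cand_ids)
    scores = np.concatenate(cand_scores)
    order = np.argsort(-scores)[:topn]            # higher similarity first
    return ids[order], scores[order]
\end{verbatim}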
\section{Experiments \label{Sec.exp}}
\subsection{Implementation Details}
The Adam optimizer ~\cite{kingma2014adam} is employed with an initial learning rate of 0.02. The temperature parameter $\tau$ of softmax is set to 0.07. We also adopt gradient clipping when the norm of the gradient exceeds a threshold of 5.
When training on a server of 8 Nvidia V100 GPUs, the batch size is set to 4096 for binary representation learning and 128 for compatible learning.
The binarization experiments are based on the PyTorch framework.
We implement 256-bit SDC in C++, using compiler intrinsics to access SIMD instructions. g++ version 8.1 is used with SSE, AVX, AVX2, and AVX-512 enabled. In addition, we use Intel MKL 2018 for BLAS. We carry out our experiments on Skylake-based servers, which are Tencent cloud instances built around Intel(R) Xeon(R) Platinum 8255C CPUs @ 2.50GHz.
\subsection{Datasets}
We evaluate the proposed BEBR on both public and industrial datasets. For the public dataset, we use image and text data from the MS COCO captioning dataset. For the industrial datasets, we use data collected from two applications. One is web search, which returns relevant web pages given a user search query. The other is video copyright detection, which identifies duplicated, replicated, and/or slightly modified versions of a given video sequence (query) in a reference video dataset.
\textbf{Offline datasets:} For web search, we collect search logs of user queries and clicks from \textit{Sogou Search Engine}. After data pre-processing, the training set contains 400 million samples, and we use an additional 3 million samples for evaluation. For video copyright detection, we use 8 million images extracted from video sequences to train the model and manually label 30k queries and 600k reference images for validation. Furthermore, we use the COCO captioning dataset, which contains about 110k
training images and 5k validation images. For each image in the training and validation sets, five independent human-generated captions are provided.
\textbf{Online datasets:} We deploy the proposed BEBR in the production environment of the aforementioned two applications. Web search documents are approximately 6 billion in size, covering the most active web pages on the Internet. The size of image embeddings extracted from video sequences in video copyright detection is about 10 billion.
\subsection{Evaluation Metrics}
\textbf{Offline evaluation.} We use Recall@$k$ to evaluate the offline performance of the proposed binary embedding-based retrieval method. Specifically, given a query $q$, let $\mathcal{D^+} = \{d_1^+, \cdots, d_N^+\}$ denote its relevant documents and let $\hat{\mathcal{D}} = \{ d_1, \cdots, d_k \}$ denote the retrieval set of top-$k$ candidates returned by a model, where $k\gg N$ in practice. Recall@$k$ is defined as
\begin{equation}
\text{Recall}@k = \frac{|\mathcal{D^+}\cap\hat{\mathcal{D}}|}{N}
\end{equation}
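As a small worked example of this metric, the snippet below computes Recall@$k$ for a single query from a set of relevant document ids and a ranked candidate list.
\begin{verbatim}
def recall_at_k(relevant, retrieved, k):
    # relevant: set of ground-truth document ids for the query (size N)
    # retrieved: ranked list of candidate ids returned by the model
    hits = len(relevant & set(retrieved[:k]))
    return hits / len(relevant)

# 2 of the 3 relevant documents appear among the top-5 candidates.
assert recall_at_k({1, 2, 3}, [9, 2, 7, 3, 8], k=5) == 2 / 3
\end{verbatim}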
\noindent\textbf{Online evaluation.} We use different metrics to evaluate the effectiveness of our proposed BEBR system.
For web search, we adopt the metrics of click-through rate (CTR) and query rewrite rate (QRR), which are believed to be good indicators of search satisfaction.
For video copyright detection, we conduct a human evaluation of the retrieved documents. Specifically, we ask human evaluators to label the relevance of results from the BEBR system and the baseline system. Apart from precision, we report the ratio between the number of copied videos (positive results) and the amount of traffic (denoted as the detection ratio) to analyze the model's effect on the entire system. A higher detection ratio indicates better performance.
Furthermore, to evaluate the efficiency of the BEBR system,
we calculate queries per second (QPS) by measuring the amount of search traffic the retrieval stage receives in one second. We also investigate the memory consumption of the search index built in the retrieval stage.
\subsection{Offline Evaluation}
\begin{table}[t]
\centering
\caption{Retrieval performance of different embedding forms on MS COCO caption dataset.}\vspace{-5pt}
\label{Table.binary_public_dataset}
\begin{tabular}{ccccc}
\toprule
Embedding & Bits & Recall@1 & Recall@5 & Recall@10 \\
\hline
hash & 1024 & 0.348 & 0.632 & 0.730 \\
{ours} & {1024} & \textbf{0.360} & \textbf{0.646} & \textbf{0.751} \\
float & 16384 & \textcolor[RGB]{150,150,150}{0.361} & \textcolor[RGB]{150,150,150}{0.649} & \textcolor[RGB]{150,150,150}{0.744} \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Retrieval performance of different embedding forms on industrial dataset collected from web search and video copyright detection applications.}\vspace{-5pt}
\label{Table.binary_compare}
\begin{tabular}{ccc}
\toprule
\multirow{2}{*}{Embedding} & Web search & video copyright detection \\
\cline{2-3}
& Recall@10 & Recall@20 \\
\hline
hash & 0.819 & 0.688 \\
ours & \textbf{0.853} & \textbf{0.727} \\
float & \textcolor[RGB]{150,150,150}{0.856} & \textcolor[RGB]{150,150,150}{0.734} \\
\bottomrule
\end{tabular}
\end{table}
\textbf{Effectiveness of recurrent binary embedding.} We investigate the effectiveness of recurrent binary embeddings on both public (academic) and private (industrial) benchmarks. As demonstrated in Tables \ref{Table.binary_public_dataset} and \ref{Table.binary_compare}, we compare with the baseline hash \cite{wang2017deep} (1 bit per dimension) and the oracle float (full-precision embedding, 32 bits per dimension).
For the academic dataset, we conduct image-to-text retrieval experiments using the MS COCO caption dataset. Specifically, we employ the CLIP~\cite{radford2021learning} model with a ResNet-101 backbone to produce float embeddings for image and text data. The float embeddings of size 16384 bits are then compressed into recurrent binary embeddings and hash vectors with a size of 1024 bits, achieving a 16x compression ratio. As shown in Table \ref{Table.binary_public_dataset}, the recurrent binary embedding surpasses the hash embedding and achieves comparable results to the float embedding.
For the industrial datasets, the float embedding sizes in web search and video copyright detection are 8192 and 4096 bits, respectively. We adopt the same 16x compression ratio by compressing them into binary embeddings of 512 and 256 bits, respectively.
As shown in Table \ref{Table.binary_compare}, we achieve retrieval performance comparable to float embeddings in the web search and video copyright detection applications and surpass hash embeddings by 2.4\% and 3.9\%, respectively.
\begin{table}[t]
\centering
\caption{Comparison with alternative options of binary training pipeline. Experiments are conducted on a web search dataset with 400 million training samples. $\varphi$ denotes recurrent binary model, $\phi$ and $\psi$ denote encoder model for queries and documents.}\vspace{-5pt}
\label{Table.binary_training}
\begin{tabular}{ccc}
\toprule
Training pipeline & Recall@10 & Training time \\
\hline
end-to-end & 0.855 & 125 GPU hours\\
train $\varphi$ only (fixed $\phi$, $\psi$) & 0.853 & 125 GPU hours \\
{embedding-to-embedding} & {0.853} & {\textbf{11} GPU hours} \\
\bottomrule
\end{tabular}
\end{table}
\noindent\textbf{Comparison with alternative options of binary training.}
To investigate the effectiveness and efficiency of our task-agnostic binary training pipeline, we compare it with two alternative binary training pipelines. One is end-to-end training, where the recurrent binarization module is optimized end-to-end with the backbone networks ($\psi$ and $\phi$). The other adopts a pipeline similar to end-to-end training but with the parameters of $\psi$ and $\phi$ fixed. The fixed $\psi$ and $\phi$ models help improve retrieval performance by providing data augmentations. Results are shown in Table \ref{Table.binary_training}. Our proposed task-agnostic embedding-to-embedding training pipeline achieves performance comparable to the other two end-to-end training pipelines while reducing training time by 91.2\%.
\begin{table}[t]
\centering
\caption{Comparison with alternative options of backward-compatible training. ($\varphi_{new}$, $\varphi_{old}$) denotes using binary embeddings produced by the new binary model to search binary embeddings produced by the old binary model.}\vspace{-5pt}
\label{Table.bct}
\begin{tabular}{ccc}
\toprule
Learning strategy & Comparison pair & Recall@20 \\
\hline
baseline & ($\varphi_{old}$, $\varphi_{old}$) & 0.727 \\
\hline
normal bct & ($\varphi_{new}$, $\varphi_{old}$) & 0.765 \\
two-stage bct & ($\varphi_{new}$, $\varphi_{old}$) & 0.783 \\
ours & ($\varphi_{new}$, $\varphi_{old}$) & \textbf{0.801} \\
\bottomrule
\end{tabular}
\end{table}
\noindent\textbf{Comparison with alternative options of backward compatible training.}
We investigate the effectiveness of our proposed backward-compatible training pipeline by comparing it with two other pipelines. We denote the first alternative pipeline as normal bct, where backward-compatible training is conducted between the new query encoder $\psi_{new}$ and the old document encoder $\phi_{old}$. During deployment, backward-compatible binary embeddings are obtained by mapping full-precision query and document embeddings into a common space using the old binary model $\varphi_{old}$. The second alternative pipeline (denoted as two-stage bct) contains a two-stage training process: the first stage learns backward-compatible full-precision embeddings, and the second stage learns backward-compatible recurrent binary embeddings based on the compatible output of the first stage.
All compatible-learning experiments are conducted on an offline dataset collected from the video copyright detection application. Results are shown in Table \ref{Table.bct}. All three learning strategies achieve solid backward compatibility by surpassing the baseline, where both the query and the document embeddings are produced by the old binary model, indicating the applicability of backward-compatible training to binary embeddings. Among them, our proposed learning paradigm learns the best backward compatibility, outperforming normal bct and two-stage bct by 3.6\% and 1.8\%, respectively.
\begin{table}[t]
\centering
\caption{Latency for exhaustive search on CPU platform in video copyright detection application.}\vspace{-5pt}
\label{table.latency_cpu_search}
\begin{tabular}{ccccc}
\toprule
Embedding & Index type & Bits & Search(s)$\downarrow$ & QPS$\uparrow$ \\
\hline
hash code & bitwise & 256 & 0.0024 & 414 \\
ours ($u=2$)& bitwise & 256 & 0.0032 & 312 \\
ours ($u=2$)& SDC & 256 & \textbf{0.0020} & \textbf{480} \\
ours ($u=4$)& bitwise &256 & 0.0054 & 185 \\
ours ($u=4$)& SDC & 256 & \textbf{0.0020} & \textbf{480} \\
float & flat &4096 & 0.05106 & 19 \\
\bottomrule
\end{tabular}
\end{table}
\noindent\textbf{Search latency on CPU platform.} As mentioned in Section \ref{SSec.binary_and_quantization}, the standard search based on recurrent binary embeddings \cite{shan2018recurrent} relies on GPUs to offer high performance. We implement the standard distance calculation between recurrent binary embeddings on the CPU using the $popcount$ operation and carry out an exhaustive search experiment on the offline video copyright detection dataset with recall@20. The experiment loop is, by default, run on a single CPU in a single thread. The comparison between the bit-wise operation and SDC is shown in Table \ref{table.latency_cpu_search}. We observe that the QPS of the bit-wise method continues to decrease as $u$ increases, and SDC is almost 2 times faster than the bit-wise operation at $u=4$. Notably, SDC is slightly faster than hash codes since the shuffle instructions used in SDC are faster than $popc$.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{figures/perf_hnsw.png}
\vspace{-5pt}
\caption{ Comparison of retrieval efficiency before and after the deployment of BEBR. Experiments are conducted on an offline dataset collected from web search. }
\vspace{-10pt}
\label{Fig.perf_hnsw}
\end{figure}
\noindent\textbf{Integration of BEBR into ANN algorithms.} Besides the exhaustive search experiments in Table \ref{table.latency_cpu_search}, we also conduct experiments that integrate BEBR into ANN algorithms. Specifically, we equip the HNSW algorithm with the symmetric distance calculation component and leverage recurrent binary embedding for search. Results are illustrated in Figure \ref{Fig.perf_hnsw}. After deploying BEBR, HNSW achieves significant improvements in retrieval efficiency.
\subsection{Online A/B Test}
We deploy the binary embedding-based retrieval system to the web search and video copyright detection applications in Tencent and compare it to strong baselines that utilize full-precision embeddings for retrieval. Note that we substitute full-precision embeddings with recurrent binary embeddings only in the retrieval stage; the subsequent re-ranking stages are identical for both settings. Here, we focus on the balance between performance and efficiency, so resource usage is also recorded.
The live experiment is conducted over 30\% of the service traffic during one week. Table \ref{Table.ab_test_web} and Table \ref{Table.ab_test_video} show substantial benefits in resources and efficiency while retaining performance at the system level. Specifically, BEBR saves 73.91\% of memory usage and increases the QPS of retrieval by 90\%, while the CTR and QRR of the web search application decrease only slightly, by 0.02\% and 0.07\% respectively. In video copyright detection, memory usage is reduced by 89.65\% and QPS is increased by 72\%, while the precision and detection ratio decrease slightly, by 0.13\% and 0.21\%. The improvements in retrieval efficiency and storage consumption lead to overall cost reduction: after deploying BEBR, the overall costs of retrieval in web search and video copyright detection are reduced by 55\% and 31\%, respectively.
\begin{table}[t]
\centering
\caption{Online A/B tests of BEBR in web search.}\vspace{-5pt}
\label{Table.ab_test_web}
\begin{tabular}{ccccc}
\toprule
CTR & QRR & Memory usage & QPS\\
\hline
-0.02\% & -0.07\% & \textbf{-73.91\%} & \textbf{+90\%} \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Online A/B tests of BEBR in video copyright detection.}\vspace{-5pt}
\label{Table.ab_test_video}
\begin{tabular}{cccc}
\toprule
Precision & Detection ratio & Memory usage & QPS\\
\hline
-0.13\% & -0.21\% & \textbf{-89.65\%} & \textbf{+72\%} \\
\bottomrule
\end{tabular}
\end{table}
\section{Conclusion}
The paper presents binary embedding-based retrieval (BEBR) to improve retrieval efficiency and reduce storage consumption while retaining retrieval performance in Tencent products.
Specifically, we
1) compress full-precision embeddings into recurrent binary embeddings using a lightweight transformation model; 2) adopt a new task-agnostic embedding-to-embedding strategy to enable efficient training and deployment of binary embeddings; 3) investigate backward-compatible training with binary embeddings to enable refresh-free embedding model upgrades; 4) propose symmetric distance calculation equipped with ANN algorithms to form an efficient index system. BEBR has been successfully deployed into Tencent's products, including web search (Sogou), QQ, and Tencent Video. We hope our work inspires the community to effectively deliver research achievements into real-world applications.
\begin{acks}
We sincerely appreciate all the colleagues in the project of BEBR development, including but not limited to Yang Li, Qiugen Xiao, Dezhang Yuan, Yang Fang, Chen Xu, and Xiaohu Qie, for their valuable discussions, efforts, and support.
We thank all the reviewers and chairs for their time and constructive comments.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.08695",
"language": "en",
"timestamp": "2023-02-20T02:07:00",
"url": "https://arxiv.org/abs/2302.08695",
"yymm": "2302"
} | \section*{Introduction}
Cancerous tumors are believed to be most frequently initiated by a single mutated cell that generates a population of cells (a tumor) with elevated mutation rates and genome instability. Mutations are categorized into two types: driver and passenger mutations. Driver mutations confer selective advantages to the cell by increasing its fitness and passenger mutations are viewed as neutral or slightly deleterious. A survey of cancer models with neutral passenger mutations is found in \cite{durrett2015branching}. Authors of \cite{bozic2016quantifying} consider the accumulation of neutral passenger mutations to be a molecular clock that provides information regarding the evolutionary history of the tumor. Recent theoretical and experimental studies challenge the hypothesis of neutrality of passenger mutations. Theory in \cite{mcfarland2014tug} and experiments in \cite{mcfarland2017damaging} suggest passenger mutations are mildly deleterious and they inhibit tumor growth and reduce metastasis when passenger load exceeds a threshold.
In this paper, we analyze and simulate a branching process model with countably many types, inspired by the original model in \cite{mcfarland2014tug}, that captures the experimental outcomes in \cite{mcfarland2017damaging} but is simple enough for rigorous analysis. The cell population either grows without bound or becomes extinct. One feature that distinguishes our model from that of \cite{mcfarland2014tug} is that our model's mean dynamics always diverge to infinity, regardless of the rate of passenger mutations. This is because our model does not take environmental pressures into account, that is, the death rate does not vary with population size. Intuitively, passenger mutations increase extinction probabilities, reducing the chance of progression. We show that the extinction probability given any initial type is strictly less than $1$. Numerical values are computed according to an algorithm in \cite{hautphenne2013extinction}. The fact that extinction does not always occur allows us to investigate the type transition process conditional on non-extinction, which provides insights into the average fitness of the cell population as well as tumor heterogeneity. When driver mutations dominate, the continuous-time type transition process is transient and a limiting distribution does not exist. On the other hand, when passenger mutations dominate, the continuous-time type transition process admits a limiting distribution. Existence of the limiting distribution for the discrete-time type transition is also analyzed, and our analysis reveals a dichotomy similar to that of the continuous-time type transition. Simulations and numerical computations are conducted to support our results.
\section*{Model}
In this section, we present the derivation of our model and motivate simplifications that lead to the form of the model we analyze. Mathematical details of construction of the model are available at \nameref{appendix}.
\subsection*{Continuous-time Markov Chain}
The birth and death rates of a cell with $n_{d}$ drivers and $n_{p}$ passengers (the cell type is denoted as $(n_{d},n_{p})$) are equal to $b(n_{d},n_{p}) = b_{0}(1+s_{d})^{n_{d}}(1+s_{p})^{-n_{p}}$ and $d(n_{d},n_{p}) = d=1$, respectively. The tumor starts from a single cancer cell with birth rate $b_{0} > 1$ (type $(0,0)$ cell), where $1$ is assumed to be the birth or death rate of a normal cell. $s_{d}$ and $s_{p}$ are selection coefficients for driver and passenger mutations, and since passenger mutations are only mildly deleterious, $s_{d} > s_{p}$. Let $\mu$ denote the driver mutation rate and $\nu$ denote the passenger mutation rate $(\mu < \nu)$. It appears reasonable to assume $d > \nu$ since mutations are rare events. Numerical values for the mutation rates and death rate supporting the assumption $d>\nu$ can be found in \cite{durrett2015branching} and \cite{mcfarland2014tug}. A type $(n_{d},n_{p})$ cell can take four actions: division (producing two $(n_{d},n_{p})$ cells), acquiring a driver mutation (producing a $(n_{d}+1,n_{p})$ cell), acquiring a passenger mutation (producing a $(n_{d},n_{p}+1)$ cell), and death. The four actions occur at rates $b(n_{d},n_{p}),\mu,\nu$, and $d$, respectively. In \nameref{appendix} Lemma $1$, we shall see a mild condition in addition to $d > \nu$ that implies non-explosion of the process. The random lifetime of a type $(n_{d},n_{p})$ cell follows an exponential distribution with rate $b(n_{d},n_{p})+\mu+\nu+d$ by the competing exponentials argument. In \nameref{appendix}, we show that under the driver-passenger relation $\eqref{eq:dp_relation}$, the state space can be reduced to a countable set which is large enough to support the process. However, other properties of the continuous-time model are hard to derive, and this motivates us to limit ourselves to the discrete-time embedded branching process.
\subsection*{Embedded Branching Process}
In the embedded branching process, random lifetimes are replaced by constant time units equivalent to single generations. Let $\delta(n_{d},n_{p})=b(n_{d},n_{p})+\mu+\nu+d$; then the probabilities of a $(n_{d},n_{p})$ cell dividing, acquiring a driver, acquiring a passenger, or dying are equal to $b(n_{d},n_{p})/\delta(n_{d},n_{p}), \mu/\delta(n_{d},n_{p}), \nu/\delta(n_{d},n_{p})$, and $d/\delta(n_{d},n_{p})$, respectively.
\subsection*{Reduced Process: Driver-Passenger Relation}
In the remainder, we will analyze the process under an additional hypothesis that a single driver mutation's effect on fitness can be cancelled by $L$ passenger mutations, that is, $1+s_{d} = (1+s_{p})^{L}$, or
\begin{equation}
\begin{aligned}
\log_{(1+s_{p})}(1+s_{d})=L \in \mathbb{N}, L \geq 2.
\end{aligned}
\label{eq:dp_relation}
\end{equation}
This simplification seems a step in the right direction since it is always possible to take the floor or ceiling of $\log_{(1+s_{p})}(1+s_{d})$ to obtain an approximation of the process. Therefore, $b(n_{d},n_{p}) = b_{0}(1+s_{p})^{n_{d}L-n_{p}} = b_{0}(1+s_{p})^{i} = b(i)$ and type $i$ is now defined by $i=n_{d}L-n_{p}$. This driver-passenger relation transforms the type space from $\mathbb{Z}_{+} \times \mathbb{Z}_{+}$ to $\mathbb{Z}$ which is easier to work with. Throughout this paper, this form of driver-passenger relation is always assumed. As we will see in the section on the type transition process, there is a dichotomy of existence of a limiting distribution for the type transition depending on $\mathbbm{1}_{\{\nu >\mu L\}}$. Define $\lceil\log_{(1+s_{p})}(1+s_{d})\rceil = \Bar{L}$ and $\lfloor\log_{(1+s_{p})}(1+s_{d})\rfloor=\underline{L}$. Since $\Bar{L}-\underline{L} \leq 1$ and $\mu$ is small relative to $\nu$, $\mathbbm{1}_{\{\nu >\mu \Bar{L}\}}$ is likely to be the same as $\mathbbm{1}_{\{\nu >\mu \underline{L}\}}$. Therefore, for a wide range of mutation rates $\mu$ and $\nu$, qualitative properties of our model in this paper are determined by the reduced process.
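For concreteness, one generation of the reduced embedded process can be simulated as sketched below, using the offspring probabilities $b(i)/\delta(i)$, $\mu/\delta(i)$, $\nu/\delta(i)$, and $d/\delta(i)$ with the Spec.\ $D1$ parameters of Table \ref{table:1}; the actual simulation setup is described in \nameref{appendix}, so this is only an illustrative sketch.
\begin{verbatim}
# Illustrative simulation of the embedded process (Spec. D1 parameters).
import random

def b(i, b0=1.1, sp=0.002):
    return b0 * (1 + sp) ** i              # fitness of a type-i cell

def next_generation(pop, mu=0.0251, nu=0.05, d=1.0, L=2):
    # pop: dict {type: count}; one synchronous generation.
    new = {}
    for i, n in pop.items():
        delta = b(i) + mu + nu + d
        p_div, p_drv, p_pas = b(i) / delta, mu / delta, nu / delta
        for _ in range(n):
            r = random.random()
            if r < p_div:
                new[i] = new.get(i, 0) + 2             # division
            elif r < p_div + p_drv:
                new[i + L] = new.get(i + L, 0) + 1     # driver mutation
            elif r < p_div + p_drv + p_pas:
                new[i - 1] = new.get(i - 1, 0) + 1     # passenger mutation
            # otherwise: death, no offspring
    return new

pop = {0: 1}                                # a single type-0 cell
for _ in range(30):
    pop = next_generation(pop)
\end{verbatim}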
To use the tools of analysis of denumerable nonnegative matrices from \cite{seneta2006non} and of denumerable-type Galton-Watson processes from \cite{hautphenne2013extinction}, we relabel the types in the way that will be clear in later sections. We define the relabeling function $\pi: \mathbb{Z}\to \mathbb{Z}_{+}=\{0,1,\cdots\}$ as follows. The pattern can be extended indefinitely.
\begin{equation}
\begin{aligned}
\pi(0) &= 0,\pi(1)=1,\cdots, \pi(L) = L, \\
\pi(-1) &= L+1,\pi(-2) = L+2,\cdots, \pi(-L) = 2L, \\
\pi(L+1) &= 2L+1,\pi(L+2) = 2L+2,\cdots, \pi(2L) = 3L, \\
&\quad\vdots
\end{aligned}
\label{eq:relabel}
\end{equation}
\subsection*{Connections between the Embedded Process and the Continuous-time Process}
Let $(\mathbf{Z}_{t})$ be the continuous-time branching process and $(\mathbf{E}_{n})$ be the embedded branching process. By the Corollary of Lemma $1$ in \nameref{appendix}, if $\nu + d > \mu L$ the continuous-time process $(\mathbf{Z}_{t})$ becomes extinct if and only if the embedded process $(\mathbf{E}_{n})$ becomes extinct. The condition $\nu + d > \mu L$ is sufficient for $(\mathbf{Z}_{t})$ to be non-explosive and we verify its sufficiency in Lemma $1$.
In the section on mean dynamics, we will show that $\lim_{n\to\infty}\mathbb{E}(||\mathbf{E}_{n}||_{\mathcal{\ell}^{1}}) = \infty$. Intuitively, there is always a non-zero probability that the initial cell acquires a sufficient number of driver mutations so that its fitness exceeds $\nu+d$, and this guarantees that the expected population size diverges to $\infty$. The same logic can be applied to $(\mathbf{Z}_{t})$ to deduce $\lim_{t\to\infty}\mathbb{E}(||\mathbf{Z}_{t}||_{\mathcal{\ell}^{1}}) = \infty$. We also analyze the asymptotic growth rate of types in the $n$th generation, but the results cannot be extended to the continuous-time process.
The type transition along a surviving lineage is analyzed in more detail in the type transition section. The qualitative behavior of the continuous-time type transition follows the jump chain given non-explosion. Through the study of the jump chain and the embedded process, we reveal the role of passenger mutations in the evolution of our model of cancer dynamics.
\section*{Results}
We present the analysis of the embedded process in this section, along with simulations and computations. Table \ref{table:1} contains a list of parameters for the simulations. Note that the initial birth rate for the passenger dominance case is higher than that for the driver dominance case. This is for simulation purposes, to provide enough surviving lineages.
\begin{table}[!ht]
\centering
\begin{tabular}{|p{1.5cm}|p{.6cm}|p{.6cm}|p{.6cm}|p{.6cm}|p{.6cm}|p{.6cm}|}
\hline
\multicolumn{7}{|c|}{Parameter Specifications} \\
\hline
Parameters &$b_{0}$ &$\mu$ &$\nu$ &$s_{p}$ &$L$ &$\frac{\nu}{\mu L}$ \\
\hline
Spec. $D1$ &1.1 &0.0251 &0.05 &0.002 &2 &$\leq 1$ \\
Spec. $D2$ &1.1 &0.011 &0.05 &0.002 &5 &$\leq 1$ \\
Spec. $D3$ &1.1 &0.0051 &0.05 &0.002 &10 &$\leq 1$ \\
Spec. $P1$ &1.15 &0.055 &0.3 &0.002 &2 &$> 1$ \\
Spec. $P2$ &1.15 &0.02 &0.3 &0.002 &5 &$>1$ \\
Spec. $P3$ &1.15 &0.003 &0.3 &0.002 &10 &$>1$ \\
\hline
\end{tabular}
\caption{Parameter specifications for simulations. The $D$ or $P$ after specification indicates driver or passenger dominance. Dominance criterion is $\frac{\nu}{\mu L}$, which can be viewed as a relative measure of passenger load.}
\label{table:1}
\end{table}
\subsection*{Mean Matrix}
Let us denote $\delta(i)=b(i)+\mu+\nu+d$ for $i \in \mathbb{Z}$. The mean matrix of the embedded branching process $(\mathbf{E}_{n})$ has the form
\begin{align*}
&M_{i,i-1} = \frac{\nu}{\delta(i)} \to 0 \text{ as } i\to\infty \\
&M_{i,i} = 2\frac{b(i)}{\delta(i)}\to 2 \text{ as } i\to\infty \\
&M_{i,i+L} = \frac{\mu}{\delta(i)} \to 0 \text{ as } i\to\infty.
\end{align*}
\noindent The matrix is irreducible and aperiodic. By Theorem $6.1$ in \cite{seneta2006non}, the convergence parameter $R$ (reciprocal of the spectral radius $r$) is determined by
\begin{align*}
r= \frac{1}{R} = \lim_{n\to\infty}[(M^{n})_{i,j}]^{1/n},
\end{align*}
\noindent and the limit is independent of the indices $i,j$. Proposition $1$ in \nameref{appendix} proves $r = 2$.
\subsection*{Mean Dynamics}
If the mean matrix had been $R-$positive-recurrent, there would have existed a non-zero limit of $(RM)^{n}$ as $n\to\infty$ for each element. Proposition $2$ in \nameref{appendix} shows exactly the opposite is true and
\begin{align*}
\left( \frac{M}{c} \right)^{n} \to
\begin{cases}
0, \mathrm{\; elementwise\; if\; } c \geq 2 \\
\infty, \mathrm{\; elementwise\; if\; } 0 < c < 2.
\end{cases}
\end{align*}
\noindent Therefore, the mean dynamics for the discrete embedded branching process cannot be stabilized via division by an exponential function and the expected size of each type diverges to infinity. To see this intuitively, let us view acquiring a passenger mutation as death of the cell; then the new death rate is $\nu+d$. There is a positive probability that the original type $0$ cell acquires enough driver mutations for its fitness to satisfy $b(i) > \nu+d$, which implies by properties of a branching process that the expected population size diverges to infinity.
The limit in Proposition $2$ contains more refined information. The average population size for each type grows asymptotically like a binary fission, but it never achieves the growth of a binary fission ($2^{n}$). To interpret this, fix a small positive value $\epsilon \in (0,2)$ and pick $i$ such that $M_{i,i}>2-\epsilon$. Then the expected type $i$ population at the $n$th generation will eventually dominate $C(2-\epsilon)^{n}$ where $C$ is an arbitrary constant. In each generation, we expect to have a portion $\nu/\delta(i)$ of type $i$ cells acquiring an additional passenger mutation. Therefore, the expected type $i-1$ population at generation $n$ will also dominate $C(2-\epsilon)^{n}$ for large enough $n$. Figure $\ref{mean dynamics}$ contains plots of simulated dynamics and mean dynamics. The simulated mean dynamics increase without bound as suggested by Proposition $2$ and the rate of growth increases as the population acquires driver mutations. In the simulated dynamics, many simulations end up being extinct, and those surviving populations diverge to infinity in size.
\begin{figure*}[!ht]
\centering
\includegraphics[width=\textwidth]{mean_dynamics.png}
\caption{Simulated dynamics and mean dynamics.}\label{mean dynamics}
\end{figure*}
Unlike the conclusion in \cite{mcfarland2014tug}, the passengers in our model cannot force the expected population size to decrease. In our model, the passengers increase the extinction probabilities as we will see in the next section. The tug-of-war interaction cannot be observed at a population size level. Instead, it is reflected in the type transition process, at the scale of a single surviving lineage.
\subsection*{Extinction Probabilities}
For a branching process with infinitely many types, there are two modes of extinction. Global extinction is the event of the entire population eventually becoming extinct and partial extinction is the event that all types will eventually become extinct. For a precise mathematical definition, define $\mathbf{q}$ as the vector of global extinction probabilities conditional on initial types and $\tilde{\mathbf{q}}$ to be the vector of partial extinction probabilities conditional on initial types. Let $\underline{e}_{i}$ be the bi-infinite vector indexed by $\mathbb{Z}$ whose entries are all zeros except for the $i$th entry being one. Hence, $\{\mathbf{Z}(0)=\underline{e}_{i}\}$ indicates the population is initiated by a type $i$ cell.
\begin{align*}
&q_{i} = \mathbb{P}(\lim_{n\to\infty}||\mathbf{Z}(n)||_{\ell^{1}}=0 \mid \mathbf{Z}(0)=\underline{e}_{i});\\
\;\;&\tilde{q}_{i} = \mathbb{P}(\forall k\in\mathbb{Z}, \lim_{n\to\infty}\mathbf{Z}(n)_{k}=0 \mid \mathbf{Z}(0)=\underline{e}_{i}).
\end{align*}
The probability generating function (PGF) of the progeny distribution is
\begin{align*}
&\mathbf{G}(\mathbf{s})=(\cdots,G_{-1}(\mathbf{s}),G_{0}(\mathbf{s}),G_{1}(\mathbf{s}),\cdots),\\
&\text{where }G_{i}(\mathbf{s}) = \frac{d}{\delta(i)}+\frac{\nu}{\delta(i)}s_{i-1}+\frac{b(i)}{\delta(i)}s_{i}^{2}+\frac{\mu}{\delta(i)}s_{i+L}.
\end{align*}
\noindent Since the $k$th truncation of $\widetilde{M}$ (see proof of Proposition $1$ in \nameref{appendix}) is irreducible for $k\geq L+1$ and $r>1$, we invoke Proposition $4.1$ in \cite{hautphenne2013extinction} and conclude $\tilde{\mathbf{q}}<\underline{1}$, where $\underline{1}$ is the vector $(\cdots,1,1,1,\cdots)^{T}$. Furthermore, $\tilde{M}$ is irreducible, and this implies $\tilde{\mathbf{q}}=\mathbf{q}$ by Lemma $2.1$ in \cite{hautphenne2013extinction}. This implies that partial extinction as well as global extinction are not almost sure events, regardless of the values of the parameters $\mu,\nu,L,s_{p}$. Again, this can be explained by viewing occurrence of a passenger mutation as death of the cell. No matter how large $\nu$ is, there is still a chance that the original type $0$ cell acquires enough driver mutations to have fitness $b(i) > \nu+d$. Hence, there is always a positive probability for a cancer cell to escape from extinction.
Now we can deduce $\mathbf{q}$ is the minor fixed point of the progeny PGF (\underline{1} is always a fixed point for the PGF). Therefore,
\begin{align*}
q_{i} &= \frac{d}{\delta(i)}+\frac{\nu}{\delta(i)}q_{i-1}+\frac{b(i)}{\delta(i)}q_{i}^{2}+\frac{\mu}{\delta(i)}q_{i+L}.
\end{align*}
\noindent Let us recall that $q_{i}$ is the probability of extinction when the process is initiated with a type $i$ cell. As we demonstrate in Theorem $1$ in \nameref{appendix}, $i>j$ implies $q_{i}\leq q_{j}$. Therefore, $\lim_{i\to\infty}q_{i} = q_{\infty}$ and $\lim_{i\to-\infty}q_{i}=q_{-\infty}$ exist by monotonicity and boundedness. As a consequence,
\begin{align*}
&q_{\infty} = q_{\infty}^{2} \Rightarrow q_{\infty} = 0 \\
&q_{-\infty} = \frac{d+\nu q_{-\infty} + \mu q_{-\infty}}{\mu+\nu+d} \Rightarrow q_{-\infty} = 1.
\end{align*}
\noindent This is not surprising since as $i \to \infty$, the process tends to a pure birth process that can never become extinct. As $i\to -\infty$, the process tends to a left-continuous random walk with an absorbing state which is doomed to become extinct.
An explicit expression for $q_{i}$ is difficult to find since the difference equation is quadratic, inhomogeneous, and varies with $i$. We resort to an algorithm in \cite{hautphenne2013extinction} to obtain an approximation of the extinction probability. We are particularly interested in $q_{0}$, when the process is initiated by a single cell of type $0$.
The algorithm can be applied to the ``doubly''-infinite type space such as the set of all integers $\mathbb{Z}$. Using notation of \cite{hautphenne2013extinction}, let $q_{0}^{(k)}$ be the probability that the process becomes extinct before reaching types in the set $\{i:\;i>k\}\cup\{i:\;i\leq -k\}$ (taboo types). Let $T_{e}$ be the time of extinction and $\tau_{k}$ be the time of first arrival into the set of taboo types, then
\begin{align*}
q_{0}^{(k)} = \mathbb{P}(T_{e}<\tau_{k}\mid \mathbf{Z}(0)=\underline{e}_{0}).
\end{align*}
\noindent It holds that $\lim_{k\to\infty}\tau_{k}=\infty$ almost surely and Lemma $3.1$ in \cite{hautphenne2013extinction} shows that $q_{0}^{(k)}$ can be used to approximate the extinction probability since
\begin{align*}
\lim_{k\to\infty}q_{0}^{(k)}&=\mathbb{P}(T_{e}<\lim_{k\to\infty}\tau_{k}\mid \mathbf{Z}(0)=\underline{e}_{0})\\
&=\mathbb{P}(T_{e}<\infty \mid \mathbf{Z}(0)=\underline{e}_{0})\\
&=q_{0}.
\end{align*}
\noindent We define the taboo type set as in the algorithm of \cite{hautphenne2013extinction}, $\Delta_{nL}=\{nL+1,nL+2,\cdots\}\cup \{-nL-1,-nL-2,\cdots\}$, which is equivalent to $\{2nL+1,2nL+2,\cdots\}$ in the $\pi$-labeling $\eqref{eq:relabel}$. Computed extinction probabilities are compared with simulated extinction probabilities for the embedded branching process in Table $\ref{table:2}$.
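As a rough illustration of this truncation idea (and not of the exact algorithm of \cite{hautphenne2013extinction}), one can assign extinction probability $0$ to the taboo types and iterate the progeny PGF upward from $\mathbf{q}=\mathbf{0}$, which converges to the minimal fixed point of the truncated system; the parameters below correspond to Spec.\ $D1$.
\begin{verbatim}
# Sketch: fixed-point iteration for q_0 with taboo types set to 0.
def extinction_prob(b0=1.1, sp=0.002, mu=0.0251, nu=0.05, d=1.0, L=2,
                    k=200, iters=5000, tol=1e-10):
    types = range(-k, k + 1)
    q = {i: 0.0 for i in types}             # start from q = 0
    for _ in range(iters):
        new_q = {}
        for i in types:
            bi = b0 * (1 + sp) ** i
            delta = bi + mu + nu + d
            new_q[i] = (d + nu * q.get(i - 1, 0.0) + bi * q[i] ** 2
                        + mu * q.get(i + L, 0.0)) / delta
        if max(abs(new_q[i] - q[i]) for i in types) < tol:
            q = new_q
            break
        q = new_q
    return q[0]                             # approximation of q_0
\end{verbatim}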
\begin{table}[ht]
\centering
\begin{tabular}{|p{1.5cm}||p{1.5cm}|p{1.5cm}|}
\hline
\multicolumn{3}{|c|}{Extinction Probabilities} \\
\hline
&Simulated $q_{0}$ &Computed $q_{0}$ \\
\hline
Spec. $D1$ &0.920 &0.909 \\
Spec. $D2$ &0.905 &0.909 \\
Spec. $D3$ &0.910 &0.909 \\
Spec. $P1$ &0.874 &0.871 \\
Spec. $P2$ &0.872 &0.872 \\
Spec. $P3$ &0.876 &0.873 \\
\hline
\end{tabular}
\caption{Simulated extinction and computed extinction probability when the population is initiated with a type $0$ cell. All probabilities are rounded to the third decimal place.}
\label{table:2}
\end{table}
Figure $\ref{extinction probabilities}$ contains extinction probabilities for different initial types. The extinction probabilities tend to $1$ as the initial type tends to $-\infty$ and decrease monotonically as the initial type tends to $\infty$. Notice that, as the type $i\to-\infty$, there are abrupt increments in the extinction probabilities in the passenger dominance cases. This phenomenon can be explained by negative selection: natural selection imposes a ``barrier'' for small types. In the next section, we will see how natural selection regulates the impact of passenger mutations on fitness.
\begin{figure*}[!ht]
\centering
\includegraphics[width=\textwidth]{ext_prob.png}
\caption{Extinction probabilities computed using the algorithm in \cite{hautphenne2013extinction} section $3.1$.}\label{extinction probabilities}
\end{figure*}
\subsection*{Type Transition Process}
Now we focus on a single non-extinct lineage and model the transition between types on the evolutionary path. This corresponds to investigating the transition between types in the backbone of the branching process. In \cite{durrett2015branching}, the backbone of a branching process consists of individuals that have descendants for infinitely many generations. This is feasible since we have shown that the probabilities of global and partial extinction are less than $1$. The probability transition matrix $T$ has the following form.
\begin{align*}
&T_{i,i-1} = \frac{\nu}{b(i)+\mu+\nu+d}\frac{1-q_{i-1}}{1-q_{i}} \\
&T_{i,i} = \frac{b(i)}{b(i)+\mu+\nu+d}\frac{1-q_{i}^{2}}{1-q_{i}} = \frac{b(i)}{b(i)+\mu+\nu+d}(1+q_{i}) \\
&T_{i,i+L} = \frac{\mu}{b(i)+\mu+\nu+d}\frac{1-q_{i+L}}{1-q_{i}}.
\end{align*}
\noindent To verify the row sums are equal to one, recall that the probability of extinction is the fixed point of the progeny PGF, that is, for all $i \in \mathbb{Z}$,
\begin{align*}
q_{i}= &\frac{d}{\delta(i)}+\frac{\nu}{\delta(i)}q_{i-1}+\frac{b(i)}{\delta(i)}q_{i}^{2}+\frac{\mu}{\delta(i)}q_{i+L} \\
\Rightarrow &\;\;1 = \frac{\nu}{\delta(i)}\frac{1-q_{i-1}}{1-q_{i}}+\frac{b(i)}{\delta(i)}\frac{1-q_{i}^{2}}{1-q_{i}}+\frac{\mu}{\delta(i)}\frac{1-q_{i+L}}{1-q_{i}}.
\end{align*}
Denote the type transition process as $(X_{n})_{n=0}^{\infty}$. We have $X_{0} = 0$, since our model is initiated with a single type-$0$ cell. The continuous-time type transition process $(Y_{t})_{t\in\mathbb{R}_{+}}$ can be reconstructed by appending an exponential holding time to each state. The distribution of the holding time for state $i$ can be represented as a random sum
\begin{align*}
&\sum_{k=1}^{N_{i}}E_{i,k}, \text{ where } E_{i,k} \stackrel{IID}{\sim} \mathrm{Exp}(\delta(i)),\\
\;\;&\text{where } N_{i} \sim Geom(1-T_{i,i}) \text{ on } \{1,2,\cdots\} \\
&\text{and } \{N_{i}, E_{i,1},E_{i,2},\cdots\}\; \text{is independent}.
\end{align*}
\noindent The distribution of this random sum follows an $\mathrm{Exp}(\delta(i)(1-T_{i,i}))$ distribution. Moreover, since
\begin{align*}
&\lim_{i\to\infty}\delta(i)(1-T_{i,i}) = \mu+\nu, \lim_{i\to-\infty}\delta(i)(1-T_{i,i}) = \mu+\nu+d \\
&\Rightarrow \;\sup_{i\in\mathbb{Z}}\{\delta(i)(1-T_{i,i})\} < \infty \text{ and the process is non-explosive.}
\end{align*}
\noindent Therefore, it suffices to analyze the jump chain $(\tilde{X}_{n})$ with transition probability matrix $\tilde{T}$ such that
\begin{align*}
&\tilde{T}_{i,i-1} = \frac{\nu}{(b(i)+\mu+\nu+d)(1-T_{i,i})}\frac{1-q_{i-1}}{1-q_{i}} \\
&\tilde{T}_{i,i+L} = \frac{\mu}{(b(i)+\mu+\nu+d)(1-T_{i,i})}\frac{1-q_{i+L}}{1-q_{i}}.
\end{align*}
\noindent Conditional increments for two processes are
\begin{equation}
\begin{aligned}
&\mathbb{E}(X_{n+1}\mid X_{n}=i) - i = \frac{\mu L(1-q_{i+L}) -\nu(1-q_{i-1})}{\delta(i)(1-q_{i})}, \\
&\mathbb{E}(\tilde{X}_{n+1}\mid \tilde{X}_{n}=i) - i = \frac{\mu L(1-q_{i+L}) -\nu(1-q_{i-1})}{\delta(i)(1-q_{i})(1-T_{i,i})}.
\end{aligned}
\label{eq:cond_incre}
\end{equation}
\noindent Note that the numerators are identical. By investigating the conditional increment of the process $(X_{n})$ and the jump chain $(\tilde{X}_{n})$, we arrive at a criterion to categorize them.
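Given a table of extinction probabilities (for instance, from the sketch in the extinction-probability section), one step of the backbone type transition can be simulated directly from the matrix $T$; the sketch below is illustrative, with hypothetical names, and assumes $q$ covers the types that are visited.
\begin{verbatim}
# Sketch: one step of the type transition X_n along a surviving lineage.
import random

def backbone_step(i, q, b, mu, nu, d, L):
    # q: dict type -> extinction probability (must contain i-1, i, i+L);
    # b: fitness function b(i). Probabilities follow the matrix T above.
    delta = b(i) + mu + nu + d
    p_down = nu / delta * (1 - q[i - 1]) / (1 - q[i])   # passenger
    p_stay = b(i) / delta * (1 + q[i])                  # division
    p_up   = mu / delta * (1 - q[i + L]) / (1 - q[i])   # driver
    r = random.random() * (p_down + p_stay + p_up)      # rows sum to one
    if r < p_down:
        return i - 1
    if r < p_down + p_stay:
        return i
    return i + L
\end{verbatim}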
\subsubsection*{Driver Dominance $(\nu\leq \mu L)$}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{dri_dom.png}
\caption{Discrete and continuous type transition process simulations with fitness and average fitness under driver dominance.}\label{driver dominance}
\end{figure*}
Let us consider the numerator of the increment in $\eqref{eq:cond_incre}$,
\begin{align*}
&\mu L(1-q_{i+L}) -\nu(1-q_{i-1}) > 0, \forall i \in\mathbb{Z} \\
\iff &\frac{\nu}{\mu L} < \frac{1-q_{i+L}}{1-q_{i-1}}, \forall i \in \mathbb{Z} \\
\iff &\frac{\nu}{\mu L} \leq 1.
\end{align*}
\noindent Accordingly, both $(X_{n})$ and $(\tilde{X}_{n})$ are submartingales (processes whose conditional increments are non-negative) when $\nu \leq \mu L$. Under this condition, $\mathbb{E}(X_{n})$ and $\mathbb{E}(b(X_{n}))$ diverge to infinity due to Lemma $2$ and Theorem $2$ in \nameref{appendix}. $\mathbb{E}(b(X_{n}))$ can be interpreted as the average fitness at the $n$th generation. Analogously, $\mathbb{E}(\tilde{X}_{n})$ and $\mathbb{E}(b(\tilde{X}_{n}))$ both diverge to infinity as well.
According to Theorem $2$, both $(X_{n})$ and $(\tilde{X}_{n})$ are transient. A sufficient condition for transience given in the proof is $q_{i} \leq d/b(i)$ for all $i\in\mathbb{Z}$, which has a probabilistic interpretation. Since $b(i)$ is decreasing exponentially fast as $i\to-\infty$, $d/b(i) \geq 1 > q_{i}$ for small $i$ and the condition is automatically satisfied. If $d/b(i) < 1$ for some fixed $i$, it is a solution to
\begin{align*}
x = \frac{d}{b(i)+d} + \frac{b(i)}{b(i)+d}x^{2}, x \in (0,1).
\end{align*}
\noindent Since $d/b(i)$ is the minor fixed point of the PGF for a binary fission with birth rate $b(i)$ and death rate $d$, it is the extinction probability of a cell population initiated by a type $i$ cell with $\mu = \nu = 0$. Intuitively, if driver mutations dominate, extinction is less likely to occur and $q_{i} \leq d/b(i)$ should hold. Verification of this condition is in the proof of Theorem $2$.
Due to transience, we have $T^{n} \to 0$ elementwise as $n\to\infty$ and there is no limiting distribution for $(X_{n})$ and $(Y_{t})$. According to Figure $\ref{driver dominance}$, the simulated type transition process becomes more and more diffuse in the positive direction as generation count or time unfolds. In addition, the simulated average fitness has a monotonic trend and diverges to $\infty$.
\subsubsection*{Passenger Dominance $(\nu > \mu L)$}
Results in this section are based on Lemma $2$ in \nameref{appendix}, which guarantees that the process cannot be a supermartingale if $\nu > \mu L$. To see this, let $\alpha$ be the minimal solution to $\mu x^{L+1}-(\mu+\nu+d)x+\nu=0$ subject to the constraint $x \geq 1$; then $\alpha = \lim_{i\to-\infty}\frac{1-q_{i}}{1-q_{i-1}} > 1$ by Lemma $3$ in \nameref{appendix}. Suppose the process is a supermartingale (a process whose conditional increment is non-positive), then
\begin{align*}
&\mu L(1-q_{i+L}) -\nu(1-q_{i-1}) \leq 0, \forall i \in \mathbb{Z} \\
\iff &\frac{\nu}{\mu L} \geq \frac{1-q_{i+L}}{1-q_{i-1}}, \forall i \in \mathbb{Z} \\
\Rightarrow &\frac{\nu}{\mu L} \geq \lim_{i\to-\infty}\frac{1-q_{i+L}}{1-q_{i-1}} = \alpha^{L+1}.
\end{align*}
\noindent To see the last implication, note that $\frac{1-q_{i+L}}{1-q_{i-1}}=\prod_{j=0}^L\frac{1-q_{i+j}}{1-q_{i+j-1}}$. This contradicts the inequality $\frac{\nu}{\mu L} < \alpha^{L+1}$ in Lemma $3$. Therefore, there exists an index $I$ such that $\mathbb{E}(X_{n+1}\mid X_{n}=i) > i$ for all $i< I$. The index $I$ can be viewed as the location of the drift barrier, discussed further in the Discussion section. Computed values of $I$ are $-53$, $-54$, and $-47$ for specifications P1, P2, and P3, respectively, and they correspond to the abrupt changes in extinction probability in Figure $\ref{extinction probabilities}$.
By Theorem $3$ in \nameref{appendix}, $(Y_{t})$ admits a limiting distribution when $\nu > \mu L$ and $(X_{n})$ admits a limiting distribution when $\frac{\nu}{\mu L} > (1+s_{p})^{L}$. This can be interpreted as eventual stabilization of the composition of the tumor as shown in Figure $\ref{passenger dominance}$. Average fitness for the discrete type transition seems to decrease and become stable in Figure $\ref{passenger dominance}$ and this behavior is analogous to that in the experiment in \cite{mcfarland2017damaging}.
\noindent\textbf{Remark:} $(X_{n})$ and $(\tilde{X}_{n})$ can never be martingales since $\frac{\nu}{\mu L} = \frac{1-q_{i+L}}{1-q_{i-1}}$ for all $i$ implies $\frac{1-q_{i+L}}{1-q_{i-1}}$ is a constant, which is impossible since $\frac{1-q_{i+L}}{1-q_{i-1}}\to 1$ as $i\to \infty$ and $\frac{1-q_{i+L}}{1-q_{i-1}}>1$ for all $i$.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{pas_dom.png}
\caption{Discrete and continuous type transition process simulations with fitness and average fitness under passenger dominance.}\label{passenger dominance}
\end{figure*}
\section*{Discussion}
Cancer cells are believed to have elevated mutation rates. Mutations may be advantageous, neutral, or deleterious. In cancer, the relatively rare advantageous mutations are named ``drivers'', while the more frequent selectively neutral or slightly deleterious mutations are named ``passengers''. This convenient conceptual dichotomy prevails in the biomedical and mathematical biology literature, including \cite{mcfarland2014tug,mcfarland2017damaging}. Specifically, \cite{mcfarland2014tug} introduces the tug-of-war interaction between driver and passenger mutations, which we analyze in the current paper in the branching process framework. Since our model does not take environmental pressure into account, it can only model cancer progression in the early phase, before growth slowdown. However, even in this framework, the deleterious effects of passenger mutations are observed. If passengers dominate, quasi-stationarity is observed, leading to stability of tumor composition conditional on non-extinction.
In the present work, we remove part of the complexity of the original problem, by converting the two-dimensional type-space walk of each individual cell state $(n_d,n_p)\in\mathbb{Z}_+^2$ into a one-dimensional type-space random walk over $x\in\mathbb{Z}$ along the trajectories in which a new driver mutation corresponds to a jump up of $L$ steps, while a new passenger mutation corresponds to a jump down of $1$ step, where $L$ satisfies the relationship in \eqref{eq:dp_relation}, i.e.,
$$
\log_{(1+s_{p})}(1+s_{d})=L \in \mathbb{N}, \;\;L \geq 2.
$$
This leads us to a denumerable-type branching process of the type similar to the models in Chapter 8 of the Kimmel and Axelrod monograph \cite{kimmel2015branching}, mathematically developed in the works of Hautphenne and co-workers \cite{hautphenne2013extinction}. Using results from these works, as well as those of Seneta \cite{seneta2006non}, we show that under driver dominance, the process is transient (``escapes to infinity''), while under passenger dominance, it is recurrent and leads to a limit distribution, given the infinitely long lineage. The analytical results are supported by simulations and numerical approximations.
Our analysis reveals an asymmetric relationship between the impact of the two types of mutations. In particular, the process is never driven to extinction with probability $1$. Under the downward-trend regime (passenger dominance), there is, with positive probability, a ``reservoir'' of cells with a wide range of types (fitnesses). In the context of a cancer cell population, this may mean that ``indolent'' cancer cell colonies may allow the biological process to rebound if conditions change, as in the ``punctuated equilibria'' theory of cancer evolution in \cite{gao2016punctuated} and \cite{davis2017tumor}.
In our model, we investigate interactions between two types of mutations and their influences on the cancer cell population. We show that passenger mutations do not drive the mean dynamics to zero if the death rate of a cell is a constant. The impact of passenger mutations is reflected in extinction probabilities, average fitness, and the composition of a surviving cancer cell population (the tumor heterogeneity). Our analysis, along with numerical computations and simulations, reveals the asymmetric relationship between the two types of mutations. This phenomenon, known as the ``drift barrier'', has been studied for a number of years in different population genetics models \cite{bachtrog2004adaptive, good2014deleterious, good2012distribution, rouzine2003solitary}. The difference between the classical drift barrier and the effects we observe is that we see quasi-stability at the bottom end of the state ladder, while the classical authors observe a reverse effect that slows down the upward movement towards perfect adaptation. Both result from stochastic genetic drift.
The current model is an alternative version of the models used by Kurpas and Kimmel \cite{kurpas2022modes} to obtain, computationally, the site frequency spectra (SFS) of a range of types of human tumors. However, those models were framed in terms of a Moran process with constant population size. This other setting allows us to model a form of environmental constraints, as the competitive malignant clones gradually suppress normal tissue (see Figure 15 in \cite{kurpas2022modes}), albeit differently from the saturation mechanism assumed in \cite{mcfarland2014tug}. The Moran-process Tug-of-War was also used by Kurpas et al. \cite{kurpas2022genomic} to model the evolution of consecutive waves of viral infections. In that setting, the individuals are not cells but viral genomes. Consecutive waves of virus variants modeled by the Tug-of-War, depicted in Figure 15 in \cite{kurpas2022genomic}, are qualitatively similar to the data-based version in their Figure 7.
\section*{Materials and Methods}
Extinction probabilities are computed using the algorithm in Section 3.1 of \cite{hautphenne2013extinction}. Details of the simulations are in the Simulations section of \nameref{appendix}.
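\noindent As a purely illustrative aside (this is {\it not} the algorithm of \cite{hautphenne2013extinction}), the extinction probability vector of a finitely truncated multitype branching process can be approximated by the textbook fixed-point iteration $q \leftarrow G(q)$ started from the zero vector, where $G$ is the progeny generating function. A minimal Python sketch, with the function \texttt{pgf} supplied by the user:
\begin{verbatim}
import numpy as np

def extinction_probs(pgf, n_types, tol=1e-12, max_iter=100000):
    """Fixed-point iteration q <- G(q) for a truncated multitype branching
    process; pgf maps a vector q in [0,1]^n_types to G(q)."""
    q = np.zeros(n_types)
    for _ in range(max_iter):
        q_new = pgf(q)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q
\end{verbatim}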
\section*{Acknowledgements}
Ren-Yi Wang was supported by the Department of Statistics at Rice University and by the NIH grant R01268380 to Wenyi Wang. Marek Kimmel was supported in part by the NSF DMS RAPID COLLABORATION grant 2030577 to Marek Kimmel and Simon Tavar\'e.
|
{
"arxiv_id": "2302.08708",
"language": "en",
"timestamp": "2023-02-20T02:07:27",
"url": "https://arxiv.org/abs/2302.08708",
"yymm": "2302"
} | \section{Introduction}
In this paper, we are concerned with finite simple graphs.
Also, for each positive integer $r$, the symbol $rK_2$ stands for an $r$-matching, that is, a matching of size $r$.
If $G$ is a graph, then by $\left( G , rK_2 \right)$, which is called {\it the matching Kneser graph}, we mean
a graph whose vertex set consists of all $r$-matchings of $G$;
and two vertices in $\left( G , rK_2 \right)$ are declared to be adjacent whenever their corresponding $r$-matchings do not intersect, or in other words, their corresponding $r$-matchings contain no edge in common.
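\noindent For small graphs, the matching Kneser graph can be generated by brute force. The following Python sketch (using the \texttt{networkx} and \texttt{itertools} libraries; it is purely illustrative and plays no role in our proofs) enumerates all $r$-matchings of $G$ and joins two of them whenever they share no edge :
\begin{verbatim}
import itertools
import networkx as nx

def matching_kneser_graph(G, r):
    """Build (G, rK_2): vertices are the r-matchings of G,
    two of them adjacent when they have no edge in common."""
    r_matchings = [frozenset(E)
                   for E in itertools.combinations(G.edges(), r)
                   if nx.is_matching(G, set(E))]
    K = nx.Graph()
    K.add_nodes_from(r_matchings)
    for M, N in itertools.combinations(r_matchings, 2):
        if M.isdisjoint(N):   # no common edge
            K.add_edge(M, N)
    return K
\end{verbatim}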
\noindent We say that a graph $H$ is a {\it spanning subgraph} of a graph $G$ whenever $H$ is a subgraph of $G$ and $V(G)=V(H)$.
\noindent A graph is called {\it $\left( rK_2 \right)$-free} whenever it contains no matching of size $r$. For a graph $G$, {\it the generalized Turán number} ${\rm ex}\left( G , rK_2 \right)$ stands for the maximum number of edges of an $\left( rK_2 \right)$-free
spanning subgraph of $G$ \cite{ALISHAHI20172366, alishahi_hossein_2020}.
By this definition, one finds the following proposition :
\begin{pro} \label{Proposition1}
The expression $|E(G)| - {\rm ex}\left( G , rK_2 \right)$ is equal to the minimum number of edges that must be removed from $G$ in order to obtain a remaining $\left( rK_2 \right)$-free
spanning subgraph of $G$.
\end{pro}
Alishahi and Hajiabolhassan \cite{ALISHAHI20172366, alishahi_hossein_2020} observed that $|E(G)| - {\rm ex}\left( G , rK_2 \right)$
provides an upper bound for the chromatic number of $\left( G , rK_2 \right)$.
\begin{theorem} \cite{ALISHAHI20172366, alishahi_hossein_2020}
For any graph $G$ we have
$$\chi \left( G , rK_2 \right) \leq |E(G)| - {\rm ex}\left( G , rK_2 \right).$$
\end{theorem}
Also, Alishahi and Hajiabolhassan \cite{alishahi_hossein_2020}, by means of some powerful tools developed in \cite{ALISHAHI2015186}, found some important and interesting families
of graphs for which the equality $\chi \left( G , rK_2 \right) = |E(G)| - {\rm ex}\left( G , rK_2 \right)$ holds.
They \cite{alishahi_hossein_2020} pointed out that the inequality
$\chi \left( G , rK_2 \right) \leq |E(G)| - {\rm ex}\left( G , rK_2 \right)$
becomes strict for some disconnected graphs.
They \cite{alishahi_hossein_2020} illustrated this point by noting the celebrated Lov{\'a}sz's theorem \cite{MR514625}
which asserts that if $l \geq 2r-1$ then $\chi \left( lK_2 , rK_2 \right) = l-2r + 2$ while
$\left| E\left( lK_2 \right) \right| - {\rm ex}\left( lK_2 , rK_2 \right) = l-r+1$.
Alishahi and Hajiabolhassan \cite{alishahi_hossein_2020} conjectured that the strict inequality could occur just for some
disconnected graphs.
\begin{con} (\cite{alishahi_hossein_2020}) \label{Conjecture1}
Every connected graph $G$ satisfies the equality $$\chi \left( G , rK_2 \right) = |E(G)| - {\rm ex}\left( G , rK_2 \right) .$$
\end{con}
This conjecture was recently disproved by Iradmusa \cite{Iradmusa2023}
by means of an elegant use of a class of graphs, called snarks.
A {\it snark} \cite{10.2307/24950311} is a $3$-regular connected graph which is bridgeless and has chromatic index equal to $4$.
Iradmusa \cite{Iradmusa2023} proved that if $G$ is a snark and $|V(G)| = 2r$, then $G$ satisfies
$\chi \left( G , rK_2 \right) = 1$ and $|E(G)| - {\rm ex} \left( G , rK_2 \right) = 3$; and thus $G$
provides a counterexample to Conjecture \ref{Conjecture1}.
In Iradmusa's counterexample, we have $\chi \left( G , rK_2 \right) = 1$ and also,
$$\begin{array}{lcl}
\Bigl( |E(G)| - {\rm ex} \left( G , rK_2 \right) \Bigr) - \chi \left( G , rK_2 \right) = 2 & {\rm and} &
\frac{|E(G)| - {\rm ex} \left( G , rK_2 \right)}{\chi \left( G , rK_2 \right)} = 3
\end{array}$$
for all values of $r \in \{3,4,5, \dots \}$.
So, some natural questions still unanswered may be raised as follows :
\begin{qu} \label{Question1}
Does there exist a counterexample $G$ for Conjecture \ref{Conjecture1} with $\chi \left( G , rK_2 \right) > 1$?
\end{qu}
\begin{qu} \label{Question2}
For any arbitrary but fixed positive integer $\Theta$, does there exist a sequence
of graphs $\left( G_r \right) _{r=3} ^{\infty}$ with
$\chi \left( G_r , rK_2 \right) = \Theta$ for all $r \geq 3$
in such a way that
$\Bigl( |E\left( G_r \right)| - {\rm ex} \left( G_r , rK_2 \right) \Bigr) - \chi \left( G_r , rK_2 \right) $
tends to infinity as $r$ tends to infinity?
\end{qu}
\begin{qu} \label{Question3}
For any arbitrary but fixed positive integer $\Theta$, does there exist a sequence
of graphs $\left( G_r \right) _{r=3} ^{\infty}$ with
$\chi \left( G_r , rK_2 \right) = \Theta$ for all $r \geq 3$
in such a way that
$$\lim_{r \rightarrow \infty} \frac{|E\left( G_r \right)| - {\rm ex} \left( G_r , rK_2 \right)}{\chi \left( G_r , rK_2 \right)} = + \infty $$
holds?
\end{qu}
An affirmative answer to Question \ref{Question3} will also provide an affirmative answer to Question \ref{Question2} immediately.
\begin{qu} \label{Question4}
Does there exist a tree $G$ as a counterexample to Conjecture \ref{Conjecture1}?
\end{qu}
In this paper, we provide affirmative answers to the above questions.
Our results make no appeal to \cite{Iradmusa2023}.
\section{The Main Results}
This section is devoted to the main results of this paper.
We state and prove our main result in Theorem \ref{key}.
As a consequence of Theorem \ref{key}, in Corollary \ref{key2} we explicitly construct an appropriate tree as an affirmative answer to Question \ref{Question1} and Question \ref{Question4}; and moreover,
we explicitly construct an appropriate sequence of
trees as affirmative answers to Question \ref{Question2} and Question \ref{Question3}.
\begin{theorem} \label{key}
For any positive integers $r$, $\Theta$, and $\Gamma$ such that $r \geq 3$ and $\Gamma \leq r-2$,
there exists a connected bipartite graph $G$ for which
$$\begin{array}{lcr}
\chi \left( G , rK_2 \right) = \Theta & {\rm and} & |E(G)| - {\rm ex}\left( G , rK_2 \right) = \Theta + \Gamma .
\end{array}$$
\end{theorem}
\begin{proof}
{
Put $t:=(r-1)- \Gamma$.
So, we have $1\leq t \leq r-2$.
Also, put $$l:= \Theta + 2 \Gamma = \Theta + 2 (r-1-t) . $$
Now, consider a connected bipartite graph $G$ with a bipartition $V(G) := V_{1} \cup V_{2}$ where
\begin{itemize}
\item $V_1 := \{x_1 , x_2 , \dots , x_l\} \cup \left\{w_1 , w_2 , \dots , w_{t {l \choose {r-t}} + l } \right\} ,$
\item $V_2 := \{y_1 , y_2 , \dots , y_l\} \cup \{z_1 , z_2 , \dots , z_t\} ;$
\end{itemize}
whose edge set $E(G)$ is defined as follows :
$$E(G) := \{x_1 y_1 , x_2 y_2 , \dots , x_l y_l\} \cup \{v z_j : v \in V_1 \ {\rm and} \ 1\leq j \leq t\} .$$
Indeed, $E(G)$ is constructed by a matching $\{x_1 y_1 , x_2 y_2 , \dots , x_l y_l\}$ together with all edges with one end in
$\{z_1 , z_2 , \dots , z_t\}$ and the other end in $V_1$.
Since each $r$-matching in $G$ has at least $r-t$ edges in $\{x_1 y_1 , x_2 y_2 , \dots , x_l y_l\}$, one could find a graph
homomorphism from $\left( G , rK_2 \right)$ to $\left( lK_2 , (r-t)K_2 \right) $;
which implies
$$\chi \left( G , rK_2 \right) \leq \chi \left( lK_2 , (r-t)K_2 \right) .$$
On the other hand, to each matching ${\cal M}$ of size $r-t$ in $\{x_1 y_1 , x_2 y_2 , \dots , x_l y_l\}$, we can assign an $r$-matching
${\cal M} \cup \widetilde{{\cal M}}$ in $G$ with the following three properties :
\begin{itemize}
\item $\widetilde{{\cal M}}$ is a $t$-matching.
\item Both ends of each edge of $\widetilde{{\cal M}}$ lie in
$\{z_1 , \dots , z_t\} \cup \left\{w_1 , \dots , w_{t {l \choose {r-t}} + l } \right\}$.
\item For any two distinct $(r-t)$-matchings ${\cal M}$ and ${\cal N}$ in
$\{x_1 y_1 , x_2 y_2 , \dots , x_l y_l\}$, two $t$-matchings $\widetilde{{\cal M}}$ and
$\widetilde{{\cal N}}$ are edge-disjoint.
\end{itemize}
Now, the assignment ${\cal M} \longmapsto {\cal M} \cup \widetilde{{\cal M}}$ defines a graph homomorphism
from the graph
$\left( lK_2 , (r-t)K_2 \right) $ to the graph $\left( G , rK_2 \right)$; which implies
$$\chi \left( lK_2 , (r-t)K_2 \right) \leq \chi \left( G , rK_2 \right) .$$
We conclude that
$$\chi \left( G , rK_2 \right) = \chi \left( lK_2 , (r-t)K_2 \right) = \Theta .$$
Suppose that one chooses $\Theta + \Gamma$ arbitrary edges from $\{x_1 y_1 , x_2 y_2 , \dots , x_l y_l\}$
and then removes them from $G$. Doing so, in the resulting subgraph, $V_2$ has exactly $\Gamma + t$
non-isolated vertices. Since $\Gamma + t = r-1$, the resulting subgraph contains no $rK_2$; and
therefore, Proposition \ref{Proposition1} implies that
$$|E(G)| - {\rm ex}\left( G , rK_2 \right) \leq \Theta + \Gamma .$$
We shall have established the theorem if we prove that
$|E(G)| - {\rm ex}\left( G , rK_2 \right) \geq \Theta + \Gamma .$
In this regard, it is sufficient to show that if $A$ is an arbitrary subset of $E(G)$ such that
$|A| = \Theta + \Gamma - 1$, then the resulting subgraph of $G$ obtained by removing all
edges of $A$ from $G$ still contains an $rK_2$.
We denote the resulting subgraph by $G-A$.
\noindent
Since $\left| \{x_1 y_1 , x_2 y_2 , \dots , x_l y_l\} \setminus A \right| \geq \left| \{x_1 y_1 , x_2 y_2 , \dots , x_l y_l\} \right| - |A|
= l - (\Theta + \Gamma - 1) = (\Theta + 2 \Gamma) - (\Theta + \Gamma - 1) = \Gamma + 1$,
we find that $G-A$ contains at least $\Gamma + 1$ edges from $\{x_1 y_1 , x_2 y_2 , \dots , x_l y_l\}$.
Without loss of generality, we may assume that $$\{x_1 y_1 , x_2 y_2 , \dots , x_{\Gamma + 1} y_{\Gamma + 1}\} \subseteq E(G-A).$$
Since $|A| = \Theta + \Gamma - 1 \leq \Theta + 2 \Gamma = l$, we find that there exist $t$ vertices in
$\left\{w_1 , w_2 , \dots , w_{t {l \choose {r-t}} + l } \right\}$
that are incident with no edge of $A$.
Without loss of generality, we may assume that these vertices are
$w_1 , w_2 , \dots , w_t$.
So, $$\left\{ w_1 z_1 , w_2 z_2 , \dots , w_t z_t\right\} \subseteq E(G-A) .$$
We conclude that
$$\{x_1 y_1 , x_2 y_2 , \dots , x_{\Gamma + 1} y_{\Gamma + 1}\} \cup \left\{ w_1 z_1 , w_2 z_2 , \dots , w_t z_t\right\} \subseteq E(G-A).$$
Hence, $G-A$ contains a matching of size $\Gamma + 1 + t = r$; and we are done.
}
\end{proof}
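\noindent For concreteness, the graph $G$ constructed in the proof above can be generated explicitly. The following Python sketch (an illustration only, with vertex labels of our own choosing) builds $G$ from $r$, $\Theta$, and $\Gamma$ :
\begin{verbatim}
import networkx as nx
from math import comb

def counterexample_graph(r, theta, gamma):
    """Bipartite graph from the proof above; requires r >= 3,
    theta >= 1 and 1 <= gamma <= r - 2."""
    t = (r - 1) - gamma
    l = theta + 2 * gamma
    xs = ["x%d" % i for i in range(1, l + 1)]
    ys = ["y%d" % i for i in range(1, l + 1)]
    ws = ["w%d" % i for i in range(1, t * comb(l, r - t) + l + 1)]
    zs = ["z%d" % j for j in range(1, t + 1)]
    G = nx.Graph()
    G.add_edges_from(zip(xs, ys))                          # the matching x_i y_i
    G.add_edges_from((v, z) for v in xs + ws for z in zs)  # all edges V_1 -- z_j
    return G
\end{verbatim}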
It is worth pointing out that for the case where $\Gamma = r-2$, the constructed graph $G$ in the Proof of Theorem \ref{key}
is a tree of radius $2$.
\begin{cor} \label{key2}
For any two positive integers $r$ and $\Theta$ with $r\geq 3$, there exists a tree $T$ of radius $2$ such that
$$\begin{array}{lcr}
\chi \left( T , rK_2 \right) = \Theta & {\rm and} & |E(T)| - {\rm ex}\left( T , rK_2 \right) = \Theta + r - 2 .
\end{array}$$
\end{cor}
\bibliographystyle{plain}
|
{
"arxiv_id": "2302.08649",
"language": "en",
"timestamp": "2023-02-20T02:04:58",
"url": "https://arxiv.org/abs/2302.08649",
"yymm": "2302"
} | \section{Introduction}
\label{sec:1}
\noindent
The endpoints of an open string induce non-commutativity on the target space.
The Seiberg-Witten (SW) map \cite{Seiberg:1999vs,Cornalba:1999ah,Okawa:1999cm,Asakawa:1999cu,Ishibashi:1999vi} establishes the equivalence between the commutative and non-commutative descriptions.
The endpoints lie in a ($p+1$)-dimensional hypersurface, called a Dirichlet (D)-brane.
The non-commutative description replaces the ordinary product with the Moyal product.
The Moyal product introduces all-order effects of stringy correction ($\alpha^{\prime}$) through the non-commutativity parameter, proportional to the inverse of Neveu-Schwarz–Neveu-Schwarz (NS-NS) two-form field background.
Hence the non-commutative description is more convenient.
\\
\noindent
The non-commutative geometry is also applicable in M-branes.
The entropy of $N$ coincident D-branes scales as $N^2$, but that of $N$ M5-branes (M2-M5) scales as $N^3$.
Therefore, the gauge formulation of M-branes should be novel \cite{Chu:2011fd,Chu:2012um}.
The stack of M2-branes provides the Nambu-Poisson (NP) M5-brane theory, a non-commutative description of the M5-brane in a large $C$-field (three-form) background \cite{Ho:2008nn, Ho:2008ei}.
The new gauge symmetry, {\it volume-preserving diffeomorphism} (VPD), appears in NP M5-brane theory \cite{Ho:2008nn, Ho:2008ei}.
The Nambu-Poisson bracket (or three-bracket) generates VPD.
The double dimensional reduction in the direction of $C$-field background (three-form field background) provides a D4-brane in a large NS-NS $B$-field background \cite{Ho:2008ve}.
The non-commutative description of a D4-brane (D2-D4 brane) in a large Ramond-Ramond (R-R) $C$-field background appears through the double dimensional reduction not along the direction of the $C$-field background \cite{Ho:2011yr}.
One can also use the following conditions to show the unique bosonic sector of D$p$-brane (D($p-2$)-D$p$ brane) in a large R-R $(p-1)$-form field background \cite{Ho:2013paa}:
\begin{itemize}
\item{Partial Lorentz Symmetry: SO(1, 1)$\times$SO($p-1$)$\times$SO($9-p$);
}
\item{Gauge Symmetry: U(1) and VPD;}
\item{Field Content: one-form gauge field, non-dynamical $(p-2)$-form gauge potential, and transversal scalar fields;}
\item{Duality: D$p$-brane is relevant to D($p\pm 1$)-brane via Target-Space duality (T-duality);}
\item{Leading Order in Large R-R Field Background: the low-energy theory agrees with the trivial background but with a different metric.
}
\end{itemize}
The scaling limit for the R-R D$p$-brane gives a good approximation \cite{Ho:2013paa}:
\begin{eqnarray}
l_s\sim\epsilon^{1/2}; \
g_s\sim\epsilon^{-1/2}; \
C_{\dot{\mu_1}\dot{\mu_2}\cdots\dot{\mu}_{p-1}}\sim\epsilon^0; \
g_{\alpha\beta}\sim\epsilon^0; \
g_{\dot{\mu}\dot{\nu}}\sim\epsilon; \
\epsilon\rightarrow 0,
\end{eqnarray}
where $l_s\equiv(\alpha^{\prime})^{1/2}$ is the string length, $g_s$ is the string coupling constant, and $g_{\alpha\beta}$ and $g_{\dot{\mu}\dot{\nu}}$ are the spacetime metric.
The $C_{\dot{\mu}_1\dot{\mu}_2\cdots\dot{\mu}_{p-1}}$ is a large constant R-R ($p-1$)-form field background.
Therefore, the field background only has one non-trivial component.
The $\alpha=0, 1$ denotes directions of spacetime are not parallel to the field background.
We denote the other directions by $\dot{\mu}=2, 3, \cdots, p$.
The {\it $(p-1)$-bracket} generates VPD and the large $(p-1)$-form field background \cite{Ho:2013paa}.
Therefore, this study shows that the $(p-1)$-bracket strongly constrains the R-R D-branes.
\\
\noindent
In this paper, we generalize the ($p-1$)-bracket and apply it to the D$p$-branes.
The $(p-1)$-bracket in a single D-brane is
\begin{eqnarray}
\{f_1, f_2, \cdots, f_{p-1}\}_{(p-1)}\equiv \epsilon^{\dot{\mu}_1\dot{\mu}_2\cdots\dot{\mu}_{p-1}}(\partial_{\dot{\mu}_1}f_1)(\partial_{\dot{\mu}_2}f_2)\cdots(\partial_{\dot{\mu}_{p-1}}f_{p-1}).
\end{eqnarray}
This bracket satisfies the generalized Jacobi identity
\begin{eqnarray}
&&
\{f_1, f_2, \cdots, f_{p-2}, \{g_1, g_2, \cdots, g_{p-1}\}_{(p-1)}\}_{(p-1)}
\nonumber\\
&=&\{\{ f_1, f_2, \cdots, f_{p-2}, g_1\}_{(p-1)}, \cdots, g_{p-1}\}_{(p-1)}
\nonumber\\
&&
+\{g_1, \{f_1, f_2, \cdots, f_{p-2}, g_2\}_{(p-1)}, \cdots, g_{p-1}\}_{(p-1)}
+\cdots
\nonumber\\
&&
+\{ g_1, g_2, \cdots, g_{p-2}, \{f_1, f_2, \cdots, f_{p-2}, g_{p-1}\}_{(p-1)}\}_{(p-1)}.
\end{eqnarray}
This identity implies that the ($p-1$)-bracket of ($p-1$) VPD covariant fields $F_j$,
where
\begin{eqnarray}
&&
\hat{\delta}_{\hat{\Lambda}} F_j=\{f_1, f_2, \cdots, f_{p-2}, F_j\}_{(p-1)}=\hat{\kappa}^{\dot{\mu}}\partial_{\dot{\mu}}F_j;
\nonumber\\
&&
\hat{\kappa}^{\dot{\mu}}=\epsilon^{\dot{\mu}_1\dot{\mu}_2\cdots\dot{\mu}_{p-2}\dot{\mu}}
(\partial_{\dot{\mu}_1}f_1)(\partial_{\dot{\mu}_2}f_2)\cdots (\partial_{\dot{\mu}_{p-2}}f_{p-2}),
\end{eqnarray}
is also VPD covariant
\begin{eqnarray}
\hat{\delta}_{\hat{\Lambda}}\{F_1, F_2, \cdots, F_{p-1}\}_{(p-1)}
=\hat{\kappa}^{\dot{\mu}}\partial_{\dot{\mu}}\{F_1, F_2, \cdots, F_{p-1}\}_{(p-1)}.
\end{eqnarray}
The $\hat{\kappa}^{\dot{\mu}}$ generates the VPD, and it is divergenceless
\begin{eqnarray}
\partial_{\dot{\mu}}\hat{\kappa}^{\dot{\mu}}=0.
\end{eqnarray}
Hence the bracket shows the manifest VPD symmetry structure.
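\noindent As a quick consistency check, for $p=4$ the bracket reduces to the Nambu 3-bracket, and the generalized Jacobi identity can be verified symbolically. The following Python/SymPy sketch (our own illustration; the polynomial test functions are chosen arbitrarily) confirms the identity on a sample:
\begin{verbatim}
import sympy as sp

y1, y2, y3 = sp.symbols('y1 y2 y3')
coords = (y1, y2, y3)

def nambu(f1, f2, f3):
    # 3-bracket = Jacobian determinant of (f1, f2, f3) w.r.t. (y1, y2, y3)
    return sp.Matrix([[sp.diff(f, v) for v in coords]
                      for f in (f1, f2, f3)]).det()

f1, f2 = y1**2 * y3, y1 * y3 + y2**2
g1, g2, g3 = y1 * y2, y3**3 + y2, y1 * y3**2

lhs = nambu(f1, f2, nambu(g1, g2, g3))
rhs = (nambu(nambu(f1, f2, g1), g2, g3)
       + nambu(g1, nambu(f1, f2, g2), g3)
       + nambu(g1, g2, nambu(f1, f2, g3)))
print(sp.expand(lhs - rhs))   # prints 0 for these test functions
\end{verbatim}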
\\
\noindent
Ref. \cite{Ho:2011yr} first proposed the non-commutative multiple D$p$-branes in a large R-R ($p-1$)-form field background.
The field contents are the one-form U($N$) gauge field, $\hat{a}$, and non-dynamical ($p-2$)-form U(1) gauge potential, $\hat{b}$ \cite{Ho:2011yr}.
After integrating the non-dynamical gauge potential, one can obtain the Yang-Mills gauge theory \cite{Ho:2011yr}.
The construction is non-trivial because the field strengths:
\begin{eqnarray}
\hat{{\cal H}}^{\dot{\mu}_1\dot{\mu_2}\cdots\dot{\mu}_{p-1}}&\equiv& g^{p-2}
\{\hat{X}^{\dot{\mu}_1}, \hat{X}^{\dot{\mu}_2}, \cdots, \hat{X}^{\dot{\mu}_{p-1}}\}_{(p-1)}-\frac{1}{g}\epsilon^{\dot{\mu}_1\dot{\mu}_2\cdots\dot{\mu}_{p-1}};
\nonumber\\
\hat{{\cal F}}_{\dot{\mu}\dot{\nu}}&\equiv& \hat{F}_{\dot{\mu}\dot{\nu}}
+g(\partial_{\dot{\sigma}}\hat{b}^{\dot{\sigma}}\hat{F}_{\dot{\mu}\dot{\nu}}
-\partial_{\dot{\mu}}\hat{b}^{\dot{\sigma}}\hat{F}_{\dot{\sigma}\dot{\nu}}
-\partial_{\dot{\nu}}\hat{b}^{\dot{\sigma}}\hat{F}_{\dot{\mu}\dot{\sigma}});
\nonumber\\
\hat{{\cal F}}_{\alpha\dot{\mu}}&\equiv& (\hat{V}^{-1})_{\dot{\mu}}{}^{\dot{\nu}}(\hat{F}_{\alpha\dot{\nu}}+g\hat{F}_{\dot{\nu}\dot{\delta}}\hat{B}_{\alpha}{}^{\dot{\delta}});
\nonumber\\
\hat{{\cal F}}_{\alpha\beta}&\equiv&\hat{F}_{\alpha\beta}+
g(-\hat{F}_{\alpha\dot{\mu}}\hat{B}_{\beta}{}^{\dot{\mu}}-\hat{F}_{\dot{\mu}\beta}\hat{B}_{\alpha}{}^{\dot{\mu}})+
g^2\hat{F}_{\dot{\mu}\dot{\nu}}\hat{B}_{\alpha}{}^{\dot{\mu}}\hat{B}_{\beta}{}^{\dot{\nu}},
\end{eqnarray}
where
\begin{eqnarray}
\hat{V}_{\dot{\nu}}{}^{\dot{\mu}}\equiv\delta_{\dot{\nu}}{}^{\dot{\mu}}+g\partial_{\dot{\nu}}\hat{b}^{\dot{\mu}}; \qquad
\hat{X}^{\dot{\mu}}\equiv\frac{y^{\dot{\mu}}}{g}+\hat{b}^{\dot{\mu}},
\end{eqnarray}
are covariant under the closed gauge transformation:
\begin{eqnarray}
\hat{\delta}_{\hat{\Lambda}}\hat{b}^{\dot{\mu}}&=&\hat{\kappa}^{\dot{\mu}}
+g\hat{\kappa}^{\dot{\nu}}\partial_{\dot{\nu}}\hat{b}^{\dot{\mu}};
\nonumber\\
\hat{\delta}_{\hat{\Lambda}}\hat{a}_{\dot{\mu}}&=&\partial_{\dot{\mu}}\hat{\lambda}+
i\lbrack\hat{\lambda}, \hat{a}_{\dot{\mu}}\rbrack
+g(\hat{\kappa}^{\dot{\nu}}\partial_{\dot{\nu}}\hat{a}_{\dot{\mu}}
+\hat{a}_{\dot{\nu}}\partial_{\dot{\mu}}\hat{\kappa}^{\dot{\nu}});
\nonumber\\
\hat{\delta}_{\hat{\Lambda}}\hat{a}_{\alpha}&=&\partial_{\alpha}\hat{\lambda}+
i\lbrack\hat{\lambda}, \hat{a}_{\alpha}\rbrack
+g(\hat{\kappa}^{\dot{\nu}}\partial_{\dot{\nu}}\hat{a}_{\alpha}
+\hat{a}_{\dot{\nu}}\partial_{\alpha}\hat{\kappa}^{\dot{\nu}}),
\end{eqnarray}
where the commutator is
\begin{eqnarray}
\lbrack{\cal O}_1, {\cal O}_2\rbrack\equiv {\cal O}_1^c{\cal O}_2^d(T^cT^d-T^dT^c).
\end{eqnarray}
The $T^c$ is the generator of a Lie algebra.
We denote the indices of a Lie algebra by $c, d$.
The ${\cal F}_{\dot{\mu}\dot{\nu}}$ needs to have the modification when $p>4$ due to the T-duality \cite{Ho:2013paa}.
This field strength in the single D-brane is \cite{Ho:2013paa}
\begin{eqnarray}
\hat{{\cal F}}_{\dot{\mu}\dot{\nu}}=\frac{g^{p-3}}{(p-3)!}\epsilon_{\dot{\mu}\dot{\nu}\dot{\mu}_1\cdots\dot{\mu}_{p-3}}
\{\hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{p-3}}, \hat{a}_{\dot{\rho}}, \hat{y}^{\dot{\rho}}\}.
\end{eqnarray}
The $x^{\alpha}$ and $y^{\dot{\mu}}$ are the worldvolume coordinates.
It is convenient when transforming the $(p-2)$-form field, $\hat{b}_{\dot{\mu}_2\dot{\mu}_3\cdots\dot{\mu}_{p-1}}$ to the one-form field
\begin{eqnarray}
\hat{b}^{\dot{\mu}_1}\equiv\frac{1}{(p-2)!}\epsilon^{\dot{\mu}_1\dot{\mu}_2\cdots\dot{\mu}_{p-1}}\hat{b}_{\dot{\mu}_2\dot{\mu}_3\cdots\dot{\mu}_{p-1}}.
\end{eqnarray}
For example, the field $\hat{X}^{\dot{\mu}}$ is VPD covariant.
The coupling constant $g$ is the inverse of the $(p-1)$-form field background
\begin{eqnarray}
g\equiv\frac{1}{C_{23\cdots p}}.
\end{eqnarray}
We raise or lower the spacetime indices by the flat metric, $\eta_{AB}\equiv\mathrm{diag}(-, +, +, \cdots, +)$,
where $A\equiv(\alpha, \dot{\mu})$.
The $\hat{b}^{\dot{\mu}}$, $\hat{B}_{\alpha}{}^{\dot{\mu}}$, and $\hat{\kappa}^{\dot{\mu}}$ take the value of U(1), and
\begin{eqnarray}
\hat{F}_{AB}\equiv\partial_A\hat{a}_B-\partial_B\hat{a}_A-i\lbrack \hat{a}_A, \hat{a}_B\rbrack
\end{eqnarray}
is the ordinary covariant field strength (for the trivial background or $g=0$).
The $\hat{B}_{\alpha}{}^{\dot{\mu}}$ satisfies the following equation
\begin{eqnarray}
\hat{V}_{\dot{\mu}}{}^{\dot{\nu}}(\partial^{\alpha}\hat{b}_{\dot{\nu}}-\hat{V}^{\dot{\rho}}{}_{\dot{\nu}}\hat{B}^{\alpha}{}_{\dot{\rho}})
+\epsilon^{\alpha\beta}\hat{F}^{\mathrm{U}(1)}_{\beta\dot{\mu}}
+g\epsilon^{\alpha\beta}\hat{F}^{\mathrm{U}(1)}_{\dot{\mu}\dot{\nu}}\hat{B}_{\beta}{}^{\dot{\nu}}=0,
\end{eqnarray}
where
\begin{eqnarray}
\hat{F}_{AB}^{\mathrm{U}(1)}\equiv\partial_A\hat{a}_B-\partial_B\hat{a}_A
\end{eqnarray}
is the Abelian strength.
\\
\noindent
The construction has an issue when compared to the commutative description \cite{Ma:2020msx}.
The issue arises because the ($p-2$)-form gauge potential is Abelian \cite{Ma:2020msx}.
We first generalize the ($p-1$)-bracket to reproduce the result of Ref. \cite{Ho:2011yr}.
We then show the Lagrangian description by non-Abelianization of $(p-2)$-form gauge potential at the leading order.
Our main results are in Fig. \ref{Generalization.pdf}.
\begin{figure}
\begin{center}
\includegraphics[width=1.\textwidth]{Generalization.pdf}
\end{center}
\caption{Our results show the generalization of bracket and $(p-2)$-form gauge potential.}
\label{Generalization.pdf}
\end{figure}
To summarize our results:
\begin{itemize}
\item{We generalize the ($p-1$)-bracket construction from a single D$p$-brane to the multiple D$p$-branes.
The generalization is to replace $\partial$ with the ordinary covariant derivative ($D$).
Therefore, the approach is straightforward.
The bracket generates VPD and strongly constrains the D-branes theory.
}
\item{We do the dimensional reduction for the bracket recursively beginning from $p=9$.
When $p\le 6$, we can write the compact expression for the interaction terms relevant to the brackets.
When the ($p-2$)-form gauge potential takes the U(1) value, it is consistent with the existing formulation.
The result of U($N$) ($p-2$)-form potential should provide a similar but more complicated interaction from the bracket and commutator.
}
\item{We keep the symmetry group U(1) for VPD gauge parameter, but $\hat{b}$ is non-Abelian.
Therefore, the R-R field background only lives in the U(1) sector.
The non-Abelianization is to replace $\partial$ with $D$.
We then write the Lagrangian for the non-Abelian ($p-2$)-form gauge potential at the leading order.
The Lagrangian description of multiple D-branes is similar to the single D-brane in this approach.
}
\end{itemize}
\noindent
The organization of this paper is as follows: We generalize the ($p-1$)-bracket in Sec.~\ref{sec:2}.
We then show the compact expression for interacting terms from this bracket when $p\le 6$ in Sec.~\ref{sec:3}.
The result of the non-Abelianization ($p-2$)-form gauge potential is in Sec.~\ref{sec:4}.
Finally, we discuss our results and conclude in Sec.~\ref{sec:5}.
\section{($p-1$)-Bracket}
\label{sec:2}
\noindent
The VPD covariant and U(1) invariant object is
\begin{eqnarray}
\hat{{\cal O}}_{nmml}\equiv\bigg\{\hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_n},
\hat{a}_{\dot{\nu}_1}, \cdots, \hat{a}_{\dot{\nu}_m},
\frac{y^{\dot{\nu}_1}}{g}, \cdots, \frac{y^{\dot{\nu}_m}}{g},
\hat{X}^{I_1}, \cdots, \hat{X}^{I_l}\bigg\}_{(p-1)},
\end{eqnarray}
where
\begin{eqnarray}
n, m, l\ge 0;\ n+2m+l=p-1,
\end{eqnarray}
in the R-R D$p$-brane \cite{Ho:2013paa}.
The D$p$-brane Lagrangian relevant to this ($p-1$)-bracket is \cite{Ho:2013paa}
\begin{eqnarray}
{\cal L}_0=-\frac{g^{2(p-2)}}{2}\sum_{n, m, l\in S}C^{p-1}_{nmml}({\cal O}_{nmml})^2,
\end{eqnarray}
where the indices ($n, m, l$) are in the set
\begin{eqnarray}
S\equiv\{(n, m, l)| n, m, l\ge 0;\ n+2m+l=p-1\},
\end{eqnarray}
and the coefficient is
\begin{eqnarray}
C^{p-1}_{nmml}\equiv\frac{1}{n!m!m!l!}.
\end{eqnarray}
The ${\cal L}_0$ is closed for the T-duality (or dimensional reduction) \cite{Ho:2013paa}.
Therefore, the symmetry structure from this $(p-1)$-bracket restricts the dynamics of the D-brane.
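\noindent To make the combinatorics of ${\cal L}_0$ explicit, the index set $S$ and the coefficients $C^{p-1}_{nmml}$ can be enumerated directly. A minimal Python sketch (purely illustrative; the function name is ours):
\begin{verbatim}
from math import factorial

def bracket_terms(p):
    """All (n, m, l) with n, m, l >= 0 and n + 2m + l = p - 1,
    together with C^{p-1}_{nmml} = 1/(n! m! m! l!)."""
    terms = []
    for m in range((p - 1) // 2 + 1):
        for n in range(p - 1 - 2 * m + 1):
            l = (p - 1) - n - 2 * m
            coeff = 1.0 / (factorial(n) * factorial(m)**2 * factorial(l))
            terms.append(((n, m, l), coeff))
    return terms

for term in bracket_terms(4):   # D4-brane: terms of the 3-bracket Lagrangian
    print(term)
\end{verbatim}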
\\
\noindent
Now we demonstrate the generalization of U($N$) symmetry group in D4-branes:
\begin{itemize}
\item{
Covariant Field Strength:
\begin{eqnarray}
\hat{{\cal F}}_{\dot{\mu}\dot{\nu}}=g^2\epsilon_{\dot{\mu}\dot{\nu}\dot{\rho}}
\bigg\{\hat{X}^{\dot{\rho}}, \hat{a}_{\dot{\sigma}}, \frac{y^{\dot{\sigma}}}{g}\bigg\}_{(3)};
\end{eqnarray}
}
\item{Covariant Derivative of Scalar Fields:
\begin{eqnarray}
\hat{D}_{\dot{\mu}}\hat{X}^I=\frac{g^2}{2}\epsilon_{\dot{\mu}\dot{\nu}\dot{\rho}}
\{\hat{X}^{\dot{\nu}}, \hat{X}^{\dot{\rho}}, \hat{X}^I\}_{(3)},
\end{eqnarray}
where the gauge transformation of scalar fields is
\begin{eqnarray}
\hat{\delta}_{\hat{\Lambda}}\hat{X}^I=i\lbrack\hat{\lambda}, \hat{X}^I\rbrack+g\hat{\kappa}^{\dot{\rho}}\partial_{\dot{\rho}}\hat{X}^I,
\end{eqnarray}
derived from the dimensional reduction on the $p$-th direction (or the following replacement):
\begin{eqnarray}
\hat{a}_p\rightarrow \hat{X}^{I=p}; \qquad \partial_p(\mathrm{field})\rightarrow 0.
\end{eqnarray}
}
\item{Commutator of Scalar Fields:
\begin{eqnarray}
\lbrack\hat{X}^I, \hat{X}^J\rbrack\Rightarrow\frac{g^3}{3!}\epsilon_{\dot{\mu}\dot{\nu}\dot{\rho}}
\{\hat{X}^{\dot{\mu}}, \hat{X}^{\dot{\nu}}, \hat{X}^{\dot{\rho}}\}_{(3)}
\lbrack\hat{X}^I, \hat{X}^J\rbrack,
\end{eqnarray}
where $\Rightarrow$ means the equivalence up to the leading order in $g$.
}
\end{itemize}
The NP-bracket introduces the necessary terms in multiple D4-branes.
We change the ordinary derivative to another derivative operator ${\cal D}$ in the bracket:
\begin{eqnarray}
{\cal D}_{\dot{\mu}}\hat{X}^I\equiv \partial_{\dot{\mu}}\hat{X}^I-i\lbrack\hat{a}_{\dot{\mu}}, \hat{X}^I\rbrack\equiv D_{\dot{\mu}}X^I;
\qquad
{\cal D}_{\dot{\mu}}\hat{a}_{\dot{\nu}}\equiv(\partial_{\dot{\mu}}-i\hat{a}_{\dot{\mu}})\hat{a}_{\dot{\nu}},
\end{eqnarray}
where $D_{\dot{\mu}}$ is the ordinary covariant derivative.
For the U(1) field, the ${\cal D}_{\dot{\mu}}$ is the same as the ordinary derivative $\partial_{\dot{\mu}}$.
The combination of $a_{\dot{\mu}}$ and $y^{\dot{\mu}}$ shows the ordinary covariant field strength
\begin{eqnarray}
({\cal D}_{\lbrack\dot{\mu}}\hat{a}_{\dot{\rho}})(\partial_{\dot{\nu}\rbrack}y^{\dot{\rho}})=\hat{F}_{\dot{\mu}\dot{\nu}},
\end{eqnarray}
in which we use the notation
\begin{eqnarray}
A_{\lbrack\dot{\mu}}B_{\dot{\nu}\rbrack}\equiv A_{\dot{\mu}}B_{\dot{\nu}}-A_{\dot{\nu}}B_{\dot{\mu}}.
\end{eqnarray}
Now the symmetry group of $\hat{b}^{\dot{\mu}}$ is still U(1) for reproducing the result of Ref. \cite{Ho:2011yr}.
We expect that the description of D-branes is not complete.
Performing the dimensional reduction can show the action.
The calculation is simpler than using U($N$) $\hat{b}^{\dot{\mu}}$.
However, it is enough to know how the T-duality induces the new structure of bracket and commutator in the Lagrangian.
The non-Abelian generalization of $\hat{b}^{\dot{\mu}}$ should provide similar but more complicated interacting terms to the Lagrangian.
\\
\noindent
We can show that ${\cal O}_{nmml}$ is the VPD- and U($N$)-covariant object when promoting the symmetry group of $\hat{a}_{\dot{\mu}}$ from U(1) to U($N$).
The $\hat{X}^{\dot{\mu}}$ and $\hat{X}^I$ are VPD covariant.
Because $y^{\dot{\mu}}$ is not a dynamical field, the VPD transformation cannot act on it.
The $\hat{a}_{\dot{\mu}}$ is not VPD covariant.
We need to check the combination of $\hat{a}_{\dot{\mu}}$ and $y^{\dot{\mu}}$,
\begin{eqnarray}
\hat{\delta}_{\hat{\Lambda}}\big(({\cal D}_{\lbrack\dot{\mu}}\hat{a}_{\dot{\rho}})(\partial_{\dot{\nu}\rbrack}y^{\dot{\rho}})\big)
=-i\lbrack \hat{F}_{\dot{\mu}\dot{\nu}}, \hat{\lambda}\rbrack
+g\hat{\kappa}^{\dot{\rho}}\partial_{\dot{\rho}}\hat{F}_{\dot{\mu}\dot{\nu}}
+g\big(({\cal D}_{\dot{\mu}}\hat{a}_{\dot{\rho}})(\partial_{\dot{\nu}}\hat{\kappa}^{\dot{\rho}})
-({\cal D}_{\dot{\nu}}\hat{a}_{\dot{\rho}})(\partial_{\dot{\mu}}\hat{\kappa}^{\dot{\rho}})\big).
\end{eqnarray}
The first term on the right-hand side is U($N$) covariant.
The second term is VPD covariant.
The third term is not VPD covariant; however, the transformation of the VPD-covariant $y^{\dot{\mu}}$,
\begin{eqnarray}
\hat{\delta}_{\hat{\Lambda}}y^{\dot{\mu}}=g\hat{\kappa}^{\dot{\nu}}\partial_{\dot{\nu}}y^{\dot{\mu}}=g\hat{\kappa}^{\dot{\mu}}
\end{eqnarray}
can generate this term.
The gauge transformation of the pair $(\hat{\delta}_{\hat{\Lambda}}\hat{a}_{\dot{\mu}}, y^{\dot{\mu}})$ is equivalent to treating the pair as a VPD covariant object.
A similar result already happens in R-R D-brane \cite{Ho:2013paa}.
The non-VPD covariant part is due to the ordinary derivative $\partial_{\dot{\mu}}$.
Therefore, we can apply the generalized Jacobi identity to show the VPD covariance.
The $\hat{F}_{\dot{\mu}\dot{\nu}}$ and $D_{\dot{\mu}}$ are U($N$) covariant.
Hence the $(p-1)$-bracket is VPD- and U($N$)-covariant.
The non-Abelian generalization of this bracket is not closed under the T-duality.
It is also necessary to introduce the commutator.
Later we will recursively use the dimensional reduction to show the interaction between the bracket and commutator.
\section{T-Duality}
\label{sec:3}
\noindent
We expect that recursively applying the dimensional reduction will introduce infinitely many interaction terms.
Therefore, it is hard to write an analytical expression for the action.
We begin with D9-branes or the pure gauge theory.
The field contents relevant to the 8-form bracket are $\hat{a}_{\dot{\mu}}$ and $\hat{b}^{\dot{\mu}}$.
The Lagrangian relevant to the 8-form bracket is
\begin{eqnarray}
&&
{\cal L}_1^{(p=9)}
\nonumber\\
&=&
-\frac{g^{2(p-2)}}{2}\sum_{n, m=0;\ n+2m=p-1}C^{p-1}_{nmm0}
\nonumber\\
&&\times
\mathrm{Str}\bigg\lbrack\bigg(\bigg\{\hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_n},
\hat{a}_{\dot{\nu}_1}, \cdots, \hat{a}_{\dot{\nu}_m},
\frac{y^{\dot{\nu}_1}}{g}, \cdots, \frac{y^{\dot{\nu}_m}}{g}\bigg\}_{(p-1)}\bigg)^2\bigg\rbrack,
\nonumber\\
\end{eqnarray}
where
\begin{eqnarray}
\mathrm{Str}({\cal O}_1{\cal O}_2\cdots{\cal O}_n)&\equiv&
\mathrm{Tr}\big(\mathrm{Sym}({\cal O}_1{\cal O}_2\cdots{\cal O}_n)\big);
\nonumber\\
\mathrm{Sym}({\cal O}_1{\cal O}_2\cdots{\cal O}_n)&\equiv&\frac{1}{n!}({\cal O}_1{\cal O}_2\cdots {\cal O}_n+ \mathrm{all\ permutations}).
\end{eqnarray}
When performing the dimensional reduction in the $p$-th direction, we fix the following gauge condition
\begin{eqnarray}
\hat{b}^{\dot{\mu}=p}=0
\end{eqnarray}
for simplifying the expression of the action.
When $p\le 8$, the ${\cal L}_1^{(p)}$ is
\begin{eqnarray}
&&
{\cal L}_1^{(p)}
\nonumber\\
&=&
-\frac{g^{2(p-2)}}{2}\sum_{n, m, l=0;\ n+2m+l=p-1;\ l\le 9-p}C^{p-1}_{nmml}
\nonumber\\
&&\times
\mathrm{Str}\bigg\lbrack\bigg(\{ \hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{n}},
\hat{a}_{\dot{\nu}_1}, \cdots, \hat{a}_{\dot{\nu}_m},
\frac{y^{\dot{\nu}_1}}{g}, \cdots, \frac{y^{\dot{\nu}_m}}{g},
\hat{X}^{I_l}
\bigg\}_{(p-1)}\bigg)^2\bigg\rbrack
\end{eqnarray}
We obtain the result of $p=8$ by performing the dimensional reduction in the 9th direction.
The dimensional reduction identifies $\hat{a}_9$ as $\hat{X}^{I=9}$.
Therefore, the multiple D8-branes have one more scalar field than $p=9$.
When $p=8$, we only have one scalar field.
Hence the Lagrangian does not have a commutator.
Later we will show the interaction terms for $p=6, 7$.
We also use the compact expression to show the field strength, covariant derivative and commutator of scalar fields, and quadratic term of $\hat{{\cal H}}$.
Our computation shows the result of Ref. \cite{Ho:2011yr}.
\subsection{$p=7$}
\noindent
The dimensional reduction in the 7th direction introduces one additional scalar field to D7-branes.
Two transversal scalar fields can form a bracket.
Therefore, more terms appear in the Lagrangian when concerning $p=7$.
When $p\le 7$, we separate the Lagrangian into three terms.
The first term is
\begin{eqnarray}
&&
{\cal L}_1^{(p)}
\nonumber\\
&=&
-\frac{g^{2(p-2)}}{2}\sum_{n, m, l=0;\ n+2m+l=p-1;\ l\le 9-p}C^{p-1}_{nmml}
\nonumber\\
&&\times
\mathrm{Str}\bigg\lbrack
\bigg(\bigg\{ \hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{n}},
\hat{a}_{\dot{\nu}_1}, \cdots, \hat{a}_{\dot{\nu}_m},
\frac{y^{\dot{\nu}_1}}{g}, \cdots, \frac{y^{\dot{\nu}_m}}{g},
\hat{X}^{I_1}, \cdots, \hat{X}^{I_l}
\bigg\}_{(p-1)}\bigg)^2 \bigg\rbrack.
\nonumber\\
\end{eqnarray}
The second term is
\begin{eqnarray}
&&
{\cal L}_2^{(p)}
\nonumber\\
&=&
-\frac{g^{2(p-2)+1}}{2}\sum_{n, m=0;\ n+2m=p-1}(-1)^{m}C^{p-1}_{n(m-1)m0}
\nonumber\\
&&\times
\mathrm{Str}\bigg\lbrack
\bigg\{ \hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{n}},
\hat{a}_{\dot{\nu}_1}, \cdots, \hat{a}_{\dot{\nu}_{m-1}},
\frac{y^{\dot{\nu}_1}}{g}, \cdots, \frac{y^{\dot{\nu}_{m-1}}}{g},
\hat{X}^{I_1}, \hat{X}^{I_{2}}
\bigg\}_{(p-1)}
\nonumber\\
&&\times
\bigg\{ \hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{n}},
\hat{a}_{\dot{\bar{\nu}}_1}, \cdots, \hat{a}_{\dot{\bar{\nu}}_{m}},
\frac{y^{\dot{\bar{\nu}}_1}}{g}, \cdots, \frac{y^{\dot{\bar{\nu}}_{m}}}{g}
\bigg\}_{(p-1)}
(-i)\lbrack\hat{X}^{I_1}, \hat{X}^{I_2}\rbrack\bigg\rbrack.
\end{eqnarray}
The last term is
\begin{eqnarray}
&&
{\cal L}_3^{(p)}
\nonumber\\
&=&
\frac{g^{2(p-1)}}{4}\sum_{n, m=0;\ n+2m=p-1 }C^{p-1}_{nmm0}
\nonumber\\
&&\times
\mathrm{Str}\bigg\lbrack
\bigg(\bigg\{ \hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{n}},
\hat{a}_{\dot{\nu}_1}, \cdots, \hat{a}_{\dot{\nu}_m},
\frac{y^{\dot{\nu}_1}}{g}, \cdots, \frac{y^{\dot{\nu}_m}}{g}
\bigg\}_{(p-1)}
\lbrack\hat{X}^{I_1}, \hat{X}^{I_2}\rbrack
\bigg)^2\bigg\rbrack.
\end{eqnarray}
The first term is the straightforward generalization without the commutator.
The bracket and commutator simultaneously appear in the second and third terms.
The $l$ in $C^{p-1}_{nmml}$ is the lowest number of transversal scalar fields in the brackets.
Because the third term also has the square of $(p-1)$-bracket, the dimensional reduction result is similar to the first term (but introduces the additional commutators of scalar fields).
The second term is the most complicated for performing dimensional reduction.
When going to the lower dimensions, a more complicated contraction of indices will appear.
We will show the result for D6-branes.
Then it is easy to understand the expectation.
\subsection{$p=6$}
\noindent
We can generalize ${\cal L}_2^{(p)}$ and ${\cal L}_3^{(p)}$ to include $p=6$:
\begin{eqnarray}
&&
{\cal L}_2^{(p)}
\nonumber\\
&=&
-\frac{g^{2(p-2)+1}}{2}\sum_{n, l=0;\ m=1;\ n+2m+l=p-1;\ l\le 7-p}(-1)^{m}C^{p-1}_{n(m-1)ml}
\nonumber\\
&&\times
\mathrm{Str}\bigg\lbrack
\bigg\{ \hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{n}},
\hat{a}_{\dot{\nu}_1}, \cdots, \hat{a}_{\dot{\nu}_{m-1}},
\frac{y^{\dot{\nu}_1}}{g}, \cdots, \frac{y^{\dot{\nu}_{m-1}}}{g},
\hat{X}^{I_1}, \cdots, \hat{X}^{I_{l+2}}
\bigg\}_{(p-1)}
\nonumber\\
&&\times
\bigg\{ \hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{n}},
\hat{a}_{\dot{\bar{\nu}}_1}, \cdots, \hat{a}_{\dot{\bar{\nu}}_{m}},
\frac{y^{\dot{\bar{\nu}}_1}}{g}, \cdots, \frac{y^{\dot{\bar{\nu}}_{m}}}{g},
\hat{X}^{I_1}, \cdots, \hat{X}^{I_{l}}
\bigg\}_{(p-1)}
\nonumber\\
&&\times
(-i)\lbrack\hat{X}^{I_{l+1}}, \hat{X}^{I_{l+2}}\rbrack\bigg\rbrack;
\end{eqnarray}
\begin{eqnarray}
&&
{\cal L}_3^{(p)}
\nonumber\\
&=&
\frac{g^{2(p-1)}}{4}\sum_{n, m, l=0;\ n+2m+l=p-1;\ l\le 7-p}C^{p-1}_{nmml}
\nonumber\\
&&\times
\mathrm{Str}\bigg\lbrack
\bigg(\bigg\{ \hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{n}},
\hat{a}_{\dot{\nu}_1}, \cdots, \hat{a}_{\dot{\nu}_m},
\frac{y^{\dot{\nu}_1}}{g}, \cdots, \frac{y^{\dot{\nu}_m}}{g},
\hat{X}^{\lbrack I_1}, \cdots, \hat{X}^{I_l}
\bigg\}_{(p-1)}
\nonumber\\
&&\times
\lbrack\hat{X}^{I_{l+1}}, \hat{X}^{I_{l+2}\rbrack}\rbrack
\bigg)^2\bigg\rbrack.
\end{eqnarray}
The ${\cal L}_1^{(p)}$ remains the same expression.
The indices of the transversal scalar fields in the commutator and in the bracket are all distinct.
Therefore, we introduce antisymmetrized indices for the transversal scalar fields in the bracket and commutator in ${\cal L}_3^{(p)}$.
In ${\cal L}_1^{(p)}$ and ${\cal L}_2^{(p)}$, the indices of the transversal scalar fields are all distinct even without the antisymmetrized notation.
If we proceed with more dimensional reduction, it is easy to find that ${\cal L}_2^{(p)}$ will generate a more complicated contraction between the bracket and the commutator.
However, the computation is enough to reproduce the result of Ref. \cite{Ho:2011yr}.
\\
\noindent
When we choose $n=p-3$, $m=1$, and $l=0$ in ${\cal L}_1^{(p)}$, we obtain the quadratic term of $\hat{{\cal F}}_{\dot{\mu}\dot{\nu}}$
\begin{eqnarray}
-\frac{g^{2(p-2)}}{2(p-3)!}
\mathrm{Str}\bigg\lbrack
\bigg(\bigg\{ \hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{p-3}},
\hat{a}_{\dot{\nu}}, \frac{y^{\dot{\nu}}}{g}\bigg\}_{(p-1)}\bigg)^2\bigg\rbrack
=-\frac{1}{4}\mathrm{Str}\bigg(\hat{{\cal F}}_{\dot{\mu}\dot{\nu}}\hat{{\cal F}}^{\dot{\mu}\dot{\nu}}\bigg),
\end{eqnarray}
where
\begin{eqnarray}
\hat{{\cal F}}_{\dot{\mu}\dot{\nu}}=\frac{g^{p-3}}{(p-3)!}\epsilon_{\dot{\mu}\dot{\nu}\dot{\mu}_1\cdots\dot{\mu}_{p-3}}
\{\hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{p-3}}, \hat{a}_{\dot{\rho}}, \hat{y}^{\dot{\rho}}\}.
\end{eqnarray}
When $p\le 4$, the result is the same as in Ref. \cite{Ho:2011yr}.
The $\hat{{\cal F}}_{\dot{\mu}\dot{\nu}}$ shows a straightforward generalization of a single D-brane (only replaces the U(1) gauge group by U($N$)) when $p>4$.
The quadratic term of $\hat{{\cal H}}$ also appears in ${\cal L}_1^{(p)}$ when choosing $n=p-1$ and $m=l=0$
\begin{eqnarray}
-\frac{g^{2(p-2)}}{2(p-1)!}
\bigg(\{ \hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{p-1}}
\}_{(p-1)}\bigg)^2
=-\frac{1}{2(p-1)!}
\bigg(\hat{{\cal H}}_{\dot{\mu_1}\cdots\dot{\mu}_{p-1}}
+\frac{1}{g}\epsilon_{\dot{\mu}_1\cdots\dot{\mu}_{p-1}}\bigg)^2.
\end{eqnarray}
Because the symmetry group of gauge potential $\hat{b}$ is U(1), the result is the same as in R-R D-brane \cite{Ho:2011yr}.
Therefore, the result is consistent with Ref. \cite{Ho:2011yr}.
Now we discuss the transversal scalar fields.
We first choose $n=p-2$, $m=0$, and $l=1$ in ${\cal L}_1^{(p)}$ to obtain the kinetic term of the scalar fields $\hat{X}^I$,
\begin{eqnarray}
-\frac{g^{2(p-2)}}{2(p-2)!}
\mathrm{Str}\bigg\lbrack
\bigg(\{ \hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{p-2}},
\hat{X}^{I}
\}_{(p-1)}\bigg)^2
\bigg\rbrack
=-\frac{1}{2}
\mathrm{Str}\bigg\lbrack
(\hat{D}_{\dot{\mu}}\hat{X}^I)^2\bigg\rbrack,
\end{eqnarray}
where
\begin{eqnarray}
\hat{D}_{\dot{\mu}}\hat{X}^I
=\frac{(-1)^p}{(p-2)!}g^{p-2}\epsilon_{\dot{\mu}_1\cdots\dot{\mu}_{p-1}}
\{\hat{X}^{\dot{\mu}_2}, \cdots, \hat{X}^{\dot{\mu}_{p-1}}, \hat{X}^I\}.
\end{eqnarray}
The commutator of scalar fields also appears in the leading-order term of ${\cal L}_3^{(p)}$ when choosing $n=p-1$ and $m=l=0$ in ${\cal L}_3^{(p)}$,
\begin{eqnarray}
\frac{g^{2(p-1)}}{4(p-1)!}
\mathrm{Str}\bigg\lbrack
\bigg(\{ \hat{X}^{\dot{\mu}_1}, \cdots, \hat{X}^{\dot{\mu}_{p-1}}
\}_{(p-1)}
\lbrack\hat{X}^{I_{1}}, \hat{X}^{I_{2}}\rbrack
\bigg)^2
\bigg\rbrack
=
\frac{1}{4}
\mathrm{Str}\big\lbrack
(\lbrack\hat{X}^I, \hat{X}^J\rbrack)^2\big\rbrack+\cdots.
\end{eqnarray}
Hence our result also contains the expected terms of transversal scalar fields for the general $p$.
\\
\noindent
The above calculation keeps the gauge group of $\hat{b}$ as U(1).
The non-Abelianization needs to concern the consistency from all components of $\hat{{\cal F}}$.
So far, there has been no such construction for the R-R D-branes.
Later, we will replace $\partial$ with $D$.
We then can show the consistent result from the gauge symmetry.
The gauge theory also has the same form as in the R-R D-brane.
Hence we expect that the dimensional reduction should show a similar result (but needs to introduce more commutator terms relevant to $\hat{X}^{\dot{\mu}}$ and $\hat{X}^I$).
\section{Discussion of Non-Abelianization}
\label{sec:4}
\noindent
The main difficulty for the non-Abelianization is $\hat{B}_{\alpha}{}^{\dot{\mu}}$, which satisfies a non-linear equation in the R-R D-brane theory.
This non-linearity is the source of the complexity of multiple branes with VPD symmetry.
The study of the ($p-1$)-bracket provides clues.
The naive generalization (replaces the ordinary derivative with the ordinary covariant derivative) allows the VPD symmetry.
Our study or the generalization of ($p-1$)-bracket is only for the non-Abelianization of $\hat{a}$.
Indeed, we can also have a similar generalization for the $\hat{b}$ if the gauge group of the VPD parameter is still U(1).
Therefore, the R-R field background is only dominant in the U(1) sector.
The gauge transformation of $\hat{b}$ is
\begin{eqnarray}
\hat{\delta}_{\hat{\Lambda}}\hat{b}^{\dot{\mu}}
=\hat{\kappa}^{\dot{\mu}}
+i\lbrack\hat{\lambda}, \hat{b}^{\dot{\mu}}\rbrack
+g\hat{\kappa}^{\dot{\nu}}\partial_{\dot{\nu}}\hat{b}^{\dot{\mu}}.
\end{eqnarray}
The $\hat{X}^{\dot{\mu}}$ is still a VPD- and U($N$)-covariant object.
Therefore, we can also use the ($p-1$)-bracket to define the covariant field strength
\begin{eqnarray}
\hat{{\cal H}}^{\dot{\mu}_1\dot{\mu_2}\cdots\dot{\mu}_{p-1}}&\equiv& g^{p-2}
\{\hat{X}^{\dot{\mu}_1}, \hat{X}^{\dot{\mu}_2}, \cdots, \hat{X}^{\dot{\mu}_{p-1}}\}_{(p-1)}-\frac{1}{g}\epsilon^{\dot{\mu}_1\dot{\mu}_2\cdots\dot{\mu}_{p-1}}
\end{eqnarray}
but now the derivative operator ${\cal D}$ is the ordinary covariant derivative.
In the R-R D4-brane, the gauge transformation of $\hat{b}$ comes from the NP M5-brane \cite{Ho:2011yr}.
Therefore, we can use the R-R D4-branes to explore the gauge structure of the M5-brane.
Our gauge transformation of $\hat{b}$ is consistent with one construction of multiple M5-branes \cite{Chu:2012um}.
Our realization of VPD symmetry in R-R D-branes is not just a naive guess and has supporting evidence from M2-M5 branes.
\\
\noindent
We propose the Lagrangian description of R-R D-branes similar to the single D-brane.
The Lagrangian for the gauge sector of R-R D$p$-branes theory is
\begin{eqnarray}
{\cal L}_{RR}=
-\frac{1}{2}\hat{{\cal H}}_{\dot{\mu}_1\cdots\dot{\mu}_{p-1}}\hat{{\cal H}}^{\dot{\mu}_1\cdots\dot{\mu}_{p-1}}
-\frac{1}{4}\hat{{\cal F}}_{\dot{\mu}\dot{\nu}}\hat{{\cal F}}^{\dot{\mu}\dot{\nu}}
+\frac{1}{2}\hat{{\cal F}}_{\alpha\dot{\mu}}\hat{{\cal F}}^{\alpha\dot{\mu}}
+\frac{1}{2g}\epsilon^{\alpha\beta}\hat{{\cal F}}_{\alpha\beta}.
\end{eqnarray}
We will use the ($p-1$)-bracket to define $\hat{{\cal H}}_{\dot{\mu}_1\dot{\mu}_2\cdots\dot{\mu}_{p-1}}$ and
$\hat{{\cal F}}_{\dot{\mu}\dot{\nu}}$.
The other components of $\hat{{\cal F}}$ need to involve $\hat{B}_{\alpha}{}^{\dot{\mu}}$.
Therefore, we examine the gauge transformation and modify the non-linear equation satisfied by $\hat{B}_{\alpha}{}^{\dot{\mu}}$.
We will examine the leading-order result.
Let us first recall the result of the R-R D-brane.
One can do the small $g$ expansion to solve the non-linear equation in R-R D-brane \cite{Ho:2011yr}.
The $\hat{B}_{\alpha}^{\dot{\mu}}$ is \cite{Ho:2011yr}
\begin{eqnarray}
\hat{B}_{\alpha}{}^{\dot{\mu}}=
\partial_{\alpha}\hat{b}^{\dot{\mu}}
+\epsilon_{\alpha\beta}\hat{F}^{\beta\dot{\mu}}+{\cal O}(g).
\end{eqnarray}
Then one can read the gauge transformation \cite{Ho:2011yr}
\begin{eqnarray}
\delta\hat{B}_{\alpha}{}^{\dot{\mu}}=\partial_{\alpha}\hat{\kappa}^{\dot{\mu}}+{\cal O}(g).
\end{eqnarray}
Now we replace $\partial$ with the $D$ in the multiple D-branes.
The non-linear equation becomes
\begin{eqnarray}
\hat{B}_{\alpha}{}^{\dot{\mu}}=
D_{\alpha}\hat{b}^{\dot{\mu}}
+\epsilon_{\alpha\beta}\hat{F}^{\beta\dot{\mu}}+{\cal O}(g),
\end{eqnarray}
where $\hat{F}$ is the ordinary covariant field strength.
Then we can get the gauge transformation
\begin{eqnarray}
\delta\hat{B}_{\alpha}{}^{\dot{\mu}}=\partial_{\alpha}\hat{\kappa}^{\dot{\mu}}+i\lbrack\hat{\lambda}, \hat{B}_{\alpha}{}^{\dot{\mu}}\rbrack+{\cal O}(g).
\end{eqnarray}
The non-linear equation that constrains $\hat{B}_{\alpha}{}^{\dot{\mu}}$ is also closed
\begin{eqnarray}
i\lbrack\hat{\lambda}, D_{\alpha}\hat{b}^{\dot{\mu}}+\epsilon_{\alpha\beta}\hat{F}^{\beta\dot{\mu}}-\hat{B}_{\alpha}{}^{\dot{\mu}}\rbrack
={\cal O}(g).
\end{eqnarray}
\\
\noindent
Now we discuss the low-energy theory in the leading order.
We first review the R-R D$p$-brane theory \cite{Ho:2011yr}.
The field strengths are \cite{Ho:2011yr}:
\begin{eqnarray}
\hat{{\cal H}}_{23\cdots p}&=&\hat{H}+{\cal O}(g)\equiv \partial_{\dot{\mu}}\hat{b}^{\dot{\mu}}+{\cal O}(g);
\nonumber\\
\hat{{\cal F}}_{\dot{\mu}\dot{\nu}}&=&\hat{F}_{\dot{\mu}\dot{\nu}}+{\cal O}(g);
\nonumber\\
\hat{{\cal F}}_{\alpha\dot{\mu}}&=&\hat{F}_{\alpha\dot{\mu}}+{\cal O}(g);
\nonumber\\
\frac{1}{2g}\epsilon^{\alpha\beta}\hat{{\cal F}}_{\alpha\beta}&=&
\frac{1}{2g}\epsilon^{\alpha\beta}\hat{F}_{\alpha\beta}
-\epsilon^{\alpha\beta}\hat{F}_{\alpha\dot{\mu}}
(\partial_{\beta}\hat{b}^{\dot{\mu}}+\epsilon_{\beta\gamma}\hat{F}^{\gamma\dot{\mu}})
+{\cal O}(g).
\end{eqnarray}
The R-R D-brane Lagrangian is \cite{Ho:2011yr}
\begin{eqnarray}
{\cal L}_{RR}\sim-\frac{1}{2}\hat{H}^2
-\frac{1}{4}\hat{F}_{\dot{\mu}\dot{\nu}}\hat{F}^{\dot{\mu}\dot{\nu}}
-\frac{1}{2}\hat{F}_{\alpha\dot{\mu}}\hat{F}^{\alpha\dot{\mu}}
-\hat{F}_{01}\hat{H}+{\cal O}(g),
\end{eqnarray}
where $\sim$ is the equivalence up to a total derivative term.
Because $\hat{b}$ does not have a time-derivative term, we can integrate it out, which is equivalent to substituting \cite{Ho:2011yr}
\begin{eqnarray}
\hat{H}=-\hat{F}_{01}+f,
\end{eqnarray}
where
\begin{eqnarray}
\partial_{\dot{\mu}}f=0,
\end{eqnarray}
to ${\cal L}_{RR}$.
Therefore, we get the low-energy theory \cite{Ho:2011yr}
\begin{eqnarray}
{\cal L}_{RR0}\sim -\frac{1}{4}\hat{F}_{AB}\hat{F}^{AB}-\frac{f^2}{2}+{\cal O}(g).
\end{eqnarray}
Because $f$ does not interact with the dynamical field, we can ignore it and obtain the expected kinetic term of $\hat{a}_A$.
The extension of the non-Abelian gauge group is similar.
We will replace the $\partial$ with $D$ and promote the U(1) invariant quantity to the U($N$) covariant object.
The field strengths become:
\begin{eqnarray}
\hat{{\cal H}}_{23\cdots p}&=&\hat{H}+{\cal O}(g)\equiv D_{\dot{\mu}}\hat{b}^{\dot{\mu}}+{\cal O}(g);
\nonumber\\
\hat{{\cal F}}_{\dot{\mu}\dot{\nu}}&=&\hat{F}_{\dot{\mu}\dot{\nu}}+{\cal O}(g);
\nonumber\\
\hat{{\cal F}}_{\alpha\dot{\mu}}&=&\hat{F}_{\alpha\dot{\mu}}+{\cal O}(g);
\nonumber\\
\frac{1}{2g}\epsilon^{\alpha\beta}\hat{{\cal F}}_{\alpha\beta}&=&
\frac{1}{2g}\epsilon^{\alpha\beta}\hat{F}_{\alpha\beta}
-\epsilon^{\alpha\beta}\hat{F}_{\alpha\dot{\mu}}
(D_{\beta}\hat{b}^{\dot{\mu}}+\epsilon_{\beta\gamma}\hat{F}^{\gamma\dot{\mu}})
+{\cal O}(g).
\end{eqnarray}
The Lagrangian description of R-R D-branes remains the same form
\begin{eqnarray}
{\cal L}_{RR}\sim-\frac{1}{2}\hat{H}^2
-\frac{1}{4}\hat{F}_{\dot{\mu}\dot{\nu}}\hat{F}^{\dot{\mu}\dot{\nu}}
-\frac{1}{2}\hat{F}_{\alpha\dot{\mu}}\hat{F}^{\alpha\dot{\mu}}
-\hat{F}_{01}\hat{H}+{\cal O}(g).
\end{eqnarray}
Integrating out the non-dynamical field $\hat{b}$ is equivalent to substituting
\begin{eqnarray}
\hat{H}=-\hat{F}_{01}+f,
\end{eqnarray}
where
\begin{eqnarray}
D_{\dot{\mu}}f=0.
\end{eqnarray}
The $f$ also does not interact with $\hat{a}_A$,
\begin{eqnarray}
{\cal L}_{RR0}\sim -\frac{1}{4}\hat{F}_{AB}\hat{F}^{AB}-\frac{f^2}{2}+{\cal O}(g).
\end{eqnarray}
Hence the classical Lagrangian is the Yang-Mills theory in the leading order.
Integrating out $\hat{b}$ leaves a determinant depending on $\hat{a}$ in the measure.
Therefore, it is necessary to take this determinant into account for the quantum action.
The difference between the Abelian and non-Abelian gauge groups also happened in the multiple M5-branes \cite{Chu:2012um}.
The $\hat{a}_{\alpha}$ in R-R D4-brane theory does not have a direct connection to the NP M5-brane theory \cite{Ho:2011yr}.
Therefore, we need to dualize a field and then find $\hat{a}_{\alpha}$ \cite{Ho:2011yr}.
The dualization also leaves the determinant in the measure \cite{Ho:2011yr}.
The R-R D4-brane theory does not include this quantum contribution \cite{Ho:2011yr}.
Our construction is the generalization of R-R D-brane theory.
Hence we only can discuss the classical Lagrangian.
\section{Discussion and Conclusion}
\label{sec:5}
\noindent
We studied the D-branes by turning on the large R-R ($p-1$)-form field background.
The ($p-1$)-bracket generates the VPD symmetry and this R-R field background in the D-brane \cite{Ho:2011yr}.
This bracket is closed under the T-duality transformation \cite{Ho:2013paa}.
Therefore, the generalization of the bracket should strongly constrain the Lagrangian description.
We generalized the ($p-1$)-bracket when the non-Abelian one-form gauge field appears.
The generalization is straightforward by replacing $\partial$ with $D$.
The bracket also provides the VPD symmetry and the R-R field background.
The Ref. \cite{Ho:2011yr} first constructed the R-R D-branes.
The generalized bracket reproduces the gauge theory with the VPD symmetry.
Now the ($p-1$)-bracket is not closed under the T-duality.
Therefore, we need the interaction terms between the commutator and bracket in the multiple D-branes.
The non-Abelianization of the ($p-2$)-form gauge potential should make complicated interaction terms.
We expect that the interaction structure is still similar to our computation.
In the end, we discussed the non-Abelianization of the ($p-2$)-form gauge potential.
We considered that the R-R field background only lies in the U(1) sector.
Therefore, we can apply the generalized bracket to non-Abelianization.
The field strength defined by the bracket is the VPD covariant.
We proposed a Lagrangian description similar to the R-R D-brane theory.
The realization is through the modified non-linear equation, satisfied by $\hat{B}_{\alpha}{}^{\dot{\mu}}$.
This modification also leads to the modified gauge transformation.
Our leading-order result shows the expected result.
The gauge transformation is the same as in Ref. \cite{Chu:2012um}.
Hence our study also has supporting evidence from the multiple M5-branes.
\\
\noindent
The ($p-1$)-bracket only constrains $\hat{{\cal F}}_{\dot{\mu}\dot{\nu}}$ and $\hat{{\cal H}}_{23\cdots p}$.
One needs to determine the proper non-linear equation that $\hat{B}_{\alpha}{}^{\dot{\mu}}$ satisfies in order to obtain the other components of $\hat{{\cal F}}$.
Our naive generalization works at the leading order.
The perturbation to all orders is possible.
It should be interesting to proceed in this direction and show the complete information on R-R D-branes.
\\
\noindent
The solution of the SW map was already found for the Abelian $\hat{b}$ case \cite{Ma:2020msx}.
The generalization (to the case of non-Abelian $\hat{b}$) should not be difficult.
The SW map should help examine the consistency for the low-energy description from the commutative side \cite{Ma:2020msx}.
The non-commutative NS-NS D-brane is non-trivial because the quadratic term contains all-order effects through the Moyal product \cite{Seiberg:1999vs}.
So far, there is no evidence to show that a similar thing should also appear in the R-R D-branes.
Hence it should be interesting to use the SW map to examine.
\\
\noindent
The most non-trivial direction is the low-energy construction of M2-M5 branes.
Because $\hat{b}$ has the same gauge transformation as in Ref. \cite{Chu:2012um}, the multiple M5-branes should be relevant to our construction.
Our construction can have the higher-order terms in $g$.
Hence our study possibly leads to the higher-order deformation of the M5-branes.
In Ref. \cite{Chu:2012um}, a constraint was introduced to implement self-duality.
After performing the double dimensional reduction on a circle, the M5-branes reduce to the Yang-Mills theory together with a quantum action \cite{Chu:2012um}.
The appearance of the quantum action is due to the constraint \cite{Chu:2012um}.
The reason for introducing the constraint to the M5-branes is technical (implementing the self-duality); we still do not know its physical origin.
The complete description of R-R D4-branes also has the quantum contribution from the measure \cite{Ho:2011yr}.
Therefore, it is interesting to find the consistent quantum action between the multiple M5-branes and R-R D4-branes.
It should help explore the Lagrangian description of low-energy M5-branes.
\section*{Acknowledgments}
\noindent
The author would like to thank Chong-Sun Chu, Xing Huang, and Yiwen Pan for their helpful discussion and thank Nan-Peng Ma for his encouragement.
\\
\noindent
The author acknowledges the YST Program of the APCTP;
Post-Doctoral International Exchange Program (Grant No. YJ20180087);
China Postdoctoral Science Foundation, Postdoctoral General Funding: Second Class (Grant No. 2019M652926);
Foreign Young Talents Program (Grant No. QN20200230017);
Science and Technology Program of Guangzhou (Grant No. 2019050001).
\baselineskip 22pt
|
{
"arxiv_id": "2302.08647",
"language": "en",
"timestamp": "2023-02-28T02:04:01",
"url": "https://arxiv.org/abs/2302.08647",
"yymm": "2302"
} |
\section{Introduction} \label{sec:intro}
Macromolecules are long-range and hierarchical structures as they consist of many substructures. While small molecules in existing datasets \cite{Ruddigkeit2012, Ramakrishnan2014, Sterling2015} comprise fewer than 50 atoms connected by simple rings and bonds, this number in a macromolecule can reach dozens or even hundreds. Substructures such as repeating units and functional groups are intrinsic parts of macromolecules; they undergo characteristic chemical reactions regardless of the other compositions in the same molecules \cite{functional}. Therefore, studying the multiresolution characteristic of large molecules is imperative to gain comprehensive knowledge about real-life materials like polymers or proteins \cite{multiscale}. In recent years, several works \cite{machine_learning_exploit, screening_macro, macro_molecule} have been proposed to apply machine learning algorithms to learn macromolecules at multiple scales. These approaches, however, rely on thorough feature selection and extraction, which are inefficient when learning from large databases of multicomponent materials \cite{machine_learning_exploit}.
Message passing is a prevailing paradigm for designing neural networks that operate on graph-structured data. Previous studies \cite{PMLR-V70-gilmer17a, gcn, gat, ppa, gin} have proposed different strategies to perform message passing on graphs and achieved remarkable results across various domains. However, message-passing-dominated graph neural networks (GNNs) have some inherent limitations, such as limited expressive power \cite{high-order,gin}, over-smoothing \cite{chen-oversmoothing, li-2018, oono2020graph}, and over-squashing \cite{alon2021on}. Over-smoothing arises in graph neural networks with a sufficiently large number of layers, where node representations tend to converge to a constant. Over-squashing occurs when messages are ineffectively propagated and aggregated through bottlenecks on long-range graph structures. These two shortcomings hinder GNNs from making good predictions on long-range and hierarchically structured data. Furthermore, the molecular properties of large molecules are formed not only by interactions among atoms within neighborhoods but also by distant atoms. Therefore, local information is not sufficient to model macromolecules.
Transformers are classes of deep learning models that leverage self-attention mechanisms to handle long-range dependencies in various data domains, such as natural language processing \cite{vaswari, bert} or computer vision \cite{vitrans, Swin}. In graph domains, Transformer-like architectures \cite{san, generalization_trans, gps} have proved their effectiveness in learning node representations as they can overcome the over-smoothing and over-squashing issues by directly measuring the pairwise relationships between the nodes. Contrary to GNNs, graph transformers do not use the graph structure as hard-coded information. They, instead, encode positional and structural information on graphs as soft inductive bias, making them flexible learners in graph learning problems \cite{san}. Node positional representations can be derived based on spectral \cite{graph_generalize, rw} or spatial \cite{anchor, distance} domains. Most existing spectral-based methods decompose the graph Laplacian into sets of eigenvectors and eigenvalues. However, these eigenvectors have sign ambiguity and are unstable due to eigenvalue multiplicities. On the other hand, spatial-based approaches compute the shortest distances among the nodes; however, these encoding methods do not consider the structural similarity between nodes and their neighborhoods \cite{sat}.
\paragraph{Contribution} We propose Multiresolution Graph Transformer (MGT) and
Wavelet Positional Encoding (WavePE), using multiresolution analysis on both spectral and spatial domains for learning to represent hierarchical structures. Our contributions are four-fold:
\begin{compactitem}
\item We design Multiresolution Graph Transformer (MGT), a Transformer-like architecture that can operate on macromolecules at multiple scales. Our proposed model can learn the atomic representations and group them into meaningful clusters via a data-driven algorithm. Finally, the substructures, i.e. clusters, are fed to a Transformer encoder to calculate the representations of several substructures in macromolecules.
\item We introduce Wavelet Positional Encoding (WavePE), a new positional encoding scheme for graph-structured data. Since wavelet analysis can provide localization in both spatial and spectral domains, we construct a set of wavelets to capture the structural information on graphs at different scales. Then, we apply equivariant encoding methods to project the wavelet tensors into positional representations for the atoms.
\item We show the effectiveness of our methodology by reporting its superior performance on three molecular property prediction benchmarks. These datasets contain macromolecules, i.e. peptides and polymers, that are highly hierarchical and consist of up to hundreds of atoms.
\item Our visualization demonstrates the comprehensiveness of our proposed methods in learning to represent large molecules. In general, we show the representations of molecules produced by MGT and how MGT determines and groups the atoms in long-chain molecules.
\end{compactitem}
\section{Related work} \label{sec:related}
\paragraph{Hierarchical Learning on Molecules}
Functional groups or repeating units are essential phenomena in chemistry. While functional groups constitute large molecules, repeating units are the primary parts that produce complete polymer chains. We regard them as substructures. In particular, similar substructures undergo similar chemical reactions regardless of the remaining compositions existing in the molecules \cite{functional}. Previous work has leveraged the hierarchical property of molecules to improve the performance in molecular representation learning and generation. \citet{knowledge_constrastive}, \citet{grover}, and \citet{hyper_message} used functional groups as prior knowledge to guide the models to predict accurate molecular properties. For the molecular generation task, \citet{structural_motifs} and \citet{hpgraph} used chemical rules to extract substructures and constructed a vocabulary of structural motifs to generate the molecules.
\paragraph{Graph Transformers} Earlier research efforts have adopted Transformer-like architectures to graph-structured data. \citet{graph_generalize} proposed an early approach to generalize Transformers to graphs using Laplacian positional encoding and performing self-attention on one-hop neighbors surrounding center nodes. On the other hand, \citet{san} computes attention scores on the entire graph with differentiation between positive and negative edges, while also using Laplacian positional encoding. \citet{grover} introduced GTransformer that utilizes vectorized outputs from local GNNs as inputs for a Transformer encoder, making up an effective combination between node local and global information. \citet{gps} proposed a general framework that integrates essential components of Graph Transformers, including positional or structural encoding, graph feature extraction, local message passing, and self-attention. Also, \citet{sat} extracted multiple k-hop subgraphs and fed them to local GNNs to compute their embeddings, which are then moved to a Transformer encoder. Graphormer proposed in \cite{graphormer} uses attention mechanisms to estimate several types of encoding, such as centrality, spatial, and edge encodings. In addition, \citet{pure_transformer} treated all nodes and edges as independent tokens augmented with orthonormal node identifiers and trainable type identifiers, and fed them to a standard Transformer encoder.
Moreover, \citet{graph_transformer_networks} generated multiple meta-paths, i.e. views, of a graph and computed their pairwise attention scores, before aggregating them into a final representation for the entire graph.
\paragraph{Graph Positional Encoding} Several approaches have been proposed to encode the positional or structural representations into node features to improve the expressiveness of GNNs and Graph Transformers. Node positions can be determined via spectral or spatial domains. Spectral-based methods include Laplacian positional encoding \cite{graph_generalize, san} and random walk positional encoding (RWPE) \cite{rw}. For spatial-based methods, \citet{anchor} computed distances of sets of nodes to anchor nodes, whereas \citet{distance} calculated the shortest distances between pairs of nodes.
\paragraph{Multiresolution Analysis and Wavelet Theory}
Multiresolution Analysis (MRA) has been proposed by \cite{192463, 10.5555/1525499} as a method to approximate signals at multiple scales in which the signals are decomposed over elementary waveforms chosen from a family called wavelets (i.e. mother wavelets and father wavelets), including Haar \cite{Haar1910ZurTD}, Daubechies \cite{Daubechies1988OrthonormalBO}, etc., to produce the sparse representations. In graph and discrete domains, \citet{HAMMOND2011129} introduced spectral graph wavelets that are determined by applying the wavelet operator on the graph Laplacian at multi-levels. \citet{COIFMAN200653} proposed diffusion wavelet that is a fast multiresolution framework for analyzing functions on discretized structures such as graphs and manifolds. In the deep learning era, \citet{10.5555/2999611.2999723} and \citet{xu2018graph} leveraged the power of neural networks for graph wavelet construction and computation.
\section{Preliminaries} \label{sec:background}
\subsection{Notation}
A molecule can be represented as an undirected graph in which nodes are the atoms and edges are the bonds between them. In particular, we refer to a molecular graph as $G = (\mathcal{V}, \mathcal{E}, \mathbf{A}, \mathbf{X}, \mathbf{P}, \mathcal{V}_s)$, where $G$ is an undirected graph having $\mathcal{V}$ ($|\mathcal{V}| = n$) and $\mathcal{E}$ as sets of nodes and edges respectively; also, $\mathbf{A} \in \mathbb{R} ^ {n \times n}$ is the graph's adjacency matrix. When a graph is attributed, we augment $G$ with a set of node feature vectors $\mathcal{X} = \{x_1,..., x_n\}, x_i \in \mathbb{R}^d$ and a set of node positional vectors $\mathcal{P} = \{p_1, ..., p_n\}, p_i \in \mathbb{R}^p$. These two types of attributes are stored in $\mathbf{X} \in \mathbb{R} ^ {n \times d}$ and $\mathbf{P} \in \mathbb{R} ^ {n \times p}$ correspondingly.
In addition to the atom-level representation of $G$, $\mathcal{V}_s = \{v_{s_1}, ... , v_{s_k}\}$ denotes the substructure set in which $v_{s_i} \subset \mathcal{V}$, i.e. $v_{s_i}$ is a subset of atoms of the molecule.
\subsection{Hierarchical Learning on Molecules}
\label{heirachical_coarsen}
Molecular property prediction is regarded as a graph-level learning task. We need to aggregate node embeddings into graph-level vectors which are then fed to a classifier to make predictions on graphs. Specifically, a function $f: \mathcal{V} \xrightarrow{} \mathcal{Z}$ that maps the atom $u \in \mathcal{V}$ to a $d_o$-dimensional vector $z_u \in \mathcal{Z} \subset \mathbb{R} ^ {d_o}$ should learn to produce atom-level representations. Most existing graph neural networks compute the vector $z =\zeta (\{f(u) | u \in \mathcal{V}\})$ that indicates a representation for the entire molecular graph, where $\zeta$ can be sum, mean, max, or more sophisticated operators. For hierarchical learning, substructure-level representations can be derived in addition to atom-level representations by aggregating node representations in the same substructures as $z_s =\zeta (\{f(u) | u \in v_s \land v_s \in \mathcal{V}_s\})$. Instead of atom vectors, we aggregate the substructure vectors to represent the entire graph, i.e. $z = \zeta (\{z_s | z_s \in \mathcal{V}_s\})$. Finally, a classifier $g$ given $z$ as inputs is trained to predict the molecular properties.
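As a concrete illustration, the two-level aggregation can be sketched as follows; this is a minimal PyTorch snippet in our own notation (not the released implementation), with mean pooling standing in for $\zeta$ and hypothetical index lists standing in for the substructures $v_s$.
\begin{verbatim}
import torch

def hierarchical_readout(node_embeddings, substructures):
    # node_embeddings: (n, d_o) tensor of atom-level representations f(u)
    # substructures: list of index lists, one per substructure v_s
    # zeta = mean over the atoms belonging to each substructure
    z_s = torch.stack([node_embeddings[idx].mean(dim=0)
                       for idx in substructures])
    # zeta = mean over substructure vectors for the whole graph
    z = z_s.mean(dim=0)
    return z_s, z
\end{verbatim}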
\subsection{Transformers on Graphs}
While GNNs learn node embeddings by leveraging the graph structure via local message-passing mechanisms, Transformers ignore localities and directly infer the relations between pairs of nodes using only node attributes. In other words, the node connectivity is not utilized in pure transformer-like architectures \cite{vaswari}, reducing the graph conditions to a set learning problem. Given a tensor of node features $\mathbf{X} \in \mathbb{R} ^ {n \times d}$, Transformers compute three matrices including query ($\mathbf{Q}$), key ($\mathbf{K}$), and value ($\mathbf{V}$) via three linear transformations $\mathbf{Q} = \mathbf{X} \mathbf{W}_q^T$, $\mathbf{K} = \mathbf{X} \mathbf{W}_k ^ T$, and $\mathbf{V} = \mathbf{X} \mathbf{W}_v ^ T$. A self-attention tensor ($\mathbf{H}$) can be computed as follows:
\begin{equation}
\mathbf{H} = \text{softmax}(\frac{\mathbf{Q}\mathbf{K}^T}{\sqrt{d_o}}) \mathbf{V}
\label{eq:1}
\end{equation}
where $\mathbf{W}_q$, $\mathbf{W}_k$, and $\mathbf{W}_v$ are learnable parameters in $\mathbb{R} ^ {d_o \times d}$, resulting in $\mathbf{H} \in \mathbb{R} ^{n \times d_o}$. Furthermore, each $\mathbf{H}$ in Eq. \ref{eq:1} denotes an attention head. To improve effectiveness, multiple $\{\mathbf{H}\}_{i = 1}^ h$ are computed, which is known as multi-head attention. All of the attention heads are concatenated to form a final tensor: $\mathbf{H}_o = \text{concat}(\mathbf{H}_1,..., \mathbf{H}_h) $, where $h$ is the number of attention heads. Finally, the output $\mathbf{X}^\prime$, i.e. new node representations, can be computed by feeding $\mathbf{H}_o$ into a feed-forward neural network (FFN), i.e. $\mathbf{X} ^\prime = \text{FFN}(\mathbf{H}_o)$. It is easy to see that Transformers operating on inputs without positional encoding are permutation invariant.
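For concreteness, a single attention head of Eq.~(\ref{eq:1}) can be sketched as below; this is a minimal PyTorch illustration in our notation, with the multi-head concatenation and the FFN omitted.
\begin{verbatim}
import torch
import torch.nn.functional as F

def self_attention_head(X, W_q, W_k, W_v):
    # X: (n, d) node features; W_q, W_k, W_v: (d_o, d) learnable weights
    Q, K, V = X @ W_q.T, X @ W_k.T, X @ W_v.T
    d_o = Q.shape[-1]
    scores = Q @ K.T / d_o ** 0.5          # (n, n) pairwise attention logits
    return F.softmax(scores, dim=-1) @ V   # H as in the equation above, (n, d_o)
\end{verbatim}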
\paragraph{Positional Encoding} As pure Transformer encoders only model sets of nodes without being cognizant of the graph structures, positional or structural information between nodes and their neighborhoods should be incorporated into node features. In particular, node positional representations can be added or concatenated with node features, resulting in comprehensive inputs for Transformer-like architectures operating on graph-structured data.
\section{Wavelet Positional Encoding}
\label{wavelet}
\subsection{Spectral Graph Wavelets}
Let $\mathcal{A} \in \mathbb{R} ^ {n\times n}$ be the adjacency matrix of an undirected graph $G = (\mathcal{V}, \mathcal{E})$. The normalized graph Laplacian is defined as $\mathcal{L} = \mathcal{I}_n - \mathcal{D}^{-1/2} \mathcal{A} \mathcal{D} ^ {-1/2}$, where $\mathcal{I}_n$ is the identity matrix and $\mathcal{D}$ is the diagonal matrix of node degrees. $\mathcal{L}$ can be decomposed into a complete set of orthonormal eigenvectors $U = (u_1, u_2, ..., u_n)$ associated with real and non-negative eigenvalues $\{\lambda\}_1^n$. While graph Fourier transform uses $U$ as a set of bases to project the graph signal from the vertex domain to the spectral domain, graph wavelet transform constructs a set of spectral graph wavelets as bases for this projection via:
\[
\psi_s = U \Sigma_s U^T
\]
where $\Sigma_s=\text{diag}(g(s\lambda_1), g(s\lambda_2), ..., g(s\lambda_n))$ is a scaling matrix of eigenvalues, $\psi_s=(\psi_{s1}, \psi_{s2}, ..., \psi_{sn})$ and each wavelet $\psi_{si}$ indicates how a signal diffuses away from node $i$ at scale $s$; we choose $g(s\lambda) = e ^ {-s \lambda}$ as a heat kernel \cite{wavelet2018}. Since a node's neighborhoods can be adjusted by varying the scaling parameter $s$ \cite{wavelet}, using multiple sets of wavelets at different scales can provide comprehensive information on the graph's structure. It means that larger values of $s_i$ correspond to larger neighborhoods surrounding a center node. Figure \ref{fig:diffusion_wavelet} illustrates how wavelets can be used to determine neighborhoods at different scales on a molecular graph. In this work, we leverage this property of graph wavelets to generate node positional representations that can capture the structural information of a center node on the graph at different resolutions. We employ $k$ diffusion matrices $\{\psi_{s_i}\}_{i=1}^k$ in which each $\psi_{s_i}$ has a size of $n \times n$, resulting in a tensor of graph
wavelets $\mathcal{P} \in \mathbb{R} ^ {n \times n \times k}$. Additionally, WavePE is a generalized version of RWPE \cite{rw}, as the random walk process can be regarded as a type of discrete diffusion. In the following section, we demonstrate the use of tensor contractions to generate a tensor of node positional representations $\mathbf{P} \in \mathbb{R} ^ {n \times k}$ from $\mathcal{P}$. Fig.~\ref{fig:wavepe} gives an overview of our wavelet positional encoding method.
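A minimal sketch of how the raw wavelet tensor could be assembled (PyTorch, dense matrices, our own naming) is shown below; the heat kernel $g(s\lambda)=e^{-s\lambda}$ and the example scales follow the description above, while the subsequent equivariant encoding to $\mathbf{P} \in \mathbb{R}^{n \times k}$ is deferred to the next section.
\begin{verbatim}
import torch

def wavelet_tensor(A, scales=(1, 2, 3, 4, 5)):
    # A: (n, n) dense adjacency matrix of the molecular graph
    n = A.shape[0]
    deg = A.sum(dim=1).clamp(min=1e-12)
    D_inv_sqrt = torch.diag(deg.rsqrt())
    L = torch.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized Laplacian
    lam, U = torch.linalg.eigh(L)                    # eigen-decomposition of L
    # psi_s = U diag(exp(-s * lambda)) U^T for each scale s (heat kernel)
    waves = [U @ torch.diag(torch.exp(-s * lam)) @ U.T for s in scales]
    return torch.stack(waves, dim=-1)                # (n, n, k) raw wavelet tensor
\end{verbatim}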
\begin{figure}%
\centering
\captionsetup[subfloat]{captionskip=-2mm}
\subfloat[s = 3]{%
\label{fig:peptides}%
\includegraphics[scale = 0.2]{Figure/scale_3.pdf}}%
\subfloat[s = 5]{%
\label{fig:polymer}%
\includegraphics[scale = 0.2]{Figure/scale_5.pdf}}%
\subfloat[s = 10]{
\label{fig:scale_10}
\includegraphics[scale = 0.2]{Figure/scale_10.pdf}
}
\caption{Visualization of some of the wavelets at different scaling parameters on the Aspirin $\text{C}_9\text{H}_8\text{O}_4$ molecular graph with 13 nodes (i.e. heavy atoms). The center node is colored yellow. The colors varying from bright to dark illustrate the diffusion rate from the center node to the others, i.e. nodes that are closer to the center node have brighter colors.
Low-scale wavelets are highly localized, whereas high-scale wavelets spread out to more nodes on the molecular graph.}
\label{fig:diffusion_wavelet}
\end{figure}
\subsection{Equivariant Encoding}
It is important to note that the spectral graph wavelets computed in the previous section must be further encoded in a permutation-equivariant manner. That is, if we permute (i.e. change the order of) the nodes,
their positional encodings must be transformed accordingly. In this section, we formally define permutation symmetry, i.e. symmetry under the action of the symmetric group $\mathbb{S}_n$, and construct permutation-equivariant neural networks to encode the graph wavelets. An element $\sigma \in \mathbb{S}_n$ is a permutation of $n$ elements, i.e. a bijective map from $\{1,\ldots, n\}$ to $\{1,\ldots, n\}$. For example, the actions of $\mathbb{S}_n$ on an adjacency matrix $\mathcal{A} \in \mathbb{R}^{n \times n}$ and on a latent matrix $\mathcal{Z} \in \mathbb{R}^{n \times d_z}$ are:
\[
[\sigma \cdot \mathcal{A}]_{i_1, i_2} = \mathcal{A}_{\sigma^{-1}(i_1), \sigma^{-1}(i_2)}, \ \ \ \
[\sigma \cdot \mathcal{Z}]_{i, j} = \mathcal{Z}_{\sigma^{-1}(i), j},
\]
for $\sigma \in \mathbb{S}_n$. Here, the adjacency matrix $\mathcal{A}$ is a second-order tensor with a single feature channel, while the latent matrix $\mathcal{Z}$ is a first-order tensor with $d_z$ feature channels. In general, the action of $\mathbb{S}_n$ on a $k$-th order tensor $\mathcal{X} \in \mathbb{R}^{n^k \times d}$ (i.e. the last index denotes the feature channels) is defined similarly as:
\[
[\sigma \cdot \mathcal{X}]_{i_1, .., i_k, j} = \mathcal{X}_{\sigma^{-1}(i_1), .., \sigma^{-1}(i_k), j}, \hspace{20pt} \sigma \in \mathbb{S}_n.
\]
\noindent
Formally, we define these equivariant and invariant properties in Def.~\ref{def:Sn-equivariant} and equivariant neural networks in Def.~\ref{def:Sn-network}.
\begin{definition} \label{def:Sn-equivariant}
An $\mathbb{S}_n$-equivariant (or permutation equivariant) function is a function $f\colon \mathbb{R}^{n^k \times d} \rightarrow \mathbb{R}^{n^{k'} \times {d'}}$ that satisfies $f(\sigma \cdot \mathcal{X}) = \sigma \cdot f(\mathcal{X})$ for all $\sigma \in \mathbb{S}_n$ and $\mathcal{X} \in \mathbb{R}^{n^k \times d}$.
Similarly, we say that $f$ is $\mathbb{S}_n$-invariant (or permutation invariant) if and only if $f(\sigma \cdot \mathcal{X}) = f(\mathcal{X})$.
\end{definition}
\begin{definition} \label{def:Sn-network}
An $\mathbb{S}_n$-equivariant network is a function $f: \mathbb{R}^{n^k \times d} \rightarrow \mathbb{R}^{n^{k'} \times d'}$
defined as a composition of $\mathbb{S}_n$-equivariant linear functions $f_1, .., f_T$ and $\mathbb{S}_n$-equivariant nonlinearities $\gamma_1, .., \gamma_T$:
\[
f = \gamma_T \circ f_T \circ .. \circ \gamma_1 \circ f_1.
\]
On the other hand, an $\mathbb{S}_n$-invariant network is a function $f: \mathbb{R}^{n^k \times d} \rightarrow \mathbb{R}$ defined as a composition of an $\mathbb{S}_n$-equivariant network $f'$ and an $\mathbb{S}_n$-invariant function on top of it, e.g., $f = f'' \circ f'$.
\end{definition}
\noindent
In order to build permutation-equivariant neural networks, we revisit some basic tensor operations:
the tensor product $A \otimes B$ (see Def.~\ref{def:product}) and tensor contraction ${A_{\downarrow}}_{x_1, .., x_p}$ (see Def.~\ref{def:contraction}).
It can be shown that these tensor operations respect permutation equivariance \citep{HyEtAl2018} \citep{Kondor2018}.
\begin{definition} \label{def:product}
The \textbf{tensor product} of $A \in \mathbb{R}^{n^a}$ with $B \in \mathbb{R}^{n^b}$ yields a tensor $C = A \otimes B \in \mathbb{R}^{n^{a + b}}$ where
\[
C_{i_1, i_2, .., i_{a + b}} = A_{i_1, i_2, .., i_a} B_{i_{a + 1}, i_{a + 2}, .., i_{a + b}}.
\]
\end{definition}
\begin{definition} \label{def:contraction}
The \textbf{contraction} of $A \in \mathbb{R}^{n^a}$ along the pair of dimensions $\{x, y\}$ (assuming $x < y$) yields an $(a - 2)$-th order tensor
\[
C_{i_1, .., i_{x - 1}, i_{x + 1}, .., i_{y - 1}, i_{y + 1}, .., i_a} = \sum_{j} A_{i_1, .., i_{x - 1}, j, i_{x + 1}, .., i_{y - 1}, j, i_{y + 1}, .., i_a},
\]
where we assume that $i_x$ and $i_y$ have been removed from amongst the indices of $C$. Using Einstein notation, this can be written more compactly as
\[
C_{\{i_1, i_2, .., i_a\} \setminus \{i_x, i_y\}} = A_{i_1, i_2, .., i_a} \delta^{i_x, i_y},
\]
where $\delta$ is the Kronecker delta. In general, the contraction of $A$ along dimensions $\{x_1, .., x_p\}$ yields a tensor $C = {A_\downarrow}_{x_1, .., x_p} \in \mathbb{R}^{n^{a - p}}$ where
\[
{A_\downarrow}_{x_1, .., x_p} = \sum_{i_{x_1}} \sum_{i_{x_2}} ... \sum_{i_{x_p}} A_{i_1, i_2, .., i_a},
\]
or compactly as
\[
{A_\downarrow}_{x_1, .., x_p} = A_{i_1, i_2, .., i_a} \delta^{i_{x_1}, i_{x_2}, ..,i_{x_p}}.
\]
\end{definition}
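As a small sanity check of the two operations above (a toy sketch; the index choices are arbitrary and not part of the model), the tensor product and the pairwise contraction can be expressed with einsum:
\begin{verbatim}
import torch

n = 4
A = torch.randn(n, n)                    # a second-order tensor
B = torch.randn(n)                       # a first-order tensor

# Tensor product: C_{i1,i2,i3} = A_{i1,i2} * B_{i3}, a third-order tensor
C = torch.einsum('ij,k->ijk', A, B)

# Contraction of C along the pair of dimensions {1, 2}: set the two indices
# equal (Kronecker delta) and sum, leaving a first-order tensor
C_down = torch.einsum('iik->k', C)
print(C.shape, C_down.shape)             # (4, 4, 4) and (4,)
\end{verbatim}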
Based on these tensor contractions and Def.~\ref{def:Sn-equivariant}, we can construct second-order $\mathbb{S}_n$-equivariant networks encoding a graph with adjacency matrix $\mathcal{A} \in \mathbb{R}^{n \times n}$, node features $\mathcal{F}_v \in \mathbb{R}^{n \times d_v}$, and edge features $\mathcal{F}_e \in \mathbb{R}^{n \times n \times d_e}$ as in Section~\ref{exp:2nd-order}:
$$f = \gamma \circ \mathcal{M}_T \circ .. \circ \gamma \circ \mathcal{M}_1.$$
The ``raw'' graph wavelets can be treated as a second-order tensor of size $n \times n \times k$ where $k$ is the number of scales, similarly as the edge features. We employ the higher-order permutation-equivariant message passing proposed by \cite{maron2018invariant}, \cite{HyEtAl2018} and \cite{Kondor2018} to encode the ``raw'' graph wavelets from size $n \times n \times k$ into $n \times k$ that will be further used as nodes/tokens' embeddings of our Transformer architecture (see Fig.~\ref{fig:overall-idea}).
\subsubsection{Higher-order message passing} \label{exp:2nd-order}
The second order message passing has the message $\mathcal{H}_0 \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}| \times (d_v + d_e)}$ initialized by promoting the node features $\mathcal{F}_v$ to a second order tensor (e.g., we treat node features as self-loop edge features), and concatenating with the edge features $\mathcal{F}_e$. Iteratively,
$$
\mathcal{H}_t = \gamma(\mathcal{M}_t), \ \ \ \
\mathcal{M}_t = \mathcal{W}_t \bigg[ \bigoplus_{i, j} (\mathcal{A} \otimes {\mathcal{H}_{t - 1})_\downarrow}_{i, j} \bigg],
$$
where $\mathcal{A} \otimes \mathcal{H}_{t - 1}$ results in a fourth order tensor while $\downarrow_{i, j}$ contracts it down
to a second order tensor along the $i$-th and $j$-th dimensions, $\oplus$ denotes concatenation along the feature channels, and $\mathcal{W}_t$ denotes a multilayer perceptron on the feature channels.
We remark that the popular MPNN \citep{10.5555/3305381.3305512} is a lower-order special case in which $\mathcal{M}_t = \mathcal{D}^{-1}\mathcal{A}\mathcal{H}_{t - 1}\mathcal{W}_{t - 1}$, where $\mathcal{D}_{ii} = \sum_j \mathcal{A}_{ij}$ is the diagonal matrix of node degrees.
The message $\mathcal{H}_T$ of the last iteration is still second order, so we contract it down to the first order latent $\mathcal{Z} = \bigoplus_i {{\mathcal{H}_T}_\downarrow}_i$.
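The sketch below illustrates a single second-order iteration. It is a simplified, assumed form: only two of the possible contraction pairs are shown, and the MLP $\mathcal{W}_t$ and the nonlinearity $\gamma$ are plain placeholders.
\begin{verbatim}
import torch
import torch.nn as nn

def second_order_step(A, H, mlp):
    # A: (n, n) adjacency; H: (n, n, c) second-order message with c channels
    T = torch.einsum('ij,klc->ijklc', A, H)   # fourth-order tensor (plus channels)
    # two example delta-contractions back down to second order
    c1 = torch.einsum('ajakc->jkc', T)        # contract dims (1, 3)
    c2 = torch.einsum('iakac->ikc', T)        # contract dims (2, 4)
    M = torch.cat([c1, c2], dim=-1)           # concatenate on feature channels
    return torch.relu(mlp(M))                 # gamma(M_t) with a placeholder gamma

n, c = 5, 8
A = (torch.rand(n, n) < 0.3).float()
H = torch.randn(n, n, c)
mlp = nn.Linear(2 * c, c)                     # W_t acting on the channel dimension
H_next = second_order_step(A, H, mlp)         # (n, n, c)
\end{verbatim}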
\section{Multiresolution Graph Transformers} \label{sec:method_mgt}
In this section, we present Multiresolution Graph Transformers (MGT), a neural network architecture for learning hierarchical structures. MGT uses Transformers to yield the representations of macromolecules at different resolutions. While previous work either neglects the hierarchical characteristics of large molecules or fails to model global interactions between distant atoms, our proposed approach can satisfy these two properties via multiresolution analysis.
Figs. \ref{fig:atom_level_mgt} and \ref{fig:substructure_level_mgt} show an overview of our framework. MGT consists of three main components: an atom-level encoder, a module to extract substructures, and a substructure-level encoder. We use a graph transformer to generate the atomic embeddings. Then, substructures present in molecules are extracted by a learning-to-cluster algorithm. The molecular graph is coarsened into a set of substructures, and we use a pure Transformer encoder to learn their relations.
\subsection{Atom-Level Encoder}
To utilize the proposed wavelet positional encoding demonstrated in Section \ref{wavelet}, we leverage the design of the graph transformer proposed in \cite{gps}, which is a general, powerful, and scalable Graph Transformer (GraphGPS) for graph representation learning. Let $\mathbf{A} \in \mathbb{R} ^ {n \times n}$ be the adjacency matrix of a graph with $n$ nodes and $e$ edges; $\textbf{X}^l$ and $\textbf{E}^l$ are node and edge features at layer $l$-th, respectively. In addition, $\textbf{X} ^ 0 \in \mathbb{R} ^{n \times d}$ and $\textbf{E}^0 \in \mathbb{R}^ {e \times d}$ are initial atom and bond features embedded in $d$-dimensional spaces created by two embedding layers. The wavelet positional vectors $\textbf{p} \in \mathbb{R} ^ {n \times k}$ are fed to an encoder (e.g., a feed-forward neural network or a linear transformation), yielding a tensor of positional features $\textbf{P} \in \mathbb{R} ^ {n \times d_p}$. We let $\textbf{X}^0:= \text{concat}(\textbf{X}^0, \textbf{P})$ to produce new node features $\textbf{X}^0 \in \mathbb{R} ^ {n \times (d + d_p)}$. From here, we define $d := d + d_p $, and for convenience, the output dimensions of all layers are equal to $d $.
Each layer of GraphGPS uses a message-passing neural network ($\text{MPNN}^l$) to exchange information (i.e., messages) within the neighborhood and a self-attention layer ($\text{SA}^l$) described in Eq. (\ref{eq:1}) to compute global interactions among distant nodes:
\begin{align}
\textbf{X}_L^{l+1}, \textbf{E}^{l+1} & = \text{MPNN} ^ l(\textbf{X}^l, \textbf{E}^l, \textbf{A})\\
\textbf{X}_G ^ {l+1} &= \text{SA} ^ l(\textbf{X}^{l}) \\
\textbf{X} ^ {l+1} &= \text{FFN} ^ l(\textbf{X}_L ^ {l+1} + \textbf{X}_G ^ {l+1}) \label{eq:4}
\end{align}
where $\textbf{X}_L ^ {l+1}$ and $\textbf{X}_G ^ {l+1}$ are node local and global representations; they are unified into $\textbf{X} ^ {l+1}$ via Eq. \ref{eq:4}. Popular techniques such as Dropout \cite{dropout} and normalization \cite{batchnorm, layernorm} are omitted for the sake of clarity. By feeding the molecular graph through $L$ layers, we attain two tensors $\textbf{X}_a:=\textbf{X}^L$ and $\textbf{E}_a:=\textbf{E}^L$ indicating the node and edge embeddings, respectively.
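A condensed PyTorch sketch of one atom-level layer is given below. The message-passing module is an arbitrary placeholder, the attention uses a standard multi-head implementation, and normalization and dropout are omitted as in the text, so this is an illustration rather than the exact GraphGPS layer.
\begin{verbatim}
import torch
import torch.nn as nn

class AtomLevelLayer(nn.Module):
    # one layer: local MPNN + global self-attention + FFN fusion
    def __init__(self, d, mpnn, num_heads=4):
        super().__init__()
        self.mpnn = mpnn                      # placeholder message-passing module
        self.attn = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, 2 * d), nn.ReLU(),
                                 nn.Linear(2 * d, d))

    def forward(self, X, E, A):
        X_local, E_new = self.mpnn(X, E, A)                  # local messages
        X_global, _ = self.attn(X[None], X[None], X[None])   # global attention
        X_new = self.ffn(X_local + X_global.squeeze(0))      # fuse local and global
        return X_new, E_new
\end{verbatim}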
\subsection{Learning to Cluster}
In this work, we use a message-passing neural network augmented with differentiable pooling layers \cite{diffpool} to cluster the atoms into substructures automatically:
\begin{align}
\textbf{Z} &= \text{MPNN}_\text{e}(\textbf{X}_a, \textbf{E}_a, \textbf{A}) \label{eq:5} \\
\textbf{S} &= \text{Softmax}(\text{MPNN}_\text{c}(\textbf{X}_a, \textbf{E}_a, \textbf{A}) )
\label{eq:6}
\end{align}
where $\text{MPNN}_\text{e}$ and $\text{MPNN}_\text{c}$ are two-layer message-passing networks that learn to generate node embeddings ($\textbf{Z} \in \mathbb{R} ^ {n \times d}$) and a clustering matrix ($\textbf{S} \in \mathbb{R} ^ {n \times C})$, respectively; $C$ denotes the number of substructures in molecules. A tensor of features $\textbf{X}_s \in \mathbb{R} ^ {C \times d}$ for the substructures is computed:
\begin{equation}
\textbf{X}_s = \textbf{S} ^ T \textbf{Z}
\end{equation}
This learning-to-cluster module is placed after the atom-level encoder. Intuitively, atom nodes updated with both local and global information should be classified into accurate substructures.
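The coarsening step can be sketched as follows; the two message-passing networks are placeholders, and only the computations written above are shown.
\begin{verbatim}
import torch

def cluster_pool(X_a, E_a, A, mpnn_e, mpnn_c):
    # X_a: (n, d) atom embeddings from the atom-level encoder
    Z = mpnn_e(X_a, E_a, A)                          # (n, d) node embeddings
    S = torch.softmax(mpnn_c(X_a, E_a, A), dim=-1)   # (n, C) soft cluster assignment
    X_s = S.T @ Z                                    # (C, d) substructure features
    return X_s, S
\end{verbatim}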
\subsection{Substructure-level Encoder}
Given a set of substructures $\mathcal{V}_s$ with a tensor of features $\mathbf{X}_s \in \mathbb{R} ^ {C \times d}$, we forward $\mathbf{X}_s$ to $L$ conventional Transformer encoder layers \cite{vaswari} to capture their pairwise semantics:
\begin{align}
\textbf{H}_\text{1}^{l+1} &= \text{Norm}(\text{SA}^l(\textbf{H} ^ l) + \textbf{H} ^ l)\\
\textbf{H} ^ {l+1} &= \text{Norm}(\text{FFN}(\textbf{H}_\text{1}^{l+1}) + \textbf{H}_\text{1}^{l+1})
\end{align}
where $\text{SA}$ refers to (multi-head) self-attention described in Eq. (\ref{eq:1}), and $\textbf{H}^0$ is equal to $\textbf{X}_s$. Additionally, we add a long-range skip connection to alleviate gradient vanishing as:
\begin{equation}
\textbf{H}_s = \text{FFN}(\text{concat}(\textbf{H}^0, \textbf{H}^{L}))
\end{equation}
$\textbf{H}_s \in \mathbb{R} ^ {C \times d}$ is the output indicating the representations for the substructures. Finally, we aggregate all $C$ vectors $h_s \in \textbf{H}_s$ to result in a unique representation $z \in \mathbb{R} ^ d$ for the molecules (refer to Section \ref{heirachical_coarsen}), before feeding it to a feed-forward network to compute the final output $y \in \mathbb{R} ^ c$ for property prediction:
\begin{align}
z &= \zeta (\{h_s\}_{s = 1} ^ C) \\
\hat{y} &= \text{FFN}(z)
\end{align}
\paragraph{Training Objective} We train MGT by minimizing $\mathcal{L}$:
\begin{equation}
\mathcal{L} = \mathcal{L}_1 + \lambda_1 \mathcal{L}_{LP} + \lambda_2 \mathcal{L}_{E}
\end{equation}
where $\mathcal{L}_1 = l(\hat{y}, y)$ denotes the loss function between predicted values and ground truths (e.g., cross-entropy or mean-squared error), $\mathcal{L}_{LP} =||\textbf{A} - \textbf{S} \textbf{S}^T||_F $ is an auxiliary link prediction loss ($||\cdot||_F$ denotes the Frobenius norm), and $\mathcal{L}_{E} = \frac{1}{n} \sum_{i=1}^n H(\textbf{S}_i)$ is the entropy regularization of the cluster assignment, i.e. each atom should be assigned to a unique cluster. Additionally, $\lambda_1$ and $\lambda_2$ are hyperparameters.
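A direct sketch of the composite objective is shown below; the task loss here is the mean absolute error used for the regression tasks (substitute cross-entropy for classification), and the variable names are our own.
\begin{verbatim}
import torch
import torch.nn.functional as F

def mgt_loss(y_hat, y, A, S, lambda1=1e-3, lambda2=1e-3):
    # y_hat, y: predictions and targets; A: (n, n) adjacency; S: (n, C) assignment
    task_loss = F.l1_loss(y_hat, y)                       # L_1 (MAE for regression)
    link_loss = torch.norm(A - S @ S.T, p='fro')          # auxiliary link prediction
    entropy = -(S * torch.log(S + 1e-12)).sum(-1).mean()  # cluster entropy term
    return task_loss + lambda1 * link_loss + lambda2 * entropy
\end{verbatim}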
\section{Experiments} \label{sec:experiments}
\begin{figure}%
\centering
\captionsetup[subfloat]{captionskip=1mm}
\subfloat[]{%
\label{fig:peptides}%
\includegraphics[scale = 0.55]{Figure/peptides_crop.pdf}}%
\qquad
\subfloat[]{%
\label{fig:polymer}%
\includegraphics[scale = 0.4]{Figure/polymer_crop.pdf}}%
\caption{Examples of two macromolecules. a) An example of a peptide that consists of many functional groups. b) An example of a polymer that consists of many repeating units}
\label{fig:macromolecules}
\end{figure}
We empirically validate our proposed approach in two types of macromolecules including peptides and polymers. Figure \ref{fig:macromolecules} illustrates two examples of macromolecules in the datasets. Our PyTorch implementation is publicly available at \url{https://github.com/HySonLab/Multires-Graph-Transformer}.
\subsection{Peptides Property Prediction}
Peptides are small chains of amino acids found in nature that serve a variety of critical biological roles \cite{peptides}; however, they are far shorter than proteins. Because each amino acid is made up of several heavy atoms, a peptide's molecular graph is significantly larger than that of a small drug-like molecule. Since peptides are formed by sequences of amino acids, they naturally exhibit hierarchical structure and long-range dependencies \cite{long_range}, i.e. a peptide should ideally be segmented into an exact set of amino acids. Therefore, we evaluate our method on peptide structures to demonstrate its superiority.
\paragraph{Experimental Setup}
We run experiments on two real-world datasets: (1) Peptides-struct and (2) Peptides-func \cite{long_range}. The two datasets share the same peptide molecular graphs but define different tasks. Peptides-func is a multi-label classification problem with 10 classes based on peptide function, whereas Peptides-struct requires predicting 11 aggregated 3D properties of the peptides at the graph level.
For a fair comparison, we follow the experimental and evaluation setting of \cite{long_range} with the same train/test split ratio. We use mean absolute error (MAE) and average precision (AP) to evaluate the method's performance for Peptides-struct and Peptides-func, respectively.
\paragraph{Baselines and Implementation Details}
We compare our proposed approach with the baselines taken from \cite{long_range}. The local message-passing network class involves GCN \cite{gcn}, GCNII \cite{gated_gcn}, GINE \cite{gin, strategies_pretrain}, and GatedGCN \cite{gated_gcn}. For Transformer-based architectures, we compare our method with the vanilla Transformer \cite{vaswari} with Laplacian PE \cite{benchmark_graph, graph_generalize} and SAN \cite{san}. Since all baselines are limited to approximately 500k learnable parameters, we also restrict MGT to roughly the same number of parameters. Additionally, we use GatedGCN \cite{gated_gcn} for local message passing, customized with the PEG technique to stabilize the positional features \cite{wang2022equivariant} \footnote{The implementation of GPS is adapted from https://github.com/vijaydwivedi75/lrgb.git.}. We experiment with each task using four different random seeds. We provide further implementation details of MGT for this task in Appendix \ref{ref:appendix_peptides}.
\paragraph{Results}
Table \ref{tab:peptides_results} shows that our proposed MGT + WavePE achieves the best performance on the two peptide prediction tasks. In addition to WavePE, MGT + RWPE also attains the second-best performance. The superiority of WavePE over RWPE can be explained, as mentioned in Section \ref{wavelet}, by the fact that WavePE is a generalized version of RWPE. In particular, our proposed MGT outperforms all the baselines on the Peptides-func task by a large margin and decreases the MAE score to below 0.25 on the Peptides-struct task.
\begin{table*}[t]
\centering
\caption{Experimental results on the polymer property prediction task. All methods are trained with four different random seeds and evaluated by MAE $\downarrow$. Our methods attain better performance across the three DFT properties of polymers while having fewer parameters. All properties are measured in eV.}
\vskip 0.1in
\begin{tabular}{ l c c c c }
\toprule
\multirow{2}{*}{Model} & \multirow{2}{*}{No. Params}&\multicolumn{3}{c}{Property} \\
\cmidrule(lr){3-5}
& & GAP & HOMO & LUMO \\
\midrule
GCN + Virtual Node & 557k & 0.1130 $\pm$ 0.0009 & 0.0100 $\pm$ 0.0016 & 0.0871 $\pm$ 0.0019 \\
GINE + Virtual Node & 557k & 0.1669 $\pm$ 0.0078 & 0.1233 $\pm$ 0.0130 & 0.0945 $\pm$ 0.0028 \\
GPS & 600k & 0.0895 $\pm$ 0.0032 & 0.0703 $\pm$ 0.0045 & 0.0695 $\pm$ 0.0011 \\
Transformer + LapPE & 700k & 0.5653 $\pm$ 0.0923 & 0.2621 $\pm$ 0.0450 & 0.2791 $\pm$ 0.0229\\
\midrule
MGT + LapPE (ours) & 499k & \textbf{0.0724 $\pm$ 0.0008} & \textbf{0.0590 $\pm$ 0.0025} & 0.0533 $\pm$ 0.0006 \\
MGT + RWPE (ours) & 499k & 0.0736 $\pm$ 0.0031 & 0.0600 $\pm$ 0.0011 & \textbf{0.0523 $\pm$ 0.0015} \\
MGT + WavePE (ours) & 499k & 0.0741 $\pm$ 0.0023 & 0.0619 $\pm$ 0.0009 & 0.0539 $\pm$ 0.0020 \\
\bottomrule
\end{tabular}
\label{tab:polymer_result}
\end{table*}
\subsection{Polymer Property Prediction}
Polymers are long chains of repetitive substructures known as repeating units. They, as a result, are also hierarchically structured and contain various types of long-range dependencies among the atoms.
Since polymers have a wide range of applications in daily life, it is essential to understand their molecular properties. In this work, we experiment on a dataset of polymers to verify the superior performance of MGT in predicting molecular properties, as well as detecting repeating units existing in polymer structures.
\paragraph{Experimental Setup} We use a polymer dataset proposed in \cite{st2019message}. Each polymer is associated with three types of Density Functional Theory (DFT) \cite{PhysRev.136.B864} properties including the first excitation energy of the monomer calculated with time-dependent DFT (GAP), the energy of the highest occupied molecular orbital for the monomer (HOMO), and the lowest unoccupied molecular orbital of the monomer (LUMO).
The dataset is split into train/validation/test subsets with a ratio of 8:1:1, respectively. For training, we normalize all learning targets to have a mean of 0 and a standard deviation of 1.
\paragraph{Baselines and Implementation Details}
As there are no existing baselines on this dataset, we perform experiments with four different models for comparisons. For local GNNs, we use GCN \cite{gcn} and GINE \cite{gin, strategies_pretrain} augmented with virtual nodes as the baselines \footnote{The implementation of local GNN models is taken from https://github.com/snap-stanford/ogb/tree/master/ogb}. Moreover, we use standard Transformer \cite{vaswari} with Laplacian positional encoding \cite{graph_generalize} and GPS \cite{gps} as the baselines for Transformer-based architectures. The implementation of MGT is similar to the Peptide tasks.
Please refer to Appendix \ref{ref:appendidx_polymer} for the baseline details.
\paragraph{Results}
As shown in Table \ref{tab:polymer_result}, our MGT models achieve the lowest MAE scores across three properties. In addition, WavePE can attain comparable results with LapPE and RWPE for this task. We observe that the vanilla Transformer has the poorest performance. This demonstrates that computing global information without the awareness of locality is not sufficient for macromolecular modeling. As described in Section \ref{sec:method_mgt}, MGT is an extended version of GPS. In particular, a learning-to-cluster module and a substructure-level Transformer encoder are extensions to GPS. The better performance of MGT, as a result, indicates that our methodology in modeling hierarchical structures is appropriate and reasonable.
\subsection{Visualization}
In addition to numerical results, we provide some visualizations to demonstrate the comprehensiveness of MGT in learning to represent large molecules. For visual quality purposes, we place the figures in the Appendix. Figs. \ref{fig:peptides_rep} and \ref{fig:polymer_rep} show clear and smooth clustering patterns in low-dimensional spaces that indicate our proposed approaches are able to learn meaningful molecular representations for hierarchical structures such as peptides and polymers. Furthermore, according to Figs. \ref{fig:peptides_clustering} and \ref{fig:polymer_clustering}, our learning-to-cluster algorithm and multiresolution analysis can pick up functional groups (for proteins/peptides) and repeating units (for polymers) via back-propagation.
\section{Conclusion} \label{sec:conclusion}
In this work, we introduce the MGT model and WavePE method, which perform multiresolution analysis in both spatial and spectral domains. Our proposed methods can learn to represent hierarchical and long-range structures. In addition to numerical performance, MGT can provide interpretable clustering results and produce discernible spaces of the representations of large molecules.
\section{Appendix} \label{sec:appendix}
\onecolumn
\section{Implementation Details}
\subsection{Multiresolution Graph Transformer}
\begin{table*}[h]
\centering
\caption{The hyperparameters for MGT}
\vskip 0.15in
\begin{tabular}{ l c }
\toprule
Hyperparameters & Values \\
\midrule
No. Epoch & 200 \\
Embedding Dimension & 84 \\
Batch size & 128 \\
Learning rate & 0.001 \\
Dropout & 0.25 \\
Attention Dropout & 0.5 \\
Diffusion Step (K) & [1, 2, 3, 4, 5] \\
No. Head & 4 \\
Activation & ReLU \\
Normalization & Batchnorm \\
No. Cluster & 10 \\
$\lambda_1$ & 0.001 \\
$\lambda_2$ & 0.001 \\
\bottomrule
\end{tabular}
\label{tab:hyperparameters}
\end{table*}
In this section, we elaborate on the architecture and hyperparameters used to train and evaluate MGT to achieve the above numerical results. Table \ref{tab:hyperparameters} shows the hyperparameters used for MGT in all the experiments. In particular, we use the atom and bond encoder modules provided by OGB \cite{hu2021ogblsc} to attribute the molecular graph. We use two GPS layers to compute the atom-level representations and two Transformer layers to calculate the substructure-level representations. For learning to cluster, we use a 2-layer message-passing network to compute $\textbf{Z}$ and $\textbf{S}$ in Eqs. (\ref{eq:5}) and (\ref{eq:6}) as follows:
\begin{align}
\textbf{Z}_a ^ 1, \textbf{E}_a ^1 &= \text{GatedGCN}^1(\textbf{X}_a, \textbf{E}_a, \textbf{A}) \\
\textbf{Z}_a ^1 &= \text{Batchnorm}(\text{ReLU}(\textbf{Z}_a ^ 1)) \\
\textbf{Z}_a ^ 2, \textbf{E}_a ^2 &= \text{GatedGCN}^2(\textbf{Z}_a ^ 1, \textbf{E}_a ^ 1, \textbf{A}) \\
\textbf{Z}_a ^2 &= \text{Batchnorm}(\text{ReLU}(\textbf{Z}_a ^ 2)) \\
\textbf{Z} &= \text{concat}(\textbf{Z}_a ^ 1, \textbf{Z}_a ^ 2) \\
\textbf{Z} &= \text{FFN}(\textbf{Z})
\end{align}
$\textbf{S}$ is computed similarly with an auxiliary Softmax operation on the output to produce a probabilistic clustering matrix.
\label{ref:appendix_peptides}
\subsection{Baselines used in Polymer Property Prediction}
Table \ref{tab:baselines_settings} shows the configurations of the baselines used in the polymer experiments. All the models are designed to have approximately 500k to 700k learnable parameters. For fair comparisons, all the models are trained for 50 epochs with a learning rate of 0.001 and a batch size of 128.
\begin{table*}[h]
\centering
\caption{The detailed settings of baselines for polymer property prediction}
\vskip 0.15in
\begin{tabular}{ l c c c }
\toprule
Model & No. Layer & Embed Dim & No. Params\\
\midrule
GCN + Virtual Node & 5 & 156 & 557k \\
GINE + Virtual Node & 5 & 120 & 557k \\
GPS & 3 & 120 & 600k \\
Transformer + LapPE & 6 & 120 & 700k \\
\bottomrule
\end{tabular}
\label{tab:baselines_settings}
\end{table*}
\label{ref:appendidx_polymer}
\newpage
\section{Visualization}
We use the t-SNE algorithm \cite{tsne} to project the representations produced by MGT (with WavePE) of peptides and polymers of the test datasets into two-dimensional spaces for visualization. Also, we take the probabilistic clustering matrix $\mathbf{S}$ in Eq. (\ref{eq:6}) to visualize the clustering results on the molecules. Specifically, we use the RDKit package \footnote{RDKit: Open-source cheminformatics. https://www.rdkit.org} to draw the molecules.
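A minimal sketch of the projection step is given below (scikit-learn and matplotlib, with assumed array names; the RDKit drawings are not reproduced here).
\begin{verbatim}
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_embeddings(Z, y, path="tsne.pdf"):
    # Z: (N, d) graph-level representations; y: (N,) labels or target values
    xy = TSNE(n_components=2).fit_transform(Z)
    plt.scatter(xy[:, 0], xy[:, 1], c=y, cmap="coolwarm", s=8)
    plt.colorbar()
    plt.savefig(path, bbox_inches="tight")
\end{verbatim}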
\subsection{Representations of Peptides}
\begin{figure}[h]%
\centering
\subfloat[]{%
\includegraphics[scale = 0.4]{Figure/Repr/peptides_func.pdf}}%
\qquad
\subfloat[]{%
\includegraphics[scale = 0.38]{Figure/Repr/peptides_struct.pdf}}%
\caption{Visualization of low-dimensional spaces of peptides on two property prediction tasks: Peptides-func and Peptides-struct. All the vectors are normalized to the range $[0,1]$. a) t-SNE projection of peptides taken from the Peptides-func testing dataset. We take four random peptide functions, and each figure corresponds to one of the properties with positive (1) and negative (0) ground truths. b) Similarly, we plot the figures of four random peptide properties taken from the Peptides-struct testing dataset. The color spectrum represents continuous ground truths, where lower values correspond to cooler colors.}
\label{fig:peptides_rep}
\end{figure}
\newpage
\subsection{Representations of Polymers}
\begin{figure}[h]
\centering
\includegraphics[scale=0.37]{Figure/Repr/polymer_rep.pdf}
\caption{
t-SNE projection of representations of the test polymers in the dataset. We plot the figures of three properties, including GAP, HOMO, and LUMO. As the labels are continuous values, points are color-coded by a spectrum wherein higher values correspond to warmer palettes.
}
\label{fig:polymer_rep}
\end{figure}
\subsection{Clustering on Peptides}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{Figure/Clustering/peptides_cluster.pdf}
\caption{The clustering result on a peptide. MGT can group the atoms of a long peptide into different substructure types. Specifically, the groups $\text{NH}_3$ and OH are recognized even though the atoms are located distantly. Local rings and segments are also detected.}
\label{fig:peptides_clustering}
\end{figure}
\newpage
\subsection{Clustering on Polymers}
\begin{figure}[h]
\centering
\includegraphics[scale = 0.35]{Figure/Clustering/polymer_cluster.pdf}
\vspace{-20mm}
\caption{The clustering result on a polymer. By learning to cluster and using a
substructure-level Transformer encoder, MGT can model repetitive patterns existing in polymers. In this example, the model can recognize repeating units in a long-chain polymer or even symmetries.}
\label{fig:polymer_clustering}
\end{figure}
\newpage
\section{Experiments on Small Molecules}
Although MGT is intentionally designed for learning to represent hierarchical structures, we report its experimental results on the ZINC-12K dataset \cite{Sterling2015}, which consists of small drug-like molecules, in this section. We train MGT to predict the solubility (LogP) of the molecules with up to 50 heavy atoms on a subset of the ZINC dataset. We follow the split of 10K/1K/1K for training/validation/testing proposed in \cite{benchmark_graph}. Baseline results include GCN \cite{gcn}, GINE \cite{gin, benchmark_graph}, GAT \cite{gat}, Spec-GN \cite{pmlr-v162-yang22n}, PNA \cite{pna}, GatedGCN \cite{gated_gcn}, GPS \cite{gps}, MPNN \cite{10.5555/3305381.3305512}, SAN \cite{san}, DGN \cite{deeper_gcn}, and Graphormer \cite{graphormer}. Our MGT + WavePE outperforms 7 out of 10 other baselines.
\begin{table*}[h]
\centering
\caption{Experimental results on the ZINC-12K dataset}
\vskip 0.15in
\begin{tabular}{ l c c }
\toprule
Method & No. Params & MAE $\downarrow$ \\
\midrule
GCN & 505k & 0.367 $\pm$ 0.011 \\
GINE & 510k & 0.526 $\pm$ 0.051 \\
GAT & 531k & 0.384 $\pm$ 0.007 \\
PNA & 387k & 0.142 $\pm$ 0.010 \\
MPNN & 418k & 0.145 $\pm$ 0.007 \\
GatedGCN & 505k & 0.214 $\pm$ 0.006 \\
SAN & 509k & 0.139 $\pm$ 0.006 \\
Graphormer & 489k & 0.122 $\pm$ 0.006 \\
GPS & - & 0.070 $\pm$ 0.004 \\
Spec-GN & 503k & 0.0698 $\pm$ 0.002 \\
\midrule
MGT + WavePE (ours) & 499k & 0.131 $\pm$ 0.003 \\%0.152 $\pm$ 0.007 \\
\bottomrule
\end{tabular}
\label{tab:zinc_experimental_results}
\end{table*}
\section*{Acknowledgements}
We would like to thank Dao Quang Huy at FPT AI, Andrew Hands at University of Chicago, and Le Duy Dung (Andrew) at VinUniversity for their valuable suggestions.
\section*{Acknowledgements}
We would like to thank Dao Quang Huy at FPT AI, Andrew Hands at University of Chicago, and Le Duy Dung (Andrew) at VinUniversity for their valuable suggestions.
\section{Appendix} \label{sec:appendix}
\onecolumn
\section{Implementation Details}
\subsection{Multiresolution Graph Transformer}
\begin{table*}[h]
\centering
\caption{The hyperparameters for MGT}
\vskip 0.15in
\begin{tabular}{ l c }
\toprule
Hyperparameters & Values \\
\midrule
No. Epoch & 200 \\
Embedding Dimension & 84 \\
Batch size & 128 \\
Learning rate & 0.001 \\
Dropout & 0.25 \\
Attention Dropout & 0.5 \\
Diffusion Step (K) & [1, 2, 3, 4, 5] \\
No. Head & 4 \\
Activation & ReLU \\
Normalization & Batchnorm \\
No. Cluster & 10 \\
$\lambda_1$ & 0.001 \\
$\lambda_2$ & 0.001 \\
\bottomrule
\end{tabular}
\label{tab:hyperparameters}
\end{table*}
In this section, we elaborate on the architecture and hyperparameters used to train and evaluate our MGT to achieve the above numerical results. Table \ref{tab:hyperparameters} show details of the hyperparameters used for MGT in all the experiments. In particular, we use the atom and bond encoder modules provided by OGB \cite{hu2021ogblsc} to attribute the molecular graph. We use two GPS layers to compute the atom-level representations and two Transformer layers for calculating the substructure-level representations. For learning to cluster, we use a 2-layer message-passing network to compute $\textbf{Z}$ and $\textbf{S}$ mentioned in Eq. (\ref{eq:5}) (\ref{eq:6}) as follows:
\begin{align}
\textbf{Z}_a ^ 1, \textbf{E}_a ^1 &= \text{GatedGCN}^1(\textbf{X}_a, \textbf{E}_a, \textbf{A}) \\
\textbf{Z}_a ^1 &= \text{Batchnorm}(\text{ReLU}(\textbf{Z}_a ^ 1)) \\
\textbf{Z}_a ^ 2, \textbf{E}_a ^2 &= \text{GatedGCN}^2(\textbf{Z}_a ^ 1, \textbf{E}_a ^ 1, \textbf{A}) \\
\textbf{Z}_a ^2 &= \text{Batchnorm}(\text{ReLU}(\textbf{Z}_a ^ 2)) \\
\textbf{Z} &= \text{concat}(\textbf{Z}_a ^ 1, \textbf{Z}_a ^ 2) \\
\textbf{Z} &= \text{FFN}(\textbf{Z})
\end{align}
$\textbf{S}$ is computed similarly with an auxiliary Softmax operation on the output to produce a probabilistic clustering matrix.
\label{ref:appendix_peptides}
\subsection{Baselines used in Polymer Property Prediction}
Table \ref{tab:baselines_settings} shows the implementation of the baselines we used in the polymer experiments. All the models are designed to have approximately 500 to 700k learnable parameters. For fair comparisons, all the models are trained in 50 epochs with a learning rate of 0.001 and batch size of 128.
\begin{table*}[h]
\centering
\caption{The detailed settings of baselines for polymer property prediction}
\vskip 0.15in
\begin{tabular}{ l c c c }
\toprule
Model & No. Layer & Embed Dim & No. Params\\
\midrule
GCN + Virtual Node & 5 & 156 & 557k \\
GINE + Virtual Node & 5 & 120 & 557k \\
GPS & 3 & 120 & 600k \\
Transformer + LapPE & 6 & 120 & 700k \\
\bottomrule
\end{tabular}
\label{tab:baselines_settings}
\end{table*}
\label{ref:appendidx_polymer}
\newpage
\section{Visualization}
We use the t-SNE algorithm \cite{tsne} to project the representations produced by MGT (with WavePE) of peptides and polymers of the test datasets into two-dimensional spaces for visualization. Also, we take the probabilistic clustering matrix $\mathbf{S}$ in Eq. (\ref{eq:6}) to visualize the clustering results on the molecules. Specifically, we use the RDKit package \footnote{RDKit: Open-source cheminformatics. https://www.rdkit.org} to draw the molecules.
\subsection{Representations of Peptides}
\begin{figure}[h]%
\centering
\subfloat[]{%
\includegraphics[scale = 0.4]{Figure/Repr/peptides_func.pdf}}%
\qquad
\subfloat[]{%
\includegraphics[scale = 0.38]{Figure/Repr/peptides_struct.pdf}}%
\caption{Visualization of low-dimensional spaces of peptides on two property prediction tasks: Peptides-func and Peptides-struct. All the vectors are normalized to range $[0,1]$. a) t-SNE projection of peptides taken from the Peptides-func testing dataset. We take four random peptide functions, and each figure corresponds to one of the properties with positive (1) and negative (0) ground truths. b) Similarly, we plot the figures of four random peptide properties taken from the Peptides-struct testing dataset. The spectrums represent continuous ground truths, where lower values correspond to cooler colors.}
\label{fig:peptides_rep}
\end{figure}
\newpage
\subsection{Representations of Polymers}
\begin{figure}[h]
\centering
\includegraphics[scale=0.37]{Figure/Repr/polymer_rep.pdf}
\caption{
t-SNE projection of representations of the test polymers in the dataset. We plot the figures of three properties, including GAP, HUMO, and LUMO. As the labels are continuous values, points are color-coded by spectrums wherein higher values correspond to warmer palettes.
}
\label{fig:polymer_rep}
\end{figure}
\subsection{Clustering on Peptides}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{Figure/Clustering/peptides_cluster.pdf}
\caption{The clustering result on a peptide. MGT can group the atoms of a long peptide into different substructure types. Specifically, the groups NH3 and OH are recognized even though the atoms are located distantly. Also, local rings or segments are also detected.}
\label{fig:peptides_clustering}
\end{figure}
\newpage
\subsection{Clustering on Polymers}
\begin{figure}[h]
\centering
\includegraphics[scale = 0.35]{Figure/Clustering/polymer_cluster.pdf}
\vspace{-20mm}
\caption{The clustering result on a polymer. By learning to cluster and using a
substructure-level Transformer encoder, MGT can model repetitive patterns existing in polymers. In this example, the model can recognize repeating units in a long-chain polymer or even symmetries.}
\label{fig:polymer_clustering}
\end{figure}
\newpage
\section{Experiments on Small Molecules}
Although MGT is intentionally designed for learning to represent hierarchical structures, we report its experimental results on the ZINC-12K dataset \cite{Sterling2015}, which consists of small drug-like molecules, in this section. We train MGT to predict the solubility (LogP) of the molecules with up to 50 heavy atoms on a subset of the ZINC dataset. We follow the split of 10K/1K/1K for training/validation/testing proposed in \cite{benchmark_graph}. Baseline results include GCN \cite{gcn}, GINE \cite{gin, benchmark_graph}, , GAT \cite{gat}, Spec-GN \cite{pmlr-v162-yang22n}, PNA \cite{pna}, GatedGCN \cite{gated_gcn}, GPS \cite{gps}, MPNN \cite{10.5555/3305381.3305512}, SAN \cite{san}, DGN \cite{deeper_gcn}, and Graphormer \cite{graphormer}. Our MGT + WavePE outperforms 7 out of 10 other baselines.
\begin{table*}[h]
\centering
\caption{Experimental results on the ZINC-12K dataset}
\vskip 0.15in
\begin{tabular}{ l c c }
\toprule
Method & No. Params & MAE $\downarrow$ \\
\midrule
GCN & 505k & 0.367 $\pm$ 0.011 \\
GINE & 510k & 0.526 $\pm$ 0.051 \\
GAT & 531k & 0.384 $\pm$ 0.007 \\
PNA & 387k & 0.142 $\pm$ 0.010 \\
MPNN & 418k & 0.145 $\pm$ 0.007 \\
GatedGCN & 505k & 0.214 $\pm$ 0.006 \\
SAN & 509k & 0.139 $\pm$ 0.006 \\
Graphormer & 489k & 0.122 $\pm$ 0.006 \\
GPS & - & 0.070 $\pm$ 0.004 \\
Spec-GN & 503k & 0.0698 $\pm$ 0.002 \\
\midrule
MGT + WavePE (ours) & 499k & 0.131 $\pm$ 0.003 \\%0.152 $\pm$ 0.007 \\
\bottomrule
\end{tabular}
\label{tab:zinc_experimental_results}
\end{table*}
\section{Preliminaries} \label{sec:background}
\subsection{Notation}
A molecule can be represented as an undirected graph in which nodes are the atoms and edges are the bonds between them. In paticular, we refer to a molecular graph as $G = (\mathcal{V}, \mathcal{E}, \mathbf{A}, \mathbf{X}, \mathbf{P}, \mathcal{V}_s)$, where $G$ is an undirected graph having $\mathcal{V}$ ($|\mathcal{V}| = n$) and $\mathcal{E}$ as sets of nodes and edges respectively; also, $\mathbf{A} \in \mathbb{R} ^ {n \times n}$ is the graph's adjacency matrix. When a graph is attributed, we augment $G$ with a set of node feature vectors $\mathcal{X} = \{x_1,..., x_n\}, x_i \in \mathbb{R}^d$ and a set of node positional vectors $\mathcal{P} = \{p_1, ..., p_n\}, p_i \in \mathbb{R}^p$. These two types of attributes are stored in $\mathbf{X} \in \mathbb{R} ^ {n \times d}$ and $\mathbf{P} \in \mathbb{R} ^ {n \times p}$ correspondingly.
In addition to the atom-level representation of $G$, $\mathcal{V}_s = \{v_{s_1}, ... , v_{s_k}\}$ denotes the substructure set in which $v_{s_i} \subset \mathcal{V}$, i.e. $v_{s_i}$ is a subset of atoms of the molecule.
\subsection{Hierachical Learning on Molecules}
\label{heirachical_coarsen}
Molecular property prediction is regarded as a graph-level learning task. We need to aggregate node embeddings into graph-level vectors which are then fed to a classifier to make predictions on graphs. Specifically, a function $f: \mathcal{V} \xrightarrow{} \mathcal{Z}$ that maps the atom $u \in \mathcal{V}$ to a $d_o$-dimensional vector $z_u \in \mathcal{Z} \subset \mathbb{R} ^ {d_o}$ should learn to produce atom-level representations. Most existing graph neural networks compute the vector $z =\zeta (\{f(u) | u \in \mathcal{V}\})$ that indicates a representation for the entire molecular graph, where $\zeta$ can be sum, mean, max, or more sophisticated operators. For hierarchical learning, substructure-level representations can be derived in addition to atom-level representations by aggregating node representations in the same substructures as $z_s =\zeta (\{f(u) | u \in v_s \land v_s \in \mathcal{V}_s\})$. Instead of atom vectors, we aggregate the substructure vectors to represent the entire graph, i.e. $z = \zeta (\{z_s | z_s \in \mathcal{V}_s\})$. Finally, a classifier $g$ given $z$ as inputs is trained to predict the molecular properties.
\subsection{Transformers on Graphs}
While GNNs learn node embeddings by leveraging the graph structure via local message-passing mechanisms, Transformers ignore localities and directly infer the relations between pairs of nodes using only node attributes. In other words, the node connectivity is not utilized in pure transformer-like architectures \cite{vaswari}, reducing the graph conditions to a set learning problem. Given a tensor of node features $\mathbf{X} \in \mathbb{R} ^ {n \times d}$, Transformers compute three matrices including query ($\mathbf{Q}$), key ($\mathbf{K}$), and value ($\mathbf{V}$) via three linear transformations $\mathbf{Q} = \mathbf{X} \mathbf{W}_q^T$, $\mathbf{K} = \mathbf{X} \mathbf{W}_k ^ T$, and $\mathbf{V} = \mathbf{X} \mathbf{W}_v ^ T$. A self-attention tensor ($\mathbf{H}$) can be computed as follows:
\begin{equation}
\mathbf{H} = \text{softmax}(\frac{\mathbf{Q}\mathbf{K}^T}{\sqrt{d_o}}) \mathbf{V}
\label{eq:1}
\end{equation}
where $\mathbf{W}_q$, $\mathbf{W}_k$, and $\mathbf{W}_v$ are learnable parameters in $\mathbb{R} ^ {d_o \times d}$, resulting in $\mathbf{H} \in \mathbb{R} ^{n \times d_o}$. Furthermore, each $\mathbf{H}$ in Eq. \ref{eq:1} denotes an attention head. To improve effectiveness, multiple $\{\mathbf{H}\}_{i = 1}^ h$ are computed, which is known as multi-head attention. All of the attention heads are concatenated to form a final tensor: $\mathbf{H}_o = \text{concat}(\mathbf{H}_1,..., \mathbf{H}_h) $, where $h$ is the number of attention heads. Finally, the output $\mathbf{X}^\prime$, i.e. new node representations, can be computed by feeding $\mathbf{H}_o$ into a feed-forward neural network (FFN), i.e. $\mathbf{X} ^\prime = \text{FFN}(\mathbf{H}_o)$. It is easy to see that Transformers operating on inputs without positional encoding are permutation invariant.
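The following PyTorch sketch implements the single-head self-attention of Eq.~(\ref{eq:1}); module and variable names are ours, and multi-head attention and the FFN are omitted for brevity.
\begin{verbatim}
import torch
import torch.nn as nn


class GraphSelfAttention(nn.Module):
    """Single-head self-attention over node features; a sketch of Eq. (1),
    not the exact module used in our implementation."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.W_q = nn.Linear(d_in, d_out, bias=False)
        self.W_k = nn.Linear(d_in, d_out, bias=False)
        self.W_v = nn.Linear(d_in, d_out, bias=False)
        self.d_out = d_out

    def forward(self, X):                         # X: (n, d_in)
        Q, K, V = self.W_q(X), self.W_k(X), self.W_v(X)
        scores = Q @ K.transpose(-2, -1) / self.d_out ** 0.5
        return torch.softmax(scores, dim=-1) @ V  # H: (n, d_out)
\end{verbatim}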
\paragraph{Positional Encoding} As pure Transformer encoders only model sets of nodes without being cognizant of the graph structures, positional or structural information between nodes and their neighborhoods should be incorporated into node features. In particular, node positional representations can be added or concatenated with node features, resulting in comprehensive inputs for Transformer-like architectures operating on graph-structured data.
\section{Conclusion} \label{sec:conclusion}
In this work, we introduce the MGT model and WavePE method, which perform multiresolution analysis in both spatial and spectral domains. Our proposed methods can learn to represent hierarchical and long-range structures. In addition to numerical performance, MGT can provide interpretable clustering results and produce discernible spaces of the representations of large molecules.
\section{Experiments} \label{sec:experiments}
\begin{figure}%
\centering
\captionsetup[subfloat]{captionskip=1mm}
\subfloat[]{%
\label{fig:peptides}%
\includegraphics[scale = 0.55]{Figure/peptides_crop.pdf}}%
\qquad
\subfloat[]{%
\label{fig:polymer}%
\includegraphics[scale = 0.4]{Figure/polymer_crop.pdf}}%
\caption{Examples of two macromolecules. a) An example of a peptide that consists of many functional groups. b) An example of a polymer that consists of many repeating units.}
\label{fig:macromolecules}
\end{figure}
We empirically validate our proposed approach in two types of macromolecules including peptides and polymers. Figure \ref{fig:macromolecules} illustrates two examples of macromolecules in the datasets. Our PyTorch implementation is publicly available at \url{https://github.com/HySonLab/Multires-Graph-Transformer}.
\subsection{Peptides Property Prediction}
Peptides are small chains of amino acids found in nature that serve a variety of critical biological roles \cite{peptides}; however, they are far shorter than proteins. Because each amino acid is made up of several heavy atoms, a peptide's molecular graph is significantly larger than that of a small drug-like molecule. Since peptides are formed by sequences of amino acids, they naturally exhibit hierarchical structure and long-range dependencies \cite{long_range}, i.e., a peptide should ideally be segmented into its exact set of amino acids. Therefore, we evaluate our method on peptide structures to demonstrate its superiority.
\paragraph{Experimental Setup}
We run experiments on two real-world datasets: (1) Peptides-struct and (2) Peptides-func \cite{long_range}. The two datasets share the same peptide molecular graphs but differ in task: Peptides-func is a multi-label graph classification problem with 10 classes based on peptide function, whereas Peptides-struct is a graph-level regression task that predicts 11 aggregated 3D properties of the peptides.
For a fair comparison, we follow the experimental and evaluation setting of \cite{long_range} with the same train/test split ratio. We use mean absolute error (MAE) and average precision (AP) to evaluate the method's performance for Peptides-struct and Peptides-func, respectively.
\paragraph{Baselines and Implementation Details}
We compare our proposed approach with the baselines taken from \cite{long_range}. The class of local message-passing networks includes GCN \cite{gcn}, GCNII \cite{gated_gcn}, GINE \cite{gin, strategies_pretrain}, and GatedGCN \cite{gated_gcn}. For Transformer-based architectures, we compare our method with the vanilla Transformer \cite{vaswari} with Laplacian PE \cite{benchmark_graph, graph_generalize} and SAN \cite{san}. Since all baselines are limited to approximately 500k learnable parameters, we also restrict MGT to roughly the same number of parameters. Additionally, we use GatedGCN \cite{gated_gcn}, customized with the PEG technique to stabilize the positional features \cite{wang2022equivariant}, for local message passing\footnote{The implementation of GPS is adapted from https://github.com/vijaydwivedi75/lrgb.git.}. We run each task with four different random seeds. We provide further implementation details of MGT for this task in Appendix \ref{ref:appendix_peptides}.
\paragraph{Results}
Table \ref{tab:peptides_results} shows that our proposed MGT + WavePE achieves the best performance on both peptide prediction tasks. In addition to WavePE, MGT + RWPE also attains the second-best performance. The superiority of WavePE over RWPE can be explained as mentioned in Section \ref{wavelet}: WavePE is a generalized version of RWPE. In particular, our proposed MGT outperforms all the baselines on the Peptides-func task by a large margin and decreases the MAE score to less than 0.25 on the Peptides-struct task.
\begin{table*}[t]
\centering
\caption{Experimental results on the polymer property prediction task. All the methods are trained with four different random seeds and evaluated by MAE $\downarrow$. Our methods attain better performance across the three DFT properties of polymers while having fewer parameters. All the properties are measured in eV.}
\vskip 0.1in
\begin{tabular}{ l c c c c }
\toprule
\multirow{2}{*}{Model} & \multirow{2}{*}{No. Params}&\multicolumn{3}{c}{Property} \\
\cmidrule(lr){3-5}
& & GAP & HOMO & LUMO \\
\midrule
GCN + Virtual Node & 557k & 0.1130 $\pm$ 0.0009 & 0.0100 $\pm$ 0.0016 & 0.0871 $\pm$ 0.0019 \\
GINE + Virtual Node & 557k & 0.1669 $\pm$ 0.0078 & 0.1233 $\pm$ 0.0130 & 0.0945 $\pm$ 0.0028 \\
GPS & 600k & 0.0895 $\pm$ 0.0032 & 0.0703 $\pm$ 0.0045 & 0.0695 $\pm$ 0.0011 \\
Transformer + LapPE & 700k & 0.5653 $\pm$ 0.0923 & 0.2621 $\pm$ 0.0450 & 0.2791 $\pm$ 0.0229\\
\midrule
MGT + LapPE (ours) & 499k & \textbf{0.0724 $\pm$ 0.0008} & \textbf{0.0590 $\pm$ 0.0025} & 0.0533 $\pm$ 0.0006 \\
MGT + RWPE (ours) & 499k & 0.0736 $\pm$ 0.0031 & 0.0600 $\pm$ 0.0011 & \textbf{0.0523 $\pm$ 0.0015} \\
MGT + WavePE (ours) & 499k & 0.0741 $\pm$ 0.0023 & 0.0619 $\pm$ 0.0009 & 0.0539 $\pm$ 0.0020 \\
\bottomrule
\end{tabular}
\label{tab:polymer_result}
\end{table*}
\subsection{Polymer Property Prediction}
Polymers are long chains of repetitive substructures known as repeating units. They, as a result, are also hierarchically structured and contain various types of long-range dependencies among the atoms.
Since polymers have a wide range of applications in daily life, it is essential to understand their molecular properties. In this work, we experiment on a dataset of polymers to verify the superior performance of MGT in predicting molecular properties, as well as detecting repeating units existing in polymer structures.
\paragraph{Experimental Setup} We use a polymer dataset proposed in \cite{st2019message}. Each polymer is associated with three types of Density Functional Theory (DFT) \cite{PhysRev.136.B864} properties including the first excitation energy of the monomer calculated with time-dependent DFT (GAP), the energy of the highest occupied molecular orbital for the monomer (HOMO), and the lowest unoccupied molecular orbital of the monomer (LUMO).
The dataset is split into train/validation/test subsets with a ratio of 8:1:1, respectively. For training, we normalize all learning targets to have a mean of 0 and a standard deviation of 1.
\paragraph{Baselines and Implementation Details}
As there are no existing baselines on this dataset, we perform experiments with four different models for comparisons. For local GNNs, we use GCN \cite{gcn} and GINE \cite{gin, strategies_pretrain} augmented with virtual nodes as the baselines \footnote{The implementation of local GNN models is taken from https://github.com/snap-stanford/ogb/tree/master/ogb}. Moreover, we use standard Transformer \cite{vaswari} with Laplacian positional encoding \cite{graph_generalize} and GPS \cite{gps} as the baselines for Transformer-based architectures. The implementation of MGT is similar to the Peptide tasks.
Please refer to Appendix \ref{ref:appendidx_polymer} for the baseline details.
\paragraph{Results}
As shown in Table \ref{tab:polymer_result}, our MGT models achieve the lowest MAE scores across three properties. In addition, WavePE can attain comparable results with LapPE and RWPE for this task. We observe that the vanilla Transformer has the poorest performance. This demonstrates that computing global information without the awareness of locality is not sufficient for macromolecular modeling. As described in Section \ref{sec:method_mgt}, MGT is an extended version of GPS. In particular, a learning-to-cluster module and a substructure-level Transformer encoder are extensions to GPS. The better performance of MGT, as a result, indicates that our methodology in modeling hierarchical structures is appropriate and reasonable.
\subsection{Visualization}
In addition to numerical results, we provide some visualizations to demonstrate the comprehensiveness of MGT in learning to represent large molecules. For visual quality purposes, we place the figures in the Appendix. Figs. \ref{fig:peptides_rep} and \ref{fig:polymer_rep} show clear and smooth clustering patterns in low-dimensional spaces that indicate our proposed approaches are able to learn meaningful molecular representations for hierarchical structures such as peptides and polymers. Furthermore, according to Figs. \ref{fig:peptides_clustering} and \ref{fig:polymer_clustering}, our learning-to-cluster algorithm and multiresolution analysis can pick up functional groups (for proteins/peptides) and repeating units (for polymers) via back-propagation.
\section{Introduction} \label{sec:intro}
Macromolecules are long-range and hierarchical structures as they consist of many substructures. While small molecules in existing datasets \cite{Ruddigkeit2012, Ramakrishnan2014, Sterling2015} comprise less than 50 atoms connected by simple rings and bonds, this number in a macromolecule can be dozens or even hundreds. Substructures such as repeating units and functional groups are intrinsic parts of macromolecules; they present unique chemical reactions regardless of other compositions in the same molecules \cite{functional}. Therefore, studying the multiresolution characteristic of large molecules is imperative to gain comprehensive knowledge about real-life materials like polymers or proteins \cite{multiscale}. In recent years, several works \cite{machine_learning_exploit, screening_macro, macro_molecule} have been proposed to apply machine learning algorithms to learn macromolecules at multiple scales. These approaches, however, rely on thorough feature selection and extraction, which are inefficient when learning from large databases of multicomponent materials \cite{machine_learning_exploit}.
Message passing is a prevailing paradigm for designing neural networks that operate on graph-structured data. Previous studies \cite{PMLR-V70-gilmer17a, gcn, gat, ppa, gin} have proposed different strategies to perform message passing on graphs and achieved remarkable results across various domains. However, message-passing-dominated graph neural networks (GNNs) have some inherent limitations, such as limited expressive power \cite{high-order,gin}, over-smoothing \cite{chen-oversmoothing, li-2018, oono2020graph}, and over-squashing \cite{alon2021on}. Over-smoothing arises in graph neural networks with a sufficiently large number of layers, where node representations are likely to converge to a constant after passing through these deep networks. Over-squashing occurs when messages are ineffectively propagated and aggregated through bottlenecks on long-range graph structures. These two shortcomings hinder GNNs from making good predictions on long-range and hierarchically structured data. Furthermore, the molecular properties of large molecules are formed not only by interactions among atoms within neighborhoods but also by interactions between distant atoms. Therefore, local information is not sufficient to model macromolecules.
Transformers are classes of deep learning models that leverage self-attention mechanisms to handle long-range dependencies in various data domains, such as natural language processing \cite{vaswari, bert} or computer vision \cite{vitrans, Swin}. In graph domains, Transformer-like architectures \cite{san, generalization_trans, gps} have proved their effectiveness in learning node representations as they can overcome the over-smoothing and over-squashing issues by directly measuring the pairwise relationships between the nodes. Contrary to GNNs, graph transformers do not use the graph structure as hard-coded information. They, instead, encode positional and structural information on graphs as soft inductive bias, making them flexible learners in graph learning problems \cite{san}. Node positional representations can be derived based on spectral \cite{graph_generalize, rw} or spatial \cite{anchor, distance} domains. Most existing spectral-based methods decompose the graph Laplacian into sets of eigenvectors and eigenvalues. However, these eigenvectors have sign ambiguity and are unstable due to eigenvalue multiplicities. On the other hand, spatial-based approaches compute the shortest distances among the nodes; however, these encoding methods do not consider the structural similarity between nodes and their neighborhoods \cite{sat}.
\paragraph{Contribution} We propose Multiresolution Graph Transformer (MGT) and
Wavelet Positional Encoding (WavePE), using multiresolution analysis on both spectral and spatial domains for learning to represent hierarchical structures. Our contributions are four-fold:
\begin{compactitem}
\item We design Multiresolution Graph Transformer (MGT), a Transformer-like architecture that can operate on macromolecules at multiple scales. Our proposed model can learn the atomic representations and group them into meaningful clusters via a data-driven algorithm. Finally, the substructures, i.e. clusters, are fed to a Transformer encoder to calculate the representations of several substructures in macromolecules.
\item We introduce Wavelet Positional Encoding (WavePE), a new positional encoding scheme for graph-structured data. Since wavelet analysis can provide localization in both spatial and spectral domains, we construct a set of wavelets to capture the structural information on graphs at different scales. Then, we apply equivariant encoding methods to project the wavelet tensors into positional representations for the atoms.
\item We show the effectiveness of our methodology by reporting its superior performance on three molecular property prediction benchmarks. These datasets contain macromolecules, i.e. peptides and polymers, that are highly hierarchical and consist of up to hundreds of atoms.
\item Our visualization demonstrates the comprehensiveness of our proposed methods in learning to represent large molecules. In general, we show the representations of molecules produced by MGT and how MGT determines and groups the atoms in long-chain molecules.
\end{compactitem}
\section{Wavelet Positional Encoding}
\label{wavelet}
\subsection{Spectral Graph Wavelets}
Let $\mathcal{A} \in \mathbb{R} ^ {n\times n}$ be the adjacency matrix of an undirected graph $G = (\mathcal{V}, \mathcal{E})$. The normalized graph Laplacian is defined as $\mathcal{L} = \mathcal{I}_n - \mathcal{D}^{-1/2} \mathcal{A} \mathcal{D} ^ {-1/2}$, where $\mathcal{I}_n$ is the identity matrix and $\mathcal{D}$ is the diagonal matrix of node degrees. $\mathcal{L}$ can be decomposed into a complete set of orthonormal eigenvectors $U = (u_1, u_2, ..., u_n)$ associated with real and non-negative eigenvalues $\{\lambda_i\}_{i=1}^n$. While the graph Fourier transform uses $U$ as a set of bases to project the graph signal from the vertex domain to the spectral domain, the graph wavelet transform constructs a set of spectral graph wavelets as bases for this projection via:
\[
\psi_s = U \Sigma_s U^T
\]
where $\Sigma_s=\text{diag}(g(s\lambda_1), g(s\lambda_2), ..., g(s\lambda_n))$ is a scaling matrix of eigenvalues, $\psi_s=(\psi_{s1}, \psi_{s2}, ..., \psi_{sn})$ and each wavelet $\psi_{si}$ indicates how a signal diffuses away from node $i$ at scale $s$; we choose $g(s\lambda) = e ^ {-s \lambda}$ as a heat kernel \cite{wavelet2018}. Since a node's neighborhoods can be adjusted by varying the scaling parameter $s$ \cite{wavelet}, using multiple sets of wavelets at different scales can provide comprehensive information on the graph's structure. It means that larger values of $s_i$ correspond to larger neighborhoods surrounding a center node. Figure \ref{fig:diffusion_wavelet} illustrates how wavelets can be used to determine neighborhoods at different scales on a molecular graph. In this work, we leverage this property of graph wavelets to generate node positional representations that can capture the structural information of a center node on the graph at different resolutions. We employ $k$ diffusion matrices $\{\psi_{s_i}\}_{i=1}^k$ in which each $\psi_{s_i}$ has a size of $n \times n$, resulting in a tensor of graph
wavelets $\mathcal{P} \in \mathbb{R} ^ {n \times n \times k}$. Additionally, WavePE is a generalized version of RWPE \cite{rw} as the random walk process can be regarded as a type of discrete diffusion. In the following section, we demonstrate the use of tensor contractions to generate a tensor of node positional representations $\mathbf{P} \in \mathbb{R} ^ {n \times k}$ from $\mathcal{P}$. In general, Fig.\ref{fig:wavepe} demonstrates our wavelet positional encoding method.
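A minimal sketch of this construction is given below: it eigendecomposes the normalized graph Laplacian and applies the heat kernel $g(s\lambda)=e^{-s\lambda}$ at $k$ scales, returning the $n \times n \times k$ wavelet tensor $\mathcal{P}$. The dense eigendecomposition and the degree clamping for isolated nodes are simplifications for illustration, not the exact routine in our released code.
\begin{verbatim}
import torch


def wavelet_tensors(adj, scales):
    """Return the (n, n, k) wavelet tensor psi_{s_1..s_k} with a heat kernel.

    adj:    (n, n) dense adjacency matrix
    scales: iterable of scaling parameters s_1, ..., s_k
    """
    n = adj.shape[0]
    deg = adj.sum(dim=1).clamp(min=1)            # clamp: avoid division by zero
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    lap = torch.eye(n) - d_inv_sqrt @ adj @ d_inv_sqrt   # normalized Laplacian
    lam, U = torch.linalg.eigh(lap)              # eigenvalues / eigenvectors
    waves = [U @ torch.diag(torch.exp(-s * lam)) @ U.T for s in scales]
    return torch.stack(waves, dim=-1)            # psi_s stacked along last axis
\end{verbatim}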
\begin{figure}%
\centering
\captionsetup[subfloat]{captionskip=-2mm}
\subfloat[s = 3]{%
\label{fig:peptides}%
\includegraphics[scale = 0.2]{Figure/scale_3.pdf}}%
\subfloat[s = 5]{%
\label{fig:polymer}%
\includegraphics[scale = 0.2]{Figure/scale_5.pdf}}%
\subfloat[s = 10]{
\label{fig:scale_10}
\includegraphics[scale = 0.2]{Figure/scale_10.pdf}
}
\caption{Visualization of some of the wavelets with scaling parameters on the Aspirin $\text{C}_9\text{H}_8\text{O}_4$ molecular graph with 13 nodes (i.e. heavy atoms). The center node is colored yellow. The colors varying from bright to dark illustrate the diffusion rate from the center node to the others, i.e. nodes that are closer to the center node have brighter colors.
Low-scale wavelets are highly localized, whereas high-scale wavelets spread out over more nodes on the molecular graph.}
\label{fig:diffusion_wavelet}
\end{figure}
\subsection{Equivariant Encoding}
It is important to note that our spectral graph wavelets computed from the previous section must be further encoded in a permutation-equivariant manner. For simplicity, that means if we permute (i.e. change the order) the set of nodes,
their position encodings must be transformed accordingly. In this section, we formally define permutation symmetry, i.e. symmetry to the action of the symmetric group, $\mathbb{S}_n$, and construct permutation-equivariant neural networks to encode graph wavelets. An element $\sigma \in \mathbb{S}_n$ is a permutation of order $n$, or a bijective map from $\{1,\ldots, n\}$ to $\{1,\ldots, n\}$. For example, the action of $\mathbb{S}_n$ on an adjacency matrix $\mathcal{A} \in \mathbb{R}^{n \times n}$ and on a latent matrix $\mathcal{Z} \in \mathbb{R}^{n \times d_z}$ are:
\[
[\sigma \cdot \mathcal{A}]_{i_1, i_2} = \mathcal{A}_{\sigma^{-1}(i_1), \sigma^{-1}(i_2)}, \ \ \ \
[\sigma \cdot \mathcal{Z}]_{i, j} = \mathcal{Z}_{\sigma^{-1}(i), j},
\]
for $\sigma \in \mathbb{S}_n$. Here, the adjacency matrix $\mathcal{A}$ is a second-order tensor with a single feature channel, while the latent matrix $\mathcal{Z}$ is a first-order tensor with $d_z$ feature channels. In general, the action of $\mathbb{S}_n$ on a $k$-th order tensor $\mathcal{X} \in \mathbb{R}^{n^k \times d}$ (i.e. the last index denotes the feature channels) is defined similarly as:
\[
[\sigma \cdot \mathcal{X}]_{i_1, .., i_k, j} = \mathcal{X}_{\sigma^{-1}(i_1), .., \sigma^{-1}(i_k), j}, \hspace{20pt} \sigma \in \mathbb{S}_n.
\]
\noindent
Formally, we define these equivariant and invariant properties in Def.~\ref{def:Sn-equivariant} and equivariant neural networks in Def.~\ref{def:Sn-network}.
\begin{definition} \label{def:Sn-equivariant}
An $\mathbb{S}_n$-equivariant (or permutation equivariant) function is a function $f\colon \mathbb{R}^{n^k \times d} \rightarrow \mathbb{R}^{n^{k'} \times {d'}}$ that satisfies $f(\sigma \cdot \mathcal{X}) = \sigma \cdot f(\mathcal{X})$ for all $\sigma \in \mathbb{S}_n$ and $\mathcal{X} \in \mathbb{R}^{n^k \times d}$.
Similarly, we say that $f$ is $\mathbb{S}_n$-invariant (or permutation invariant) if and only if $f(\sigma \cdot \mathcal{X}) = f(\mathcal{X})$.
\end{definition}
\begin{definition} \label{def:Sn-network}
An $\mathbb{S}_n$-equivariant network is a function $f: \mathbb{R}^{n^k \times d} \rightarrow \mathbb{R}^{n^{k'} \times d'}$
defined as a composition of $\mathbb{S}_n$-equivariant linear functions $f_1, .., f_T$ and $\mathbb{S}_n$-equivariant nonlinearities $\gamma_1, .., \gamma_T$:
\[
f = \gamma_T \circ f_T \circ .. \circ \gamma_1 \circ f_1.
\]
On the other hand, an $\mathbb{S}_n$-invariant network is a function $f: \mathbb{R}^{n^k \times d} \rightarrow \mathbb{R}$ defined as a composition of an $\mathbb{S}_n$-equivariant network $f'$ and an $\mathbb{S}_n$-invariant function on top of it, e.g., $f = f'' \circ f'$.
\end{definition}
\noindent
In order to build permutation-equivariant neural networks, we revisit some basic tensor operations:
the tensor product $A \otimes B$ (see Def.~\ref{def:product}) and tensor contraction ${A_{\downarrow}}_{x_1, .., x_p}$ (see Def.~\ref{def:contraction}).
It can be shown that these tensor operations respect permutation equivariance \citep{HyEtAl2018} \citep{Kondor2018}.
\begin{definition} \label{def:product}
The \textbf{tensor product} of $A \in \mathbb{R}^{n^a}$ with $B \in \mathbb{R}^{n^b}$ yields a tensor $C = A \otimes B \in \mathbb{R}^{n^{a + b}}$ where
\[
C_{i_1, i_2, .., i_{a + b}} = A_{i_1, i_2, .., i_a} B_{i_{a + 1}, i_{a + 2}, .., i_{a + b}}.
\]
\end{definition}
\begin{definition} \label{def:contraction}
The \textbf{contraction} of $A \in \mathbb{R}^{n^a}$ along the pair of dimensions $\{x, y\}$ (assuming $x < y$) yields an $(a - 2)$-th order tensor
\[
C_{i_1, .., i_{x - 1}, i_{x + 1}, .., i_{y - 1}, i_{y + 1}, .., i_a} = \sum_{i_x = i_y} A_{i_1, .., i_a},
\]
where we assume that $i_x$ and $i_y$ have been removed from amongst the indices of $C$. Using Einstein notation, this can be written more compactly as
\[
C_{\{i_1, i_2, .., i_a\} \setminus \{i_x, i_y\}} = A_{i_1, i_2, .., i_a} \delta^{i_x, i_y},
\]
where $\delta$ is the Kronecker delta. In general, the contraction of $A$ along dimensions $\{x_1, .., x_p\}$ yields a tensor $C = {A_\downarrow}_{x_1, .., x_p} \in \mathbb{R}^{n^{a - p}}$ where
\[
{A_\downarrow}_{x_1, .., x_p} = \sum_{i_{x_1}} \sum_{i_{x_2}} ... \sum_{i_{x_p}} A_{i_1, i_2, .., i_a},
\]
or compactly as
\[
{A_\downarrow}_{x_1, .., x_p} = A_{i_1, i_2, .., i_a} \delta^{i_{x_1}, i_{x_2}, ..,i_{x_p}}.
\]
\end{definition}
Based on these tensor contractions and Def.~\ref{def:Sn-equivariant}, we can construct the second-order $\mathbb{S}_n$-equivariant networks encoding a graph with adjacency matrix $\mathcal{A} \in \mathbb{R}^{n \times n}$, node features $\mathcal{F}_v \in \mathbb{R}^{n \times d_v}$, and edge features $\mathcal{F}_e \in \mathbb{R}^{n \times n \times d_e}$, as in Section~\ref{exp:2nd-order}:
$$f = \gamma \circ \mathcal{M}_T \circ .. \circ \gamma \circ \mathcal{M}_1.$$
The ``raw'' graph wavelets can be treated as a second-order tensor of size $n \times n \times k$ where $k$ is the number of scales, similarly as the edge features. We employ the higher-order permutation-equivariant message passing proposed by \cite{maron2018invariant}, \cite{HyEtAl2018} and \cite{Kondor2018} to encode the ``raw'' graph wavelets from size $n \times n \times k$ into $n \times k$ that will be further used as nodes/tokens' embeddings of our Transformer architecture (see Fig.~\ref{fig:overall-idea}).
\subsubsection{Higher-order message passing} \label{exp:2nd-order}
The second order message passing has the message $\mathcal{H}_0 \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}| \times (d_v + d_e)}$ initialized by promoting the node features $\mathcal{F}_v$ to a second order tensor (e.g., we treat node features as self-loop edge features), and concatenating with the edge features $\mathcal{F}_e$. Iteratively,
$$
\mathcal{H}_t = \gamma(\mathcal{M}_t), \ \ \ \
\mathcal{M}_t = \mathcal{W}_t \bigg[ \bigoplus_{i, j} (\mathcal{A} \otimes {\mathcal{H}_{t - 1})_\downarrow}_{i, j} \bigg],
$$
where $\mathcal{A} \otimes \mathcal{H}_{t - 1}$ results in a fourth order tensor while $\downarrow_{i, j}$ contracts it down
to a second order tensor along the $i$-th and $j$-th dimensions, $\oplus$ denotes concatenation along the feature channels, and $\mathcal{W}_t$ denotes a multilayer perceptron on the feature channels.
We remark that the popular MPNN \citep{10.5555/3305381.3305512} is a lower-order special case in which $\mathcal{M}_t = \mathcal{D}^{-1}\mathcal{A}\mathcal{H}_{t - 1}\mathcal{W}_{t - 1}$, where $\mathcal{D}_{ii} = \sum_j \mathcal{A}_{ij}$ is the diagonal matrix of node degrees.
The message $\mathcal{H}_T$ of the last iteration is still second order, so we contract it down to the first order latent $\mathcal{Z} = \bigoplus_i {{\mathcal{H}_T}_\downarrow}_i$.
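As a rough illustration of one second-order step, the sketch below realizes the tensor product $\mathcal{A} \otimes \mathcal{H}_{t-1}$ followed by two example contractions via \texttt{einsum}; the particular pair of contractions that are concatenated, and the ReLU nonlinearity, are assumptions made for brevity rather than the exact set used in our implementation.
\begin{verbatim}
import torch


def second_order_step(A, H, mlp):
    """One step M_t -> H_t of second-order message passing (schematic).

    A:   (n, n) adjacency matrix
    H:   (n, n, c) second-order message tensor H_{t-1}
    mlp: module acting on the feature channels (the W_t in the text)
    """
    # A (x) H is a 4th-order tensor; two illustrative contractions
    # bring it back to second order (the full set is larger).
    m1 = torch.einsum('ij,jkc->ikc', A, H)
    m2 = torch.einsum('ij,kjc->kic', A, H)
    M = torch.cat([m1, m2], dim=-1)       # concatenate feature channels
    return torch.relu(mlp(M))             # gamma: pointwise nonlinearity


def contract_to_first_order(H):
    """Contract the final second-order message to node-level embeddings."""
    return torch.cat([H.sum(dim=0), H.sum(dim=1)], dim=-1)   # (n, 2c)
\end{verbatim}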
\section{Multiresolution Graph Transformers} \label{sec:method_mgt}
In this section, we present Multiresolution Graph Transformers (MGT), a neural network architecture for learning hierarchical structures. MGT uses Transformers to yield the representations of macromolecules at different resolutions. While previous work either neglects the hierarchical characteristics of large molecules or fails to model global interactions between distant atoms, our proposed approach can satisfy these two properties via multiresolution analysis.
Figs. \ref{fig:atom_level_mgt} and \ref{fig:substructure_level_mgt} show an overview of our framework. MGT consists of three main components: an atom-level encoder, a module to extract substructures, and a substructure-level encoder. We use a graph transformer to generate the atomic embeddings. Then, substructures present in molecules are extracted by a learning-to-cluster algorithm. The molecular graph is coarsened into a set of substructures, and we use a pure Transformer encoder to learn their relations.
\subsection{Atom-Level Encoder}
To utilize the proposed wavelet positional encoding demonstrated in Section \ref{wavelet}, we leverage the design of the graph transformer proposed in \cite{gps}, which is a general, powerful, and scalable Graph Transformer (GraphGPS) for graph representation learning. Let $\mathbf{A} \in \mathbb{R} ^ {n \times n}$ be the adjacency matrix of a graph with $n$ nodes and $e$ edges; $\textbf{X}^l$ and $\textbf{E}^l$ are node and edge features at layer $l$-th, respectively. In addition, $\textbf{X} ^ 0 \in \mathbb{R} ^{n \times d}$ and $\textbf{E}^0 \in \mathbb{R}^ {e \times d}$ are initial atom and bond features embedded in $d$-dimensional spaces created by two embedding layers. The wavelet positional vectors $\textbf{p} \in \mathbb{R} ^ {n \times k}$ are fed to an encoder (e.g., a feed-forward neural network or a linear transformation), yielding a tensor of positional features $\textbf{P} \in \mathbb{R} ^ {n \times d_p}$. We let $\textbf{X}^0:= \text{concat}(\textbf{X}^0, \textbf{P})$ to produce new node features $\textbf{X}^0 \in \mathbb{R} ^ {n \times (d + d_p)}$. From here, we define $d := d + d_p $, and for convenience, the output dimensions of all layers are equal to $d $.
Each layer of GraphGPS uses a message-passing neural network ($\text{MPNN}^l$) to exchange information (i.e., messages) within the neighborhood and a self-attention layer ($\text{SA}^l$) described in Eq. (\ref{eq:1}) to compute global interactions among distant nodes:
\begin{align}
\textbf{X}_L^{l+1}, \textbf{E}^{l+1} & = \text{MPNN} ^ l(\textbf{X}^l, \textbf{E}^l, \textbf{A})\\
\textbf{X}_G ^ {l+1} &= \text{SA} ^ l(\textbf{X}^{l}) \\
\textbf{X} ^ {l+1} &= \text{FFN} ^ l(\textbf{X}_L ^ {l+1} + \textbf{X}_G ^ {l+1}) \label{eq:4}
\end{align}
where $\textbf{X}_L ^ {l+1}$ and $\textbf{X}_G ^ {l+1}$ are node local and global representations; they are unified into $\textbf{X} ^ {l+1}$ via Eq. \ref{eq:4}. Popular techniques such as Dropout \cite{dropout} and normalization \cite{batchnorm, layernorm} are omitted for the sake of clarity. By feeding the molecular graph through $L$ layers, we attain two tensors $\textbf{X}_a:=\textbf{X}^L$ and $\textbf{E}_a:=\textbf{E}^L$ indicating the node and edge embeddings, respectively.
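A condensed sketch of one such hybrid layer is shown below; it assumes an \texttt{mpnn} module returning updated node and edge features, uses PyTorch's built-in multi-head attention for the global branch, and omits dropout and normalization as in the equations above.
\begin{verbatim}
import torch.nn as nn


class HybridLayer(nn.Module):
    """One MGT/GraphGPS-style layer: local MPNN + global self-attention,
    fused by an FFN. `mpnn` is an assumed module returning updated node
    and edge features; dropout and normalization are omitted."""

    def __init__(self, d, mpnn, num_heads=4):
        super().__init__()
        self.mpnn = mpnn
        self.attn = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, 2 * d), nn.ReLU(),
                                 nn.Linear(2 * d, d))

    def forward(self, X, E, A):
        X_local, E_new = self.mpnn(X, E, A)        # local message passing
        X_b = X.unsqueeze(0)                       # add a batch dimension
        X_global, _ = self.attn(X_b, X_b, X_b)     # global self-attention
        X_new = self.ffn(X_local + X_global.squeeze(0))  # fuse both branches
        return X_new, E_new
\end{verbatim}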
\subsection{Learning to Cluster}
In this work, we use a message-passing neural network augmented with differentiable pooling layers \cite{diffpool} to cluster the atoms into substructures automatically:
\begin{align}
\textbf{Z} &= \text{MPNN}_\text{e}(\textbf{X}_a, \textbf{E}_a, \textbf{A}) \label{eq:5} \\
\textbf{S} &= \text{Softmax}(\text{MPNN}_\text{c}(\textbf{X}_a, \textbf{E}_a, \textbf{A}) )
\label{eq:6}
\end{align}
where $\text{MPNN}_\text{e}$ and $\text{MPNN}_\text{c}$ are two-layer message-passing networks that learn to generate node embeddings ($\textbf{Z} \in \mathbb{R} ^ {n \times d}$) and a clustering matrix ($\textbf{S} \in \mathbb{R} ^ {n \times C})$, respectively; $C$ denotes the number of substructures in molecules. A tensor of features $\textbf{X}_s \in \mathbb{R} ^ {C \times d}$ for the substructures is computed:
\begin{equation}
\textbf{X}_s = \textbf{S} ^ T \textbf{Z}
\end{equation}
This learning-to-cluster module is placed after the atom-level encoder. Intuitively, atom nodes updated with both local and global information should be classified into accurate substructures.
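The coarsening step can be sketched as follows, where \texttt{mpnn\_e} and \texttt{mpnn\_c} stand for the two assumed message-passing networks of Eqs.~(\ref{eq:5}) and (\ref{eq:6}).
\begin{verbatim}
import torch


def coarsen(X_a, E_a, A, mpnn_e, mpnn_c):
    """DiffPool-style coarsening (sketch); mpnn_e / mpnn_c are assumed
    two-layer message-passing networks for embeddings and cluster scores."""
    Z = mpnn_e(X_a, E_a, A)                          # node embeddings
    S = torch.softmax(mpnn_c(X_a, E_a, A), dim=-1)   # soft assignment (n, C)
    X_s = S.transpose(0, 1) @ Z                      # X_s = S^T Z, shape (C, d)
    return X_s, S
\end{verbatim}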
\subsection{Substructure-level Encoder}
Given a set of substructures $\mathcal{V}_s$ with a tensor of features $\mathbf{X}_s \in \mathbb{R} ^ {C \times d}$, we forward $\mathbf{X}_s$ to $L$ conventional Transformer encoder layers \cite{vaswari} to capture their pairwise semantics:
\begin{align}
\textbf{H}_\text{1}^{l+1} &= \text{Norm}(\text{SA}^l(\textbf{H} ^ l) + \textbf{H} ^ l)\\
\textbf{H} ^ {l+1} &= \text{Norm}(\text{FFN}(\textbf{H}_\text{1}^{l+1}) + \textbf{H}_\text{1}^{l+1})
\end{align}
where $\text{SA}$ refers to (multi-head) self-attention described in Eq. (\ref{eq:1}), and $\textbf{H}^0$ is equal to $\textbf{X}_s$. Additionally, we add a long-range skip connection to alleviate gradient vanishing as:
\begin{equation}
\textbf{H}_s = \text{FFN}(\text{concat}(\textbf{H}^0, \textbf{H}^{L}))
\end{equation}
$\textbf{H}_s \in \mathbb{R} ^ {C \times d}$ is the output indicating the representations for the substructures. Finally, we aggregate all $C$ vectors $h_s \in \textbf{H}_s$ to result in a unique representation $z \in \mathbb{R} ^ d$ for the molecules (refer to Section \ref{heirachical_coarsen}), before feeding it to a feed-forward network to compute the final output $y \in \mathbb{R} ^ c$ for property prediction:
\begin{align}
z &= \zeta (\{h_s\}_{s = 1} ^ C) \\
\hat{y} &= \text{FFN}(z)
\end{align}
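The substructure-level encoder, long-range skip connection, and readout can be summarized by the following sketch; layer counts, head counts, and the use of PyTorch's stock Transformer encoder are illustrative choices rather than our exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn


class SubstructureEncoder(nn.Module):
    """Substructure-level Transformer with a long-range skip connection
    and sum readout; hyperparameters here are illustrative only."""

    def __init__(self, d, num_layers=4, num_heads=4, num_targets=1):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.skip_ffn = nn.Linear(2 * d, d)    # fuses H^0 and H^L
        self.head = nn.Linear(d, num_targets)  # property-prediction head

    def forward(self, X_s):                    # X_s: (C, d) substructure feats
        H0 = X_s.unsqueeze(0)                  # add a batch dimension
        HL = self.encoder(H0)
        H_s = self.skip_ffn(torch.cat([H0, HL], dim=-1)).squeeze(0)
        z = H_s.sum(dim=0)                     # zeta = sum over substructures
        return self.head(z)                    # predicted property y_hat
\end{verbatim}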
\paragraph{Training Objective} We train MGT by minimizing $\mathcal{L}$:
\begin{equation}
\mathcal{L} = \mathcal{L}_1 + \lambda_1 \mathcal{L}_{LP} + \lambda_2 \mathcal{L}_{E}
\end{equation}
where $\mathcal{L}_1 = l(\hat{y}, y)$ denotes the loss function between predicted values and ground truths (e.g., cross-entropy or mean-squared error), $\mathcal{L}_{LP} = ||\textbf{A} - \textbf{S} \textbf{S}^T||_F$ indicates the auxiliary link prediction loss ($||\cdot||_F$ denotes the Frobenius norm), and $\mathcal{L}_{E} = \frac{1}{n} \sum_{i=1}^n H(\textbf{S}_i)$ denotes the entropy regularization of the cluster assignment, i.e., each atom should be assigned to a unique cluster. Additionally, $\lambda_1$ and $\lambda_2$ are hyperparameters.
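A sketch of the full training objective is given below, assuming a regression task so that $\mathcal{L}_1$ is the mean-squared error; the entropy term averages the per-node assignment entropy $H(\mathbf{S}_i)$.
\begin{verbatim}
import torch
import torch.nn.functional as F


def mgt_loss(y_hat, y, A, S, lambda1=1.0, lambda2=1.0):
    """L = L_1 + lambda1 * L_LP + lambda2 * L_E (sketch).

    Assumes a regression task, so L_1 is the mean-squared error.
    A: (n, n) adjacency matrix; S: (n, C) soft cluster assignment.
    """
    l_task = F.mse_loss(y_hat, y)
    l_lp = torch.norm(A - S @ S.transpose(0, 1), p='fro')    # link prediction
    ent = -(S * torch.log(S.clamp(min=1e-12))).sum(dim=-1)   # H(S_i) per node
    return l_task + lambda1 * l_lp + lambda2 * ent.mean()
\end{verbatim}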
\section{Related work} \label{sec:related}
\paragraph{Hierarchical Learning on Molecules}
Functional groups or repeating units are essential phenomena in chemistry. While functional groups constitute large molecules, repeating units are the primary parts that produce complete polymer chains. We regard them as substructures. In particular, similar substructures undergo similar chemical reactions regardless of the remaining compositions existing in the molecules \cite{functional}. Previous work has leveraged the hierarchical property of molecules to improve the performance in molecular representation learning and generation. \citet{knowledge_constrastive}, \citet{grover}, and \citet{hyper_message} used functional groups as prior knowledge to guide the models to predict accurate molecular properties. For the molecular generation task, \citet{structural_motifs} and \citet{hpgraph} used chemical rules to extract substructures and constructed a vocabulary of structural motifs to generate the molecules.
\paragraph{Graph Transformers} Earlier research efforts have adapted Transformer-like architectures to graph-structured data. \citet{graph_generalize} proposed an early approach to generalize Transformers to graphs using Laplacian positional encoding and performing self-attention on one-hop neighbors surrounding center nodes. On the other hand, \citet{san} computes attention scores on the entire graph with differentiation between positive and negative edges, while also using Laplacian positional encoding. \citet{grover} introduced GTransformer, which utilizes vectorized outputs from local GNNs as inputs for a Transformer encoder, forming an effective combination of local and global node information. \citet{gps} proposed a general framework that integrates essential components of Graph Transformers, including positional or structural encoding, graph feature extraction, local message passing, and self-attention. Also, \citet{sat} extracted multiple k-hop subgraphs and fed them to local GNNs to compute their embeddings, which are then passed to a Transformer encoder. Graphormer \cite{graphormer} uses attention mechanisms to estimate several types of encoding, such as centrality, spatial, and edge encodings. In addition, \citet{pure_transformer} treated all nodes and edges as independent tokens augmented with orthonormal node identifiers and trainable type identifiers, and fed them to a standard Transformer encoder.
Moreover, \citet{graph_transformer_networks} generated multiple meta-paths, i.e. views, of a graph and computed their pairwise attention scores, before aggregating them into a final representation for the entire graph.
\paragraph{Graph Positional Encoding} Several approaches have been proposed to encode the positional or structural representations into node features to improve the expressiveness of GNNs and Graph Transformers. Node positions can be determined via spectral or spatial domains. Spectral-based methods include Laplacian positional encoding \cite{graph_generalize, san} and random walk positional encoding (RWPE) \cite{rw}. For spatial-based methods, \citet{anchor} computed distances of sets of nodes to anchor nodes, whereas \citet{distance} calculated the shortest distances between pairs of nodes.
\paragraph{Multiresolution Analysis and Wavelet Theory}
Multiresolution Analysis (MRA) has been proposed by \cite{192463, 10.5555/1525499} as a method to approximate signals at multiple scales in which the signals are decomposed over elementary waveforms chosen from a family called wavelets (i.e. mother wavelets and father wavelets), including Haar \cite{Haar1910ZurTD}, Daubechies \cite{Daubechies1988OrthonormalBO}, etc., to produce the sparse representations. In graph and discrete domains, \citet{HAMMOND2011129} introduced spectral graph wavelets that are determined by applying the wavelet operator on the graph Laplacian at multi-levels. \citet{COIFMAN200653} proposed diffusion wavelet that is a fast multiresolution framework for analyzing functions on discretized structures such as graphs and manifolds. In the deep learning era, \citet{10.5555/2999611.2999723} and \citet{xu2018graph} leveraged the power of neural networks for graph wavelet construction and computation.
|
{
"arxiv_id": "2302.08646",
"language": "en",
"timestamp": "2023-02-27T02:12:16",
"url": "https://arxiv.org/abs/2302.08646",
"yymm": "2302"
} | \section{Introduction}\label{sec:introduction}
Undergoing worldwide rapid development~\cite{tesla, waymo, uber}, \textit{autonomous driving} (AD) aims to offer a wide range of benefits including better safety, less harmful emissions, increased lane capacity, and less travel time~\cite{shaheen2019mobility}. The core of AD is the perception capability to detect objects (e.g., vehicles, bicycles, signs, pedestrians) on the road; it enables interpretable path and action planning. Formally, the SAE (Society of Automotive Engineers) requires Level 3-5 AD to be able to monitor environments and detect
objects, even under adverse road and weather conditions~\cite{sae2014taxonomy}.
\brev{To reach these goals, \textit{multiple on-board sensing modalities} (e.g., lidar, radar, and camera) collaboratively deliver complementary and real-time information about the surroundings. While lidar and camera provide high-definition measurements only at short range, due to attenuation over distance and degradation under adverse weather or lighting conditions, radar achieves relatively longer-range monitoring that is robust to adverse conditions, leveraging the penetrating power of radio waves~\cite{zheng2021siwa}.}
\brev{To fully take advantage of the rich multimodal information provided by various sensors, a plethora of previous arts~\cite{ku2018joint, xu2018pointfusion, qi2018frustum, chen2017multi, liang2018deep, qian2021robust,liu2021bev} have employed deep learning to perform multi-modality fusion as well as pattern recognition, aiming to conduct accurate and reliable \textit{object detection} (OD).
The mainstream of OD relies on a two-stage method~\cite{ku2018joint, xu2018pointfusion, qi2018frustum, chen2017multi, qian2021robust}, where proposals for regions of interest are generated first and then refined for object classification and bounding box regression.
Though OD can handle different viewing angles in general, we focus only on \textit{bird's-eye view}~\cite{qian2021robust, liu2021bev} for reduced complexity, as it
reconciles the view discrepancy among different sensing modalities at a reasonably low cost.
Yet even with this cost reduction, the fundamental difference between OD and basic learning tasks (e.g., classification) still leads to far more (deep learning) model parameters than normal, rendering training hard to converge even for a single model; nonetheless, we shall further promote the need for training multiple models in a distributed manner.}
Ideally, deep neural networks (DNNs) for OD should be trained on a dataset that takes into account different road, traffic, and weather conditions. However, the ever-changing driving environments render it infeasible for car manufacturers and developers to collect a dataset covering all scenarios. Whereas crowdsensing~\cite{ganti2011mobile, incentive, Dual-DCOSS14, GROPING-TMC15, Quality-ToN18, Learning-TMC19} can be exploited to overcome this difficulty by outsourcing data collection and annotation tasks to \textit{autonomous vehicle}s (AVs), conventional crowdsensing suffers from privacy concerns~\cite{vergara2016privacy,de2013unique} and data communication burdens. Fortunately,
integrating \textit{federated learning} (FL)~\cite{konevcny2016federated} into crowdsensing could virtually tackle these problems. As an emerging paradigm for distributed training across massive participating clients,
FL demands a central \textit{server} only to coordinate the distributed learning process, with which each \textit{client} shares only the local model parameters: such a scheme protects data privacy while reducing communication load at the same time.
\textbf{Challenges.} Designing an FL system for OD upon AVs' multimodal data is challenging, because all three entities (i.e., human, vehicle, and environment) involved in the system introduce a variety of data heterogeneity. First, relying on clients to label the data can lead to annotation heterogeneity: some clients may be more motivated to provide
annotations with adequate quality (e.g., bounding boxes around the detected vehicle), while others may be so busy and/or less skillful that they miss a large proportion of the annotations. Second, crowdsensing by different AVs also introduces sensing modality heterogeneity,\footnote{\brev{Training a single model using FL under the heterogeneity of sensing modality allows vehicles with fewer sensors to learn from others via FL, and it also improves the model robustness against sensor malfunctions.}} since the vehicles may be equipped with different types of sensors by their manufacturers. Even for AV models from the same manufacturer, it is common that a certain sensor experiences malfunctions, causing the corresponding data modality to get lost.
Third, the ever-changing environment (e.g., weather and road conditions) can introduce drifts in data distributions, further exacerbating the heterogeneity issue. Unfortunately, prior arts on FL either focus on homogeneous scenarios~\cite{mcmahan2017communication, konevcny2016federated}, or deal with heterogeneity of unimodal data~\cite{li2020federated, fallah2020personalized, tu2021feddl}. None of the existing works is capable of handling all the aforementioned heterogeneity induced by humans, vehicles, and environments under our targeted AD scenarios. \brev{Last but not least, the high complexity of the two-stage OD network makes its loss surface chaotic~\cite{li2018visualizing}, further exacerbating the negative impacts of the data heterogeneity on the performance.}
\begin{figure}[t]
\setlength\abovecaptionskip{8pt}
\centering
\includegraphics[width=\linewidth]{pic/auto2.pdf}
\caption{\brev{The bird's-eye view FL-OD of \sysname.}}
\label{fig:teaser}
\end{figure}
\textbf{Our solutions.} To tackle these challenges, \brev{we carefully re-engineer the classical two-stage OD model~\cite{ren2015faster} to accommodate the multimodal data generated on AVs, and we exploit a major insight on the loss surface of OD under FL to guide the design of several learning mechanisms that handle the heterogeneity issue,
as briefly illustrated in Figure~\ref{fig:teaser}.
Since we notice that tolerance for data anomalies is crucial to efficiently navigate on the chaotic loss surface, we focus on robust designs to achieve such tolerance.
}
Specifically, we design a cross-entropy-based loss function for training the neural model to handle unlabeled regions (of certain vehicles) that could be mistakenly regarded as the background during training. \sysname\ also employs inter-modality autoencoders to perform data imputation of missing sensor modalities. The autoencoders learn from incomplete data modality and generate plausible values for the missing modality.
Finally, \sysname\ exploits a novel client selection mechanism to \brev{handle environment heterogeneity by eliminating diverged models. All in all, these three mechanisms together may largely avoid data abnormality and hence prevent the clients' losses from falling into local minimums on the chaotic loss surface.}
Our key contributions can be summarized as follows:
\begin{itemize}
\item To the best of our knowledge, \sysname\ is the first FL system specifically designed for multi-modal OD under heterogeneous AV settings.
\item We design a novel cross entropy loss for training the neural model for OD, aiming to mitigate the annotation heterogeneity across clients.
\item We design an inter-modality autoencoder to perform missing data modality imputation, thus alleviating the modality heterogeneity across the clients.
\item We design a novel client selection mechanism for choosing mutually-enhancing clients, thus further eliminating the harmful effects induced by heterogeneity.
\item We implement \sysname\ prototype and evaluate \sysname\ with extensive experiments. The promising results demonstrate that \sysname\ can enable robust vehicle detection under AD scenarios.
\end{itemize}
Whereas most FL proposals consider only basic learning tasks~\cite{konevcny2016federated, li2020federated, li2021fedmask, tu2021feddl}, \sysname\ pioneers FL-driven AV-OD, a task far more sophisticated yet realistic than basic classification or regression.
In the following, \S~\ref{sec:background_motivation} motivates the design of \sysname\ by revealing the damaging effects of the heterogeneity. \S~\ref{sec:system_design} presents the system design of \sysname. \S~\ref{sec:evaluation} introduces the datasets, system implementation, and experiment setup, before reporting the evaluation results. Related works and technical limitations are discussed in \S~\ref{sec:related_work}. Finally, \S~\ref{sec: conclusion} concludes the paper with future directions.
\section{Motivation}\label{sec:background_motivation}
We first investigate the impact of annotation heterogeneity on the performance of a DNN model for vehicle detection. Then we show that the heterogeneous modality significantly degrades the performance of the federated model on OD. Finally, we confirm the necessity to tackle the model divergence potentially caused by heterogeneous factors (e.g., diversified environments and human inputs) in federated training.
\subsection{Quantity Skew of Labeled Data} \label{ssec:label_skew}
As an FL system, AutoFed relies on AV clients to provide labels (i.e., bounding boxes around vehicles) for two reasons: i) the server should not have access to local data due to privacy concerns; and ii) labeling data locally is more reasonable compared to performing the labeling offline on the server, \brev{because more visual cues can be leveraged for labeling locally on AVs,\footnote{\brev{The driver/co-pilot can provide online
labels similar to crowdsourcing in Waze~\cite{waze}.}} and we hence deem all such labels as reliable. However, relying on clients for data labeling can lead to skew in label quantity:} some clients may be more motivated to provide annotations with adequate quality, while others may be so busy and/or less skillful that they miss a large proportion of the annotations. The situation may get worse during training, as the missing annotations on some AVs could be mistakenly marked as background by the OD network, thus backpropagating wrong gradients during local training. As a result, the small number of labels on some AVs may degrade the overall performance and cause training instability of the OD network.
\begin{figure}[t]
\setlength\abovecaptionskip{6pt}
\vspace{-2.5ex}
\captionsetup[subfigure]{justification=centering}
\centering
\subfloat[Average precision.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/missing_anno1.pdf}
%
\end{minipage}
}
\subfloat[Average recall.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/missing_anno2.pdf}
%
\end{minipage}
}
\caption{Damaging effects of missing annotations.}
\label{fig:damage_miss_anno}
\vspace{-2ex}
\end{figure}
To demonstrate the damaging effects of missing labeling, we show the average precision and recall of %
\brev{a two-stage OD network for the task of vehicle detection in Figure~\ref{fig:damage_miss_anno}. The network utilizes a VGG variant~\cite{simonyan2014very} as its backbone and was trained by standard backpropagation using an SGD optimizer on a dataset of 1,000 data samples with 50\% data with missing annotations and 50\% data with complete annotations in a standalone manner. After training, the network is tested on another 1,000-sample dataset.} Clearly, the network under complete labeling outperforms that under missing labeling by around 20\% in terms of both precision and recall. Moreover, it is evident that the performance of the DNN under missing labeling experiences a downward trend after the 20-th epoch, confirming the negative effects of the wrong gradient signals introduced by missing labeling.
\subsection{Heterogeneous Modality across AVs} \label{ssec:modality_hetero}
Most prior work on the fusion of multimodal sensing data assumes that all modalities are available for every training data point~\cite{xu2018pointfusion, bijelic2020seeing}. This assumption may not be valid in reality, as the sensing modalities of different AVs are often heterogeneous for two reasons. On one hand, the AVs may be equipped with different types of sensors by their manufacturers. On the other hand, even for AVs from the same manufacturer, it is common that a certain sensor experiences malfunctions, causing the corresponding data modality to get lost. Such heterogeneous modalities pose significant challenges to DNN-based OD.
Removing data entries with missing modalities or keeping only modalities shared among all clients can be a makeshift, but useful information conveyed in other modalities or clients can be discarded. \brev{Lacking access to global statistics also renders filling a missing modality with typical statistics (e.g., mean) impractical, leaving zero-filling~\cite{van2018flexible} as the only possibility. Therefore, we show the average precision and recall of an OD network in Figure~\ref{fig:damage_miss_mod}; the model is trained in a standalone manner under complete modalities and missing modalities with zero-filling. In the training process, data with missing radar and data with missing lidar each account for 25\% of a 1,000-sample dataset.} The results demonstrate that the precision and recall of training the models with complete modalities outperform those with partial modalities by more than 20\% and 10\%, respectively, confirming that zero-filling does not fully overcome the challenge. To mitigate the missing modality, it is necessary to propose a new data imputation technique.
\begin{figure}[t]
\setlength\abovecaptionskip{6pt}
\vspace{-2.5ex}
\captionsetup[subfigure]{justification=centering}
\centering
\subfloat[Average precision.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/miss_modality1.pdf}
\end{minipage}
}
\subfloat[Average recall.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/miss_modality2.pdf}
\end{minipage}
}
\caption{Damaging effects of missing modality.}
\label{fig:damage_miss_mod}
\vspace{-2ex}
\end{figure}
\subsection{Model Divergence}\label{ssec:environ_hetero}
Besides the above label and modality heterogeneity across the clients, there exist other heterogeneities such as those introduced by environments (e.g., different weather and road conditions). Such heterogeneity causes the local models on AVs to diverge, and their optimization goals can even become contradictory. %
We demonstrate such model divergence in Figure~\ref{subfig:pca_diverging}, \brev{where we involve 40 clients each holding a 1,000-sample dataset for training. These datasets have i) annotation level ranging from 10\% to 100\% (with 10\% step size for every 4 clients), ii) 25\% chance to hold data with missing radar or lidar modality, and iii) equal chance to have data recorded under clear, foggy, rainy, and snowy weather. After training, the network is tested on another 2,000-sample dataset.} \brev{We apply} PCA (principal component analysis)~\cite{pearson1901liii} to the model weights \brev{at the 10-th epoch, }%
and visualize the first two PCA components. One may readily observe that, while about half of the local model weights form a cluster (colored in blue), there exist multiple outliers (colored in red). If we recklessly perform aggregation on these model weights, the performance of the federated model will be significantly degraded by the outliers. We demonstrate the effects of diverged models in Figure~\ref{subfig:pre_diverging}. The results show that aggregating the diverged models leads to a 10\% drop in OD precision when compared with the aggregated model from homogeneous training data.
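The divergence analysis can be reproduced with a short sketch along the following lines, which flattens each client's model weights and projects them onto the first two principal components; the function name and the absence of any outlier thresholding are deliberate simplifications rather than our exact analysis script.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA


def embed_client_models(client_state_dicts):
    """Flatten each client's model weights and project them onto the first
    two principal components; outlier handling is intentionally omitted."""
    flat = np.stack([
        np.concatenate([p.detach().cpu().numpy().ravel() for p in sd.values()])
        for sd in client_state_dicts
    ])
    return PCA(n_components=2).fit_transform(flat)   # (num_clients, 2)
\end{verbatim}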
\begin{figure}[h]
\setlength\abovecaptionskip{6pt}
\vspace{-2.5ex}
\captionsetup[subfigure]{justification=centering}
\centering
\subfloat[PCA embedding.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/div1.pdf}
\label{subfig:pca_diverging}
\end{minipage}
}
\subfloat[Average precision.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/diverging_precision.pdf}
\label{subfig:pre_diverging}
\end{minipage}
}
\caption{Damaging effects of diverging models.}
\label{fig:div_gradient}
\vspace{-2ex}
\end{figure}
\section{System Design}\label{sec:system_design}
Based on our discussions in \S~\ref{sec:background_motivation}, we hereby present \sysname\ comprising a two-level design: i) \brev{a multimodal OD} network
to fully exploit the information provided by multimodal sensors equipped on the AVs,
and ii) an FL framework involving specifically designed loss, missing modality completion module, and client selection mechanism, aiming to achieve heterogeneity-aware federated multimodal OD on distributed AVs. In the following, we first define our problem concretely, and then we present the details of the multimodal OD network and FL framework.
\subsection{Problem Statement and Overview}\label{ssec:statement}
The ultimate goal of \sysname\ is to make use of the crowdsensed data collected from multiple AVs (i.e., clients) to increase the data diversity, thus improving upon the performance of a standalone OD network deployed on a single client. Since the sensors
on AVs can have multiple viewing perspectives, i.e., the lidar, radar, and camera have 3D, bird's-eye view, and front view, respectively, there are no one-size-fits-all solutions. Therefore, we specifically choose to solve the \textit{vehicle detection} problem~\cite{petrovskaya2009model} (a special case of OD) from the bird's-eye view \brev{using lidar and radar}, thanks to (also confined by) the availability of dataset and vehicle annotations~\cite{oxford_robotcar}. \brev{We avoid using camera in \sysname\ for two reasons. First, the perception capability of lidar and camera largely overlap due to their similar spectrums. Second, the current settings and network architecture mostly are focused on the bird’s eye view of the vehicle’s surroundings, making the camera’s orthogonal front view largely incompatible. }Note that our \sysname\ framework is not limited to any specific OD tasks, because performing vehicle detection actually encompasses all critical elements in fulfilling other OD tasks.
Since AV scenarios are by default a distributed setting, FL is a good candidate for better utilizing the data diversity from geographically distributed clients. \brev{However, combining FL with OD may exacerbate OD's chaotic loss surface emphasized in \S~1,
forcing naive aggregation algorithms to yield only comparable or even inferior performance compared to traditional standalone training~\cite{liu2020fedvision, jallepalli2021federated, yu2019federated}, especially under the challenges mentioned in \S~\ref{sec:background_motivation}.
Fortunately, our insight indicates that high tolerance to data anomalies can often allow effective training that leads to meaningful local minimums, by smoothly navigating on the chaotic loss surface; this motivates our following design considerations. Firstly, both data preprocessing and network architecture should be modularized and flexible enough to accommodate potentially abnormal multimodal inputs. Secondly, the network should be equipped with a mechanism to tolerate annotation anomalies of the input data. Thirdly, there should be a way to fill in missing modalities without making the data distributions abnormal. Finally, the aggregation mechanism must be sufficiently robust to withstand potentially diverging client models that could result in a non-optimal outcome after aggregation.}
\subsection{Multimodal Vehicle Detection} \label{ssec:multimodal_network}
Before introducing the \sysname\ framework, we first look at how to design a multimodal OD network under an FL setting. While the design method of a conventional two-stage OD network is well-established,
how to integrate multimodal processing into the network remains an open issue. Furthermore, extending the multimodal OD network to the FL scenario puts more stringent requirements on the handling of multimodal data. Intuitively speaking, different data modalities of such a network should i) conform to similar data formats, thus facilitating multimodal fusion, ii) collaborate by sharing information so as to enhance other modalities, and iii) be loosely coupled so as to support a more flexible FL pipeline that better deals with heterogeneous data and environments. In this section, we start by introducing the basics of the conventional OD network. Then we align lidar and radar data to improve data compatibility and thereby satisfy requirement i). Finally, we present a novel feature-level fusion technique that satisfies the other two requirements with strong information sharing and loose coupling among the modalities.
\subsubsection{Object Detection Basics} \label{sssec:od_basics}
\begin{figure}[h]
\setlength\abovecaptionskip{6pt}
\centering
\includegraphics[width=0.95\linewidth]{pic/detection.pdf}
\caption{The upgraded OD pipeline of \sysname's multimodal vehicle detection network.}
\label{fig:object_detection}
\end{figure}
Conventional two-stage OD follows 3 major steps~\cite{girshick2015fast, ren2015faster}, with 2 steps in the first stage as shown by the ``blue'' boxes in Figure~\ref{fig:object_detection}. Generally, a feature map is first extracted using well-accepted feature extractors (e.g., VGGNet~\cite{simonyan2014very} or ResNet~\cite{he2016deep}), then region proposals are generated by the region proposal network (RPN). Specifically, taking the feature maps as input,
RPN generates anchor boxes with pre-defined fixed scales and aspect ratios. The built-in \newrev{classifier of RPN differentiates} whether each anchor box is foreground or background. \newrev{The outcome allows RPN to generate the region proposals; it leverages a built-in regressor to fit the anchor boxes to their corresponding objects by adjusting their offsets. With the above completing Step-1, Step-2 involves the region proposals being filtered by non-maximum suppression (NMS): the proposals with the highest confidence are selected and excessive proposals overlapping with higher-confidence proposals above a given threshold are removed.} The loss of RPN is $L^{\mathrm{RPN}} = L_{\mathrm{cls}}^{\mathrm{RPN}} + L_{\mathrm{loc}}^{\mathrm{RPN}}$, where $L_{\mathrm{cls}}^{\mathrm{RPN}}$ is a binary cross entropy (BCE) loss \newrev{measuring} the ``objectness'' of the classification (i.e., how good the RPN is at labelling the anchor boxes as foreground or background), and $L_{\mathrm{loc}}^{\mathrm{RPN}}$ is an $L^1$ loss \newrev{quantifying} the localization performance of the predicted regions generated by the RPN.
\newrev{The second stage (also Step-3) performs fine-tuning to} jointly optimize a classifier and bounding-box regressors. After cropping out the feature map and RoI pooling~\cite{ren2015faster} of the region of interest according to the generated proposals, it further uses a classifier to detect whether the generated bounding box contains a specific class of object. It also fine-tunes the bounding boxes using a class-specific regressor. \newrev{Essentially, this stage} introduces three losses, i.e., a BCE classification loss $L_{\mathrm{cls}}$ measuring \newrev{the network performance in} labeling a predicted box with an object, an $L^1$ box regression loss $L_{\mathrm{reg}}$ \newrev{quantifying} how the predicted location deviates from the true location, and a BCE direction loss $L_{\mathrm{dir}}$ \newrev{specifying} whether the vehicle is pointing upward or downward to remove ambiguity, thus confining the possible angles of the rotated bounding box to the range of $[0^{\circ}, 180^{\circ}]$. In summary, the \newrev{overall} loss function for the OD network can be written as:
\begin{align}
L_{\mathrm{total}} = L^{\mathrm{RPN}} + L_{\mathrm{cls}} + L_{\mathrm{reg}} + L_{\mathrm{dir}}
\end{align}
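To make the composition of this objective concrete, the following PyTorch-style sketch shows how the four loss terms could be combined during training; the tensor names and the implicit mean reductions are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
import torch.nn.functional as F

def total_detection_loss(rpn_obj_logits, rpn_obj_labels,
                         rpn_box_pred, rpn_box_target,
                         cls_logits, cls_labels,
                         box_pred, box_target,
                         dir_logits, dir_labels):
    """Sum the RPN loss and the three second-stage losses.

    All label tensors are assumed to be float tensors of the same
    shape as their corresponding predictions."""
    # L_RPN: BCE objectness loss + L1 localization loss
    l_rpn = F.binary_cross_entropy_with_logits(rpn_obj_logits,
                                               rpn_obj_labels) \
          + F.l1_loss(rpn_box_pred, rpn_box_target)
    # Second-stage classification, box regression, direction losses
    l_cls = F.binary_cross_entropy_with_logits(cls_logits, cls_labels)
    l_reg = F.l1_loss(box_pred, box_target)
    l_dir = F.binary_cross_entropy_with_logits(dir_logits, dir_labels)
    return l_rpn + l_cls + l_reg + l_dir
\end{verbatim}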
\begin{figure*}[t]
\setlength\abovecaptionskip{8pt}
\centering
\includegraphics[width=0.92\linewidth]{pic/arch.pdf}
\caption{\sysname\ architecture: Federated multimodal learning with heterogeneity-awareness.}
\label{fig:arch}
\end{figure*}
\subsubsection{Modality Alignment} \label{sssec:mod_alignment}
The heterogeneous data generated by multiple modalities poses a challenge to conventional OD networks. Specifically, the input 3-D lidar point clouds and the mechanically scanned 2-D radar heatmap are incompatible and cannot be readily fused or imputed (as will be explained in \S~\ref{sssec:imputation}) in either the original data space or the feature space. To reconcile the incompatibility, we first voxelize the 3-D point cloud obtained by lidar~\cite{yang2018pixor}. Since we are interested in performing vehicle detection from the bird's-eye view, the horizontal 2-D slices of the resulting lidar data can be deemed as an image with 36 channels, i.e., 35 channels depicting the point occupancy in the space and 1 channel \newrev{indicating} the overall intensity of the lidar signals obtained on the horizontal plane. Similarly, the radar signal can be deemed as an image with a single channel since it has no 3-D information. After converting the data into ``multi-channel'' images, they are further registered by considering the extrinsics and resolutions of the sensors, as well as vehicle kinematics. \newrev{Finally, two independent yet identical feature extractors (those of the OD network as shown on the left side of Figure~\ref{fig:object_detection}) are used} to process lidar image $\mathbf{x}_l \in \mathcal{L}$ and radar image $\mathbf{x}_r \in \mathcal{R}$, where $\mathcal{L}$ and $\mathcal{R}$ are the datasets containing lidar and radar images, respectively. \newrev{While the same architecture of these feature extractors guarantees that the modality alignment is preserved in the feature space,
they differ in the number of input channels to} cater to the respective needs of lidar and radar data.
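For illustration, the sketch below converts a lidar point cloud into the 36-channel bird's-eye-view image described above (35 occupancy slices plus 1 intensity channel); the spatial ranges and grid resolution are assumptions made for this example rather than values prescribed by \sysname.
\begin{verbatim}
import numpy as np

def lidar_to_bev(points, x_range=(-40., 40.), y_range=(-40., 40.),
                 z_range=(-2., 5.), resolution=0.2, n_slices=35):
    """Voxelize a lidar point cloud (N x 4: x, y, z, intensity) into
    a BEV image with 35 occupancy channels + 1 intensity channel."""
    w = int((x_range[1] - x_range[0]) / resolution)
    h = int((y_range[1] - y_range[0]) / resolution)
    bev = np.zeros((n_slices + 1, h, w), dtype=np.float32)

    # Keep only points inside the region of interest
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
         (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1]))
    pts = points[m]

    xi = ((pts[:, 0] - x_range[0]) / resolution).astype(np.int32)
    yi = ((pts[:, 1] - y_range[0]) / resolution).astype(np.int32)
    zi = (pts[:, 2] - z_range[0]) / (z_range[1] - z_range[0]) * n_slices
    zi = np.clip(zi.astype(np.int32), 0, n_slices - 1)

    bev[zi, yi, xi] = 1.0                         # per-slice occupancy
    np.maximum.at(bev[n_slices], (yi, xi), pts[:, 3])  # intensity
    return bev
\end{verbatim}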
\subsubsection{Feature-Level Sensor Fusion} \label{sssec:fl_fusion}
Two approaches exist for fusing multimodal data, i.e., data-level and feature-level fusion. \sysname\ opts for feature-level fusion \newrev{thanks to the} better flexibility and lower coupling offered by fusion at a \newrev{later} stage in the network. Specifically, to extend the OD network in \S~\ref{sssec:od_basics} to a multimodal setting, we further add parallel feature extractors for other modalities. Supposing the feature extractors output a lidar feature map $\mathbf{z}_l$ and a radar feature map $\mathbf{z}_r$, one naive method to perform feature-level fusion would be to concatenate $\mathbf{z}_l$ and $\mathbf{z}_r$, and feed the concatenated features to \newrev{Step-2 of the OD} network. However, this straightforward method fails to exploit the inter-modality relationship. A more relevant approach for exploiting the relationship is to apply the cross-attention mechanism~\cite{wei2020multi}. It generates an attention mask, in which information from a different modality is harnessed to enhance the latent features of the modality of interest (e.g., an attention mask derived from lidar is used to enhance radar features, and vice versa). Different from the existing self-attention mechanism~\cite{vaswani2017attention}, our cross-attention mechanism focuses on modeling the cross-correlation among different modalities, and it adaptively learns the spatial correspondence to derive better alignment of important details from different modalities.
Essentially, \newrev{our} cross-attention mechanism can be described as transforming the latent representation $\mathbf{z}$ into a query $\mathbf{q}$ and a key-value pair $\mathbf{k}$ and $\mathbf{v}$, and then mapping them to an output. The query, keys, and values are all linearly transformed versions of the \newrev{input $\mathbf{z}_s\!: s \in \{\text{lidar}, \text{radar}\}$:}
\begin{align}\label{eq:qkv}
\mathbf{q}_s = \mathbf{W}_q\mathbf{z}_{\bar{s}} + \mathbf{b}_q,~~~
\mathbf{k}_s = \mathbf{W}_k \mathbf{z}_{\bar{s}} + \mathbf{b}_k,~~~
\mathbf{v}_s = \mathbf{W}_v \mathbf{z}_s + \mathbf{b}_v,
\end{align}
where $\bar{s}$ is the complementary sensing modality of $s$ (e.g., if $s$ is radar, then $\bar{s}$ is lidar, and vice versa), $\mathbf{W}_q$, $\mathbf{W}_k$, $\mathbf{W}_v$ and $\mathbf{b}_q$, $\mathbf{b}_k$, $\mathbf{b}_v$ are trainable matrices and vectors that help transform the input to its corresponding query $\mathbf{q}_s$, key $\mathbf{k}_s$, and value $\mathbf{v}_s$, whose dimensions are denoted by $d_q$, $d_k$, $d_v$, respectively. The output context $\mathbf{z^{\prime}}_s$ is obtained as a weighted sum of the values in $\mathbf{v}_s$, where the weight of each value is a normalized product of the query $\mathbf{q}_s$ and its corresponding key $\mathbf{k}_s$:
$
\mathbf{z^{\prime}}_s=\operatorname{softmax}\left(\frac{1}{\sqrt{d_k}}\mathbf{q}_s \mathbf{k}_s^{T} \right) \mathbf{v}_s.
$
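A minimal PyTorch sketch of this cross-attention block is given below, assuming the feature maps have been flattened into token sequences; the module name and layer dimensions are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class CrossAttention(nn.Module):
    """Query and key come from the complementary modality; the value
    comes from the modality of interest (see the formulation above)."""
    def __init__(self, dim, d_qk=64, d_v=64):
        super().__init__()
        self.to_q = nn.Linear(dim, d_qk)   # W_q, b_q
        self.to_k = nn.Linear(dim, d_qk)   # W_k, b_k
        self.to_v = nn.Linear(dim, d_v)    # W_v, b_v
        self.scale = d_qk ** -0.5          # 1 / sqrt(d_k)

    def forward(self, z_s, z_bar_s):
        # z_s, z_bar_s: (batch, tokens, dim) flattened feature maps
        q = self.to_q(z_bar_s)    # query from the complementary modality
        k = self.to_k(z_bar_s)    # key from the complementary modality
        v = self.to_v(z_s)        # value from the modality itself
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v           # enhanced context z'_s

# Usage sketch: radar features enhanced by a lidar-derived mask
# z_radar_ctx = CrossAttention(dim=256)(z_radar, z_lidar)
\end{verbatim}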
\subsection{\sysname\ Framework} \label{ssec:autofed_sysdesign}
We intend to design an FL framework that extends \newrev{our} multimodal vehicle detection network in \S~\ref{ssec:multimodal_network} to a training scenario where the data are collected by geographically distributed AVs. As illustrated in Figure~\ref{fig:arch}, \sysname\ improves the multimodal vehicle detection network in three aspects: i) modifying the RPN loss to deal with client annotation heterogeneity, ii) employing an autoencoder to perform data imputation of missing sensing modalities, and iii) applying a client selection strategy based on a $k$-d tree~\cite{bentley1975multidimensional} to overcome model divergence brought about by the environmental \newrev{and aforementioned} heterogeneity.
\subsubsection{\brev{Modified Loss Function for Tolerating Annotation Anomalies}}\label{sssec:mce}
As stated in \S~\ref{ssec:label_skew}, the heterogeneity of labeled data may send wrong gradient signals during \sysname\ training, since bounding boxes that should be classified as foreground \newrev{can be} wrongly labeled as background when their correct annotations are missing. The motivation for our modified loss is that, despite the lack of correct annotations, the \sysname\ model can identify vehicles wrongly labeled as background according to its own well-established classifier, \brev{thus avoiding sending erroneous gradient signals during backpropagation and better guiding the convergence
on the OD loss surface mentioned in \S~\ref{ssec:statement}.} Specifically, if the feature map of an anchor region is found to be similar to a vehicle, the classifier naturally assigns it a high probability $p$ of being a vehicle. \brev{This comes under the reasonable assumption that, since the global model is trained sufficiently with on average high-quality annotations, it can be more trustworthy than the annotations from a few incompetent clients.} %
Recall \newrev{the BCE loss of the RPN in \S~\ref{sssec:od_basics}: $L_{\mathrm{cls}}^{\mathrm{RPN}} = -p^{*} \log \left(p\right)-\left(1-p^{*}\right) \log \left(1-p\right)$, where $p^{*}$ is the training label taking on values of 0 or 1, \brev{respectively indicating that the anchor region is background or vehicle}.
Consequently, the modified cross-entropy (MCE) loss becomes:}
\begin{align}
L_{\mathrm{MCE}}^{\mathrm{RPN}} =
\left\{\begin{matrix}
0, & p>p_{\mathrm{th}}~\text{and}~p^{*}=0, \\ -p^{*} \log p-(1-p^{*}) \log (1-p), & \text{otherwise},
\end{matrix}\right.
\end{align}
where $p_{\mathrm{th}}$ is a threshold value above which we believe that the classifier is more trustworthy than the annotations. \brev{The value of $p_{\mathrm{th}}$ is determined by hyperparameter search in \S~\ref{sssec:loss_thresh}.}
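A minimal PyTorch sketch of the MCE loss is given below, with $p$ the predicted vehicle probability and $p^{*}$ the binary annotation; the mean reduction and the default threshold value are illustrative assumptions.
\begin{verbatim}
import torch

def mce_loss(p, p_star, p_th=0.1):
    """Modified cross-entropy: zero out the loss of anchors labeled
    as background (p* = 0) that the classifier confidently predicts
    as a vehicle (p > p_th)."""
    eps = 1e-7
    p = p.clamp(eps, 1.0 - eps)
    ce = -(p_star * torch.log(p) + (1.0 - p_star) * torch.log(1.0 - p))
    trusted = (p > p_th) & (p_star == 0)  # classifier overrides label
    return torch.where(trusted, torch.zeros_like(ce), ce).mean()
\end{verbatim}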
\begin{figure}[h]
\setlength\abovecaptionskip{6pt}
\vspace{-1.5ex}
\captionsetup[subfigure]{justification=centering}
\centering
\subfloat[Average precision.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/missing_anno_sol1.pdf}
\end{minipage}
\label{subfig:effect_loss_ap}
}
\subfloat[Average recall.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/missing_anno_sol2.pdf}
\end{minipage}
\label{subfig:effect_loss_ar}
}
\caption{\newrev{Comparison between CE and} MCE loss.}
\label{fig:effect_loss}
\vspace{-.5ex}
\end{figure}
To demonstrate the efficacy of the MCE loss, we train a multimodal vehicle detection network \brev{using the settings in \S~\ref{ssec:label_skew}}. The training results of the regular CE loss and our MCE loss are shown in Figure~\ref{fig:effect_loss}; \newrev{they evidently confirm the superiority of the MCE loss,
though the average precision and average recall of both CE and MCE losses} fluctuate around their means after sufficient training (approximately 15 epochs). \newrev{First of all}, the average precision of vehicle detection is respectively 0.57 and 0.4 when the MCE and CE losses are used. Similarly, \newrev{there is a gap greater than 0.1} in the average recalls when the two losses are used. Moreover, it is \newrev{clear that, though} training with the CE loss achieves higher precision and recall in the first few epochs, it is quickly overtaken by the MCE loss, which keeps an upward trend and converges faster. Last but not least, one may also observe a slight downward trend in performance when the CE loss is used after the 15-th epoch. The performance gaps and different performance trends clearly demonstrate that our MCE loss can make full use of vehicle annotations while avoiding backpropagating erroneous gradients caused by missing annotations.
\subsubsection{\brev{Modality Imputation with Autoencoder for Tolerating Modality Anomalies}} \label{sssec:imputation}
We have shown in \S~\ref{ssec:modality_hetero} that conventional data imputation methods (e.g., filling the missing modalities with 0's) incur information loss, and may even introduce biases into the network. To leverage the valuable information in the heterogeneous sensing modalities, we propose to fill in the missing data by leveraging the relations among different modalities. Since different modalities are aligned and loosely coupled (as explained in \S~\ref{sssec:mod_alignment} and \S~\ref{sssec:fl_fusion}), we employ a convolutional autoencoder with residual connections (which connect a layer to later layers by skipping some layers in between, thus facilitating information flow) to directly perform modality imputation. The encoder of the autoencoder consists of 4 convolutional layers, and correspondingly, the decoder of the autoencoder consists of 4 transposed convolutional layers. \brev{Consequently, the lightweight architecture of our autoencoder only incurs negligible overhead, representing an increase of only 4.38\% (3.129~\!GFLOPS vs. 2.988~\!GFLOPS) over the \sysname\ variant without the autoencoder.} It should be noted that the autoencoder is pre-trained and does not participate in the training process of \sysname. During the pre-training stage, the autoencoder aims to learn a latent representation and reconstruct the missing modality. For example, when the radar modality $\mathcal{R}$ is missing, the autoencoder encodes the lidar modality $\mathcal{L}$ and translates the latent information therein to fill in the missing radar modality.
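The sketch below illustrates one plausible instantiation of such an imputation autoencoder in PyTorch (lidar-to-radar direction); the channel counts, strides, and skip-connection placement are assumptions for illustration and not necessarily identical to our implementation.
\begin{verbatim}
import torch.nn as nn

class ImputationAE(nn.Module):
    """4-layer conv encoder, 4-layer transposed-conv decoder, with
    residual (skip) connections between matching resolutions.
    The input spatial size is assumed to be divisible by 16."""
    def __init__(self, in_ch=36, out_ch=1, base=32):
        super().__init__()
        ch = [base, base * 2, base * 4, base * 8]
        self.enc = nn.ModuleList([
            nn.Conv2d(in_ch, ch[0], 3, stride=2, padding=1),
            nn.Conv2d(ch[0], ch[1], 3, stride=2, padding=1),
            nn.Conv2d(ch[1], ch[2], 3, stride=2, padding=1),
            nn.Conv2d(ch[2], ch[3], 3, stride=2, padding=1)])
        self.dec = nn.ModuleList([
            nn.ConvTranspose2d(ch[3], ch[2], 4, stride=2, padding=1),
            nn.ConvTranspose2d(ch[2], ch[1], 4, stride=2, padding=1),
            nn.ConvTranspose2d(ch[1], ch[0], 4, stride=2, padding=1),
            nn.ConvTranspose2d(ch[0], out_ch, 4, stride=2, padding=1)])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        skips = []
        for conv in self.enc:
            x = self.act(conv(x))
            skips.append(x)
        for i, deconv in enumerate(self.dec[:-1]):
            # residual connection to the encoder feature map that has
            # the same resolution as the upsampled decoder output
            x = self.act(deconv(x)) + skips[-(i + 2)]
        return self.dec[-1](x)

# Pre-training target (illustrative): an L1 reconstruction loss
# between ImputationAE(lidar_bev) and the corresponding radar image.
\end{verbatim}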
\begin{figure}[h]
\setlength\abovecaptionskip{6pt}
\vspace{-1.5ex}
\captionsetup[subfigure]{justification=centering}
\centering
\subfloat[Average precision.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/miss_modality_sol1.pdf}
\end{minipage}
\label{subfig:effect_ae_ap}
}
\subfloat[Average recall.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/miss_modality_sol2.pdf}
\end{minipage}
\label{subfig:effect_ae_ar}
}
\caption{Modality imputation with an autoencoder.}
\label{fig:effect_ae}
\vspace{-.5ex}
\end{figure}
To show the efficacy of the above method, we train the multimodal vehicle detection network \brev{following the settings in \S~\ref{ssec:modality_hetero}}, and compare the average precision and recall of autoencoder imputation with zero-filling in Figures~\ref{subfig:effect_ae_ap} and~\ref{subfig:effect_ae_ar}, respectively. One may readily observe that \newrev{zero-filling only achieves an average precision of approximately 0.4, lower than the average precision of about 0.5 achieved by our autoencoder imputation.} Similarly, autoencoder imputation also surpasses zero-filling in terms of average recall by a \newrev{discernible} margin. \newrev{Figure~\ref{fig:effect_ae} also indicates
that autoencoder imputation only takes about 5 epochs to converge, much faster than} the convergence of zero-filling (i.e., 10 and 15 epochs). The higher average precision and recall, as well as the faster convergence, \newrev{clearly demonstrate} that our designed autoencoder makes full use of the heterogeneous data by taking into account the \newrev{correlations} among different modalities.
\subsubsection{\brev{Client Selection for Tolerating Model Weight Anomalies}} \label{sssec:client_sel}
\brev{Environment heterogeneities, including different weather and road conditions (as indicated in \S~\ref{ssec:environ_hetero}), as well as other human-induced heterogeneities (e.g., inaccurate annotations), are not easily solvable using the techniques described in Sections~\ref{sssec:mce} and~\ref{sssec:imputation}, yet they can cause serious model divergence among the clients.}
Training with \brev{diverging }%
clients holding extremely biased datasets may produce updates that contradict the models from other clients, thus increasing the overall loss. \brev{To make things worse, the chaotic loss surface mentioned in \S~\ref{sec:introduction} and \S~\ref{ssec:statement} can disorient the gradient descent algorithm used for training the OD model, and further diverge the model weights. }These observations urge us to devise a novel client selection strategy immune to divergence, rather than blindly using FedAvg to aggregate model weights from all clients equally. %
By selectively removing outlier clients, the client selection strategy should help the optimization navigate the loss surface more efficiently. %
Suppose there are $N$ clients $\{C_1, \cdots, C_n, \cdots, C_N\}$ in total, which form a set $S$. To mitigate the issue of diverged models, we would like to dynamically select a subset $S^{\prime} =\{ C_1, \cdots, C_m, \cdots, C_M\}$ of $M$ clients ($M<N$) after each FL communication round to minimize the sum of inter-client distances of model weights. To achieve this, we propose that, after receiving the local models from the clients, the central server constructs a $k$-d tree using the received model weights. The $k$-d tree is a bisecting structure where each branch point is the median in some dimension, and this structure helps improve the efficiency of finding the \newrev{nearest client (local) models} with minimum distances. Subsequently, the central server traverses every client in the set $S$, and queries its $M-1$ nearest neighbors efficiently using the $k$-d tree data structure. The client with the minimum distance sum to its $M-1$ neighbors, together with these $M-1$ neighbors, forms the subset of selected clients $S^{\prime}$. Since the time complexity of one query is $O(\log{N})$, traversing the whole set $S$ demands a complexity of $O(N\log{N})$, which saves a lot of time compared with the $O(N^2)$ complexity of brute-force search, especially when many clients are involved. Finally, the \newrev{central server} aggregates the model weights from the selected clients in subset $S^{\prime}$, and distributes the updated global model to all clients in $S$ for training in the next communication round.
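For concreteness, the following sketch implements the selection step with SciPy's $k$-d tree; flattening (and possibly compressing, e.g., by PCA) the model weights into fixed-length vectors is an assumption of this sketch rather than a requirement stated above.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def select_clients(client_weights, m):
    """Select the m clients whose (flattened) model weights are
    mutually closest, using a k-d tree for neighbor queries.
    client_weights: (N, D) array, one weight vector per client."""
    tree = cKDTree(client_weights)        # tree construction
    best_sum, best_idx = np.inf, None
    for i in range(len(client_weights)):
        # k = m neighbors include the query point itself (distance 0),
        # i.e., the client plus its m - 1 nearest neighbors
        dists, idx = tree.query(client_weights[i], k=m)
        if dists.sum() < best_sum:
            best_sum, best_idx = dists.sum(), idx
    return best_idx                       # indices of selected clients

# Aggregation over the selected subset (FedAvg-style averaging):
# selected = select_clients(W, m=int(0.4 * N))
# w_global = W[selected].mean(axis=0)
\end{verbatim}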
\begin{figure}[h]
\setlength\abovecaptionskip{6pt}
\vspace{-1.5ex}
\captionsetup[subfigure]{justification=centering}
\centering
\subfloat[PCA embedding.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/div3.pdf}
\end{minipage}
\label{subfig:client_sel_pca}
}
\subfloat[Average precision.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/diverging_rmvd_precision.pdf}
\end{minipage}
\label{subfig:client_sel_precision}
}
\caption{Client selection mitigates diverged models.}
\label{fig:client_sel}
\end{figure}
To illustrate the effect of our client selection strategy, we train the multimodal vehicle detection network \brev{following the settings in \S~\ref{ssec:environ_hetero}}. After each communication round, we let the central server select 40\% of the clients \needrev{(i.e., $M = 0.4N$)} to form a subset of clients with minimum \newrev{inter-client local model} weight distance, as demonstrated in Figure~\ref{subfig:client_sel_pca}. The average vehicle detection precision is shown in Figure~\ref{subfig:client_sel_precision}. One may readily observe that the precision of vehicle detection reaches up to 0.6 when client selection is enabled, and it fluctuates around 0.5 when model weights from all clients are aggregated using the FedAvg algorithm. Moreover, \newrev{Figure~\ref{fig:client_sel} also demonstrates} that client selection makes the training converge faster
\newrev{in fewer than 20 epochs,} while the training without client selection barely starts to converge \newrev{until} the 25-th epoch. These phenomena indicate that client selection helps better utilize data from beneficial clients. Upon further inspection, we find that after convergence, the fluctuation of the precision curve with client selection is much smaller than that without client selection, which indicates that the mechanism indeed selects mutually-enhancing clients while excluding erroneous gradient signals from outliers. \brev{Additionally, it can be observed that the average precision with client selection becomes stable after only 70 epochs. This confirms that the model has effectively learned from all clients (including the corner cases), so additional training will not yield any further improvement in performance.}
\RestyleAlgo{ruled}
\LinesNumbered
\begin{algorithm}[t]
\caption{\sysname\ training.}\label{alg:fed}
\SetKwInOut{Input}{Require}
\SetKwFunction{Fns}{Server Executes}
\SetKwProg{Fn}{}{:}{}
\SetKwFunction{Fc}{Client Selection}
\SetKwFunction{Fu}{Client Update}
\Input{$N$ is the total number of clients, $c$ is the percentage of clients to choose.}
\KwData{$\{(\mathcal{L}_1, \mathcal{R}_1), \cdots, (\mathcal{L}_n, \mathcal{R}_n), \cdots, (\mathcal{L}_N, \mathcal{R}_N)\}$ where $(\mathcal{L}_n, \mathcal{R}_n)$ is the local collected lidar and radar data on the $n$-th AV.}
\Fn{\Fns}{
initialize the global model $w^g_0$ at $t=0$\;
$S \leftarrow \{C_1, \cdots, C_N \}$\;
\For{communication round $t$}{
\For{$C_n \in S$ in parallel }{
$w_{t+1, n} \leftarrow \FuncSty{Client Update}(n)$\;
$W_t \leftarrow W_t \cup w_{t+1, n}$\;
}
$M \leftarrow c\times N$\;
$W_t^{\prime} \leftarrow \Fc(W_t, M)$\;
$w^g_{t+1} \leftarrow \FuncSty{Model Aggregate}(W_t^{\prime})$
}
}
\Fn{\Fu{$n$}}{
$w_{n} \leftarrow w_t^g$ ($w_t^g$ is downloaded global model) \;
\uIf{$\mathcal{R}_n=\varnothing$}{
$\mathcal{R}_n \leftarrow$ \FuncSty{Radar Imputation} ($\mathcal{L}_n$)\;
}
\uElseIf{$\mathcal{L}_n=\varnothing$}{
$\mathcal{L}_n \leftarrow$ \FuncSty{Lidar Imputation} ($\mathcal{R}_n$)\;
}
\For{each local epoch $e$}{
\For{each batch $b$}{
$w_{n} \leftarrow $ \FuncSty{SGD}($w_{n}, b$) \;
}
}
\KwRet $w_{n}$\;
}
\Fn{\Fc{$W_t$, $M$}}{
$T_t \leftarrow \FuncSty{Construct k-d Tree}(W_t)$\;
\For{$C_i \in S$}{
$S_i \leftarrow \FuncSty{Query k-d Tree}(T_t, C_i, M)$\;
$d_i \leftarrow \Sigma_{m=1}^M \FuncSty{Dist}(C_i, C_m)$ \textbf{for} $C_m \in S_i$\;
}
$I_{min} = \argmin_i (d_i)$\;
\For{$C_m \in S_{I_{min}}$ in parallel }{
$W_{t, I_{min}}^{\prime} \leftarrow W_{t, I_{min}}^{\prime} \cup w_{t, m}$\;
}
\KwRet $W_{t, I_{min}}^{\prime}$\;
}
\end{algorithm}
\subsubsection{Putting It All Together}
We carefully summarize the training strategy of the \sysname\ framework in \textbf{Algorithm~\ref{alg:fed}}. In the algorithm, \textsf{Client Update} is the local training process for each client, \textsf{Radar Imputation} and \textsf{Lidar Imputation} are the imputation functions introduced in \S~\ref{sssec:imputation}, \textsf{SGD} is the standard stochastic gradient descent algorithm with our MCE loss, \textsf{Client Selection} \newrev{is the strategy introduced in \S~\ref{sssec:client_sel}, which includes \textsf{Construct k-d Tree} and \textsf{Query k-d Tree} as the respective processes of constructing and querying the $k$-d tree, and \textsf{Model Aggregate} is} the standard process of averaging the selected local models. \brev{By putting together \sysname's modules, we create a cohesive ensemble to substantially enhance the tolerance to data anomalies. Although some techniques can be relevant even in a single-model context, they work together in the FL setting to help \sysname\ navigate the chaotic loss surface in a more robust and efficient manner. }
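To complement the pseudocode, a compact Python sketch of the server-side loop of Algorithm~\ref{alg:fed} is given below; the client interface (\texttt{init\_weights}, \texttt{local\_update}) is an illustrative assumption, and \texttt{select\_clients} refers to the $k$-d tree helper sketched in \S~\ref{sssec:client_sel}.
\begin{verbatim}
import copy
import numpy as np

def server_train(clients, rounds, c=0.4):
    """Server loop: local updates, k-d tree client selection, and
    averaging of the selected local models."""
    n, m = len(clients), int(c * len(clients))
    w_global = clients[0].init_weights()       # initialize global model
    for _ in range(rounds):
        # Clients train locally; imputation and the MCE loss are
        # applied on the client side (see Client Update in Alg. 1)
        local = np.stack([cl.local_update(copy.deepcopy(w_global))
                          for cl in clients])
        selected = select_clients(local, m)    # k-d tree selection
        w_global = local[selected].mean(axis=0)  # model aggregation
    return w_global
\end{verbatim}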
\section{Performance Evaluation}\label{sec:evaluation}
To evaluate the performance of \sysname, we apply \sysname\ to build a vehicle detection application using the benchmark dataset~\cite{oxford_robotcar}.
In particular, we evaluate the performance of \sysname\ from four aspects: i) \newrev{comparisons with five baseline methods to demonstrate the superiority of \sysname; ii) cross-domain tests to show that \sysname\ is robust against real-life scenarios with heterogeneous data; iii) an ablation study} to show the necessity of key designs, and iv) an investigation of the impact of FL-related hyper-parameters on the model performance.
\subsection{Dataset}
We \brev{mainly} use the Oxford Radar RobotCar dataset~\cite{oxford_robotcar} in our experiments. The dataset is collected by a vehicle driving around Oxford, and it includes both lidar and radar data. The lidar data is obtained by merging the point clouds collected by two Velodyne HDL-32E~\cite{velodyne} lidars \newrev{mounted on the left and right of the vehicle's top. Each} lidar sensor provides a range of 100~\!m, a range resolution of 2~\!cm, a horizontal field of view (FoV) of 360$^\circ$, and a vertical FoV of 41.3$^{\circ}$. The radar data is collected by a millimeter-wave Frequency-Modulated Continuous-Wave (FMCW) NavTech CTS350-X radar~\cite{NavTech} mounted between the two lidar sensors at the center of the vehicle and aligned to the vehicle axes. The radar achieves a 2-D horizontal scan by rotation, operating with a center frequency of 76.5~\!GHz, a bandwidth of 1.5~\!GHz, a scan rate of 4~\!Hz, a range resolution of 4.38~\!cm, a rotational angle resolution of 0.9$^{\circ}$, a beamwidth of 1.8$^{\circ}$, and a range up to 163~\!m; it complements lidar by providing robustness to weather conditions that may cause trouble to lidar. We further convert the data residing in polar coordinates to Cartesian coordinates and then \newrev{calibrate the radar and lidar extrinsic parameters (i.e., translation and rotation with respect to the world) by performing pose optimization to minimize} the differences between lidar and radar observations. Since there are no original ground truths for vehicle detection, we create rotated bounding boxes by inspecting the point cloud data using Scalabel~\cite{scalabel}, which is a scalable open-source web annotation tool for various types of annotations on both images and videos.
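Because the radar frames are natively recorded in polar coordinates, we illustrate the polar-to-Cartesian resampling step with the sketch below; the output image size and resolution are assumptions for this example, not the exact values used in our pipeline.
\begin{verbatim}
import numpy as np

def radar_polar_to_cartesian(polar, max_range_m=163.0,
                             out_size=640, out_res_m=0.25):
    """Resample a radar scan (rows = azimuth bins, cols = range bins)
    into a Cartesian image via nearest-neighbor lookup."""
    n_az, n_rng = polar.shape
    half = out_size * out_res_m / 2.0
    xs = np.linspace(-half, half, out_size)
    xx, yy = np.meshgrid(xs, xs)

    rng = np.sqrt(xx ** 2 + yy ** 2)              # metres from sensor
    az = np.mod(np.arctan2(yy, xx), 2 * np.pi)    # azimuth in [0, 2pi)

    ri = np.clip((rng / max_range_m * n_rng).astype(int), 0, n_rng - 1)
    ai = np.clip((az / (2 * np.pi) * n_az).astype(int), 0, n_az - 1)

    cart = polar[ai, ri]
    cart[rng > max_range_m] = 0.0                 # outside sensor range
    return cart
\end{verbatim}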
\brev{We also involve another dataset, nuScenes~\cite{caesar2020nuscenes}, in our experiments to demonstrate \sysname's generalizability across datasets. The dataset contains 1 lidar and 5 radars: the lidar has a 360$^\circ$ horizontal FoV, a 40$^{\circ}$ vertical FoV, and a 2~\!cm range resolution, while the 5 radars have a 77~\!GHz center frequency and a 0.1~\!km/h velocity accuracy. Unlike the radar in the Oxford dataset that performs fine-grained mechanical scans, the radars in the nuScenes dataset are fixed in position and have no scan capability. As a result, they only generate low-quality point clouds. Since \sysname\ cannot demonstrate its full potential with the inferior radar modality, we limit the evaluation on the nuScenes dataset to \S~\ref{ssec:superiority}. For both datasets, we take out a total of 50,000 samples, and use 80\% and 20\% of the total data to create the training and test sets, respectively.}
\subsection{System Implementation}
We implement the vehicle detection application using \sysname\ on multiple NVIDIA Jetson TX2~\cite{jetson} devices. The central server is equipped with an Intel Xeon Gold 6226 CPU~\cite{intel} and 128~\!GB RAM. For both \sysname\ and the baselines, we implement an FL protocol that \brev{allows 20 participating clients to randomly take 2,000 non-overlapping samples from the 40,000-sample training set.} %
Each participating client performs 5 local training epochs for \newrev{each} communication round. As for the software, Python 3.7 and PyTorch 1.9.1~\cite{pytorch} are used for implementing the application. Our vehicle detection model is built upon Detectron2~\cite{detectron2}, which is a Python library that provides state-of-the-art OD models. In particular, the settings for the multimodal vehicle detection model are as follows:
\begin{itemize}
\item \brev{The autoencoder is trained with 20,000 samples from the Oxford dataset, distinct (in terms of traffic, weather, and locations) from those used for training the rest of \sysname.}
\item The angles of the rotated anchors used by the RPN are set to $-90^{\circ}$, $-45^{\circ}$, $0^{\circ}$, and $45^{\circ}$.
\item Both lidar and radar feature extractors are composed of four consecutive convolutional layers with a kernel size of 3 and padding of 1.
\item The aspect ratio of the anchors is set to 2.5 to conform to the length-width ratio of regular vehicles~\cite{corolla}.
\item The IoU threshold \newrev{(defined later)} of NMS for removing excessive proposals during testing is set to 0.2.
\end{itemize}
In the local training process, we employ the SGD optimizer with an initial learning rate of 0.01 and a decay factor of 0.01.
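For concreteness, the local optimizer configuration corresponds to the following PyTorch call; treating the decay factor as weight decay is an assumption here, since the text does not distinguish between weight decay and learning-rate decay.
\begin{verbatim}
import torch

def make_local_optimizer(model):
    # SGD with an initial learning rate of 0.01; the 0.01 "decay
    # factor" is interpreted as weight decay (an assumption).
    return torch.optim.SGD(model.parameters(), lr=0.01,
                           weight_decay=0.01)
\end{verbatim}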
\begin{figure*}[t]
\setlength\abovecaptionskip{6pt}
\vspace{-1.5ex}
\captionsetup[subfigure]{justification=centering}
\centering
\subfloat[AP@IoU=0.5.]{
\begin{minipage}[b]{0.24\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/eval_base2.pdf}
\label{subfig:ap_iou05}
\end{minipage}
}
\subfloat[AP@IoU=0.65.]{
\begin{minipage}[b]{0.24\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/eval_base3.pdf}
\label{subfig:ap_iou065}
\end{minipage}
}
\subfloat[AP@IoU=0.8.]{
\begin{minipage}[b]{0.24\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/eval_base4.pdf}
\label{subfig:ap_iou08}
\end{minipage}
}
\subfloat[AP@IoU=0.5:0.9.]{
\begin{minipage}[b]{0.24\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/eval_base1.pdf}
\label{subfig:ap_iou0509}
\end{minipage}
}
\\ \vspace{-1.6ex}
\subfloat[AR, maxDets = 1.]{
\begin{minipage}[b]{0.24\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/eval_base5.pdf}
\label{subfig:ar_max1}
\end{minipage}
}
\subfloat[AR, maxDets = 10.]{
\begin{minipage}[b]{0.24\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/eval_base6.pdf}
\label{subfig:ar_max10}
\end{minipage}
}
\subfloat[AR, maxDets = 100.]{
\begin{minipage}[b]{0.24\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/eval_base7.pdf}
\label{subfig:ar_max100}
\end{minipage}
}
\subfloat[Convergence time.]{
\begin{minipage}[b]{0.24\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/convergence.pdf}
\label{subfig:convergence}
\end{minipage}
}
\caption{\brev{Comparing \sysname\ with several baseline methods, in terms of FL convergence and communication overhead.}}
\label{fig:baseline_comparison}
%
\end{figure*}
\subsection{Experiment Setup} \label{ssec:setup}
\paragraph{Baselines.} To comprehensively evaluate the performance of \sysname, we compare \sysname\ against five baselines:
\begin{itemize}
\item \textbf{Standalone} trains a vehicle detection model using heterogeneous data (e.g., heterogeneous annotations, sensing modalities, and environments) locally without collaborations among clients.
\item \textbf{Standalone+} trains a vehicle detection model locally using the same setting as Standalone, but the data are sampled in a homogeneous way.
\item \textbf{FedAvg} is the first and perhaps the most widely adopted FL method~\cite{mcmahan2017communication}. During training, all clients communicate updated local parameters to the central server and download the aggregated (i.e., averaged) global model for local training in the next round.
\item \textbf{FedCor} is a correlation-based client selection strategy for heterogeneous FL~\cite{tang2022fedcor}. It formulates the goal of accelerating FL convergence as \newrev{an optimization problem that maximizes} the posterior expectation of loss decrease utilizing a Gaussian process.
\item \textbf{FedProx} adds a proximal term to the loss function of local training to reduce the distance between the local model and the global model~\cite{li2020federated}, hence addressing both system and statistical heterogeneity.
\end{itemize}
In addition, we adopt the same multimodal vehicle detection model configuration for each baseline method as \sysname. We also apply the same training settings and data configurations as \sysname\ to the baseline methods; the results are reported after the same number of communication rounds. \brev{It should be noted that we use Standalone and Standalone+ as baselines to provide context for how much better the FL methods perform: the comparison confirms that they do improve upon standalone training, because each client only has limited data in reality.}
\paragraph{Evaluation Metrics.} Before introducing the evaluation metrics, we first define an important concept called IoU (intersection over union), which evaluates the overlap between two bounding boxes. Suppose the ground truth and predicted bounding boxes are $B_{gt}$ and $B_p$, respectively, then IoU is given by the overlapping area between the predicted bounding box and the ground truth bounding box divided by the area of union between them:
\begin{align}
\mathrm{IoU} = \frac{\mathrm{Area}(B_{p}\cap B_{gt})}{\mathrm{Area}(B_{p}\cup B_{gt})}
\end{align}
We define $\mathit{TP}$ as the number of correct detections (i.e., detections with an IoU greater than the predefined threshold), $\mathit{FP}$ as the number of wrong detections (i.e., detections with an IoU smaller than the threshold), and $\mathit{FN}$ as the number of ground truths that are not identified. Based on these definitions, we define precision and recall as:
\begin{align}
\mathrm{Precision} = \frac{\mathit{TP}}{\mathit{TP}+\mathit{FP}},\quad \mathrm{Recall} = \frac{\mathit{TP}}{\mathit{TP}+\mathit{FN}}.
\end{align}
Since there is often a tradeoff between precision and recall, we also define an average precision (AP) value by averaging precision over recall values from 0 to 1, thus summarizing the precision-recall curve. Moreover, we calculate the average recall (AR) value at IoU thresholds from 0.5 to 1, thus summarizing the distribution of recall values across a range of IoU thresholds~\cite{lin2014microsoft}. AP and AR are our key evaluation metrics hereafter.
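For illustration, the snippet below computes the IoU of two axis-aligned boxes together with precision and recall from detection counts; the actual evaluation uses rotated boxes and COCO-style AP/AR summarization, so this sketch only serves to make the definitions concrete.
\begin{verbatim}
def iou(box_p, box_gt):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_p[0], box_gt[0]), max(box_p[1], box_gt[1])
    x2, y2 = min(box_p[2], box_gt[2]), min(box_p[3], box_gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_gt = (box_gt[2] - box_gt[0]) * (box_gt[3] - box_gt[1])
    return inter / (area_p + area_gt - inter)

def precision_recall(tp, fp, fn):
    """Precision and recall from counts at a fixed IoU threshold."""
    return tp / (tp + fp), tp / (tp + fn)

# Example: tp = 8 correct detections, fp = 2 spurious detections,
# fn = 2 missed vehicles  ->  precision = 0.8, recall = 0.8
\end{verbatim}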
\subsection{Superiority of \sysname} \label{ssec:superiority}
\begin{figure*}[t]
\setlength\abovecaptionskip{6pt}
\vspace{-2.5ex}
\captionsetup[subfigure]{justification=centering}
\centering
\subfloat[Ground truth.]{
\begin{minipage}[b]{0.135\linewidth}
\centering
\includegraphics[width = 0.99\textwidth]{pic/groundtruth.jpg}
\label{subfig:de_gt}
\end{minipage}
}
\subfloat[\sysname.]{
\begin{minipage}[b]{0.135\linewidth}
\centering
\includegraphics[width = 0.99\textwidth]{pic/FedAuto.jpg}
\label{subfig:de_fedauto}
\end{minipage}
}
\subfloat[\brev{Standalone.}]{
\begin{minipage}[b]{0.135\linewidth}
\centering
\includegraphics[width = 0.99\textwidth]{pic/mono.jpg}
\label{subfig:de_mono}
\end{minipage}
}
\subfloat[Standalone+.]{
\begin{minipage}[b]{0.135\linewidth}
\centering
\includegraphics[width = 0.99\textwidth]{pic/mono_plus.jpg}
\label{subfig:de_mono_plus}
\end{minipage}
}
\subfloat[FedAvg.]{
\begin{minipage}[b]{0.135\linewidth}
\centering
\includegraphics[width = 0.99\textwidth]{pic/FedAvg.jpg}
\label{subfig:de_fedavg}
\end{minipage}
}
\subfloat[FedCor.]{
\begin{minipage}[b]{0.135\linewidth}
\centering
\includegraphics[width = 0.99\textwidth]{pic/FedCor.jpg}
\label{subfig:de_fedcor}
\end{minipage}
}
\subfloat[FedProx.]{
\begin{minipage}[b]{0.135\linewidth}
\centering
\includegraphics[width = 0.99\textwidth]{pic/PoC.jpg}
\label{subfig:de_FedProx}
\end{minipage}
}
\caption{Example detection results of \sysname\ and other baseline methods.}
\label{fig:detection_examples}
\vspace{-.5ex}
\end{figure*}
We compare \sysname\ with the baselines in terms of the evaluation metrics defined in \S~\ref{ssec:setup}. Specifically, we report AP when the IoU is 0.5, 0.65, and 0.8, respectively, and the mean AP when the IoU ranges from 0.5 to 0.9. As for AR, we focus on the cases when the number of maximum detections is 1, 10, and 100, respectively. We report the evaluation results in Figure~\ref{fig:baseline_comparison}. Figure~\ref{subfig:ap_iou05} shows that, when the IoU is set as 0.5, the AP of \sysname\ is 0.71 while the numbers for FedAvg and FedProx are 0.68 and 0.58, respectively. Moreover, the APs of Standalone, Standalone+, and FedCor oscillate dramatically and barely converge. Similarly, as shown in Figures~\ref{subfig:ap_iou065},\brev{~\ref{subfig:ap_iou08}}, and~\ref{subfig:ap_iou0509}, \sysname\ significantly outperforms the baselines. \newrev{One might find it curious that the AP curve of \sysname\ in Figure~\ref{subfig:ap_iou08} appears to be fluctuating, but this can be readily attributed to the fact} that setting the IoU as 0.8 is a stringent criterion for the vehicle detection task and causes the performance to become unstable.
With regard to AR \newrev{shown in} Figures~\ref{subfig:ar_max1},~\ref{subfig:ar_max10}, and~\ref{subfig:ar_max100}, \sysname\ again \newrev{exhibits significantly better performance than the baselines, in terms of}
both AP and AR. Moreover, we also find \newrev{that, compared with the baselines, \sysname\ reaches the maximum AP and AR with fewer communication rounds, as also confirmed by the results} presented in Figure~\ref{fig:baseline_comparison}. Specifically, while \sysname\ converges in 10 communication rounds,
\newrev{all baseline} methods converge after 20 communication rounds. Furthermore, the AP and AR curves of \sysname\ rarely fluctuate, and the
training of \sysname\ is much more stable than that of the baselines, indicating that the multimodal network trained by \sysname\ is much more robust.
We also \newrev{showcase} some examples of vehicle detection in Figure~\ref{fig:detection_examples}. In the examples, we use the 2-D lidar intensity map as background for reference, and draw the ground truth and predicted bounding boxes upon it. Figure~\ref{subfig:de_fedauto} shows that \sysname\ generates high-precision vehicle detection results \newrev{very close} to the ground truth in Figure~\ref{subfig:de_gt}. In contrast, the \brev{Standalone}, Standalone+, and FedAvg methods make incorrect predictions outside the road, FedCor misses most of the vehicles, and FedProx misses some vehicles and generates inaccurate bounding boxes that overlap with each other. The results \newrev{evidently confirm} that \sysname\ outperforms the baselines with more accurate predictions.
Furthermore, we compare the communication cost of \newrev{\sysname\ training (the same as other FL baselines)} with centralized training, i.e., all the clients transfer the collected data to a central server for training the model. The results show that, while centralized training transfers 660,000~\!KB of sensor data per client during each communication round, \sysname\ only transfers 62,246~\!KB of model weights. \newrev{In other words,} \sysname\
reduces the per-client communication cost by more than 10$\times$ compared with centralized training, firmly validating its communication-efficient design.
\brev{We finally compare the performance of \sysname\ with the baselines on the nuScenes dataset~\cite{caesar2020nuscenes} to demonstrate its generalizability across different datasets. We train \sysname\ for 100 communication rounds on the dataset. As shown in Figure~\ref{fig:eval_nuscenes}, \sysname\ outperforms all of the baselines on the nuScenes dataset by a large margin, firmly demonstrating that the evaluation results can be generalized to other datasets as well. It is worth noting that the overall AP and AR results of \sysname\ on this dataset (0.687 and 0.672) are slightly lower than those shown in Figures~\ref{subfig:ap_iou05} and \ref{subfig:ar_max100} on the Oxford Radar RobotCar dataset, which can be attributed to a variety of factors, such as the complexity of the scenes and objects, sensor mounting positions, and most importantly, the sparsity and lower quality of the radar point cloud provided by the nuScenes dataset. }
\begin{figure}[h]
\setlength\abovecaptionskip{6pt}
\vspace{-2.5ex}
\captionsetup[subfigure]{justification=centering}
\centering
\subfloat[Average precision.]{
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/nuscenes_ap.pdf}
\end{minipage}
\label{subfig:nuscenes_ap}
}
\subfloat[Average recall.]{
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/nuscenes_ar.pdf}
\end{minipage}
\label{subfig:nuscenes_ar}
}
\caption{\brev{Evaluation on the nuScenes dataset.}}
\label{fig:eval_nuscenes}
\vspace{-1.5ex}
\end{figure}
\subsection{Cross-domain Robustness}
We evaluate the robustness of \sysname\ in cross-domain settings by investigating how the trained model performs with \newrev{varied sensing modalities and under different weather conditions.} Since the AVs' routes in the experiment encompass different roads and areas, the results in \S~\ref{ssec:superiority} have already proven the cross-road and cross-area capabilities of \sysname; therefore, we omit their discussion here. %
\subsubsection{Various Sensing Modalities}
Since \sysname\ involves both lidar and radar sensors, there are three possible sensor combinations, i.e., i) lidar + radar (Li + Ra), ii) without radar (w/o Ra), and iii) without lidar (w/o Li). We evaluate the performance of \sysname\ under these three settings, and report the results in Figure~\ref{fig:eval_sensing_modality}. The results show that, when the IoU is set to be above 0.5, the median APs achieved by \sysname\ are 0.71, 0.57, and 0.12 under the aforementioned three settings. Correspondingly, the median ARs achieved by \sysname\ are 0.70, 0.59, and 0.12.
The autoencoder employed by \sysname\ helps the model maximize the efficacy of information embedded in either radar or lidar data, and \sysname\ \newrev{exhibits} the smallest performance drop compared with the baselines, whose performance is drastically impacted by missing modalities. However, since the performance drop of missing modalities stems from the loss of information, even the adoption of an autoencoder cannot \newrev{fully} close the performance gap. \brev{We have also noticed that the AP and AR of \sysname\ are significantly lower in the radar-only mode compared to the other sensor combinations. Upon further investigation, we suspect that this may be because the importance of radar is overshadowed by lidar, which provides most of the information used by \sysname. Specifically, the majority of the vehicles in the dataset are close to the ego vehicle, probably due to the narrow width of the road, and as a result, lidar can detect almost all of these vehicles because they are within its range. This leads to the lower performance of the radar-only mode, as radar is
often meant to supplement the lidar sensor for long-range detection.}
\begin{figure}[b]
\setlength\abovecaptionskip{6pt}
\vspace{-2.5ex}
\captionsetup[subfigure]{justification=centering}
\centering
\subfloat[Average precision.]{
\begin{minipage}[b]{\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/sensor_precision.pdf}
\label{subfig:sensor_precision}
\end{minipage}
}
\\
\subfloat[Average recall.]{
\begin{minipage}[b]{\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/sensor_recall.pdf}
\label{subfig:sensor_recall}
\end{minipage}
}
\\
\subfloat[Lidar + radar.]{
\begin{minipage}[b]{0.33\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/groundtruth_weather.jpg}
\label{subfig:li_and_ra}
\end{minipage}
}
\subfloat[Missing radar.]{
\begin{minipage}[b]{0.33\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/noradar.jpg}
\label{subfig:wo_ra}
\end{minipage}
}
\subfloat[Missing lidar.]{
\begin{minipage}[b]{0.33\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/nolidar.jpg}
\label{subfig:wo_li}
\end{minipage}
}
\caption{Different missing modalities.}
\label{fig:eval_sensing_modality}
\vspace{-1.5ex}
\end{figure}
We also show one example of vehicle detection with the three sensor combinations in Figure~\ref{fig:eval_sensing_modality}. As Figure~\ref{subfig:li_and_ra} illustrates, when both lidar and radar are available, \sysname\ is able to recognize most of the vehicles on the road. As a comparison, Figure~\ref{subfig:wo_ra} shows that missing radar data affects the detection of vehicles at a \newrev{farther} distance, but the nearby vehicles can still be identified. This phenomenon is consistent with the characteristics of the radar sensor, i.e., the radar has an extended range due to its better penetration capability, while lidar can only obtain a much shorter range due to attenuation caused by in-air particles~\cite{zhao2020method}. In addition, we also visualize the case of missing lidar in Figure~\ref{subfig:wo_li}, where the vehicles \newrev{in the distance} can be well detected by the radar. The results \newrev{clearly demonstrate the complementary sensing capabilities of radar and lidar.}
\begin{figure}[b]
\setlength\abovecaptionskip{6pt}
\vspace{-2.5ex}
\captionsetup[subfigure]{justification=centering}
\centering
\subfloat[Average precision.]{
\begin{minipage}[b]{0.98\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/weather_precision.pdf}
\label{subfig:weather_precision}
\end{minipage}
}
\\
\subfloat[Average recall.]{
\begin{minipage}[b]{0.98\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/weather_recall.pdf}
\label{subfig:weather_recall}
\end{minipage}
}
\\
\subfloat[Foggy.]{
\begin{minipage}[b]{0.33\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/weather_fog.jpg}
\label{subfig:de_foggy}
\end{minipage}
}
\subfloat[Rainy.]{
\begin{minipage}[b]{0.33\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/weather_rain.jpg}
\label{subfig:de_rainy}
\end{minipage}
}
\subfloat[Snowy.]{
\begin{minipage}[b]{0.33\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/weather_snow.jpg}
\end{minipage}
\label{subfig:de_snowy}
}
\caption{Different weathers.}
\label{fig:eval_weather}
\vspace{-1.5ex}
\end{figure}
\subsubsection{Robustness against Adverse Weather Conditions}
Adverse weather is a realistic but challenging \newrev{scenario} for vehicle detection, as it has a negative impact on the sensing capabilities~\cite{kilic2021lidar}.
\newrev{Therefore,} we evaluate the performance of \sysname\ under different adverse weather conditions (e.g., fog, rain, and snow). Due to the lack of available datasets collected under adverse weather, we employ the physical models in DEF~\cite{bijelic2020seeing} and LISA~\cite{kilic2021lidar} to simulate fog, rain, and snow, respectively. Specifically, we set the fog density to 0.05~\!m$^{-1}$ in the DEF model and the rate of rain and snow to 30~\!mm/h in the LISA model.
Comparing the backgrounds in Figures~\ref{subfig:de_foggy},~\ref{subfig:de_snowy}, and~\ref{subfig:de_rainy}, while foggy weather attenuates lidar signals and shrinks the field of view, rainy and snowy weather mainly affects the lidar signals by inducing scattered reflections near the sensor. In particular, the three adverse weather conditions degrade the median AP of \sysname\ from 0.71 to 0.65, 0.63, and 0.63, respectively, and degrade the median AR from 0.71 to 0.64, 0.63, and 0.63, respectively. The performance discrepancies among these adverse weather conditions can be attributed to their different effects on the reflection of lidar signals. Despite the performance degradation, \sysname\ exhibits the best generalization when compared with the baselines. \brev{The consistently high performance of \sysname\ under all adverse weather conditions confirms that the client selection mechanism has allowed the DNN model to effectively incorporate information from unusual circumstances after sufficient training.}
\subsection{Ablation Study}
We evaluate the impact of each module of \sysname\ on the model performance. We use \sysname\ to train the model for 150 communication rounds, and record the AP in Table~\ref{tab:ablation}. Taking the AP when IoU is above 0.5 as an example, \sysname\ achieves an AP of 0.731, while \sysname\ without the MCE loss, without autoencoder-based modality imputation, and without client selection obtains APs of 0.707, 0.692, and 0.542, respectively. One may think that the MCE loss and modality imputation only improve the results by small margins, while client selection is much more effective in significantly improving performance. However, it is worth noting that both the MCE loss and modality imputation are indispensable parts: although the lack of the two can be compensated by client selection (which excludes erroneous gradients) to a certain extent, there are still many heterogeneous scenarios that cannot be addressed by client selection alone, such as \newrev{those demonstrated in Figures}~\ref{fig:damage_miss_anno} and~\ref{fig:damage_miss_mod}. The integration of the MCE loss and modality imputation, together with client selection, can act as ``belt and braces'' to guarantee the robustness of \sysname\ in \newrev{diversified} heterogeneous scenarios.
\begin{table}[h]
\vspace{-1ex}
\centering
\small
\caption{Effects of key \sysname\ parts.}
\label{tab:ablation}
\vspace{-1.5ex}
\begin{tabular}{|l|cccc|}
\hline
& \multicolumn{4}{c|}{AP} \\ \hline
& \multicolumn{1}{l|}{IoU=0.5:0.9} & \multicolumn{1}{l|}{IoU=0.5} & \multicolumn{1}{l|}{IoU=0.65} & IoU=0.8 \\ \hline
\sysname\ & \multicolumn{1}{c|}{0.461} & \multicolumn{1}{c|}{0.731} & \multicolumn{1}{c|}{0.698} & 0.371 \\ \hline
w/o MCE & \multicolumn{1}{c|}{0.405} & \multicolumn{1}{c|}{0.707} & \multicolumn{1}{c|}{0.660} &0.212 \\ \hline
w/o AE & \multicolumn{1}{c|}{0.396} & \multicolumn{1}{c|}{0.692} & \multicolumn{1}{c|}{0.657} & 0.189 \\ \hline
w/o CS & \multicolumn{1}{c|}{0.342} & \multicolumn{1}{c|}{0.542} & \multicolumn{1}{c|}{0.523} & 0.272 \\ \hline
\end{tabular}
\vspace{-2ex}
\end{table}
\subsection{Hyper-parameter Evaluation}
\subsubsection{Loss Threshold} \label{sssec:loss_thresh}
As stated in \S~\ref{sssec:mce}, $p_{\mathrm{th}}$ is a threshold above which we believe that the classifier is more trustworthy than the manual annotations. On the one hand, when $p_{\mathrm{th}}$ is too small, the MCE loss and the traditional CE loss are equivalent, and we cannot exclude incorrect gradients induced by missing annotation boxes. On the other hand, many real backgrounds can be mistakenly excluded if $p_{\mathrm{th}}$ is set too large. Therefore, we evaluate the impact of $p_{\mathrm{th}}$ on the \sysname\ performance. As Figure~\ref{subfig:mce_thresh_ap} shows, the AP of vehicle detection increases from 0.7 to 0.73 as $p_{\mathrm{th}}$ increases to 0.1. However, the AP rapidly decreases to around 0 at $p_{\mathrm{th}}=0.3$. \brev{Likewise, a similar trend can be observed in Figure~\ref{subfig:mce_thresh_ar} for the AR of vehicle detection. Overall,} Figure~\ref{fig:mce_thresh} offers guidance for choosing $p_{\mathrm{th}}$.
\begin{figure}[h]
\setlength\abovecaptionskip{6pt}
\vspace{-1.5ex}
\captionsetup[subfigure]{justification=centering}
\centering
\subfloat[Average precision.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/hyper1.pdf}
\end{minipage}
\label{subfig:mce_thresh_ap}
}
\subfloat[\brev{Average recall.}]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/hyper10.pdf}
\end{minipage}
\label{subfig:mce_thresh_ar}
}
\caption{Impact of the MCE threshold.}
\label{fig:mce_thresh}
\vspace{-1.5ex}
\end{figure}
\subsubsection{The Number of Selected Clients} Another hyperparameter that significantly impacts the performance of \sysname\ is the number of clients selected for model aggregation. On the one hand, a small percentage of selected clients cannot fully utilize the diverse data collected by different clients and may introduce bias into the federated model. On the other hand, if a very large proportion of the clients are selected, we cannot effectively mitigate the detrimental effect caused by diverged local models. Therefore, the number of selected clients balances the tradeoff between utilizing data and excluding diverged models. As Figure~\ref{subfig:sel_clients_ap} shows, the AP of \sysname\ first increases with a greater percentage of selected clients, but starts to drop after the percentage reaches 0.4. The reason is that as excessive clients are selected for aggregation, the divergence among them degrades the performance of the federated model. \brev{Furthermore, in Figure~\ref{subfig:sel_clients_ar}, it can be seen that the AR of \sysname\ follows a similar trend to the AP, and reaches its peak when the percentage of selected clients is 0.4.}%
\begin{figure}[h]
\setlength\abovecaptionskip{6pt}
\vspace{-1.5ex}
\captionsetup[subfigure]{justification=centering}
\centering
\subfloat[Average precision.]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/hyper2.pdf}
\end{minipage}
\label{subfig:sel_clients_ap}
}
\subfloat[\brev{Average recall.}]{
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width = 0.96\textwidth]{pic/hyper20.pdf}
\end{minipage}
\label{subfig:sel_clients_ar}
}
\caption{Impact of selected clients percentage.}
\label{fig:sel_clients}
\vspace{-1.5ex}
\end{figure}
\section{Related Work and Discussion}\label{sec:related_work}
Recent years have witnessed rapid developments in DNN-based OD methods~\cite{redmon2016you, liu2016ssd, lin2017focal, girshick2014rich, girshick2015fast, ren2015faster}. These approaches have been applied to AD~\cite{chen2017multi, yang2018pixor, li20173d, li2016vehicle}. Since most AVs are equipped with multiple sensors (e.g., lidar, radar, and camera),
\newrev{these sensors become the technological} foundation for OD systems to fully exploit the multimodal data via sensor fusion. Among various sensor fusion schemes, the combination of lidar and another sensor (e.g., radar or camera)~\cite{ku2018joint, xu2018pointfusion, qi2018frustum, chen2017multi, liang2018deep, qian2021robust} is a widely adopted option due to their complementary characteristics~\cite{geiger2012we, oxford_robotcar}. One challenge in fusing lidar with other sensors is the unique data structure of lidar, i.e., the 3-D point cloud, which is a point set incompatible with the 2-D matrices used in conventional computer vision tasks. One way to overcome this challenge is to employ specially designed DNNs, such as PointNet~\cite{qi2017pointnet}, to directly extract features from point clouds and fuse them with other sensing data in the feature space~\cite{xu2018pointfusion}. Another approach is voxelization, i.e., transforming the point cloud into an image-like grid with the height dimension deemed as image channels. Therefore, the transformed point clouds can be handled by conventional OD-DNNs and fused with other modalities, as demonstrated in~\cite{yang2018pixor, simon2019complexer, zhou2018voxelnet, luo2018fast}.
FL~\cite{konevcny2016federated} is a distributed machine learning paradigm that \newrev{transfers only model weights
instead of explicitly sharing} raw data with the central server. \sysname\ employs FL to enable data crowdsensing without breaching privacy or incurring unaffordable communication costs on AVs. Despite recent FL applications in classification and regression tasks~\cite{konevcny2016federated, li2020federated, li2021fedmask, tu2021feddl,rehman2022federated,so2022fedspace}, applying FL to more sophisticated computer vision tasks such as OD (especially vehicle detection) is far from being fully explored. \newrev{In~\cite{jallepalli2021federated}, the authors investigate} the possibility of applying FL to AD applications, and conduct preliminary experiments to verify privacy protection and convergence speed. FedVision~\cite{liu2020fedvision} proposes an online visual OD platform powered by FL,
\newrev{but it} focuses more on building and deploying a cloud-based platform, \newrev{without much concern for FL-related designs.
Fjord~\cite{horvath2021fjord} claims to target data heterogeneity in FL, yet it seems to have missed certain complicated aspects,
such as annotation and modality heterogeneity tackled in \sysname.}
\newrev{While differing from existing OD proposals by pioneering federated OD on AVs, \sysname\ is also the first to take into account the effects of all kinds of multimodal heterogeneity for FL-OD on AVs.
However, \sysname\ still bears one limitation: it stresses the FL aspect of crowdsensing, pessimistically assuming a finite number of clients unable to provide complete annotations. In other words, we have not considered the positive aspects innate to crowdsensing~\cite{ParkGauge-MDM16, WOLoc-INFOCOM17}, such as the impact of client incentives~\cite{Truthful-TC16, PostedP-MobiHoc16}. In a future study,} we will extend the design goals of \sysname\ to include designing proper incentives, in order to expand its user base and attract more AV owners to perform collective learning on distributed AV data and \newrev{thus} guarantee the service quality of \sysname.
\section{Conclusion}\label{sec: conclusion}
Taking an important step towards full driving automation, we have proposed \sysname\ in this paper for federated multimodal vehicle detection. Employing a novel loss function, data imputation technique, and client selection strategy, the \sysname\ framework gracefully handles the multimodal data crowdsensed by multiple AV clients and extracts the maximum amount of information from the highly heterogeneous data, thus releasing its full potential in the vehicle detection task. With extensive experiments under highly heterogeneous scenarios and comparisons with other baselines, we have demonstrated the promising performance of \sysname\ in vehicle detection for autonomous driving. We plan to extend the \sysname\ framework to encompass more sensing modalities,
in order to promote its real-life usage and wider acceptance.
Currently, \sysname\ targets FL-driven vehicle detection, but we are planning to apply FL to other out-vehicle sensing tasks, such as pedestrian detection, lane tracking, and environment semantic segmentation. Moreover, modern vehicles are also equipped with in-vehicle sensing modalities to improve user experience, and we believe FL can help improve the performance of deep analytics upon these modalities too.
Therefore, we are actively exploring the potential of using FL for full vehicle intelligence, particularly for in-vehicle user monitoring (e.g.,~\cite{RF-Net-SenSys20, zheng2020v2ifi, chen2021movifi});
this should put us on the right track towards a future with fully intelligent transportation.
\section*{Acknowledgement}
We are grateful to the anonymous reviewers for their constructive suggestions. This research was supported in part by National Research Foundation (NRF) Future Communications Research \& Development Programme (FCP) grant FCP-NTU-RG-2022-015 and MOE Tier 1 Grant RG16/22. We further thank ERI@N and NTU-IGP for supporting the PhD scholarship of Tianyue Zheng.
\section{Introduction \label{Introduction}}
\IEEEPARstart{O}{ne} common assumption of statistical learning theory is that the training data and test data are drawn from an identical feature distribution, which may be violated in many situations. Moreover, in practical applications, collecting labeled training data is often expensive and time-consuming. Thus, there is a strong demand to leverage the knowledge from a source domain with sufficient labels to help design an effective model for the unlabeled target domain data, which follows a different feature distribution. To this end, considerable efforts have been devoted to domain adaptation \cite{Survey}, and impressive progress has been made in various tasks, {\it e.g.}, object recognition \cite{ MCTL,HCA, CMMS}, semantic segmentation \cite{ReID,ReID1}, and sentiment analysis \cite{PRDA,SDAF}.
\begin{figure*}[h]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\renewcommand{\figurename}{Figure}
\centering
\includegraphics[width=0.98\textwidth]{./framework.pdf}
\caption{Flowchart of our proposed CDGS. We integrate domain-invariant feature learning, adaptive graph learning and cross-domain label propagation into a unified optimization framework. Besides, in order to construct a high-quality affinity matrix in our CDGS, we further propose a discriminative graph self-learning strategy. To be specific, instead of predefining the similarity metric, our proposal adaptively assigns neighbors to each sample according to the local distance in the projected feature space. To fully explore the discriminative information contained in the well-labeled source data and the pseudo-labeled target data, we further impose a block diagonal structure constraint on the source data and a label smoothness constraint on data from both domains.}
\label{Framework}
\end{figure*}
The goal of domain adaptation is to mitigate the distribution discrepancy between the source and target domains, such that a classifier can be applied across the two domains. To accomplish this, numerous works \cite{JDA,DICD,DICE,DTLC,PACET} have been devoted to learning a domain-invariant space where the distribution discrepancy can be significantly reduced via minimizing a distance metric, {\it e.g.}, the widely used maximum mean discrepancy (MMD) \cite{MMD}. Along this line, JDA \cite{JDA} is a pioneering method, which aims to reduce the joint distribution shift between two domains by simultaneously aligning the marginal distribution and conditional distribution. Inheriting the core idea of minimizing the joint distribution discrepancy, numerous subsequent studies following JDA \cite{DICD,DICE,PACET} focus on further reducing the conditional distribution discrepancy by improving the prediction accuracy of target pseudo-labels. Despite the brilliant achievements in the literature, most of them generally overlook the underlying data manifold structure in the process of inferring labels on the target domain, thus making the performance of domain adaptation far from satisfactory.
More recently, to explore the data distribution structure, several studies \cite{DICE,DGA-DA,LPJT,GAKT} innovatively propose to infer target pseudo-labels by cross-domain label propagation \cite{LP}.
Generally, these methods follow a multi-stage paradigm in each iteration: 1) projecting the source and target data into a domain-invariant common feature space; 2) constructing an affinity matrix by calculating the sample similarity in the projected space with a predefined metric, {\it e.g.}, the Gaussian kernel similarity \cite{DGA-DA,LPJT}; 3) assigning pseudo-labels for target data via propagating the labels of source data with the constructed affinity matrix. Although improved performance has been achieved by these methods, they still suffer from three crucial issues:
\begin{itemize}
\item \textbf{Issue 1} The domain-invariant feature learning, affinity matrix constructing and target labels inferring are separated into three independent stages. Thus, the correlation among these three parts could not be fully exploited.
\item \textbf{Issue 2} Constructing the affinity matrix with predefined metric may not capture the inherent similarity of samples in both domains, which might seriously affect the effectiveness of cross-domain label propagation.
\item \textbf{Issue 3} During the construction of affinity matrix, the discriminative information contained in the ground-truth labels of source data, as well as in the pseudo-labels of target data is less explored.
\end{itemize}
In this study, we propose a novel domain adaptation method called Cross-domain label propagation with Discriminative Graph Self-learning (CDGS) to remedy the above three issues. As illustrated in Fig. \ref{Framework}, to tackle the first issue, we propose to formulate the three parts of cross-domain label propagation into a unified optimization framework, which learns domain-invariant features, constructs the affinity matrix and infers target labels simultaneously. In the unified framework, these three parts can assist each other from an iterative optimization perspective. For the second issue, inspired by \cite{PCAN,AWSSL}, we resort to a graph self-learning strategy, which assigns adaptive neighbors to each sample according to the local distance in the projected feature space. In this way, the underlying data manifold structure of the two domains can be captured more effectively.
To handle the third issue, for well-annotated source data, we enforce the learned connected subgraph to have a block diagonal structure, which means that only source samples within the same category are allowed to be connected, while the connection weight of source samples between different categories is forcibly set to 0. In this manner, the discriminative information of source data can be exploited to the maximum extent. Beyond that, inspired by \cite{AWSSL,USSL}, we further impose the label smoothness constraint during the graph self-learning,
such that the weakly supervised information contained in target pseudo-labels can be well inserted into the adaptive graph.
It is noteworthy that, besides unsupervised domain adaptation (UDA), our CDGS can be readily extended to the semi-supervised domain adaptation (SDA) scenario where some labeled target samples are available. Interestingly, the extended SDA model can be solved with the same algorithm as UDA.
To sum up, we list our contributions in fourfolds:
\begin{enumerate}
\item We propose a novel cross-domain label propagation method for domain adaptation named CDGS, which integrates domain-invariant feature learning, affinity matrix constructing and target labels inferring into a unified optimization framework. Through the joint optimization, the three parts could boost each other and thus more effective knowledge transfer can be achieved.
\item To construct a high-quality affinity matrix in CDGS, we propose a discriminative graph self-learning strategy, which can not only adaptively capture the local connectivity structure of data from two domains but also effectively explore the discriminative information.
\item An efficient optimization algorithm is designed to solve the objective function of our CDGS. In addition to UDA, we further extend CDGS to the semi-supervised scenario in a direct but effective way and solve the extended model with the identical optimization algorithm.
\item Extensive experiments on six standard datasets verify that the proposed CDGS can consistently outperform the state-of-the-art methods in both UDA and SDA settings.
\end{enumerate}
The rest of this paper is organized as follows. Section \ref{Related work} provides a brief review on related domain adaptation and label propagation methods. Section \ref{Proposed Method} introduces the proposed CDGS approach, the optimization algorithm, the computational complexity and the extension to SDA. Extensive experimental analysis is presented in Section \ref{experiments}. Finally, this paper is summarized in Section \ref{conclusion}.
\section{Related Work \label{Related work}}
In this section, we review the related works in terms of domain adaptation and label propagation, and highlight the difference between the previous works and our proposal.
\subsection{Domain Adaptation}
Domain adaptation aims to leverage the knowledge from a well-labeled source domain to an unlabeled but related target domain. In general, domain adaptation methods can be grouped into UDA and SDA. In UDA, no labeled target samples are available, while in SDA, the target domain contains a few labeled samples.
Generally, existing UDA methods can be roughly divided into three categories: instance reweighting \cite{CSA, TJM}, classifier adaptation \cite{ARTL, MEDA} and feature adaptation \cite{TCA,JDA,DICD,DICE} methods. Instance reweighting methods assign different weights to source samples to reduce the distribution shift between two domains. Classifier adaptation methods adapt the classifier trained on source data to target data. Feature adaptation methods seek a common feature space \cite{JDA} or latent intermediate subspaces \cite{GFK} to make the two domains have similar distributions. The proposed CDGS falls into the line of feature adaptation methods that learn a common feature space, thus we focus on reviewing the works related to it. Among existing works, TCA \cite{TCA} is the first to propose aligning the marginal distribution between two domains with the MMD metric. Following this idea, JDA \cite{JDA} further considers the conditional distribution, such that joint distribution alignment can be achieved. To boost the classification performance, several subsequent works propose to employ the discriminative information by simultaneously encouraging intra-class compactness and inter-class dispersion \cite{DICD} or promoting domain-irrelevant class clustering \cite{DICE}. To refine the target pseudo-labels and further mitigate the conditional distribution discrepancy, several recent works attempt to exploit the geometric structure underlying the data manifold by assigning target pseudo-labels via cross-domain label propagation \cite{DICE,DGA-DA,LPJT,GAKT} or performing label propagation only on the target domain \cite{DTLC,LSC}, and promising performance has been achieved.
Our CDGS also employs a cross-domain label propagation strategy to assign target pseudo-labels. However, CDGS is significantly different from these methods. First, CDGS integrates domain-invariant feature learning, affinity matrix constructing and target labels inferring into a unified optimization formulation, while \cite{DICE,DGA-DA,LPJT} separate the three parts into independent stages, and \cite{GAKT} only combines domain-invariant feature learning and target labels inferring. Through the joint optimization in our CDGS, the three parts can benefit from each other to yield superior performance. Second, CDGS presents a novel self-learning strategy to construct a discriminative graph. Specifically, the neighbors of each sample are adaptively assigned according to the local distance, which is calculated based on the projected features and label information of source and target data. Besides, only source samples within the same class are allowed to be connected, so as to exploit the source discriminative information. Thus, the discriminative graph can not only faithfully capture the inherent local connectivity structure of samples but also effectively explore the discriminative information contained in source ground-truth labels and target pseudo-labels, which is beneficial to effective target pseudo-label assignment.
In the past few years, deep domain adaptation methods have attracted considerable interest and different strategies have been proposed to align deep features. For example, DAN \cite{DAN} exploits the multikernel MMD to reduce the marginal distribution discrepancy in the reproducing kernel Hilbert space (RKHS). Based on this framework, JAN \cite{JAN} proposes to align the joint distribution between two domains. To capture fine-grained information, DSAN \cite{DSAN} further aligns the relevant subdomain distributions within the same category in two domains based on a local MMD. Different from them, DANN \cite{DANN} tries to learn domain-agnostic feature representations with adversarial learning. Later, MADA \cite{MADA} trains a separate domain discriminator for each class. To enhance positive transfer and relieve negative transfer, Wang \emph{et al}. \cite{Wang2020} introduced a self-adaptive re-weighted adversarial approach to promote domain alignment in terms of conditional distribution. However, these deep methods may confront the challenges of long training times and massive resource consumption, while CDGS is faster and can achieve excellent performance by simply using off-the-shelf deep features.
Many methods have also been developed for SDA \cite{MMDT,CDLS,OBTL}. For instance, MMDT \cite{MMDT} learns the transformation matrix and classifier parameters jointly by making samples within the same class have high similarity. CDLS \cite{CDLS} aligns the conditional distribution by selecting representative landmarks. OBTL \cite{OBTL} is a Bayesian transfer learning framework, which relates the two domains by a joint prior density. The proposed CDGS can be readily extended to SDA. Specifically, we take the labeled and unlabeled target data as a whole. In this case, we can estimate the target class means more accurately, which results in more accurate conditional distribution alignment. Besides, as a common strategy in semi-supervised learning, reliable connections between labeled and unlabeled data are built by discriminative graph self-learning, so the knowledge from labeled samples can be propagated to the unlabeled ones. Moreover, the resulting optimization problem has the same form as that of the unsupervised setting, thus they can be solved with the same optimization algorithm.
\subsection{Label Propagation}
The goal of label propagation is to propagate the label information of limited labeled samples to amounts of unlabeled samples through graph. In the graph, a vertex represents a sample and the weight of the edge between two vertexes measures the similarity of the corresponding samples.
GFHF \cite{GFHF} and LGC \cite{LGC} are two classical methods. Both of them first use the Gaussian kernel similarity to build the affinity matrix and then utilize label propagation to predict the unknown labels via Gaussian fields and harmonic functions, or local and global consistency. However, they cannot exploit the relationship between the affinity matrix and the label information of samples due to the two separated stages. To overcome this limitation, STSSL \cite{USSL} integrates the affinity matrix constructing and the unknown labels inferring into one unified optimization framework to exploit the correlation between them. Following this idea, AWSSL \cite{AWSSL} further proposes to adaptively assign the neighbors of each sample and effectively extract robust features by auto-weighting feature selection.
There are several classifier adaptation methods that borrow the advantages of cross-domain label propagation to assign target pseudo-labels, {\it e.g.}, ARTL \cite{ARTL} and MEDA \cite{MEDA}. ARTL, which is relevant to our CDGS, is also a unified framework; it learns an adaptive classifier by jointly optimizing the structural risk on source data, joint distribution alignment and manifold regularization. However, CDGS differs from ARTL in three aspects. First, ARTL learns the classifier with the original features while CDGS conducts subspace learning, which is more flexible and effective. Second, CDGS learns domain-invariant features, constructs the affinity matrix and infers target labels jointly to fully exploit the relationship among them. Third, CDGS and ARTL use different strategies to construct the affinity matrix. Specifically, CDGS introduces a self-learning strategy to capture the intrinsic similarity of samples as well as effectively explore the label information of source and target data. By contrast, ARTL simply utilizes a predefined metric to calculate the similarity for all samples.
\section{Proposed Method \label{Proposed Method}}
In this section, the key notations throughout this paper are first introduced. Then, we describe the details of the proposed CDGS. Next, we design an iterative algorithm to solve the optimization problem and provide the computational complexity analysis. Finally, we extend our method to SDA.
\subsection{Notations}
In UDA, the labeled source data $\mathcal{D}_s = \{\mathbf{X}_s,\mathbf{Y}_s\} = \{(\mathbf{x}_{si},y_{si})\}_{i=1}^{n_s}$ and unlabeled target data $\mathcal{D}_t = \{\mathbf{X}_t\} = \{\mathbf{x}_{tj}\}_{j=1}^{n_t}$ are given, where $\mathbf{x}_{si} \in \mathbb{R}^{m}$ is a source sample ($y_{si} \in \mathbb{R}$ is its label), $\mathbf{x}_{tj} \in \mathbb{R}^{m}$ is a target sample, $n_s$ and $n_t$ represent the number of source and target samples. The entire data matrix is denoted as $\mathbf{X} =[ \mathbf{X}_s,\mathbf{X}_t] = \{\mathbf{x}_{i}\}_{i=1}^{n}$, where $n = n_s + n_t$.
For clarity, the key notations throughout this paper and their descriptions are summarized in Table \ref{notations}.
\begin{table}[]
\centering
\caption{Frequently Used Notations and Their Descriptions}
\label{notations}
\begin{tabular}{ccc}
\toprule
Notation & \quad Description\\
\midrule
$\mathbf{X}_s/\mathbf{X}_t/\mathbf{X}$& \quad source/target/all data\\
$n_s/n_t/n$& \quad numbers of source/target/all samples\\
$\mathbf{S}/\mathbf{L}$& \quad affinity matrix/Laplacian matrix for all data\\
$\mathbf{P}$& \quad projection matrix\\
$\mathbf{F}$& \quad label matrix for all data\\
$\mathbf{H}$& \quad centering matrix\\
$\mathbf{I}_d$ & \quad identity matrix with dimension $d$\\
$m/d$ & \quad dimension of original/projected features\\
$C$& \quad number of shared classes\\
$\mathbf{1}_{p \times q}$ & \qquad a matrix of size $p \times q$ with all elements as $1$ \\
$\mathbf{1}_{p}$ & \quad a column vector of size $p$ with all elements as $1$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Problem Formulation \label{Problem Formulation}}
In this paper, we propose the CDGS framework to address domain adaptation problem, which integrates domain-invariant feature learning, affinity matrix constructing and target labels inferring into a unified optimization objective. The overall framework of our CDGS can be formulated as:
\begin{equation}
\label{framework}
\min_{\mathbf{P}, \mathbf{S}, \mathbf{F}}\Omega(\mathbf{P}, \mathbf{X}) + \alpha \Theta(\mathbf{P}, \mathbf{S}, \mathbf{X})+ \beta \Psi(\mathbf{F},\mathbf{S}) + \gamma\Phi(\mathbf{P})
\end{equation}
where $\mathbf{P} \in \mathbb{R}^{m \times d}$ denotes the projection matrix, $\mathbf{F} \in \mathbb{R}^{n \times C}$ is the label matrix for all data and $\mathbf{S} \in \mathbb{R}^{n \times n}$ represents the affinity matrix. $\Omega(\mathbf{P}, \mathbf{X})$ is employed to learn domain-invariant features. $\Theta(\mathbf{P}, \mathbf{S}, \mathbf{X})$ is utilized to adaptively construct the affinity matrix with the projected features. $\Psi(\mathbf{F},\mathbf{S})$ is used to infer the target labels by cross-domain label propagation. $\Phi(\mathbf{P})$ is the regularization term for the projection matrix to avoid overfitting. $\alpha$, $\beta$ and $\gamma$ are hyperparameters to balance the importance of different parts. As we can see, by integrating the three parts into the joint optimization objective, they could well communicate with each other to achieve more effective knowledge transfer. Next, more details about the three parts are presented.
\subsubsection{Domain-invariant Feature Learning} When $\mathbf{X}_s$ and $\mathbf{X}_t$ are drawn from different feature distributions, it is crucial to reduce the distribution discrepancy between two domains, such that the classifier trained on source data can be directly applied to target domain. To measure the distribution discrepancy, numerous metrics have been proposed. Among them, MMD \cite{MMD} is probably the most widely used one. In the projected space, the MMD distance between two domains can be calculated as the distance between the sample means of the source and target data \cite{DICD}. Considering the large distribution discrepancy across domains, we minimize the marginal distribution distance and the conditional distribution distance simultaneously, and denote them by $\mathcal{L}_{mmd}^{m}$ and $\mathcal{L}_{mmd}^{c}$, respectively. With the MMD metric, marginal distribution distance can be stated as:
\begin{equation}
\label{marginal}
\begin{aligned}
\mathcal{L}_{mmd}^{m}
&=\|\frac{1}{n_s}\sum\limits_{i=1}^{n_s}\mathbf{P}^\mathrm{T}\mathbf{x}_{si}-\frac{1}{n_t}\sum\limits_{j=1}^{n_t}\mathbf{P}^\mathrm{T}\mathbf{x}_{tj}\|_2^2 \\
&=\mathrm{tr}(\mathbf{P}^\mathrm{T}\mathbf{X}\mathbf{M}_0\mathbf{X}^\mathrm{T}\mathbf{P})
\end{aligned}
\end{equation}
where $\mathrm{tr}(\cdot)$ is the trace operator, and $\mathbf{M}_0 \in \mathbb{R}^{n \times n}$ represents the marginal MMD matrix calculated as:
\begin{equation}
\label{M0}
\begin{aligned}
\mathbf{M}_0 =
\begin{bmatrix}
\frac{1}{n_s^2}\mathbf{1}_{n_s \times n_s}& -\frac{1}{n_sn_t}\mathbf{1}_{n_s \times n_t} \\
-\frac{1}{n_sn_t}\mathbf{1}_{n_t \times n_s} & \frac{1}{n_t^2}\mathbf{1}_{n_t \times n_t}
\end{bmatrix}
\end{aligned}
\end{equation}
The calculation of the conditional MMD distance requires the labels of target samples, which are generally unavailable in the domain adaptation task. To remedy this issue, we employ the target pseudo-labels instead of the unavailable true labels to compute the conditional distribution distance as follows:
\begin{equation}
\label{conditional}
\begin{aligned}
\mathcal{L}_{mmd}^{c}
&=\sum\limits_{c=1}^{C}\|\frac{1}{n_s^c}\sum\limits_{\mathbf{x}_{si} \in \mathbf{X}_s^c}\mathbf{P}^\mathrm{T}\mathbf{x}_{si}-\frac{1}{n_t^c}\sum\limits_{\mathbf{x}_{tj} \in \mathbf{X}_t^c}\mathbf{P}^\mathrm{T}\mathbf{x}_{tj}\|_2^2 \\
&=\mathrm{tr}(\mathbf{P}^\mathrm{T}\mathbf{X}(\sum\nolimits_{c=1}^C\mathbf{M}_c)\mathbf{X}^\mathrm{T}\mathbf{P})
\end{aligned}
\end{equation}
where $C$ is the number of classes, $\mathbf{M}_c \in \mathbb{R}^{n \times n}$ is conditional MMD matrix defined as:
\begin{equation}
\label{MMD matrix}
(\mathbf{M}_c)_{ij} =
\begin{cases}
\frac{1}{n_s^cn_s^c}, &\mathrm{if} \ \mathbf{x}_i,\mathbf{x}_j \in \mathbf{X}_s^c;\\
\frac{1}{n_t^cn_t^c}, &\mathrm{if} \ \mathbf{x}_i,\mathbf{x}_j \in \mathbf{X}_t^c;\\
\frac{-1}{n_s^cn_t^c}, &\mathrm{if} \ \mathbf{x}_i \in \mathbf{X}_s^c \wedge \mathbf{x}_{j} \in \mathbf{X}_t^c;\\
\frac{-1}{n_s^cn_t^c}, &\mathrm{if} \ \mathbf{x}_j \in \mathbf{X}_s^c \wedge \mathbf{x}_{i} \in \mathbf{X}_t^c;\\
0, &\text{otherwise}
\end{cases}
\end{equation}
$\mathbf{X}_s^c$ represents all source samples in class $c$, and $n_s^c$ is the corresponding number of samples. Similar definitions can be applied for target samples according to the pseudo-labels. Denote $\mathbf{M} = \sum\nolimits_{c=0}^C\mathbf{M}_c$, then we have the following formula:
\begin{equation}
\label{MMD}
\begin{aligned}
\Omega(\mathbf{P}, \mathbf{X}) = \mathrm{tr}(\mathbf{P}^\mathrm{T}\mathbf{XMX}^\mathrm{T}\mathbf{P})
\end{aligned}
\end{equation}
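For concreteness, the joint MMD matrix $\mathbf{M}=\sum\nolimits_{c=0}^C\mathbf{M}_c$ in Eq. (\ref{MMD}) can be assembled as in the following NumPy sketch. The helper name, its arguments and the final scale normalization are illustrative choices and are not taken from the released implementation:
\begin{verbatim}
import numpy as np

def mmd_matrix(ns, nt, ys, yt_pseudo, C):
    """Joint MMD matrix M = M_0 + sum_c M_c built from class indicators."""
    n = ns + nt
    e = np.zeros((n, 1))
    e[:ns] = 1.0 / ns
    e[ns:] = -1.0 / nt
    M = e @ e.T                                   # marginal term M_0
    for c in range(C):
        e = np.zeros((n, 1))
        src = np.where(np.asarray(ys) == c)[0]            # source samples of class c
        tgt = ns + np.where(np.asarray(yt_pseudo) == c)[0]  # target samples (pseudo-labels)
        if src.size == 0 or tgt.size == 0:
            continue                              # skip classes missing in either domain
        e[src] = 1.0 / src.size
        e[tgt] = -1.0 / tgt.size
        M += e @ e.T                              # conditional term M_c
    return M / np.linalg.norm(M, 'fro')           # optional scale normalization
\end{verbatim}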
\subsubsection{Graph Self-Learning with Source Domain Discriminative Structure Preserving} Obviously, the quality of the affinity matrix is crucial to the performance of cross-domain label propagation. Most previous works \cite{DICE, DGA-DA,LSC} use the same strategy to construct it, calculating the similarity for all samples with a predefined similarity metric, {\it e.g.}, the heat kernel similarity \cite{DICE,DGA-DA}. This strategy may not capture the inherent similarity of samples, thus hindering the correctness of cross-domain label propagation and resulting in serious misclassification of target data. The wrong pseudo-labels will further mislead the conditional distribution alignment in the next iteration, which ultimately results in significant performance degradation. To tackle this issue, inspired by several recent works \cite{PCAN, AWSSL}, we adopt a self-learning strategy, which constructs the affinity matrix by assigning adaptive neighbors to each sample according to the local distance in the projected space. In light of this, the optimization objective of graph self-learning can be stated as follows:
\begin{equation}
\label{asm}
\begin{aligned}
&\min_{\mathbf{S}}\sum\nolimits_{i=1}^n(\sum\nolimits_{j=1}^n\|\mathbf{z}_i-\mathbf{z}_j\|_2^2S_{ij} + \lambda_i\|\mathbf{S}_{i,:}\|_2^2)\\
& =\min_{\mathbf{S}}\mathrm{tr}(\mathbf{P}^\mathrm{T}\mathbf{XL}\mathbf{X}^\mathrm{T}\mathbf{P}) + \|\mathbf{\Lambda}\mathbf{S}\|_F^2\\
&s.t. \quad \mathbf{S}\mathbf{1}_n = \mathbf{1}_n, \ 0 \leq S_{ij} \leq 1
\end{aligned}
\end{equation}
where $\mathbf{z}_i = \mathbf{P}^\mathrm{T}\mathbf{x}_i$ is the projection of sample $\mathbf{x}_i$, $\mathbf{S}_{i,:}$ represents the $i$-th row of $\mathbf{S}$, and $\mathbf{\Lambda} = \mathrm{diag}(\sqrt{\lambda_1}, \sqrt{\lambda_2}, ... , \sqrt{\lambda_n})$. $\mathbf{L}$ is the graph Laplacian matrix calculated as $\mathbf{L} = \mathbf{D} - \mathbf{S}$, and $\mathbf{D}$ is a diagonal matrix with the $i$-th diagonal element $D_{ii} = \sum\nolimits_{j=1}^nS_{ij}$. An $F$-norm regularization term is imposed on the $i$-th ($i = 1,2,\ldots,n$) row of $\mathbf{S}$, and the corresponding regularization parameter is $\lambda_i$, which can be determined automatically and will be elaborated in Section \ref{Optimization Procedure}. Then, we can obtain the following formula for $\Theta(\mathbf{P}, \mathbf{S}, \mathbf{X})$:
\begin{equation}
\label{GFHF3}
\Theta(\mathbf{P}, \mathbf{S}, \mathbf{X}) = \mathrm{tr}(\mathbf{P}^\mathrm{T}\mathbf{XL}\mathbf{X}^\mathrm{T}\mathbf{P}) + \|\mathbf{\Lambda}\mathbf{S}\|_F^2
\end{equation}
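As a small illustration of the graph term in Eq. (\ref{GFHF3}), the Laplacian and the resulting smoothness value can be computed as follows (an illustrative sketch assuming the columns of $\mathbf{X}$ are samples; the helper names are ours):
\begin{verbatim}
import numpy as np

def graph_laplacian(S):
    """L = D - S with D_ii = sum_j S_ij."""
    return np.diag(S.sum(axis=1)) - S

def projected_smoothness(P, X, S):
    """tr(P^T X L X^T P): how smoothly the projected samples vary over the graph."""
    Z = P.T @ X                       # projected samples, d x n
    L = graph_laplacian(S)
    return np.trace(Z @ L @ Z.T)
\end{verbatim}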
In addition, several previous works \cite{DICD, DICE,DGA-DA} have shown that the performance of domain adaptation can be significantly enhanced if the discriminative information of source data is exploited. To this end, we adopt an intuitive strategy for labeled source data: only samples belonging to the same category are allowed to be connected. In this case, each source sample can be connected to two parts, one of which is the source samples within the identical class and the other is all target samples. For simplicity, we fix the probability as $\delta$ and $1-\delta$ for these two parts, respectively. That is, when $i \leq n_s$, we have $\sum\nolimits_{j=1}^{n_s}S_{ij} = \delta$ and $\sum\nolimits_{j=n_s+1}^{n_s+n_t}S_{ij} = 1-\delta$, where $\delta \in [0, 1]$ is a hyperparameter to control the partition of the connection probability. In this way, the learned adaptive discriminative graph has the following structure:
\begin{equation}
\mathbf{S} = \left[
\begin{array}{cc}
\begin{array}{c|c}
\overbrace{
\begin{array}{cccc}
\mathbf{S}_{ss}^{(1)} & \mathbf{0} & \cdots & \mathbf{0}\\
\mathbf{0} & \mathbf{S}_{ss}^{(2)} & \cdots & \mathbf{0}\\
\vdots & \vdots & \ddots & \vdots\\
\mathbf{0} & \mathbf{0} & \cdots & \mathbf{S}_{ss}^{(C)}
\end{array}
}^{\delta}
&
\overbrace{
\begin{array}{ccc}
& & \\
& \\
& \mathbf{S}_{st}& \\
& & \\
& &
\end{array}
}^{1-\delta}
\end{array} \\ \hline
\begin{array}{ccccccc}
&&&&&&\\
&&&&\mathbf{S}_{ta}&&\\
&&&&&&\\
\end{array}
\end{array}
\right]
\end{equation}
where
$\mathbf{S}_{ss}^{(c)}$ represents the affinity matrix of all source samples in class $c$, $\mathbf{S}_{ss}$ is the connected subgraph with block diagonal structure for all source samples, $\mathbf{S}_{st}$ is the connected subgraph between source and target samples, and $\mathbf{S}_{ta}$ denotes the connected subgraph between target samples and all samples. Then, considering the above constraints, the objective of graph self-learning with source domain discriminative structure preserving term $\Theta(\mathbf{P}, \mathbf{S}, \mathbf{X})$ can be formulated as:
\begin{equation}
\label{asm1}
\begin{aligned}
&\min_{\mathbf{S}}\sum\nolimits_{i=1}^n(\sum\nolimits_{j=1}^n\|\mathbf{z}_i-\mathbf{z}_j\|_2^2S_{ij} +\lambda_i\|\mathbf{S}_{i,:}\|_2^2)\\
& =\min_{\mathbf{S}}\mathrm{tr}(\mathbf{P}^\mathrm{T}\mathbf{XL}\mathbf{X}^\mathrm{T}\mathbf{P}) + \|\mathbf{\Lambda}\mathbf{S}\|_F^2\\
&s.t. \ \mathbf{S}\mathbf{1}_n = \mathbf{1}_n, \ 0 \leq S_{ij} \leq 1, \ \sum\limits_{j=1}^{n_s}S_{ij} = \delta, \ i \leq n_s, \\
& \qquad \ S_{ij} = 0, \ i,j \leq n_s\wedge y_{si} \ne y_{sj}
\end{aligned}
\end{equation}
\subsubsection{Cross-Domain Label Propagation with Self-Learned Graph}
The main assumption of label propagation \cite{LP} is that adjacent points tend to have similar labels. Thus, assigning the target pseudo-labels by cross-domain label propagation could effectively exploit the geometric structure underlying the source and target data manifolds, which can improve the accuracy of target pseudo-label prediction and further benefit the subsequent domain-invariant feature learning. To achieve this goal, numerous existing label propagation algorithms are possible candidates. For the sake of simplicity, we employ the classical GFHF algorithm \cite{GFHF}. Suppose the source label matrix is $\mathbf{F}_s \in \mathbb{R}^{n_s \times C}$, where the ($i$, $j$)-th element is 1 if $y_{si} = j$, and 0 otherwise. Denote $\mathbf{F} = [\mathbf{F}_l;\mathbf{F}_t]$, where $\mathbf{F}_l = \mathbf{F}_s$, and $\mathbf{F}_t \in \mathbb{R}^{n_t \times C}$ is the inferred target label matrix. Then, given the self-learned graph $\mathbf{S}$, the optimization problem of the GFHF algorithm can be formulated as:
\begin{equation}
\label{GFHF}
\min_{\mathbf{F}}\sum\limits_{i,j = 1}^n\|\mathbf{F}_i - \mathbf{F}_j\|_2^2S_{ij} = \min_{\mathbf{F}}\mathrm{tr}(\mathbf{F}^\mathrm{T}\mathbf{L}\mathbf{F}) \ s.t. \ \mathbf{F}_l = \mathbf{F}_s
\end{equation}
Thus, we can obtain the following formula for $\Psi(\mathbf{F},\mathbf{S})$:
\begin{equation}
\label{GFHF2}
\Psi(\mathbf{F},\mathbf{S}) = \mathrm{tr}(\mathbf{F}^\mathrm{T}\mathbf{L}\mathbf{F})
\end{equation}
\subsubsection{The Final Form for Discriminative Graph Self-Learning} Actually, the pseudo-labels of target data have been proven to provide useful ``weakly'' supervised information that yields a better connectivity graph \cite{AWSSL,USSL}. In our CDGS,
we further impose {\it label smoothness constraint} ({\it i.e.}, Eq.(\ref{GFHF2})) during graph self-learning to effectively explore the target pseudo-label information.
Thus, our final optimization objective of discriminative graph self-learning can be formulated as:
\begin{equation}
\label{AMC}
\begin{aligned}
&\min_{\mathbf{S}}\sum\nolimits_{i=1}^n((\sum\nolimits_{j=1}^n\|\mathbf{z}_i-\mathbf{z}_j\|_2^2S_{ij} +\beta \|\mathbf{F}_i - \mathbf{F}_j\|_2^2S_{ij})\\
&\qquad \qquad +\lambda_i\|\mathbf{S}_{i,:}\|_2^2)\\
& \quad =\min_{\mathbf{S}}\mathrm{tr}(\mathbf{P}^\mathrm{T}\mathbf{XL}\mathbf{X}^\mathrm{T}\mathbf{P}) + \|\mathbf{\Lambda}\mathbf{S}\|_F^2 + \beta \mathrm{tr}(\mathbf{F}^\mathrm{T}\mathbf{L}\mathbf{F})\\
&s.t. \ \mathbf{S}\mathbf{1}_n = \mathbf{1}_n, \ 0 \leq S_{ij} \leq 1, \ \sum\limits_{j=1}^{n_s}S_{ij} = \delta, \ i \leq n_s, \\
& \qquad \ S_{ij} = 0, \ i,j \leq n_s\wedge y_{si} \ne y_{sj}
\end{aligned}
\end{equation}
To avoid overfitting and improve the generalization capacity, we further impose an $F$-norm regularization term on $\mathbf{P}$, then we have:
\begin{equation}
\label{reg}
\Phi(\mathbf{P}) = \|\mathbf{P}\|_F^2
\end{equation}
Finally, by combining Eq. (\ref{MMD}), Eq. (\ref{GFHF}), Eq. (\ref{AMC}) and Eq. (\ref{reg}), we obtain the final formulation of our CDGS:
\begin{equation}
\label{GFHF-obj}
\begin{aligned}
&\min_{\mathbf{P},\mathbf{S},\mathbf{F}}\mathrm{tr}(\mathbf{P}^\mathrm{T}\mathbf{XMX}^\mathrm{T}\mathbf{P}) + \alpha(\mathrm{tr}(\mathbf{P}^\mathrm{T}\mathbf{X}\mathbf{L}\mathbf{X}^\mathrm{T}\mathbf{P}) \\
& \qquad \ + \|\mathbf{\Lambda S}\|_F^2) + \beta \mathrm{tr}(\mathbf{F}^\mathrm{T}\mathbf{L}\mathbf{F}) + \gamma\|\mathbf{P}\|_F^2\\
&\ \ s.t. \ \mathbf{P}^\mathrm{T}\mathbf{XHX}^\mathrm{T}\mathbf{P} = \mathbf{I}_d, \ \mathbf{S}\mathbf{1}_n = \mathbf{1}_n, 0 \leq S_{ij} \leq 1,\\
& \qquad \ \mathbf{F}_l = \mathbf{F}_s, \ \sum\nolimits_{j=1}^{n_s}S_{ij} = \delta, \ i \leq n_s, \\
& \qquad \ S_{ij} = 0, \ i,j \leq n_s\wedge y_{si} \ne y_{sj}
\end{aligned}
\end{equation}
where $\mathbf{H}$ is the centering matrix defined as $\mathbf{H} = \mathbf{I}_{n}- \frac{1}{n}\mathbf{1}_{n \times n}$. The first constraint is to maximize the variance of all data \cite{JDA} in the projected space, which is inspired by the principal component analysis. Similar to \cite{DICD}, $\mathbf{M}$ and $\mathbf{L}$ can be normalized into the same scale. Thus, we set $\alpha = 1.0$ for all cases.
\subsection{Optimization Procedure \label{Optimization Procedure}}
In problem (\ref{GFHF-obj}), we need to optimize three variables $\mathbf{P}$, $\mathbf{S}$, and $\mathbf{F}$. As it is not jointly convex in the three variables, we update each variable alternately with the others fixed. To be specific, we solve each subproblem as follows.
\textbf{1. $\mathbf{P}$-Subproblem:} When we fix $\mathbf{S}$ and $\mathbf{F}$, the optimization problem (\ref{GFHF-obj}) becomes:
\begin{equation}
\begin{aligned}
&\min_{\mathbf{P}}\mathrm{tr}(\mathbf{P}^\mathrm{T}(\mathbf{X}\mathbf{M}\mathbf{X}^\mathrm{T} + \alpha \mathbf{X}\mathbf{L}\mathbf{X}^\mathrm{T} + \gamma \mathbf{I}_m)\mathbf{P}) \\
&\qquad s.t. \ \mathbf{P}^\mathrm{T}\mathbf{XHX}^\mathrm{T}\mathbf{P} = \mathbf{I}_d
\end{aligned}
\end{equation}
We employ the Lagrange techniques to solve it. The corresponding Lagrangian function can be formulated as:
\begin{equation}
\label{LF}
\begin{aligned}
L(\mathbf{P},\mathbf{\Pi}) = &\ \mathrm{tr}(\mathbf{P}^\mathrm{T}(\mathbf{X}\mathbf{M}\mathbf{X}^\mathrm{T} + \alpha \mathbf{X}\mathbf{L}\mathbf{X}^\mathrm{T} + \gamma \mathbf{I}_m)\mathbf{P}) \\
&+\mathrm{tr}((\mathbf{I}_d - \mathbf{P}^\mathrm{T}\mathbf{XHX}^\mathrm{T}\mathbf{P})\mathbf{\Pi})
\end{aligned}
\end{equation}
where $\mathbf{\Pi} = \mathrm{diag}(\pi_1, \pi_2, ..., \pi_d) \in \mathbb{R}^{d \times d}$ is a diagonal matrix and each element is a Lagrange Multiplier. By setting the gradient of (\ref{LF}) with respect to $\mathbf{P}$ to zero, we obtain:
\begin{equation}
\label{P-sub}
(\mathbf{X}\mathbf{M}\mathbf{X}^\mathrm{T} + \alpha \mathbf{X}\mathbf{L}\mathbf{X}^\mathrm{T} + \gamma \mathbf{I}_m)\mathbf{P} = \mathbf{XHX}^\mathrm{T}\mathbf{P}\mathbf{\Pi}
\end{equation}
Then the optimal solution of $\mathbf{P}$ can be obtained by computing the eigenvectors of (\ref{P-sub})
corresponding to the $d$ smallest eigenvalues.
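A minimal sketch of this update with a generalized eigensolver is given below; the use of \texttt{scipy.linalg.eigh} and the small ridge added to $\mathbf{XHX}^\mathrm{T}$ are our own choices for numerical convenience, not requirements of the method:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def update_P(X, M, L, H, alpha, gamma, d):
    """P-update: generalized eigenproblem A p = pi * B p, keep the d smallest."""
    m = X.shape[0]
    A = X @ (M + alpha * L) @ X.T + gamma * np.eye(m)
    B = X @ H @ X.T
    # a tiny ridge keeps B positive definite for the dense solver (our safeguard)
    vals, vecs = eigh(A, B + 1e-8 * np.eye(m))
    return vecs[:, :d]                # eigenvectors of the d smallest eigenvalues
\end{verbatim}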
\textbf{2. $\mathbf{S}$-Subproblem:} When $\mathbf{P}$ and $\mathbf{F}$ are fixed, the optimization problem (\ref{GFHF-obj}) with regard to $\mathbf{S}$ is equivalent to problem (\ref{AMC}). Problem (\ref{AMC}) can be divided into $n$ subproblems, each of which is formulated as:
\begin{equation}
\label{S-sub1}
\begin{aligned}
&\min_{\mathbf{S}_{i,:}}\sum\nolimits_{j=1}^n(\|\mathbf{z}_i-\mathbf{z}_j\|_2^2S_{ij}+\beta \|\mathbf{F}_i - \mathbf{F}_j\|_2^2S_{ij})\\
&\qquad +\lambda_i\|\mathbf{S}_{i,:}\|_2^2 \\
&s.t. \ \mathbf{S}_{i,:}\mathbf{1}_{n} = 1,\ 0 \leq S_{ij} \leq 1, \ \sum\nolimits_{j=1}^{n_s}S_{ij} = \delta, \ i \leq n_s \\
&\quad \ S_{ij} = 0, \ i,j \leq n_s\wedge y_{si} \ne y_{sj}
\end{aligned}
\end{equation}
\textbf{Case 1:}
First of all, we show how to obtain the optimal solution when $i > n_s$. We define $A_{ij} = \|\mathbf{z}_i-\mathbf{z}_j\|_2^2 + \beta \|\mathbf{F}_i - \mathbf{F}_j\|_2^2$; then the above problem can be reformulated as:
\begin{equation}
\label{S-sub2}
\min_{\mathbf{S}_{i,:}\mathbf{1}_{n} = 1,0 \leq S_{ij} \leq 1}\|\mathbf{S}_{i,:} + \frac {\mathbf{A}_{i,:}}{2\lambda_i}\|_2^2
\end{equation}
\begin{algorithm}[h]
\SetAlgoLined
\caption{CDGS Algorithm}
\label{alg1}
\KwIn{Source data $\{\mathbf{X}_s,\mathbf{Y}_s\}$;
Target data $\{\mathbf{X}_t\}$; Initial affinity matrix $\mathbf{S}$;
Hyper-parameters $\alpha = 1.0$, $k = 20$, $\delta = 0.8$, $\beta$, $\gamma$, $d$; Maximum iteration $T$ = 10.}
\KwOut{Target pseudo-labels $\mathbf{\widehat{Y}}$.}
\BlankLine
$t$ = 1;\\
\While { \rm{not converge} \textbf{and} $t$ $\leq T$}
{
// \textit{Projection matrix $\mathbf{P}$} \\
Update $\mathbf{P}$ by solving (\ref{P-sub});\\
// \textit{Affinity matrix $\mathbf{S}$} \\
Update each row of $\mathbf{S}$ by (\ref{S-sub6}), (\ref{S-sub8}) and (\ref{S-sub10});\\
// \textit{Target label matrix $\mathbf{F}_t$} \\
Update $\mathbf{F}_t$ by (\ref{GFHF1});\\
//\textit{Target pseudo-labels $\mathbf{\widehat{Y}}$} \\
Update the target pseudo-labels $\mathbf{\widehat{Y}}$ by (\ref{pseudo label});\\
$t$ = $t$ + 1;
}
\textbf{Return} Target pseudo-labels $\mathbf{\widehat{Y}}$.
\end{algorithm}
The corresponding Lagrangian function is
\begin{equation}
\label{S-sub3}
\min_{\mathbf{S}_{i,:}}\|\mathbf{S}_{i,:} + \frac {\mathbf{A}_{i,:}}{2\lambda_i}\|_2^2 - \mu(\mathbf{S}_{i,:}\mathbf{1}_{n} - 1) - \mathbf{S}_{i,:}\bm{\eta}^{\mathrm{T}}
\end{equation}
where $\mu$ and $\bm{\eta}$ are the Lagrangian multipliers.
To utilize the local structure of data and relieve the computational burden, we learn a sparse $\mathbf{S}_{i,:}$, i.e., each sample is only locally connected to its $k$-nearest neighbors. Based on the KKT conditions, problem (\ref{S-sub3}) has a closed-form solution as follows:
\begin{equation}
\label{S-sub4}
S_{ij} = \mathrm{max}(z-\frac{A_{ij}}{2\lambda_i},0)
\end{equation}
where $z = \frac{1}{k} + \frac{1}{2k\lambda_i}\sum\nolimits_{j=1}^{k}\tilde{A}_{ij}$ and $\tilde{A}_{ij}$ is the entry of matrix $\tilde{\mathbf{A}}$, which is obtained by sorting the elements of each row of $\mathbf{A}$ from small to large. To ensure that each $\mathbf{S}_{i,:}$ has exactly $k$ nonzero elements, we could set $z - \tilde{A}_{i,k+1}/(2\lambda_i) = 0$, then we have:
\begin{equation}
\label{S_sub5}
\lambda_i = \frac{1}{2}(k\tilde{A}_{i,k+1} - \sum\nolimits_{j=1}^{k}\tilde{A}_{ij})
\end{equation}
Substituting Eq. (\ref{S_sub5}) into Eq. (\ref{S-sub4}), we can obtain:
\begin{equation}
\label{S-sub6}
S_{ij} = \mathrm{max}(\frac{\tilde{A}_{i,k + 1} - A_{ij}}{k\tilde{A}_{i,k+1} - \sum\nolimits_{j=1}^{k}\tilde{A}_{ij}},0)
\end{equation}
\textbf{Case 2:} When $i,j \leq n_s$, Eq.(\ref{S-sub1}) can be reformulated as:
\begin{equation}
\label{S-sub7}
\begin{aligned}
&\min_{\mathbf{S}_{i,:}}\sum\nolimits_{j=1}^{n_s}\|\mathbf{z}_i-\mathbf{z}_j\|_2^2S_{ij}+\beta \|\mathbf{F}_i - \mathbf{F}_j\|_2^2S_{ij}+\lambda_i S_{ij}^2\\
&s.t. \sum\nolimits_{j=1}^{n_s}S_{ij} = \delta, \ 0 \leq S_{ij} \leq 1, \ S_{ij} = 0, \ y_{si} \ne y_{sj}
\end{aligned}
\end{equation}
To satisfy the last constraint, we could set $A_{ij} = +\infty$ if $y_{si} \ne y_{sj}$. Similar to problem (\ref{S-sub2}), we can obtain the closed-form solution of problem (\ref{S-sub7}):
\begin{equation}
\label{S-sub8}
S_{ij} = \delta\mathrm{max}(\frac{\tilde{A}_{i,k_1 + 1} - A_{ij}}{k_1\tilde{A}_{i,k_1+1} - \sum\nolimits_{j=1}^{k_1}\tilde{A}_{ij}},0)
\end{equation}
where $k_1 = \mathrm{min}(k,n_s^{y_{si}})$, since in practice some classes may contain very few samples.
\textbf{Case 3:}
When $i \leq n_s, j > n_s$, problem (\ref{S-sub1}) can be rewritten as:
\begin{equation}
\label{S-sub9}
\begin{aligned}
&\min_{\mathbf{S}_{i,:}}\sum\nolimits_{j=n_s+1}^n\|\mathbf{z}_i-\mathbf{z}_j\|_2^2S_{ij}+\beta \|\mathbf{F}_i - \mathbf{F}_j\|_2^2S_{ij} \\
& \qquad +\lambda_i S_{ij}^2\\
&s.t. \sum\nolimits_{j=n_s + 1}^{n}S_{ij} = 1-\delta,\ 0 \leq S_{ij} \leq 1
\end{aligned}
\end{equation}
Similarly, the closed-form solution of problem (\ref{S-sub9}) is:
\begin{equation}
\label{S-sub10}
S_{ij} = (1-\delta)\mathrm{max}(\frac{\tilde{A}_{i,k + 1} - A_{ij}}{k\tilde{A}_{i,k+1} - \sum\nolimits_{j=1}^{k}\tilde{A}_{ij}},0)
\end{equation}
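The three closed-form cases can be implemented row by row. The sketch below covers Case 1 (a target row, Eq. (\ref{S-sub6})); Cases 2 and 3 follow the same pattern on the restricted index sets, scaled by $\delta$ and $1-\delta$ as in Eqs. (\ref{S-sub8}) and (\ref{S-sub10}). The helper name and the treatment of self-connections are illustrative assumptions of ours:
\begin{verbatim}
import numpy as np

def update_target_row(a_row, k):
    """Closed-form row of S for a target sample (Case 1).

    a_row[j] = ||z_i - z_j||^2 + beta * ||F_i - F_j||^2; set a_row[i] to a large
    value beforehand if self-connections should be excluded (our convention).
    """
    order = np.argsort(a_row)                  # ascending distances
    a_sorted = a_row[order]
    denom = k * a_sorted[k] - a_sorted[:k].sum()
    s = np.maximum((a_sorted[k] - a_row) / max(denom, 1e-12), 0.0)
    return s                                   # k nonzeros (ties aside); row sums to 1
\end{verbatim}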
\textbf{3. $\mathbf{F}$-Subproblem:} With fixed $\mathbf{P}$ and $\mathbf{S}$, the optimization problem (\ref{GFHF-obj}) with respect to $\mathbf{F}$ is equivalent to solving problem (\ref{GFHF}). According to \cite{GFHF}, we only need to update $\mathbf{F}_t$. We split $\mathbf{L}$ into four blocks: $\mathbf{L} =
\begin{bmatrix}
\mathbf{L}_{ss} & \mathbf{L}_{st} \\ \mathbf{L}_{ts} & \mathbf{L}_{tt}
\end{bmatrix}
$, where $\mathbf{L}_{ss} \in \mathbb{R}^{n_s \times n_s}$, $\mathbf{L}_{st} \in \mathbb{R}^{n_s \times n_t}$, $\mathbf{L}_{ts} \in \mathbb{R}^{n_t \times n_s}$ and $\mathbf{L}_{tt} \in \mathbb{R}^{n_t \times n_t}$, and then, the optimal solution of problem (\ref{GFHF}) is:
\begin{equation}
\label{GFHF1}
\mathbf{F}_t = - \mathbf{L}_{tt}^{-1}\mathbf{L}_{ts}\mathbf{F}_s
\end{equation}
Eventually, the target pseudo-labels can be obtained based on the following decision function:
\begin{equation}
\label{pseudo label}
\widehat{y}_{ti} = \mathrm{argmax}_j\ (\mathbf{F}_t)_{ij}
\end{equation}
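In code, the label-propagation step of Eq. (\ref{GFHF1}) together with the above decision rule might look as follows (an illustrative sketch; solving the linear system avoids forming $\mathbf{L}_{tt}^{-1}$ explicitly):
\begin{verbatim}
import numpy as np

def propagate_labels(L, Fs, ns):
    """F_t = -L_tt^{-1} L_ts F_s, followed by the argmax decision rule."""
    L_tt = L[ns:, ns:]
    L_ts = L[ns:, :ns]
    Ft = np.linalg.solve(L_tt, -L_ts @ Fs)    # solve instead of forming the inverse
    return Ft, Ft.argmax(axis=1)              # soft scores and hard pseudo-labels
\end{verbatim}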
The affinity matrix $\mathbf{S}$ is initialized according to (\ref{S-sub8}) in the original feature space. We summarize the detailed optimization steps of the proposed CDGS in Algorithm \ref{alg1}.
\subsection{Computational Complexity Analysis}
To find the optimal solution with Algorithm \ref{alg1}, we need to solve three subproblems. The complexity of each subproblem in each iteration is analyzed as follows: First, constructing and solving the eigen-decomposition problem (\ref{P-sub}) for the $\mathbf{P}$-subproblem costs $\mathcal{O}(n^2m + dm^2)$; Then, updating the affinity matrix $\mathbf{S}$ requires $\mathcal{O}(n^2\mathrm{log}(n))$; Finally, the complexity of obtaining the target estimated label matrix $\mathbf{F}_t$ and the pseudo-labels $\mathbf{\widehat{Y}}$ is $\mathcal{O}(n_t^3)$. Thus, the overall computational complexity of our proposal is $\mathcal{O}(Tn^2m+Tdm^2+Tn^2\mathrm{log}(n)+Tn_t^3)$, where $T$ is the number of iterations.
\subsection{Extension to Semi-supervised Domain Adaptation}
We denote the target data as $\mathbf{X}_t = [\mathbf{X}_l, \mathbf{X}_u]$, where $\mathbf{X}_l = \{\mathbf{x}_{li}\}_{i=1}^{n_l}$ is the labeled data and $\mathbf{X}_u = \{\mathbf{x}_{uj}\}_{j=1}^{n_u}$ is the unlabeled data. Then, by substituting $\mathbf{X}_s$ and $\mathbf{X}_{t}$ into Eq. (\ref{GFHF-obj}), the semi-supervised extension of our CDGS can be stated as:
\begin{equation}
\label{semi-obj}
\begin{aligned}
&\min_{\mathbf{P},\mathbf{S},\mathbf{F}}\mathrm{tr}(\mathbf{P}^\mathrm{T}\mathbf{XMX}^\mathrm{T}\mathbf{P}) + \alpha(\mathrm{tr}(\mathbf{P}^\mathrm{T}\mathbf{X}\mathbf{L}\mathbf{X}^\mathrm{T}\mathbf{P}) \\
& \qquad \ + \|\mathbf{\Lambda S}\|_F^2) + \beta \mathrm{tr}(\mathbf{F}^\mathrm{T}\mathbf{L}\mathbf{F}) + \gamma\|\mathbf{P}\|_F^2\\
&\ \ s.t. \ \mathbf{P}^\mathrm{T}\mathbf{XHX}^\mathrm{T}\mathbf{P} = \mathbf{I}_d, \ \mathbf{S}\mathbf{1}_n = \mathbf{1}_n, 0 \leq S_{ij} \leq 1,\\
& \qquad \ \mathbf{F}_l = \mathbf{F}_{s}, \ \sum\nolimits_{j=1}^{n_s}S_{ij} = \delta, \ i \leq n_s, \\
& \qquad \ S_{ij} = 0, \ i,j \leq n_s\wedge y_{si} \ne y_{sj}
\end{aligned}
\end{equation}
where $n = n_s + n_l + n_u$. Obviously, Eq. (\ref{semi-obj}) has the same form as Eq. (\ref{GFHF-obj}), thus they can be solved with the identical algorithm.
Actually, our semi-supervised extension can be effective for the following two reasons: 1) The estimation of target class means is more accurate when some labeled target samples are available, which promotes more accurate conditional distribution alignment; 2) Through Eq. (\ref{asm}), reliable connections between the labeled and unlabeled data are built, which transfer the knowledge of labeled samples to the unlabeled ones via cross-domain label propagation.
\section{Experiments \label{experiments}}
In this section, we first describe the six benchmark datasets. Then, the details of the experimental setup are given. Next, we present the evaluation results of UDA, the ablation study, parameter sensitivity and convergence analysis. Finally, the results for SDA are reported. The source code of this paper is available at https://drive.google.com/drive/folders/19Fqxxuf9MTcd-1emXstZE01G60JUyAst?usp=sharing.
\subsection{Datasets and Descriptions}
We adopt six benchmark datasets in our experiments, including Office31, Office-Caltech10, ImageNet-VOC2007, Office-Home, MNIST-USPS and PIE, which are widely used cross-domain object, digit and face datasets. Overall descriptions about these datasets are summarized in Table \ref{datasets}. We will introduce more details for each dataset as follows.
\emph{Office31} \cite{Office31} contains 4,110 images with 31 categories collected from three domains: Amazon (A), DSLR (D) and Webcam (W). Amazon images are downloaded from the online merchants. DSLR images are captured by a digital SLR camera while Webcam images are recorded by a web camera. Following \cite{CORAL}, we utilize the AlexNet-FC$_7$ features\footnote{ \url{https://github.com/VisionLearningGroup/CORAL/tree/master/dataset}} fine-tuned on the source domain.
\emph{Office-Caltech10} \cite{GFK} includes 2,533 images in 10 shared categories from the Office31 dataset and the Caltech256 (C) dataset, which is a widely used dataset for object recognition. Following \cite{GFK}, we exploit the SURF features\footnote{\url{http://boqinggong.info/assets/GFK.zip}}. Besides, the VGG-FC$_{6,7}$ features\footnote{ \url{https://sherath@bitbucket.org/sherath/ils.git}} provided by \cite{ILS} are used.
\begin{table}[]
\centering
\caption{Statistics of the Six Benchmark Datasets}
\label{datasets}
\scalebox{0.8}{%
\begin{tabular}{ccccc}
\toprule
Dataset & Subsets (Abbr.) & Samples & Feature (Size) & Classes \\ \midrule
\multirow{3}{*}{Office31} & Amazon (A) & 2,817 & \multirow{3}{*}{Alexnet-FC$_7$ (4,096)} & \multirow{3}{*}{31} \\
& DSLR (D) & 498 & & \\
& Webcam (W) & 795 & & \\ \midrule
\multirow{4}{*}{Office-Caltech10} & Amazon (A) & 958 & \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}SURF (800)\\ VGG-FC$_6$ (4,096)\\ VGG-FC$_7$ (4,096)\end{tabular}} & \multirow{4}{*}{10} \\
& Caltech (C) & 1,123 & & \\
& DSLR (D) & 157 & & \\
& Webcam (W) & 295 & & \\ \midrule
\multirow{5}{*}{PIE} & C05 & 3,332 & \multirow{5}{*}{Pixel (1024)} & \multirow{5}{*}{68} \\
& C07 & 1,629 & & \\
& C09 & 1,632 & & \\
& C27 & 3,329 & & \\
& C29 & 1,632 & & \\ \midrule
\multirow{4}{*}{Office-Home} & Art (Ar) & 2,421 & \multirow{4}{*}{Resnet50 (2,048)} & \multirow{4}{*}{65} \\
& Clipart (Cl) & 4,379 & & \\
& Product (Pr) & 4,428 & & \\
& Real-World (Re) & 4,357 & & \\ \midrule
\multirow{2}{*}{MNIST-USPS} & MNIST (M) & 2,000 & \multirow{2}{*}{Pixel (256)} & \multirow{2}{*}{10} \\
& USPS (U) & 1,800 & & \\ \midrule
\multirow{2}{*}{ImageNet-VOC2007} & ImageNet (I) & 7,341 & \multirow{2}{*}{DeCAF$_6$ (4,096)} & \multirow{2}{*}{5} \\
& VOC2007 (V) & 3,376 & & \\ \bottomrule
\end{tabular}%
}
\end{table}
\emph{PIE} \cite{PIE} involves 41,638 facial images of 68 people with different poses, illuminations, and expression changes. Following \cite{JDA}, we focus on five poses: C05 (left), C07 (upward), C09 (downward), C27 (frontal) and C29 (right). All images were converted to grayscale and cropped to the size 32 $\times$ 32. We adopt the pixel features\footnote{ \url{https://github.com/jindongwang/transferlearning/tree/master/data}\label{WJD}}.
\begin{table*}[]
\centering
\caption{Recognition Accuracies (\%) on Office31 Dataset}
\label{office31}
\scalebox{0.85}{%
\begin{tabular}{ccccccccccccc}
\toprule
Task & 1-NN & SVM & JDA & DICD & PACET & MCS & DTLC & ARTL & MEDA & DGA-DA & DICE$_\mathrm{lp}$ & CDGS \\ \midrule
A$\rightarrow$D & 59.4 & 59.2 & 65.7 & 66.7 & 69.1 & 71.9 & 66.1 & 64.7 & 69.5 & 64.5 & 67.7 & \textbf{73.5}\\
A$\rightarrow$W & 57.5 & 57.9 & 69.1 & 70.7 & 71.7 & 75.1 & 67.5 & 71.7 & 69.9 & 65.0 & 70.7 & \textbf{79.5}\\
D$\rightarrow$A & 47.2 & 48.8 & 57.4 & 57.7 & \textbf{62.3} & 58.8 & 58.9 & 59.5 & 58.0 & 55.0 & 56.5 & 61.8\\
D$\rightarrow$W & 96.1 & 95.2 & \textbf{98.0} & 97.0 & 97.4 & 96.7 & \textbf{98.0} & 96.0 & 94.0 & 97.2 & 97.2 & 97.2\\
W$\rightarrow$A & 44.8 & 46.5 & 54.1 & 58.1 & 59.2 & 57.2 & 55.1 & 58.5 & 56.0 & 53.8 & 57.7 & \textbf{61.2}\\
W$\rightarrow$D & 99.0 & 98.8 & 99.6 & 99.8 & \textbf{100.0} & 99.4 & 99.6 & 99.4 & 96.8 & 99.8 & \textbf{100.0} & \textbf{100.0} \\ \midrule
Average & 67.3 & 67.7 & 74.0 & 75.0 & 76.6 & 76.5 & 74.2 & 74.9 & 74.0 & 72.5 & 75.0 & \textbf{78.9} \\ \bottomrule
\end{tabular}%
}
\end{table*}
\emph{MNIST-USPS} is made up of two handwritten digit image datasets: MNIST (M) and USPS (U). Following \cite{JDA}, we randomly choose 2,000 images in MNIST and 1,800 images in USPS and utilize the pixel features\textsuperscript{\ref{WJD}}.
\emph{ImageNet-VOC2007} consists of two large image recognition datasets, ImageNet (I) and VOC2007 (V). Following \cite{MEDA}, we extract all images from five common classes of the two datasets, {\it i.e.}, bird, cat, chair, dog and person. The DeCAF$_6$ feature\textsuperscript{\ref{WJD}} is employed.
\emph{Office-Home} \cite{Office-Home} includes 15,585 object images in 65 categories from four domains: Art (artistic depictions of objects, Ar), Clipart (clipart images, Cl), Product (object images without background, Pr) and Real-World (images captured by a regular camera, Re). We employ the Resnet50 features extracted by a Resnet50 model \cite{Resnet50} pretrained on ImageNet.
For simplicity, in our experiments, each cross-domain task is denoted by S$\rightarrow$T, where S represents the source domain and T is the target domain.
\subsection{Experimental Setup}
\subsubsection{Comparison Methods}
For UDA, we compare the performance of our CDGS with massive methods, which can be classified into two categories: \emph{shallow methods}: 1-NN, SVM\footnote{\url{https://www.csie.ntu.edu.tw/~cjlin/liblinear/}}, JDA \cite{JDA}, DICD \cite{DICD}, PACET \cite{PACET}, MCS \cite{MCS}, DTLC \cite{DTLC}, ARTL \cite{ARTL}, MEDA \cite{MEDA}, DGA-DA \cite{DGA-DA} and DICE$_\mathrm{lp}$ \cite{DICE}, \emph{deep methods}: the method of \cite{Wang2020}, DRCN \cite{DRCN}, DSAN \cite{DSAN}, the method of \cite{Liang2020}, and GSP \cite{GSP}. For SDA, the competitors include MMDT \cite{MMDT}, CDLS \cite{CDLS}, ILS \cite{ILS}, TFMKL-S \cite{TFMKL-S} and OBTL \cite{OBTL}.
\subsubsection{Training Protocol}
We exploit all source data for training, known as the full protocol, on all datasets in Table \ref{datasets}. Besides, regarding the Office-Caltech10 dataset, two kinds of sampling protocols are also adopted, where only a few labeled source samples per category are employed for training. For the first sampling protocol, similar to \cite{DICE}, we use the SURF features, and 20 instances per class are randomly selected for domain A while 8 instances per class are selected for the other domains as sources. For the second sampling protocol, following \cite{MCS}, VGG-FC$_6$ features are utilized, and 8 samples per category are selected for domain D while 20 samples per category for the others.
\subsubsection{Parameter Setting}
In UDA and SDA, sufficient labeled target samples are unavailable, thus we cannot perform a standard cross-validation procedure to decide the optimal parameters. Following \cite{DICD}, we report the best results by grid-searching the hyper-parameter space. For all competitors, we run the public codes provided by the authors using the default parameters or following the given procedure to tune parameters. For all approaches requiring a subspace dimension, the optimal value is searched in $d \in \{1C, 2C, 3C, 4C, 5C, 6C\}$, where $C$ is the number of classes for the corresponding dataset. The regulation parameter for projection matrix is searched in $\gamma \in \{0.005,0.01,0.05,0.1,0.5,1.0,5.0,10.0\}$. For the other parameters in our CDGS, we fix $\alpha = 1.0$, $k = 20$, $\delta = 0.8$, $T = 10$ and set $\beta = 0.5$ for Office-Home and Office-Caltech10 datasets, $\beta = 0.01$ for PIE dataset and $\beta = 0.1$ for other datasets. We also provide the optimal parameters for UDA setting: Office31 ($d = 124$, $\gamma = 0.01$), Office-Caltech10 ($d = 30$, $\gamma = 0.5$ for SURF, $d = 30$, $\gamma = 0.1$ for SURF split, $d = 40$, $\gamma = 0.1$ for VGG-FC$_{6,7}$ split), MNIST-USPS ($d = 40$, $\gamma = 0.5$), ImageNet-VOC2007 ($d = 30$, $\gamma = 0.01$), PIE ($d = 340$, $\gamma = 0.005$) and Office-Home ($d = 130$, $\gamma = 0.005$).
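For reference, the fixed settings above can be collected into a single configuration object, as in the following sketch (the names are illustrative; dataset-specific values of $\beta$, $d$ and $\gamma$ override the defaults as listed above):
\begin{verbatim}
# Default CDGS hyperparameters as reported in the text; per-dataset values
# for beta, d, and gamma are set separately (e.g., beta = 0.5 for Office-Home).
CDGS_DEFAULTS = {
    "alpha": 1.0,   # weight of the graph self-learning term (fixed)
    "k": 20,        # number of adaptive neighbors per sample
    "delta": 0.8,   # probability mass assigned to same-class source neighbors
    "T": 10,        # maximum number of outer iterations
    "beta": 0.1,    # label-smoothness weight (dataset-dependent)
}
\end{verbatim}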
\begin{table*}[]
\centering
\caption{Recognition Accuracies (\%) on Office-Caltech10 Dataset with SURF Features}
\label{Office_surf}
\scalebox{0.85}{%
\begin{tabular}{ccccccccccccc}
\toprule
Task & 1-NN & SVM & JDA & DICD & PACET & MCS & DTLC & ARTL & MEDA & DGA-DA & DICE$_\mathrm{lp}$ & CDGS \\ \midrule
A$\rightarrow$C & 26.0 & 35.6 & 39.4 & 42.4 & 42.7 & 40.8 & \textbf{46.6} & 41.3 & 43.9 & 41.3 & 44.1 & 42.7 \\
A$\rightarrow$D & 25.5 & 36.3 & 39.5 & 38.9 & 50.3 & 45.2 & 45.4 & 38.9 & 45.9 & 38.2 & 49.0 & \textbf{51.0} \\
A$\rightarrow$W & 29.8 & 31.9 & 38.0 & 45.1 & \textbf{53.2} & 50.8 & 48.1 & 39.0 & \textbf{53.2} & 38.3 & 52.9 & 52.5\\
C$\rightarrow$A & 23.7 & 42.9 & 44.8 & 47.3 & 52.2 & \textbf{58.8} & 50.3 & 54.9 & 56.5 & 52.1 & 53.7 & 56.8 \\
C$\rightarrow$D & 25.5 & 33.8 & 45.2 & 49.7 & 52.2 & 45.2 & 52.4 & 44.6 & 50.3 & 45.9 & 51.6 & \textbf{59.2} \\
C$\rightarrow$W & 25.8 & 34.6 & 41.7 & 46.4 & 51.5 & 51.9 & 54.4 & 50.5 & 53.9 & 47.1 & 53.9 &\textbf{55.9} \\
D$\rightarrow$A & 28.5 & 34.3 & 33.1 & 34.5 & 40.8 & 37.1 & 36.2 & 38.1 & 41.2 & 33.6 & 41.2 & \textbf{45.2} \\
D$\rightarrow$C & 26.3 & 32.1 & 31.5 & 34.6 & 34.5 & 31.3 & 32.1 & 31.0 & 34.9 & 33.7 & 34.5 & \textbf{39.4} \\
D$\rightarrow$W & 63.4 & 78.0 & 89.5 & 91.2 & 91.5 & 86.1 & 92.9 & 83.4 & 87.5 & \textbf{93.2} & 84.1 & 92.5 \\
W$\rightarrow$A & 23.0 & 37.5 & 32.8 & 34.1 & 40.8 & 37.8 & 33.5 & 40.1 & 42.7 & 41.8 & 33.1 & \textbf{47.4} \\
W$\rightarrow$C & 19.9 & 33.9 & 31.2 & 33.6 & \textbf{39.0} & 29.8 & 33.8 & 34.8 & 34.0 & 33.3 & 37.8 & 38.2 \\
W$\rightarrow$D & 59.2 & 80.9 & 89.2 & 89.8 & 92.4 & 83.4 & 87.3 & 78.3 & 88.5 & 89.8 & 87.3 & \textbf{94.3} \\ \midrule
Average & 31.4 & 42.6 & 46.3 & 49.0 & 53.6 & 50.1 & 51.1 & 47.9 & 52.7 & 49.0 & 51.9 & \textbf{56.3} \\ \bottomrule
\end{tabular}%
}
\end{table*}
\begin{table*}[]
\centering
\caption{Recognition Accuracies (\%) on Office-Caltech10 Dataset with SURF and VGG-FC$_{6,7}$ Features under Different Splitting Protocols}
\label{Office_surf_split}
\scalebox{0.8}{%
\begin{tabular}{c|ccccc|ccccc|ccccc}
\toprule
Feature & \multicolumn{5}{c|}{SURF} & \multicolumn{5}{c|}{VGG-FC$_6$} & \multicolumn{5}{c}{VGG-FC$_7$} \\ \midrule
Task & MCS & ARTL & MEDA & DICE$_\mathrm{lp}$ & CDGS & MCS & ARTL & MEDA & DICE$_\mathrm{lp}$ & CDGS & MCS & ARTL & MEDA & DICE$_\mathrm{lp}$ & CDGS \\ \midrule
A$\rightarrow$C & 40.4 & 36.4 & 38.1 & 39.6 & \textbf{40.8} & \textbf{87.1} & 84.6 & 85.2 & 83.9 & 85.1 & \textbf{86.3} & 84.1 & 84.4 & 83.6 & 84.8 \\
A$\rightarrow$D & 43.7 & 38.2 & 39.0 & 39.7 & \textbf{44.3} & 74.8 & 75.0 & 71.7 & 66.4 & \textbf{82.6} & 72.8 & 74.8 & 70.8 & 64.9 & \textbf{81.0} \\
A$\rightarrow$W & 48.3 & 38.3 & 45.8 & 42.9 & \textbf{49.8} & 84.8 & 90.1 & 88.5 & 77.2 & \textbf{97.9} & 86.6 & 87.8 & 88.2 & 79.4 & \textbf{94.5} \\
C$\rightarrow$A & 43.2 & 41.5 & 44.3 & \textbf{44.8} & 44.5 & \textbf{92.3} & 89.0 & 90.8 & 91.6 & 88.8 & \textbf{92.8} & 91.1 & 91.6 & 91.6 & 88.9 \\
C$\rightarrow$D & 45.3 & 39.0 & 39.9 & 39.6 & \textbf{46.7} & 77.3 & 79.9 & 78.1 & 68.2 & \textbf{84.5} & 73.0 & 79.0 & 75.7 & 65.4 & \textbf{83.5} \\
C$\rightarrow$W & 43.8 & 35.8 & 40.3 & 40.5 & \textbf{44.7} & 87.1 & 89.9 & 90.3 & 83.8 & \textbf{93.9} & 89.3 & 90.6 & 90.2 & 84.2 & \textbf{93.0} \\
D$\rightarrow$A & 37.7 & 37.0 & 40.3 & 40.9 & \textbf{45.0} & 84.7 & \textbf{90.4} & 86.3 & 85.0 & 84.8 & 84.6 & \textbf{88.9} & 83.7 & 83.3 & 85.7 \\
D$\rightarrow$C & 30.8 & 32.1 & 33.5 & 33.8 & \textbf{35.6} & 76.0 & 75.5 & \textbf{81.2} & 77.9 & 71.5 & \textbf{76.5} & 74.9 & 73.8 & 76.0 & 70.8 \\
D$\rightarrow$W & 78.1 & 79.7 & 82.5 & 81.7 & \textbf{86.2} & 95.9 & 95.7 & 96.1 & 95.7 & \textbf{96.9} & 95.5 & 94.3 & 95.8 & 94.8 & \textbf{96.9} \\
W$\rightarrow$A & 36.3 & 37.6 & 40.9 & 38.7 & \textbf{44.0} & 88.9 & 92.0 & 90.6 & 89.8 & \textbf{92.2} & 90.4 & \textbf{92.8} & 90.2 & 89.4 & 92.5 \\
W$\rightarrow$C & 32.6 & 32.3 & 33.1 & 34.5 & \textbf{35.4} & \textbf{87.4} & 85.6 & 85.2 & 81.0 & 85.6 & 85.6 & \textbf{85.8} & 84.6 & 82.3 & 85.0 \\
W$\rightarrow$D & 73.3 & 68.5 & 74.6 & 76.1 & \textbf{78.8} & 92.9 & 92.8 & \textbf{97.2} & 93.9 & 96.7 & 88.9 & 91.1 & 93.4 & 90.8 & \textbf{93.5} \\ \midrule
Averaged & 46.1 & 42.9 & 46.0 & 46.1 & \textbf{49.7} & 85.8 & 86.7 & 86.8 & 82.9 & \textbf{88.2} & 85.2 & 86.3 & 85.2 & 82.1 & \textbf{87.5} \\ \bottomrule
\end{tabular}%
}
\end{table*}
\subsubsection{Evaluation Metric}
Following many previous works \cite{JDA,DICD,DICE}, we adopt the classification accuracy of target data as the evaluation metric, which is computed as:
\begin{equation}
\label{evaluation}
\mathrm{Accuracy} = \frac{|\{\mathbf{x} : \mathbf{x} \in \mathbf{X}_t \wedge \tilde{y} = y\}|}{|\{\mathbf{x} : \mathbf{x} \in \mathbf{X}_t\}|}
\end{equation}
where $\mathbf{x}$ is a target sample, $y$ is the ground-truth label of $\mathbf{x}$, and $\tilde{y}$ is the corresponding pseudo-label.
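As a minimal illustration, the metric in Eq. (\ref{evaluation}) can be computed as follows; the function name and array-based interface are ours and only illustrative.
\begin{verbatim}
import numpy as np

def target_accuracy(pseudo_labels, true_labels):
    # Fraction of target samples whose pseudo-label equals the true label.
    pseudo_labels = np.asarray(pseudo_labels)
    true_labels = np.asarray(true_labels)
    return float(np.mean(pseudo_labels == true_labels))
\end{verbatim}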
\subsection{Unsupervised Domain Adaptation}
\subsubsection{The Experimental Results on Unsupervised Domain Adaptation}
{\it a) Results on Office31 Dataset.} The classification accuracies of all methods on this dataset are listed in Table \ref{office31}, where the highest accuracy for each task is boldfaced. The results of DGA-DA are copied from \cite{DICE}. We observe that CDGS performs much better than all competitors. Specifically, CDGS achieves 78.9$\%$ average accuracy, leading the second-best method, PACET, by 2.3$\%$. DICE$_\mathrm{lp}$ and DGA-DA both explore the geometric structure underlying the data manifold to assign target pseudo-labels by cross-domain label propagation. However, CDGS further integrates domain-invariant feature learning, affinity matrix construction and target label inference into one framework. Therefore, CDGS allows the three parts to interact with each other and yield superior performance. In addition, CDGS employs a self-learning strategy to construct a discriminative graph that captures the inherent similarity of samples and exploits the label information of source and target data. In such a case, the discriminative graph can transfer source knowledge to the target domain more effectively.
{\it b) Results on Office-Caltech10 Dataset.} The results on the Office-Caltech10 dataset with SURF features under the full protocol are shown in Table \ref{Office_surf}. In terms of the average accuracy, CDGS holds a large advantage, improving on the best competitor, PACET, by 2.7$\%$. CDGS works the best for 7 out of 12 tasks while PACET only wins two tasks, which verifies the significant effectiveness of CDGS. Compared with the methods that employ cross-domain label propagation to infer target labels, {\it i.e.}, ARTL, MEDA, DGA-DA and DICE$_\mathrm{lp}$, the improvement of CDGS is 3.6$\%$, which illustrates the superiority of our CDGS over these counterparts.
We also compare our CDGS with several competitors under different splitting protocols with different features. The results over 20 random splits are given in Table \ref{Office_surf_split}. For SURF features, CDGS performs much better than the other methods in terms of the average accuracy. CDGS achieves 49.7$\%$ average performance, a 3.6$\%$ improvement over the best competitors, MCS and DICE$_\mathrm{lp}$. Notably, CDGS performs the best on all tasks except C$\rightarrow$A. For VGG-FC$_{6,7}$ features, CDGS again outperforms all comparison methods. Comparing the results for SURF and VGG-FC$_{6,7}$ features, we find that CDGS consistently achieves good performance regardless of the features, which indicates that CDGS has better generalization capacity.
{\it c) Results on MNIST-USPS and ImageNet-VOC2007 Datasets.} To verify the effectiveness of CDGS on digit images, we further conduct experiments on the MNIST-USPS dataset. The comparison results are listed in Table \ref{MI}. CDGS achieves the highest average accuracy among all competitors. We observe that CDGS is much superior to feature adaptation approaches, {\it e.g.}, DGA-DA and DICE$_\mathrm{lp}$, with a 5.7$\%$ improvement in average accuracy, which demonstrates the superiority of our proposal. The classification results of all methods on the ImageNet-VOC2007 dataset are also provided in Table \ref{MI}. CDGS performs much better than the other methods. Moreover, compared with the related methods, {\it i.e.}, ARTL, MEDA and DICE$_\mathrm{lp}$, CDGS shows a large improvement of up to 8.3$\%$, which confirms the advantage of our CDGS.
\begin{table*}[]
\centering
\caption{Recognition Accuracies (\%) on MNIST-USPS and ImageNet-VOC2007 Datasets. ``$-$" Indicates That the Results Are Unavailable}
\label{MI}
\scalebox{0.85}{%
\begin{tabular}{ccccccccccccccc}
\toprule
Task & 1-NN & SVM & JDA & DICD & PACET & MCS & DTLC & ARTL & MEDA & DGA-DA & DICE$_\mathrm{lp}$ & CDGS \\ \midrule
M$\rightarrow$U & 65.9 & 50.0 & 67.3 & 77.8 & 77.0 & 53.4 & 70.7 & 88.5 & \textbf{89.5} & 82.3 & 78.3 & 87.1 \\
U$\rightarrow$M & 44.7 & 29.9 & 59.7 & 65.2 & 64.5 & 51.3 & 56.9 & 61.8 & 72.1 & 70.8 & 65.2 & \textbf{77.3} \\ \midrule
Average & 55.3 & 40.0 & 63.5 & 71.5 & 70.7 & 52.4 & 63.8 & 75.1 & 80.8 & 76.5 & 71.8 & \textbf{82.2} \\ \toprule
I$\rightarrow$V & 65.4 & 69.9 & 63.9 & 64.5 & 63.9 & 60.6 & 64.8 & 65.4 & 67.3 & $-$ & 65.8 & \textbf{72.3} \\
V$\rightarrow$I & 73.7 & 78.7 & 72.4 & 78.2 & 72.8 & 84.2 & 85.8 & 77.8 & 74.7 & $-$ & 76.0 & \textbf{87.5} \\ \midrule
Average & 69.5 & 74.3 & 68.1 & 71.3 & 68.4 & 72.4 & 75.3 & 71.6 & 71.0 & $-$ & 70.9 & \textbf{79.9} \\ \bottomrule
\end{tabular}%
}
\end{table*}
\begin{table*}[]
\centering
\caption{Recognition Accuracies (\%) on PIE Dataset}
\label{PIE}
\scalebox{0.85}{%
\begin{tabular}{cccccccccccccc}
\toprule
Task & 1-NN & SVM & JDA & DICD & PACET & MCS & DTLC & ARTL & MEDA & DGA-DA & DICE$_\mathrm{lp}$ & CDGS\\ \midrule
C05$\rightarrow$C07 & 26.1 & 30.9 & 58.8 & 73.0 & 82.2 & 68.9 & \textbf{85.1} & 59.7 & 64.2 & 65.3 & 83.9 & 84.8 \\
C05$\rightarrow$C09 & 26.6 & 33.9 & 54.2 & 72.0 & 80.8 & 62.9 & \textbf{82.7} & 57.8 & 59.4 & 62.8 & 77.5 & 81.4 \\
C05$\rightarrow$C27 & 30.7 & 41.4 & 84.5 & 92.2 & 94.9 & 87.9 & 97.1 & 86.3 & 84.2 & 83.5 & 95.9 & \textbf{98.5} \\
C05$\rightarrow$C29 & 16.7 & 23.8 & 49.8 & 66.9 & 64.5 & 53.6 & \textbf{77.2} & 47.5 & 46.5 & 56.1 & 66.0 & 72.9 \\
C07$\rightarrow$C05 & 24.5 & 31.8 & 57.6 & 69.9 & \textbf{82.9} & 62.3 & 82.8 & 78.3 & 77.5 & 63.7 & 81.4 & 80.0 \\
C07$\rightarrow$C09 & 46.6 & 41.0 & 62.9 & 65.9 & 73.5 & 52.1 & 83.9 & 69.0 & 71.4 & 61.3 & 74.1 & \textbf{86.5} \\
C07$\rightarrow$C27 & 54.1 & 62.2 & 75.8 & 85.3 & 90.1 & 80.8 & 92.1 & 90.0 & 89.2 & 82.4 & 88.4 & \textbf{93.6} \\
C07$\rightarrow$C29 & 26.5 & 28.8 & 39.9 & 48.7 & 72.4 & 58.8 & 79.7 & 54.2 & 58.6 & 46.6 & 68.0 & \textbf{81.4} \\
C09$\rightarrow$C05 & 21.4 & 32.3 & 51.0 & 69.4 & 79.7 & 58.4 & 80.0 & 72.3 & 73.2 & 56.7 & 78.0 & \textbf{85.3} \\
C09$\rightarrow$C07 & 41.0 & 39.7 & 58.0 & 65.4 & 79.3 & 52.5 & \textbf{84.4} & 66.5 & 68.8 & 61.3 & 75.9 & 82.6 \\
C09$\rightarrow$C27 & 46.5 & 61.9 & 68.5 & 83.4 & 84.6 & 82.8 & 94.3 & 85.5 & 86.9 & 77.8 & 85.2 & \textbf{95.6} \\
C09$\rightarrow$C29 & 26.2 & 37.7 & 40.0 & 61.4 & 70.2 & 59.1 & 79.9 & 60.0 & 65.6 & 44.2 & 71.3 & \textbf{81.0} \\
C27$\rightarrow$C05 & 33.0 & 57.7 & 80.6 & 93.1 & 94.0 & 87.7 & 96.7 & 88.7 & 89.6 & 81.8 & 93.3 & \textbf{98.8} \\
C27$\rightarrow$C07 & 62.7 & 69.2 & 82.6 & 90.1 & 93.5 & 87.2 & 94.8 & 86.7 & 88.6 & 85.3 & 95.0 & \textbf{95.2} \\
C27$\rightarrow$C09 & 73.2 & 69.7 & 87.3 & 89.0 & 91.3 & 83.6 & \textbf{95.4} & 87.6 & 88.8 & 91.0 & 92.3 & 93.9 \\
C27$\rightarrow$C29 & 37.2 & 48.7 & 54.7 & 75.6 & 77.0 & 79.2 & 84.4 & 71.0 & 78.3 & 53.8 & 80.5 & \textbf{88.6} \\
C29$\rightarrow$C05 & 18.5 & 29.4 & 46.5 & 62.9 & 76.2 & 48.7 & 75.4 & 66.1 & 65.2 & 57.4 & 74.2 & \textbf{84.4} \\
C29$\rightarrow$C07 & 24.2 & 33.1 & 42.1 & 57.0 & 69.2 & 58.4 & \textbf{77.8} & 57.3 & 58.1 & 53.8 & 69.2 & 75.7 \\
C29$\rightarrow$C09 & 28.3 & 40.6 & 53.3 & 65.9 & 79.2 & 63.4 & 82.4 & 62.9 & 68.1 & 55.3 & 74.6 & \textbf{83.5} \\
C29$\rightarrow$C27 & 31.2 & 51.5 & 57.0 & 74.8 & 85.3 & 76.2 & \textbf{89.7} & 76.2 & 78.0 & 61.8 & 83.5 & 89.5\\ \midrule
Average & 34.8 & 43.3 & 60.3 & 73.1 & 81.0 & 68.2 & 85.8 & 71.2 & 73.0 & 65.1 & 80.4 & \textbf{86.7} \\ \bottomrule
\end{tabular}%
}
\end{table*}
\begin{table*}[]
\centering
\caption{Recognition Accuracies (\%) on Office-Home Dataset. Deep Learning Methods Are Below CDGS}
\label{Office-Home}
\scalebox{0.85}{%
\begin{tabular}{cccccccccccccc}
\toprule
Method & Ar$\rightarrow$Cl & Ar$\rightarrow$Pr & Ar$\rightarrow$Re & Cl$\rightarrow$Ar & Cl$\rightarrow$Pr & Cl$\rightarrow$Re & Pr$\rightarrow$Ar & Pr$\rightarrow$Cl & Pr$\rightarrow$Re & Re$\rightarrow$Ar & Re$\rightarrow$Cl & Re$\rightarrow$Pr & Average \\ \midrule
1-NN & 38.0 & 54.4 & 61.5 & 40.7 & 52.6 & 52.6 & 47.1 & 41.0 & 66.7 & 57.1 & 45.1 & 73.2 & 52.5 \\
SVM & 47.1 & 66.2 & 73.3 & 50.8 & 62.1 & 63.9 & 54.0 & 44.4 & 73.6 & 62.5 & 47.9 & 77.3 & 60.3 \\
JDA & 45.3 & 62.5 & 65.7 & 51.9 & 62.1 & 62.2 & 55.0 & 47.2 & 71.8 & 60.6 & 50.3 & 73.8 & 59.0 \\
DICD & 46.4 & 63.3 & 68.4 & 53.7 & 62.7 & 64.4 & 56.0 & 45.6 & 72.0 & 63.2 & 50.2 & 76.7 & 60.2 \\
PACET & 52.1 & 71.5 & 76.3 & 62.2 & 75.4 & 75.5 & 60.8 & 50.5 & 79.3 & 67.8 & 56.6 & 81.5 & 67.4 \\
MCS & 54.2 & 76.4 & 78.9 & 63.7 & 74.5 & 78.3 & 55.9 & 53.2 & 79.9 & 68.1 & 55.9 & 80.2 & 69.1 \\
DTLC & 51.9 & 74.1 & 75.1 & 61.4 & 70.7 & 73.7 & 63.0 & 51.1 & 76.1 & 66.7 & 54.9 & 79.0 & 66.5 \\
ARTL & 52.9 & 73.8 & 76.7 & 63.0 & \textbf{78.5} & 77.1 & 63.6 & 52.4 & 78.7 & 70.1 & 55.2 & 82.6 & 68.7 \\
MEDA & 52.9 & 75.7 & 77.4 & 60.3 & 77.6 & 77.8 & 62.5 & 52.8 & 79.3 & 68.4 & 54.7 & 82.4 & 68.5 \\
DICE$_\mathrm{lp}$ & 48.4 & 70.8 & 72.7 & 52.9 & 65.2 & 65.6 & 59.3 & 49.0 & 76.4 & 65.1 & 52.6 & 79.0 & 63.1 \\
CDGS & 55.6 & \textbf{77.0} & \textbf{80.1} & \textbf{67.1} & 77.2 & \textbf{78.4} & \textbf{66.5} & 54.4 & \textbf{81.0} & 70.5 & 57.9 & 83.3 & \textbf{70.8} \\ \midrule
Wang \emph{et al}. \cite{Wang2020} & 55.5 & 73.5 & 78.7 & 60.7 & 74.1 & 73.1 & 59.5 & 55.0 & 80.4 & 72.4 & 60.3 & \textbf{84.3} & 68.9 \\
DRCN & 50.6 & 72.4 & 76.8 & 61.9 & 69.5 & 71.3 & 60.4 & 48.6 & 76.8 & 72.9 & 56.1 & 81.4 & 66.6 \\
DSAN & 54.4 & 70.8 & 75.4 & 60.4 & 67.8 & 68.0 & 62.6 & \textbf{55.9} & 78.5 & \textbf{73.8} & \textbf{60.6} & 83.1 & 67.6 \\
Liang \emph{et al.} \cite{Liang2020} & 54.1 & 74.2 & 77.7 & 62.9 & 73.6 & 74.6 & 63.4 & 54.9 & 80.4 & 73.1 & 58.2 & 83.6 & 69.2 \\
GSP & \textbf{56.8} & \textbf{75.5} & 78.9 & 61.3 & 69.4 & 74.9 & 61.3 & 52.6 & 79.9 & 73.3 & 54.2 & 83.2 & 68.4 \\ \bottomrule
\end{tabular}%
}
\end{table*}
{\it d) Results on PIE Dataset.} Table \ref{PIE} summarizes the classification performance of CDGS and the other methods on the PIE dataset. We observe that CDGS performs better than all competitors in terms of average performance. Specifically, CDGS achieves the highest average classification accuracy, a 0.9$\%$ improvement over the best competitor, DTLC. Besides, CDGS wins 12 out of 20 tasks while DTLC only performs the best on 7 tasks. It is worth noting that compared with ARTL, MEDA, DGA-DA and DICE$_\mathrm{lp}$, CDGS achieves a 6.3$\%$ improvement, which indicates that our CDGS is more conducive to cross-domain face recognition tasks.
{\it e) Results on Office-Home Dataset.} For this large-scale dataset, we use the ResNet50 model pretrained on ImageNet to extract features. The classification results are shown in Table \ref{Office-Home}. Here, we also report the results of five recent deep domain adaptation methods, which take the ResNet50 model as the backbone. It is clearly observed that our CDGS outperforms all traditional and deep comparison methods in average accuracy.
Specifically, CDGS leads the best traditional competitor, MCS, by 1.7$\%$. In addition, CDGS is the best method on 5 out of 12 tasks while MCS only wins one task, which verifies the significant effectiveness of our proposal against the traditional competitors. Compared with the best deep competitor, CDGS achieves a 1.6$\%$ improvement, which validates the superiority of our proposal when equipped with off-the-shelf deep features.
For a comprehensive comparison, we summarize the average accuracy of several competitors and our CDGS on all benchmark datasets under the full protocol in Table \ref{averaged accuracy}. We find that CDGS obtains the highest average accuracy, leading the best competitor, MEDA, by 5.8$\%$, which validates that our CDGS is capable of addressing various DA tasks effectively.
\begin{table}[h]
\centering
\caption{Average Accuracies of Our CDGS and Several Competitors on All Six Datasets under the Full Protocol}
\label{averaged accuracy}
\scalebox{0.85}{%
\begin{tabular}{ccccccc}
\midrule
Method & MCS & DTLC & ARTL & DICE$_\mathrm{lp}$ & MEDA & CDGS \\ \midrule
Average accuracy & 64.7 & 69.5 & 68.2 & 68.9 & 70.0 & \textbf{75.8}\\ \midrule
\end{tabular}%
}
\end{table}
\subsubsection{Ablation Study}
To understand our method more deeply, we propose three variants of CDGS:
\emph{a}) CDGS$^\mathrm{sp}$, which Separates domain-invariant feature learning, affinity matrix construction and target label inference into three independent stages and constructs the affinity matrix with a Predefined similarity metric, {\it i.e.}, the Gaussian kernel similarity with kernel width 1.0; \emph{b}) CDGS$^\mathrm{dg}$, which integrates Domain-invariant feature learning and Graph self-learning into one framework, {\it i.e.}, combining Eq. (\ref{MMD}), Eq. (\ref{asm}) and Eq. (\ref{reg}); \emph{c}) CDGS$^\mathrm{ds}$, which jointly performs Domain-invariant feature learning and graph self-learning with Source domain discriminative structure preserving, {\it i.e.}, unifying Eq. (\ref{MMD}), Eq. (\ref{asm1}) and Eq. (\ref{reg}). It is worth noting that compared with CDGS$^\mathrm{ds}$, our CDGS further considers the label smoothness constraint during the discriminative graph self-learning. In Table \ref{ablation study}, we list the average classification accuracy of CDGS and the three variants on all datasets under the full protocol. Based on this table, a more detailed analysis of our CDGS is presented as follows.
{\it a) Effectiveness of Graph Self-learning.} As we can see, CDGS$^\mathrm{dg}$ is superior to CDGS$^\mathrm{sp}$ on all datasets except PIE, which verifies the effectiveness of graph self-learning. In particular, compared with CDGS$^\mathrm{sp}$, CDGS$^\mathrm{dg}$ achieves improvements of 5.9$\%$ on the ImageNet-VOC2007 dataset and 3.3$\%$ on the Office-Home dataset, which confirms the superiority of graph self-learning. By integrating domain-invariant feature learning and graph self-learning into one framework, we can capture the inherent similarity connections among source and target samples more effectively, and thus improve the classification performance of cross-domain label propagation.
{\it b) Effectiveness of Graph Self-learning with Source Discriminative Structure Preserving.} We can see that CDGS$^\mathrm{ds}$ performs much better than CDGS$^\mathrm{dg}$ in terms of average accuracy, achieving a large improvement of 5.1$\%$. Notably, on the MNIST-USPS and PIE datasets, CDGS$^\mathrm{ds}$ achieves improvements of more than 12.9$\%$. These results demonstrate that preserving the source discriminative structure in the graph self-learning process is of vital importance for improving the quality of the affinity matrix, such that the knowledge from the source domain can be transferred to the target domain more effectively.
{\it c) Effectiveness of Label Smoothness Constraint for Discriminative Graph Self-learning.} We observe that our CDGS outperforms CDGS$^\mathrm{ds}$ on 5 out of the 6 datasets and achieves superior performance in terms of average accuracy. This indicates that introducing the weakly supervised information contained in target pseudo-labels helps yield a higher-quality discriminative graph, and thus the source knowledge can be propagated to the target domain more adequately.
\begin{figure*}[]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\renewcommand{\figurename}{Figure}
\centering
\includegraphics[width=1.0\textwidth]{./parameter_analysis.pdf}
\caption{Parameter Sensitivity Analysis of CDGS with respect to $d$, $\gamma$, $\beta$ and $T$.}
\label{parameter_analysis}
\end{figure*}
\begin{table}[]
\centering
\caption{The Average Classification Accuracies of CDGS and the Three Variants on All Datasets under the Full Protocol}
\label{ablation study}
\scalebox{0.85}{%
\begin{tabular}{p{2.4cm}<{\centering}p{1.4cm}<{\centering}p{1.4cm}<{\centering}p{1.4cm}<{\centering}p{1.4cm}<{\centering}}
\toprule
Dataset & CDGS$^\mathrm{sp}$ & CDGS$^\mathrm{dg}$ & CDGS$^\mathrm{ds}$ & CDGS \\ \midrule
Office31 & 74.5 & 75.6 & 78.5 & \textbf{78.9} \\
Office-Caltech10 & 53.3 & 54.5 & 55.2 & \textbf{56.3} \\
MNIST-USPS & 66.0 & 66.7 & 80.4 & \textbf{82.2} \\
ImageNet-VOC2007 & 71.7 & 77.6 & 76.6 & \textbf{79.9} \\
PIE & 75.2 & 73.9 & \textbf{86.8} & 86.7 \\
Office-Home & 65.3 & 68.6 & 69.8 & \textbf{70.8} \\ \midrule
Average & 67.7 & 69.5 & 74.6 & \textbf{75.8} \\ \bottomrule
\end{tabular}%
}
\end{table}
\begin{table*}[]
\centering
\caption{Recognition Accuracies (\%) on Office-Caltech10 Dataset with SURF Features for SDA.}
\label{office_surf_semi}
\scalebox{0.85}{%
\begin{tabular}{cccccccccccccc}
\toprule
Method & A$\rightarrow$C & A$\rightarrow$D & A$\rightarrow$W & C$\rightarrow$A & C$\rightarrow$D & C$\rightarrow$W & D$\rightarrow$A & D$\rightarrow$C & D$\rightarrow$W & W$\rightarrow$A & W$\rightarrow$C & W$\rightarrow$D & Averaged \\ \midrule
MMDT & 36.4 & 56.7 & 64.6 & 49.4 & 56.5 & 63.8 & 46.9 & 34.1 & 74.1 & 47.7 & 32.2 & 67.0 & 52.5 \\
CDLS & 35.3 & 60.4 & 68.7 & 50.9 & 59.8 & 66.3 & 50.7 & 34.9 & 68.5 & 51.8 & 33.5 & 60.7 & 53.5 \\
ILS & 43.6 & 49.8 & 68.7 & 55.1 & 56.2 & 62.9 & 55.0 & \textbf{41.0} & 80.1 & 54.3 & 38.6 & 70.8 & 55.6 \\
TFMKL-S & 43.8 & \textbf{62.0} & 70.9 & 54.2 & 60.1 & 68.1 & 53.1 & 38.9 & 79.1 & 54.4 & 36.2 & 69.1 & 57.5 \\
OBTL & 41.5 & 60.2 & 72.4 & 54.8 & 56.2 & 71.1 & 54.4 & 40.3 & 83.2 & \textbf{55.0} & 37.4 & 75.0 & 58.9 \\
CDGS & \textbf{44.7} & 60.9 & \textbf{73.5} & \textbf{57.9} & \textbf{63.9} & \textbf{75.0} & \textbf{57.3} & 40.8 & \textbf{87.1} & 54.1 & \textbf{39.4} & \textbf{79.3} & \textbf{61.1} \\ \bottomrule
\end{tabular}%
}
\end{table*}
\begin{table}[]
\centering
\caption{Recognition Accuracies (\%) on MNIST-USPS Dataset for SDA.}
\label{mnist_semi}
\scalebox{0.9}{%
\begin{tabular}{cccccc}
\toprule
Task & MMDT & CDLS & ILS & TFMKL-S & CDGS\\ \midrule
M$\rightarrow$U &61.7 &79.0 &45.2 &69.5 &\textbf{88.0} \\
U$\rightarrow$M &47.8 &68.6 &50.4 &53.5 &\textbf{79.1} \\ \midrule
Averaged &54.8 &73.8 &47.8 &61.5 &\textbf{83.5} \\ \bottomrule
\end{tabular}%
}
\end{table}
\subsubsection{Parameter Sensitivity and Convergence Analysis}
Three tunable parameters are involved in our CDGS: $d$, $\gamma$ and $\beta$. We have conducted extensive parameter sensitivity analysis on the object, digit and face datasets by varying one parameter at a time over a wide range while fixing the other parameters to their optimal values. We display the results for the tasks C$\rightarrow$D (SURF), U$\rightarrow$M, C29$\rightarrow$C05 and Cl$\rightarrow$Pr in Fig. \ref{parameter_analysis} (a) $\sim$ (c). To verify the effectiveness of our CDGS, the results of the best competitor for each task are also provided as dashed lines.
First, we run CDGS as $d$ varies in $\{1C, 2C, \ldots, 10C\}$, where $C$ is the number of classes of the corresponding task. From Fig. \ref{parameter_analysis} (a), we observe that our CDGS is robust to different values of $d$. We empirically find that $d \in [2C, 7C]$ is an optimal choice. Then, we investigate the sensitivity of $\gamma$ by varying it from 0.001 to 10.0. Theoretically, when $\gamma \rightarrow 0$, the optimization problem is ill-defined, while when $\gamma \rightarrow \infty$, domain-invariant feature learning and discriminative graph self-learning are not performed, so our CDGS cannot learn robust features for cross-domain label propagation. As we can see from Fig. \ref{parameter_analysis} (b), determining the optimal value of $\gamma$ in advance is infeasible, but a reasonable value generally makes CDGS outperform the best competitor. Finally, we vary the value of $\beta$ from 0.001 to 10.0 to evaluate its influence. Theoretically, too small (large) values of $\beta$ make the label smoothness constraint (graph self-learning with the projected features) ineffective, which hinders the construction of a high-quality affinity matrix. A proper value of $\beta$ helps to capture the intrinsic similarity of samples, thereby improving the performance of cross-domain label propagation. From Fig. \ref{parameter_analysis} (c), we find that $\beta \in [0.01, 5.0]$ is an optimal choice. Moreover, we display the convergence analysis in Fig. \ref{parameter_analysis} (d), where the maximum number of iterations is 15. We observe that our CDGS converges within a few iterations.
\subsection{Semi-supervised Domain Adaptation}
\subsubsection{Results on Office-Caltech10 Dataset}
We follow the standard experimental setup of \cite{MMDT}, where 20 samples per class are randomly selected for the Amazon domain and 8 for the others as sources. Besides, three labeled target samples per category are selected for training, with the rest used for testing. For a fair comparison, we use the train/test splits released by \cite{MMDT}. The average accuracies for each task over 20 random splits are shown in Table \ref{office_surf_semi}. We also report the performance of OBTL \cite{OBTL}, which, to our knowledge, is the best method on this dataset. We observe that in terms of average accuracy, CDGS obtains a 2.2$\%$ improvement over OBTL. Besides, CDGS works the best for 9 out of the 12 tasks while OBTL wins only one task, which verifies the significant effectiveness of our semi-supervised extension. Comparing the results in Table \ref{office_surf_semi} and Table \ref{Office_surf_split}, we find that when a few labeled target samples are available, CDGS obtains an 11.4$\%$ gain in average classification performance, which highlights the value of our extension.
\subsubsection{Results on MNIST-USPS Dataset}
We follow the protocol of \cite{TFMKL-S}. Specifically, all source samples are utilized for training, and 2 labeled target samples per category are also selected for training, with the remainder to be recognized. The average classification accuracies over 5 random splits are reported in Table \ref{mnist_semi}, where some results are copied from \cite{TFMKL-S}. We observe that our CDGS is the best method on all tasks and achieves 83.5$\%$ average accuracy, leading the second-best method, CDLS, by 9.7$\%$, which confirms the superiority of our semi-supervised extension.
\section{Conclusion and Future Work \label{conclusion}}
In this paper, a novel domain adaptation approach called CDGS is proposed, which infers target pseudo-labels by cross-domain label propagation. Different from existing cross-domain label propagation methods that separate domain-invariant feature learning, affinity matrix construction and target label inference into three independent stages,
our CDGS integrates these three parts into one unified optimization framework, such that they can assist each other to achieve more effective knowledge transfer. Furthermore, to construct a high-quality affinity matrix in CDGS, we propose a discriminative graph self-learning strategy, which can capture the inherent data manifold structure by adaptively calculating sample similarity in the projected space and exploring the discriminative information contained in well-labeled source data and pseudo-labeled target data. An iterative optimization algorithm is designed to solve the CDGS optimization problem. We further extend our CDGS to the SDA scenario in a direct but effective way and the corresponding optimization problem can be solved with the identical optimization algorithm. Extensive experimental results on six benchmark datasets have verified the significant superiority of our CDGS against the competitors in both UDA and SDA settings.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
|
{
"arxiv_id": "2302.08683",
"language": "en",
"timestamp": "2023-02-23T02:08:24",
"url": "https://arxiv.org/abs/2302.08683",
"yymm": "2302"
} | \section{Introduction}
To become a communication medium on a par with movies, computer
animations must present a rich view into an artificial world. Texture
maps applied to three-dimensional models of scenery help to create
some of the required visual complexity. But static scenery is only
part of the answer; subtle motion of many elements of the scene is
also required. Trees and bushes should move in response to the wind
created by a passing car, a runner should crush the grass underfoot,
and clouds should drift across the sky. While simple scenery and
sparse motion can sometimes be used effectively to focus the attention
of the viewer, missing or inconsistent action may also distract the
viewer from the plot or intended message of the animation. One of the
principles of animation is that the viewer should never be
unintentionally surprised by the motion or lack of it in a
scene\cite{Thomas:1984:DA}.
\figureSimple{
\centerline{\epsfxsize=\columnwidth \epsfbox{PsImages/alienBikesCGF.eps}}
\caption{
Image of tracks left in the sand by a group of fast moving,
alien bikers.
}\label{fig:AlienBikes}
}
Subtle changes in the scenery may also convey important information
about the plot or scene context to the viewer. For example,
figure~\ref{fig:AlienBikes} shows an image of alien bikers riding
across a desert landscape. The presence of tracks makes it clear that
the ground is soft sand rather than hard rock, and that other bikers
have already passed through the area. Figure~\ref{fig:NoSand} shows
the same scene without the sand. In addition to being visually less
interesting, the altered image lacks some of the visual cues that help
the viewer understand the scene.
\figureSimple{
\centerline{\epsfxsize=\columnwidth \epsfbox{PsImages/noSandCGF.eps}}
\caption{
Same image as shown in figure~\ref{fig:AlienBikes} but without the
simulated tracks in the sand.
}\label{fig:NoSand}
}
Movie directors face a related problem because they must ensure that
the viewer is presented with a consistent view of the world and the
characters. An actor's clothing should not inexplicably change from
scene to scene, lighting should be consistent across edits, and such
absent, unexpected, or anachronistic elements as missing tire tracks,
extra footprints, or jet trails must be avoided. The risk of
distracting the viewer is so great that one member of the director's
team, known as a ``continuity girl,'' ``floor secretary,'' or ``second
assistant director,'' is responsible solely for maintaining
consistency\cite{Reisz:1994:TFE}.
Maintaining consistency is both easier and harder in computer
animation. Because we are creating an artificial world, we can
control the lighting conditions, layout, and other scene parameters
and recreate them if we need to ``shoot'' a fill-in scene later.
Because the world is artificial, however, we may be tempted to
re\-arrange objects between scenes for best effect, thereby creating a
series of scenes that could not coexist in a consistent world.
Computer-generated animations and special effects add another facet to
the consistency problem because making models that move and deform
appropriately is a lot of work. For example, most animated figures do
not leave tracks in the environment as a human actor would and special
effects artists have had to work hard to create such subtle but
essential effects as environment maps of flickering flames. Because
each detail of the scene represents additional work, computer graphics
environments are often conspicuously clean and sparse. The approach
presented here is a partial solution to this problem; we create a more
interesting environment by allowing the character's actions to change
a part of the environment.
In this paper, we describe a model of ground surfaces and explain how
these surfaces can be deformed by characters in an animation. The
ground ma\-terial is modeled as a height field formed by vertical
columns. After the impact of a rigid body model, the ground material
is deformed by allowing compression of the material and movement of
material between the columns. To demonstrate the algorithms, we show
the creation of footprints in sand, mud, and snow. These surfaces are
created by modifying only five essentially independent parameters of
the simulation. We evaluate the results of the animation through
comparison with video footage of human runners and through more
dramatic patterns created by bicycle tire tracks
(figure~\ref{fig:AlienBikes}), a falling bicycle
(figure~\ref{fig:Bike}), and a tripping runner
(figure~\ref{fig:Trip}).
\section{Background}
Several researchers have investigated the use of pro\-cedural
techniques for generating and animating background elements in
computer-generated scenes. Although we are primarily interested in
techniques that allow the state of the environment to be altered in
response to the motions of an actor, methods for animating or modeling
a part of the environment independent of the movements of the actors
are also relevant because they can be modified to simulate
interactions.
The first example of animated ground tracks for computer animation was
work done by Lundin\cite{Lundin:1984:MS,SVR:100}. He describes how
footprints can be created efficiently by rendering the underside of
an object to create a bump map and then applying the bump map to the
ground surface to create impressions where the objects have contacted
the ground.
The work most closely related to ours is that of Li and
Moshell\cite{Li:1993:MSR}. They developed a model of soil that allows
interactions between the soil and the blades of digging machinery.
Soil spread over a terrain is modeled using a height field, and soil
that is pushed in front of a bulldozer's blade is modeled as discrete
chunks. Although they discount several factors that contribute to
soil behavior in favor of a more tractable model, their technique is
physically based and they arrive at their simulation formulation after
a detailed analysis of soil dynamics. As the authors note, actual
soil dynamics is complex and their model, therefore, focuses on a
specific set of actions that can be performed on the soil, namely the
effect of horizontal forces acting on the soil causing displacements
and soil slippage. The method we present here has obvious
similarities to that of Li and Moshell, but we focus on modeling a
different set of phenomena at different scales. We also adopt a more
appearance-based approach in the interest of developing a technique
that can be used to model a wide variety of ground materials for
animation purposes.
Another method for modeling the appearance of ground surfaces is
described by Chanclou, Luciani, and Habibi\cite{Chanclou:1996:PML}.
They use a simulation-based ground surface model that behaves
essentially like an elastic sheet. The sheet deforms plastically when
acted on by other objects. While their model allows objects to make
smooth impressions in the ground surface, they do not describe how
their technique could be used to realistically model real world ground
materials.
Nishita and his colleagues explored modeling and rendering of snow
using metaballs\cite{Nishita:1997:MRM}. Their approach allowed them
to model snow on top of objects and
drifts to the side of objects. They also developed a method
for realistically rendering snow that captured effects due to multiple
levels of light scattering.
Other environmental effects that have been animated include water,
clouds and gases\cite{Ebert:1994:TMP,Stam:1995:DFO,Foster:1997:MMT},
fire\cite{Stam:1995:DFO,Chiba:1994:TDV},
lightning\cite{Reed:1994:VSL}, and leaves blowing in the
wind\cite{Wejchert:1991:AA}. Among these, water has received the most
attention. Early work by Peachey\cite{Peachey:1986:MWS} and by
Fournier and Reeves\cite{Fournier:1986:SMO} used procedural models
based on specially designed wave functions to model ocean waves as
they travel and break on a beach. Later work by Kass and
Miller\cite{Kass:1990:RSF} developed a more general approach using
shallow water equations to model the behavior of water under a wider
variety of conditions. Their model also modified the appearance of a
sand texture as it became wet. O'Brien and
Hodgins\cite{OBrien:1995:DSS} extended the work of Kass and Miller to
allow the behavior of the water simulation to be affected by the
motion of other objects in the environment and to allow the water to
affect the motion of the other objects. They included examples of
objects floating on the surface and simulated humans diving into pools
of water. More recently Foster and Metaxas\cite{Foster:1996:RAL} used
a variation of the three-dimensional Navier-Stokes equations to model
fluids. In addition to these surface and volumetric approaches,
particle-based methods have been used to model water spray and other
loosely packed materials. Supplementing particle models with
inter-particle dynamics allows a wider range of phenomena to be
modeled. Examples of these systems include
Reeves\cite{Reeves:1983:PST}, Sims\cite{Sims:1990:PAR}, Miller and
Pearce\cite{Miller:1989:GDC}, and Terzopoulos, Platt, and
Fleischer\cite{Terzopoulos:1989:HMD}.
Simulation of interactions with the environment can also be used to
generate still models. Several researchers have described techniques
for generating complex plant models from grammars describing how the
plant should develop or grow over time. M{\v{e}}ch and
Prusinkiewicz\cite{Mech:1996:VMP} developed techniques for allowing
developing plants to affect and be affected by their environment.
Dorsey and her colleagues\cite{Dorsey:1996:MRM,Dorsey:1996:FCA} used
simulation to model how an object's surface changes over time as
environmental factors act on it.
\section {Simulation of Sand, Mud, and Snow}
In this paper, we present a general model of a deformable ground
material. The model consists of a height field defined by vertical
columns of material. Using displacement and compression algorithms,
we animate the deformations that are created when rigid geometric
objects impact the ground material and create footprints, tire tracks,
or other patterns on the ground. The properties of the model can be
varied to produce the behavior of different ground materials such as
sand, mud, and snow.
\subsection {Model of Ground Material}
\figureSimple{
\centerline{\epsfxsize= 0.8 \columnwidth \epsfbox{PsImages/heightField.eps}}
\caption{
The uniform grid forms a height field that defines the ground
surface. Each grid point within the height field represents a
vertical column of ground material with the top of the column
centered at the grid point.
}\label{fig:HeightField}
}
\figureTwoCol{
\centerline{\epsfxsize=\textwidth \epsfbox{PsImages/resolution.eps}}
\caption{
Footprint in sand computed with different grid resolutions. {\em
(A)} 20\,mm, {\em (B)} 10\,mm, {\em (C)} 5\,mm, and {\em (D)}~2.5\,mm. As
the grid resolution increases, the shape of the footprint is
defined more clearly but its overall shape remains the same.
}\label{fig:Resolution}
}
Our simulation model discretizes a continuous volume of ground
material by dividing the surface of the volume into a uniform
rectilinear grid that defines a height field
(figure~\ref{fig:HeightField}). The resolution of the grid must be
chosen appropriately for the size of the desired features in the
ground surface. For example, in figure~\ref{fig:AlienBikes} the
resolution of the grid is 1\,cm and the bicycles are approximately
2\,meters long with tires~8\,cm wide. Though the resolution of the
grid determines the size of the smallest feature that can be
represented, it does not otherwise dramatically affect the shape of
the resulting terrain (figure~\ref{fig:Resolution}).
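A minimal sketch of such a column grid is given below; the class and method names are ours and purely illustrative. A two-dimensional array stores one height per column, and the grid resolution converts column indices to world coordinates.
\begin{verbatim}
import numpy as np

class HeightField:
    # Uniform grid of vertical columns; heights[i, j] is the top of column (i, j).
    def __init__(self, rows, cols, cell_size, initial_height=0.0):
        self.cell_size = cell_size     # e.g. 0.01 m for the 1 cm grids used here
        self.heights = np.full((rows, cols), initial_height, dtype=float)

    def top_of_column(self, i, j):
        # World-space position of the top of column (i, j).
        return (i * self.cell_size, j * self.cell_size, self.heights[i, j])
\end{verbatim}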
Initial conditions for the height of each grid point can be created
procedurally or imported from a variety of sources. We implemented
initial conditions with noise generated on an integer lattice and
interpolated with cubic Catmull-Rom splines (a variation of a
two-dimensional Perlin noise function described by
Ebert\cite{Ebert:1994:TMP}). Terrain data or the output from a
modeling program could also be used for the initial height field.
Alternatively, the initial conditions could be the output of a
previous simulation run. For example, the trampled surface of a
public beach at the end of a busy summer day could be modeled by
simulating many crisscrossing paths of footprints.
\subsection{Motion of the Ground Material}
\figureTwoCol{
\centerline{\epsfxsize=\textwidth \epsfbox{PsImages/steps.eps}}
\caption{
The motion of the ground material is computed in stages. {\em
(A)} First, the geometric objects are intersected with the ground
surface. {\em (B)} Next, the penetrating columns are adjusted,
and {\em (C)} the material is distributed to non-penetrating
columns. {\em (D)} Finally, the erosion process spreads the
material.
}\label{fig:Steps}
}
The height field represented by the top of the columns is deformed as
rigid geometric objects push into the grid. For the examples given in
this paper, the geometric objects are a runner's shoe, a bicycle tire
and frame, and a jointed human figure. The motion of the rigid bodies
was computed using a dynamic simulation of a human running, bicycling,
or falling down on a smooth, hard ground plane\cite{Hodgins:1995:AHA}.
The resulting motion was given as input to the simulation of the
ground material in the form of trajectories of positions and
orientations of the geometric objects. Because of this generic
specification of the motion, the input motion need not be dynamically
simulated but could be keyframe or motion capture data.
The surface simulation approximates the motion of the columns of
ground material by compressing or displacing the material under the
rigid geometric objects. At each time step, a test is performed to
determine whether any of the rigid objects have intersected the height
field. The height of the affected columns is reduced until they no
longer penetrate the surface of the rigid object. The material that
was displaced is either compressed or forced outward to surrounding
columns. A series of erosion steps are then performed to reduce the
magnitude of the slopes between neighboring columns. Finally,
particles can be generated from the contacting surface of the rigid
object to mimic the spray of material that is often seen following an
impact. These stages are illustrated in figure~\ref{fig:Steps}. We
now discuss each stage of the algorithm in more detail: collision,
displacement, erosion, and particle generation.
\paragraph* {Collision.} The collision detection and response
algorithm determines whether a rigid object has collided with the
ground surface. For each column, a ray is cast from the bottom of the
column through the vertex at the top. If the ray intersects a rigid
object before it hits the vertex, then the rigid object has penetrated
the surface and the top of the column is moved down to the
intersection point. A flag is set to indicate that the column was
moved, and the change in height is recorded. The computational costs
of the ray inter\-section tests are reduced by partitioning the
polygons of the rigid body models using an axis-aligned bounding box
hierarchy\cite{Snyder:1995:ITP}.
Using a vertex coloring algorithm, the simulation computes a contour
map based on the distance from each column that has collided with the
object to the closest column that has not collided
(figure~\ref{fig:GridColor}). This information is used when the
material displaced by the collision is distributed. As an
initialization step, columns not in contact with the object are
assigned the value zero. During subsequent iterations, unlabeled
columns adjacent to labeled columns are assigned a value equal to the
value of the lowest numbered adjacent column plus one.
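This contour labeling is equivalent to a breadth-first distance computation outward from the non-contacting columns. The sketch below is our own illustrative code (not the implementation used for the figures) and assumes a \texttt{neighbors} function supplying the eight-way adjacency used in our examples.
\begin{verbatim}
from collections import deque

def contour_map(contact, neighbors):
    # contact: dict mapping column index -> True if the column touches the object.
    # Returns the distance (in columns) from each contacting column to the
    # nearest non-contacting column; non-contacting columns are labeled 0.
    dist = {c: 0 for c, hit in contact.items() if not hit}
    queue = deque(dist)
    while queue:
        c = queue.popleft()
        for n in neighbors(c):
            if contact.get(n, False) and n not in dist:
                dist[n] = dist[c] + 1
                queue.append(n)
    return dist
\end{verbatim}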
\figureSimple{
\centerline{\epsfxsize= 0.75 \columnwidth \epsfbox{PsImages/gridColorCGF.eps}}
\caption{
The contour map represents the distance from each column in
contact with the foot to a column that is not in contact. For
this illustration, we used columns that are four-way
connected. However, in the examples in this paper we used
eight-way connectivity because we found that the higher
connectivity yielded smoother results.
}\label{fig:GridColor}
}
\paragraph* {Displacement.} Ground material from the
columns that are in contact with the object is either compressed or
distributed to surrounding columns that are not in contact with the
object. The compression ratio $\alpha$ is chosen by the user and is
one of the parameters available for controlling the visual appearance
of the ground material. The material to be distributed, $\Delta h$,
is computed by $\Delta h = \alpha m$, where $m$ is the total amount of
displaced material. The material that is not compressed is equally
distributed among the neighbors with lower contour values, so that the
ground material is redistributed to the closest ring of columns not in
contact with the rigid object. The heights of the columns in this
ring are increased to reflect the newly deposited material.
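One plausible reading of this displacement step is sketched below; it is illustrative code rather than the implementation used for the figures. The uncompressed fraction $\alpha$ of each contacting column's removed material is pushed along decreasing contour values until it reaches the first ring of non-contacting columns.
\begin{verbatim}
def displace(heights, removed, contour, neighbors, alpha):
    # removed: dict mapping contacting column -> height removed by the collision step.
    pending = {col: alpha * m for col, m in removed.items()}
    # Visit contacting columns from the innermost (largest contour value) outward.
    for col in sorted(contour, key=contour.get, reverse=True):
        if contour[col] == 0 or col not in pending:
            continue
        amount = pending.pop(col)
        lower = [n for n in neighbors(col) if contour.get(n, 0) < contour[col]]
        if not lower:
            continue
        share = amount / len(lower)
        for n in lower:
            if contour.get(n, 0) == 0:
                heights[n] += share                       # deposit outside the object
            else:
                pending[n] = pending.get(n, 0.0) + share  # keep pushing outward
\end{verbatim}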
\paragraph* {Erosion.} Because the displacement algorithm deposits
material only in the first ring of columns not in contact with the
object, the heights of these columns may be increased in an
unrealistic fashion. An ``erosion'' algorithm is used to identify
columns that form steep slopes with their neighbors and move material
down the slope to form a more realistic mound. Several parameters
allow the user to control the shape of the mound and model different
ground materials.
The erosion algorithm examines the slope between each pair of adjacent
columns in the grid. For a column $ij$ and a neighboring column $kl$,
the slope, $s$, is
\begin{equation}
s = \tan^{-1}\!\left( \frac{h_{ij} - h_{kl}}{d} \right)
\end{equation}
where $h_{ij}$ is the height of column $ij$ and $d$ is the
distance between the two columns. If the slope between two neighboring
columns is greater than a
threshold $\theta_{out}$, then ground material is moved from the
higher column down the slope to the lower column. In the special case
where one of the columns is in contact with the geometric object, a
different threshold, $\theta_{in}$, is used to provide independent
control of the inner slope.
Ground material is
moved by computing the average difference in height, $\Delta h_a$, for
the $n$ neighboring columns with too large a downhill slope:
\begin{equation}
\Delta h_a = {{\sum (h_{ij} - h_{kl})} \over {n}}.
\end{equation}
The average difference in height is multiplied by a fractional
constant, $\sigma$, and the resulting quantity is equally distributed
among the downhill neighbors.
This step in the algorithm repeats until all slopes
are below a threshold, $\theta_{stop}$.
The erosion algorithm may cause some
columns to intersect a rigid object but this penetration will be
corrected on the next time step.
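A sketch of this erosion loop is given below. It is illustrative only, uses our own function names, and assumes column heights stored in a dictionary keyed by grid index, a \texttt{neighbors} function for adjacency, and an \texttt{in\_contact} predicate marking columns under the rigid object.
\begin{verbatim}
import math

def erode(heights, cell_size, in_contact, neighbors,
          theta_in, theta_out, sigma, theta_stop, max_iters=100):
    for _ in range(max_iters):
        max_slope = 0.0
        for col in list(heights):
            steep = []
            for n in neighbors(col):
                slope = math.atan((heights[col] - heights[n]) / cell_size)
                max_slope = max(max_slope, slope)
                limit = theta_in if (in_contact(col) or in_contact(n)) else theta_out
                if slope > limit:
                    steep.append(n)
            if not steep:
                continue
            # Average excess height over the too-steep downhill neighbors, scaled
            # by the roughness parameter sigma, is shared equally among them.
            avg_diff = sum(heights[col] - heights[n] for n in steep) / len(steep)
            for n in steep:
                heights[n] += sigma * avg_diff / len(steep)
            heights[col] -= sigma * avg_diff
        if max_slope < theta_stop:    # stop once no slope exceeds theta_stop
            break
\end{verbatim}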
\figureTwoCol{
\vskip 2ex
\centerline{%
\hfill
{\epsfysize= 2in \epsfbox{PsImages/hashTable.eps}}
\hfill
{\epsfysize= 2in \epsfbox{PsImages/closeUpBike.eps}}
\hfill
}
\caption{
The left figure shows the ground area that has been created in the
hash table. The currently active area is highlighted in red. The
right figure shows the same scene rendered over the initial ground
surface. There are approximately {37,000} columns in the active
area and {90,000} stored in the hash table; the number of
columns in the entire virtual grid is greater than $2$~million.
}\label{fig:Bike}
}
\paragraph*{Particle Generation.}
We use a particle system to model portions of the ground material that
are thrown into the air by the motion of the rigid geometric objects.
The user controls the adhesiveness between the object and the material
as well as the rate at which the particles fall from the object. Each
triangle of the object that is in contact with the ground picks up a
volume of the ground material during contact. The volume of material
is determined by the area of the triangle multiplied by an adhesion
constant for the material. When the triangle is no longer in contact
with the ground, it drops the attached material as particles according
to an exponential decay rate.
\begin{equation}
\Delta v = v ( e^{(-t + t_c + \Delta t) / h } - e^{(-t + t_c) / h} )
\end{equation}
where $v$ is the initial volume attached to the triangle, $t$ is the
current time, $t_c$ is the time at which the triangle left the ground,
$\Delta t$ is the time step size, and $h$ is a half life parameter
that controls how quickly the material falls off. The number of
particles released on a given time step is determined by
$n = \Delta v / \phi$, where ${\phi}$ is the volume of each particle.
The initial position, ${\bf p_0}$, for a particle is randomly
distributed over the surface of the triangle according to:
\begin{equation}
{\bf p_0} = b_a {\bf x}_a + b_b {\bf x}_b + b_c {\bf x}_c
\end{equation}
where ${\bf x}_a$, ${\bf x}_b$, and ${\bf x}_c$ are the coordinates of
the vertices of the triangle and $b_a$, $b_b$, and $b_c$ are the
barycentric coordinates of ${\bf p_0}$ given by
\begin{eqnarray}
b_a &=& 1.0 - \sqrt{ \rho_a } \\
b_b &=& \rho_b ( 1.0 - b_a ) \\
b_c &=& 1.0 - ( b_a + b_b )
\end{eqnarray}
where $\rho_a$ and $\rho_b$ are independent random variables evenly
distributed between $[0..1]$.
This computation results in a uniform
distribution over the triangle\cite{Turk:1990:GRP}.
The initial velocity of a particle is computed from the velocity of
the rigid object:
\begin{equation}
{\bf \dot{p}}_0 = \mbox{\boldmath $\nu$} + \mbox{\boldmath $\omega$} \times {\bf p}_0
\end{equation}
where \mbox{\boldmath $\nu$} and \mbox{\boldmath $\omega$} are the
linear and angular velocity of the object. To give a more realistic
and appealing look to the particle motion, the initial velocities are
randomly perturbed.
The final component of the particle creation algorithm accounts for
the greater probability that material will fall off rapidly
accelerating objects. A particle is only created if
$(|{\bf\ddot{p}}_0|/s)^\gamma > \rho$, where $s$ is the minimal
acceleration at which all potential particles will be dropped,
$\gamma$ controls the variation of the probability of particle
creation with speed, and $\rho$ is a random variable evenly
distributed in the range $[0..1]$.
If particles are only generated at the beginning of a time step then
the resulting particle distribution will have a discrete, sheetlike
appearance. We avoid this undesirable effect by randomly distributing
each particle's creation time within the time step interval. The
information used to calculate the initial position and velocity is
interpolated within the interval to obtain information appropriate for
the particle's creation time.
Once generated, the particles fall under the influence of gravity.
When a particle hits the surface of a column, its volume is added to
the column.
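The particle creation described above can be sketched as follows. This is illustrative code; the jitter magnitude and the assumption that ${\bf p}_0$ is expressed relative to the object's reference point are ours. The position is sampled uniformly over the triangle using the barycentric scheme above, and the velocity follows the rigid-body formula with a small random perturbation.
\begin{verbatim}
import math
import numpy as np

def spawn_particle(xa, xb, xc, linear_vel, angular_vel, jitter=0.05):
    # Uniform barycentric sampling over the triangle (xa, xb, xc).
    rho_a, rho_b = np.random.random(), np.random.random()
    b_a = 1.0 - math.sqrt(rho_a)
    b_b = rho_b * (1.0 - b_a)
    b_c = 1.0 - (b_a + b_b)
    p0 = b_a * np.asarray(xa) + b_b * np.asarray(xb) + b_c * np.asarray(xc)
    # Rigid-body velocity nu + omega x p0, plus a small random perturbation.
    v0 = np.asarray(linear_vel) + np.cross(np.asarray(angular_vel), p0)
    v0 = v0 + jitter * np.random.randn(3)
    return p0, v0
\end{verbatim}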
\subsection {Implementation and Optimization} Simulations of terrain
generally span a large area. For example, we would like to be able to
simulate a runner jogging on a beach, a skier gliding down a
snow-covered slope, and a stampede of animals crossing a sandy valley.
A naive implementation of the entire terrain would be intractable
because of the memory and computation requirements. The next two
sections describe optimizations that allow us to achieve reasonable
performance by storing and simulating only the active portions of the
surface and by parallelizing the computation.
\paragraph*{Algorithm Complexity.} Because the ground model is a
two-dimensional rectilinear grid, the most straightforward
implementation is a two-dimensional array of nodes containing the
height and other information about the column. If an animation
required a grid of \(i\) rows and \(j\) columns, \(i \times j \) nodes
would be needed, and computation time and memory would grow linearly
with the number of grid points. Thus, a patch of ground 10\,m $\times$
10\,m with a grid resolution of 1\,cm yields a \(1000 \times 1000
\) grid with one million nodes. If each node requires 10\,bytes of
memory, the entire grid requires 10\,Mbytes of storage. Even this
relatively small patch of ground requires significant system
resources. However, most of the ground nodes are static throughout
the simulation, allowing us to use a much more efficient algorithm
that creates and simulates only the active nodes.
The active area of the ground surface is determined by projecting an
enlarged bounding box for the rigid objects onto the surface as shown
in figure~\ref{fig:Bike}. The nodes within the projection are marked
as active, and the collision detection, displacement, and erosion
algorithms are applied, not to the entire grid, but only to these
active grid points. Additionally, nodes are not allocated for the
entire ground surface, rather they are created on demand as they become
active. The $ij$ position of a particular node is used as the
index into a hash table allowing the algorithms to be implemented as
if a simple array of nodes were being used.
Because only the active grid points are processed, the computation
time is now a function of the size of the rigid objects in the scene
rather than the total grid size. Memory requirements are also
significantly reduced, although the state of all modified nodes must
be stored even after they are no longer active.
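The on-demand column storage can be sketched as a hash table keyed by the column's grid index. In the illustrative code below, Python's dictionary plays the role of the hash table and \texttt{initial\_height\_fn} stands in for the procedural or imported initial conditions.
\begin{verbatim}
class SparseGround:
    def __init__(self, initial_height_fn):
        self._columns = {}                    # (i, j) -> column height
        self._initial_height = initial_height_fn

    def height(self, i, j):
        # Allocate the column the first time it becomes active.
        if (i, j) not in self._columns:
            self._columns[(i, j)] = self._initial_height(i, j)
        return self._columns[(i, j)]

    def set_height(self, i, j, h):
        self._columns[(i, j)] = h
\end{verbatim}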
\figureSimple{
\vskip -.15in
\centerline{\epsfxsize=\columnwidth \epsfbox{PsImages/graph.eps}}
\vskip -.25in
\caption{
These timing results were computed on a Silicon Graphics Power
Challenge system with $16$ $195$MHz MIPS R10000 processors and
4\,Gbytes of memory. Each character is an alien biker like the ones
shown in figure~\ref{fig:AlienBikes}. Times plotted are for one
second of simulated motion.
}\label{fig:Graph}
}
\paragraph*{Parallel Implementation.} Despite the optimization provided
by simulating only active nodes, the computation time grows linearly
with the projected area of the rigid objects. Adding a second
character will approximately double the active area (see
figure~\ref{fig:Graph}), but the computation time for multiple
characters can be reduced by using parallel processing when the
characters are contacting independent patches of ground.
We have designed and implemented a parallel scheme for the ground
surface simulation. A single parent process maintains the state of
the grid and coordinates the actions of the child processes. During
initialization, a child process is created for each character that
will interact with the ground surface. The children communicate with
the parent process via the UNIX socket mechanism and may exist
together on a single multiprocessor machine or on several separate
single processor machines.
Each child computes the changes to the grid caused by its character as
quickly as possible, without any direct knowledge about the progress
of the other children. When a child completes the computation for a
time step, it reports the changes it has made to the parent process
and then waits for information about any new grid cells that will be
in the bounding box for its character during the next time step.
However, if the child is ready to compute a time step before another
child has reported prior changes that are within the bounding box of a
character assigned to the first child, the parent will prevent the
first child from continuing until the changes are available. For
example, in an animation of a cyclist that rides across a footprint
left by a runner, the child process computing the cyclist may arrive
at the footprint location before the process computing the runner has
simulated the creation of the footprint. If two or more characters
have overlapping bounding boxes for the same time step, the
computation for those characters is reassigned to a single child
process until they no longer overlap.
\figureTwoCol{
\vskip 0.5ex
\centerline{\epsfxsize=6in \epsfbox{PsImages/rsmsFilmStrip.eps}}
\vskip -2ex
\caption{
Images from video footage of a human runner stepping in sand and a
simulated runner stepping in sand, mud, and snow. The human
runner images are separated by 0.133\,s; the simulated images are
separated by 0.1\,s.
}\label{fig:RsmsFilmStrip}
}
\figureTwoCol{
\vskip 0.5ex
\centerline{\epsfxsize=6in \epsfbox{PsImages/trip.eps}}
\vskip -2ex
\caption{
Images of a runner tripping over an obstacle and falling onto the
sand. The final image shows the pattern she made in the
sand.
}\label{fig:Trip}
}
Simulation run times for both the serial and parallel versions of our
algorithm are shown in figure~\ref{fig:Graph}. As expected, the time
required by the serial implementation grows linearly with the number
of characters. Ideally, the time required by the parallel
implementation would be constant since each character has its own
processor. However, due to communication overhead and interactions
between the characters, the run time for the parallel
version grows as the number of characters increases, but at a much
slower rate than the serial version.
\section {Animation Parameters}
One goal of this research is to create a tool that allows animators to
easily generate a significant fraction of the variety seen in ground
materials. Five parameters of the simulation can be changed by the
user to achieve different effects: inside slope, outside slope,
roughness, liquidity, and compression. The first four are used by the
erosion algorithm, and the fifth is used by the displacement
algorithm.
The inside and outside slope parameters, $\theta_{in}$ and
$\theta_{out}$, modify the shape of a mound of ground material by
changing the slope adjacent to intersecting geometry and the slope on
the outer part of the mound. Small values lead to more erosion and a
more gradual slope; large values yield less erosion and a steeper
slope.
Roughness, $\sigma$, controls the irregularity of the ground
deformations by changing the amount of material that is moved from one
column to another during erosion. Small values yield a smooth mound
of material while larger values give a rough, irregular surface.
Liquidity, $\theta_{stop}$, determines how watery the material appears
by controlling the amount of erosion within a single timestep. With
less erosion per time step, the surface appears to flow outward from
the intersecting object; with more erosion, the surface moves to its
final state more quickly.
The compression parameter, $\alpha$, offers a way to model substances
of different densities by determining how much displaced material is
distributed outward from an object that has intersected the grid. A
value of one causes all material to be displaced; a value less than
one allows some of the material to be compressed.
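For concreteness, the five parameters can be collected in a small structure, as in the sketch below; this is our own illustration (the actual erosion and displacement algorithms are described earlier in the paper), and only the role of the compression parameter is spelled out, using the sand values listed in table~\ref{tab:ParameterTable}.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class GroundParams:
    theta_in: float    # inside slope: slope adjacent to intersecting geometry
    theta_out: float   # outside slope: slope on the outer part of the mound
    sigma: float       # roughness: irregularity of material moved between columns
    theta_stop: float  # liquidity: limits how much erosion occurs per time step
    alpha: float       # compression: fraction of intersected material displaced

# Sand values from the parameter table in the results section.
SAND = GroundParams(theta_in=0.8, theta_out=0.436, sigma=0.2,
                    theta_stop=0.8, alpha=0.3)

def split_intersected_material(volume, p):
    """Material removed where an object intersects the grid is either pushed
    outward (displaced) or absorbed (compressed), controlled by alpha."""
    displaced = p.alpha * volume
    compressed = (1.0 - p.alpha) * volume
    return displaced, compressed
\end{verbatim}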
When particles are used, additional parameters are required to
determine their appearance. We included parameters to control
adhesion, particle size, and the rate at which material falls off of
the objects. We used particles in the animations of sand but did not
include them in the animations of mud or snow. Other more dynamic
motions such as skiing might generate significant spray, but running
in snow appears to generate clumps of snow rather than particles.
\section {Results and Discussion}
\tableSimple{
\begin{center}
\begin{tabular} {|l|l|l|l|l|} \hline
\multicolumn{1}{|l|}{Effect} & \multicolumn{1}{l|}{Variable} & \multicolumn{1}{l|}{Sand} & \multicolumn{1}{l|}{Mud} & \multicolumn{1}{l|}{Snow} \\ \hline
inside slope & $\theta_{in}$ & 0.8 & 1.57 & 1.57 \\
outside slope & $\theta_{out}$ & 0.436 & 1.1 & 1.57 \\
roughness & $\sigma$ & 0.2 & 0.2 & 0.2 \\
liquidity & $\theta_{stop}$ & 0.8 & 1.1 & 1.57 \\
compression & $\alpha$ & 0.3 & 0.41 & 0.0 \\ \hline
\end{tabular}
\end{center}
\caption{
Table of parameters for the three ground
materials.
}\label{tab:ParameterTable}
}
Figure~\ref{fig:RsmsFilmStrip} shows images of a human runner stepping in
sand and a simulated runner stepping in sand, mud, and snow. The
parameters used for the simulations of the three ground materials are
given in table~\ref{tab:ParameterTable}. The footprints left by the
real and simulated runners in sand are quite similar.
Figures~\ref{fig:Bike} and~\ref{fig:Trip} show more complicated
patterns created in the sand by a falling bicycle and a tripping
runner. For each of these simulations, we used a grid resolution of
1\,cm by 1\,cm yielding a virtual grid size of 2048$\times$1024 for the
bicycle and 4096$\times$512 for the runner.
The images in this paper were rendered with Pixar's RenderMan
software. We found that rendering the ground surface using a
polygonal mesh was computationally expensive and that the data files
required to describe the mesh were large and difficult to work with.
We achieved better results using a single polygon with a
displacement shader that modeled the ground surface.
The simulation described in this paper allows us to capture many of
the behaviors of substances such as sand, mud, and snow. Only about
fifteen iterations were required to hand tune the parameters for the
desired effect with each material. The computation time is not
burdensome: a 3-second simulation of the running figure interacting
with a 1\,cm by 1\,cm resolution ground material required less than
2\,minutes of computation time on a single 195\,MHz MIPS R10000
processor.
Many effects are missed by this model. For example, wet sand and
crusty mud often crack and form large clumps, but our model can
generate only smooth surfaces and particles. Actual ground material
is not uniform but contains both small grains of sand or dirt as well
as larger objects such as rocks, leaves, and sea shells. More
generally, many factors go into creating the appearance of a given
patch of ground: water and wind erosion, plant growth, and the
footprints of many people and animals. Some of these more subtle
effects are illustrated by the human footprints in snow and mud shown
in figure~\ref{fig:Reality}.
\figureSimple{
\centerline{\epsfxsize=3.in \epsfbox{PsImages/reality.eps}}
\caption{
Images of actual tire tracks in snow and human footprints in snow
and in mud.
}\label{fig:Reality}
}
One significant approximation in the ground simulation is that the
motion of the rigid objects is not affected by the deformations of the
surface. For the sequences presented here, each of the rigid body
simulations interacted with a flat, smooth ground plane. A more
accurate and realistic simulation would allow the bike and runner to
experience the undulations in the initial terrain as well as the
changes in friction caused by the deforming surfaces. For example, a
bike is slowed down significantly when rolling on sand and a runner's
foot slips slightly with each step on soft ground.
Other approximations are present in the way that the sand responds to
the motion of the rigid objects. For example, a given area of sand has
no memory of the compression caused by previous impacts. Because the
motion of the rigid objects is specified in advance, this
approximation does not cause any noticeable artifacts. Compression
could also be used to change rendering parameters as appropriate.
We do not take the velocity of the rigid object into account in the
ground simulation. For bicycling and running, this approximation is
negligible because the velocities of the wheel and the foot with respect
to the ground are small at impact. For the falling runner or bicycle,
however, this approximation means that the ridge of sand is uniformly
distributed rather than forming a larger ridge in the direction of
travel.
The motions of sand, mud, and snow that we generated are distinctly
different from each other because of changes to the simulation
parameters. Although much of the difference is due to the
deformations determined by our simulations, part of the visual
difference results from different surface properties used for
rendering. To generate the images in this paper, we had not only to
select appropriate parameters for the simulation but also to select
parameters for rendering. A more complete investigation of techniques
for selecting rendering parameters and texture maps might prove
useful.
We regard this simulation as appearance-based rather than
engineering-based because most of the parameters bear only a scant
resemblance to the physical parameters of the material being modeled.
The liquidity parameter, for example, varies between $0$ and $\pi/2$
rather than representing the quantity of water in a given amount of
sand. It is our hope that this representation for the parameters
allows for intuitive adjustment of the resulting animation without
requiring a deep understanding of the simulation algorithms or soil
mechanics. The evaluation is also qualitative or appearance-based in
that we compare simulated and video images of the footprints rather
than matching initial and final conditions quantitatively.
\section*{Acknowledgments}
A previous version of this paper appeared in \textit{The
Proceedings of Graphics Interface '98}.
This project was supported in part by NSF NYI Grant No. IRI-9457621
and associated REU funding,
Mitsubishi Electric Research Laboratory, and a Packard Fellowship. The
second author was supported by an
Intel Fellowship.
\bibliographystyle{cgf}
|
{
"arxiv_id": "2302.08587",
"language": "en",
"timestamp": "2023-02-20T02:02:19",
"url": "https://arxiv.org/abs/2302.08587",
"yymm": "2302"
} | \section{Introduction}
Hydrogen molecules and their isotopes play a key role in chemical reactions in the divertor of magnetic fusion devices.
The rates of many molecule-related processes, such as the formation of negative ions~\cite{bacal_pressure_1981,gorse_dynamics_1992}, dissociation and ionization~\cite{Sawada1993}, and dissociative attachment~\cite{capitelli_open_2005,may_non-linear_nodate,krasheninnikov_plasma-neutral_1997}, are known to be significantly influenced by the rotational and vibrational population distribution in the electronic ground state of hydrogen molecules.
Rotational and vibrational populations in the electronic ground state ($X$ state) have been estimated for a variety of plasmas,
based on direct measurements with vacuum ultraviolet lasers~\cite{pealat_rovibrational_1985,stutzin__1990,vankan_high_2004,gabriel_formation_2008} or the combination of visible emission observations and population-kinetics models (e.g., a corona model and collisional-radiative model)~\cite{xiao_rovibrational_2004,fantz_spectroscopypowerful_2006,Briefi2017}.
In many cases, non-thermal population distributions have been observed, where the vibrational temperature is significantly higher than the rotational temperature~\cite{xiao_rovibrational_2004} and highly excited rotational states are more populated than expected from a Boltzmann distribution~\cite{vankan_high_2004,gabriel_formation_2008, Briefi2017}.
A Monte-Carlo simulation has also predicted the nonthermal population distribution of hydrogen molecules~\cite{Sawada2020-er}.
The rotational temperature, which is estimated from the populations of the low-rotational-quantum-number states, has been thought to be close to the wall temperature~\cite{Stutzin1989-oj,Watts2001-up}, because the energy gaps among the rotational states are on a scale similar to room temperature.
In addition to the wall effect, cascades from upper electronic levels may also change the rotational temperature.
This effect has been invoked to account for the electron-density dependence of the rotational temperature found in spectroscopic observations of magnetic plasma confinement devices~\cite{brezinsek_molecular_2002,unterberg_local_2005,Hollmann2006-zy}.
A surface effect has been considered to be the origin of the higher vibrational temperature.
The surface-assisted recombination of two hydrogen atoms results in highly excited molecules~\cite{capitelli_open_2005}.
Although the plasma-condition dependence of the vibrational temperature has been studied~\cite{fantz_spectroscopic_1998}, its connection to the elementary processes is still unknown.
Despite the understanding of such microscopic processes, a full understanding of the population kinetics is still lacking.
Experimental evaluations of the population kinetics in a variety of plasmas may help reveal the macroscopic behavior.
In this work, we study the emission spectra of the hydrogen Fulcher-$\alpha$ band, which are routinely obtained from the divertor region of the Large Helical Device (LHD).
The spectra were observed with an echelle spectrometer that has both a high wavelength resolution of $\approx$~0.1~nm and a wide wavelength bandwidth of 400--760 nm.
We analyzed about a thousand spectra observed in more than a hundred LHD discharge experiments, covering a wide range of plasma parameters.
By assuming a two-temperature model and utilizing a Bayesian inference framework, we robustly estimated the population distribution of the electronic ground state of hydrogen molecules for these experiments.
We found that the lower rotational temperature shows a significant dependence on the plasma parameters, as reported in previous works~\cite{brezinsek_molecular_2002,unterberg_local_2005}.
On the other hand, the other parameters show only small changes over the parameter range we studied.
Furthermore, a significant correlation among these temperatures is found, suggesting the existence of a simple mechanism generating the non-thermal population distribution.
For example, Sawada et al.\ have pointed out that electron-impact excitation can form a similar non-thermal population distribution in plasmas~\cite{sawada_rovibrationally_2016}.
Our result is qualitatively consistent with their simulation.
The rest of this paper is organized as follows.
In section~\ref{sec:experiment}, we briefly present our experimental setup, which is the same as that used in our preceding paper~\cite{ishihara}.
In section~\ref{sec:analysis}, the analysis method and our assumptions are described.
The results for typical spectra are also shown in this section.
In section~\ref{sec:correlation}, we present our full results for all analyzed frames.
Detailed correlations among the estimated parameters are presented.
\section{Experimental Setup\label{sec:experiment}}
\subsection{Large Helical Device}
LHD is a helical-type magnetic-plasma-confinement machine.
With a pair of helical coils, plasmas with a major radius of $R = $ 3.5--4.0 m and a minor radius of $\approx 0.6$ m are generated.
These helical coils also produce the helical divertor structure.
A poloidal cross section of the LHD plasma is shown in \fref{fig:poloidal}.
In the figure, the closed magnetic flux surfaces are indicated by gray curves.
The cross points of open magnetic field lines are shown by gray dots.
The intrinsic helical divertor structure and the stochastic layers can be seen.
Inside the last closed flux surface (LCFS), ions and electrons are confined.
Once these charged particles diffuse out of the LCFS, they are guided along open magnetic field lines to the divertor plates through the divertor leg structure.
Depending on the coil current, LHD can take several magnetic configurations.
The magnetic axis position $R_{ax}$ is an important parameter representing a configuration.
With the inner-shifted configuration $R_{ax} = 3.60$ m (\fref{fig:poloidal}~(b)), the LHD plasma has a thinner ergodic layer and a smaller helical ripple than with $R_{ax} = 3.65$ m (\fref{fig:poloidal}~(a)).
\begin{figure}[hbt]
\includegraphics[width=8cm]{poloidal}
\caption{%
Poloidal cross sections of LHD plasmas for (a) the configuration $R_{ax} = 3.65$ m and (b) $R_{ax} = 3.60$ m.
Some closed magnetic flux surfaces are shown by solid curves while the stochastic layer encircling the last closed flux surface is shown by gray dots.
The stochastic layer also provides the helical divertor structure, which connects the main plasma to the divertor plates located on the inboard and outboard sides.
}
\label{fig:poloidal}
\end{figure}
The plasma is heated by neutral beam injection (NBI) and electron cyclotron heating (ECH).
LHD is equipped with several gas injection systems.
By varying the gas pressure, the heating power, and the magnetic configuration, a variety of plasmas spanning a broad range of parameter space can be generated.
An example of the temporal sequence of a discharge experiment is shown in \fref{fig:shotsummary}.
This plasma is initiated by ECH and sustained by NBI (\fref{fig:shotsummary}~(a)).
In \fref{fig:shotsummary}~(b), the temporal evolutions of the electron temperature ($T_e$) and density ($n_e$) measured by Thomson scattering method are shown.
In this experiment, helium gas is injected at $t$ = 4.5 s, resulting in an increase in $n_e$ and decrease in $T_e$, followed by their gradual recoveries.
This Thomson scattering system measures the radial distributions of $T_e$ and $n_e$ with a spatial resolution of $\approx 10$~mm and temporal resolution of $\approx 30$~ms.
The spatial distributions of $T_e$ and $n_e$ measured at $t$ =~4.00 and 4.75~s are shown in \fref{fig:thomson}.
Although it is difficult to precisely determine the position of the LCFS due to the stochastic nature of the magnetic field lines at the edge region, typically this is located at $r_{eff} \approx 0.6$ m, where $r_{eff}$ is the effective minor radius.
$T_e^{ax}$ and $n_e^{ax}$ shown in \fref{fig:shotsummary}~(b) are the values of $T_e$ and $n_e$ at the magnetic axis $r_{eff}$ = 0 m, while $T_e^{LCFS}$ and $n_e^{LCFS}$ indicate these values at $r_{eff}$ = 0.6 m.
Many other diagnostics have been installed in LHD.
In this work, we consider some well-established diagnostics, namely
the ion saturation current onto the divertor plates $I_{is}$ measured by Langmuir probes and the gas pressure in the divertor region $P_{H_2}$ measured by a fast ionization gauge, in addition to the $n_e$ and $T_e$ values measured by the Thomson scattering method.
The temporal evolutions of $I_{is}$ and $P_{H_2}$ are shown in~\fref{fig:shotsummary}~(c).
Because of the gas injection, the gas pressure increases.
The ion saturation current shows a non-monotonic behavior, indicating the interplay between the increase in $n_i$ and the decrease in $T_i$ in the divertor region.
\begin{figure}[ht]
\includegraphics[width=8cm]{shotsummary}
\caption{%
Temporal evolution of a typical experiment (shotnumber \#142897).
(a) Heating power by neutral beam injection (NBI) and electron cyclotron heating (ECH).
(b) Temporal evolutions of $T_e$ and $n_e$ on the plasma axis ($T_e^{ax}$ and $n_e^{ax}$, respectively) and last closed flux surface ($T_e^{LCFS}$ and $n_e^{LCFS}$, respectively).
Because of the gas injection at $t$ = 4.5 s (indicated by the thick vertical bar), $n_e$ increases and $T_e$ decreases.
(c) Temporal evolutions of the ion saturation current measured at the divertor plate ($I_{is}$) and neutral gas pressure measured near the divertor region ($P_{H2}$).
(d) Measured emission intensities of hydrogen atomic Balmer-$\beta$ ($I_{H_\beta}$) and the Fulcher-$\alpha$ $(0-0) Q1$ transition.
The estimated result of the excitation flux ($n_X\,R\,n_e$, see the text for its definition) is also shown.
(e) Estimated rotational and vibrational temperatures of the $X$-state of hydrogen molecule.
Estimation uncertainties are also shown, either by error bars or by the width of the curves.
}
\label{fig:shotsummary}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=8cm]{Te_ne}
\caption{%
Radial distributions of (a) $T_e$ and (b) $n_e$ measured for \#142897 at $t=$ 4.00 s and 4.75 s with Thomson scattering method.
Rough position of the LCFS is shown by vertical dashed lines.
}
\label{fig:thomson}
\end{figure}
\subsection{Visible spectroscopy}
We observed the visible emission from the divertor region of the LHD.
The experimental setup is the same as that shown in Ref.~\cite{ishihara}: the emission from the inner divertor region was collected by an optical lens, focused onto an optical fiber edge, and transferred to the entrance slit of an echelle spectrometer developed by our group~\cite{Tanaka2020-gg,Hasuo2012-dv}.
This spectrometer simultaneously measures the spectrum in the wavelength range of 400--780 nm with a wavelength resolution of $\approx 0.1$ nm.
An exposure time of 33 ms and a frame rate of 4 Hz were used for all the results shown in this work.
\Fref{fig:spectra} shows the visible spectra measured by this spectrometer at $t$ = 4.00 and 4.75 s for the experiment shown in \fref{fig:shotsummary}.
The vertical bars in the figure show the central wavelengths of the Q branches of the hydrogen Fulcher-$\alpha$ band.
The Fulcher-$\alpha$ band is the emission lines from $d ^3\Pi_u^-$ state to $a ^3 \Sigma_g^+$ state.
The transition notations $(v'-v'') QN'$ are shown in the figure, where $v'$ and $v''$ indicate the vibrational quantum numbers of the upper and lower states, respectively, while $N'$ indicates the rotational quantum number of the upper state.
As it is in the $Q$ branch, $N' = N''$.
For example, $(0-0) Q1$ indicates $v'=0, v''=0$, and $N'$ = $N''$ = 1.
\begin{figure*}[ht]
\includegraphics[width=18cm]{spectra}
\caption{%
The emission spectra observed at $t$ = 4.0 and 4.75 s with the echelle spectrometer.
The central wavelengths of Fulcher-$\alpha$ band are shown by the vertical bars.
The transitions for the emission lines used in the analysis are also indicated.
}
\label{fig:spectra}
\end{figure*}
After the gas injection, the intensities of these lines change.
The intensity of the $(0-0) Q1$ line decreases while that of the $(0-0) Q11$ line increases.
We estimate the emission intensities by fitting each line in the spectra with a Gaussian function.
The temporal evolution of the $(0-0) Q1$ line intensity is shown in \fref{fig:shotsummary}~(d).
In this panel, we also plot the intensity evolution of the hydrogen atom Balmer-$\beta$ line.
Balmer-$\beta$ line intensity increases according to the gas injection while the $(0-0) Q1$ line intensity decreases.
We estimate the population of the $d$ states from the intensity $I_{(v'-v'')QN'}$ based on the following relation
\begin{align}
I_{(v'-v'')QN'} = h\nu^{d v' N'}_{a v'' N''}\; A^{d v' N'}_{a v'' N''}\; n_{d v' N'},
\end{align}
where $\nu^{d v' N'}_{a v'' N''}$ is the photon energy of the $(d, v', N') \rightarrow (a, v'', N'')$ transition and $n_{d v' N'}$ is the line-integrated population density of the upper state.
$A^{d v' N'}_{a v'' N''}$ is the Einstein coefficient for the transition $d v' N' \rightarrow a v'' N''$, which is computed by~\cite{Surrey2003-st}
\begin{align}
A^{d v' N'}_{a v'' N''} =
\frac{16 \pi^3}{3h^4\epsilon_0 c^3}
(h\nu^{d v' N'}_{a v'' N''})^3
\,
\overline{R_e}^2
\,
q^{dv'}_{av''}
\,
\frac{S_{N'N''}}{2N' + 1}
\label{eq:fc}
\end{align}
with the vacuum permittivity $\epsilon_0$, the Planck constant $h$, the speed of light $c$, the dipole transition moment $\overline{R_e}$ between the $a$ and $d$ states, the Franck-Condon factor between the upper and lower vibrational states $q^{dv'}_{av''}$, and the H\"onl-London factor for the $Q$ branch $S_{N'N''}=(2N'+1) / 2$.
The values of $q^{dv'}_{av''}$ are taken from Ref.~\cite{fantz_franckcondon_2006}.
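To illustrate how these relations are applied in practice, a minimal numerical sketch is given below (ours, for illustration only); the dipole transition moment and the Franck--Condon factor are placeholders that must be supplied from the cited references.
\begin{verbatim}
import math

H = 6.62607015e-34       # Planck constant [J s]
EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]
C = 2.99792458e8         # speed of light [m/s]

def einstein_A(photon_energy, Re, q, N_upper):
    """Einstein coefficient of a Q-branch line following the expression above.
    photon_energy in J, Re (dipole transition moment) in C m, q the
    Franck-Condon factor; the Honl-London factor is S = (2N'+1)/2."""
    S = (2 * N_upper + 1) / 2.0
    return (16 * math.pi**3 / (3 * H**4 * EPS0 * C**3)
            * photon_energy**3 * Re**2 * q * S / (2 * N_upper + 1))

def upper_state_population(intensity, photon_energy, A):
    """Invert the intensity relation I = h*nu * A * n for the line-integrated
    upper-state population (consistent units assumed)."""
    return intensity / (photon_energy * A)
\end{verbatim}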
\begin{figure}[ht]
\includegraphics[width=8cm]{population}
\caption{%
The population of $d$-state observed at $t$ = 4.0 and 4.75 s.
The solid curves show the best fit by the coronal model (\eref{eq:coronal_model}) with the two-temperature Boltzmann distribution for $X$ state (\eref{eq:two_temperature}).
The estimated $X$ state populations are shown in the inset.
}
\label{fig:population}
\end{figure}
\Fref{fig:population} shows the population of the upper state of the Fulcher-$\alpha$ band, normalized by the statistical weight of the upper state $g_{N'} = (2N'+1)g_{as}$ with the nucleus statistical weight $g_{as} = 2 - (-1)^{N'}$.
The rotational population for each $v'$ state shows a decreasing trend as a function of the excitation energy.
The slope on the logarithmic vertical scale is steeper for the lower-density plasma ($t$ = 4.00 s) and flatter for the higher-density plasma ($t$ = 4.75 s).
\section{Estimation of the Population Distribution of the $X$ states\label{sec:analysis}}
Only the upper-state population can be directly obtained from the observed emission intensities.
In order to estimate the population distribution in the $X$ state, we utilize the coronal model, which has been widely used in the literature~\cite{xiao_rovibrational_2004,fantz_spectroscopypowerful_2006,Briefi2017,fantz_spectroscopic_1998}.
Note that the coronal model holds only when the electron density is sufficiently small.
In high-density plasmas, the population influx from other excited states becomes important; however, the validity criterion is not yet established.
Thus, in this work we ignore this effect and assume that the coronal model is valid in our parameter range.
\subsection{Coronal model}
With the assumption of the coronal equilibrium, the population distribution of the $d$ state is determined by the population influx from the $X$ state ($\Gamma_{X v N}^{d v' N'}$) and the outflux to the $a$ state ($\Gamma_{d v' N'}^{a v'' N''}$).
$\Gamma_{X v N}^{d v' N'}$ is computed with the Born-Oppenheimer approximation,
\begin{align}
\Gamma_{X v N}^{d v' N'} \approx q^{Xv}_{dv'} \delta(N-N') \,
n_{X v N} \,
R(T_e)\,
n_e.
\end{align}
where $q^{Xv}_{dv'}$ is the Franck-Condon factor between $X$ and $d$ states, and $R(T_e)$ is the excitation rate coefficient from $X$ to $d$ states.
The population outflux $\Gamma_{d v' N'}^{a v'' N''}$ is
\begin{align}
\Gamma_{d v' N'}^{a v'' N''} \approx A^{d v' N'}_{a v'' N''} \delta(N'-N'')
\; n_{d v' N'}.
\end{align}
From the steady state condition
$\sum_{v,N}\Gamma_{X v N}^{d v' N'} = \sum_{v''}\Gamma_{d v' N'}^{av''N''}$,
the population of the $d$ state can be written by
\begin{align}
n_{d v' N'} = \frac{
\sum_{v, N} \Gamma_{X v N}^{d v' N'}
}{
\sum_{v''} A^{d v' N'}_{a v'' N'}
}.
\label{eq:coronal_model}
\end{align}
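Written out explicitly, the coronal-model relation amounts to the following short calculation (a sketch of ours; the array layouts and units are arbitrary choices made only for illustration).
\begin{verbatim}
import numpy as np

def d_state_population(n_X, q_Xd, A_sum, R_ne):
    """Coronal-model population of the d state.

    n_X[v, N]   : X-state populations
    q_Xd[v, vp] : Franck-Condon factors between X(v) and d(v')
    A_sum[vp, N]: total radiative decay rate, sum over v'' of A(d v' N -> a v'' N)
    R_ne        : excitation rate coefficient R(T_e) times electron density n_e
    The Born-Oppenheimer approximation keeps N unchanged during excitation.
    """
    n_v, n_N = n_X.shape
    n_vp = q_Xd.shape[1]
    n_d = np.zeros((n_vp, n_N))
    for vp in range(n_vp):
        for N in range(n_N):
            influx = R_ne * np.sum(q_Xd[:, vp] * n_X[:, N])
            n_d[vp, N] = influx / A_sum[vp, N]
    return n_d
\end{verbatim}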
\subsection{Ro-Vibrational Temperature Estimation of the $X$ state}
Because \eref{eq:coronal_model} involves more unknown parameters ($n_{X v N}$) than measured values of $n_{d v' N'}$, an appropriate parameterization is necessary to estimate the distribution of $n_{X v N}$~\cite{fantz_spectroscopic_1998}.
Although a Boltzmann distribution has been assumed for $n_{X v N}$ in several works~\cite{xiao_rovibrational_2004,fantz_spectroscopypowerful_2006,Briefi2017,fantz_spectroscopic_1998}, it is also known that the distribution deviates from a Boltzmann distribution, particularly for high-rotational-quantum-number states.
In fact, our observation of the population distribution shown in \fref{fig:population} also presents a deviation from a Boltzmann distribution, where highly excited rotational states are overpopulated.
According to the preceding works which report the direct observations of the $X$-state population~\cite{pealat_rovibrational_1985,stutzin__1990,vankan_high_2004,gabriel_formation_2008}, the distribution may have the following properties:
\begin{itemize}
\item The population distribution of the states with small rotational quantum number $N$ in each vibrational state follows a Boltzmann distribution with temperature $T_{rot}^{lo}$.
\item The large-$N$ states are more populated than this Boltzmann distribution predicts and can be approximated by another Boltzmann distribution with a higher temperature $T_{rot}^{hi} > T_{rot}^{lo}$.
\item The total population for each $v$ state follows yet another Boltzmann distribution with $T_{vib}$.
\end{itemize}
Based on these observations, we assume the following two-temperature form for the $X$-state population,
\begin{align}
n_{X v N} = n_X \biggl(
(1 - \alpha) f(E_{X v N} | T_{vib}^{lo}, T_{rot}^{lo})
+ \alpha f(E_{X v N} | T_{vib}^{hi}, T_{rot}^{hi})
\biggr),
\label{eq:Xdistribution}
\end{align}
where $f(E_{X v N} | T_{vib}, T_{rot})$ is the Boltzmann distribution with vibrational and rotational temperatures, $T_{vib}$ and $T_{rot}$, respectively,
\begin{align}
f(E_{X v N} | T_{vib}, T_{rot}) =
\frac{1}{T_{vib}T_{rot}}\exp\left[
- \frac{E_{X v 0} - E_{X 0 0}}{T_{vib}}
- \frac{E_{X v N} - E_{X v 0}}{T_{rot}}
\right].
\label{eq:two_temperature}
\end{align}
Here, $E_{X v N}$ is the energy of the $X$ state with vibrational and rotational quantum numbers $v$ and $N$.
Thus, $E_{X 0 0}$ is the ground state energy.
\Eref{eq:Xdistribution} and its substitution into \eref{eq:coronal_model} have six adjustable parameters, i.e., the excitation flux $n_{X}\,R\,n_e$, the weight of the two distributions $\alpha$, and two pairs of rotational and vibrational temperatures $T_{rot}^{lo}, T_{vib}^{lo}, T_{rot}^{hi}, T_{vib}^{hi}$.
Here, $n_{X}\,R\,n_e$ represents the product of the total hydrogen molecule density $n_X = \sum_{vN}n_{XvN}$, the electronic-state-resolved excitation rate coefficient $R(T_e)$, and the electron density.
Because these three factors cannot be separated by our analysis, we treat their product as a single adjustable parameter.
We fit the distribution of $n_{d v' N'}$ by adjusting these parameters.
In order to make a robust inference not only for these two frames but also for the more than $10^3$ frames of data, we adopt a hierarchical Bayesian model, the details of which are described in the Appendix.
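For reference, the two-temperature parameterization can be written compactly as below (our sketch; the level energies $E_{XvN}$ must be supplied from molecular data, and the normalization follows the expression above).
\begin{verbatim}
import numpy as np

def boltzmann_two_axis(E, T_vib, T_rot):
    """Two-axis Boltzmann weight f(E_{XvN} | T_vib, T_rot); E[v, N] are the
    X-state level energies in the same units as the temperatures (e.g. eV)."""
    E_v0 = E[:, [0]]                                   # E_{X v 0}
    return (1.0 / (T_vib * T_rot)
            * np.exp(-(E_v0 - E[0, 0]) / T_vib - (E - E_v0) / T_rot))

def x_state_population(E, n_X, alpha, T_rot_lo, T_vib_lo, T_rot_hi, T_vib_hi):
    """Two-temperature model of the X-state population distribution."""
    return n_X * ((1 - alpha) * boltzmann_two_axis(E, T_vib_lo, T_rot_lo)
                  + alpha * boltzmann_two_axis(E, T_vib_hi, T_rot_hi))
\end{verbatim}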
The bold curves in \fref{fig:population} show the fit result of $n_{d v' N'}$ for $t$ = 4.00 and 4.75 s.
The change in the populations is well captured.
The reconstructed population in the $X$ state is also shown in the inset.
The estimated values of these parameters are listed in \tref{tb:parameters}.
\begin{table}
\caption{\label{tb:parameters}Estimated parameters for \#142897 at $t$ = 4.00 and 4.75 s.
The $\pm$ range indicates the 16--84\% confidence interval of these parameters.
}
\begin{tabular}{l c c c c c}
$t$ &
$T_{rot}^{lo}$ (eV) & $T_{rot}^{hi}$ (eV) &
$T_{vib}^{lo}$ (eV) & $T_{vib}^{hi}$ (eV) &
$\alpha$
\vspace{2mm}
\\
\hline
\vspace{2mm}
4.00 s &
$0.153^{+0.006}_{-0.006}$ & $0.56^{+0.05}_{-0.05}$ &
$0.65^{+0.03}_{-0.05}$ & $0.88^{+0.05}_{-0.04}$ &
$0.48^{+0.02}_{-0.02}$\\
4.75 s &
$0.289^{+0.014}_{-0.013}$ & $0.77^{+0.04}_{-0.09}$ &
$0.77^{+0.04}_{-0.04}$ & $0.90^{+0.04}_{-0.04}$ &
$0.51^{+0.03}_{-0.03}$ \\
\end{tabular}
\end{table}
The value of $T_{rot}^{lo}$ changes significantly between these two time points,
while the changes in $T_{rot}^{hi}, T_{vib}^{lo}, T_{vib}^{hi}$, and $\alpha$ are less significant.
The values of the three temperatures $T_{rot}^{hi}, T_{vib}^{lo}, T_{vib}^{hi}$ are estimated to be in a similar range.
As shown in the inset of \fref{fig:population}, this results in convergence to a single Boltzmann distribution for the highly excited levels of the $X$ state.
This behavior is consistent with the previous works, where the population in the $X$ state has been directly measured~\cite{pealat_rovibrational_1985,stutzin__1990,vankan_high_2004,gabriel_formation_2008}.
\section{Parameter dependence of the $X$ state population\label{sec:correlation}}
We collect the spectra obtained for 120 discharge experiments (74 experiments with $R_{ax}$ = 3.65 m and 46 experiments with $R_{ax}$ = 3.60 m) totalling 1145 frames.
The summary of the experiments is shown in \tref{tb:shotnumber}.
There are also some variations in ECH and NBI powers.
The same inference of the $X$ state population is carried out for all of these frames.
\Fref{fig:pairplot} shows the distribution among several parameters, $n_e^{LCFS}$, $T_e^{LCFS}$, $P_{H_2}$, $I_{is}$, $I_{H\beta}$, $n_X\,R\,n_e$, $T_{rot}^{lo}$, and $T_{vib}^{lo}$ at these frames.
The parameters for the $R_{ax} = 3.65$-m- and 3.60-m-experiments are shown in different colors.
\begin{table}[hbt]
\centering
\caption{%
Summary of the experiments we analyze in this work.
The ECH column indicates the typical ECH power for these experiments.
In the column of NBI, the indices of the neutral beam injectors used for the experiments are shown.
\#1, 2, and 3 are negative-ion-based NBIs with 180 keV injection energy, which mainly heat the electrons.
\#4 and 5 are positive-ion-based NBIs with 40 keV injection energy, which mainly heat the ions.
}
\label{tb:shotnumber}
\begin{tabular}{ccccc}
\hline
\textit{$R_{ax}$} (m) & shotnumber & \textit{$B_{t}$} (T) & ECH (MW) & NBI\\
\hline
3.65 & 142857-142917 & -2.712& 75& \#1,2,3\\
& 143523-143560 & 2.712& 60& \#1,2,3\\
\hline
3.60 & 143293-143297 & -1.640& 0& \#1,2,3,4,5\\
& 143306-143307 & -2.750& 60&\#1,2,3,4,5\\
& 143387-143415 & -2.750& 60& \#1,2,3\\
& 143943-143950 & -2.750& 250-550& \#2,3\\
& 143973-143988 & -2.750& 75& \#2,3\\
\hline
\end{tabular}
\end{table}
\begin{figure*}[ht]
\includegraphics[width=18cm]{pairplot}
\caption{%
A pair plot for plasma parameters and $T_{rot}^{lo}$ and $T_{vib}^{lo}$.
The observation results for different magnetic configurations ($R_{ax}$ = 3.60 m and 3.65 m) are shown in different colors.
}
\label{fig:pairplot}
\end{figure*}
\subsection{Plasma parameters}
$n_e^{LCFS}$ varies over 0.06--$1\times 10^{20}\mathrm{\;m^{-3}}$, while $T_e^{LCFS}$ varies over 0.08--0.9 keV.
$T_e^{LCFS}$ has a negative correlation with $n_e^{LCFS}$.
The values of $n_e^{LCFS}$ and $T_e^{LCFS}$ are dependent on each other, i.e., most of the points in the top-left panel of \fref{fig:pairplot} lie on a single curve.
Positive correlations among $n_e^{LCFS}$, $P_{H_2}$, and $I_{is}$ are apparent.
This may be intuitive, as a higher gas pressure often gives a denser plasma, and denser plasmas often result in a larger ion flux to the divertor.
However, these scatter plots show more diffuse relations than the $n_e^{LCFS}$-$T_e^{LCFS}$ relation.
This suggests that other factors also affect these relations.
A similar trend can be seen in \fref{fig:shotsummary}~(a), (b), where $n_e^{LCFS}$ and $T_e^{LCFS}$ change together, while the changes in $P_{H_2}$ and $I_{is}$ are not monotonic with respect to the change in $n_e^{LCFS}$.
The values of $I_{H\beta}$ and $n_X\,R\,n_e$ also show positive correlations with $n_e^{LCFS}$.
This is consistent with the positive correlation between $P_{H_2}$ and $n_e^{LCFS}$, i.e., the atom and molecule densities should have a positive dependence on $P_{H_2}$ and the emission rate is almost linearly proportional to $n_e$.
These relations show a large scatter, suggesting the existence of another process affecting the molecular density in front of the divertor.
\subsection{The rotational temperature}
\begin{figure*}[ht]
\includegraphics[width=18cm]{correlation}
\caption{%
(a-d) Dependence of $T_{rot}^{lo}$ on some plasma parameters.
The results for $R_{ax}$ = 3.65 m and 3.6 m are shown in the upper and lower panels, respectively.
The scatter in the $T_{rot}^{lo}$-$n_e^{LCFS}$ and $T_{rot}^{lo}$-$T_e^{LCFS}$ plots is smaller than in the rest of the plots.
%
In (a), the empirical relation of $T_{rot}$ with $n_e$ (\eref{eq:empirical}) is shown by a solid curve.
The width of the curve indicates the uncertainty in the wall temperature ($300$--$600$ K) at the desorption location.
The dashed curve is the same relation, but $n_e$ at the divertor plate, estimated from \eref{eq:ne_div}, is used instead of $n_e^{LCFS}$.
(e) The histogram of $T_{rot}^{lo}$.
The prior distribution optimized by Bayesian inference is also shown by the solid curve. The median and the 68\% range of the prior are shown by solid and dashed lines, respectively.
See Appendix for the details of Bayesian inference.
}
\label{fig:correlation}
\end{figure*}
The values of $T_{rot}^{lo}$ show a clear dependence on $n_e^{LCFS}$ and $T_e^{LCFS}$.
Expanded correlation plots are shown in \fref{fig:correlation}.
The scatter plot between $T_{rot}^{lo}$ and $n_e^{LCFS}$ is closest to a single curve, while the scatter in the $T_{rot}^{lo}$-$P_{H_2}$ and $T_{rot}^{lo}$-$I_{is}$ plots is larger.
This suggests that the rotational temperature depends directly on the electron density rather than on the gas pressure or the ion flux to the divertor.
This interpretation is consistent with that of the previous works~\cite{brezinsek_molecular_2002,unterberg_local_2005,Hollmann2006-zy}.
However, it should be noted that we only consider the electron density at $r_{eff}$ = 0.6 m; the value exactly at the emission location is unavailable.
The solid curve in \fref{fig:correlation}~(a) shows the empirical dependence of $T_{rot}$~\cite{Sergienko2013-th,Brezinsek2005-dz,Hollmann2006-zy},
\begin{align}
T_{rot} \approx 280 + T_s + 6 \times 10^{-17} n_e \;\;\mathrm{[K]},
\label{eq:empirical}
\end{align}
where $T_s$ is the surface temperature (assuming no extra excitation mechanism due to the surface recombination) in K, and $n_e$ is in $\mathrm{m^{-3}}$.
We assume the wall temperature at the desorption location as $T_s \approx 300$--$600$ K.
The width of the curve shows this uncertainty.
Although our result consistently shows a positive $n_e$ dependence with convergence to $\approx$ 600 K as $n_e \rightarrow 0$, a significant discrepancy is found: our results lie on the larger-density side of the solid curve.
This discrepancy may originate from the difference between $n_e^{LCFS}$ and the $n_e$ value at the emission location.
In order to estimate the electron density on the divertor plate $n_e^{div}$, we use the relation
\begin{align}
\label{eq:ne_div}
\frac{n_e^{div}}{10^{19}\,\mathrm{m^{-3}}} \approx
0.08 \times \left[\frac{n_e^{LCFS}}{10^{19}\,\mathrm{m^{-3}}}\right]^{1.5},
\end{align}
which has been suggested in Ref.~\cite{Kobayashi2010-iv}.
The dashed curves in \fref{fig:correlation}~(a) show \eref{eq:empirical} but with this relation, the width of which again indicates $T_s \approx 300$--$600$ K.
Our data points lie between the two curves.
This may be understood to mean that the dominant emission location is between the divertor plate and the LCFS.
This is consistent with the emission-location observations based on Zeeman spectroscopy in~\cite{Fujii2013-ze,Fujii2015-ys}.
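The comparison shown in \fref{fig:correlation}~(a) can be reproduced numerically as follows (a rough sketch of ours; the wall temperature used in the example is an assumed value within the 300--600~K range).
\begin{verbatim}
def T_rot_empirical(n_e, T_s):
    """Empirical rotational temperature [K]; n_e in m^-3, T_s (surface) in K."""
    return 280.0 + T_s + 6e-17 * n_e

def n_e_divertor(n_e_lcfs_1e19):
    """Rough mapping from n_e at the LCFS to n_e at the divertor plate,
    both expressed in units of 1e19 m^-3."""
    return 0.08 * n_e_lcfs_1e19**1.5

# Example with n_e^LCFS = 4e19 m^-3 and an assumed T_s = 450 K:
#   using n_e^LCFS directly:     T_rot ~ 280 + 450 + 2400 ~ 3130 K
#   using the divertor density:  n_e^div ~ 0.64e19 m^-3,
#                                T_rot ~ 280 + 450 +  384 ~ 1110 K
# The observed values lie between these two limits.
\end{verbatim}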
\subsection{The vibrational temperature}
\begin{figure*}[ht]
\includegraphics[width=18cm]{correlation_Tvib}
\caption{%
(a-d) Dependence of $T_{vib}^{lo}$ on some plasma parameters.
The results for $R_{ax}$ = 3.65 m and 3.6 m are shown in the upper and lower panels, respectively.
%
In (a), the empirical relation of $T_{vib}$ with $n_e$ (\eref{eq:empirical}) is shown by a solid curve.
The dashed curve is the same relation, but $n_e$ at the divertor plate, estimated from \eref{eq:ne_div}, is used instead of $n_e^{LCFS}$.
(e) The histogram of $T_{vib}^{lo}$.
The prior distribution optimized by Bayesian inference is also shown by the solid curve. The median and the 68\% range of the prior are shown by a solid and dashed lines, respectively.
See Appendix for the details of Bayesian inference.
}
\label{fig:correlation_Tvib}
\end{figure*}
$T_{vib}^{lo}$ also shows a dependence on $n_e^{LCFS}$ and $T_e^{LCFS}$.
Expanded correlation plots are shown in \fref{fig:correlation_Tvib}.
A similar positive $n_e$-dependence of $T_{vib}^{lo}$ has been reported in \cite{brezinsek_molecular_2002}.
From their plot, we extract the dependence as
\begin{align}
T_{vib} \approx 2400 + 2.6 \times 10^{-16} \; n_e \;\;\mathrm{[K]},
\label{eq:empirical_Tvib}
\end{align}
with $n_e$ again in $\mathrm{m^{-3}}$.
The solid curves in \fref{fig:correlation_Tvib}~(a) represent this relation, and the dashed curves show the same relation with the assumption of \eref{eq:ne_div}.
The dependence of $T_{vib}^{lo}$ is in between the two curves.
This is consistent with the above discussion for $T_{rot}^{lo}$, where the emission location of the molecules is suggested between the divertor plate and the LCFS.
These observations for $T_{rot}^{lo}$ and $T_{vib}^{lo}$ indicate that \eref{eq:empirical} and \eref{eq:empirical_Tvib} hold universally, although the geometry of the original experiments differs from that in this work; Brezinsek et al.\ measured the emission in front of the graphite limiter of TEXTOR~\cite{brezinsek_molecular_2002}, while we measured the emission from the LHD divertor.
This suggests that these parameters are mostly determined by the electron density, and that the effects of the wall and of the electron and ion temperatures are small in this parameter range.
\subsection{The other population parameters}
\begin{figure*}[h]
\includegraphics[width=14cm]{pairplot2}
\caption{%
A pair plot for the population parameters.
The observation results for different magnetic configurations ($R_{ax}$ = 3.60 m and 3.65 m) are shown in different colors.
The relations among these parameters except for $n_{X}\,R\,n_e$ stay on single curves, suggesting a simple mechanism to determine the population distribution.
}
\label{fig:pairplot2}
\end{figure*}
Correlation among the population parameters, $n_{X}\,R\,n_e$, $T_{rot}^{lo}$, $T_{vib}^{lo}$, $T_{rot}^{hi}$, $T_{vib}^{hi}$, and $\alpha$, are shown in \fref{fig:pairplot2}.
$T_{rot}^{lo}$ and $n_{X}\,R\,n_e$ vary by factors of about 5 and 10, respectively.
On the other hand, the variations of $T_{rot}^{hi}$, $T_{vib}^{lo}$, $T_{vib}^{hi}$, and $\alpha$ are smaller.
For example, $T_{rot}^{hi}$ only changes by a factor of $\approx$ 1.5.
Furthermore, the scatter plots among these population parameters except for $n_{X}\,R\,n_e$ show a significant correlation.
For example, the pair plot of $T_{rot}^{lo}$ and $n_{X}\,R\,n_e$ (the top left panel) shows a large scatter, while $T_{rot}^{lo}$-$T_{vib}^{lo}$ (the second panel from the top and the left) is close to a single curve.
This suggests that a simple process determines the entire population distribution.
Several processes have been proposed to determine the $X$-state population of hydrogen molecules, such as electron impact, proton impact, and neutral-particle impact~\cite{sawada_rovibrationally_2016}.
From \fref{fig:pairplot}, we see high correlation between $T_{rot}^{lo}$ and $n_e$.
On the other hand, the correlation between $T_{rot}^{lo}$ and $n_{X}\,R\,n_e$ shows a larger scatter, although the $T_e$-dependence of $R$ is known to be small.
This indicates that the dependence on $n_{X}$ is weaker, suggesting that molecular collisions play only a minor role in shaping the population distribution.
As such, a further study based on our data may help identify the important processes.
\section{Conclusion\label{sec:conclusion}}
We analyzed over a thousand visible spectra of the Fulcher-$\alpha$ band measured for divertor plasmas in LHD.
With the coronal model and Bayesian inference, the population distribution of the electronic ground state was estimated from these spectra.
Nearly linear $n_e$-dependences of $T_{rot}^{lo}$ and $T_{vib}^{lo}$ were found, consistent with previous works.
On the other hand, the higher rotational and vibrational temperatures as well as the mixture coefficient only show slight changes over the broad range of plasma parameters, and these parameters show a significant correlation.
This suggests the importance of electron-impact excitation in determining the non-thermal population distribution of molecular hydrogen in divertor plasmas.
|
{
"arxiv_id": "2302.08624",
"language": "en",
"timestamp": "2023-02-22T02:08:57",
"url": "https://arxiv.org/abs/2302.08624",
"yymm": "2302"
} | \section{Introduction}
Aspect-Based Sentiment Analysis (ABSA) is an important task in understanding fine-grained sentiments in user expressions \cite{Zhang2012SentimentAA}.
As shown in Figure \ref{fig:teaser}, ABSA extracts aspects and classifies the aspect's sentiment polarity by understanding the author's opinions.
Encoder-decoder approaches \cite{jiang-etal-2019-challenge, zhang-qian-2020-convolution} that utilize transformer-based models \cite{He2020DeBERTaDB, radford2019language} have been proposed, but they have limitations such as information loss and ignoring semantic labels \cite{hosseini-asl-etal-2022-generative, seoh-etal-2021-open,kamila-etal-2022-ax,DBLP:journals/corr/abs-2211-07743}.
\begin{figure}[h!]
\centering
\includegraphics[width= 7.6 cm, height= 3.6 cm]{pics/introfig.png}
\caption{Illustration of the three ABSA subtasks where $S_i$ is the $i^{th}$ sentence, $a^i$ is the aspect terms, $sp^i$ is the sentiment polarity and $o^i$ is the opinion term.}
\label{fig:teaser}
\end{figure}
Instruction learning paradigm \cite{mishra-etal-2022-cross, wei2022chain} has significantly improved the reasoning abilities of large language models and has shown impressive results across a variety of tasks \cite{zhang-chai-2021-hierarchical, DBLP:journals/corr/abs-2203-02155, wang2022self,lu2022learn}.
Owing to its previous success, we propose \textsc{I}nstruct\textsc{ABSA}\xspace{}, for aspect-based sentiment analysis (ABSA).
Our approach involves further instruction tuning of the Tk-Instruct model \cite{wang-etal-2022-super} to address all three subtasks of ABSA: ATE, ATSC, and Joint Task.
We add instruction prompts specific to the downstream ABSA subtasks in the form of task definitions, followed by positive, negative, and neutral examples.
The proposed approach for the ATSC subtask is represented in Figure \ref{fig:flowchart}.
We conduct extensive experiments on the SemEval 2014 dataset \cite{pontiki-etal-2014-semeval}, which comprises the laptops and restaurants domain.
Across all three subtasks in both domains, \textsc{I}nstruct\textsc{ABSA}\xspace{} outperforms state-of-the-art language models.
Specifically, for the ATE subtask, \textsc{I}nstruct\textsc{ABSA}\xspace{} obtains F1-score of $92.3\%$ and $92.76\%$ (laptops, restaurants), surpassing SOTA by $4.37\%$ and $7.31\%$ points respectively.
For the ATSC subtask, \textsc{I}nstruct\textsc{ABSA}\xspace{} attains an accuracy of $88.37\%$ in the laptops domain, exceeding the previous results by $2.06\%$ points.
In the restaurants' domain, our approach gets a competitive score of $89.76\%$ compared to SOTA of $90.33\%$.
For the Joint Task, \textsc{I}nstruct\textsc{ABSA}\xspace{} exhibits strong performance and achieves F1-score of $79.34\%$ and $79.47\%$ (laptops, restaurants), outperforming SOTA by $8.63\%$ and $1.4\%$ points, respectively.
We also provide evidence of cross-domain and joint-domain generalizations arising as part of our proposed approach. The extensive analysis further leads to several interesting findings, which are detailed in (Section \ref{sec_results}).
\begin{figure}[t!]
\includegraphics[width=7.6 cm, height= 9.0 cm]{pics/teaserimg.png}
\caption{Formulation of \textsc{I}nstruct\textsc{ABSA}\xspace{} for ATSC task. The input consists of instruction prompt and sentence. The output label is the sentiment polarity for the corresponding aspect. This is done for each ABSA subtasks.}
\label{fig:flowchart}
\end{figure}
\noindent\textbf{Contribution:} This work makes the following contributions:
(a) In this paper, we introduce \textsc{I}nstruct\textsc{ABSA}\xspace{} which achieves remarkable performance gains on all three ABSA subtasks of the SemEval 2014 dataset, surpassing the previous state-of-the-art models.
(b) Despite having only 200M parameters, our model outperforms prior state-of-the-art models with 1.5B parameters.
(c) We also demonstrate the generalization capabilities of our approach by conducting cross and joint-domain evaluations and finding that additional data improves the performance of our model with limited impact on cross-domain performance.
(d) Finally, we provide a comprehensive analysis of the impact of our method on various task categories.
\section{\textsc{I}nstruct\textsc{ABSA}\xspace{}: Instruction Learning for ABSA }
We describe the mathematical formulation of ABSA subtasks and then the proposed approach.
\subsection{Aspect Term Extraction (ATE)}
Given the $i^{th}$ review sentence in the training sample, $S_i = \{w_{i}^1, w_{i}^2, \ldots, w_{i}^n\}$, where $n$ is the number of tokens in the sentence, this subtask aims to extract the set of aspect terms $A_i = \{a_{i}^1, a_{i}^2, \ldots, a_{i}^m\}$, where $m$ is the number of aspect terms in the sentence $S_i$.
The ATE subtask can be formulated as
$A_i = LM_{ATE}(S_i)$.
Here, LM denotes a language model. $S_i$, the $i^{th}$ sentence, is passed as the input to the model during training, and the corresponding aspect terms $A_i$ in the sentence are the output labels.
\subsection{Aspect Term Sentiment Classification (ATSC)}
In this subtask, we extract the sentiment polarities $SP_{i} = \{sp_{i}^1, sp_{i}^2, \ldots, sp_{i}^m\}$, where $sp_i^k \in \{positive, negative, neutral\}$, for the $m$ aspect terms in the review sentence $S_i$.
As the polarity $sp_i^k$ for each aspect term $a_i^k$ is extracted individually, we obtain one training sample per aspect term, i.e., $m$ samples for each sentence $S_i$.
This subtask is mathematically represented as: $sp_i^k = LM_{ATSC}(S_i, a_i^k)$.
Here, $S_i$ and $a_i^k$, the $k^{th}$ aspect term, are passed as input to the model, and the output label is $sp_i^k$, the sentiment polarity for the $k^{th}$ aspect term.
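The per-aspect expansion of the training data can be illustrated with a short sketch (the example sentence below is hypothetical and not taken from the dataset).
\begin{verbatim}
def expand_for_atsc(sentence, aspect_polarity_pairs):
    """Every aspect term of a sentence yields its own (input, label) sample
    for the ATSC subtask."""
    return [((sentence, aspect), polarity)
            for aspect, polarity in aspect_polarity_pairs]

# Hypothetical example: one sentence with two aspect terms gives two samples.
samples = expand_for_atsc(
    "The battery life is great but the screen is dim.",
    [("battery life", "positive"), ("screen", "negative")],
)
\end{verbatim}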
\subsection{Joint Task}
Joint Task is the task of extracting the aspect terms and their corresponding sentiment polarity pairs simultaneously for a given review sentence $S_i$.
This subtask is formulated as:
$[A_i, SP_i] = LM_{Joint}(S_i)$.
During training, the language model takes in the sentence $S_i$ as input and the aspect term - sentiment polarity pair $[A_i, SP_i] = \{(a_i^k, sp_i^k); a_i^k \in A_i, sp_i^k \in SP_i\}$ are the corresponding output labels.
\subsection{Proposed Approach}
By instruction tuning a language model $LM$ on instruction-equipped pretraining data, we obtain an instruction-tuned model $LM_{Inst}$.
This model is further fine-tuned on downstream tasks of ABSA.
The tasks are formulated as follows.
ATE subtask: $A_i = LM_{Inst}(Inst, S_i)$;
ATSC subtask: $sp_i^k = LM_{Inst}(Inst, S_i, a_i^k)$;
Joint Task: $[A_i, SP_i] = LM_{Inst}(Inst, S_i)$.
The instructions prompt $Inst$ comprise the task definition, followed by a combination of positive, negative, and neutral examples, which are described in detail in Appendix \ref{sec:prompt_examples}.
\section{Experimental Setup}
This section describes the experimental setup for our proposed IntructABSA approach.
We use the T$k$-Instruct-base-def-pos as the instruction-tuned model $LM_{Inst}$.
We use two configurations of instructions as prompts for our experiments.
\textsc{I}nstruct\textsc{ABSA}\xspace{}-1 has the instruction prompt that includes the definition of the ABSA subtasks followed by 2 positive examples for the respective task.
\textsc{I}nstruct\textsc{ABSA}\xspace{}-2 has the definition followed by 2 positive, negative, and neutral examples.
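A minimal inference sketch with this checkpoint is shown below; the exact wording of the definition and in-context example is an assumption on our part (the actual prompts are listed in Appendix \ref{sec:prompt_examples}), and the sentences are hypothetical.
\begin{verbatim}
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL = "allenai/tk-instruct-base-def-pos"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

def build_atsc_prompt(sentence, aspect):
    # Illustrative wording only; see the Appendix for the prompts we used.
    definition = ("Definition: Output 'positive', 'negative', or 'neutral' "
                  "depending on the sentiment of the sentence towards the aspect.")
    example = ("Positive example 1- input: The screen is bright and crisp. "
               "The aspect is screen. output: positive")
    query = (f"Now complete the following example- input: {sentence} "
             f"The aspect is {aspect}. output:")
    return " ".join([definition, example, query])

inputs = tokenizer(build_atsc_prompt("The keyboard is far too noisy.", "keyboard"),
                   return_tensors="pt")
prediction = tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0],
                              skip_special_tokens=True)
\end{verbatim}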
\paragraph{Dataset:} SemEval 2014 Task-4 dataset from \citet{pontiki-etal-2014-semeval}, is used for our experimentation.
The dataset is used as a benchmark for ABSA tasks and has customer reviews from two domains; laptops (Lapt14) and restaurants (Rest14).
More details can be found in Appendix \ref{sec:dataset}.
\paragraph{Cross Domain \& Joint Domain Experiments:} We conduct two additional experiments for analysis.
Firstly, we perform cross-domain experiments; we train the model on the Lapt14 and test it on the Rest14 dataset and vice versa.
This experiment is performed to check the domain generalizability of the model.
Secondly, we perform a joint domain experiment where we combine Lapt14 and Rest14 datasets while training. To evaluate the performance of this approach, it is tested on both datasets individually.
This analysis is done to check how additional data from a different domain helps each ABSA subtask.
All the experiments are performed across both instruction prompt configurations, and all three subtasks on both dataset domains.
\paragraph{Evaluation Metric:} We use the F1-score for ATE and joint task and the accuracy metric for ATSC subtask, following previous approaches \cite{zhang-etal-2021-towards-generative, luo-etal-2020-grace}.
\paragraph{Hyperparameters:} Model: T$k$-Instruct-base-def-pos \footnote{\url{https://huggingface.co/allenai/tk-instruct-base-def-pos}} ,
GPU: 1xNvidia Tesla P40,
Train Batch size: ATE and ATSC: 16, Joint Task: 8 (The batch size was reduced in Joint task to accommodate the batch in GPU),
Gradient Accumulation Steps: 2,
Initial learning rate: 3e-4,
Number of Epochs: 4.
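Mapped onto the Hugging Face training API, these settings correspond roughly to the following configuration sketch (ours, not the released training script); the dataset preparation is omitted and \texttt{train\_dataset} is a placeholder.
\begin{verbatim}
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

MODEL = "allenai/tk-instruct-base-def-pos"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

args = Seq2SeqTrainingArguments(
    output_dir="instructabsa",
    per_device_train_batch_size=16,   # 8 for the Joint Task
    gradient_accumulation_steps=2,
    learning_rate=3e-4,
    num_train_epochs=4,
)

# `train_dataset` (tokenized prompt/label pairs) must be prepared beforehand.
trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=train_dataset,
                         data_collator=DataCollatorForSeq2Seq(tokenizer, model=model))
trainer.train()
\end{verbatim}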
\section{Results and Analysis}
\label{sec_results}
\subsection{Sub Task Results}
Tables \ref{tab:ate_result}, \ref{tab:atsc_result}, and \ref{tab:joint_result} denote the results in ATE, ATSC, and joint task, respectively.
All the results reported are the average values from 5 runs for each experiment.
Both \textsc{I}nstruct\textsc{ABSA}\xspace{}-1 and 2 exhibit strong performance across all three subtasks.
The subscript values next to our metrics in the table denote the absolute percentage-point change from the previous SOTA approach
($\uparrow$ = increase, $\downarrow$ = decrease).
\input{tables/ATERes.tex}
For \textbf{ATE} subtask (Table \ref{tab:ate_result}), both \textsc{I}nstruct\textsc{ABSA}\xspace{}-1 and 2 surpass the previous best scores.
The best F1-score obtained for the Lapt14 dataset was by \textsc{I}nstruct\textsc{ABSA}\xspace{}-2; 92.30, and for the Rest14 was 92.76 by \textsc{I}nstruct\textsc{ABSA}\xspace{}-1, respectively.
It is to be noted that \textsc{I}nstruct\textsc{ABSA}\xspace{} uses a model with 200M parameters whereas \citet{hosseini-asl-etal-2022-generative} uses a model with 1.5B parameters.
Our model surpasses SOTA by 4.37 and 7.31 (absolute percentage points) for Lapt14 and Rest14, respectively.
\input{tables/ATSCRes.tex}
For the \textbf{ATSC} subtask (Table \ref{tab:atsc_result}), \textsc{I}nstruct\textsc{ABSA}\xspace{}-1 surpasses SOTA by 2.06\% points in the Lapt14 dataset (88.37 vs. 86.31 obtained in LSAT-XdeBERTa).
For the Rest14 dataset, \textsc{I}nstruct\textsc{ABSA}\xspace{}-2 gets a competitive accuracy of 89.76 compared to the best score of 90.86 by LSAT-XdeBERTa (355M parameters).
\input{tables/JointTaskRes.tex}
For the \textbf{Joint Task} in the Lapt14 dataset (Table \ref{tab:joint_result}), both \textsc{I}nstruct\textsc{ABSA}\xspace{}-1 and 2 surpass SOTA by a significant margin (78.89 and 79.34 as compared to 70.71 obtained in GRACE).
For the Rest14 Dataset, the second model surpasses the competitive score to get an F1 of 79.47 compared to 78.07.
All other approaches performing the joint task, such as GPT2$_{med}$ (1.5B), SPAN (345M), and LSA (355M), are significantly larger than \textsc{I}nstruct\textsc{ABSA}\xspace{}.
Detailed results for all three subtasks containing precision and recall are present in Appendix \ref{sec:additional_results}.
\subsection{Analysis}
In this subsection, we analyze how \textsc{I}nstruct\textsc{ABSA}\xspace{} performs in non-conventional training approaches.
\input{tables/crossdomainRes.tex}
\paragraph{Cross Domain Evaluation:}
In this experiment, we evaluated the performance of two models in a cross-domain setting, where the models were trained on a train set from one domain and tested on a test set from another domain.
The evaluation was performed on all three subtasks for both instruction-tuned models (\textsc{I}nstruct\textsc{ABSA}\xspace{}-1 \& 2).
Table \ref{tab:cross_domain} presents the results of the experiment.
The F1 scores for both models were close when trained on Rest14 and tested on Lapt14 for ATE and Joint Task, with values of 71.98 and 65.30, respectively.
The first model performed better than the second in the ATSC task (87.20 vs. 82.75) but still showed lower accuracy compared to the results obtained using the same domain train and test sets (highest: 88.37).
When trained on Lapt14 and tested on Rest14, Model 1 showed a drop in F1-score for the ATE and Joint Task compared to Model 2 (62.85 and 55.06 vs. 76.85 and 62.95, respectively).
In the ATSC task, the accuracy of both models did not experience a significant drop, with similar performance (85.91 vs. 83.05).
\input{tables/cross_domain_joint.tex}
\paragraph{Joint Domain Evaluation:}
In this setting, the train data of both domains are combined to train the model, and the model is evaluated on both test sets.
The experiment is performed on all three subtasks and for both instruction-tuned models, and the results are presented in Table \ref{tab:cross_domain_joint}.
The availability of additional training data for the ATE subtask helps the language model, as the proposed model surpasses the previously achieved SOTA.
For the Lapt14 and Rest14 domains, the F1-scores of 93.28 and 93.55 surpass the SOTA achieved by \textsc{I}nstruct\textsc{ABSA}\xspace{} when modeled as individual ATE subtasks (92.3 and 92.76).
A similar trend is observed in the Joint Task as well. The F1 scores obtained on the Lapt14 and Rest14 data via Joint Domain training (80.47 and 80.81) surpass the individual Joint Task scores (79.34 and 79.47).
However, this trend is not evident in the ATSC subtask for Lapt14 and Rest14 datasets, as competitive accuracies are obtained (88.47 and 88.62) compared to individual ATSC subtask training (88.37 and 89.76).
\section{Conclusion}
In this work, we proposed \textsc{I}nstruct\textsc{ABSA}\xspace{}, an instruction-tuned modeling approach for all subtasks of ABSA.
Our findings show that \textsc{I}nstruct\textsc{ABSA}\xspace{} surpassed the previous scores using a significantly smaller model than previous approaches.
We further analyzed the performance of the approach in cross-domain and joint-domain settings, revealing several interesting findings.
Finally, we release our code and hope that our work will encourage further research in this direction.
\section*{Limitations}
Our study is limited to the SemEval 2014 dataset, a widely used dataset in recent works. Future studies should extend this work to other ABSA datasets to test the generalizability of our findings.
We conducted our experiments using a 200M model, which may limit the applicability of our findings to smaller models. Future studies could consider using even smaller instruction-tuned models to analyze their performance.
Our study was conducted using Tk-Instruct models for the English language. As a result, our findings may not be directly applicable to other languages. Future studies should include a multilingual dataset and a multilingual instruction-tuned model to investigate the model's performance across different languages.
|
{
"arxiv_id": "2302.08635",
"language": "en",
"timestamp": "2023-02-20T02:04:23",
"url": "https://arxiv.org/abs/2302.08635",
"yymm": "2302"
} | \section{Introduction}
Human Trajectory Prediction (HTP) is a valuable task in many applications such as infrastructure design, traffic operations, crowd abnormality detection systems, evacuation situation analysis, and autonomous vehicles \cite{motionsurvey}. It is also more challenging compared to other types of agent trajectory prediction, such as vehicles, due to the complex behaviour of humans in different environments.
Most of the proposed models in the literature of motion forecasting \cite{SocialGAN, Social-BiGAT, PECNet, DESIRE, STGAT, STGCNN, Trajectron++} rely on statistical inference, which has two important shortcomings. First, the learnt representations are not explainable and, therefore, are ineffective to transfer into a new environment.
Secondly, statistical inference is susceptible to learn spurious correlations, which can significantly degrade the performance of the models in presence of a domain shift or measurement noise \cite{causalmotion}. In this work we aim to tackle these shortcomings from a causality perspective.
Causal representation learning has attracted much attention in various applications \cite{csg, CONTA, causal_survey, causalmotion}. Causal inference eliminates the confounding bias in predictions \cite{causaljudea}, which is a common phenomenon in motion forecasting since inputs and targets come from the same interdependent time series.
Furthermore, models based on causal inference learn meaningful features that do not rely on spurious correlations. The coexistence of target trajectories and observation noise, is an example of spurious correlation in motion forecasting \cite{causalmotion}. Causality can also identify the generative factors in the dataset \cite{DisentangleR}. Physical laws, social norms, and motion styles are examples of generative factors in HTP. Under covariate shifts, a model that learns the causal structure of the data would require many fewer samples to adapt to the new environment because most of the modules can be reused without further training. This expectation is consistent with the Sparse Mechanism Shift hypothesis in causal representation learning \cite{causal_survey}.
To this end, we propose Generative Causal Representation Learning (GCRL), which leverages causality to increase the identifiability and robustness of current motion forecasting models. While our target application is motion forecasting, our proposed solution can be applied to domain adaptation in other applications. Oftentimes, the training dataset is collected from different locations, as in the ETH-UCY dataset \cite{eth-ucy1, eth-ucy2}. Considering this, we first introduce a causal model that includes invariant features, which are common across all types of environments, and variant features, which are environment-specific. Our causal model includes selection variables \cite{Transportability} to directly transfer some parts of the model to a new domain without fine-tuning. Second, we propose a new learning paradigm that can learn invariant and variant features simultaneously and eliminate the confounding bias using a backdoor adjustment
\cite{causaljudea}. Previous work \cite{causalmotion} uses Invariant Risk Minimization (IRM) \cite{irm}, which requires solving a very complex minimization problem.
Third, our proposed method uses a generative process to infer latent representations, which can capture multi-modal trajectory distributions, without adding any noise to the representations as proposed in \cite{STGAT}.
\textbf{Contributions} Our key contributions can be summarized as follows:
\vspace{-1em}
\paragraph{(1)} A learning paradigm that enables end-to-end training and eliminates confounding bias.
\vspace{-0.8em}
\paragraph{(2)} An augmented, causally factorized model that enables direct transportability and reusability of some parts of the model.
\vspace{-0.8em}
\paragraph{(3)} A generative process for producing latent representations that tackles the multi-modality of trajectories.
\section{Related Work}
\vspace{-0.6em}
\subsection{Causal Representation Learning}
\vspace{-0.6em}
As pointed out in \cite{prop}, a transfer assumption is required in domain adaptation. Most of the methods in computer vision applications have parametric distribution families; hence, it is assumed that domain shifts affect the parameters of the distributions. Similar to our approach, INFER \cite{infer} uses an augmented version of Directed Acyclic Graphs (DAGs) to represent the causal structure of data. Their method assumes that the targets are not available in the test domain and performs unsupervised domain adaptation (DA) using a conditional GAN. CAE \cite{CAE} proposes an end-to-end framework to jointly learn a model for causal structure learning and a deep autoencoder model. Markov Blanket features \cite{markovblanket} are then extracted from the learnt causal model to be used in the test domain. CSG \cite{csg} leverages the Independent Causal Mechanisms (ICM) principle \cite{causal_survey} to learn the causal model for the image classification task. In CSG, covariate shift is assumed to manifest itself in the priors of the latent variables in the form of spurious correlations.
CONTA \cite{CONTA} tackles the confounding bias in causal models by using the backdoor criterion \cite{causaljudea} for the image segmentation task. To obtain invariant conditional distributions applicable in the test domain, \cite{InvariantCD} uses selection diagrams. CGNN \cite{CGNN}, on the other hand, applies maximum mean discrepancy between the distribution generated by a set of generative models and the true distribution of the data to approximate the causal model of the data. DAG-GNN \cite{DAG-GNN} uses a variational autoencoder to learn the adjacency matrix of the DAG that is supposed to represent the causal model of the data. In \cite{unknownI}, the causal structure is learnt from both observational data and unknown interventions. In \cite{DisentangleR}, causal disentanglement is studied and a set of metrics are introduced to measure the robustness of the learnt features.
Most of these methods attempt to learn the causal model of the data; however, in motion forecasting we can hypothesize a causal model based on domain knowledge. Furthermore, identifying the true causal model of the data is extremely challenging, and impossible in an unsupervised setting with observational data \cite{impossibledisentangle}. Hence, the previous methods have not investigated causal discovery in real-world time-series data, which poses its own challenges.
\vspace{-0.6em}
\subsection{Motion Forecasting}
\vspace{-0.6em}
Most deep-learning-based motion forecasting models consist of an Encoder, an Interaction module, and a Decoder \cite{motionsurvey}, which we will refer to as the EID architecture. STGAT \cite{STGAT} uses a graph attention network to model the human-human interactions and LSTM cells to encode past trajectories and to predict future trajectories. On the other hand, STGCNN \cite{STGCNN} consists of CNN layers only, which significantly improves inference speed while performing on par with its RNN counterparts in terms of prediction error.
Social-GAN \cite{SocialGAN} focuses on the multi-modality of human trajectories and uses GANs to generate multiple future trajectories per sample. Social-BiGAT \cite{Social-BiGAT} models both social and physical interactions using images of the environment and trajectories of pedestrians. DESIRE \cite{DESIRE} proposes a conditional variational autoencoder (CVAE) to tackle the multi-modality of human trajectories. Trajectron++ \cite{Trajectron++} focuses on the dynamic constraints of agents when predicting trajectories and models not only the interactions between humans but also interactions of humans with other agents in the scene. PECNet \cite{PECNet} uses a CVAE to obtain a distribution for the endpoints of the pedestrians, which are employed to predict the future trajectories conditioned on the endpoints.
According to recent works \cite{causalmotion, counterfactualmotion}, there are varying types of noise and intrinsic biases in the data resulting from human behaviour in different environments. The performance of current motion forecasting models is negatively affected when they are transferred to a new environment or in the presence of noise \cite{causalmotion}. Furthermore, there are some confounding factors in motion forecasting applications, which can lead to biased predictions \cite{counterfactualmotion}. Our work focuses on the robustness aspect of the motion forecasting problem. We propose a method applicable to motion forecasting models with an EID architecture, which will be discussed in the following sections.
\vspace{-0.8em}
\section{Proposed Method}\label{sec:proposed_method}
\vspace{-0.6em}
\subsection{Formalism of Motion Forecasting}
\vspace{-0.6em}
Consider a motion forecasting problem in a multi-agent environment with $M$ agents. Let us denote the states of the agents as $S_t=\{s_t^1, s_t^2, ... , s_t^M\}$ where $s_t^i=(x_t^i,y_t^i)$ are the 2D coordinates of agent $i$ at time $t$. The objective is to predict the states of agents $T_{pred}$ time steps into the future from observations of the previous $T_{obs}$ time steps. The model takes as input $X^i=\{s_1^i, s_2^i, ... , s_{T_{obs}}^i\}$ and predicts $Y^i=\{s_{T_{obs}+1}^i, s_{T_{obs}+2}^i, ... , s_{T_{obs}+T_{pred}}^i\}$ for every agent $i$.
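As a minimal illustration of this formalism, the sketch below lays out the observed and future states as arrays; the agent count, horizon lengths, and the use of NumPy are our own illustrative choices, not prescribed by the method.
\begin{verbatim}
import numpy as np

# Illustrative sizes: M agents, T_obs observed steps, T_pred future steps.
M, T_obs, T_pred = 4, 8, 12

# Past trajectories: X[i, t] holds the (x, y) coordinates of agent i at
# observed step t (i.e., s_{t+1}^i for t = 0, ..., T_obs - 1).
X = np.zeros((M, T_obs, 2))

# Future trajectories to be predicted: Y[i, t] holds s_{T_obs + t + 1}^i.
Y = np.zeros((M, T_pred, 2))
\end{verbatim}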
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\textwidth]{causal.png}}
\caption{The proposed causal model (center). Filled circles are observed variables and empty shapes are the unobserved variables. $X$ and $Y$ represent past trajectories and future trajectories to be predicted, respectively. $Z$ represents invariant features common across domains, such as physical laws, while $S$ represents variant features specific to each environment, such as motion styles. Finally, $E$ is the selection variable, conditioning on $E$ allows us to switch between environments.}
\label{fig:causalmodel}
\end{center}
\vskip -0.2in
\end{figure}
Deep learning based motion forecasting models are often composed of three modules; an encoder, an interaction module and a decoder \cite{motionsurvey}.
The encoder takes previously observed states $X^i$ and produces representations of the observed states. The interaction module receives the representations from the encoder to capture the social interactions (human-human), the physical interactions (human-space), or both. Eventually, the decoder takes as input, the interaction vectors and predicts the future states of the agent $Y^i$.
It is worth mentioning that any type of neural network that can handle sequence-to-sequence modeling, such as Recurrent Neural Networks (RNNs) or temporal Convolutional Neural Networks (CNNs), could be employed as the encoder and the decoder of the model.
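As an illustration of the EID pattern, the following sketch composes an encoder, an interaction module, and a decoder in PyTorch. The layer choices (LSTM encoder/decoder, attention over agents) and the sizes are our own assumptions for illustration and do not correspond to the specific architectures of the models cited above.
\begin{verbatim}
import torch
import torch.nn as nn

class EIDForecaster(nn.Module):
    """Minimal sketch of an encoder-interaction-decoder (EID) forecaster."""
    def __init__(self, hidden=64, t_pred=12):
        super().__init__()
        self.t_pred = t_pred
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.interaction = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.decoder = nn.LSTM(input_size=hidden, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, past):
        # past: (num_agents, T_obs, 2) observed coordinates of all agents in a scene
        enc_out, _ = self.encoder(past)            # per-agent temporal encoding
        h = enc_out[:, -1, :].unsqueeze(0)         # (1, num_agents, hidden)
        # attention across agents models human-human interactions
        social, _ = self.interaction(h, h, h)      # (1, num_agents, hidden)
        social = social.squeeze(0).unsqueeze(1)    # (num_agents, 1, hidden)
        dec_in = social.repeat(1, self.t_pred, 1)  # roll the decoder out for T_pred steps
        dec_out, _ = self.decoder(dec_in)
        return self.head(dec_out)                  # (num_agents, T_pred, 2) predicted states
\end{verbatim}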
Training data is usually collected from a set of $K$ environments or domains, $E=\{e_1, e_2, ..., e_k\}$. An example set of environments could be `Pedestrians crossing an intersection', `People standing in front of a department store', or `Pedestrians walking along the sidewalk'. In real-world datasets, it is common for the training and test environments to differ significantly \cite{counterfactualmotion}, such as in the widely-used ETH-UCY benchmark. Therefore, the i.i.d assumption does not always hold in practice and Empirical Risk Minimization (ERM) techniques cannot be used to train the neural networks. In the following section, we briefly present some concepts of causality applied in our proposed method.
\vspace{-0.6em}
\subsection{Background in Causality}
\vspace{-0.6em}
Structural Causal Models (SCMs) are a way of describing causal features and their interactions, which are represented by Directed Acyclic Graphs (DAG) \cite{causaljudea}. We say that $X$ is a direct cause of $Y$ when there is a directed edge from $X$ to $Y$ in the DAG. The cause and effect relation $X \rightarrow Y$ tells us that changing the value of $X$ can result in a change in the value of $Y$, but that the reverse is not true. A causal model receives as inputs a set of qualitative causal assumptions (A), a set of queries concerning the causal relations among variables (Q), and experimental or non-experimental data (D), presumably consistent with (A). A causal model makes predictions about the behavior of a system. The outputs of a causal model are then a set of logical implications of (A), data-dependent claims (C) represented by the magnitude or likelihoods of the queries (Q), and a list of testable statistical implications (T) \cite{Transportability}.
A major drawback of deep learning models using statistical inference is that they do not consider the confounding bias. Normally, we would predict $p(Y|X)$, however, in the presence of confounding variables, $p(Y|X)$ cannot capture the true causal effect of $X$ on $Y$ and the predictions will be erroneous. Therefore, we are interested in the causal query $p(Y|do(X))$ which eliminates the confounding bias by surgically intervening on $X$. This causal query can be calculated using the Backdoor Criterion \cite{causaljudea}. Our set of causal assumptions are causal sufficiency and faithfulness \cite{markovblanket}.
Since we do not assume i.i.d samples, we use an augmented version of causal models called Selection Diagrams \cite{Transportability} to discriminate between samples of different domains. Selection variables $S$ are introduced to model the variations in each domain. Every member of $S$ corresponds to a mechanism by which the two populations differ. Therefore, we can condition on the values of $S$ for switching between domains \cite{Transportability}.
\vspace{-0.6em}
\subsection{Causal Formalism}
\vspace{-0.6em}
Our proposed SCM for motion forecasting is shown in Figure \ref{fig:causalmodel}. There are two causal variables in motion forecasting that can affect the trajectories of the pedestrians: invariant features and variant features \cite{causalmotion}. \textit{Invariant features} do not vary across domains but can influence the trajectories of the pedestrians. These features can be associated with physical laws, traffic laws, social norms, etc.
In contrast, \textit{variant features} vary across domains and can be associated with the motion styles of the pedestrians in an environment \cite{causalmotion}.
In our proposed causal model we consider four endogenous variables: $S$ to represent variant features, $Z$ for invariant features, $X$ for past trajectories, and $Y$ for future trajectories.
We also introduce an additional exogenous variable $E$ as the selection variable \cite{Transportability} to account for the changing factors in each environment. The selection variable acts as an identifier of an environment such that the conditional probability of $p(X,Y|E=e_1)$ represents the distribution of the samples in an environment having the selection variable of $e_1$. In other words, we assume that all members of the dataset are sampled from a parent distribution over $X$, $Y$, and $E$. Furthermore, we assume that the proposed model is \textit{causally sufficient}. That is, it explains all the dependencies without adding further causal variables. To build a causal graph, we reason about causal edges connecting the causal variables:\\
\textbf{(1)} There should be an edge from $S$ to $X$ and $Y$ because motion styles can influence the speed of the pedestrians.\\
\textbf{(2)} There should be an edge from $Z$ to $X$ and $Y$ because social norms can influence how closely pedestrians move next to each other.\\
\textbf{(3)} There should be an edge from $X$ to $Y$ because the location in the past determines where the pedestrian is going to be in the future. \\
\textbf{(4)} $S$ varies in each domain, hence, there should be an edge from selection variable $E$ to $S$ to account for all the changing factors in each domain.\\
\vspace{-0.9em}
\subsection{Learning Latent Variables with Variational Inference}
\vspace{-0.6em}
According to Figure \ref{fig:causalmodel}, $S$ and $Z$ confound the causal effect of $X$ on $Y$ and the backdoor paths are $Y \leftarrow S \rightarrow X$ and $Y \leftarrow Z \rightarrow X$. Therefore, we need to use the backdoor criterion to eliminate the confounding effect. To this end, the causal effect of $X$ on $Y$ (i.e., $p(Y|do(X))$) is calculated. Before calculating this causal query, we need to recall the \textit{S-admissibility} criterion:
\begin{definition}
\label{sadmissible}
``A set $T$ of variables satisfying $(Y \independent S|T, X)$ in $D_{\Bar{X}}$ will be called \textit{S-admissible} (with respect to the causal effect of $X$ on $Y$).''\cite{Transportability}
\end{definition}
where $D_{\Bar{X}}$ denotes an augmented SCM in which we intervene on $X$ (i.e., the arrows entering $X$ are deleted). According to this definition, in Figure \ref{fig:causalmodel}, the set $\{S\}$ is an E-admissible set, which means that once we know $S$, the predictions ($Y$) will not depend on the environment ($E$) we are in.
We can now proceed to calculate $p(Y|do(X))$:
\begin{equation}
\label{eq:backdoor}
\begin{split}
&p(Y|do(X), E=e_i) \\
&=\int p(Y|X,S,Z,E)p(S|E)p(Z|E) ds dz \\
&=\int p(Y|X,S,Z)p(S|E)p(Z) ds dz \\
&= E_{p(S|E), p(Z)}[p(Y|X,S,Z)],
\end{split}
\end{equation}
where $p(Y|do(X), E=e_i)$ is the causal effect of $X$ on $Y$ in environment $i$. The first line follows from the backdoor criterion, and the second line follows from the fact that $S$ is E-admissible \cite{Transportability} and $X$ is a collider. Equation \ref{eq:backdoor} implies that in order to calculate the causal effect of $X$ on $Y$ in every environment, we need to stratify over $S$ and $Z$. Although we do not know the true posteriors of the latent variables, we will approximate them using mean-field variational inference \cite{Goodfellowdeeplearning}.
The standard log-likelihood to train the model is given by:
\begin{equation}
\label{eq:mll}
\begin{split}
\underset{p}{max} \: E_{p^*(x,y)}[\log p(x,y)]
\end{split}
\end{equation}
where $p^*(x,y)$ is the distribution of samples in the dataset. Calculating $p(x,y)$ is intractable since $p(x,y)=\sum_{e \in E}\int p(x,y,s,z,e)\,ds\,dz$ where $E$ is the set of environments. Mean-field variational inference is used to approximate the true posterior distribution of latent variables.
For this purpose, the \textit{Evidence Lower Bound (ELBO)} function is used to train the model:
\begin{equation}
\label{eq4}
\underset{p, q}{max} \: E_{p^*(x,y)}\left[E_{q(s,z, e|x,y)}\left[\log \frac{p(x,y,s,z,e)}{q(s,z,e|x,y)}\right]\right]
\end{equation}
Theoretically, the ELBO function will drive $q(s,z,e|x,y)$ towards its target $p(s,z,e|x,y)$, and the objective function in Equation \ref{eq4} will become Equation \ref{eq:mll}. The approximate model is still intractable because we do not have access to the future trajectories in the test domain. Therefore, we replace it with $q(s,z,e,y|x)$ and the loss function becomes:
\begin{equation}
\label{eq:twoterms}
\begin{split}
&\underset{p, q}{max} \: E_{p^*(x)}E_{p^*(y|x)}[\log q(y|x)] + \\
&E_{p^*(x)}\Biggl[E_{q(s,z,e,y|x)}\left[\frac{p^*(y|x)}{q(y|x)}\log \frac{p(x,y,s,z,e)}{q(s,z,y,e|x)}\right]\Biggr]
\end{split}
\end{equation}
Assuming that $q(y|x)$ is Gaussian, the first term in the loss function of Equation \ref{eq:twoterms} would be the negative of Mean Squared Error (MSE). Eventually, this term will drive $q(y|x)$
towards $p^*(y|x)$ and the second term will become a lower bound of $\log p(x)$ as stated in Theorem \ref{thm:lowerbound}, which we prove below.
\begin{theorem}
\label{thm:lowerbound}
Let $p(x,y,s,z,e)$ be the joint distribution of latent and observed variables and $q(s,z,y,e|x)$ be the approximate posterior of latent variables and future trajectories given the past trajectories in GCRL, a lower bound on the log-likelihood of past trajectories is:
\begin{equation*}
E_{q(s,z,e,y|x)}\left[\log \frac{p(x,y,s,z,e)}{q(s,z,y,e|x)}\right] \leq \log p(x)
\end{equation*}
\end{theorem}
\begin{proof}
According to the Causal Markov condition \cite{causal_survey}, we can factorize $q(s,z,e,y|x)=q(y|x,s,z)q(z|x)q(s|x,e)q(e|x)$ and $p(x,s,z,y,e)=p(y|x,s,z)p(x|s,z)p(s|e)p(z)p(e)$. The first approximate model, $q(y|x,s,z)$, can be replaced with $p(y|x,s,z)$ since it is known. Secondly, since $q(s|x,e)$ is an approximation of its true posterior, we assume it to be $q(s|x)$. Therefore:
\begin{equation}
\label{eq:elbof}
\begin{split}
&E_{q(s,z,y,e|x)}\left[\log \frac{p(x,y,s,z,e)}{q(s,z,y,e|x)}\right] = \\
&E_{q(s,z,e|x)}\left[\log \frac{p(x|s,z)p(s|e)p(z)p(e)}{q(z|x)q(s|x)q(e|x)}\right] = \\
&E_{q(s|x), q(z|x)}\left[\log p(x|s,z)\right] - \\
&KL(q(z|x)||p(z)) - KL(q(s|x)||p(s)) - \\
&KL(q(e|x)||p(e|s)) \leq \log p(x)
\end{split}
\end{equation}
The detailed proof is given in Appendix \ref{appA}.
\end{proof}
As shown in Equation \ref{eq:elbof}, $q(e|x)$ will be driven towards $p(e|s)$, hence, it is sensible to model $q(e|x)$ with $p(e|s)$. By leveraging the Causal Markov condition and replacing $q(e|x)$ with $p(e|s)$, we can obtain a lighter inference model (q) and the loss function in Equation \ref{eq:twoterms} becomes:
\begin{equation}
\label{eq:finalloss}
\begin{split}
\underset{p, q}{max} &~ E_{p^*(x,y)}\Biggl[\log q(y|x) + \frac{1}{q(y|x)}\\
&\quad E_{q(s|x),q(z|x)}\Bigl[p(y|x,s,z)
\log \frac{p(x|s,z)p(s)p(z)}{q(s|x)q(z|x)}\Bigr]\Biggr] ,
\end{split}
\end{equation}
where $p(s) = \sum_{e \in E}p(s|e)p(e)$ which means that $S$ has a Gaussian mixture prior and $q(y|x)=E_{q(s|x), q(z|x)}[p(y|x,s,z)]=p(y|do(x))$ which can be calculated by ancestral sampling. The expectations in Equation \ref{eq:finalloss} can be estimated using the Monte-Carlo method after applying the re-parametrization trick \cite{repar}. Consequently, GCRL learns: \textbf{(1)} To minimize the distance between groundtruth future trajectories and predicted future trajectories via maximizing $\log q(y|x)$, \textbf{(2)} To eliminate the confounding effect by estimating the causal effect of $X$ on $Y$ via $p(y|do(x))$, \textbf{(3)} To reconstruct past trajectories via maximizing $\log p(x|s,z)$, \textbf{(4)} Invariant representations via maximizing $\log \frac{p(z)}{q(z|x)}$, \textbf{(5)} Variant representations via maximizing $\log \frac{p(s)}{q(s|x)}$. Furthermore, since GCRL learns to predict the future trajectories with a generative approach, it can tackle the multi-modality of trajectories.
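To make the ancestral-sampling step above concrete, the sketch below shows one way to draw reparameterized samples from Gaussian approximate posteriors and average the decoder's predictions to estimate $q(y|x)=p(y|do(x))$. The module interface (a \texttt{decoder} taking $(x, s, z)$) and the Gaussian parameterization are our own illustrative assumptions.
\begin{verbatim}
import torch

def reparameterized_sample(mu, log_var):
    """Draw one sample from N(mu, sigma^2) differentiably (re-parametrization trick)."""
    std = torch.exp(0.5 * log_var)
    return mu + std * torch.randn_like(std)

def estimate_p_y_do_x(decoder, x, mu_s, log_var_s, mu_z, log_var_z, n_samples=20):
    """Monte-Carlo estimate of q(y|x) = E_{q(s|x), q(z|x)}[p(y|x, s, z)] = p(y|do(x))."""
    preds = []
    for _ in range(n_samples):
        s = reparameterized_sample(mu_s, log_var_s)   # variant features ~ q(s|x)
        z = reparameterized_sample(mu_z, log_var_z)   # invariant features ~ q(z|x)
        preds.append(decoder(x, s, z))                # prediction under p(y|x, s, z)
    return torch.stack(preds).mean(dim=0)             # average over latent samples
\end{verbatim}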
\begin{figure*}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\textwidth]{model.png}}
\caption{A general overview of the proposed method. The approximate posteriors of the latents are estimated using the encoded past trajectories and the priors of the latents are calculated using the coupling layers. The encoder, interaction module, and decoder of the model can be replaced by motion forecasting models that use an EID architecture.}
\label{fig:model}
\end{center}
\vskip -0.2in
\end{figure*}
Gaussian Mixture priors have been previously used in popular VAE models \cite{VaDE, SVAE}. Since variational models are susceptible to poor local minima and posterior collapse, it is important to show the identifiability of latent variables \cite{vae-identifiability}. Variational models with GMM priors are proven to be identifiable \cite{vae-identifiability}; hence, using a Gaussian Mixture prior for $S$ aligns with the theory as well. Furthermore, Gaussian Mixture Models (GMMs) are universal approximators; hence, $q(s|x)$ will be capable of producing arbitrary variant features. To obtain a better likelihood for latent variables, we use coupling layers \cite{rnvp} to learn rich priors $p(s|e_i)$ and $p(z)$ from simple priors such as standard Normal distributions. These priors are referred to as \textit{flow priors} and are also used in VAEs \cite{learning_prior}.
A general overview of our model is shown in Figure \ref{fig:model}. The encoded past trajectories are used to model the approximate posteriors of the latents. We sample from these posteriors to reconstruct the past trajectories and predict the future trajectories. As shown in Figure \ref{fig:model}, any motion forecasting model with an EID architecture can be used with our method provided that the decoder is a piece-wise affine injective function as this is required to obtain the weakest form of identifiability (i.e., identifiability up to an affine transformation \cite{vae-identifiability}).
\vspace{-0.6em}
\subsection{Domain Adaptation Method}
\vspace{-0.6em}
After learning the causal mechanisms and causal variables using Equation \ref{eq:finalloss}, we know that $q(z|x)$ will generate representations with a single Gaussian distribution and $q(s|x)$ will generate representations with a Gaussian Mixture Model (GMM) distribution. Therefore, as illustrated in Figure \ref{fig:dist}, all representations generated by $q(z|x)$ will be in the same range, whereas the representations of $q(s|x)$ will form clusters, each modeled by a component of the GMM. Since $Z$ is invariant, we can directly transfer it to the new domain without any fine-tuning. However, $S$ can be interpreted as a weighted sum of the representations learnt from different environments of the training domains, which may be used in the test domains as well. Depending on how related the test domains are to the training domains, we may need to fine-tune the components of the GMM and obtain a new prior for $S$.
Thus, to fine-tune the model at inference time, we reuse the loss function in Equation \ref{eq:finalloss} without regularizing the $Z$ posterior, i.e., by omitting the $q(z|x)$ term. Eventually, $q(s|x)$ will be driven towards the new prior and compensate for the domain shift in the test domain. The models to predict future trajectories $p(y|x,s,z)$ and to reconstruct past trajectories $p(x|s,z)$ also need to be fine-tuned, as the samples of $q(s|x)$ will be updated.
Consequently, we only fine-tune the models for $q(s|x)$, $p(s)$, $p(y|x,s,z)$, and $p(x|s,z)$. The models for $p(z)$ and $q(z|x)$ can be arbitrarily complex, since they need not be fine-tuned in the test domain, but all the other models should be kept as simple as possible.
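A minimal sketch of this adaptation step in PyTorch is shown below: freeze the modules tied to the invariant features and build an optimizer over the rest. The submodule names (\texttt{q\_s}, \texttt{p\_s}, \texttt{q\_z}, \texttt{p\_z}, \texttt{predictor}, \texttt{reconstructor}) are hypothetical and only illustrate which parts are reused versus fine-tuned.
\begin{verbatim}
import torch

def prepare_for_adaptation(model, lr=1e-4):
    """Freeze invariant modules (q(z|x), p(z)) and fine-tune the variant-related ones."""
    for module in (model.q_z, model.p_z):
        for p in module.parameters():
            p.requires_grad = False        # reused as-is in the test domain

    tunable = []
    for module in (model.q_s, model.p_s, model.predictor, model.reconstructor):
        tunable += list(module.parameters())
    return torch.optim.Adam(tunable, lr=lr)
\end{verbatim}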
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=0.7\columnwidth]{dist.png}}
\caption{The priors of variant and invariant features. $E_i$ represents environment $i$.}
\label{fig:dist}
\end{center}
\vskip -0.2in
\end{figure}
\vspace{-0.6em}
\section{Experiments}
\vspace{-0.6em}
In our experiments we evaluate our models using two metrics commonly used to evaluate motion forecasting models:
\begin{equation}
\label{eq:ade}
ADE = \frac{\sum_{i=1}^{n}\sum_{t=T_{obs}+1}^{T_{obs}+T_{pred}}\left((\hat{x}_i^t-x_i^t)^2+(\hat{y}_i^t-y_i^t)^2\right)^\frac{1}{2}}{n \cdot T_{pred}}
\end{equation}
\begin{equation}
\label{eq:fde}
FDE = \frac{\sum_{i=1}^{n}\left((\hat{x}_i^{T_{obs}+T_{pred}}-x_i^{T_{obs}+T_{pred}})^2+(\hat{y}_i^{T_{obs}+T_{pred}}-y_i^{T_{obs}+T_{pred}})^2\right)^\frac{1}{2}}{n},
\end{equation}
where $\hat{x}_i^t$ and $\hat{y}_i^t$ are the \textit{predicted} horizontal and vertical coordinates of the pedestrian at time step $t$, while $x_i^t$ and $y_i^t$ are the \textit{actual} horizontal and vertical coordinates of the pedestrian at time step $t$.
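Both metrics can be computed directly from predicted and ground-truth coordinate arrays; the sketch below assumes trajectories stored as NumPy arrays of shape (number of pedestrians, prediction horizon, 2), which is an illustrative layout rather than the one used in our codebase.
\begin{verbatim}
import numpy as np

def ade_fde(pred, gt):
    """Average and Final Displacement Errors for predicted vs. ground-truth trajectories."""
    dists = np.linalg.norm(pred - gt, axis=-1)   # per-step Euclidean error, shape (n, T_pred)
    ade = dists.mean()                           # average over all pedestrians and time steps
    fde = dists[:, -1].mean()                    # error at the final predicted time step
    return ade, fde
\end{verbatim}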
\vspace{-0.6em}
\subsection{Datasets}
\vspace{-0.4em}
\paragraph{ETH-UCY dataset}This dataset contains the trajectories of 1536 detected pedestrians captured in five different environments \textit{\{hotel, eth, univ, zara1, zara2\}}. All trajectories in the dataset are sampled every 0.4 seconds. Following the experimental settings of \cite{causalmotion, counterfactualmotion, STGAT}, we also use a leave-one-out approach for training and evaluating our model: to predict the future 4.8 seconds (12 frames), we utilize the previously observed 3.2 seconds (8 frames).
\vspace{-0.6em}
\paragraph{Synthetic dataset}This dataset published in \cite{causalmotion} contains the trajectories of pedestrians in circle-crossing scenarios \cite{circle_cross} where the minimum separation distance of pedestrians differ in each domain. There are 8 domains in the dataset with separation distances $\{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8\}$ meters. Each domain contains 10,000 trajectories for training, 3,000 trajectories for validation, and 5,000 trajectories for testing.
\vspace{-0.6em}
\subsection{Robustness}
\vspace{-0.6em}
To evaluate the robustness of our model in the presence of spurious correlations, we compare our method with \cite{causalmotion, counterfactualmotion}. For a fair comparison, we use STGAT \cite{STGAT} as our baseline model. Although ETH-UCY contains five environments, it is not trivial to pinpoint the shifts in each environment. Therefore, we add a third dimension to the coordinates of the pedestrians, which measures observation noise and is modeled as in \cite{causalmotion}:
\begin{equation}
\label{eq:noise}
\begin{split}
&\gamma_t := (\Dot{x}_{t+\delta t} - \Dot{x}_t)^2 + (\Dot{y}_{t+\delta t} - \Dot{y}_t)^2 \\
&\sigma_t := \alpha (\gamma_t + 1),
\end{split}
\end{equation}
where $\Dot{x}_t = x_{t+1} - x_{t}$ and $\Dot{y}_t = y_{t+1} - y_{t}$ reflect the velocity of the pedestrians within the temporal window length of $\delta t = 8$ and $\alpha$ is the noise intensity. For the training domains $\alpha \in \{1, 2, 4, 8\}$, while for the test domain $\alpha \in \{8, 16, 32, 64\}$. The test domain is the \textit{eth} environment for this experiment. Since the values of $\alpha$ for the third input dimension in the training domains were $\{1, 2, 4, 8\}$, the samples of the test domain with $\alpha \in \{16, 32, 64\}$ can be considered out-of-distribution samples. To evaluate other methods in the presence of observation noise, we have used the publicly available code from \cite{causalmotion}. The results in Table \ref{tab:table1} demonstrate the robustness of our method against observation noise while performing comparably with other motion forecasting models for low $\alpha$. Since our proposed method also learns to reconstruct its inputs, it eliminates the effect of noise by reconstructing uncorrupted inputs and is hence not sensitive to noise.
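For reference, the added noise dimension of Equation \ref{eq:noise} can be computed as in the sketch below; the array layout for a single trajectory is our own assumption.
\begin{verbatim}
import numpy as np

def observation_noise_feature(traj, alpha, delta_t=8):
    """Compute sigma_t = alpha * (gamma_t + 1) for a single trajectory.

    traj: array of shape (T, 2) with (x, y) positions; alpha is the noise intensity.
    """
    vel = np.diff(traj, axis=0)                            # (x_dot_t, y_dot_t) per step
    gamma = ((vel[delta_t:] - vel[:-delta_t]) ** 2).sum(axis=-1)
    return alpha * (gamma + 1.0)                           # one sigma_t per valid time step
\end{verbatim}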
\begin{table*}[h]
\caption{Robustness of different methods in the ETH-UCY dataset with controlled spurious correlation.}
\label{tab:table1}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccc}
\toprule
Method & ADE/FDE & ADE/FDE & ADE/FDE & ADE/FDE \\
& $\alpha=8$ & $\alpha=16$ & $\alpha=32$ & $\alpha=64$ \\
\midrule
Baseline \cite{STGAT} & \textbf{0.80/1.37} & 2.15/3.80 & 2.64/4.44 & 2.68/4.48 \\
Counterfactual \cite{counterfactualmotion} & 0.80/1.59 & 1.62/2.68 & 2.32/3.90 & 2.68/4.52 \\
Invariant $\lambda=1.0$ \cite{causalmotion} & 0.94/1.65 & 1.04/\textbf{1.76} & 1.52/2.55 & 1.96/3.35 \\
Invariant $\lambda=3.0$ \cite{causalmotion} & 0.91/1.67 & 0.99/1.87 & 1.18/2.20 & 1.27/2.33 \\
Invariant $\lambda=5.0$ \cite{causalmotion} & 0.98/1.79 & 1.00/1.83 & 1.06/1.90 & 1.56/2.58 \\
GCRL (ours) & 0.97/1.8 & \textbf{0.97}/1.8 & \textbf{0.97/1.8} & \textbf{0.97/1.8} \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table*}
\vspace{-0.6em}
\subsection{Domain Generalization}
\vspace{-0.6em}
In this experiment, we evaluate the generalizability of our proposed method using the synthetic dataset \cite{causalmotion}. We will refer to \cite{causalmotion} as IM in the remainder of the paper. For a fair comparison with IM, we use a PECNet \cite{PECNet} variant as our base model, train the model with five seeds and report the mean and standard deviation of the results. We will use the PECNet variant as our base model in the subsequent experiments. The Minimum Separation Distances (MSD) in the training and test domains are $\{0.1, 0.3, 0.5\}$ and $\{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8\}$ meters, respectively.
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=0.5\textwidth]{Best_N.png}}
\caption{Comparison of ADE/FDE for different values of $N$.}
\label{fig:best_k}
\end{center}
\vskip -0.2in
\end{figure}
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=0.5\textwidth]{DG.png}}
\caption{Domain generalization with different style domain shifts. The mean and standard deviation of ADEs are calculated and plotted for 5 seeds.}
\label{fig:generalize}
\end{center}
\vskip -0.2in
\end{figure}
Since GCRL is a generative approach, we can generate multiple future trajectories per sample and select the best of them to tackle the multi-modality of trajectories, as practiced in \cite{SocialGAN, STGAT, counterfactualmotion}. Therefore, we use a hyper-parameter $N$ at test time to determine the number of generated trajectories per sample. Figure \ref{fig:best_k} illustrates the significant impact that a generative approach can have on performance. We will use $N=100$ in the following experiments.
As illustrated in Figure \ref{fig:generalize}, our method is more robust to domain shifts compared to IM and achieves a slightly better ADE (8.8\% lower on average).
It is evident that for OOD-Inter cases, where the test domain shift is within the range of the training domain shifts (e.g., a test domain shift of 0.4), GCRL is reusable since the ADE is insensitive to the domain shifts. On the other hand, for test domain shifts outside the range of the training domain shifts, the OOD-Extra cases, the model needs to be fine-tuned.
\vspace{-0.6em}
\subsection{Domain Adaptation}
\vspace{-0.6em}
In this experiment, we evaluate the efficiency of our proposed method in knowledge transfer using the synthetic dataset for an OOD-Extra case. We train IM and GCRL with the previous setting and fine-tune different components of the model using a small number of batches from the test domain. The batch size is 64, hence, the number of samples used in fine-tuning will be $\{1, 2, 3, 4, 5, 6\} \times 64$. For IM, we fine-tune it using the best setting reported in the paper. For GCRL, we fine-tune our models for $p(y|x,s,z)$, $p(x|s,z)$, $p(s)$ and $q(s|x)$.
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=0.5\textwidth]{DA.png}}
\caption{Domain Adaptation with different number of batches}
\label{fig:adaptation}
\end{center}
\vskip -0.2in
\end{figure}
As shown in Figure \ref{fig:adaptation}, GCRL adapts to the new environment faster than IM and is more robust to OOD-Extra shifts. In our experiment we fine-tune only the weights of the GMM prior, which improves the ADE over IM by 34.3\% on average.
\vspace{-0.6em}
\subsection{Identifiability}
\vspace{-0.6em}
To evaluate the identifiability of $S$ and $Z$, we train GCRL with five different seeds on the synthetic dataset. With a GMM prior and a piecewise-affine injective decoder, identifiability up to an affine transformation can be achieved. Therefore, we calculate MCC between pairs of learnt latents before and after applying an affine transformation $f$, which we refer to as Strong MCC and Weak MCC, respectively. $f$ can be learnt via least-squares minimization.
The results in Table \ref{tab:table2} support the identifiability of $S$ as expected. $Z$, however, is not identifiable from observed trajectories, and more data from the environments is required. This finding also aligns with the limited capability of past trajectories to explain the features that $Z$ is supposed to represent, such as physical laws. For example, in order to learn that pedestrians avoid obstacles in their paths, one needs to provide images of the environments. Despite the poor identifiability of $Z$, the empirical results in the next section indicate the invariance of $Z$ across domains.
\begin{table}[ht]
\caption{MCC of $S$ and $Z$ after and before applying an affine transformation.}
\label{tab:table2}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccc}
\toprule
Weak & Weak & Strong & Strong \\
MCC of S & MCC of Z & MCC of S & MCC of Z \\
\toprule
0.956 & 0.049 & -0.16 & -0.025 \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\vspace{-0.6em}
\subsection{Ablation studies}
\vspace{-0.6em}
In this experiment, we examine the individual contribution of the components of GCRL. First, we remove the coupling layers from the priors and replace them with learnable parameters. Second, we reconstruct past trajectories and predict future trajectories using only samples from $q(z|x)$. Third, we use only the samples of $q(s|x)$. We train the models on the synthetic dataset with $\{0.1, 0.3, 0.5\}$ shifts and test it on the domain with 0.6 shift.
\begin{table}[ht]
\caption{The effect of different components on performance}
\label{tab:table3}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcc}
\toprule
Models & ADE & FDE \\
\toprule
Vanilla GCRL & 0.0871 & 0.1047 \\
\toprule
No coupling layers & 0.0772 & 0.0916 \\
\toprule
Z only & 0.1054 & 0.1347 \\
\toprule
S only & 0.2188 & 0.2418 \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
As shown in Table \ref{tab:table3}, the model with only $Z$ performs on par with the default model; however, the performance deteriorates when using only $S$. It can be concluded that $Z$ is invariant across domains as expected; however, it lacks information, which leads to a worse ADE. The model without coupling layers performs better than the default model on the synthetic dataset, which indicates that fewer parameters would suffice for certain environments.
\vspace{-1.0em}
\section{Conclusions}
\vspace{-0.6em}
We propose a method that leverages causality to learn meaningful features that can increase the robustness and transferability of deep learning models. In the presence of spurious correlations, we demonstrated the robustness of our method while other human trajectory prediction models performed poorly in comparison. Furthermore, our augmented causal model was able to enhance transferability in zero-shot and low-shot settings. It can be concluded from our results that incorporating causality into deep learning is a promising research direction towards the robustness and explainability of deep learning models.
\bibliographystyle{unsrtnat}
|
{
"arxiv_id": "2302.08618",
"language": "en",
"timestamp": "2023-02-20T02:03:39",
"url": "https://arxiv.org/abs/2302.08618",
"yymm": "2302"
} |
\section{Introduction}\label{sec:introduction}
Training deep neural networks (DNNs) requires large amounts of computing power and data; however, relying on a sustained increase in computing power is unsustainable \cite{thompson2020computational}, and data from multiple sources cannot always be aggregated (e.g. due to healthcare data regulations \cite{annas_hipaa_2003, mercuri_hipaa-potamus_2004}).
Distributed deep learning frameworks such as \textit{split learning} (SplitNN) \cite{vepakomma_split_2018, gupta_distributed_2018} and \textit{federated learning} \cite{bonawitz_towards_2019, konecny_federated_2016, konecny_federated_2017} aim to solve these two problems by allowing a group of data-holders (clients) to train a DNN without raw data sharing. The resulting DNN is effectively trained using the data-holders' collective data.
In federated learning, each client trains a local model and sends its parameter updates to the central server. The server aggregates the parameter updates (e.g. taking their average) of the clients and redistributes the final value. In SplitNN, a DNN is split into multiple parts (typically two); in the two-part setting, clients compute the first few layers of a DNN and send the output to a central server, who then computes the rest of the layers and initiates the backpropagation of gradients. In both methods, no client shares its private data with another party, and all clients end up with the same model.
\textbf{The Problem.} In SplitNN, the server has control over what the client models learn, since the client models' parameter updates are fully determined by the server's backpropagated gradients. This creates a new attack vector we call \textit{training-hijacking}, available to a malicious server trying to obtain the clients' private data, and it has already been exploited in an attack \cite{pasquini_unleashing_2021}.\footnote{By contrast, this attack vector does not exist in federated learning, since the clients can trivially check if their model is aligned with their goals by calculating its accuracy on their local data. Running the same detection method is not possible in split learning since the clients cannot evaluate the model entirely locally.} In the attack, the server leads a client towards outputting values in such a way that it is as easy as possible for the server to obtain back the original inputs from the intermediate values. This is a serious potential violation of the clients' data privacy, but if the clients can detect early in the training process that the server is launching an attack, they can halt training and leave the attacker empty-handed.
\textbf{Our Solution.} Building on our earlier work \cite{splitguard}, we propose a set of two methods by which a SplitNN client can detect, without expecting cooperation from the server, if its local model is being hijacked. Our first approach is an active (i.e. clients interfere with the training process) method relying on the empirical observation that if a model is learning the intended classification task, its parameter updates should significantly differ from the norm when the ground truth labels for a batch are randomized. Our second approach is a passive method, demonstrating that a relatively straightforward outlier detection algorithm can reliably detect training-hijacking attacks, albeit at a higher compute cost on the clients' side. We demonstrate our methods' effectiveness through various experiments (using the MNIST \cite{lecun2010mnist}, Fashion-MNIST \cite{xiao2017/online}, and CIFAR10/100 \cite{Krizhevsky09learningmultiple} datasets) and discuss different use cases for both.
\begin{figure*}[t!]
\centering
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/splitnn_simple.png}
\caption{With label-sharing.}
\label{fig:splitnn_label_sharing}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/splitnn_private_label.png}
\caption{Without label-sharing.}
\label{fig:splitnn_private_labels}
\end{subfigure}
\caption{\textbf{Potential SplitNN setups}. Arrows denote the forward and backward passes, starting with the examples $X$, and propagating backwards after the loss computation using the labels $Y$. In Figure \ref{fig:splitnn_label_sharing}, clients send the labels to the server along with the intermediate outputs. In Figure \ref{fig:splitnn_private_labels}, the model terminates on the client side, and the clients do not share their labels.}
\label{fig:splitnn_setups}
\end{figure*}
Our main claim is that both methods can detect an existing attack with perfect accuracy, but the passive method results in fewer false positives, albeit under stronger assumptions on the clients' capabilities.
The code for our methods is available at \url{https://github.com/ege-erdogan/splitguard}.
\section{Background and Related Work}
\subsection{Neural Networks}
In the context of supervised learning,\footnote{Supervised learning involves learning through labeled data, as opposed to unsupervised learning, in which the data used is not labeled.} a neural network \cite{Goodfellow-et-al-2016} is a parameterized function $f:X \times \Theta \rightarrow Y$ that approximates a function $f^*:X \rightarrow Y$. The training process aims to learn the parameters $\Theta$ using a training set consisting of examples $\tilde X$ and labels $\tilde Y$ sampled from the real-world distributions $X$ and $Y$.
A typical neural network, also called a \textit{feedforward neural network}, consists of discrete units called \textit{neurons}, organized into layers. Each neuron in a layer takes in a weighted sum of the previous layer's neurons' outputs, applies a non-linear activation function, and outputs the result. The weights connecting the layers to each other constitute the parameters that are updated during training. Considering each layer as a separate function, we can model a neural network as a chain of functions, and represent it as $f(x)=f^{(N)}(...(f^{(2)}(f^{(1)}(x)))$, where $f^{(1)}$ corresponds to the first layer, $f^{(2)}$ to the second layer, and $f^{(N)}$ to the final, or the \textit{output} layer. The final layer often has a different activation function, such as the softmax function.
Training a neural network involves minimizing a loss function. However, since the nonlinearity introduced by the activation functions applied at each neuron causes the loss function to become non-convex, we use iterative, gradient-based approaches to minimize the loss function. Since these methods do not provide any global convergence guarantees, it is important that the training data represent the real-world data as accurately as possible.
A widely-used optimization method is \textit{stochastic gradient descent} (SGD). Rather than computing the gradient from the entire data set, SGD computes gradients for batches selected from the data set. The weights are updated by propagating the error backwards using the backpropagation algorithm. Training a deep neural network generally requires multiple passes over the entire data set, each such pass being called an \textit{epoch}. One round of training a neural network requires two passes through the network: one forward pass to compute the network's output, and one backward pass to update the weights. We will use the terms \textit{forward pass} and \textit{backward pass} to refer to these operations in the following sections. For an overview of gradient-based optimization methods other than SGD, we refer the reader to \cite{ruder_overview_2017}.
\subsection{Split Learning}
In split learning (SplitNN) \cite{gupta_distributed_2018, vepakomma_no_2018, vepakomma_split_2018}, a DNN is split between the clients and a server such that each client locally computes the first few layers, and the server computes rest of the layers. This way, a group of clients can train a DNN utilizing, but not sharing, their collective data.
Most of the computational work is offloaded to the server, reducing the cost of training for the clients. However, this partitioning involves a privacy/cost trade-off for the clients, with the outputs of earlier layers leaking more information about the inputs.
Figure \ref{fig:splitnn_setups} displays the two basic setups of SplitNN, the main difference being whether the clients share their labels with the server or not. In Figure \ref{fig:splitnn_label_sharing}, clients compute only the first few layers, and share their labels with the server. The server then computes the loss value, starts backpropagation, and sends the gradients of its first layer back to the client, who then completes the backward pass. The private-label scenario depicted in Figure \ref{fig:splitnn_private_labels} follows the same procedure, with an additional communication step. Since now the client computes the loss value and initiates backpropagation, it should first feed the server model with the gradient values to resume backpropagation.
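To make the label-sharing protocol concrete, the sketch below simulates one training iteration with both halves of the model in the same process; in a real deployment the intermediate activations, labels, and returned gradients would cross the network. The model and optimizer objects are assumed to be standard PyTorch modules, and this is an illustration of the protocol rather than the exact implementation used in our experiments.
\begin{verbatim}
import torch
import torch.nn.functional as F

def split_training_step(client_model, server_model, x, y, client_opt, server_opt):
    """One label-sharing SplitNN iteration."""
    client_opt.zero_grad()
    server_opt.zero_grad()

    smashed = client_model(x)                      # client-side forward pass
    sent = smashed.detach().requires_grad_()       # what is sent over the network

    out = server_model(sent)                       # server-side forward pass
    loss = F.cross_entropy(out, y)                 # server computes the loss (labels shared)
    loss.backward()                                # server backprop fills sent.grad
    server_opt.step()

    smashed.backward(sent.grad)                    # client resumes backprop with returned grads
    client_opt.step()
    return loss.item()
\end{verbatim}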
The primary advantage of SplitNN compared to federated learning is its lower communication load \cite{singh2019detailed}. While federated learning clients have to share their entire parameter updates with the server, SplitNN clients only share the output of a single layer.
SplitNN follows a round-robin training protocol to accommodate multiple clients; clients take turn training with the server using their local data. Before a client starts its turn, it should bring its parameters up-to-date with those of the most recently trained client. There are two ways to achieve this: the clients can either share their parameters through a central parameter server, or directly communicate with each other in a peer-to-peer way.
Choosing a split depth is crucial for SplitNN to actually provide data privacy. If the initial client model is too shallow, an honest-but-curious server can recover the private inputs with high accuracy, knowing only the model architecture (not the parameters) on the clients' side \cite{erdogan2021unsplit}. This implies that SplitNN clients should increase their computational load by computing more layers for better data privacy.
\subsection{Training-Hijacking in Split Learning}
In a training-hijacking attack against a SplitNN client, the attacker server tries to direct the client models towards its own malicious goal, independent of the actual classification task. The Feature-Space Hijacking Attack (FSHA) (Pasquini et al. CCS '21) \cite{pasquini_unleashing_2021} is the only proposed training-hijacking attack against SplitNN clients so far. The server aims to lead the clients, by propagating back loss values independent of the original task, towards outputting values in such a way that it is easier to recover the original inputs (clients' private data) than if the model was learning the original task.
In FSHA, the attacker (a SplitNN server) first trains an autoencoder (consisting of the encoder $\tilde{f}$ and the decoder $\tilde{f}^{-1}$) on some public dataset $X_{pub}$ similar to that of the client's private dataset $X_{priv}$. It is important for the attack's effectiveness that $X_{pub}$ be similar to $X_{priv}$. Without such a dataset at all, the attack cannot be launched. The main idea is for the server to bring the output spaces of the client model $f$ and the encoder $\tilde f$ as close as possible, so that the decoder $\tilde{f}^{-1}$ can successfully invert the client outputs and recover the private inputs.
After this initial \textit{setup phase}, the client model's training begins. For this step, the attacker initializes a distinguisher model $D$ that tries to distinguish the client's output $f(X_{priv})$ from the encoder's output $\tilde{f}(X_{pub})$. More formally, the distinguisher is updated at each iteration to minimize the loss function
\begin{equation}
L_D = \log(1-D(\tilde{f}(X_{pub}))) + \log(D(f(X_{priv}))).
\end{equation}
Simultaneously at each training iteration, the server directs the client model $f$ towards maximizing the distinguisher's error rate, thus minimizing the loss function
\begin{equation}
L_f = \log(1-D(f(X_{priv}))).
\end{equation}
In the end, the output spaces of the client model and the server's encoder are expected to overlap to a great extent, making it possible for the decoder to invert the client's outputs.
Notice that the client's loss function $L_f$ is completely independent of the training labels; that is, changing the values of the labels does not affect the loss function. We will refer to this observation shortly.
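A sketch of how the two loss terms above could be computed in PyTorch is shown below; the distinguisher $D$ is assumed to output values in $(0,1)$, and a small constant guards against $\log 0$. This is an illustration of the loss expressions, not the attack implementation of \cite{pasquini_unleashing_2021}.
\begin{verbatim}
import torch

def fsha_losses(D, client_out_priv, encoder_out_pub, eps=1e-7):
    """Compute the distinguisher loss L_D and the client-directed loss L_f."""
    d_priv = D(client_out_priv)        # D applied to f(X_priv)
    d_pub = D(encoder_out_pub)         # D applied to the encoder output on X_pub

    # L_D: the distinguisher learns to separate encoder outputs from client outputs.
    loss_D = torch.log(1 - d_pub + eps).mean() + torch.log(d_priv + eps).mean()

    # L_f: the gradients the server sends back push the client to fool the distinguisher.
    loss_f = torch.log(1 - d_priv + eps).mean()
    return loss_D, loss_f
\end{verbatim}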
\subsection{Anomaly Detection \& Local Outlier Factor}
An anomaly detection (AD) method tries to identify data points that are ``different'' from the ``normal,'' e.g. to identify potentially dubious financial transactions. An important property of an AD method is whether the training data contains outliers or not; if it does, the approach is called \textit{outlier detection}, and otherwise \textit{novelty detection}.
Local outlier factor (LOF) \cite{breunig2000lof, chen2010comparison, janssens2009outlier} is an unsupervised AD method. It does not make any distribution assumptions and is density-based. Rather than output a binary decision, LOF assigns a \textit{local outlier factor score} to each point, where points in homogeneous clusters have low scores and those with smaller local densities have high scores. The LOF algorithm works as follows:
Using a distance measure $d$ such as the Euclidean distance, calculate the \textit{k-distance} (the distance between a point $p$ and its $k^{th}$ closest neighbor $p_k$), denoted $d_k(p)$, of each point $p$. Then calculate its \textit{reachability distance} to each point $q$ as
$$
\text{reachDist}(p, q) = \max(d(p,q), d_k(p)),
$$
and its \textit{local reachability density} as the inverse of its average reachability distance to its $k$-neighbors $\text{kNN}(p)$:
$$
\text{LRD}(p) = \Bigg(\sum_{q \in \text{kNN(p)}} \frac{\text{reachDist}(p, q)}{k}\Bigg)^{-1}.
$$
Finally, assign a LOF score to $p$ as the ratio between the average LRD of its $k$-neighbors and its own LRD:
$$
\text{LOF}(p) = \frac{\sum_{q \in \text{kNN(p)}} \text{LRD}(q)}{k \cdot \text{LRD(p)}}
$$
In the end, we expect that if the point $p$ is an inlier, $\text{LOF}(p)$ will be nearly 1 and greater otherwise. Generally, we say $p$ is an outlier if $\text{LOF}(p) > 1$.
Since LOF discovers outliers only based on their local densities and does not try to model the distribution, it does not require us to have even a rough model of the expected outlier behavior, which makes it a feasible choice for us.
So far we have not touched upon how $k$, the number of neighbors, is chosen. There have been methods to choose $k$ automatically; we explain how we choose $k$ in our study in Section \ref{sg_ad}.
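As one concrete way to obtain such scores, the snippet below applies scikit-learn's \texttt{LocalOutlierFactor} to toy two-dimensional data; the data, the choice of $k$, and the use of scikit-learn are illustrative and do not reflect our experimental configuration.
\begin{verbatim}
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(0.0, 0.1, size=(50, 2)),    # dense, homogeneous cluster -> low LOF scores
    [[3.0, 3.0]],                          # isolated point -> high LOF score
])

lof = LocalOutlierFactor(n_neighbors=10)   # k, the number of neighbors
labels = lof.fit_predict(points)           # -1 for outliers, 1 for inliers
scores = -lof.negative_outlier_factor_     # LOF(p): ~1 for inliers, larger for outliers
print(labels[-1], round(scores[-1], 2))    # the isolated point is flagged as an outlier
\end{verbatim}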
\subsection{Differential Privacy as a Defense Against Training-Hijacking}
Differential privacy \cite{dwork2014algorithmic} applied to model training reduces the memorization of individual examples, and thus has been evaluated as a potential defense mechanism against training-hijacking attacks \cite{gawron2022feature}, through the clients applying differential privacy on the gradients they receive from the server. As reported in \cite{gawron2022feature}, applying DP makes the attack less effective after the same number of iterations compared to the non-DP scenario, but the attacker still obtains high-accuracy results after a higher number of iterations.
Thus, differential privacy by itself does not rule out training-hijacking attacks, the bottom line being as the authors state in \cite{gawron2022feature}: "DP can at most delay FSHA convergence." Nevertheless, this delay can still prove useful. For example, a SplitNN client running one of our methods while also applying DP on the gradients it receives would have more time to detect a training-hijacking attack before the attacker learns accurate results. DP should not be ruled out as a defense against training-hijacking; in fact, it can be a strong tool for clients when used in the right context.
\begin{table}[h!]
\centering
\caption{Summary of notation and shorthand used throughout the paper.}
\label{tab:notation}
\begin{tabular}{ll}
\toprule
\multicolumn{2}{l}{\textbf{Notation}} \\ \midrule
SG-LC & abbrv. SplitGuard Label-Changing \\
SG-AD & abbrv. SplitGuard Anomaly Detection \\
$P_F$ & Probability of sending a fake batch \\
$B_F$ & Share of randomized labels in a fake batch \\
$N$ & Batch index at which SG-LC starts running \\
$F$ & Set of fake gradients \\
$R_1, R_2$ & Random, disjoint subsets of regular gradients \\
$R$ & $R_1 \cup R_2$ \\
$\alpha, \beta$ & Parameters of the SG-LC score function \\
$L$ & Number of classes \\
$A$ & Model's classification accuracy \\
$A_F$ & Expected classification accuracy for a fake batch \\
\bottomrule
\end{tabular}
\end{table}
\section{SplitGuard: Label-Changing}
\subsection{Model Behavior Under a Reversed Task}
A supervised learning task can be reversed by randomizing the ground truth values used in training. The points corresponding to high-accuracy classifiers on the parameter space lead to a low classification accuracy when the labels are randomized.
If the client model is learning the intended task, a change should be visible in its parameter updates when the task is reversed. Moreover, since the attacker's objective is independent of the original task, the same discrepancy should not be visible if the server is hijacking the training process. We then need this discrepancy to become evident before the attacker can learn a significant amount of information, so that the clients can stop training soon enough if the expected discrepancy \textit{does not} occur.
During training with SG-LC, clients intermittently input batches with randomized labels, denoted \textit{fake batches}, as opposed to \textit{regular batches}.\footnote{\textit{Fake gradients} and \textit{regular gradients} similarly refer to the gradients resulting from fake and regular batches.}
There are two components of the aforementioned discrepancy between the fake and regular gradients: \textit{angle} and \textit{magnitude}. We make the following two claims (empirically demonstrated in the Supplementary Material):
\begin{claim}
If the client model is learning the intended task, then the angle between fake and regular gradients will be higher than the angle between two random subsets of regular gradients (the once-optimal point becomes a point to avoid).\footnote{Angle between \textit{sets} meaning the angle between the \textit{sums} of vectors in those sets.}
\end{claim}
\begin{claim}
If the client model is learning the intended task, then fake gradients will have a higher magnitude than regular gradients (fake batches will result in a greater error, resulting in bigger updates).
\end{claim}
\subsection{Putting the Claims to Use}
At the core of SG-LC, clients compute the SG-LC score, based on the fake and regular gradients they have collected up to that point, and use the scores to detect the attack. We now describe this calculation process in more detail. Table \ref{tab:notation} displays the notation we use from here on.
Algorithm \ref{alg:main} explains the modified training procedure in more detail. Starting with the $N$th batch during the first epoch of training, with probability $P_F$, clients send fake batches in which a share $B_F \in [0,1]$ of the labels is randomized. Upon computing the gradient values for their first layer, clients append the fake gradients to the list $F$, and split the regular gradients randomly into the lists $R_1$ and $R_2$, where $R=R_1 \cup R_2$. To minimize the effect of fake batches on model performance, clients discard the parameter updates resulting from fake batches.
\input{include/main_alg}
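A minimal sketch of the fake-batch construction described above is given below; the values of $P_F$ and $B_F$, the number of classes, and the label encoding are illustrative.
\begin{verbatim}
import numpy as np

def maybe_fake_batch(labels, p_f=0.1, b_f=1.0, num_classes=10,
                     rng=np.random.default_rng()):
    """With probability P_F, randomize a share B_F of the batch's labels.

    Returns the (possibly modified) labels and a flag marking the batch as fake,
    so the client can discard the resulting parameter update.
    """
    if rng.random() >= p_f:
        return labels, False                                 # regular batch, untouched
    fake = labels.copy()
    idx = rng.choice(len(labels), size=int(b_f * len(labels)), replace=False)
    fake[idx] = rng.integers(0, num_classes, size=len(idx))  # randomized ground truth
    return fake, True
\end{verbatim}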
We should first define two quantities. For two sets of vectors $A$ and $B$, we define $d(A,B)$ as the absolute difference between the average magnitudes of the vectors in $A$ and $B$:
\begin{equation}
d(A, B) = \Big| \frac{1}{|A|} \sum_{a \in A} \Vert a \Vert -
\frac{1}{|B|} \sum_{b \in B} \Vert b \Vert
\Big|,
\end{equation}
and $\theta(A,B)$ as the angle between sums of vectors in two sets $A$ and $B$:
\begin{equation}
\theta(A,B) = \arccos\Big(\frac{\bar A \cdot \bar B}{\Vert \bar A \Vert \cdot \Vert \bar B \Vert}\Big)
\end{equation}
where $\bar A$ is the sum of the vectors in $A$. We can restate our two claims more concisely using these quantities under the condition that the client model is learning the intended task:
\begin{claim1}
$\theta(F, R) > \theta(R_1, R_2)$
\end{claim1}
\begin{claim2}
$d(F,R) > d(R_1, R_2)$
\end{claim2}
It then follows that if the server is honest, $\theta(F,R) \cdot d(F,R)$ will be greater than $\theta(R_1,R_2)\cdot d(R_1,R_2)$. If the model is learning some other task independent of the labels, then $F, R_1$, and $R_2$ will essentially be three random samples of the set of gradients obtained during training, and it will not be possible to consistently detect the same relationships among them.
To quantify this, after each fake batch, clients compute the value:
\begin{equation}
S = \frac{
\theta(F,R) \cdot d(F,R) - \theta(R_1,R_2)\cdot d(R_1,R_2)
}{
d(F,R) + d(R_1, R_2) + \varepsilon
}.
\label{eq:main}
\end{equation}
The numerator contains the useful information we want to extract, and we divide that result by $d(F,R) + d(R_1, R_2) + \varepsilon$, where $\varepsilon$ is a small constant to avoid division by zero, to bound the $S$ value within the interval $[-\pi, \pi]$.
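As an illustration, a minimal NumPy sketch of $d$, $\theta$, and the resulting $S$ value of Equation (\ref{eq:main}) could look as follows, assuming the gradient sets are stored as Python lists of flattened arrays; this is our own reading of the equations above, not the reference implementation.
\begin{verbatim}
import numpy as np

def d(A, B):
    """Absolute difference between the average gradient magnitudes of two sets."""
    return abs(np.mean([np.linalg.norm(a) for a in A]) -
               np.mean([np.linalg.norm(b) for b in B]))

def theta(A, B):
    """Angle (in radians) between the sums of the vectors in two sets."""
    a_bar, b_bar = np.sum(A, axis=0), np.sum(B, axis=0)
    cos = np.dot(a_bar, b_bar) / (np.linalg.norm(a_bar) * np.linalg.norm(b_bar))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def s_value(F, R1, R2, eps=1e-8):
    """S value computed from fake (F) and regular (R1, R2) gradient sets."""
    R = R1 + R2
    num = theta(F, R) * d(F, R) - theta(R1, R2) * d(R1, R2)
    return num / (d(F, R) + d(R1, R2) + eps)
\end{verbatim}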
Informally, high S values correspond to an honest server, and low values to a training-hijacking server. For an effective method, we need to define the notions of \textit{higher} and \textit{lower} more clearly. For this purpose, we will define a \textit{squashing function} that maps the interval $[-\pi, \pi]$ to the interval $(0,1)$, where high S values get mapped infinitesimally close to 1 while the lower values get mapped to considerably lower values. This allows the clients to choose a threshold to consistently separate high and low values.
Our function of choice for the squashing function is the logistic sigmoid function $\sigma$. To provide some form of flexibility to the clients, we introduce two hyper-parameters, $\alpha$ and $\beta$, and define the function as follows:
\begin{equation}
\tag{SG-LC Score}
SG = \sigma(\alpha \cdot S)^\beta \in (0,1).
\end{equation}
The function naturally maps the high end of the interval $[-\pi, \pi]$ to values near 1, and the low end to values near 0. The parameter $\alpha$ determines the range of values that get mapped very close to 1, while increasing the parameter $\beta$ penalizes the values that are less than 1.
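A corresponding one-line sketch of the SG-LC score, under the same NumPy setting as above:
\begin{verbatim}
def sg_score(s, alpha=7.0, beta=1.0):
    """SG-LC score: squash the S value into (0, 1) with a logistic sigmoid."""
    sigmoid = 1.0 / (1.0 + np.exp(-alpha * s))
    return sigmoid ** beta
\end{verbatim}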
\input{include/policies_alg}
\subsection{Detecting the Attack}
A client can take into account the \textit{history} of SG-LC scores to make a decision. Although the space of potential decision policies is large, we propose three simple-to-use alternatives (also displayed in Algorithm \ref{alg:policies}) as follows:
\begin{itemize}
\item \textit{Fast:} Fix an early batch index. Report an attack if the last score obtained is less than the decision threshold after that index. The goal of this policy is to detect an attack as fast as possible, without worrying about a high false positive rate.
\item \textit{Avg-$k$:} Report an attack if the average of the last $k$ scores is less than the decision threshold. This policy represents a middle point between the \textit{Fast} and the \textit{Voting} policies.
\item \textit{Voting:} Divide the scores sequentially into groups of a fixed size and calculate each group's average. Report an attack if the majority of the group averages is less than the decision threshold. This policy aims for a high overall success rate (i.e. high true positive and low false positive rates); it can tolerate making decisions relatively later.
\end{itemize}
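The three policies above admit straightforward implementations; the following Python sketch is one possible reading of Algorithm \ref{alg:policies} (function names and argument choices are ours), operating on the list of SG-LC scores collected so far.
\begin{verbatim}
def fast_policy(scores, threshold, start_index):
    """Report an attack if the latest score after start_index falls below the threshold."""
    return len(scores) > start_index and scores[-1] < threshold

def avg_k_policy(scores, threshold, k):
    """Report an attack if the mean of the last k scores falls below the threshold."""
    return len(scores) >= k and sum(scores[-k:]) / k < threshold

def voting_policy(scores, threshold, group_size):
    """Split scores into consecutive groups; report an attack if most group means
    fall below the threshold."""
    groups = [scores[i:i + group_size] for i in range(0, len(scores), group_size)]
    votes = [sum(g) / len(g) < threshold for g in groups]
    return len(votes) > 0 and sum(votes) > len(votes) / 2
\end{verbatim}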
\section{SplitGuard: Anomaly Detection} \label{sg_ad}
Our AD-based approach SG-AD consists of two phases: \textit{collecting training data} and \textit{detecting the attack}.
\input{tables/main_results}
\subsection{Collecting Training Data for LOF}
We assume the clients have access to the whole neural network architecture (though not necessarily with the same parameters as the server-side model). They can then train the neural network before the actual training, using a portion of their local data, to collect a set of \textit{honest} gradients.
This seemingly contradicts the whole point of using a collaborative ML protocol such as SplitNN: outsourcing computation. However, as we will demonstrate in more detail, clients using a very small share of their local data for the data collection phase still perform accurate detection. Moreover, better data utilization, rather than purely outsourcing computation, might be a stronger reason for using SplitNN in certain scenarios (e.g. financially/technologically capable institutions such as universities or hospitals working on regulated private data).
\subsection{Detecting the Attack}
When the SplitNN training begins, gradients the server sends are input to LOF, and are classified as \textit{honest} (inlier) or \textit{malicious} (outlier). Clients can make a decision after each gradient or combine the results from multiple gradients, by classifying the last few gradients and reaching the final decision by a majority vote between them. If a client concludes that the server is launching an attack, it can stop training and prevent further progress of the attack.
For the LOF algorithm, clients need to decide on a hyperparameter, the number of neighbors. Although there has been work on automatically choosing such values \cite{xu2019automatic}, we have observed that setting it equal to one less than the number of honest gradients collected (i.e. the highest feasible value) consistently achieves the best performance across all our datasets.
Algorithm \ref{alg:ad_cli} explains the attack detection process in detail. Before training, clients allocate a certain share of their training data as a separate set $train_{\text{SIM}}$ and train a local copy of the server-side layers using that dataset. The resulting gradients then constitute the training data of LOF, denoted $train_{\text{LOF}}$. During the actual training with SplitNN, each gradient received from the server is input to LOF and classified as an outlier/inlier. With the window size $w$, clients decide that the server is attacking if at least $w$ LOF decisions have been collected and the majority of them are outliers (i.e. malicious gradients). Similar to SG-LC, training is stopped as soon as an attack is detected.
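For illustration, a minimal sketch of this detection loop using scikit-learn's \texttt{LocalOutlierFactor} in novelty-detection mode could look as follows; the helper names and the flattening of gradients into vectors are our own assumptions.
\begin{verbatim}
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def fit_lof(honest_gradients):
    """Fit LOF on gradients collected from the locally simulated (honest) run."""
    X = np.stack([g.ravel() for g in honest_gradients])
    lof = LocalOutlierFactor(n_neighbors=len(X) - 1, novelty=True)  # highest feasible value
    lof.fit(X)
    return lof

def ad_decision(lof, received_gradients, window=10):
    """Majority vote over the last `window` LOF decisions; -1 means outlier (malicious)."""
    if len(received_gradients) < window:
        return False
    recent = np.stack([g.ravel() for g in received_gradients[-window:]])
    preds = lof.predict(recent)
    return np.sum(preds == -1) > window / 2
\end{verbatim}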
\input{include/ad_cli_alg.tex}
\section{Results}
We now test how accurately our two methods can detect FSHA. For our experiments, we use the ResNet architecture \cite{he2015deep}, trained using Adam \cite{kingma_adam_2017}, on the MNIST \cite{lecun2010mnist}, Fashion-MNIST \cite{xiao2017/online}, and CIFAR10/100 \cite{Krizhevsky09learningmultiple} datasets. We implemented our methods in Python (v 3.7) using the PyTorch library (v 1.9) \cite{pytorch}. In all our experiments, we limit our scope only to the first epoch of training.
\subsection{Detecting FSHA by SG-LC} \label{detecting_fsha}
\subsubsection{Distinguishable Scores}
\input{tables/detection_results}
Figure \ref{fig:main_results} compares the SG-LC scores obtained against a malicious (FSHA) server and an honest server, averaged over 5 runs with a $P_F$ value of 0.1 and varying $B_F$ values.\footnote{The $B_F$ values do not affect the SG-LC scores obtained against a FSHA server, since the client's loss function $L_f$ is independent of the labels, though as we will discuss later there might be strategic reasons for choosing different $B_F$ values.} We set the $\alpha$ and $\beta$ values to 7 and 1 for all datasets.
The results displayed in Figure \ref{fig:main_results} indicate that the scores are distinguishable enough to enable detection by the client. Scores obtained against an honest server are very close or equal to 1, while those obtained against a FSHA server do not surpass $0.8$ and fluctuate more strongly. Higher $B_F$ values are more effective; for example, it takes slightly more time for the scores to settle around 1 for Fashion-MNIST with a $B_F$ of $4/64$ compared to a $B_F$ of 1.
\subsubsection{Decision Policies}
We now report the detection statistics for the three example policies described earlier, with a threshold value of 0.9.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{figures/fsha_results.png}
\caption{\textbf{Results obtained by a FSHA attacker} for the MNIST, F-MNIST, CIFAR10, and CIFAR100 datasets with respect to the detection times (as shown in Table \ref{tab:detection_results} for SG-LC, and SG-AD) of different methods. The first row displays the original images, and the last row displays the results obtained by a FSHA attacker able to run for an arbitrary duration without being detected.}
\label{fig:fsha_results}
\end{figure*}
Table \ref{tab:detection_results} displays the detection statistics for each of these strategies obtained over 100 runs of the first epoch of training against a FSHA attacker and an honest server with a $B_F$ of 1 and $P_F$ of 0.1. For the \textit{Avg-k} policy, we use $k$ values of 10 and 20; this ensures that the policy can run within the first training epoch.\footnote{With a batch size of 64, one epoch is equal to 938 batches for MNIST and F-MNIST, and 782 for CIFAR10/100.} For the \textit{Voting} policy, we set the group size to 5. Finally, we set $N$, the index at which label-changing starts running, as 20 for MNIST and F-MNIST, 50 for CIFAR10, and 100 for CIFAR100.\footnote{The models initially behave randomly. We want to exclude those periods from SG-LC.}
Most significantly, all the strategies achieve a perfect true positive rate (i.e. successfully detect all runs of FSHA). Expectedly, the \textit{Fast} strategy achieves the fastest detection times as denoted by the $t$ values in Table \ref{tab:detection_results}, detecting all instances of the attack within at most a hundred training batches.
False positive rates increase as the model's accuracy decreases, moving from MNIST to F-MNIST and then to CIFAR10/100. This means that more training time should be taken to achieve higher success rates in more complex tasks. However, as we will observe in Section \ref{sec:fsha_result}, the model not having a high performance also implies that FSHA will be less effective. Nevertheless, the \textit{Voting} policy achieves a false positive rate of 0 for (F-)MNIST, 0.02 for CIFAR10, and 0.11 for CIFAR100, indicating that despite the relatively high false positive rates of the \textit{Fast} and \textit{Avg-$k$} policies, better detection performance in less time is achievable through more sophisticated policies, such as the \textit{Voting} policy.
\subsection{Detecting FSHA with SG-AD}
We test the LOF algorithm's performance when used with different shares of client training data and different decision-window sizes using the MNIST \cite{lecun2010mnist}, F-MNIST \cite{xiao2017/online}, and CIFAR10/100 \cite{Krizhevsky09learningmultiple} datasets. As stated earlier, we set the number of neighbors to the highest possible value (i.e. one less than the number of training points).
For each configuration, we measure the true and false positive rates by testing the method on 100 randomly-initialized honest and FSHA runs each; a positive (attack detected) decision in a FSHA run is a true positive, and in an honest run a false positive. To make sure we can detect the attack early enough, we start the algorithm after the required number of gradients for the decision window has been obtained.
Table \ref{tab:detection_results} displays the detection results for the best possible setup (window size 10 and LOF train data rate of $1\%$), with the more comprehensive results covering a wider range of hyperparameters shown in the Supplementary Material.
On the simpler MNIST and F-MNIST datasets, our method achieves a TPR of 1 and FPR of 0, meaning it can detect whether there is an attack with perfect accuracy. For the more complex CIFAR datasets, $2\%$ of honest runs are classified as malicious, but that also reduces to zero as more data is allocated for LOF (Supplementary Material). We should also note that since AD requires fewer gradients to start running (e.g. compared to LC-Voting), it detects the attack considerably faster than LC.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{figures/MT_RESULTS.png}
\caption{\textbf{Results the FSHA attacker obtains when it performs multitask learning until detection} for the MNIST, F-MNIST, CIFAR10, and CIFAR100 datasets, with 5 images randomly selected from each dataset, until the detection time for the Voting policy for SG-LC and the times shown in table \ref{tab:mt_detection_results} for SG-AD. The bottom row displays the original inputs.}
\label{fig:mt_fsha_results}
\end{figure*}
\subsection{What Does the Attacker Learn Until Detection?} \label{sec:fsha_result}
We now analyze what an FSHA adversary can learn until the detection batch indices displayed in Table \ref{tab:detection_results} for SG-LC, and SG-AD. Figure \ref{fig:fsha_results} displays the results the FSHA attacker obtains until it is allowed to run for the respective times.
If stopped very early within the first epoch (e.g. with the \textit{Fast} policy), the attacker obtains not much more than random noise. Furthermore, the attack requires more time to obtain high-fidelity results as the dataset gets more complex. On CIFAR100, even if the attack is detected in the second half of the epoch (\textit{Voting}), it is difficult to distinguish the attack outputs from each other, let alone compare their similarity to the original inputs. Most importantly, while SG-AD's FPR of 0 is comparable to that of the \textit{Voting} policy, SG-AD detects the attack earlier, meaning the server learns less information than it does under the \textit{Voting} policy.
Finally, the CIFAR10/100 results also show that the attacker having more time for a more complex task is tolerable because after the same number of batches, the attacker's results for MNIST and Fashion-MNIST are more accurate compared to the CIFAR10/100 results.
\subsection{Against a Multitasking FSHA Server}
As a response to the preceding discussion, the question might arise of the server somehow including the label values in the attack in an attempt to subvert the detection process.\footnote{This only concerns the shared-label SplitNN setup (Figure \ref{fig:splitnn_label_sharing}) since in the private-label scenario the server does not have access to the label values input to the classification loss.} A reasonable way of doing this is to have the client model optimize both the FSHA loss and the classification loss, e.g. by computing their weighted average, with an \textit{attack weight} of 1 meaning plain FSHA and 0 meaning no attack.
Table \ref{tab:mt_detection_results} displays the true positive rates for both our methods against a multitasking server. SG-LC again detects all instances of the attack; SG-AD, while maintaining a high TPR of over $80\%$ for our recommended setup of a $1\%$ AD data rate and a window size of 10, fails to detect some instances of the attack.
\begin{table}[t!]
\centering
\caption{\textbf{Detection results against a server launching FSHA while performing multitask learning} with attack weight 0.5 on the ResNet model for the MNIST and CIFAR datasets, averaged over 100 independent runs for both methods. False positives correspond to the honest server scenario with the results shown earlier in Table \ref{tab:detection_results}.
}
\label{tab:mt_detection_results}
\begin{subtable}{0.49\textwidth}
\setlength{\tabcolsep}{5pt}
\begin{tabular}{l|cc|cc|cc|cc}
\toprule
\textbf{Method} & \multicolumn{2}{c}{\textbf{MNIST}} & \multicolumn{2}{|c|}{\textbf{F-MNIST}} & \multicolumn{2}{c|}{\textbf{CIFAR10}} & \multicolumn{2}{c}{\textbf{CIFAR100}} \\
& TPR & $t$ & TPR & $t$ & TPR & $t$ & TPR & $t$ \\ \midrule
Fast & 1.0 & 0.015 & 1.00 & 0.028 & 1.00 & 0.030 & 1.0 & 0.140 \\
Avg-10 & 1.0 & 0.110 & 1.00 & 0.110 & 1.00 & 0.140 & 1.0 & 0.140 \\
Avg-20 & 1.0 & 0.220 & 1.00 & 0.220 & 1.00 & 0.260 & 1.0 & 0.260 \\
Voting & 1.0 & 0.320 & 1.00 & 0.310 & 1.00 & 0.470 & 1.0 & 0.400 \\
\midrule
AD & 1.0 & 0.012 & 0.98 & 0.029 & 0.98 & 0.026 & 1.0 & 0.064 \\
\bottomrule
\end{tabular}
\end{subtable}
\vspace{0.2cm}
\end{table}
Finally, Figure \ref{fig:mt_fsha_results} displays the results obtained by a FSHA server performing multitask learning with varying attack weights. A very low attack weight such as 0.01 produces random-looking results, while the distinction between a full- and half-attack result is less clear when compared to the results in Figure \ref{fig:fsha_results}. However, it is not clear whether this is an inherent property of the attack, or is specific to the dataset or client/server model architectures.
\section{Comparing SG-AD and SG-LC}
\subsection{SG-LC Requires Less Time/Space}
Figure \ref{fig:nn_training_time} compares the time overhead of SG-AD and SG-LC as the share of normal training time incurred as an overhead, averaged over 5 runs across our test datasets.
Since SG-AD involves training the whole model on part of client data, SG-LC's overhead is lower, less than $10\%$ of the SplitNN training time for all datasets. Furthermore, while reducing the share of client data used to train LOF decreases SG-AD's overhead, the reduction is not directly proportional to the share of data; e.g. using $1\%$ of client data still results in an additional cost of $\sim55\%$ for CIFAR100 and $\sim35\%$ for other datasets.
Space-wise, for SG-LC, for each of the sets $F$, $R_1$, $R_2$, the clients should store two values: the sum of all vectors in the set, and the average magnitude of the vectors in the set. The former has the dimensions of a single gradient vector, and the latter is a scalar. Since both of these can be maintained in a running manner, the space requirement is independent of the time for which SG-LC is run. To run the LOF algorithm, on the other hand, the training examples for LOF should be stored for further computation later on. This space requirement grows linearly with the share of client data used in training, as opposed to the constant space use of the label-changing method.
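A small sketch of such a constant-space summary (our own illustration, not part of SplitGuard's published code) is given below; $d$ and $\theta$ can then be evaluated from the stored vector sums and mean magnitudes.
\begin{verbatim}
class RunningGradientSet:
    """Constant-space summary of a gradient set: running sum and running mean magnitude."""
    def __init__(self, dim):
        self.vector_sum = np.zeros(dim)   # used for the angle computation
        self.mean_norm = 0.0              # used for the magnitude computation
        self.count = 0

    def add(self, g):
        g = g.ravel()
        self.vector_sum += g
        self.count += 1
        self.mean_norm += (np.linalg.norm(g) - self.mean_norm) / self.count
\end{verbatim}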
\begin{figure}[t!]
\centering
\includegraphics[width=1.05\columnwidth]{figures/NN_overhead_percent100_wrt_data_rate.png}
\caption{\textbf{Time overhead comparison} between SG-LC and SG-AD, with respect to the varying percentage of client data used to train LOF, averaged over 5 runs. The y-axis displays the additional time required as a share of the non-SG training time. Solid lines correspond to SG-AD and dashed lines correspond to SG-LC.}
\label{fig:nn_training_time}
\end{figure}
SG-AD inevitably incurs a higher space and time cost on the clients, albeit with stronger performance as discussed above. Collecting data for LOF requires simulating an epoch from scratch, and while smaller shares of training data used for this step reduce the overall cost, the reduction is not directly proportional (i.e. using $1\%$ of the data does not make training 100x faster); thus the cost of preparing SG-AD is higher than the overhead incurred by SG-LC during training.
This, combined with the better accuracy of SG-AD, makes it a better choice for highly sensitive setups in which clients can tolerate the additional costs.
\subsection{Both Achieve High TPR; SG-AD Obtains Lower FPR}
Overall, while both methods can achieve a perfect true-positive rate (i.e. detect all instances of the attack) against a server only launching the attack, SG-AD achieves a lower false-positive rate especially for the more complex CIFAR tasks (with window size 10, it achieves an FPR of 0 compared to the minimum 0.02 of SG-LC). This is not surprising, since SG-LC's decisions rely on the difference between fake and regular gradients, and SG-AD's decisions rely on the gradients' similarity to the ground-truth honest gradients obtained during the simulation step. The latter can be said to be a more direct comparison between the actual quantities of interest, while SG-LC compares the outcome of an intervention with the expected outcome.
\subsection{SG-AD is Simpler to Use, Requires Fewer Hyperparameters}
While SG-AD has a higher cost as discussed above, it is considerably simpler to use. SG-LC involves setting a higher number of parameters, and the process of interpreting the decision is more complex than treating the LOF algorithm as a black-box.
Regarding the choice of hyperparameters for SG-AD, we recommend setting the AD train data rate as low as possible; $1\%$ was enough to reach perfect accuracy in our test scenarios. For the window size, relatively small values such as 10 were enough to reduce the noise in individual decisions and achieve perfect accuracy.
There is a considerably larger number of parameters involved in SG-LC, as can be seen in the first step of the outline in Algorithm \ref{alg:endtoend}, adding up to a total of six parameters to choose. While we gave a detailed treatment of choosing and later altering their values (based on the expected/observed behavior of the SG-LC scores) in our earlier work \cite{splitguard}, we recommend the values we used in our experiments as reasonable starting points, that is, small $\alpha$ and $\beta$ values such as 7 and 1, $P_F$ and $B_F$ of 0.1 and 1, and a decision threshold of 0.9. However, we should note (as detailed further in our earlier work) that these parameters also reflect the setup SG-LC is used in, e.g. depending on the relative importance of false positives and false negatives.
\subsection{Attacker Might Detect SG-LC, but not SG-AD} \label{detection}
SG-AD cannot be detected by the server since the client does not diverge from the SplitNN protocol or manipulate its data in any way. However, SG-LC can potentially be detected by the attacker, especially if the labels are shared too. Then, the attacker can try to circumvent detection by training a legitimate surrogate model for the original task.
If the server controls the model's output (Figure \ref{fig:splitnn_label_sharing}), then it can detect if the loss for a batch is significantly higher than for the other ones. Since SG-LC is a potential, albeit not the only, explanation of such behavior, this presents an opportunity for an attacker to detect it. However, the model behaving significantly differently for fake and regular batches also implies that the model is at a stage at which SG-LC is effective. This leads to an interesting scenario: since the attack's and SG-LC's effectiveness both depend on the model learning enough, the attack cannot be detected without the attacker detecting SG-LC and vice versa.\footnote{Our empirical claim however was that the discrepancy between fake and regular gradients precedes the FSHA server being able to extract useful information.}
We argue heuristically that this is not the case, due to the clients being in charge of setting the $B_F$ value. For example, with the MNIST dataset, for which the model obtains a classification accuracy around $98\%$ after the first epoch of training, a $B_F$ value of $4/64$ results in an expected classification accuracy of $91.8\%$ for fake batches (roughly $(1-B_F)\cdot 98\%$, since only the unchanged labels can still be predicted correctly). The SG-LC scores displayed in Figure \ref{fig:main_results} being very close to one implies that an attack can be detected with such a $B_F$ value. Thus, clients can make it difficult for an attacker to detect SG-LC by setting the $B_F$ value more smartly, rather than setting it blindly to 1.
Finally, we strongly recommend that a secure SplitNN setup follow the three-part setup shown in Figure \ref{fig:splitnn_private_labels} to prevent the clients sharing their labels with the server. This way, an attacker would not be able to observe the accuracy of the model, and it would become harder for it to detect SG-LC.
\begin{algorithm}[h!]
\begin{enumerate}
\item Choose parameters $\alpha, \beta, B_F, P_F, N, T$; simulating different server behaviors if possible.
\item Choose a decision policy based on user goals \\(Section \ref{detecting_fsha}).
\item Start training and evaluate scores after each fake batch (Algorithm \ref{alg:main}).
\item Stop training if the server is likely attacking.
\end{enumerate}
\caption{SG-LC End-to-End Outline}
\label{alg:endtoend}
\end{algorithm}
\section{Conclusion and Future Work}
We presented two methods for split learning clients to detect if they are being targeted by a training-hijacking attack \cite{pasquini_unleashing_2021}. When used appropriately (e.g. combined with other defensive tools such as differential privacy), both of our methods demonstrated perfect accuracy in detecting the attack, and reasonable (very low in the case of SG-LC, and zero in the case of SG-AD) false positive rates when there is no attack.
As we have explained in Section \ref{detection}, SG-LC can potentially be detected by the attacker, who can then start sending fake gradients from its legitimate surrogate model and regular gradients from its malicious model. This could again cause a significant difference between the fake and regular gradients, and result in a high SG-LC score. However, a potential weakness of this approach by the attacker is that now the fake gradients result from two different models with different objectives. Clients might also be able to detect this switch in models. This is another point for which future improvement might be possible.
Another line of future work might involve passive detection of training-hijacking attacks without knowledge of the server-side model architecture. SG-AD requires clients to know the architecture of the entire model so that they can train it with some of their data to obtain training data for LOF. SG-LC does not require such knowledge, and thus can be used in scenarios in which clients do not know the server-side model architecture. A passive method without this architecture assumption would combine the best of both worlds.
Finally, the space of training-hijacking attacks as of this writing is very limited, making it difficult to assess our work's effectiveness against a general class of attacks (e.g. the labels-not-included-in-attack-objective assumption may not always hold as discussed above). As more effort is put into this area (currently covered by \cite{pasquini_unleashing_2021, gawron2022feature} and our previous work \cite{splitguard}), it might be possible to develop more sophisticated training-hijacking attacks resistant to label-changing or output gradients less distinguishable from honest gradients.
\bibliographystyle{ieeetr}
|
{
"arxiv_id": "2302.08674",
"language": "en",
"timestamp": "2023-02-20T02:06:17",
"url": "https://arxiv.org/abs/2302.08674",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
Face recognition (FR) techniques offer a simple yet convenient way for identity authentication applications, such as mobile access control and electronic payments. Though face biometric systems are widely used, with the emergence of various presentation attacks, critical concerns about the security risks of face recognition systems are increasing. An unprotected face recognition system might be fooled by merely presenting artifacts like a photograph or video in front of the camera. Therefore, the need to strengthen face recognition systems against a variety of presentation attacks promotes the techniques of face anti-spoofing (FAS).
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{Flops.png}
\caption{The parameters and FLOPs of different models. Dark circles indicate the number of parameters of the models; light circles indicate the FLOPs of the models. ViTAF uses ViT-Base as its backbone; SSAN-M uses MADDFG; SSAN-R uses Resnet-18; and our method uses ViT-Tiny.}
\label{fig:glops}
\end{figure}
As an important research topic, a series of face anti-spoofing (FAS) methods have been proposed, ranging from hand-crafted feature based to deep representation based methods. Although previous FAS methods have achieved promising performance in intra-domain scenarios, they may suffer dramatic degradation when encountering unseen domains. To alleviate this issue, researchers have proposed various approaches ~\cite{DBLP:conf/cvpr/Wang0SC20,DBLP:conf/aaai/QinZZWYFZSL20,DBLP:conf/aaai/ShaoLY20,Wang_2022_CVPRSSAN,liu2021dual,jia2020single,chen2021generalizable} to improve the generalizability of FAS in cross-domain settings.
However, these methods are almost all based on common convolutional neural networks (CNNs), which fail to take advantage of the subtle properties of global and local image statistics in FAS~\cite{Wang_2022_CVPRSSAN}.
Recently, the modern architecture Vision Transformer (ViT)~\cite{dosovitskiy2020image} has received increasing attention in the vision community. With their in-built local patchifying and global self-attention mechanisms, ViTs may be potentially better-suited to FAS over their CNN counterparts.
Most recently, CNNs equipped with attention modules~\cite{DBLP:journals/corr/abs-2203-01562} and carefully designed ViT variants~\cite{DBLP:journals/corr/abs-2203-12175,DBLP:conf/ijcai/LiuL22,DBLP:conf/bmvc/HuangHCKLS21,DBLP:conf/icb/GeorgeM21} have been introduced into FAS and have obtained promising performance.
However, whether a vanilla ViT without extra training samples from upstream tasks can achieve competitive cross-domain generalization has not been explored thus far.
Despite the advantages, directly training a ViT, like a CNN, as a binary classification model is prone to overfitting to the biases introduced by data coming from different domains. The most discriminative spoofing cues can dramatically change or even disappear across various domains, making the learned features vulnerable in unseen scenarios. To alleviate this problem, we propose to use the recently emerged Masked Image Modeling (MIM) to learn facial representations for FAS. MIM is mostly built upon the Vision Transformer (ViT), and suggests that self-supervised visual representation learning can be done by masking input image parts while requiring the target model to recover the missing contents. With the masked-reconstruct mechanism, MIM pretraining forces the ViT to maximize the mutual information between local cues and global features, thus constraining the model from easily overfitting to local discriminative spoofing bias.
However, it is non-trivial to apply MIM pre-training methods like MAE~\cite{MaskedAutoencoders2021} to FAS. Previous works~\cite{dosovitskiy2020image,MaskedAutoencoders2021} have demonstrated that the performance of ViT relies heavily on pre-training on large-scale datasets like JFT-300M (300 million images)~\cite{DBLP:conf/iccv/SunSSG17} and ImageNet21K (14 million images)~\cite{DBLP:conf/cvpr/DengDSLL009}. There is little work studying how to pre-train ViTs on small datasets, especially with MIM-based pre-training. To fill this gap, in this paper we take MAE as an example and investigate how this pre-training method performs on small FAS datasets. Specifically, some intriguing findings reveal that MIM pre-training on small data has distinctly different design principles than pre-training on large-scale natural images.
It is worth noting that, although MIM pretraining can prevent the model from over-fitting to local discriminative spoofing bias, as the reconstruction quality continues to improve during pretraining, the network also inevitably learns domain-specific global features like color distortion, illumination, and image resolution. These redundant features can significantly reduce the cross-domain generalizability of the pre-trained model in downstream FAS tasks. To mitigate this situation and make it easier to transfer pre-trained features to downstream FAS tasks, we propose to incorporate contrastive learning into the MIM pre-training framework to capture more domain-invariant liveness information and suppress domain-specific information. Following the above design principles, we propose a Masked Contrastive Autoencoder training framework for face anti-spoofing, called MCAE. As shown in Figure \ref{fig:glops}, our work demonstrates that even with a lightweight vanilla ViT, superior cross-domain generalizability can be achieved in FAS tasks without using extra training data. We also provide an insightful analysis of why the proposed MCAE outperforms the related methods. The main contributions of this work are:
\begin{itemize}
\item We introduce the modern architecture Vision Transformer into face anti-spoofing and propose a simple yet effective framework for robust cross-domain FAS.
\item We conduct a systematic study on how to apply MIM pre-training on small datasets, taking the FAS task as an example. By incorporating contrastive learning into the MIM pre-training, we propose a Masked Contrastive Autoencoder training framework for face anti-spoofing, called MCAE.
We also summarize empirical design principles for performing MIM pre-training for downstream cross-domain FAS, which are significantly different from the known experience with large-scale natural image pre-training.
\item Extensive experiments on the widely-used benchmark datasets demonstrate the superior cross-domain generalizability of our proposed MCAE in FAS tasks even with a lightweight vanilla ViT.
\end{itemize}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{figure1.pdf}
\caption{Overview of our MCAE method. We randomly choose some tokens and send them to the encoder. The output of the encoder is used to perform the reconstruction task together with the masked tokens, and to obtain the aggregate token. The blue squares represent features of live samples and the pink squares represent features of spoof samples. The ViT blocks share the same weights.
}
\label{figmain}
\end{figure*}
\section{Related Work}
\subsection{Face anti-spoofing (FAS).} Traditional methods often rely on hand-crafted features such as LBP~\cite{freitas2012lbp}, HOG~\cite{komulainen2013context} and SIFT~\cite{patel2016secure}. Recently, CNNs and the Vision Transformer~\cite{dosovitskiy2020image} have been used as backbones to extract features that distinguish spoof samples from live samples. In order to obtain a model that performs well on cross-domain datasets, domain adaptation (DA) and domain generalization (DG) methods are widely used in FAS. In those methods, the algorithms are designed to pull the live samples close in feature space. PatchNet~\cite{Wang_2022_CVPR} combines information about the capturing device and presenting materials to improve the generality of the model. SSAN~\cite{Wang_2022_CVPRSSAN} is designed to split the representation into content and style parts with different supervision. The ViTAF method~\cite{huang2022adaptive} uses adaptive transformers as its backbone and performs well on cross-domain tasks, but it requires large additional datasets for supervised pre-training. How to obtain a model with good generalization performance using limited datasets remains an open problem.
\subsection{Masked Image Model (MIM).} Recently, MIM has become popular in self-supervised vision tasks thanks to the introduction of the ViT model. Masked prediction was inspired by the success of Masked Language Models (MLM) like BERT~\cite{devlin2018bert}. Training such models requires two stages, namely pre-training and fine-tuning. During the pre-training stage, MIM masks some patches of the original image and predicts the features of those masked patches, such as pixel features~\cite{MaskedAutoencoders2021}, discrete tokens~\cite{bao2021beit} and HOG features~\cite{wei2022masked}. Then, in the fine-tuning stage, the pre-trained model is used as the feature extractor for downstream tasks. Because the model has learned a good representation of the relevant images, the features it extracts can be very useful for downstream tasks. It should be noted, however, that the great success of MIM is based on large datasets for pre-training.
\subsection{Contrastive learning.} Contrastive learning methods in self-supervised learning are based on pretext tasks. The key idea of contrastive learning is to pull positive samples close while pushing negative samples apart in feature space. There are different ways to construct positive and negative samples. For example, SimCLR~\cite{chen2020simple} considers the same image under different data augmentations as positive samples, and all other images as negative samples. BYOL~\cite{grill2020bootstrap} does not even need to construct negative samples anymore; it uses an online encoder to predict the output of a momentum encoder, similar in spirit to MoCo~\cite{he2020momentum}.
The above methods are self-supervised and no label information is needed. In order to exploit the similarity between samples with the same label, SupContrast~\cite{khosla2020supervised} and WACL~\cite{chen2022representation} use all the samples with the same label to compute the contrastive loss during the pre-training stage. The method proposed by ~\cite{gunel2020supervised} combines the contrastive loss with the classification loss in the fine-tuning stage. However, none of the above methods combine the contrastive task with the reconstruction task during the pre-training stage.
\section{Proposed Method}
\subsection{Overview}
An overview of our method is shown in Figure \ref{figmain}. Our method consists of two stages: a pre-training stage and a fine-tuning stage. We describe the pre-training stage in Algorithm \ref{alg:algorithm}. In the fine-tuning stage, we only keep the encoder to extract features for FAS tasks, while discarding the decoder.
\subsection{Masked Autoencoders}
MIM is a simple but effective way to train a general feature extractor for different vision tasks such as classification, object detection and semantic segmentation.
In the pre-training stage, we use a ViT as our encoder. First of all, we divide the input image into non-overlapping patches and then project each of them into tokens $T_{i}$, so that we obtain a token sequence $\{T_{i}\}$ where $i=1,\cdot \cdot \cdot,n $ and $n$ is the sequence length. Then we randomly choose a subset of the token sequence $\{T_{v_{i}}\}$ to keep, and the other tokens are masked and denoted as $\{T_{m_{i}}\}$. The $\{T_{v_{i}}\}$ are fed into the encoder $G_{\theta}$ to obtain a latent representation of the original image. We use an aggregation of the output tokens to represent the image's features, denoted as $T_{agg}$. The decoder $D_{\theta}$ combines $\{T_{v_{i}}\}$ with $\{ T_{m_{i}}\}$ to reconstruct the original input image. The indicator function $\mathds{1}_{mask}(i)$, which indicates whether token $T_{i}$ is masked, is defined as Equation (\ref{eq:indicate}):
\begin{equation} \label{eq:indicate}
\mathds{1}_{mask}(i)=\begin{cases}
1, &\quad i\in T_{m} \\
0, &\quad i \notin T_{m}
\end{cases}.
\end{equation}
The loss function $\mathcal{L}_{rec}$ is defined as Equation (\ref{eq:mse}):
\begin{equation} \label{eq:mse}
\mathcal{L}_{rec}=\frac{1}{n} \sum_{i=1}^{n}\left\|D_{\theta}(G_{\theta}\left(T_{v}\right),T_{m})-T_{i}\right\|_{2}^{2} \mathds{1}_{mask}(i).
\end{equation}
While constructing MIM pre-training for small FAS datasets, we find some intriguing design principles which differ from previous experience~\cite{MaskedAutoencoders2021} with MIM pre-training on large-scale natural datasets. Specifically, unlike the lightweight decoder used in MAE~\cite{MaskedAutoencoders2021} for large-scale natural-image pre-training, the decoder size matters for the representation in FAS tasks. We also find that, compared to natural images, masking a higher proportion of the input image (85\%) yields a better self-supervisory task for FAS data. Moreover, introducing some high-level semantic information can help the self-supervised model better benefit the downstream task with small datasets. In the following, we present a practice of introducing supervised contrastive learning into MIM pre-training to capture more domain-invariant and task-related information for FAS tasks. We hope that these design principles can inspire researchers to design better MIM pre-training strategies for a variety of downstream tasks, especially on small datasets.
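For concreteness, a minimal PyTorch-style sketch of the random masking and the masked-token reconstruction loss is given below. It follows the common MAE recipe and is not necessarily identical to our implementation; the tensor layout (batch, tokens, embedding) is an assumption.
\begin{verbatim}
import torch

def random_masking(tokens, mask_ratio=0.85):
    """Randomly keep (1 - mask_ratio) of the patch tokens; return kept tokens and a mask."""
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                     # random score per token
    ids_shuffle = noise.argsort(dim=1)           # ascending: the first n_keep are kept
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N)
    mask.scatter_(1, ids_keep, 0.0)              # 0 = visible, 1 = masked
    return visible, mask, ids_keep

def masked_reconstruction_loss(pred, target, mask):
    """MSE computed only on masked tokens, matching L_rec above."""
    loss = ((pred - target) ** 2).mean(dim=-1)   # per-token MSE
    return (loss * mask).sum() / mask.sum()
\end{verbatim}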
\subsection{Information Theoretic Analysis}
In this subsection, we give an information theoretic analysis of our method and explain why it is useful to combine the reconstruction task with the contrastive task. Intuitively, we want to reconstruct all of the $T_{i}$ of the original image, which is equivalent to maximizing the mutual information between $G_{\theta}(T_{v})$ and $T_{i}$. Therefore, we examine the connection between the reconstruction task and the contrastive module.
Based on the definition of mutual information~\cite{DBLP:books/daglib/0016881}, the mutual information between $G_{\theta}(T_{v})$ and $T_{i}$ is given in Equation (\ref{eq:mi}):
\begin{equation}\label{eq:mi}
\begin{split}
\mathcal{I}\left(T_{i};G_{\theta}(T_{v})\right)=&H\left(T_{i}\right)-H\left(T_{i} \mid G_{\theta}(T_{v})\right)\\=&H(G_{\theta}(T_{v}))-H\left(G_{\theta}(T_{v}) \mid T_{i}\right).
\end{split}
\end{equation}
By the definition of conditional entropy, we have:
$$
H\left(T_{i} \mid G_{\theta}(T_{v})\right)=\mathbb{E}_{P_{T_{i}, G_{\theta}(T_{v})}}\left[-\log P\left(T_{i} \mid G_{\theta}(T_{v})\right)\right].
$$
In practice, however, it is very difficult to obtain the distribution $P\left(T_{i} \mid G_{\theta}\left(T_{v}\right)\right)$ directly. A common way to deal with this is to replace it with another distribution $Q(x)$ and maximize the resulting variational lower bound, whose gap is the KL divergence between the two distributions~\cite{agakov2004algorithm}:
\begin{equation}
\begin{aligned}
&\mathcal{I}\left(G_{\theta}\left(T_{v}\right);T_{i}\right) \\
&=H\left(T_{i}\right)-H\left(T_{i} \mid G_{\theta}\left(T_{v}\right)\right) \\
&=H\left(T_{i}\right)+\mathbb{E}_{P_{T_{i}, G_{\theta}\left(T_{v}\right)}}\left[\log P\left(T_{i} \mid G_{\theta}\left(T_{v}\right)\right)\right] \\
&=H\left(T_{i}\right)+\mathbb{E}_{P_{T_{i}, G_{\theta}\left(T_{v}\right)}}\left[\log Q\left(T_{i} \mid G_{\theta}\left(T_{v}\right)\right)\right] \\
& \;+\underbrace{D_{K L}\left(P\left(T_{i} \mid G_{\theta}\left(T_{v}\right)\right) \| Q\left(T_{i} \mid G_{\theta}\left(T_{v}\right)\right)\right)}_{\geq 0} \\
& \geq \mathbb{E}_{P_{T_{i}, G_{\theta}\left(T_{v}\right)}}\left[\log Q\left(T_{i} \mid G_{\theta}\left(T_{v}\right)\right)\right].
\end{aligned}
\end{equation}
The distribution $Q(x)$ can be chosen arbitrarily, so we can use a Gaussian distribution with diagonal covariance $\sigma I$ as $Q(x)$~\cite{ly2022student}, i.e. $Q(x)\sim\mathcal{N}\left(T_{i} \mid G_{\theta}\left(T_{v}\right), \sigma I\right)$. Therefore, the maximization problem can be converted to the minimization problem in Equation (\ref{eq:min}):
\begin{equation}\label{eq:min}
\min \mathbb{E}_{P_{T_{i}, T_{v}}}[\left\| D_{\theta}\left(G_{\theta}(T_{v}),T_{m}\right)-T_{i}\right\|_{2}^{2}].
\end{equation}
In the reconstruction task, we want to minimize Equation (\ref{eq:mse}). In fact, Equation (\ref{eq:min}) is very similar to Equation (\ref{eq:mse}); the only difference between them is whether the loss is also computed on unmasked tokens.
Next, we prove that $T_{agg}$ can be used as the representation of the original image. Based on the assumption proposed by ~\cite{DBLP:conf/colt/SridharanK08}, we know that the reconstructed image and the input image are both redundant with respect to the task-relevant information, i.e. there exists an $\epsilon$ s.t. $\mathcal{I}(T_{i};T_{agg}|D_{\theta}(G_{\theta}(T_{v_{i}}),T_{m})) \leq \epsilon$.
\noindent \textbf{Theorem 1}.
The self-supervised learned aggregate token contains all the task-relevant information~\cite{DBLP:conf/iclr/Tsai0SM21} in the token sequence $\{T_{i}\}$ where $i=1,\cdot \cdot \cdot,n $ with a potential loss $\epsilon$.
\begin{equation*}
\mathcal{I}(T_{i};G_{\theta}(T_{v_{i}})) - \epsilon \leq \mathcal{I}(T_{agg};G_{\theta}(T_{v_{i}})) \leq \mathcal{I}(T_{i};G_{\theta}(T_{v_{i}})).
\end{equation*}
The proofs are provided in the appendix. By Theorem 1, we obtain the result that $T_{agg}$ can be used as the original image's task-relevant representation. The ideal situation is that $T_{agg}$ is a sufficient statistic for estimating $\{T_{i}\}$. In that case, $G_{\theta}(T_{v})$, $T_{agg}$ and $\{T_{i}\}$ form a Markov chain $\{T_{i}\} \leftrightarrow T_{agg} \leftrightarrow G_{\theta}(T_{v})$, and $T_{agg}$ can represent $\{T_{i}\}$ without any information loss.
Contrastive learning focuses on learning common features between instances of the same class and distinguishing differences between instances of different classes. Meanwhile, when we already have an encoder that can extract input features well, the next step is to find the common information of the samples from different domains with the same label. Since the reconstruction task can promote the model to learn a good task-relevant representation of the input image, we can add the contrastive module to maximize the mutual information between positive samples.
\subsection{Supervised Contrastive Module}
Different face image datasets are collected by different capture devices in different scenes and have different resolutions. Thus we want to find the common information about live samples without any additional features caused by domain differences. However, common contrastive learning methods rely on an extensive dictionary, which is difficult to build on small datasets. Meanwhile, face anti-spoofing datasets differ from natural image datasets: since the original face data are videos, there is high similarity between adjacent frames. Therefore, treating images from adjacent frames as negative samples is unreasonable.
According to the previous analysis, we already have an encoder that extracts task-relevant features of the input image well, so we can aggregate all tokens directly as the representation of the input image. The contrastive module aims to pull the live samples together; a common method used in previous work is the triplet loss. However, the triplet loss not only narrows the distance between live samples but also the distance between spoof samples. We consider it unreasonable to narrow the distance between spoof samples, because spoof face images come in many forms and do not have consistent features. Since our aim is to find domain-invariant features among live samples, we use the domain label and the liveness label as our supervision.
Therefore, we design our supervised contrastive loss based on previous work~\cite{gunel2020supervised,chen2022representation,khosla2020supervised}, and specially design weighting factors for different samples for the face anti-spoofing task. We have two types of positive pairs: pairs from the same domain, and pairs from different domains. We give a larger weight to positive pairs from different domains. The supervised contrastive loss is defined in Equation (\ref{eq:conloss}):
\begin{equation} \label{eq:conloss}
\begin{split}
L_{con} = &-\mathbb{E}[\sum_{j=1}^{N}\mathds{1}_{i \neq j}(1-\mathds{1}_{y_{i} \neq y_{j}} )\\ &\log \frac{\lambda_{l} \exp \left(s_{i, j} / \tau\right)}{\lambda_{l} \exp \left(s_{i, j} / \tau\right)+\sum_{k=1}^{N} \mathds{1}_{y_{i} \neq y_{k}} \exp \left(s_{i, k} / \tau\right)}].
\end{split}
\end{equation}
In the above Equation (\ref{eq:conloss}), $N$ is the mini-batch size; $y_{i}$ and $y_{j}$ mean the label of sample $i$ and sample $j$; $\tau$ is the temperature parameter; $\mathds{1}_{y_{i} \neq y_{j}}$ is the indicator function of whether sample $i$ and sample $j$ have the same label; $s_{i j}$ is the cosine similarity between sample $i$ and sample $j$; $\lambda_{l}$ are the weighting factors. Every sample in a mini-batch is used as an anchor once.
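A possible PyTorch sketch of this weighted supervised contrastive loss is shown below; the batch-wise pair construction and the default values of $\lambda_{l}$ and $\tau$ are illustrative assumptions rather than the exact settings used in our experiments.
\begin{verbatim}
import torch
import torch.nn.functional as F

def weighted_supcon_loss(features, labels, domains, lam=2.0, tau=0.1):
    """Supervised contrastive loss over aggregate tokens; cross-domain positive
    pairs are up-weighted by lam (a sketch of the loss defined above)."""
    z = F.normalize(features, dim=1)              # cosine similarity via normalized dot product
    sim = z @ z.t() / tau
    N = labels.size(0)
    eye = torch.eye(N, dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye   # same-label pairs
    neg = labels.unsqueeze(0) != labels.unsqueeze(1)            # different-label pairs
    cross_domain = (domains.unsqueeze(0) != domains.unsqueeze(1)).float()
    weights = 1.0 + (lam - 1.0) * cross_domain                  # lambda_l per pair
    loss, n_terms = 0.0, 0
    for i in range(N):                                          # each sample is an anchor once
        for j in torch.nonzero(pos[i]).flatten():
            num = weights[i, j] * torch.exp(sim[i, j])
            den = num + torch.exp(sim[i][neg[i]]).sum()
            loss = loss - torch.log(num / den)
            n_terms += 1
    return loss / max(n_terms, 1)
\end{verbatim}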
It is worth noting that we should add our supervised contrastive module only after the mutual information between $T_{agg}$ and $G_{\theta}(T_{v})$ tends to converge. Only in this case can we consider that the features of the original image are well extracted. If we add the module too early, the features used for the contrastive task are not yet meaningful; we illustrate this phenomenon in the next section.
\begin{algorithm}[tb]
\caption{MCAE Pre-training stage}
\label{alg:algorithm}
\textbf{Input}: Face images from different domains; a relatively small hyperparameter $\epsilon$; weighting factors $\lambda_{l}$\\
\textbf{Parameter}: Encoder $G_{\theta}$ and decoder $D_{\theta}$. \\
\textbf{Output}: Trained encoder $G_{\theta}$.
\begin{algorithmic}[1]
\STATE Choose a subset $T_{v}$ from all of the token $T_{i}$ as visible token. The rest of $T_{i}$ are masked
which are denoted as $T_{m}$. The mask ratio is 85\%.
\STATE Input visible token $T_{v}$ to encoder to get the feature representation $G_{\theta}(T_{v})$.
\FOR{i = 1,...,$N_{train}$}
\STATE Input $T_{v}$ and $T_{m}$ to decoder to get reconstruct image $D_{\theta}(G_{\theta}(T_{v}),T_{m})$.
\IF {$\mathcal{L}_{rec} > \epsilon$}
\STATE Compute reconstruct loss $\mathcal{L}_{rec}$.
\ELSE \STATE Aggregate feature token $T_{agg}$.
\STATE Compute reconstruct loss $\mathcal{L}_{rec}$ and use $T_{agg}$ to compute supervised contrastive loss $L_{c o n}$.
\ENDIF
\STATE Update parameters $\theta$ of encoder $G_{\theta}$ and decoder $D_{\theta}$.
\ENDFOR
\STATE Keep encoder $G_{\theta}$ and discard decoder $D_{\theta}$.
\STATE \textbf{return} Trained Encoder $G_{\theta}$
\end{algorithmic}
\end{algorithm}
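To summarize the schedule in Algorithm \ref{alg:algorithm}, the following sketch shows one pre-training step; \texttt{patchify}, the encoder/decoder interfaces, and the loss helpers from the earlier sketches are hypothetical placeholders supplied by the user, and the switching condition reflects the description above.
\begin{verbatim}
def mcae_pretrain_step(patchify, encoder, decoder, optimizer,
                       batch, labels, domains, eps):
    """One MCAE pre-training step (sketch): the supervised contrastive loss is
    only added once the reconstruction loss has dropped below eps."""
    tokens = patchify(batch)                               # hypothetical patch embedding
    visible, mask, ids_keep = random_masking(tokens, 0.85)
    latent = encoder(visible)
    pred = decoder(latent, ids_keep)
    loss = masked_reconstruction_loss(pred, tokens, mask)
    if loss.item() < eps:                                  # encoder features considered meaningful
        t_agg = latent.mean(dim=1)                         # aggregate token
        loss = loss + weighted_supcon_loss(t_agg, labels, domains)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
\end{verbatim}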
\begin{table*}[]
\centering
\scalebox{0.5}{
\begin{tabular}{c|cc|cc|cc|cc}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{O\&C\&I to M} & \multicolumn{2}{c|}{O\&M\&I to C} & \multicolumn{2}{c|}{O\&C\&M to I} & \multicolumn{2}{c}{I\&C\&M to O} \\ \cline{2-9}
& \multicolumn{1}{c|}{HTER(\%)} & AUC(\%) & \multicolumn{1}{c|}{HTER(\%)} & AUC(\%) & \multicolumn{1}{c|}{HTER(\%)} & AUC(\%) & \multicolumn{1}{c|}{HTER(\%)} & AUC(\%) \\ \hline
MADDG ~(\cite{shao2019multi}) & \multicolumn{1}{c|}{17.69} & 88.06 & \multicolumn{1}{c|}{24.50} & 84.51 & \multicolumn{1}{c|}{22.19} & 84.99 & \multicolumn{1}{c|}{27.98} & 80.02 \\
NAS-FAS ~(\cite{yu2020fas}) & \multicolumn{1}{c|}{16.85} & 90.42 & \multicolumn{1}{c|}{15.21} & 92.64 & \multicolumn{1}{c|}{11.63} & 96.98 & \multicolumn{1}{c|}{13.16} & 94.18 \\
$D^{2}$AM ~(\cite{chen2021generalizable} ) & \multicolumn{1}{c|}{12.70} & 95.66 & \multicolumn{1}{c|}{20.98} & 85.58 & \multicolumn{1}{c|}{15.43} & 91.22 & \multicolumn{1}{c|}{15.27} & 90.87 \\
SSDG-R ~(\cite{jia2020single}) & \multicolumn{1}{c|}{7.38} & 97.17 & \multicolumn{1}{c|}{10.44} & 95.94 & \multicolumn{1}{c|}{11.71} & 96.59 & \multicolumn{1}{c|}{15.61} & 91.54 \\
ANRL ~(\cite{liu2021adaptive}) & \multicolumn{1}{c|}{10.83} & 96.75 & \multicolumn{1}{c|}{17.83} & 89.26 & \multicolumn{1}{c|}{16.03} & 91.04 & \multicolumn{1}{c|}{15.67} & 91.90 \\
DRDG ~(\cite{liu2021dual} ) & \multicolumn{1}{c|}{12.43} & 95.81 & \multicolumn{1}{c|}{12.43} & 95.81 & \multicolumn{1}{c|}{19.05} & 88.79 & \multicolumn{1}{c|}{15.63} & 91.75 \\
SSAN-R ~(\cite{Wang_2022_CVPRSSAN} ) & \multicolumn{1}{c|}{6.67} & 98.75 & \multicolumn{1}{c|}{10.00} & 96.67 & \multicolumn{1}{c|}{8.88} & 96.79 & \multicolumn{1}{c|}{13.72} & 93.63 \\
PatchNet ~(\cite{Wang_2022_CVPR} ) & \multicolumn{1}{c|}{7.10} & 98.46 & \multicolumn{1}{c|}{11.33} & 94.58 & \multicolumn{1}{c|}{13.40} & 95.67 & \multicolumn{1}{c|}{11.82} & 95.07 \\
ViTAF${^\dag}$ ~(\cite{huang2022adaptive} ) & \multicolumn{1}{c|}{4.75} & 98.79 & \multicolumn{1}{c|}{15.70} & 92.76 & \multicolumn{1}{c|}{17.68} & 86.66 & \multicolumn{1}{c|}{16.46} & 90.37 \\ \hline
\textbf{MCAE (Ours)} & \multicolumn{1}{c|}{\textbf{3.81}} & \textbf{99.10} & \multicolumn{1}{c|}{\textbf{10.00}} & \textbf{96.71} & \multicolumn{1}{c|}{\textbf{5.25}} & \textbf{97.49} & \multicolumn{1}{c|}{\textbf{11.81}} & \textbf{95.34} \\ \hline
\end{tabular}
}
\caption{Comparison results between our MCAE method and state-of-the-art methods on cross-dataset testing. ViTAF${^\dag}$ denotes the ViT-Base model pre-trained on the ImageNet dataset. Our MCAE method achieves the best performance in each setting.}
\label{table:result}
\end{table*}
\section{Experiments}
\subsection{Experimental Setups}
\subsubsection{Experiment Datasets.}
We evaluate our proposed method on cross-dataset testing based on four public datasets: CASIA-MFSD (C)~\cite{zhang2012face}, Replay-Attack (I)~\cite{chingovska2012effectiveness}, MSU-MFSD (M)~\cite{wen2015face} and OULU-NPU (O)~\cite{boulkenafet2017oulu}. Since each dataset is captured by different devices in different scenarios, there are large differences between these datasets. Experiments in such a setting provide a good evaluation of the generalization ability of the model.
\subsubsection{Implementation Details.}
We use MTCNN~\cite{zhang2016joint} to detect faces in each dataset, then crop and resize each face image to 256 $\times$ 256 $\times$ 3. We use ViT-Tiny as our backbone, whose embedding dimension is 192, and the patch size of each image is 16 $\times$ 16. We only use random resized cropping as our data augmentation method. We use the same evaluation metrics as previous work~\cite{shao2019multi}, i.e. the Half Total Error Rate (HTER) and the Area Under Curve (AUC).
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{addloss.png}
\caption{Training with the supervised contrastive module added at different times. When we add the supervised contrastive module after the model has been pre-trained for a period of time, the performance of the model is effectively improved. The unit of the abscissa is epochs.}
\label{fig:timecon}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{hter.png}
\caption{The influence of the masking ratio. Because of the strong structural similarity of face images, the best masking ratio is 85\%, which is larger than that for natural images.}
\label{figmask}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.75\textwidth]{recon.pdf}
\caption{Reconstruction results: (a): Original image. (b): Masked image. (c): Reconstructed image with a 1-layer decoder. (d): Reconstructed image with a 2-layer decoder. (e): Reconstructed image with a 4-layer decoder. (f): Reconstructed image with an 8-layer decoder. The reconstructed images from the 4-layer and 8-layer decoders distinguish between attack and real samples.
}
\label{fig:rec}
\end{figure*}
\subsection{Experiment Result}
\subsubsection{Cross-domain experiments.}
Following previous works, we use the Leave-One-Out (LOO) setting for our cross-domain experiments: we pre-train and fine-tune our model on three datasets and test it on the remaining one. The cross-domain results compared to other state-of-the-art methods are shown in Table \ref{table:result}. We observe that our MCAE method achieves the best performance, which shows that our method has good generalization ability. Although the dataset sizes in FAS are limited, we still obtain a model with good generalization performance through pre-training and contrastive learning, without any additional datasets.
\subsubsection{Experiments on Limited Source Domains.}
To further evaluate the generalization ability of our model, we conduct cross-domain experiments based on limited source domain data. Following previous work, we use M and I as our source domains to train our model, and then test it on datasets C and O. The results are shown in Table \ref{tab:limited}; our method achieves the best performance, again confirming its good generalization ability.
\subsection{Effect Analyses of pre-training stage}
In this subsection, we explore the impact of some essential modules in the pre-training stage, because our model has some unique designs for the face anti-spoofing task. Face images have substantial structural similarity, which is very different from natural images.
\subsubsection{Pre-training schedules.}
As shown in Figure \ref{fig:timecon}, adding the supervised contrastive module after the model has been pre-trained for a certain amount of time yields good performance. The result is consistent with what we analyzed earlier: when the encoder has not converged, it is meaningless to use its features as contrastive features. In addition, the contrastive loss also takes a certain amount of training time to converge, so we cannot add this module too late.
\begin{table}[]
\scalebox{0.7}{
\centering
\begin{tabular}{c|cc|cc}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{M\&I to C} & \multicolumn{2}{c}{M\&I to O} \\ \cline{2-5}
& \multicolumn{1}{c|}{HTER(\%)} & AUC(\%) & \multicolumn{1}{c|}{HTER(\%)} & AUC(\%) \\ \hline
IDA ~(\cite{wen2015face} ) & \multicolumn{1}{c|}{45.16} & 58.80 & \multicolumn{1}{c|}{54.52} & 42.17 \\
MADDG ~(\cite{shao2019multi}) & \multicolumn{1}{c|}{41.02} & 64.33 & \multicolumn{1}{c|}{39.35} & 65.10 \\
SSDG-M ~(\cite{jia2020single} ) & \multicolumn{1}{c|}{31.89} & 71.29 & \multicolumn{1}{c|}{36.01} & 66.88 \\
DR-MD-Net ~(\cite{wang2020cross} ) & \multicolumn{1}{c|}{31.67} & 75.23 & \multicolumn{1}{c|}{34.02} & 72.65 \\
ANRL ~(\cite{liu2021adaptive} ) & \multicolumn{1}{c|}{31.06} & 72.12 & \multicolumn{1}{c|}{30.73} & 74.10 \\
SSAN-M ~(\cite{Wang_2022_CVPRSSAN} ) & \multicolumn{1}{c|}{30.00} & 76.20 & \multicolumn{1}{c|}{29.44} & 76.62 \\ \hline
MCAE (Ours) & \multicolumn{1}{c|}{\textbf{29.89}} & \textbf{77.65} & \multicolumn{1}{c|}{\textbf{21.32}} & \textbf{87.35} \\ \hline
\end{tabular}
}
\caption{Results of the limited source domain experiments. Even though the source-domain data is limited, our method still achieves good performance in each setting.}
\label{tab:limited}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{gradnew.pdf}
\caption{Attention visualization of different models. (a): Original images. (b): Resnet-18. (c): ViT without pre-training. (d): Pre-trained ViT without the supervised contrastive module. (e): Pre-trained ViT with the supervised contrastive module. The pre-trained ViT model pays more attention to the whole image, while the other models focus on local features.}
\label{fig:grad}
\end{figure}
\begin{table}[]
\centering
\scalebox{0.8}{
\begin{tabular}{cc|cc}
\hline
\multicolumn{2}{c|}{\textbf{Decoder}} & \multicolumn{2}{c}{\textbf{Evaluation Metric}} \\ \hline
width & depth & HTER(\%) & AUC(\%) \\ \hline
48 & 1 & 17.38 & 89.40 \\
192 & 2 & 17.21 & 90.84 \\
384 & 4 & 13.35 & 92.56 \\ \rowcolor{gray!20}
\textbf{512} & \textbf{8} & \textbf{11.81} & \textbf{95.34} \\
768 & 10 & 14.01 & 92.30 \\ \hline
\end{tabular}
}
\caption{Effect of the decoder size. The performance of the model is sensitive to the decoder structure; the decoder with 8 layers achieves the best performance.}
\label{table:docoder}
\end{table}
\begin{table*}[]
\centering
\scalebox{0.8}{
\begin{tabular}{c|cc|cc|cc|cc}
\hline
\multirow{2}{*}{Method} &
\multicolumn{2}{c|}{O\&C\&I to M} &
\multicolumn{2}{c|}{O\&M\&I to C} &
\multicolumn{2}{c|}{O\&C\&M to I} &
\multicolumn{2}{c}{I\&C\&M to O} \\ \cline{2-9}
&
\multicolumn{1}{c|}{HTER(\%)} &
AUC(\%) &
\multicolumn{1}{c|}{HTER(\%)} &
AUC(\%) &
\multicolumn{1}{c|}{HTER(\%)} &
AUC(\%) &
\multicolumn{1}{c|}{HTER(\%)} &
AUC(\%) \\ \hline
ViT-Tiny${^\dag}$ &
\multicolumn{1}{c|}{10.48} &
94.80 &
\multicolumn{1}{c|}{26.78} &
79.21 &
\multicolumn{1}{c|}{26.67} &
74.93 &
\multicolumn{1}{c|}{24.77} &
81.47 \\
ViT-Tiny${^*}$ &
\multicolumn{1}{c|}{8.57} &
95.84 &
\multicolumn{1}{c|}{22.11} &
84.32 &
\multicolumn{1}{c|}{17.33} &
85.64 &
\multicolumn{1}{c|}{22.34} &
82.92 \\
ViT-Tiny &
\multicolumn{1}{c|}{\textbf{3.81}} &
\textbf{99.10} &
\multicolumn{1}{c|}{\textbf{10.00}} &
\textbf{96.71} &
\multicolumn{1}{c|}{\textbf{5.25}} &
\textbf{97.49} &
\multicolumn{1}{c|}{\textbf{11.81}} &
\textbf{95.34} \\ \hline
\end{tabular}
}
\caption{Ablation results for model pre-training. ViT-Tiny$^{\dag}$ denotes the ViT-Tiny model without pre-training. ViT-Tiny$^{*}$ denotes the ViT-Tiny model pre-trained on the ImageNet dataset. The results show that our training method performs best.}
\label{table:pretrain}
\end{table*}
\begin{table*}[t]
\centering
\scalebox{0.8}{
\begin{tabular}{c|cc|cc|cc|cc}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{O\&C\&I to M} & \multicolumn{2}{c|}{O\&M\&I to C} & \multicolumn{2}{c|}{O\&C\&M to I} & \multicolumn{2}{c}{I\&C\&M to O} \\ \cline{2-9}
& \multicolumn{1}{c|}{HTER(\%)} & AUC(\%) & \multicolumn{1}{c|}{HTER(\%)} & AUC(\%) & \multicolumn{1}{c|}{HTER(\%)} & AUC(\%) & \multicolumn{1}{c|}{HTER(\%)} & AUC(\%) \\ \hline
w contrastive & \multicolumn{1}{c|}{\textbf{3.81}} & \textbf{99.10} & \multicolumn{1}{c|}{\textbf{10.00}} & \textbf{96.71} & \multicolumn{1}{c|}{\textbf{5.25}} & \textbf{97.49} & \multicolumn{1}{c|}{\textbf{11.81}} & \textbf{95.34} \\
w/o $\lambda_{k}$ parameters & \multicolumn{1}{c|}{5.71} & 97.33 & \multicolumn{1}{c|}{13.22} & 93.23 & \multicolumn{1}{c|}{8.08} & 96.82 & \multicolumn{1}{c|}{13.05} & 93.24 \\
w/o contrastive & \multicolumn{1}{c|}{6.19} & 96.98 & \multicolumn{1}{c|}{16.67} & 90.79 & \multicolumn{1}{c|}{12.67} & 94.97 & \multicolumn{1}{c|}{13.19} & 92.92 \\ \hline
\end{tabular}
}
\caption{Ablation results for the supervised contrastive module. With the contrastive module, performance is improved.}
\label{table:con}
\vspace{-0.2cm}
\end{table*}
\subsubsection{Mask ratio.}
The mask ratio determines the number of visible patches processed by the encoder in the pre-training stage. Figure \ref{figmask} shows the results of experiments with different mask ratios. As we can see in Figure \ref{figmask}, the best mask ratio for face anti-spoofing tasks is 85\%, which is larger than for natural images. Because of the substantial structural similarity of face images, masking more of the input image is a useful way to encourage the encoder to learn a better representation of face images.
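To make the masking step concrete, the following is a minimal sketch of MAE-style random patch masking with the 85\% ratio discussed above; the function name, tensor shapes, and returned values are illustrative assumptions rather than the exact implementation used in this work.
\begin{verbatim}
import torch

def random_mask(patch_tokens, mask_ratio=0.85):
    """Randomly mask a fraction of patch tokens (MAE-style sketch).

    patch_tokens: (B, N, D) patch embeddings.
    Returns visible tokens, a binary mask (1 = masked), and restore indices.
    """
    B, N, D = patch_tokens.shape
    num_keep = int(N * (1.0 - mask_ratio))        # e.g. 15% of patches stay visible

    noise = torch.rand(B, N)                      # per-patch random scores
    ids_shuffle = torch.argsort(noise, dim=1)     # ascending: lowest scores are kept
    ids_restore = torch.argsort(ids_shuffle, dim=1)

    ids_keep = ids_shuffle[:, :num_keep]
    visible = torch.gather(
        patch_tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D)
    )

    mask = torch.ones(B, N)                       # 1 = masked, 0 = visible
    mask[:, :num_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)     # back to the original patch order
    return visible, mask, ids_restore
\end{verbatim}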
\subsubsection{Decoder size.}
The decoder is a key module in the pre-training stage, and we find that the performance of the model is sensitive to the decoder structure. Unlike the lightweight decoders used for natural images, we design our decoder specifically for FAS tasks. The results of experiments with different decoder sizes are shown in Table \ref{table:docoder}. In FAS tasks, image details are crucial to the classifier: if the decoder is too small, the encoder cannot learn a good representation of the masked image, which may negatively affect the aggregation of samples with the same label. Meanwhile, since the decoder is discarded in the fine-tuning stage, we must limit its size to avoid too much useful information residing in the decoder instead of the encoder.
\subsection{Ablation Studies}
\subsubsection{The effect of Pre-training.}
Table \ref{table:pretrain} shows the importance of our special design for the pre-training stage. The results are very unsatisfying when we use a ViT model without pre-training for face anti-spoofing tasks. Meanwhile, even when we pre-train our model on the ImageNet dataset, the improvement is limited. Since face images are quite different from natural images, the features extracted by an encoder trained on the ImageNet dataset may not be suitable for face anti-spoofing tasks.
\subsubsection{Supervised contrastive module.}
The results in Table \ref{table:con} show that the contrastive module is beneficial in improving the model's generalization ability; if it is removed, the model's performance is unsatisfactory. Meanwhile, the weighting factor $\lambda_{k}$ brings a significant boost to model performance. Because this module brings task-related information to the model, it is well suited to tasks based on small datasets.
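For reference, the snippet below sketches a generic supervised contrastive loss of the kind ablated here; it is a simplified stand-in that omits the per-sample weighting factor $\lambda_{k}$ and other implementation details, and the temperature value is only an assumed default.
\begin{verbatim}
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Generic supervised contrastive loss (sketch, lambda_k weighting omitted).

    features: (B, D) embeddings; labels: (B,) live/spoof labels.
    """
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                          # pairwise similarities
    mask_self = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(mask_self, float("-inf"))        # drop self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~mask_self
    pos_log_prob = log_prob.masked_fill(~same, 0.0)        # keep only positive pairs
    loss = -pos_log_prob.sum(1) / same.sum(1).clamp(min=1) # average over positives
    return loss.mean()
\end{verbatim}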
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{tsnez.png}
\caption{Visualization of the feature distribution of pre-trained MCAE model. Because of the introduction of label information in the pre-training stage, live samples are well differentiated from spoof samples.}
\label{fig:tsne}
\end{figure}
\subsection{Visualization and Analysis}
\subsubsection{Reconstruction Visualization.} To further explore the role of the decoder, we visualize the reconstruction results for different decoder sizes. As shown in Figure \ref{fig:rec}, the larger decoder reconstructs the original image better. Meanwhile, we find that when the decoder is too small, the reconstructed images of live samples and spoof samples are very similar in their details; in that case, the features of live samples cannot be distinguished well.
\subsubsection{Attention Visualization.}
In order to find the areas the model focuses on, we use Grad-CAM~\cite{zhou2016learning} to visualize the activation maps on the original images. We compare the attention visualizations of different models in Figure \ref{fig:grad}. The pre-trained ViT model focuses on the whole area of the face image; if we do not pre-train the ViT model, it pays attention to different locations of the input image. Meanwhile, comparing the ViT models with the CNN model, we find that the CNN model attends to local features, which is very different from the pre-trained ViT model.
\subsubsection{t-SNE.}
We visualize the t-SNE~\cite{van2008visualizing} embeddings in Figure \ref{fig:tsne} to analyze the feature space of our MCAE model. We observe that the live samples from different domains are all aggregated, which indicates that our method effectively pushes live samples close together. Furthermore, the distance between live samples and spoof samples is large, so they are easy to distinguish in the feature space.
\section{Conclusion}
In this paper, we propose a novel model, the Masked Contrastive Autoencoder, for the cross-domain face anti-spoofing task. In order for the feature extractor to extract features common to live samples from different domains, we combine the Masked Image Model (MIM) with supervised contrastive learning to train our model.
Taking FAS tasks as an example, we find that MIM pre-training has the potential to improve downstream performance on small datasets. We summarize some intriguing design principles for performing MIM pre-training for downstream tasks, which differ from previous experience with MIM pre-training on large-scale natural-image datasets. We hope these design principles can inspire researchers to design better MIM pre-training strategies for a variety of downstream tasks, especially on small datasets.
|
{
"arxiv_id": "2302.08579",
"language": "en",
"timestamp": "2023-02-20T02:01:50",
"url": "https://arxiv.org/abs/2302.08579",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
\vspace{-0.15cm}
End-to-end (E2E) automatic speech recognition (ASR) models simplify conventional pipeline ASR methods and directly transcribe input speech into corresponding text \cite{8068205,6638947}. Due to a large amount of labelled training data, E2E ASR models surpass pipeline methods on most public datasets \cite{9688009}.
However, E2E ASR still suffers from unseen domains \cite{tsunoo22_interspeech}, and large quantities of labelled data are not always feasible to collect and can therefore be limited \cite{9746480}. Adaptation training methods can be utilised to alleviate this issue when the target domain has enough paired data \cite{tsunoo19_interspeech, 6424251}. However, text-only data from the target domain is easier to obtain in most scenarios, and it is more efficient to bias the E2E ASR systems to the target domain using such data \cite{tsunoo22_interspeech}.
There are several studies exploring the use of text-only data via an external language model (LM). Shallow fusion \cite{chorowski2015attention} which linearly interpolates the E2E ASR with an external LM is straightforward and widely deployed \cite{8462682}.
Several structural fusion methods like deep fusion \cite{gulcehre2015using} and cold fusion \cite{sriram18_interspeech} were further proposed, but they require additional training and have not replaced shallow fusion \cite{chorowski2015attention} as the dominant method for LM integration \cite{9003790, 9383515}.
Considering that E2E models implicitly learn to model the context between words and characterise the training data of the source domain, a density ratio method \cite{9003790} was proposed as an extension of shallow fusion. It subtracts the score of a source-domain LM from the log-linear combination of the E2E ASR model and target-domain LM scores \cite{9003790}. Furthermore, the estimate of the E2E ASR model's internal LM\footnote{We refer to the ability of E2E ASR to model the context of token sequence as internal LM \cite{9383515}.} has been explored \cite{9383515, 9415039, 9746948, zeineldeen21_interspeech}.
However, both the density ratio and internal LM estimation methods
make the decoding process more complicated, and it is not always feasible to accurately estimate the internal LM due to domain mismatch \cite{tsunoo22_interspeech}. To simplify the decoding process based on the internal LM estimation,
\cite{tsunoo22_interspeech} further proposed a residual LM that
models the residual factor of external and internal LMs, but the internal LM estimate must be pre-calculated for all text data before training and the issue of inaccurate estimation still exists.
\cite{meng22_interspeech} explores fine-tuning the internal LM on text adaptation data, but this requires regularisation strategies to constrain the internal LM when learning the target domain.
Previous methods rely on an external LM together with an estimate of the internal LM to bias the prediction of E2E ASR systems, thus improving cross-domain ASR performance. However, incorporating the external LM demands extra computational cost and additional parameters, and accurate internal LM estimation is not always feasible \cite{tsunoo22_interspeech}. In this paper, the motivation is to make the E2E ASR system itself adaptable, so that it can be biased to unseen domains without needing an external LM or re-training. Therefore,
this paper proposes a replaceable internal LM (RILM) method, through which the internal LM of the E2E ASR system can be directly replaced with a target-domain LM during the decoding stage to improve the cross-domain ASR accuracy. Furthermore, this paper designs a residual softmax (R-softmax) for CTC-based ASR models that efficiently adapts to the target domain during the inference stage. With E2E ASR models
trained on the LibriSpeech corpus \cite{7178964}, experiments showed that the proposed methods greatly boosted the cross-domain ASR accuracy on the Switchboard \cite{225858} and AESRC2020 \cite{9413386} corpus while performing robustly in intra-domain scenarios.
\begin{figure*}[t]
\centering
\includegraphics[width=0.92\linewidth]{structure_r2.png}
\caption{Illustration of the proposed methods. (a) a decoder based on proposed RILM method; (b) structure of Transformer LM that is part of the decoder in Fig.1 (a); (c) the training and (d) the decoding process of the proposed E2E ASR system.}
\label{fig:arch}
\end{figure*}
The rest of this paper is organized as follows:
Sec.~\ref{sec:format} describes the proposed methods.
Sec.~\ref{sec:typestyle}
details the data, model, and experimental results. Conclusions are provided in Sec.~\ref{sec:print}.
\section{Proposed Methods}
\label{sec:format}
This paper proposes a replaceable internal language model (RILM) method to make the attention-based encoder-decoder E2E ASR system itself adaptable when encountering domain shift. In addition, this paper further designs a residual softmax (R-softmax) for the CTC-based models in cross-domain scenarios.
In this paper, a hybrid CTC/attention framework \cite{8068205} is employed to utilise both proposed methods.
The realisation of the methods is shown in Fig.~\ref{fig:arch}, where FC represents a fully connected layer, and \textcircled{+} denotes addition operations.
\subsection{Replaceable Internal Language Model (RILM)}
A standard Transformer decoder contains a stack of several identical layers \cite{Vaswani2017}, and each layer consists of several sub-layers, including a self-attention module, a cross-attention module, and a feed-forward module. It is the self-attention module in each layer that enables the decoder to model the context between token sequences and is therefore interpreted as having an internal LM. However, the cross-attention module in each layer makes the decoder dependent on the acoustic encoder output and thus it cannot be separately pre-trained on text data \cite{9688009}; it also makes the internal LM hard to replace.
Inspired by prior work \cite{9688009}, the proposed RILM method makes some modifications to the Transformer decoder structure. As shown in Fig.~\ref{fig:arch} (a) and (b), the RILM method only retains the cross-attention module in the last $M$ layers and removes it from the previous $N$ layers.
The structure of previous $N$ layers without cross-attention mechanisms is the same as a standard Transformer LM, and is named the internal LM of the E2E ASR system decoder in this paper, as it is the self-attention module that makes it feasible for the decoder to model the context between tokens.
To avoid a mismatch when directly replacing the internal LM, the RILM method first calculates a predicted distribution of the internal LM and then uses a fully connected layer to transform its dimensions to be the same as the attention module before feeding it into the following cross-attention modules.
Inspired by \cite{he2016deep}, a highway connection can be used to directly connect the internal LM to the final output, thus emphasizing semantic information. The final output of the model is as follows and is illustrated in Fig.~\ref{fig:arch} (a):
\begin{equation}
{\rm logits} = {\rm logits}_{A}+\beta \cdot{\rm logits}_{L} \label{logits}
\end{equation}
where ${\rm logits_L}$ and ${\rm logits_A}$ denote the output of the internal LM and the cross-attention modules, respectively. $\beta$ is a tunable hyperparameter and is shown in Fig.~\ref{fig:arch} (a).
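The following sketch illustrates how Eq.~\ref{logits} combines the replaceable internal LM with the cross-attention layers; all module interfaces, shapes, and the output projection are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import torch

def rilm_output(internal_lm, fc_proj, cross_layers, out_proj,
                prev_tokens, enc_out, beta=0.3):
    """Sketch of the RILM decoder output: logits = logits_A + beta * logits_L.

    internal_lm : replaceable Transformer LM over previously decoded tokens
    fc_proj     : FC layer mapping the LM's predicted distribution to the model dim
    cross_layers: the last M decoder layers with cross-attention on the encoder output
    out_proj    : FC layer mapping decoder states to vocabulary logits
    """
    logits_L = internal_lm(prev_tokens)                  # (B, T, vocab)
    hidden = fc_proj(torch.softmax(logits_L, dim=-1))    # (B, T, d_model)
    for layer in cross_layers:                           # attend to acoustic features
        hidden = layer(hidden, enc_out)
    logits_A = out_proj(hidden)                          # (B, T, vocab)
    return logits_A + beta * logits_L                    # highway connection (Eq. 1)
\end{verbatim}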
\subsection{Residual Softmax (R-softmax)}
Previous work \cite{tsunoo22_interspeech, 9383515, 9003790, 9415039} focused on the encoder-decoder E2E ASR structure rather than the CTC-based structure when estimating the internal LM, because CTC-based models are generally not considered capable of modelling context between output tokens due to the conditional independence assumption.
However, CTC-based E2E ASR models learn the training data distribution and are affected by the frequency of words in the training data \cite{DBLP:journals/taslp/DengCYY22}. A CTC-based model therefore has at least the modelling ability of a uni-gram LM, and this paper aims to adapt it to the target domain effectively without re-training during inference. Therefore, this paper proposes the R-softmax for CTC-based models.
Let $x$ and $y$ be input data and its label, respectively, and $p^{t}_j$ and $p^{s}_j$ be the frequency of the $j$-th token in target-domain and source-domain data distribution, respectively.
Assume $\phi$ is a desired predicted CTC probability of the target domain, with form $\phi_{j}=p(y=j|x)=(p(x|y=j)/{p(x)})\cdot p(y=j)=(p(x|y=j)/{p(x)})\cdot p^{t}_j$,
and $\hat{\phi}$ is a predicted CTC probability of the source domain,
with form $\hat{\phi_{j}}=\hat{p}(y=j|x)=(p(x|y=j)/{\hat{p}(x)})\cdot\hat{p}(y=j)=(p(x|y=j)/{\hat{p}(x)})\cdot p^{s}_j$ \footnote{We assume that all data in the source and target domains are generated from the same process $p(x|y=j)$ following \cite{ren2020balanced}.}. Let $l_j$ be a logit of the $j$-th token output by a CTC-based model, then $\hat{\phi_{j}}={\rm exp}({l_j})/{\sum_{i=1}^{V}{\rm exp}(l_i)}$, where V denotes the vocabulary size. If $\hat{\phi_{j}}$ is expressed by the normal softmax function, then ${\phi_{j}}$ can be obtained via the R-softmax as follows\footnote{The key to the proof for R-softmax is to first show that $\sum_{i=1}^{V}{\rm exp}(l_i)=\sum_{i=1}^{V}{\rm exp}(l_i-{\rm log}({\hat{\phi_i}}/{\phi_i}))$ and then substitute it into the denominator of $\phi_j = {{\rm exp}(l_j - {\rm log}(\hat{\phi_j}/\phi_j))}/{\sum_{i=1}^{V}{\rm exp}(l_i)}$.}:
\begin{equation}
\phi_{j} = \frac{{\rm exp}(l_j)\cdot(p_j^t/p_j^s)}{\sum_{i=1}^{V}{\rm exp}(l_i)\cdot(p_i^t/p_i^s)} \label{r-softmax}
\end{equation}
Considering that some tokens may never appear in the corresponding text, the R-softmax employs a smoothing strategy when counting the $p^{t}_j$ and the $p^{s}_j$ as follows:
\begin{equation}
p_i=
\begin{Large}
\begin{cases}
\frac{ C_i}{C}-\frac{{\rm I}_{\{n_0 \neq 0\} }}{(V-n_0)\times C},& \text{\normalsize $ C_i >0$ } \\
\frac{{\rm I}_{\{n_0\neq 0\}}}{n_0 \times C},& \text{\normalsize otherwise} \label{tlce2}
\end{cases}
\end{Large}
\end{equation}
where $C$ denotes the total number of counts for all tokens, $C_i$ is the count of the $i$-th token, $n_0$ represents the number of tokens that never appear, and ${\rm I}_{\{n_0 \neq 0\}}$ is an indicator function which is 1 if $n_0\neq 0$ and 0 otherwise.
Furthermore, the ${\rm [blank]}$ label in CTC is unique and never appears when counting the frequency of token appearance.
Therefore, the R-softmax keeps the predicted probability of ${\rm [blank]}$ the same as when using the normal softmax.
Assume the index of ${\rm [blank]}$ as 1 and the weight ($p_1^t/p_1^s$) of ${\rm [blank]}$ in R-softmax as $k$, the value of $k$ can be calculated using the following equations:
\begin{equation}
\frac{k \cdot {\rm exp}({l}_{1})}{k \cdot {\rm exp}({l}_{1})+\sum_{i=2}^{V}(p_i^t/p_i^s)\cdot {\rm exp}({l}_{i})}=\frac{{\rm exp}({l}_{1})}{\sum_{i=1}^{V}{\rm exp}({l}_{i})}
\end{equation}
\begin{equation}
\frac{p_1^t}{p_1^s}=k=\frac{\sum_{i=2}^{V}(p_i^t/p_i^s)\cdot {\rm exp}({l}_{i})}{\sum_{i=2}^{V}{\rm exp}({l}_{i})} \label{ctc-soft}
\end{equation}
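A small numerical sketch of the R-softmax computation (Eq.~\ref{r-softmax}, Eq.~\ref{tlce2}, and Eq.~\ref{ctc-soft}) is given below; the blank index, the array layout, and the placeholder frequency entry for ${\rm [blank]}$ are assumptions made only for illustration.
\begin{verbatim}
import numpy as np

def smoothed_freq(counts):
    """Token frequencies with smoothing (Eq. 3); the [blank] entry is unused."""
    counts = np.asarray(counts, dtype=float)
    C, V = counts.sum(), len(counts)
    n0 = (counts == 0).sum()
    return np.where(counts > 0,
                    counts / C - (1.0 / ((V - n0) * C) if n0 > 0 else 0.0),
                    1.0 / (n0 * C) if n0 > 0 else 0.0)

def r_softmax(logits, p_tgt, p_src, blank_id=0):
    """Eq. (2) with the [blank] weight k from Eq. (5); logits is one frame's (V,) vector."""
    w = (p_tgt / p_src).copy()                     # per-token ratio p_j^t / p_j^s
    exp_l = np.exp(logits - logits.max())          # numerically stable exponentials
    nonblank = np.arange(len(logits)) != blank_id
    # Eq. (5): choose k so that p([blank]) matches the normal softmax
    k = (w[nonblank] * exp_l[nonblank]).sum() / exp_l[nonblank].sum()
    w[blank_id] = k
    return w * exp_l / (w * exp_l).sum()
\end{verbatim}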
\section{Experiments}
\label{sec:typestyle}
\subsection{Corpus}
\label{ssec:corp}
E2E ASR models were trained on Librispeech corpus \cite{7178964}, a read speech corpus that contains 960-hour labelled speech data, and the standard test sets from Librispeech (“test-clean/-other”) were used for intra-domain evaluation.
To verify the effectiveness of the proposed methods on domain adaptation,
two out-of-domain datasets were used in the experiments. The first was the dev and test sets (Eval2000 and RT03) normally used with the Switchboard (SWBD) telephone conversation \cite{225858} corpus.
The text data for
target-domain LM training was the SWBD and
Fisher transcriptions.
Also, the dev and test sets from the AESRC2020 corpus \cite{9413386} were used, which is an accented English speech corpus, and the text data for
target-domain LM training was the training transcriptions.
\subsection{Model Descriptions}
Baseline models and the proposed E2E ASR systems were built using the ESPnet2 toolkit \cite{watanabe2018espnet}. Experiments used 80-dimensional filter bank features.
Text output used 5000 modeling units, including 4997 byte pair encoding (BPE) units \cite{gage1994} and 3 non-verbal symbols.
Under a hybrid CTC/attention framework, a Conformer \cite{gulati20_interspeech} baseline model was developed following the ESPnet2 recipe \cite{watanabe2018espnet} (i.e., 12-layer Conformer encoder and 6-layer Transformer decoder with 512 attention dimensions, 2048 feed-forward dimensions, and 8 heads). To achieve a more competitive performance by including popular self-supervised pre-training, we also built a strong Wav2vec2.0-based \cite{NEURIPS2020_92d1e1eb, hsu21_interspeech} baseline model to replace the Conformer encoder with a Wav2vec2.0 encoder \cite{hsu21_interspeech} provided by Fairseq \cite{ott2019fairseq} (i.e. "w2v\_large\_lv\_fsh\_swbd\_cv"). It also used a fully connected layer to change the dimensions of the encoder output from 1024 to 512 before it was fed into the decoder, and the decoder was the same as the previous baseline model.
For the proposed E2E ASR systems shown in Fig.~\ref{fig:arch}, the decoder contained 6 self-attention, cross-attention, and feed-forward modules (i.e., $M$=$N$=$6$), the FCs respectively changed the dimensions to 5000 and 512, and other structures were the same as the baseline models. The $\beta$ in Eq.~\ref{logits} was set to 0.3 for Conformer-based model and 0 for Wav2vec2.0-based models.
During training, the CTC weight was 0.3, and the models based on Conformer and Wav2vec2.0 were trained for 35 and 20 epochs respectively. A source-domain internal Transformer LM (6 layers) was trained on the Librispeech LM corpus \cite{7178964} for 25 epochs following the ESPnet2 recipe, and it was fine-tuned on the target-domain text corpus for an extra 5 epochs. The source-domain LM was used as the internal LM part of the proposed decoder and was fixed during ASR training.
Model parameters from the last 10 epochs were averaged to avoid over-fitting.
During decoding, CTC weights were set to 0.3 and 0.5 for models based on Conformer and Wav2vec2.0 encoders respectively.
We implemented shallow fusion \cite{chorowski2015attention} with 0.1 external LM weight and density ratio \cite{9003790} with 0.2 target and 0.1 source LM weights following \cite{9383515} to compare with the proposed methods. The beam size was 20.
\subsection{Experimental results}
Experiments were conducted to compare the proposed E2E ASR methods with the strong baseline models for both intra-domain and cross-domain scenarios.
This paper aims to improve cross-domain accuracy while avoiding a degradation of intra-domain performance.
\label{sec:majhead}
\begin{table}[t]
\caption{WER of the E2E ASR models in intra-domain scenarios on the test sets of Librispeech.}
\label{tab:librispeech}
\centering
\setlength{\tabcolsep}{2.5mm}
\begin{tabular}{l c| c c}
\Xhline{3\arrayrulewidth}
{Model}&{Encoder} &{Test-clean}&{Test-other} \\
\hline
Baseline&Conformer &2.7& 6.8\\
Proposed& Conformer &\textbf{2.7}&\textbf{6.5}\\
\hline
Baseline&Wav2vec2.0 &3.6&5.5 \\
Proposed& Wav2vec2.0&\textbf{2.2}&\textbf{4.6}\\
\Xhline{3\arrayrulewidth}
\end{tabular}
\end{table}
\begin{table}[t]
\vspace{-0.3cm}
\caption{WER of the E2E ASR models in cross-domain adaptation scenarios on the dev and test sets of AESRC2020.}
\label{tab:aesrc}
\centering
\setlength{\tabcolsep}{1.0mm}
\begin{tabular}{l c c c| c c}
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{Model}&\multirow{2}{*}{Encoder}&Replace&Residual &\multirow{2}{*}{Dev}&\multirow{2}{*}{Test} \\
& &Internal LM&Softmax && \\
\hline
Baseline&Conformer &\XSolidBrush& \XSolidBrush &14.3& 15.8\\
Proposed& Conformer&\Checkmark&\Checkmark &\textbf{13.4}&\textbf{14.8}\\
Proposed& Conformer&\XSolidBrush&\Checkmark &13.7& 15.2\\
Proposed& Conformer&\Checkmark&\XSolidBrush &13.8& 15.2\\
Proposed& Conformer&\XSolidBrush&\XSolidBrush &14.1& 15.6\\
\hline
Baseline&Wav2vec2.0 &\XSolidBrush& \XSolidBrush &11.6&12.4 \\
Proposed& Wav2vec2.0&\Checkmark&\Checkmark &\textbf{10.7}&\textbf{11.4}\\
Proposed& Wav2vec2.0&\XSolidBrush&\Checkmark &11.0& 11.8\\
Proposed& Wav2vec2.0&\Checkmark&\XSolidBrush &11.2&12.0 \\
Proposed& Wav2vec2.0&\XSolidBrush&\XSolidBrush &11.3&12.2\\
\Xhline{3\arrayrulewidth}
\end{tabular}
\vspace{-0.3cm}
\end{table}
\begin{table}[t]
\caption{WER of the E2E ASR models in cross-domain adaptation scenarios on the dev and test sets of SWBD. Dev set was obtained following ESPnet2 \cite{watanabe2018espnet} processing.}
\label{tab:swbd}
\centering
\setlength{\tabcolsep}{0.3mm}
\begin{tabular}{l c c c| c c c}
\Xhline{3\arrayrulewidth}
{\multirow{2}{*}{Model}}&
\multirow{2}{*}{Encoder}&Replace&Residual &\multirow{2}{*}{Dev}&\multirow{2}{*}{Eval2000}&\multirow{2}{*}{RT03} \\
& &ILM&Softmax &&& \\
\hline
Baseline&Conformer &\XSolidBrush& \XSolidBrush &39.7& 33.6& 38.2\\
Proposed& Conformer&\Checkmark&\Checkmark &\textbf{37.1}&\textbf{31.2}& \textbf{35.9}\\
Proposed&Conformer&\XSolidBrush&\Checkmark &38.0& 31.6& 36.4\\
Proposed&Conformer&\Checkmark&\XSolidBrush &37.9& 32.0& 36.7\\
Proposed& Conformer&\XSolidBrush&\XSolidBrush &38.7& 32.3& 37.2\\
\hline
Baseline&Wav2vec2.0 &\XSolidBrush& \XSolidBrush &27.8&18.3 &21.1 \\
Proposed& Wav2vec2.0&\Checkmark&\Checkmark &\textbf{26.1}&\textbf{17.8}& \textbf{20.4}\\
Proposed& Wav2vec2.0&\XSolidBrush&\Checkmark &26.6& 18.1& 20.9\\
Proposed&Wav2vec2.0&\Checkmark&\XSolidBrush &27.3& 18.3&21.3 \\
Proposed& Wav2vec2.0&\XSolidBrush&\XSolidBrush &27.7& 18.5& 21.6\\
\Xhline{3\arrayrulewidth}
\end{tabular}
\end{table}
\begin{table}[t]
\vspace{-0.25cm}
\caption{WER for proposed methods and shallow fusion (SF) and density ratio (DR) in cross-domain scenarios on AESRC and SWBD. Wav2vec2.0 encoder was used.}
\vspace{0.05cm}
\label{tab:other}
\centering
\setlength{\tabcolsep}{1.0mm}
\begin{tabular}{l c c| c c c}
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{Model}&\multicolumn{2}{c|}{AESRC2020} &\multicolumn{3}{c}{SWBD}\\
&Dev&Test&Dev&Eval2000&RT03\\
\hline
Baseline + SF \cite{chorowski2015attention}$^*$&10.9&11.8&27.4&17.9&20.5\\
Baseline + DR \cite{9003790}$^*$&10.8&11.6&27.3&\textbf{17.8}&\textbf{20.4}\\
Proposed &\textbf{10.7}&\textbf{11.4}&\textbf{26.1}&\textbf{17.8}&\textbf{20.4}\\
\hline
\Xhline{3\arrayrulewidth}
\end{tabular}
\begin{tablenotes}
\footnotesize
\item{\hspace{-3.5mm}*}{Note the comparison is unfair as the SF and the DR rely on external LMs, \\ \hspace{-3.5mm}which are trained on source/target-domain text as in experimental setups.}
\end{tablenotes}
\vspace{-0.3cm}
\end{table}
\subsubsection{Intra-domain ASR}
The results in Table~\ref{tab:librispeech} showed that the proposed ASR system could slightly outperform the baseline model in the source domain when using the Conformer encoder. In addition, due to teacher forcing \cite{Williams1989} in decoder training,
some overfitting was observed on the baseline model when using the stronger Wav2vec2.0 encoder, while the proposed ASR system performed more robustly, which may be due to the internal LM of RILM being fixed during ASR training.
\subsubsection{Cross-domain ASR}
Experiments were then conducted to compare the cross-domain ASR performance on the AESRC2020 and SWBD corpora.
As shown in Table~\ref{tab:aesrc} on the AESRC2020 corpus, without needing an external LM or re-training, the proposed methods outperformed the baseline models by about 1\% absolute WER reduction in the cross-domain scenarios whether using the Conformer or Wav2vec2.0 encoders. More specifically, in Table~\ref{tab:aesrc}, ablation studies showed that both the proposed RILM method and R-softmax could bring around 0.5\% reduction in WER and
were effective for flexible domain adaptation. This also shows that the proposed methods have broad applicability, whether for CTC-based, encoder-decoder, or hybrid CTC/attention models.
The results on the SWBD corpus in Table~\ref{tab:swbd} showed more severe performance degradation, which was also due to different corpus collection environments.
However, after using the robust Wav2vec2.0 encoder \cite{hsu21_interspeech}, the performance degradation was greatly reduced. The conclusion from SWBD was in general consistent with that of the AESRC2020 corpus: 1. the proposed methods achieved better cross-domain performance compared with the baseline models, with around 2.6\% absolute WER reduction for Conformer encoder and about 1.7\% for Wav2vec2.0 encoder; 2. both the designed RILM and R-softmax worked for ASR models in cross-domain scenarios and had broad application.
We also compared the proposed methods with the shallow fusion \cite{chorowski2015attention} and density ratio \cite{9003790} methods on the AESRC2020 and SWBD corpora. The results in Table~\ref{tab:other} show that the proposed methods achieved cross-domain results close to theirs while being more flexible, as they do not rely on an external LM.
\section{Conclusions}
\label{sec:print}
This paper proposes a replaceable internal LM (RILM) method, which effectively makes the E2E ASR system itself adaptable, so that it can be biased to target domains without needing an external LM or re-training. Through the proposed RILM method, the internal LM of the E2E ASR model decoder can be directly replaced by a target-domain LM, thus achieving flexible adaptation. Furthermore, this paper designs a residual softmax (R-softmax) for CTC-based models to achieve domain adaptation without re-training during inference. Both proposed methods are employed under a hybrid CTC/attention framework. Experimental results showed that the proposed methods greatly improved cross-domain ASR performance while maintaining intra-domain results.
\small
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.08589",
"language": "en",
"timestamp": "2023-02-20T02:02:28",
"url": "https://arxiv.org/abs/2302.08589",
"yymm": "2302"
} | \section{Introduction}
A key assumption in psycholinguistics is that sentence processing involves two operations: (i) the construction of a syntactic structure that represents the relation between its components and (ii) the retrieval of the meaning of single linguistic units from semantic memory.
When presented with a sentence in a task, humans can understand word meaning effectively while reading and listening. Listeners and readers appear to extract a similar semantic meaning from narrative stories~\cite{rubin2000reading,diakidoy2005relationship}, hence suggesting that the brain represents semantic information in an amodal form, i.e., independent of input modality. Further, earlier language-fMRI encoding studies have observed that sentence semantics alone cannot explain all the variance in brain activity; \emph{syntactic} information can also be used to explain some of the variance~\cite{binder2016toward,fedorenko2014reworking}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.647\textwidth]{images/jointArch.pdf}
\caption{Four steps of our proposed approach: (1) fMRI acquisition, (2) Syntactic parsing, (3) Regression model training, and (4) Predictive power analysis of the three embeddings methods.}
\label{fig:jointArch}
\end{figure*}
Prior to different aspects of semantic interpretation, the brain performs syntactic structure analysis inherently~\cite{hirst1984semantic}.
The syntactic information helps to identify the structural constituents that have to be interpreted as nominal, ordinal, or noun phrases, e.g., we identify ``Brazil'', ``four'', ``world cups'', and ``2002'' in a sentence: ``Brazil won four world cups till 2002'' before interpreting the semantics. Hence, investigating how the brain encodes syntactic word features is crucial for understanding language comprehension in the brain.
\noindent\textbf{Two paradigms of syntactic parsing:}
Constituency and dependency
are two different syntactic formalisms using different structural primitives (dependency relations and phrases). There has been some discussion in the field of theoretical linguistics with regard to whether they capture the same information or to what degree the structures they sanction are equivalent~\cite{hays1964dependency,jung1995syntaktische}.
Discussing the linguistic information the two parsers capture,~\citet{rambow2010simple} states from a theoretical linguistic point of view that they describe distinct syntactic entities; thus, they are not strictly equivalent. Dependencies capture direct relations between words, identical to thematic functions such as subject, object, modifier, etc. Constituent syntactic structure, on the other hand, is not so much about functional relations between words but about the recursive grouping of sentence constituents (words and phrases), such that at each level, each grouping acts as a syntactic unit~\cite{schneider1998linguistic}. Moreover, according to~\citet{jung1995syntaktische}, only dependencies can express the syntactic word-to-word relations of a sentence, whereas constituency expresses the linear order of a sentence.
On the other hand, an incremental top-down constituency parser processes input words from left to right, producing the possible parses in a top-down manner as future words are read.
Therefore, ~\citet{jung1995syntaktische} sees the two grammars as complementary but not equivalent.
Following these last observations, we consider dependency and constituent structures as distinct and the type of information they capture as nonequivalent.
The question we address in this study is whether different brain regions are associated with building different kinds of syntactic structures. We compare the predictive power of syntactic structural measures derived from these parsers
with regard to modeling the brain activity in language processing areas recorded during naturalistic story listening.
\noindent\textbf{Stimulus types for studying syntactic processing:}
Earlier psycholinguistic studies explored syntactic processing while subjects were involved in activities that required less versus more syntactic comprehension effort~\cite{friederici2011brain} using carefully designed sentence/phrase stimuli.
In the past decade, the study of syntactic processing has been extended to naturalistic settings that use narratives, such as reading ~\cite{reddy2021can} or listening to stories ~\cite{bhattasali2018differentiating,
zhang2022probing} generally in a task-free setting.
Due to the complexity of extracting syntactic word embeddings from sentence parsers, investigation of the predictive power of sentence parsers for brain encoding, especially for the neuroimaging data from naturalistic listening paradigms, is still under-explored.
\noindent\textbf{Brain Regions of Interest (ROIs) for syntactic processing:}
Several classical studies report the involvement of a language network of mostly left-lateralised cortical regions, including the left inferior frontal gyrus (IFG) with sub-regions (BA 44 and BA 45), the left posterior superior temporal gyrus (pSTG), and the left anterior temporal pole (ATP)~\cite{caramazza1976dissociation,friederici2006processing,friederici2011brain,pallier2011cortical,zaccarella2015merge}.
However, several other studies did not report activity in left IFG and left pSTG~\cite{humphries2006syntactic,rogalsky2009selective,bemis2011simple}, despite using paradigms similar to the studies mentioned above.
A series of recent studies have used functional magnetic resonance imaging (fMRI) brain activity to find that those brain regions spanning both the left and right hemispheres are involved in language processing~\cite{fedorenko2014reworking,caucheteux2021disentangling,reddy2021can,zhang2022probing,oota2022neural,oota2022visio,oota2022long,toneva2022combining,aw2022training,oota2022joint,merlin2022language}.
Further, these works conclude that syntax is distributed throughout the language system~\cite{blank2016syntactic,fedorenko2012lexical,fedorenko2020lack,caucheteux2021disentangling,wang2020probing,reddy2021can,zhang2022probing,oota2022neural}.
However, whether different brain regions are sensitive to distinct sentence-parsing strategies remains unclear. Moreover, in a listening task, it is unclear how syntactic features are represented in the brain and whether the neural correlates of different syntactic parsing signals overlap or dissociate from one another.
\noindent\textbf{Word stimulus representations for brain encoding}:
Several studies have used basic syntactic features such as part-of-speech, dependency relations, complexity metrics~\cite{caucheteux2021disentangling,reddy2021can}, and semantic word embeddings~\cite{oota2018fmri,jain2018incorporating,hollenstein2019cognival} to represent words for brain encoding with text stimulus.
In this paper, to understand how the brain processes linguistic structure in sentences, we leverage three different text representations using syntax parsers, as shown in Fig.~\ref{fig:jointArch}. We aim to understand the relative importance of these syntax parser embeddings and also their additional importance when compared with basic syntactic features or semantic embeddings like BERT.
\noindent\textbf{Limitations of previous work}:
(i) Existing work has mainly focused on constituency parsing, including incremental top-down parsing~\cite{reddy2021can}. No previous work has explored the syntactic structure present in dependency trees;~\citet{reddy2021can} only used a one-hot vector for dependency tags as part of their complexity metrics, whereas we leverage dependency information more systematically by learning dependency representations using graph convolutional networks.
(ii) Existing work has mostly focused on reading tasks only, and only on a small number of subjects (e.g., 7 subjects in~\cite{reddy2021can}). There is evidence that several cortical regions are activated during listening~\cite{handjaras2016concepts}, but which brain areas and sub-regions of the language network are involved in syntactic processing is as yet unexplored. (iii) Lastly, existing work does not perform a pairwise comparison of predictive power for different syntactic parse methods.
Overall, our main contributions are as follows.
(1) We explore (a) basic syntactic features such as complexity metrics, part-of-speech (POS) tags, and dependency role (DT) tags, (b) embeddings obtained from three parse tree representations, and (c) semantic BERT embeddings for brain encoding.
(2)
Constituency and dependency tree-based embeddings are effective across different language regions for brain activity prediction, even after controlling for basic syntactic signals.
(3)
We find that prediction of the activation in regions such as the bilateral temporal areas (ATL, PTL) and middle-frontal gyrus (MFG) is significantly related to constituency parse representations. At the same time, brain activity in other language regions, such as the angular gyrus (AG) and posterior cingulate cortex (PCC) is significantly associated with dependency parse embeddings.
(4) Lastly, in the inferior frontal gyrus (IFG), we identify that dependency parse embeddings encode syntactic information better in the sub-regions such as 44, 45, IFJa, and IFSp of the left hemisphere, whereas constituency parse tree and incremental top-down parse tree based embeddings are better aligned in the right hemisphere.
\vspace{-0.1cm}
\section{Feature Representations}
We used four different features computed per word to simultaneously test different syntactic and semantic representations.
\noindent\textbf{(1) Constituency Tree-based Embeddings: }
Similar to~\citet{reddy2021can}, we build three types of constituency tree-based graph embeddings (ConTreGE): (i) ConTreGE Complete vectors (CC), (ii) ConTreGE Incomplete vectors (CI) and (iii) Incremental Top-Down Parser Embeddings (INC).
A CC vector is generated for every word using the largest subtree completed by that word.
A subtree is considered complete when all of its leaves are terminals.
The largest subtree completed by a given word refers to the subtree with the largest height.
A CI vector is generated for every word using the incomplete subtree that contains all of the Phrase Structure Grammar productions needed to derive the words seen till then, starting from the root of the sentence’s tree. Some examples for CC and CI are added in the Appendix (Figs.~\ref{fig:complete_tree} and~\ref{fig:incomplete_tree}). Like~\cite{reddy2021can}, we use Berkeley Neural Parser\footnote{\url{https://spacy.io/universe/project/self-attentive-parser}\label{note1}} for constituency parsing (i.e., for both CI and CC).
In ConTreGE Complete tree (CC), the largest subtree completed by a given word refers to the subtree with the largest height that also satisfies the following conditions - the given word must be one of its leaves and all of its leaves must only contain words that have been seen till then.
In ConTreGE Incomplete tree (CI), the embeddings are constructed using incomplete subtrees that are constructed by retaining all the phrase structure grammar productions that are required to derive the words seen till then, starting from the root of the sentence’s tree. If incomplete subtrees are more representative of the brain’s processes, it would mean that the brain correctly predicts certain phrase structures even before the entire phrase or sentence is read.
The incremental top-down parser is a statistical syntactic parser that
processes input strings from left to right, producing partial
derivations in a top-down manner, using beam search as detailed in~\cite{roark2001probabilistic}. Specifically, we use the implementation as described here\footnote{\url{https://github.com/roarkbr/incremental-top-down-parser}\label{note2}}. The INC embeddings are obtained using exactly the same methods as described in Section 3 of~\cite{reddy2021can}.
The brain could be computing several possible top-down partial parses that can derive the words seen so far and modifying the list of possible parses as future words are read.
The INC feature space is constructed to encode the different possible parse trees that can derive the words seen so far. When considering parse tree based representations, the embeddings may contain information about what is yet to be seen by the subject. However, this is not a problem since it mimics the human capability of guessing what is to come next. With this embedding space, we attempt to measure the ability of the brain to predict future constituents correctly.
\noindent\textbf{(2) Dependency Tree-based Embeddings (DEP): }
Graph Convolutional Networks (GCNs) have been widely used to encode syntactic information from dependency parse trees~\cite{vashishth2019incorporating}.
Rather than using pretrained syntactic GCN word embeddings generated from Wikipedia~\cite{vashishth2019incorporating}, we create DEP embeddings using GCNs on the ``Narrative stories'' dataset as follows.
To generate syntactic word embeddings using GCN,
we first extract the dependency parse tree $G_s$=($V_s$, $\epsilon_s$)
for every sentence in our dataset $s$ = ($w_1, w_2,$\dots$, w_n$),
using the Stanford CoreNLP parser~\cite{manning2014stanford}.
Here, $V_s$ = \{$w_1$, $w_2$,$\dots$, $w_n$\} and $\epsilon_s$ denotes the labeled directed dependency edges of the form ($w_i$, $w_j$ , $l_{ij}$ ), where $l_{ij}$ is the dependency relation of $w_i$ to $w_j$.
GCN computations iteratively utilize the context defined by a word's neighbors in the graph to compute embedding for every word $w_i$.
Further, we also perform edge-wise gating to give importance to relevant edges and suppress noisy ones. We follow the architecture defined in~\citep{vashishth2019incorporating} for training a GCN on our dataset leading to syntactically-rich DEP embeddings.
Overall, GCN utilizes syntactic context to learn rich DEP embeddings.
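As an illustration of this step, the following is a minimal sketch of a single syntactic GCN layer with edge-wise gating; it simplifies the architecture of~\citet{vashishth2019incorporating} (e.g., per-relation and direction-specific parameters are omitted), and the module and variable names are our own.
\begin{verbatim}
import torch
import torch.nn as nn

class SyntacticGCNLayer(nn.Module):
    """One GCN layer over a sentence's dependency graph with edge-wise gating (sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.self_loop = nn.Linear(dim, dim)   # self-loop transformation
        self.msg = nn.Linear(dim, dim)         # message from a neighbouring word
        self.gate = nn.Linear(dim, 1)          # scalar edge gate to suppress noisy edges

    def forward(self, h, edges):
        # h: (N, dim) word embeddings of one sentence
        # edges: list of (head_index, dependent_index) pairs from the dependency parse
        out = [self.self_loop(h[j]) for j in range(h.size(0))]
        for i, j in edges:
            gate = torch.sigmoid(self.gate(h[i]))
            out[j] = out[j] + gate * self.msg(h[i])  # gated message head -> dependent
        return torch.relu(torch.stack(out))
\end{verbatim}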
\noindent\textbf{(3) Basic Syntactic Features:}
Similar to~\citep{wang2020probing,reddy2021can,zhang2022probing}, we use various multi-dimensional syntactic features such as Punctuation (PU), Complexity Metrics (CM), and Part-of-speech and dependency tags (PD), described briefly below.
\noindent\textbf{Punctuation (PU)}
The role of punctuation is to resolve syntactic and semantic ambiguity in the lexical grammar and encode relational discourse links between text units in sentences~\cite{briscoe1996syntax}.
Punctuation-based features are encoded using a one-hot vector where the type of punctuation is presented along with a word (e.g. . or ,).
\noindent\textbf{Complexity Metrics (CM)}
We use three features in the complexity metrics: Node Count (NC), Word Length (WL), and Word Frequency (WF).
The node count for each word is the number of subtrees that are completed by incorporating each word into its sentence.
Word length is the number of characters present in the word.
Word frequency reports log base-10 of the number of occurrences per billion of a given word in a large text corpus.
\noindent\textbf{Part-of-speech and Dependency tags (PD)}
We use the Spacy English dependency parser~\cite{spacy2} to extract the Part-of-speech (POS) and dependency tags.
Unlike DEP embeddings (which use GCNs), for PD we generate a one-hot vector for each word's POS tag and dependency tag.
The final vector, called PD, is the concatenation of the POS-tag and dependency-tag vectors. Note that DEP and PD features use different methods for dependency analysis -- PD features are simple one-hot encoded representations, while DEP features are syntactic embeddings learned using GCNs.
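A minimal sketch of how such PD vectors can be built with the Spacy parser is shown below; the model name and the small tag inventories in the example are assumptions for illustration only.
\begin{verbatim}
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")   # assumed English pipeline with a parser

def pd_features(sentence, pos_vocab, dep_vocab):
    """Concatenated one-hot POS-tag and dependency-tag vectors, one row per word."""
    rows = []
    for tok in nlp(sentence):
        pos = np.zeros(len(pos_vocab))
        dep = np.zeros(len(dep_vocab))
        if tok.pos_ in pos_vocab:
            pos[pos_vocab.index(tok.pos_)] = 1.0
        if tok.dep_ in dep_vocab:
            dep[dep_vocab.index(tok.dep_)] = 1.0
        rows.append(np.concatenate([pos, dep]))
    return np.stack(rows)

# Example with small assumed tag inventories (the full spaCy tag sets are used in practice)
feats = pd_features("Brazil won four world cups till 2002",
                    pos_vocab=["NOUN", "PROPN", "VERB", "NUM", "ADP"],
                    dep_vocab=["nsubj", "dobj", "nummod", "prep", "pobj", "compound"])
\end{verbatim}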
\noindent\textbf{(4) BERT Features}
Given an input sentence, the pretrained BERT~\cite{devlin2018bert} outputs token representations at each layer.
Since BERT embeds a rich hierarchy of linguistic signals (surface information at the bottom, syntactic information in the middle, and semantic information at the top~\cite{jawahar2019does}), we use the \#tokens $\times$ 768D vector from the last hidden layer to obtain the semantic embeddings.
For uniformity of feature dimensions, we used PCA to bring down the dimensions to 250.
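The sketch below illustrates this feature-extraction step with the HuggingFace Transformers and scikit-learn libraries; the checkpoint name and the placeholder sentence list are assumptions, and in practice PCA is fit on the token features of the full stimulus text.
\begin{verbatim}
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.decomposition import PCA

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def last_layer_features(sentences):
    """One 768-d row per token, taken from BERT's last hidden layer."""
    rows = []
    with torch.no_grad():
        for sent in sentences:
            enc = tokenizer(sent, return_tensors="pt")
            rows.append(bert(**enc).last_hidden_state[0])   # (#tokens, 768)
    return torch.cat(rows).numpy()

all_sentences = ["Brazil won four world cups till 2002."]   # placeholder; real stimulus text here
features = last_layer_features(all_sentences)
# With the full story text, PCA reduces 768 dims to 250 (it needs at least 250 token rows):
# reduced = PCA(n_components=250).fit_transform(features)
\end{verbatim}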
\section{Dataset Curation}
\vspace{-0.1cm}
\noindent\textbf{Brain Imaging Dataset}
\label{sec:dataset}
The ``Narratives'' collection aggregates a variety of fMRI datasets collected while human subjects listened to real spoken stories~\cite{nastase2021narratives}.
We analyze data from 82 subjects listening to the story titled `PieMan' with 282 TRs (repetition time -- fMRI recorded every 1.5 sec.). We chose this story since it has the maximum number of subjects in the ``Narratives'' collection. The dataset is in English and contains 957 words across 67 sentences. The story duration is 07m:02s. We use the multi-modal parcellation of the human cerebral cortex (Glasser Atlas: 180 ROIs in each hemisphere) to display the brain maps~\cite{glasser2016multi}, since the Narratives dataset contains annotations tied to this atlas.
The data covers eight language brain ROIs with the following subdivisions: (i) angular gyrus (AG: PFm, PGs, PGi, TPOJ2, and TPOJ3); (ii) anterior temporal lobe (ATL: STSda, STSva, STGa, TE1a, TE2a, TGv, and TGd); (iii) posterior temporal lobe (PTL: A5, STSdp, STSvp, PSL, STV, TPOJ1); (iv) inferior frontal gyrus (IFG: 44, 45, IFJa, IFSp); (v) middle frontal gyrus (MFG: 55b); (vi) inferior frontal gyrus orbital (IFGOrb: a47r, p47r, a9-46v), (vii) posterior cingulate cortex (PCC: 31pv, 31pd, PCV, 7m, 23, RSC); and (viii) dorsal medial prefrontal cortex (dmPFC: 9m, 10d, d32)~\cite{baker2018connectomic,milton2021parcellation,desai2022proper}.
The dataset has been made available freely without restrictions by~\citet{nastase2021narratives}.
\noindent\textbf{Downsampling}
Since the rate of fMRI data acquisition (TR = 1.5sec) was lower than the rate at which the text stimulus was presented to the subjects, several words fall under the same TR in a single acquisition.
Hence, we match the stimulus acquisition rate to fMRI data recording by downsampling the stimulus features using a 3-lobed Lanczos filter~\cite{lebel2021voxelwise}.
After downsampling, we average the word-embeddings within each TR to obtain chunk-embedding for each TR.
\noindent\textbf{TR Alignment}
To account for the slowness of the hemodynamic response, we model the hemodynamic response function using a finite impulse response (FIR) filter per voxel, fit for each subject separately, with 8 temporal delays corresponding to 12 seconds.
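The following sketch illustrates the chunking and delay construction described above; plain within-TR averaging stands in for the Lanczos filtering, and all function names and shapes are assumptions.
\begin{verbatim}
import numpy as np

def average_within_tr(word_embs, word_onsets, n_trs, tr=1.5):
    """Chunk features per TR by averaging the embeddings of the words inside it.

    (The paper additionally low-pass filters with a 3-lobed Lanczos kernel;
    plain averaging is used here as a simplified stand-in.)
    """
    chunks = np.zeros((n_trs, word_embs.shape[1]))
    for t in range(n_trs):
        in_tr = (word_onsets >= t * tr) & (word_onsets < (t + 1) * tr)
        if in_tr.any():
            chunks[t] = word_embs[in_tr].mean(axis=0)
    return chunks

def add_fir_delays(chunks, n_delays=8):
    """Stack 8 delayed copies (12 s at TR = 1.5 s) so regression can fit an FIR HRF."""
    delayed = []
    for d in range(1, n_delays + 1):
        m = np.roll(chunks, d, axis=0)
        m[:d] = 0.0                      # zero the rows that wrapped around
        delayed.append(m)
    return np.hstack(delayed)
\end{verbatim}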
\section{Methodology}
\label{headings}
\vspace{-0.1cm}
\noindent\textbf{Encoding Model}
To explore how and where syntactic and semantic specific features are represented in the brain when listening to stories, we extract different features describing each stimulus sentence and use them in an encoding model to predict brain responses.
If a feature is a good predictor of a specific brain region, information about that feature is likely encoded in that region.
The main goal of each fMRI encoder model is to predict brain responses associated with each brain voxel when given stimuli.
We train a model per subject separately. Following the literature on brain encoding~\cite{wehbe2014simultaneously,toneva2020modeling,caucheteux2021model, reddy2021can,toneva2021same,zhang2022probing,oota2022neural,oota2022visio}, we choose to use a ridge regression model instead of more complicated models.
We plan to explore more such models as part of future work.
The ridge regression objective function for the $i^{th}$ example is $f(X_{i})= \underset{W}{\text{min}} \lVert Y_i - X_{i}W \rVert_{F}^{2} + \lambda \lVert W \rVert_{F}^{2}$.
Here, $W$ are the learnable weight parameters, $\lVert.\rVert_{F}$ denotes the Frobenius norm, and $\lambda >0$ is a tunable hyper-parameter representing the regularization weight. $\lambda$ was tuned on a small disjoint validation set obtained from the training set.
\noindent\textbf{Cross-Validation}
We follow 4-fold (K=4) cross-validation. All the data samples from K-1 folds were used for training, and the model was tested on samples of the left-out fold.
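A minimal per-subject sketch of the voxelwise ridge encoding model with 4-fold cross-validation is given below; the regularization grid and the inner validation split are assumptions made for illustration.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def fit_encoding_model(X, Y, alphas=(0.1, 1.0, 10.0, 100.0)):
    """Voxelwise ridge encoding model, 4-fold CV, lambda tuned on an inner split.

    X: (n_TRs, n_features) delayed stimulus features; Y: (n_TRs, n_voxels) fMRI data.
    Returns predicted responses for every TR, assembled across the 4 test folds.
    """
    preds = np.zeros_like(Y)
    for train_idx, test_idx in KFold(n_splits=4).split(X):
        cut = int(0.9 * len(train_idx))           # small disjoint validation split
        tr, va = train_idx[:cut], train_idx[cut:]
        best_alpha, best_err = None, np.inf
        for a in alphas:
            err = np.mean((Ridge(alpha=a).fit(X[tr], Y[tr]).predict(X[va]) - Y[va]) ** 2)
            if err < best_err:
                best_alpha, best_err = a, err
        model = Ridge(alpha=best_alpha).fit(X[train_idx], Y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    return preds
\end{verbatim}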
\noindent\textbf{Evaluation Metric} We evaluate our models using the popular brain encoding evaluation metric, $R^2$. Let TR be the number of time repetitions. Let $Y=\{Y_i\}_{i=1}^{TR}$ and $\hat{Y}=\{\hat{Y}_i\}_{i=1}^{TR}$ denote the actual and predicted value vectors for a single voxel. Thus, $Y\in R^{TR}$ and also $\hat{Y}\in R^{TR}$.
We use R$^{2}(Y, \hat{Y})$ metric to measure the coefficient of determination for every voxel.
We average $R^2$ score over all voxels in a region to get region-level aggregated metric. Finally, they are further averaged across all subjects to obtain final region-level metrics, which are reported with mean and standard deviation.
\noindent\textbf{Statistical Significance}
We run a permutation test to check if $R^2$ scores are significantly higher than chance. We permute blocks of contiguous fMRI TRs, instead of individual TRs, to account for the slowness of the underlying hemodynamic response.
We choose a standard value of 10 TRs.
The predictions are permuted within fold 5000 times, and the resulting $R^2$ scores are used as an empirical distribution of chance performance, from which the p-value of the unpermuted performance is estimated.
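The block permutation procedure can be sketched as follows; the random seed and the handling of a final shorter block are implementation assumptions.
\begin{verbatim}
import numpy as np
from sklearn.metrics import r2_score

def block_permutation_pvalue(y_true, y_pred, block=10, n_perm=5000, seed=0):
    """Empirical p-value for one voxel's R^2 using block-wise permutation.

    Contiguous blocks of 10 TRs are shuffled to respect the slow hemodynamics.
    """
    rng = np.random.default_rng(seed)
    observed = r2_score(y_true, y_pred)
    n_blocks = int(np.ceil(len(y_pred) / block))
    null = np.empty(n_perm)
    for i in range(n_perm):
        order = rng.permutation(n_blocks)
        perm = np.concatenate(
            [y_pred[b * block:(b + 1) * block] for b in order]
        )
        null[i] = r2_score(y_true, perm)
    return (np.sum(null >= observed) + 1) / (n_perm + 1)
\end{verbatim}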
We also run a bootstrap test, to test if a model has a higher $R^2$ score than another.
In each iteration, we sample with replacement the predictions of both models for a block of TRs, compute the difference of their $R^2$, and use the resulting distribution to estimate the p-value of the unpermuted difference.
Finally, the Benjamni-Hochberg False Discovery Rate (FDR) correction~\cite{benjamini1995controlling} is used for all tests (appropriate because fMRI data is considered to have positive dependence~\cite{genovese2000bayesian}).
The correction is performed by grouping all the voxel-level p-values (i.e., across all subjects and feature groups) and choosing one threshold for all of our results.
The correction is done this way as we test multiple prediction models across multiple voxels and subjects.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.65\linewidth]{images/narratives_individual_lefthemisphere_significantpairs.pdf}
\includegraphics[width=0.65\linewidth]{images/narratives_individual_righthemisphere_significantpairs.pdf}
\vspace{-0.3cm}
\caption{\textbf{Performance of Individual Embedding Methods}: ROI-wise analysis of the prediction performance of various feature sets. We show the \% of ROI voxels with a significant increase in prediction performance. Each bar shows avg \%; error bars show standard error across 82 subjects. Left hemisphere (Top); Right hemisphere (Bottom).}
\label{fig:listening_syngcn_all_left}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.7\linewidth]{images/synparsers_brainmaps.pdf}
\caption{$R^2$ score per voxel for the whole brain.
(a) PU (b) CM (c) PD (d) CC (e) CI (f) INC (g) DEP (h) BERT.}
\label{fig:listening_left}
\end{figure*}
\section{Experiments and Results}
\label{sec:experiments}
We discuss detailed hyper-parameter settings in Appendix~\ref{sec:hyperParameters}.
\noindent\textbf{Which word representations are semantic versus syntactic?} We first empirically show that syntactic embeddings do not encode a significant amount of semantic information.
In particular, we train the RidgeCV regression model in a 10-fold cross-validation setting to predict the semantic GloVe~\cite{pennington2014glove} features (300 dimensions) using syntactic embeddings for all the representations, similar to earlier works~\cite{caucheteux2021disentangling,reddy2021can,zhang2022probing}.
Average $R^2$ scores are as follows: BERT (0.560), CC (0.052), CI (0.020), DEP (0.170), INC (0.040), PD (0.183), CM (0.027), and PU (0.005). These $R^2$ scores indicate that (a) overall, BERT has high semantic information compared to other embeddings, and (b) constituency parsers have low semantic information compared to DEP.
Overall, all the syntactic embeddings consist of very low semantic information. Hence, it is reasonable to infer that any additional variance predicted by the syntactic parsing methods compared to the semantic feature space (BERT) is mainly due to their syntactic information.
\noindent\textbf{Performance of individual embedding methods:}
In order to assess the performance of the fMRI encoder models learned using the individual syntactic and semantic representations, we computed the $R^2$ scores between the predicted and true responses across various ROIs of the language network.
Fig.~\ref{fig:listening_syngcn_all_left} reports the \% of ROI voxels with significant $R^2$ scores (based on a hypothesis test where the $R^2$ score for each voxel is greater than 0) across different representations for different language regions in the left and right hemispheres.
We make the following observations from Fig.~\ref{fig:listening_syngcn_all_left}: (1) Among basic syntactic features, PD features perform best across most of the language regions, whereas CM yields the second-best result. (2) Among the syntactic embedding methods, CC encodes syntactic information better in the language regions such as temporal lobes (ATL and PTL) and MFG. (3) Among the syntactic embedding methods, DEP embeddings predict brain activity better in the language regions (PCC and IFG of left hemisphere, and AG, IFGorb, and PCC of right hemisphere). (4) Semantic embeddings using BERT are the best across all regions in the right hemisphere, but the effectiveness of BERT is rather mixed in the left hemisphere.
Further, we report the avg $R^2$ scores across all different language ROIs in the Appendix (Fig.~\ref{fig:listening_avg_r2_allmodels}).
We further demonstrate the performance of embedding methods for various sub-regions of each language ROI in the Appendix Figs.~\ref{fig:listening_avg_r2_AG} to~\ref{fig:listening_avg_r2_dmPFC}. We observe the following from these figures: (1) In the ATL region (Fig.~\ref{fig:listening_avg_r2_ATL}), CC better encodes in the superior temporal sulcus with dorsal attention (STSda). For STS in ventral attention (STSva), CC encodes better in the left hemisphere while DEP is better in the right. (2) In the PTL region (Fig.~\ref{fig:listening_avg_r2_PTL}), CC is best for STSdp sub-region. (3) In the IFG region (Fig.~\ref{fig:listening_avg_r2_ifg}), DEP is better aligned with 44 region whereas CC is better aligned with IFJa region. These results are in line with observations made in~\cite{pallier2011cortical}.
Overall, the higher percentage of significant voxels across the frontal and temporal regions suggests that language comprehension may be associated with both frontal and temporal regions~\cite{cohen2021does}.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.78\linewidth]{images/narratives_syngcn_groups_lefthemisphere.pdf}
\includegraphics[width=0.78\linewidth]{images/narratives_syngcn_groups_righthemisphere.pdf}
\caption{\textbf{Additional Predictive Power of various Representations}: For each model, we show the \% of ROI voxels with a significant increase in prediction performance. Each bar shows avg \%; error bars show standard error across 82 subjects. ‘-’ indicates a hypothesis test for the difference in $R^2$ scores between the two feature groups being larger than 0. Left hemisphere (Top); Right hemisphere (Bottom). Note that PU values here are slightly different from Fig.~\ref{fig:listening_syngcn_all_left} since here the FDR correction was done across all the groups.}
\label{fig:listening_syngcn_all_left_pairs}
\end{figure*}
We also report brain maps with avg $R^2$ for all the representations in Fig.~\ref{fig:listening_left}.
From Figs.~\ref{fig:listening_syngcn_all_left} and~\ref{fig:listening_left}, we can infer that the different word representations, including all syntactic and semantic methods, are highly distributed across ROIs of language network. In particular, PTL and MFG have high overlap for both syntactic (CC, CI, DEP, INC), and semantic (BERT) features. Also, ROIs such as PTL, IFGOrb and PCC have higher overlap with PD. Most of these observations agree with previous findings on the brain networks of language processing~\cite{friederici2011brain,fedorenko2014reworking,caucheteux2021disentangling,reddy2021can,zhang2022probing}, supporting that both syntax and semantics are distributed across language ROIs. Lastly, similar to an earlier study~\cite{blank2016syntactic}, basic syntactic features are much less associated with voxels in AG region.
\noindent\textbf{Additional predictive power of various representations}
Many feature spaces have overlapping information, e.g., PD (part-of-speech and dependency) tags include punctuation, BERT vectors have been shown to encode syntax~\cite{jawahar2019does,luoma2020exploring}, and DEP embeddings built from GCNs encode some POS tag information. Are the various representations capturing very similar (i.e., redundant) signals, or do they capture new information that is additionally useful for predicting brain activations? To answer this question, we first organize the feature groups in increasing order of syntactic information.
We build hierarchical feature groups in increasing order of syntactic information and test for significant differences in prediction performance between two consecutive groups. We start with the simple feature -- punctuation (PU) and then add more complex features in this order: the complexity metrics (CM), POS and dependency tags (PD), \{CC, CI, INC, DEP\}, and lastly, BERT. Fig.~\ref{fig:listening_syngcn_all_left_pairs} reports the \% of ROI voxels with significant $R^2$ scores (hypothesis test where the difference in $R^2$ scores between the two feature groups is larger than 0) across feature groups for different ROIs in the left and right hemispheres, respectively.
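A minimal sketch of this nested (hierarchical) feature-group comparison is shown below; it assumes precomputed, time-aligned feature matrices and voxel responses, and uses plain ridge regression with illustrative hyper-parameters rather than the exact cross-validated encoder used here.
\begin{verbatim}
# Minimal sketch (not the authors' code): fit an encoder on an increasing stack of
# feature groups and record the per-voxel gain in cross-validated R^2 over the
# previous group. Variable names and hyper-parameters are illustrative only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

def nested_r2_gains(feature_groups, Y_bold, alpha=1.0, cv=10):
    """feature_groups: ordered list of (name, 2-D array), e.g. PU, CM, PD, ..., BERT."""
    gains, prev_r2, X = {}, None, None
    for name, F in feature_groups:
        X = F if X is None else np.hstack([X, F])        # cumulative feature stack
        pred = cross_val_predict(Ridge(alpha=alpha), X, Y_bold, cv=cv)
        r2 = np.array([r2_score(Y_bold[:, v], pred[:, v])
                       for v in range(Y_bold.shape[1])])
        # first group keeps its raw R^2; later groups store the additional R^2
        gains[name] = r2 if prev_r2 is None else r2 - prev_r2
        prev_r2 = r2
    return gains
\end{verbatim}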
We make the following observations from Fig.~\ref{fig:listening_syngcn_all_left_pairs}. (i) Unlike~\cite{reddy2021can}, we find that punctuation features yield a lower predictive performance across language regions for listening in both the left and right hemispheres. This is intuitive since punctuation marks are not ``visible'' when listening. (ii) Amongst CC, CI, INC, and DEP, after controlling for basic syntactic features \{PD, CM, PU\}, CC displays a large \% of significant voxels across multiple language sub-regions, largest in the ATL, PTL, and MFG in the left hemisphere, and in the IFGOrb, PCC, and dmPFC in the right hemisphere. This means there are voxels in these language sub-regions that capture hierarchical English grammar syntax beyond the simple syntax signals captured by PD, CM, and PU. (iii)
The DEP parser explains additional variance after controlling for basic syntactic features for the AG region, which is mainly a knowledge store of thematic relations between entities.
Also, DEP yields a large \% of significant voxels for the IFG region in the left hemisphere and for the PCC region in the right hemisphere.
Although INC does not show any additional variance in the left hemisphere, it performs well for IFG and MFG in the right hemisphere. (iv) On top of these representations, BERT adds to the variance the most in the context of CC, CI, INC, and DEP features in both hemispheres.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{images/narratives_pairs_lefthemisphere_new.pdf}
\includegraphics[width=0.8\linewidth]{images/narratives_pairs_righthemisphere_new.pdf}
\vspace{-0.3cm}
\caption{\textbf{Pairwise Predictive Power Comparison for Syntactic Parse Methods and BERT}: For each model, we show the percentage of ROI voxels in which we see a significant increase in prediction performance. Each bar represents the average percentage across 82 subjects, and the error bars show the standard error across subjects. ‘-’ indicates a hypothesis test for the difference in $R^2$ scores between the two feature groups being larger than 0. Left hemisphere (Top) and Right hemisphere (Bottom).}
\label{fig:listening_syngcn_bert_r2_left_new}
\end{figure*}
\noindent\textbf{Pairwise predictive power comparison for syntactic parse methods and BERT}
To compare the relative extra syntactic information in the various parse-based representations, we compute the difference in $R^2$ between every pair of representations from \{CC, CI, DEP\}. For this analysis, we ignore INC since it performed worst, as shown in Fig.~\ref{fig:listening_syngcn_all_left}. Thus, we plot the \% of significant ROI voxels for \{CC, DEP\}-\{CC\} and other such feature-pairwise combinations in Fig.~\ref{fig:listening_syngcn_bert_r2_left_new} for both hemispheres. We make the following observations from Fig.~\ref{fig:listening_syngcn_bert_r2_left_new}. (i) CC and CI show greater variance in brain predictivity (ATL and PTL for both hemispheres; MFG, IFGOrb and dmPFC of the left hemisphere) even after controlling for DEP. Also, CC and DEP show greater variance after controlling for CI. However, DEP and CI have a negligible \% of significant ROI voxels after controlling for CC, specifically for the temporal lobe (ATL and PTL) and frontal regions (IFG and MFG). Thus, we can conclude that constituency trees, specifically CC, encode similar syntactic information as DEP in the temporal lobe (ATL and PTL) and frontal regions (IFG and MFG). Also, DEP, based on dependency trees, does not carry additional syntactic information compared to constituency trees, except for the AG, IFGOrb, PCC and dmPFC regions.
(ii) While BERT provides improvement over CC, CI and DEP in most brain areas (especially in MFG and dMPFC), surprisingly in AG and IFG, BERT does not provide much additive value.
\section{Discussion}
\vspace{-0.1cm}
In this section, we correlate our empirical findings about syntactic parsing methods with previously proposed neuroscience theories.
From Fig.~\ref{fig:listening_syngcn_all_left_pairs}, we observe that activity in the left temporal lobe (ATL and PTL) seems to be predicted well using either CC or basic syntactic (PD) representations. These results are supported by the theory of~\citet{matchin2020cortical}, who concluded that parts of the PTL are involved in hierarchical lexical-syntactic structure building, while the ATL is a knowledge store of entities.
While activity in the left IFGOrb, left PCC, and left AG seems to be better modeled by basic syntactic feature (PD) representations, that in MFG seems to be related to CC representations. DEP embeddings seem to perform better for activity in the left AG, left ATL and left IFG. This supports the theory of ~\citet{matchin2020cortical}, which reports that ATL is a knowledge store of entities and AG is a store of thematic relations between entities.
A sub-ROI in the left AG, namely parietal area G inferior (PGi), has a significantly larger number of voxels sensitive to dependency features when we control for all other syntactic features. On the other hand, sub-ROIs in the right temporo-parieto-occipital junction (TPOJ) are more sensitive to incremental top-down syntactic features (Appendix Fig.~\ref{fig:listening_ag_pairs}). While it is known that the AG is sensitive to stimuli that are connected through a narrative rather than unconnected words~\cite{baker2018connectomic}, the current findings suggest that distinct sub-ROIs within the AG are related to different syntactic features.
Furthermore, sub-regions in the prefrontal cortex such as Brodmann area (BA) 44 and the inferior frontal junction area (IFJa) also seem to be related to representations of the dependency parser (Appendix Fig.~\ref{fig:listening_ifg_pairs}). The results in the prefrontal cortex seem to concur with the observations of~\citet{grodzinsky2006neuroimaging} and~\citet{kaan2002brain}, who have shown that Broca’s area (Brodmann areas 44 and 45) has higher brain activation while processing complex sentences. Since narrative listening also involves processing highly complex sentences, the consistent activation found in left Brodmann areas 44 and 45 may relate to the parsing of sentences or to determining whether they have distinct meanings.
The right hemisphere activation in the language network (AG, ATL, PTL, IFG, MFG, IFGOrb, PCC, and dMPFC) on the whole seems to be associated with basic syntactic features such as word length, word frequency, word count as embodied in CM representations.
In linguistic studies, INC has been shown to be effective in checking whether sentences with different syntax have the same or different meanings. This is in line with our observation that representations from the INC parser seem to be more related to language regions (inferior frontal gyrus, IFG) in the right hemisphere, as shown in Fig.~\ref{fig:listening_ifg_pairs}.
Overall,~\citet{grodzinsky2006neuroimaging} concluded that syntax processing is not limited to specific regions (left IFG or Broca's area). Along with IFG, other regions such as PTL, ATL, MFG, and IFGOrb are also involved in different stages of syntax processing. Our results (Fig.~\ref{fig:listening_syngcn_all_left}) also seem to support distributed representation of syntax across the language network. Moreover, our results clearly show the kind of syntax encoded by these individual ROIs.
\section{Conclusion}
\vspace{-0.2cm}
We studied the relative importance of multiple constituency and dependency syntax parsing methods for fMRI prediction for the listening task. We find that (1) both CC and DEP are effective; CC is more important than CI, (2) CC is better in temporal cortex and MFG, while DEP is better in AG and PCC, (3) while BERT embeddings seem to be the most effective, syntactic embedding methods also explain additional variance for a few ROIs. In line with previous works, we find that syntax and semantic processing is spread across multiple brain areas.
\section{Limitations}
Although these experiments were performed on only one dataset, it is indeed large, with data from 82 participants. That said, it would be useful to perform experiments with more listening datasets.
We experiment with a linear encoder -- Ridge regression.
We plan to experiment with more complex encoders as part of future work.
This work was done on data related to English stories only. Several other languages belong to the same language family as English~\cite{malik2022investigation}. While we can expect the insights and learnings to hold across languages in the same language family as English, empirical validation needs to be done. For languages in other language families, syntactic structure may be very different from English. Hence, more work needs to be done to check which of these insights hold for datasets in other language families.
This work was conducted on a dataset where the participants were engaged in a listening task. However, the stimuli were represented in text form. We believe that representations derived from the audio form of the stimuli could lead to additional insights. Thus, more work needs to be done to design representations (such as prosodic features) for auditory stimuli.
\section{Ethical Statement}
We did not create any new fMRI data as part of this work. We used the Narratives-Pieman dataset, which is publicly available without any restrictions. The Narratives dataset can be downloaded from \url{https://datasets.datalad.org/?dir=/labs/hasson/narratives}. Please read their terms of use\footnote{\url{https://datasets.datalad.org/labs/hasson/narratives/stimuli/README}} for more details.
We do not foresee any harmful uses of this technology.
|
{
"arxiv_id": "2302.08669",
"language": "en",
"timestamp": "2023-02-20T02:05:56",
"url": "https://arxiv.org/abs/2302.08669",
"yymm": "2302"
} | \section{INTRODUCTION}
While there have been major advances in autonomy and reinforcement learning (RL), real world deployment of autonomous agents remains a challenge. Of the numerous factors that contribute to this, arguably the most important one is user trust in the autonomous agent \cite{trust_in_automation, buildingtrust}. Trust depends heavily on user belief in the capabilities of the agent, and establishing appropriate levels of trust can lead to proper usage of and reliance on autonomy. An autonomous agent can help establish trust by self-assessing and communicating its \emph{competency}---its capability and confidence in performing a specified task \cite{Aitken2016AssurancesAM, israelsen}. One way to do this is to have the agent analyze and forecast its own behavior to communicate its competency in the form of a calibrated distribution of outcomes~\cite{acharya2022competency}.
To communicate actionable information about the outcome distribution, the agent must be
able to perform accurate forecasting over the full time horizon relevant to tasking. Inspired by model-based RL, we use a neural network world model~\cite{ha2018worldmodels} to capture the dynamics and uncertainty associated with the environment, and the agent's interaction with it. Then, we can forecast full trajectories by rolling out the agent's policy with the world model~\cite{acharya2022competency, rosemaryke_longtermfuture}, allowing for competency assessment in multi- or novel task settings. For these models, uncertainty quantification is vital as it ensures that the forecasts are accurate and the generated outcome distributions are trustworthy. Trajectory uncertainties arise from various sources, both from the environment as well as from the modelling process, and can be separated into aleatoric and epistemic components~\cite{chua2018deep, clements_rl_risks_uncertainty}. Our approach, shown in Figure~\ref{overview}, enables the long horizon forecasting over task duration and self-assessment of competency accounting for both components of uncertainty.
\begin{figure}
\centering
\includegraphics[scale=0.36]{images/overview_v3.pdf}
\caption{View of forecasted trajectories and their separation into aleatoric and epistemic uncertainty components. Identification and communication of uncertainty components in this way leads to improved competency reporting and outcome assessment by the autonomous agent.}
\label{overview}
\end{figure}
Aleatoric uncertainty~\cite{kendall_gal} is irreducible and arises due to factors such as process noise, measurement noise, and unobserved or partially observed environmental variables. Epistemic uncertainty is reducible with sufficient training data and arises from the modelling process. To quantify epistemic uncertainty, a distribution is placed over the model parameters; to quantify aleatoric uncertainty, a distribution is placed over the model output. Aleatoric uncertainty is naturally intertwined with epistemic uncertainty due to the dependence of model output on the model parameters, making total uncertainty particularly challenging to quantify when both components are significant, as in most real-world environments. For long horizon forecasting, where uncertainty compounds over time, this interplay between aleatoric and epistemic components can dramatically affect the outcome distribution.
In this work, we contribute: (i) a case for combined aleatoric and epistemic uncertainty quantification and its pertinence for real world deployment of competency-aware autonomous agents; (ii) a new method for quantifying aleatoric uncertainty by accounting for any systemic epistemic uncertainty, which is pre-quantified using state-of-the-art ensemble based methods; (iii) results on two different autonomous tasking environments to validate the generality and utility of our approach when there are different levels of epistemic uncertainty in a high aleatoric uncertainty environment. Background information and related works are provided in Sec. \ref{background}. We then describe our technical approach, including a new model training process using deep generative models specifically for aleatoric uncertainty, in Sec. \ref{methods}. Results on two RL robotics environments are provided in Sec. \ref{results}. Finally, we provide conclusions and future work in Sec. \ref{conclusions}.
\section{BACKGROUND AND RELATED WORK}\label{background}
\subsection{Uncertainty Quantification in Deep Learning and RL}
Since the inception of modern deep learning methods, Bayesian deep learning \cite{MacKay92bayesianmethods, MacKay95probablenetworks, neal_bayesian} has received much consideration as a way of quantifying epistemic uncertainty. This comes from the recognition that deep learning models inherently contain uncertainties from their training that must be accounted for, especially when used in applications with real world impact, such as in medical domains \cite{medicalceeplearning2, medicaldeeplearning}, robotic systems \cite{roboticsdeeplearning, roboticsdeeplearning2}, and autonomous vehicles \cite{autdrivingdeeplearning, autdrivingdeeplearning2}. Practical methods such as Monte Carlo dropout \cite{kendall_gal}, deep ensembles \cite{ensembles}, and evidential networks \cite{evidentialnetwork, evidentialnetwork2} approximately quantify epistemic uncertainty. To account for aleatoric uncertainty, entropy-based or variance-based uncertainty separation is typically used \cite{depeweg2017decomposition}. However, these methods do not account for the double counting of epistemic uncertainty that may occur when there is still significant epistemic uncertainty during training, since they do not consider the impact of epistemic uncertainty when modelling the aleatoric uncertainty.
For RL robotics applications that are of particular interest in this work, epistemic uncertainty quantification has been studied extensively~\cite{jain2021deup, osband2021epistemic, eriksson2019epistemic}. This is justified because many of those applications are in simulation and/or controlled environments which present minimal aleatoric uncertainty. Hence, the focus in these works is instead on
recognizing insufficient training data or testing regions that fall outside the training range. Out-of-distribution (OOD) or novelty detection is an important class of problems where epistemic uncertainty information is used \cite{sedlmeier2019uncertainty, wu2021uncertainty}. Having OOD knowledge when deployed into the real world can improve the reliability of autonomous agents in novel scenarios. Moreover, having knowledge about the epistemic uncertainty can also aid \enquote{exploration}, especially for RL systems \cite{explorationinRL}. Therefore, modelling of epistemic uncertainty has many benefits for a competency-aware autonomous agent, and this work relies on epistemic uncertainty quantification using a state-of-the-art ensemble technique \cite{ensembles}.
Aleatoric uncertainty
quantification has also been addressed for RL. One primary area of interest is in
quantifying \enquote{risk} \cite{osband2016risk}, which considers the variance of task returns (e.g. outcome rewards) rather than just the mean \cite{rigter2021risk}. This is useful in recognizing and optimizing for worst-case scenarios \cite{neuneier1998risk}. In the context of model-based RL, there are works that have looked at uncertainty propagation by using
probabilistic dynamics models \cite{chua2018deep}. However, the propagation horizon is fairly short and the authors also assume Gaussian distributions for aleatoric uncertainties. Other works have separately looked at \enquote{perceptual uncertainty}, where known covariance data from sensors are used in trajectory forecasting
\cite{ivanovic2022propagating}. While useful when we have sensor data, this method also makes a strong Gaussian assumption, which will not hold for long horizon forecasting with compounding uncertainties and non-linear behaviors.
In this work, we are interested in creating a competency-aware autonomous agent that can forecast its behavior over long time horizons and communicate the likely outcomes under the provided initial conditions. As such systems expand into the real world, quantifying aleatoric uncertainty becomes just as important as quantifying epistemic uncertainty. This is especially true since competency-aware autonomous agents must forecast and recognize the various conditions that can impact their behaviors and thus their competency over \emph{long time horizons}, where many assumptions---such as Gaussianity---will break down. For example, for an autonomous vehicle that is reporting its competency, the presence of wet driving conditions leads to extra uncertainty in its behavior which manifests itself as aleatoric uncertainty. If the task is to get to a certain location within a designated amount of time, the driving conditions directly contribute to the distribution of outcome traverse times, which determine whether the task can be successfully completed. Thus, one of our primary objectives is to produce \emph{accurate distributions} over the likely outcomes. Since this process of outcome assessment heavily relies on the (correct) distribution of aleatoric uncertainty, we need accurate aleatoric quantification. However, our own experimentation and some past works~\cite{clements_rl_risks_uncertainty} have shown that it is difficult to directly quantify aleatoric uncertainty, as the effects of epistemic uncertainty will naturally be embedded in the process. Therefore, new methodologies are needed to directly account for the epistemic uncertainty in aleatoric model training.
\subsection{Problem Statement} \label{probstatement}
We consider an autonomous agent that interacts with its environment over time, resulting in a trajectory containing state and action information. The states describe both the agent's dynamics along with any relevant environmental conditions, and the actions are those performed by the agent within the environment. Such a trajectory is represented as:
\begin{equation}
\mathcal{\bm{T}} = \{\bm{s}_0, \bm{a}_0, \bm{s}_1, \bm{a}_1, \ldots, \bm{a}_{t-1}, \bm{s}_t, \ldots\},
\end{equation}
\noindent where $\bm{s}_t$ and $\bm{a}_t$ denote states and actions at time $t$, respectively.
Competency assessment requires access to a forecast of all future states up to time $T$ given only the current state and action resulting from any policy. As such, we assume the following distribution is available:
\begin{equation}\label{eq:traj_prob}
p(\bm{s}_1, \bm{s}_2, \ldots, \bm{s}_{T}|\bm{s}_0; \bm{a}_0, \bm{a}_1, \dots, \bm{a}_{T-1}),
\end{equation}
\noindent which allows sampling of new states $\hat{\bm{s}}_{t}$ such that,
\begin{equation}
\hat{\bm{s}}_{1}, \ldots, \hat{\bm{s}}_{T} \sim p(\bm{s}_{1}, \ldots, \bm{s}_{T}|\bm{s}_0; \bm{a}_0, \dots, \bm{a}_{T-1}),
\label{eq:sampletraj}
\end{equation}
\noindent and results in a collection of forecasted trajectories,
\begin{equation}
\mathcal{\bm{T}}_{\textit{forecasted}} = \{\bm{s}_0, \bm{a}_0, \hat{\bm{s}}_{1}, ..., \bm{a}_{T-1}, \hat{\bm{s}}_{T}\}.
\end{equation}
Since we are operating in a model-based RL framework, the learned model of the environment is used to forecast the states out to the desired time horizon $T$. However, there are uncertainties associated with the forecasted states and this is the focus of this work.
To quantify epistemic uncertainty, the learned model of Equation \ref{eq:traj_prob} is designed such that there is a distribution placed over its parameters. When different parameter values are sampled from this distribution, the model outputs vary accordingly. Hence, given an epistemic uncertainty quantified system with $M$ samples of model parameters, there are $M$ forecasted trajectories $\mathcal{\bm{T}}_{\textit{forecasted}}^{m=1:M}$. The best practice to compute the final prediction is to take an average over all available $m = 1:M$ model outputs.
For the purpose of aleatoric uncertainty quantification and to formalize our problem, we use a residual to define the difference between the observed trajectories and the corresponding forecasted trajectories. The residual arises due to the uncertainty in the forecasted states and can be separated into its two components: aleatoric and epistemic. Hence, the total residual $\bm{\epsilon}_{total}$ is defined as:
\vspace*{-\baselineskip}
\begin{equation} \label{total_res_eps}
\bm{\epsilon}_{total} = \bm{\epsilon}_{alea} + \bm{\epsilon}_{epist},
\end{equation}
\noindent where $\bm{\epsilon}_{alea}$ and $\bm{\epsilon}_{epist}$ denote contributions from aleatoric and epistemic uncertainty, respectively. The statistical definition of residual gives the following equation for total residual:
\vspace*{-\baselineskip}
\begin{equation} \label{total_res_state}
\bm{\epsilon}_{total} = \bm{y}_{true} - \frac{1}{M} \sum_{m=1}^M \hat{\bm{y}}_m,
\end{equation}
\noindent where $\bm{y}_{true} = \{ \bm{s}_1, \bm{s}_2, \ldots, \bm{s}_{T} \}$ are states from the observed trajectory $\mathcal{\bm{T}}$ and $\hat{\bm{y}}_m = \{ \hat{\bm{s}}_{1}^m, \hat{\bm{s}}_{2}^m, \ldots, \hat{\bm{s}}_{T}^m \}$ are states from forecasted trajectory $\mathcal{\bm{T}}_{\textit{forecasted}}^{m}$. The second term in Equation \ref{total_res_state} is the averaged value of all the available model output states. We use a temporally compressed version of $\bm{y}_{true}$ and $\hat{\bm{y}}_m$ to simplify the notation.
To define the epistemic residual, each of the individual model forecasts is compared against the mean forecast. This fits the formulation of state-of-the-art uncertainty decomposition methods, which present the variance of the individual model outputs as epistemic uncertainty \cite{depeweg2017decomposition}. Hence, the epistemic residual is defined as:
\begin{equation} \label{epist_res}
\bm{\epsilon}_{epist} = \hat{\bm{y}}_m - \frac{1}{M} \sum_{m=1}^M \hat{\bm{y}}_m.
\end{equation}
Then, following (\ref{total_res_eps}), the aleatoric residual $\bm{\epsilon}_{alea}$ is:
\vspace*{-\baselineskip}
\begin{equation}
\begin{split}
\bm{\epsilon}_{alea} &= \bm{\epsilon}_{total} - \bm{\epsilon}_{epist} = \bm{y}_{true} - \hat{\bm{y}}_m,
\end{split}
\end{equation}
\noindent where $\hat{\bm{y}}_m$ is sampled from any of the available $M$ ensemble models. Under this formulation, the aleatoric residual is dependent on both the observed and the forecasted trajectories.
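The residual definitions above can be summarized in the following minimal sketch (illustrative \texttt{numpy} code, not the implementation used in this work), where \texttt{y\_true} is an observed trajectory of shape $(T, D)$ and \texttt{y\_hat} stacks the $M$ ensemble forecasts with shape $(M, T, D)$.
\begin{verbatim}
# Minimal sketch: total, epistemic, and aleatoric residuals for one trajectory,
# given the observed states and an ensemble of forecasted states.
import numpy as np

def residuals(y_true, y_hat, m=None, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    if m is None:
        m = rng.integers(y_hat.shape[0])        # uniformly pick one ensemble member
    y_mean = y_hat.mean(axis=0)                 # ensemble-mean forecast
    eps_total = y_true - y_mean                 # total residual
    eps_epist = y_hat[m] - y_mean               # epistemic residual (member vs. mean)
    eps_alea = y_true - y_hat[m]                # aleatoric residual (total - epistemic)
    return eps_total, eps_epist, eps_alea
\end{verbatim}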
Our process is to first use existing methods of epistemic uncertainty quantification to place a distribution over model parameters. Then, different model outputs, representing forecasted trajectories, from the sampled model parameters are used in conjunction with the observed trajectory to define aleatoric residual $\bm{\epsilon}_{alea}$ and collect the training data. Using this data, the goal is to learn the distribution:
\vspace*{-\baselineskip}
\begin{equation}
p(\bm{\epsilon}_{alea, 1}, \bm{\epsilon}_{alea, 2}, \ldots, \bm{\epsilon}_{alea, T} | \bm{s}_0; \bm{a}_0, \bm{a}_1, \dots, \bm{a}_{T-1}),
\end{equation}
\noindent where $\bm{\epsilon}_{alea, t}$ represents the aleatoric residual at each time $t$ in a trajectory. We make no assumption regarding the distribution of the aleatoric residuals, and assume the availability of the first state, a policy, and a collection of forecasted trajectories which is used to define $\bm{\epsilon}_{alea, t}$.
\section{METHODS}\label{methods}
\begin{figure*}
\centering
\includegraphics[scale=0.40]{images/architecture_v3.pdf}
\caption{(Left) End-to-end model training process with temporal information excluded for clarity. First, the trajectory data collection is used to train an ensemble of $M$ models. The outputs from $M$ ensemble models are uniformly sampled to select $\hat{\bm{y}}_m$, which is used in conjunction with the training data $\bm{y}_{true}$ to get a residual aleatoric value $\bm{\epsilon}_{alea}$. The conditional variational autoencoder (CVAE) is trained on $\bm{\epsilon}_{alea}$ while the final loss is computed using the prediction $\bm{y}_{pred} = \hat{\bm{\epsilon}}_{alea} + \hat{\bm{y}}_m$ and true observation $\bm{y}_{true}$. (Right) A temporal view of CVAE, which contains a recurrent neural network (RNN) within both the encoder and the decoder to capture temporal memory. This detailed view also shows all the input and output variables, such as the true state $s_t$, actions $a_t$, ensemble model prediction state $\hat{s}_t^m$, and final prediction state $\hat{s}_{pred, t}$.}
\label{architecture}
\end{figure*}
\subsection{Deep Ensembles for Epistemic Uncertainty}\label{eps_uncertainty}
To quantify epistemic uncertainty, we follow the state-of-the-art method using deep ensembles \cite{ensembles}. The observed trajectory data is used to train $M$ ensemble models, each one designed to forecast states as defined in (\ref{eq:sampletraj}). Due to the temporal nature of our applications, we train a collection of recurrent neural networks (RNNs), where each one learns the optimal model parameters $\theta^*$ such that:
\begin{equation}
\theta^* = \argmax_\theta \prod^n_{i=1} p_\theta(\bm{s}^i_{1:T}|\bm{s}^i_0; \bm{a}^i_{0:T-1}),
\end{equation}
\noindent where $n$ is the number of training samples and $\theta$ corresponds to model parameters for a single model that is being trained. By training $M$ ensemble models, $M$ different model parameters $\theta_{m=1:M}$ are obtained. Then, the model output corresponding to each set of model parameters $\theta_m$ produces a collection of state forecasts $\hat{\bm{y}}_m = \{ \hat{\bm{s}}_{1}^m, \hat{\bm{s}}_{2}^m, \ldots, \hat{\bm{s}}_{T}^m \}$.
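A minimal PyTorch-style sketch of such an ensemble is shown below; the GRU architecture, hidden size, and training-loop details are illustrative assumptions and not the exact models used in this work.
\begin{verbatim}
# Minimal sketch (not the authors' code): an ensemble of M recurrent dynamics models,
# each trained independently so that the spread of their forecasts reflects
# epistemic uncertainty.
import torch
import torch.nn as nn

class RNNDynamics(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(state_dim + action_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, states, actions):
        # states: (B, T, state_dim), actions: (B, T, action_dim) -> next-state predictions
        h, _ = self.rnn(torch.cat([states, actions], dim=-1))
        return self.head(h)

def train_ensemble(loader, state_dim, action_dim, M=5, epochs=50):
    models = [RNNDynamics(state_dim, action_dim) for _ in range(M)]
    for model in models:                          # each member trained independently
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            for s, a, s_next in loader:           # batches of observed trajectories
                loss = nn.functional.mse_loss(model(s, a), s_next)
                opt.zero_grad(); loss.backward(); opt.step()
    return models
\end{verbatim}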
\subsection{Deep Generative Models for Aleatoric Uncertainty}
Generative models learn the joint probability distribution of the underlying training data. There are many variations of generative models, and they provide more flexibility than discriminative models in capturing different distribution types. This makes them suitable to model aleatoric uncertainty, especially over long horizon forecasting where complex compounding of uncertainty occurs~\cite{acharya2022uncertainty, huang2022evaluating}. To our knowledge, our work is the first to analyze the usage of generative models for aleatoric uncertainty isolation for long horizon forecasting.
The desired distribution to learn is $p(\bm{\epsilon}_{alea, 1:T}| \bm{s}_0; \bm{a}_{0:T-1})$, which represents the aleatoric residual as defined in Section \ref{probstatement}. We use a conditional variational autoencoder (CVAE) that uses a latent space to capture the stochasticity of the data \cite{Kingma2014, condVAE}. Since we are working with temporal data, a recurrent structure can be added within both the encoder and decoder portions of the CVAE so that there is memory within the network. An overview of the end-to-end training process alongside a detailed view of the temporal inputs and outputs of the CVAE is shown in Figure \ref{architecture}.
We use both the observed trajectory data and the forecasted trajectory data from the ensemble models to train the model. During the training process, we simultaneously sample from both of these trajectory sources. Then, the aleatoric residual $\bm{\epsilon}_{alea}$ is calculated as the difference between the observed trajectory states and the forecasted trajectory states. By incorporating sampled forecasted states in this way, we directly incorporate the residual contributions from epistemic uncertainty. Note that the aleatoric residual depends on each of the individual model outputs from an ensemble rather than the mean of all the models. This uses the variance information resulting from the variation in model parameters $\theta_m$, providing an indication of epistemic uncertainty \cite{depeweg2017decomposition}.
Although the input to and the output from the CVAE are $\bm{\epsilon}_{alea}$ and its prediction $\hat{\bm{\epsilon}}_{alea}$, respectively, the loss function optimizes the difference between the state prediction $\bm{y}_{pred} = \hat{\bm{\epsilon}}_{alea} + \hat{\bm{y}}_m $ and the observed state $\bm{y}_{true}$. This prevents any individual computed $\bm{\epsilon}_{alea}$ from being considered the \enquote{true} aleatoric residual value. The loss function follows the format of the original CVAE loss \cite{condVAE}, and is described as:
\begin{equation}\label{loss}
\mathcal{L}\left(\bm{\hat{s}}_{pred, 1:T}|\bm{s}_0, \bm{a}_{0:T-1}\right) = \sum_{t = 0}^{T-1} l_t,
\end{equation}
\noindent where
\vspace*{-\baselineskip}
\begin{equation}
\begin{split}
&l_t = \\
& - \E_{q_{\phi}\left(\bm{z}_{t}|\bm{s}_{0:t}; \bm{a}_{0:t}\right)} \left[\log p_{\theta}\left(\bm{\bm{\epsilon}}_{alea, t+1}|\bm{s}_{0:t} ; \bm{a}_{0:t}; \bm{z}_{t} \right) + \hat{\bm{s}}_{t+1}^m \right] \\
& \quad + \beta \times \KL\left[ q_{\phi}\left(\bm{z}_{t}|\bm{s}_{0:t} ; \bm{a}_{0:t}\right) || p_{\theta}\left(\bm{z}_{t}\right)\right],
\end{split}
\label{vae_loss}
\end{equation}
\noindent and $\phi$ represents the encoder parameters, $\theta$ represents the decoder parameters, $\bm{z}_{t}$ is the latent variable at time $t$, and $q_{\phi}$ is an approximate distribution over those latent variables. The $\beta$ variable in Equation \ref{vae_loss} weights the KL divergence term and helps regulate how the two terms in the loss interact with each other. Inclusion of $\beta$ helps to ensure that no mode collapse occurs in the CVAE training process. The full training process is further described in Algorithm \ref{alg:aleatoric}.
\begin{algorithm}
{\small{
\caption{Aleatoric uncertainty residual model training process }
\label{alg:aleatoric}
\begin{algorithmic}[1]
\State Collect $M$ ensemble models trained on observed trajectory data, resulting in model parameters $\theta_{m=1, \ldots, M}$
\For {each training batch}
\State Sample true trajectory states $\bm{y}_{true} = \{ \bm{s}_{1:T} \}$ and \par
\hskip \algorithmicindent actions $\bm{A}_{true}=\{ \bm{a}_{0:T-1} \}$
\State Uniformly sample $m \sim U\{1, \ldots, M\}$
\State Collect forecasted trajectory states $\hat{\bm{y}}_m = \{ \hat{\bm{s}}_{1:T}^m \}$ \par
\hskip \algorithmicindent using $p_{\theta_m}(\bm{s}_{1:T}|\bm{s}_0; \bm{a}_{0:T-1})$
\State Compute aleatoric residual $\bm{\epsilon}_{alea} = \bm{y}_{true} - \hat{\bm{y}}_m$
\State Train CVAE using $\bm{\epsilon}_{alea}$, $\bm{y}_{true}$, and $\bm{A}_{true}$
\State Compute loss using CVAE output aleatoric residual \par
\hskip \algorithmicindent $\hat{\bm{\epsilon}}_{alea}$ such that $\bm{y}_{pred} = \hat{\bm{\epsilon}}_{alea} + \hat{\bm{y}}_m $ is compared \par
\hskip \algorithmicindent to $\bm{y}_{true}$
\EndFor
\end{algorithmic}
}}
\end{algorithm}
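A minimal sketch of Algorithm~\ref{alg:aleatoric} as a training loop is shown below. The interfaces (\texttt{ensemble[m].rollout}, a \texttt{cvae} returning a reconstruction and a KL term) and the use of a mean-squared-error reconstruction term are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
# Minimal sketch (not the authors' code) of the residual-CVAE training loop.
import random
import torch

def train_residual_cvae(cvae, ensemble, loader, epochs=100, beta=1e-3, lr=1e-3):
    opt = torch.optim.Adam(cvae.parameters(), lr=lr)
    for _ in range(epochs):
        for s0, actions, y_true in loader:                  # observed trajectories
            m = random.randrange(len(ensemble))             # uniformly pick one member
            with torch.no_grad():
                y_hat_m = ensemble[m].rollout(s0, actions)  # forecasted trajectory (assumed API)
            eps_alea = y_true - y_hat_m                     # aleatoric residual target
            eps_hat, kl = cvae(eps_alea, y_true, actions)   # reconstruction + KL (assumed API)
            y_pred = eps_hat + y_hat_m                      # compare prediction to observation
            loss = torch.nn.functional.mse_loss(y_pred, y_true) + beta * kl
            opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}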
\begin{figure*}
\centering
\includegraphics[scale=0.39]{images/mmd_pusher_uav.png}
\caption{Maximum mean discrepancy (MMD) plots for the Pusher and UAV environment for the three comparison models in low epistemic + high aleatoric (solid line) and high epistemic + high aleatoric (dashed line) scenarios (lower MMD is better). The plots show the effectiveness of residual VAE in achieving the lowest MMD values.}
\label{mmd}
\end{figure*}
\subsection{Comparison Models}
To compare the developed residual CVAE model, we train two additional models that quantify aleatoric uncertainty:
\begin{itemize}
\item \textbf{Probabilistic MLP} \cite{kendall_gal}: A probabilistic multi-layer perceptron (MLP) which predicts a Gaussian distribution by outputting the mean and the variance. When deployed in a temporal environment, the next state is sampled from the output Gaussian distribution, and used as an input in the following time step. In state-of-the-art uncertainty-aware model-based RL \cite{chua2018deep}, this model is used to capture the aleatoric uncertainty. Hence, comparison against this model demonstrates how our developed method holds up against a tried-and-tested method.
\item \textbf{Full VAE} \cite{acharya2022competency}: A recurrent conditional VAE model that is trained on the full training data. This model makes no assumption on the distribution shape of the training data and has a recurrent structure to capture temporal correlations. It is designed for long horizon forecasting of the states but is not designed to isolate aleatoric uncertainty in its quantification process. Comparison against this model shows what a naive approach to aleatoric uncertainty quantification using generative models, without accounting for epistemic uncertainty, looks like.
\end{itemize}
\subsection{Metrics}
We use the following metrics to compare the models:
\begin{itemize}
\item \textbf{Maximum mean discrepancy (MMD) with Gaussian kernel}: a nonparametric statistical distance metric that compares the forecasted trajectories against the observed ones. Each forecasted state dimension is compared with the corresponding environmental observation at a temporal level, and the average across all of the states is reported (a minimal sketch of this computation is given after this list). This is a good indication of aleatoric uncertainty because the observed trajectories' stochasticities come entirely from aleatoric uncertainty sources. Hence, ensuring the distribution of the forecasted trajectories matches the observed trajectories ensures that the aleatoric distribution is being captured accurately.
\item \textbf{Brier score}: a proper scoring rule that measures the accuracy of predicted outcome probabilities using the mean squared error between the predicted and the actual outcome. This metric directly analyzes the outcome probabilities, which are important for competency-aware autonomous agents. Given that outcome distribution results from the model forecasted states distribution, this is an indirect metric on aleatoric uncertainty distribution that analyzes its impact on downstream outcome analysis.
\end{itemize}
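A minimal sketch of the MMD computation referred to above is given here; the fixed kernel bandwidth and function names are illustrative and not the evaluation scripts used in this work.
\begin{verbatim}
# Minimal sketch: squared MMD with a Gaussian kernel between forecasted and
# observed samples of a single state dimension at one time step.
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma ** 2))

def mmd2(forecast, observed, sigma=1.0):
    """forecast, observed: 1-D arrays of samples for one state dimension / time step."""
    k_ff = gaussian_kernel(forecast, forecast, sigma)
    k_oo = gaussian_kernel(observed, observed, sigma)
    k_fo = gaussian_kernel(forecast, observed, sigma)
    return k_ff.mean() + k_oo.mean() - 2.0 * k_fo.mean()
\end{verbatim}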
\section{RESULTS}\label{results}
We present uncertainty forecasting results for two levels of uncertainty (high aleatoric + high epistemic, and high aleatoric + low epistemic) for two different RL environments: Pusher and unmanned aerial vehicle (UAV).
\subsection{Pusher}
The Pusher environment is a modified version of the Gym library Reacher environment \cite{brockman2016openai}. To the Reacher environment, we add a ball which has to be pushed to different target locations using the arm (the double pendulum). The state-space for this environment is 12-dimensional: arm position in $(x, y)$, cosine and sine of the upper arm angle $\theta$, the angular rate of the upper arm angle $\dot{\theta}$, cosine and sine of the lower arm angle $\gamma$, angular rate of lower arm angle $\dot{\gamma}$, and ball position and velocity in $(x, y, \dot{x}, \dot{y})$. The 2-dimensional applied action is the torque on the upper and lower arm joints. To create a high aleatoric variation of this environment, we make the Pusher stochastic by injecting Gaussian noise internally using the actions, which over time compounds to create complex, non-Gaussian distributions. To create a high epistemic scenario, we reduce the amount of data available for training by $10 \times$ in comparison to the low epistemic scenario.
As a first point of comparison, we consider the MMD plots shown in Figure \ref{mmd}a (lower is better). The results are shown for forecasts out to $120$ time steps, and demonstrate that MMD is highest for the probabilistic MLP model. This is expected because the probabilistic MLP is not designed for long horizon forecasting and makes a very strong assumption that the aleatoric distribution is Gaussian for all time. This assumption does not hold for the Pusher environment where there are two entities interacting with each other and causing momentum transfer, bringing forth interesting bifurcating behaviors.
The two VAE models, which do not make any assumption about the distribution of aleatoric uncertainty, produce significantly lower MMD values. The MMD values from the full VAE, which does not actively subtract epistemic uncertainty, are higher than those from the residual VAE. When analyzing the full VAE based on the epistemic uncertainty level, we see that the high epistemic case typically results in higher MMD. In the case of the residual VAE, both levels of uncertainty show comparable MMD values. Overall, these results show the clear benefit of using the residual VAE for accurate aleatoric quantification for long horizon trajectories.
\begin{table}[hbt!]
\caption{Brier scores for Pusher environment (lower is better)}
\begin{center}
\renewcommand{\arraystretch}{2}
\begin{tabular}{ | M{1.5cm} || M{1.5cm} | M{1.5cm} | M{1.5cm} | }
\hline
& \makecell{Residual \\ VAE} & \makecell{Full \\ VAE} & \makecell{Probabilistic \\ MLP}\\
\hline \hline
\makecell{Low \\ Epistemic} & $\bm{0.072}$ & $0.391$ & $0.416$ \\
\hline \hline
\makecell{High \\ Epistemic} & $\bm{0.061}$ & $0.439$ & $0.539$ \\
\hline
\end{tabular}
\end{center}
\label{pusher_brier_scores}
\end{table}
Next, we compute the Brier score relative to the outcomes that are of interest. Although multiple outcomes can be analyzed under our framework, we present results for when the arm is tasked to get the ball to a target location within a given amount of time. The target location, the starting location of the ball, and the starting location of the arm all vary for the 250 analyzed scenarios. The Brier scores for output outcome probabilities are shown in Table \ref{pusher_brier_scores}. The residual VAE produces the lowest Brier scores, indicating that the predictions from this model are the most accurate. Additionally, the Brier scores produced by the full VAE are also lower than those from the probabilistic MLP, which gives the least accurate outcome probabilities.
\subsection{UAV}
The second experimental environment is a Gazebo-based UAV simulation \cite{conlon2022generalizing}.
The $17$-dimensional state-space for this environment is made up of the UAV platform position $(x, y, z)$, velocity $(\dot{x}, \dot{y}, \dot{z})$, orientation $(\phi, \theta, \psi)$, and angular velocity in $(\dot{\phi}, \dot{\theta}, \dot{\psi})$. Additionally, the 3-dimensional wind component in $(x, y, z)$ is also included, as are the temperature and the battery percentage. To introduce aleatoric uncertainty into this environment, we include a hidden variable that directly impacts the dynamics of the vehicle but is not part of the state-space representation. This hidden variable is a payload mass, which varies for any given trajectory. Similar to the Pusher environment, we create a high epistemic scenario by limiting the amount of data available for training.
The UAV MMD results are shown in Figure \ref{mmd}b for $60$ time steps. These results also show the probabilistic MLP with the highest and the residual VAE with the lowest MMD values. The actual values are larger than for the Pusher, which is expected since this is a more complex environment. Additionally, we again see that both the probabilistic MLP and the residual VAE produce similar values for the high and low epistemic scenarios. In contrast, the difference between the two scenarios is more pronounced for the full VAE, where the high epistemic scenario results in a higher MMD than the low epistemic scenario. This shows that the full VAE may be double counting epistemic uncertainty as aleatoric uncertainty in this environment.
\begin{table}[hbt!]
\caption{Brier scores for UAV environment (lower is better)}
\begin{center}
\renewcommand{\arraystretch}{2}
\begin{tabular}{ | M{1.5cm} || M{1.5cm} | M{1.5cm} | M{1.5cm} | }
\hline
& \makecell{Residual \\ VAE} & \makecell{Full \\ VAE} & \makecell{Probabilistic \\ MLP}\\
\hline \hline
\makecell{Low \\ Epistemic} & $\bm{0.190}$ & $0.272$ & $0.712$ \\
\hline \hline
\makecell{High \\ Epistemic} & $\bm{0.245}$ & $0.327$ & $0.782$ \\
\hline
\end{tabular}
\end{center}
\label{uav_brier_scores}
\end{table}
To compute the Brier score in this environment, the outcome we analyze is to have the UAV fly to a target location within a given amount of time. We again analyze $250$ scenarios with different initial conditions. Brier score values are reported in Table \ref{uav_brier_scores}. The Brier scores for this environment are also the lowest for the residual VAE and highest for the probabilistic MLP. When comparing the low epistemic versus high epistemic scenarios, all three models have higher Brier score values for the high epistemic scenario. This is expected, since outcome predictions should be less accurate in a higher-uncertainty environment.
\section{CONCLUSIONS AND FUTURE WORK}\label{conclusions}
To create trustworthy competency-aware autonomous agents, we provide the autonomous agents with the capability to forecast their outcomes and uncertainties. A significant component of this is the quantification of both epistemic and aleatoric uncertainties. While modelling of epistemic uncertainty can be achieved using state-of-the-art methods such as deep ensembling, aleatoric uncertainty modelling has to account for the effects of epistemic uncertainty. To achieve this, we showed how deep generative models in the form of a CVAE can be trained exclusively on the aleatoric uncertainty residual. This residual takes into account samples from both the observed training trajectories and the ensemble model forecasted trajectories to actively incorporate the effects of epistemic uncertainty. We presented our results on the Pusher robot and UAV environments under varying levels of uncertainty.
There are many possible avenues of extension for this work. First, we recognize that the model design process of CVAE may itself introduce some epistemic uncertainty, even though its impact may be significantly lower than the effects of systemic epistemic uncertainty that we address here. While communicating uncertainties to the user, this potential source of uncertainty also has to be addressed. Additionally, we plan to explore the ties between the uncertainty types, the forecasted outcome distributions, and the agent's performance. To do this, we will task the agent with multiple and increasingly complex objectives while studying the impacts of uncertainties on the forecasted and observed outcomes. As a longer term goal, we are looking to deploy our developed method on a real world UAV platform to demonstrate its utility beyond the simulated environment, thus leading to reliable and effective autonomous agents.
\addtolength{\textheight}{-3cm}
\section*{ACKNOWLEDGMENT}
This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0032. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA.
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.08628",
"language": "en",
"timestamp": "2023-02-20T02:04:06",
"url": "https://arxiv.org/abs/2302.08628",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
Dust grains in molecular clouds are covered by an ice mantle consisting of H$_2$O, CO$_2$, CO, NH$_3$, CH$_4$, and CH$_3$OH\cite{Boogert2015}. Other molecules, particularly complex organic molecules, which are defined in the astronomy context as carbon-containing molecules with at least six atoms, should also be present in the ice\cite{Herbst2009}. Various laboratory experiments and astrochemical modeling suggest that most of these molecules are formed by chemical reactions in the solid state rather than formed in the gas phase and then condensed on the grain surface\cite{Herbst2009}. After their formation, they may desorb, either thermally or non-thermally, and return to the gas phase and be identified by radio telescopes.
One of the most important mechanisms for chemical reactions on grain surfaces is diffusion. Reactants on the surface need to diffuse in order to encounter the reaction counterparts. There are two types of diffusion, quantum diffusion, which is only important for very light species such as hydrogen, and thermal diffusion, which is relevant for almost all reactants. In most gas-grain astrochemical models, thermal diffusion is the dominant pathway to chemical reactions on grains\cite{Garrod2006, Acharyya2022}, although in recent years the role of thermal diffusion has been challenged and non-diffusive mechanisms have been proposed as well to account for the formation of complex organic molecules in interstellar ices\cite{Jin2020, Garrod2022, Herbst2022, He2021phase, He2022phase}.
Thermal diffusion is normally described by an Arrhenius equation with two parameters, the pre-exponential factor $\nu_{\rm dif}$, and the diffusion energy barrier $E_{\rm dif}$. The goal of laboratory experiments on diffusion is to quantify $E_{\rm dif}$ and $\nu_{\rm dif}$ for various atoms, radicals, and molecules on representative dust grain surfaces, such as water ice, silicate, or carbonaceous surfaces. Diffusion of reactants on silicate and carbonaceous surfaces is only important before the dust grains are covered by an ice layer. Once water starts to form on the bare grain surface, subsequent condensation of particles happens on top of the ice surface dominated by H$_2$O. Most of the existing laboratory experiments on diffusion focused on amorphous water ice, i.e., the so-called amorphous solid water (ASW), which is the dominant form of water ice on the grain surface. This current study also focuses on the diffusion on ASW surface.
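For illustration, the Arrhenius form can be evaluated as in the following minimal sketch, using for concreteness a prefactor of $10^8$~s$^{-1}$ and a barrier of 890~K (values of the order reported below for N$_2$ on ASW); these numbers are placeholders and not results of this work.
\begin{verbatim}
# Minimal sketch: Arrhenius hopping (diffusion) rate for a given surface temperature T.
import numpy as np

def hopping_rate(T, nu_dif=1e8, E_dif=890.0):
    """k = nu_dif * exp(-E_dif / T), with E_dif expressed in kelvin."""
    return nu_dif * np.exp(-E_dif / T)

for T in (10.0, 20.0, 30.0):
    print(f"T = {T:4.1f} K  ->  k = {hopping_rate(T):.3e} s^-1")
\end{verbatim}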
Several laboratory experimental studies focused on the diffusion of molecules on the surface of ASW, either on the outer surface of non-porous ASW (np-ASW) or through the inner pores of porous ASW (p-ASW). Zubkov et al. \cite{Zubkov2007} deposited N$_2$ on ASW of different thicknesses and performed temperature-programmed desorption (TPD) experiments, from which they obtained $E_{\rm dif}=890$~K and $\nu_{\rm dif}=9\times10^7$--$4\times10^8$~s$^{-1}$, respectively. Mispelaer et al. \cite{Mispelaer2013} presented one of the first systematic experimental studies on the diffusion of astro-relevant molecules in p-ASW. They covered the target molecule with ASW layers, and then warmed up the ice to a temperature at which the target molecules start to penetrate through the p-ASW and desorb from the ice. By monitoring the remaining amount of target molecules in the ice, they were able to quantify the diffusion rate through the pores. Though it is an important step ahead, it still has some limitations: (1) The diffusion might have already started even during the growth of the ASW layers, and it is not accurate to treat the beginning of the isotherm as the beginning of the diffusion process; (2) The structure of the ASW might be changing during the diffusion/desorption, since no annealing was done to stabilize the structure; (3) The effect of diffusion and desorption are mixed and very difficult to separate, therefore introducing large uncertainty; (4) Based on the design of the experiment, only a narrow temperature range can be explored, and the accuracy in diffusion energy barrier determination is limited. More recently, Mat{\'e} et al. \cite{Mate2020} used a similar approach to measure the diffusion rate of CH$_4$ in ASW, and therefore inherited the same advantages and drawbacks.
Lauck et al. \cite{Lauck2015} measured the diffusion of CO in p-ASW. In their experiments, CO and water were deposited at 12~K. Then the ice was flash heated to a temperature between 15 and 23~K for isothermal experiments. They used the fact that pure CO and CO mixed with water ice have different infrared absorption peak shapes to trace the diffusion of CO in porous water ice. Pure CO ice has a sharp peak centered at about 2139~cm$^{-1}$, while the mixture of CO and ASW has two peaks at 2139 and 2152~cm$^{-1}$, respectively. As diffusion goes on, there is a gradual transition from pure CO to a mixture of CO and water. Therefore, the diffusion rate can be calculated from the time evolution of this transition. Compared to Mispelaer et al., Lauck et al. solved the problem of a limited temperature span and the problem with desorption. However, the method by Lauck et al. suffers from an unstable ice structure during the diffusion process, and this method only works for CO diffusion on ASW.
He et al. (2017) \cite{He2017co2diff} (referred to as He17 later) measured the diffusion of CO$_2$ on np-ASW. Discrete CO$_2$ molecules and CO$_2$ clusters have different infrared absorption peaks. With less than a monolayer of CO$_2$ present on the surface, when the surface temperature increases, CO$_2$ diffuses and forms clusters. Therefore, there is a transition from discrete CO$_2$ to CO$_2$ clusters which is observable in infrared measurement. By using a rate equation model to simulate the diffusion process and to fit the experimental data, the diffusion energy barrier was calculated to be 2100~K. It has to be noted that in this study, the prefactor was assumed to be $10^{12}$~s$^{-1}$, as is customarily done in these studies. If a different prefactor is assumed, the diffusion energy value would be different. It is important to determine both the prefactor and the diffusion energy value simultaneously. An energy value without the correct prefactor could be in error.
Kouchi et al. (2020) \cite{Kouchi2020} used a transmission electron microscope (TEM) to measure the diffusion energy barrier of CO and CO$_2$ on p-ASW. This is the first attempt to use the TEM technique to measure the diffusion of astro-relevant molecules. In their experiment, they first deposited 10~nm of water ice at 10~K and then deposited, for example CO, while taking TEM scans during deposition. When the ice is covered by the required amount of CO, and if the temperature is high enough for the crystallization to proceed\cite{He2021phase}, crystals of CO are formed in the ice. They interpreted the forming of crystals as a result of the diffusion of CO and used the distance between crystals to represent the average diffusion distance of CO molecules. By analyzing the diffusion distance at different temperatures, they calculated the diffusion energy barrier. The study by Kouchi et al. suffers from a few drawbacks: (1) The interpretation of the experimental result is not intuitive, and alternative interpretations exist. The observed temperature dependence of crystal distance may be a result of the temperature dependence of the phase transition of CO ice. Such alternative interpretations have to be checked before extending this method to a wider range of systems; (2) The water ice was unstable during the diffusion and could have affected the experimental results; (3) They could not determine the prefactor from the experiments; as mentioned above, it is important to determine both values simultaneously. A similar approach has recently been applied to a wider range of molecules\cite{Furuya2022}.
He et al. (2018)\cite{He2018diff} (referred to as He18 later) designed an experiment to measure diffusion based on the fact that when a molecule is binding to the dangling OH (dOH) site of p-ASW, the weak dOH absorption band in the infrared is red-shifted. In the experiment, porous water ice was grown at 10~K, stabilized by annealing to 70~K, and then cooled down to 10~K to deposit the target molecule. Then the ice was warmed to a specific temperature, for example, 20~K, and infrared spectra were measured continuously to monitor the shifting of dOH bands. When diffusion happens, a gradient in concentration of the target molecule is expected in the ice. After a long time, the whole pore surface is evenly covered by the target molecule, and almost all the dOH is at the shifted position. This method was applied to molecules including N$_2$, O$_2$, CO, CH$_4$, and Ar, all of which wet the ASW surface. From the experiments, both the prefactor and energy barrier for diffusion were obtained. The prefactor was found to be mostly in the $10^7$--$10^9$~s$^{-1}$ range, and the diffusion energy barrier was roughly 0.3 times the upper bound of the desorption energy distribution on ASW. Table~\ref{tab} shows a summary of the above-mentioned experimental studies and some key aspects of the experiments. The same method cannot be applied directly to non-wetting molecules such as CO$_2$ \cite{Noble2012}. In fact, we attempted to measure the diffusion of CO$_2$ through p-ASW using the same method, but found no clear evidence of CO$_2$ penetration into the p-ASW to form monomers. This is somehow in agreement with prior experimental studies on the segregation of CO$_2$ from CO$_2$:H$_2$O ice mixtures. Several groups found that by warming up the mixture of CO$_2$:H$_2$O ice, CO$_2$ tends to segregate and form clusters or islands, rather than mixing. This indicates that the interaction force between CO$_2$ molecules is stronger than the interaction force between CO$_2$ and ASW\cite{Hodyss2008, Oberg2009, He2018co2}. A TPD study of CO$_2$ on ASW also supports this argument\cite{He2017co2diff}.
\begin{table}[ht]
\caption{Summary of a few previous laboratory studies on diffusion. Mi13, La15, He17, He18, Ko20, Ma20 and Fu22 stand for Mispelaer et al. (2013)\cite{Mispelaer2013}, Lauck et al. (2015)\cite{Lauck2015}, He et al. (2017)\cite{He2017co2diff}, He et al. (2018)\cite{He2018diff}, Kouchi et al. (2020)\cite{Kouchi2020}, Mate et al. (2020)\cite{Mate2020}, and Furuya et al. (2022) \cite{Furuya2022}, respectively. The following aspects are considered: (1) whether desorption interferes with diffusion; (2) whether the water ice is stable during the whole diffusion process; (3) whether it is applicable to IR-inactive molecules; (4) whether both $E_{\rm dif}$ and $\nu_{\rm dif}$ are obtained simultaneously. }
\begin{tabular}{l|cccccc}
\hline
& Mi13 & La15 & He17 & He18 & Ko20/Fu22 & Ma20 \\ \hline
Des. not interfere with dif. & $\times$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times$ \\
H$_2$O ice stable during dif. & $\times$ & $\times$ & $\checkmark$ & $\checkmark$ & $\times$ & $\times$ \\
Applicable to IR-inactive mol. & $\times$ & $\times$ & $\times$ & $\checkmark$ & $\checkmark$ & $\times$ \\
Both $E_{\rm dif}$ and $\nu_{\rm dif}$ obtained & $\checkmark$ & $\checkmark$ & $\times$ & $\checkmark$ & $\times$ & $\checkmark$ \\ \hline
\end{tabular}\label{tab}
\end{table}
Mixed CO$_2$:H$_2$O ice has been the focus of a number of experimental studies because such mixtures are present in interstellar ices\cite{Gerakines1995, Galvez2007, Galvez2008, Hodyss2008, Oberg2009, Fayolle2011}. However, the diffusion of CO$_2$ on the ASW surface was not addressed in those studies. Combining the strengths of both He17 and He18, we perform a new set of experiments to measure the diffusion rate of CO$_2$ on np-ASW. Isothermal experiments are carried out at different temperatures, during which CO$_2$ is allowed to diffuse and form clusters. We obtain the diffusion energy barrier $E_{\rm dif}$ and the prefactor $\nu_{\rm dif}$ simultaneously from these experiments.
\section{Experimental}
\label{sec:exp}
\subsection{Experimental setup}
Experiments were performed using an ultra-high vacuum (UHV) setup, which has been described previously \cite{He2017co2diff, He2018diff, He2018co2}. The setup consists of a main UHV chamber and two molecular beamlines. In the present study, only the main UHV chamber was used. The chamber is pumped by a combination of scroll pumps, turbomolecular pumps, and a cryopump, and a base pressure of $2\times10^{-10}$~mbar is achieved after bake-out. Located at the center of the chamber is a gold-plated copper disk, which is used as the substrate onto which ice samples are grown. The substrate can be cooled down to 6~K using a closed-cycle helium cryostat and heated using a resistive heater. The temperature was measured using a silicon diode temperature sensor, and a Lakeshore 336 temperature controller recorded and controlled the temperature to an accuracy of 0.05~K. Ice was grown on the substrate by gas/vapor deposition from the chamber background, following a procedure described in detail in He et al. (2018) \cite{He2018co2}. Two separate computer-controlled leak valves (VAT variable leak valve 590) were used to deposit H$_2$O and CO$_2$, respectively. The ice thickness in monolayers (ML, defined as 10$^{15}$ molecules per cm$^2$) was calculated by integrating the chamber pressure during deposition, assuming a sticking coefficient of unity \cite{he2016sticking}. The ionization pressure gauge calibration factors for H$_2$O and CO$_2$ were taken into account in the calculation. Throughout the experiments, a Thermo Fisher Nicolet 6700 Fourier Transform Infrared Spectrometer (FTIR) in the Reflection Absorption Infrared Spectroscopy (RAIRS) configuration was used to monitor the ice composition over time. A spectrum was recorded every 12 seconds by averaging nine scans.
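As an illustration of this coverage calculation, the following sketch (our own, not the acquisition software; the gas temperature and gauge correction factor shown are assumptions) integrates the Hertz--Knudsen impingement flux over the deposition time, assuming a sticking coefficient of unity:
\begin{verbatim}
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def coverage_ML(time_s, pressure_mbar, mass_amu, gauge_factor=1.0, T_gas=300.0):
    # Integrate the impingement flux to obtain the coverage in ML
    # (1 ML = 1e15 molecules cm^-2 = 1e19 m^-2).
    m = mass_amu * 1.66054e-27                             # molecular mass, kg
    p = np.asarray(pressure_mbar) * 100.0 / gauge_factor   # mbar -> Pa, gauge-corrected
    flux = p / np.sqrt(2.0 * np.pi * m * kB * T_gas)       # molecules m^-2 s^-1
    column = np.trapz(flux, np.asarray(time_s))            # column density, m^-2
    return column / 1.0e19

# Example: ~3e-7 mbar of H2O held for 5 minutes corresponds to roughly 30 ML.
\end{verbatim}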
\subsection{Experimental procedure}
The gold surface was first covered with 30 ML of non-porous amorphous solid water (np-ASW) by vapor deposition at a rate of 6 ML/minute over 5 minutes while the substrate was at 130~K. The ice remained at 130~K for 20 minutes to stabilize it and to pump out the residual gas. Subsequently, the substrate was cooled down to 40~K for CO$_2$ deposition. A total of 0.075 ML of CO$_2$ was deposited on top of the np-ASW at a rate of 0.1 ML/minute. At 40~K, CO$_2$ does not diffuse\cite{He2017co2diff}, and the coverage is low enough that almost all the CO$_2$ molecules are monomers. After CO$_2$ deposition, flash heating at a ramp rate of 30~K/minute was applied to bring the substrate to the desired target temperature ($T_{\rm iso}$) for the isothermal experiment. During the isothermal evolution, CO$_2$ molecules diffuse and form clusters. For $T_{\rm iso}>62$~K, the temperature was held for 30 minutes to allow CO$_2$ to diffuse and form clusters. For $T_{\rm iso}\leq62$~K, where the diffusion is slower, the temperature was held for 60 minutes to provide sufficient time for diffusion. After the isothermal measurement, the sample was heated to 110~K to remove the CO$_2$ ice and leave the np-ASW intact. By 110~K, no CO$_2$ is left on the surface, as evidenced by the absence of any CO$_2$ signature in the RAIRS spectra. The substrate was then cooled down to 40~K again for the next CO$_2$ deposition. Figure~\ref{fig:T_zoomin} shows the temperature during one segment of the experiment, with the different stages marked. To minimize human error, we used a LabVIEW program to regulate the temperature and gas deposition throughout the entire experiment. We explored the $T_{\rm iso}$ range between 56 and 68~K in 1~K steps.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{T_zoomin.pdf}
\caption{Temperature of the substrate during one segment of the experiment (isothermal experiment at 67~K). Different stages of the experiment are marked. } \label{fig:T_zoomin}
\end{figure}
\section{Results and analysis}
\label{sec:results}
During the isothermal evolution at each temperature, RAIRS spectra are collected every 12 seconds, and a selection of them is displayed in Figure~\ref{fig:fitting}. At the very beginning of the isothermal evolution, the CO$_2$ absorption profile shows a single peak at 2330~cm$^{-1}$, which indicates isolated CO$_2$ molecules, i.e., monomers, on the surface. This is to be expected given that the coverage is only 0.075~ML and the likelihood of cluster formation is very low before diffusion sets in. After a certain period of time, the 2330~cm$^{-1}$ peak drops and a new peak at $\sim$2355~cm$^{-1}$, indicative of CO$_2$ clusters \cite{He2017co2diff}, grows over time. This suggests that CO$_2$ is diffusing to form clusters. Considering the very low coverage of CO$_2$, we assume that the shift of the absorption peak position and the formation of clusters are governed only by the diffusion of CO$_2$ molecules. The rate of diffusion should be temperature dependent; indeed, the shift from the 2330 to the 2355~cm$^{-1}$ peak is faster at 68~K than at 58~K. To quantify the rate of diffusion, we fit the two peaks with two Gaussian functions and analyze their time and temperature dependence. Figure~\ref{fig:fitting} shows the fits of selected spectra using the sum of two Gaussian functions. This fitting is only an approximation, particularly since the profile of the 2355~cm$^{-1}$ peak does not have an exactly Gaussian line shape \cite{He2018co2}. Furthermore, even when all the CO$_2$ is in the form of clusters or bulk CO$_2$, the 2330~cm$^{-1}$ component does not vanish. This was seen in a prior study \cite{He2017co2diff}, which showed that both peaks are still present even after CO$_2$ has been allowed to diffuse sufficiently. Nonetheless, the shift from one peak to the other still allows us to derive information about the kinetics. Figure~\ref{fig:bandArea} shows the resulting areas of the two peaks during the isothermal evolution at each $T_{\rm iso}$. When the 2330~cm$^{-1}$ peak area decreases, the 2355~cm$^{-1}$ peak area increases, except when the temperature is higher than 67~K, at which point the 2355~cm$^{-1}$ peak also drops. This is because the desorption of CO$_2$ starts when the temperature approaches $\sim$67~K \cite{He2017co2diff}, and the total area of the two peaks decreases over time. At the other extreme, when the temperature is 56~K and the diffusion is too slow, the area of the 2355~cm$^{-1}$ peak is much noisier than the rest and is considered less reliable. As a result, for the diffusion analysis we only consider the temperature range of 57 to 65~K.
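The two-component decomposition can be sketched as follows (an assumed analysis script, not necessarily identical to the one used; the initial guesses are illustrative):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    g1 = a1 * np.exp(-0.5 * ((x - c1) / w1) ** 2)   # monomer component near 2330 cm^-1
    g2 = a2 * np.exp(-0.5 * ((x - c2) / w2) ** 2)   # cluster component near 2355 cm^-1
    return g1 + g2

def band_areas(wavenumber, absorbance):
    p0 = [1e-3, 2330.0, 3.0, 1e-3, 2355.0, 3.0]     # illustrative initial guesses
    popt, _ = curve_fit(two_gaussians, wavenumber, absorbance, p0=p0)
    a1, _, w1, a2, _, w2 = popt
    area = lambda a, w: a * abs(w) * np.sqrt(2.0 * np.pi)
    return area(a1, w1), area(a2, w2)               # areas of the 2330 and 2355 peaks
\end{verbatim}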
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{fitting.pdf}
\caption{Selected RAIRS spectra in the range of the CO$_2$ $\nu_3$ mode during the isothermal experiment at different temperatures. Each panel depicts the CO$_2$ absorption peak at various times since the start of an isothermal experiment. The time and temperature of the isotherm are indicated in the figure. The black curves represent the experimental data, while the orange curves represent the fittings made with a sum of two Gaussian functions.} \label{fig:fitting}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{bandArea.pdf}
\caption{The band areas of the 2330 and 2355~cm$^{-1}$ components of the CO$_2$ absorption peak during the isothermal evolution at different temperatures.} \label{fig:bandArea}
\end{figure}
Next, we use a model to describe the diffusion. Surface diffusion has been reviewed in previous publications \cite{Tsong2005,SurfaceDiffusion}, to which we refer interested readers. When the concentration of molecules on the substrate surface is low enough that the interaction between them is negligible, the diffusion of molecules can be analyzed by assuming that they follow a random walk. This type of diffusion is called tracer diffusion or intrinsic diffusion. On a 2D surface such as the np-ASW in the present study, CO$_2$ molecules jump from one adsorption site to the nearest neighbour. Although the np-ASW surface is by no means a regular periodic surface, for the sake of simplicity we still use a 2D lattice to represent it. We have assumed that 1 ML of CO$_2$ is equivalent to $10^{15}$ molecules per cm$^{2}$; correspondingly, the lattice spacing is $l=3.2$~\AA. In the random-walk analysis, the diffusion rate $k_D(T)$ can be described by an Arrhenius equation:
\begin{equation}
k_D(T) = \nu_{\rm dif} \exp(\frac{-E_{\rm dif}}{k_{\rm B}T}) \label{eq:arr}
\end{equation}
$E_{\rm dif}$ is the diffusion energy barrier, and $\nu_{\rm dif}$ is the prefactor for diffusion. In the literature, the prefactor is sometimes referred to as the attempt frequency. In astronomy, it is conventional to omit the Boltzmann constant $k_{\rm B}$, in which case the diffusion energy barrier has units of Kelvin. We denote the time-dependent coverages of CO$_2$ monomers and dimers as $C_1(t)$ and $C_2(t)$, respectively. They are unitless variables defined as the fractions of adsorption sites occupied by monomers and dimers. In principle, larger clusters such as trimers should also be considered. However, because of the low coverage of CO$_2$, the coverage of trimers should be much smaller than that of dimers and can be ignored. Once dimers are formed, we assume that they do not break apart before desorption. This is a reasonable assumption, since the binding between CO$_2$ molecules is stronger than the binding between CO$_2$ and water \cite{He2017co2diff}, and CO$_2$ molecules are prone to form clusters on ASW. Following He et al. (2017)\cite{He2017co2diff}, we describe the coverages of monomers and dimers using the following rate equations:
\begin{align}
\frac{dC_1(t)}{dt} &= -2C_1(t)^2 k_D(T)\\
\frac{dC_2(t)}{dt} &= C_1(t)^2 k_D(T)
\end{align}
Note that both the left and the right sides of the equations have the unit of s$^{-1}$, since $C_1$ and $C_2$ are unitless.
The initial conditions of the rate equations are $C_1(0)=0.075$ and $C_2(0)=0$. Solving the rate equations analytically, we have:
\begin{equation}
C_1(t)=\frac{1}{2k_D(T)t+\frac{1}{C_1(0)}}=\frac{1}{2k_D(T)t+13.33}
\label{eq:c1_sol}
\end{equation}
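Equation~\ref{eq:c1_sol} follows from the monomer rate equation by separation of variables:
\begin{equation*}
-\frac{dC_1}{C_1^2} = 2k_D(T)\,dt
\quad\Rightarrow\quad
\frac{1}{C_1(t)}-\frac{1}{C_1(0)} = 2k_D(T)\,t .
\end{equation*}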
$C_2(t)$ can be expressed as $0.5\times (C_1(0)-C_1(t))$. The next step is to relate $C_1(t)$ and $C_2(t)$ to the area of the 2330 and 2355~cm$^{-1}$ peaks.
As discussed above, the 2330~cm$^{-1}$ peak area is not exactly proportional to $C_1(t)$ because $C_2(t)$ also contributes to it. However, it is reasonable to assume they have a linear relation, that is:
\begin{equation}
C_1(t) = \frac{a}{2k_D(T)t+13.33}+b \label{eq:c1_sol_linear}
\end{equation}
where $a$ and $b$ are two free parameters. This is then used to fit the curves in the top panel of Figure~\ref{fig:bandArea}, and the fitting as well as the corresponding $k_D(T)$ value are shown in Figure~\ref{fig:Dfit}.
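A minimal sketch of this fit at a single isothermal temperature is given below (our own illustration of the procedure described above, not the original analysis script; the initial guesses are assumptions). The model is the linearized expression of Equation~\ref{eq:c1_sol_linear} with $1/C_1(0)=13.33$:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def monomer_model(t, a, b, kD):
    # band area ~ a / (2*kD*t + 1/C1(0)) + b, with 1/C1(0) = 13.33
    return a / (2.0 * kD * t + 13.33) + b

def fit_kD(time_s, band_area_2330):
    p0 = [band_area_2330[0], 0.0, 1e-3]      # illustrative initial guesses
    popt, pcov = curve_fit(monomer_model, time_s, band_area_2330,
                           p0=p0, maxfev=10000)
    a, b, kD = popt
    return kD, np.sqrt(np.diag(pcov))[2]     # k_D and its 1-sigma uncertainty
\end{verbatim}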
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{Dfit.pdf}
\caption{The band area of the 2330~cm$^{-1}$ component during isothermal experiment at different temperatures and the fitting using Equation~\ref{eq:c1_sol_linear}. The best fit $k_D$ values are shown in the figure. } \label{fig:Dfit}
\end{figure}
The diffusion rate expression in Equation~\ref{eq:arr} can be rewritten as:
\begin{equation}
\log(k_D(T))=\log(\nu_{\rm dif}) - \frac{E_{\rm dif}}{T} \label{eq:arr_modify}
\end{equation}
This shows that $\log(k_D(T))$ is a linear function of $1/T$ with a slope of $-E_{\rm dif}$. In Figure~\ref{fig:arrh} we show an Arrhenius-type plot. A linear fit yields the slope and intercept, from which we obtain the diffusion energy barrier $E_{\rm dif}=1300\pm110$~K and the prefactor $\nu_{\rm dif}=10^{7.6\pm0.8}$~s$^{-1}$. In many other studies of diffusion, it is customary to report the Arrhenius expression of the diffusion coefficient (in units of cm$^2$~s$^{-1}$) \cite{Tsong2005}:
\begin{equation}
D=D_0 \exp(\frac{-E_{\rm dif}}{k_{\rm B}T})
\end{equation}
where $D_0$ is the pre-exponential factor of the diffusion coefficient:
\begin{equation}
D_0 = \exp\left(\frac{\Delta S_{\rm dif}}{k_{\rm B}}\right)\frac{\nu_{0} l^2}{4}=\frac{\nu_{\rm dif} l^2}{4},
\end{equation}
where $\Delta S_{\rm dif}$ is the change in entropy between the saddle point and the adsorption site, and $\nu_0$ is the small-amplitude oscillation frequency ($\approx 10^{13}$~s$^{-1}$) of the particle in the adsorption well.
In this case, $D_0=10^{-8.0\pm0.8}$~cm$^2$~s$^{-1}$.
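The Arrhenius analysis can be summarized in a few lines (a sketch under the conventions above, i.e., natural logarithm and $k_{\rm B}$ omitted so that $E_{\rm dif}$ is in Kelvin; this is not the original analysis script):
\begin{verbatim}
import numpy as np

def arrhenius_fit(T_iso, kD, lattice_cm=3.2e-8):
    # Linear fit of ln(k_D) versus 1/T: slope = -E_dif (K), intercept = ln(nu_dif).
    x = 1.0 / np.asarray(T_iso)
    y = np.log(np.asarray(kD))
    slope, intercept = np.polyfit(x, y, 1)
    E_dif = -slope                        # diffusion energy barrier, K
    nu_dif = np.exp(intercept)            # diffusion prefactor, s^-1
    D0 = nu_dif * lattice_cm**2 / 4.0     # pre-exponential factor, cm^2 s^-1
    return E_dif, nu_dif, D0
\end{verbatim}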
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{arrh.pdf}
\caption{Arrhenius plot of $\log(k_D)$ versus $1/T$, where $k_D$ is the diffusion rate and $T$ is the isotherm temperature. The linear fit is shown in orange. } \label{fig:arrh}
\end{figure}
\section{Discussion}
\label{sec:discuss}
Molecular diffusion is crucial for the chemistry on and in interstellar ice. However, laboratory measurements of diffusion are scarce, limiting the accuracy of astrochemical modeling of grain-surface processes. The current study builds on prior studies \cite{He2017co2diff, He2018diff} to measure the diffusion energy barrier and prefactor for CO$_2$ on the np-ASW surface simultaneously. A set of isothermal experiments was carried out to quantify the diffusion rate at different temperatures, from which the energy barrier and the prefactor for diffusion were obtained.
It is worthwhile to compare this study with prior experimental studies. He17 deposited a small coverage of CO$_2$ on np-ASW and warmed the ice slowly to monitor the shift of the infrared absorption peaks with increasing temperature. Because it was only a single measurement, it was not possible to derive the diffusion energy barrier and the prefactor simultaneously. It was assumed that the diffusion and desorption prefactors were the same, $\nu_{\rm dif} = \nu_{\rm des} = 10^{12}$~s$^{-1}$, and, based on a rate equation model, the $E_{\rm dif}$ value was calculated to be $2150\pm50$~K, about 0.96 times the desorption energy of CO$_2$ from the np-ASW surface. The main drawback of He17 is the assumption that the diffusion and desorption prefactors are equivalent, which is typically not the case, as demonstrated by He18. Here we recalculate the $E_{\rm dif}$ value for the experiment shown in Fig.~10 of He17, replacing the $\nu_{\rm dif}$ value of $10^{12}$~s$^{-1}$ with the value of $10^{7.6}$~s$^{-1}$ obtained in the present study. Figure~\ref{fig:he17fig10} shows the equivalent of Fig.~10 in He17. With the lower diffusion prefactor, the diffusion energy barrier is reduced to $1400\pm100$~K, which agrees, within uncertainties, with the value of $1300\pm110$~K obtained in the present study.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{diff_sim_he2017.pdf}
\caption{Recalculation of the diffusion energy barrier for the experiment in Fig.~10 of He17. The prefactor value $10^{12}$~s$^{-1}$ is replaced by $10^{7.6}$~s$^{-1}$. See He17 for more details. } \label{fig:he17fig10}
\end{figure}
In the present study, we carried out isothermal experiments similar to those of He18 and indeed obtained a diffusion prefactor much lower than that for desorption, in agreement with He18. Based on prior studies of diffusion (see the review in \cite{Wang2002}), it is not uncommon for the diffusion prefactor to differ from the desorption prefactor, being either higher or lower. The present study, along with He18, makes a strong case that astrochemical models should distinguish between the diffusion prefactor $\nu_{\rm dif}$ and the desorption prefactor $\nu_{\rm des}$. Based on these two studies, we suggest using a $\nu_{\rm dif}$ value about 3--5 orders of magnitude lower than $\nu_{\rm des}$ in modeling. However, we acknowledge that this is based on measurements of only a few volatile molecules on the ASW surface and that exceptions may exist for other molecules. More diffusion measurements are required to confirm it.
Compared to He18, the Arrhenius fit in Figure~\ref{fig:arrh} has a much larger error, and correspondingly the uncertainties of the diffusion energy barrier and prefactor values are larger. At least the following factors contribute to the larger uncertainty:
\begin{enumerate}
\item The peak profile of CO$_2$ is more complicated than the dOH profile used in He18. One cannot simply attribute the 2330~cm$^{-1}$ peak exclusively to CO$_2$ monomers. In our model, we had to simplify the problem by assuming a linear dependence between the monomer coverage and the 2330~cm$^{-1}$ peak area. This could be an important source of uncertainty.
\item We assumed that there is only a single diffusion energy barrier that governs the formation of clusters. In practice, we cannot exclude the possibility that more than one activation energy is contributing.
\item A Gaussian function does not describe the 2355~cm$^{-1}$ component exactly. It is clear in Figure~\ref{fig:fitting} that the fitting of the 2355~cm$^{-1}$ peak is not ideal.
\item Ignoring clusters larger than dimers may also induce some uncertainty.
\end{enumerate}
A prior experimental determination of the diffusion of CO$_2$ on the ASW surface was also made by Kouchi et al. (2020)\cite{Kouchi2020}, who reported an energy barrier of 1500~K, slightly higher than the value found in the present study. This difference could be due to the uncertainty in the prefactor assumed by Kouchi et al., who were unable to determine the prefactor simultaneously with the energy barrier. It would be interesting to repeat their experiments at different temperatures and derive both the prefactor and the energy barrier simultaneously. Karssemeijer \& Cuppen (2014)\cite{Karssemeijer2014} performed molecular dynamics calculations of CO$_2$ diffusion on water ice and found a diffusion energy of $\sim1470$~K on a disordered water surface, which is close to our measured value.
In the present study, we only presented experiments on np-ASW. Although the porosity of the ice mantle on interstellar grains is still under debate, it is probably neither highly porous nor completely compact. It is therefore relevant to consider diffusion on the surfaces of both p-ASW and np-ASW. We attempted to run experiments similar to He18, i.e., tracing the diffusion of CO$_2$ into p-ASW by monitoring the infrared spectra. Unfortunately, the diffusion of CO$_2$ on the p-ASW surface is not efficient enough for CO$_2$ to enter the pores; CO$_2$ primarily remains in the top layers as ``pure'' CO$_2$. These results are not shown here. Nonetheless, the results on np-ASW may apply to p-ASW as well. Previous experimental studies \cite{He2019asw, Zubkov2007} have suggested that the surface of np-ASW and the pore surface of p-ASW annealed to 60~K or above are similar in terms of binding energy distribution and the fraction of dangling OH groups. The diffusion energy barrier and prefactor measured on np-ASW are therefore likely to apply to the pore surface of p-ASW as well. In general, more experimental work is needed to measure diffusion on interstellar dust grain analogs.
\section*{Acknowledgement}
We dedicate this paper to the memory of Dieter Gerlich, who was a good friend and colleague of some of us. We acknowledge the support from the European Research Council under the Horizon 2020 Framework Program via the ERC Advanced Grant Origins 83 24 28, and from NSF Astronomy \& Astrophysics Research Grant \#1615897.
\bibliographystyle{tfo}
\section{Conclusion}
\label{sec:conclusion}
In this work, we contribute a new end-to-end framework (MBRec) for multi-behavior recommendation via the modeling of cross-behavior inter-dependencies under a high-order graph learning architecture. In our MBRec\ model, we first learn the dependencies among various types of user interactions with a behavior-aware message passing mechanism. Additionally, a high-order mutual relation learning scheme is integrated with the graph neural architecture so as to encode the implicit dependencies between layer-specific behavior representations. When evaluated on three real-world datasets, our framework achieves significantly better recommendation performance than various baselines. Further ablation studies show the rationality of the key components designed in our recommendation framework. In the future, we would like to integrate causal effect analysis~\cite{bonner2018causal} with our multi-behavior graph neural paradigm, in order to infer causal relations from observed user behaviors and identify the implicit factors that influence user preference.
\section{Evaluation}
\label{sec:eval}
We evaluate MBRec\ to answer the following research questions:
\begin{itemize}[leftmargin=*]
\item \textbf{RQ1}: How does our \emph{MBRec} perform compared with various recommendation baselines on different datasets? \\\vspace{-0.1in}
\item \textbf{RQ2}: How does each model design (\eg, multi-channel behavior embedding layer and cross-layer mutual relation learning module) affect the model performance? \\\vspace{-0.1in}
\item \textbf{RQ3}: What is the impact of incorporating different types of behaviour context in our graph neural multi-behavior recommender system? \\\vspace{-0.1in}
\item \textbf{RQ4}: How do different interaction sparsity degrees influence the recommendation performance? \\\vspace{-0.1in}
\item \textbf{RQ5}: How do the key hyperparameters impact the performance of \emph{MBRec} neural architecture? \\\vspace{-0.1in}
\item \textbf{RQ6}: How efficient is \emph{MBRec} compared with various types of recommendation techniques? \\\vspace{-0.1in}
\item \textbf{RQ7}: How does the user multi-behavior dependency study benefit the interpretation ability for recommendation? \\\vspace{-0.1in}
\item \textbf{RQ8}: What is the effect of the graph sampling algorithm on the model performance of \emph{MBRec}?
\end{itemize}
\begin{table}[t]
\caption{Statistics of our evaluation datasets.}
\vspace{-0.1in}
\label{tab:data}
\centering
\scriptsize
\setlength{\tabcolsep}{0.6mm}
\begin{tabular}{ccccc}
\toprule
Dataset&User \#&Item \#&Interaction \#&Interactive Behavior Type\\
\midrule
Tmall & 147894 & 99037 & 7658926 & \{Page View, Favorite, Cart, Purchase\}\\
BeiBei & 21716 & 7977 & 3338068 &\{Page View, Cart, Purchase\}\\
IJCAI & 423423 & 874328 & 36203512 &\{Page View, Favorite, Cart, Purchase\}\\
\hline
\end{tabular}
\vspace{-0.15in}
\end{table}
\begin{table*}[t]
\caption{Performance comparison on Beibei, Tmall and IJCAI data, in terms of \textit{HR@$N$} and \textit{NDCG@$N$} ($N=10$).}
\centering
\scriptsize
\setlength{\tabcolsep}{1mm}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Data & Metric & BiasMF & DMF & NCF & AutoRec & CDAE & NADE & CF-UIcA & ST-GCN & NGCF & NMTR & DIPN & NGCF+M & MBGCN & MATN & GNMR & \emph{MBRec}\\
\hline
\multirow{4}{*}{Beibei}
&HR & 0.588 & 0.597 & 0.595 & 0.607 & 0.608 & 0.608 & 0.610 & 0.609 & 0.611 & 0.613 & 0.631 & 0.634 & 0.642 & 0.626 & 0.631 & \textbf{0.670}\\
\cline{2-18}
&Imprv & 13.95\% & 12.23\% & 12.61\% & 10.38\% & 10.20\% & 10.20\% & 9.84\% & 10.02\% & 9.66\% & 9.30\% & 6.18\% & 5.68\% & 4.36\% & 7.03\% & 6.18\% & --\\
\cline{2-18}
&NDCG & 0.333 & 0.336 & 0.332 & 0.341 & 0.341 & 0.343 & 0.346 & 0.343 & 0.375 & 0.349 & 0.384 & 0.372 & 0.376 & 0.385 & 0.380 & \textbf{0.402}\\
\cline{2-18}
&Imprv & 20.72\% & 19.64\% & 21.08\% & 17.89\% & 17.89\% & 17.20\% & 16.18\% & 17.20\% & 7.20\% & 15.19\% & 4.69\% & 8.06\% & 6.91\% & 4.42\% & 5.79\% & --\\
\hline
\multirow{4}{*}{Tmall}
&HR & 0.262 & 0.305 & 0.319 & 0.313 & 0.329 & 0.317 & 0.332 & 0.347 & 0.302 & 0.332 & 0.317 & 0.374 & 0.369 & 0.354 & 0.424 & \textbf{0.444} \\
\cline{2-18}
&Imprv & 69.47\% & 45.57\% & 39.18\% & 41.85\% & 34.95\% & 40.06\% & 33.73\% & 27.95\% & 47.02\% & 33.73\% & 40.06\% & 18.72\% & 20.33\% & 25.42\% & 4.72\% & --\\
\cline{2-18}
&NDCG & 0.153 & 0.189 & 0.191 & 0.190 & 0.196 & 0.191 & 0.198 & 0.206 & 0.185 & 0.179 & 0.178 & 0.221 & 0.222 & 0.209 & 0.249 & \textbf{0.262} \\
\cline{2-18}
&Imprv & 71.24\% & 38.62\% & 37.17\% & 37.89\% & 33.67\% & 37.17\% & 32.32\% & 27.18\% & 41.62\% & 46.37\% & 47.19\% & 18.55\% & 18.02\% & 25.36\% & 5.22\% & --\\
\hline
\multirow{4}{*}{IJCAI}
&HR & 0.285 & 0.392 & 0.449 & 0.448 & 0.455 & 0.469 & 0.429 & 0.452 & 0.461 & 0.481 & 0.475 & 0.481 & 0.463 & 0.489 & 0.519 & \textbf{0.554}\\
\cline{2-18}
&Imprv & 94.39\% & 41.33\% & 23.39\% & 23.66\% & 21.76\% & 18.12\% & 29.14\% & 22.57\% & 20.17\% & 15.18\% & 16.63\% & 15.18\% & 19.65\% & 13.29\% & 6.74\% & --\\
\cline{2-18}
&NDCG & 0.185 & 0.250 & 0.284 & 0.287 & 0.288 & 0.304 & 0.260 & 0.285 & 0.292 & 0.304 & 0.296 & 0.307 & 0.277 & 0.309 & 0.312 & \textbf{0.338}\\
\cline{2-18}
&Imprv & 82.70\% & 35.20\% & 19.01\% & 17.77\% & 17.36\% & 11.18\% & 30.00\% & 18.60\% & 15.75\% & 11.18\% & 14.19\% & 10.10\% & 22.02\% & 9.39\% & 8.33\% & --\\
\hline
\end{tabular}
\vspace{-0.05in}
\label{tab:target_behavior}
\end{table*}
\subsection{Data Description}
Our evaluations are performed on three real-world datasets: Tmall, BeiBei and IJCAI-Competition. We summarize the detailed statistical information of those datasets in Table~\ref{tab:data}.
\begin{itemize}[leftmargin=*]
\item \textbf{Tmall}. This is a public recommendation dataset from the Tmall e-commerce platform, which includes four types of user behaviors: click, add-to-cart, tag-as-favorite and purchase. This dataset contains 147,894 users and 99,037 items.\\\vspace{-0.12in}
\item \textbf{BeiBei}. This is another e-commerce dataset for item recommendation, collected from one of the largest infant product retail sites in China. It contains 21,716 users and 7,977 items with three types of user-item interactions, namely click, add-to-cart and purchase.\\\vspace{-0.12in}
\item \textbf{IJCAI-Competition}. This dataset was released for the IJCAI competition on modeling user online behavior. It involves four types of user-item interactions, \textit{i}.\textit{e}., click, add-to-cart, tag-as-favorite and purchase, and includes 423,423 users and 874,328 items.
\end{itemize}
To be consistent with the settings in~\cite{matnsigir20,jin2020multi}, the target behavior to predict in our recommendation scenario is purchase, and the other behavior types (\eg, click, add-to-cart) are regarded as auxiliary behaviors.
\subsection{Evaluation Metrics}
In our experiments, the models are evaluated on the top-$N$ item recommendation task with the metrics Hit Ratio (HR)@$N$ and NDCG@$N$. In our evaluation protocol, we adopt the leave-one-out strategy~\cite{zhao2020revisiting}, taking each user's last interaction under the target behavior as the test instance. In particular, following settings similar to~\cite{kang2018self,sun2019bert4rec}, for each user we sample 99 negative instances from the set of all non-interacted items, and the items in the test set are regarded as the positive instances.
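For concreteness, a minimal sketch of this evaluation protocol is given below (our own illustration, not the released implementation; the variable names are ours). For each user, the held-out positive item is ranked against the 99 sampled negatives, and the per-user values are averaged over all test users to obtain HR@$N$ and NDCG@$N$.
\begin{verbatim}
import numpy as np

# scores: predicted scores for the 100 candidate items of one user,
# with index 0 holding the held-out positive item.
def hr_ndcg_at_n(scores, n=10):
    rank = int(np.sum(scores > scores[0]))   # negatives ranked above the positive
    hr = 1.0 if rank < n else 0.0
    ndcg = 1.0 / np.log2(rank + 2) if rank < n else 0.0
    return hr, ndcg
\end{verbatim}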
\subsection{Baseline Models}
To demonstrate the effectiveness of our \emph{MBRec} framework, we compare our method with the following state-of-the-art methods, which cover several categories:
\noindent \textbf{Conventional Matrix Factorization Method}:
\begin{itemize}[leftmargin=*]
\item \textbf{BiasMF}~\cite{koren2009matrix}: this model attempts to incorporate user and item bias information into the matrix factorization, so as to learn latent embeddings of users/items.
\end{itemize}
\noindent \textbf{Neural Collaborative Filtering}:
\begin{itemize}[leftmargin=*]
\item \textbf{NCF}~\cite{he2017neuralncf}: it augments the embedding paradigm in collaborative filtering with the multilayer perceptron to enable the non-linear feature interactions.
\item \textbf{DMF}~\cite{xue2017deep}: this is another neural collaborative filtering technique, to learn a common low dimensional space for users and items with non-linear transformations.
\end{itemize}
\noindent \textbf{Autoencoder-based Recommendation Models}:
\begin{itemize}[leftmargin=*]
\item \textbf{AutoRec}~\cite{sedhain2015autorec}: this recommendation model stacks multiple autoencoder layers to project user-item interaction inputs into the latent representations for data reconstruction.
\item \textbf{CDAE}~\cite{wu2016collaborative}: It is a model-based CF recommender with the denoising auto-encoder technique to learn user correlations.
\end{itemize}
\noindent \textbf{Neural Auto-regressive Recommender Systems}:
\begin{itemize}[leftmargin=*]
\item \textbf{NADE}~\cite{zheng2016neural}: it designs a neural autoregressive architecture for the recommendation task with parameter sharing between different ratings.
\item \textbf{CF-UIcA}~\cite{du2018collaborative}: this is a user-item co-autoregressive framework with a new stochastic learning strategy to encode correlations between users and items.
\end{itemize}
\noindent \textbf{Graph Neural Network-based Recommendation Methods}:
\begin{itemize}[leftmargin=*]
\item \textbf{ST-GCN}~\cite{zhang2019star}: this graph-based method is built over an encoder-decoder framework to perform the convolution-based embedding propagation between user and item nodes.
\item \textbf{NGCF}~\cite{wang2019neural}: it is a state-of-the-art GNN-based collaborative filtering model which exploits the high-order user-item interaction structures.
\end{itemize}
\noindent \textbf{Multi-Behavior Recommender Systems}:
\begin{itemize}[leftmargin=*]
\item \textbf{NMTR}~\cite{gao2019neural}: This method relies on the defined cascaded behavior relationships for encoding the multi-behavior semantics with a multi-task learning scheme.
\item \textbf{DIPN}~\cite{guo2019buying}: this deep intent prediction network aims to integrate the browsing and buying preferences of users with a new type of touch-interactive behavior pattern.
\item \textbf{NGCF+M}~\cite{wang2019neural}: we generate a new multi-behavior recommendation variant of NGCF by injecting the multi-behavior context into the message passing scheme.
\item \textbf{MATN}~\cite{matnsigir20}: this recommendation model considers the influences among different types of interactions with attentive weights for pattern aggregation.
\item \textbf{GNMR}~\cite{icdegnmr}: this is the previous version of our \emph{MBRec}, which captures the pairwise dependencies between different types of behaviors by integrating the multi-channel behavior representation layer and a self-attention network for relation aggregation. However, it ignores the layer-wise embedding dependency during representation integration.
\item \textbf{MBGCN}~\cite{jin2020multi}: this multi-behavior recommender system leverages the graph convolutional network to capture the multi-behaviour patterns over the interaction graph.
\end{itemize}
\subsection{\bf Parameter Settings}
Our \emph{MBRec} model is implemented with TensorFlow. The parameter inference is conducted with the Adam optimizer, and the training phase is performed with a learning rate of $1\times10^{-3}$ and a batch size of 32. For the model hyperparameters, the dimensionality $d$ of the hidden states is set to 16 in our representation space. The number of channels in the behavior embedding layer is set to 8. We use 2 attention-based representation heads in our behavior inter-dependency modeling component. To alleviate overfitting, we apply weight decay regularization with the coefficient selected from \{0.05, 0.01, 0.005, 0.001\}.
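The settings above can be summarized in the following illustrative configuration (the variable names are ours and do not come from the released code):
\begin{verbatim}
config = {
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "batch_size": 32,
    "hidden_dim": 16,           # dimensionality d of the latent representations
    "num_channels": 8,          # behavior embedding channels
    "num_attention_heads": 2,   # heads in the behavior inter-dependency module
    "weight_decay_grid": [0.05, 0.01, 0.005, 0.001],
}
\end{verbatim}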
\begin{table}[t]
\caption{Recommendation accuracy with different Top-\textit{N} values in terms of \textit{HR@N} and \textit{NDCG@N} on BeiBei dataset.}
\centering
\scriptsize
\setlength{\tabcolsep}{1mm}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Model}&\multicolumn{2}{c|}{@5}&\multicolumn{2}{c|}{@10}&\multicolumn{2}{c|}{@20}&\multicolumn{2}{c|}{@50} \\
\cline{2-9}
&HR&NDCG&HR&NDCG&HR&NDCG&HR&NDCG\\
\hline
\hline
BiasMF & 0.453 & 0.287 & 0.588 & 0.333 & 0.678 & 0.357 & 0.807 & 0.379\\
\hline
NCF & 0.447 & 0.283 & 0.601 & 0.336 & 0.698 & 0.359 & 0.819 & 0.383\\
\hline
NGCF+M & 0.496 & 0.337 & 0.634 & 0.372 & 0.743 & 0.381 & 0.872 & 0.407\\
\hline
MBGCN & 0.498 & 0.337 & 0.642 & 0.376 & 0.740 & 0.398 & 0.902 & 0.429\\
\hline
AutoRec & 0.456 & 0.291 & 0.607 & 0.341 & 0.707 & 0.366 & 0.826 & 0.391\\
\hline
MATN & 0.467 & 0.330 & 0.626 & 0.385 & 0.667 & 0.342 & 0.833 & 0.396\\
\hline
\emph{MBRec} & \textbf{0.527} & \textbf{0.359} & \textbf{0.670} & \textbf{0.402} & \textbf{0.788} & \textbf{0.433} & \textbf{0.927} & \textbf{0.461}\\
\hline
\end{tabular}
\label{tab:vary_k}
\end{table}
\begin{table}[t]
\caption{Performance comparison with different number of negative samples in terms of \textit{HR@10} and \textit{NDCG@10}}
\centering
\scriptsize
\setlength{\tabcolsep}{0.6mm}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\# samples &\multicolumn{2}{c|}{400}&\multicolumn{2}{c|}{800}&\multicolumn{2}{c|}{1600}&\multicolumn{2}{c|}{3200} & \multicolumn{2}{c|}{6400}\\
\hline
Model&HR&NDCG&HR&NDCG&HR&NDCG&HR&NDCG & HR & NDCG\\
\hline
\hline
BiasMF & 0.285 & 0.140 & 0.149 & 0.078 & 0.091 & 0.049 & 0.055 & 0.032 & 0.036 & 0.022\\
\hline
NCF & 0.295 & 0.144 & 0.155 & 0.080 & 0.090 & 0.047 & 0.053 & 0.029 & 0.037 & 0.019\\
\hline
AutoRec & 0.245 & 0.122 & 0.134 & 0.074 & 0.0761 & 0.047 & 0.0500 & 0.033 & 0.036 & 0.023\\
\hline
ST-GCN & 0.311 & 0.158 & 0.184 & 0.093 & 0.104 & 0.053 & 0.058 & 0.031 & 0.036 & 0.019\\
\hline
MBGCN & 0.353 & 0.181 & 0.218 & 0.099 & 0.122 & 0.063 & 0.073 & 0.035 & 0.039 & 0.018\\
\hline
MATN & 0.339 & 0.165 & 0.192 & 0.093 & 0.113 & 0.056 & 0.058 & 0.031 & 0.037 & 0.020\\
\hline
GNMR & 0.345 & 0.176 & 0.218 & 0.100 & 0.103 & 0.055 & 0.063 & 0.036 & 0.040 & 0.024\\
\hline
\emph{MBRec} & \textbf{0.361} & \textbf{0.1848} & \textbf{0.219} & \textbf{0.109} &\textbf{0.124} & \textbf{0.064} & \textbf{0.075} & \textbf{0.040} & \textbf{0.044} & \textbf{0.025}\\
\hline
\end{tabular}
\vspace{-0.1in}
\label{tab:neg_samp}
\end{table}
\subsection{Performance Comparison (RQ1)}
The evaluation results (measured by HR@10 and NDCG@10) of all compared methods on the three datasets are shown in Table~\ref{tab:target_behavior}. In all cases, we observe that \emph{MBRec} consistently outperforms baseline methods from various research lines by a significant margin. We attribute this improvement to the joint learning of multi-behavior inter-dependencies as well as the cross-layer collaborative signals under the graph neural network. For example, \emph{MBRec} achieves over 34\% and 33\% relative improvements in HR@10 and NDCG@10, respectively, compared to the autoencoder-based recommendation models (\textit{i}.\textit{e}., AutoRec \& CDAE) on the Tmall data. Additionally, for the results in terms of HR@10 on the IJCAI-Competition data, the consistent gain achieved by the developed \emph{MBRec} is around 20-22\% over graph neural network-based CF models (ST-GCN and NGCF), and 18-29\% over neural auto-regressive recommendation methods (NADE and CF-UIcA).
The proposed \emph{MBRec} also outperforms all the baselines that model multi-behavior data with respect to all metrics. The results show that our \emph{MBRec} allows the graph neural architecture to capture the multi-behavior interaction patterns and to successfully distinguish the layer-wise representations. While MBGCN and NGCF+M are built over graph neural networks to model behavior correlations, they fall short in encoding the latent type-specific characteristics and the cross-type behavior inter-dependencies simultaneously. The performance of NMTR and MATN is limited by their failure to consider the high-order collaborative effects over the multi-behavior interaction graph. Furthermore, our new model \emph{MBRec} always achieves better recommendation accuracy than its simplified version GNMR, which confirms the effectiveness of our designed component for cross-layer mutual relation modeling. We further evaluate the performance of our \emph{MBRec} and several representative baselines at different top-$N$ positions. The results are reported in Table~\ref{tab:vary_k}. The best performance is achieved by our framework under all settings.
To further evaluate the performance of our MBRec\ framework, we compare the methods while varying the number of sampled negative instances. The evaluation results are shown in Table~\ref{tab:neg_samp}. We can observe that our MBRec\ method consistently outperforms the alternative methods under different numbers of negative samples in the range of \{400, 800, 1600, 3200, 6400\}. This observation validates the superiority of our MBRec\ in advancing the recommendation performance through the effective modeling of high-order heterogeneous collaborative relationships.
\subsection{Ablation Study (RQ2)}
In this section, we investigate whether each designed component helps improve the recommendation accuracy. Specifically, we generate four variants of our \emph{MBRec} corresponding to different aspects of the model design:
\begin{itemize}[leftmargin=*]
\item \textbf{Impact of Multi-Channel Behavior Embedding}. To evaluate the effect of our multi-channel behavior embedding layer, we compare the proposed method with the variant (w/o-MCE), which discards the behavior semantic modeling with multi-channel representation spaces. As shown in Table~\ref{tab:module_ablation} for the evaluation on three datasets, the results of \emph{MBRec} are better than those of the variant (w/o-MCE). This demonstrates that encoding type-specific behavior characteristics facilitates the multi-behavior dependency modeling. \\\vspace{-0.1in}
\item \textbf{Impact of Behavior Inter-dependency Modeling}.
To investigate the rationality of our behavior inter-dependency modeling, our \emph{MBRec} is compared with another model implementation (w/o-BIM) by removing the multi-behavior attention network. From the results in Table~\ref{tab:module_ablation}, \emph{MBRec} outperforms w/o-BIM in all cases, which benefits from the user/item representation enhanced by the exploration of pairwise behavior relational structures. \\\vspace{-0.1in}
\item \textbf{Impact of Behavior Pattern Fusion}.
We generate another simplified implementation of our recommendation architecture, (w/o-BFu), which removes the aggregation layer across the various types of behavior representations; instead, the type-aware behavior representations are directly combined through element-wise mean pooling. As expected, \emph{MBRec} achieves better recommendation accuracy than the variant (w/o-BFu), which verifies the necessity of our embedding fusion scheme in the multi-behavior dependency modeling. \\\vspace{-0.1in}
\item \textbf{Impact of High-order Mutual Relation Learning}.
To evaluate the effect of augmenting the graph neural model by capturing the cross-layer collaborative relations, we generate another variant (w/o-HMR) by only generating the output from the highest graph order after the information propagation process. From the evaluation results, we can observe the efficacy of the designed mutual relation encoder in learning the contributions of order-specific embeddings for the final prediction result.
\end{itemize}
\begin{table}[t]
\caption{Ablation study on key components of MBRec.}
\centering
\footnotesize
\setlength{\tabcolsep}{1mm}
\begin{tabular}{c|cc|cc|cc}
\hline
Data & \multicolumn{2}{c|}{Beibei Data} & \multicolumn{2}{c|}{Tmall Data} & \multicolumn{2}{c}{IJCAI Data}\\
\hline
Metrics & HR & NDCG & HR & NDCG & HR & NDCG\\
\hline
\hline
w/o-MCE & 0.6549 & 0.3876 & 0.4399 & 0.2580 & 0.5420 & 0.3289\\
w/o-BIM & 0.6696 & 0.4000 & 0.4391 & 0.2554 & 0.5358 & 0.3228\\
w/o-BFu & 0.6572 & 0.3907 & 0.4238 & 0.2487 & 0.5494 & 0.3321\\
w/o-HMR & 0.6169 & 0.3470 & 0.3856 & 0.2240 & 0.3445 & 0.1760\\% mutual order-wise
\hline
\emph{MBRec} & \textbf{0.6701} & \textbf{0.4021} & \textbf{0.4435} & \textbf{0.2624} & \textbf{0.5535} & \textbf{0.3376}\\
\hline
\end{tabular}
\label{tab:module_ablation}
\end{table}
\subsection{Analysis on Individual Behavior Context (RQ3)}
This section presents ablation studies on the influence of type-specific behavior context on the recommendation performance. The compared model variants are generated as follows. First, ``+'' followed by a behavior type means that only that behavior is considered by the system when making predictions (\textit{i}.\textit{e}., +buy). Second, ``-'' followed by a behavior type indicates that this type of user behavior (\eg, -pv, -cart) is removed from the recommendation architecture; for instance, -pv indicates that the page view behaviors are not included in the interaction inter-dependency modeling. We present the evaluation results in terms of NDCG@N and HR@N ($N=10$) on the three real-world datasets in Figure~\ref{fig:beh_ablation}. As shown in Table~\ref{tab:data}, the number of behavior types is 3 (\textit{i}.\textit{e}., page view, add-to-cart, buy) on the BeiBei data and 4 (\textit{i}.\textit{e}., page view, add-to-cart, tag-as-favorite, buy) on the Tmall and IJCAI-Competition data. From the results, we can observe that each type of interaction behavior contributes individually to user preference learning, and that integrating multi-behavior patterns improves performance.
\begin{figure}[t]
\centering
\subfigure[][Beibei-HR]{
\centering
\includegraphics[width=0.28\columnwidth]{figures/ablation_beibei_hr.pdf}
\label{fig:ab_beibei_hr}
}
\subfigure[][IJCAI-HR]{
\centering
\includegraphics[width=0.28\columnwidth]{figures/ablation_ijcai_HR.pdf}
\label{fig:ab_ijcai_HR}
}
\subfigure[][Tmall-HR]{
\centering
\includegraphics[width=0.28\columnwidth]{figures/ablation_tmall_HR.pdf}
\label{fig:ab_tmall_HR}
}
\subfigure[][Beibei-NDCG]{
\centering
\includegraphics[width=0.28\columnwidth]{figures/ablation_beibei_NDCG.pdf}
\label{fig:ab_beibei_NDCG}
}
\subfigure[][IJCAI-NDCG]{
\centering
\includegraphics[width=0.28\columnwidth]{figures/ablation_ijcai_NDCG.pdf}
\label{fig:ab_ijcai_NDCG}
}
\subfigure[][Tmall-NDCG]{
\centering
\includegraphics[width=0.28\columnwidth]{figures/ablation_tmall_NDCG.pdf}
\label{fig:ab_tmall_NDCG}
}
\vspace{-0.1in}
\caption{Impact study of diverse behavior types. There are three types of behaviors for BeiBei data, and four types of behaviors for Tmall, IJCAI data.}
\label{fig:beh_ablation}
\vspace{-0.1in}
\end{figure}
\subsection{Performance Under Different Sparsity (RQ4)}
\label{sec:sparsity}
In our experiments, we also evaluate the recommendation performance of different models under different interaction sparsity levels. Following settings similar to~\cite{wang2019neural,wu2020diffnet}, we first partition users into five groups based on the number of interactions. For example, ``\textless36'' and ``\textless52'' indicate that users belonging to these groups have between 1 and 35, and between 36 and 51 interactions, respectively. We keep the same number of users in each group and select the corresponding ranges as shown on the x-axis of Figure~\ref{fig:sparsity}. The total number of users in each group and the recommendation accuracy with respect to HR (Figure~\ref{fig:sparsity}(a)) and NDCG (Figure~\ref{fig:sparsity}(b)) are shown on the left and right y-axes of Figure~\ref{fig:sparsity}, respectively. From the evaluation results, we observe the superiority of our \emph{MBRec} at all sparsity levels. This suggests that incorporating multi-typed behavior patterns into user preference learning yields performance improvements over the other baselines. In addition, we observe that the overall performance of all compared methods shares a similar increasing trend as users have more interactions. This may indicate that more user behavior data helps characterize user preference with more accurate latent representations.
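The grouping itself can be sketched as follows (an assumed implementation of the partitioning described above; it simply sorts users by their interaction counts and splits them into five equally sized groups):
\begin{verbatim}
import numpy as np

def sparsity_groups(interaction_counts, n_groups=5):
    # interaction_counts: number of target-behavior interactions per user
    order = np.argsort(interaction_counts)
    return np.array_split(order, n_groups)   # list of user-index arrays
\end{verbatim}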
\begin{figure}[t]
\centering
\subfigure[][Tmall HR@10]{
\centering
\includegraphics[width=0.45\columnwidth]{figures/sparsity_tmall_hr.pdf}
\label{fig:sp_tmall_hr}
}
\subfigure[][Tmall NDCG@10]{
\centering
\includegraphics[width=0.45\columnwidth]{figures/sparsity_tmall_ndcg.pdf}
\label{fig:sp_tmall_ndcg}
}
\caption{Performance comparison of MBRec\ and baseline methods \wrt\ different data sparsity levels on Tmall data.}
\label{fig:sparsity}
\end{figure}
\begin{table}[t]
\caption{Effect of embedding propagation layers.}
\centering
\footnotesize
\begin{tabular}{c|cc|cc|cc}
\hline
Data & \multicolumn{2}{c|}{BeiBei Data} & \multicolumn{2}{c|}{Tmall Data} & \multicolumn{2}{c}{IJCAI Data}\\
\hline
Metrics & HR & NDCG & HR & NDCG & HR & NDCG\\
\hline
\hline
\emph{MBRec}-1 & 0.662 & 0.394 & 0.383 & 0.226 & 0.533 & 0.313 \\
\emph{MBRec}-2 & \textbf{0.670} & \textbf{0.402} & \textbf{0.444} & \textbf{0.262} & \textbf{0.554} & \textbf{0.338} \\
\emph{MBRec}-3 & 0.664 & 0.398 & 0.408 & 0.237 & 0.543 & 0.332 \\
\hline
\end{tabular}
\vspace{-0.05in}
\label{tab:graph_layers}
\end{table}
\begin{table}[t]
\caption{Effect of behavior embedding channels.}
\vspace{-0.1in}
\centering
\footnotesize
\begin{tabular}{c|cc|cc|cc}
\hline
Data & \multicolumn{2}{c|}{BeiBei Data} & \multicolumn{2}{c|}{Tmall Data} & \multicolumn{2}{c}{IJCAI Data}\\
\hline
Metrics & HR & NDCG & HR & NDCG & HR & NDCG\\
\hline
\hline
\emph{MBRec}-2 & 0.655 & 0.390 & 0.438 & 0.260 & 0.518 & 0.308 \\
\emph{MBRec}-4 & 0.656 & 0.401 & 0.443 & 0.260 & 0.553 & 0.328 \\
\emph{MBRec}-8 & \textbf{0.670} & \textbf{0.402} & \textbf{0.444} & \textbf{0.262} & \textbf{0.554} & \textbf{0.338} \\
\emph{MBRec}-16 & 0.646 & 0.387 & 0.419 & 0.241 & 0.558 & 0.336 \\
\hline
\end{tabular}
\label{tab:memory_units}
\end{table}
\subsection{Analysis on Hyperparameters (RQ5)}
\label{sec:hyper_study}
We study the impact of different hyperparameter settings on the model performance in our joint learning framework.\\\vspace{-0.1in}
\begin{itemize}[leftmargin=*]
\item \textbf{Comparison with Different Hidden Dimensionality}. Our model results with different dimension sizes of the hidden states are shown in Figure~\ref{fig:hyperparam}. We observe that a larger embedding size does not always improve model performance, especially on sparse datasets, because a larger hidden dimensionality may lead to overfitting. We set the hidden dimensionality to $d=16$ as the default in our \emph{MBRec}. \\\vspace{-0.05in}
\item \textbf{Comparison with Different Graph Model Depth}.
To investigate the performance of our \emph{MBRec} method when stacking multiple graph neural layers, we conduct experiments varying the number of graph-based embedding propagation layers. As shown in Table~\ref{tab:graph_layers}, \emph{MBRec}-2 and \emph{MBRec}-3 obtain consistent improvements over \emph{MBRec}-1, which merely considers first-order neighbors for message passing. We attribute this performance improvement to the encoding of collaborative relations based on second- and third-order neighboring node dependencies. With the further increase of model depth from two to three graph layers, the performance slightly degrades under the deeper graph neural architecture. The reason may lie in that a deep graph neural framework tends to overfit and suffers from the over-smoothing issue in the generated user/item representations. According to the statistics of our Tmall data, when three-hop connections are considered, a large percentage of user-item pairs become connected, which unavoidably leads to over-smoothing and makes user embeddings indistinguishable.\\\vspace{-0.05in}
\item \textbf{Comparison with Different Number of Channels}. We vary the number of embedding channels in our multi-channel behavior embedding layer. The results in terms of HR@10 and NDCG@10 are presented in Table~\ref{tab:memory_units}, from which we notice that the performance of \emph{MBRec} first improves as the number of behavior representation channels increases. However, the recommendation performance degrades when the number of channels is increased further, due to overfitting. Hence, eight behavior embedding channels are sufficient for encoding the interaction semantics.
\end{itemize}
\begin{figure}[t]
\vspace{-0.05in}
\centering
\begin{adjustbox}{max width=1.0\linewidth}
\input{./figures/newhyper1.tex}
\end{adjustbox}
\begin{adjustbox}{max width=1.0\linewidth}
\input{./figures/newhyper2.tex}
\end{adjustbox}
\caption{Impact of hidden state dimensionality of MBRec framework on BeiBei, Tmall, IJCAI datasets.}
\vspace{-0.1in}
\label{fig:hyperparam}
\end{figure}
\begin{table}[t]
\caption{Computational cost (seconds) study.}
\centering
\small
\vspace{-0.05in}
\begin{tabular}{lccc}
\toprule
Models& BeiBei \ \ \ & Tmall & IJCAI \\
\midrule
NADE & 4.1s & 26.9s & 60.4s\\
CF-UIcA & 11.5s & 61.7s & 139.1s\\
ST-GCN & 12.6s & 58.5s & 94.8s\\
NGCF+M & 15.8s & 74.6s & 152.3s\\
NMTR & 14.0s & 37.3s & 118.0s\\
MBGCN & 17.4s & 85.3s & 186.5s\\
MATN & 11.5s & 74.7s & 196.5s\\
DIPN & 53.2s & 172.6s & 284.6s\\
\hline
\emph{MBRec} & 14.3s & 58.6s & 101.1s \\
\hline
\end{tabular}
\label{tab:time}
\end{table}
\subsection{Computational Cost Analysis (RQ6)}
\label{sec:time_eval}
Our evaluation also includes an investigation of the computational cost of our \emph{MBRec} model and several representative methods, in terms of their running time on different datasets. We evaluate the computational cost of all compared methods on a machine with an NVIDIA TITAN RTX GPU, an Intel Xeon W-2133 CPU (3.6~GHz) and 64~GB RAM. For a fair comparison, we apply the same hidden state dimensionality to all methods. For graph-based methods, the number of embedding propagation layers is set to 2. From the results reported in Table~\ref{tab:time}, we observe that our \emph{MBRec} model achieves efficiency comparable to the other baselines in terms of running time. In particular, compared with the multi-behavior recommendation baselines (NGCF+M, MBGCN), our \emph{MBRec} requires less running time, which indicates the efficiency of our multi-behavior graph neural framework. Additionally, compared with the autoregressive collaborative filtering model CF-UIcA, our \emph{MBRec} still achieves competitive efficiency despite incorporating multi-typed behavior context. In summary, the above observations justify the scalability of our proposed \emph{MBRec} in dealing with large-scale user behavior data for recommendation.\\
\subsection{Model Interpretation with User Study (RQ7)}
\label{subsec:case}
To analyze the multi-behavior dependency interpretation of our proposed MBRec\ framework, we conduct user studies with real user examples. We show the study results in Figure~\ref{fig:case_study_weights}. In this figure, the cross-type behavior dependencies between users ($u_{116}$, $u_{1621}$) and items ($v_{13844}$, $v_{64224}$) are shown with the learned quantitative dependency weights. Specifically, $E_i^{(1)}$ and $E_i^{(2)}$ denote the user representations encoded from the $1^{st}$ and $2^{nd}$ graph-based embedding propagation layers, respectively. Similarly, $E_j^{(1)}$ and $E_j^{(2)}$ represent the item embeddings encoded from the $1^{st}$ and $2^{nd}$ message passing layers, respectively. From the study results, we summarize the key observations as follows:
\begin{itemize}[leftmargin=*]
\item \textbf{Encoded Behavior Inter-Correlations}. In this user study, we present the learned behavior inter-correlation matrix with the dimension of $\mathbb{R}^{4\times 4}$ to reflect the pairwise correlations between different types of user behaviors, \textit{i}.\textit{e}., page view, add-to-cart, tag-as-favorite and purchase.\\\vspace{-0.1in}
\item \textbf{Type-specific Behavior Pattern Fusion}. In our MBRec\ recommendation framework, we design the multi-behavior pattern aggregation module with the aim of integrating type-specific behaviour patterns for making final recommendation. In particular, each user is associated with a learned attention-based behavior importance vector with the dimension of $\mathbb{R}^{1\times 4}$ (as shown in Figure~\ref{fig:case_study_weights}). For example, we can observe that users who view item $v_{13844}$ are more likely to purchase it compared with item $v_{64224}$.\\\vspace{-0.1in}
\item \textbf{Cross-layer Mutual Relation Encoding}. In our multi-layer graph neural framework, we introduce a mutual relation encoding component to explicitly aggregate representations from different hops in the multi-behavior interaction graph. $E_i^{(1)}$ and $E_i^{(2)}$ represent the embeddings of user $u_i$ encoded from his/her first- and second-order neighboring nodes. In Figure~\ref{fig:case_study_weights}, the correlations among hop-aware user/item representations are shown with different connection lines. From the visualization results, we can observe that cross-layer user/item embeddings (\eg, $E_i^{(0)}$ and $E_j^{(1)}$) are often more highly correlated with each other than embeddings of the same layer (\eg, $E_i^{(2)}$ and $E_j^{(2)}$).
\end{itemize}
\subsection{Effect of Graph Sampling (RQ8)}
In this section, we investigate the effect of our graph sampling algorithm on model performance by testing the prediction accuracy of MBRec\ with different numbers of training and testing sub-graph nodes. Specifically, MBRec\ is trained with sub-graphs containing 5,000, 10,000, 20,000 and 40,000 nodes, and is tested using input sub-graphs containing 5,000, 10,000, 20,000, 40,000 and 60,000 nodes. The results are shown in Table~\ref{tab:graphSampling}, from which we conclude that testing on larger sub-graphs always yields better performance, whereas training on larger sub-graphs does not. This may be because training with smaller sub-graphs serves as a form of regularization.
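As an illustration only (the actual sampler is part of our model implementation and may differ), a node-induced sub-graph of roughly $N$ user/item nodes can be drawn from the behavior-specific interaction matrices as follows:
\begin{verbatim}
import numpy as np

def sample_subgraph(behavior_mats, n_nodes, rng=np.random.default_rng(0)):
    # behavior_mats: one scipy.sparse user-item matrix per behavior type
    n_users, n_items = behavior_mats[0].shape
    users = rng.choice(n_users, size=n_nodes // 2, replace=False)
    items = rng.choice(n_items, size=n_nodes // 2, replace=False)
    # restrict every behavior-specific adjacency matrix to the sampled nodes
    return [m.tocsr()[users][:, items] for m in behavior_mats]
\end{verbatim}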
\begin{table}[t]
\scriptsize
\centering
\setlength{\tabcolsep}{0.7mm}
\caption{Influence of the sub-graph sampling scale.}
\label{tab:graphSampling}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{3}{*}{Train. $N$} & \multicolumn{10}{c|}{Number of Sub-graph Size $N$ When Testing}\\
\cline{2-11}
& \multicolumn{2}{c|}{5,000} & \multicolumn{2}{c|}{10,000} & \multicolumn{2}{c|}{20,000} & \multicolumn{2}{c|}{40,000} & \multicolumn{2}{c|}{60,000}\\
\hline
& HR & NDCG & HR & NDCG & HR & NDCG & HR & NDCG & HR & NDCG\\
\hline
\hline
\multirow{1}{*}{5,000} & 0.365 & 0.201 & 0.409 & 0.230 & 0.463 & 0.266 & 0.527 & 0.310 & 0.552 & 0.338\\
\hline
\multirow{1}{*}{10,000} & 0.359 & 0.198 & 0.407 & 0.229 & 0.464 & 0.266 & 0.529 & 0.307 & 0.552 & 0.336\\
\hline
\multirow{1}{*}{20,000} & 0.357 & 0.196 & 0.407 & 0.228 & 0.466 & 0.270 & 0.537 & 0.326 & 0.554 & 0.338\\
\hline
\multirow{1}{*}{40,000} & 0.322 & 0.168 & 0.367 & 0.197 & 0.424 & 0.236 & 0.500 & 0.292 & 0.548 & 0.330 \\
\hline
\end{tabular}
\end{table}
\section{Introduction}
\label{sec:intro}
With the growth of Internet services and mobile applications, recommender systems have played an increasingly critical role in addressing the information overload for many online platforms~\cite{zhao2018recommendations,han2019adaptive,yang2022knowledge}. For example, the benefits of recommendation systems could lie in providing personalized recommendations on e-commerce sites (\eg, Amazon and Taobao), or satisfying users' interests in online music streaming services (\eg, Pandora and Spotify). Currently, collaborative filtering techniques serve as one of the most important paradigms to accurately understand the preferences of users, based on their interaction behaviors~\cite{zheng2018spectral,shi2018heterogeneous}.
With the remarkable success of deep learning, there is renewed interest in modeling user-item interactions with various neural network architectures, such as multi-layer perceptrons~\cite{he2017neuralncf,sheu2021knowledge}, autoencoder networks~\cite{sedhain2015autorec} and neural autoregressive models~\cite{zheng2016neural}. Built on the recent strength of graph neural networks, several studies seek to aggregate feature information from the graph-structured relational data generated by the observed user behaviors~\cite{wang2019neural,zhang2019star}. These neural network models generally focus on a single type of user interaction behavior over items during the vectorized representation of users and items. However, in real-life applications, users often interact with items in diverse ways~\cite{gao2019neural,zhang2020multiplex,yu2022multiplex}. For example, users can view, tag-as-favorite and purchase different products on e-commerce platforms. In such real-life scenarios, effective modeling of multi-typed user-item interactions can provide auxiliary knowledge to characterize the diverse user behavior semantics for interest representation in recommender systems~\cite{guo2019buying,wei2022contrastive}.
To simplify the model design, the embedding functions in most existing recommendation models largely ignore the explicit encoding of multi-behavior collaborative signals, and are thus insufficient to yield satisfactory latent representations for both users and items. In this paper, we tackle multi-behavior recommendation by enhancing user preference learning with the exploration of multi-typed user behavior data.
Although it is desirable to consider behavior diversity in user interest representation learning for accurate recommendations, it is not a trivial task to capture the complex multi-behavioral collaborative relations. In particular, each type of user behavior has its own interaction context, and there exist complex dependencies across various types of interactions. Different behavior views usually provide complementary information for encoding users' interests. Therefore, in order to learn meaningful behavior representations from multi-typed user-item interactions, effective cross-type behavior dependency modeling is a necessity in solving the multi-behavior recommendation problem. In addition, from the view of the user-item interaction graph, the effectiveness of exploring subgraph structures has been shown in recently emerged graph-based methods (\eg, PinSage~\cite{ying2018graph} and NGCF~\cite{wang2019neural}), which consider high-hop neighbors. However, to design an effective embedding function in the recommendation architecture for representation aggregation, it is crucial to expressively model the high-order multi-behavior patterns of users over the interaction graph structure from different propagation layers. We illustrate the multi-behavior recommendation scenario with the examples in Figure~\ref{fig:intro}. We can observe that users can interact with items through different behavior types (differentiated by line weights), \eg, click, add-to-cart and purchase. In such cases, we generate a multi-behavior interaction graph to represent the diverse collaborative relations among users and items with high-order multiplex connectivity information. \\\vspace{-0.1in}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{figures/intro.pdf}
\vspace{-0.05in}
\caption{Illustration of the user-item multi-behavior interactions and the corresponding multi-behavior high-order connectivity. Best viewed in color.}
\label{fig:intro}
\vspace{-0.1in}
\end{figure}
\noindent \textbf{Present Work.} Motivated by the aforementioned challenges, this work proposes a new recommendation framework: \underline{M}ulti-\underline{B}ehavior \underline{G}raph \underline{N}eural \underline{N}etwork (MBRec), which explicitly incorporates multi-typed behavior context into the encoding of diverse user preferences. In the proposed MBRec\ framework, we can capture the heterogeneous relationships across different types of user-item interactions. Specifically, to cope with the cross-type behavior dependencies, we propose an attention-enhanced graph neural network to preserve the high-order interaction patterns over the multiplex graph structures. In this way, MBRec\ is able to preserve the fine-grained semantics of user-item relations and facilitate the modeling of diverse user interests.
To differentiate the influences of different types of behaviors, a gated aggregation mechanism is developed to help fuse the contextual signals from different types of behaviors for a better embedding learning. In addition, we endow our MBRec\ with the capability of aggregating high-order behavior representations in an adaptive way. To achieve this goal, a mutual relation encoder is proposed to learn summarized representations across different graph layers. This component allows our model to better capture and interpret the global property of user-item interaction graph.
Lastly, it is worth mentioning that although multi-behavior information has been considered in recent studies~\cite{gao2019neural,gao2019learning}, these works only consider the multi-behavior dependencies with predefined correlations, which can hardly capture the complex cross-type behavior dependencies in real-world recommendation scenarios. In addition, another multi-behavior recommendation model proposes to consider the dependencies between different interactions based on a multi-channel graph convolutional network~\cite{jin2020multi}. Different from these approaches, we contribute a new recommendation framework to explicitly exploit the high-order collaborative signals in the form of multi-behavior patterns. To enhance the global multi-behavior relation learning, we design a new graph aggregation scheme with order-wise mutual dependency modeling, which automatically differentiates the importance of behavior-aware representations from different graph hops during the message passing process. Failing to consider the complex global multi-behavior patterns could easily lead to suboptimal representations for user preference modeling.
In summary, we highlight our contributions as follows:
\begin{itemize}[leftmargin=*]
\item We introduce a new multi-behavior recommendation framework, which explores the high-order and cross-type behavior inter-dependencies in a hierarchical manner.
\item We propose a new recommendation framework that inherits the merits of graph-enhanced collaborative filtering paradigm and designs a multi-behavior propagation strategy for heterogeneous user-item interactions.
\item Then, a graph-structured mutual relation encoder is developed to explore high-order user preference and promote the collaboration of different layer-specific patterns for robust multi-behavior representations. Furthermore, a graph sampling algorithm is developed to improve the scalability of MBRec\ for dealing with large-scale graph data.
\item Experimental results on three real-world datasets show the superiority of our MBRec\ model over a variety of baselines. We further perform the model ablation study to better understand the effect of our designed sub-modules.
\end{itemize}
In this paper, we propose to advance our previous work~\cite{icdegnmr} in the following aspects: i) Different from our previous method which overlooks the cross-layer implicit dependency between behavior representations, we propose a new recommendation framework to explicitly promote the cooperation of behavior patterns across different graph layers, for more accurate user and item representations (Section~\ref{sec:solution}). ii) We provide a comprehensive complexity analysis and computational cost evaluation of our model to show that our proposed model could achieve competitive time efficiency as compared to most state-of-the-art techniques (Section~\ref{sec:complexity} and Section~\ref{sec:time_eval}). iii) We further evaluate the model performance with respect to different sparsity levels of user-item interaction data, to justify the robustness of our multi-behavior recommendation method in capturing users' preferences under different data sparsity degrees (Section~\ref{sec:sparsity}). iv) We add three new recently developed baselines (\textit{i}.\textit{e}., MBGCN, MATN and NGCF+M) in our performance evaluation to show the superiority of our method (Section~\ref{sec:eval}). Additionally, we present more hyperparameter study results with respect to the hidden state dimensionality and behavior embedding channels (Section~\ref{sec:hyper_study}). v) We perform case studies to show the interpretation capability of our approach in capturing the behavior relationships as well as the layer-wise representation dependence (Section~\ref{subsec:case}). vi) We adopt two new datasets (BeiBei and IJCAI-contest) collected from real-world e-commerce platforms for performance evaluation across different experimental settings. The user-item interactions in online retailing systems are multiplex in nature and can well reflect the relation heterogeneity between users and items. vii) Finally, we present a detailed discussion of the related work from three research lines: neural collaborative filtering techniques, recommendation with multi-behavior modeling and graph neural networks (Section~\ref{sec:relate}). \\\vspace{-0.1in}
\section*{Acknowledgments}
We thank the reviewers for their valuable feedback and comments. This research work is supported by the research grants from the Department of Computer Science \& Musketeers Foundation Institute of Data Science at the University of Hong Kong (HKU). The research is also partially supported by the National Natural Science Foundation of China (62072188), Major Project of National Social Science Foundation of China (18ZDA062), and Science and Technology Program of Guangdong Province (2019A050510010).
\bibliographystyle{abbrv}
\section{Preliminaries}
\label{sec:model}
We begin by describing the multi-behavior recommendation problem and introducing key notations. Suppose we have a recommendation scenario with a set of users $U$ ($u_i\in U$) and a set of items $V$ ($v_j\in V$). Here, the indexes of users and items are denoted by $i$ ($i\in [1,...,I]$) and $j$ ($j\in [1,...,J]$), respectively. Different from most existing recommender systems which associate users with their interacted items based on a single type of user-item relation, this work explores the inter-dependent relations across different types of user-item behaviors (\eg, click, tag-as-favorite, review, like, or purchase). \\\vspace{-0.12in}
\noindent Definition 1. \textbf{Multi-Behavior Interaction Tensor} $\textbf{X}$. To represent the multi-typed interactions between users and items, we define a three-way multi-behavior interaction tensor $\textbf{X} \in \mathbb{R}^{I\times J\times K}$, where $K$ (indexed by $k$) denotes the number of types of user-item interactions. Given the $k$-th type of interactions, the corresponding element $x_{i,j}^k \in \textbf{X}$ is set to 1 if the item $v_j$ has been adopted by user $u_i$. Otherwise, $x_{i,j}^k=0$.\\\vspace{-0.12in}
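To make the data representation concrete, the following minimal Python sketch (ours, not part of the original implementation; the function name and the dense NumPy array are illustrative assumptions) builds the multi-behavior interaction tensor $\textbf{X}$ from raw (user, item, behavior) triples. In practice, a sparse format would be preferable for large catalogs.
\begin{verbatim}
import numpy as np

def build_interaction_tensor(triples, I, J, K):
    """Build X in {0,1}^{I x J x K} from (user i, item j, behavior k) triples."""
    X = np.zeros((I, J, K), dtype=np.int8)
    for i, j, k in triples:
        X[i, j, k] = 1   # x_{i,j}^k = 1 if u_i adopted v_j under behavior k
    return X

# e.g., behaviors indexed as 0: page view, 1: add-to-cart, 2: favorite, 3: purchase
X = build_interaction_tensor([(0, 5, 0), (0, 5, 3), (2, 7, 1)], I=3, J=10, K=4)
\end{verbatim}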
\noindent \textbf{Multi-Behavior Recommendation}. In the recommendation scenario with multi-type interaction behaviors, we first define the target behavior type (\eg, the $k$-th) as our predictive objective and consider the other types of interactions ($k'\in [1,...,K]$, $k' \neq k$) as auxiliary behaviors. In real-world recommendation scenarios, the target behavior type can be set according to different task-specific requirements. For example, some e-commerce systems may be more interested in ultimate purchase transactions~\cite{wu2018turning}, while forecasting click-interactive behavior is also very important for online advertising platforms~\cite{ren2018learning}. We formally present the multi-behavior recommendation as: \\\vspace{-0.12in}
\noindent \textbf{Input}: multi-behavior interaction tensor $\textbf{X} \in \mathbb{R}^{I\times J\times K}$ which jointly includes source and target behavior of users in $U$.\\
\noindent \textbf{Output}: A predictive framework which effectively forecasts the unknown target type of user-item interactions.
\section{Related Work}
\label{sec:relate}
\begin{figure}[t]
\centering
\includegraphics[width=0.93\columnwidth]{figures/case_study_2.pdf}
\caption{Interpretation study of multi-behavior inter-dependency in our \emph{MBRec} \wrt\ behavior inter-dependency modeling, behavior pattern aggregation and cross-layer mutual relation learning. Dark color indicates higher relevance. Best viewed in color.}
\label{fig:case_study_weights}
\end{figure}
\subsection{Neural Network Collaborative Filtering Models}
Collaborative Filtering (CF) has become one of the most important paradigms for personalized recommender systems in real-life platforms~\cite{wang2019unified,huang2021neural}. The general idea of CF models is that users may share similar preference if they interact with similar items~\cite{hu2008collaborative}. In recent years, many efforts have been made to augment the CF techniques with deep neural network models~\cite{shi2019deep}. These methods apply different neural mechanisms (\eg, autoencoder, attention mechanism, graph neural network) in the matching function to parameterize users and items into latent representation space. The learned representations of users and items can be used to estimate the likelihood of unobserved interactions.
Some studies follow this research line to enable non-linear feature interactions with multi-layer feed-forward networks, such as NCF~\cite{he2017neuralncf} and DMF~\cite{xue2017deep}. To incorporate item relational data into the CF model, the relational collaborative filtering (RCF~\cite{xin2019relational}) framework designs a neural two-stage attention mechanism to enhance the item embedding process. Another recent line of recommendation models explores the user-item interaction graph to capture the collaborative filtering signals. For example, NGCF~\cite{wang2019neural} is developed based on the high-hop information propagation framework to guide the user/item representation procedure. ST-GCN~\cite{zhang2019star} is another graph learning model which encodes user-item interaction patterns with an encoder-decoder framework. In addition, to bridge logical reasoning and representation learning in recommender systems, a neural collaborative reasoning approach (NLR)~\cite{chen2020neural} is proposed to incorporate logic priors into the neural architecture.
\subsection{Recommendation with Multi-Behavior Modeling}
There exist some research works aiming at enhancing recommendation models by considering the multi-typed behaviors of users~\cite{tang2016empirical}. In those methods, the implicit user-item feedback from auxiliary behaviors (\eg, click, add-to-cart) is considered as behavior contextual signals to predict target user behaviors (\eg, purchase)~\cite{jin2020multi,xia2021graph}. For example, multi-task learning frameworks are developed to perform joint training among the prediction tasks of different behavior types~\cite{gao2019neural}. However, those methods rely on predefined dependent relationships between different types of user behaviors, and can hardly reflect the complex multi-behavior context in practical scenarios.
To capture the correlations between different types of behaviors, MATN~\cite{matnsigir20} utilizes the attention network for multi-behavior information aggregation. Both browsing and buying behaviors of users are considered in DIPN~\cite{guo2019buying} with an attention-based RNN model.
However, the high-order behavior dependent structures have been overlooked by these methods. In this work, the proposed MBRec\ framework aims to encode the high-order collaborative signals in the embedding function. Additionally, graph-based methods have been designed to tackle the multi-behavior recommendation problem. Specifically, Zhang~\textit{et al}.~\cite{zhang2020multiplex} employ the multiplex network embedding technique to generate behavior-aware embeddings. MBGCN~\cite{jin2020multi} is built on the graph convolutional network to propagate the behavior embeddings over the interaction graph. Our new MBRec\ differs from those graph-based models in two perspectives: i) we discriminate the influence between various behavior patterns through a dual-stage relation learning scheme. The designed new message passing paradigm endows the multi-behavior graph neural network with the capability of encoding behavior-aware characteristics and dependencies simultaneously. ii) The high-order collaborative signals are aggregated across different graph layers explicitly, under the cross-layer message passing architecture.
\subsection{Graph Neural Networks for Recommendation}
In view of the effectiveness of Graph Neural Networks (GNNs), GNNs have been widely used to perform the representation learning over the graph-structured data~\cite{wang2020gcn,wu2020comprehensive,liu2021item,chen2020handling,li2021learning,tang2019coherence,kipf2016semi}.
Recent research works apply graph neural networks to model user-item interactions in recommender systems: PinSage~\cite{ying2018graph} is a graph convolutional network that propagates embeddings over the pin-board bipartite graph structure. Additionally, modeling dynamic user-item interactions has attracted much attention in recommender systems~\cite{chen2021temporal,yan2019cosrec}. To encode sequential patterns, graph neural networks have been utilized to consider the transitions between items of session sequences in SRGNN~\cite{wu2019session} and MTD~\cite{2021graph}, or user interaction sequences in H2SeqRec~\cite{li2021hyperbolic}. In addition, the graph diffusion network~\cite{wu2020diffnet} and graph attention mechanism~\cite{wu2019dual} have been utilized to capture the influence among users, so as to incorporate social relations into the recommendation and alleviate the data sparsity issue.
\section{Methodology}
\label{sec:solution}
In this section, we elaborate on the technical details of our proposed MBRec\ framework. Its key idea is to explore the complex inter-dependencies across different types of users' interactive behaviors, and to parameterize weight matrices for the relation heterogeneity aggregation, high-order message passing and propagation modules. This process can be decomposed into two key components: (i) Multi-Behavior Graph-Structured Dependency Modeling: it jointly preserves the type-specific behavior semantics and type-wise behavior inter-dependencies within a graph-structured learning architecture. (ii) Cross-Layer Mutual Relation Learning: it captures the mutual relationships between the aggregated feature representations of neighbors at different hops.
\begin{table}[t]
\caption{Summary of Key Notations}
\label{tab:data}
\centering
\footnotesize
\setlength{\tabcolsep}{0.6mm}
\begin{tabular}{c | c}
\hline
Notations & Description \\
\hline
$U$ ($u_i\in U$) & Set of users \\
$V$ ($v_j\in V$) & Set of items \\
$K$ (indexed by $k$) & Number of behavior types \\
$\textbf{X} \in \mathbb{R}^{I\times J\times K}$ & Multi-behavior interaction tensor \\
$\textbf{H}_{i\leftarrow}^{k,(l)}$, $\textbf{H}_{j\leftarrow}^{k,(l)}$ & Behavior-aware representations of users/items \\
$\lambda(\cdot)$ & Behavior-aware encoding function \\
$\hat{\textbf{H}}_{i\leftarrow}^{k,(l)}$ & Recalibrated type-specific behavior embedding \\
$\beta_k $ & Attention value for type-specific behavior \\
$\psi(\cdot)$ & Cross-behavior message aggregation function \\
$G=\{U,V,\mathcal{E}\}$ & Multi-behavior high-order graph \\
$\hat{\textbf{E}}_i^{(l)}$, $\hat{\textbf{E}}_j^{(l)}$ & Cross-layer fused embedding \\
$\mathbf{\Gamma}_{i,j}$ & Fused representation for prediction \\
\hline
\end{tabular}
\label{tab:notations}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/framework2.pdf}
\caption{Multi-behavior graph dependency modeling.}
\label{fig:framework2}
\vspace{-0.05in}
\end{figure}
\subsection{Multi-Behavior Graph Dependency Modeling}
Figure~\ref{fig:framework2} presents the model flow of our multi-behavior graph dependency modeling. With the consideration of different types of user-item behavioral relations, we first construct the multi-behavior user-item interaction graph.\\\vspace{-0.12in}
\noindent Definition 2. \textbf{Multi-Behavior Graph} $G$. Based on the input multi-behavior interaction tensor $\textbf{X}$, we generate a graph $G=\{U,V,\mathcal{E}\}$ where user node $u_i \in U$ and item node $v_j \in V$ are connected with the edge $e_{i,j,k}\in \mathcal{E}$ if $u_i$ interacts with $v_j$ under the $k$-th behavior type (\textit{i}.\textit{e}., $x_{i,j}^k=1$). Each edge $e_{i,j,k}\in \mathcal{E}$ is associated with a specific behavior type $k$. Due to the interaction heterogeneity property, there exist multiplex edges between the same user-item pair if $u_i$ interacts with $v_j$ under multiple behavior types.\\\vspace{-0.12in}
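For illustration only, the multiplex edge set $\mathcal{E}$ of $G$ can be materialized directly from $\textbf{X}$ as sketched below; the chosen data structure is an assumption rather than the implementation used in our experiments.
\begin{verbatim}
def build_multibehavior_graph(X):
    """Return the edge set E of G = {U, V, E}: one edge (i, j, k) per observed
    interaction; parallel edges between a user-item pair encode multiplex behaviors."""
    I, J, K = X.shape
    return [(i, j, k) for i in range(I) for j in range(J)
            for k in range(K) if X[i, j, k] == 1]
\end{verbatim}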
\subsubsection{\bf Behavior-aware Message Construction}
Based on the multi-behavior graph $G$, we first generate the propagated information for the user($u_i$)-item($v_j$) pair with the following behavior-aware message passing paradigm:
\begin{align}
\textbf{H}_{i\leftarrow}^{k,(l+1)} = \lambda(\{\textbf{E}_{j}^{(l)}:x_{i,j}^k=1\})\nonumber\\
\textbf{H}_{j\leftarrow}^{k,(l+1)} = \lambda(\{\textbf{E}_{i}^{(l)}:x_{i,j}^k=1\})
\end{align}
\noindent where $\lambda(\cdot)$ denotes the behavior-aware encoding function for preserving the semantics of each individual type of interactive behavior for $u_i$ and $v_j$. Here, $\textbf{H}_{i\leftarrow}^{k,(l+1)} \in \mathbb{R}^d$ and $\textbf{H}_{j\leftarrow}^{k,(l+1)} \in \mathbb{R}^d$ with dimensionality $d$ (output from the $\lambda(\cdot)$ function) are the $(l+1)$-th layer representations which preserve the characteristics of behavior type $k$. In the first-layer propagation, we generate the input feature vectors $\textbf{E}_{i}^{(0)}$ and $\textbf{E}_{j}^{(0)}$ of user $u_i$ and item $v_j$ with Autoencoder-based pre-training~\cite{sedhain2015autorec} over the multi-behavior interaction tensor $\textbf{X}$, to project the different types of high-dimensional behavior embeddings into a low-dimensional latent space. The reason for using the autoencoder is to generate more informative initial embeddings for users and items through the auto-encoding training paradigm.
In the multi-behavior recommendation, various types of users' behaviors reflect the preference from behavior-specific characteristics. For example, view-interactive behavior happens more frequently than add-to-cart and add-to-favorite activities~\cite{guo2019buying}. Additionally, add-to-favorite interactive behavior could provide rich information to characterize users' implicit interests over items, although the purchase behavior may be postponed and does not happen right away~\cite{cen2019representation}. Hence, we design our semantic encoding function $\lambda(\cdot)$, to capture the type-specific behavior contextual information in the message construction process.\\\vspace{-0.1in}
\noindent \textbf{Multi-Channel Behavior Embedding Layer}. Inspired by the recent advancement of memory-augmented neural network models in multi-dimensional context learning~\cite{wang2018neural}, we build our behavior semantic encoder upon a multi-channel neural framework to learn customized representations for each individual type of user-item interactions. Our multi-channel embedding layer utilizes external representation units, in which each unit corresponds to a certain dimension of behavior semantics (\eg, behavior data distributions with respect to different categories of items). Specifically, we integrate the channel-based projection layer with the behavior-aware attention mechanism to fuse the learned semantic information across different channels. For each behavior type $k$, this module is first equipped with a contextual transformation layer to update the input user/item embeddings $\textbf{E}_i^{(l)}$ and $\textbf{E}_j^{(l)}$ in the $l$-th graph layer. Without loss of generality, we formally present the behavior semantic encoder with $M$ (indexed by $m$) channels for the message of user $u_i$ from his/her connected item nodes $\{v_j|x_{i,j}^k=1\}$ under the behavior type $k$ as below:
\begin{align}
\textbf{H}_{i\leftarrow}^{k,(l+1)} &=\sum_{m=1}^M\omega_m^k\textbf{U}_m \sum_{x_{i,j}^k=1}{\textbf{E}_j^{(l)}}\nonumber\\
\omega_m^k &=\delta(\textbf{K}\cdot \sum_{x_{i,j}^k=1}{\textbf{E}_j^{(l)}} + \textbf{b})(m)
\end{align}
\noindent where $\textbf{U}_m\in\mathbb{R}^{d\times d}$ is the $m$-th channel transformation, and $\omega_m^k$ is the $m$-th channel weight calculated from the neighboring nodes $\{v_j|x_{i,j}^k=1\}$ of $u_i$ under behavior type $k$. $\sum_{m=1}^M\omega_m^k\textbf{U}_m$ represents the learned behavior-type-specific contextual transformation. Additionally, $\textbf{K}\in\mathbb{R}^{M\times d}$ and $\textbf{b}\in\mathbb{R}^M$ are the transformation and bias parameters used to calculate the weights, and $\delta(\cdot)$ denotes the ReLU activation. A similar semantic encoder can be applied to learn the embedding $\textbf{H}_{j\leftarrow}^{k,(l+1)}$ between item $v_j$ and its connected users $\{u_i|x_{i,j}^k=1\}$ with the $k$-th behavior type. By performing dimension-wise relation learning for each type of behavior pattern, the underlying semantic information can be smoothly captured with the multi-channel behavior embedding in our framework.
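A minimal NumPy sketch of the multi-channel semantic encoder $\lambda(\cdot)$ for a single user and a single behavior type is given below; the parameter names follow the notation above, while the function signature and the dense matrices are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import numpy as np

def behavior_message(E_neighbors, U, K_mat, b):
    """Multi-channel behavior embedding (sketch of the equations above).
    E_neighbors: (n, d) embeddings of items {v_j : x_{i,j}^k = 1};
    U: (M, d, d) channel transformations; K_mat: (M, d); b: (M,)."""
    agg = E_neighbors.sum(axis=0)             # aggregate neighbor embeddings
    omega = np.maximum(K_mat @ agg + b, 0.0)  # ReLU channel weights omega_m^k
    # behavior-type-specific contextual transformation applied to the aggregate
    return sum(w * (U_m @ agg) for w, U_m in zip(omega, U))
\end{verbatim}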
\subsubsection{\bf Behavior Inter-dependency Modeling}
In addition to encoding the type-specific behavior semantics, another key aspect of multi-behavior relation learning lies in exploring the inter-dependencies across different types of user-item interactive behavior. For instance, add-to-cart and tag-as-favorite activities are good indicators for purchase of users. Hence, after the message propagating process with the learned representations $\textbf{H}_{i\leftarrow}^{k,(l+1)}$ and $\textbf{H}_{j\leftarrow}^{k,(l+1)}$, it is crucial to consider the inter-dependencies among different types of behaviors (\textit{i}.\textit{e}., $k\in [1,...,K]$), and further refine the type-specific behavior embeddings propagated from other behavior types.
Motivated by the recent success of transformer networks in distilling inter-correlations between entities~\cite{yun2019graph}, we develop a self-attentive network for multi-behavior inter-dependency modeling. Based on the paradigm of the self-attention layer with scaled dot-product attention, we create three transformation matrices to project the input representation into three latent dimensions, \textit{i}.\textit{e}., $\textbf{Q}^c\in\mathbb{R}^{\frac{d}{C}\times d}$, $\textbf{V}^c\in\mathbb{R}^{\frac{d}{C}\times d}$ and $\textbf{K}^c\in\mathbb{R}^{\frac{d}{C}\times d}$ as the query, value and key transformations for each of the $C$ (indexed by $c$) attention heads. Then, the dependency scores between the $k$-th and the $k'$-th behavior messages are calculated in the dot-product manner as follows:
\begin{align}
\alpha_{k,k'}^c &=\frac{(\textbf{Q}^c\textbf{H}^{k,(l+1)}_{i\leftarrow})^\top \cdot (\textbf{K}^c\textbf{H}^{k',(l+1)}_{i\leftarrow})}{\sqrt{\frac{d}{C}}} \nonumber\\
\hat{\alpha}_{k,k'}^c&=\frac{\exp(\alpha_{k,k'}^c)}{\sum_{k''=1}^{K}\exp(\alpha_{k,k''}^c)}
\end{align}
\noindent where $\hat{\alpha}_{k,k'}^c$ is the learned quantitative correlation weight between behavior types $k$ and $k'$, which is calculated from the intermediate score $\alpha_{k,k'}^c$ via the softmax function. To expand the model's ability in capturing cross-behavior dependencies from different hidden dimensions (\eg, co-occurrence frequencies and regularities), we augment the self-attention layer with the multi-head mechanism to enable attentive relation learning under multiple representation subspaces. In such cases, we associate each head-specific attention component with a set of Query ($\textbf{Q}^c$), Key ($\textbf{K}^c$) and Value ($\textbf{V}^c$) weight matrices. Based on the head-specific correlation scores, we then recalibrate the type-specific behavior message by concatenating the multiple attention heads:
\begin{align}
\tilde{\textbf{H}}_{i\leftarrow}^{k,(l+1)} &= \text{MH-Att}(\textbf{H}_{i\leftarrow}^{k,(l+1)}) \nonumber\\
&=\mathop{\Bigm|\Bigm|}\limits_{c=1}^C \sum_{k'=1}^K \hat{\alpha}_{k,k'}^c\textbf{V}^c\cdot \textbf{H}_{i\leftarrow}^{k',(l+1)}
\end{align}
where $\mathop{\Bigm|\Bigm|}$ denotes vector concatenation, and $\tilde{\textbf{H}}_{i\leftarrow}^{k,(l+1)}$ is the recalibrated message propagated to the target node $u_i$ under behavior type $k$. To preserve the original type-specific behavior message and prevent the gradient vanishing issue, we perform element-wise addition between the original and recalibrated type-specific messages with a residual connection scheme. That is, $\hat{\textbf{H}}_{i\leftarrow}^{k,(l+1)}=\tilde{\textbf{H}}_{i\leftarrow}^{k,(l+1)}+\textbf{H}_{i\leftarrow}^{k,(l+1)}$ is the refined type-specific behavior representation which preserves the cross-type behavior inter-dependent information.
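The self-attentive recalibration with the residual connection can be sketched as follows for one user; the explicit loop over heads and the per-head weight tensors are written for readability and are assumptions about one possible realization.
\begin{verbatim}
import numpy as np

def behavior_inter_dependency(H, Wq, Wk, Wv):
    """Recalibrate type-specific messages via multi-head self-attention.
    H: (K, d) messages for one user; Wq, Wk, Wv: (C, d/C, d) head-specific
    query/key/value transforms. Returns hat{H} = MH-Att(H) + H."""
    C, dh, _ = Wq.shape
    heads = []
    for c in range(C):
        Q, Kc, V = Wq[c] @ H.T, Wk[c] @ H.T, Wv[c] @ H.T     # (d/C, K) each
        scores = (Q.T @ Kc) / np.sqrt(dh)                    # (K, K) behavior scores
        scores = np.exp(scores - scores.max(axis=1, keepdims=True))
        alpha = scores / scores.sum(axis=1, keepdims=True)   # row-wise softmax
        heads.append(alpha @ V.T)                            # (K, d/C) per head
    return np.concatenate(heads, axis=1) + H                 # concat heads + residual
\end{verbatim}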
\subsubsection{\bf Personalized Multi-Behavior Aggregation}
With the incorporation of both the type-specific behavior semantics and type-wise inter-correlations into our multi-behavior dependency modeling module, we introduce an aggregation layer to fuse interactive patterns across different types of behaviors. Formally, we define the message aggregation function as:
\begin{align}
\textbf{E}_i^{(l+1)}=\psi(\{\hat{\textbf{H}}_{i\leftarrow}^{k,(l+1)}:k=[1,2,...,K]\})
\end{align}
In real-life online platforms (\eg, online retailing or review sites), item interactive patterns may vary by users due to their different behavior preferences. For example, some people prefer to add their interested items into the favorite list but sporadically make purchases, while the add-to-favorite action is more likely to be followed by the purchase behavior. Therefore, to aggregate message from different types of behavior embeddings and obtain expressive representations on the local user-item multi-behavior interaction graph, it is essential to identify the contribution of different types of behavior in assisting the final prediction on the target type of user behavior in a customized manner.
Towards this end, with the recalibrated type-specific behavior message $\hat{\textbf{H}}_{i\leftarrow}^{k,(l+1)}$, the multi-behavior graph encoder is further equipped with an attention network to differentiate the influences of different behavior types for user $u_i$. Specifically, the message aggregation module adopts a two-layer feed-forward network for weight estimation:
\begin{align}
\beta_{k}&=\textbf{w}_2^\top\cdot\delta(\textbf{W}_1 \hat{\textbf{H}}_{i\leftarrow}^{k,(l+1)} + \textbf{b}_1)+b_2\nonumber\\ \hat{\beta}_k&=\frac{\exp{\beta_k}}{\sum_{k'=1}^{K}\exp{\beta_{k'}}}
\end{align}
where $\textbf{W}_1\in\mathbb{R}^{d'\times d}$, $\textbf{b}_1\in\mathbb{R}^{d'}$, $\textbf{w}_2\in\mathbb{R}^{d'}$, and $b_2\in\mathbb{R}$ are the transformation and bias parameters for weight calculation, and $d'$ is the dimensionality of the intermediate hidden layer. $\delta$ is the ReLU activation function. $\beta_k$ is the intermediate attention value and $\hat{\beta}_k$ is the final influence weight of interaction type $k$ calculated using the softmax function. With the calculated weights, the message aggregation layer then performs a weighted summation over the type-specific behavior embeddings $\hat{\textbf{H}}_{i\leftarrow}^{k,(l+1)}$, so as to acquire the encoded node embedding $\textbf{E}^{(l+1)}_i=\sum_{k=1}^K\hat{\beta}_k\hat{\textbf{H}}_{i\leftarrow}^{k,(l+1)}$ corresponding to the $l$-th layer graph encoder. With the developed multi-behavior modeling component, we endow MBRec\ with the capability of modeling type-specific behavioral semantics and cross-type dependencies.
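A small sketch of this attention-based fusion, assuming the two-layer network above, is given below for completeness; the function signature is illustrative.
\begin{verbatim}
import numpy as np

def aggregate_behaviors(H_hat, W1, b1, w2, b2):
    """Fuse refined behavior messages hat{H}^k into one node embedding.
    H_hat: (K, d); W1: (d', d); b1: (d',); w2: (d',); b2: scalar."""
    hidden = np.maximum(H_hat @ W1.T + b1, 0.0)   # (K, d') ReLU hidden layer
    beta = hidden @ w2 + b2                        # (K,) unnormalized importance
    beta = np.exp(beta - beta.max())
    beta_hat = beta / beta.sum()                   # softmax over behavior types
    return beta_hat @ H_hat                        # weighted sum -> (d,)
\end{verbatim}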
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{figures/framework1.pdf}
\vspace{-0.05in}
\caption{High-order propagation with layer-wise mutual relation learning for multi-behavior representations.}
\label{fig:framework1}
\vspace{-0.2in}
\end{figure}
\subsection{High-order Mutual Relation Learning}
\subsubsection{\bf High-order Multi-Behavior Pattern Propagation}
With the multi-behavior pattern representation obtained from first-order dependency learning, we can model the high-order multi-behavior relational structures over the global graph $G=\{U,V,\mathcal{E}\}$ by stacking more embedding propagation layers (as shown in Figure~\ref{fig:framework1}). Based on our developed multi-behavior graph encoder, in the high-order information propagation process, MBRec\ is capable of capturing the high-order collaborative signals across different types of user-item interactive relations. After applying the multi-behavior graph encoder $l$ times, each node (\textit{i}.\textit{e}., $u_i$ or $v_j$) receives the messages propagated from its neighbors within $l$ hops, which is presented as follows:
\begin{align}
\textbf{E}^{(l+1)}_{u}&=\psi(\text{MH-Att}(\lambda(\{\textbf{E}^{(l)}_j:x_{i,j}^k=1\})))\nonumber\\
&=\sum_{k=1}^K\hat{\beta}_k\cdot\text{MH-Att}(\textbf{X}^k\textbf{E}^{(l)}_{v}\sum_{m=1}^M\omega_m^k\textbf{U}_m^\top)
\end{align}
where $\textbf{X}^k\in\mathbb{R}^{I\times J}$ is the adjacency matrix under the $k$-th behavior type, and $\textbf{E}^{(l+1)}_u\in\mathbb{R}^{I\times d}$ and $\textbf{E}^{(l)}_v\in\mathbb{R}^{J\times d}$ refer to the embedding matrices of users and items, respectively.
While high-order connectivity is exploited in the multi-behavior graph $G$, a message propagation process with a rigid order hinders the representation power in learning the global graph-structured behavior dependencies. Specifically, the current graph-based message passing architecture maps the input multi-behavior user-item interactions into a fixed-length embedding at the highest $L$-th layer, which results in the information loss of previous layer-specific aggregated patterns. For example, a user can interact with different items through direct (one-hop) or indirect (multi-hop) connections. In such cases, it is crucial to identify the most relevant signals across both low- and high-order connections for characterizing the preference of this target user. To overcome this limitation and free the message passing graph neural architecture from a fixed-length internal representation, we introduce a cross-layer mutual relation learning module, to endow our graph neural architecture MBRec\ with the capability to focus on certain parts of graph-structured behavioral patterns across different-order connectivity.\\\vspace{-0.12in}
\subsubsection{\bf Cross-Layer Mutual Relation Modeling} To remedy the above shortcomings, we propose to model the importance between users' and items' multi-order embeddings with a multi-head self-attention network. For user $u_i$'s multi-order embeddings $\textbf{E}_i^{(0)},\textbf{E}_i^{(1)},...,\textbf{E}_i^{(L)}$ ($\textbf{E}_i^{(l)} \in\mathbb{R}^{d}$) and item $v_j$'s embeddings $\textbf{E}_j^{(0)},\textbf{E}_j^{(1)},...,\textbf{E}_j^{(L)}$ ($\textbf{E}_j^{(l)} \in\mathbb{R}^{d}$) learned by the stacked multi-behavioral graph encoders, we first normalize the multi-order embeddings for cross-layer embedding fusion with the following operation:
\begin{align}
\hat{\textbf{E}}_i^{(l)}=\frac{\textbf{E}_i^{(l)}}{\sqrt{\|\textbf{E}_i^{(l)}\|_2^2}};~~\hat{\textbf{E}}_j^{(l)}=\frac{\textbf{E}_j^{(l)}}{\sqrt{\|\textbf{E}_j^{(l)}\|_2^2}}
\end{align}
Based on the block of multi-head self-attention mechanism, we calculate $C$ (corresponds to $C$ heads) importance matrices $\phi^c\in\mathbb{R}^{(L+1)\times(L+1)}$ for adaptive multi-order combination. With the design of 2-D matrix $\phi^c$, we can capture the pairwise mutual relation across different layer-specific multi-behavior patterns. We present this process as follows:
\begin{align}
\phi_{l,l'}^c=\delta((\textbf{P}^c\hat{\textbf{E}}_i^{(l)})^\top\cdot(\textbf{P}^c\hat{\textbf{E}}_j^{(l')}))
\end{align}
where $\phi^c_{l,l'}$ is the $c$-th importance score for the combination between the $l$-th layer user embedding and $l'$-th layer item embeddings. $\textbf{P}^c\in\mathbb{R}^{\frac{d}{C}\times d}$ is the transformation to acquire key vectors, and $\delta(\cdot)$ is the ReLU activation function to learn non-linearities of feature interactions.
Based on the learned importance scores, we then generate the fused representation for prediction as follows:
\begin{align}
\mathbf{\Gamma}_{i,j}=\mathop{\Bigm|\Bigm|}\limits_{c=1}^{C}\sum_{l=0}^L
\sum_{l'=0}^L\phi_{l,l'}^c(\textbf{T}^c\hat{\textbf{E}}_i^{(l)})\circ(\textbf{T}^c\hat{\textbf{E}}_j^{(l')})
\end{align}
where $\circ$ denotes the element-wise product, and $\textbf{T}^c\in\mathbb{R}^{\frac{d}{C}\times d}$ is the value transformation. The $C$ head-specific representations are concatenated to generate the fused representation $\mathbf{\Gamma}_{i,j}\in\mathbb{R}^d$, which is fed into a feed-forward network to forecast the unknown user-item interaction with the target behavior type $k$:
\begin{align}
\text{Pr}_{i,j}=\textbf{w}_4^\top(\delta(\textbf{W}_3\mathbf{\Gamma}_{i,j}+\textbf{b}_3)+\mathbf{\Gamma}_{i,j})
\end{align}
where $\textbf{W}_3\in\mathbb{R}^{d\times d}$, $\textbf{w}_4\in\mathbb{R}^d$ and $\textbf{b}_3\in\mathbb{R}^d$ are network parameters, and $\delta(\cdot)$ is the ReLU activation. Note that a residual connection is employed for better gradient propagation.
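The cross-layer fusion and scoring can be summarized with the following sketch, where the \texttt{einsum} contraction realizes the double summation over layer pairs; all parameter shapes are assumptions consistent with the notation above, not the reference implementation.
\begin{verbatim}
import numpy as np

def cross_layer_score(E_i, E_j, P, T, W3, b3, w4):
    """Cross-layer mutual relation fusion and interaction scoring.
    E_i, E_j: (L+1, d) layer-wise user/item embeddings;
    P, T: (C, d/C, d) key/value transforms per attention head."""
    E_i = E_i / np.linalg.norm(E_i, axis=1, keepdims=True)   # layer-wise normalization
    E_j = E_j / np.linalg.norm(E_j, axis=1, keepdims=True)
    parts = []
    for c in range(P.shape[0]):
        Ki, Kj = E_i @ P[c].T, E_j @ P[c].T                  # (L+1, d/C)
        phi = np.maximum(Ki @ Kj.T, 0.0)                     # (L+1, L+1) layer-pair scores
        Vi, Vj = E_i @ T[c].T, E_j @ T[c].T
        # sum_{l,l'} phi_{l,l'} (T E_i^(l)) o (T E_j^(l'))
        parts.append(np.einsum('ab,ac,bc->c', phi, Vi, Vj))
    gamma = np.concatenate(parts)                            # fused (d,) representation
    return w4 @ (np.maximum(W3 @ gamma + b3, 0.0) + gamma)   # residual scoring network
\end{verbatim}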
\subsection{Graph Sampling Algorithm of MBRec}
\begin{algorithm}[t]
\small
\caption{Graph Sampling of MBRec\ Model}
\label{alg:sample}
\LinesNumbered
\KwIn{seed users $\mathbb{U}$, seed items $\mathbb{V}$, adjacent tensor $\textbf{X}\in\mathbb{R}^{I\times J\times K}$, sampling depth $D$, sampling number per step $N$}
\KwOut{sampled users $\hat{\mathbb{U}}$, sampled items $\hat{\mathbb{V}}$, adjacent matrix of the sampled sub-graph $\hat{\textbf{X}}$}
Initialize the normalized adjacent matrix $\bar{\textbf{X}}\in\mathbb{R}^{I\times J}$ with $\bar{\textbf{X}}_{i,j}=\frac{\|\textbf{X}_{i,j}\|_1}{\sqrt{\|\textbf{X}_i\|_1\|\textbf{X}_j\|_1}}$ for non-zero elements\\
Initialize sampling weights $P_u\in\mathbb{R}^I$ and $P_v\in\mathbb{R}^J$ with zeros\\
Initialize sampled user/item set $\hat{\mathbb{U}}=\mathbb{U}$, $\hat{\mathbb{V}}=\mathbb{V}$\\
$P_u+=\bar{\textbf{X}}_i$ for $u_i$ in $\hat{\mathbb{{U}}}$;~~ $P_v+=\bar{\textbf{X}}_j$ for $v_j$ in $\hat{\mathbb{{V}}}$\\
\For{$d=1$ to $D$}{
$\bar{P_u}=\frac{P_u^2}{\|P_u\|_2^2}$;~~ $\bar{P_v}=\frac{P_v^2}{\|P_v\|_2^2}$\\
Sample $N$ unsampled users and $N$ unsampled items $\bar{\mathbb{U}}$, $\bar{\mathbb{V}}$ according to $\bar{P_u}$ and $\bar{P_v}$\\
$\hat{\mathbb{U}}=\hat{\mathbb{U}}\cup\bar{\mathbb{U}}$;~~ $\hat{\mathbb{V}}=\hat{\mathbb{V}}\cup\bar{\mathbb{V}}$\\
$P_u+=\bar{\textbf{X}}_i$ for $u_i$ in $\bar{\mathbb{U}}$;~~ $P_v+=\bar{\textbf{X}}_j$ for $v_j$ in $\bar{\mathbb{{V}}}$\\
}
Construct $\hat{\textbf{X}}\in\mathbb{R}^{|\hat{\mathbb{U}}|\times |\hat{\mathbb{V}}| \times K}$ by using $\textbf{X}_{i,j,k}$ for $u_i$ in $\hat{\mathbb{U}}$ and $v_j$ in $\hat{\mathbb{V}}$\\
return $\hat{\mathbb{U}}$, $\hat{\mathbb{V}}$, $\hat{\textbf{X}}$
\end{algorithm}
In this subsection, we present the sampling algorithm of MBRec\ for handling large-scale user-item graphs with multi-behavioral interaction information. One key limitation of graph neural architectures in full-batch mode is the requirement of performing aggregation over all nodes per graph layer. This operation involves high computational cost, which makes it hardly scalable to large-scale multi-behavior graphs. To mitigate this problem and endow MBRec\ with the ability to serve as a large-scale deep recommendation engine in real-world scenarios, we adopt graph sampling for the MBRec\ model.
The graph sampling algorithm is elaborated in Algorithm~\ref{alg:sample}. The key idea of our algorithm is to perform random walk-based computation graph sampling over the multi-behavior graph $G=\{U,V,\mathcal{E}\}$. Current state-of-the-art sampling strategies of graph neural models~\cite{ying2018graph,hu2020heterogeneous} largely rely on the mini-batch training paradigm over sparse local sub-graphs, which may limit the model efficiency in our multi-behavior recommendation settings. The reasons are mainly two-fold: (i) While a relatively small sampled sub-graph could speed up the model training for each step, the user preference estimation is likely to be inaccurate if many behavior-aware user-item edges are ignored during the sampling. (ii) The frequent CPU calculation and memory access also involve much computational and space cost.
To tackle the above challenges, we optimize our sampling algorithm to generate densely-connected computation graphs that fit our multi-behavior recommendation scenario. In the training process of our MBRec\ framework, we maintain the sampling weight vectors $P_u$ and $P_v$ to contain the cumulative neighborhood information of the set of already sampled nodes. To prevent the dominance of frequent users/items with a large number of multi-behavior interactions, we design a normalized adjacency matrix $\bar{\textbf{X}}$ to update the sampling weights. Empirically, by sampling a sufficiently large number of node instances (\textit{i}.\textit{e}., $N$) for each step, the obtained sub-graphs are dense enough to contain the important multi-behavior information between users and items. Normally, each time we construct a sub-graph which contains a moderate number of nodes (\eg, tens of thousands), and the model is trained on this sampled sub-graph for one epoch (\eg, hundreds or thousands of training steps).
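A simplified Python rendering of the sampling procedure is provided below. It follows the spirit of Algorithm~\ref{alg:sample}, but the exact way the cumulative weights are accumulated (sampled items contributing to user weights and sampled users contributing to item weights through $\bar{\textbf{X}}$) is our reading of the pseudocode and should be treated as an assumption.
\begin{verbatim}
import numpy as np

def sample_subgraph(X_norm, seed_users, seed_items, depth, n_per_step, seed=0):
    """Sample a dense sub-graph from the normalized adjacency X_norm (I x J)."""
    rng = np.random.default_rng(seed)
    users, items = set(seed_users), set(seed_items)
    for _ in range(depth):
        # cumulative neighborhood weights of the already-sampled nodes
        p_u = X_norm[:, sorted(items)].sum(axis=1)
        p_v = X_norm[sorted(users), :].sum(axis=0)
        for pool, weights in ((users, p_u), (items, p_v)):
            w = weights ** 2                    # corresponds to P^2 / ||P||_2^2
            w[sorted(pool)] = 0.0               # do not resample existing nodes
            if w.sum() == 0.0:
                continue
            n_new = min(n_per_step, int((w > 0).sum()))
            pool.update(rng.choice(len(w), size=n_new, replace=False,
                                   p=w / w.sum()).tolist())
    return sorted(users), sorted(items)         # induces the sampled adjacency
\end{verbatim}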
\subsection{Model Optimization of MBRec}
\begin{algorithm}[t]
\small
\caption{Model Optimization of MBRec}
\label{alg:train}
\LinesNumbered
\KwIn{multi-behavior interaction tensor $\textbf{X}\in\mathbb{R}^{I\times J\times K}$, initial node embeddings $\bar{\textbf{E}}^{(0)}$, the number of graph layers $L$, the number of samples for training $S$, weight $\lambda$ for regularization, the number of epochs $E$}
\KwOut{trained model parameters $\mathbf{\Theta}$}
Initialize model parameters $\mathbf{\Theta}$\\
\For{$e=1$ to $E$}{
Seed node sampling $\mathbb{U}$, $\mathbb{V}$\\
sub-graph generation $(\hat{\mathbb{U}}, \hat{\mathbb{V}}, \hat{\textbf{X}})$ using the seeds according to Algorithm~\ref{alg:sample}\\
Get $\textbf{E}^{(0)}$ from $\bar{\textbf{E}}^{(0)}$ for $u_i$ in $\mathbb{U}$ and $v_j$ in $\mathbb{V}$\\
\For{$l=1$ to $L$}{
\For{each $u_i$ in $\hat{\mathbb{U}}$, $v_j$ in $\hat{\mathbb{V}}$ and $k=1$ to $K$}{
Type-specific behavior message generation $\textbf{H}^k$\\
Message refinement $\hat{\textbf{H}}^k$\\
Embedding aggregation $\textbf{E}^{(l)}$\\
}
}
$\mathcal{L}=\lambda\|\mathbf{\Theta}\|_\text{F}^2$\\
\For{each $u_i$ in $\hat{\mathbb{U}}$}{
Positive and negative items are sampled from $\hat{\mathbb{V}}$\\
\For{each $v_{p_s}$ and ${v_{n_s}}$} {
Interaction probability inference $\text{Pr}_{i,j}$.\\
$\mathcal{L}+=\max(1-\text{Pr}_{i,p_s}+\text{Pr}_{i,n_s}, 0)$
}
}
Model training with the optimized objective $\mathcal{L}$\\
}
return $\mathbf{\Theta}$
\end{algorithm}
To perform the model inference, we optimize our MBRec\ framework with the pair-wise loss, which has been widely adopted in the Top-N recommendation scenarios~\cite{nikolakopoulos2019recwalk}.
During the training phase, $S$ positive instances and $S$ negative instances will be sampled from the observed interacted item set and non-interacted item set, respectively. Our model parameters are inferred by minimizing the defined loss function:
\begin{align}
\mathcal{L}=\sum_{i=1}^I\sum_{s=1}^S\max(1-\text{Pr}_{i,p_s}+\text{Pr}_{i,n_s}, 0)+\lambda\|\mathbf{\Theta}\|_{\text{F}}^2
\end{align}
where the first term is the pair-wise loss, and the second term is the regularization term with hyper-parameter $\lambda$ as weight. The set of model parameters is denoted by $\mathbf{\Theta}$. We summarize the procedure of model training with the sub-graph sampling algorithm in Algorithm~\ref{alg:train}.
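For reference, the objective above corresponds to the following small sketch; passing the parameter arrays as a Python list for the regularization term is an illustrative assumption.
\begin{verbatim}
import numpy as np

def pairwise_hinge_loss(pr_pos, pr_neg, params, reg_weight):
    """Pair-wise hinge loss with Frobenius-norm regularization.
    pr_pos, pr_neg: (S,) scores of sampled positive/negative items."""
    hinge = np.maximum(1.0 - pr_pos + pr_neg, 0.0).sum()
    reg = reg_weight * sum(np.sum(p ** 2) for p in params)
    return hinge + reg
\end{verbatim}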
\subsection{Model Complexity Analysis}
\label{sec:complexity}
\noindent \textbf{Time Complexity.} The running time of our model can be divided into two parts: the time for sub-graph sampling, and the cost for model training and inference. As described in Algorithm~\ref{alg:sample}, the major cost of the former process is $O(D\times N\times (I + J))$ for updating the sampling probability $P_u$ and $P_v$, where $D$ is the number of sampling steps and $N$ is the number of sampled nodes per step. In the model running phase, MBRec\ takes $O(L\times |\textbf{X}|\times d)$ ($|\textbf{X}|$ denotes the number of non-zero elements in $\textbf{X}$) to encode the type-specific message, in which $O(L\times (I+J)\times K\times d^2)$ is needed by the attention module. The complexity of the type-wise inter-dependency modeling and the aggregation layer is analogously $O(L\times (I+J)\times K\times d^2)$, in which the primary contributor is the matrix-multiplications. The complexity of the cross-order mutual relation learning is $O(|\textbf{X}|\times L^2\times d^2)$ which comes from the order-wise representation fusion, and this term dominates the complexity of the model running process. Empirically, by sharing the sampled sub-graph among training/testing steps, the sub-graph sampling costs much less time compared to the entire computational cost.
\noindent \textbf{Space Complexity.} Due to sampling larger sub-graphs for computing efficiency and data integrity, MBRec\ takes more memory than some GNN models in sub-graph sampling. However, the memory cost is fully acceptable for common devices. For the model memory cost, the space complexity of MBRec\ is $O(L\times (I+J)\times K\times d)$, which is mainly for the intermediate hidden states, the same as common graph neural networks (\eg, GCN and GraphSAGE) for modeling multi-behavior data.
|
{
"arxiv_id": "2302.08586",
"language": "en",
"timestamp": "2023-02-22T02:19:46",
"url": "https://arxiv.org/abs/2302.08586",
"yymm": "2302"
} | \section{\MakeUppercase{#1}}}
\makeatletter
\def\bibsection{%
\par
\begingroup
\baselineskip26\p@
\bib@device{\hsize}{72\p@}%
\endgroup
\nobreak\@nobreaktrue
\addvspace{19\p@}%
}%
\makeatother
\begin{document}
\preprint{ }
\title{Automated detection of symmetry-protected subspaces in quantum simulations}
\author{Caleb Rotello}
\email{caleb.rotello@nrel.gov}
\affiliation{Department of Physics and Quantum Engineering Program, Colorado School of Mines, Golden CO, 80401}
\affiliation{National Renewable Energy Laboratory, Golden CO, 80401}
\author{Eric B. Jones}
\email{eric.jones@coldquanta.com}
\email{eric.jones@infleqtion.com}
\affiliation{Infleqtion, Louisville CO, 80027}
\affiliation{National Renewable Energy Laboratory, Golden CO, 80401}
\author{Peter Graf}
\email{Peter.Graf@nrel.gov}
\affiliation{National Renewable Energy Laboratory, Golden CO, 80401}
\author{Eliot Kapit}
\email{ekapit@mines.edu}
\affiliation{Department of Physics and Quantum Engineering Program, Colorado School of Mines, Golden CO, 80401}
\date{\today}
\begin{abstract}
The analysis of symmetry in quantum systems is of utmost theoretical importance, useful in a variety of applications and experimental settings, and is difficult to accomplish in general. Symmetries imply conservation laws, which partition Hilbert space into invariant subspaces of the time-evolution operator, each of which is demarcated according to its conserved quantity. We show that, starting from a chosen basis, any invariant, symmetry-protected subspaces which are diagonal in that basis are discoverable using transitive closure on graphs representing state-to-state transitions under $k$-local unitary operations. Importantly, the discovery of these subspaces relies neither upon the explicit identification of a symmetry operator or its eigenvalues nor upon the construction of matrices of the full Hilbert space dimension. We introduce two classical algorithms, which efficiently compute and elucidate features of these subspaces. The first algorithm explores the entire symmetry-protected subspace of an initial state in time complexity linear in the size of the subspace by closing local basis state-to-basis state transitions. The second algorithm determines, with bounded error, if a given measurement outcome of a dynamically-generated state is within the symmetry-protected subspace of the state in which the dynamical system is initialized. We demonstrate the applicability of these algorithms by performing post-selection on data generated from emulated noisy quantum simulations of three different dynamical systems: the Heisenberg-XXX model and the $T_6$ and $F_4$ quantum cellular automata. Due to their efficient computability and indifference to identifying the underlying symmetry, these algorithms lend themselves to the post-selection of quantum computer data, optimized classical simulation of quantum systems, and the discovery of previously hidden symmetries in quantum mechanical systems.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:introduction}
The analysis of symmetry is a central tool in physics and has enabled some of the most profound discoveries in the field. Noether's Theorem famously connects the symmetries of a system's action with conservation laws to which that system's equations of motion are subject~\cite{Noether1918}. Generally, the analysis of symmetry, or the breaking thereof, allows one to constrain theories~\cite{gaillard1999standard}, solve equations of motion more efficiently~\cite{kozlov1983integrability}, and identify phases of matter~\cite{Landau:1937obd}. Applications of symmetry analysis in quantum information include, but are not limited to: quantum error correction~\cite{Fowler_2012}, error mitigation on quantum hardware~\cite{PhysRevA.98.062339, PhysRevA.100.010302, PhysRevA.95.042308, Cai_2021}, and quantum machine learning model design~\cite{PRXQuantum.3.030341}.
Quantum computing can efficiently simulate quantum dynamics in regimes where classical simulation becomes impossible~\cite{Daley2022-vl}. However, current quantum processors operate in a regime severely constrained by noise, with error rates not yet sufficiently below most error correction thresholds~\cite{acharya2022suppressing}. Error mitigation will therefore be
critical in the interim before fault-tolerant architectures can be scaled~\cite{cai2022quantum, quek2022exponentially}. In recent work, and despite its limitations~\cite{Takagi_2022}, the technique of
\textit{post-selection} has proven useful to mitigate errors and extract useful results from quantum simulation experiments (see e.g.,~\cite{arute2020observation}).
Post-selection works by identifying measured states that could only have come from error processes and excluding them from the statistics used to calculate
output quantities. The most obvious example is a conserved quantity such as particle number. In such a case, any measured state that does not preserve the conserved quantity must be
the result of errors. Due to the connection established by Noether, a more fundamental way to describe post-selection is with respect to symmetry.
According to this description,
post-selection works by checking the eigenvalue of the simulation's ``fiducial" (i.e., initial) state under the symmetry operator
against the corresponding symmetry operator eigenvalues (e.g., value of a conserved quantity) of individual measurement results in the dynamically-generated output state of the simulation. If a particular measurement outcome registers a different eigenvalue under the symmetry operator than the fiducial state does, then the measurement is a result of error and can be discarded. This procedure is restricted in the scope of its application, as symmetries of a quantum system and their corresponding operators
(i.e., conserved quantities) are typically either engineered into the dynamics ``by hand'' or identified by clever theoretical intuition. For a generic quantum system, the relevant symmetry operator(s) may not be obvious a priori and may be difficult to identify.
Being able to perform post-selection in a manner that does not require explicit identification of a symmetry operator would greatly increase the technique's applicability; this is the subject of this paper.
As a corollary, such an operator-free method for error detection also enables additional applications such as more efficient classical simulation of quantum systems via computational basis state reduction. For example, particle number conservation in hardcore boson models can be used both for post-selection in quantum simulation and for reducing the basis state set size from $2^n$ to $\binom{n}{N}$, where $n$ is the number of lattice sites and $N$ the number of particles, in classical simulations~\cite{hebert2001quantum}. Interestingly, the identification of symmetry or conserved quantities in some instances can make classical simulation so efficient that it can obviate the need for quantum computation altogether~\cite{anschuetz2022efficient}. Finally, in certain special cases, one may be able to infer the explicit form of a symmetry operator by inspection of the reduced basis set.
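To give a concrete sense of this reduction, for a lattice of $n=20$ sites with $N=10$ particles, the full basis contains $2^{20}=1{,}048{,}576$ states, whereas the fixed-particle-number sector contains only $\binom{20}{10}=184{,}756$.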
In this paper, we provide algorithms to efficiently make use of symmetry in an operator-free manner. To do this, our methods create the subspace of measurement basis states which would share a conserved quantity of some commuting symmetry operator that is diagonal in that measurement basis, without needing to explicitly create that operator. We call such a space a \textit{symmetry-protected subspace} (SPS). In the language of linear algebra, these are \textit{invariant subspaces} of the evolution operator; they are subspaces,
determined by the initial state, from which the evolution cannot escape. To reap the benefits of symmetry we only need to find the SPS of the initial state, not a conserved quantity, much less an explicit symmetry. However, naively, to find an SPS we need to actually evolve the system in the full Hilbert space, which is
exponentially large in the number of qubits (particles, spins, etc.). Sections~\ref{sec:graphtheory} through~\ref{sec:algorithm2} describe the formulation and algorithms by which we avoid this exponential scaling, but here we provide a non-technical overview.
First, note that most Hamiltonians and resulting unitary evolution operators are built from a number of \textit{local} operators. For example, the Heisenberg-XXX
model described below consists only of nearest-neighbor interactions. So, at some level, we have an intuition that the dynamics, thus the SPSs, should be
derivable, like the Hamiltonian itself, from a combination of local operations, and that local operations are inherently less computationally expensive to work with. This is indeed the case, as shown below.
Next, note that to say that a wavefunction is \textit{in} a symmetry-protected or invariant subspace is to say that it is and remains throughout dynamic evolution a linear combination of basis states in that subspace and that subspace alone. And if we care only about finding the subspace, we do not need to keep track of the actual linear combination (i.e., both the basis vectors and their amplitudes) but only the basis vectors. This ``binarization" of the evolution is critical, because it allows us to adopt a graph-theoretic framework that is vastly more efficient for finding and searching SPSs.
This also leads to an important restriction in our work; using our methods, we can only automate the discovery of symmetry which is diagonal in a chosen basis. We work in the computational $Z$ basis throughout this paper, though extensions to other bases are of course possible by rewriting the time evolution operators in the new basis and performing the same procedure we describe below. Our automated methods should thus be viewed as a tool to find symmetry-protected subspaces within a given basis (if, of course, they exist), and to potentially improve classical and noisy quantum simulations based on those discovered subspaces. However, they still require an intelligently guessed initial basis as a starting point. For the problems we consider in this work, the computational basis is sufficient to derive novel results, though more complex choices can be required in other cases.
With this caveat in mind, once the basis is chosen we create an undirected and unweighted graph, called the state interaction graph, which describes all possible state-to-state transitions over a single application of a unitary evolution operator. The transitive closure of this graph fragments the Hilbert space of the system, represented in a particular measurement basis, into a cluster graph, whose subgraphs are each a symmetry-protected subspace.
Our main results are two classical algorithms that efficiently construct and work within these subspaces. The first, Algorithm~\ref{alg:makesps}, uses ``transitive closure'' on local operations to explore and explicitly construct the full SPS of an initial state, which enables the partition of the Hilbert space into a set of disjoint SPSs. This algorithm scales linearly in both the number of local operators from which the global operator is constructed and the size of the SPS, which is a huge improvement over the exponential scaling of the naive ``full evolution'' approach. However, because the SPS itself can be exponentially large (albeit with an asymptotically smaller prefactor), the second algorithm, Algorithm~\ref{alg:greedypath}, finds a \textit{path} of local operations through a set of SPS graphs from an initial to final (i.e., measured) state to determine if they lie within the \textit{same} SPS (and thus the final state is valid). This algorithm scales as the number of local operations raised to a small integer power (that can be tuned for accuracy/performance) times the length of the path, thus completely eliminating any exponential scaling.
This paper is structured as follows. In Sec.~\ref{sec:preliminaries}, we define symmetry-protected subspaces and the three quantum systems that will serve as our ``exemplars" throughout. In Sec.~\ref{sec:graphtheory} we outline our novel graph theoretical approach to quantum simulations. Section~\ref{sec:algorithm1} discusses the algorithm for computing an entire symmetry-protected subspace. Section~\ref{sec:algorithm2} provides a more efficient algorithm to verify if two states exist within the same symmetry-protected subspace. Finally, we demonstrate the effectiveness of symmetry-protected subspaces to mitigate error in quantum simulations on a classical emulator in Sec.~\ref{sec:psdemo}.
\section{Preliminaries}\label{sec:preliminaries}
\subsection{Symmetry-protected subspaces}\label{subsec:symmetryprotectedsubspace}
Consider a quantum system undergoing unitary evolution according to the operator $U(t)$ for a time $t$. The operator $U(t)$ can represent continuous time evolution, but also includes other cases, such as discrete time evolution. We say that $U(t)$ is invariant under the action of an operator $S$ if $[S,U(t)]=0$ for all times $t$. In this instance, $S$ is a symmetry operator. For a basis in which $S$ is diagonal, states can be labeled by their eigenvalues under the action of $S$: $S|s,b\rangle = s|s,b\rangle$, where $b$ is some other (set of) label(s), which could represent, for example, a computational basis state integer encoding or a many-body eigenvalue. Suppose now that we initialize the dynamics in a state of definite $s$: $|\psi^s_0\rangle = \sum_{b} \alpha^s_{b} |s,b\rangle$, and evolve under $U(t)$. Given the commutativity of the symmetry and evolution operators, the action of the symmetry operator on the output state is: $S[U(t)|\psi^s_0\rangle]=U(t)S|\psi^s_0\rangle=U(t)s|\psi^s_0\rangle=s[U(t)|\psi^s_0\rangle]$, indicating that the eigenvalue $s$ is conserved under the evolution for all time. Our methods identify states that would share an eigenvalue under $S$ without
explicitly knowing $S$. To do this, we use the notion of a symmetry-protected subspace, also known as an invariant subspace, which we define below.
\begin{definition}[Symmetry-Protected Subspace]\label{def:sps}
Let $\mathcal{H}_d$ be a Hilbert space of dimension $d$ spanned by an orthonormal set of basis vectors, $B(\mathcal{H}_d)=\{|b\rangle\}$. A subspace $G \subseteq \mathcal{H}_d$, which is spanned by a subset of $B(\mathcal{H}_d)$,
is a symmetry-protected subspace of unitary operator $U(t)$ if and only if a projection onto $G$, $P_G = \sum_{b\in G} \ket{b}\bra{b}$, obeys the commutation relation
$[P_G, U(t)]=0$.
\end{definition}
Note that while our definition emphasizes the connection to symmetries, an SPS, as defined above, is indeed an invariant subspace according to the usual definition~\cite{radjavi2003invariant}, which is simply that $\forall \ket{g} \in G, \: U(t) \ket{g} \in G$, because if $\ket{g} \in G$,
\begin{equation}
U(t) \ket{g} = U(t) P_G \ket{g} = P_G [U(t) \ket{g}] \in G,
\label{eq:invsubspace}
\end{equation}
where we have used the commutativity of $P_G$ and $U(t)$ and the fact that by definition the result of applying $P_G$
to anything is in $G$. Another consequence of Def.~\ref{def:sps} is that if a particular basis state is not in $G$, then the transition matrix element to that state from any state in $G$ is strictly zero.
\begin{lemma}\label{lem:thm1_connection}
Let $\ket{g}\in G$ be an arbitrary element of a symmetry-protected (i.e., invariant) subspace $G$ of an evolution operator $U(t)$ and let $\ket{b} \in B(\mathcal{H}_d)$ be a basis vector outside of $G$, $\ket{b} \notin G$. Then, $\bra{b}U(t)\ket{g}=0$ for any time $t$.
\end{lemma}
\begin{proof}
Using Def.~\ref{def:sps}, $\bra{b}U(t)\ket{g}=\bra{b}U(t)P_G\ket{g}=\bra{b}P_GU(t)\ket{g}=[P_G\ket{b}]^{\dagger}U(t)\ket{g}=0$, since the projection operator onto $G$ annihilates states outside of $G$.
\end{proof}
In Sec.~\ref{sec:algorithm1} we present an algorithm that prescriptively constructs a subspace, denoted $G_{\ket{\psi_0}}$, of a particular initial state, $\ket{\psi_0}$. We also show in the corresponding theorem, Thm.~\ref{thm:spstheorem}, that subspaces so constructed share the property described in Lem.~\ref{lem:thm1_connection}, which relies on Def.~\ref{def:sps} for its proof. Hence, $G_{\ket{\psi_0}}$ constructed according to the procedure in Sec.~\ref{sec:algorithm1} are symmetry-protected subspaces.
As with the symmetry operator $S$, the commutation relation involving $P_G$ identifies a dynamical invariance, since for any $|g\rangle \in G$, $P_G[U(t)|g\rangle] = [U(t)|g\rangle]$. That is, the state $U(t)|g\rangle$ is an eigenvector of the projector with eigenvalue $1$. Therefore, while the projector $P_G$ does not identify conserved symmetry eigenvalues, it does indicate when such an eigenvalue exists. In Sec.~\ref{sec:graphtheory}, we will show how constructing $P_G$, rather than $S$ directly, empowers our algorithms to discover underlying invariances.
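As a small, self-contained numerical check of Def.~\ref{def:sps} (an assumed toy example, not taken from the text, and for a single fixed unitary rather than a full one-parameter family): a two-qubit SWAP together with its single-excitation subspace satisfies the required commutation relation.
\begin{verbatim}
import numpy as np

# Toy check of [P_G, U] = 0: for a two-qubit SWAP, the single-excitation
# states {|01>, |10>} span an invariant (symmetry-protected) subspace.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

G = [0b01, 0b10]                      # basis-state labels in the subspace
P_G = np.zeros((4, 4), dtype=complex)
for b in G:
    P_G[b, b] = 1.0                   # P_G = sum_{b in G} |b><b|

print(np.allclose(P_G @ SWAP - SWAP @ P_G, 0))   # True
\end{verbatim}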
\subsection{Quantum simulations}\label{subsec:quantumsimulation}
The general form for unitary evolution operators we assume is
\begin{equation}\label{eqn:timeevol}
U(t) = \mathcal{O}_{op} \Bigg[ \prod_{j=1}^p \prod_{i=1}^m U_i(\tau_j)\Bigg],
\end{equation}
where $t$ is the total duration of evolution, $\mathcal{O}_{op}$ denotes some operator ordering, such as time ordering, and $p$ is the number of time-steps used to evolve to $t$. Each $U_i(\tau_j)$ acts locally on $k$ qubits and is parameterized by a real time coordinate $\tau_j$. Evolution unitaries of the form in Eq.~\eqref{eqn:timeevol} can evolve over discrete time, where each $\tau_j$ is finite, or discretized-continuous time, where each $\tau_j$ is (ideally infinitesimally) small in order to minimize Trotter error.
In the instance where a $k$-local Hamiltonian, $H = \sum_i h_i$, is known, Eq.~\eqref{eqn:timeevol} results from the local dynamics governed by the $h_i$ via the Trotter-Suzuki formula~\cite{Suzuki1991}, and $p$ corresponds to the number of Trotter steps. We refer to the operator $U$ as the relevant quantum system and examine the dynamics and associated symmetry-protected subspaces of three exemplary systems.
\subsubsection{Heisenberg-XXX}\label{subsubsec:heisenbergXXX}
The one-dimensional Heisenberg-XXX model for $n$ spin-$1/2$ particles with nearest-neighbor interactions is given by the following Hamiltonian:
\begin{equation}\label{eq:xxx_ham}
H = \sum_{i=0}^{n-2} X_{i}X_{i+1} + Y_{i}Y_{i+1} + Z_{i}Z_{i+1},
\end{equation}
where $X_i, Y_i,$ and $Z_i$ are Pauli operators acting on spin $i$. The model conserves total spin in the $Z$ basis, represented by the operator
\begin{equation}
S^z = \sum_{i=0}^{n-1} Z_{i},
\end{equation}
as well as the correspondingly-defined operators $S^x$ and $S^y$. Quantum simulation of the Heisenberg model on a digital quantum processor can be achieved via exponentiation and Trotterization of Eq.~\eqref{eq:xxx_ham}. In such quantum simulation experiments, one usually picks one or a number of qubit bases in which to measure. The symmetry operators $S^{x, y, z}$ can be used to mitigate errors in the all-qubit $X,Y,Z$ measurement bases, respectively, via post-selection. In the context of classical simulation of the XXX model, the symmetries can be used to constrain the number of basis states included in the dynamics.
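This conservation law can be checked by brute force for a small chain; the sketch below (our own, with the arbitrary choice $n=4$) builds the Hamiltonian of Eq.~\eqref{eq:xxx_ham} and verifies $[H, S^z]=0$ numerically:
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site_ops, n):
    """Tensor a dict {site: 2x2 operator} into an n-qubit operator."""
    return reduce(np.kron, [site_ops.get(i, I2) for i in range(n)])

def xxx_hamiltonian(n):
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        for P in (X, Y, Z):
            H += op_on({i: P, i + 1: P}, n)
    return H

n = 4                                     # small enough for a dense check
H = xxx_hamiltonian(n)
Sz = sum(op_on({i: Z}, n) for i in range(n))
print(np.allclose(H @ Sz - Sz @ H, 0))    # True: total Z-magnetization conserved
\end{verbatim}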
\subsubsection{$T_6$ quantum cellular automata}\label{subsubsec:t6qca}
The one-dimensional $T_6$ quantum cellular automata (QCA) rule has recently attracted interest within the context of quantum complexity science as a dynamical small-world mutual information network generator~\cite{Jones2022} and a QCA Goldilocks rule~\cite{Hillberry2021}. Its discrete-time unitary update can be derived from a parent Hamiltonian, but it is more natural to define the system by specifying the simulation unitary for a discrete time $t=p$ directly:
\begin{equation}
\begin{split}
U(T_6;t)&=\mathcal{O}_{op}^{time} \Bigg[\prod_{j=1}^{p} \prod_{i=3, 5, \ldots}^{n-1} U_i(\tau_j) \prod_{i=2,4, \ldots}^{n-1}U_i(\tau_j)\Bigg]\\
U_i(\tau_j) &= \sum_{\alpha,\beta=0}^1 P_{i-1}^{(\alpha)} \otimes (H_i)^{\delta_{\alpha+\beta,1}} \otimes P_{i+1}^{(\beta)},
\end{split}
\end{equation}
where $P^{(\alpha)}_i = \ket{\alpha_i}\bra{\alpha_i}$ for $\alpha=0,1$ is the projection operator onto the corresponding state of qubit $i$, $H_i$ is the Hadamard operator, and $\delta_{\alpha+\beta,1}$ is the Kronecker delta function. At each time step, a Hadamard is applied to a qubit only if exactly one of its neighbors is in the $\ket{1}$ state (i.e., $\alpha + \beta = 1$); otherwise, the update acts as the identity. The rule has a known $Z$-basis symmetry related to domain-wall conservation:
\begin{equation}
S = \sum_{i=0}^{n} Z_{i}Z_{i+1},
\end{equation}
where it should be understood that indices $i=1, \ldots, n$ refer to dynamical, computational qubits while the indices $i=0$ and $i=n+1$ refer to non-dynamical qubits fixed to the $\ket{0}$ state.
\subsubsection{$F_4$ quantum cellular automata}\label{subsubsec:f4qca}
The one-dimensional $F_4$ QCA with nearest- and next-nearest-neighbor connectivity is another Goldilocks rule~\cite{Hillberry2021}. It is also most easily specified by its simulation unitary for discrete-time duration $t=p$:
\begin{equation}
\begin{split}
U(F_4;t) = \mathcal{O}_{op}^{time} \Bigg[ \prod_{j=1}^p U(\tau_j) \Bigg],
\end{split}
\end{equation}
where if the time step index, $j$, is even
\begin{equation}
U(\tau_{j=\text{even}}) = \prod_{i = 2, 5, 8, \ldots} U_i(\tau_j) \prod_{i=3,6,9,\ldots} U_i(\tau_j) \prod_{i=4,7,10,\ldots} U_i(\tau_j)
\end{equation}
and if $j$ is odd, then
\begin{equation}
U(\tau_{j=\text{odd}}) = \prod_{i = 3,6,9, \ldots} U_i(\tau_j) \prod_{i=2,5,8\ldots} U_i(\tau_j) \prod_{i=4,7,10,\ldots} U_i(\tau_j).
\end{equation}
In either case,
\begin{equation}\label{eqn:f4_activator}
U_i(\tau_j) = \sum_{\alpha, \beta, \gamma, \omega = 0}^1 P_{i-2}^{(\alpha)} \otimes P_{i-1}^{(\beta)} \otimes (H_i)^{\delta_{\alpha+\beta+\gamma+\omega,2}} \otimes P_{i+1}^{(\gamma)} \otimes P_{i+2}^{(\omega)}.
\end{equation}
Equation~\eqref{eqn:f4_activator} applies a Hadamard to a qubit if exactly two of its nearest and next-nearest neighbors are in the $\ket{1}$ state. There are no analytically known symmetries for this rule. As shown in Fig.~\ref{fig:scaling} in Sec.~\ref{subsec:complexityscaling}, our methods discover previously unknown symmetry-protected subspaces, indicating a hitherto hidden symmetry of the system.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{interactiongraphworkflow.pdf}
\caption{\label{fig:workflow}Example construction of the state interaction graph $D_{U_{\text{Trot}}}$ and corresponding symmetry-protected subspaces for the one-dimensional, four-site hopping unitary in Eq.~\eqref{eq:hop_unitary}. Multi-qubit states are ordered as $\ket{q_0 q_1 q_2 q_3}$. One Trotter approximation for the unitary, $U_{\text{Trot}}$, is given in Eq.~\eqref{eq:trot_unitary}, where it should be understood that right-most operators act first. For graphical clarity we omit loops, with the understanding they are always implied. (a) State interaction graph $D_{U_{\text{Trot}}}$. Nodes represent the different 4-bit strings, and edges occur where matrix elements of $U_{\text{Trot}}$ are nonzero. Note the presence of five disconnected subgraphs. In practice, we treat each of these edges as undirected. (b) Transitive closure of $D_{U_{\text{Trot}}}$ resulting in the closed state interaction graph, which in this case corresponds exactly to the state interaction graph for the hopping unitary: $D_{U_{\text{Hop}}} = D^+_{U_{\text{Trot}}}$. Each disconnected subgraph in $D_{U_{\text{Trot}}}$ has become a disconnected complete graph in $D^+_{U_{\text{Trot}}}$. (c) Each complete graph in $D_{U_{\text{Hop}}} = D^+_{U_{\text{Trot}}}$ corresponds to a symmetry-protected subspace $G_s$. In this example, the subspace indices correspond to eigenvalues under particle number, $S=\sum_{i=0}^3 a^{\dagger}_i a_i$, conservation.
}
\end{figure*}
\section{Graph Theory Approach to Quantum Simulations}\label{sec:graphtheory}
In this section, we show how graph theory coupled with transitive closure discovers symmetry-protected subspaces. We describe how to ``binarize'' interactions between basis states under the dynamics by discarding amplitudes and phases so as to record only \textit{whether} such an interaction exists, how to describe these interactions as transitive relations, and how to collect the states connected by a transitive relation into the SPSs of the system.
Applied directly, this method would still require unitary matrix multiplication in the full Hilbert space to establish basis-state interactions through the dynamical system. Thus, to apply our methods, we need an efficient way to describe state-to-state interactions. We do this by creating a structure we call a \textit{string edit map} $\mathcal{L}_U$ for a unitary operator $U$, which relies on the observation that quantum systems are typically structured by local interactions. This string edit map returns the basis states available through any number of applications of a single local unitary operator on a single basis state in near-constant time complexity, allowing us to inexpensively find basis vectors available in local quantum dynamics.
This section will proceed as follows: in Sec.~\ref{subsec:stateinteractiongraph}, we first define the concept of a state interaction graph, whose edges indicate nonzero amplitudes on transitions between measurement basis states in the quantum simulation. Next, in Sec.~\ref{subsec:transitiveclosure} we show how transitive closure on this graph creates a cluster graph, which we call the closed state interaction graph, whose complete subgraphs are symmetry-protected subspaces. Finally, in Sec.~\ref{subsec:stringeditoperator} we show how this closed interaction graph can efficiently return the set of all measurement basis states seen by a unitary operator on a single state via the construction of the string edit map $\mathcal{L}_U$ for a unitary operator $U$.
\subsection{State interaction graph}\label{subsec:stateinteractiongraph}
We begin by defining the state interaction graph $D_U$ for a unitary operator $U$ over a Hilbert space basis $B(\mathcal{H}_d)$ and showing how to construct it. For now we leave the form of $U$ general and will specify particular forms when necessary.
\begin{definition}[State Interaction Graph]\label{def:stateinteractiongraph}
Given a basis $B$ and a unitary operator $U$, define a vertex set $V \equiv B$ and an undirected edge set $E \equiv \{ (\ket{b}\leftrightarrow\ket{b'}) \: \: \forall \: \: \ket{b}, \ket{b'} \in B \: \: | \: \: \bra{b'}U\ket{b}\neq 0\}$. Then, the state interaction graph is defined by the ordered tuple $D_U \equiv (V, E)$. In other words, the basis states of $B$ are assigned to vertices (nodes) in the interaction graph and edges are created between vertex states only where the matrix element of the evolution unitary between the two states is nonzero.
\end{definition}
Strictly speaking, $D_U$ should be a directed graph, where an edge points from $\ket{b}$ to $\ket{b'}$ if $\bra{b'}U\ket{b}\neq 0$ and from $\ket{b'}$ to $\ket{b}$ if $\bra{b}U\ket{b'}\neq 0$. However, for a symmetry operator $S$ and time-dependent simulation unitary $U(t)$, one can show that $[S,U(t)]=0 \iff [S,U^{\dagger}(t)]=0$, meaning that symmetries of the evolution operator, and their associated protected subspaces, are invariant under time-reversal. To remain consistent with this observation, we treat every directed relationship $\bra{b'}U\ket{b}\neq 0 \Rightarrow (\ket{b}\rightarrow\ket{b'})$ as an undirected edge $(\ket{b}\leftrightarrow \ket{b'})$. This treatment is equivalent to assuming that the true, directed state interaction graph corresponding to Def.~\ref{def:stateinteractiongraph} always has a cycle that leads back to every node, such as is the case in Fig.~\ref{fig:workflow}(a). Formally, we assume that if an edge $(\ket{b}\rightarrow\ket{b'})\in D_U$, there exists a path $\{\ket{b'}\rightarrow\dots\rightarrow\ket{b}\}\subseteq D_U$. This global cyclicity assumption enables us to use the notion of transitive closure in Sec.~\ref{subsec:transitiveclosure} and beyond in an uncomplicated manner that respects the time-reversal invariance of the resulting subspaces. It is justified on two points: (1) most simulation unitaries have a very regular, repetitive structure, so that directed, acyclic state interaction graphs are likely only to arise in extremely pathological instances; and (2) the failure of the global cyclicity assumption will only ever result in the artificial enlargement of a symmetry-protected subspace, and while such a failure leads to underconstrained subspaces, which reduces the efficacy of, e.g., post-selection, it will never result in the corruption of simulation fidelity by overconstraining or throwing out good simulation data.
To construct $D_U$, we use the following steps. First, choose a set of Hilbert space basis vectors $B(\mathcal{H}_d)=\{\ket{b}\}$. Any symmetry-protected subspaces that are discovered must be formed by the basis vectors of this basis. In the context of quantum simulation, $B$ dictates the basis in which a quantum computer will be measured. For example, parallel readout in the computational $Z$ basis will result in bit strings, $B_{\text{comp.}}=\{\ket{0\ldots00}, \ket{0\ldots01},\ldots,\ket{1\ldots11}\}$, which are Pauli $Z$-string eigenvectors. In the context of classical simulation, the basis furnishes a representation for the $d$-dimensional vector of complex amplitudes that stores the evolving many-body wavefunction. When the evolution operator is applied to a basis vector for a single time-step the resulting state $\ket{\psi(\tau)}$ has a basis vector decomposition
\begin{equation}\label{eqn:basisvecdecomp}
\ket{\psi(\tau)} = U(\tau) \ket{b} = \sum_{b'\in B} \ket{b'}\bra{b'}U(\tau) \ket{b} = \sum_{b'\in B}\alpha_{b'}(\tau) \ket{b'},
\end{equation}
where we suppress the time-ordering subscript $j$ in $\tau_j$ for simplicity. With this decomposition for each $\ket{b} \in B$, we create the state interaction graph $D_U$ of the operator $U(\tau)$: for each pair $\ket{b},\ket{b'}$ such that $\alpha_{b'}(\tau)\neq 0$ in Eq.~\eqref{eqn:basisvecdecomp}, we add an edge $(\ket{b}\leftrightarrow \ket{b'})$ to $D_U$. If one has a direct $d\times d$ matrix representation of $U(\tau)$ on hand, then the adjacency matrix for $D_U$ can be read off directly as $\mathcal{A}_U =\text{bit}[U(\tau)] + \text{bit}[U^\dagger(\tau)]$, where if an entry in $U(\tau)$ or $U^\dagger(\tau)$ becomes non-zero for any value of $\tau$, its complex value is replaced by $1$ under the operation $\text{bit}[\ldots]$. This adjacency matrix can be formed, for example, by directly exponentiating a $d\times d$ Hamiltonian matrix with a time parameter. In practice, however, constructing or storing an entire evolution unitary in memory is costly, since the size of the Hilbert space grows exponentially in the number of qubits, or spins, in the simulation: $d=2^n$. Indeed, one of the main advantages of digital quantum simulation is the ability to break global evolution unitaries into sequences of local unitaries, at the expense of introducing error, which are then implemented as quantum gates. Therefore, being able to extract symmetry-protected subspaces from consideration of local operations, rather than from the global unitary they may approximate, is of clear benefit.
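A minimal sketch of the $\text{bit}[\ldots]$ operation and the adjacency matrix $\mathcal{A}_U$, assuming a dense matrix for the operator is available and using a generic (non-special) parameter value so that no matrix element vanishes accidentally, is:
\begin{verbatim}
import numpy as np

def binarize(M, tol=1e-12):
    """bit[.]: replace every non-negligible entry of a matrix with 1."""
    return (np.abs(M) > tol).astype(int)

def interaction_adjacency(U):
    """A_U = bit[U] + bit[U^dag], clipped to {0,1} (undirected adjacency)."""
    return binarize(U) | binarize(U.conj().T)

theta = 0.3                                # a generic, non-special angle
RX = np.array([[np.cos(theta/2), -1j*np.sin(theta/2)],
               [-1j*np.sin(theta/2), np.cos(theta/2)]])
print(interaction_adjacency(RX))           # [[1 1]
                                           #  [1 1]]
\end{verbatim}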
Towards this end, Fig.~\ref{fig:workflow}(a) shows the state interaction graph $D_{U_{\text{Trot}}}$ for one Trotter approximation (which, depending on $\theta$, may be quite inaccurate) to the one-dimensional, four-qubit hopping unitary
\begin{equation}\label{eq:hop_unitary}
U_{\text{Hop}}(\theta) = e^{i \theta \sum_{i=0}^2 (X_i X_{i+1} + Y_i Y_{i+1})/2}.
\end{equation}
We take the Trotterization to be
\begin{equation}\label{eq:trot_unitary}
U_{\text{Trot}}(\theta) = \text{iSWAP}_{01}(\theta) \times \text{iSWAP}_{12}(\theta) \times \text{iSWAP}_{23}(\theta),
\end{equation}
where it should be understood that rightmost operators are applied first, and multi-qubit states are ordered as $\ket{q_0q_1q_2q_3}$. Notice that each $\text{iSWAP}_{i,i+1}(\theta)$ is parameterized by an arbitrary $\theta$, and as such we expect each iSWAP operation to be a fractional operation that leaves some residual state behind, i.e., the operator has an identity component. The system is small enough that the state interaction graph can be checked by hand in this case, and the main observation to be made is that it is composed of five disjoint subgraphs, each only containing transitions between states of fixed particle number (i.e., number of $\ket{1}$s), and that all nodes in each subgraph have a path to all other nodes in the subgraph. The zero- and four-particle states are isolated, while the one-, two-, and three-particle states form directed, incomplete, isolated subgraphs. As we will see, this ``incompleteness'' feature is a pathology of the Trotter approximation which will be rectified in Sec.~\ref{subsec:transitiveclosure} via transitive closure. It is also worth noting that constructing the state interaction graph using the Trotter-approximated unitary is not yet useful, since on a classical computer it currently still requires the storage and evolution of a $2^n$-dimensional wavefunction. We will demonstrate the utility of constructing state interaction graphs from component $k$-local operators in Sec.~\ref{subsec:stringeditoperator}.
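The following self-contained Python sketch (our own construction, using the standard fractional-iSWAP matrix) builds $U_{\text{Trot}}$ for four qubits, binarizes it, and collects the sixteen basis states into the connected components of the resulting undirected graph; the five groups it prints are the particle-number sectors of Fig.~\ref{fig:workflow}:
\begin{verbatim}
import numpy as np
from collections import deque

def iswap(theta):
    c, s = np.cos(theta), 1j * np.sin(theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, s, c, 0],
                     [0, 0, 0, 1]], dtype=complex)

def embed_two_qubit(gate, i, n):
    """Place a 2-qubit gate on adjacent qubits (i, i+1); qubit 0 is the
    most significant bit, matching the |q0 q1 q2 q3> ordering."""
    return np.kron(np.kron(np.eye(2**i), gate), np.eye(2**(n - i - 2)))

n, theta = 4, 0.3
U_trot = (embed_two_qubit(iswap(theta), 0, n)
          @ embed_two_qubit(iswap(theta), 1, n)
          @ embed_two_qubit(iswap(theta), 2, n))

# Binarized, undirected state interaction graph.
A = ((np.abs(U_trot) > 1e-12) | (np.abs(U_trot.conj().T) > 1e-12)).astype(int)

# Connected components of the graph = symmetry-protected subspaces.
unseen, components = set(range(2**n)), []
while unseen:
    start = unseen.pop()
    comp, queue = {start}, deque([start])
    while queue:
        b = queue.popleft()
        for b2 in np.nonzero(A[b])[0]:
            if b2 in unseen:
                unseen.remove(b2)
                comp.add(b2)
                queue.append(b2)
    components.append(sorted(comp))

for comp in sorted(components, key=min):
    print([format(b, '04b') for b in comp])   # grouped by particle number
\end{verbatim}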
\subsection{Defining symmetry-protected subspaces with transitive closure}\label{subsec:transitiveclosure}
Suppose we have states $\ket{b}$ and $\ket{b''}$, such that $\bra{b''}U\ket{b}=0$ and $\bra{b''}UU\ket{b}\neq 0$. This requires a transitive relation: $\bra{b''}UU\ket{b} = \sum_{b'\in B} \bra{b''}U\ket{b'}\bra{b'}U\ket{b}$, because $\ket{b}$ must first transition to an intermediate state $\ket{b'}$ to reach its final destination at $\ket{b''}$. Therefore, in our state interaction graph $D_U$, as defined in Def.~\ref{def:stateinteractiongraph}, there are edges $(\ket{b}\rightarrow \ket{b'}) ,(\ket{b'} \rightarrow \ket{b''}) \in D_U$, which will have the same transitive relation encoded in the path $\{\ket{b} \rightarrow \ket{b'} \rightarrow \ket{b''}\} \subseteq D_U $. We use this duality to make the assumption that if the edges $(\ket{b}\rightarrow \ket{b'}),(\ket{b'}\rightarrow \ket{b''}) \in D_U$, the transition amplitude $\bra{b''}UU\ket{b}\neq 0$.
There are cases where the amplitudes of the states cancel, due to destructive interference, and break this transitive property on the level of individual basis-state-to-basis-state interactions. By ignoring the amplitudes of the basis states, we run the risk of including states in the subspace which would be removed via destructive interference. This risk comes with the benefit of efficiently knowing which states are reachable in the quantum simulation, and for our applications it does not \textit{add} any error to a simulation, as it does not break the underlying commuting subspace projection operator $P_G$; the subspaces are simply not as restrictive as they could be. See Appendix~\ref{appendix:proofdestint} for a proof. This also allows one to define the symmetry-protected subspaces for \textit{any} parameterization of the simulation unitary.
The transitive property exists for every state $\ket{b}$ in $D_U$, so we can take the transitive closure of the state interaction graph to create a \textit{closed} state interaction graph $D^{+}_U$. The transitive closure of an edgeset $E$ is a transitively closed edgeset $E^+$, where every pair of states $\ket{b},\ket{b'}\in E$ which can be associated by any transitive relation, in other words can be connected by a path $\{\ket{b}\rightarrow \dots \rightarrow \ket{b'}\} \subseteq E$, has an edge $(\ket{b}\rightarrow \ket{b'}) \in E^+$~\cite{lidl}.
\begin{definition}[Transitively Closed State Interaction Graph]\label{def:closedstateinteractiongraph}
Let $D_U=(V,E)$ be a potentially non-closed state interaction graph for unitary $U$ and basis $B$. Define $V^+\equiv V$ to be the closed state interaction graph vertex (node) set and $E^+$ to be the closed state interaction graph edge set. An edge, $(\ket{b}\leftrightarrow\ket{b'})\in E^+$, exists in this edge set if and only if there is a path between $\ket{b}$ and $\ket{b'}$ in $D_U$, $\{\ket{b}\leftrightarrow\dots\leftrightarrow \ket{b'}\}\subseteq E$. The transitively closed state interaction graph is then defined as $D_U^+ \equiv (V^+, E^+)$.
\end{definition}
Because the original interaction graph represents single-operator state-to-state transitions, any two basis states which can discover each other through the quantum evolution have an edge in $D^{+}_U$; in other words,
\begin{equation}\label{eqn:unitaryinterpretationofgraph}
\exists\:t\;s.t.\;\bra{b'}U(t)\ket{b}\neq 0 \Rightarrow (\ket{b}\leftrightarrow \ket{b'}) \in D^+_U,
\end{equation}
for some time/operator exponent $t$. The transitive closure of an undirected, unweighted graph is a cluster graph, or a set of complete subgraphs; as discussed at the end of Sec.~\ref{subsec:stateinteractiongraph}, we treat the graph $D_U$ as undirected just for this purpose. The transitive closure of the state interaction graph for the Trotterized unitary in Fig.~\ref{fig:workflow}(a) can be seen in Fig.~\ref{fig:workflow}(b), which in this case turns out to be the state interaction graph for the original hopping unitary, or $D^+_{U_{\text{Trot}}}=D_{U_{\text{Hop}}}$. As with $D_{U_{\text{Trot}}}$, within each complete subgraph, total particle number is conserved.
If the edge $(\ket{b}\leftrightarrow \ket{b'}) \in D^{+}_U$, then $\ket{b}$ and $\ket{b'}$ share a conserved quantity of the underlying unknown symmetry. This cluster graph structure also makes it apparent that if a wavefunction is initialized as a linear combination of vectors in one subgraph $G$ of $D^{+}_U$: $\ket{\psi_0} = \sum_{b\in G} \alpha_b \ket{b}$, it will remain in that subgraph:
\begin{equation}
\ket{\psi(t)}=U(t)\ket{\psi_0} = \sum_{b'\in G}\alpha_{b'}(t) \ket{b'}\; \forall \: t.
\end{equation}
Therefore, each complete subgraph $G$ within $D^{+}_U$ is a symmetry-protected subspace. We will formally prove that transitive closure on the state interaction graph can give an SPS with Thm.~\ref{thm:spstheorem} in Sec.~\ref{subsec:recurrence_relation}.
$D^{+}_U$ can be represented as a list of disjoint sets of nodes, where each set has implied all-to-all connectivity. This set construction can be seen for the hopping unitary in Fig.~\ref{fig:workflow}(c). Here, the index $s$ of each subset $G_s$ counts the number of conserved particles. Formally, we can define the associated symmetry-protected subspaces by constructing their projection operators according to Def.~\ref{def:sps}: $P_G = \sum_{b\in G} \ket{b}\bra{b}$.
\subsection{Basis state string edit map}\label{subsec:stringeditoperator}
We now have a method to identify symmetry-protected subspaces using the language of graph theory. However, actually computing these subspaces still requires the construction and manipulation of vectors and matrices in an exponentially large Hilbert space. Recall, though, that the systems of interest are defined by Hamiltonians composed of local operations $U = \prod_i U_i $ where each $U_i$ is \textit{k-local}, meaning it only involves $k$ of the $n$ total qubits in the system, and where in general we will have $k \ll n$. In this section we describe our mechanism for using this fact to build up SPSs efficiently with what we call the \textit{basis string edit map}. This map enables computation of subspaces using only $k$-local operations on basis state vectors, so the computational complexity of operation with this map scales with $k$ instead of $n$.
\begin{definition}[Basis String Edit Map]\label{def:stringeditoperator}
Let $U_{i}(t)$ be a unitary operator that acts for a time $t$ non-trivially on $k$ of $n$ qubits, $Q_k(i) \subseteq \{q_0, \ldots, q_{n-1}\}$. The \textit{basis string edit map} $\mathcal{L}_{U_i}$ maps a basis state $\ket{b}$ to the set of basis states $\{\ket{b'}\}$ to which $\ket{b}$ can evolve after an arbitrary amount of time under $U_i(t)$. Formally, $\mathcal{L}_{U_i}(\ket{b}) = \{\ket{b'} \: | \:\exists\, t\, \bra{b'} U_i(t)\ket{b} \neq 0\}$.
\end{definition}
We can apply this construction to any unitary operator, including any $k$-local Trotter decomposition. Given a unitary operator which is a product of local operators $U = \prod_i U_i $, we form the set of local operators used in the Trotter decomposition, $\{U_i\}$. Then a basis string edit map can be formed for any subset of operators from $\{U_i\}$, as long as every operator $U_i \in U$ is included in at least one string edit map. We will use the decomposition
\begin{equation}\label{eqn:setofeditmaps}
\mathbb{L} \equiv \{\mathcal{L}_{U_i} : U_i \in U \},
\end{equation}
which has one string edit map for each local operator in the Trotter decomposition.
In Def.~\ref{def:stringeditoperator} we have deliberately left out the exact space upon which $\mathcal{L}_{U_i}$ acts. When operating on states in a basis $B$ with $\mathcal{L}_{U_i}$: if $\text{dim}(\mathcal{L}_{U_i}) < \text{dim}(B)$, we call $\mathcal{L}_{U_i}$ a ``substring edit map'', and if $\text{dim}(\mathcal{L}_{U_i}) = \text{dim}(B)$, we refer to it as a ``string edit map'' or ``full string edit map''.
For example, in the hopping unitary, $U_{\text{Hop}}$ given in Eq.~\eqref{eq:hop_unitary}, we can define $\mathcal{L}_{U_{\text{Hop}}}$, which would give $\mathcal{L}_{U_{\text{Hop}}}(\ket{0111}) = G_3$, where $G_3$ is shown in Fig.~\ref{fig:workflow}(c). More usefully, we can take the Trotterization of Eq.~\eqref{eq:hop_unitary} given by Eq.~\eqref{eq:trot_unitary}, and define string edit maps for the local iSWAP operators, $\mathbb{L}_{\text{Trot}} = \{\mathcal{L}_{\text{iSWAP}_{0,1}(\theta)}, \mathcal{L}_{\text{iSWAP}_{1,2}(\theta)}, \mathcal{L}_{\text{iSWAP}_{2,3}(\theta)}\}$. In Eq.~\eqref{eq:trot_unitary}, each local operator $U_i = \text{iSWAP}_{i,i+1}(\theta)$ would generate the corresponding edit map $\mathcal{L}_{\text{iSWAP}_{i,i+1}(\theta)}$, which operates as, e.g., $\mathcal{L}_{\text{iSWAP}_{1,2}(\theta)}(\ket{0\textbf{10}0})=\{\ket{0\textbf{01}0}, \ket{0\textbf{10}0}\}$. Here we have written the operators as acting on the full $2^n$-dimensional Hilbert space and highlighted in bold the qubits that are part of the $2^k$-dimensional subspace upon which $\mathcal{L}_{\text{iSWAP}_{1,2}(\theta)}$ acts. This $k$-local string edit map does not require information about any states besides those at the relevant indices, $1$ and $2$ in this case.
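A toy realization of such a substring edit map as a plain lookup table (hypothetical helper names; a generic $\theta$ is assumed so that both the swapped and unswapped substrings survive) is:
\begin{verbatim}
# Substring edit map of a fractional iSWAP on two qubits.
L_iswap = {
    '00': {'00'},
    '01': {'01', '10'},
    '10': {'10', '01'},
    '11': {'11'},
}

def apply_edit_map(local_map, bitstring, sites):
    """Apply a 2-local substring edit map at the given site indices."""
    i, j = sites
    sub = bitstring[i] + bitstring[j]
    return {bitstring[:i] + new[0] + bitstring[i+1:j] + new[1] + bitstring[j+1:]
            for new in local_map[sub]}

print(apply_edit_map(L_iswap, '0100', sites=(1, 2)))   # {'0100', '0010'}
\end{verbatim}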
Algorithm~\ref{alg:computeeditfunc} creates $\mathcal{L}_{U_i}$ by taking transitive closure of the adjacency matrix $\mathcal{A}_{U_i}$ of $D_{U_i}$ via boolean matrix multiplication~\cite{fischer1971}. This algorithm requires $O(2^{3k})$ time to compute $\mathcal{L}_{U_i}$, $O(2^k)$ space to store it, and $O(1)$ time to use $\mathcal{L}_{U_i}$.
\begin{algorithm}[H]
\caption{Create a string edit map $\mathcal{L}_{U_i}$}\label{alg:computeeditfunc}
\begin{algorithmic}
\Require{Local unitary $U_i$, orthonormal basis $B_i=\{\ket{b}\}$}
\Ensure{$\text{Dim}(B_i)=\text{Dim}(U_i) = 2^k$ and $\text{Col}(U_i) = \text{Col}(B_i)$}
\State $\mathcal{A}_{U_i} \gets \text{bit}[U_i] + \text{bit}[U_i^\dagger]$ \Comment{\parbox[t]{.5\linewidth}{Add $U_i^\dagger$ to make the adjacency matrix undirected}}
\State $\mathcal{A}_{U_i}' \gets \text{bit}[\mathcal{A}_{U_i}^2]$
\While{$\mathcal{A}_{U_i}' \neq \mathcal{A}_{U_i}$}
\State $\mathcal{A}_{U_i} \gets \mathcal{A}_{U_i}'$
\State $\mathcal{A}_{U_i}' \gets \text{bit}[\mathcal{A}_{U_i}^2]$
\EndWhile
\For{$\ket{b} \in B_i$}
\For{$\ket{b'} \in B_i$}
\If{$\mathcal{A}_{U_i}[\ket{b}, \ket{b'}] = 1$}
\State $\mathcal{L}_{U_i}(\ket{b}) \gets \mathcal{L}_{U_i}(\ket{b}) \cup \ket{b'}$
\EndIf
\EndFor
\EndFor\\
\Return{$\mathcal{L}_{U_i}$}
\end{algorithmic}
\end{algorithm}
\noindent
Algorithm~\ref{alg:computeeditfunc} is computationally trivial for small unitary operators ($k=2$ in the $\text{iSWAP}$ example), but very expensive for the large unitary operators encountered in quantum simulations. Throughout the rest of this paper, we will use $\mathcal{L}_{U_i}$ defined on small $k$ to compute our subspaces in order to keep a small overhead (thus, in our terminology, we will always be talking about ``substring edit maps'').
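For reference, a short Python transcription of Algorithm~\ref{alg:computeeditfunc} (a sketch, not the authors' code; it merges $\mathcal{A}_{U_i}$ with $\text{bit}[\mathcal{A}_{U_i}^2]$ at each step, which converges to the same transitive closure) reads:
\begin{verbatim}
import numpy as np

def make_edit_map(U_i, basis_labels, tol=1e-12):
    """Substring edit map of a k-local unitary via boolean matrix squaring."""
    A = (np.abs(U_i) > tol) | (np.abs(U_i.conj().T) > tol)   # bit[U]+bit[U^dag]
    while True:
        A_next = A | ((A.astype(int) @ A.astype(int)) > 0)   # merge with bit[A^2]
        if np.array_equal(A_next, A):
            break
        A = A_next
    return {b: {basis_labels[j] for j in np.nonzero(A[i])[0]}
            for i, b in enumerate(basis_labels)}

# Example: a fractional iSWAP at a generic angle.
theta = 0.3
c, s = np.cos(theta), 1j * np.sin(theta)
iswap = np.array([[1, 0, 0, 0],
                  [0, c, s, 0],
                  [0, s, c, 0],
                  [0, 0, 0, 1]])
print(make_edit_map(iswap, ['00', '01', '10', '11']))
# '00' -> {'00'}, '01' -> {'01','10'}, '10' -> {'01','10'}, '11' -> {'11'}
\end{verbatim}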
The object $\mathbb{L}$ lets us create a version of the state interaction graph from Def.~\ref{def:stateinteractiongraph} which we call the \textit{string interaction graph} $D_\mathbb{L}$; this construction compactly shows the state-to-state interactions in the algorithms presented in Secs.~\ref{sec:algorithm1} and~\ref{sec:algorithm2}. The mapping itself is defined as $\mathbb{L}(\ket{b}) \equiv \bigcup_{\mathcal{L}_i \in \mathbb{L}} \mathcal{L}_i(\ket{b})$, the set of all basis states reachable from $\ket{b}$ under any single substring edit map.
\begin{definition}[String Interaction Graph]\label{def:basisstringinteractiongraph}
Let $D_\mathbb{L}$ be a graph defined by an ordered tuple $D_\mathbb{L} \equiv (V_\mathbb{L},E_\mathbb{L})$ and $\mathbb{L} \equiv \{\mathcal{L}_{U_i} \: : \: U_i \in U\}$ for some Trotterized unitary $U=\prod U_i$. The vertex set is then given by $V_\mathbb{L} \equiv B(\mathcal{H}_d)$ and the edgeset is given by $E_\mathbb{L} \equiv \{(\ket{b}\leftrightarrow\ket{b'}) \: : \: \ket{b},\ket{b'} \in B \: \mathrm{and} \: \ket{b'} \in \mathbb{L}(\ket{b})\}$.
\end{definition}
Notice that, while edges in $D_U$ can capture the action of multiple operators at once, each edge in $D_{\mathbb{L}}$ is the action of only a single operator; therefore, if $D_U\equiv(V,E)$ is the state interaction graph and $D_\mathbb{L}\equiv(V,E_\mathbb{L})$ is the string interaction graph, then $E_\mathbb{L} \subseteq E$. Later, we will show with the proof in Appendix~\ref{appendix:prooftc} for Thm.~\ref{thm:spstheorem} that, despite this inclusion, the transitive closure of the string interaction graph, $D_\mathbb{L}^+$, is equivalent to the transitively closed state interaction graph: $D_\mathbb{L}^+ \equiv D_U^+$.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{recursiverelation.pdf}
\caption{\label{fig:recursiverelation} This figure shows how we can incrementally build the symmetry-protected subspace for $U=\prod_{i=0}^2\text{iSWAP}_{i,i+1}(\theta)$. (a) A single un-closed symmetry-protected subspace, or a subgraph of $D_\mathbb{L}$ for an iSWAP network. All operations $\mathbb{L}_{\text{Trot}}=\{\mathcal{L}_{\text{iSWAP}_{0,1}(\theta)}, \mathcal{L}_{\text{iSWAP}_{1,2}(\theta)}, \mathcal{L}_{\text{iSWAP}_{2,3}(\theta)}\}$ which are non-identity on each node are shown as an edge. Notice the similarities between this graph and the $s=2$ subgraph of Fig.~\ref{fig:workflow}(a): both have the same vertex set, but the edgeset of $D_\mathbb{L}$ is a subset of the edgeset in $D_U$. (b) Follow the recursion relation in Eq.~\eqref{eqn:recursionrelation} to iteratively build the symmetry-protected subspace $G_{\ket{1100}}$, starting from $\ket{\psi_0}=\ket{1100}$. Even though the graph in (a) is not equivalent to the graph in Fig.~\ref{fig:workflow}(a), their transitively closed graphs are equivalent, as can be seen by the vertices covered by the red line.
}
\end{figure*}
\section{Algorithm: Creating the Symmetry-Protected Subspace of an Initial State}\label{sec:algorithm1}
We now turn to the construction of the entire symmetry-protected subspace of a given initial state using $k$-local substring edit maps and transitive closure. The algorithm by which we do so works by establishing a recurrence relation for computing new states, associated by symmetry-protection to the initial state, in the simulation. This recurrence relation furnishes an efficient way to compute the transitive closure of select subgraphs of the entire Hilbert space, with no extraneous information. For a unitary simulation operator decomposed into enumerated local operations, $U(\tau_j) = \prod_{i=1}^{m} U_i(\tau_j)$, where each $U_i(\tau_j)$ is $k$-local, we will show that a symmetry-protected subspace $G_{\ket{\psi_0}}$ of the initial state $\ket{\psi_0}$ costs $O((m+1)\times|G_{\ket{\psi_0}}|)$ to compute and $O(|G_{\ket{\psi_0}}|)$ to store with a breadth-first search~\cite{Skiena2008}. We will give the algorithm for the case where the initial state $\ket{\psi_0}$ is a single measurement basis state (i.e., a product state). If $\ket{\psi_0}$ is a linear combination of measurement basis vectors, the algorithm can be repeated for each basis vector in the sum; this does not impact the asymptotic performance of the algorithm, as it only makes the computed subspace bigger.
We enumerate every state in a symmetry-protected subspace by transitively closing subgraphs created by the local basis substring edit maps established in Sec.~\ref{subsec:stringeditoperator}, which return, in $O(1)$ time when operating on a single basis state, the set of basis states reachable under their corresponding unitary operators. We recursively build the subspace by applying the set of substring edit maps $\mathbb{L}\equiv \{\mathcal{L}_{U_i}\: : \: U_i \in U\}$ to each new state, until no new states are added. This process can be seen as the transitive closure of a subgraph of the graph $D_\mathbb{L}$.
For $U_{\text{Trot}}$ in Eq.~\eqref{eq:trot_unitary}, Fig.~\ref{fig:recursiverelation}(a) shows an example of the subgraph of $D_{\mathbb{L}}$ corresponding to the action of each $\mathcal{L}_{\text{iSWAP}_{i,i+1}(\theta)} \in \mathbb{L}_{\text{Trot}}$ starting from the initial state $\ket{\psi_0}=\ket{1100}$ (self-edges are ignored as elsewhere in the manuscript).
\subsection{Recurrence relation}\label{subsec:recurrence_relation}
To find the symmetry-protected subspace of an initial state, we begin by computing the string edit map for each $k$-local unitary, $\mathbb{L}\equiv\{\mathcal{L}_{U_i} \: : \: U_i \in U\}$. Next, we check the set of measurement basis strings generated by operating with each substring edit map on the initial state, notated $\mathbb{L}(\ket{\psi_0}) \equiv \bigcup_{\mathcal{L}_i \in \mathbb{L}} \mathcal{L}_i(\ket{\psi_0})$. We define the set $T_{\ket{\psi_0}}^1\equiv \{\ket{\psi_0}\}\: \cup \: \mathbb{L}(\ket{\psi_0})$. Then, for each \textit{new} state $\ket{\phi}\in \mathbb{L}(\ket{\psi_0})$, we check operations under the substring edit maps: $T_{\ket{\psi_0}}^2=T_{\ket{\psi_0}}^{1} \cup \mathbb{L}(T_{\ket{\psi_0}}^1)$, where $\mathbb{L}(A)\equiv \bigcup_{\ket{\phi} \in A}\mathbb{L}(\ket{\phi})$ for any set of basis states $A$. This process repeats until no new states are found through the following recurrence relation.
\begin{equation}\label{eqn:recursionrelation}
\begin{cases}
T_{\ket{\psi_0}}^1 \leftarrow \{\ket{\psi_0}\} \cup \mathbb{L}(\ket{\psi_0}) & \text{base case}\\
T_{\ket{\psi_0}}^{i+1} \leftarrow T_{\ket{\psi_0}}^{i} \cup \mathbb{L}(T_{\ket{\psi_0}}^i) & \text{recursive case}\\
T_{\ket{\psi_0}}^{i+1} = T_{\ket{\psi_0}}^i & \text{stop condition}
\end{cases}
\end{equation}
The stop condition activates if no new states are found, that is, when additional operations drawn from $\mathbb{L}$ do not unveil any new states. Steps 0-4 in Fig.~\ref{fig:recursiverelation}(b) show how the recurrence relation manifests for the input state $\ket{\psi_0}=\ket{1100}$ and the set of substring edit maps generated from the Trotterization in Eq.~\eqref{eq:trot_unitary}. We then define the symmetry-protected subspace $G_{\ket{\psi_0}}$ to which the state $\ket{\psi_0}$ belongs via
\begin{equation}\label{eq:TG_equiv}
G_{\ket{\psi_0}} \equiv T_{\ket{\psi_0}}^{i+1},
\end{equation}
where the definition ``$\equiv$'' in Eq.~\eqref{eq:TG_equiv} should be taken to mean ``all basis states in $T_{\ket{\psi_0}}^{i+1}$ viewed as nodes in a complete graph''. We can create the Kleene closure of the set of substring edit maps, denoted $\mathbb{L}^{\star}$, which is the set of all finite concatenations of substring edit maps, including the identity. Any string of substring edit maps in $\mathbb{L}^{\star}$ applied to $\ket{\psi_0}$ results in a state in $G_{\ket{\psi_0}}$, by definition. Hence, one can write
\begin{equation}
\mathbb{L}^{\star}(\ket{\psi_0}) = G_{\ket{\psi_0}}.
\end{equation}
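A direct, set-based transcription of this recurrence (our own sketch; the nearest-neighbor ``swap or keep'' maps used below act identically, at the edit-map level, to the fractional iSWAPs of Eq.~\eqref{eq:trot_unitary}) is:
\begin{verbatim}
def sps_fixed_point(psi0, edit_maps):
    """Fixed-point iteration of the recurrence: add L(T) to T until
    T stops growing."""
    T = {psi0}
    while True:
        T_next = set(T)
        for state in T:
            for edit_map in edit_maps:
                T_next |= edit_map(state)
        if T_next == T:            # stop condition
            return T
        T = T_next

def swap_or_keep(bits, i):
    """Edit map of a fractional iSWAP on sites (i, i+1): keep or swap."""
    out = {bits}
    if bits[i] != bits[i + 1]:
        out.add(bits[:i] + bits[i + 1] + bits[i] + bits[i + 2:])
    return out

edit_maps = [lambda b, i=i: swap_or_keep(b, i) for i in range(3)]
print(sorted(sps_fixed_point('1100', edit_maps)))
# ['0011', '0101', '0110', '1001', '1010', '1100']  -- the s = 2 subspace
\end{verbatim}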
We now state our main result.
\begin{theorem}\label{thm:spstheorem}
Let $U(t)=\mathcal{O}_{op} \big[\prod_{j=1}^p \prod_{i=1}^m U_i(\tau_j)\big]$ be a quantum simulation unitary of duration $t$ as in Eq.~\eqref{eqn:timeevol}, divided into $p$ time-steps, where each $U_i(\tau_j)$ is $k$-local and is time-step parameterized by $\tau_j$, and some operator ordering (such as time-ordering) is specified. Let $\mathcal{L}_{U_i}$ be the string edit map corresponding to any available parameterization of $U_i(\tau_j)$ and the set of such maps $\mathbb{L} \equiv \{\mathcal{L}_{U_i} \: : \: i \in \{1,\ldots,m\}\}$. Let $B(\mathcal{H}_{2^n})\equiv \{\ket{b}\}$ be the basis in which computations (measurements) are being performed classically (quantumly).
Then, given an input state $\ket{\psi_0}$, expressed in the basis $B$, if $\ket{b}\notin G_{\ket{\psi_0}}$, where $G_{\ket{\psi_0}}$ is constructed according to Eqs.~\eqref{eqn:recursionrelation}-\eqref{eq:TG_equiv}, then $\bra{b}U(t)\ket{\psi_0} = 0$.
\end{theorem}
\noindent
For a proof, see Appendix~\ref{appendix:prooftc}. Note that for $\bra{b}U(t)\ket{\psi_0}$ to vanish under these conditions, $G_{\ket{\psi_0}}$ must satisfy Def.~\ref{def:sps} as demonstrated in Lem.~\ref{lem:thm1_connection}. In other words, for $G_{\ket{\psi_0}}$ to be able to exclude particular basis states for arbitrary evolution times, it must be a symmetry-protected subspace.
Note that Thm.~\ref{thm:spstheorem} immediately provides two corollaries.
\bigskip
\noindent
\textit{Corollary 1 (Post-Selection).} For simulation on an idealized, noise-free quantum computer, if $\ket{b_f}\notin G_{\ket{\psi_0}}$ then $||\bra{b_f}U(t)\ket{\psi_0}||^2 = 0$. Hence, if the state $\ket{b_f}$ is measured in the output of a noisy quantum device, it can be assumed that the state arose as a result of error, and may be discarded.
\bigskip
\noindent
\textit{Corollary 2 (Global Subspace).} We assumed a Trotterized form for $U(t)$ in the statement and proof of Theorem~\ref{thm:spstheorem}. However, we can formally recover the corresponding global simulation unitary by taking the limit $p\rightarrow \infty$ where $\tau_j = jt/p$ in the time-ordered case and $\tau_j = t/p \: \forall j$ when time-ordering is unnecessary (such as when the Hamiltonian is time-independent). Nothing in the proof of Theorem~\ref{thm:spstheorem} relies upon the finiteness of $p$ or discreteness of the corresponding time differential $t/p$. Therefore, our result holds for global simulation unitaries as well. This implies that one can reduce the resource requirements in classical simulations of $U(t)$ by only evolving basis states $\ket{b_f}\in G_{\ket{\psi_0}}$.
\subsection{Pseudocode}
With an understanding of the recurrence relation in Eq.~\eqref{eqn:recursionrelation} and how it can compute symmetry-protected subspaces, we present an algorithm which can enumerate these subspaces using a breadth-first search. Breadth-first search to enumerate an entire graph $(V,E)$ has computational complexity $O(|V|+|E|)$. In our implementation, there are $|G_{\ket{\psi_0}}|$ vertices and we check for $m$ edges at each vertex, giving $O(m\times |G_{\ket{\psi_0}}|)$ edges in the entire graph. Thus, our breadth-first search to enumerate the symmetry-protected subspace is $O(|G_{\ket{\psi_0}}| + m\times|G_{\ket{\psi_0}}|) = O((m+1)\times|G_{\ket{\psi_0}}|)$.
\begin{algorithm}[H]
\caption{Enumerate symmetry-protected subspace $G_{\ket{\psi_0}}$ with a breadth-first search}\label{alg:makesps}
\begin{algorithmic}
\Require{String edit maps $\mathbb{L}$ of the simulation operator, initial state $\ket{\psi_0}$}
\State $G \gets \{\ket{\psi_0}\}$
\State Let $Q$ be a first-in-first-out queue
\State $Q\text{.enqueue(}\psi_0)$
\While{$Q \neq \emptyset$}
\State $\ket{b} \gets Q\text{.dequeue()}$
\For {$\ket{b'} \in \{\mathcal{L}_{U_i}(\ket{b}) \: : \: \mathcal{L}_{U_i} \in \mathbb{L}\}$}
\If{$\ket{b'} \notin G$}
\State $G \gets G \cup \ket{b'}$
\State $Q\text{.enqueue(}\ket{b'})$
\EndIf
\EndFor
\EndWhile\\
\Return{$G_{\ket{\psi_0}} \gets G$}
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{alg:makesps} uses the set $G$, which is eventually the symmetry-protected subspace, to track which states have already been added to $Q$ during the runtime of the algorithm and prevent them from being checked more than once. If this set uses the hash of the basis states' bitstrings, insertion and search will be average-case $O(1)$. Getting the set of single-operator transitions $T_{\ket{b}}^1$ for a single state $\ket{b}$ is $O(m)$, where $m$ is the number of unitary operators in the system; while not all $m$ substring edit maps will provide an edge, as many might act as identity on the state $\ket{b}$, each string edit map must still be checked. This set is computed for each state discovered, and each state discovered is never added to the queue $Q$ more than once, which confirms our original complexity analysis of $O((m+1)\times |G_{\ket{\psi_0}}|)$.
This algorithm computes a set equivalent to that described by Eq.~\eqref{eqn:recursionrelation}. See Appendix~\ref{appendix:prooftc} for a proof that this set is a symmetry-protected subspace.
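A compact Python version of Alg.~\ref{alg:makesps} (a sketch under the same assumptions as the earlier snippets; \texttt{swap\_or\_keep} is our stand-in for the iSWAP substring edit maps) is:
\begin{verbatim}
from collections import deque

def enumerate_sps(psi0, edit_maps):
    """Breadth-first search over substring edit maps (Alg. 2 sketch)."""
    G = {psi0}
    queue = deque([psi0])
    while queue:
        b = queue.popleft()
        for edit_map in edit_maps:
            for b_new in edit_map(b):
                if b_new not in G:        # each state enqueued at most once
                    G.add(b_new)
                    queue.append(b_new)
    return G

def swap_or_keep(bits, i):
    out = {bits}
    if bits[i] != bits[i + 1]:
        out.add(bits[:i] + bits[i + 1] + bits[i] + bits[i + 2:])
    return out

edit_maps = [lambda b, i=i: swap_or_keep(b, i) for i in range(3)]
print(sorted(enumerate_sps('0111', edit_maps)))
# ['0111', '1011', '1101', '1110']  -- the G_3 subspace of Fig. 1(c)
\end{verbatim}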
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{allSubspaceSizes.pdf}
\caption{The number of states in symmetry-protected subspaces for each model, on a $\log_2$ scale. Each blue dot is the size of an individual subspace. (a) Heisenberg-XXX. (b) $T_6$ QCA. (c) $F_4$ QCA, which has no known symmetry operator.}
\label{fig:scaling}
\end{figure}
\subsection{Usage and limits}\label{subsec:complexityscaling}
As stated in Sec.~\ref{sec:introduction}, post-selection for quantum simulations is the aim of our methods. In order to perform post-selection with the algorithm from this section, the entire symmetry-protected subspace $G_{\ket{\psi_0}}$ must be known, and each measurement $\ket{b_f}$ is verified via
\begin{equation}
\begin{cases}
\ket{b_f}\in G_{\ket{\psi_0}} & \text{assumed no error}\\
\ket{b_f}\notin G_{\ket{\psi_0}} & \text{known error.}
\end{cases}
\end{equation}
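In code, the post-selection step itself is a simple filter. The sketch below uses hypothetical raw counts and the two-particle subspace of the four-qubit hopping example; none of the numbers are measured data:
\begin{verbatim}
from collections import Counter

def post_select(counts, sps):
    """Keep only measured bitstrings inside the symmetry-protected subspace."""
    kept = Counter({b: c for b, c in counts.items() if b in sps})
    return kept, sum(counts.values()) - sum(kept.values())

# Hypothetical raw counts for an initial state |1100>:
raw = Counter({'1100': 410, '1010': 350, '0110': 180, '1000': 40, '1110': 20})
sps_1100 = {'1100', '1010', '1001', '0110', '0101', '0011'}
clean, n_discarded = post_select(raw, sps_1100)
print(clean, n_discarded)   # '1000' and '1110' are rejected (60 shots)
\end{verbatim}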
Because this requires the computation of the entire subspace, it can still become computationally intractable. As mentioned in Sec.~\ref{sec:introduction}, we benchmark this algorithm against the Heisenberg-XXX, $T_6$ QCA, and $F_4$ QCA quantum simulations. The computationally limiting factor of this algorithm is the size of the symmetry-protected subspace, as it has time complexity $O((m+1)\times|G_{\ket{\psi_0}}|)$ and spatial complexity $O(|G_{\ket{\psi_0}}|)$. See Fig.~\ref{fig:scaling} for a depiction of the size of the subspaces, up to 17 qubits. Figure~\ref{fig:scaling}(c) is especially significant because, as alluded to in Sec.~\ref{subsubsec:f4qca}, this model previously had no known conservation laws, but these data show the partitioning of Hilbert space into symmetry-protected subspaces. By examining Fig.~\ref{fig:scaling}, we can see that the largest subspace size, $\max(\{|G|\})$, satisfies $\log_2(\max(\{|G|\}))\approx \log_2(|\mathcal{H}_d|) - k$ when the model is composed of $k$-local operations. The important thing to see is that the size of symmetry-protected subspaces still scales exponentially in the worst and average cases; despite being smaller than the full Hilbert space, they are only linearly smaller. We address this obstacle to post-selection with another algorithm in Sec.~\ref{sec:algorithm2}.
\section{Algorithm: Verification of a Shared Symmetry-Protected Subspace}\label{sec:algorithm2}
Because the worst-case size of a symmetry-protected subspace is still exponential in the number of qubits, Alg.~\ref{alg:makesps} is only applicable at relevant system sizes for simulations where $G_{\ket{\psi_0}}$ is not exponentially large. Thus, to generalize the usability of symmetry-protected subspaces to any simulation, we present an alternative algorithm in this section that uses an efficient, but greedy, heuristic. Instead of computing the entire exponentially large SPS, this method performs an efficient search for a path of substring edit maps to connect the initial state $\ket{\psi_0}$ and a measured state $\ket{b_f}$ in the graph $D_\mathbb{L}$. The heuristic nature of this algorithm means that it produces only approximate results (albeit with a degree of approximation that can be continuously improved at the cost of more time complexity), and since its runtime scales favorably, it can be used to check the output of quantum simulations well beyond the scale where other classical methods become intractable.
Naive search algorithms in a graph traditionally use a breadth-first search from an initial vertex~\cite{5219222}, which is what we described in Alg.~\ref{alg:makesps} to enumerate the symmetry-protected subspace through transitive closure; as stated, this is too computationally expensive in many cases.
Our greedy algorithm works as follows: Following all edges (application of the set of substring edit maps $\mathbb{L}$) from a state generates a set of possible states accessible from that state. Working both forward from $\ket{\psi_0}$ and backward from $\ket{b_f}$, we have found a \textit{path} when steps from both directions lead to a shared element. To detect whether this has happened, we use a simple observation about ordered sets: two sets drawn from a collection of pairwise-disjoint sets (such as the SPSs) are the \textit{same} set if and only if they share their minimal (or maximal) element. A natural order for sets of quantum computational basis states (i.e., binary strings) is just the integer they encode. So, our greedy algorithm works by building two sets (one starting from the initial state and one from the measured state) and comparing them; these sets are subsets of the symmetry-protected subspace corresponding to the initial and final states. If these sets have any common elements (which we check in constant time by looking at their \textit{minimal} elements), then we have found a path connecting the initial and measured state, and they occupy the same SPS.
In our methods, post-selection's accept/reject decision is made by checking for a measurement result in the symmetry-protected subspace of the initial state. Using the rationale outlined above, we can declare with certainty that two states share a SPS when their paths collide. On the other hand, if the two paths terminate in dissimilar minima we \textit{assume} the initial and measured states inhabit disjoint subspaces. We say ``assume'' because the accuracy of our heuristic differs depending on the simulation; when the locally-minimal choice at each step from either state does not build a path to the true minimum element, the two states may build distinct paths while occupying the same SPS.
\subsection{Searching in the string interaction graph} \label{subsec:pathfinding}
This algorithm builds a single branch, following local minima in depth-limited breadth-first searches, from a single starting vertex in the $D_\mathbb{L}$ graph, where each vertex $\ket{b}$'s edgeset is given by $\mathbb{L}(\ket{b})$.
Throughout this section we use the integers encoded by the bitstrings $b$ of basis states $\ket{b} \in B(\mathcal{H}_{2^n})$, which provides a natural ordering to states. Let the function $\min(A)$ on a set of states $A$ return the state with the smallest encoded integer in that set:
\begin{equation}
\min(A) = \ket{b} \: s.t. \: b \leq b',\, \forall \, \ket{b'} \in A .
\end{equation}
To find the minimal element of the symmetry-protected subspace, we start with an element $\ket{b_0} =\ket{\psi_0}$ or $\ket{b_f}$ and build a set towards the minimum of $G_{\ket{b_0}}$, notated $\min(G_{\ket{b_0}})$, with locally optimal decisions. At each step $\ket{b_{\text{curr}}}$ in the search, we compute $T_{\ket{b_{\text{curr}}}}^\mu$ from Eq.~\eqref{eqn:recursionrelation}, which is a breadth-first search to depth $\mu$, or every state in $\mathbb{L}^\mu(\ket{b_{\text{curr}}})$. Then, the starting point for the next step, $\ket{b_{\text{next}}}$, is the state with the smallest binary-encoded integer in $T_{\ket{b_{\text{curr}}}}^\mu$, notated $\min(T_{\ket{b_{\text{curr}}}}^\mu)$. This repeats until the set $T_{\ket{b_{\text{curr}}}}^\mu$ does not offer a state smaller than $\ket{b_{\text{curr}}}$. Let this process be represented by the recursive function $\chi$:
\begin{equation}\label{eqn:iterativepathbuild}
\chi(\ket{b_0}, \mu)=
\begin{cases}
\ket{b_{\text{curr}}} &\text{if } \ket{b_{\text{curr}}} = \min(T_{\ket{b_{\text{curr}}}}^\mu)\\
\chi(\min(T_{\ket{b_{\text{curr}}}}^\mu), \mu) & \text{otherwise}
\end{cases}.
\end{equation}
To reiterate, because each step is given by applications of the substring edit maps, $\chi(\ket{b_0},\mu)$ and $\ket{b_0}$ must share a symmetry-protected subspace, i.e., $\chi(\ket{b_0},\mu) \in G_{\ket{b_0}}$. Therefore, when $\chi(\ket{\psi_0},\mu) = \chi(\ket{b_f},\mu)$ there \textit{must} be a sequence of substring edit maps between $\ket{\psi_0}$ and $\ket{b_f}$; i.e., there is a path $\{\ket{\psi_0} \leftrightarrow \dots \leftrightarrow \ket{b_f}\} \subseteq D_\mathbb{L}$ and $\ket{b_f} \in \mathbb{L}^{\star}(\ket{\psi_0})$.
When both searches conclude in the true minimal element of their corresponding symmetry-protected subspaces, $\chi(\ket{\psi_0},\mu) = \min(G_{\ket{\psi_0}})$ and $\chi(\ket{b_f},\mu)=\min(G_{\ket{b_f}})$, we can conclude with certainty that the states do or do not inhabit the same subspace if $\chi(\ket{\psi_0},\mu) = \chi(\ket{b_f},\mu)$ or $\chi(\ket{\psi_0},\mu) \neq \chi(\ket{b_f},\mu)$. However, if either search does \textit{not} conclude in its targeted minimal state, our conclusions can be wrong. Suppose $\ket{b_f} \in G_{\ket{\psi_0}}$, which means that $\min(G_{\ket{\psi_0}})= \min(G_{\ket{b_f}})$, but the search result $\chi(\ket{b_f},\mu)$ finds a false minimum of $G_{\ket{b_f}}$, meaning $\chi(\ket{b_f},\mu)\neq \min(G_{\ket{b_f}})$. When post-selecting with our procedure under these conditions, state $\ket{b_f}$ would be wrongly rejected because the underlying assumption for this heuristic, that $\chi(\ket{b_0},\mu) = \min(G_{\ket{b_0}})$, is wrong. In practice, the true minimal element of the symmetry-protected subspace is unknown; thus, the assertion that $\chi$ finds the minimum is always an assumption.
Consequently, if the result of the two searches is a collision, $\chi(\ket{\psi_0},\mu) = \chi(\ket{b_f},\mu)$, we \textit{know} the two states must share a symmetry-protected subspace, even if both searches terminate at a false minimum. If the two searches do not find a common element, we \textit{assume} the initial and measured states occupy separate symmetry-protected subspaces. As such, measurement outcomes that in fact lie within the initial state's symmetry-protected subspace can be wrongly rejected. Thus, each measurement $\ket{b_f}$ is verified with
\begin{equation}\label{eq:ps_with_path}
\begin{cases}
\chi(\ket{b_f},\mu) = \chi(\ket{\psi_0},\mu) & \text{assumed no error}\\
\chi(\ket{b_f},\mu) \neq \chi(\ket{\psi_0},\mu) & \text{assumed error}
\end{cases}.
\end{equation}
Formally, the projector onto the protected subspace formed by $\chi$ around an initial state $\ket{\psi_0}$ is
\begin{equation}
P_G' = \sum_{b} \ket{b}\bra{b} \;\; \text{s.t.} \;\; \chi(\ket{b}, \mu) = \chi(\ket{\psi_0}, \mu),
\end{equation}
which is approximately equal to the true symmetry-protected subspace, $P_G \approx P_G'$.
The confidence of this assumption depends on the quantum system being studied and the depth $\mu$ of the local breadth-first searches. Thus, we present arguments to support it for our three exemplar systems: Heisenberg-XXX, $T_6$ QCA, and $F_4$ QCA, as outlined in Sec.~\ref{subsec:quantumsimulation}. For the Heisenberg-XXX model this assumption is always correct at $\mu=1$: systems whose substring edit maps $\mathbb{L}$ are isomorphic to nearest-neighbor SWAP substring edit maps $\mathbb{L}_{\text{SWAP}}$, such as the Heisenberg-XXX model, provably find $\min(G_{\ket{b_0}})$ using $\chi(\ket{b_0}, \mu=1)$, as shown in Appendix~\ref{appendix:proofofswap}. We also numerically find that $\chi(\ket{b_0},\mu=2)$ is exact for the $T_6$ QCA model. Of our three studied systems, the $F_4$ QCA demonstrates the worst accuracy; $\chi(\ket{b_0},\mu=9)$ still returns false minima, which potentially causes false rejections. In Sec.~\ref{subsubsec:pathaccuracydata} we analyze these properties of the $T_6$ and $F_4$ QCAs more closely. In Sec.~\ref{subsec:probps} we show that post-selection is still effective when $\chi$ causes a false rejection to occur.
The computational performance of this method is also model-dependent. Let $|Depth|$ be the depth of our search, i.e., the number of intermediate states traversed (via the second line in Eq.~\eqref{eqn:iterativepathbuild}) before finding a locally optimal vertex. This means the sequence of substring edit maps between $\ket{b_0}$ and $\chi(\ket{b_0},\mu)$ has length $O(\mu\times|Depth|)$, and the computational complexity of Eq.~\eqref{eqn:iterativepathbuild} is $O(m^\mu \times |Depth|)$.
Because the reliability and computational complexity of this post-selection method varies, before applying the algorithm to a given quantum simulation, benchmarks should be performed on a small version of the system to understand the $\mu$ required to obtain reliable results.
\subsection{Pseudocode}\label{subsec:pathfinding_pseudocode}
Here we describe the algorithm, Alg.~\ref{alg:greedypath}, to find the minimum binary encoded state in the symmetry-protected subspace of a given basis state. This algorithm starts at a state $\ket{b_0}$ and follows the local minima of limited breadth-first searches in an attempt to find the set's minimal element, resulting in $\chi(\ket{b_0},\mu)$.
Notice that the innermost \textbf{while} loop is borrowed from Alg.~\ref{alg:makesps} to compute each $T_{\ket{b_{\text{curr}}}}^\mu$, where $\ket{b_{\text{curr}}}$ is the current best minimum state. This time, the queue $Q$ uses the ordered tuple $(\ket{b'},\eta)$ as its elements, where $\ket{b'}$ is the current unchecked state in the breadth-first search and $\eta$ is the number of edges (i.e., depth) that $\ket{b'}$ is away from the current optimal solution $\ket{b_{\text{curr}}}$. Each node checked in this loop has $O(m)$ new edges, and we check a maximum depth of $\mu$, giving the inner loop computational complexity $O(m^\mu)$. The set $T^\mu_{\ket{b_\text{curr}}}$ is checked for every node traversed in the search, and our search traverses $|Depth|$ elements, which is how we arrive at an overall complexity of $O(m^\mu \times|Depth|)$ to compute $\chi(\ket{b_0},\mu)$.
\begin{algorithm}[H]
\caption{Greedy pathfinding to the minimum binary encoded state}\label{alg:greedypath}
\begin{algorithmic}
\Require{Path starting state $\ket{b_0}$, substring edit maps $\mathbb{L}$, search limiting integer $\mu$}
\State{$\ket{b_{\text{next}}} \gets \ket{b_0},\,\ket{b_{\text{curr}}} \gets$ null }
\While{$\ket{b_{\text{curr}}} \neq \ket{b_{\text{next}}} $}
\State $T_{\ket{b_{\text{curr}}}} \gets \{\ket{b_{\text{curr}}}, \ket{b_{\text{next}}}\}$
\State $\ket{b_{\text{curr}}} \gets \ket{b_{\text{next}}}$
\State Let $Q$ be a queue
\State $Q\text{.enqueue}(\ket{b_{\text{curr}}}, 0)$ \Comment{\parbox[t]{.5\linewidth}{Each element in $Q$ is tuple representing (state, depth)}}
\While{$Q \neq \emptyset$} \Comment{\parbox[t]{.5\linewidth}{Breadth-first search centered around $\ket{b_{\text{curr}}}$}}
\State $\ket{b'}, \eta \gets Q\text{.dequeue()}$
\If{$\eta = \mu$}
\textbf{break} \Comment{Reached the depth limit}
\Else
\For{$\ket{b''} \in \{\mathcal{L}_{U_i}(\ket{b'}) \: : \: \mathcal{L}_{U_i} \in \mathbb{L}\}$}
\If{$\ket{b''} \notin T_{\ket{b_{\text{curr}}}}$}
\State $Q\text{.enqueue}(\ket{b''}, \eta + 1)$
\State $T_{\ket{b_{\text{curr}}}} \gets T_{\ket{b_{\text{curr}}}} \cup \{\ket{b''}\}$
\EndIf
\EndFor
\EndIf
\EndWhile
\State $\ket{b_{\text{next}}} \gets \min(T_{\ket{b_{\text{curr}}}})$
\EndWhile\\
\Return $\chi(\ket{b_0}, \mu) = \ket{b_{\text{curr}}}$
\end{algorithmic}
\end{algorithm}
To post-select with Alg.~\ref{alg:greedypath}, run it once for $\ket{\psi_0}$ to get $\chi(\ket{\psi_0},\mu)$, again for each measurement $\ket{b_f}$ to get $\chi(\ket{b_f},\mu)$, and check the results in Eq.~\eqref{eq:ps_with_path}.
As an aside, if all states computed in the search, $T^\mu_{\ket{b_{\text{curr}}}}$, are cached with the result of their search, subsequent executions of the algorithm can be preempted with their previously calculated result if a state is found inside the cache.
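For concreteness, the sketch below gives a minimal Python rendering of Alg.~\ref{alg:greedypath} and the post-selection check of Eq.~\eqref{eq:ps_with_path}. It is an illustrative assumption rather than the implementation used for the data in this paper: basis states are represented as binary-encoded integers, and each substring edit map is assumed to be supplied as a callable that returns the mapped state or \texttt{None} when it does not apply.
\begin{verbatim}
from collections import deque

def chi(b0, edit_maps, mu):
    """Greedy search of Alg. 1: follow the minima of depth-mu breadth-first
    searches and return the smallest binary-encoded state reached."""
    b_curr, b_next = None, b0
    while b_curr != b_next:
        b_curr = b_next
        reached = {b_curr}              # plays the role of T^mu_{b_curr}
        queue = deque([(b_curr, 0)])    # (state, depth) pairs
        while queue:
            b, depth = queue.popleft()
            if depth == mu:             # depth limit of the local search
                break
            for edit in edit_maps:
                b2 = edit(b)
                if b2 is not None and b2 not in reached:
                    reached.add(b2)
                    queue.append((b2, depth + 1))
        b_next = min(reached)           # locally optimal next state
    return b_curr

def accept(b_f, chi_psi0, edit_maps, mu):
    """Post-selection rule of Eq. (ps_with_path): keep a measurement b_f only
    if its search collides with the search started from the initial state."""
    return chi(b_f, edit_maps, mu) == chi_psi0
\end{verbatim}
Caching the result of \texttt{chi} for every state visited in \texttt{reached}, as suggested above, would let repeated measurements of the same or nearby states return immediately.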
\subsection{Benchmarking data for Algorithm~\ref{alg:greedypath}} \label{subsec:greedypathdata}
This subsection presents benchmark data for Alg.~\ref{alg:greedypath} that demonstrates its practical applicability for our example systems. These benchmarks include heuristic reliability and search depth. We supplement this data with proofs or explicit analytical forms when possible. When this is not possible, we instead rely on extrapolation from the data to inform the asymptotics of these quantities. We find that for the Heisenberg-XXX model, the heuristic is provably exact (Appendix~\ref{appendix:proofofswap}) and the search depth has an explicit equation (Appendix~\ref{appendix:pathlengthcalcalc}), while the $T_6$ and $F_4$ QCA models rely on data-driven intuition for these quantities.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{allErrorPathsData.pdf}
\caption{The proportion of states $\ket{b}$ where $\chi(\ket{b},\mu) \neq \min(G_{\ket{b}})$ via Alg.~\ref{alg:greedypath}, called ``failed searches'', at different system sizes, for each $\mu$ used. (a) $T_6$ QCA. At $\mu=2$, Alg.~\ref{alg:greedypath} becomes exact for this system. (b) $F_4$ QCA. High $\mu$ can still fail, but performs asymptotically better for $\mu\geq5$.}
\label{fig:falseminima}
\end{figure}
\subsubsection{Search reliability}\label{subsubsec:pathaccuracydata}
If searching for the minimal element from $\ket{b_0} =\ket{\psi_0}$ or $\ket{b_f}$ fails, because $\chi(\ket{b_0},\mu) \neq \min(G_{\ket{b_0}})$, then it is possible to falsely assume that two states occupy disjoint subspaces. As such, we examine the reliability of the searching function $\chi$ on each exemplar system.
In the Heisenberg-XXX model, the base case for $\chi$, as shown in Eq.~\eqref{eqn:iterativepathbuild}, only activates at a state which is always the minimum binary-encoded state in the symmetry-protected subspace. This is because the set of substring edit maps for the Heisenberg-XXX system is isomorphic to the set of substring edit maps for the nearest-neighbor SWAP network: $\mathbb{L}_{\text{HeisXXX}} = \mathbb{L}_{\text{SWAP}}$. Appendix~\ref{appendix:proofofswap} proves that $\chi$ at $\mu=1$ is exact for $\mathbb{L}_{\text{SWAP}}$.
Because the $T_6$ QCA and $F_4$ QCA do not have SWAP-isomorphic substring edit maps, we do not have any available proofs for their exactness, and instead rely on simulating $\chi$ for these models. The results of this are in Fig.~\ref{fig:falseminima}, where for the $T_6$ and $F_4$ QCA models we plot the proportion of states $\ket{b} \in B(\mathcal{H}_d)$ such that $\chi(\ket{b},\mu) \neq \min(G_{\ket{b}})$ for each $n \in [k, 17]$.
First, notice Fig.~\ref{fig:falseminima}(a), which shows search failures for the $T_6$ QCA model; while $\mu=1$ causes most searches to fail as $n$ increases, $\mu=2$ causes \textit{every} search to succeed for every $n$ in the domain. Thus, we can assume that the heuristic for Alg.~\ref{alg:greedypath} is accurate for this model and that $\chi(\ket{b},\mu=2) = \min(G_{\ket{b}}),$ $\forall \ket{b} \in B$.
Next, notice Fig.~\ref{fig:falseminima}(b), which shows search failures for the $F_4$ QCA model; we use $\mu =1,3,5,7,9$, and find that $\chi$ can fail even with high $\mu$. We see that $\mu =1,3$ fail for the majority of states over the domain, that $\mu=5,7,9$ have failure rates bounded by $\approx 0.05$, and that no $\mu$ completely removes failures.
This metric only determines the reliability of $\chi$, not the reliability of post-selecting with $\chi$. This is because, as discussed previously in Sec.~\ref{subsec:pathfinding}, two searches can fail and obtain the same false minimum, $\chi(\ket{\psi_0},\mu) =\chi(\ket{b_f},\mu) \neq \min(G_{\ket{\psi_0}})=\min(G_{\ket{b_f}})$; i.e., two failed searches can still successfully identify that the states share a conserved quantity. Therefore, the search reliability shown in Fig.~\ref{fig:falseminima} should be seen as an \textit{upper-bound} on the error caused by the heuristic.
Sections~\ref{subsec:exactps} and~\ref{subsec:probps} explore this reliability further in the context of post-selection on noisy quantum data. In the context of post-selection, we find that even small $\mu=3$ is acceptable for the $F_4$ QCA, as the most likely measurement outcomes also share false minima with the initial state. We discuss this observation more with the data that supports it in Sec.~\ref{subsec:probps}.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{allPathLengthsData.pdf}
\caption{The longest path taken at each $n$ for the models (a) Heisenberg-XXX (b) $T_6$ QCA (c) $F_4$ QCA.}
\label{fig:pathlength}
\end{figure}
\subsubsection{Search depth}\label{subsubsec:pathlengthdata}
The time performance of Alg.~\ref{alg:greedypath} is heavily dependent on the length of the path taken. The asymptotic worst case of the search depth is system-dependent, being identified as $n^2/4$ for the Heisenberg-XXX model in Appendix~\ref{appendix:pathlengthcalcalc}, and closer to $O(n\ln(n))$ for the $T_6$ QCA and $O(n\log_{10}(n))$ for the $F_4$ QCA by observing the largest search depth for the algorithm at small $n$, shown in Fig.~\ref{fig:pathlength}. In this figure we find the largest path length at each system size for each relevant $\mu$. For the $T_6$ model, shown in Fig.~\ref{fig:pathlength}(b), the search depth becomes longer as the parameter $\mu$ increases, indicating that false minima are avoided. For the $F_4$ model, shown in Fig.~\ref{fig:pathlength}(c), the search depth becomes shorter as $\mu$ increases; this is because higher $\mu$ means more edges are traversed at each step in the search, and thus less depth is needed in the search overall.
These path lengths, combined with the $\mu$ found in Sec.~\ref{subsubsec:pathaccuracydata}, result in the computational complexity of Alg.~\ref{alg:greedypath} on a single state being $O(mn^2)$ for the Heisenberg-XXX model, $O(m^2n\ln(n))$ for the $T_6$ QCA, and $O(m^\mu n\log_{10}(n))$ for the $F_4$ QCA. The $\mu$ exponent is left in the $F_4$ QCA because, as shown in Sec.~\ref{subsubsec:pathaccuracydata}, we find no value of $\mu$ which guarantees that a search finds the minimal element, and thus expect $\mu$ to vary depending on the symmetry-protected subspace occupied for the simulation. As we can see in Fig.~\ref{fig:falseminima}(b), $\mu \geq 5$ is mostly sufficient.
\begin{figure*}[ht!]
\centering
\includegraphics[width=\linewidth]{allPSDatashots10000.pdf}
\caption{The fidelities $\mathcal{F}(P_{\text{sim}}(p), P_{\text{ideal}}(p))$, $P_{\text{sim}}=P_{\text{raw}}, P_{\text{ps},\mu=1}, P_{\text{ps},\mu=2}, P_{\text{ISPS}}$ of each model (rows) at errors $\epsilon/3=0.005, 0.01, 0.02, 0.05$ (columns). There are $10,000$ measurements at each Trotter step. The $\mu$ used is the smallest required to reach perfect shared subspace verification under Alg.~\ref{alg:greedypath}. This figure shows that with post-selection, model fidelity remains above random noise at least as long as $p=2n$. }
\label{fig:allps}
\end{figure*}
\section{Post-Selection in Emulated Noisy Simulations}\label{sec:psdemo}
To demonstrate the power of our algorithm, we perform automated post-selection on simulated quantum computations with noise. We perform discretized time evolutions with our exemplar systems, the Heisenberg-XXX, $T_6$ QCA, and $F_4$ QCA models, and show that our post-selection methods restructure noisy measurement probability distributions to be closer to the one found in an ideal simulation. We first run the simulation without errors via $U(t)\ket{\psi_0}$, constructing the measurement distribution at each time step. Next, we repeat the simulations with depolarizing noise injected after each layer of gates with independent probability $\epsilon/3=0.005, 0.01, 0.02, 0.05$ of $X_i, Y_i, Z_i$ on each qubit $i$. See Appendix~\ref{appendix:simnoisemodel} for more noise model details. We use the Kullback-Leibler divergence~\cite{kullback1951information} to quantify the distance between the ideal and noisy measurement distributions constructed at each measurement layer, with and without post-selection using Alg.~\ref{alg:greedypath} and the symmetry check in Eq.~\eqref{eq:ps_with_path}.
We observe a significant increase in accuracy of the data when using our post-selection methods without adding circuit runs, even in simulation subspaces which have imperfect pathfinding.
\subsection{Methods}\label{subsec:methods}
We simulate discrete time evolution, and for each discrete time step a sequence of measurements is used to construct a probability distribution. Let $P(p)$ be the normalized measurement distribution constructed from a sequence of $M$ measurements after $p$ Trotter steps, with $P(p,b)\approx |\langle b|\psi(p)\rangle|^2$, the probability of measuring state $\ket{b}$ at layer $p$, up to shot noise. We compare an ideal simulation measurement distribution $P_{\text{ideal}}(p)$, a noisy simulation without post-selection $P_{\text{raw}}(p)$, and a noisy simulation with post-selection using Alg.~\ref{alg:greedypath} at a given $\mu$, $P_{\text{ps}, \mu}(p)$. We use the Kullback-Leibler divergence~\cite{kullback1951information},
\begin{equation}\label{eqn:kldiv}
\mathcal{D}(P(p), Q(p)) \equiv \sum_{b \in B(\mathcal{H}_d)} P(p,b) \ln (\frac{P(p,b)}{Q(p,b)}),
\end{equation}
which measures the distance between two probability distributions, and use the following equation from~\cite{PhysRevResearch.2.043042} as a simulation's fidelity,
%
\begin{equation}\label{eqn:errormetric}
\mathcal{F}(P_{\text{sim}}(p), P_{\text{ideal}}(p)) \equiv 1 - \frac{\mathcal{D}(P_{\text{sim}}(p), P_{\text{ideal}}(p))} {\mathcal{D}(P_{\text{IRN}}, P_{\text{ideal}}(p))},
\end{equation}
%
where $P_{\text{IRN}}$ is the \textbf{i}ncoherent \textbf{r}andom \textbf{n}oise probability distribution
\begin{equation}
P_{\text{IRN}}(b) \equiv \frac{1}{d} \: \: \forall \: \: \ket{b} \in B(\mathcal{H}_d)
\end{equation}
which we expect to measure once noise has proliferated in the computation. Equation~\eqref{eqn:errormetric} will return 1 if $P_{\text{sim}}= P_{\text{ideal}}$, and 0 if $P_{\text{sim}}= P_{\text{IRN}}$. We compute Eq.~\eqref{eqn:errormetric} at each measurement layer in the computation, with $P_{\text{sim}}$ being either the raw data $P_{\text{raw}}$ or the post-selected data $P_{\text{ps}}$.
We post-select by removing measurement results that fail under Alg.~\ref{alg:greedypath} and Eq.~\eqref{eq:ps_with_path} and re-normalizing the probability distribution. If $P_\text{raw}(p)$ is the noisy measurement distribution at Trotter step $p$, built from $M$ measurements, then $P_{\text{ps}}(p)$ is the post-selected distribution at the same Trotter step, obtained by checking each measured state against Alg.~\ref{alg:greedypath} and keeping $M_\text{kept} \leq M$ measurements.
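As an illustration of these definitions, the following Python sketch (an assumption for exposition, not the code used to produce the figures) computes the Kullback-Leibler divergence of Eq.~\eqref{eqn:kldiv}, the fidelity of Eq.~\eqref{eqn:errormetric} against the incoherent random noise distribution, and the renormalization used for post-selection. Distributions are dictionaries mapping binary-encoded basis states to probabilities; in practice a small floor on $P_{\text{ideal}}$ is needed wherever it vanishes, which we gloss over here.
\begin{verbatim}
import math

def kl_divergence(P, Q):
    # Eq. (kldiv); assumes Q[b] > 0 wherever P[b] > 0.
    return sum(p * math.log(p / Q[b]) for b, p in P.items() if p > 0)

def fidelity(P_sim, P_ideal, basis):
    # Eq. (errormetric): 1 at the ideal distribution, 0 at incoherent noise.
    P_irn = {b: 1.0 / len(basis) for b in basis}
    return 1.0 - kl_divergence(P_sim, P_ideal) / kl_divergence(P_irn, P_ideal)

def post_select(P_raw, keep):
    # Drop rejected states (keep is a predicate, e.g. the chi-collision check)
    # and renormalize over the measurements that remain.
    kept = {b: p for b, p in P_raw.items() if keep(b)}
    total = sum(kept.values())
    return {b: p / total for b, p in kept.items()} if total > 0 else kept
\end{verbatim}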
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{allPSBADDatashots5000.pdf}
\caption{The maximum, minimum, mean, and standard deviation of the set of fidelities, $\{\mathcal{F}\}$, generated by simulating the $F_4$ QCA for each initial condition in its rocky subspaces at $\mu=9$ with post-selected distributions $P_{\text{sim}} =P_{\text{ps}, \mu=5}, P_{\text{ps}, \mu=7},P_{\text{ps}, \mu=9}, P_{\text{SPS}}$. The error rate is (a) $\epsilon/3=0.02$ and (b) $\epsilon/3=0.0$, or an ideal simulation.}
\label{fig:badps}
\end{figure*}
\subsection{Post-selection with perfect symmetry-protected subspaces}\label{subsec:exactps}
We run a 15-qubit simulation of each of the three exemplar models, using initial conditions seen in the literature: the anti-ferromagnetic state $\ket{\psi_0} = \ket{1010\hdots101}$ for the Heisenberg-XXX model, a single bit-flip on the middle qubit $\ket{\psi_0} = \ket{0\hdots1\hdots0}$ for the $T_6$ QCA~\cite{Jones2022,Hillberry2021}, and two bit-flips neighboring the center qubit $\ket{\psi_0} = \ket{0\hdots101\hdots0}$ for the $F_4$ QCA~\cite{Hillberry2021}. No false minima are seen past $\mu=2$ in any of the simulations for these initial conditions, meaning post-selection functions effectively. The data uses $M=10,000$ measurements at each measurement layer and does not add any measurements to replace error-victim circuit runs.
Results can be seen in Fig.~\ref{fig:allps}. We observe that post-selected data approaches incoherent noise in the SPS of that initial state $P_{\text{ISPS}}$, defined as
\begin{equation}
P_{\text{ISPS}}(b) \equiv
\begin{cases}
\frac{1}{|G|} &\: \ket{b} \in G\\
0 &\: \ket{b} \notin G\\
\end{cases}
\end{equation}
Because our simulation's raw data converges to random noise, which is a uniform probability distribution over the Hilbert space, and post-selection only removes measurement outcomes, we should expect that the result of post-selection is still a uniform probability distribution, just over the symmetry-protected subspace instead. In other words, we should observe that
\begin{equation}
\lim_{p\rightarrow\infty}\mathcal{F}(P_{\text{ps}}(p), P_{\text{ideal}}(p)) \approx \mathcal{F}(P_{\text{ISPS}}(p), P_{\text{ideal}}(p))
\end{equation}
given enough measurements.
\subsection{Post-selection with incorrect symmetry-protected subspaces}\label{subsec:probps}
We show that even in subspaces of the $F_4$ QCA model where Alg.~\ref{alg:greedypath} has failures at high $\mu$, which is equivalent to false-positive mislabeling some basis vectors as \textit{outside} the symmetry-protected subspace, our methods mostly mitigate more errors than they introduce and thus still show promise for NISQ quantum simulations.
We found 18 symmetry-protected subspaces of the $F_4$ QCA that had false minima at $n=15$ with $\mu\geq9$ (henceforth called rocky subspaces) and ran simulations to $p=29$ with $M=5000$ measurements at each cycle, initialized in one state from each rocky subspace. See Appendix~\ref{appendix:f4_sim_initconds} for a list of these initial states. This way, Alg.~\ref{alg:greedypath} is highly likely to meet false minima and reject measurements that were in the symmetry-protected subspace of the initial state. We calculate $\mathcal{F}(P_{\text{ps},\mu}(p), P_{\text{ideal}}(p))$ for each simulation, and run our post-selection method with $\mu = 5, 7, 9$. See Fig.~\ref{fig:badps}(a) for the set of fidelities $\{\mathcal{F}\}$ at $\epsilon/3=0.02$, compared to post-selection with the ``Full SPS'', which is post-selection using the full symmetry-protected subspace instead of the heuristic; in the average case our post-selection keeps the simulation data above incoherent noise, even with $\mu=5,7$. The worst-case $\mathcal{F}$ (in red) drops below the statistical difference of incoherent noise, which indicates that our methods would not work on this particular simulation. However, the mean (orange) and standard deviation (orange bars) remain reliably higher than incoherent noise, with the maximum $\mathcal{F}$ keeping close to the ideal simulation.
The other important observation is that our post-selection, which uses an inaccurate symmetry-protected subspace, obtains a higher mean and maximum fidelity than post-selection with a well-defined symmetry-protected subspace. We theorize the reason for this is as follows: if two states share a false minimum under $\chi$, they are likely close to each other in the string interaction graph $D_\mathbb{L}$, and thus have a higher probability of transitions between each other in the simulation. On the other hand, if a measured state does \textit{not} share a false minimum with the initial state, it is less likely to be observed in a simulation, but a noisy simulation will artificially amplify the probability of that state. The result is that post-selection with the true SPS will accept these artificially amplified results, whereas our heuristic SPS will coincidentally reject the states for which this can happen.
We also examine our post-selection in a simulation absent any error; this is to see how much a false rejection can degrade a perfect computation. See Fig.~\ref{fig:badps}(b) for post-selection with an error-free simulation. It can be seen that most fidelities remain above $\approx0.95$, meaning most of the time this error-prone post-selection method will have minimal impact even on an ideal simulation. The worst-case degraded fidelity is $\approx 0.85$.
\section{Discussion and conclusion}
We introduce a graph theory interpretation of quantum time evolution, which provides a theoretical framework through which symmetry-protected subspaces can be constructed via transitive closure. We identify that these invariant subspaces are an operator-free method for characterizing symmetry in the system, by indicating a conserved quantity of an initial state as it is manipulated by the dynamical system without needing to explicitly identify it.
We observe that a symmetry-protected subspace can be used to provide a smaller computational space, post-select noisy quantum simulation data, or be analyzed to deduce a symmetry operator. We identify post-selection as a pertinent application and introduce two main classical algorithms that elucidate the features of a quantum system's symmetry-protected subspaces. These algorithms employ a \textit{basis string edit map}, which is an efficient construction to provide the local dynamics of an operator by focusing on the presence or absence of basis vectors through its action.
The first algorithm uses transitive closure, calculated with breadth-first search of these basis string edit maps, to enumerate every state within the symmetry-protected subspace of the initial state. Because post-selection with this subspace requires enumerating the entire SPS, which can still be exponentially difficult, we provide a second, polynomial-scaling algorithm. The second algorithm attempts to find the smallest basis state (by binary-encoded integer) in an SPS by following a locally optimal heuristic, and it establishes that a path of basis string edit maps exists between an initial and final state if their respective searches collide at any element.
We conclude by demonstrating post-selection using our second algorithm, which shows that even when the raw simulation data degrades to incoherent noise, our methods effectively recover a probability distribution closer to an ideal computation. Our methods are compatible with any other error-mitigation technique compatible with post-selection, such as zero noise extrapolation, and thus present a further addition to a growing array of techniques to improve noisy quantum computation~\cite{endo2021hybrid}.
We identify a few obvious extensions to this project. First, if Alg.~\ref{alg:makesps} concludes with certain classical resources, this implies that the wavefunction can be stored with constant overhead on those same resources; this could shrink the computational memory requirements from the quantum regime to the classical regime. Second, we speculate that more reliable algorithms than Alg.~\ref{alg:greedypath} may fulfill the same function; a string-matching algorithm similar to Needleman-Wunsch~\cite{NEEDLEMAN1970443} with the basis string edit maps, instead of insert/delete/shift edits, may fulfill this function.
This work also presents interesting results on post-selection that are worth further exploration. The first is the convergence to incoherent noise \emph{within} a symmetry-protected subspace; this shows that, as expected, post-selection will not converge to an ideal computation, regardless of the additional circuit runs to replace error-victim runs detected, as error will still have proliferated within the SPS. However, such incoherent distributions still have significant complexity and nontrivial structure (see e.g.,~\cite{Jones2022}), making them particularly interesting in cases where the underlying symmetry is not analytically known. The second is the observation that mis-identifying measurements within the SPS paradoxically results in a more accurate computation than what results from full knowledge of the SPS. The exact cause of this is up to speculation, and is likely a problem-dependent effect, but is interesting in its own right.
Code and data are available upon reasonable request to the corresponding authors.
\section{Acknowledgements}
This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under contract number DE-AC02-07CH11359, and by National Science Foundation grant PHY-1653820.
This work was also authored in part by the National Renewable Energy Laboratory (NREL), operated by Alliance for Sustainable
Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC36- 08GO28308. This work
was supported by the Laboratory Directed Research and Development (LDRD) Program at NREL. The views expressed
in the article do not necessarily represent the views of the DOE or the U.S. Government. The U.S. Government retains
and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a nonexclusive,
paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others
to do so, for U.S. Government purposes.
\section{Introduction}
In the early 1970s, Black and Scholes \cite{black1973pricing} and Merton \cite{merton1973theory} discovered the pioneering option pricing formula and later received the Nobel Prize in Economics \cite{ferreyra1998mathematics}. Known as the classical Black–Scholes (or BS–Merton) option pricing model, this formula has received widespread attention from academia and practitioners. Options, an important element of the financial derivative market, are hedging tools with which investors devise risk-protected strategies against fluctuations in the price of the underlying assets. Additionally, the Greeks, the sensitivities of the option price with respect to the different parameters, are another standard gauge with which market makers design the best hedge for their positions.
However, analytical solutions for pricing these financial instruments are not obtainable except in some special cases. Among these instruments, exotic options, which are path-dependent financial derivatives, are particularly challenging to value. American-style options, Asian-type options, and barrier options are the most prominent examples of these families of hedging devices for traders, as they are too complex to price analytically.
Naturally, the development of efficient numerical methods for option pricing began shortly after the introduction of the B-S model in the 1970s. To name some of the most widely used methods, one can cite analytical approximations \cite{barone1987efficient,geske1984american}, the stochastic mesh method \cite{broadie2004stochastic}, Monte Carlo methods \cite{boyle1997monte, boyle1977options, acworth1998comparison}, lattice-based methods for the corresponding partial differential equations such as finite element and finite difference methods \cite{achdou2005computational, chiarella2014numerical, duffy2013finite, seydel2006tools, tavella2000pricing}, and mesh-free methods \cite{kim2014option, fasshauer2004using,bastani2013radial}.
Among the aforementioned numerical methods, weighted residual (or Galerkin) methods have always been of central interest to the community \cite{seydel2006tools, achdou2005computational} due to their undoubted merit in the numerical solution of differential equations. Galerkin methods benefit from a variety of advantages, including an elaborate and well-understood theory of a priori and a posteriori error estimates and significant flexibility for non-rectangular domains, to name just a few. Thus, in the quantitative finance field, path-dependent options can use the well-understood error estimates to adapt a finer mesh in their domain where needed. American options near the optimal exercise boundary, or multi-factor options \cite{seydel2006tools, achdou2005computational} with complicated domains, are good examples where the strength of variational methods can be readily exploited.
The classical Black-Scholes PDE is a time-dependent parabolic partial differential equation that can be classified as a convection-diffusion equation. It is well known in the literature (\cite{ern2004theory}, \cite{strang1974analysis}, \cite{douglas1982numerical}, and references therein) that this family of problems can be numerically unstable once coercivity is violated, due largely to a small coefficient of the second-order differential operator. This instability can emerge as a loss of accuracy or oscillatory behavior of the solution.
Since the discontinuous Petrov-Galerkin method with optimal test space (DPG) was developed by Demkowicz and Gopalakrishnan \cite{demkowicz2010class}, it has been widely used for the numerical solution of differential equations (\cite{mustapha2014discontinuous,roberts2015discontinuous,ernesti2019space,ellis2016space,fuhrer2017time,roberts2021time}, and the references therein), including convection-dominated diffusion problems \cite{demkowicz2013robust,ellis2016robust,chan2014robust,chan2013dpg} and PDE-constrained optimization problems \cite{bui2014pde, causin2005discontinuous} from computational mechanics.
Designing the DPG method with an optimal test space, different from the trial space and obtained as a projection of the trial space at the continuous level, implies continuity and coercivity of the discrete scheme on any mesh under adequate regularity of the test and trial spaces. Moreover, an automatic adaptive version of this method is enabled by a built-in error indicator. However, to the best of the authors' knowledge, this method has not yet been used in the quantitative finance community, in spite of its potential benefits and the many other interesting characteristics of the method that one can find in the literature.
In this paper, owing to the unconditional stability and solid mathematical theory of the DPG method, we propose this method for the problem of option pricing and estimating Greeks under the Black-Scholes model. We propose both ultraweak and primal formulations of the DPG method for pricing vanilla options, American options, Asian options, and double barrier options, and for their sensitivity analysis. However, space-time DPG is not the target here, and a time-stepping strategy is used for solving the problem through time. The graph norm in which the optimal test space is established is designed, and through different numerical examples the efficiency of the proposed methods is assessed for both the ultraweak and primal formulations. Besides, the DPG method for the free boundary value problem and the linear complementarity problem (LCP) corresponding to the American option is provided, and the early exercise boundary is obtained accordingly. Finally, the sensitivities of the option price, the Greeks, with respect to the underlying parameters are evaluated.
Computing the optimal test space through the trial-to-test operator introduced in the original mathematical theory of the method \cite{chan2013dpg}, \cite{roberts2013discontinuous} makes this method relatively computationally expensive. However, utilizing a broken test space overcomes this issue by localizing the evaluation of the optimal test space, which is conforming element-wise. Using the method with a discontinuous optimal test space allows parallelizing the assembly of the computation and, alongside the local computation of the test space, makes the method reliable and viable. This feature of the DPG method can help to develop a high-performance implementation of the method and take advantage of highly parallel computers. Recently, some effort has been made in the form of designing software frameworks to simplify the implementation of the DPG method. Camellia \cite{roberts2014camellia} is a C++ software package introduced by Nathan V. Roberts at Argonne National Laboratory that allows developers to create hp-adaptive DPG methods. Astaneh et al. \cite{astaneh2018high} proposed PolyDPG in MATLAB to implement the polygonal DPG method using the ultraweak formulation. However, in this paper a prototype high-performance C++ code is developed, independent of the aforementioned software, for option pricing using both the ultraweak and primal DPG formulations; it is publicly accessible at
\href{https://github.com/DavoodDamircheli/HPCUltraweakDPGforOptionPricing}{HPC DPG for Option pricing}.
It is worth noting that our intention is not to compete with the previous numerical schemes used in the literature, despite all the desirable aspects of the DPG method. If this method is adopted more widely by researchers, its features may prove especially useful in more complicated and more challenging problems in quantitative finance, including option pricing in dimensions higher than one.
The outline of this paper is as follows. A very brief introduction to the discontinuous Petrov-Galerkin method with optimal test space is presented in section \ref{DPGmethod}, and notation and elementary tools from functional analysis are set in section \ref{FuncSpac}. In the sequel, we first present the DPG method for vanilla options in section \ref{VanlOption}. We introduce the graph norm of the DPG method for both the primal and ultraweak formulations in this section. An experiment-based convergence analysis is conducted for European option pricing. Exotic option pricing, including the American option, the Asian option, and a double barrier option, is numerically treated in section \ref{exotcOption}. In this part of the paper, we introduce the graph norm pertaining to every exotic option considered in this context. Standard examples from the literature are investigated with the proposed method. Finally, the DPG method is presented for sensitivity analysis of the option pricing problem in section \ref{greeks}, and the performance of the DPG method for evaluating Greeks for both exotic and vanilla options is assessed in that section.
\newpage
\section{The DPG Method}\label{DPGmethod}
In this section, we briefly provide a high-level introduction to the discontinuous Petrov-Galerkin method with optimal test functions. A review of the method is given for the steady-state problem; the transient version of the method, with a more concrete definition of the spaces needed to treat the option pricing problem, will be presented in section \ref{VanlOption}.
Let us begin with a standard well-posed abstract variational formulation, which does not necessarily have a symmetric functional setting: seek $u\in U$ such that
\begin{align}\label{stdSytem}
b(u,v)=l(v), \quad v\in V,
\end{align}
where trial space $U$ and test space $V$ are proper Hilbert spaces. $l(\cdot)$ is a continuous linear functional, $b(\cdot, \cdot)$ is a bilinear (sesquilinear) form that satisfies the inf-sup condition as follows:
\begin{align}\label{infSupCont}
\sup_{v\in V} \frac{|b(u,v)|}{\|v\|_V}\geq \gamma \|u\|_U, \quad \forall u\in U,
\end{align}
which guarantees the well-posedness of the variational form (\ref{stdSytem}). Therefore, the discretized version of the variational form (\ref{stdSytem}) with the Petrov-Galerkin method is the problem of finding $u_h\in U_h\subset U$ such that
\begin{align}\label{stdPetrovGalSytem}
b(u_h,v_h)=l(v_h), \quad v_h\in V_h.
\end{align}
Based on Babu$\check{s}$ka's theorem \cite{babuvska1971error}, the discretized system (\ref{stdPetrovGalSytem}), in the case where $\dim(U_h)=\dim(V_h)$, is stable, or in other words well-posed, if the discrete inf-sup condition is satisfied as follows:
\begin{align}\label{infSupDis}
\sup_{v_h\in V_h} \frac{|b(u_h,v_h)|}{\|v_h\|_V}\geq \gamma_h \|u_h\|_U, \quad \forall u_h\in U_h,
\end{align}
where the inf-sup constant $\gamma_h$ must be bounded away from zero, meaning
$\gamma_h\geq\gamma >0$. Now, choosing the discrete trial and test spaces is a matter of importance. Indeed, the trial space $U_h$ is usually picked for approximability, but the test space $V_h$ can be chosen in such a way as to dictate special properties of the numerical algorithm, such as being well-posed.
The Petrov-Galerkin method with optimal test space is designed so that, for each discrete function $u_h$ from the trial space $U_h$, it finds a corresponding optimal test function $v_h\in V$ as a supremizer of the inf-sup condition, i.e., the optimal test function $v_h\in V$ is constructed such that
\begin{align}
\sup_{v\in V} \frac{|b(u_h,v)|}{\|v\|_V}= \frac{|b(u_h,v_h)|}{\|v_h\|_V}.
\end{align}
Given any trial space $U_h$, let us define a trial-to-test operator $T:U\longrightarrow V$. The optimal test space is defined as the image of the trial space via this operator, $V_h^{\text{opt}}:=T(U_h)$, where, for each basis function $u_i$ of $U_h$, the corresponding optimal test function $v^{\text{opt}}=Tu_i\in V_h^{\text{opt}}$ satisfies
\begin{align}\label{trialToTestOpt}
(v^{\text{opt}},v)_V=(Tu_i,v)_V=b(u_i,v), \quad \forall v\in V,
\end{align}
in which $(\cdot,\cdot)_V$ is the inner product on the test space. In fact, equation (\ref{trialToTestOpt}) uniquely determines the optimal test space via the Riesz representation theorem, with which discrete stability of the discrete form (\ref{stdPetrovGalSytem}) is automatically attained. The test functions defined in (\ref{trialToTestOpt}) are designed so that the supremizer of the continuous inf-sup condition implies the satisfaction of the discrete inf-sup condition, and as a result discrete stability is guaranteed. Moreover, we will have
\begin{align}
\sup_{v_h\in V_h^{\text{opt}}} \frac{|b(u_h,v_h)|}{\|v_h\|_V}\geq \frac{|b(u_h,Tu_h)|}{\|Tu_h\|_V}=\sup_{v\in V}\frac{|b(u_h,v)|}{\|v\|_V}\geq \gamma \|u_h\|_U,
\end{align}
so, we have inf-sup constant $\gamma_h\geq \gamma$.
\begin{thm}
The trial to test operator $T:U\longrightarrow V$ is defined by:
\begin{align}
Tu = R_V^{-1}Bu, \quad u\in U
\end{align}
where $R_V:V\longrightarrow V'$ is the Riesz operator corresponding to the test inner product. In particular, $T$ is indeed linear.
\end{thm}
\begin{proof}
see \cite{demkowicz2020oden}
\end{proof}
It can be shown \cite{demkowicz2020oden} that the ideal Petrov-Galerkin method introduced above is equivalent to a mixed method as well as to a minimum residual method in which the residual is measured in a dual norm. The ideal PG method benefits from a built-in error indicator for mesh adaptivity thanks to the corresponding mixed method, in which the Riesz representation of the residual in the dual test norm is exploited. Assume $\epsilon$ is the solution of the following variational form for a given $u_h\in U_h$:
\begin{align}\label{RisRpMix}
(\epsilon,v)_V=l(v)-b(u_h,v),\quad \forall v\in V.
\end{align}
So, the Riesz representation of the residual $\epsilon$ is uniquely defined by (\ref{RisRpMix}). Then the following mixed problem can be defined
\begin{equation}\label{mixedGal}
\left\{\begin{split}
&u_h\in U_h, \quad \epsilon \in V,\\
\\
&(\epsilon,v)_V+b(u_h,v)=l(v), \quad v\in V,\\
\\
&b(\delta u_h,\epsilon)= 0, \quad \delta u_h\in U_h,
\end{split}\right.
\end{equation}
where the solution of the ideal Petrov-Galerkin problem with optimal test space can be derived from solving the mixed Galerkin problem (\ref{mixedGal}). Thus, the method inherently has a built-in residual-based a posteriori error indicator $\epsilon$, measured in the test norm.
Nevertheless, determining the optimal test functions analytically is impossible except for some simple model problems. Therefore, approximating the optimal test space in a way that the discrete inf-sup condition is satisfied becomes a necessity. An enriched test subspace $V_h\subset V$ is exploited as a remedy for this approximation. So, the practical Petrov-Galerkin method, with the optimal test space approximated by an enriched test space, can be obtained as follows:
\begin{equation}
\left\{\begin{split}
&u^r_h\in U_h, \\
\\
&b(u^r_h, T^r \delta u_h)=l(T^r \delta u_h),\quad \delta u_h\in U_h,
\end{split}\right.
\end{equation}
where the approximated optimal test functions are computed by the standard discretization
\begin{equation}
\left\{\begin{split}
&T^ru\in V^r, \\
\\
&( T^r u, \delta v)_V = b(u, \delta v),\quad \delta v\in V^r.
\end{split}\right.
\end{equation}
Indeed, we increase the dimension of the discrete enriched test space in order to meet the discrete inf-sup condition for the system (\ref{stdPetrovGalSytem}). This strategy is valid due to Brezzi's theory \cite{demkowicz2020oden}, which allows the dimension of the discrete test space $V^r$ to exceed the dimension of the trial space, in contrast to Babu$\check{s}$ka's theory, which requires the dimensions of the discrete trial and test spaces to coincide. Analysis of the stability reduction in the practical Petrov-Galerkin method can be performed by exploiting Fortin operators \cite{gopalakrishnan2014analysis}, \cite{nagaraj2017construction}.
In spite of the myriad advantages of the practical Petrov-Galerkin method introduced so far, it is very expensive because the optimal test space is computed globally through the operator $T$. Utilizing a broken test space overcomes this issue by localizing the evaluation of the optimal test space, which is conforming element-wise. Therefore, using the method with a discontinuous optimal test space parallelizes the assembly of the computation and, alongside the local computation of the test space, makes the method reliable and viable. Besides, this justifies the name of the discontinuous Petrov-Galerkin (DPG) method with optimal test functions. However, breaking the test space brings the need to introduce additional trace and flux variables on the mesh skeleton at the element interfaces. This is discussed thoroughly in section \ref{VanlOption} by proposing the DPG method for the ultraweak and primal formulations of the option pricing problem.
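To make the preceding discussion concrete, the following Python sketch shows the algebraic structure of a practical DPG solve on a single element (our implementation is in C++, as noted in the introduction; this is only an illustration under the assumption that the element matrices are available as dense arrays). Given the rectangular stiffness matrix $B_{ij}=b(u_j,v_i)$ and load vector $f_i=l(v_i)$ assembled against an enriched, broken test space with Gram matrix $G_{ij}=(v_j,v_i)_V$, the method reduces to symmetric positive (semi-)definite normal equations, and the residual representative provides the built-in error indicator of (\ref{RisRpMix}).
\begin{verbatim}
import numpy as np

def dpg_element_system(B, G, f):
    """Return (B^T G^{-1} B, B^T G^{-1} f): least-squares minimization of the
    residual measured in the enriched test-space norm."""
    Ginv_B = np.linalg.solve(G, B)   # coefficients of the optimal test functions
    Ginv_f = np.linalg.solve(G, f)
    return B.T @ Ginv_B, B.T @ Ginv_f

def dpg_error_indicator(B, G, f, u):
    """A posteriori error: the V-norm of eps = G^{-1}(f - B u), cf. (RisRpMix)."""
    r = f - B @ u
    eps = np.linalg.solve(G, r)
    return float(np.sqrt(eps @ r))   # since eps^T G eps = eps^T r
\end{verbatim}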
\section{Functional Spaces and Preliminaries}\label{FuncSpac}
We define the following energy spaces to deal with the option pricing problem:
\begin{equation}\label{omegaSpace}
\begin{split}
&L^2(\Omega) = \{f:\Omega\to \mathbb{R}\quad|\quad\|f\|_{L_2}< \infty\},\\
&H^1(\Omega) = \{f:\Omega\to \mathbb{R}\quad|\quad f\in L^2(\Omega), \quad
\nabla f\in(L^2(\Omega))^d\},
\end{split}
\end{equation}
with the $L_2$-norm defined as
\begin{align}
\|f\| :=(f,f)^{\frac{1}{2}}=\Big( \int_{\Omega}|f|^2dx \Big)^{\frac{1}{2}}.
\end{align}
The domain of the problem $\Omega$ is partitioned into a computational mesh $\Omega_h$ of open disjoint elements $\{L\}_{L\in \Omega_h}$. The computational domain of the trial space is $[-6,6]$ for all problems, except for the Asian option, for which it is $[-2,2]$. Having the finite element mesh $\Omega_h$, we can define the corresponding broken energy spaces as
\begin{equation}\label{omegaSpace1}
\begin{split}
&L^2(\Omega_h) = \{f\in L^2(\Omega)\quad|\quad|f|_{L}\in L^2(L),\forall L\in \Omega_h\},\\
&H^1(\Omega_h) = \{f\in L^2(\Omega)\quad|\quad|f|_{L}\in H^1(L),\forall L\in \Omega_h\},
\end{split}
\end{equation}
Using a broken test space, we need to define the energy space for the trace variables as well. We define these spaces on the mesh skeleton $\Gamma_h$ as
\begin{align}
H^{\frac{1}{2}}(\Gamma_h)=\{\hat{f}\in \prod_{L\in \Omega_h} H^{\frac{1}{2}}(\partial L)\quad | \quad \exists y\in H^{1}(\Omega)\quad \text{s.t} \quad \phi(y\big|_L) = \hat{f} \},
\end{align}
where the operator $\phi(\cdot)$ is the continuous trace operator, defined element-wise,
\begin{align}
\phi :H^1(\Omega_h)\to \prod_{L\in \Omega_h} H^{\frac{1}{2}}(\partial L).
\end{align}
Moreover, we need to define an appropriate space for the variational inequality arising in the problem of American option pricing. Thus, we define a half space $\mathcal{H}$ as follows:
\begin{align}
\mathcal{H}:=\{f\in L^2(\mathbb{R}_{+})\quad | \quad f\geq f^{*}\},
\end{align}
where $f^{*} \in L^2(\mathbb{R}_{+})$ is the obstacle function. More details on these spaces can be found in \cite{achdou2005computational}, \cite{tremolieres2011numerical}.
It is worth noting that for option pricing in one dimension we consider a uniform discretization of the time interval $[0,T]$ and a truncated spatial domain $[x_{\min},x_{\max}]$ as the finite element mesh $\Omega_h$.
\section{Pricing Vanilla Options}\label{VanlOption}
In this section, we use the DPG method introduced in section \ref{DPGmethod} to numerically solve the option pricing problem. Pricing vanilla options with the DPG method is presented in this section, and exotic options are investigated in the next section.
\subsection{Vanilla European Options Based on Black-Scholes Model}\label{BlkSholes}
In this part, we use the DPG method for the popular Black-Scholes model, which provides a closed-form solution for all European-type derivatives (vanilla options). It is worth mentioning that even though the assumptions of this model are not universally valid, there is still a large group of practitioners in the market who use the Black-Scholes model plus a premium \cite{higham2004introduction}. Besides, this model can be used as a test model to assess the efficiency of the method. Let us recall the Black-Scholes model briefly. This model assumes that the price of a risky asset, $S_t$, evolves as a solution of the following stochastic differential equation:
\begin{align}\label{geoBrownMo}
dS_t = r S_t dt + \sigma_t S_t dW_t,
\end{align}
in which $W_t$ is a Wiener process on an appropriate probability space $(\Omega, \mathcal{A},\mathbb{P},\mathcal{F}_t)$, $r$ is the risk-free interest rate, and $\sigma_t$ is the volatility of the return on the underlying security. The SDE (\ref{geoBrownMo}) is also called geometric Brownian motion. Let us consider a European-style call option on an underlying asset $S_t$, where the spot price $S_t$ satisfies the geometric Brownian motion (\ref{geoBrownMo}), with payoff $\max \{ S_T-K,0\} = (S_T-K)_{+}$ at the expiration date $T$ for the strike price $K$. We are interested in the fair price of this option at the current moment, $U(S_{0},0)$, if we denote by $U(S_t,t)$ the value of the option at time $t$ when the underlying price has the value $S_t$. The Black-Scholes formula expresses the value of the option as
\begin{align}
U(S_t,t) = \mathbb{E}^{Q}(e^{-\int_{t}^{T}r_s\,ds}(S_T-K)_{+}|\mathcal{F}_t).
\end{align}
It can be shown \cite{achdou2005computational,higham2004introduction} that option price of $U(S_t,t)$ satisfies in the followings deterministic partial differential equation.
\begin{align}\label{BlS1}
\frac{\partial U}{\partial t}+\frac{\sigma^2}{2}S_t^2\frac{\partial^2 U}{\partial S^2}+r S \frac{\partial U}{\partial S}-rU(S,t) = 0,
\end{align}
with the following boundary condition
\begin{equation}\label{BC1}
\begin{split}
&U(0,t) = 0, \quad \forall t\in [0,T], \\
\\
&\lim_{S_t \to \infty}U(S_t,t) = S_t-Ke^{-r(T-t)}, \quad \forall t\in [0,T].
\end{split}
\end{equation}
It is well known \cite{shreve2004stochastic,higham2004introduction,achdou2005computational} that, given the cumulative distribution function of the standard normal distribution
\begin{align}
N(x) = \frac{1}{\sqrt{2 \pi}}\int_{-\infty}^{x}e^{-\frac{z^2}{2}}dz,
\end{align}
and
\begin{align}
d_1 &= \frac{\log(S_0/K)+(r+\frac{\sigma^2}{2})T}{\sigma \sqrt{T}},
&
d_2 &= d_1-\sigma \sqrt{T},
\end{align}
the solution of equation (\ref{BlS1}) for a European call option can be expressed as
\begin{align}\label{BlSanalytic}
U(S_t,t) = S_tN(d_1)-Ke^{-r(T-t)}N(d_2).
\end{align}
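This closed-form price is straightforward to evaluate; a minimal Python sketch (for illustration only), with $T$ interpreted as the time to maturity, is:
\begin{verbatim}
import math

def bs_call(S0, K, r, sigma, T):
    # European call price of Eq. (BlSanalytic), evaluated at time to maturity T.
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)
\end{verbatim}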
The closed-form analytical solution (\ref{BlSanalytic}) for the European call option is used as a benchmark to study the accuracy and efficiency of the DPG method. Switching to log-prices $x = \log(\frac{S_t}{S_0})$ and changing the time variable to $\tau = T-t$, the partial differential equation (\ref{BlS1}) and the boundary conditions (\ref{BC1}) transform into the following constant-coefficient initial value problem:
\begin{equation}\label{BlSconst}
\left\{
\begin{split}
&\frac{\partial U}{\partial \tau}-\frac{\sigma^2}{2}\frac{\partial^2 U}{\partial x^2}-(r +\frac{\sigma^2}{2})\frac{\partial U}{\partial x}+rU(x,t)= 0, \\
\\
&U(x,0) = \max(e^x-K,0), \\
\\
&U(0,\tau) = 0,\\
\end{split}
\right.
\end{equation}
Note that equation (\ref{BlSconst}) can be used for pricing derivatives whose payoff depends only on the price of the underlying asset at the maturity date; more complicated options whose prices are path-dependent, such as American options and Asian options, require different approaches, which we present in the coming sections.
We use the finite-difference $\theta$-method to discretize the time derivative of the problem (\ref{BlSconst}), in the following form:
\begin{align}\label{TimeDisc}
\frac{u^{n+1}-u^{n}}{\Delta \tau}-(\theta \mathcal{L}_{BS}u^{n+1}+(1-\theta)\mathcal{L}_{BS}u^{n}) = 0,
\end{align}
for $n=0,1,2,\ldots,N_{\tau}-1$, with the time step $\Delta \tau = T/N_{\tau}$, and implicitness factor $\theta\in [0,1]$.
Besides, operator $\mathcal{L}_{BS}$ is defined as follows
\begin{align*}\label{BSoperator}
\mathcal{L}_{BS}u = -\frac{\sigma^2}{2}\frac{\partial^2u(x,t)}{\partial x^2}-(r +\frac{\sigma^2}{2})\frac{\partial u(x,t)}{\partial x}+r u(x,t),
\end{align*}
Different values of $\theta$ lead to different well-known time-stepping schemes, such as the backward Euler method ($\theta = 1.0$), the Crank-Nicolson method ($\theta = 0.5$), and the forward Euler method ($\theta = 0.0$). The numerical efficiency of the finite difference method is well known in the literature \cite{bulirsch2002introduction}. We proceed by introducing the DPG methodology for the spatial discretization. A variety of variational formulations with different properties can be developed for the semi-discrete model problem (\ref{TimeDisc}). In this investigation, we concentrate on two formulations: the classical (primal) formulation and the ultraweak formulation.
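For illustration, the $\theta$-method (\ref{TimeDisc}) can be written generically for any matrix discretization $L$ of the spatial operator, whether it comes from finite differences or from the DPG assembly described next. The following Python sketch is an assumption for exposition, independent of the C++ implementation mentioned in the introduction.
\begin{verbatim}
import numpy as np

def theta_step(u_n, L, dtau, theta):
    """One step of (u^{n+1}-u^n)/dtau - (theta*L u^{n+1} + (1-theta)*L u^n) = 0,
    i.e. solve (I - dtau*theta*L) u^{n+1} = (I + dtau*(1-theta)*L) u^n."""
    I = np.eye(len(u_n))
    lhs = I - dtau * theta * L                    # implicit part
    rhs = (I + dtau * (1.0 - theta) * L) @ u_n    # explicit part
    return np.linalg.solve(lhs, rhs)
\end{verbatim}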
\subsection{Primal formulation for Vanilla options}
In this subsection we propose the standard classical variational formulation for the DPG method, which is called the DPG primal formulation. Testing the semi-discrete problem (\ref{TimeDisc}) with a proper test function $v$, integrating over the domain, and using Green's identity, we will have
\begin{equation}
\begin{split}
&(u^{n+1},v)-(u^{n},v)\\
&\qquad +\Delta \tau \theta\Big[-(\frac{\sigma^2}{2}\frac{\partial}{\partial x} u^{n+1},\frac{\partial}{\partial x}v)_{\Omega_h} +((r +\frac{\sigma^2}{2})\frac{\partial}{\partial x} u^{n+1},v)_{\Omega_h}-
(r u^{n+1},v)_{\Omega_h}+
\langle \frac{\partial}{\partial x} u^{n+1},v\rangle_{\partial \Omega_h}\Big]\\
&\qquad+\Delta \tau (1-\theta)\Big[-(\frac{\sigma^2}{2}\frac{\partial}{\partial x} u^{n},\frac{\partial}{\partial x}v)_{\Omega_h} +((r +\frac{\sigma^2}{2})\frac{\partial}{\partial x} u^{n},v)_{\Omega_h}-
(r u^{n},v)_{\Omega_h}+\langle \frac{\partial}{\partial x} u^{n}, v\rangle_{\partial \Omega_h}\Big]=0,\\
\end{split}
\end{equation}
where $(\cdot,\cdot)$ is the standard inner product in the Hilbert space $L_2$ and $\langle\cdot,\cdot\rangle$ is the duality pairing in $L^2(\Gamma)$. In the DPG methodology the trial space is tested with a broader, discontinuous (broken) space, and as a result we do not assume that the test functions vanish on the Dirichlet boundary. The term $\frac{\partial u}{\partial x}$ on the skeleton is recognized as the flux variable $\hat{q}_n$, which is a new unknown on the mesh skeleton. Thus, defining a new group variable ${\bf u}=(u, \hat{q})\in H^1(\Omega)\times H^{-1/2}(\partial \Omega)$, the broken primal formulation for Black-Scholes (\ref{BlSconst}) reads
\begin{equation}\label{primalform}
\left\{
\begin{array}{l}
b_{\text{primal}}({\bf u},v)=l(v), \\
\\
{\bf u}(x,0) = \max(e^x-K,0),\\
\\
{\bf u}(0,\tau) = 0.
\end{array}
\right.
\end{equation}
where the bilinear form $b_{\text{primal}}(\cdot,\cdot)$ and the linear functional $l(\cdot)$ are defined as follows
\begin{equation}\label{primalBS}
\begin{split}
&b_{\text{primal}}({\bf u},v) = (u^{n+1},v)+\Delta \tau \theta\Big[(-\frac{\sigma^2}{2}\frac{\partial}{\partial x} u^{n+1},\frac{\partial}{\partial x} v)_{\Omega_h} +((r +\frac{\sigma^2}{2})\frac{\partial}{\partial x} u^{n+1},v)_{\Omega_h}\\
&\qquad-(r u^{n+1},v)_{\Omega_h}+ \langle \hat{q}^{n+1}, v\rangle_{\partial \Omega_h}\Big], \qquad \quad n=1,\cdots, N_t,\\
\\
&l(v) = (u^{n},v)+\Delta \tau (1-\theta)\Big[(\frac{\sigma^2}{2}\frac{\partial}{\partial x} u^{n},\frac{\partial}{\partial x}v)_{\Omega_h} -((r +\frac{\sigma^2}{2})\frac{\partial}{\partial x} u^{n},v)_{\Omega_h}\\
&\qquad+(r u^{n},v)_{\Omega_h}-\langle \hat{q}^{n}, v\rangle_{\partial \Omega_h}\Big],\qquad \quad n=1,\cdots, N_t,\\
\end{split}
\end{equation}
with initial and boundary conditions ${\bf u}^{0} = \max(e^x-K,0)$ for all $x\in \Omega_h$, and ${\bf u}^n(x=0) = 0$ for all $n=1,\cdots,N_t$.
Let us recall that element-wise operations are denoted here by the subscript $h$.
Having the new flux unknown on the mesh skeleton in the primal formulation (\ref{primalBS}) is the price that we pay for using a larger (enriched) test space.
\subsection{Ultraweak Formulation for Vanilla Options}
In this section, we derive the ultraweak DPG formulation for the pricing problem. The first step is to transform the Black-Scholes problem (\ref{BlSconst}) into a first-order system of differential equations by defining a new variable $\vartheta(x,t) =\frac{\partial U}{\partial x}(x,t), \quad \forall (x,t)\in \Omega \times [0,T]$, as follows
\begin{equation}\label{firstOrderBS}
\left\{
\begin{split}
&\frac{\partial U}{\partial \tau}-\frac{\sigma^2}{2}\frac{\partial {\bf \vartheta}}{\partial x}-(r +\frac{\sigma^2}{2})\vartheta+rU(x,t)= 0, \\
\\
&{\bf \vartheta}-\frac{\partial U}{\partial x}=0,\\
\\
& U(x,0) = \max(e^x-K,0), \\
\\
&U(0,\tau) = 0.\\
\end{split}
\right.
\end{equation}
Defining a new group variable ${\bf u} = (u,\vartheta)$, testing the equation (\ref{firstOrderBS}) with the test variables ${\bf v} = (v,\omega)$, integrating, and using Green's identity, we obtain
\begin{equation}\label{ultraVanila}
\begin{split}
&(u^{n+1},v)-(u^{n},v)\\
&\quad+\Delta \tau \theta
\bigg[({\bf \vartheta}^{n+1},\frac{\sigma^2}{2} \frac{\partial}{\partial x} v)_{\Omega_h} +({\bf \vartheta}^{n+1},(r +\frac{\sigma^2}{2}) v)_{\Omega_h}-(r u^{n+1},v)_{\Omega_h}
-(u^{n+1},\frac{\partial }{\partial x}\omega)_{\Omega_h}
-({\bf \vartheta}^{n+1},\omega)_{\Omega_h}\\
&\qquad\qquad+\langle u^{n+1}, \omega\rangle_{\partial \Omega_h}
+\langle {\bf \vartheta }^{n+1}, v\rangle_{\partial \Omega_h}\bigg]\\
&\quad+\Delta \tau (1-\theta)
\bigg[({\bf \vartheta}^{n},\frac{\sigma^2}{2} \frac{\partial}{\partial x} v)_{\Omega_h} +({\bf \vartheta}^{n},(r +\frac{\sigma^2}{2}) v)_{\Omega_h}-(r u^{n},v)_{\Omega_h}
-(u^{n},\frac{\partial }{\partial x}\omega)_{\Omega_h}
-({\bf \vartheta}^{n},\omega)_{\Omega_h}\\
&\qquad\qquad+\langle u^{n}, \omega\rangle_{\partial \Omega_h}
+\langle {\bf \vartheta }^{n}, v\rangle_{\partial \Omega_h}\bigg] =0,
\end{split}
\end{equation}
As noted above, we use a discontinuous test space that is element-wise conforming, as required by the DPG methodology. Moreover, in the ultraweak formulation no derivative acts on the trial variables, which are defined in $L_2(\Omega)$; therefore, the boundary values of the field variables are meaningless on the skeleton $\Gamma$. Thus, we introduce two trace variables $\hat{u}^{n+1}\in H^{1/2}(\Omega)$ and $\hat{\vartheta}^{n+1}\in H^{1/2}(\Omega)$ that are unknown on the skeleton. If we define the group variables ${\bf u} = (u,{\vartheta})$, ${\bf \hat{u}} = (\hat{u},\hat{{\vartheta}})$, and ${\bf v} = (v,\omega)$, the broken ultraweak formulation corresponding to the Black-Scholes model reads: find ${\bf u} = (u,\vartheta)\in L_2(\Omega)\times L_2(\Omega)$ and ${\bf \hat{u}} = (\hat{u},\hat{{\vartheta}})\in H^{1/2}(\Omega)\times H^{1/2}(\Omega)$ such that
\begin{equation}\label{ultrweakForm}
\left\{
\begin{array}{l}
b_{\text{ultraweak}}(({\bf u},\hat{{\bf u}}),{\bf v})=l({\bf v}) \\
\\
({\bf u},\hat{{\bf u}})|_{(x,0)} = \max(e^x-K,0), \\
\\
({\bf u},\hat{{\bf u}})|_{(0,\tau)} = 0,\\
\end{array}
\right.
\end{equation}
where
\begin{align}
\begin{split}
&b_{\text{ultraweak}}(({\bf u},\hat{{\bf u}}),{\bf v}) =
b_{\text{ultraweak}}(((u,{\vartheta}), (\hat{u},\hat{{\vartheta}})), (v,\omega))
\\
&\hspace{3cm}=(u^{n+1},v)+\Delta \tau \theta\bigg[({\bf \vartheta}^{n+1},\frac{\sigma^2}{2} \frac{\partial}{\partial x} v)_{\Omega_h} +({\bf \vartheta}^{n+1},(r +\frac{\sigma^2}{2}) v)_{\Omega_h}-(r u^{n+1},v)_{\Omega_h}\\
&\hspace{3cm}\quad-(u^{n+1},\frac{\partial }{\partial x}\omega)_{\Omega_h}
-({\bf \vartheta}^{n+1},\omega)_{\Omega_h}+
\langle \hat{u}^{n+1}, \omega\rangle_{\partial \Omega_h}+
\langle \hat{{\vartheta}}^{n+1}, v\rangle_{\partial \Omega_h}\bigg],\qquad n=1,\cdots, N_t,\\
\\
&l({\bf v}) = l(v,\omega)= (u^{n},v)+\Delta \tau (1-\theta)
\bigg[-({\bf \vartheta}^{n},\frac{\sigma^2}{2} \frac{\partial}{\partial x} v)_{\Omega_h} -({\bf \vartheta}^{n},(r +\frac{\sigma^2}{2}) v)_{\Omega_h}
+(r u^{n},v)_{\Omega_h}\\
&\hspace{3cm}\quad+(u^{n},\frac{\partial }{\partial x}\omega)_{\Omega_h}
+({\bf \vartheta}^{n},\omega)_{\Omega_h}-\langle \hat{u}^{n}, \omega\rangle_{\partial \Omega_h}
-\langle \hat{{\vartheta}}^{n}, v \rangle_{\partial \Omega_h}\bigg], \qquad n=1,\cdots, N_t,
\end{split}
\end{align}
with the initial and boundary conditions $({\bf u},\hat{{\bf u}})^{0} = \max(e^x-K,0)$ and $({\bf u},\hat{{\bf u}})^{n}(x=0) = 0$ for $n=1,\cdots, N_t$.
It is a well-known fact \cite{demkowicz2010class,demkowicz2011analysis} that the DPG method depends significantly on the choice of the test space's inner product, since this inner product determines the norm and, as a result, the structure of the test space in which the DPG method is optimal. As an illustration, if errors in the $L_2$-norm are of interest, there is a tangible theory \cite{demkowicz2020oden} showing that the graph norm is a suitable choice for the test space in the ultraweak formulation, while the energy norm induced by the bilinear form, $\| v\|^2_E = b_{\text{primal}}(v,v)$, is the candidate for the primal formulation.
In this paper, we propose the following test norms for the formulations (\ref{primalform}) and (\ref{ultrweakForm}):
\begin{align}\label{testNormVanial}
\begin{split}
&\text{Primal}: \|v\|^2_{V} = \frac{1}{\Delta t} \|v\|^2+\frac{1}{(\Delta t)^2} \|\frac{\sigma^2}{2}\frac{\partial }{\partial x}v\|^2, \\
\\
&\text{Ultraweak}: \|{\bf v}\|^2_{V} = \|(v,\omega)\|^2_{V}=\frac{1}{(\Delta t)^2} \| \frac{\sigma^2}{2}\frac{\partial }{\partial x} v- r v-\omega\|^2+ \frac{1}{\Delta t}\| (r +\frac{\sigma^2}{2}) v-\frac{\partial }{\partial x}\omega\|^2, \\
\end{split}
\end{align}
Having defined the graph and energy norms in (\ref{testNormVanial}), and thereby the inner product of the corresponding test space, we are ready to discretize the weak forms and construct the DPG system.
In the classical Galerkin method, the convention is to choose the same discrete space for both trial and test functions, so a square linear system is expected. In the DPG method, however, the discrete trial space $U_h\subset U$ and test space $V_h \subset V$ have different dimensions. The practical DPG method with optimal test space benefits from an enriched test space, meaning that $\dim V_h\geq \dim U_h$. We assume that $\{u_j\}^N_{j=1}$ and $\{v_j\}^M_{j=1}$ are bases of the trial and test spaces respectively, where $M\geq N$. In the DPG methodology, each trial basis function $u_i$ and its corresponding optimal test function $v^{\text{opt}}_i$ satisfy the following system
\begin{align}
(v^{\text{opt}}_i, \delta v)_V = b(u_i, \delta v), \quad \forall \delta v\in V.
\end{align}
Now let us define the $M\times M$ Gram matrix $G= (G_{ij})_{M\times M}$ as
\begin{align*}
G_{ij} = (v_i,v_j)_V,
\end{align*}
and $N \times M$ stiffness matrix $B = (B_{ij})_{N\times M}$
\begin{align*}
B_{ij} = b(u_i, v_j),
\end{align*}
For the primal formulation, finding the matrix $B$ from the bilinear form and test norm is straightforward; however, calculating this matrix for the ultraweak formulation can be confusing, since $B$ has the following block structure
\begin{align}
B = \begin{bmatrix}
B_{uv} & B_{\vartheta v} & B_{\hat{u}v} & B_{\hat{\vartheta}v}\\
B_{u{\omega}} & B_{\vartheta {\omega}} & B_{\hat{u}{\omega}} & B_{\hat{\vartheta}{\omega}}\\
\end{bmatrix}_{N\times M},
\end{align}
and $l$ is the load vector with entries $l_i = l(v_i)$. We use high-order Lagrange bases to expand the trial space with order $p$ and the enriched test space with order $p+\Delta p$, where $\Delta p = 2$.
The global assembly then has the following form
\begin{align}
B^{\text{n-op}}{\bf u}_h = B^{T}G^{-1} B {\bf u}_h= B^{T} G^{-1}l = l^{\text{n-op}},
\end{align}
where the discrete operators $B^{\text{n-op}}$ and $l^{\text{n-op}}$ are the near-optimal stiffness matrix and load vector of the DPG formulation. It is worth noting that, thanks to the broken structure of the test space, evaluating the optimal test functions through the Gram matrix and its inversion is localized to each element, and therefore the global assembly can be parallelized, which makes the DPG method a practical method for the option pricing problem.
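As a rough illustration of this localized step, the following Python sketch forms the element contributions $B_e^{T}G_e^{-1}B_e$ and $B_e^{T}G_e^{-1}l_e$ of the near-optimal system; the element matrices are assumed to be given as dense arrays, and the names are placeholders rather than the implementation behind the reported results.
\begin{verbatim}
import numpy as np

def dpg_element_system(G_e, B_e, l_e):
    """Near-optimal DPG contributions on one element.
    G_e: local Gram matrix of the test inner product,
    B_e: rectangular local stiffness (rows: enriched test dofs,
         columns: trial dofs), l_e: local load vector."""
    # solve G_e X = [B_e | l_e] instead of forming the inverse explicitly
    X = np.linalg.solve(G_e, np.hstack([B_e, l_e[:, None]]))
    A_e = B_e.T @ X[:, :-1]   # local near-optimal stiffness
    f_e = B_e.T @ X[:, -1]    # local near-optimal load
    return A_e, f_e
\end{verbatim}
The element contributions are then assembled into the global system exactly as in a standard finite element code; since the Gram solves are independent across elements, this step parallelizes trivially.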
\begin{figure}[!htb]
\includegraphics[width=\linewidth]{plots/1.png}
\caption{The price surface of a European call option computed with the primal and ultraweak DPG formulations for $\sigma = 0.4$, $r = 0.1$, and $K=100$.}
\label{UltraPrimaPriceSurfaceVanailla}
\end{figure}
\subsection{Numerical Results}
In this section, we provide numerical experiments to showcase the efficiency and accuracy of the DPG method in pricing vanilla options using both the primal and ultraweak formulations. For this experiment, the risk-free rate $r$ is set to $0.05$, the time to maturity $T$ is one year, and the strike price $K$ is $100$. The computational domain is $[-6,6]$, and a variety of values of the market volatility $\sigma$ is considered.
Throughout this paper, we report the relative $L_2$ and $L_{\infty}$ errors of the solution obtained by the proposed numerical scheme. The binomial method implemented in \cite{higham2002nine} is utilized as a benchmark solution against which the DPG approximation is compared. The relative errors are defined as follows
\begin{align}\label{errorDef}
\|E\|^2_{L_2} &= \| \frac{u-\Tilde{u}}{u}\|^2_{L_2},&\|E\|_{\infty} &= \| \frac{u-\Tilde{u}}{u}\|_{L_{\infty}},
\end{align}
where $\Tilde{u}$ represents the estimated value obtained from the numerical method. Fig. \ref{UltraPrimaPriceSurfaceVanailla} depicts the surface of a call option with volatility $\sigma=0.4$ for both the primal and ultraweak DPG formulations.
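For completeness, a minimal sketch of how these relative errors can be evaluated on a common grid is given below; the benchmark array and the simple grid approximation of the $L_2$-norm are assumptions.
\begin{verbatim}
import numpy as np

def relative_errors(u_ref, u_num):
    """Relative L2 and sup errors of the numerical solution u_num
    against a benchmark u_ref evaluated on the same grid."""
    rel = (u_ref - u_num) / u_ref
    err_l2 = np.sqrt(np.mean(rel**2))   # grid approximation of the L2 norm
    err_inf = np.max(np.abs(rel))
    return err_l2, err_inf
\end{verbatim}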
\begin{figure}[ht]
\includegraphics[width=\linewidth]{plots/2.png}
\caption{Accuracy of the primal DPG method for European put options with $r=0.05$, $K=100$, and different volatilities}
\label{orderSpacePrimalDPGvanialPut}
\end{figure}
In this part of the experiment, we study the asymptotic convergence of the relative errors of the numerical method under uniform mesh refinement in both time and space.
It is worth mentioning that the error is small in general, with relative errors on the order of $10^{-6}$.
In this regard, Fig. (2a) and (2b) display the spatial order of convergence of the primal DPG method for volatilities $\sigma = 0.3$ and $\sigma = 0.015$ when pricing a European put option. It is evident that the convergence rate of the primal DPG scheme is superlinear in space.
The same investigation for the ultraweak DPG scheme shows that, although the convergence rate in space is also superlinear, the errors decay more gently. We observe that for the spatial order in both the ultraweak and primal schemes the error initially decreases somewhat irregularly, but once the number of elements passes a certain point we witness the expected linear convergence $\mathcal{O}(h)$, which explains the overall superlinear rate.
\begin{figure}[ht]
\includegraphics[width=\linewidth]{plots/3.png}
\caption{Accuracy of the ultraweak DPG method for European put options with $r=0.05$, $K=100$, and different volatilities}
\label{orderSpaceUltraweakDPGvanialPut}
\end{figure}
Fig. (3a)--(3d) depict this observation more precisely: for the same scenario, the rate of convergence of the primal and ultraweak DPG methods is linear in time, since the spatial mesh size $h=0.01$ is kept fixed in this experiment.
\section{Exotic Options}\label{exotcOption}
Financial institutions issue other forms of options beyond the vanilla calls and puts introduced in section \ref{VanlOption}. These modern financial instruments are traded between companies and banks to cope with a variety of demands \cite{zhu2004derivative}; in particular, exotic options are traded in the over-the-counter (OTC) market to satisfy special needs. A common property of exotic options is their complexity: the value of these options depends on the whole path, or part of the path, of the underlying security. Thus, exotic options are path-dependent options. In this section, we propose the DPG method for the numerical solution of important examples of path-dependent exotic options, including American options, Asian options, barrier options, and look-back options.
\subsection{American options}\label{AmericanOptionsec}
In this section, we briefly review American option pricing under the simple model of Black-Scholes. Contrary to the European option, the holder of this contract has the right to exercise the option at any time before maturity.
It is well known that this slight difference makes the analysis of American options much more complicated. Indeed, this right turns the problem of valuing the American option into a stochastic optimization problem. The price of an American option under the risk-neutral pricing principle can be obtained as
\begin{align}
U(x,t) = \sup_{t\leq \tau \leq T}\mathbb{E}[e^{-\int_{t}^{\tau}r(s)ds}h(x)|\mathcal{F}_t]
\end{align}
where $h(x)$ is the option payoff and $\tau$ is a stopping time, i.e., the time at which the owner of the option exercises the contract; the stopping time is also a standard concept in stochastic analysis. It is worth noting that, due to its complexity, the American option problem does not have a closed-form solution. One way of formulating American options, thanks to the no-arbitrage principle, is as a free boundary value problem. Indeed, the free boundary appears when the option is deep in-the-money, and finding this boundary alongside pricing the American option brings extra difficulties to the problem. Here we briefly recall the different formulations of the American option problem and the corresponding DPG formulations; for more detail one can see \cite{seydel2006tools}.
Considering the log-prices $x=\log(\frac{S_t}{S_0})$ and changing the tenor $T-t$ to $\tau$, the free boundary formulation of the American put option reads:
\begin{equation}\label{AmerFree}
\left\{
\begin{split}
&\frac{\partial U}{\partial \tau}(x,\tau)-\frac{\sigma^2}{2}\frac{\partial^2 U}{\partial x^2}(x,\tau)-(r +\frac{\sigma^2}{2})\frac{\partial U}{\partial x}(x,\tau)+rU(x,\tau) = 0,\qquad \forall x>S_f,\\
\\
&U(x,\tau) = K-e^x,\qquad \forall x\leq S_f,\\
\\
&U(x,0) = (K-e^x)^{+},\\
\\
&\lim_{x\to \infty}U(x,\tau) = 0,\\
\\
&\lim_{x\to S_f}U(x,\tau) = K-e^{S_f},\\
\\
&\lim_{x\to S_f}\frac{\partial U(x,\tau)}{\partial x} = -1,\\
\end{split}
\right.
\end{equation}
in which $S_f$ is the free boundary of the American option pricing problem. It is evident that solving the American option problem in a free boundary framework requires evaluating the free boundary along with finding the solution. There is another approach to formulating the American option pricing problem, called the linear complementarity problem (LCP). The advantage of this approach is that the free boundary no longer appears in the formulation; however, solving the LCP has its own complexity and dedicated techniques \cite{murty1988linear}. The complementarity problem of the American option can be written as
\begin{equation}\label{LCP0}
\left\{\begin{split}
&\left(\frac{\partial U}{\partial \tau}(x,\tau)-\frac{\sigma^2}{2}\frac{\partial^2 U}{\partial x^2}(x,\tau)-(r +\frac{\sigma^2}{2})\frac{\partial U}{\partial x}(x,\tau)+rU(x,\tau)
\right)(U(x,\tau)-h(x))= 0,\\
\\
& \frac{\partial U}{\partial \tau}(x,\tau)-\frac{\sigma^2}{2}\frac{\partial^2 U}{\partial x^2}(x,\tau)-(r +\frac{\sigma^2}{2})\frac{\partial U}{\partial x}(x,\tau)+rU(x,\tau)
\geq 0,\\
\\
& U(x,\tau)-h(x)\geq 0,\\
\\
& U(x,0) = (K-e^x)^{+}.
\end{split}
\right.
\end{equation}
The main approach here is to apply the DPG formulation to the governing equations of Eqs. (\ref{LCP0}) and (\ref{AmerFree}) and then impose the corresponding free boundary conditions. The use of the DPG method for an LCP has been examined before in \cite{fuhrer2018dpg} for a Signorini-type contact problem. However, Thomas Fuhrer et al. \cite{fuhrer2018dpg} proposed only the ultraweak formulation of the corresponding problem; here we derive both the ultraweak and primal DPG formulations for the American option pricing problem as a special case of an obstacle problem.
Now, for the DPG formulation in the LCP framework, we multiply the second inequality condition in Eq. (\ref{LCP0}) with smooth non-negative test functions $v\in V$, where the test space is a broken convex cone, and, following the same process of defining trial and flux variables presented in section \ref{VanlOption}, after some integration by parts we obtain
\begin{align}\label{VI1}
\begin{split}
\frac{d}{d\tau}({\bf u}, {\bf v})+ b^{\tau}({\bf u},{\bf v})\geq 0, \\
\end{split}
\end{align}
where the bilinear form for the primal formulation is defined as
\begin{align}\label{primalVI}
\begin{split}
b^{\tau}_{\text{primal}}({\bf u},v) = (-\frac{\sigma^2}{2}\frac{\partial u}{\partial x} ,\frac{\partial v}{\partial x} )_{\Omega_{+}} +\bigl((r +\frac{\sigma^2}{2})\frac{\partial u}{\partial x} ,v \bigr)_{\Omega_{+}}
-(r u,v)_{\Omega_{+}}+ \langle \hat{q}, v\rangle_{\partial \Omega_{+}},\\
\end{split}
\end{align}
where $\Omega_{+}$ denotes the non-negative part of the domain, with trial and flux variables ${\bf u}=(u,\hat{q})\in H^1(\Omega)\times H^{\frac{1}{2}}(\Omega)$ and test variable ${\bf v} = v \in H^1(\Omega)$.
Moreover, defining trial variables ${\bf u} = (u,\vartheta)\in L_2(\Omega)\times L_2(\Omega)$ and flux variables ${\bf \hat{u}} = (\hat{u},\hat{{\vartheta}})\in H^{1/2}(\Omega)\times H^{1/2}(\Omega)$, one can define the bilinear form in (\ref{VI1}) for the ultraweak formulation as follows
\begin{align}\label{ultraWkVI}
\begin{split}
&b^{\tau}_{\text{ultraweak}}(({\bf u},\hat{{\bf u}}),{\bf v}) =
b^{\tau}_{\text{ultraweak}}\left( \bigl((u,{\vartheta}), (\hat{u},\hat{{\vartheta}})\bigl), (v,\omega)\right),
\\
\\
&\hspace{3cm}=({\bf \vartheta},\frac{\sigma^2}{2} \frac{\partial v}{\partial x} )_{\Omega_{+}} +({\bf \vartheta},(r +\frac{\sigma^2}{2}) v)_{\Omega_{+}}-(r u,v)_{\Omega_{+}}-
(u,\frac{\partial \omega}{\partial x})_{\Omega_{+}}
\\
\\
&\hspace{3cm}-({\bf \vartheta},\omega)_{\Omega_{+}}+
\langle \hat{u}, v\rangle_{\partial \Omega_{+}}+
\langle \hat{{\vartheta}}, v \rangle_{\partial \Omega_{+}}.
\end{split}
\end{align}
It is well known that the two variational inequalities associated with the bilinear forms (\ref{primalVI}) and (\ref{ultraWkVI}) are parabolic variational inequalities of the first kind, which admit a unique solution \cite{kinderlehrer2000introduction}. Having a well-posed variational inequality (\ref{VI1}), we can approximate the problem in a finite-dimensional space. Thus, similar to the estimation of vanilla option prices, we consider a time partition $0\leq \cdots \leq T$ of the interval $[0, T]$, a discrete trial space $U_h\subset U$, an enriched test space $V_h\subset V$ ($\dim V_h\geq \dim U_h$), and the corresponding bases $\{u_j\}^N_{j=1}$ and $\{v_j\}^M_{j=1}$ of these spaces. We use the backward Euler finite-difference method to approximate the time derivative, and as a result the discrete DPG formulation of the variational inequality arising from the American option pricing problem reads
\begin{align}\label{discreteVI}
\begin{split}
(u^{n+1}-u^n, {\bf v})+ \Delta \tau b^{\tau}_n(u^n,{\bf v})\geq 0, \qquad \forall {\bf v} \in V_h. \\
\end{split}
\end{align}
Writing the $\theta$-method for the second term on the left-hand side of the discrete variational inequality (\ref{discreteVI}) proceeds very similarly to what was proposed for vanilla options. Let $B$ and $G$ be the stiffness and Gram matrices defined by
\begin{align}
B_{ij}&=b_n^{\tau}(u_i,{\bf v}_j), & G_{ij}&=({\bf v}_i,{\bf v}_j)_v , &l_i &= (u_i,v),
\end{align}
where $(\cdot,\cdot)_v$ is the inner product of the test space obtained from the energy norm for the primal DPG formulation and from the graph norm for the ultraweak formulation introduced in (\ref{testNormVanial}). So, the discrete variational inequality (\ref{discreteVI}) is equivalent to
\begin{equation}\label{discreteVI1}
\left\{\begin{split}
&B^TG^{-1}l(u^{n+1}-u^n)+ \Delta \tau B^TG^{-1}B u^{n}\geq 0, \\
\\
&u^{n}\geq h(x),\\
\\
&(u^{n}-h(x))\bigl(B^TG^{-1}l(u^{n+1}-u^n)+ \Delta \tau B^TG^{-1}B u^{n}\bigr)=0,
\\
\end{split}\right.
\end{equation}
for $n=1,\cdots,N_{\tau}$. Setting the near-optimal discrete operators $B^{\text{n-op}}=B^T G^{-1}B$ and $l^{\text{n-op}}=B^T G^{-1}l$, the discrete LCP (\ref{discreteVI1}) attains the following form
\begin{equation}\label{discreteVI2}
\left\{\begin{split}
&l^{\text{n-op}}(u^{n+1}-u^n)+ \Delta \tau B^{\text{n-op}} u^{n}\geq 0,& \\
\\
&u^{n}-h(x)\geq 0 ,&\forall n=1,\cdots,N_{\tau}\\
\\
&(u^{n}-h(x))\big (l^{\text{n-op}}(u^{n+1}-u^n)+ \Delta \tau B^{\text{n-op}} u^{n}\big)=0.&
\\
\end{split}\right.
\end{equation}
There are different approaches to solving the discrete variational inequality (\ref{discreteVI2}), including fixed-point methods, penalization methods, and projected iterative methods, to name just a few \cite{damircheli2019solution}.
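As one concrete possibility among these, the sketch below applies projected SOR to a generic discrete LCP of the form $u\geq\psi$, $Au\geq b$, $(u-\psi)^{T}(Au-b)=0$; the matrix \texttt{A} and right-hand side \texttt{b} stand in for the near-optimal operators of one implicit time step, and this is not necessarily the solver used for the reported results.
\begin{verbatim}
import numpy as np

def projected_sor(A, b, psi, omega=1.2, tol=1e-10, max_iter=10000):
    """Projected SOR for the LCP: u >= psi, A u >= b, (u-psi)^T (A u - b) = 0.
    A: matrix of one implicit step, b: right-hand side, psi: payoff/obstacle."""
    u = np.array(psi, dtype=float)            # feasible starting guess
    n = len(b)
    for _ in range(max_iter):
        u_old = u.copy()
        for i in range(n):
            # Gauss-Seidel residual using the most recent values
            resid = b[i] - A[i, :] @ u + A[i, i] * u[i]
            u[i] = max(psi[i], (1.0 - omega) * u[i] + omega * resid / A[i, i])
        if np.linalg.norm(u - u_old, np.inf) < tol:
            break
    return u
\end{verbatim}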
To close the section, we present the DPG formulation for solving the free boundary value problem (\ref{AmerFree}). Similar to the procedure for the governing equation of vanilla options, one can test the governing equation of (\ref{AmerFree}) with appropriate test functions and define the following system
\begin{equation}\label{DPGfree}
\begin{split}
\frac{d}{d\tau}({\bf u}, {\bf v})+ b^{\tau}({\bf u},{\bf v})=0,\qquad \forall x>S_f, \\
\end{split}
\end{equation}
where the \textit{bilinear form} in equation (\ref{DPGfree}) has the form (\ref{primalVI}) for the primal formulation and (\ref{ultraWkVI}) for the ultraweak formulation. As in our approach so far, we use the backward Euler method for the time derivative and the trial and test spaces defined for the LCP form, which leads to the following discrete system of equations
\begin{align}\label{discreteVI?}
\begin{split}
(u^{n+1}-u^n, {\bf v})+ \Delta \tau b^{\tau}_n(u^n,{\bf v})= 0, \quad \forall x_h>S_f,\qquad \forall {\bf v} \in V_h. \\
\end{split}
\end{align}
With a sufficiently fine time discretization of the above form, using the information from the previous time step yields a good approximation of the solution of the American option. In other words, one needs to notice that the final price of the American option is found from the following implicit boundary condition
\begin{equation}\label{freeBound}
u^n = \left\{\begin{array}{ll}
\max\{h(x),u^{n-1}\}, &\forall x\in \Omega^{\mathrm{o}}, \\
\\
h(x), & x=\inf{ \partial\Omega},\\
\\
0, & x=\sup{\partial\Omega}.
\end{array}\right.
\end{equation}
in which $h(x)$ is the payoff of the American option. The conditions presented in (\ref{freeBound}) are the necessary boundary conditions of the American option valuation problem.
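A minimal sketch of this time marching with the projection (\ref{freeBound}) is given below; the function \texttt{implicit\_step}, standing in for one DPG backward Euler solve, is an assumption.
\begin{verbatim}
import numpy as np

def american_time_loop(u0, payoff, n_steps, implicit_step):
    """March the discrete system forward in tau and enforce the implicit
    boundary condition by projecting onto the payoff after every step."""
    u = np.maximum(u0, payoff)          # start from (at least) the payoff
    for _ in range(n_steps):
        u = implicit_step(u)            # unconstrained one-step solve
        u = np.maximum(u, payoff)       # early-exercise projection max(h, u)
    return u
\end{verbatim}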
\begin{figure}[!ht]
\includegraphics[width=\linewidth]{plots/4.png}
\caption{Value of an American option for $r=0.05$, $K=100$, and different volatilities}
\label{DPGdifferentVolatilityAmeicanOption}
\end{figure}
\subsection{Numerical Experiments}
In this set of numerical experiments, we study the problem of valuing the American option with the ultraweak and primal DPG methods. We intend to verify that DPG is a reliable and efficient method for solving this free boundary value problem. Fig. (4a) and Fig. (4b) illustrate the price of an American put option for a fixed interest rate $r= 0.05$, strike price $K= 100$, and different volatilities. It is a well-known fact that the price of an American option is greater than that of a European option, due to the right of the owner of the American option to exercise the financial contract at any time before maturity; this can vividly be seen in Fig. (4c) and Fig. (4d) for the payoff and value of an American option. Thus, the proposed methods mimic this behavior accurately for different market volatilities for both the primal and ultraweak formulations.
\begin{table}[htbp]
{\footnotesize
\caption{{Value of an American option with $r = 0.05$, $\sigma = 0.15$, $K=100$ }\label{TableAmericanerror}}
\begin{center}
\begin{tabular}{cccccccc} \hline
$\Delta{\tau}$ & $h$ & \multicolumn{2}{c}{value} && \multicolumn{2}{c}{$\|E\|_{\infty}$}&\\ \hline
& & Primal & Ultraweak && Primal & Ultraweak &\\
\cline{3-4} \cline{6-7}
0.01 & 0.46 & 14. & 15. && 0.0159930 & 0.00963025 &\\
0.01 & 0.23 & 14. & 15. && 0.00379765 & 0.00050253 &\\
0.01 & 0.11 & 14. & 15. && 0.00074574 & 0.00133183 &\\
0.01 & 0.05 & 14. & 15. && 0.00034449 & 0.00027304 &\\
0.01 & 0.03 & 14. & 15. && 5.83E-05 & 6.21E-05 &\\
0.01 & 0.02 & 14. & 15. && 4.12E-05 & 4.48E-05 &\\
0.01 & 0.01 & 14. & 15. && 1.77E-05 & 1.75E-05 &\\ \hline
\end{tabular}
\end{center}
}
\end{table}
The error analysis of the American option is conducted with the relative $L_2$ and $L_{\infty}$ errors of the solution, very similar to the definitions (\ref{errorDef}). The benchmark for the exact solution is the value obtained by the binomial method introduced and implemented in \cite{higham2002nine}. Table \ref{TableAmericanerror} shows the error of the DPG numerical scheme for both the primal and ultraweak formulations. In this study, the time step is fixed at $\Delta \tau = 0.01$, and we use successively finer spatial meshes. One can see that the error decreases as $h$ decreases, and we obtain increasingly accurate values of the American option.
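For reference, a minimal sketch of such a binomial (CRR) benchmark for an American put is given below; the tree construction and the parameter values in the usage line are assumptions and only illustrate the type of reference values used in Table \ref{TableAmericanerror}.
\begin{verbatim}
import numpy as np

def american_put_binomial(S0, K, r, sigma, T, N=2000):
    """CRR binomial tree for an American put with early-exercise check."""
    dt = T / N
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = np.exp(-r * dt)
    # terminal stock prices and payoffs
    S = S0 * u ** np.arange(N, -1, -1) * d ** np.arange(0, N + 1)
    V = np.maximum(K - S, 0.0)
    # backward induction with early exercise
    for n in range(N, 0, -1):
        S = S[:n] * d                         # prices one step earlier
        V = np.maximum(disc * (p * V[:n] + (1 - p) * V[1:n + 1]), K - S)
    return V[0]

# assumed example parameters: r = 0.05, sigma = 0.15, K = 100, T = 1
# print(american_put_binomial(100.0, 100.0, 0.05, 0.15, 1.0))
\end{verbatim}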
\begin{figure}[h]
\includegraphics[width=\linewidth]{plots/5.png}
\caption{Accuracy properties of ultraweak and primal DPG for American put options in the spatial dimension with the parameters $r=0.05$, $K=100$, and $\sigma = 0.15$}
\end{figure}
Although the magnitude of the error is important, the rate at which the error decreases is of great importance in our error analysis. In this investigation we also used the high-order DPG method to study the effect of the interpolation order on the valuation of the American option. Let us commence with the spatial order of convergence. Fig. (5a) and Fig. (5b) illustrate the order of convergence of both the primal and ultraweak formulations for valuing an American option with fixed interest rate $r=0.05$, exercise price $K=100$, and market volatility $\sigma = 0.15$, for first-order and second-order DPG. The experiment shows that the asymptotic convergence of the $L_2$ error is superlinear, but it does not reach $\mathcal{O}(h^2)$ for the second-order DPG scheme. One possible explanation for the reduced order is the adverse impact of the free boundary in the pricing problem. However, the error is relatively small, and Table (\ref{TableAmericanerror}) reinforces this trend for the relative sup-error of both the primal and ultraweak formulations, where the ultraweak formulation performs slightly better in the majority of cases.
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{plots/6.png}
\caption{Accuracy of the ultraweak and primal DPG schemes for American put options with respect to the time step, with parameters $r=0.05$, $K=100$, and $\sigma = 0.15$}
\label{orderTimePrimalUltraweakDPGAmericanPut}
\end{figure}
In order to study the stability and convergence of the time stepping scheme, we prepared Fig. (\ref{orderTimePrimalUltraweakDPGAmericanPut}). A fixed spatial mesh with $N_s = 64$ elements is used; we decrease the time step $\Delta \tau$ and record the $L_2$-error for the first- and second-order DPG methods. The convergence analysis shows that both the primal (Fig. 6a) and ultraweak (Fig. 6b) formulations attain the expected rate of convergence of $\mathcal{O}(\Delta \tau)$, and the backward Euler method is unconditionally stable. However, this temporal convergence rate is only observed for the initial time steps (up to roughly $N_{\tau} = 100$); after this point the spatial discretization error dominates for both DPG forms.
\begin{figure}[H]
\includegraphics[width=\linewidth]{plots/7.png}
\caption{Optimal exercise boundary for an American put computed via the primal DPG method.
(b) The green region indicates ``exercise'' and the red region ``do not exercise''}
\label{FreeBoundaryDPG}
\end{figure}
Besides accurately pricing the American-type financial derivative, finding the optimal exercise boundary of an American option is essential. The DPG method proposed in this section can find the optimal exercise boundary implicitly, thanks to the projection-based method, simply by comparing the price with the payoff at each time step, or through an automatic procedure based on the first active points at each time step in a primal-dual active set strategy. Fig. (7a) depicts this free boundary for different market interest rates at each time to maturity. This optimal boundary is a powerful indicator that helps practitioners choose appropriate positions for their hedging strategy.
Thus, the optimal exercise boundary partitions the domain of the problem into an ``exercise'' region and a ``do not exercise'' region (7b): the owner of the option will exercise when the stock price lies in the green region, and will wait in the red region.
\subsection{Asian Options}
Asian options can be classified as path-dependent financial derivatives whose payoff depends on the time average of the underlying security over some period, such as the lifetime of the option \cite{shreve2004stochastic, kemna1990pricing}. This average can be taken over continuous or discrete sampling, and the type of average can be arithmetic or geometric. A closed-form value of an Asian option is not available, so a numerical scheme is an essential remedy for finding the value of an Asian option.
Seeking a closed-form solution such as the Laplace transform of the price for this path-dependent derivatives has been a popular approach \cite{levypricing}, \cite{vorst1992prices}, and \cite{turnbull1991quick}. However, the numerical implementation of the aforementioned methods is troublesome for low volatility cases \cite{fu1999pricing}. The Monte Carlo method can be used for the numerical solution, where it is well-known that this method is computationally expensive \cite{kemna1990pricing}, and \cite{broadie1996estimating}. Another popular approach is solving two dimensions in space PDE to find the value of an Asian option \cite{ingersoll1987theory}, \cite{vecer2001new}, and \cite{kim2014option}. Besides, Rogers and Shi \cite{rogers1995value} proposed a reduction approach where solving one-dimensional PDE obtains the value of the desired Asian option.
However, both the one- and two-dimensional PDEs are susceptible to oscillatory solutions and can blow up through time due to the small diffusion term.
In this section, we propose the DPG method for pricing this option based on the Black-Scholes pricing framework. Assume the dynamics of the underlying asset satisfy the geometric Brownian motion defined in (\ref{geoBrownMo}); then the payoff of a fixed-strike Asian call option at maturity is
\begin{align}\label{AsianPayoff}
U(T) = \max\{\frac{1}{T}\int_{0}^{T}S(t)dt-K,0\}=\big(\frac{1}{T}\int_{0}^{T}S(t)dt-K\big)^{+},
\end{align}
Based on risk-neutral pricing theory, the price $U(t)$ of this Asian option at time $t\in[0,T]$ is
\begin{align}\label{expAsian}
U(t) = \mathbb{E}[e^{-r(T-t)}U(T) | \mathcal{F}_t], \qquad \forall t\in[0,T],
\end{align}
where the expectation in (\ref{expAsian}) is a conditional expectation with respect to the filtration $\mathcal{F}_t$ of the probability space $(\Omega, P, \mathcal{F})$. Since the payoff defined in (\ref{AsianPayoff}) depends on the whole path of the stock price $S(t)$, the price of this option is a function of $t$, $S(t)$, and the evolution of the value of the underlying security over the path. Thus, we extend the pricing model presented in the previous sections for European and American options by defining a second process
\begin{align}
Y(t)= \int_{0}^{t}S(v)dv,
\end{align}
where the dynamics of this new process $Y(t)$ follow the differential equation
\begin{align}
dY(t)= S(t)dt.
\end{align}
Therefore, the value of the Asian option is also a function of $Y(t)$, so we denote the price of the Asian option by $U(t, S_t, Y(t))$. This function satisfies, $\forall t\in[0,T]$ and $\forall (S, y)\in \mathbb{R}^{+}\times\mathbb{R}$, the following partial differential equation, which is two-dimensional in space (see \cite{shreve2004stochastic,kemna1990pricing} for details)
\begin{equation}
\left\{
\begin{split}
&\frac{\partial U(t,S,y)}{\partial t}+\frac{\sigma^2}{2}S^2\frac{\partial^2 U(t,S,y)}{\partial S^2}+r S\frac{\partial U(t,S,y)}{\partial S}
+S\frac{\partial U(t,S,y)}{\partial y}-rU(t,S,y)=0,\\
\\
&U(t,0,y) = e^{-r(T-t)}(\frac{y}{T}-K)^{+},\qquad t\in[0,T],\quad y\in \mathbb{R},\\
\\
&U(T,S,y) = (\frac{y}{T}-K)^{+},\quad S\geq 0,\quad y\in \mathbb{R},\\
\\
&\lim_{y\to -\infty}U(t,S,y) = 0,\quad t\in[0,T],\quad S\geq 0.\\
\end{split}
\right.
\end{equation}
Now, let's define a new state variable
\begin{align}
x = \frac{1}{S_t}(K-\frac{1}{T}\int_{0}^{t}S(t')dt').
\end{align}
Then, it has been shown \cite{rogers1995value,ingersoll1987theory} that the price of the Asian option satisfies the following backward partial differential equation
\begin{equation}\label{AsianPDEs}
\left\{\begin{split}
&\frac{\partial U}{\partial t}+\frac{\sigma^2}{2}x^2\frac{\partial^2 U}{\partial x^2}- (\frac{1}{T}+rx)\frac{\partial U}{\partial x}=0,\\
\\
&U(T,x)=(-x)^{+},
\end{split}\right.
\end{equation}
The partial differential equation (\ref{AsianPDEs}) is one-dimensional in space. Eq. (\ref{AsianPDEs}) is a partial differential equation of convection-diffusion type whose coefficients depend on the volatility and the spatial variable $x$. This differential equation belongs to the family of convection-dominated problems, since the coefficient of the diffusion term (the second-order derivative) can be very small in this model. As we mentioned earlier in this section, this small coefficient can cause oscillatory behavior and lead to numerical instability of a numerical scheme \cite{douglas1982numerical}. On the other hand, the efficiency and robustness of the DPG method for the convection-diffusion problem have been successfully demonstrated for this family of problems (\cite{ellis2016robust}, \cite{chan2014robust}, \cite{chan2013dpg} and the references therein). Demkowicz et al. \cite{demkowicz2013robust}, as an illustration, thoroughly analyzed the DPG method for convection-dominated problems and showed that it benefits from a robust $L^2$ error estimate for the trial variables in this class of differential equations.
Having the solution of Eq. (\ref{AsianPDEs}), the value of an Asian option with strike price $K$ and initial stock value $S_0$ can be computed as $S_0U(0,\frac{K}{S_0})$. After truncating the computational domain to $x\in [-2,2]$ and changing the time variable to $\tau = T-t$, the partial differential equation (\ref{AsianPDEs}) takes the following form,
\begin{equation}\label{AsianPDEs2}
\left\{\begin{split}
&\frac{\partial U}{\partial \tau}-\frac{\sigma^2}{2}x^2\frac{\partial^2 U}{\partial x^2}+ (\frac{1}{T}+rx)\frac{\partial U}{\partial x}=0,&\forall x\in [-2,2],\quad \forall \tau \in [0,T],\\
\\
&U(0,x)=(-x)^{+},\\
\\
&U(\tau,2) = 0,\\
\\
&\frac{\partial^2 U}{\partial x^2}(\tau,-2)= 0.\\
\end{split}\right.
\end{equation}
So, the option value will be $S_0 U(T,\frac{K}{S_0})$. Using our convention for the DPG method, we can write the weak form of Eq. (\ref{AsianPDEs2}) as follows
\begin{align}\label{AsianDPG}
\frac{d}{d\tau}(u, {\bf v})+ b^{\tau}(u,{\bf v})= 0, \\
\end{align}
where the bilinear form for the primal formulation is defined as
\begin{align}\label{primalAsian}
\begin{split}
b^{\tau}_{\text{primal}}({\bf u},v) =
(\frac{\sigma^2}{2}x^2\frac{\partial}{\partial x} u,\frac{\partial}{\partial x} v)_{\Omega} +\big((\frac{1}{T}+(r+\sigma^2)x)\frac{\partial}{\partial x} u,v\big)_{\Omega}
-\langle \hat{q}, v\rangle_{\partial \Omega},\\
\end{split}
\end{align}
with trial and flux variables ${\bf u}=(u,\hat{q})\in H^1(\Omega)\times H^{\frac{1}{2}}(\Omega)$ and test variable ${\bf v} = v \in H^1(\Omega_h)$. Moreover, considering trial variables ${\bf u} = (u,\vartheta)\in L_2(\Omega)\times L_2(\Omega)$ and flux variables ${\bf \hat{u}} = (\hat{u},\hat{{\vartheta}})\in H^{1/2}(\Omega)\times H^{1/2}(\Omega)$, the bilinear form in (\ref{AsianDPG}) for the ultraweak formulation reads
\begin{align}\label{ultraWkAsian}
\begin{split}
&b^{\tau}_{\text{ultraweak}}(({\bf u},\hat{{\bf u}}),{\bf v}) =
b^{\tau}_{\text{ultraweak}}(((u,{\vartheta}), (\hat{u},\hat{{\vartheta}})), (v,\omega)),
\\
\\
&\qquad=-({\bf \vartheta},\frac{\sigma^2}{2}x^2 \frac{\partial}{\partial x} v)_{\Omega} +
({\bf \vartheta},(\frac{1}{T}+(r-\sigma^2) x) v)_{\Omega}
-(u,\frac{\partial \omega}{\partial x})_{\Omega}
-({\bf \vartheta}, \omega)_{\Omega}\\
\\
&\qquad+\langle \hat{u}, \omega\rangle_{\partial \Omega}+
\langle \hat{{\vartheta}}, v \rangle_{\partial \Omega}.
\end{split}
\end{align}
Now, using the backward Euler approximation for the time derivative and appropriate discrete test and trial spaces for DPG as explained in section \ref{AmericanOptionsec}, the discrete DPG formulation for the Asian option pricing problem reads
\begin{align}\label{AsianWeak}
(u^{n+1}-u^n, {\bf v})+ \Delta \tau b^{\tau}_n(u^n,{\bf v})= 0, \qquad \forall {\bf v} \in V_h.
\end{align}
We propose the following graph norm for the ultraweak formulation and energy norm for the primal DPG formulation to solve the Asian option valuation problem formulated in Eq. (\ref{AsianWeak})
\begin{align}\label{testNormAsian}
\begin{split}
&\text{Primal}: \|v\|^2_{V} = \frac{1}{\Delta t} \|v\|^2+\frac{1}{(\Delta t)^2} \|{\sigma^2}\frac{\partial }{\partial x}v\|^2, \\
\\
&\text{Ultraweak}: \|{\bf v}\|^2_{V} = \|(v,\omega)\|^2_{V}\\
\\
&\qquad \qquad =\frac{1}{(\Delta t)^2} \|{\sigma^2}\frac{\partial }{\partial x} v-(\frac{1}{T}+(r-\sigma^2)x)v-\omega\|^2+ \frac{1}{\Delta t}\|\frac{\partial }{\partial x}\omega\|^2. \\
\end{split}
\end{align}
Therefore, one can obtain the corresponding discrete operators
\begin{align}
B_{ij}&=b^{\tau}(u_i,{\bf v}_j), &G_{ij}&=({\bf v}_i,{\bf v}_j)_v , &l_i &= (u_i,v).
\end{align}
However, it is worth mentioning that the entries of the rectangular matrix $B$ above depend on the spatial variable, and the induced inner product $(\cdot,\cdot)_v$ is formed by the associated norms (\ref{testNormAsian}) defined in the DPG formulation procedure.
Thus, the discrete DPG formulation of equation (\ref{AsianPDEs2}), for all $n \in \{1,\cdots, N_{\tau}\}$, yields
\begin{equation}\label{discreteAsian1}
\left\{\begin{split}
&B^TG^{-1}l(u^{n+1}-u^n)+ \Delta \tau B^TG^{-1}B u^{n}=0, \\
\\
&\hat{u}^{0} = (-x)^{+}, \qquad \forall x\in[-2,2] \\
\\
&\hat{u}^n|_{x=2} = 0, \qquad \forall n \in \{1,\cdots, N_{\tau}\},
\end{split}\right.
\end{equation}
Thus, defining the near-optimal discrete operators $B^{\text{n-op}}=B^T G^{-1}B$ and $l^{\text{n-op}}=B^T G^{-1}l$, the discrete DPG system for Eq. (\ref{AsianPDEs2}) becomes, for all $n \in \{1,\cdots, N_{\tau}\}$,
\begin{equation}\label{discreteAsian2}
\left\{\begin{split}
&l^{\text{n-op}}(u^{n+1}-u^n)+ \Delta \tau B^{\text{n-op}} u^{n}=0, \\
\\
&\hat{u}^{0} = (-x)^{+}, \quad \forall x\in[-2,2], \\
\\
&\hat{u}^n|_{x=2} = 0, \qquad \forall n \in \{1,\cdots, N_{\tau}\}.
\end{split}\right.
\end{equation}
The system of Equ. (\ref{discreteAsian2}) can be solved by an iterative method or linear solver. In the next section, we examine the efficiency of the proposed DPG method.
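Since the comparisons below also rely on Monte Carlo values, the following sketch shows a crude Monte Carlo estimator for the fixed-strike arithmetic-average Asian call of (\ref{AsianPayoff}); the discretization of the average and the parameters in the usage line are assumptions and do not reproduce the exact settings behind the cited numbers.
\begin{verbatim}
import numpy as np

def asian_call_mc(S0, K, r, sigma, T, n_steps=250, n_paths=100000, seed=0):
    """Monte Carlo estimate of a fixed-strike arithmetic Asian call
    under geometric Brownian motion (crude sanity check)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)           # prices at t_1, ..., t_N
    avg = S.mean(axis=1)                 # arithmetic average of the path
    payoff = np.maximum(avg - K, 0.0)
    disc = np.exp(-r * T)
    price = disc * payoff.mean()
    stderr = disc * payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr

# assumed example parameters: r = 0.15, sigma = 0.2, T = 1, S0 = 100, K = 100
# print(asian_call_mc(100.0, 100.0, 0.15, 0.2, 1.0))
\end{verbatim}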
\subsection{Numerical Experiments}
As mentioned before, the partial differential equation (\ref{AsianPDEs2}) is a convection-dominated problem, and developing a numerical scheme for it can be problematic due to the convection term.
In this section, we select some well-known test problems from the literature to showcase the efficiency and accuracy of the proposed numerical scheme (\ref{discreteAsian2}). All the results in this example are generated by the first-order DPG method with enriched test spaces ($\Delta p = 2$). We used $N_s=100$ spatial elements and $N_t=100$ time steps for all the experiments in this section.
\begin{figure}[!ht]
\includegraphics[width=\linewidth]{plots/8.png}
\caption{Value of an Asian option with DPG method for $r = 0.015$, and different volatility }
\label{DPGAsianOption}
\end{figure}
Fig. (\ref{DPGAsianOption}) displays the value of the Asian option with the ultraweak and primal DPG formulations for different values of $\sigma = 0.05,0.1,0.2, 0.3$.
As can be seen, the value of the Asian option is smooth and stable even for the small value $\sigma = 0.05$, which leads to the convection-dominated case for the system (\ref{discreteAsian2}).
\begin{table}[htbp]
\footnotesize
\caption{{Asian call option with $r=0.09$, $T=1$, $S_0=100$ }\label{AsianTable1}}
\centering
\scalebox{0.85}{
\begin{tabular}{c|l|l|c|c|c} \hline
$\sigma$ & Reference &Method& $K=95$&$K=100$&$K=105$\\ \hline
\multirow{7}{*}{0.05}& Zhang \cite{zhang2001semi} & & 8.8088392 & 4.3082350 & 0.9583841\\
& Zhang-AA2 \cite{zhang2003pricing} & &8.80884 &4.30823 &0.95838\\
& Zhang-AA3 \cite{zhang2003pricing} & & 8.80884 & 4.30823 & 0.95838\\
& & Ultraweak DPG& 8.8085332 & 4.3081967 & 0.958371\\
& & Primal DPG & 8.8088363 & 4.3082291 & 0.9583836\\
\hline
\multirow{7}{*}{0.10}& Zhang\cite{zhang2001semi} & & 8.9118509 &4.9151167 & 2.0700634\\
& Zhang-AA2\cite{zhang2003pricing}& & 8.91171& 4.91514 & 2.07006\\
& Zhang-AA3\cite{zhang2003pricing} & & 8.91184 & 4.915126 & 2.07013\\
& & Ultraweak DPG& 8.910986 &4.915116769 & 2.0700633\\
& & Primal DPG & 8.9118498 &4.9151265 & 2.0700634\\
\hline
\multirow{7}{*}{0.20}& Zhang \cite{zhang2001semi}& & 9.9956567 & 6.7773481 & 4.2965626\\
& Zhang-AA2\cite{zhang2003pricing} &&9.99597 &6.77758 & 2.745\\
& Zhang-AA3\cite{zhang2003pricing} & & 9.99569 & 6.77738 & 4.29649\\
& & Ultraweak DPG& 9.99565668 & 6.7773481 & 4.2965626\\
& & Primal DPG & 9.9956567 & 6.7773429 & 4.2965619\\
\hline
\multirow{7}{*}{0.30}&Zhang\cite{zhang2001semi} & &11.6558858&8.8287588 &6.5177905\\
&Zhang-AA2 \cite{zhang2003pricing} & &11.65747&8.82942& 6.51763\\
&Zhang-AA3 \cite{zhang2003pricing} & & 11.65618&8.82900 &6.51802\\
& & Ultraweak DPG&11.6558853&8.8287498 &6.51779047\\
& & Primal DPG &11.6558857&8.8287580 &6.51779054\\
\hline
\end{tabular}
}
\end{table}
We prepared Table \ref{AsianTable1} to compare the results of the DPG methodology for pricing an Asian option with interest rate $r=0.09$, $T=1$, $S_0=100$, different strike prices $K=95,100,105$, and different volatilities with the results reported in \cite{zhang2001semi} and \cite{zhang2003pricing}. Considering the result from the PDE method of \cite{zhang2001semi} as a benchmark, one can see that the results obtained from the ultraweak and primal DPG methods are very close (less than $0.001\%$ deviation).
\begin{table}[htbp]
\footnotesize
\caption{{Asian call option with $r=0.15$, $T=1$, $S_0=100$ }\label{AsianTable2}}
\centering
\scalebox{0.85}{
\begin{tabular}{c|l|l|c|c|c} \hline
$\sigma$ & Reference &Method& $K=95$&$K=100$&$K=105$\\ \hline
\multirow{7}{*}{0.05} & vecer \cite{vecer2001new} & Monte Carlo & 11.094 & 6.795 & 2.745\\
& & FDM & 11.094 & 6.795 & 2.745\\
& Rogers and Shi \cite{rogers1995value} & Lower Bound & 11.094 & 6.795 & 2.745\\
& Foufas and Larson \cite{foufas2008valuing} & FEM & 11.112 & 6.810 & 2.754\\
& Kim et al. \cite{kim2007meshfree} & MPCM & 11.093 & 6.79 & 2.78\\
& & Ultraweak DPG& 11.09398 & 6.79512 & 2.74481\\
& & Primal DPG & 11.09401 & 6.7948 & 2.74499\\
\hline
\multirow{7}{*}{0.10}& vecer\cite{vecer2001new} & Monte Carlo & 15.399 & 7.028 & 1.418\\
& & FDM & 15.399 & 7.029 & 1.415\\
& Rogers and Shi \cite{rogers1995value} & Lower Bound & 15.399 & 7.028 & 1.413\\
& Foufas and Larson \cite{foufas2008valuing} & FEM & 15.416 & 7.042 & 1.422\\
& Kim et al. \cite{kim2007meshfree} & MPCM & 15.398 & 7.028 & 1.448\\
& & Ultraweak DPG& 15.3984 & 7.0277 & 1.41769\\
& & Primal DPG & 15.39899 & 7.02812 & 1.418001\\
\hline
\multirow{7}{*}{0.20} & vecer\cite{vecer2001new} & Monte Carlo & 15.642 & 8.409 & 3.556\\
& & FDM & 15.643 & 8.412 & 3.560\\
& Rogers and Shi\cite{rogers1995value} & Lower Bound & 15.641 & 8.408 & 3.554\\
& Foufas and Larson \cite{foufas2008valuing} & FEM & 15.659 & 8.427 & 3.570\\
& Kim et al. \cite{kim2007meshfree} & MPCM & 15.66437 & 8.421 & 3.573\\
& & Ultraweak DPG& 15.64218 & 8.4091 & 3.5559\\
& & Primal DPG & 15.641865 & 8.4102 & 3.5584\\
\hline
\multirow{7}{*}{0.30}& vecer\cite{vecer2001new} & Monte Carlo & 16.516 & 10.210 & 5.731\\
& & FDM & 16.516 & 10.215 & 5.736\\
& Rogers and Shi\cite{rogers1995value} & Lower Bound & 16.512 & 10.208 & 5.728\\
& Foufas and Larson\cite{foufas2008valuing} & FEM & 16.553 & 10.231 & 5.750\\
& Kim et al. \cite{kim2007meshfree} & MPCM & 16.5179 & 10.2194 & 5.742\\
& & Ultraweak DPG& 16.51615 & 10.21045 & 5.73074\\
& & Primal DPG & 16.51617 & 10.20964 & 5.730865\\
\hline
\end{tabular}
}
\end{table}
To compare the accuracy and stability of the proposed method with a broader set of methods from the literature, Table (\ref{AsianTable2}) is provided. In this test, the results from the Monte Carlo method are used as the reference solution. We compute the value of an Asian option for different strike prices $K=95,100,105$, interest rate $r = 0.15$, time to maturity $T=1$, and initial asset value $S_0=100$, with different volatilities $\sigma = 0.05,0.1,0.2,0.3$. The results from the DPG methods deviate by at most $0.001\%$ from the benchmark.
\subsection{Barrier Options}
A double knock-out barrier option is a financial contract that pays $h(S)$ at maturity $T$ as long as the price of the underlying asset stays within the predetermined barriers $[S_L(t), S_U(t)]$; otherwise, if the spot price hits a barrier, the option is knocked out. Although the barriers can be monitored continuously in time, it is more common to monitor them discretely in real-world applications \cite{shreve2004stochastic}.
It is well known that a closed-form analytical solution for the discretely monitored double barrier option is not available, so devising accurate and efficient numerical methods for valuing this type of option is essential. Over the past years, researchers have therefore developed semi-analytical and numerical schemes for approximating the price of barrier options; here we briefly mention some of them. Kunitomo et al. \cite{kunitomo1992pricing} used sequential analysis to find the solution as a series, and an analytical approach based on contour integration was used by Pelsser \cite{pelsser2000pricing} to price barrier options. The binomial method was used by Cheuk et al. in \cite{cheuk1996complex}, and the Monte Carlo method, as a probability-based method, was devised in \cite{ndogmo2007high} to price this exotic option.
Among PDE methods, the finite difference method was applied by Zvan et al. in \cite{zvan2000pde}, a finite element method in \cite{golbabai2014highly} by Golbabai et al., and a quadrature method in \cite{milev2010numerical} for pricing discretely monitored barrier options.
We begin by stating the model of the problem, which is inspired by the works \cite{milev2010numerical} and \cite{tse2001pricing}. Assume that the dynamics of the underlying asset $\{S_t\}_{t\in [0,T]}$ follow the stochastic differential equation (\ref{geoBrownMo}), with standard Wiener process $W_t$, interest rate $r$, volatility $\sigma$, and fixed initial asset price $S_0$. Defining the Brownian motion $Z_t$ with instantaneous drift $\hat{r} = r-(\sigma^2/2)$ and standard deviation $\sigma$, the price process can be written as $S_t = S_0 e^{Z_t}$. Moreover, we define the discrete counterpart process $\Tilde{X}_n = S_0 e^{\Theta_n}$, for $n=1,2,\cdots,N$, with $\Theta_n=\theta_1+\theta_2+\cdots+\theta_n$ and $\Theta_0 = 0$. The random variables $\theta_i$ are independent and normally distributed, i.e. $N\big((r-\sigma^2/2)\Delta t,\, \sigma\sqrt{\Delta t}\big)$, with $\Delta t = \frac{T}{N}$ for $N$ predetermined monitoring instants.
Consider the discrete monitoring dates $t_1=0\leq t_2\leq \cdots \leq t_N=T$ with constant upper and lower barriers $S_U$ and $S_L$, respectively. Besides, we assume that the barriers are not active on the first and last dates of our time interval. The price of a discrete double barrier option can be computed by discounting the expected payoff at the expiration time $T$ to the present time $t$ as follows
\begin{align*}
e^{-rT}\, \mathbb{E} \big[h(S_T)\, \chi_{B_1}\chi_{B_2}\cdots \chi_{B_N}\big],
\end{align*}
where $\chi_{B_i}$ denotes the indicator function of the set $B_i=\{S_{t_i}\in(S_L,S_U)\}$.
Denoting by $U(t, S)$ the value of a discrete double barrier option with maturity date $T$ and strike price $K$ (for simplicity), this value satisfies the following system of $N$ partial differential equations
\begin{equation}\label{barrierPDE1}
\left\{
\begin{split}
&\frac{\partial U(t,S)}{\partial t}+\frac{\sigma^2}{2}S^2\frac{\partial^2 U(t,S)}{\partial S^2}
+r S\frac{\partial U(t,S)}{\partial S}-rU(t,S)=0,
\hspace{0.0cm} \quad \forall t\in [t_i,t_{i+1}], \quad \forall i=1,2,\cdots,N \\
\\
&U(t,0 )=0,\\
\\
&\lim_{S\to +\infty}U(t,s) = h(S)\\
\\
&U(S,t_i) = h_i(S)\qquad \forall i=1,2,\cdots,N\\
\\
&U(S,T) = h_T(S),
\end{split}
\right.
\end{equation}
where boundary conditions $h_i(S)$, and $h_T(S)$ are also defined as
\begin{equation}\label{barrierBC1}
h_i(S) = \left\{
\begin{split}
&\lim_{t\to t_i^{+}} U(S,t),\quad \text{if}\quad S_L\leq S\leq S_U, \hspace{0.5cm}\quad \forall t\in [t_i,t_{i+1}], \quad \forall i=1,2,\cdots,N\\
\\
& 0, \hspace{2.5cm} \text{if}\quad S\in\mathbb{R}^{+}\backslash [S_L,S_U],
\end{split}
\right.
\end{equation}
and
\begin{equation}\label{barrierBC2}
h_T(S) = \left\{
\begin{split}
&(S-K)^{+},\quad \text{if}\quad S_L\leq S\leq S_U, \hspace{0.5cm}\quad \forall t\in [t_i,t_{i+1}], \quad \forall i=1,2,\cdots,N,\\
\\
& 0, \hspace{2.5cm} \text{if} \quad S\in\mathbb{R}^{+}\backslash [S_L,S_U].
\end{split}
\right.
\end{equation}
As we can observe, (\ref{barrierPDE1}) is a system of consecutive partial differential equations, where on each time interval $[t_i,t_{i+1}]$ the final condition is given by (\ref{barrierBC1}), and the last PDE carries the final condition (\ref{barrierBC2}). Besides, the system of PDEs presented in (\ref{barrierPDE1}) with the aforementioned conditions is a non-smooth problem because of the functions (\ref{barrierBC1}) and (\ref{barrierBC2}); therefore, designing an accurate and stable numerical scheme is tricky here.
We use the change of variable in space and time similar to the change of variables for vanilla options in section (\ref{BlkSholes}) to obtain the following piecewise constant coefficient partial differential equations.
\begin{equation}\label{barrierPDE}
\left\{
\begin{split}
&\frac{\partial u(\tau,x)}{\partial \tau}+\frac{\sigma^2}{2}\frac{\partial^2 u(\tau,x)}{\partial x^2}
+(r+\frac{\sigma^2}{2}) \frac{\partial u(\tau,x)}{\partial x}-ru(\tau,x)=0,
\hspace{0.0cm}\quad \forall \tau\in [\tau_i,\tau_{i+1}], \quad \forall i=1,2,\cdots,N \\
\\
&u(\tau,0 )=0,\\
\\
&\lim_{x\to +\infty}u(\tau,x) = h(x),\\
\\
&u(x,\tau_i) = h_i(x),\qquad \forall i=1,2,\cdots,N,\\
\\
&u(x,T) = h_T(x).
\end{split}
\right.
\end{equation}
Now, if we concentrate on one of these equations as a generic differential equation on the interval $[\tau_j,\tau_{j+1}]$, where $j\in \{1,2,\cdots,N\}$, we propose the following DPG weak formulation
\begin{align}\label{BarrirDPG}
\frac{d}{d\tau}({\bf u}, {\bf v})+ b^{\tau}({\bf u},{\bf v})=0,
\qquad \forall \tau\in [\tau_j,\tau_{j+1}],
\end{align}
where the bilinear form on this sub-interval is analogous to the primal and ultraweak forms defined in Eqs. (\ref{primalVI}) and (\ref{ultraWkVI}). The boundary conditions introduced in (\ref{barrierPDE}), however, act on the interval $[\tau_j,\tau_{j+1}]$ as a sub-interval of the computational domain. Utilizing a generic partition $\tau_{j}=\tau_{j1},\tau_{j2},\cdots, \tau_{jN_j} =\tau_{j+1}$ of each interval and using the backward Euler scheme for the time derivative, the approximation of equation (\ref{BarrirDPG}) in the finite-dimensional space, i.e. the discrete DPG formulation of each sub-problem, reads
\begin{align}\label{discrtBarier}
(u^{n+1}-u^n, {\bf v})+ \Delta \tau_i b^{\tau}_n(u^n,{\bf v})= 0, \qquad \forall {\bf v} \in V_h, \quad \forall n\in \{1,2, \cdots, N_{i}\},
\end{align}
where the time step on each sub-problem is $\Delta \tau_i = \frac{\tau_{i+1}-\tau_i}{N_i}$. Indeed, for each sub-problem (\ref{discrtBarier}) we need to solve a non-smooth discrete system of equations (see the pseudo-code in Algorithm \ref{algBarrier}). Using the graph and energy norms defined in (\ref{testNormVanial}) on each sub-domain $[\tau_j,\tau_{j+1}]$, and denoting the discrete operators $B$, $G$, and $l$ accordingly as follows
\begin{align}
B_{ij}&=b^{\tau}(u_i,{\bf v}_j), & G_{ij}&=({\bf v}_i,{\bf v}_j)_v , & l_i &= (u_i,v), & U_h&=[\tau_j,\tau_{j+1}],
\end{align}
one can find the discrete generic problems on each sub-domain
\begin{equation}\label{discreBarreri2}
\left\{\begin{split}
&l^{\text{n-op}}(u^{n+1}-u^n)+ \Delta \tau B^{\text{n-op}} u^{n}=0,\qquad \forall n \in \{1,\cdots, N_{\tau_i}\}, \\
\\
&{u}^{n}|_{x=0}=0, \qquad \forall n \in \{1,\cdots, N_{\tau_i}\},\\
\\
&\lim_{x\to +\infty}{u}^{n+1}(x) = h(x),\\
\\
&{u}^{{N_{\tau_i}}}|_x = h_i(x).\\
\end{split}\right.
\end{equation}
where the near-optimal DPG operators are defined as $B^{\text{n-op}}=B^T G^{-1}B$ and $l^{\text{n-op}}=B^T G^{-1}l$. The non-smooth system of equations (\ref{discreBarreri2}) can be solved by a projected iterative solver, such as projected gradient descent, for the consecutive intervals until the time of maturity \cite{beck2014introduction}.
\begin{algorithm}
\caption{Numerical algorithm for the double barrier option}\label{algBarrier}
\begin{algorithmic}
\Require $S_0\in[S_L,S_U]$
\State $u^N|_S \gets h(S)$
\For{$\tau_i\in[t_1,t_N]$}
\For{$\tau_{i_j}\in[\tau_i,\tau_{i+1}]$}
\If{$S \in [S_L,S_U]$}
\State $u^n|_{S=0} \gets 0$
\State $u^{N_{\tau_i}}|_{S\to\infty} \gets h(S)$
\State $u^{N_{\tau_i}}|_S \gets h_i(S)$
\State Solve the sub-partial differential equation (\ref{discreBarreri2}).
\Else
\State The option is knocked out; set its value to zero.
\EndIf
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
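A compact sketch of this monitoring loop is given below; the function \texttt{solve\_step}, the price grid, and the step counts are assumptions standing in for the DPG sub-problem solver of (\ref{discreBarreri2}).
\begin{verbatim}
import numpy as np

def price_double_barrier(u_T, S_grid, S_L, S_U, n_monitor, n_sub, solve_step):
    """March backward from maturity and knock the value out at every
    discrete monitoring date (cf. Algorithm 1)."""
    u = np.array(u_T, dtype=float)
    dead = (S_grid < S_L) | (S_grid > S_U)   # knock-out region
    u[dead] = 0.0                    # payoff already knocked out at maturity
    for _ in range(n_monitor):       # loop over monitoring intervals
        for _ in range(n_sub):       # time steps inside one interval
            u = solve_step(u)        # advance the Black-Scholes sub-problem
        u[dead] = 0.0                # apply the barrier at the monitoring date
    return u
\end{verbatim}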
\subsection{Numerical Experiments}
Here we solve the standard test problem considered in \cite{kim2014option}. We use the DPG method to price a barrier option with volatility $\sigma=0.2$, interest rate $r = 0.1$, strike price $K=100$, and lower and upper barriers $S_L=95$ and $S_U = 125$, respectively. A trading year includes $250$ working days, and a working week has five days. In this example, we report the numerical estimates for daily and weekly monitoring; in other words, if we take $T=1$ for one trading year (or $T=0.5$ for half a year), then time increments of $\Delta t = 0.004$ ($\Delta t = 0.002$ for half a year) correspond to daily monitoring, and $\Delta t = 0.02$ ($\Delta t = 0.01$ for half a year) corresponds to weekly monitoring. The desired results are obtained using the first-order DPG method with $N_s=100$ spatial elements, $N_t = 100$ time steps, and an enriched test space with $\Delta p=2$.
Fig. \ref{UltraPrimaPriceBarrier} depicts the surface of the price of the barrier option for the primal and ultraweak formulations. As expected, this option is cheaper than the corresponding European option, since the contract can be knocked out before maturity. Moreover, in spite of the non-smooth boundary condition, the price surface is smooth and stable.
\begin{figure}[!htb]
\includegraphics[width=\linewidth]{plots/9.png}
\caption{Surface of the price of the barrier option, $\sigma =0.2$, $r =0.1$, $K=100$, $[S_L,S_U]=[95,125]$, via the DPG method.}\label{UltraPrimaPriceBarrier}
\end{figure}
Fig. \ref{DPGBbarrierValue} shows the price of the barrier option with the aforementioned market parameters. The primal and ultraweak formulations are implemented to find the value of the option with both weekly and daily monitoring of the barriers. One difficulty in pricing barrier options is that the value of the option can be oscillatory near the barriers $S_L$ and $S_U$, whereas the illustrations show stable and smooth behavior of the price for stock prices close to the boundaries.
\begin{figure}[!ht]
\includegraphics[width=\linewidth]{plots/10.png}
\caption{Value of the barrier option with the DPG method}
\label{DPGBbarrierValue}
\end{figure}
Table (\ref{BarrierTable}) compares the accuracy of the DPG method with the path integral method \cite{milev2010numerical} and the MPCM method \cite{kim2007meshfree}. In this experiment, we report the value of the option for daily and weekly monitoring when the underlying price is $S=95$, $S=95.0001$, $S=124.9999$, and $S=125$, with barriers $[S_L, S_U] = [95,125]$. As mentioned before, numerical schemes can behave unstably close to the barriers, and in this example we assess the accuracy of the method when the stock price lies in a very close neighborhood of the barriers. As can be seen, the DPG results are accurate and very close to the values recorded in \cite{milev2010numerical} and \cite{kim2007meshfree}.
{\small
\begin{table}[H]
{\footnotesize
\caption{{Double barrier option with $\sigma=0.2$, $r=0.1$, $T=0.5$, $K=100$, $L=95$, $U=125$}\label{BarrierTable}}
\begin{center}
\begin{tabular}{c|l|l|c|c} \hline
$S$ & Reference &Method& Weekly checking & Daily checking\\ \hline
\multirow{4}{*}{95}
& Milev \cite{milev2010numerical} & Path integral& 11.094 & 6.795 \\
& Kim \cite{kim2007meshfree} & MPCM & 11.094 & 6.795 \\
& & Primal DPG & 11.094 & 6.795 \\
& & Ultraweak DPG& 11.094 & 6.795 \\
\hline
\multirow{5}{*}{95.0001}
& Milev \cite{milev2010numerical} & Path integral& 11.094 & 6.795 \\
& Monte Carlo & $10^7$ paths & 11.094 & 6.795 \\
&Kim \cite{kim2007meshfree} & MPCM & 11.094 & 6.795 \\
& & Primal DPG & 11.094 & 6.795 \\
& & Ultraweak DPG& 11.094 & 6.795 \\
\hline
\multirow{5}{*}{124.9999}
&Milev \cite{milev2010numerical} & Path integral& 11.094 & 6.795 \\
& Monte Carlo & $10^7$ paths & 11.094 & 6.795 \\
&Kim \cite{kim2007meshfree} & MPCM & 11.094 & 6.795 \\
& & Primal DPG & 11.094 & 6.795 \\
& & Ultraweak DPG& 11.094 & 6.795 \\
\hline
\multirow{4}{*}{125}
& Milev\cite{milev2010numerical} & Path integral& 11.094 & 6.795 \\
&Kim \cite{kim2007meshfree} & MPCM & 11.094 & 6.795 \\
& & Primal DPG & 11.094 & 6.795 \\
& & Ultraweak DPG& 11.094 & 6.795 \\
\hline
\end{tabular}
\end{center}
}
\end{table}
}
\section{Option Pricing Problem with a Stable Method}
\begin{thm}[Main Theorem]\label{thm:Mainthm}
The option pricing problem discretized with the DPG method has a unique solution.
\end{thm}
\section{Sensitivity Analysis with Greeks}\label{greeks}
In this section, we use the DPG methodology to calculate the sensitivities of option prices under the Black-Scholes model. The sensitivities of the option with respect to the model parameters, known as the Greeks, describe the reaction of the option value to fluctuations in the market environment. The Greeks are compasses in the trader's hand, pointing in the correct direction for hedging a portfolio and buffering it against market changes.
Thus, the efficiency and accuracy of the numerical scheme are of paramount importance for tracing the option price changes when the state of the market changes. Let $u(x,t)$ be the solution of the Black-Scholes partial differential equation (\ref{BlS1}) with the boundary conditions pertaining to the specific option, and let $\alpha$ be the parameter with respect to which we want to track the price changes; then $\frac{\partial u(x,t)}{\partial \alpha}$, which for simplicity will be denoted by $u_{\alpha}(x,t)$, is the sensitivity. This sensitivity can be found with the direct method or the dual method (the avid reader is referred to \cite{damircheli2019solution}). Taking the derivative of the Black-Scholes equation with respect to the parameter $\alpha$, one finds a partial differential equation for $u_{\alpha}(x,t)$:
\begin{align}\label{sensPDE1}
\frac{\partial u_{\alpha}}{\partial t}+
\frac{\partial}{\partial \alpha}\Big(\frac{\sigma^2}{2}x^2\Big)\frac{\partial^2 u}{\partial x^2}+\frac{\sigma^2}{2}x^2\frac{\partial^2 u_{\alpha}}{\partial x^2}
+\frac{\partial}{\partial \alpha}(r x) \frac{\partial u}{\partial x}+
r x \frac{\partial u_{\alpha}}{\partial x}
-\frac{\partial r}{\partial \alpha}u(x,t)- r u_{\alpha} = 0,
\end{align}
Note that $u(x,t)$ has already been computed as the value of the option at the initial state of the parameter $\alpha$ (see \cite{seydel2006tools} for more detail). One can develop a DPG formulation, either primal or ultraweak, for solving the PDE (\ref{sensPDE1}) with the appropriate boundary conditions to find the desired sensitivity $u_{\alpha}(x,t)$. In this paper, we study the first and second derivatives of the price with respect to the underlying asset, which are named Delta and Gamma, respectively.
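Before presenting the DPG-based sensitivities, we note that a bump-and-revalue (finite-difference) approximation provides a convenient cross-check for the direct method; the sketch below assumes an arbitrary black-box pricing routine \texttt{price} (e.g., a wrapper around the DPG solver), which is a placeholder and not part of the reported implementation.
\begin{verbatim}
# Crude cross-check of u_alpha = du/d(alpha): central finite differences
# in the parameter.  `price` is a placeholder for any pricing routine.
def sensitivity_fd(price, alpha, h=1e-4):
    return (price(alpha + h) - price(alpha - h)) / (2.0 * h)
\end{verbatim}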
To start, it is worth mentioning that in the ultraweak formulation of the DPG method (see, for example, (\ref{ultraVanila})) we inherently and implicitly evaluate Delta, since our primary trial variables are $(u(x,t), \sigma = \frac{\partial u}{\partial x})$.
Fig. (\ref{GreekUltraAsian}) presents the numerical result of the ultraweak solution of the Asian option pricing problem as an example, showing how Delta can be obtained implicitly without extra computational cost for recomputing the sensitivity.
\begin{figure}[!ht]
\includegraphics[width=\linewidth]{plots/11.png}
\caption{Computing the Delta for Asian option alongside the value of the option with the ultraweak DPG method}
\label{GreekUltraAsian}
\end{figure}
However, one can also find the Gamma and Delta of the Asian option indirectly, with either the ultraweak or the primal formulation (Fig. \ref{DirectGreekAsianPrimal}), by solving the PDE (\ref{sensPDE1}) for different market volatilities.
\begin{figure}[H]
\includegraphics[width=\linewidth]{plots/12.png}
\caption{Greeks of Asian option with primal DPG method}
\label{DirectGreekAsianPrimal}
\end{figure}
It is well known that Delta is positive for call options, Fig. (13a), and negative for put options, Fig. (13b), whereas Gamma is always positive for both call options, Fig. (14a), and put options, Fig. (14b). Fig. (\ref{GreekEuropeanCallPrimal}) illustrates the Delta and Gamma of the European call option for different times to maturity, with strike price $K=100$, $r = 0.05$, and $\sigma = 0.15$, computed with the primal DPG method. The sensitivities of the European put option with the same market parameters are depicted in Fig. (\ref{GreekEuropeanPutUltrawk}) using the ultraweak DPG method.
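For the European options, these sign properties can be verified against the closed-form Black--Scholes Greeks; the following snippet uses the standard formulas (independent of the DPG solver) to compute Delta and Gamma for a call or put.
\begin{verbatim}
# Closed-form Black-Scholes Delta and Gamma, useful as a sanity check for
# the DPG-based sensitivities of European options.
from math import log, sqrt, exp, erf, pi

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_delta_gamma(S, K, r, sigma, tau, call=True):
    """Delta and Gamma of a European option with time to maturity tau > 0."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    delta = norm_cdf(d1) if call else norm_cdf(d1) - 1.0
    gamma = exp(-0.5 * d1 * d1) / (S * sigma * sqrt(tau) * sqrt(2.0 * pi))
    return delta, gamma
\end{verbatim}
For instance, \texttt{bs\_delta\_gamma(100, 100, 0.05, 0.15, 1.0)} returns a Delta in $(0,1)$ and a strictly positive Gamma, consistent with the behavior shown in the figures.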
\begin{figure}[!ht]
\includegraphics[width=\linewidth]{plots/13.png}
\caption{Greeks of the European call option with the primal DPG method; parameters: $r=0.05$, $\sigma=0.15$, $K=100$}
\label{GreekEuropeanCallPrimal}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\linewidth]{plots/113Plus.png}
\caption{Greeks of the European put option with the ultraweak DPG method; parameters: $r=0.05$, $\sigma=0.15$, $K=100$}
\label{GreekEuropeanPutUltrawk}
\end{figure}
Admittedly, the American option is one of the most attractive options for market makers, since the holder has the right to exercise the contract at whatever moment is deemed appropriate under the hedging strategy. Thus, not only is monitoring Delta important, but practitioners are also interested in the rate of change of Delta itself (Gamma) for each one-basis-point movement in the underlying asset. Moreover, we can expect that the free boundary induced by the early-exercise feature has a significant impact on the sensitivities of the option as well. Fig. (\ref{GreekAmericanPrimal}) shows the variation of Delta and Gamma for an American put option computed with the primal DPG method for different times to maturity. As can be seen, this oscillatory behavior intensifies as time approaches maturity: at $t=0.01$, shortly after the option is written, the behavior is smooth like that of the European option, while at $t=1.0$ the fluctuation is maximal.
\begin{figure}[H]
\includegraphics[width=\linewidth]{plots/14.png}
\caption{Greeks of the American put option with the primal DPG method; parameters: $r=0.05$, $\sigma=0.15$, $K=100$}
\label{GreekAmericanPrimal}
\end{figure}
The Greeks of the barrier option with double barriers $S_L = 95$ and $S_U = 125$ are shown in Fig. (\ref{GreekBarPrimal}) and Fig. (\ref{GreekBarultraweak}) using the DPG method for the initial stock prices $S_0 = 95.0001$ and $S_0 = 100$. Both figures show that the sensitivities exhibit oscillatory behavior around the barriers when the underlying price is close to $95$ and $125$. One can also see that in both cases the rates of change of the price and of Delta are smoother for weekly monitoring of the barriers than for daily monitoring, which stands to reason.
\begin{figure}[!ht]
\includegraphics[width=\linewidth]{plots/15.png}
\caption{Greeks of the barrier option with the primal DPG method for weekly and daily monitoring; $p=2$, $N_t = 25$, $S_0=95.0001$}
\label{GreekBarPrimal}
\end{figure}
\begin{figure}[!ht]
\includegraphics[width=\linewidth]{plots/16.png}
\caption{Greeks of the barrier option with the ultraweak DPG method for weekly and daily monitoring; $p=2$, $N_t = 25$, $S_0 = 100$}
\label{GreekBarultraweak}
\end{figure}
\section{Conclusion}
In this manuscript, a numerical scheme based on the discontinuous Petrov--Galerkin (DPG) method
is proposed to deal with the option pricing problem, one of the most important branches of quantitative finance. The Black-Scholes PDE arising from option pricing is a special member of the family of convection-diffusion problems, which are known to be unstable in the presence of a convection-dominant term. The DPG method automatically yields a stable numerical scheme for approximating the solution of this PDE. In this investigation, we derived detailed DPG formulations for European, American, Asian, and barrier options, as well as their sensitivities. In addition, computational experiments were performed to inspect the numerical efficiency of the method for each option and the corresponding Greeks. An HPC code for option pricing with the DPG method is provided to encourage market makers and researchers to utilize the DPG method by customizing the code for their own applications and more complicated problems.
|
{
"arxiv_id": "2302.08644",
"language": "en",
"timestamp": "2023-02-20T02:04:42",
"url": "https://arxiv.org/abs/2302.08644",
"yymm": "2302"
} | \section{Introduction}
In network communication, latency is defined as the time elapsed between the transmission of a packet from the source and its recovery at the receiver. Ensuring reliable communication with low latency is the main challenge in designing next-generation communication systems. To this end, we focus on studying error-correcting codes that ensure reliability in the presence of low decoding delay constraints. Our motivation stems from several modern applications such as Extended Reality (XR), Virtual Reality (VR), and Cloud Gaming.
In this work, we focus on forward error correction (FEC). FEC codes for correcting erasures (i.e., packet drops) with low decoding delay were first investigated by Martinian {\em et al.} in~\cite{Martinian2004, Martinian2007}. The works in \cite{Martinian2004, Martinian2007} focused on designing codes that correct a burst of $B$ erasures with a decoding delay that is at most $T$, and showed that the optimal code rate for this setting is $R=\frac{T}{T+B}$. In~\cite{Martinian2007}, Martinian and Trott presented an explicit rate-optimal code that can be constructed for all values of $B, T$, and $R$, with a linear field size $q=O(T)$. In~\cite{Badr2017}, the authors generalized the study on codes with low decoding delay by introducing and studying a channel model that considers both burst and arbitrary erasures. This channel model was referred to as the sliding-window erasure channel, denoted by $C(N, B, W)$, where in any sliding window of size $W$, the channel can introduce either an erasure burst of maximum length $B$, or else, up to $N$ arbitrary erasures. The authors in \cite{Badr2017} showed that, without loss of generality, the channel parameter $W$ could be set to $W=T+1$, and the optimal code rate for this channel is~$R=\frac{T+1-N}{T+1-N+B}$.
The term {\em streaming codes} is widely used in the literature to refer to codes that can correct {\em all} erasure patterns generated by the $C(N, B, W)$ sliding-window channel model. A series of works focused on designing rate-optimal streaming codes, e.g.,~\cite{Badr2017,Kumar2018, Fong2019l, dudzicz2019explicit, Rakumar2020, Domanovitz2020, Kumar2020IT, Kumar2020L,fong2020optimal,badr2017fec,badr2011diversity}. These codes have different field size requirements in terms of $T$, ranging from exponential $q=O(e^T)$ to linear $q=O(T)$. Existing streaming codes with linear field sizes can be constructed only for certain values of $N, B,$ and $T$, e.g., \cite{Rakumar2020, Kumar2020L}. The state-of-the-art optimal streaming code that can be constructed for {\em all} values of the parameters $N, B, T$ is the one recently presented by Domanovitz {\em et al.}~\cite{Domanovitz2020}, which requires a quadratic field size $q=O(T^2)$. Some works also considered variations of the $C(N, B, W)$ sliding-window channel; for instance, the authors in~\cite{badr2015streaming} proposed a code with cubic field size $q=O(T^3)$ that can correct most erasure patterns in which a window of size $W$ contains a {\em single} arbitrary erasure {\em in addition} to a burst erasure of maximum size $B$.
The goal of introducing sliding-window channels such as the $C(N, B, W)$ channel in~\cite{Badr2017} was to provide a theoretically tractable approximation of the Gilbert-Elliot (GE) channel\cite{Gilbert, Elliot}, which is a random channel with memory that is commonly used to model packet erasures over networks. However, the accuracy of this approximation heavily depends on the values of $N,B,T$, and the GE channel parameters. Namely, some realizations of the GE channel may include both burst and multiple arbitrary erasures occurring in the same window, and such cases are not covered by the sliding-window erasure channels studied in~\cite{Badr2017} and~\cite{badr2015streaming}. These realizations could have a relatively high probability of occurrence depending on the values of the system parameters.
In our work, we adopt a different approach from the literature for constructing low-delay erasure codes, where we do not optimize the performance of the code over sliding-window channels of any kind. Instead, we focus on constructing low-delay codes with linear field size that can also correct erasure patterns consisting of both burst and multiple arbitrary erasures occurring simultaneously in the same window. Moreover, our code design is focused on minimizing the packet loss probability by allowing the recovery of a maximal number of erased packets in scenarios where the erasure pattern is beyond the error correction capability of the code. The parameters of our code are $N,B,$ and $T$, where $T$ represents the maximum decoding delay and $(N,B)$ determines the erasure correction capability of the code. Our codes are explicit, systematic and can be constructed for all values of $N, B$, and $T$, with a linear field size $q\geq T$. We provide simulation results on the packet loss probability of our code over the GE channel and compare the results to existing constructions in the literature. The comparison shows that our code can outperform existing streaming codes in terms of both packet loss probability and maximum delay, for the same code rate and lower field size. An important implication of our results is that constructing rate-optimal streaming codes with zero error over theoretical channels, such as sliding-window channels, does not necessarily translate into codes with optimal performance over practical channels, such as the GE channel.
\section{Preliminaries}
\label{sec:2}
\subsection{System Model}
\label{model}
In what follows, we discuss coding approaches for tackling the low-delay packet transmission problem. Convolutional codes are typically used for the low-delay packet transmission setting, and the delay is measured in terms of the time units that one needs to wait for before being able to decode an erased packet. These codes can be constructed in multiple ways but they share the basic idea of introducing redundant symbols or packets through time. The redundancy is generated by encoding the information symbols or packets causally. More specifically, the parity symbols generated at time $i$ can only depend on the information symbols at times $j\leq i$. We will be discussing two main approaches for constructing such convolutional codes that differ in terms of the allocation of redundancy across time instances. We will focus on systematic code constructions since the systematic property ensures that unerased packets are recovered with zero delay.
The first known approach~\cite{Martinian2004} for designing low-delay codes is the following. Consider the setting where at each time~$i$, a source generates an information packet denoted by \mbox{$\mathbf{u}[i]=(u_1[i],u_2[i],\ldots,u_{k}[i])\in \mathbb{F}_q^k$} consisting of $k$ symbols over $\mathbb{F}_q$. At time $i$, a systematic encoder maps the information packet $\mathbf{u}[i]$ into an $n$-symbol coded packet denoted by \begin{equation}
\label{de}
\mathbf{x}[i]=(u_1[i],\ldots,u_{k}[i],p_1[i],\ldots,p_{n-k}[i])\in \mathbb{F}_q^n,
\end{equation}
where the parity symbols $p_1[i],\ldots,p_{n-k}[i]$ are generated as a function of $\mathbf{u}[j]$ with $j\leq i$. The rate of the code is $R=k/n$. Our goal is to design the code such that it allows decoding an erased packet under a maximum delay constraint $T$, where $T$ is a code parameter. Therefore, in the case where $\mathbf{x}[i]$ is erased, the underlying code should allow recovering the information packet $\mathbf{u}[i]$ by the time the packet $\mathbf{x}[i+T]$ is received.
In the previous description, a delay-constrained code with rate $R=k/n$ is constructed by introducing $n-k$ redundant parity symbols at each time $i$. Alternatively, a second known approach~\cite{VajhaITW, Vajha} for constructing a code with the same delay properties and same rate $R=k/n$, is to transmit $k$ information packets, followed by $n-k$ parity packets, in $n$ consecutive time instances. More specifically, the first $n$ transmitted packets would be given by
\begin{equation}
\label{he}
\mathbf{x}[i] = \left\{\def\arraystretch{1.2}%
\begin{array}{@{}c@{\quad}l@{}}
\mathbf{u}[i] \in \mathbb{F}_q^k, & \text{for $i=1,\ldots,k$},\\
\mathbf{p}[i] \in \mathbb{F}_q^k, & \text{for $i=k+1,\ldots,n$},\\
\end{array}\right.
\end{equation}
where $\mathbf{p}[k+1], \ldots, \mathbf{p}[n]$ are $n-k$ parity packets of length $k$ that are generated as a function of the $k$ preceding information packets $\mathbf{u}[1],\ldots,\mathbf{u}[k]$. This process can be repeated for every group of $k$ information packets.
\subsection{Diagonal and Horizontal Interleaving}
\label{interleaving}
Next, we present two known interleaving techniques for transforming a low-delay block code into a convolutional one with equivalent code properties. The importance of these interleaving techniques is that they reduce our problem to designing classical block codes that satisfy certain properties.
Consider an $(n,k)$ block code $\mathcal{C}\subseteq \mathbb{F}_q^n$. Suppose that a codeword $\mathbf{x}=(x_1,\ldots,x_n)\in \mathcal{C}$ is affected by erasures resulting in a string $\mathbf{y}=(y_1,\ldots,y_n)$, where $y_i\in \mathbb{F}_q \cup \{\star\}$. We say that an erased codeword symbol $x_i$ is decoded with maximum delay $T$, if this symbol can be recovered using $(y_1,y_2,\ldots,y_{i+T^*})$, where $T^*=\min \{T,n-i\}$.
The two interleaving techniques that we explain next, called diagonal and horizontal interleaving, allow transforming any block code $\mathcal{C}$ into a convolutional code that has the same code rate and equivalent decoding delay properties.
The diagonal interleaving technique~\cite{Martinian2007} is applied in the setting corresponding to~\eqref{de}. In diagonal interleaving, the convolutional code is obtained from its block code counterpart~$\mathcal{C}$ by positioning the codeword symbols diagonally across time instances, i.e., for all $i$, the symbols of the transmitted packet $\mathbf{x}[i]=(x_1[i],\ldots,x_n[i])\in \mathbb{F}_q^n$ satisfy the following property
$\left(x_1[i],x_2[i+1],x_3[i+2],\ldots,x_n[i+n-1]\right)\in \mathcal{C}.$
The horizontal interleaving technique~\cite{VajhaITW,Vajha} is applied in the setting corresponding to~\eqref{he}. Let $\mathcal{I}\triangleq \{1+mn~|~m\in \mathbb{N}\}$. In horizontal interleaving, each transmitted packet $\mathbf{x}[i]$ is of length~$k$, such that for $i\in \mathcal{I}$ we have
\[
\mathbf{x}[i+a-1] = \left\{\def\arraystretch{1.2}%
\begin{array}{@{}c@{\quad}l@{}}
\mathbf{u}[i+a-1] \in \mathbb{F}_q^k, & a=1,2,\ldots,k,\\
\mathbf{p}^{a-k}[i+a-1] \in \mathbb{F}_q^k, & a=k+1,\ldots,n,\\
\end{array}\right.
\]
where $\mathbf{p}^1[i+k],\mathbf{p}^2[i+k+1], \ldots,\mathbf{p}^{n-k}[i+n-1]$ represent the $n-k$ parity packets generated by horizontally encoding the symbols of $\mathbf{u}[i], \mathbf{u}[i+1],\ldots,\mathbf{u}[i+k-1]$ using the underlying systematic code $\mathcal{C}$. Namely, for $i\in \mathcal{I}$ and $j=1,\ldots,k$,
$$\left( u_j[i],\ldots,u_j[i+k-1], p^1_j[i+k],\ldots,p^{n-k}_j[i+n-1]\right) \in \mathcal{C}.$$
One can easily show that the aforementioned interleaving techniques result in a convolutional code that has equivalent decoding properties as the block code $\mathcal{C}$~\cite{Martinian2007, Kumar2020IT, VajhaITW, Vajha}. For more details on these interleaving techniques, we refer interested readers to~\cite{Martinian2007, Vajha} and references therein.
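As a concrete illustration of the horizontal case, the sketch below (Python/NumPy) groups $k$ information packets and encodes them column-wise into $n-k$ parity packets; here \texttt{encode\_parity} is a placeholder for any systematic $(n,k)$ encoder and is an assumption rather than a specific construction from the literature.
\begin{verbatim}
# Minimal sketch of horizontal interleaving: k information packets of
# length k are encoded column-wise with a systematic (n, k) block code,
# producing n-k parity packets.
import numpy as np

def horizontal_interleave(info_packets, encode_parity):
    """info_packets: list of k arrays of length k (symbols of u[i], ...).
    encode_parity: maps a length-k message to its n-k parity symbols.
    Returns the n transmitted packets of the current block."""
    U = np.array(info_packets)                  # shape (k, k): row = packet
    parities = np.array([encode_parity(U[:, j]) for j in range(U.shape[1])])
    # parities has shape (k, n-k); its columns are the parity packets
    return list(U) + [parities[:, a] for a in range(parities.shape[1])]
\end{verbatim}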
\begin{comment}
\subsection{Gilbert-Elliot (GE) and Sliding-Window Channels}
\label{GE}
The GE channel is a commonly-accepted model for packet erasures in networks. A GE channel with parameters $(\alpha,\beta,\epsilon_0,\epsilon_1)$ can be represented by a 2-state discrete-time Markov chain. At any instance of the GE channel, the channel is either in a good state or in a bad one (Figure~\ref{fig2}). In the good and bad states, the GE channel behaves as an i.i.d. erasure channel with erasure probabilities $\epsilon_0$ and $\epsilon_1$, respectively (typically $\epsilon_1 \gg \epsilon_0$). The probability of transitioning from the good state to the bad one between any two consecutive instances of the GE channel is $\alpha$, and the probability of transitioning from the bad state to the good one is $\beta$.
\begin{figure}[h!]
\vspace{0.2cm}
\centering
\begin{tikzpicture}
\node[state, text width=1cm, align=center] (s) {Good $\epsilon_0$};
\node[state, text width=1cm, align=center, right=of s] (r) {Bad \\ $\epsilon_1$};
\draw[every loop]
(r) edge[bend right, auto=right] node {$\beta$} (s)
(s) edge[bend right, auto=right] node {$\alpha$} (r)
(s) edge[loop left] node {$1-\alpha$} (s)
(r) edge[loop right] node {$1-\beta$} (r);
\end{tikzpicture}
\caption{2-state Markov chain illustration of the GE channel.}
\label{fig2}
\end{figure}
The GE channel has memory, which makes the theoretical analysis of the performance of codes over this channel challenging.
The $C(N, B, W)$ sliding-window-erasure channel was introduced in~\cite{Badr2017} as a theoretically tractable channel that approximates the behavior of the GE channel.
The $C(N, B, W)$ channel model assumes that in any sliding window of length~$W$, the channel can introduce either a single erasure burst of maximum length $B$, or a maximum of $N\leq B$ arbitrary erasures. The main intuition behind this approximation is that in a given time interval (window), the channel is likely to be and remain in one of the two states (good and bad). In the bad state, the channel is expected to introduce up to a certain number of burst erasures ($B$), and in the good state, the channel is expected to introduce up to a certain number of arbitrary erasures ($N\leq B$).
The previous works in the literature adopt this approximation and focus on optimizing their code constructions over the $C(N, B, W)$ channel. More specifically, the term streaming codes is used to refer to codes that have zero-error over the $C(N, B, W)$ channel, i.e., can correct all erasure patterns generated by this channel under the maximum delay constraint. As for the empirical performance of the code, the GE channel with $\epsilon_0=\epsilon$ and $\epsilon_1=1$ is typically used to simulate the performance of streaming codes. The accuracy of the aforementioned approximation depends on several factors including, but not limited to, the size of the window and values of the GE channel parameters $(\alpha,\beta,\epsilon_0,\epsilon_1)$. In general, a realization of the GE channel over a certain time interval may include both burst and arbitrary erasures occurring simultaneously in that interval, which is not covered by the $C(N, B, W)$ channel model. Therefore, streaming codes have a non-zero packet loss probability when applied over the GE channel.
In our work, we do not restrict ourselves to optimizing our construction over the $C(N, B,W)$ channel, instead, we construct codes that can also correct some combinations consisting of both burst and arbitrary erasures. Furthermore, towards minimizing the packet loss probability, our design aims at maximizing the number of packets that can be successfully recovered in scenarios where the erasure pattern is uncorrectable, i.e., not all erasures can be recovered.
\end{comment}
\section{Code Construction}
\label{cons}
\subsection{Encoding}
\label{encoding}
Suppose that we are given a target maximum delay $T$, and we choose certain code design parameters $B\geq 0$ and $N\geq 0$. Let \mbox{$\mathcal{C}\subseteq \mathbb{F}_q^n$} be an $(n,k)$ block code with $k=T-N$ and $n=T+B$, such that $B+N\leq T$ and $q\geq T$. Let $\mathbf{G}$ be the $k\times n$ generator matrix of $\mathcal{C}$. Let $\mathbf{u}=(u_1,\ldots, u_k)\in \mathbb{F}_q^k$ be the information message and $\mathbf{x}\in \mathbb{F}_q^n$ be the codeword. Since the code is systematic, we have $\mathbf{x}=\mathbf{u}\mathbf{G}=(u_1,\ldots, u_k, p_1, \ldots, p_{n-k}),$ where $p_1, p_2, \ldots, p_{n-k}$ represent the $n-k=B+N$ parity symbols. Next, we present our construction by giving the expressions of the parity symbols $p_1, \ldots, p_{B+N}$, as a function of the information symbols $u_1,\ldots, u_k$.
\begin{enumerate}[leftmargin=*]
\item The first $N$ parity symbols $p_{1},p_{2},\ldots,p_{N}$ are generated using a systematic MDS code over $\mathbb{F}_q$. For explicitness, the systematic Reed-Solomon (RS) encoder based on the Cauchy matrix construction is used.
\item The $B$ parity symbols $p_{N+1},p_{N+2},\ldots,p_{N+B},$ are generated as interleaved parity checks with an interleaving factor of $B$, such that $\forall i \in \{1,2,\ldots,B\}$,
\begin{equation}
\label{p2}
p_{N+i}=u_i+u_{i+B}+\ldots+u_{i+(q-1)B}+ \mathds{1}_{\{i\leq r\}} u_{k-r+i},
\end{equation}
\end{enumerate}
where $k=qB+r$ and $r=k\Mod B$ (with a slight abuse of notation, $q$ here denotes the quotient $\lfloor k/B\rfloor$ rather than the field size). The final step of the construction is to transform this block code into a convolutional one by applying either diagonal or horizontal interleaving, as explained in Section~\ref{interleaving}.
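A compact sketch of this encoder over a prime field is given below; the Cauchy-matrix element choices $a_i=i-1$, $b_j=k+j-1$ and the requirement that $q$ be prime are illustrative assumptions, not mandated by the construction.
\begin{verbatim}
# Sketch of the encoder over a prime field F_q (q prime, q >= k + N):
# N Reed-Solomon-type parities from a Cauchy matrix, followed by B
# interleaved parity checks with interleaving factor B (Eq. (p2)-style).
import numpy as np

def encode(u, N, B, q):
    k = len(u)
    u = np.asarray(u) % q
    # k x N Cauchy matrix M with m_ij = 1/(a_i - b_j), a_i = i-1, b_j = k+j-1
    a = np.arange(k)
    b = k + np.arange(N)
    M = np.array([[pow(int(ai - bj) % q, q - 2, q) for bj in b] for ai in a])
    rs_par = (u @ M) % q                        # p_1, ..., p_N
    # B interleaved parity checks p_{N+1}, ..., p_{N+B}
    r = k % B
    il_par = np.zeros(B, dtype=int)
    for i in range(1, B + 1):
        il_par[i - 1] = u[i - 1 : k - r : B].sum()   # u_i + u_{i+B} + ...
        if i <= r:
            il_par[i - 1] += u[k - r + i - 1]        # extra term when i <= r
        il_par[i - 1] %= q
    return np.concatenate([u, rs_par, il_par])
\end{verbatim}
For example, with $k=8$, $B=4$, $N=2$, and $q=11$, the interleaved checks produced by this sketch are $u_1+u_5$, $u_2+u_6$, $u_3+u_7$, and $u_4+u_8$, as prescribed by~\eqref{p2}.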
\begin{comment}
\begin{example}[Encoding, $T=10, B=4, N=2$]
\label{ex3}
Suppose that we are given that $T=10$ and we want to construct a code with design parameters $B=4$ and $N=2$. It follows from the construction that $k=T-N=8, n=k+B+N=14$. Consider a field size $q=11$ which satisfies $q\geq k+N$.
One way to construct the Cauchy matrix $\mathbf{M}$ is to choose the elements $a_i=i-1$ for $i=1,\ldots,k$, and $b_j=k+i-1$ for $j=1,\ldots,N$, from the field $\mathbb{F}_{11}$. This choice results in a $8 \times 2$ matrix
$$\mathbf{M}= \begin{pmatrix}
4 & 6 \\
3 & 4 \\
9 & 3 \\
2 & 9 \\
8 &2 \\
7 &8 \\
5 &7 \\
10 &5
\end{pmatrix},
$$
where the elements $m_{i,j}$ of $\mathbf{M}$ follow from~\eqref{eq:m} by computing the multiplicative inverses of $a_i-b_j$ in $\mathbb{F}_{11}$. Therefore, the codeword $\mathbf{x}\in \mathcal{C}$ is given by
$$\mathbf{x}=\left(u_1,u_2,u_3,u_4,u_5,u_6,u_7,u_8, \begin{matrix} 4u_1+3u_2+9u_3+2u_4 \\ + \\ 8u_5+7u_6+5u_7+10u_8 \end{matrix}, \begin{matrix} 6u_1+4u_2+3u_3+9u_4 \\ + \\ 2u_5+8u_6+7u_7+5u_8 \end{matrix}, \begin{matrix} u_1 \\ + \\ u_5 \end{matrix},\begin{matrix} u_2 \\ + \\ u_6 \end{matrix},\begin{matrix} u_3 \\ + \\ u_7 \end{matrix},\begin{matrix} u_4 \\ + \\ u_8 \end{matrix} \right).$$
\end{example}
\begin{example}[Encoding, $T=16,B=4,N=4$]
\label{ex4}
Suppose that we are given that $T=16$ and we want to construct a code with design parameters $B=4$ and $N=4$. It follows from the construction that $k=T-N=12, n=k+B+N=20$. Consider a field size $q=17$ which satisfies $q\geq k+N$.
One way to construct the Cauchy matrix $\mathbf{M}$ is to choose the elements $a_i=i-1$ for $i=1,\ldots,k$, and $b_j=k+i-1$ for $j=1,\ldots,N$, from the field $\mathbb{F}_{17}$. This choice results in a $12 \times 4$ matrix
$$\mathbf{M}= \begin{pmatrix}
7 &13 &6 &9 \\
3 &7 &13 &6 \\
5 &3 &7 &13 \\
15 &5 &3 &7 \\
2 &15 &5 &3 \\
12 &2 &15 &5 \\
14 &12 &2 &15 \\
10 &14 &12 &2 \\
4 &10 &14 &12 \\
11 &4 &10 &14 \\
8 &11 &4 &10 \\
16 &8 &11 &4
\end{pmatrix}, $$
where the elements of $\mathbf{M}$ follow from~\eqref{eq:m} by computing the multiplicative inverses of $a_i-b_j$ in $\mathbb{F}_{17}$. Therefore, it follows from~\eqref{p1} and \eqref{p2} that the parities of the codeword $\mathbf{x}\in \mathcal{C}$ are given by
\begin{align*}
p_1 &= 7u_1+3u_2+5u_3+15u_4+2u_5+12u_6+14u_7+10u_8+4u_9+11u_{10}+8u_{11}+16u_{12}, \\
p_2 &= 13u_1+7u_2+3u_3+5u_4+15u_5+2u_6+12u_7+14u_8+10u_9+4u_{10}+11u_{11}+8u_{12},\\
p_3 &= 6u_1+13u_2+7u_3+3u_4+5u_5+15u_6+2u_7+12u_8+14u_9+10u_{10}+4u_{11}+11u_{12}, \\
p_4 &= 9u_1+6u_2+13u_3+7u_4+3u_5+5u_6+15u_7+2u_8+12u_9+14u_{10}+10u_{11}+4u_{12}, \\
p_5 &= u_1+u_5+u_9, \\
p_6 &= u_2+u_6+u_{10}, \\
p_7 &= u_3+u_7+u_{11}, \\
p_8 &= u_4+u_8+u_{12}.
\end{align*}
\end{example}
\end{comment}
\subsection{Decoding}
\label{decoding}
To explain the decoding scheme, we start by examining the simple case when the total number of erasures in $(x_1,\ldots,x_{k+N})$ is $\leq N$. Notice that the punctured code obtained by considering only the first $k+N$ symbols of each codeword is a $(k+N,k)$ systematic RS code. Hence, if the total number of erasures in the first $k+N$ symbols is $\leq N$, we apply RS decoding to recover these erasures. In this case, it follows from the code design that the erased symbols can be recovered with a delay that is $<T$.
Next, we examine the case when the total number of erasures in $(x_1,\ldots,x_{k+N})$ is $> N$. In this case, we are interested in decoding as many information symbols (packets) as possible while respecting the maximum delay constraint $T$. It follows from the code design that the first $B-1$ information symbols $u_1,u_2,\ldots,u_{B-1}$ fall within the maximum decoding delay constraint $T$ since these symbols need to be decoded before the end of the code, whereas the delay constraint for the remaining information symbols $u_{B},u_{B+1},\ldots,u_{k}$ is inactive because the constraint falls right at the end or past the end of the code. We refer to the symbols $u_1,\ldots,u_{B-1}$ as urgent symbols, and to $u_{B},\ldots,u_{k}$ as non-urgent symbols.
We start our decoding process by decoding the urgent symbols $u_1,\ldots,u_{B-1}$ symbol-by-symbol from left to right. Suppose that $u_1$ is erased; we attempt to decode $u_1$ using the interleaved parity check $p_{N+1}$. If $u_1$ is the only summand that is erased in the interleaved parity check $p_{N+1}$, then $u_1$ can be successfully recovered by a simple addition/subtraction operation. Otherwise, $u_1$ is considered to be lost because it could not be decoded within its delay constraint. Then, similarly, we proceed to decode the next erased urgent symbol. If at any point of this symbol-by-symbol decoding process the number of erased symbols in $(x_1,\ldots,x_{k+N})$ becomes $\leq N$, we use the punctured RS code to decode all symbols, and the decoding is subsequently terminated. In some scenarios, an urgent symbol may not be decodable with delay $\leq T$, but could be decoded at later stages of the decoding process with delay $>T$. In such scenarios, we still consider the urgent symbol to be lost since its recovery was not successful within the maximum decoding delay constraint $T$.
As for the non-urgent symbols $u_{B},\ldots,u_{k}$, the general decoding approach for these symbols consists of two phases. In the first phase, we decode as many symbols as possible using the interleaved parity checks $p_{N+1},\ldots,p_{N+B}$, given in~\eqref{p2}. In the second phase, we decode the remaining symbols that were not recovered in phase 1 by using the RS parities $p_1,\ldots,p_N$. The worst-case overall decoding complexity is $O(T^2)$~\cite{berlekamp}.
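A minimal sketch of phase 1 is shown below (illustrative Python, not the exact decoder); \texttt{y} holds the received symbols with \texttt{None} marking erasures, and every interleaved check with exactly one erased summand is repaired by a single subtraction.
\begin{verbatim}
# Phase-1 repair: for each interleaved parity check p_{N+i}, if exactly one
# of its summands is erased and the check itself is available, recover it.
def phase1_repair(y, k, N, B, q):
    r = k % B
    for i in range(1, B + 1):
        members = list(range(i - 1, k - r, B)) + ([k - r + i - 1] if i <= r else [])
        p_idx = k + N + i - 1                      # position of p_{N+i}
        if y[p_idx] is None:
            continue                               # parity itself erased
        erased = [j for j in members if y[j] is None]
        if len(erased) == 1:                       # exactly one unknown summand
            j = erased[0]
            known = sum(y[m] for m in members if m != j)
            y[j] = (y[p_idx] - known) % q
    return y
\end{verbatim}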
\subsection{Erasure Correction Capability}
In this section, we discuss the erasure correction capability of the code in terms of the code parameters $N,B,$ and $T$. Namely, we detail some of the cases where {\em all} erasures can be decoded, and we present a general condition under which the partial recovery of {\em some} erasures is possible.
{\em Case 1:} Suppose that the erasure pattern consists of \mbox{$N'\leq N$} arbitrary erasures and no other erasures are present. In this case, all erased information symbols can be decoded using the punctured RS code obtained by only considering the first $k+N$ codeword symbols. Furthermore, it follows from the construction that the corresponding decoding delay is $<T$.
{\em Case 2:} Suppose that the erasure pattern consists of a burst of size $B'\leq B$ and no other erasures are present. In this case, we show that, under certain conditions, all the erased symbols can be decoded using the interleaved parity checks $p_{N+1},\ldots,p_{N+B}$. We focus our discussion on the case where all the symbols are erased in a given burst since it is the most challenging one. Recall that $k=qB+r$, with $r=k\Mod B$. For $i\in \{1,2,\ldots,B\}$, if $r=0$, let $S_i= \emptyset$; otherwise, let $S_i= \{u_{k-r+i}\}$, and define the sets
$ U_i\triangleq \{u_i, u_{i+B}, u_{i+2B}, \ldots, u_{i+(q-1)B}\} \cup S_i, $
which partition the set of information symbols $\{u_1,\ldots,u_k\}$. Notice that $U_i$ contains all the information symbols that appear as summands in the interleaved parity check $p_{N+i}$. Assume that the burst erases the systematic symbols $u_{i'},u_{i'+1},\ldots,u_{i'+B'-1}$, with $i'\in \{1,\ldots,k-B'+1\}$. It follows from the construction of the interleaved parity checks that the \mbox{$B'\leq B$} consecutive message symbols $u_{i'},\ldots,u_{i'+B'-1}$ belong to different partitions $U_i$. Therefore, any burst of size $B'\leq B$ can erase at most one of the summands in $p_{N+1},p_{N+2},\ldots,p_{N+B}$. Hence, the erased symbols $u_{i'},\ldots,u_{i'+B'-1}$ can be easily decoded by a simple subtraction/addition operation applied on the parity checks.
The argument above covers the case where the burst affects only the systematic symbols. Since the code is systematic, the case where the burst affects only the parity symbols is trivial. For this reason, we examine next the case where the burst affects some systematic and parity symbols simultaneously. Suppose that a systematic symbol $u_i$, $i\in \{1,\ldots,k\}$, appears as a summand in the interleaved parity check $p_j$, $j\in \{N+1,\ldots,N+B\}$. If $B \vert k$, i.e., $r=0$, it follows from the definition of the interleaved parity checks and their positioning in the codeword that any burst of size $B'\leq B$ cannot erase $u_i$ and $p_j$ simultaneously. Namely, if $r=0$, the construction guarantees the availability of every systematic symbol in at least one coded symbol after any $B'\leq B$ burst erasures. Hence, all erased information symbols can be decoded using $p_{N+1},\ldots,p_{N+B}$. Also, if $B\nmid k$, i.e., $r>0$, one can easily show that any $B'$ burst erasures can be decoded if $B'\leq N+r$. Furthermore, it follows from the construction that the decoding delay is $\leq T$ for all recovered symbols.
{\em Case 3:} Suppose that the erasure pattern consists of a burst of size $B'\leq B$ in addition to $1\leq N'\leq N$ arbitrary erasures. We will focus on the case where $B'=B>N$. Consider the case where the burst affects the non-urgent symbols $u_{B},\ldots,u_{k}$. In this case, the following general rule applies. The code can correct $B$ burst erasures in addition to $N'\leq N$ arbitrary erasures if the number of erasures to be decoded in phase~2 of the decoding process (Section~\ref{decoding}) is $N_2<N$, where $N_2$ denotes the number of erasures to be decoded in phase~2. To understand this, recall that any burst of size $B$ can erase only one of the summands in $p_{N+1},\ldots,p_{N+B}$. Hence, if $N'$ arbitrary erasures occur in addition to the $B$ burst erasures, then at least $B-N'$ out of the $B$ interleaved parity checks $p_{N+1},\ldots,p_{N+B}$ will have only one erased summand (that can be recovered by simple addition/subtraction operation). Therefore, by the end of decoding phase~1, at least $B-N'$ out of the $B+N'$ erasures can be successfully decoded. Consequently, the number of erasures to be decoded in phase~2 satisfies $N_2 \leq B+N'-(B-N')=2N'$. Using the $(k+N,k)$ punctured code
(which is a systematic RS code), up to $N$ erasures can always be decoded in phase 2. Notice that if $N'\leq N/2$, then $N_2\leq N$, and hence, all erased symbols can be decoded successfully. Whereas if $N'>N/2$, then some erasure patterns, but not all, can be corrected depending on the locations of the $N'$ arbitrary erasures. We omit the discussion of more cases due to space limitations.
Furthermore, a key feature of our code is that it allows the recovery of a large number of erased symbols in many cases where the erasure pattern is not correctable. Note that this is contrary to codes such as $(n,k)$ systematic RS codes, where not a single erasure can be recovered when the total number of erasures is $> n-k$. In practice, this decoding feature translates into a low packet loss probability as we show in Section~\ref{simulnew}. The general condition under which an information symbol can be recovered with maximum delay $T$ is given in Proposition~\ref{prop1}.
\begin{proposition}
\label{prop1}
Let $\bm{\varepsilon}\in\{0,1\}^{n}$ denote an erasure pattern, where $\varepsilon_j=1$, $j=1,\ldots,n$, indicates that the codeword symbol $x_j$ is erased. Let $\mathbf{e}_i\in \{0,1\}^k$, $i=1,\ldots,k$, be the $i^{th}$ canonical basis vector, and $\Tilde{\mathbf{G}}(\bm{\varepsilon}_1^{j})\in \mathbb{F}_q^{k\times n^*}$ be the generator matrix of the code $\mathcal{C}$ (Section~\ref{cons}) punctured column-wise at the erased positions, i.e., at the indices of $\bm{\varepsilon}_1^{j}=(\varepsilon_1,\ldots,\varepsilon_j)$ whose entries are nonzero, with $n^*=j-\sum_{i=1}^j \varepsilon_i$. For a given $\bm{\varepsilon}$, an information symbol $u_i$ can be decoded with maximum delay $T$ if and only if $\mathbf{e}_i$ is in the column span of $\Tilde{\mathbf{G}}(\bm{\varepsilon}_1^{i+T^*})$, with $T^*=\min \{T,n-i\}$.
\end{proposition}
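In code, this membership test reduces to comparing matrix ranks over $\mathbb{F}_q$; the sketch below (assuming $q$ prime for the modular inverse) is an illustrative check rather than an optimized decoder.
\begin{verbatim}
# Decodability test of Proposition 1: u_i is recoverable with delay T iff
# e_i lies in the column span (over F_q, q prime) of the generator matrix
# restricted to the unerased positions seen up to time i + T*.
import numpy as np

def rank_mod_q(A, q):
    A = np.array(A, dtype=np.int64) % q
    rank, rows, cols = 0, A.shape[0], A.shape[1]
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if A[r, c]), None)
        if piv is None:
            continue
        A[[rank, piv]] = A[[piv, rank]]                       # swap pivot row up
        A[rank] = (A[rank] * pow(int(A[rank, c]), q - 2, q)) % q
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] = (A[r] - A[r, c] * A[rank]) % q         # eliminate column c
        rank += 1
    return rank

def decodable(G, erasures, i, T, q):
    """G: k x n generator matrix; erasures: length-n 0/1 vector; i, T 1-based."""
    k, n = G.shape
    horizon = min(i + T, n)                                   # i + T* columns
    keep = [j for j in range(horizon) if erasures[j] == 0]
    Gt = G[:, keep]
    e = np.zeros((k, 1), dtype=np.int64); e[i - 1, 0] = 1
    return rank_mod_q(np.hstack([Gt, e]), q) == rank_mod_q(Gt, q)
\end{verbatim}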
\begin{comment}
\setcounter{example}{2}
\begin{example}[Continued, Decoding Scenario 1, $T=10, B=4, N=2$]
We continue Example~\ref{ex3} given in Section~\ref{encoding} by providing a decoding example. Suppose that a burst of size $4$ erases the symbols $u_2,u_3,u_4,u_5$, and $1$ arbitrary erasure affects $u_8$. The erased symbols are highlighted in red:
$$\mathbf{x}=\left(u_1,{\color{red} u_2},{\color{red} u_3},{\color{red} u_4},{\color{red} u_5},u_6,u_7,{\color{red}u_8}, \begin{matrix} 4u_1+3{\color{red} u_2}+9{\color{red}u_3}+2{\color{red}u_4} \\ + \\ 8{\color{red}u_5}+7u_6+5u_7+10{\color{red} u_8} \end{matrix}, \begin{matrix} 6u_1+4{\color{red}u_2}+3{\color{red}u_3}+9{\color{red}u_4} \\ + \\ 2{\color{red}u_5}+8u_6+7u_7+5{\color{red} u_8} \end{matrix}, \begin{matrix} u_1 \\ + \\ {\color{red}u_5} \end{matrix},\begin{matrix} {\color{red}u_2} \\ + \\ u_6 \end{matrix},\begin{matrix} {\color{red}u_3} \\ + \\ u_7 \end{matrix},\begin{matrix} {\color{red} u_4} \\ + \\ {\color{red} u_8} \end{matrix} \right).$$
The urgent symbols $u_2,u_3$ can be recovered with delay $10$ from the interleaved parity checks $p_4$ and $p_5$, respectively. The non-urgent symbol $u_5$ can be recovered with delay $6$ from the interleaved parity check $p_3$. After the symbols $u_2, u_3,$ and $u_5$ are recovered using $p_3,p_4,$ and $p_5$, respectively, the only symbols yet to be decoded are $u_4$ and $u_8$. These two symbols can be recovered using the Reed-Solomon parities $p_1$ and $p_2$ by inverting the $2 \times 2$ matrix $\begin{pmatrix} 2 & 9 \\ 10 & 5 \end{pmatrix}$ in $\mathbb{F}_{11}$. The invertibility of $\mathbf{A}_1$ follows from the MDS property of the encoding. The decoding delays for $u_4$ and $u_8$ are $9$ and $5$, respectively. Note that the parity $p_6$ was not used in the decoding process, so the decoding would still have been successful if $p_6$ were also affected by an additional arbitrary erasure.
\end{example}
\setcounter{example}{2}
\begin{example}[Continued, Decoding Scenario 2, $T=10, B=4, N=2$]
\label{inv1}
We continue Example~\ref{ex3} given in Section~\ref{encoding} by providing another decoding example. Suppose that a burst of size $4$ erases the symbols $u_4,u_5,u_6,u_7$, and $2$ arbitrary erasures affect $u_8$ and $p_1$. The erased symbols are highlighted in red:
$$\mathbf{x}=\left(u_1,u_2,u_3,{\color{red} u_4},{\color{red} u_5},{\color{red} u_6},{\color{red} u_7},{\color{red}u_8}, {\color{red}\begin{matrix} 4u_1+3{\color{red} u_2}+9{\color{red}u_3}+2{\color{red}u_4} \\ + \\ 8{\color{red}u_5}+7u_6+5u_7+10{\color{red} u_8} \end{matrix}}, \begin{matrix} 6u_1+4u_2+3u_3+9{\color{red}u_4} \\ + \\ 2{\color{red}u_5}+8{\color{red}u_6}+7{\color{red}u_7}+5{\color{red} u_8} \end{matrix}, \begin{matrix} u_1 \\ + \\ {\color{red}u_5} \end{matrix},\begin{matrix} u_2 \\ + \\ {\color{red} u_6} \end{matrix},\begin{matrix} u_3 \\ + \\ {\color{red} u_7} \end{matrix},\begin{matrix} {\color{red} u_4} \\ + \\ {\color{red} u_8} \end{matrix} \right).$$
The symbols $u_5, u_6$, and $u_7$ are recovered with delay $6$ from $p_3, p_4,$ and $p_5$, respectively. Then, the symbols $u_4$ and $u_8$ are recovered using the parities $p_2$ and $p_6$ by inverting the $2 \times 2$ matrix $\begin{pmatrix} 9 & 1 \\ 5 & 1 \end{pmatrix}$ in $\mathbb{F}_{11}$. The invertibility of this matrix follows from the fact that the Cauchy matrix $\mathbf{M}$ has distinct elements in each column, and hence any vector composed of any 2 elements from a column in $\mathbf{M}$ is linearly independent of the ones vector $(1~~1)$. The decoding delays for $u_4$ and $u_8$ are $10$ and $5$, respectively.
\end{example}
\setcounter{example}{2}
\begin{example}[Continued, Decoding Scenario 3, $T=10, B=4, N=2$]
We continue Example~\ref{ex3} given in Section~\ref{encoding} by providing an additional decoding example. Suppose that a burst of size $4$ erases the symbols $u_1,u_2,u_3,u_4$, and $2$ arbitrary erasures affect $u_5$ and $p_3$. The erased symbols are highlighted in red:
$$\mathbf{x}=\left({\color{red} u_1},{\color{red} u_2},{\color{red} u_3},{\color{red} u_4},{\color{red} u_5},u_6,u_7,u_8, \begin{matrix} 4 {\color{red} u_1}+3{\color{red} u_2}+9{\color{red}u_3}+2{\color{red}u_4} \\ + \\ 8{\color{red}u_5}+7u_6+5u_7+10 u_8 \end{matrix}, \begin{matrix} 6 {\color{red} u_1}+4{\color{red}u_2}+3{\color{red}u_3}+9{\color{red}u_4} \\ + \\ 2{\color{red}u_5}+8u_6+7u_7+5u_8 \end{matrix}, {\color{red} \begin{matrix} u_1 \\ + \\ {\color{red}u_5} \end{matrix}},\begin{matrix} {\color{red}u_2} \\ + \\ u_6 \end{matrix},\begin{matrix} {\color{red}u_3} \\ + \\ u_7 \end{matrix},\begin{matrix} {\color{red} u_4} \\ + \\ u_8 \end{matrix} \right).$$
The symbol $u_1$ cannot be recovered within the maximum delay constraint $T=10$, and therefore $u_1$ is considered to be lost. The symbols $u_2, u_3$, and $u_4$ are recovered with delay $10$ from $p_4, p_5,$ and $p_6$, respectively. After recovering $u_2, u_3$, and $u_4$, the symbol $u_5$ can recovered from $p_1$ and $p_2$ with delay $9$ by inverting the $2 \times 2$ matrix $ \begin{pmatrix} 4 & 6 \\ 8 & 2 \end{pmatrix}$ in $\mathbb{F}_{11}$. Note that at this point, the symbol $u_1$ can also be recovered; however, for delay considerations we consider this symbol to be lost. Therefore, in this example, all symbols except $u_1$ were successfully recovered within the delay constraint $T=10$.
\end{example}
\begin{example}[Continued, Decoding Scenario 1, $T=16,B=4,N=4$]
We continue Example~\ref{ex4} given in Section~\ref{encoding} by providing a decoding example. Suppose that a burst of size $4$ erases the symbols $u_4,u_5,u_6,u_7$, and $2$ arbitrary erasures affect $u_{10}$ and $u_{12}$. The erased symbols are highlighted in red in the parities:
\begin{align*}
p_1 &= 7u_1+3u_2+5u_3+15{\color{red} u_4}+2{\color{red} u_5}+12{\color{red}u_6}+14{\color{red} u_7}+10u_8+4u_9+11{\color{red} u_{10}}+8u_{11}+16{\color{red} u_{12}}, \\
p_2 &= 13u_1+7u_2+3u_3+5{\color{red} u_4}+15{\color{red} u_5}+2{\color{red} u_6}+12{\color{red} u_7}+14u_8+10u_9+4{\color{red} u_{10}}+11u_{11}+8{\color{red} u_{12}},\\
p_3 &= 6u_1+13u_2+7u_3+3{\color{red} u_4}+5{\color{red} u_5}+15{\color{red} u_6}+2{\color{red} u_7}+12u_8+14u_9+10{\color{red} u_{10}}+4u_{11}+11{\color{red} u_{12}}, \\
p_4 &= 9u_1+6u_2+13u_3+7{\color{red} u_4}+3{\color{red} u_5}+5{\color{red} u_6}+15{\color{red} u_7}+2u_8+12u_9+14{\color{red} u_{10}}+10u_{11}+4{\color{red} u_{12}}, \\
p_5 &= u_1+{\color{red} u_5}+u_9, \\
p_6 &= u_2+{\color{red} u_6}+{\color{red} u_{10}}, \\
p_7 &= u_3+{\color{red} u_7}+u_{11}, \\
p_8 &= {\color{red} u_4}+u_8+{\color{red} u_{12}}.
\end{align*}
The symbol $u_5$ can be decoded with delay $12$ using $p_5$. The symbol $u_7$ can be decoded with delay $12$ using $p_7$. Then, the symbols $u_4,u_6, u_{10}, u_{11}$ can be decoded by inverting the $4 \times 4$ matrix
$$\begin{pmatrix}
15 & 5 & 3 & 7 \\
12 & 2 & 15 & 5 \\
11 & 4 & 10 & 14 \\
16 & 8 & 11 & 4
\end{pmatrix},$$ in $\mathbb{F}_{17}$.
The invertibility of follows from the MDS property of the encoding.
\end{example}
\setcounter{example}{3}
\begin{example}[Continued, Decoding Scenario 2, $T=16,B=4,N=4$]
\label{inv2}
We continue Example~\ref{ex4} given in Section~\ref{encoding} by providing an additional decoding example. Suppose that a burst of size $3$ erases the symbols $u_1,u_2,u_3$, and $5$ arbitrary erasures affect $u_{7}, u_{11},p_1,p_2,p_8$. The erased symbols are highlighted in red in the parities:
\begin{align*}
{\color{red} p_1} &{\color{red} = 7u_1+3u_2+5u_3+15{\color{red} u_4}+2{\color{red} u_5}+12{\color{red}u_6}+14{\color{red} u_7}+10u_8+4u_9+11{\color{red} u_{10}}+8u_{11}+16{\color{red} u_{12}}}, \\
{\color{red} p_2} &{\color{red} = 13u_1+7u_2+3u_3+5{\color{red} u_4}+15{\color{red} u_5}+2{\color{red} u_6}+12{\color{red} u_7}+14u_8+10u_9+4{\color{red} u_{10}}+11u_{11}+8{\color{red} u_{12}}},\\
p_3 &= 6{\color{red} u_1}+13{\color{red} u_2}+7{\color{red} u_3}+3u_4+5u_5+15u_6+2{\color{red} u_7}+12u_8+14u_9+10 u_{10}+4{\color{red} u_{11}}+11u_{12}, \\
p_4 &= 9{\color{red} u_1}+6{\color{red} u_2}+13{\color{red} u_3}+7u_4+3u_5+5u_6+15{\color{red}u_7}+2u_8+12u_9+14u_{10}+10{\color{red} u_{11}}+4u_{12}, \\
p_5 &= {\color{red}u_1}+u_5+u_9, \\
p_6 &= {\color{red}u_2}+u_6+u_{10}, \\
p_7 &= {\color{red}u_3}+{\color{red} u_7}+{\color{red}u_{11}}, \\
{\color{red} p_8} &{\color{red} = {\color{red} u_4}+u_8+u_{12}}.
\end{align*}
The symbols $u_1$ and $u_2$ are recovered with delay $16$ using $p_5$ and $p_6$, respectively. Then, the symbols $u_3,u_7, u_{11}$ can be decoded by inverting the $3 \times 3$ matrix
$$\begin{pmatrix}
7 & 13 & 1 \\
2 & 15 & 1 \\
4 & 10 & 1 \\
\end{pmatrix},$$ in $\mathbb{F}_{17}$. The invertibility of this matrix follows from the fact that the Cauchy matrix $\mathbf{M}$ has distinct elements in each column, and hence any vector composed of any 3 elements from a column in $\mathbf{M}$ is linearly independent of the ones vector $(1~~1~~1)$. The decoding delays for $u_3, u_7,$ and $u_{11}$ are $16, 12,$ and $8$, respectively.
\end{example}
\end{comment}
\section{Simulation Results}
\label{simulnew}
In this section, we compare the packet loss probability (PLP) of our proposed code with other existing constructions over the GE channel (Figure~\ref{fig2}). The empirical PLP is computed as the ratio of the total number of lost information packets to the total number of transmitted information packets. The properties of the simulated codes are shown in Table~\ref{tabn1}. Due to space limitations, we only show results for the case of \mbox{$\alpha=5\times 10^{-3}$}, $\beta=0.45$, $\epsilon_1=1$, and varying $\epsilon_0=\epsilon$.
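For reproducibility, a minimal GE-channel simulator is sketched below (parameter names follow the text; this is a generic sketch, not the exact simulation harness used for the reported curves).
\begin{verbatim}
# Gilbert-Elliot channel: a 2-state Markov chain with erasure probabilities
# eps0 (good state) and eps1 (bad state); alpha and beta are the
# good->bad and bad->good transition probabilities.
import random

def ge_erasures(length, alpha, beta, eps0, eps1, seed=0):
    rng = random.Random(seed)
    bad, out = False, []
    for _ in range(length):
        eps = eps1 if bad else eps0
        out.append(rng.random() < eps)              # True = packet erased
        # state transition for the next time slot
        bad = (rng.random() < alpha) if not bad else (rng.random() >= beta)
    return out
\end{verbatim}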
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=0.2]
\node[state, text width=1cm, align=center] (s) {Good $\epsilon_0$};
\node[state, text width=1cm, align=center, right=of s] (r) {Bad \\ $\epsilon_1$};
\draw[every loop]
(r) edge[bend right, auto=right] node {$\beta$} (s)
(s) edge[bend right, auto=right] node {$\alpha$} (r)
(s) edge[loop left] node {$1-\alpha$} (s)
(r) edge[loop right] node {$1-\beta$} (r);
\end{tikzpicture}
\caption{{\small 2-state Markov chain illustration of the GE channel. The erasure probabilities in the good and bad states are denoted by $\epsilon_0$ and $\epsilon_1$, respectively. The transition probabilities between the good and bad states are denoted by $\alpha$ and $\beta$.}}
\label{fig2}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& $T$ & $(n,k)$ & Rate & $(N,B)$ \\ \hline
MDS Code & $15$ & $(16,8)$ & $0.5$ &$(8,8)$ \\ \hline
Martinian-Trott Code~\cite{Martinian2007} & $15$ & $(30,15)$ & $0.5$ &$(1,16)$ \\ \hline
Domanovitz et al. Code~\cite{Domanovitz2020} & $15$ & $(24,12)$ & $0.5$ &$(4,12)$\\ \hline
New Code 1 & $15$ & $(22,11)$ & $0.5$ &$(4,7)$ \\ \hline
New Code 2 & $14$ & $(20,10)$ & $0.5$ &$(4,6)$ \\ \hline
\end{tabular}
\caption{{\small The properties of the codes simulated in Figure~\ref{newf1}.}}
\label{tabn1}
\end{table}
\begin{figure}[h!]
\vspace{-0.5cm}
\centering
\includegraphics[width=0.47\textwidth]{Figures/PEICASSP.pdf}
\caption{{\small PLP of the codes in Table~\ref{tabn1} over the GE channel with $\alpha=5\times 10^{-3}$, $\beta=0.45$, and varying $\epsilon$. The horizontal interleaving setting is considered (Section \ref{interleaving}) and the channel length is set to $10^7$.}}
\vspace{-0.7cm}
\label{newf1}
\end{figure}
\begin{comment}
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{Figures/Pall1.pdf}
\caption{Comparing the packet loss probability of different code construction over the GE channel while varying $\epsilon$. The simulation results are for $\alpha=5\times 10^{-4}, \beta=0.5$. The properties of the compared codes are given in Table~\ref{tabn1}. The channel length is set to $10^7$.}
\label{newf2}
\end{figure}
\end{comment}
The results in Figure~\ref{newf1} show that for $\epsilon=0$, the Martinian-Trott code, which is optimized for correcting bursts, achieves the best performance. However, its performance rapidly deteriorates as $\epsilon$ increases. For $0<\epsilon\leq 0.04$, our code with $(N=4, B=7, T=15)$ achieves the lowest PLP among all constructions. Also, for $\epsilon\geq 0.005$, our code with $(N=4, B=6, T=14)$ presents a twofold gain, since both the PLP and the maximum delay $T$ are lower than those of the other constructions. It is also noteworthy that these constructions require different field sizes. For $R=0.5$, the Martinian-Trott code can be constructed over $GF(2)$, the MDS code and our code require a linear field size $q\geq T$, and the Domanovitz code requires a quadratic field size $q=O(T^2)$.
In conclusion, the results demonstrate that optimizing code constructions over theoretical sliding-window channels does not necessarily give the best performance over practical channels such as the GE channel. Namely, the results in Figure~\ref{newf1} show that with the same code rate and lower field size, our code can outperform existing rate-optimal streaming codes in terms of both packet loss probability and maximum decoding delay. As part of future work, our goal is to derive a theoretical bound on the PLP of our code over the GE channel.
\bibliographystyle{ieeetr}
|
{
"arxiv_id": "2302.08658",
"language": "en",
"timestamp": "2023-02-20T02:05:42",
"url": "https://arxiv.org/abs/2302.08658",
"yymm": "2302"
} | \section{Introduction}
The development of the Internet of Things (IoT) has accelerated the construction of Intelligent Transport Systems (ITS), which have aroused great interest in industry and academia\cite{lin2020spatiotemporal,zhu2019parallel}. In recent years, an increasing number of traffic nodes, e.g., traffic sensors and loop detectors, have been mounted along roads, generating overwhelming amounts of data for enhanced traffic services, e.g., traffic flow forecasting \cite{liu2020dynamic,chen2020edge}.
To date, plenty of deep learning-based methods have been proposed to improve performance in traffic flow forecasting \cite{lv2014traffic,tan2019recognizing}.
Recurrent Neural Networks (RNNs) and their variants, e.g., the Long Short-Term Memory network (LSTM) \cite{zhao2017lstm} and the Gated Recurrent Unit network (GRU) \cite{li2017diffusion,zhang2018combining}, are widely used in forecasting traffic flow, owing to their effectiveness in capturing temporal patterns inside traffic data sequences.
Besides, these methods, when combined with graph-based approaches such as Graph Neural Networks (GNN) \cite{li2021hierarchical} or Graph Attention Networks (GAT) \cite{wu2018graph}, can extract spatial correlations among traffic nodes, thus further increasing forecasting accuracy.
Despite prominent performance improvements, these methods are based on a centralized learning strategy, which involves transmitting large quantities of privately-owned traffic data from decentralized traffic nodes to a central server for training prediction models.
This strategy yields considerable communication overhead and privacy concerns, since private information (e.g., plate numbers) contained in traffic data may be leaked during data exchange.
As a result, it is crucial to forecast traffic flow while keeping traffic data decentralized, so as to mitigate the communication burden and preserve privacy.
Federated Learning (FL), as a novel distributed computing paradigm, can resolve the problem above: traffic nodes (called clients) collaboratively train a prediction model based on their private data, and only exchange intermediate model parameters with the server for model aggregation \cite{yang2019federated,li2020federated}.
Owing to the merit of guaranteeing competitive forecasting performance without direct data transmission, there have been a few efforts on traffic flow forecasting that depict spatio-temporal correlation within the FL framework \cite{liu2020privacy,zhang2021fastgnn,meng2021cross}.
However, all of these approaches employ the batch learning manner, where prediction models are trained in advance on batches of historical traffic data.
This coarse-grained manner of optimization prevents the trained models from fully capturing the hidden temporal patterns in traffic flows.
Furthermore, due to traffic fluctuation, these optimized models easily suffer from poor prediction performance, when directly applied to the upcoming traffic data.
Therefore, these pre-trained models need to be repetitively trained, which incurs extra computation consumption, leads to delayed prediction, and fails to fulfill real-time prediction.
The Online Learning (OL) manner can break through the above-mentioned limitations: traffic nodes continually update prediction models and perform predictions as soon as they observe new traffic data.
Naturally, OL manner has the inherent advantages of real-time prediction and non-redundant deployment and hence applies to the task of traffic flow forecasting.
Traffic flows fluctuate severely at some time stamps, which poses great challenges to prediction models updated in batch learning manner.
However, in OL manner, the fine-grained optimization mechanism of prediction models can adapt to the dynamic changes of temporal patterns hidden in traffic flows, thus guaranteeing the prediction performance.
\textbf{To the best of our knowledge, there is no existing research focusing on traffic flow forecasting employing the OL manner in the FL framework}, and this article fills this research gap.
However, there remain two main challenges to be tackled.
First, we should dynamically evaluate the spatial correlation among clients at each round so as to guarantee prediction performance, which remarkably increases the difficulty of prediction and model update. Second, existing FL approaches adopt the averaging mechanism to yield fresh global models which have poor performance on the subsequent traffic data, due to traffic fluctuation.
Therefore, it is necessary to explore the fluctuation patterns of traffic flow in the process of model aggregation so as to increase the generalization ability of global models and eventually improve performance gains.
To this end, we propose a novel prediction method named \textbf{O}nline \textbf{S}patial-\textbf{T}emporal \textbf{C}orrelation-based \textbf{F}ederated \textbf{L}earning (FedOSTC) for traffic flow forecasting, which possesses excellent capabilities of dynamically capturing spatio-temporal dependence and boosting model adaptation regardless of traffic fluctuation.
Specifically, a GRU with encoder-decoder architecture is regarded as the prediction model and trained collaboratively by multiple traffic nodes.
The temporal patterns inside traffic flows are captured independently via encoders at clients (traffic nodes).
Then, the central server adaptively evaluates spatial closeness from temporal patterns using GAT, which can mitigate the effect of traffic fluctuation on spatial correlation.
Relying on the instantly-captured spatio-temporal correlation, clients perform prediction and then incrementally optimize model parameters using Online Gradient Descent (OGD).
Finally, based on the observation of periodic patterns of traffic flows, we generate fresh global models using the newly-developed period-aware aggregation mechanism rather than averaging mechanism, which can increase the generalization of global models for subsequent traffic flows.
The main contributions of this paper are described as follows:
\begin{itemize}
\item{ We propose a novel prediction method named FedOSTC for forecasting traffic flow in the FL framework, to tackle the problems of high communication burden and privacy leakage in centralized methods. Specifically, FedOSTC includes online evaluation of spatio-temporal correlation and incremental optimization of prediction models.}
\item{ We obtain temporal patterns inside traffic data at clients using GRU-based encoders, and the central server adopts GAT to dynamically evaluate spatial closeness based on temporal patterns transmitted from clients, aiming to adapt to the dynamic changes of spatial correlation aroused by traffic fluctuation.}
\item{ We incrementally update local models using OGD and further propose a period-aware aggregation mechanism, which can increase the generalization of global models and eventually improve performance gains.}
\item{We conduct comprehensive experiments to validate that our proposed FedOSTC outperforms the related state-of-the-art methods from the perspectives of prediction performance and model generalization ability.}
\end{itemize}
The remainder of this paper is organized as below.
Section II reviews the related literature.
The problem of traffic flow forecasting adopting OL manner in FL framework is formulated in Section III.
Then in Section IV, detailed information of our proposed method FedOSTC is elaborated.
Furthermore, extensive experiments are conducted in Section V. In the end, the paper is concluded in Section VI.
\section{Related Work}
\subsection{Traffic Flow Forecasting}
A great variety of methods have been proposed to increase the accuracy of traffic flow forecasting.
Overall, existing methods can be divided into two categories: parametric and non-parametric methods.
AutoRegressive Integrated Moving Average (ARIMA) and its variants are classical parametric methods and have been researched extensively \cite{lee1999application, williams2003modeling}.
However, this family of methods has limitations in regressing highly fluctuating traffic flows, because it rests on the assumption that traffic flows have a stationary distribution, which is rarely the case in reality.
Non-parametric methods are based on machine learning models, e.g., Graph Network (GN) \cite{wu2020comprehensive} and RNN \cite{ma2020daily}.
GN-based methods are devoted to evaluating spatial correlation among multiple nodes based on the constructed topology network, while RNN can capture temporal dependence inside traffic sequences.
Therefore, many researches combine GN approaches and RNN to measure spatio-temporal correlation and have achieved satisfactory performance \cite{guo2019attention, zhu2020novel}.
In \cite{guo2019attention}, Guo \emph{et al.} proposed an Attention based Spatio-Temporal Graph Convolutional Network (ASTGCN), which contains a spatio-temporal attention mechanism for dynamically capturing the spatial patterns and temporal features.
In \cite{zhu2020novel}, Zhu \emph{et al.} proposed a traffic prediction method based on Graph Convolutional Network (GCN) and RNN, where the topology graph of road network is utilized for depicting the spatial correlation, aiming to increase forecasting accuracy.
However, these researches all adopt the centralized training strategy, where the central server updates prediction models directly based on traffic data transmitted from decentralized traffic nodes, thus resulting in huge communication overhead and privacy leakage.
\subsection{Federated Learning for Traffic Flow Forecasting}
Some recent researches pay attention to forecasting traffic flow using FL \cite{qi2021privacy,liu2020privacy,zhang2021fastgnn,meng2021cross}.
In \cite{qi2021privacy}, Qi \emph{et al.} treated the vehicles as clients and introduced the blockchain to avoid a single point of failure. This scenario differs from ours.
In \cite{liu2020privacy,zhang2021fastgnn}, Liu \emph{et al.} and Zhang \emph{et al.} investigated a similar scenario where traffic sensors mounted along roads belong to certain organizations.
The traffic data detected by the sensors must be transmitted directly to the organizational servers for training, which also incurs an excessive communication burden.
In order to preserve privacy, multiple organizational servers adopt the FL manner to collaboratively train the prediction model. However, the two proposed methods are less than ideal, because only the spatial correlation among intra-organization nodes is evaluated while that among inter-organization nodes is ignored, which leads to low forecasting accuracy.
Different from the above researches, Meng \emph{et al.} proposed a model named cross-node federated graph neural network (CNFGNN) \cite{meng2021cross}.
In this work, each traffic sensor serves as a client, and GNN is utilized to capture the spatial relation at the central server.
However, in the backpropagation process of GNN, the central server has to communicate with clients for updating parameters, which incurs considerable communication overhead.
Besides, since the transportation graph composed of clients remains unchanged at each round, the captured spatial correlation among clients barely changes, thus failing to adapt to traffic fluctuation.
Given the dynamic changes of traffic flow, our work mainly focuses on designing an online method for dynamically capturing spatio-temporal correlation among traffic nodes in FL framework, which is quite different from the above researches.
\section{Problem Formulation and Preliminary Knowledge}
In this section, we firstly formulate the traffic flow forecasting problem with OL mode in FL framework, and then present the preliminary knowledge.
\subsection{Problem Formulation}
Suppose there is a central server and $N$ traffic nodes. All traffic nodes compose a transportation network, denoted as a directed graph $\mathcal{G}= (\mathcal{S}, \mathcal{E})$. $\mathcal{S}= \{s_{1}, s_{2}, \cdots, s_{N}\}$ represents the node set and $s_n$ denotes the $n$-th traffic node. $\mathcal{E} = \left( {{e_{m,n}}} \right) \in {\mathbb{R}^{N \times N}}$ denotes the adjacency matrix of these nodes.
If $s_m$ is adjacent to $s_n$, $e_{m,n}=1$ (otherwise $e_{m,n}=0$).
Let $\mathcal{A}_n$ represent the set of traffic nodes adjacent to $s_n$, and we define that $s_n \in \mathcal{A}_n$.
Let $x_{t,n}$ denote the traffic speed observed by $s_n$ at the $t$-th time stamp. Primary descriptions are listed in Table \ref{notation}.
\begin{table}
\caption{Primary Notations and Definitions in Section III}
\label{notation}
\centering
\begin{tabular}{lll}
\toprule
Notation& Definition\\
\midrule
$\mathcal{G}$& The transportation network composed by traffic nodes.\\
$s_n$& The $n$-th traffic node. \\
$e_{m,n}$& The adjacency between $s_n$ and $s_m$.\\
$\mathcal{A}_n$& Adjacent node set of $s_n$.\\
$x_{t,n}$& Traffic speed observed by $s_n$ at the $t$-th time stamp.\\
$f(\cdot)$& Prediction model.\\
$T$& Historical step.\\
$F$& Forecasting step.\\
$N$& Client number.\\
$X_{t,n}^{T}$& Training speed sequence of $s_n$ at the $t$-th round.\\
$\hat{X}_{t,n}^{F}$& Predicted values generated by $s_n$ at the $t$-th round.\\
$l$& Loss function.\\
$\mathcal{R}$& Maximum global round.\\
$w_t$& Global model parameters of the $t$-th round.\\
$REG$& Total regret over all clients.\\
\bottomrule
\end{tabular}
\end{table}
In this scenario, traffic nodes are regarded as clients to perform traffic flow forecasting using global model parameters from the central server and then incrementally update local models based on newly-observed speed data.
The central server aggregates local models uploaded by clients to yield the fresh global model.
At the $t$-th round, $s_n$ receives the global model parameters $w_t$ from the central server and forecasts traffic speeds of future $F$ time stamps based on traffic speeds of $T$ historical time stamps, which can be formulated as
\begin{equation}
\hat X_{t,n}^F = {f}(X_{t,n}^T;{w_t}),
\end{equation}
where $X_{t,n}^T= \left( x_{t-T+1,n}, x_{t-T+2,n}, \dots, x_{t,n}\right)$ denotes the sequence of traffic speeds from $t-T+1$ to $t$ observed by $s_n$, and $\hat{X}_{t,n}^F= \left(\hat{x}_{t+1,n}, \hat{x}_{t+2,n}, \dots, \hat{x}_{t+F,n}\right)$ represents the sequence of predicted traffic speeds from ${t+1}$ to ${t+F}$.
$f(\cdot)$ denotes the chosen prediction model.
Let $X_{t,n}^F = \left(x_{t+1,n}, x_{t+2,n}, \dots, x_{t+F,n} \right)$ denote the true traffic speeds of the future time stamps $s_n$ observes. Then $s_n$ computes the prediction error as follows:
\begin{equation}
{l}(X_{t,n}^F,f(X_{t,n}^T;w_t)) = {l}(X_{t,n}^F,\hat X_{t,n}^F;{w_t}),
\end{equation}
where $l$ denotes the loss function, and is adopted to evaluate the bias between the predicted and true values.
Let $\mathcal{R}$ represent the maximum number of global rounds.
We denote $w^*$ as the optimal model parameters that could be obtained in hindsight after the server has seen the data sequences of all $N$ clients over $\mathcal{R}$ rounds \cite{shalev2011online}. The total prediction regret of the $N$ clients is denoted by $REG$, which evaluates the difference in prediction loss between using the actual model parameters and using $w^*$ \cite{hoi2021online}.
$REG$ is formulated as
\begin{equation}
REG = \frac{1}{N} \sum\limits_{t = 1}^\mathcal{R} \sum\limits_{n = 1}^N ({{l}(X_{t,n}^F,\hat X_{t,n}^F;{w_t})} - {{l}(X_{t,n}^F,\hat X_{t,n}^F;{w^*})}).
\end{equation}
The objective of traffic flow forecasting adopting the OL manner in FL is to minimize the global regret over all $N$ clients, i.e., $\min REG.$ Since $w^*$ remains constant, we have to iteratively optimize the model parameters $w_t~(1\le t\le \mathcal{R})$ so as to achieve the minimum regret.
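As a concrete illustration (a minimal sketch, not part of FedOSTC itself), $REG$ in (3) can be computed from per-round, per-client losses once the losses under $w_t$ and under the hindsight-optimal $w^*$ are available:
\begin{lstlisting}
import numpy as np

def total_regret(loss_online, loss_optimal):
    # loss_online[t, n]: loss of client n at round t using w_t;
    # loss_optimal[t, n]: loss of client n at round t using w*.
    # Returns REG as defined in (3).
    diff = np.asarray(loss_online) - np.asarray(loss_optimal)
    return diff.sum() / diff.shape[1]
\end{lstlisting}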
\subsection{Preliminary Knowledge of Federated Learning with Online Learning Manner}
We integrate the OL manner into the widely used Federated Averaging (FedAvg) method \cite{mcmahan2017communication}, which is elaborated in Algorithm \ref{ag1}.
Specifically, at the $t$-th round,
each client concurrently optimizes its locally-owned prediction model via Online Gradient Descent (OGD) (Line 9-10) for $E$ epochs using the newly arriving traffic data.
This update manner differs from traditional offline learning mechanism where the prediction model is pre-trained based on historical traffic data obtained in advance.
After accomplishing the local training procedure, clients upload their updated local models to the central server.
Then, the central server performs model aggregation and yields the fresh global model based on averaging mechanism (Line 5).
\begin{algorithm}
\caption{FedAvg with OL manner}
\label{ag1}
\LinesNumbered
\KwIn{Initialized model $w_1$, learning rate $\eta$.}
\KwOut{The global model $w_{\mathcal{R}+1}$.}
\SetKwFunction{Fexecute}{ClientExecute}
\textsc{\textbf{ServerExecute:}}\\
\For {$t = 1, 2, \cdots, \mathcal{R}$}{
\For {$s_n$ in $\mathcal{S}$}{
{$w_{t+1,n} \gets $ \Fexecute($w_{t}$, $n$, $t$)}\\
}
{${w_{t+1}} \gets \frac{1}{N}\sum\nolimits_{{s_n} \in {\mathcal{S}}} {{w_{t+1,n}}} $}\\
}
\KwRet $w_{\mathcal{R}+1}$\\
\SetKwProg{Fn}{Function}{:}{}
\Fn{\Fexecute{$w$, $n$, $t$}}{
Make prediction via (1).\\
\For{$e = 1, 2, \cdots, E$}{
$w \gets w - \eta \nabla l(X_{t,n}^F,\hat X_{t,n}^F;w)$\\
}
\KwRet $w$}
\end{algorithm}
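For concreteness, a minimal PyTorch-style sketch of Algorithm~\ref{ag1} is given below; the prediction model, loss function, and per-round data stream are placeholders (assumed names, not part of the original implementation), and the aggregation simply averages the uploaded parameters as in Line 5.
\begin{lstlisting}
import copy
import torch

def client_execute(model, w_t, seq_in, seq_true, loss_fn, eta, E):
    # One client round: predict on the newly observed sequence, then run
    # E epochs of online gradient descent starting from the global model.
    model.load_state_dict(w_t)
    opt = torch.optim.SGD(model.parameters(), lr=eta)
    for _ in range(E):
        opt.zero_grad()
        pred = model(seq_in)                 # prediction, cf. (1)
        loss_fn(pred, seq_true).backward()   # prediction loss, cf. (2)
        opt.step()                           # OGD step
    return copy.deepcopy(model.state_dict())

def fedavg_online(model, clients_stream, loss_fn, eta=1e-3, E=5, rounds=100):
    # clients_stream(t) is assumed to yield one (seq_in, seq_true) pair
    # per client at round t.
    w = copy.deepcopy(model.state_dict())
    for t in range(rounds):
        local = [client_execute(model, w, x, y, loss_fn, eta, E)
                 for (x, y) in clients_stream(t)]
        # averaging aggregation (Line 5 of Algorithm 1)
        w = {k: torch.stack([lw[k].float() for lw in local]).mean(dim=0)
             for k in w}
    return w
\end{lstlisting}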
\section{Methodology}
In this section, we provide a detailed introduction of FedOSTC, and the execution process of FedOSTC is elaborated in Fig. \ref{framework}.
Specifically, we propose a novel spatio-temporal correlation evaluation mechanism which includes obtaining hidden temporal patterns inside traffic speed sequences using GRU-based encoders at clients and assessing spatial correlation among clients at the central server.
After that, we update local prediction models using OGD algorithm at clients.
Based on the periodic patterns inside traffic flows, we further propose a period-aware aggregation mechanism to generate the fresh global model.
Then, we elaborate the process of the proposed FedOSTC in Algorithm \ref{ag2}.
Last but not least, we analyze the convergence of FedOSTC, which theoretically validates its performance.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.49\textwidth]{framework.pdf}
\caption{The procedure of FedOSTC. (1) The server distributes the global model. (2) The client uploads the temporal pattern. (3) The server returns the updated temporal pattern. (4) The client uploads local model parameters.
}
\label{framework}
\end{figure}
\subsection{Spatio-Temporal Correlation Evaluation}
An efficient way of assessing spatio-temporal correlation is fundamental to satisfactory prediction performance. Therefore, in this subsection, we elaborate how the spatial and temporal correlation among clients is evaluated.
Firstly, clients capture the hidden temporal patterns inside speed sequences respectively with GRU-based encoders.
Then the central server measures the spatial correlation among clients using GAT and returns the updated hidden temporal patterns to clients.
Finally, clients input the updated hidden patterns into the decoders to generate the predicted results.
\subsubsection{\textbf{Obtaining Temporal Patterns from Speed Sequences at Clients}}
Considering the effectiveness of GRU in processing temporal sequences, we adopt GRU to capture the temporal patterns inside traffic speed sequences.
At the $t$-th round, given $x_{d,n} \in X_{t,n}^T~(t-T+1 \le d \le t)$, $s_n$ executes the encoding process as follows:
\begin{equation}
z_{t,n}^d = sigmoid (W_{t,n}^zx_{d,n}^{} + U_{t,n}^z\hat{h}_{t,n}^{d - 1}),
\end{equation}
\begin{equation}
r_{t,n}^d = sigmoid (W_{t,n}^rx_{d,n}^{} + U_{t,n}^r\hat{h}_{t,n}^{d - 1}),
\end{equation}
\begin{equation}
\tilde h_{t,n}^d = \tanh (W_{t,n}^hx_{d,n} + U_{t,n}^h(r_{t,n}^d \odot \hat{h}_{t,n}^{d - 1})),
\end{equation}
\begin{equation}
\hat{h}_{t,n}^d = (1 - z_{t,n}^d) \odot \hat h_{t,n}^{d - 1} + z_{t,n}^d \odot \tilde h_{t,n}^d,
\end{equation}
where $z_{t,n}^d$, $r_{t,n}^d$, $\tilde h_{t,n}^d$, and $\hat{h}_{t,n}^{d}$ represent the update gate, reset gate, candidate hidden state, and hidden state, respectively. $W_{t,n}^z, U_{t,n}^z, W_{t,n}^r, U_{t,n}^r, W_{t,n}^h$, and $U_{t,n}^h$ denote weight matrices and are contained in $w_t^{(e)}$, which denotes the encoder parameters included in $w_t$.
Let $h_{t,n} = \hat{h}_{t,n}^{t}$ denote the hidden temporal pattern learned by the encoder from $X_{t,n}^{T}$.
Then, $h_{t,n}$ is uploaded to the central server for evaluating spatial correlation among clients.
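For illustration, a hedged PyTorch sketch of the client-side encoder is shown below; \texttt{torch.nn.GRU} implements the gate equations (4)-(7) internally, and its final hidden state plays the role of $h_{t,n}$. The input and hidden dimensions are illustrative assumptions.
\begin{lstlisting}
import torch
import torch.nn as nn

class ClientEncoder(nn.Module):
    # GRU encoder: maps a speed sequence X_{t,n}^T to the temporal
    # pattern h_{t,n}, which is uploaded to the server.
    def __init__(self, input_dim=1, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)

    def forward(self, x):
        # x: (batch, T, input_dim); h_last: (1, batch, hidden_dim)
        _, h_last = self.gru(x)
        return h_last.squeeze(0)   # temporal pattern h_{t,n}

# usage: encode a sequence of T = 12 speed observations
enc = ClientEncoder()
h = enc(torch.randn(1, 12, 1))     # shape (1, 64)
\end{lstlisting}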
\subsubsection{\textbf{Evaluating Spatial Correlation Among Clients at the Server}}
In the scenario of traffic flow forecasting, due to the closeness of clients' locations, each client is adjacent to some others in the transportation network $\mathcal{G}$.
Based on this, traffic speed observations at different clients are definitely correlated with each other.
When a certain client makes prediction, historical traffic speed observations of the adjacent clients should be also taken into consideration.
However, since the raw traffic data stored at clients cannot be shared for privacy preservation, it is infeasible to capture spatial correlation among clients using raw data, as many centralized methods do.
Inspired by \cite{meng2021cross}, we observe that raw traffic sequences and temporal patterns have similar distributions.
The distributions of temporal patterns and traffic sequences after dimensional reduction by Principal Component Analysis (PCA) are depicted in Fig. \ref{data_and_state}.
We mark different clients in different colors and can observe that ``circles" and ``triangles" have the same distribution.
Based on this, we utilize temporal patterns output by encoders as the alternative to raw traffic speed for the assessment of spatial correlation among clients.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.45\textwidth]{data_and_state.pdf}
\caption{The distributions of raw speed sequences and temporal patterns after dimensional reduction by PCA.}
\label{data_and_state}
\end{figure}
In this subsection, we evaluate spatial correlation among clients using GAT, where the closeness of spatial relation is assessed by directly calculating attention coefficients of temporal patterns using attention mechanism.
The advantage is that the attention coefficients ultimately reflect the relations among the raw data, thus adapting to the dynamic changes of spatial correlation caused by traffic fluctuation.
Specifically, after receiving temporal patterns from clients, the central server firstly calculates the attention scores between two adjacent clients.
The attention score between $s_n$ and $s_m$ at the $t$-th round is denoted as $\xi_{m,n}^t (\xi_{m,n}^t \in \mathbb{R})$, which can be calculated by
\begin{equation}
{\xi_{m,n}^t} = a([{h_{t,n}}||{h_{t,m}}]),\forall {s_m} \in {\mathcal{A}_n},
\end{equation}
where $[\cdot||\cdot]$ represents the operation of concatenating.
$a(\cdot)$ is a projection function.
Furthermore, we denote the attention coefficient between $s_m$ and $s_n$ at the $t$-th round as $\alpha _{m,n}^t$ and it is calculated by
\begin{equation}
{\alpha _{m,n}^t} = \frac{{\exp ({\rm{LeakyReLU}}({\xi_{m,n}^t}))}}{{\sum\nolimits_{s_k \in {\mathcal{A}_n}} {\exp ({\rm{LeakyReLU}}({\xi_{k,n}^t}))} }}.
\end{equation}
Note that $\alpha _{m,n}^t$ indicates the correlation strength between $s_m$ and $s_n$.
The larger $\alpha _{m,n}^t$ is, the more related $s_m$ is to $s_n$.
Finally, the central server updates $h_{t,n}$ according to
\begin{equation}
{h'_{t,n}} = sigmoid (\sum\nolimits_{{s_m} \in {\mathcal{A}_n}} {{\alpha_{m,n}^t}{h_{t,m}}} ),
\end{equation}
where ${h'_{t,n}}$ denotes the updated version of $h_{t,n}$.
It is worth noting that ${h'_{t,n}}$ contains not only the temporal pattern inside $X_{t,n}^{T}$, but also the temporal patterns inside speed sequences from adjacent clients.
Then, the central server transmits ${h'_{t,n}}$ back to $s_n$ for generating predicted values.
The decoder of $s_n$ takes $h'_{t,n}$ as input and outputs the predicted results. $w_t^{(d)}$ denotes the decoder parameters and is also included in $w_t$. Each decoder is also a GRU whose execution follows the process of (4)-(7), followed by a fully-connected layer that produces the final prediction $\hat{X}_{t,n}^{F}$.
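A minimal sketch of the server-side attention step in (8)-(10) follows; modelling the projection $a(\cdot)$ as a single learnable linear layer over the concatenated pair is a common choice in GAT implementations but an assumption here, and the adjacency sets $\mathcal{A}_n$ are encoded as a boolean matrix.
\begin{lstlisting}
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    # Updates the temporal patterns h_{t,n} with attention over adjacent clients.
    def __init__(self, hidden_dim=64):
        super().__init__()
        self.a = nn.Linear(2 * hidden_dim, 1, bias=False)  # projection a(.)

    def forward(self, h, adj):
        # h: (N, hidden_dim) temporal patterns;
        # adj: (N, N) boolean, adj[n, m] = True iff s_m is in A_n.
        N = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(N, N, -1),
                           h.unsqueeze(0).expand(N, N, -1)], dim=-1)
        scores = F.leaky_relu(self.a(pairs).squeeze(-1))    # xi_{m,n} in (8)
        scores = scores.masked_fill(~adj, float('-inf'))
        alpha = torch.softmax(scores, dim=-1)               # alpha_{m,n} in (9)
        return torch.sigmoid(alpha @ h)                     # h'_{t,n} in (10)
\end{lstlisting}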
\subsection{Prediction Model Update}
In this subsection, we illustrate the process of prediction model update, which includes incrementally optimizing local models at clients and the period-aware model aggregation mechanism.
\subsubsection{\textbf{Incremental Local Optimization at Clients}}
At the $t$-th round, $s_n$ outputs the predicted result $\hat{X}_{t,n}^{F}$, followed by calculating the prediction loss based on (2).
Then, $s_n$ executes the backpropagation process to compute the current gradient of encoder and decoder.
At the $e$-th local epoch, the model parameters of $s_n$ are optimized using OGD as follows \cite{hoi2021online}:
\begin{equation}
g_{t ,n}^{(e)} = \nabla {l}(X_{t,n}^F,\hat X_{t,n}^F;
w_{t,n}^{(e - 1)}),
\end{equation}
\begin{equation}
w_{t,n}^{(e)} = w_{t,n}^{(e - 1)} - \eta g_{t,n}^{(e)},
\end{equation}
where $g_{t ,n}^{(e)}$ and $w_{t,n}^{(e)}$ denote the gradient and the local model parameters of $s_n$ at the $e$-th epoch in the $t$-th global round respectively.
$\eta$ represents the learning rate and $w_{t,n}^{(0)} = {w_t}$.
$s_n$ executes the above process for $E$ epochs, and then transmits the up-to-date local model parameters $w_{t+1,n}$ ($w_{t+1,n}=w_{t,n}^{(E)}$) to the central server for aggregation.
\subsubsection{\textbf{Period-Aware Aggregation Mechanism at the Server}}
In traditional FL methods, such as FedAvg \cite{mcmahan2017communication}, the server aggregates local model parameters, and generates the fresh global model $w_{t+1}$ at the $t$-th round as
\begin{equation}
{w_{t+1}} = \frac{1}{N}\sum\limits_{n = 1}^N {{w_{t+1,n}}.}
\end{equation}
It is clear that $w_{t+1}$ is updated based on the traffic data of the $t$-th round.
Since traffic sequences at different rounds have different distributions, this aggregation mechanism makes $w_{t+1}$ more applicable to the traffic data of the $t$-th round than $w_t$ is.
Therefore, if the server could use $w_{t+1}$ as the initial model parameters at the beginning of the $t$-th round, the prediction accuracy would increase.
However, this is infeasible, since $w_{t+1}$ is only generated at the end of the $t$-th round.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.5\textwidth]{traffic_flow.pdf}
\caption{Raw traffic flows of three clients in 5 days.}
\label{traffic_speed}
\end{figure}
Furthermore, we discover that each traffic flow has the same built-in period.
As shown in Fig. \ref{traffic_speed}, each client has a similar distribution every 288 time stamps (clients observe traffic speeds every 5 minutes, and therefore one day is split into 288 time stamps), with only a few exceptions.
Let $\mathcal{T}$ denote the period of traffic flows.
It is reasonable to deem that the local model parameters $w_{t,n}$ and the global model parameters $w_{t+1}$ are similar to $w_{t+\mathcal{T},n}$ and $w_{t+\mathcal{T}+1}$, respectively.
Inspired by this, at the $t$-th round, we calculate the correlation coefficients between previous local models $w_{t,n} (\forall s_n \in \mathcal{S})$ and the fresh global model $w_{t+1}$ as
\begin{equation}
{\rho _{t,n}} = \frac{\exp (-||w_{t,n} - w_{t+1}||)}{\sum\nolimits_{m = 1}^N {\exp (-||w_{t,m} - w_{t+1}||)}}, \forall s_n \in \mathcal{S},
\end{equation}
where $\rho_{t,n}$ denotes the correlation coefficient between $w_{t,n}$ and $w_{t+1}$.
If the server performed weighted aggregation based on $\rho_{t,n}$ at the end of the ($t-1$)-th round, i.e., $\sum\nolimits_{{s_n} \in {\mathcal{S}}} \rho_{t,n} w_{t,n}$, the newly-generated global model $w_t$ would fit the data sequences of the $t$-th round better than the original $w_t$ yielded by the averaging mechanism in (13).
Furthermore, $\rho_{t,n}$ also approximates the similarity between $w_{t+\mathcal{T},n}$ and $w_{t+\mathcal{T}+1}$, and thus it is directly adopted as the aggregation weight of $s_n$ at the $(t+\mathcal{T}-1)$-th round.
Specifically, the execution process of the proposed period-aware aggregation mechanism is as follows.
\begin{itemize}
\item {If $t \le \mathcal{T}$, the central server firstly generates the fresh global model $w_t$ as (13) and then calculates correlation coefficients as (14).
}
\item {If $t > \mathcal{T}$, the central server calculates the correlation coefficients as (14), and then yields $w_{t+1}$ as
\begin{equation}
{w_{t+1}} = \sum\limits_{n = 1}^N {{\rho _{t+1-\mathcal{T},n}}} \cdot {w_{t+1 ,n}}.
\end{equation}
}
\end{itemize}
It is reasonable to deem that the averaging aggregation mechanism formulated in (13) is a special case of the weighted aggregation in (15), with $\rho_{t+1-\mathcal{T},n}=\frac{1}{N},\forall{s_n \in \mathcal{S}}$.
By this means, the generated fresh global model can adapt better to the subsequent traffic data, which yields more satisfactory prediction performance.
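The following minimal sketch (operating on flattened parameter vectors, an assumption made for readability) illustrates the correlation coefficients in (14) and the weighted aggregation in (15); when no weight vector is supplied, it reduces to the averaging in (13).
\begin{lstlisting}
import numpy as np

def correlation_weights(prev_local_ws, fresh_global_w):
    # rho_{t,n} in (14): softmax of negative distances between the previous
    # local models w_{t,n} and the fresh global model w_{t+1}.
    d = np.array([np.linalg.norm(w - fresh_global_w) for w in prev_local_ws])
    e = np.exp(-d)
    return e / e.sum()

def aggregate(local_ws, rho=None):
    # Weighted aggregation (15); with rho=None it reduces to averaging (13).
    # rho is expected to be the weight vector computed one period earlier.
    W = np.stack(local_ws)
    if rho is None:
        return W.mean(axis=0)
    return (np.asarray(rho)[:, None] * W).sum(axis=0)
\end{lstlisting}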
\begin{algorithm}
\caption{FedOSTC}
\label{ag2}
\LinesNumbered
\KwIn{Initialized global model $w_1$, traffic flow sequence [$x_{t,n}^{T}$, $x_{t,n}^{F}$], maximum global round $\mathcal{R}$, maximum local epoch $E$.}
\KwOut{The global model $w_{\mathcal{R}+1}$.}
\SetKwFunction{Fexecute}{ClientExecute}
\SetKwFunction{Fencode}{ClientEncode}
\SetKwFunction{Fdecode}{ClientDecode}
\textsc{\textbf{ServerExecute:}}\\
\For {$t = 1, 2, \cdots, \mathcal{R}$}{
\For {$s_n \in \mathcal{S}$}{
$w_{t+1,n} \gets$ \Fexecute($w_t$, $n$, $t$)\\
\If {$t \le \mathcal{T}$}{
Generate $w_{t+1}$ via (13).\\
}
\Else{
Generate $w_{t+1}$ via (15).\\
}
Calculate $\rho_{t,n}$ via (14).\\}
}
\KwRet $w_{\mathcal{R}+1}$.\\
\SetKwProg{Fn}{Function}{:}{}
\Fn{\Fexecute{$w_t$, $n$, $t$}}{
[$w_{t,n,e}^{(0)}$, $w_{t,n,d}^{(0)}$] = $w_{t,n}^{(0)}$ = $w_t$.\\
\For {$e = 1, 2, \cdots, E$}{
$h_{t,n}$ $\gets$ \Fencode($x^{T}_{t,n}$, $w_{t,n,e}^{(e)}$, $n$).\\
\If{$e = 1$}{
Get $h^{'}_{t,n}$ via (8)-(10).\\
}
$\hat{x}^{F}_{t,n} \gets $ \Fdecode($h_{t,n}$, $h^{'}_{t,n}$, $w_{t,n,d}^{(e)}$, $n$).\\
$w_{t,n}^{(e)}$ = [$w_{t,n,e}^{(e)}$, $w_{t,n,d}^{(e)}$].\\
$loss$ $\gets$ ${l}(X_{t,n}^F,\hat X_{t,n}^F;w_{t,n}^{(e)}$)\\
Update $w_{t,n}^{(e)}$ via (11)(12).\\
}
$w_{t+1,n}$ = $w_{t,n}^{(E)}$ = [$w_{t,n,e}^{(E)}$, $w_{t,n,d}^{(E)}$].\\
\KwRet $w_{t+1,n}$.\\
}
\SetKwProg{Fn}{Function}{:}{}
\Fn{\Fencode{$x^{T}_{t,n}$, $w_{t,n,e}^{(e)}$, $n$}}{
With $x_{t,n}^T$ input, get $h_{t,n}$ through (4)-(7).\\
\KwRet $h_{t,n}$\\
}
\SetKwProg{Fn}{Function}{:}{}
\Fn{\Fdecode{$h_{t,n}$, $h^{'}_{t,n}$, $w_{t,n,d}^{(e)}$, $n$}}{
Get $\hat{x}^{F}_{t,n}$ from GRU.\\
\KwRet $\hat{x}^{F}_{t,n}$.
}
\end{algorithm}
\subsection{\textbf{Execution Process of FedOSTC}}
Algorithm \ref{ag2} elaborates the process of FedOSTC, where $w_{t,n,e}^{(e)}$ and $w_{t,n,d}^{(e)}$ denote the parameters of encoder and decoder on $s_n$ at the $e$-th epoch in the $t$-th round respectively.
At each global round, the server transmits current global model to clients.
Encoders at clients input traffic sequences and output the temporal patterns of these traffic sequences (Line 23-25).
The server evaluates the spatial correlation among clients using GAT, and then updates hidden states from clients (Line 16).
When clients receive the updated hidden states from the server, they make prediction, calculate prediction loss and optimize parameters of encoders and decoders via OGD, which is iterated for $E$ epochs (Line 17-20).
Then clients transmit their updated model parameters to the central server for aggregation using the period-aware weighted aggregation mechanism (Line 5-9).
The process continues until the maximum round $\mathcal{R}$ arrives.
\subsection{Convergence Analysis of FedOSTC}
In this subsection, the convergence analysis of FedOSTC is demonstrated theoretically. Based on the theory of Online Convex Optimization (OCO) \cite{shalev2011online}, we obtain the upper bound of $REG$.
For simplicity, we reformulate ${l}(X_{t,n}^F,\hat X_{t,n}^F;{w_t})$ as $l(w_t)$, since the loss and prediction results are dependent on the adopted model parameters.
Firstly, some assumptions are introduced.
\newtheorem{assumption}{Assumption}
\begin{assumption}
The loss function $l(\cdot)$ is L-smooth, i.e., $l(x) - l(y) \le \left\langle {\nabla l(y),x - y} \right\rangle + \frac{L}{2}||x - y|{|^2}, \forall x,y, \exists L > 0.$
\end{assumption}
\begin{assumption}
The gradient of loss function $l(\cdot)$ is uniformly bounded, i.e., ${\rm \mathbb{E}}(||\nabla {l}( \cdot )|{|^2}) \le {G^2}.$
\end{assumption}
\begin{assumption}
REG is bounded below, i.e., $REG_{\min} = REG^{*} > -\infty.$
\end{assumption}
To facilitate the analysis of the convergence rate of FedOSTC, the period-aware weighted aggregation mechanism described in (15) is reformulated.
\newtheorem{lemma}{Lemma}
\begin{lemma}
The aggregation mechanism described in (15) can be reformulated as
\begin{equation}
{w_{t + 1}} = {w_t} - \eta \sum\limits_{n = 1}^N {\sum\limits_{e = 1}^E {{\rho _{t,n}} \cdot g_{t,n}^{(e)}} } .
\end{equation}
\end{lemma}
Therefore, the dynamic weighting aggregation mechanism can be regarded as a relational expression between $w_t$ and $w_{t+1}$. The proof of {\scshape Lemma} 1 is elaborated as follows.
\begin{proof}
With (11) and (12), we can get
\begin{align}
w_{t,n}^{(E)} &= w_{t,n}^{(E - 1)} - \eta g_{t,n}^{(E)}
= w_{t,n}^{(E - 2)} - \eta g_{t,n}^{(E - 1)} - \eta g_{t,n}^{(E)}
\notag
\\
&= \cdots = w_{t,n}^{(0)} - \eta g_{t,n}^{(1)} - \cdots - \eta g_{t,n}^{(E - 1)} - \eta g_{t,n}^{(E)}
\notag
\\
&= {w_t} - \eta \sum\limits_{e = 1}^E {g_{t,n}^{(e)}}.
\end{align}
Hence, we can complete the proof as
\begin{align}
{w_{t + 1}} &= \sum\limits_{n = 1}^N {{\rho _{t,n}} {w_{t + 1,n}}} = \sum\limits_{n = 1}^N {{\rho _{t,n}}({w_t} - \eta \sum\limits_{e = 1}^E {g_{t,n}^{(e)}} )}
\notag
\\
&= \sum\limits_{n = 1}^N {{\rho _{t,n}}{w_t} - \eta \sum\limits_{n = 1}^N {\sum\limits_{e = 1}^E {{\rho _{t,n}} g_{t,n}^{(e)}} } }
\notag
\\
&= {w_t} - \eta \sum\limits_{n = 1}^N {\sum\limits_{e = 1}^E {{\rho _{t,n}} g_{t,n}^{(e)}} }.
\end{align}
\end{proof}
\newtheorem{theorem}{Theorem}
\begin{theorem}
With {\scshape Assumption} 1-3 and {\scshape Lemma} 1 held, after $\mathcal{R}$ rounds, $REG$ in FedOSTC has an upper bound:
\begin{equation}
{\rm \mathbb{E}}[REG_{\min}] \le {\rm{(1}} + EN + \frac{L}{2}EN{\rm{)}}\eta \mathcal{R}{G^2}.
\end{equation}
\end{theorem}
{\scshape Theorem} 1 indicates that the minimum regret of FedOSTC has an upper bound. Furthermore, $REG_{\min}=o(\mathcal{R})$ indicates that FedOSTC performs as well as the optimal parameters $w^*$. Therefore, FedOSTC is efficient in tackling the issue of traffic flow forecasting. The proof of {\scshape Theorem} 1 is detailed below.
\begin{proof}
Since $l$ is L-smooth, we can transform {\scshape Assumption} 1 as
\begin{align}
& {l}({w_t}) - {l}({w_{t + 1}})
\le \left\langle {\nabla {l}({w_{t + 1}}),{w_t} - {w_{t + 1}}} \right\rangle + \frac{L}{2}||{w_t} - {w_{t + 1}}|{|^2}
\notag
\\
&= \left\langle {\nabla {l}({w_{t + 1}}),\eta \sum\limits_{e = 1}^E {\sum\limits_{n = 1}^N {{\rho _{t,n}}g_{t,n}^{(e)}} } } \right\rangle
{\rm{ }} + \frac{L}{2}||\eta \sum\limits_{e = 1}^E {\sum\limits_{n = 1}^N {{\rho _{t,n}}g_{t,n}^{(e)}} } |{|^2}
\notag
\\
&\le \eta ||\nabla {l}({w_{t + 1}}) - \sum\limits_{e = 1}^E {\sum\limits_{n = 1}^N {{\rho _{t,n}}g_{t,n}^{(e)}} } |{|^2}
{\rm{ }} + \frac{L}{2}||\eta \sum\limits_{e = 1}^E {\sum\limits_{n = 1}^N {{\rho _{t,n}}g_{t,n}^{(e)}} } |{|^2}
\notag
\\
&\le \eta ||\nabla {l}({w_{t + 1}})|{|^2} + (1+\frac{L}{2})\eta \sum\limits_{e = 1}^E {\sum\limits_{n = 1}^N {||{\rho _{t,n}}g_{t,n}^{(e)}} } |{|^2}
\end{align}
With {\scshape Assumption} 2, the inequation above can be reformulated as
\begin{align}
{\mathbb{E}}[{l}({w_t}) - {l}({w_{t + 1}})]
&\le \eta {\rm \mathbb{E}}||\nabla {l}({w_{t + 1}})|{|^2}
\notag
\\
& + (1+\frac{L}{2})\eta \sum\limits_{e = 1}^E {\sum\limits_{n = 1}^N {{\rm \mathbb{E}}||{\rho _{t,n}}g_{t,n}^{(e)}} } |{|^2}
\notag
\\
&\le \eta {G^2} + (1+\frac{L}{2})\eta EN{G^2}.
\end{align}
With {\scshape Assumption} 3, the minimum regret can be obtained as
\begin{align}
{\mathbb{E}}[REG_{\min}]
&= {\rm \mathbb{E}}\left\{ {\frac{1}{N}\sum\limits_{n = 1}^N {\sum\limits_{t = 1}^{\mathcal{R}} {[{l}({w_t}) - {l}({w^*})]} } } \right\}
\notag
\\
&\le {\rm \mathbb{E}}\left\{ {\frac{1}{N}\sum\limits_{n = 1}^N {\sum\limits_{t = 1}^{\mathcal{R}} {[{l}({w_t}) - {l}({w_{t + 1}})]} } } \right\}
\notag
\\
&\le \frac{1}{N}\sum\limits_{n = 1}^N {\sum\limits_{t = 1}^{\mathcal{R}} {(\eta {G^2} + \eta EN{G^2} + \frac{L}{2}\eta EN{G^2})} }
\notag
\\
&= (1+EN + \frac{L}{2}EN{\rm{)}}\eta {\mathcal{R}}{G^2}.
\end{align}
\end{proof}
\section{Experiments}
\begin{table*}[h]
\centering
\caption{Comparison in Prediction Performance of Six Methods on Two Datasets with Different Forecasting Steps}
\renewcommand\arraystretch{1.4}
\small
\begin{tabular}{c||cc|cc|cc}
\hline
\multirow{3}[6]{*}{Methods} & \multicolumn{6}{c}{PEMS-BAY}\\
\cline{2-7} & \multicolumn{2}{c|}{5min ($F=1$)} & \multicolumn{2}{c|}{30min ($F=6$)} & \multicolumn{2}{c}{1h ($F=12$)} \\
\cline{2-7} & RMSE & MAE & RMSE & MAE & RMSE & MAE \\
\hline
CenterOff & 1.898 & 1.019 & 3.602 & 1.736 & 4.693 & 2.255 \\
CenterOn & 1.614 & \underline{1.007} & 2.243 & 1.353 & \underline{2.327} & \underline{1.371} \\
FedAvgOff & 1.802 & 1.023 & 3.717 & 1.784 & 5.253 & 2.406 \\
FedAvgOn & \textbf{0.996} & \textbf{0.996} & \underline{1.952} & \underline{1.712} & 2.661 & 2.306\\
CNFGNN & 1.691 & 0.983 & 3.758 & 1.882 & 5.265 & 2.523 \\
\hline
FedOSTC & \underline{1.014} & 1.014 & \textbf{1.450} & \textbf{1.209} & \textbf{1.601} & \textbf{1.254} \\
\hline
$\uparrow$ & -0.108 & -0.018 & +0.502 & +0.503 & +0.726 & +0.117\\
\hline
\hline
\multirow{3}[6]{*}{Methods} & \multicolumn{6}{c}{METR-LA} \\
\cline{2-7} & \multicolumn{2}{c|}{5min ($F=1$)} & \multicolumn{2}{c|}{30min ($F=6$)} & \multicolumn{2}{c}{1h ($F=12$)} \\
\cline{2-7} & RMSE & MAE & RMSE & MAE & RMSE & MAE \\
\hline
CenterOff & 5.796 & 3.248 & 7.975 & 4.392 & 9.346 & 5.184 \\
CenterOn & 4.949 & 3.261 & 5.889 & \underline{3.639} & 6.137 & \underline{3.673} \\
FedAvgOff & 5.843 & 3.321 & 8.163 & 4.445 & 9.522 & 5.223 \\
FedAvgOn & \textbf{2.945} & \textbf{2.945} & \underline{4.588} & 3.877 & \underline{5.714} & 4.744 \\
CNFGNN & 5.705 & 3.157 & 8.160 & 4.536 & 9.523 & 5.242 \\
\hline
FedOSTC & \underline{3.006} & \underline{3.006} & \textbf{4.129} & \textbf{3.353} & \textbf{4.598} & \textbf{3.484} \\
\hline
$\uparrow$ & -0.061 & -0.061 & +0.459 & +0.286 & +1.116 & +0.189\\
\hline
\end{tabular}%
\label{resultes}%
\end{table*}%
In this section, comprehensive experiments are conducted to validate the effectiveness of our proposed method FedOSTC. Firstly, the system configuration is introduced. Then, the prediction performance of FedOSTC is compared with that of five baselines.
Furthermore, we compare the generalization ability of the global models in the FL-based methods. Finally, additional experiments are conducted to explore the effect of local epoch on prediction performance.
\subsection{System Configuration}
\subsubsection{\textbf{Datasets and Metrics}}
The experiments are conducted on two real-world datasets, i.e., PEMS-BAY and METR-LA \cite{li2017diffusion}. PEMS-BAY contains traffic speeds from 01/01/2017 to 31/05/2017 collected by 325 sensors. METR-LA contains speed information from 207 sensors ranging from 01/03/2012 to 30/06/2012. In the two datasets, sensors observe traffic speed every 5 minutes. We randomly select 50 sensors as clients and utilize the speed data from Sunday to Thursday in May for the experiments. The adjacency matrices of clients for the two datasets are constructed as in \cite{meng2021cross}.
Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) are adopted to evaluate the prediction error.
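Both metrics follow their standard definitions, as in the short sketch below (not tied to the original evaluation code):
\begin{lstlisting}
import numpy as np

def rmse(y_true, y_pred):
    err = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(err ** 2)))

def mae(y_true, y_pred):
    err = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.mean(np.abs(err)))
\end{lstlisting}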
\subsubsection{\textbf{Experiment Setting}}
FedOSTC is implemented with PyTorch and all experiments are conducted on a server with an Intel(R) Xeon(R) Gold 6230 CPU and eight Nvidia Tesla V100S-PCIe GPUs. In all experiments, the encoder and decoder are each a single GRU layer, with 64 and 128 cells respectively. $T$, $E$, and $\eta$ are set to 12, 5, and 0.001 respectively. The settings of the other methods are the same as in the respective literature, unless mentioned otherwise.
\subsection{Comparisons of Forecasting Performance}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=1.0\textwidth]{RMSE.pdf}
\caption{Prediction RMSEs of clients.}
\label{rmse}
\end{figure*}
\subsubsection{\textbf{Baseline Methods}}
We compare FedOSTC with five baseline methods described as follows.
\begin{itemize}
\item {
\textbf{CenterOff}\cite{cho2014learning}: The central server trains the prediction model using traffic data uploaded from traffic nodes in a batch learning manner. Spatial correlation among traffic nodes is not evaluated.
}
\item{
\textbf{CenterOn}: The execution process is the same as CenterOff, except that the central server updates the prediction model online.
}
\item{
\textbf{FedAvgOff}\cite{mcmahan2017communication}: In the FedAvg method, clients update the prediction model in a batch learning manner and the server aggregates the updated local models via the averaging mechanism. Spatial correlation among clients is not evaluated.}
\item{
\textbf{FedAvgOn}: It is the same as FedAvgOff, except that clients update the prediction model in the OL manner.}
\item{
\textbf{CNFGNN}\cite{meng2021cross}: The execution process is like FedAvgOff, except that the server evaluates spatial dependence using GNN.}
\end{itemize}
The prediction errors of FedOSTC and the baselines on the two datasets with different forecasting steps are presented in Table \ref{resultes}. Therein, `$\uparrow$' denotes the performance gain of FedOSTC over the best-performing baseline.
If the forecasting step is shorter ($F$ is smaller), the speed sequence to be forecasted ($X_{t,n}^F$) is more likely to have a distribution similar to that of the historical speed sequence ($X_{t,n}^T$). This is why prediction errors increase with longer forecasting steps.
When $F=1$, the performance gains of FedOSTC are negative, but their absolute values are negligibly small.
However, when $F=6$ and 12, FedOSTC performs best and generates considerable performance gains.
In terms of both RMSE and MAE, CenterOn and FedAvgOn outperform CenterOff and FedAvgOff respectively, which indicates the high efficiency of the OL manner in the task of traffic flow forecasting. This is because the prediction model is incrementally optimized based on fine-grained speed data, which can sufficiently capture the dynamic changes in the speed data.
Among the three methods adopting the batch learning manner, CenterOff achieves the best prediction performance on both datasets, showing that the centralized method (CenterOff) performs better in evaluating spatio-temporal correlation among clients than the FL methods (FedAvgOff and CNFGNN). However, our proposed FedOSTC performs best among the three methods adopting the OL manner, which indicates the effectiveness of the spatio-temporal correlation evaluation mechanism and the period-aware weighted aggregation mechanism.
To further evaluate the high-efficiency of FedOSTC in traffic flow forecasting, ground truth values and prediction results of FedOSTC and CNFGNN are illustrated in Fig. \ref{y_and_y_pred}.
It is evident that FedOSTC yields much smaller deviations than CNFGNN, the best-performing existing FL method. Furthermore, the distribution of the prediction values from FedOSTC is similar to that of the ground truth values.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=1\textwidth]{y_pred_and_y.pdf}
\caption{Ground truth and forecasting values of CNFGNN and FedOSTC.}
\label{y_and_y_pred}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=1.0\textwidth]{Epoch.pdf}
\caption{Local epoch versus prediction errors.}
\label{epoch}
\end{figure*}
\subsection{Comparisons of Generalization Ability}
In the scenario of traffic flow forecasting, the traffic speeds observed by different clients have different distributions. Meanwhile, in the FL framework, all clients cooperatively train the same prediction model. We hope the global prediction model from the central server has good generalization ability so that all clients can achieve satisfactory performance. Therefore, in this subsection, we compare the model generalization ability of the four FL-based methods. The prediction RMSEs of clients on the two datasets with different forecasting steps are illustrated in Fig.~\ref{rmse}.
By comparing FedAvgOff and CNFGNN with FedAvgOn and FedOSTC respectively, we can observe that RMSE variance of OL manner is much smaller than that of batch learning manner, which indicates that OL manner can yield global model with better generalization ability in FL.
Furthermore, the prediction RMSE of FedOSTC is lower than that of FedAvgOn for almost all clients. The RMSE variance over all clients in FedOSTC is obviously lower than that in FedAvgOn. Therefore, by adopting the proposed spatio-temporal correlation evaluating mechanism and period-aware weighted aggregation mechanism, FedOSTC can increase the generalization ability of global model.
\subsection{Effect of E on Prediction Performance}
In the scenario of traffic flow forecasting, clients make predictions and perform incremental optimization based on only a single speed sequence. This is quite different from the methods adopting the batch learning manner, where the local batch size is set to 128.
In this subsection, we explore the effect of local epoch on prediction performance and further consider whether 5 epochs are redundant for incremental optimization.
The prediction RMSEs and MAEs on two datasets with different settings of $E$ are illustrated in Fig. \ref{epoch}.
We observe that different settings of $E$ have different effects on the prediction errors. On the two datasets, both prediction RMSEs and MAEs become lower with more local epochs, which indicates that the internal patterns of speed sequences can be captured better with more local optimization steps. On the other hand, a larger number of local epochs means that clients have to devote more computational resources to local optimization, which poses a heavy burden on resource-limited clients. Therefore, in future research, we will explore the balance between the number of local epochs and prediction performance.
\section{Conclusion}
In this paper, we proposed a novel method named FedOSTC for traffic flow forecasting.
To boost prediction performance, we dynamically assessed spatio-temporal correlation among clients.
Specifically, the temporal patterns inside traffic data sequences are obtained at clients.
Since spatial correlation among clients varies with traffic fluctuation, the spatial dependence of the observed traffic data is assessed dynamically via GAT.
Given the periodic changes of traffic flows, we proposed a period-aware aggregation mechanism to aggregate the online-updated local models, aiming to improve the generalization of the fresh global model for the subsequent traffic data.
Last but not least, we conducted extensive experiments on METR-LA and PEMS-BAY datasets to verify the effectiveness of OL manner in traffic flow forecasting and the superiority of FedOSTC, compared with state-of-the-art methods.
Given the observation that more local epochs yield higher prediction accuracy but consume more computational resources, we will explore the balance between computation overhead and prediction performance in future research and refine FedOSTC from the aspects of reducing communication overhead, decreasing forecasting time, etc.
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.08634",
"language": "en",
"timestamp": "2023-02-20T02:04:22",
"url": "https://arxiv.org/abs/2302.08634",
"yymm": "2302"
} | \section{Introduction}
Different observational spectroscopic phenomena between broad emission line AGN (Active Galactic Nuclei) (Type-1 AGN) and narrow emission line AGN (Type-2 AGN) can be well explained by the well-known and continually refined Unified Model (UM) of AGN, after considering the effects of different orientation angles of the central accretion disk \citep{an93}, combined with different central activities and different properties of the inner dust torus, etc.
\citep{bm12, mb12, Oh15, ma16, aa17, bb18, bn19, kw21}. More recent reviews on the UM can be found in \citet{nh15}.
Under the UM, Type-2 AGN are intrinsically similar to Type-1 AGN, but have their central accretion disk around the black hole (BH) and their broad line regions (BLRs) seriously obscured by the central dust torus, leading to no optical broad line emission features in Type-2 AGN. The simple UM has been strongly supported by clearly detected
polarized broad emission lines and/or clearly detected broad infrared emission lines for some Type-2 AGN \citep{mg90,
hl97, tr03, nk04, zs05, or17, sg18, mb20}, and the strong resonance of silicate dust at 10${\rm \mu m}$ seen in
absorption towards many Type-2 AGN but in emission in Type-1 AGN \citep{sh05}.
However, even after considering necessary modifications to the UM, some challenges have been reported.
Different evolutionary patterns in Type-1 and Type-2 AGN have been reported in \citet{fb02}. Higher average star
formation rates in Type-2 AGN than in Type-1 AGN have been reported in \citet{hi09}. Different neighbours around
Type-1 AGN and Type-2 AGN can be found in \citet{vk14} and then followed in \citet{jw16}. Lower stellar masses
of host galaxies in Type-1 AGN than in Type-2 AGN have been reported in \citet{zy19}, through 2463 X-ray selected
AGN in the COSMOS field. More recently, different host galaxy properties have been discussed in \citet{bg20},
to favour an evolutionary scenario rather than a strict unified model in obscured and unobscured AGN. Probably
higher stellar velocity dispersions have been reported in our recent paper \citet{zh22}. As discussed in detail in \citet{nh15}, the UM has been successfully applied to explain the different features between Type-1 and Type-2 AGN in many different ways; however, many other structural/environmental features have proved to be far from homogeneous across the AGN family.
In order to provide further clues to support and/or to modify the current UM of AGN, correctly classifying Type-1 AGN and Type-2 AGN is the first task that should be done well for optically selected AGN, so that the effects of mis-classified AGN on discussions of the UM can be avoided. \citet{ba14} have reported g-band variabilities in 17 out of 173 Type-2 quasars covered in the Stripe 82 region, indicating that part of the Type-2 objects have their central AGN activity regions directly observed, and that this part of the Type-2 quasars are not the Type-2 quasars obscured by a central dust torus expected by the UM. In other words, optical spectroscopic broad emission line features alone are not efficient enough to classify optically selected Type-1 AGN and Type-2 AGN.
Among optically selected Type-2 AGN with no detected broad emission lines, there are at least two interesting kinds of mis-classified AGN (AGN whose central activity regions are directly observed, but which are classified as Type-2 AGN through single-epoch spectroscopic results): the so-called True Type-2 AGN (TT2 AGN) without hidden central BLRs (broad emission line regions), and the so-called changing-look AGN (CLAGN) with types transitioning between Type-1 with broad emission lines and Type-2 without apparent broad emission lines.
The first kind of mis-classified AGN, the True Type-2 AGN (TT2 AGN) without hidden central BLRs, were first reported in \citet{tr03}, who found no hidden central BLRs in some Type-2 AGN through studying polarized broad emission lines. Subsequently, \citet{sg10, ba14, zh14, ly15, pw16} have confirmed the existence of this rare kind of TT2 AGN. More recently, in \citet{zh21d} we have reported the composite galaxy SDSS J1039 to be classified as a TT2 AGN, due to its apparent long-term optical variabilities and the non-existence of broad emission lines. Moreover, considering that studies of TT2 AGN should provide further clues on the formation and/or suppression of
central BLRs in AGN, \citet{eh09, cao10, nm13, ip15, en16} have discussed and proposed theoretical models and/or
explanations on probable disappearance of BLRs in TT2 AGN, either depending on physical properties of central
AGN activities and/or on properties of central dust obscurations. However, the very existence of TT2 AGN is still an open question, as shown by the well known case of NGC 3147, previously classified as a TT2 AGN but now reported in \citet{ba19} with detected broad H$\alpha$ in a high quality HST spectrum. Probably, either the very existence of TT2 AGN without hidden central BLRs or the quite weak broad emission lines in TT2 AGN can lead some Type-1 AGN (central regions directly observed) to be mis-classified as Type-2 AGN, indicating that there are some mis-classified Type-1 AGN among the optically selected Type-2 AGN, if only single-epoch spectroscopic emission line features are considered.
The second kind of mis-classified AGN, the changing-look AGN (CLAGN), have been firstly reported in NGC
7603\ in \citet{to76} with its broad H$\beta$ becoming much weaker in one year. After studying on CLAGN for more
than four decades, there are more than 40 CLAGN reported in the literature, according to the basic properties
that spectral types of AGN are transitioned between Type-1 AGN (apparent broad Balmer emission lines and/or
Balmer decrements near to the theoretical values) and Type-2 AGN (no apparent broad Balmer emission lines and/or
Balmer decrements much different from the theoretical values).
There are several well known individual CLAGN, such as the CLAGN Mrk1086, which was first reported in \citet{cr86} with its type changed from Type-1.9 to Type-1\ in 4 years and then followed up in \citet{mh16}, and
as the CLAGN in NGC 1097 reported in \citet{sb93} with the detected Seyfert 1 nucleus which has previously shown
only LINER characteristics, and as the CLAGN NGC 7582 reported in \citet{aj99} with the transition toward a Type-1
Seyfert experienced by the classical Type-2 Seyfert nucleus, and as the CLAGN NGC 3065 reported in \citet{eh01}
with the new detected broad Balmer emission lines, and as the CLAGN Mrk 590 reported in \citet{dd14} with its
type changed from Seyfert 1\ in 1970s to Seyfert 1.9\ in 2010s, and as the CLAGN NGC 2617 reported in \citet{sp14}
classified as a Seyfert 1.8 galaxy in 2003 but as a Seyfert 1 galaxy in 2013, and as the CLAGN SDSS J0159
reported in \citet{la15} with its type transitioned from Type-1\ in 2000 to Type-1.9\ in 2010, and as the CLAGN
SDSS J1554 reported in \citet{gh17} with its type changed from Type-2 to Type-1\ in 12 years.
More recently, we \citet{zh21b} have reported the bluest CLAGN SDSS J2241 with its flux ratio of broad
H$\alpha$ to broad H$\beta$ changed from 7\ in 2011 to 2.7\ in 2017. Moreover, besides the individual CLAGN,
\citet{mr16} have reported ten CLAGN with variable and/or changing-look broad emission line features, and
\citet{yw18} have reported a sample of 21 CLAGN with the appearance or the disappearance of broad Balmer
emission lines within a few years, and \citet{pv21} have reported six CLAGN within the redshift range $0.1<z<0.3$
using SDSS difference spectra, and \citet{gp22} have reported 61 newly discovered CLAGN candidates through
multi-epoch spectroscopic properties in the known Time Domain Spectroscopic Survey in SDSS-IV. Different
models have been proposed to explain the nature of CLAGN, such as the dynamical movement of dust clouds well
discussed in \citet{em12}, the common variations in accretion rates well discussed in \citet{em14}, the
variations in accretion rates due to transient events as discussed in \citet{er95, bn17}. Although theoretical
explanations to CLAGN are still unclear, spectroscopic properties in quiet state of CLAGN could lead to
CLAGN with central activity regions directly observed but mis-classified as Type-2 AGN.
Besides the spectroscopic features well applied to classify Type-1 AGN and Type-2 AGN, long-term
variabilities, one of fundamental intrinsic characteristics of AGN \citep{mr84, um97, ms16, bv20} tightly related
to central BH accreting processes, can also be well applied to classify AGN: Type-1 AGN with apparent
long-term optical variabilities, but Type-2 AGN with no apparent long-term optical variabilities because of
serious obscurations of central AGN activities. And the long-term optical variabilities of AGN can be
well modeled by the well-applied Continuous AutoRegressive process (CAR process) firstly proposed by
\citet{kbs09} and then the improved damped random walk process (DRW process) in \citet{koz10, zk13, kb14, sh16,
zk16}. Therefore, combining variability properties and spectroscopic properties can lead to more confident
classifications of AGN, which is the main objective of this paper: to check how many Type-1 AGN (central activity regions directly observed) are mis-classified as Type-2 AGN through optical spectroscopic emission features.
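To make the variability model concrete, the hedged Python sketch below simulates a DRW light curve with the exact updating rule of an Ornstein-Uhlenbeck process; the damping timescale and variability amplitude are illustrative values only, not fits to any object discussed in this manuscript.
\begin{lstlisting}
import numpy as np

def drw_light_curve(times, tau=200.0, sigma=0.2, mean_mag=19.0, seed=0):
    # Damped-random-walk light curve at the given epochs (days):
    # tau is the damping timescale, sigma the asymptotic standard
    # deviation (mag), using the exact Ornstein-Uhlenbeck update.
    rng = np.random.default_rng(seed)
    times = np.sort(np.asarray(times, dtype=float))
    mags = np.empty_like(times)
    mags[0] = mean_mag + rng.normal(0.0, sigma)
    for i in range(1, len(times)):
        decay = np.exp(-(times[i] - times[i - 1]) / tau)
        scatter = sigma * np.sqrt(1.0 - decay ** 2)
        mags[i] = mean_mag + (mags[i - 1] - mean_mag) * decay \
                  + rng.normal(0.0, scatter)
    return mags
\end{lstlisting}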
The manuscript is organized as follows. Section 2 presents the parent sample of the pipeline classified
Type-2 AGN in SDSS DR16 (Sloan Digital Sky Survey, Data Release 16, \citealt{ap20}). Section 3 describes the method
applied to describe the long-term variabilities from CSS (Catalina Sky Survey) \citep{dr09, dg14, gr17, sb21} for the
collected SDSS pipeline classified Type-2 AGN, and the basic results on a small sample of optically selected
SDSS Type-2 AGN with apparent long-term variabilities. Section 4 shows the main spectroscopic results for the
collected Type-2 AGN with apparent long-term optical variabilities. Section 5 gives the necessary
discussions. Section 6 gives the final summary and conclusions. Throughout the manuscript, the cosmological
parameters $H_{0}~=~70~{\rm km\cdot s}^{-1}{\rm Mpc}^{-1}$, $\Omega_{\Lambda}~=~0.7$ and $\Omega_{m}~=~0.3$
have been adopted.
\section{Parent Samples of SDSS pipeline classified Type-2 AGN}
As described in \url{https://www.sdss.org/dr16/spectro/catalogs/}, each spectroscopic object has
a main classification\footnote{Not considering the 'UNKNOWN' classification in SDSS databases.} in SDSS: GALAXY,
QSO or STAR. And, for objects classified as GALAXY, there are at least three subclasses\footnote{Not considering
the 'UNKNOWN' subclass in SDSS databases.}: STARBURST, STARFORMING and AGN. Here, the main classification of GALAXY
(class='galaxy') and the subclass of AGN (subclass = 'AGN') are considered to collect the SDSS pipeline
classified low redshift Type-2 AGN. The main criteria of redshift smaller than 0.3 ($z~<~0.3$) and median spectral
signal-to-noise larger than 10 are applied to collect all the low redshift Type-2 AGN from the SDSS pipeline
classified main galaxies in DR16, through the SDSS provided SQL (Structured Query Language) Search tool
(\url{http://skyserver.sdss.org/dr16/en/tools/search/sql.aspx}) by the following query
\begin{lstlisting}
SELECT plate, fiberid, mjd, ra, dec
FROM SpecObjall
WHERE
class='galaxy' and subclass = 'AGN'
and (z between 0 and 0.30) and zwarning=0
and snmedian > 10
\end{lstlisting}
In the query above, 'SpecObjall' is the SDSS pipeline provided database including basic properties of the spectroscopic
emission features of emission line galaxies in SDSS DR16, 'snmedian' means the median signal-to-noise (SN) of the SDSS
spectra, and class='galaxy' and subclass='AGN' mean that the SDSS spectrum can be well identified with a galaxy template
and that the galaxy has detectable emission lines consistent with being a Seyfert or LINER by the dividing lines
applied in the known BPT diagram \citep{bpt, kb01, ka03a, kb06, kb19, zh20} through the flux ratios of
[O~{\sc iii}]$\lambda5007$\AA~ to narrow H$\beta$ (O3HB) and of [N~{\sc ii}]$\lambda6583$\AA~ to narrow H$\alpha$
(N2HA). More detailed information on the database 'SpecObjall' can be found in the SDSS webpage
\url{http://skyserver.sdss.org/dr16/en/help/docs/tabledesc.aspx}. The SQL query leads to 14354 emission line main
galaxies collected as Type-2 AGN in DR16.
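As a simple illustration of the BPT-based classification described above, the following Python sketch applies the
widely adopted dividing lines of \citet{kb01} and \citet{ka03a} to hypothetical arrays of measured $\log(N2HA)$ and
$\log(O3HB)$; it is only a simplified stand-in for the SDSS pipeline classification and the MPA-JHU measurements.
\begin{lstlisting}
import numpy as np

def bpt_class(log_n2ha, log_o3hb):
    """Classify narrow emission line galaxies in the [N II]-based BPT diagram.

    Dividing lines: Kauffmann et al. (2003) and Kewley et al. (2001).
    Inputs are log10([N II]6583/Halpha) and log10([O III]5007/Hbeta).
    """
    log_n2ha = np.asarray(log_n2ha, dtype=float)
    log_o3hb = np.asarray(log_o3hb, dtype=float)

    # Kauffmann et al. (2003) empirical star-forming boundary
    kauffmann = 0.61 / (log_n2ha - 0.05) + 1.3
    # Kewley et al. (2001) theoretical maximum-starburst boundary
    kewley = 0.61 / (log_n2ha - 0.47) + 1.19

    labels = np.full(log_n2ha.shape, "HII", dtype=object)
    composite = (log_o3hb > kauffmann) | (log_n2ha >= 0.05)
    agn = (log_o3hb > kewley) | (log_n2ha >= 0.47)
    labels[composite] = "composite"
    labels[agn] = "AGN"
    return labels

# toy example with two hypothetical objects
print(bpt_class([-0.058, -0.5], [0.451, 0.1]))   # -> ['AGN' 'HII']
\end{lstlisting}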
Before proceeding further, two points are noted. On the one hand, the criteria of $z<0.3$ and $S/N>10$
are applied mainly in order to ensure that reliable narrow emission lines of the [N~{\sc ii}] doublet and narrow H$\alpha$
are totally covered in the SDSS spectra, so that the narrow emission line ratios can be well applied to classify
Type-2 AGN in the BPT diagram. On the other hand, due to few contaminations of host galaxy starlight to the narrow
emission lines, different techniques to determine the starlight in an SDSS spectrum lead to quite similar measurements
of the emission lines. Therefore, there are no further discussions on the emission line properties of the collected
Type-2 AGN, but the reliable line parameters measured by the MPA-JHU group have been collected from the database
'GalSpecLine'. More detailed descriptions of the measurements of emission line properties in the main galaxies can be
found at \url{https://www.sdss.org/dr16/spectro/galaxy_mpajhu/} and in \citet{bc04, ka03a, th04}. Certainly, when the
sample of the SDSS Type-2 AGN with apparent optical variabilities is created, there are detailed discussions in the
following sections of the manuscript on the procedure to determine the host galaxy contributions in the SDSS spectra
and the procedure to describe the emission lines after subtraction of the starlight.
Based on the collected line parameters and the other necessary information on the Type-2 AGN from the SDSS
databases, Fig.~\ref{bpt} shows the BPT diagram of $\log(O3HB)$ (mean value about 0.451, standard deviation
about 0.298) versus $\log(N2HA)$ (mean value about -0.058, standard deviation about 0.141), and the distributions of
redshift $z$ (mean value about 0.098, standard deviation about 0.044) and [O~{\sc iii}] line luminosity
$\log(L_{O3}/{\rm erg/s})$ (mean value about 40.255, standard deviation about 0.658).
\begin{figure*}
\centering\includegraphics[width = 18cm,height=5cm]{bpt_color.ps}
\caption{Left panel shows the collected Type-2 AGN in the BPT diagram (contour filled with bluish colors) of O3HB
versus N2HA. Solid and dashed purple lines show the dividing lines in \citet{kb06, ka03a} between different kinds of
narrow emission line galaxies: HII galaxies, composite galaxies and AGN. In the left panel, small circles plus
error bars in dark green show the collected 156 Type-2 AGN with apparent long-term variabilities. Middle panel
and right panel show the distributions of redshift and [O~{\sc iii}] line luminosity ($L_{O3}$) of the 14353
collected Type-2 AGN (histogram filled with blue lines) and of the 155 Type-2 AGN with apparent long-term
variabilities (histogram filled with dark green lines). The individual Type-2 AGN SDSS J075736.47+532557.1
(plate-mjd-fiberid=1870-53383-0466) has its measured redshift equal to zero in SDSS; therefore, SDSS J075736.47+532557.1
is not included in the middle panel and right panel.
}
\label{bpt}
\end{figure*}
\section{Long-term CSS photometric variabilities of the SDSS Type-2 AGN}
As well described in the homepage of the Catalina Sky Survey (CSS)
(\url{https://catalina.lpl.arizona.edu/about/facilities/telescopes}) and in \citet{dr09}, the three CSS telescopes
are located in the Santa Catalina Mountains just north of Tucson, Arizona. The 1.5-meter Cassegrain telescope
(field of view about 5.0 deg$^2$) is located on the 9157-foot summit of Mt. Lemmon, alongside the 1.0-meter
telescope (field of view about 0.3 deg$^2$). The 0.7-meter Schmidt telescope (field of view about 19.4 deg$^2$)
is located on Mt. Bigelow, just east of Mt. Lemmon. The CSS telescopes operate 24 nights per lunation with a 4-5
day break surrounding the full moon. The CSS database encompasses about 10-years long photometry for more than
500 million objects with V magnitudes between 11.5 and 21.5 from an area of 33,000 square degrees.
Based on the RA and DEC of the collected 14354 SDSS Type-2 AGN, long-term light curves can be collected and
downloaded from \url{http://nesssi.cacr.caltech.edu/DataRelease/} for 12881 of the collected 14354 SDSS Type-2
AGN, with a search radius of 3 arcseconds. Meanwhile, as discussed in \citet{dr09} and described in the
CSS homepage, there is a photometry blending flag for CSS light curves, probably caused by different photometric
detections in different seeing conditions and/or by two or more sources detected within the search
radius. Fortunately, among the downloaded light curves of the 12881 Type-2 AGN, all the blending flags are zero,
indicating that the light curves can be used reliably. Moreover, when cross-matching the CSS and SDSS databases,
3520 of the 12881 Type-2 AGN have collected light curves with more than one CSS ID. Therefore, two simple
criteria are applied to determine the final accepted CSS light curves for the 3520 Type-2 AGN. First, if the multiple
CSS light curves with different IDs for one Type-2 AGN have the same mean photometric magnitude within the same
time duration, the CSS light curve with more data points (corresponding to a longer time duration) is accepted as the
CSS light curve of the Type-2 AGN. Second, if the multiple CSS light curves with different IDs for one Type-2 AGN
have quite different mean photometric magnitudes, the CSS light curve with its mean photometric V-band magnitude nearer
to the SDSS photometric g-band Petrosian magnitude is accepted as the CSS light curve of the Type-2 AGN, due to the CSS V-band
and the SDSS g-band covering similar wavelength ranges. Fig.~\ref{Lex} shows the light curves of SDSS 0271-51883-0252
(PLATE-MJD-FIBERID) and 1239-52760-0047 as examples of the application of the two criteria above to determine
the final accepted CSS light curve. For SDSS 0271-51883-0252, there are two CSS light curves with IDs of
1001055046612 and 3001077014184. Over the same time duration of the two light curves, with MJD-53000 from 912 to
3365, the mean photometric magnitudes are similar, about 13.9; therefore, the CSS light curve with ID 1001055046612,
having more data points and also a longer time duration, is accepted as the final light curve of SDSS 0271-51883-0252.
For SDSS 1239-52760-0047, the collected CSS light curves with the two IDs 1107056047455 and 2108145011988 have
a mean magnitude difference of about 0.5\,mag. Considering the SDSS photometric g-band magnitude of about 18.36,
the CSS light curve (ID 1107056047455) with mean V-band magnitude 17.85 is the final accepted light curve of SDSS
1239-52760-0047. Actually, the final accepted light curves of SDSS 0271-51883-0252 and 1239-52760-0047, two
candidates of Type-2 AGN with apparent variabilities in the manuscript, are also shown in the following Fig.~\ref{lmc}.
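The two criteria above can be summarized by the following Python sketch; the light curve container, the column
layout and the 0.2\,mag tolerance adopted for ``the same mean photometric magnitudes'' are only illustrative
assumptions, not the exact values applied in the manuscript.
\begin{lstlisting}
import numpy as np

def pick_css_light_curve(light_curves, sdss_g_mag, same_mag_tol=0.2):
    """Select the final CSS light curve among multiple CSS IDs for one object.

    light_curves : dict mapping CSS ID -> (mjd, mag) arrays (hypothetical container).
    sdss_g_mag   : SDSS g-band Petrosian magnitude of the object.
    same_mag_tol : assumed tolerance (mag) for "the same mean magnitude".
    """
    mean_mags = {cid: np.mean(mag) for cid, (mjd, mag) in light_curves.items()}
    spread = max(mean_mags.values()) - min(mean_mags.values())

    if spread <= same_mag_tol:
        # Criterion 1: similar mean magnitudes -> keep the curve with more data points
        return max(light_curves, key=lambda cid: len(light_curves[cid][0]))
    # Criterion 2: different mean magnitudes -> keep the curve whose mean V-band
    # magnitude is nearest to the SDSS g-band magnitude (similar wavelength coverage)
    return min(mean_mags, key=lambda cid: abs(mean_mags[cid] - sdss_g_mag))

# toy example loosely mimicking SDSS 1239-52760-0047 (two CSS IDs, ~0.5 mag apart;
# the second mean magnitude is an illustrative value only)
rng = np.random.default_rng(0)
lcs = {
    1107056047455: (np.arange(100.0), 17.85 + 0.1 * rng.standard_normal(100)),
    2108145011988: (np.arange(80.0), 17.35 + 0.1 * rng.standard_normal(80)),
}
print(pick_css_light_curve(lcs, sdss_g_mag=18.36))  # -> 1107056047455
\end{lstlisting}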
\begin{figure*}
\centering\includegraphics[width = 18cm,height=6cm]{lmc_mIDs.ps}
\caption{CSS V-band light curves of SDSS 0271-51883-0252 and 1239-52760-0047. In each panel, symbols in
different colors show the light curves with different CSS IDs as shown in legends. And the light curve shown in dark
green is the final accepted CSS light curve.}
\label{Lex}
\end{figure*}
Then, in order to check whether there are apparent variabilities, the commonly accepted DRW process
\citep{kbs09, koz10, zk13} is applied to describe the collected light curves. There are many other reported
studies of AGN variabilities through the DRW process. \citet{mi10} have modeled the variabilities of about
9000 spectroscopically confirmed quasars covered in the SDSS Stripe82 region, and found correlations between
the AGN parameters and the DRW process determined parameters. \citet{bj12} have proposed another fully probabilistic
method for modeling AGN variabilities by the DRW process. \citet{ak13} have shown, through the fitted results of the
long-term variabilities of 6304 quasars, that the DRW process is preferred for modeling AGN variabilities over
several other stochastic and deterministic models. \citet{zk13} have checked that the DRW process provides an adequate
description of AGN optical variabilities across all timescales. \citet{zh17a} have checked the long-term variability
properties of AGN with double-peaked broad emission lines, and found a difference in intrinsic variability
timescales between normal broad line AGN and the AGN with double-peaked broad emission lines. More recently,
in our previous paper, \citet{zh21d} have shown apparent long-term variabilities well described by the DRW
process to report a composite galaxy as a better candidate of true Type-2 AGN. Therefore, the DRW process
parameters determined from the long-term variabilities can be well used to indicate whether there are apparent
clues to support central AGN activities.
In the manuscript, the public code JAVELIN (Just Another Vehicle for Estimating Lags In Nuclei)
described in \citet{koz10} and provided by \citet{zk13} has been applied to describe the long-term CSS
variabilities of the 12881 Type-2 AGN. When the JAVELIN code is applied, through the MCMC (Markov Chain Monte
Carlo) \citep{fh13} analysis with uniform logarithmic priors of the DRW process parameters $\tau$ and
$\sigma$ covering every possible corner of the parameter space ($0~<~\tau/{\rm days}~<~10^{5}$ and
$0~<~\sigma/({\rm mag/days^{0.5}})~<~10^{2}$), the posterior distributions of the DRW process parameters can be well
determined and provide the final accepted parameters and the corresponding statistical confidence limits.
Meanwhile, when JAVELIN is applied, each long-term CSS light curve has first been re-sampled with one
mean value for multiple observations per day. Then, the best descriptions of each re-sampled light curve can
be well determined.
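The following Python sketch illustrates the two steps above, the daily re-sampling and the MCMC sampling of the DRW
parameters with uniform logarithmic priors. It is not the JAVELIN code itself, but a simplified stand-in based on the
standard DRW Gaussian-process likelihood and the emcee sampler, with the lower prior boundaries and the sampler
settings only indicative.
\begin{lstlisting}
import numpy as np
import emcee
from scipy.linalg import cho_factor, cho_solve

def resample_daily(mjd, mag, err):
    """Re-sample a light curve with one (weighted) mean value per day."""
    days = np.floor(mjd).astype(int)
    t, y, e = [], [], []
    for d in np.unique(days):
        sel = days == d
        w = 1.0 / err[sel] ** 2
        t.append(mjd[sel].mean())
        y.append(np.sum(w * mag[sel]) / np.sum(w))
        e.append(np.sqrt(1.0 / np.sum(w)))
    return np.array(t), np.array(y), np.array(e)

def drw_loglike(theta, t, y, yerr):
    """Gaussian-process likelihood of the DRW model (Kelly et al. 2009 convention)."""
    ln_sigma, ln_tau = theta
    sigma, tau = np.exp(ln_sigma), np.exp(ln_tau)
    cov = 0.5 * sigma ** 2 * tau * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
    cov[np.diag_indices_from(cov)] += yerr ** 2
    resid = y - np.average(y, weights=1.0 / yerr ** 2)
    try:
        cf = cho_factor(cov)
    except np.linalg.LinAlgError:
        return -np.inf
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
    return -0.5 * (resid @ cho_solve(cf, resid) + logdet + len(y) * np.log(2.0 * np.pi))

def log_prob(theta, t, y, yerr):
    """Uniform logarithmic priors roughly covering the ranges quoted in the text;
    the lower boundaries are practical choices, not values from the manuscript."""
    ln_sigma, ln_tau = theta
    if not (-12.0 < ln_sigma < np.log(1e2)) or not (-5.0 < ln_tau < np.log(1e5)):
        return -np.inf
    return drw_loglike(theta, t, y, yerr)

def fit_drw(mjd, mag, err, nwalkers=32, nburn=300, nchain=500, seed=1):
    t, y, e = resample_daily(mjd, mag, err)
    rng = np.random.default_rng(seed)
    p0 = np.array([-2.0, 5.0]) + 0.1 * rng.standard_normal((nwalkers, 2))
    sampler = emcee.EnsembleSampler(nwalkers, 2, log_prob, args=(t, y, e))
    sampler.run_mcmc(p0, nburn + nchain)
    chain = sampler.get_chain(discard=nburn, flat=True)
    return chain.mean(axis=0), chain.std(axis=0)   # posterior ln(sigma), ln(tau)
\end{lstlisting}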
Based on the criterion that the JAVELIN code determined process parameters $\sigma$ and $\tau$ are at least three times
larger than their corresponding uncertainties, there are 156 Type-2 AGN that have apparent long-term
variabilities. The long-term light curves and corresponding best-fitting results of the 156 Type-2 AGN are
shown in Fig.~\ref{lmc}, and the corresponding two dimensional posterior distributions (in contour) of the
parameters $\sigma$ and $\tau$ are shown in Fig.~\ref{mcmc}. The basic information of the 156
Type-2 AGN is listed in Table~1, including the plate-mjd-fiberid, the SDSS coordinate based name
(Jhhmmss.s$\pm$ddmmss.s), the redshift $z$, the average photometric CSS V-band magnitude, the SDSS provided g-band Petrosian
magnitude, $O3HB$, $N2HA$, and the parameters and uncertainties of $\ln(\sigma)$ and $\ln(\tau)$. Properties of
$O3HB$ and $N2HA$ of the 156 Type-2 AGN are also shown as small circles in dark green in the left panel of
Fig.~\ref{bpt}, with mean $\log(O3HB)$ and standard deviation of about 0.421 and 0.316, and with mean $\log(N2HA)$
and standard deviation of about -0.036 and 0.160, roughly similar to the values of all the 14354
Type-2 AGN. Meanwhile, the distribution of redshift $z$ of the 155 Type-2 AGN (SDSS J075736.47+532557.1 not
included due to its zero redshift in SDSS) is also shown in the middle panel of Fig.~\ref{bpt}, with a mean
value of about 0.097 and a standard deviation of about 0.056, roughly similar to the values of all the 14353 Type-2
AGN (SDSS J075736.47+532557.1 not included). The distribution of [O~{\sc iii}] line luminosity
$\log(L_{O3}/{\rm erg/s})$ of the 155 Type-2 AGN (SDSS J075736.47+532557.1 not included due to its zero
redshift) is also shown in the right panel of Fig.~\ref{bpt}, with a mean value of about 40.257 and a standard
deviation of about 0.730, roughly similar to the values of all the 14353 Type-2 AGN (SDSS J075736.47+532557.1 not included).
Before proceeding further, one additional point is noted. As discussed in \citet{gr17, lv20},
the uncertainties of CSS photometric magnitudes are overestimated at brighter magnitudes, but underestimated
at fainter magnitudes. Therefore, it is necessary to check the effects of corrections of the photometric magnitude
uncertainties on the variability properties. Fig.~\ref{db2} shows the photometric magnitude distributions of
the collected 156 Type-2 AGN with apparent variabilities and of the 12881 Type-2 AGN in the parent sample.
The mean photometric magnitudes are 16.17 (standard deviation 1.51) and 16.58 (standard deviation 0.92)
for the 156 Type-2 AGN with apparent variabilities and for the 12881 Type-2 AGN, respectively. Therefore,
the collected 156 Type-2 AGN with apparent variabilities do not tend to be chosen from brighter or fainter
sources. In other words, corrections of the uncertainties of the photometric magnitudes have only tiny effects
on the collection of the 156 Type-2 AGN through variability properties.
Besides the properties of $\log(N2HA)$, $\log(O3HB)$, $z$ and $\log(L_{O3}/{\rm erg/s})$, the left panel
of Fig.~\ref{dis} shows the properties of $\ln(\sigma/(mag/days^{0.5}))$ (mean value about -2.45 and standard
deviation about 0.54) and $\ln(\tau/days)$ (mean value about 5.06 and standard deviation about 1.09).
As discussed for the variability properties of the OGLE-III quasars in \citet{koz10}
(see their Figure~9), by the same method as the JAVELIN code, the DRW process parameters
$\sigma/(mag/days^{0.5})$ and $\tau/days$ have values of about 0.005-0.52\footnote{In \citet{koz10}, the unit
of $\sigma$ is ${\rm mag/years^{0.5}}$. Here, the values 0.005-0.52 are calculated from $\sigma/{\rm mag/years^{0.5}}$
larger than 0.1 and smaller than 10, as shown in Figure~9\ in \citet{koz10}.} and about 10-1000 days, strongly
indicating that the measured parameters $\ln(\sigma/(mag/days^{0.5}))$ and $\ln(\tau/days)$ of the 156
Type-2 AGN are reliable enough. Moreover, the right panel of Fig.~\ref{dis} shows the distributions of the number
of data points $NP$, with mean values of 81 and 84 for the 156 Type-2 AGN and the other Type-2 AGN, respectively,
indicating that there are no effects of different numbers of data points $NP$ on the measured DRW process parameters.
Besides the 156 Type-2 AGN with $\ln(\sigma/(mag/days^{0.5}))$ larger than -4, the other Type-2 AGN have their measured
$\ln(\sigma/(mag/days^{0.5}))$ smaller than -10. Therefore, in the left panel of Fig.~\ref{dis}, the other Type-2 AGN
with $\ln(\sigma/(mag/days^{0.5}))$ smaller than -10 are not plotted.
Finally, based on the DRW process well applied to describe intrinsic AGN activities, about
1.2\% (156/12881) of the optically selected Type-2 AGN have apparent long-term optical variabilities,
indicating that the central AGN activity regions of the 156 Type-2 AGN are directly in the line-of-sight.
\begin{figure*}
\centering\includegraphics[width = 18cm,height=20cm]{lmc.ps}
\caption{CSS long-term light curves (solid circles plus error bars in dark green, with MJD-53000 (days) on the
x-axis and CSS apparent magnitude on the y-axis) of the 156 Type-2 AGN with apparent variabilities well
described by the DRW process. In each panel, the solid blue line and dashed blue lines show the JAVELIN code
determined best descriptions of the light curve and the corresponding 1$\sigma$ confidence bands, respectively.
The plate-mjd-fiberid information is marked as the title of each panel.
}
\label{lmc}
\end{figure*}
\setcounter{figure}{2}
\begin{figure*}
\centering\includegraphics[width = 18cm,height=22cm]{lmc2.ps}
\caption{--to be continued}
\end{figure*}
\setcounter{figure}{2}
\begin{figure*}
\centering\includegraphics[width = 18cm,height=22cm]{lmc3.ps}
\caption{--to be continued}
\end{figure*}
\begin{figure*}
\centering\includegraphics[width = 18cm,height=20cm]{mcmc1_new.ps}
\caption{The corresponding two dimensional posterior distributions shown in contour of the DRW process
parameters of $\ln(\sigma/(mag/days^{0.5}))$ (x-axis) and $\ln(\tau/days)$ (y-axis) of the light curves
shown in Fig.~\ref{lmc} of the 156 Type-2 AGN. The plate-mjd-fiberid information is marked as the title
of each panel. In order to show clear parameter differences between different objects, the same
limit from -4.5 to 0 has been applied to the x-axis, and the same limit from 3 to 8 has been applied to the y-axis.
}
\label{mcmc}
\end{figure*}
\setcounter{figure}{3}
\begin{figure*}
\centering\includegraphics[width = 18cm,height=22cm]{mcmc2_new.ps}
\caption{--to be continued}
\end{figure*}
\setcounter{figure}{3}
\begin{figure*}
\centering\includegraphics[width = 18cm,height=10cm]{mcmc3_new.ps}
\caption{--to be continued}
\end{figure*}
\begin{figure}
\centering\includegraphics[width = 8cm,height=5cm]{magdis.ps}
\caption{Distributions of CSS V-band magnitude of the 156 Type-2 AGN with apparent long-term variabilities
(histogram filled with dark green lines) and of the 12881 Type-2 AGN (histogram filled with blue lines).}
\label{db2}
\end{figure}
\begin{figure*}
\centering\includegraphics[width = 18cm,height=6cm]{dis.ps}
\caption{Left panel shows properties of $\ln(\sigma/(mag/days^{0.5}))$ and $\ln(\tau/days)$ of the 156
Type-2 AGN with apparent long-term variabilities. Vertical dashed blue lines and horizontal dashed red
lines show the available ranges of $\ln(\sigma/(mag/days^{0.5}))$ and of $\ln(\tau/days)$ from the OGLE-III
quasars in \citet{koz10}. Right panel shows distributions of $NP$ of the 156 Type-2 AGN (histogram filled
by dark green lines) with apparent long-term variabilities and the other Type-2 AGN (histogram filled by
blue lines).
}
\label{dis}
\end{figure*}
\begin{figure*}
\centering\includegraphics[width = 18cm,height=6cm]{spec.ps}
\caption{Top left panel shows the spectrum (in dark green) of SDSS 2374-53765-0174 and the corresponding SSP
method determined host galaxy contributions (in red). Bottom left panel shows the line spectrum of SDSS
2374-53765-0174, calculated as the SDSS spectrum minus the host galaxy contributions. Top right panel shows the best
descriptions (in red) of the emission lines (in dark green) around H$\alpha$ by multiple Gaussian functions,
for the line spectrum shown in the bottom left panel. In the bottom region of the top right panel, solid and dashed lines
in blue show the determined first and second narrow [N~{\sc ii}] components, solid and dashed lines in pink
show the determined first and second narrow [S~{\sc ii}] components, and solid and dashed purple lines show
the determined first and second narrow components in H$\alpha$. Bottom right panel shows the residuals
calculated as the line spectrum minus the best fitting results and then divided by the uncertainties of the
spectrum. In the bottom right panel, solid and dashed red lines show $residual=0,\pm1$, respectively.
}
\label{ssp}
\end{figure*}
\section{Spectroscopic Results of the 156 Type-2 AGN but with long-term variabilities}
In order to confirm that the 156 Type-2 AGN are well classified as Type-2 AGN by spectroscopic results,
their spectral properties should be further discussed. Due to apparent host galaxy contributions to the SDSS
spectra of the 156 Type-2 AGN, the starlight in the SDSS spectra should first be determined. Here, the
commonly accepted SSP (simple stellar population) method has been applied. More detailed descriptions of
the SSP method can be found in \citet{bc03, ka03, cm05, cm17}. And the SSP method has been applied in our
previous papers \citet{zh14, zh16, ra17, zh19, zh21m, zh21a, zh21b, zh22}. Here, we do not give further
discussions of the SSP method, but a simple description is as follows.
The 39 simple stellar population templates in \citet{bc03} have been exploited, which can be used to
well describe the characteristics of almost all the SDSS galaxies, as discussed in detail in \citet{bc03}.
Meanwhile, there is an additional component, a fourth-degree polynomial component, which
is applied to describe the probable intrinsic AGN continuum emission (expected after considering the apparent
variabilities) or to modify the continuum shape when describing the SDSS spectra by the templates. Meanwhile, when
the SSP method is applied, the narrow emission lines listed in
\url{http://classic.sdss.org/dr1/algorithms/speclinefits.html#linelist} are masked out with a full width at
zero intensity of about 450${\rm km~s^{-1}}$, and the wavelength ranges from 4450 to 5600\AA~ and from 6250
to 6750\AA~ are also masked out for the probable broad H$\beta$ and broad H$\alpha$ emission lines.
\begin{figure*}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0271-51883-0252.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0284-51662-0558.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0284-51943-0551.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0311-51665-0073.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0375-52140-0063.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0382-51816-0426.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0501-52235-0366.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0540-51996-0549.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0564-52224-0041.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0580-52368-0084.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0656-52148-0390.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0743-52262-0591.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0892-52378-0058.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0917-52400-0260.fits-line.ps}
\caption{Each two columns in each row show results similar to those shown in Fig.~\ref{ssp}, but for the
31 Type-2 AGN with both long-term variabilities and a probable broad H$\alpha$ emission component described
by a dashed purple line. }
\label{line}
\end{figure*}
\setcounter{figure}{7}
\begin{figure*}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-0929-52581-0196.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-1208-52672-0519.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-1239-52760-0047.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-1365-53062-0057.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-1385-53108-0192.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-1594-52992-0566.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-1827-53531-0008.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-1864-53313-0466.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-1865-53312-0129.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-1927-53321-0261.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-1990-53472-0604.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-2139-53878-0330.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-2375-53770-0517.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-2419-54139-0050.fits-line.ps}
\caption{--to be continued}
\end{figure*}
\setcounter{figure}{7}
\begin{figure*}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-2572-54056-0139.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-2593-54175-0025.fits-line.ps}
\centering\includegraphics[width = 8.5cm,height=3.2cm]{New_spec-2744-54272-0490.fits-line.ps}
\caption{--to be continued}
\end{figure*}
\begin{figure*}
\centering\includegraphics[width = 18cm,height=5cm]{spec1870.ps}
\caption{Left panel shows the spectrum (in dark green) of SDSS J075736.47+532557.1 (plate-mjd-fiberid =
1870-53383-0466) with apparent Ca~{\sc ii}$\lambda3934, 3969$\AA~ absorption features marked by a vertical
red line. Right panel shows the spectrum around H$\alpha$, confirming that there is no broad component in H$\alpha$.
}
\label{s1870}
\end{figure*}
\begin{figure*}
\centering\includegraphics[width = 18cm,height=18cm]{mspec.ps}
\caption{Multi-epoch spectra, in different colors, of the 14 AGN. In each panel, the plate-mjd-fiberid of each
spectrum is marked in the top left region in the corresponding color.
}
\label{check}
\end{figure*}
\begin{figure}
\centering\includegraphics[width = 8cm,height=5cm]{tde.ps}
\caption{The TDE expected variabilities, roughly described by $t^{-5/3}$ and shown as a solid red line, in the long-term
CSS light curve of SDSS 0747-52234-0400.
}
\label{tde}
\end{figure}
\begin{figure*}
\centering\includegraphics[width = 18cm,height=12cm]{tde2.ps}
\caption{The TDE expected variabilities, roughly described by $t^{-5/3}$ and shown as a solid red line, followed by a
non-variability component described by a horizontal dashed red line, in the long-term CSS light curves of the seven
Type-2 AGN with apparent variabilities.
}
\label{tde2}
\end{figure*}
Moreover, when the SSP method is applied, there is only one criterion: the strength factor of
each simple stellar population template is not smaller than zero. Then, through the Levenberg-Marquardt
least-squares minimization technique (the MPFIT package), the SDSS spectra with emission lines masked out
can be well described by the SSP method for the 155 Type-2 AGN with apparent variabilities, excluding SDSS
1870-53383-0466 (plate-mjd-fiberid), which has quite different redshifts estimated from emission lines and from
absorption lines and which will be individually discussed in the following section. Here, the SSP method determined
host galaxy contributions are not shown in plots for all the 155 Type-2 AGN (SDSS 1870-53383-0466 not included),
but the left panels of Fig.~\ref{ssp} show an example of the SSP determined host galaxy contributions in the Type-2
AGN SDSS 2374-53765-0174, due to its stronger narrow emission lines.
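The starlight determination described above can be illustrated by the following simplified Python sketch, in which
the SSP templates are combined under the non-negativity constraint together with a fourth-degree polynomial, and the
emission line regions are masked out. It is only a linear stand-in (ignoring, for instance, stellar kinematic
broadening) rather than the exact MPFIT-based procedure, and the template array and line list are placeholders.
\begin{lstlisting}
import numpy as np
from scipy.optimize import lsq_linear

def fit_starlight(wave, flux, templates, mask_lines, fwzi_kms=450.0,
                  broad_windows=((4450.0, 5600.0), (6250.0, 6750.0))):
    """Describe a rest-frame SDSS spectrum by non-negative SSP templates
    plus a fourth-degree polynomial, with emission line regions masked out.

    wave, flux : 1-D arrays of the rest-frame spectrum.
    templates  : array of shape (n_template, n_pixel), SSP templates on `wave`.
    mask_lines : rest wavelengths (AA) of the narrow lines to be masked.
    """
    good = np.ones(wave.size, dtype=bool)
    # mask narrow lines over a full width at zero intensity of ~450 km/s
    half = 0.5 * fwzi_kms / 299792.458
    for line in mask_lines:
        good &= np.abs(wave - line) > half * line
    # mask the windows of probable broad Hbeta and broad Halpha
    for lo, hi in broad_windows:
        good &= (wave < lo) | (wave > hi)

    # design matrix: SSP templates (non-negative weights) + 4th-degree polynomial
    x = (wave - wave.mean()) / (wave.max() - wave.min())
    poly = np.vander(x, 5)                       # 5 polynomial terms (degree 4)
    design = np.hstack([templates.T, poly])

    n_tpl = templates.shape[0]
    lower = np.r_[np.zeros(n_tpl), -np.inf * np.ones(5)]
    upper = np.full(n_tpl + 5, np.inf)
    res = lsq_linear(design[good], flux[good], bounds=(lower, upper))

    starlight = design @ res.x                   # host galaxy + polynomial model
    return starlight, flux - starlight           # model and pure line spectrum
\end{lstlisting}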
After subtraction of the starlight, the emission lines around H$\alpha$ (rest wavelength range from 6400\AA~
to 6800\AA) are described by multiple Gaussian functions as follows, in order to check whether there are
broad H$\alpha$ components in the optically selected 156 Type-2 AGN with apparent variabilities. In order to
well describe the narrow emission lines, two Gaussian functions, one (first component) for the
common narrow component and the other (second component, which is assumed to be wider than the
first component) for a probable extended component, are applied to describe each narrow emission line, i.e.,
there are 10 Gaussian components applied to describe the [N~{\sc ii}]$\lambda6548,6583$\AA~ doublet, H$\alpha$
and the [S~{\sc ii}]$\lambda6716,6731$\AA~ doublet. Properties of the second component of H$\alpha$ will provide
clues on whether there are broad components in H$\alpha$. Then, through the Levenberg-Marquardt
least-squares minimization technique, the emission lines around H$\alpha$ can be well described. Meanwhile,
when the Gaussian functions above are applied, the following three criteria are accepted. First, each Gaussian
component has a line intensity not smaller than zero. Second, the first (the second) components of each
doublet have the same redshift and the same line width. Third, the first (the second) components of the
[N~{\sc ii}] doublet have their flux ratio fixed to the theoretical value 3. As an example, the right panels
of Fig.~\ref{ssp} show the best descriptions of the emission lines around H$\alpha$ in SDSS 2374-53765-0174,
and the corresponding residuals calculated as the line spectrum minus the best fitting results and then divided
by the uncertainties of the spectrum.
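One possible parameterization of the 10-Gaussian model with the three criteria above built in (non-negative line
intensities, tied redshift and line width within the first and within the second components of each doublet, and the
[N~{\sc ii}] flux ratio fixed to 3) is sketched below in Python; the line centers are the usual laboratory
wavelengths, while the starting values and bounds are only illustrative, and the requirement that the second
component be wider than the first is here only encouraged through the starting values.
\begin{lstlisting}
import numpy as np
from scipy.optimize import least_squares

C_KMS = 299792.458
NII_A, NII_B, HA, SII_A, SII_B = 6548.05, 6583.45, 6562.82, 6716.44, 6730.81

def gauss(wave, flux, center, sigma_kms):
    """Gaussian profile defined by total flux, center (AA) and width (km/s)."""
    sig_aa = center * sigma_kms / C_KMS
    return flux / (np.sqrt(2.0 * np.pi) * sig_aa) * np.exp(-0.5 * ((wave - center) / sig_aa) ** 2)

def model(p, wave):
    """10 Gaussians: two components (narrow + extended) per narrow line,
    with tied kinematics inside each doublet and [N II] 6583/6548 flux ratio = 3."""
    y = np.zeros_like(wave, dtype=float)
    for k in range(2):                            # k=0 first, k=1 second component
        (z_n2, s_n2, f_n2b,
         z_s2, s_s2, f_s2a, f_s2b,
         z_ha, s_ha, f_ha) = p[10 * k:10 * (k + 1)]
        y += gauss(wave, f_n2b / 3.0, NII_A * (1 + z_n2), s_n2)   # [N II]6548
        y += gauss(wave, f_n2b, NII_B * (1 + z_n2), s_n2)         # [N II]6583
        y += gauss(wave, f_s2a, SII_A * (1 + z_s2), s_s2)         # [S II]6716
        y += gauss(wave, f_s2b, SII_B * (1 + z_s2), s_s2)         # [S II]6731
        y += gauss(wave, f_ha, HA * (1 + z_ha), s_ha)             # Halpha
    return y

def fit_halpha_region(wave, line_flux, line_err):
    """Least-squares fit of the line spectrum (starlight already subtracted)."""
    # illustrative starting values: narrow (150 km/s) and extended (500 km/s) components
    p0 = np.array([0.0, 150.0, 1.0, 0.0, 150.0, 1.0, 1.0, 0.0, 150.0, 1.0,
                   0.0, 500.0, 0.1, 0.0, 500.0, 0.1, 0.1, 0.0, 500.0, 0.1])
    lower = np.tile([-1e-3, 30.0, 0.0, -1e-3, 30.0, 0.0, 0.0, -1e-3, 30.0, 0.0], 2)
    upper = np.tile([1e-3, 3000.0, np.inf, 1e-3, 3000.0, np.inf, np.inf,
                     1e-3, 3000.0, np.inf], 2)
    res = least_squares(lambda p: (model(p, wave) - line_flux) / line_err,
                        p0, bounds=(lower, upper))
    return res.x, model(res.x, wave)
\end{lstlisting}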
Then, after subtraction of the host galaxy contributions, the line parameters of [N~{\sc ii}] and H$\alpha$ of
the 156 Type-2 AGN with apparent variabilities can be well measured, and they are listed in Table~2.
Based on the measured parameters, 31 out of the 156 Type-2 AGN have probable broad H$\alpha$,
considering the following two criteria. On the one hand, the second moment of H$\alpha$ has its measured line
parameters at least three times larger than the corresponding uncertainties. On the other hand, the second
component of H$\alpha$ has its measured second moment larger than 1000${\rm km/s}$, or at least three times
larger than both the first moment of H$\alpha$ and the second moment of the [N~{\sc ii}] doublet. There are 17
Type-2 AGN with the measured second moment of the second component of H$\alpha$ larger than 1000${\rm km/s}$,
and 14 Type-2 AGN with the second component of H$\alpha$ at least three times larger than both the first
moment of H$\alpha$ and the second moment of the [N~{\sc ii}] doublet. Therefore, among the 156 Type-2 AGN,
the 31 Type-2 AGN with a probable broad component in H$\alpha$ should preferably be classified as
Type-1.8/1.9 AGN, not as Type-2 AGN.
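A compact sketch of the two criteria above, applied to hypothetical per-object measurements (the record structure
and all key names are illustrative only, and the first criterion is rendered in a simplified form):
\begin{lstlisting}
def has_probable_broad_halpha(m):
    """Apply the two broad-Halpha criteria to one object's measurements.

    `m` is a dict with illustrative keys: the second moment of the second Halpha
    component ('sig2_ha') and its uncertainty ('sig2_ha_err'), the first moment of
    Halpha ('mom1_ha') and the second moment of the [N II] doublet ('sig_nii'),
    all in km/s.
    """
    significant = m["sig2_ha"] >= 3.0 * m["sig2_ha_err"]
    wide = (m["sig2_ha"] > 1000.0) or (
        m["sig2_ha"] >= 3.0 * m["mom1_ha"] and m["sig2_ha"] >= 3.0 * m["sig_nii"])
    return significant and wide

# toy example
print(has_probable_broad_halpha(
    {"sig2_ha": 1200.0, "sig2_ha_err": 150.0, "mom1_ha": 250.0, "sig_nii": 180.0}))  # True
\end{lstlisting}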
Before proceeding further, two points are noted. On the one hand, as shown by the results in
Fig.~\ref{line}, due to the complex broad emission lines in SDSS 2419-54139-0050, the best fitting results
for the emission lines around H$\alpha$ are not so good. However, the fitting results can be clearly
applied to support a broad emission component in H$\alpha$. Therefore, no further Gaussian components are
applied to describe the broad H$\alpha$ in SDSS 2419-54139-0050, because there are
no further discussions on broad line properties in the manuscript. On the other hand, two
Gaussian functions are applied to describe each narrow emission line, in order to test whether there are
broad emission line features. However, the 10 Gaussian functions should lead to larger uncertainties
of each model parameter, which can leave the measured parameters (especially those of the second component)
below three times the corresponding uncertainties. Therefore, in Table~2, there are no listed parameters for
some of the collected Type-2 AGN, due to measured line parameters smaller than the corresponding uncertainties.
Besides the 31 AGN whose spectra are shown in Fig.~\ref{line}, the other Type-2 AGN have been
carefully checked through the F-test technique, as we have done in \citet{zh21d}, with two different
model functions. Besides the model function discussed above, the other model function consists of the
10 Gaussian functions plus an additional broad Gaussian component. Then, by comparing the
$\chi^2$ values estimated for the two model functions, it can be confirmed that the broad
Gaussian component is not preferred with a confidence level higher than 5$\sigma$. In other words, the other
125 optically selected Type-2 AGN (156-31) show no clues of broad emission lines but have apparent
long-term optical variabilities.
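A minimal sketch of such an F-test between the two nested model functions is given below, with the $\chi^{2}$
values and degrees of freedom as placeholder inputs:
\begin{lstlisting}
from scipy import stats

def ftest_extra_component(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """F-test for whether the extra broad Gaussian component is justified.

    chi2_simple / dof_simple  : fit with the 10 narrow Gaussian components.
    chi2_complex / dof_complex: fit with the additional broad Gaussian component.
    Returns the F value, the p-value and the equivalent Gaussian significance.
    """
    d_dof = dof_simple - dof_complex            # number of extra free parameters
    f_value = ((chi2_simple - chi2_complex) / d_dof) / (chi2_complex / dof_complex)
    p_value = stats.f.sf(f_value, d_dof, dof_complex)
    n_sigma = stats.norm.isf(p_value)           # one-sided Gaussian equivalent
    return f_value, p_value, n_sigma

# toy example with placeholder chi-square values
print(ftest_extra_component(chi2_simple=1500.0, dof_simple=380,
                            chi2_complex=1480.0, dof_complex=377))
\end{lstlisting}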
\section{Discussions}
\subsection{SDSS J075736.47+532557.1}
As described above, the AGN SDSS J075736.47+532557.1 is a unique object, due to its redshift
determined to be zero through the emission line features. However, as shown in Fig.~\ref{s1870}, there are
apparent Ca~{\sc ii}$\lambda3934, 3969$\AA~ absorption features, leading to an absorption line determined
redshift of about 0.13. The quite different redshifts from the emission lines and the absorption lines should
indicate an interesting dual AGN system in SDSS J075736.47+532557.1, as reviewed more recently in \citet{dv19}.
Moreover, as shown by the emission lines around H$\alpha$ in the right panel of Fig.~\ref{s1870}, there are no
clues of a probable broad emission component in H$\alpha$ in SDSS J075736.47+532557.1. Therefore,
besides the quite different redshifts from the absorption and emission lines, SDSS J075736.47+532557.1 is an
optically selected Type-2 AGN but with apparent long-term optical variabilities. Further discussion of the
quite different redshifts from the absorption and emission lines in SDSS J075736.47+532557.1 is beyond the scope
of the manuscript, but will be given in our manuscript in preparation, and there are no further discussions
of SDSS J075736.47+532557.1\ in the manuscript.
\subsection{Are the optically selected Type-2 AGN with apparent variabilities Changing-Look AGN?}
As described in the Introduction, the optically selected Type-2 AGN with apparent long-term
variabilities could be candidates of changing-look AGN in the quiet state. However, multi-epoch spectra are
necessary to check whether an AGN is a changing-look AGN. Among the collected Type-2 AGN with apparent
variabilities, there are 14 objects which have multi-epoch spectra, collected based on the SDSS pipeline
provided parameter 'Nspecobs', which can be found in the second extension of the FITS file of each SDSS spectrum.
The collected multi-epoch spectra of each Type-2 AGN have been carefully checked and are shown in Fig.~\ref{check},
and no signs can be found of the appearance or disappearance of broad emission lines in different epochs.
Based on the plate-mjd-fiberid information shown in the top-left corner of each panel of Fig.~\ref{check},
the time differences among the epochs of the multiple spectra can be well determined as 375, 281,
281, 277, 5, 5, 84, 97, 20, 3, 18, 20, 239 and 17 days for the
Type-2 AGN listed from left to right and from top to bottom. Therefore, at the current stage, there is not
enough evidence to support that the collected Type-2 AGN with apparent long-term variabilities are
changing-look AGN (probably due to the small time differences among the epochs).
\subsection{Are the optically selected Type-2 AGN with apparent variabilities True Type-2 AGN?}
As described in the Introduction, the optically selected Type-2 AGN but with apparent long-term
optical variabilities could be candidates of true Type-2 AGN. The long-term optical variabilities can
be well applied to confirm that the central regions have been directly observed; therefore, along with the loss
of broad emission lines, the optically selected Type-2 AGN, especially the 125 objects with apparent
variabilities but without broad emission lines, can be accepted as good candidates of true Type-2 AGN.
Actually, among the Type-2 AGN with apparent long-term optical variabilities, there are unique
TDE (tidal disruption event) expected variabilities in SDSS 0747-52234-0400. TDEs have been well
studied in detail for more than four decades \citep{re88, lu97, gm06, ve11, ce12, gs12, gr13, gm14,
ko15, ht16, st18, wy18, tc19, zh19, vg21, sg21, zh21n, zh22a}, with the accreting fallback debris from stars
tidally disrupted by central black holes (BHs) leading to apparent time-dependent variabilities. Here,
a systematic study of the TDE expected variabilities in SDSS 0747-52234-0400 with apparent long-term
variabilities is beyond the scope of the manuscript. Therefore, there are no further discussions on
SDSS 0747-52234-0400 related to TDEs, which will be discussed
in detail in our manuscript in preparation, but Fig.~\ref{tde} shows the TDE expected variabilities
roughly described by $t^{-5/3}$ in SDSS 0747-52234-0400. Besides SDSS 0747-52234-0400, there are
several other Type-2 AGN whose long-term light curves have variability features including a smooth
declining trend (with a small decrease) followed by a non-variability component, such as SDSS
0284-51662-0558, SDSS 0284-51943-0551, SDSS 0538-52029-0359, SDSS 0914-52721-0233, SDSS 1052-52466-0280,
SDSS 2789-54555-0281, SDSS 2572-54056-0139, etc., whose light curves, roughly described by $t^{-5/3}$,
are shown in Fig.~\ref{tde2}. The probable TDEs provide strong and further evidence to support these
objects as true Type-2 AGN, because the tidal disruption radii of about tens of Schwarzschild radii are quite
close to the central black hole. In our manuscript in preparation, detailed discussions of the variabilities
related to probable TDEs will be given for the small sample of optically selected Type-2 AGN but
with apparent variabilities.
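A rough way to overlay the TDE expected $t^{-5/3}$ decline on a V-band light curve is sketched below in Python,
by fitting a fallback-like flux component plus a constant quiescent level; the functional form, the reference
magnitude and the starting values are only illustrative, and this is not the exact description adopted for
Fig.~\ref{tde} and Fig.~\ref{tde2}.
\begin{lstlisting}
import numpy as np
from scipy.optimize import curve_fit

def tde_flux_model(t, f_quiet, amp, t0):
    """Quiescent flux plus a fallback-like component decaying as (t - t0)^(-5/3)."""
    dt = np.clip(t - t0, 1.0, None)              # avoid the singularity at t = t0
    return f_quiet + amp * dt ** (-5.0 / 3.0)

def fit_tde_decline(mjd, mag, mag_ref=17.0):
    """Fit the t^(-5/3) decline in flux space and return the model magnitudes."""
    flux = 10.0 ** (-0.4 * (mag - mag_ref))      # relative flux w.r.t. mag_ref
    # illustrative starting values: quiescent level, amplitude, disruption epoch
    p0 = (np.median(flux), flux.max() * 100.0, mjd.min() - 50.0)
    popt, _ = curve_fit(tde_flux_model, mjd, flux, p0=p0, maxfev=20000)
    model_mag = mag_ref - 2.5 * np.log10(tde_flux_model(mjd, *popt))
    return popt, model_mag
\end{lstlisting}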
Besides the long-term variabilities related to central AGN activities or to probable TDEs,
additional evidence should be reported to support the loss of broad emission lines. However, based on
the collected SDSS spectra of the Type-2 AGN, there are only 31 AGN with probable broad emission
lines, leading the 31 AGN to be classified as Type-1.8/1.9 AGN. The other 125 Type-2 AGN, with
no broad emission lines in the currently collected SDSS spectra, can be confidently accepted as true
Type-2 AGN. Certainly, high quality spectra will be necessary in the near future to confirm them
as true Type-2 AGN without broad emission lines.
\subsection{Comparing with previous work}
As described in the Introduction, \citet{ba14} have reported g-band variabilities in 17 out of
173 Type-2 quasars covered in the Stripe 82 region, indicating that about 5.8\% ((17-7)/173, with 7 objects
having detected broad emission lines) of the optically selected Type-2 AGN have apparent variabilities.
However, in the manuscript, among the 12881 SDSS pipeline classified Type-2 AGN with CSS light curves,
there are 156 objects with apparent long-term variabilities, indicating that about 0.97\% ((156-31)/12881,
with 31 objects having detected broad emission lines) of the optically selected low redshift Type-2 AGN have
apparent variabilities, about six times smaller than the reported results in \citet{ba14}. Therefore,
in this subsection, it is interesting to explain the different number ratios of Type-2 AGN with and without
long-term variabilities in \citet{ba14} and in the manuscript. The following two reasons
are mainly considered.
On the one hand, as well discussed in \citet{ba14}, 7 of the selected 17 Type-2 quasars
have detected broad emission lines in their re-observed Keck high quality spectra. Probably some of the other
10 Type-2 quasars could also have detectable broad emission lines in high quality spectra, leading to a number
ratio quite smaller than 5.8\%.
On the other hand, if Type-2 AGN with apparent long-term variabilities were evenly distributed
in space, the quite different number ratios should give interesting clues on the properties of the central dust
torus in AGN. As one of the fundamental structures in the Unified model of AGN, the properties of the dust torus have
been well studied for more than four decades, especially the opening angle (covering factor)
of the dust torus. \citet{at05} have proposed the receding torus model, based on the statistically significant
correlation between the half opening angle of the torus and the [O~{\sc iii}] emission-line luminosity.
\citet{zo18} have reported that the half-opening angle of the torus declines with increasing accretion
rate until the Eddington ratio reaches 0.5, above which the trend reverses. However, \citet{nl16, sr16}
have found no evidence for a luminosity dependence of the torus covering factor in AGN. A more recent
review on the dust torus can be found in \citet{ar17}.
There are no definite conclusions on the receding torus model; however, the receding torus model
can be well applied to explain the quite different number ratios of Type-2 AGN with apparent
variabilities. Because our sample is at lower redshift and lower line luminosity than the Type-2 quasars
in \citet{ba14}, smaller opening angles are expected by the receding torus model for the low
redshift objects in our sample, leading to fewer detectable Type-2 AGN with apparent variabilities
in our sample.
\section{Summaries and Conclusions}
The main summaries and conclusions are as follows.
\begin{itemize}
\item Through the SDSS provided SQL search tool in DR16, 14354 SDSS pipeline classified low redshift
	($z<0.3$) Type-2 AGN can be collected from the SDSS main galaxies with spectral signal-to-noise
	larger than 10 and located in the AGN region in the BPT diagram of O3HB versus N2HA.
\item Among the collected 14354 Type-2 AGN, long-term CSS V-band light curves of 12881 Type-2 AGN can
be collected from the CSS.
\item The well-known DRW process is applied to describe the CSS light curves of the 12881 Type-2 AGN,
	leading to well determined process parameters $\tau$ and $\sigma$. Based on the
	measured $\ln(\sigma/(mag/days^{0.5}))$ being larger than -4 and at least 3 times larger than the corresponding
	uncertainties, 156 Type-2 AGN with apparent long-term variabilities are collected.
\item For the 156 Type-2 AGN with apparent variabilities, the well-applied SSP method is accepted to
	determine the host galaxy contributions in the SDSS spectra. After subtraction of the starlight, the emission
	lines around H$\alpha$ can be well measured by multiple Gaussian functions, leading to 31 objects
	with detected broad H$\alpha$. There are no signs (at a confidence level higher than 5$\sigma$ by the
	F-test technique) of broad emission lines in the other 125 Type-2 AGN with long-term variabilities.
\item For the 156 optically selected Type-2 AGN with apparent variabilities, there are 14 objects having
	multi-epoch SDSS spectra. After checking the multi-epoch SDSS spectra, there are no clues to the
	appearance or disappearance of broad lines. The results indicate that classifying the collected Type-2 AGN
	as changing-look AGN is not preferred at the current stage.
\item The 125 (156-31) optically selected Type-2 AGN without broad lines but with apparent variabilities
	could be well accepted as candidates of true Type-2 AGN (AGN without hidden central broad
	emission line regions).
\item Among the 156 optically selected Type-2 AGN with apparent variabilities, there is a small sample
	of objects whose long-term variabilities have features roughly described by the TDE expected
	$t^{-5/3}$, indicating probable central TDEs which can be applied as further evidence to support
	true Type-2 AGN.
\item The number ratio of Type-2 AGN with variabilities to normal Type-2 AGN, smaller than that of the high
	redshift and high luminosity Type-2 quasars reported in \citet{ba14}, could probably be applied to
	support the receding torus model in AGN, if the reported Type-2 quasars with variabilities in
	\citet{ba14} were true Type-2 quasars without central hidden BLRs.
\end{itemize}
\begin{table*}
\caption{Basic information of the 156 Type-2 AGN with apparent long-term variabilities}
\begin{center}
\begin{tabular}{cccccccccc}
\hline\hline
pmf & $z$ & name & m1 & m2 & $\log(L_{O3})$ & $\log(O3HB)$ & $\log(N2HA)$ & $\ln(\sigma)$ & $\ln(\tau)$ \\
\hline
0271-51883-0252 & 0.049 & J101439.5-004951.2 & 13.85 & 14.71 & 40.55$\pm$0.01 & 0.767$\pm$0.034 &
0.2379$\pm$0.018 & -3.15$\pm$0.31 & 6.36$\pm$0.77 \\
0283-51959-0600 & 0.096 & J114815.9-000328.4 & 16.18 & 17.03 & 40.55$\pm$0.01 & 0.675$\pm$0.058 &
-0.072$\pm$0.029 & -2.68$\pm$0.26 & 5.53$\pm$0.71 \\
0284-51662-0220 & 0.138 & J115135.0-005655.9 & 16.51 & 18.89 & 40.51$\pm$0.02 & 0.369$\pm$0.084 &
-0.237$\pm$0.055 & -2.84$\pm$0.28 & 4.54$\pm$1.09 \\
0284-51662-0558 & 0.078 & J115601.1+001237.9 & 15.78 & 23.86 & 40.03$\pm$0.02 & 0.398$\pm$0.081 &
-0.028$\pm$0.024 & -2.39$\pm$0.28 & 6.76$\pm$0.62 \\
0284-51943-0551 & 0.078 & J115601.1+001237.9 & 15.78 & 23.86 & 40.06$\pm$0.01 & 0.445$\pm$0.061 &
-0.023$\pm$0.019 & -2.38$\pm$0.27 & 6.82$\pm$0.63 \\
0288-52000-0152 & 0.154 & J121958.1-003530.6 & 17.39 & 24.31 & 40.57$\pm$0.02 & 0.460$\pm$0.094 &
-0.223$\pm$0.045 & -0.97$\pm$0.12 & 3.79$\pm$0.32 \\
0300-51666-0073 & 0.124 & J135244.6-001526.1 & 16.56 & 17.67 & 40.83$\pm$0.01 & 0.534$\pm$0.048 &
0.0593$\pm$0.022 & -3.30$\pm$0.39 & 6.49$\pm$0.97 \\
0305-51613-0044 & 0.086 & J142934.0-000237.8 & 16.03 & 16.86 & 40.41$\pm$0.01 & 0.480$\pm$0.049 &
-0.019$\pm$0.020 & -2.43$\pm$0.24 & 4.42$\pm$0.58 \\
0305-51613-0146 & 0.150 & J142557.7-002218.9 & 17.50 & 18.76 & 40.18$\pm$0.05 & 0.182$\pm$0.138 &
-0.201$\pm$0.040 & -1.82$\pm$0.24 & 5.85$\pm$0.61 \\
0311-51665-0073 & 0.139 & J151235.5-000007.3 & 17.11 & 18.01 & 41.28$\pm$0.01 & 0.740$\pm$0.030 &
0.0425$\pm$0.014 & -2.23$\pm$0.26 & 4.28$\pm$0.69 \\
0315-51663-0549 & 0.072 & J154121.3+003336.1 & 16.99 & 18.25 & 39.13$\pm$0.12 & 0.455$\pm$0.851 &
-0.018$\pm$0.119 & -2.66$\pm$0.29 & 6.34$\pm$0.74 \\
0375-52140-0063 & 0.130 & J222707.1-001836.4 & 16.38 & 19.08 & 41.62$\pm$0.01 & 0.956$\pm$0.024 &
0.0233$\pm$0.012 & -1.10$\pm$0.12 & 3.23$\pm$0.26 \\
0379-51789-0271 & 0.117 & J225053.0-000041.2 & 17.25 & 18.49 & 40.24$\pm$0.04 & 0.203$\pm$0.121 &
-0.137$\pm$0.035 & -1.96$\pm$0.17 & 3.54$\pm$0.39 \\
0379-51789-0401 & 0.071 & J225222.3+010700.0 & 15.30 & 17.15 & 39.94$\pm$0.04 & 0.116$\pm$0.110 &
-0.174$\pm$0.041 & -3.22$\pm$0.29 & 5.49$\pm$0.88 \\
0382-51816-0426 & 0.178 & J231501.7+001613.9 & 17.23 & 18.04 & 41.65$\pm$0.01 & 0.988$\pm$0.035 &
0.0104$\pm$0.018 & -2.45$\pm$0.23 & 3.30$\pm$0.42 \\
0390-51816-0305 & 0.039 & J001558.2-001812.6 & 14.00 & 14.91 & 39.48$\pm$0.03 & 0.198$\pm$0.084 &
-0.111$\pm$0.038 & -3.32$\pm$0.28 & 5.55$\pm$1.20 \\
0390-51900-0306 & 0.039 & J001558.2-001812.6 & 14.00 & 14.91 & 39.47$\pm$0.02 & 0.152$\pm$0.048 &
-0.061$\pm$0.020 & -3.28$\pm$0.31 & 5.77$\pm$1.26 \\
0411-51817-0217 & 0.013 & J030349.1-010613.4 & 12.53 & 13.69 & 39.98$\pm$0.01 & 0.967$\pm$0.020 &
-0.302$\pm$0.016 & -3.10$\pm$0.30 & 6.30$\pm$0.86 \\
0411-51873-0203 & 0.013 & J030349.1-010613.4 & 12.53 & 13.69 & 39.98$\pm$0.01 & 0.955$\pm$0.020 &
-0.302$\pm$0.016 & -3.13$\pm$0.29 & 6.25$\pm$0.86 \\
0424-51893-0634 & 0.045 & J012839.8+144553.8 & 14.76 & 15.67 & 40.70$\pm$0.01 & 0.827$\pm$0.016 &
0.0378$\pm$0.009 & -2.48$\pm$0.28 & 6.49$\pm$0.72 \\
0447-51877-0287 & 0.096 & J084012.3+512127.0 & 16.07 & 16.93 & 39.42$\pm$0.11 & 0.009$\pm$0.262 &
-0.008$\pm$0.061 & -2.37$\pm$0.22 & 4.87$\pm$0.79 \\
0451-51908-0092 & 0.049 & J092043.1+553804.1 & 16.15 & 16.85 & 39.29$\pm$0.04 & 0.499$\pm$0.234 &
0.0628$\pm$0.077 & -1.98$\pm$0.21 & 4.70$\pm$0.84 \\
0465-51910-0378 & 0.129 & J040403.3-052345.4 & 16.89 & 18.06 & 39.92$\pm$0.07 & 0.118$\pm$0.212 &
0.0711$\pm$0.058 & -2.29$\pm$0.18 & 3.84$\pm$0.57 \\
0500-51994-0390 & 0.098 & J095456.3+022403.3 & 17.35 & 18.33 & 39.86$\pm$0.05 & 0.024$\pm$0.127 &
0.0045$\pm$0.051 & -2.77$\pm$0.27 & 5.31$\pm$0.63 \\
0501-52235-0366 & 0.023 & J100207.0+030327.6 & 13.94 & 14.79 & 40.34$\pm$0.01 & 0.456$\pm$0.010 &
-0.182$\pm$0.008 & -3.26$\pm$0.31 & 6.71$\pm$0.79 \\
0504-52316-0492 & 0.071 & J102813.5+024657.5 & 16.41 & 17.27 & 39.92$\pm$0.02 & 0.288$\pm$0.075 &
-0.200$\pm$0.030 & -2.62$\pm$0.28 & 5.64$\pm$0.88 \\
0533-51994-0187 & 0.127 & J141445.2+015000.3 & 16.74 & 17.87 & 41.27$\pm$0.01 & 0.513$\pm$0.020 &
-0.308$\pm$0.012 & -2.26$\pm$0.23 & 5.27$\pm$0.68 \\
0538-52029-0359 & 0.069 & J145019.0+015205.2 & 15.89 & 16.79 & 41.20$\pm$0.01 & 0.780$\pm$0.014 &
-0.220$\pm$0.011 & -2.57$\pm$0.29 & 7.05$\pm$0.61 \\
0540-51996-0549 & 0.219 & J151215.7+020316.9 & 17.24 & 18.48 & 41.41$\pm$0.01 & 0.564$\pm$0.075 &
0.0898$\pm$0.038 & -1.17$\pm$0.22 & 5.18$\pm$0.58 \\
0543-52017-0342 & 0.117 & J075057.0+380035.3 & 15.96 & 17.10 & 40.23$\pm$0.03 & 0.083$\pm$0.088 &
-0.056$\pm$0.034 & -2.65$\pm$0.13 & 3.26$\pm$0.41 \\
0564-52224-0019 & 0.068 & J085009.2+021606.2 & 17.70 & 18.25 & 39.14$\pm$0.06 & 0.086$\pm$0.169 &
0.0054$\pm$0.057 & -1.92$\pm$0.20 & 4.37$\pm$0.86 \\
0564-52224-0041 & 0.060 & J084654.8+021457.7 & 16.91 & 17.51 & 40.04$\pm$0.01 & 0.182$\pm$0.020 &
-0.070$\pm$0.007 & -2.23$\pm$0.17 & 3.95$\pm$0.56 \\
0571-52286-0339 & 0.159 & J094739.4+044128.4 & 17.13 & 18.63 & 40.14$\pm$0.07 & 0.172$\pm$0.201 &
0.3545$\pm$0.194 & -2.46$\pm$0.26 & 5.09$\pm$0.88 \\
0576-52325-0639 & 0.155 & J103448.5+042419.7 & 15.63 & 16.48 & 40.13$\pm$0.10 & -0.11$\pm$0.201 &
0.3116$\pm$0.066 & -2.51$\pm$0.27 & 5.85$\pm$0.71 \\
0580-52368-0084 & 0.091 & J110027.1+034047.4 & 16.89 & 17.61 & 40.66$\pm$0.01 & 0.740$\pm$0.036 &
-0.079$\pm$0.018 & -2.12$\pm$0.13 & 4.09$\pm$0.61 \\
0581-52356-0028 & 0.028 & J111044.8+043039.0 & 14.01 & 14.86 & 39.97$\pm$0.01 & 0.181$\pm$0.015 &
-0.015$\pm$0.008 & -2.93$\pm$0.28 & 6.61$\pm$0.78 \\
0589-52055-0263 & 0.043 & J150011.9+033819.4 & 15.81 & 16.67 & 39.13$\pm$0.06 & -0.11$\pm$0.107 &
0.0419$\pm$0.024 & -2.90$\pm$0.32 & 6.38$\pm$0.87 \\
0594-52045-0276 & 0.187 & J154317.4+023552.2 & 17.26 & 18.51 & 41.62$\pm$0.01 & 0.951$\pm$0.061 &
0.3197$\pm$0.019 & -1.90$\pm$0.18 & 3.96$\pm$0.69 \\
0630-52050-0241 & 0.121 & J164716.6+384327.6 & 16.57 & 17.45 & 41.57$\pm$0.01 & 0.948$\pm$0.013 &
-0.103$\pm$0.008 & -2.37$\pm$0.26 & 5.54$\pm$0.77 \\
0656-52148-0390 & 0.055 & J004150.4-091811.3 & 13.15 & 19.11 & 39.21$\pm$0.05 & -0.07$\pm$0.120 &
0.2844$\pm$0.033 & -3.42$\pm$0.34 & 5.70$\pm$1.05 \\
0696-52209-0234 & 0.017 & J012116.5-003240.1 & 12.64 & 13.68 & 38.95$\pm$0.02 & 0.121$\pm$0.053 &
0.2841$\pm$0.018 & -3.17$\pm$0.28 & 5.66$\pm$0.94 \\
\hline
\end{tabular}
\end{center}
\tablecomments{
The first column shows the pmf information of plate-mjd-fiberid of each Type-2 AGN, the second column
shows the redshift information of each Type-2 AGN, the third column shows the SDSS coordinate
based name (Jhhmmss.s$\pm$ddmmss.s) of each Type-2 AGN, the fourth column shows the mean photometric
magnitude of the CSS light curve of each Type-2 AGN, the fifth column shows the SDSS provided photometric
g-band Petrosian magnitude of each Type-2 AGN, the sixth column shows the [O~{\sc iii}] line luminosity
$\log(L_{O3}/(erg/s))$ of each Type-2 AGN, the seventh column and the eighth column show the $\log(O3HB)$
and $\log(N2HA)$ of each Type-2 AGN, the last two columns show the determined $\ln(\sigma/(mag/days^{0.5}))$
and $\ln(\tau/days)$ of each Type-2 AGN.
}
\end{table*}
\setcounter{table}{0}
\begin{table*}
\caption{--to be continued}
\begin{center}
\begin{tabular}{cccccccccc}
\hline\hline
pmf & $z$ & name & m1 & m2 & $\log(L_{O3})$ & $\log(O3HB)$ & $\log(N2HA)$ & $\ln(\sigma)$ & $\ln(\tau)$ \\
\hline
0699-52202-0163 & 0.083 & J014638.4-002527.3 & 15.96 & 16.99 & 40.92$\pm$0.01 & 0.688$\pm$0.026 &
-0.135$\pm$0.016 & -2.71$\pm$0.19 & 3.58$\pm$0.72 \\
0720-52206-0191 & 0.085 & J222219.6-085607.1 & 14.84 & 16.01 & 39.67$\pm$0.07 & 0.154$\pm$0.241 &
0.1478$\pm$0.069 & -2.46$\pm$0.26 & 6.10$\pm$0.76 \\
0743-52262-0591 & 0.016 & J231221.4+143633.7 & 12.67 & 14.11 & 39.66$\pm$0.01 & 0.198$\pm$0.025 &
0.0590$\pm$0.015 & -2.59$\pm$0.23 & 5.46$\pm$0.80 \\
0747-52234-0400 & 0.107 & J233454.0+145712.8 & 17.13 & 18.18 & 40.90$\pm$0.01 & 0.423$\pm$0.025 &
0.0085$\pm$0.009 & -1.64$\pm$0.22 & 5.98$\pm$0.50 \\
0758-52253-0514 & 0.129 & J080728.1+391135.3 & 16.87 & 18.12 & 40.51$\pm$0.02 & 0.737$\pm$0.136 &
0.0955$\pm$0.051 & -2.44$\pm$0.22 & 4.92$\pm$0.61 \\
0817-52381-0412 & 0.176 & J163124.2+401735.7 & 17.05 & 18.23 & 40.46$\pm$0.07 & 0.313$\pm$0.196 &
0.3812$\pm$0.119 & -1.97$\pm$0.25 & 5.69$\pm$0.61 \\
0892-52378-0058 & 0.082 & J081013.0+345136.8 & 16.69 & 17.47 & 40.93$\pm$0.01 & 0.568$\pm$0.018 &
-0.149$\pm$0.009 & -2.94$\pm$0.31 & 6.80$\pm$0.68 \\
0914-52721-0233 & 0.075 & J135251.6-023630.5 & 14.48 & 15.52 & 39.98$\pm$0.03 & 0.214$\pm$0.091 &
0.1092$\pm$0.039 & -2.45$\pm$0.25 & 6.99$\pm$0.60 \\
0917-52400-0260 & 0.141 & J141726.0-024948.4 & 17.44 & 18.45 & 41.34$\pm$0.01 & 0.808$\pm$0.023 &
-0.144$\pm$0.011 & -2.71$\pm$0.36 & 6.20$\pm$1.15 \\
0929-52581-0196 & 0.098 & J075846.9+270515.5 & 16.34 & 18.09 & 39.96$\pm$0.03 & -0.14$\pm$0.071 &
-0.170$\pm$0.041 & -2.01$\pm$0.22 & 5.49$\pm$0.64 \\
0931-52619-0094 & 0.097 & J081917.5+301935.7 & 16.62 & 17.54 & 40.63$\pm$0.01 & 0.789$\pm$0.066 &
-0.000$\pm$0.024 & -2.83$\pm$0.27 & 7.12$\pm$0.62 \\
0938-52708-0404 & 0.073 & J091829.0+405803.5 & 16.83 & 17.09 & 41.03$\pm$0.01 & 0.986$\pm$0.025 &
0.0416$\pm$0.013 & -2.15$\pm$0.16 & 4.02$\pm$0.50 \\
0979-52427-0095 & 0.178 & J172000.4+261855.0 & 16.79 & 18.02 & 41.44$\pm$0.01 & 0.411$\pm$0.022 &
-0.068$\pm$0.010 & -2.34$\pm$0.20 & 5.01$\pm$0.62 \\
0983-52443-0594 & 0.135 & J205720.3+001207.3 & 16.88 & 17.98 & 40.79$\pm$0.02 & 0.356$\pm$0.057 &
-0.050$\pm$0.024 & -1.97$\pm$0.20 & 4.28$\pm$0.65 \\
0984-52442-0348 & 0.157 & J205510.0+003533.3 & 16.63 & 17.67 & 40.88$\pm$0.01 & 0.451$\pm$0.064 &
-0.016$\pm$0.026 & -2.39$\pm$0.28 & 6.22$\pm$0.79 \\
1001-52670-0389 & 0.026 & J104729.5+071503.8 & 13.77 & 14.25 & 39.01$\pm$0.04 & -0.11$\pm$0.086 &
0.0537$\pm$0.026 & -3.41$\pm$0.32 & 5.72$\pm$0.95 \\
1036-52582-0135 & 0.058 & J223044.8-010030.5 & 17.93 & 18.94 & 39.79$\pm$0.01 & 0.595$\pm$0.039 &
-0.395$\pm$0.028 & -2.39$\pm$0.21 & 3.68$\pm$0.59 \\
1036-52582-0389 & 0.107 & J222651.8+003023.0 & 17.54 & 18.98 & 39.44$\pm$0.08 & 0.154$\pm$0.230 &
-0.032$\pm$0.062 & -2.75$\pm$0.32 & 5.70$\pm$1.44 \\
1052-52466-0280 & 0.018 & J152945.0+425507.1 & 12.53 & 13.54 & 39.18$\pm$0.02 & 0.275$\pm$0.058 &
0.2313$\pm$0.023 & -1.86$\pm$0.25 & 6.98$\pm$0.61 \\
1065-52586-0237 & 0.065 & J031634.4-002419.1 & 16.42 & 17.72 & 39.83$\pm$0.02 & 0.331$\pm$0.080 &
-0.131$\pm$0.026 & -1.92$\pm$0.26 & 5.65$\pm$0.64 \\
1198-52669-0150 & 0.095 & J085716.7+391545.8 & 17.46 & 17.52 & 40.19$\pm$0.02 & 0.484$\pm$0.070 &
-0.025$\pm$0.025 & -1.70$\pm$0.13 & 3.48$\pm$0.36 \\
1203-52669-0502 & 0.128 & J074713.0+221721.6 & 16.79 & 17.47 & 40.09$\pm$0.05 & 0.119$\pm$0.141 &
0.0067$\pm$0.031 & -1.58$\pm$0.19 & 4.49$\pm$0.45 \\
1204-52669-0281 & 0.181 & J075043.8+225353.3 & 17.78 & 18.89 & 41.32$\pm$0.01 & 1.012$\pm$0.076 &
0.1156$\pm$0.036 & -2.16$\pm$0.17 & 3.88$\pm$0.66 \\
1204-52669-0524 & 0.016 & J075820.4+250857.1 & 12.35 & 13.49 & 38.83$\pm$0.03 & 0.219$\pm$0.083 &
0.1854$\pm$0.031 & -1.76$\pm$0.27 & 6.28$\pm$0.65 \\
1208-52672-0519 & 0.093 & J083318.0+310238.7 & 15.68 & 16.64 & 40.80$\pm$0.01 & 0.592$\pm$0.038 &
0.0255$\pm$0.016 & -3.29$\pm$0.33 & 4.40$\pm$0.74 \\
1237-52762-0623 & 0.103 & J101805.5+084225.7 & 17.40 & 17.74 & 40.89$\pm$0.01 & 0.434$\pm$0.027 &
-0.148$\pm$0.013 & -2.10$\pm$0.18 & 4.60$\pm$0.60 \\
1239-52760-0047 & 0.113 & J103213.9+080048.5 & 17.81 & 18.36 & 40.52$\pm$0.02 & 0.620$\pm$0.147 &
-0.185$\pm$0.045 & -1.99$\pm$0.17 & 4.86$\pm$0.44 \\
1264-52707-0102 & 0.091 & J075502.8+214953.5 & 16.46 & 18.36 & 41.11$\pm$0.01 & 0.829$\pm$0.025 &
-0.402$\pm$0.021 & -1.99$\pm$0.16 & 3.52$\pm$0.38 \\
1265-52705-0448 & 0.142 & J080738.6+244116.8 & 16.88 & 18.36 & 40.06$\pm$0.06 & 0.233$\pm$0.176 &
0.1669$\pm$0.059 & -1.81$\pm$0.10 & 2.82$\pm$0.12 \\
1275-52996-0568 & 0.150 & J093740.2+374048.2 & 17.04 & 18.05 & 40.45$\pm$0.03 & 0.298$\pm$0.108 &
-0.146$\pm$0.030 & -2.18$\pm$0.17 & 3.81$\pm$0.73 \\
1294-52753-0109 & 0.134 & J153758.5+373450.8 & 17.56 & 18.28 & 40.74$\pm$0.01 & 0.510$\pm$0.047 &
-0.158$\pm$0.025 & -2.56$\pm$0.27 & 4.70$\pm$1.20 \\
1305-52757-0458 & 0.047 & J094732.8+104508.5 & 15.76 & 17.04 & 39.47$\pm$0.03 & 0.057$\pm$0.088 &
-0.097$\pm$0.048 & -2.70$\pm$0.19 & 4.82$\pm$0.62 \\
1307-52999-0065 & 0.149 & J100422.9+095143.6 & 17.28 & 18.56 & 41.22$\pm$0.01 & 0.844$\pm$0.048 &
-0.157$\pm$0.025 & -2.66$\pm$0.22 & 5.07$\pm$0.74 \\
1339-52767-0304 & 0.096 & J163330.6+340921.1 & 17.05 & 17.86 & 39.73$\pm$0.06 & 0.332$\pm$0.285 &
0.0291$\pm$0.064 & -2.53$\pm$0.22 & 5.31$\pm$0.57 \\
1341-52786-0159 & 0.136 & J164359.9+321009.8 & 17.27 & 18.46 & 40.97$\pm$0.01 & 0.792$\pm$0.079 &
0.1142$\pm$0.034 & -1.69$\pm$0.13 & 4.13$\pm$0.38 \\
1353-53083-0483 & 0.065 & J151437.4+362938.8 & 16.67 & 17.47 & 39.81$\pm$0.02 & 0.176$\pm$0.066 &
-0.272$\pm$0.029 & -2.36$\pm$0.16 & 3.32$\pm$0.44 \\
1363-53053-0109 & 0.114 & J110539.4+420834.0 & 17.70 & 18.54 & 40.51$\pm$0.01 & 0.250$\pm$0.039 &
-0.149$\pm$0.018 & -1.98$\pm$0.15 & 3.54$\pm$0.40 \\
1365-53062-0057 & 0.121 & J112532.9+435354.1 & 17.06 & 18.16 & 41.01$\pm$0.01 & 0.723$\pm$0.042 &
-0.057$\pm$0.022 & -1.97$\pm$0.14 & 3.50$\pm$0.34 \\
1368-53084-0298 & 0.299 & J114538.5+442021.9 & 17.89 & 18.66 & 41.51$\pm$0.01 & 0.365$\pm$0.076 &
0.0173$\pm$0.087 & -1.83$\pm$0.16 & 3.93$\pm$0.60 \\
1370-53090-0201 & 0.066 & J121101.5+443205.2 & 17.57 & 18.07 & 40.44$\pm$0.01 & 0.852$\pm$0.034 &
-0.097$\pm$0.019 & -2.13$\pm$0.22 & 4.35$\pm$0.62 \\
1382-53115-0193 & 0.014 & J143722.1+363404.2 & 12.52 & 13.28 & 40.44$\pm$0.01 & 0.992$\pm$0.014 &
0.1352$\pm$0.010 & -3.10$\pm$0.28 & 5.74$\pm$0.69 \\
1385-53108-0192 & 0.158 & J150745.0+341122.0 & 17.44 & 18.71 & 40.73$\pm$0.02 & 0.657$\pm$0.101 &
0.0805$\pm$0.110 & -2.50$\pm$0.23 & 4.57$\pm$0.68 \\
1567-53172-0431 & 0.053 & J165642.0+184605.3 & 16.16 & 17.21 & 39.22$\pm$0.06 & 0.528$\pm$0.364 &
0.2051$\pm$0.085 & -2.77$\pm$0.20 & 3.91$\pm$0.62 \\
1591-52976-0432 & 0.048 & J090901.0+321658.9 & 14.92 & 15.56 & 39.58$\pm$0.02 & 0.329$\pm$0.062 &
-0.175$\pm$0.034 & -1.28$\pm$0.09 & 3.63$\pm$0.29 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\setcounter{table}{0}
\begin{table*}
\caption{--to be continued}
\begin{center}
\begin{tabular}{cccccccccc}
\hline\hline
pmf & $z$ & name & m1 & m2 & $\log(L_{O3})$ & $\log(O3HB)$ & $\log(N2HA)$ & $\ln(\sigma)$ & $\ln(\tau)$ \\
\hline
1592-52990-0418 & 0.020 & J091941.4+334417.3 & 13.44 & 14.79 & 39.58$\pm$0.01 & 0.677$\pm$0.021 &
-0.133$\pm$0.013 & -2.26$\pm$0.29 & 5.79$\pm$0.77 \\
1594-52992-0566 & 0.022 & J094319.1+361452.1 & 12.92 & 13.74 & 39.82$\pm$0.01 & 0.236$\pm$0.020 &
-0.072$\pm$0.011 & -3.66$\pm$0.36 & 6.20$\pm$0.92 \\
1691-53260-0603 & 0.035 & J165630.5+275839.0 & 13.37 & 14.74 & 39.65$\pm$0.02 & 0.218$\pm$0.076 &
0.0918$\pm$0.040 & -3.45$\pm$0.30 & 6.58$\pm$0.75 \\
1692-53473-0151 & 0.052 & J165721.8+253350.0 & 16.73 & 17.84 & 40.38$\pm$0.01 & 0.707$\pm$0.023 &
-0.116$\pm$0.009 & -2.72$\pm$0.23 & 3.78$\pm$0.69 \\
1722-53852-0289 & 0.132 & J152614.9+095307.6 & 15.88 & 17.57 & 40.48$\pm$0.02 & 0.429$\pm$0.097 &
0.2224$\pm$0.034 & -2.47$\pm$0.17 & 3.38$\pm$0.59 \\
1722-53852-0625 & 0.087 & J153550.4+104420.2 & 15.31 & 16.74 & 40.56$\pm$0.01 & 0.430$\pm$0.022 &
-0.034$\pm$0.011 & -2.16$\pm$0.17 & 3.35$\pm$0.28 \\
1732-53501-0363 & 0.107 & J161907.2+081701.3 & 16.82 & 18.21 & 39.28$\pm$0.18 & -0.11$\pm$0.431 &
0.1642$\pm$0.220 & -2.00$\pm$0.15 & 3.83$\pm$0.60 \\
1752-53379-0416 & 0.051 & J111353.7+145900.0 & 15.32 & 16.44 & 39.45$\pm$0.04 & 0.038$\pm$0.092 &
0.0302$\pm$0.045 & -2.74$\pm$0.28 & 6.52$\pm$0.79 \\
1754-53385-0485 & 0.080 & J113122.6+151513.1 & 17.28 & 18.04 & 40.81$\pm$0.01 & 0.491$\pm$0.016 &
-0.150$\pm$0.009 & -2.86$\pm$0.25 & 5.38$\pm$0.73 \\
1758-53084-0178 & 0.111 & J082832.1+075923.7 & 17.45 & 18.22 & 40.50$\pm$0.01 & 0.348$\pm$0.044 &
-0.124$\pm$0.018 & -2.57$\pm$0.23 & 5.37$\pm$0.55 \\
1775-53847-0226 & 0.077 & J133127.7+134005.8 & 15.89 & 16.20 & 39.82$\pm$0.05 & 0.083$\pm$0.123 &
-0.128$\pm$0.033 & -1.14$\pm$0.17 & 4.31$\pm$0.50 \\
1794-54504-0415 & 0.094 & J130116.2+080310.6 & 17.16 & 18.13 & 39.88$\pm$0.03 & 0.327$\pm$0.127 &
-0.187$\pm$0.064 & -2.81$\pm$0.26 & 5.40$\pm$0.75 \\
1821-53167-0528 & 0.134 & J154601.0+063121.7 & 17.41 & 18.55 & 40.58$\pm$0.01 & 0.153$\pm$0.039 &
-0.205$\pm$0.015 & -1.74$\pm$0.25 & 5.83$\pm$0.71 \\
1827-53531-0008 & 0.200 & J143238.6+051921.8 & 17.57 & 18.26 & 42.02$\pm$0.01 & 0.577$\pm$0.014 &
-0.232$\pm$0.009 & -2.72$\pm$0.27 & 4.79$\pm$0.90 \\
1829-53494-0062 & 0.082 & J144947.2+053205.3 & 16.92 & 17.60 & 40.23$\pm$0.01 & 0.551$\pm$0.079 &
-0.100$\pm$0.037 & -2.48$\pm$0.23 & 4.56$\pm$0.94 \\
1833-54561-0586 & 0.102 & J151845.7+061356.1 & 15.19 & 16.45 & 39.83$\pm$0.09 & -0.53$\pm$0.144 &
-0.065$\pm$0.089 & -2.63$\pm$0.27 & 6.64$\pm$0.68 \\
1842-53501-0073 & 0.053 & J144552.2+304854.0 & 16.07 & 16.89 & 39.45$\pm$0.03 & 0.053$\pm$0.088 &
0.0886$\pm$0.044 & -2.60$\pm$0.30 & 6.63$\pm$0.73 \\
1854-53566-0011 & 0.103 & J163018.5+193017.0 & 17.36 & 18.40 & 40.51$\pm$0.01 & 0.742$\pm$0.121 &
0.0201$\pm$0.051 & -2.69$\pm$0.33 & 5.44$\pm$1.19 \\
1864-53313-0466 & 0.070 & J072520.5+413900.8 & 15.78 & 16.63 & 39.75$\pm$0.03 & 0.097$\pm$0.093 &
0.1225$\pm$0.041 & -3.30$\pm$0.24 & 4.92$\pm$0.63 \\
1865-53312-0129 & 0.066 & J072932.9+424145.7 & 16.57 & 17.12 & 39.65$\pm$0.02 & 0.304$\pm$0.077 &
-0.038$\pm$0.035 & -2.39$\pm$0.15 & 3.60$\pm$0.52 \\
1870-53383-0466 & 0.000 & J075736.4+532557.0 & 16.43 & 18.98 & 0.000$\pm$0.01 & 0.560$\pm$0.023 &
0.4368$\pm$0.011 & -1.52$\pm$0.24 & 6.01$\pm$0.67 \\
1920-53314-0489 & 0.044 & J075238.9+181917.7 & 15.21 & 15.85 & 40.72$\pm$0.01 & 0.562$\pm$0.011 &
-0.112$\pm$0.007 & -2.80$\pm$0.31 & 6.35$\pm$0.80 \\
1927-53321-0261 & 0.018 & J081937.8+210651.4 & 13.77 & 14.43 & 40.33$\pm$0.01 & 1.011$\pm$0.010 &
-0.032$\pm$0.006 & -2.88$\pm$0.18 & 4.01$\pm$0.55 \\
1930-53347-0150 & 0.070 & J083949.2+250458.4 & 16.71 & 17.96 & 39.14$\pm$0.08 & 0.336$\pm$0.432 &
0.0637$\pm$0.104 & -3.14$\pm$0.24 & 4.73$\pm$0.83 \\
1945-53387-0487 & 0.133 & J094713.9+334535.4 & 17.57 & 18.61 & 40.79$\pm$0.01 & 0.590$\pm$0.045 &
-0.278$\pm$0.025 & -2.64$\pm$0.21 & 4.39$\pm$0.63 \\
1951-53389-0400 & 0.118 & J100323.4+352503.8 & 17.57 & 18.67 & 40.89$\pm$0.01 & 0.812$\pm$0.048 &
-0.234$\pm$0.033 & -2.26$\pm$0.25 & 6.27$\pm$0.57 \\
1990-53472-0604 & 0.072 & J110742.3+310627.5 & 16.23 & 17.31 & 39.84$\pm$0.03 & 0.195$\pm$0.086 &
-0.295$\pm$0.038 & -2.63$\pm$0.21 & 3.56$\pm$0.46 \\
2004-53737-0520 & 0.192 & J121329.4+322332.1 & 17.32 & 18.26 & 41.36$\pm$0.01 & 0.508$\pm$0.026 &
-0.115$\pm$0.015 & -1.16$\pm$0.15 & 4.06$\pm$0.41 \\
2007-53474-0631 & 0.145 & J105801.5+383912.0 & 16.29 & 17.55 & 40.48$\pm$0.02 & 0.231$\pm$0.070 &
0.2100$\pm$0.024 & -3.05$\pm$0.27 & 3.88$\pm$0.70 \\
2025-53431-0173 & 0.082 & J104424.1+334642.8 & 16.52 & 17.03 & 39.98$\pm$0.03 & 0.132$\pm$0.088 &
-0.050$\pm$0.033 & -2.15$\pm$0.13 & 3.37$\pm$0.33 \\
2090-53463-0487 & 0.134 & J104758.9+362518.0 & 17.49 & 17.94 & 40.76$\pm$0.01 & 0.612$\pm$0.074 &
0.1504$\pm$0.024 & -1.35$\pm$0.17 & 4.57$\pm$0.51 \\
2139-53878-0330 & 0.032 & J143727.8+254556.0 & 13.91 & 14.66 & 41.34$\pm$0.01 & 1.381$\pm$0.026 &
0.1180$\pm$0.009 & -2.60$\pm$0.28 & 6.73$\pm$0.67 \\
2168-53886-0218 & 0.129 & J154528.1+191934.1 & 17.45 & 18.58 & 40.34$\pm$0.03 & 0.309$\pm$0.112 &
0.0628$\pm$0.041 & -2.14$\pm$0.24 & 4.37$\pm$0.83 \\
2196-53534-0565 & 0.031 & J160523.3+174525.8 & 14.27 & 15.10 & 39.08$\pm$0.03 & -0.02$\pm$0.063 &
-0.091$\pm$0.019 & -2.58$\pm$0.15 & 4.04$\pm$0.85 \\
2227-53820-0518 & 0.028 & J120638.9+281028.1 & 11.74 & 25.11 & 38.75$\pm$0.06 & -0.19$\pm$0.117 &
0.0126$\pm$0.026 & -2.58$\pm$0.22 & 5.49$\pm$0.67 \\
2268-53682-0136 & 0.108 & J080750.5+130723.7 & 17.57 & 18.50 & 39.56$\pm$0.09 & 0.163$\pm$0.269 &
-0.283$\pm$0.307 & -1.96$\pm$0.22 & 4.80$\pm$0.67 \\
2269-53711-0508 & 0.179 & J081437.9+172208.3 & 17.35 & 18.65 & 40.56$\pm$0.03 & 0.602$\pm$0.184 &
0.0931$\pm$0.078 & -2.78$\pm$0.28 & 5.91$\pm$0.62 \\
2270-53714-0145 & 0.131 & J081635.7+143538.7 & 17.20 & 18.35 & 39.94$\pm$0.06 & 0.088$\pm$0.152 &
-0.092$\pm$0.036 & -2.36$\pm$0.20 & 4.29$\pm$0.49 \\
2271-53726-0055 & 0.153 & J082448.2+164044.7 & 17.55 & 18.33 & 41.11$\pm$0.01 & 0.733$\pm$0.047 &
-0.078$\pm$0.017 & -1.70$\pm$0.14 & 4.22$\pm$0.46 \\
2278-53711-0043 & 0.073 & J084230.7+160618.6 & 17.51 & 18.18 & 40.93$\pm$0.01 & 0.694$\pm$0.016 &
-0.198$\pm$0.010 & -2.41$\pm$0.23 & 5.43$\pm$0.57 \\
2282-53683-0335 & 0.082 & J085149.1+221209.9 & 17.49 & 18.15 & 39.83$\pm$0.03 & 0.415$\pm$0.119 &
0.0267$\pm$0.028 & -2.17$\pm$0.18 & 5.25$\pm$0.51 \\
2288-53699-0015 & 0.029 & J092045.6+193327.5 & 16.72 & 17.64 & 39.45$\pm$0.01 & 0.506$\pm$0.047 &
0.1366$\pm$0.013 & -2.26$\pm$0.17 & 5.27$\pm$0.42 \\
2295-53734-0631 & 0.152 & J095307.4+223307.7 & 17.08 & 18.49 & 40.06$\pm$0.07 & -0.01$\pm$0.162 &
-0.117$\pm$0.043 & -1.66$\pm$0.12 & 3.77$\pm$0.34 \\
2341-53738-0154 & 0.034 & J095308.5+241756.8 & 15.84 & 16.43 & 38.97$\pm$0.03 & 0.150$\pm$0.072 &
0.0812$\pm$0.021 & -3.07$\pm$0.26 & 4.97$\pm$0.65 \\
2346-53734-0073 & 0.037 & J102153.8+234017.4 & 14.21 & 15.14 & 39.54$\pm$0.02 & 0.428$\pm$0.077 &
0.0913$\pm$0.027 & -3.55$\pm$0.35 & 6.44$\pm$0.85 \\
2356-53786-0589 & 0.168 & J104158.6+274237.5 & 17.35 & 18.75 & 40.90$\pm$0.01 & 0.611$\pm$0.068 &
-0.004$\pm$0.030 & -2.18$\pm$0.17 & 4.17$\pm$0.53 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\setcounter{table}{0}
\begin{table*}
\caption{--to be continued}
\begin{center}
\begin{tabular}{cccccccccc}
\hline\hline
pmf & $z$ & name & m1 & m2 & $\log(L_{O3})$ & $\log(O3HB)$ & $\log(N2HA)$ & $\ln(\sigma)$ & $\ln(\tau)$ \\
\hline
2365-53739-0157 & 0.004 & J101738.5+214117.6 & 11.55 & 13.93 & 38.92$\pm$0.01 & 0.566$\pm$0.012 &
-0.102$\pm$0.008 & -3.06$\pm$0.27 & 6.21$\pm$0.70 \\
2366-53741-0416 & 0.134 & J102313.2+225438.8 & 17.16 & 18.21 & 40.08$\pm$0.05 & -0.08$\pm$0.116 &
-0.053$\pm$0.044 & -2.81$\pm$0.21 & 4.76$\pm$0.57 \\
2374-53765-0174 & 0.064 & J101936.8+193313.5 & 16.24 & 17.09 & 41.27$\pm$0.01 & 1.025$\pm$0.016 &
-0.152$\pm$0.010 & -2.17$\pm$0.11 & 3.56$\pm$0.39 \\
2375-53770-0517 & 0.114 & J102654.3+201914.4 & 17.48 & 18.36 & 40.90$\pm$0.01 & 0.783$\pm$0.051 &
-0.127$\pm$0.022 & -2.74$\pm$0.22 & 4.92$\pm$0.71 \\
2418-53794-0079 & 0.131 & J080611.5+110113.3 & 17.16 & 18.31 & 40.72$\pm$0.01 & 0.398$\pm$0.034 &
-0.225$\pm$0.017 & -2.44$\pm$0.19 & 4.98$\pm$0.62 \\
2419-54139-0036 & 0.052 & J080819.4+090050.0 & 14.05 & 15.12 & 39.79$\pm$0.02 & 0.130$\pm$0.071 &
0.0146$\pm$0.031 & -2.85$\pm$0.21 & 4.60$\pm$0.71 \\
2419-54139-0050 & 0.046 & J080431.3+083920.1 & 15.72 & 16.79 & 39.56$\pm$0.02 & 0.566$\pm$0.133 &
0.2181$\pm$0.055 & -3.16$\pm$0.27 & 5.85$\pm$0.77 \\
2422-54096-0372 & 0.066 & J082013.0+134108.3 & 15.63 & 16.33 & 39.57$\pm$0.03 & 0.249$\pm$0.091 &
-0.202$\pm$0.030 & -3.33$\pm$0.28 & 5.81$\pm$0.73 \\
2422-54096-0425 & 0.173 & J081909.8+132817.0 & 16.43 & 18.03 & 40.31$\pm$0.07 & 0.363$\pm$0.308 &
0.0102$\pm$0.135 & -3.23$\pm$0.59 & 5.77$\pm$1.17 \\
2425-54139-0093 & 0.140 & J083021.7+122533.5 & 17.99 & 18.86 & 40.35$\pm$0.02 & 0.270$\pm$0.056 &
-0.289$\pm$0.027 & -2.05$\pm$0.17 & 4.01$\pm$0.78 \\
2434-53826-0012 & 0.125 & J090950.5+133248.4 & 17.01 & 17.74 & 39.92$\pm$0.08 & -0.02$\pm$0.182 &
-0.076$\pm$0.040 & -2.71$\pm$0.23 & 4.86$\pm$0.80 \\
2436-54054-0552 & 0.113 & J091231.4+150559.8 & 16.42 & 17.33 & 40.08$\pm$0.04 & 0.087$\pm$0.091 &
0.1267$\pm$0.034 & -2.59$\pm$0.20 & 4.73$\pm$0.75 \\
2485-54176-0587 & 0.047 & J110156.4+193648.1 & 15.74 & 16.60 & 39.00$\pm$0.06 & 0.249$\pm$0.213 &
-0.035$\pm$0.085 & -3.11$\pm$0.26 & 5.48$\pm$0.62 \\
2489-53857-0561 & 0.166 & J111045.0+250801.9 & 17.29 & 18.54 & 41.12$\pm$0.01 & 0.799$\pm$0.065 &
-0.113$\pm$0.037 & -2.25$\pm$0.19 & 3.50$\pm$0.26 \\
2526-54582-0319 & 0.123 & J160427.4+092933.9 & 17.00 & 18.25 & 41.08$\pm$0.01 & 0.636$\pm$0.032 &
-0.133$\pm$0.011 & -2.24$\pm$0.30 & 6.58$\pm$0.73 \\
2572-54056-0139 & 0.075 & J083202.7+093759.1 & 15.76 & 17.82 & 40.44$\pm$0.01 & 0.727$\pm$0.055 &
0.1424$\pm$0.021 & -2.64$\pm$0.29 & 6.80$\pm$0.68 \\
2593-54175-0025 & 0.335 & J104043.7+174705.7 & 18.41 & 19.26 & 41.50$\pm$0.01 & 0.389$\pm$0.043 &
-0.127$\pm$0.044 & -2.69$\pm$0.32 & 5.95$\pm$0.71 \\
2644-54210-0229 & 0.023 & J120807.0+220604.1 & 16.64 & 16.68 & 38.90$\pm$0.02 & 0.251$\pm$0.061 &
-0.312$\pm$0.030 & -1.84$\pm$0.13 & 3.34$\pm$0.38 \\
2744-54272-0490 & 0.244 & J140244.5+155956.6 & 17.30 & 18.18 & 41.17$\pm$0.01 & 0.020$\pm$0.041 &
-0.263$\pm$0.050 & -2.77$\pm$0.35 & 6.54$\pm$0.94 \\
2771-54527-0146 & 0.126 & J140728.1+194946.1 & 16.97 & 18.09 & 40.98$\pm$0.01 & 0.443$\pm$0.026 &
-0.281$\pm$0.015 & -2.00$\pm$0.24 & 5.25$\pm$1.27 \\
2789-54555-0281 & 0.030 & J143916.1+200237.3 & 12.93 & 13.96 & 39.73$\pm$0.02 & 0.114$\pm$0.059 &
0.1780$\pm$0.029 & -2.18$\pm$0.23 & 7.20$\pm$0.51 \\
2790-54555-0124 & 0.044 & J145120.9+184028.9 & 16.60 & 16.72 & 39.05$\pm$0.04 & 0.235$\pm$0.148 &
-0.103$\pm$0.051 & -2.57$\pm$0.23 & 4.48$\pm$0.85 \\
2886-54498-0629 & 0.072 & J105742.9+095255.7 & 16.24 & 17.66 & 39.83$\pm$0.02 & 0.319$\pm$0.077 &
-0.318$\pm$0.030 & -2.93$\pm$0.28 & 5.24$\pm$0.80 \\
2948-54553-0408 & 0.121 & J163447.6+284323.3 & 17.57 & 17.83 & 40.28$\pm$0.02 & 0.443$\pm$0.099 &
0.1082$\pm$0.040 & -2.02$\pm$0.16 & 3.92$\pm$0.69 \\
2968-54585-0061 & 0.083 & J161601.6+175653.8 & 16.44 & 17.17 & 41.13$\pm$0.01 & 0.726$\pm$0.020 &
-0.045$\pm$0.009 & -2.31$\pm$0.25 & 5.05$\pm$0.62 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\caption{Line parameters of the 156 Type-2 AGN with apparent long-term variabilities}
\begin{tabular}{ccccccccccc}
\hline\hline
pmf & \multicolumn{2}{c}{first [N~{\sc ii}]} & \multicolumn{2}{c}{second [N~{\sc ii}]} & \multicolumn{2}{c}{first H$\alpha$} &
\multicolumn{2}{c}{second H$\alpha$} & $\chi^2$ & broad? \\
& $\sigma$ & flux & $\sigma$ & flux & $\sigma$ & flux & $\sigma$ & flux & & \\
\hline
0271-51883-0252 & 3.72$\pm$0.09 & 609$\pm$18 & $\dots$ & $\dots$ & 3.50$\pm$0.26 & 203$\pm$20 & 20.2$\pm$1.14 & 537$\pm$45 & 0.80 & yes \\
0283-51959-0600 & 2.65$\pm$0.10 & 112$\pm$3 & $\dots$ & $\dots$ & 2.81$\pm$0.10 & 121$\pm$4 & $\dots$ & $\dots$ & 0.78 & no \\
0284-51662-0220 & 2.46$\pm$0.24 & 39$\pm$4 & $\dots$ & $\dots$ & 2.04$\pm$0.16 & 52$\pm$3 & $\dots$ & $\dots$ & 0.90 & no \\
0284-51662-0558 & 3.02$\pm$0.13 & 128$\pm$6 & $\dots$ & $\dots$ & 2.75$\pm$0.13 & 126$\pm$6 & 26.2$\pm$2.68 & 173$\pm$18 & 0.98 & yes \\
0284-51943-0551 & 2.79$\pm$0.13 & 117$\pm$6 & $\dots$ & $\dots$ & 2.58$\pm$0.10 & 118$\pm$5 & 24.1$\pm$1.96 & 158$\pm$14 & 0.94 & yes \\
0288-52000-0152 & 2.87$\pm$0.24 & 71$\pm$5 & $\dots$ & $\dots$ & 1.98$\pm$0.31 & 63$\pm$18 & 2.07$\pm$0.44 & 48$\pm$18 & 1.02 & no \\
0300-51666-0073 & 4.40$\pm$0.11 & 280$\pm$6 & $\dots$ & $\dots$ & 4.18$\pm$0.13 & 242$\pm$6 & $\dots$ & $\dots$ & 1.78 & no \\
0305-51613-0044 & 3.98$\pm$0.13 & 204$\pm$7 & $\dots$ & $\dots$ & 4.30$\pm$0.30 & 188$\pm$15 & $\dots$ & $\dots$ & 0.74 & no \\
0305-51613-0146 & 2.99$\pm$0.34 & 54$\pm$7 & $\dots$ & $\dots$ & 2.64$\pm$0.10 & 89$\pm$3 & $\dots$ & $\dots$ & 1.21 & no \\
0311-51665-0073 & 3.44$\pm$0.09 & 228$\pm$7 & $\dots$ & $\dots$ & 2.86$\pm$0.09 & 200$\pm$7 & 18.3$\pm$0.81 & 270$\pm$17 & 1.16 & yes \\
0315-51663-0549 & 2.88$\pm$0.84 & 17$\pm$5 & $\dots$ & $\dots$ & 1.38$\pm$0.31 & 10$\pm$2 & $\dots$ & $\dots$ & 0.89 & no \\
0375-52140-0063 & 3.24$\pm$0.07 & 324$\pm$8 & $\dots$ & $\dots$ & 2.64$\pm$0.07 & 275$\pm$8 & 15.8$\pm$0.79 & 279$\pm$18 & 1.45 & yes \\
0379-51789-0271 & 4.52$\pm$0.25 & 140$\pm$7 & $\dots$ & $\dots$ & 4.30$\pm$0.19 & 192$\pm$7 & $\dots$ & $\dots$ & 2.04 & no \\
0379-51789-0401 & 2.68$\pm$0.17 & 153$\pm$9 & $\dots$ & $\dots$ & 2.27$\pm$0.10 & 216$\pm$8 & $\dots$ & $\dots$ & 1.03 & no \\
0382-51816-0426 & 3.26$\pm$0.10 & 186$\pm$7 & $\dots$ & $\dots$ & 3.00$\pm$0.11 & 169$\pm$8 & 16.2$\pm$2.81 & 77$\pm$19 & 0.87 & yes \\
0390-51816-0305 & 3.17$\pm$0.18 & 177$\pm$30 & $\dots$ & $\dots$ & 3.25$\pm$0.25 & 208$\pm$15 & $\dots$ & $\dots$ & 0.84 & no \\
0390-51900-0306 & 3.83$\pm$0.27 & 162$\pm$17 & $\dots$ & $\dots$ & 1.72$\pm$0.24 & 85$\pm$26 & 3.52$\pm$0.41 & 127$\pm$30 & 0.79 & no \\
0411-51817-0217 & 3.27$\pm$0.12 & 760$\pm$29 & $\dots$ & $\dots$ & 3.22$\pm$0.28 & 1308$\pm$150 & $\dots$ & $\dots$ & 0.61 & no \\
0411-51873-0203 & 3.30$\pm$0.18 & 728$\pm$56 & $\dots$ & $\dots$ & 3.12$\pm$0.09 & 1351$\pm$41 & $\dots$ & $\dots$ & 0.47 & no \\
0424-51893-0634 & 2.66$\pm$0.07 & 619$\pm$35 & 6.54$\pm$0.91 & 189$\pm$37 & 2.41$\pm$0.10 & 494$\pm$46 & 5.84$\pm$0.96 & 225$\pm$38 & 1.36 & no \\
0447-51877-0287 & 2.32$\pm$0.20 & 56$\pm$4 & $\dots$ & $\dots$ & 1.78$\pm$0.24 & 47$\pm$8 & $\dots$ & $\dots$ & 0.74 & no \\
0451-51908-0092 & 2.86$\pm$0.25 & 63$\pm$5 & $\dots$ & $\dots$ & 3.39$\pm$0.49 & 50$\pm$10 & $\dots$ & $\dots$ & 0.63 & no \\
0465-51910-0378 & 3.83$\pm$0.32 & 55$\pm$4 & $\dots$ & $\dots$ & 3.63$\pm$0.71 & 47$\pm$4 & $\dots$ & $\dots$ & 1.10 & no \\
\hline
\end{tabular}
\end{center}
\tablecomments{
The first column lists the pmf (plate-mjd-fiberid) information of each Type-2 AGN. The second and third columns
list the second moment (in units of \AA) and the line flux (in units of $10^{-17}{\rm erg/s/cm^2}$)
of the first component of [N~{\sc ii}]$\lambda6583$\AA~ of each Type-2 AGN, and the fourth and fifth columns
list the second moment and line flux of the second component of [N~{\sc ii}]$\lambda6583$\AA.
The sixth to ninth columns list the second moment and line flux
of the first and second components of H$\alpha$. The tenth column lists the
$\chi^2$ (the summed squared residuals divided by the number of degrees of freedom) of the best fit to the
emission lines, and the last column indicates whether there is a broad component in H$\alpha$: yes means
there is a probable broad H$\alpha$ component. \\
The symbol $\dots$ means no reliable parameter, i.e., the measured value is smaller than three times its corresponding uncertainty.\\
The symbol $**$ marks the unique spectroscopic features of SDSS 1870-53383-0466, whose emission
lines and absorption features give different redshifts.
}
\end{table*}
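As a side note on the $\chi^2$ column defined in the table notes above: it is the summed squared residuals (presumably weighted by the flux uncertainties, as is standard) divided by the number of degrees of freedom. A minimal, purely illustrative Python sketch of that bookkeeping, using synthetic numbers rather than the actual line-fitting pipeline behind these tables, is:
\begin{verbatim}
import numpy as np

def reduced_chi2(flux, model, sigma, n_free_params):
    # summed squared residuals, weighted by the flux uncertainties,
    # divided by the number of degrees of freedom
    resid = (flux - model) / sigma
    dof = flux.size - n_free_params
    return np.sum(resid**2) / dof

# toy usage: one Gaussian emission line plus Gaussian noise
rng = np.random.default_rng(0)
wave = np.linspace(6540.0, 6620.0, 200)
model = 100.0 * np.exp(-0.5 * ((wave - 6583.0) / 3.0) ** 2)
sigma = np.full_like(wave, 5.0)
flux = model + rng.normal(0.0, 5.0, wave.size)
print(reduced_chi2(flux, model, sigma, n_free_params=3))  # ~1 for a good fit
\end{verbatim}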
\setcounter{table}{1}
\begin{table*}
\begin{center}
\caption{--to be continued.}
\begin{tabular}{ccccccccccc}
\hline\hline
pmf & \multicolumn{2}{c}{first [N~{\sc ii}]} & \multicolumn{2}{c}{second [N~{\sc ii}]} & \multicolumn{2}{c}{first H$\alpha$} &
\multicolumn{2}{c}{second H$\alpha$} & $\chi^2$ & broad?\\
& $\sigma$ & flux & $\sigma$ & flux & $\sigma$ & flux & $\sigma$ & flux & & \\
\hline
0500-51994-0390 & 5.07$\pm$0.35 & 76$\pm$4 & $\dots$ & $\dots$ & 1.97$\pm$0.34 & 26$\pm$6 & 2.91$\pm$0.54 & 37$\pm$7 & 0.80 & no \\
0501-52235-0366 & 1.98$\pm$0.04 & 1197$\pm$33 & 6.48$\pm$1.15 & 206$\pm$43 & 2.06$\pm$0.02 & 2134$\pm$22 & 44.6$\pm$0.74 & 4705$\pm$76 & 1.30 & yes \\
0504-52316-0492 & 2.52$\pm$0.57 & 80$\pm$26 & $\dots$ & $\dots$ & 3.03$\pm$0.12 & 158$\pm$7 & $\dots$ & $\dots$ & 0.68 & no \\
0533-51994-0187 & 2.89$\pm$0.41 & 197$\pm$64 & $\dots$ & $\dots$ & 4.03$\pm$0.04 & 616$\pm$7 & $\dots$ & $\dots$ & 1.56 & no \\
0538-52029-0359 & 2.70$\pm$0.04 & 564$\pm$8 & $\dots$ & $\dots$ & 2.82$\pm$0.03 & 970$\pm$11 & $\dots$ & $\dots$ & 1.90 & no \\
0540-51996-0549 & 1.92$\pm$0.31 & 63$\pm$24 & $\dots$ & $\dots$ & 2.50$\pm$0.29 & 50$\pm$7 & 23.8$\pm$2.37 & 185$\pm$25 & 1.00 & yes \\
0543-52017-0342 & 4.83$\pm$0.23 & 157$\pm$6 & $\dots$ & $\dots$ & 4.24$\pm$0.21 & 154$\pm$7 & $\dots$ & $\dots$ & 1.12 & no \\
0564-52224-0019 & 2.58$\pm$0.68 & 30$\pm$2 & $\dots$ & $\dots$ & 3.32$\pm$0.26 & 38$\pm$2 & $\dots$ & $\dots$ & 1.12 & no \\
0564-52224-0041 & 2.41$\pm$0.03 & 325$\pm$4 & $\dots$ & $\dots$ & 1.90$\pm$0.02 & 382$\pm$4 & 15.3$\pm$0.41 & 353$\pm$10 & 2.23 & yes \\
0571-52286-0339 & 2.87$\pm$0.89 & 34$\pm$9 & $\dots$ & $\dots$ & 5.53$\pm$1.93 & 27$\pm$8 & $\dots$ & $\dots$ & 1.02 & no \\
0576-52325-0639 & 4.20$\pm$0.29 & 157$\pm$10 & $\dots$ & $\dots$ & 3.00$\pm$0.30 & 66$\pm$5 & $\dots$ & $\dots$ & 1.16 & no \\
0580-52368-0084 & 2.10$\pm$0.12 & 132$\pm$8 & $\dots$ & $\dots$ & 1.77$\pm$0.04 & 160$\pm$4 & 15.1$\pm$1.97 & 74$\pm$11 & 0.95 & yes \\
0581-52356-0028 & 2.09$\pm$0.04 & 1081$\pm$32 & 6.82$\pm$0.55 & 449$\pm$29 & 1.80$\pm$0.04 & 1145$\pm$47 & 4.60$\pm$0.51 & 306$\pm$44 & 0.63 & no \\
0589-52055-0263 & 2.56$\pm$0.07 & 231$\pm$6 & $\dots$ & $\dots$ & 2.46$\pm$0.08 & 192$\pm$5 & $\dots$ & $\dots$ & 0.72 & no \\
0594-52045-0276 & 3.80$\pm$0.11 & 378$\pm$15 & $\dots$ & $\dots$ & 3.89$\pm$0.21 & 171$\pm$13 & $\dots$ & $\dots$ & 1.07 & no \\
0630-52050-0241 & 2.06$\pm$0.06 & 265$\pm$16 & 5.39$\pm$0.41 & 174$\pm$15 & 1.99$\pm$0.07 & 297$\pm$23 & 4.87$\pm$0.30 & 260$\pm$21 & 1.58 & no \\
0656-52148-0390 & 3.28$\pm$0.12 & 149$\pm$6 & $\dots$ & $\dots$ & 3.09$\pm$0.26 & 69$\pm$7 & 25.5$\pm$4.48 & 96$\pm$19 & 1.15 & yes \\
0696-52209-0234 & 2.99$\pm$0.05 & 861$\pm$14 & $\dots$ & $\dots$ & 2.64$\pm$0.09 & 388$\pm$12 & $\dots$ & $\dots$ & 0.76 & no \\
0699-52202-0163 & 3.81$\pm$0.09 & 260$\pm$5 & $\dots$ & $\dots$ & 3.85$\pm$0.07 & 343$\pm$6 & $\dots$ & $\dots$ & 0.74 & no \\
0720-52206-0191 & 2.67$\pm$0.38 & 91$\pm$22 & $\dots$ & $\dots$ & 3.11$\pm$1.08 & 46$\pm$12 & $\dots$ & $\dots$ & 0.57 & no \\
0743-52262-0591 & 2.66$\pm$0.08 & 1204$\pm$37 & $\dots$ & $\dots$ & 2.25$\pm$0.12 & 613$\pm$36 & 16.9$\pm$0.42 & 2998$\pm$94 & 0.87 & yes \\
0747-52234-0400 & 2.77$\pm$0.07 & 417$\pm$16 & 9.70$\pm$0.46 & 439$\pm$16 & 2.76$\pm$0.13 & 513$\pm$66 & $\dots$ & $\dots$ & 1.06 & no \\
0758-52253-0514 & 4.24$\pm$0.25 & 79$\pm$4 & $\dots$ & $\dots$ & 3.31$\pm$0.27 & 50$\pm$3 & $\dots$ & $\dots$ & 1.13 & no \\
0817-52381-0412 & 6.08$\pm$0.49 & 93$\pm$6 & $\dots$ & $\dots$ & 5.68$\pm$0.82 & 49$\pm$6 & $\dots$ & $\dots$ & 1.02 & no \\
0892-52378-0058 & 2.39$\pm$0.15 & 389$\pm$125 & $\dots$ & $\dots$ & 2.40$\pm$0.02 & 595$\pm$7 & 33.1$\pm$1.15 & 538$\pm$21 & 1.39 & yes \\
0914-52721-0233 & 2.84$\pm$0.13 & 183$\pm$7 & $\dots$ & $\dots$ & 2.94$\pm$0.17 & 143$\pm$7 & $\dots$ & $\dots$ & 0.58 & no \\
0917-52400-0260 & 1.22$\pm$0.30 & 14$\pm$7 & 2.87$\pm$0.10 & 145$\pm$9 & 2.72$\pm$0.04 & 225$\pm$4 & 15.5$\pm$1.95 & 63$\pm$10 & 1.39 & yes \\
0929-52581-0196 & 2.74$\pm$0.40 & 73$\pm$12 & $\dots$ & $\dots$ & 3.18$\pm$0.62 & 42$\pm$13 & 11.2$\pm$3.38 & 78$\pm$15 & 0.75 & yes \\
0931-52619-0094 & 3.75$\pm$0.12 & 189$\pm$5 & $\dots$ & $\dots$ & 3.32$\pm$0.11 & 177$\pm$5 & $\dots$ & $\dots$ & 0.86 & no \\
0938-52708-0404 & 4.13$\pm$0.07 & 435$\pm$9 & $\dots$ & $\dots$ & 4.12$\pm$0.07 & 407$\pm$6 & $\dots$ & $\dots$ & 0.92 & no \\
0979-52427-0095 & 4.09$\pm$0.05 & 461$\pm$6 & $\dots$ & $\dots$ & 2.96$\pm$0.03 & 521$\pm$5 & $\dots$ & $\dots$ & 2.52 & no \\
0983-52443-0594 & 4.50$\pm$0.15 & 186$\pm$5 & $\dots$ & $\dots$ & 4.11$\pm$0.16 & 182$\pm$11 & 1.40$\pm$0.58 & $\dots$ & 1.18 & no \\
0984-52442-0348 & 4.11$\pm$0.12 & 212$\pm$5 & $\dots$ & $\dots$ & 1.95$\pm$0.22 & 76$\pm$13 & 2.29$\pm$0.27 & 124$\pm$15 & 1.07 & no \\
1001-52670-0389 & 3.33$\pm$0.10 & 473$\pm$14 & $\dots$ & $\dots$ & 3.00$\pm$0.11 & 382$\pm$13 & $\dots$ & $\dots$ & 0.42 & no \\
1036-52582-0135 & 1.67$\pm$0.08 & 30$\pm$1 & $\dots$ & $\dots$ & 1.63$\pm$0.04 & 71$\pm$1 & $\dots$ & $\dots$ & 1.05 & no \\
1036-52582-0389 & 3.15$\pm$0.48 & 32$\pm$4 & $\dots$ & $\dots$ & 3.37$\pm$0.43 & 27$\pm$3 & $\dots$ & $\dots$ & 1.32 & no \\
1052-52466-0280 & 2.77$\pm$0.07 & 850$\pm$22 & $\dots$ & $\dots$ & 2.78$\pm$0.10 & 530$\pm$19 & $\dots$ & $\dots$ & 0.57 & no \\
1065-52586-0237 & 4.17$\pm$0.16 & 145$\pm$5 & $\dots$ & $\dots$ & 4.49$\pm$0.16 & 176$\pm$5 & $\dots$ & $\dots$ & 0.91 & no \\
1198-52669-0150 & 2.33$\pm$0.55 & 90$\pm$3 & $\dots$ & $\dots$ & 2.07$\pm$0.07 & 99$\pm$3 & $\dots$ & $\dots$ & 0.94 & no \\
1203-52669-0502 & 3.98$\pm$0.20 & 131$\pm$6 & $\dots$ & $\dots$ & 6.02$\pm$1.32 & 69$\pm$32 & $\dots$ & $\dots$ & 0.98 & no \\
1204-52669-0281 & 2.55$\pm$0.11 & 105$\pm$5 & $\dots$ & $\dots$ & 2.50$\pm$0.19 & 77$\pm$7 & $\dots$ & $\dots$ & 0.96 & no \\
1204-52669-0524 & 2.93$\pm$0.10 & 544$\pm$17 & $\dots$ & $\dots$ & 2.24$\pm$0.12 & 287$\pm$14 & $\dots$ & $\dots$ & 0.50 & no \\
1208-52672-0519 & 3.39$\pm$0.10 & 309$\pm$10 & $\dots$ & $\dots$ & 2.75$\pm$0.09 & 273$\pm$10 & 21.3$\pm$1.48 & 283$\pm$25 & 0.85 & yes \\
1237-52762-0623 & 3.61$\pm$0.19 & 354$\pm$46 & $\dots$ & $\dots$ & 3.01$\pm$0.17 & 377$\pm$51 & $\dots$ & $\dots$ & 1.26 & no \\
1239-52760-0047 & 2.20$\pm$0.20 & 53$\pm$5 & $\dots$ & $\dots$ & 2.00$\pm$0.27 & 38$\pm$5 & 23.9$\pm$1.41 & 293$\pm$18 & 0.88 & yes \\
\hline
\end{tabular}
\end{center}
\end{table*}
\setcounter{table}{1}
\begin{table*}
\begin{center}
\caption{--to be continued.}
\begin{tabular}{ccccccccccc}
\hline\hline
pmf & \multicolumn{2}{c}{first [N~{\sc ii}]} & \multicolumn{2}{c}{second [N~{\sc ii}]} & \multicolumn{2}{c}{first H$\alpha$} &
\multicolumn{2}{c}{second H$\alpha$} & $\chi^2$ & broad?\\
& $\sigma$ & flux & $\sigma$ & flux & $\sigma$ & flux & $\sigma$ & flux & & \\
\hline
1264-52707-0102 & 2.73$\pm$0.10 & 143$\pm$5 & $\dots$ & $\dots$ & 2.63$\pm$0.05 & 327$\pm$6 & $\dots$ & $\dots$ & 1.03 & no \\
1265-52705-0448 & 2.76$\pm$0.18 & 71$\pm$4 & $\dots$ & $\dots$ & 2.93$\pm$0.33 & 44$\pm$6 & $\dots$ & $\dots$ & 0.98 & no \\
1275-52996-0568 & 4.80$\pm$0.21 & 156$\pm$6 & $\dots$ & $\dots$ & 2.16$\pm$0.38 & 43$\pm$12 & 3.51$\pm$0.30 & 151$\pm$13 & 0.90 & no \\
1294-52753-0109 & 1.87$\pm$0.22 & 68$\pm$18 & $\dots$ & $\dots$ & 1.90$\pm$0.06 & 131$\pm$4 & $\dots$ & $\dots$ & 0.88 & no \\
1305-52757-0458 & 3.65$\pm$0.25 & 144$\pm$9 & $\dots$ & $\dots$ & 2.72$\pm$0.17 & 143$\pm$8 & $\dots$ & $\dots$ & 0.50 & no \\
1307-52999-0065 & 3.67$\pm$0.21 & 117$\pm$18 & $\dots$ & $\dots$ & 2.95$\pm$0.35 & 138$\pm$27 & $\dots$ & $\dots$ & 0.94 & no \\
1339-52767-0304 & 3.46$\pm$0.25 & 61$\pm$4 & $\dots$ & $\dots$ & 3.98$\pm$0.88 & 43$\pm$10 & $\dots$ & $\dots$ & 1.50 & no \\
1341-52786-0159 & 2.97$\pm$0.27 & 89$\pm$13 & 9.16$\pm$1.32 & 87$\pm$15 & 4.23$\pm$0.26 & 102$\pm$6 & $\dots$ & $\dots$ & 0.83 & no \\
1353-53083-0483 & 3.94$\pm$0.19 & 110$\pm$5 & $\dots$ & $\dots$ & 2.01$\pm$0.27 & 71$\pm$24 & 2.95$\pm$0.43 & 117$\pm$25 & 0.90 & no \\
1363-53053-0109 & 1.89$\pm$0.07 & 178$\pm$8 & 8.47$\pm$1.93 & 60$\pm$9 & 1.51$\pm$0.08 & 225$\pm$32 & 2.98$\pm$0.60 & 67$\pm$32 & 0.95 & no \\
1365-53062-0057 & 2.36$\pm$0.08 & 148$\pm$5 & $\dots$ & $\dots$ & 2.62$\pm$0.10 & 159$\pm$6 & 51.7$\pm$2.01 & 711$\pm$28 & 1.33 & yes \\
1368-53084-0298 & 3.89$\pm$0.41 & 83$\pm$11 & $\dots$ & $\dots$ & 9.04$\pm$2.31 & 75$\pm$15 & $\dots$ & $\dots$ & 1.55 & no \\
1370-53090-0201 & 1.85$\pm$0.56 & 113$\pm$3 & $\dots$ & $\dots$ & 2.08$\pm$0.55 & 141$\pm$13 & $\dots$ & $\dots$ & 0.80 & no \\
1382-53115-0193 & 4.05$\pm$0.04 & 4012$\pm$44 & $\dots$ & $\dots$ & 4.23$\pm$0.06 & 2905$\pm$42 & $\dots$ & $\dots$ & 0.56 & no \\
1385-53108-0192 & 1.96$\pm$0.23 & 34$\pm$3 & $\dots$ & $\dots$ & 1.40$\pm$0.31 & 25$\pm$4 & 64.2$\pm$6.22 & 279$\pm$34 & 1.27 & yes \\
1567-53172-0431 & 1.72$\pm$0.36 & 48$\pm$4 & $\dots$ & $\dots$ & 1.73$\pm$0.35 & 18$\pm$3 & $\dots$ & $\dots$ & 0.80 & no \\
1591-52976-0432 & 2.57$\pm$0.16 & 101$\pm$7 & $\dots$ & $\dots$ & 2.53$\pm$0.10 & 145$\pm$5 & $\dots$ & $\dots$ & 0.57 & no \\
1592-52990-0418 & 2.47$\pm$0.04 & 355$\pm$6 & $\dots$ & $\dots$ & 2.49$\pm$0.03 & 443$\pm$6 & $\dots$ & $\dots$ & 0.62 & no \\
1594-52992-0566 & 3.34$\pm$0.09 & 865$\pm$30 & $\dots$ & $\dots$ & 3.38$\pm$0.10 & 987$\pm$36 & 21.2$\pm$0.67 & 1730$\pm$76 & 1.25 & yes \\
1691-53260-0603 & 3.52$\pm$0.33 & 363$\pm$97 & $\dots$ & $\dots$ & 3.64$\pm$0.78 & 295$\pm$15 & $\dots$ & $\dots$ & 0.50 & no \\
1692-53473-0151 & 2.94$\pm$0.03 & 400$\pm$4 & $\dots$ & $\dots$ & 2.90$\pm$0.03 & 507$\pm$5 & $\dots$ & $\dots$ & 1.19 & no \\
1722-53852-0289 & 3.16$\pm$0.34 & 91$\pm$20 & 8.54$\pm$1.58 & 101$\pm$19 & 3.69$\pm$0.30 & 75$\pm$7 & $\dots$ & $\dots$ & 1.52 & no \\
1722-53852-0625 & 3.20$\pm$0.04 & 287$\pm$4 & $\dots$ & $\dots$ & 2.62$\pm$0.03 & 303$\pm$4 & $\dots$ & $\dots$ & 1.94 & no \\
1732-53501-0363 & 2.71$\pm$0.50 & 19$\pm$3 & $\dots$ & $\dots$ & 1.87$\pm$0.52 & 12$\pm$3 & $\dots$ & $\dots$ & 1.04 & no \\
1752-53379-0416 & 3.76$\pm$0.29 & 131$\pm$16 & $\dots$ & $\dots$ & 3.46$\pm$0.23 & 121$\pm$7 & $\dots$ & $\dots$ & 0.54 & no \\
1754-53385-0485 & 2.28$\pm$0.05 & 382$\pm$18 & 5.53$\pm$0.98 & 64$\pm$17 & 2.48$\pm$0.02 & 592$\pm$6 & $\dots$ & $\dots$ & 1.05 & no \\
1758-53084-0178 & 3.45$\pm$0.15 & 176$\pm$13 & $\dots$ & $\dots$ & 1.73$\pm$0.19 & 85$\pm$29 & 2.34$\pm$0.29 & 143$\pm$29 & 1.20 & no \\
1775-53847-0226 & 4.20$\pm$0.20 & 197$\pm$8 & $\dots$ & $\dots$ & 4.48$\pm$0.19 & 248$\pm$9 & $\dots$ & $\dots$ & 0.72 & no \\
1794-54504-0415 & 3.19$\pm$0.32 & 37$\pm$3 & $\dots$ & $\dots$ & 3.15$\pm$0.23 & 52$\pm$3 & $\dots$ & $\dots$ & 0.92 & no \\
1821-53167-0528 & 3.20$\pm$0.08 & 175$\pm$4 & $\dots$ & $\dots$ & 2.65$\pm$0.04 & 263$\pm$4 & $\dots$ & $\dots$ & 1.87 & no \\
1827-53531-0008 & 2.44$\pm$0.03 & 606$\pm$10 & $\dots$ & $\dots$ & 2.19$\pm$0.02 & 1025$\pm$13 & 17.1$\pm$0.62 & 617$\pm$24 & 2.06 & yes \\
1829-53494-0062 & 4.05$\pm$0.35 & 101$\pm$22 & $\dots$ & $\dots$ & 2.26$\pm$0.44 & 60$\pm$26 & 4.68$\pm$0.72 & 76$\pm$27 & 1.50 & no \\
1833-54561-0586 & 3.63$\pm$0.35 & 83$\pm$7 & $\dots$ & $\dots$ & 3.43$\pm$0.80 & 34$\pm$7 & $\dots$ & $\dots$ & 0.65 & no \\
1842-53501-0073 & 2.50$\pm$0.12 & 134$\pm$6 & $\dots$ & $\dots$ & 2.97$\pm$0.20 & 105$\pm$6 & $\dots$ & $\dots$ & 0.64 & no \\
1854-53566-0011 & 3.03$\pm$0.63 & 59$\pm$17 & $\dots$ & $\dots$ & 3.88$\pm$0.31 & 76$\pm$5 & $\dots$ & $\dots$ & 1.02 & no \\
1864-53313-0466 & 4.16$\pm$0.32 & 95$\pm$9 & $\dots$ & $\dots$ & 3.40$\pm$0.40 & 57$\pm$9 & 18.5$\pm$3.63 & 68$\pm$23 & 0.58 & yes \\
1865-53312-0129 & 2.02$\pm$0.11 & 61$\pm$3 & $\dots$ & $\dots$ & 1.91$\pm$0.10 & 60$\pm$3 & 34.1$\pm$7.28 & 75$\pm$15 & 0.67 & yes \\
1870-53383-0466$^{**}$ & 1.47$\pm$0.01 & 800$\pm$7 & $\dots$ & $\dots$ & 1.56$\pm$0.02 & 283$\pm$4 & $\dots$ & $\dots$ & 2.28 & no \\
1920-53314-0489 & 4.28$\pm$0.04 & 1075$\pm$13 & $\dots$ & $\dots$ & 4.09$\pm$0.03 & 1362$\pm$11 & $\dots$ & $\dots$ & 4.06 & no \\
1927-53321-0261 & 2.40$\pm$0.01 & 1468$\pm$12 & $\dots$ & $\dots$ & 2.31$\pm$0.02 & 1511$\pm$14 & 16.0$\pm$0.66 & 488$\pm$23 & 2.30 & yes \\
1930-53347-0150 & 2.75$\pm$0.43 & 28$\pm$4 & $\dots$ & $\dots$ & 2.25$\pm$0.63 & 20$\pm$2 & $\dots$ & $\dots$ & 0.81 & no \\
1945-53387-0487 & 3.99$\pm$0.19 & 86$\pm$4 & $\dots$ & $\dots$ & 1.61$\pm$0.28 & 47$\pm$16 & 3.35$\pm$0.32 & 128$\pm$21 & 1.02 & no \\
1951-53389-0400 & 4.37$\pm$0.20 & 125$\pm$5 & $\dots$ & $\dots$ & 4.08$\pm$0.44 & 109$\pm$45 & $\dots$ & $\dots$ & 2.86 & no \\
1990-53472-0604 & 4.61$\pm$0.81 & 62$\pm$15 & $\dots$ & $\dots$ & 3.66$\pm$0.30 & 119$\pm$14 & 16.4$\pm$2.72 & 114$\pm$34 & 0.83 & yes \\
2004-53737-0520 & 3.99$\pm$0.08 & 283$\pm$5 & $\dots$ & $\dots$ & 3.82$\pm$0.05 & 389$\pm$5 & $\dots$ & $\dots$ & 1.36 & no \\
2007-53474-0631 & 1.81$\pm$0.38 & 32$\pm$11 & 5.12$\pm$0.34 & 183$\pm$12 & 1.61$\pm$0.19 & 53$\pm$10 & 4.90$\pm$0.60 & 87$\pm$11 & 1.10 & no \\
2025-53431-0173 & 3.59$\pm$0.17 & 151$\pm$7 & $\dots$ & $\dots$ & 2.88$\pm$0.84 & 154$\pm$16 & $\dots$ & $\dots$ & 0.52 & no \\
\hline
\end{tabular}
\end{center}
\end{table*}
\setcounter{table}{1}
\begin{table*}
\begin{center}
\caption{--to be continued.}
\begin{tabular}{ccccccccccc}
\hline\hline
pmf & \multicolumn{2}{c}{first [N~{\sc ii}]} & \multicolumn{2}{c}{second [N~{\sc ii}]} & \multicolumn{2}{c}{first H$\alpha$} &
\multicolumn{2}{c}{second H$\alpha$} & $\chi^2$ & broad?\\
& $\sigma$ & flux & $\sigma$ & flux & $\sigma$ & flux & $\sigma$ & flux & & \\
\hline
2090-53463-0487 & 5.13$\pm$0.21 & 263$\pm$12 & $\dots$ & $\dots$ & 3.54$\pm$0.23 & 102$\pm$8 & $\dots$ & $\dots$ & 0.85 & no \\
2139-53878-0330 & 1.58$\pm$0.21 & 391$\pm$149 & 2.85$\pm$0.09 & 1488$\pm$147 & 2.51$\pm$0.04 & 1278$\pm$24 & 16.9$\pm$0.62 & 1186$\pm$58 & 0.58 & yes \\
2168-53886-0218 & 3.21$\pm$0.38 & 86$\pm$21 & $\dots$ & $\dots$ & 2.87$\pm$0.17 & 80$\pm$4 & $\dots$ & $\dots$ & 0.86 & no \\
2196-53534-0565 & 2.87$\pm$0.07 & 256$\pm$6 & $\dots$ & $\dots$ & 2.64$\pm$0.06 & 294$\pm$6 & $\dots$ & $\dots$ & 0.52 & no \\
2227-53820-0518 & 3.02$\pm$0.39 & 212$\pm$84 & $\dots$ & $\dots$ & 2.12$\pm$0.28 & 208$\pm$78 & $\dots$ & $\dots$ & 0.35 & no \\
2268-53682-0136 & 2.15$\pm$0.60 & 11$\pm$2 & $\dots$ & $\dots$ & 2.55$\pm$0.66 & 23$\pm$7 & $\dots$ & $\dots$ & 0.89 & no \\
2269-53711-0508 & 2.90$\pm$0.24 & 43$\pm$3 & $\dots$ & $\dots$ & 2.81$\pm$0.34 & 28$\pm$3 & $\dots$ & $\dots$ & 1.16 & no \\
2270-53714-0145 & 4.25$\pm$0.22 & 80$\pm$3 & $\dots$ & $\dots$ & 3.81$\pm$0.17 & 91$\pm$3 & $\dots$ & $\dots$ & 0.93 & no \\
2271-53726-0055 & 3.17$\pm$0.40 & 113$\pm$37 & $\dots$ & $\dots$ & 3.70$\pm$0.12 & 175$\pm$8 & $\dots$ & $\dots$ & 0.98 & no \\
2278-53711-0043 & 2.11$\pm$0.06 & 244$\pm$11 & 6.56$\pm$0.55 & 142$\pm$10 & 1.79$\pm$0.07 & 298$\pm$24 & 4.09$\pm$0.21 & 258$\pm$23 & 0.70 & no \\
2282-53683-0335 & 2.28$\pm$0.31 & 65$\pm$13 & 2.36$\pm$0.38 & 56$\pm$14 & 3.52$\pm$0.14 & 110$\pm$4 & $\dots$ & $\dots$ & 1.30 & no \\
2288-53699-0015 & 2.13$\pm$0.03 & 425$\pm$6 & $\dots$ & $\dots$ & 2.18$\pm$0.04 & 285$\pm$5 & $\dots$ & $\dots$ & 1.07 & no \\
2295-53734-0631 & 3.74$\pm$0.24 & 65$\pm$3 & $\dots$ & $\dots$ & 3.14$\pm$0.16 & 76$\pm$3 & $\dots$ & $\dots$ & 0.98 & no \\
2341-53738-0154 & 2.11$\pm$0.09 & 113$\pm$7 & 6.85$\pm$1.62 & 36$\pm$8 & 1.93$\pm$0.14 & 88$\pm$12 & $\dots$ & $\dots$ & 0.77 & no \\
2346-53734-0073 & 3.37$\pm$0.10 & 327$\pm$9 & $\dots$ & $\dots$ & 3.18$\pm$0.18 & 213$\pm$24 & $\dots$ & $\dots$ & 0.98 & no \\
2356-53786-0589 & 3.27$\pm$0.12 & 106$\pm$3 & $\dots$ & $\dots$ & 3.40$\pm$0.14 & 100$\pm$3 & $\dots$ & $\dots$ & 1.21 & no \\
2365-53739-0157 & 2.50$\pm$0.02 & 3085$\pm$32 & $\dots$ & $\dots$ & 2.25$\pm$0.02 & 3751$\pm$38 & $\dots$ & $\dots$ & 0.90 & no \\
2366-53741-0416 & 4.73$\pm$0.28 & 89$\pm$4 & $\dots$ & $\dots$ & 4.51$\pm$0.28 & 84$\pm$4 & $\dots$ & $\dots$ & 0.98 & no \\
2374-53765-0174 & 1.45$\pm$0.35 & 54$\pm$17 & 3.23$\pm$0.07 & 531$\pm$26 & 1.73$\pm$0.24 & 137$\pm$41 & 3.48$\pm$0.07 & 687$\pm$42 & 0.79 & no \\
2375-53770-0517 & 3.08$\pm$0.14 & 135$\pm$6 & $\dots$ & $\dots$ & 2.58$\pm$0.08 & 177$\pm$6 & 36.7$\pm$3.23 & 238$\pm$20 & 0.89 & yes \\
2418-53794-0079 & 2.85$\pm$0.07 & 135$\pm$3 & $\dots$ & $\dots$ & 2.94$\pm$0.05 & 211$\pm$3 & $\dots$ & $\dots$ & 1.75 & no \\
2419-54139-0036 & 3.64$\pm$0.14 & 300$\pm$10 & $\dots$ & $\dots$ & 3.97$\pm$0.19 & 262$\pm$11 & $\dots$ & $\dots$ & 0.69 & no \\
2419-54139-0050 & 3.60$\pm$0.21 & 138$\pm$8 & $\dots$ & $\dots$ & 3.28$\pm$0.39 & 69$\pm$7 & 103.$\pm$8.14 & 2372$\pm$391 & 0.80 & yes \\
2422-54096-0372 & 2.57$\pm$0.12 & 68$\pm$2 & $\dots$ & $\dots$ & 2.11$\pm$0.06 & 108$\pm$3 & $\dots$ & $\dots$ & 0.97 & no \\
2422-54096-0425 & 3.25$\pm$0.85 & 22$\pm$7 & $\dots$ & $\dots$ & 2.85$\pm$0.75 & 36$\pm$12 & $\dots$ & $\dots$ & 0.81 & no \\
2425-54139-0093 & 2.46$\pm$0.22 & 51$\pm$7 & $\dots$ & $\dots$ & 2.22$\pm$0.06 & 99$\pm$2 & $\dots$ & $\dots$ & 1.11 & no \\
2434-53826-0012 & 2.77$\pm$0.55 & 73$\pm$18 & $\dots$ & $\dots$ & 2.48$\pm$0.39 & 58$\pm$16 & 6.46$\pm$1.24 & 74$\pm$16 & 1.32 & no \\
2436-54054-0552 & 4.62$\pm$0.51 & 116$\pm$15 & $\dots$ & $\dots$ & 1.67$\pm$0.63 & 13$\pm$4 & 2.69$\pm$0.20 & 78$\pm$4 & 0.97 & no \\
2485-54176-0587 & 2.39$\pm$0.52 & 38$\pm$16 & $\dots$ & $\dots$ & 1.78$\pm$0.57 & 37$\pm$3 & $\dots$ & $\dots$ & 0.67 & no \\
2489-53857-0561 & 4.10$\pm$0.20 & 93$\pm$4 & $\dots$ & $\dots$ & 3.75$\pm$0.18 & 104$\pm$4 & $\dots$ & $\dots$ & 1.43 & no \\
2526-54582-0319 & 1.89$\pm$0.05 & 161$\pm$5 & 10.9$\pm$0.00 & 239$\pm$11 & 1.90$\pm$0.05 & 222$\pm$9 & 6.90$\pm$0.54 & 167$\pm$17 & 1.67 & no \\
2572-54056-0139 & 2.99$\pm$0.09 & 237$\pm$7 & $\dots$ & $\dots$ & 2.80$\pm$0.13 & 140$\pm$7 & 26.1$\pm$2.63 & 192$\pm$20 & 0.72 & yes \\
2593-54175-0025 & 1.59$\pm$0.27 & 53$\pm$13 & 5.65$\pm$1.99 & 50$\pm$15 & 2.19$\pm$0.11 & 167$\pm$7 & 89.5$\pm$5.91 & 1466$\pm$128 & 1.05 & yes \\
2644-54210-0229 & 1.94$\pm$0.10 & 76$\pm$3 & $\dots$ & $\dots$ & 1.55$\pm$0.48 & 125$\pm$3 & $\dots$ & $\dots$ & 0.81 & no \\
2744-54272-0490 & 2.25$\pm$0.78 & 36$\pm$10 & $\dots$ & $\dots$ & 3.25$\pm$0.44 & 61$\pm$11 & 17.9$\pm$2.58 & 92$\pm$26 & 1.22 & yes \\
2771-54527-0146 & 2.21$\pm$0.26 & 83$\pm$17 & 2.37$\pm$0.20 & 127$\pm$17 & 3.88$\pm$0.06 & 373$\pm$6 & $\dots$ & $\dots$ & 1.74 & no \\
2789-54555-0281 & 4.31$\pm$0.25 & 1101$\pm$191 & $\dots$ & $\dots$ & 4.58$\pm$0.70 & 630$\pm$125 & $\dots$ & $\dots$ & 0.28 & no \\
2790-54555-0124 & 1.83$\pm$0.14 & 49$\pm$3 & $\dots$ & $\dots$ & 1.82$\pm$0.11 & 55$\pm$3 & $\dots$ & $\dots$ & 0.69 & no \\
2886-54498-0629 & 2.83$\pm$1.28 & 72$\pm$21 & $\dots$ & $\dots$ & 2.02$\pm$0.31 & 49$\pm$12 & 2.34$\pm$0.25 & 90$\pm$12 & 0.75 & no \\
2948-54553-0408 & 4.14$\pm$0.24 & 90$\pm$5 & $\dots$ & $\dots$ & 2.18$\pm$0.46 & 30$\pm$12 & 6.32$\pm$1.55 & 49$\pm$11 & 1.08 & no \\
2968-54585-0061 & 3.19$\pm$0.03 & 683$\pm$7 & $\dots$ & $\dots$ & 3.09$\pm$0.03 & 718$\pm$8 & $\dots$ & $\dots$ & 2.67 & no \\
\hline
\end{tabular}
\end{center}
\end{table*}
\section*{Acknowledgements}
Zhang gratefully acknowledges the anonymous referee for constructive comments and suggestions
that have greatly improved the paper. Zhang also gratefully acknowledges the scientific research funds provided by GuangXi University
and the kind grant support from NSFC-12173020. This manuscript has made use of the data from the SDSS projects,
\url{http://www.sdss3.org/}, managed by the Astrophysical Research Consortium for the Participating Institutions
of the SDSS-III Collaborations. The manuscript has made use of the data from the Catalina Sky Survey (CSS)
\url{http://nesssi.cacr.caltech.edu/DataRelease/}, funded by the National Aeronautics and Space
Administration under Grant No. NNG05GF22G issued through the Science Mission Directorate Near-Earth
Objects Observations Program. The paper has made use of the public JAVELIN code
\url{http://www.astronomy.ohio-state.edu/~yingzu/codes.html#javelin} as an approach to reverberation
mapping that computes the lags between the AGN continuum and emission line light curves and their
statistical confidence limits, and the MPFIT package \url{https://pages.physics.wisc.edu/~craigm/idl/cmpfit.html}
to solve the least-squares problem through the Levenberg-Marquardt technique, and the MCMC code
\url{https://emcee.readthedocs.io/en/stable/index.html}.
\iffalse
\section*{Data Availability}
The data underlying this article will be shared on reasonable request to the corresponding author
(\href{mailto:aexueguang@qq.com}{aexueguang@qq.com}).
\fi
|
{
"arxiv_id": "2302.08665",
"language": "en",
"timestamp": "2023-02-20T02:05:51",
"url": "https://arxiv.org/abs/2302.08665",
"yymm": "2302"
} | \section{Introduction}
The last decade has seen the rapid development of the fruitful field of cavity magnonics \cite{soykal2010strong,huebl2013high,tabuchi2015coherent,goryachev2014high,lachance2019hybrid,bhoi2020roadmap,cao2015exchange,boventer2018complex,yu2019prediction,grigoryan2018synchronized,rao2021interferometric,li2022coherent}. Because of its strong potential in quantum information processing and spintronics, this field of research has attracted increasing attention. A cavity magnonic system is typically composed of microwave cavities and bulk magnets \cite{huebl2013high,tabuchi2015coherent}. Strong photon-magnon interactions lead to the formation of quasi-particles named ``cavity magnon polaritons'' (CMPs). These quasi-particles acquire complementary properties from cavity photons and magnons, and thus are highly tunable and have long coherence\cite{zhang2015magnon,Tobar2014,yu2020}, diverse non-linearities\cite{BSCMP,TSCMP,Paul2018}, and excellent compatibility\cite{NK2016,xu2021,AFM2021}. By exploiting these merits, numerous techniques and applications have been developed, such as long time memories \cite{zhang2015magnon,shen2021long}, polaritonic logic gates \cite{rao2019analogue}, magnon-phonon entanglement \cite{li2018magnon}, spin current manipulation \cite{bai2017cavity}, and information processing at the quantum limit \cite{lachance2020entanglement,tabuchi2015coherent,tabuchi2014hybridizing}. All these applications rely on an effective access to manipulate the photon-magnon coupling, which is usually implemented by delicately designing the microwave cavities.
\begin{figure} [h!]
\begin{center}
\epsfig{file=Figure1.pdf,width=8.4cm} \caption{(a) The upper panel shows a schematic diagram of a cavity magnonic device, which simply consists of a CPW cavity and a YIG sphere. Two tones (i.e., pump and probe) drive and detect the resonances in this device. The lower panel shows a schematic drawing of the PIM-CMP hybridization. (b) Transmission spectrum of the empty cavity. (c), (d) Transmission spectra of the device without and with the pump. During these two measurements, the Walker mode matches the cavity mode (i.e., $\omega_w=\omega_c$). The pump is set at $\omega_d/2\pi=\omega_c/2\pi=3.34$ GHz and $P_d=10$ dBm. The green strip in panel (d) indicates the pump. The black curves in panels (b)-(d) are the results of calculations using Eq. (\ref{Trans}).}\label{fig1}
\end{center}
\end{figure}
A recent work has experimentally demonstrated that a ferrimagnet driven by a strong microwave can exhibit non-trivial properties, in particular the emergence of the pump-induced magnon mode (PIM) \cite{RaoPhysRevLett}. The formation of this peculiar magnon mode is attributed to the cooperative precession of unsaturated spins in the ferrimagnet under a microwave pump. The inherent nature of the PIM makes it highly tunable by the microwave pump. Such a property is not possessed by any other magnon mode in magnets, including the magnetostatic modes and Damon-Eshbach modes. If we incorporate a PIM into cavity magnonics, one straightforward consequence is that the cavity magnonic system will acquire an additional degree of manipulation inherited from the PIM and hence become tunable by the microwave pump. Compared with conventional cavity magnonic devices, which depend strongly on the electromagnetic properties of the microwave cavities, a tuning technique that merely requires adjusting the power or frequency of a microwave pump can greatly simplify the design of cavity magnonic devices. Moreover, the PIM can strongly couple to normal magnon modes in magnets via spin-spin interactions. This effect makes possible the hybridization of the PIM and CMPs, and may hence produce new quasi-particles in cavity magnonics. Combined with the flexibility of the PIM dynamics, such a quasi-particle would offer high-dimensional tunability of the cavity magnonic system.
In this work, we introduce a PIM into cavity magnonics and investigate the strong interaction between the PIM and CMPs. We demonstrate a remarkable tunability of the cavity magnonic system by a microwave pump, which has never been reported before. Such a result may inspire future designs of cavity magnonic devices. The magnon modes commonly used to produce CMPs are the Walker modes \cite{walker1957magnetostatic,dillon1957ferrimagnetic,gloppe2019resonant}, i.e., a kind of magnetostatic mode in bulk magnets. Their effective spin number is a constant determined by the saturated magnetization and volume of the bulk magnet, while their mode frequency is governed by the well-known Kittel formula as a function of the effective magnetic field felt by the Walker mode. By contrast, the PIM differs in that its effective spin number and mode frequency are tunable via the pump. As a consequence, the PIM-CMP coupling strength $g_d$ approximately follows the fourth root of the pump power (i.e., $g_d\propto P_d^{1/4}$). It can also be controlled via the detuning between the pump and the cavity mode, because this detuning regulates the effective field intensity on the intra-cavity yttrium iron garnet (YIG) sphere. These results demonstrate for the first time the control of a cavity magnonic system with a microwave pump. This technique enriches the tunability of cavity magnonic systems and is useful for magnon-based information processing. Furthermore, the PIM-CMP hybridization produces an anti-crossing behaviour of the system's eigenmodes. Such a signature is usually regarded as the sign of the generation of quasi-particles in solids. Given this, we deduce that another type of quasi-particle consisting of multiple magnons and photons is produced during the PIM-CMP hybridization, similar to the formation of trion-polaritons in semiconductors \cite{mak2013tightly,dhara2018anomalous}. A better understanding of their nontrivial properties will be beneficial for the development of coherent/quantum information processing based on cavity magnonics.
\begin{figure} [h!]
\begin{center}
\epsfig{file=Figure2.pdf,width=8.4cm} \caption{Transmission spectra of our device measured at different magnetic fields (a) without and (b) with the pump, which is set at $\omega_d=\omega_c$ and $P_d=10$ dBm. Black dashed lines are eigenfrequencies calculated from the Hamiltonian in two cases. The anticrossing at 100 mT arises from the coupling between another Walker mode and the cavity mode. (c) Three transmission spectra measured at $\mu_0H_{ext}=$111.7, 113.9 and 116.1 mT [indicated by white arrows in panel (b)] with the pump on. Black solid lines are transmission spectra calculated by using Eq. (\ref{Trans}).}\label{fig2}
\end{center}
\end{figure}
\section{Experiment}
Figure\,\ref{fig1}(a) shows a schematic picture of the simplest cavity magnonic device, which consists of a coplanar waveguide (CPW) cavity and a polished YIG sphere of diameter 1 mm. The CPW cavity is fabricated on a $22\times50$~mm$^2$ RO4350b board with a resonant frequency of $\omega_c/2\pi=3.34$ GHz. Its external and intrinsic damping rates fitted from the transmission spectrum [Fig.\,\ref{fig1}(b)] are $\kappa/2\pi=2.9$ MHz and $\beta/2\pi=16.8$ MHz, respectively. A strong microwave pump generated by a microwave generator is used to drive the system; it is a single-frequency signal at the frequency $\omega_d$. A weak probe is emitted and collected by a vector network analyser; its power is fixed at $-25$ dBm throughout this work and its frequency ($\omega_p$) is swept over a wide range for the transmission measurement.
An external magnetic field $H_{ext}$ parallel to the plane of the CPW is used to tune the Walker modes ($\omega_w$) in the YIG sphere. When a Walker mode, i.e., the (2,2,0) mode in this work, matches the cavity mode (i.e., $\omega_w=\omega_c$), the strong photon-magnon coupling produces CMPs. The transmission spectrum [Fig.\,\ref{fig1}(c)] shows two resonance peaks with a Rabi gap, corresponding to the two CMP eigenstates. We then turn on the pump and set its frequency to $\omega_d=\omega_c$ and its power to $P_d=10$ dBm. The strong pump drives the unsaturated spins in the YIG sphere via the cavity. The collective precession of these unsaturated spins forms a spin wave at the frequency $\omega_d$, namely the PIM \cite{RaoPhysRevLett}. The PIM has far fewer spins than the Walker modes, so it couples negligibly with the cavity mode. However, the PIM can strongly couple to the Walker mode via the spin-spin interaction, which is orders of magnitude stronger than the magnetic dipole-dipole interaction between a magnet and the rf field. The PIM-CMP hybridization process is depicted in Fig.\,\ref{fig1}(a) as a three-mode coupling. The transmission spectrum in Fig.\,\ref{fig1}(d) shows three resonant peaks instead of the former Rabi splitting, representing the three eigenstates of the PIM-CMP hybridization.
\begin{figure} [h]
\begin{center}
\epsfig{file=Figure3.pdf,width=8.4cm} \caption{(a) Transmission spectra measured at $P_d=$ -10, 0, 10, and 19 dBm with $\omega_w=\omega_d=\omega_c$. (b) Frequencies of the three hybridized modes extracted from transmission spectra measured at different pump powers. (c) The two coupling strengths $g$ and $g_d$ fitted from the transmission spectra shown in (a). The red solid line is calculated using Eq. (\ref{CS0}). (d) Half coupling gaps [i.e., $(\omega_+-\omega_-)/2$] measured at different pump powers. The red solid line is $\sqrt{g_d^2+g^2}/2\pi$ calculated using Eq. (\ref{CS0}).}\label{fig3}
\end{center}
\end{figure}
To quantitatively understand the PIM-CMP hybridization, we construct a Hamiltonian model. We focus our research on the coupling effects in our system and thus disregard the self- and cross-Kerr effects \cite{wang2018bistability,wu2021observation,anderson1955instability}. The Hamiltonian is
\begin{eqnarray}
\mathcal{H}/\hbar&=&\tilde{\omega}_w\hat{a}^\dag\hat{a}+\tilde{\omega}_d\hat{b}^\dag\hat{b}+\tilde{\omega}_c\hat{c}^\dag\hat{c}+g_d(\hat{a}^\dag\hat{b}+\hat{a}\hat{b}^\dag)+\nonumber \\
&g&(\hat{a}^\dag\hat{c} +\hat{a}\hat{c}^\dag)+[i\sqrt{\kappa}(A_pe^{-i\omega_pt}+A_de^{-i\omega_dt})\hat{c}^\dag \nonumber\\
&+&\rm H.C.],
\label{Hamiltonian}
\end{eqnarray}
where $\hat{a}^\dag$ ($\hat{a}$), $\hat{b}^\dag$ ($\hat{b}$) and $\hat{c}^\dag$ ($\hat{c}$) represent the creation (annihilation) operators of the Walker mode, the PIM and the cavity photon mode, respectively. $\tilde{\omega}_w=\omega_w-i\alpha$, $\tilde{\omega}_d=\omega_d-i\xi$ and $\tilde{\omega}_c=\omega_c-i(\beta+\kappa)$ are their complex frequencies, where $\alpha/2\pi=0.8$ MHz and $\xi/2\pi=0.4$ MHz are the damping rates of the Walker mode and of the PIM, respectively. $A_d$ and $A_p$ are the pump and probe amplitudes, respectively; they scale as the square root of the corresponding microwave power, e.g., $A_d\propto\sqrt{P_d}$. $g/2\pi=24$ MHz is the coupling strength between the cavity and the Walker mode, which is larger than the damping rates of all subsystems. $g_d$ is the coupling strength between the Walker mode and the PIM and has the form $g_d=g_0\sqrt{N_p}$, where $g_0$ is a constant and $N_p$ is the effective spin number of the PIM \cite{RaoPhysRevLett,tabuchi2014hybridizing}. Because the PIM is induced by the pump, its spin number should be equal to the number of magnons excited by the pump, i.e., $N_p=\langle\hat{b}^\dag\hat{b}\rangle$.
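The hybridized eigenfrequencies of this three-mode system follow from diagonalizing the $3\times3$ non-Hermitian coupled-mode matrix obtained from the coherent part of the Hamiltonian. The following minimal Python/NumPy sketch (not part of the original analysis) illustrates that calculation; it uses the parameter values quoted in the text and, for simplicity, treats $g_d$ as a fixed number of about 10 MHz, which is only an approximation since $g_d$ itself depends on the pump.
\begin{verbatim}
import numpy as np

# Work with f = omega/2pi throughout; the 2pi factors cancel in this
# linear eigenvalue problem.  Values quoted in the text (Hz).
fc    = 3.34e9            # cavity frequency
kappa = 2.9e6             # external cavity damping
beta  = 16.8e6            # intrinsic cavity damping
alpha = 0.8e6             # Walker-mode damping
xi    = 0.4e6             # PIM damping
g     = 24e6              # cavity-Walker coupling
gd    = 9.8e6             # Walker-PIM coupling (assumed fixed here)
fd    = fc                # pump frequency on cavity resonance

def hybridized_modes(fw):
    """Complex eigenvalues of the Walker/PIM/cavity coupled-mode matrix."""
    M = np.array([[fw - 1j*alpha, gd,          g                    ],
                  [gd,            fd - 1j*xi,  0.0                  ],
                  [g,             0.0,         fc - 1j*(kappa+beta) ]])
    return np.linalg.eigvals(M)

# sweep the Walker mode through the cavity resonance
for fw in fc + np.array([-40e6, -20e6, 0.0, 20e6, 40e6]):
    freqs = np.sort(hybridized_modes(fw).real)
    print((fw - fc)/1e6, np.round((freqs - fc)/1e6, 2))
\end{verbatim}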
Under the two-tone driving, our system oscillates in a superposition of a steady oscillation driven by the strong pump and fluctuations induced by the weak probe. We assume that $\hat{a}=[A+\delta u(t)]e^{-i\omega_dt}$, $\hat{b}=[B+\delta v(t)]e^{-i\omega_dt}$ and $\hat{c}=[C+\delta\rho(t)]e^{-i\omega_dt}$, where $A$, $B$ and $C$ respectively represent the steady oscillating amplitudes of the three modes at the frequency $\omega_d$, and $\delta u(t)$, $\delta v(t)$ and $\delta\rho(t)$ are their fluctuations induced by the probe. These fluctuations can be derived from Eq. (\ref{Hamiltonian}) (see Appendix A). By using the input-output relation $S_{21}=\sqrt{\kappa}\delta\rho(t)/A_p$, we obtain the transmission spectrum of our device:
\begin{equation}
S_{21}=-\frac{\kappa}{i(\omega_p-\omega_c)-\kappa'+\frac{g^2}{i(\omega_p-\omega_w)-\alpha+\frac{g_d^2}{i(\omega_p-\omega_d)-\xi}}}
\label{Trans}
\end{equation}
where $\kappa'=\kappa+\beta$. This expression reproduces the measured transmission spectra well, and it has three maxima, corresponding to the system's eigenfrequencies. When the pump is turned off, the PIM disappears, and our system reduces to a normal cavity magnonic system with two CMP eigenfrequencies.
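As a simple cross-check of Eq. (\ref{Trans}), the following Python sketch (again not part of the original fitting procedure) evaluates $S_{21}$ at the triple-resonance condition $\omega_w=\omega_d=\omega_c$, using the parameters quoted above and a representative $g_d/2\pi$ of 9.8 MHz; the three maxima then sit near $\omega_c$ and $\omega_c\pm\sqrt{g^2+g_d^2}$.
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

# Parameters from the text, in Hz (f = omega/2pi); gd is a representative value.
fc, kappa, beta = 3.34e9, 2.9e6, 16.8e6
alpha, xi       = 0.8e6, 0.4e6
g, gd           = 24e6, 9.8e6
fw = fd = fc                      # triple resonance, pump on
kp = kappa + beta                 # kappa' = kappa + beta

def S21(fp):
    pim    = gd**2 / (1j*(fp - fd) - xi)
    walker = g**2  / (1j*(fp - fw) - alpha + pim)
    return -kappa  / (1j*(fp - fc) - kp + walker)

fp = fc + np.linspace(-80e6, 80e6, 4001)
peaks, _ = find_peaks(np.abs(S21(fp)))
# expected near 0 and +/- sqrt(g^2 + gd^2) ~ +/- 25.9 MHz
print(np.round((fp[peaks] - fc)/1e6, 1))
\end{verbatim}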
The Walker-PIM coupling strength $g_d$ can also be derived from Eq. (\ref{Hamiltonian}) (see Appendix B). It is determined by a cubic equation with a form of \begin{equation}
{\tilde{\Delta}_c}g_0^2|B|^2B-i\sqrt{\kappa}A_dgg_0|B|-i\xi(g^2-\tilde{\Delta}_c\tilde{\Delta}_w)B=0
\label{CS},
\end{equation}
where $\tilde{\Delta}_w =\tilde{\omega}_w -\omega_d$ and $\tilde{\Delta}_c =\tilde{\omega}_c -\omega_d$ are the complex detunings of the Walker mode and the cavity mode with respect to the pump. If we set $\omega_c=\omega_w=\omega_d$, the Walker-PIM coupling strength becomes
\begin{equation}
g_d=\sqrt{\frac{\sqrt{\kappa}gg_0A_d-\xi g^2}{\kappa'}-\alpha\xi}
\label{CS0}.
\end{equation}
We now test the theoretical model from three aspects: the dependence of the PIM-CMP hybridization on the magnetic field $H_{ext}$, the pump power $P_d$ and the pump frequency $\omega_d$. Figure\,\ref{fig2}(a) shows the transmission spectra of our device measured at different $H_{ext}$ without the pump. When the Walker mode is tuned to get close to the cavity mode, a typical anticrossing occurs near 114 mT, indicating the strong photon-magnon coupling and hence the formation of CMPs. We then turn on the pump and set it at $\omega_d=\omega_c$ and $P_d=10$ dBm. A PIM at $\omega_d$ is excited by the pump, producing three hybridized modes from the transmission map [Fig.\,\ref{fig2}(b)]. The black dashed lines in Figs.\,\ref{fig2}(a) and \ref{fig2}(b) are eigenfrequencies of our system in the two cases. Transmission spectra measured at $\mu_0H_{ext}=$111.7, 113.9 and 116.1 mT with the pump are plotted in Fig. \ref{fig2}(c), which can be well reproduced by using Eq. (\ref{Trans}). $g_d/2\pi$ fitted to these curves is $9.8$ MHz.
Unlike the Walker modes, the PIM can be manipulated by the pump. Consequently, the PIM-CMP hybridization inherits this property and becomes tunable by the pump power and frequency. We fix the Walker mode ($\omega_w$) and the pump frequency ($\omega_d$) to the cavity mode, i.e., $\omega_w=\omega_d=\omega_c$, and then tune the pump power and record the transmission spectrum at each power value. Figure\,\ref{fig3}(a) shows four typical spectra measured at $P_d=$-10, 0, 10 and 19 dBm. The PIM starts to appear at around $P_d=0$ dBm. Three hybridized modes then occur at $\omega_c$ and $\omega_\pm=\omega_c\pm\sqrt{g^2+g_d^2}$. Using Eq. (\ref{Trans}) to fit each transmission spectrum, we extract the frequencies of the three hybridized modes and the two coupling strengths $g$ and $g_d$, which are plotted in Fig.\,\ref{fig3}(b) and (c). As the pump power increases, the gap between $\omega_\pm$ increases, whereas $g$ remains essentially constant at 24 MHz. Nevertheless, $g_d$ is approximately proportional to the fourth root of the pump power ($g_d\propto P_d^{1/4}$). The physical origin is that the PIM's spin number is equal to the number of magnons excited by the pump; in other words, the PIM's spin number is altered by the pump power. Such a variation of $g_d$ can be well explained by Eq. (\ref{CS0}), as shown by the red solid line in Fig. \ref{fig3}(c). During the calculation, $g_0$ is a constant $5.6\times10^{-4}$, while $A_d$ is approximately set as $\sqrt{P_d}$. The measured half coupling gaps [i.e., $(\omega_+-\omega_-)/2$] at different pump powers are plotted in Fig.\,\ref{fig3}(d) as purple circles. They can be fitted by $\sqrt{g^2+g_d^2}$, as shown by the red solid line in Fig.\,\ref{fig3}(d). At high powers, the experimental data [circles in Figs.\,\ref{fig3}(c) and \ref{fig3}(d)] gradually deviate from the fitting curves (red solid lines) due to the occurrence of nonlinearities that are not considered in this work.
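The $g_d\propto P_d^{1/4}$ trend can be read off directly from Eq. (\ref{CS0}): once the term $\sqrt{\kappa}gg_0A_d$ dominates, $g_d\approx(\sqrt{\kappa}gg_0A_d/\kappa')^{1/2}\propto A_d^{1/2}\propto P_d^{1/4}$. The short Python sketch below illustrates this asymptotic behaviour; the numbers are purely illustrative (arbitrary consistent units), not the fitted device parameters.
\begin{verbatim}
import numpy as np

# Illustrative numbers only (arbitrary consistent units), chosen to expose
# the asymptotic scaling of Eq. (CS0); they are not the device parameters.
kappa, beta, alpha, xi, g, g0 = 1.0, 5.0, 0.3, 0.01, 8.0, 1.0e-3
kp = kappa + beta

def g_d(P_d):
    A_d = np.sqrt(P_d)                          # pump amplitude ~ sqrt(power)
    val = (np.sqrt(kappa)*g*g0*A_d - xi*g**2)/kp - alpha*xi
    return np.sqrt(np.clip(val, 0.0, None))     # zero below the onset

P = np.logspace(4, 10, 7)
slope = np.polyfit(np.log(P[-3:]), np.log(g_d(P[-3:])), 1)[0]
print(np.round(g_d(P), 3))
print(round(slope, 3))   # tends to 0.25, i.e. g_d ~ P_d**(1/4)
\end{verbatim}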
\begin{figure} [h]
\begin{center}
\epsfig{file=Figure4.pdf,width=8.4cm} \caption{(a) Transmission spectra measured at different pump frequencies. Small coupling gaps appear when the Walker mode matches the pump frequency (i.e., $\omega_w=\omega_d$). Red circles indicate the case of $\omega_w=\omega_d=\omega_c$. (b) Transmission spectra measured at different $H_{ext}$, when setting $\omega_d/2\pi=3.42$ GHz. Black dashed lines are calculated eigenfrequencies from the Hamiltonian. (c) $g_d$ at different pump frequencies extracted from the transmission spectra of panel (a). The red solid line is calculated by using Eq. (\ref{CS}). (d) $g_d$ as a function of $P_d$ and $\omega_d$ and calculated by using Eq. (\ref{CS}). }\label{fig4}
\end{center}
\end{figure}
Another way to control the PIM-CMP hybridization is to change the pump detuning relative to the cavity mode ($\Delta_c=\omega_c-\omega_d$). The PIM strictly follows the pump frequency, making the anti-crossing of the three hybridized modes controllable by the pump frequency. Moreover, the PIM-Walker coupling strength $g_d$ also depends on the pump detuning, because the pump field felt by the YIG sphere, which determines the PIM's spin number and hence its coupling with the Walker modes, is regulated by the cavity. Figure\,\ref{fig4}(a) shows transmission spectra measured at different pump frequencies [see the green lines in Fig.\,\ref{fig4}(a)]. Two resonance peaks with a coupling gap appear when the CMP mode matches the pump frequency [blue circles in Fig.\,\ref{fig4}(a)]. Specifically, we fix the pump frequency at $\omega_d/2\pi=3.42$ GHz and measure the transmission spectra at different $H_{ext}$, as shown in Fig.\,\ref{fig4}(b). The red color in the map indicates the upper branch of the CMP modes. An anticrossing appears when the CMP mode approaches the pump at 3.42 GHz, because of the strong PIM-CMP coupling. Black dashed lines are eigenfrequencies calculated from the Hamiltonian (\ref{Hamiltonian}), which are consistent with the measurement results. The PIM-CMP coupling strength ($g_d$) is tunable by the pump detuning: it becomes larger as the pump frequency approaches the cavity mode. The values of $g_d$ fitted from Fig.\,\ref{fig4}(a) are plotted in Fig.\,\ref{fig4}(c). $g_d$ reaches its maximum when the pump frequency matches the cavity mode and decreases rapidly as the pump detuning increases. Using Eq. (\ref{CS}), we can calculate $g_d$ at different pump detunings, as shown by the red solid line in Fig.\,\ref{fig4}(c). Furthermore, we also calculate the variation of $g_d$ in the parameter space formed by the pump power $P_d$ and the pump frequency $\omega_d$, as shown in Fig.\,\ref{fig4}(d). Increasing the pump power or decreasing the pump detuning enhances the effective pump field applied to the YIG sphere; this increases the PIM's spin number $N_p$ and hence enhances the PIM-CMP coupling strength $g_d$.
\section{Conclusion}
We apply a strong microwave pump to a cavity magnonic system and observe the hybridization between a PIM and CMP modes. This hybrid system inherits intriguing properties from the PIM, allowing it to be controlled by the microwave pump, a feature that is absent in all previous cavity magnonic systems. By using either the pump power or the pump frequency, we can thus tune the PIM-CMP hybridization. In conventional cavity magnonic systems, the photon-magnon coupling strength is determined by three parameters: the spin number of the magnet, the mode volume of the cavity photon mode, and the overlap coefficient between the cavity photon mode and the magnon mode. As a result, the photon-magnon coupling strength is fixed once a cavity magnonic device is fabricated. To adjust the photon-magnon coupling, one must redesign the device or add complicated mechanical structures to change the magnet position in the cavity. By utilizing the PIM-CMP hybridization, we instead obtain a convenient method to control the photon-magnon coupling in situ by modulating the pump. These techniques enrich the toolbox for controlling cavity magnonic systems, which should benefit magnon-based signal processing.
After years of study, the CMP generated by photon-magnon hybridization has become a well-developed technology, and our present work takes a step further. In addition to the formation of CMPs, coherent magnons excited by a pump are strongly coupled with CMPs. From the transmission spectra, we can clearly see the anticrossing behaviour arising from this PIM-CMP hybridization. Normally, such an anticrossing is important experimental evidence for the formation of a new elementary excitation. We thus conjecture that the PIM-CMP hybridization may generate a new elementary excitation that is the combination of two magnons and one photon. This elementary excitation offers a high-dimensional control to realize photon-magnon entanglement for coherent/quantum information technologies. Our current results only uncover the tip of the iceberg of this elementary excitation, and it is expected that more features of non-equilibrium cavity magnonic systems will be discovered.
\acknowledgements
This work has been funded by National Natural Science Foundation of China under Grants Nos.12122413, 11974369, 11991063 and 12204306, STCSM Nos.21JC1406200 and 22JC1403300, the Youth Innovation Promotion Association No. 2020247 and Strategic priority research No. XDB43010200 of CAS, the National Key R\&D Program of China (No. 2022YFA1404603), the SITP Independent Foundation, the Shanghai Pujiang Program (No. 22PJ1410700).
\section{Introduction}
The gauge models for the electroweak and strong forces based on the gauge group $SU(3)_C\times SU(3)_L\times U(1)_X$ (3-3-1)\cite{Singer:1980sw,Pisano:1992bxx,Frampton:1992wt,Foot:1994ym,Montero:1992jk} accommodate two versions, determined by the parameter $\beta$ that appears in the linear combination of the diagonal generators defining the electric charge operator,
\begin{equation}
\frac{Q}{e}=\frac{1}{2}(\lambda_3+\beta \lambda_8)+N.
\end{equation}
The values $\beta$ can take are $-\frac{1}{\sqrt{3}}$ (case A) and $-\sqrt{3}$ (case B). Each case leads to different models concerning theoretical and phenomenological aspects\footnote{It is interesting to stress that both models explain family replication\cite{Ng:1992st} and electric charge quantization\cite{deSousaPires:1998jc}}. Here we address a problem that affects exclusively case B, namely a Landau-like pole that, as shown in previous studies, manifests itself already in the TeV regime\cite{Ng:1992st,Dias:2004dc}. The most popular version of 3-3-1 models related to case B is the minimal 3-3-1 model.
The Landau pole arises in the minimal 3-3-1 model because the couplings $g_X$, associated with $U(1)_X$, and $g_L$, associated with $SU(3)_L$, are related to the Weinberg angle $\theta_W$ in the following way\cite{Ng:1992st,Dias:2004dc}
\begin{equation}
\frac{g_X}{g_L}\sim \frac{\sin^2 \theta_W}{1-4\sin^2 \theta_W}.
\label{coupling-relationI}
\end{equation}
This dangerous relation generates a Landau pole, $g_X(\Lambda) \rightarrow \infty$, when $\sin^2\theta_W(\Lambda) \rightarrow 1/4$. It was shown in Refs. \cite{Ng:1992st,Dias:2004dc} that this happens already at the TeV scale, more precisely in the range $\Lambda \approx 2-6$ TeV, depending on the particle content that contributes to the running of $\sin^2 \theta_W$. The current bound on $Z^{\prime}$ demands that the $SU(3)_L \times U(1)_X$ symmetry breaks spontaneously around 4 TeV\cite{Cao:2016uur}. This means that, in its original version, the minimal 3-3-1 model has practically lost its predictive power.
It is important to stress that the minimal 3-3-1 model does not accommodate neutrino masses\cite{Pires:2014xsa}, has no candidate for dark matter, and faces difficulties in accommodating the recent result for the $g-2$ of the muon\cite{deJesus:2020ngn} and the B anomalies\cite{CarcamoHernandez:2022fvl}. In view of all this, the model needs to be modified in order to address these issues. Thus, it is natural to look for extensions of the model that provide an answer to these points and evade the Landau pole, or shift it to a harmless energy scale. Following this line of thought, it was shown in \cite{Dias:2004wk} that the addition of octets of leptons may evade the Landau pole or shift it to a harmless energy scale.
Here we follow this line of thought and address the issue with the addition of scalar leptoquarks\cite{Pati:1973uk} to the minimal 3-3-1 model. Leptoquarks are a very well motivated proposal of new physics that may manifest at the TeV scale, and they play an important role in flavor physics\cite{Dorsner:2016wpm}. We perform an exhaustive investigation of the impact of scalar leptoquarks on the Landau pole of the model and show that leptoquarks are as efficient as octets of leptons in evading the non-perturbative regime of the minimal 3-3-1 model or shifting it to a harmless energy scale.
\section{Revisiting the problem}
\subsection{The particle content of the model}
In the minimal 3-3-1 model, leptons are arranged in the triplet representation of $SU(3)_L$,
\begin{equation}
f_{aL}= \begin{pmatrix}
\nu_{a_L} \\
\ell_{a_L} \\
\ell^{c}_{a_R} \\
\end{pmatrix} \sim (1,3,0),
\label{lp-rep}
\end{equation}
with $a=1,2,3$ representing the three generations of leptons.
In the hadronic sector, anomaly cancellation requires that one family transforms differently from the other two. This fact allows three possible arrangements for the quark families. Here we choose the third generation to come in the triplet and the other two in the anti-triplet representation of $SU(3)_L$,
\begin{eqnarray}
&&Q_{i_L} = \left (
\begin{array}{c}
d_{i} \\
-u_{i} \\
d^{\prime}_{i}
\end{array}
\right )_L\sim(3\,,\,3^*\,,\,-1/3)\,,u_{iR}\,\sim(3,1,2/3),\,\,\,\nonumber \\
&&\,\,d_{iR}\,\sim(3,1,-1/3)\,,\,\,\,\, d^{\prime}_{iR}\,\sim(3,1,-4/3),\nonumber \\
&&Q_{3L} = \left (
\begin{array}{c}
u_{3} \\
d_{3} \\
u^{\prime}_{3}
\end{array}
\right )_L\sim(3\,,\,3\,,\,2/3),u_{3R}\,\sim(3,1,2/3),\nonumber \\
&&\,\,d_{3R}\,\sim(3,1,-1/3)\,,\,u^{\prime}_{3R}\,\sim(3,1,5/3),
\label{quarks-rep}
\end{eqnarray}
where $i=1,2$. The primed quarks are heavy quarks.
The gauge sector is composed of nine gauge bosons: four of them are the standard ones $A\,,\, W^{\pm}\,,\, Z^0$, and the other five are the typical 3-3-1 gauge bosons $U^{\pm \pm}\,,\,W^{\prime \pm}\,,\,Z^{\prime}$.
The original scalar sector of the model involves three triplets and one sextet of scalars, namely
\begin{eqnarray}
&&\eta = \left (
\begin{array}{c}
\eta^0 \\
\eta^-_1 \\
\eta^{+}_2
\end{array}
\right )\sim \,\,(1\,,\,3\,,\,0),\,\rho = \left (
\begin{array}{c}
\rho^+ \\
\rho^0 \\
\rho^{++}
\end{array}
\right )\sim\,(1\,,\,3\,,\,1)\,\,,\,
\chi = \left (
\begin{array}{c}
\chi^- \\
\chi^{--} \\
\chi^{ 0}
\end{array}
\right )\sim(1\,,\, 3\,,\,-1).\,\nonumber\\
&&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, S=\left(\begin{array}{ccc}
\, \Delta^{0} & \Delta^{-} & \Phi^{+} \\
\newline \\
\Delta^{-} & \, \Delta^{--} & \Phi^{0} \\
\newline \\
\Phi^{+} & \Phi^{0} & \, H_2^{++} \end{array}\right)\sim(1\,,\,6\,,\,0).
\label{scalar-cont}
\end{eqnarray}
When $\chi^0$ develops a vacuum expectation value (VEV) different from zero, $v_\chi$, the $SU(3)_L \times U(1)_X$ symmetry breaks to the $SU(2)_L \times U(1)_Y$ one. When the other neutral scalars develop VEVs different from zero, the standard symmetry is spontaneously broken to the electromagnetic one. Such a scalar content generates masses for all massive particles of the model, except neutrinos.
After symmetry breaking $Z^{\prime}$ acquires the following mass expression\cite{Ng:1992st}
\begin{equation}
M^2_{Z^{\prime}} \approx \frac{g^2\cos^2 \theta_W}{3(1-4\sin^2 \theta_W)} v^2_\chi.
\label{primemass}
\end{equation}
The other terms are proportional to the electroweak scale and can be neglected. The current collider bound on $Z^{\prime}$ imposes $M_{Z^{\prime}}> 5$ TeV, which implies $v_\chi > 4.3$ TeV\cite{Cao:2016uur}. As we show below, the model has a Landau pole that manifests itself at some energy scale $\Lambda$. If $\Lambda < 4.3$ TeV, then the model is not predictive at all.
\subsection{Landau-Pole}
In order to study the behaviour of the Weinberg angle with energy, we first need to know how the gauge couplings run with energy. In general, the one-loop running of the gauge couplings is dictated by the relation
\begin{equation}
\frac{1}{\alpha_i(\Lambda)}=\frac{1}{\alpha_i(\mu)}+\frac{1}{2\pi}b_i\log\left(\frac{\mu}{\Lambda}\right),
\label{run}
\end{equation}
where $\alpha_i=\frac{g^2_i}{4 \pi}$. The renormalization coefficients for a general $SU(N)$ gauge group are given by \begin{equation}
b_i=\frac{2}{3}\sum_{fermions}Tr(F)_i+\frac{1}{3}\sum_{scalars}Tr(S)_i-\frac{11}{3}C_{2}(G)_i.
\label{coef}
\end{equation}
For $SU(N)$ we have $Tr(F)=Tr(S)=1/2$ for fields in the fundamental representation and $C_2(G)=N$, while for $U(1)$ we have $C_2(G)=0$. For an abelian factor $U(1)_y$ we use $\sum Tr(F,S)=\sum y^2$, with $y=\frac{Y}{2}$ in the standard model case and $y=X$ in the 3-3-1 case.
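As a cross-check of Eq. (\ref{coef}), the short sketch below tallies $b_1$ and $b_2$ for the effective two Higgs doublet field content used in the next subsection; it is illustrative only and simply reproduces the values quoted there.
\begin{verbatim}
# Illustrative sketch of Eq. (coef): one-loop coefficients b_1 and b_2 for
# the effective THDM field content (three fermion generations, two Higgs
# doublets), with y = Y/2 the standard-model hypercharge normalization.
from fractions import Fraction as F

# (multiplicity, SU(2) Dynkin index T, hypercharge y) per Weyl multiplet
fermions = [
    (3 * 3, F(1, 2), F(1, 6)),   # quark doublets Q_L (3 colors x 3 generations)
    (3 * 3, 0,       F(2, 3)),   # u_R
    (3 * 3, 0,       F(-1, 3)),  # d_R
    (3,     F(1, 2), F(-1, 2)),  # lepton doublets L_L
    (3,     0,       F(-1)),     # e_R
]
scalars = [(2, F(1, 2), F(1, 2))]  # two Higgs doublets

def b_su2(fields, factor):
    return factor * sum(n * T for (n, T, y) in fields)

def b_u1(fields, factor):
    # for U(1): sum of y^2 over all states; doublets contribute two states
    return factor * sum(n * (2 if T else 1) * y**2 for (n, T, y) in fields)

b2 = b_su2(fermions, F(2, 3)) + b_su2(scalars, F(1, 3)) - F(11, 3) * 2
b1 = b_u1(fermions, F(2, 3)) + b_u1(scalars, F(1, 3))
print(b1, b2)   # expected: 7 and -3
\end{verbatim}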
The running of $\sin^2 \theta_W(\Lambda)$ for $\Lambda < \mu_{331}$, where $\mu_{331}=\langle \chi ^0 \rangle=\frac{v_\chi}{\sqrt{2}}$, is given by
\begin{equation}
\sin^2 \theta_W(\Lambda)=\frac{1}{1+\frac{\alpha_2(\Lambda)}{\alpha_1(\Lambda)}}
\label{angle1}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{fig1-LP-10-02.pdf}
\caption{ In this figure we show the running of the electroweak mixing angle, given by Eq.~(\ref{angle1}) in panel (1a) and by Eq.~(\ref{angle2}) in panels (1b)-(1c). In panel (1d) we show the running of $\alpha_X(\Lambda)$ for $\mu_{331}$ = 4 TeV, assuming all particle content of the model. The behavior of the curves is described in the text.}
\label{fig1}
\end{figure}
To get an idea of the energy scale at which the Landau pole arises, let us first consider the simplest case, in which the scalar sector of the model is composed only of the three triplets of scalars. In this case, upon the $SU(3)_L \times U(1)_X$ symmetry breaking, the scalar sector decouples into an effective two Higgs doublet model (THDM) plus a set of scalars with masses at the 3-3-1 scale. We then consider only the effective THDM, which means that the triplet $\chi$, the singlets $\rho^{++}$ and $\eta_2^+$, and the exotic quarks are not active degrees of freedom. In this case we get
\begin{equation}
b_1=\frac{20}{9}N_F+\frac{1}{6}N_S=7\,\,\,\,\,\mbox{and} \,\,\,b_2=\frac{4}{3}N_F+\frac{1}{6}N_S-\frac{22}{3}=-3,
\label{bpred}
\end{equation}
where $N_F$ is the number of fermion families and $N_S=2$ is the number of scalar doublets. Our results are displayed in FIG. 1.
In this simple case, the running of $\sin^2 \theta_W(\Lambda)$ for $\Lambda < \mu_{331}$ is displayed in Fig.~(1a), see the dot-dashed line in green, where the Landau pole corresponds to $\Lambda \sim 3.5$ TeV. We solve Eqs. (\ref{run}) and (\ref{angle1}) using the following values at the $M_Z$ energy scale: $\sin^2 \theta _W(M_{Z}) = 0.2311$, $\alpha(M_Z) = 1/128$, $M_Z = 91.188$ GeV and $\alpha_1(M_Z) = \alpha_2(M_Z)\tan^2 \theta _W(M_{Z})$.
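The following minimal sketch (not the code used to produce Fig. 1) illustrates how Eqs. (\ref{run}) and (\ref{angle1}) can be iterated with $b_1=7$ and $b_2=-3$ to locate the scale at which $\sin^2\theta_W$ reaches $1/4$; the precise value obtained depends on the inputs and thresholds assumed, so it should be taken as indicative only.
\begin{verbatim}
# Minimal sketch (not the code used for Fig. 1): run Eqs. (run) and (angle1)
# with b1 = 7, b2 = -3 and locate the scale where sin^2(theta_W) -> 1/4.
import numpy as np

sin2_MZ  = 0.2311
alpha_em = 1.0 / 128.0
MZ       = 91.188                         # GeV
b1, b2   = 7.0, -3.0

alpha2_MZ = alpha_em / sin2_MZ            # from e = g_2 sin(theta_W)
alpha1_MZ = alpha2_MZ * sin2_MZ / (1.0 - sin2_MZ)   # alpha_1 = alpha_2 tan^2

def run(alpha_mu, b, mu, Lam):            # Eq. (run) at one loop
    return 1.0 / (1.0 / alpha_mu + b / (2.0 * np.pi) * np.log(mu / Lam))

for Lam in np.linspace(MZ, 6000.0, 60000):
    a1, a2 = run(alpha1_MZ, b1, MZ, Lam), run(alpha2_MZ, b2, MZ, Lam)
    if 1.0 / (1.0 + a2 / a1) >= 0.25:     # Eq. (angle1) reaches 1/4
        print(f"sin^2(theta_W) reaches 1/4 near Lambda ~ {Lam / 1000:.1f} TeV")
        break
\end{verbatim}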
In Fig.~(1a), dot-dashed line in black, we also consider the scalar sextet $S$\footnote{This scalar decouples into one triplet with $Y=-2$, one iso-doublet with $Y=1$ and one doubly charged scalar with $Y=2$.}, which implies an additional iso-doublet and a non-hermitian triplet, so that $N_S = 3$ and $N_T = 1$, leading to
\begin{equation}
b_1=\frac{20}{9}N_F+\frac{1}{6}N_S + N_T = \frac{49}{6} \,\,\,\,\,\mbox{and} \,\,\,b_2=\frac{4}{3}N_F+\frac{1}{6}N_S + \frac{2}{3}N_T -\frac{22}{3}= - \frac{13}{6}.
\label{bpred2}
\end{equation}
In this case the pole is pushed up slightly, to $\Lambda \sim 6$ TeV, which means that the sextet of scalars is sufficient to keep the model perturbative up to the scale allowed by current bounds.
Let us now consider that all the particle content of the model, i.e., the three triplets, the sextet of scalars and the exotic quarks, corresponds to active degrees of freedom, which means that their masses are below $\mu_{331}$.
For energies above the scale $\mu_{331}$, the running of $\sin^2 \theta_W(\Lambda)$ for $\Lambda > \mu_{331}$ is
\begin{equation}
\sin^2 \theta_W(\Lambda)=\frac{1}{4(1+\frac{\alpha_L(\Lambda)}{4\alpha_X(\Lambda)})}
\label{angle2}
\end{equation}
where the running of $\alpha_X(\Lambda)$ is now given by
\begin{equation}
\frac{1}{\alpha_X(\Lambda)}=\left(1 - 4\sin^2 \theta _W(M_{Z})\right)\frac{1}{\alpha(M_Z)} +\frac{1}{2\pi}\left(b_1 - 3b_2\right)\log(\frac{M_Z}{\mu_{331}}) + \frac{1}{2\pi}b_{X}\log(\frac{\mu_{331}}{\Lambda} ),
\label{running}
\end{equation}
and $\alpha_L(\Lambda =\mu_{331}) = \alpha_2(\mu_{331})$. In the equation above, $b_X$ is the renormalization coefficient for $U(1)_X$. When all the degrees of freedom above $\mu_{331}$ are taken into account we have $b_X = 20 + N_{\rho} + N_{\chi}$. Furthermore, when the exotic quarks are omitted, we introduce the notation ${b\!\!\!/}_X = 6 + N_{\rho} + N_{\chi}$, so that ${b\!\!\!/}_X = 8$.
Our results are shown in Fig.~(1b). The dot-dashed line in red corresponds to the case $\mu_{331}=4$ TeV, the blue line to $\mu_{331}=5$ TeV and the green line to $\mu_{331} = 6$ TeV. In this figure we considered all the particle content of the minimal 3-3-1 model. This case leads to $b_X = 22$ and presents a Landau pole around (2-2.5) TeV, which is below 4.3 TeV. This means that in this case the model is not predictive at all.
In Fig.~(1c) the contributions of the exotic quarks are omitted (which gives ${b\!\!\!/}_X = 8$). We assumed the same choices for $\mu_{331}$ as in Fig.~(1b). This corresponds to a more restrictive case for $\mu_{331} > 4$ TeV, which implies the existence of a Landau pole at $\Lambda < 2$ TeV. We see that the Landau pole is sensitive to whether or not the exotic quarks are active degrees of freedom, and to the $\mu_{331}$ energy scale.
Finally, in Fig.~(1d) we present the behavior of the running of $\alpha_X(\Lambda)$ given by Eq.~(\ref{running}) for $\mu_{331}$ = 4 TeV, assuming all the particle content of the model. The figure makes clear the position of the Landau pole at $\sim 2.5$ TeV and indicates the loss of the perturbative character of the model already at $\sim 2$ TeV, which is very close to the Landau pole. In what follows we discuss only the Landau pole.
We have presented here a short review of the problem concerning the Landau pole that arises in the minimal 3-3-1 model. The results we obtained are in agreement with previous ones. Combining these results with the current bound on the scale of the 3-3-1 symmetry breaking, given by $\mu_{331}=\langle \chi ^0 \rangle=\frac{v_\chi}{\sqrt{2}}$, which is around $4.3$ TeV, we conclude that the perturbative regime of the minimal 3-3-1 model in its original form depends strongly on whether or not the exotic quarks are active degrees of freedom. Even in the most favorable case the model is predictive only up to an energy scale of 6 TeV, which is very close to $4.3$ TeV.
In what follows we make an exhaustive investigation of the only proposal existing in the literature that rescues the model by evading the Landau pole through a particular extension of its particle content.
\subsection{Evading the pole with octets of leptons}
It was proposed in \cite{Dias:2004wk} that the pole can be evaded by adding three octets of leptons to the minimal 3-3-1 model content. The octet is composed of the following leptons
\begin{eqnarray}
\Xi=\left(\begin{array}{ccc}
\, \frac{1}{\sqrt{2}}t^0+\frac{1}{\sqrt{6}}\lambda^0 & t^+ & \delta^- \\
\newline \\
t^-& \,-\frac{1}{\sqrt{2}}t^0+\frac{1}{\sqrt{6}}\lambda^0 & \delta^{--} \\
\newline \\
\xi^{+}& \xi^{++} & \, -\frac{2}{\sqrt{6}}\lambda^0 \end{array}\right)\sim(1\,,\,8\,,\,0).
\label{octeto}
\end{eqnarray}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{fig2-Lept-10-02.pdf}
\caption{ In this figure we show the running of the electroweak mixing angle, given by Eq.~(\ref{angle1}) in panels (2a)-(2b) and by Eq.~(\ref{angle2}) in panels (2c)-(2d). The behavior of the curves is described in the text.}
\label{fig2}
\end{figure}
Here we calculate the contribution of the octets of leptons to the running of $\sin^2 \theta_W(\Lambda)$ given by Eq. (\ref{angle1}), which corresponds to $\Lambda < \mu_{331}$, for the cases in which we add one, two or three octets of leptons. Our results are shown in FIG. 2. In Fig.~(2a), the dot-dashed line in black shows the running for the minimal 3-3-1 model, while the dot-dashed light-blue curve shows the running for the minimal 3-3-1 model plus one octet of leptons (1[8]). Observe that the addition of one octet of leptons is sufficient to shift the Landau pole to energies above 50 TeV. In Fig.~(2b) we show the running for the cases of two and three octets of leptons.
When the degrees of freedom above $\mu_{331}$ are considered, which means that all the particle content of the model is taken into account, the running of $\sin^2 \theta_W(\Lambda)$ is given by Eq. (\ref{angle2}). The results for this case are displayed in Fig.~(2c), where the dot-dashed line in blue corresponds to three octets and $\mu_{331}= 4$ TeV. As we can see, in this case the Landau pole is shifted to $\Lambda > O(50)$ TeV.
For the sake of completeness, in Fig.~(2d) we present the running of the Weinberg angle for three values of $\mu_{331}$ when the contributions of the exotic quarks are omitted, which means ${b\!\!\!/}_X = 8$; only in this plot do we consider the addition of a single octet. We see that for $\mu_{331} > 4$ TeV the Landau pole is absent even with just one octet of leptons. In general, the higher $\mu_{331}$ is, the more harmless the pole gets. We therefore conclude that octets of leptons are efficient in circumventing the Landau pole. Next we analyze other potentially interesting possibilities that evade the Landau pole.
\section{Evading the Landau pole with leptoquarks}
In this section we consider the contribution of leptoquarks to the running of the Weinberg angle $\sin ^2\theta_W(\Lambda)$. Leptoquarks are a very well motivated form of new physics that is expected to manifest at the TeV scale; they engender an interesting flavor physics scenario\cite{Dorsner:2016wpm}, give robust contributions to the $g-2$ of the muon, and may generate neutrino masses at one loop\cite{Babu:2020hun,Parashar:2022wrd}. Moreover, leptoquarks may be probed at the LHC\cite{Kramer:2004df}.
Because the Yukawa interactions in 3-3-1 models discriminate among families\cite{Ng:1992st,Oliveira:2022vjo}, i.e., one family of quarks must transform differently from the other two, we end up with a proliferation of leptoquark multiplets. To see this, observe that from the quark and lepton content of the minimal 3-3-1 model we can have scalar leptoquarks in the following representations,
\begin{eqnarray}
&& \bar L^C_{a_L} Q_{3_L} \sim (1\,,\,3\,,\,0)\times (3\,,\,3\,,\,2/3) \sim (3\,,\,3^* \oplus 6\,,\,2/3),\nonumber \\
&&\bar L_{a_L} Q_{3_L} \sim (1\,,\,3^*\,,\,0)\times (3\,,\,3\,,\,2/3) \sim (3\,,\,1 \oplus 8\,,\,2/3),\nonumber \\
&&\bar L^C_{a_L} Q_{i_L} \sim (1\,,\,3,0)\times (3\,,\,3^*\,,\,-1/3) \sim (3\,,\,1 \oplus 8\,,\,-1/3),\nonumber \\
&&\bar L_{a_L} Q_{i_L} \sim (1\,,\,3^*,0)\times (3\,,\,3^*\,,\,-1/3) \sim (3\,,\,3 \oplus 6^*\,,\,-1/3),\nonumber \\
&&\bar L_{a_L} d_{a_R} \sim (1\,,\,3^*,0)\times (3\,,\,1\,,\,-1/3) \sim (3\,,\,3^*_d\,,\,-1/3),\nonumber \\
&&\bar L_{a_L} u_{a_R} \sim (1\,,\,3^*,0)\times (3\,,\,1\,,\,2/3) \sim (3\,,\,3^*_u\,,\,2/3)
\end{eqnarray}
There are also singlet leptoquarks, which we do not consider in this work. The leptoquarks we are interested in are the following:
\begin{eqnarray}
&& \phi^8_a \sim (3\,,\, 8\,,\, 2/3),\,\,\,\,\,\,\,\,\,\Phi^8_a \sim (3\,,\, 8\,,\, -1/3),\nonumber \\
&& \phi^6_a \sim (3\,,\,6\,,\,2/3),\,\,\,\,\,\,\,\,\,\, \Phi^6_a \sim (3\,,\,6^*\,,\,-1/3),\nonumber \\ && \phi^3_a \sim(3\,,\ 3^*\,,\,2/3),\,\,\,\,\,\,\,\,\,\,\Phi^3_a \sim (3\,,3\,,\,-1/3),
\label{LQrep}
\end{eqnarray}
The index $a$ refers to color. After symmetry breaking, these multiplets decompose as
\begin{eqnarray}
&&[8]_{X=2/3}=[3]_{Y=4/3}+[2]_{Y=-5/3}+[2^{\prime}]_{Y=13/6}+[1]_{Y=4/3}\nonumber \\
&& [8]_{X=-1/3}=[3]_{Y=-2/3}+[2]_{Y=7/3}+[2^{\prime}]_{Y=-11/3}+[1]_{Y=-2/3}\nonumber \\
&& [6]_{X=2/3}=[3]_{Y=-2/3}+[2]_{Y=7/3}+[1]_{Y=16/3}\nonumber \\
&& [6]_{X=-1/3}=[3]_{Y=-4/3}+[2]_{Y=5/3}+[1]_{Y=14/3}\nonumber \\
&& [3_d]_{X=2/3}=[2]_{Y=-2/3}+[1]_{Y=7/3}\nonumber \\
&&[3_u]_{X=-1/3}=[2]_{Y=-5/3}+[1]_{Y=5/3},
\label{decompo}
\end{eqnarray}
where $Y$ refers to the hypercharges of the leptoquarks.
Similarly to the previous cases discussed above, here we calculate the contributions of these leptoquarks to the running of $\sin^2 \theta_W(\Lambda)$ given by Eq.~(\ref{angle1}), as well as by Eq.~(\ref{angle2}), always having economical scenarios in mind. Our results are displayed in FIG. 3.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{fig3-lepQ-10-02.pdf}
\caption{ In this figure we show the running of the electroweak mixing angle, given by Eq.~(\ref{angle1}) in panels (3a)-(3b) and by Eq.~(\ref{angle2}) in panels (3c)-(3d). The behavior of the curves is described in the text.}
\label{fig3}
\end{figure}
In Fig.~(3a), the dot-dashed light-blue curve corresponds to the addition of one octet of leptoquarks ($\phi^8_a$) to the minimal 3-3-1 model. In this case the Landau pole is shifted to $\Lambda \sim 50$ TeV. The cases of two and three octets of leptoquarks are depicted in Fig.~(3b), where the dot-dashed orange line corresponds to two octets of leptoquarks ($\phi^8_a, \Phi^8_a$) and the magenta one to three octets; again the Landau pole is avoided for $\Lambda < \mu_{331}$. Observe that octets of leptoquarks have the same effect as the octets of leptons discussed above and presented in Figs.~2a and 2b.
However, when we consider all degrees of freedom above $\mu_{331}$, i.e., when all the particle content of the model is active, the running of $\sin^2 \theta_W(\Lambda)$ is given by Eq.~(\ref{angle2}) and the results are presented in Fig.~(3c), with the dot-dashed line in red corresponding again to the minimal 3-3-1 model at $\mu_{331}=4$ TeV with $b_X =22$. In this case the addition of one octet of leptoquarks (dot-dashed light-blue curve) is sufficient to push the Landau pole to values above 5 TeV, which is a harmless scale according to current bounds. The other cases, such as the addition of one triplet or one sextet together with the octet of leptoquarks ($\phi^8_a, \phi^{(3,6)}_a$), are also considered (dashed orange and green curves), and the Landau pole is shifted to $\Lambda \sim O(11-20)$ TeV, which is a harmless scale too. In Fig.~(3d) we present the most general scenario, involving leptoquarks with hypercharges $(2/3, -1/3)[\phi,\Phi]$.
\section{Conclusions}
In this work we calculated the contributions of leptoquarks to the running of $\sin^2 \theta_W$, with the aim of assessing their impact on the non-perturbative regime of the model. The non-perturbative regime (a Landau-like pole) of the model may manifest itself, depending on the particle content that contributes to the running of $\sin^2 \theta_W$, already at a few TeV. Current bounds demand that the $SU(3)_L \times U(1)_X$ symmetry breaks around 4 TeV, leaving the model, in its original form, phenomenologically unpredictive. Indeed, in its original form without the sextet of scalars, the model becomes non-perturbative below 4 TeV; the presence of the sextet, which is necessary to generate lepton masses, pushes this value to 6 TeV.
We thus have to resort to extensions of the model in order to evade the non-perturbative regime and recover its predictability. Along these lines, the addition of octets of leptons to the particle content of the minimal 3-3-1 model can do the job, as we can see in FIG. 2. In general, when all the particle content of the model, including the three octets of leptons, contributes to the running of $\sin^2 \theta_W$, the model predicts a Landau pole around $\Lambda \sim 100$ TeV. The Landau pole is completely evaded only when we consider the contributions of three octets of leptons in an effective 2HDM scenario.
In this work we analyzed the capacity of leptoquarks to evade the Landau pole. Our results are displayed in FIG. 3. The model allows leptoquarks in octet, sextet, triplet and singlet representations. As our main result, we showed that leptoquarks are as efficient in evading the Landau pole as octets of leptons. For example, when adding leptoquarks to the 2HDM scenario, one or two octets of leptoquarks are sufficient to shift the Landau pole to a harmless energy scale, while the case of three octets evades the Landau pole completely. When we consider all the particle content as active in the running of $\sin^2 \theta_W$, the most interesting scenario we found was the case of one octet plus one sextet of leptoquarks, which pushes the Landau pole to an energy scale around 20 TeV. If we wish to push the Landau pole even higher, we may resort to the combinations presented in Fig.~3d, where the case of two octets and two sextets is very efficient, pushing this energy scale to $\Lambda \sim 70$ TeV.
In conclusion, both octets of leptons and multiplets of leptoquarks are very efficient in evading the Landau pole that arises in the minimal 3-3-1 model at the TeV scale. The advantage of leptoquarks lies in their phenomenology, since they are very attractive from the point of view of flavor physics.
\section*{Acknowledgments}
C.A.S.P was supported by the CNPq research grants No. 311936/2021-0 and A. Doff. was supported by the CNPq research grants No. 310015/2020-0.
\section*{Acknowledgements}
This work was supported by the U.S.\,Department of Energy, Office of Science, Office of Fusion Energy Sciences and Office of Advanced Scientific Computing Research through the Scientific Discovery through Advanced Computing project on Plasma-Surface Interactions.
This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S.\,Department of Energy Office of Science User Facility operated under Contract DE-AC02-05CH11231.
This article has been co-authored by employees of National Technology \& Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employees co-own right, title and interest in and to the article and are responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this article or allow others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan https://www.energy.gov/downloads/doe-public-access-plan.
\section{Bayesian compressive sensing}\label{sec:bayeseian_compressive_sensing}
The least-squares regression problem from \cref{eq:least_squares} can be written in matrix form as
\begin{align}
{\boldsymbol{c}}^{\mathrm{LS}} &= \argmin_{\boldsymbol{c}} \|{\boldsymbol{y}} - {\boldsymbol{P}} {\boldsymbol{c}}\|_2^2, \label{eq:least_squares_matrix}
\end{align}
where ${\boldsymbol{y}}$ is an $N$-dimensional vector of model outputs with $y_n = y^{(n)}$ for $n = 1, 2, \ldots, N$, ${\boldsymbol{P}}$ is a matrix of size $N \times K$ with $P_{nk} = \Phi_{k}({\boldsymbol{\xi}}^{(n)})$ for $n = 1, 2, \ldots, N$ and $k = 1, 2, \ldots, K$, where $N$ is the number of training samples, $K$ is the number of PCE basis terms, and where $\|\cdot\|_2$ denotes the usual (Euclidean) $\ell_2$-norm.
When the number of model evaluations $N$ is lower than the number of polynomial terms $K$, the least-squares problem in~\eqref{eq:least_squares_matrix} is underdetermined. In such cases, the solution ${\boldsymbol{c}}$ is not well-defined, and one can add an $\ell_1$ regularization term that enforces sparsity in the polynomial expansion coefficients, i.e.,
\begin{equation}\label{eq:regularized_least_squares}
{\boldsymbol{c}}^{\mathrm{CS}} = \argmin_{\boldsymbol{c}} \left(\|{\boldsymbol{y}} - {\boldsymbol{P}} {\boldsymbol{c}}\|^2_2 + \gamma \|{\boldsymbol{c}}\|_1\right),
\end{equation}
where $\gamma>0$ is a regularization parameter, typically determined using cross-validation, and where $\|\cdot\|_1$ denotes the $\ell_1$-norm. The regularization term will ensure that only a small subset of basis terms will be retained in the solution vector ${\boldsymbol{c}}^{\mathrm{CS}}$. This approach is in line with the compressive sensing (CS) method commonly used in signal recognition, see \cite{candes2006,donoho2006}.
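As an illustration of~\eqref{eq:regularized_least_squares} (not the solver employed in this work), an $\ell_1$-regularized fit could be set up as follows; the basis matrix and data below are random placeholders, and the regularization strength is selected by cross-validation.
\begin{verbatim}
# Hedged sketch of the compressive-sensing regression in Eq. (13), using
# scikit-learn's Lasso (which solves the same l1-regularized least-squares
# problem up to a 1/(2N) scaling of the data term).  P (N x K basis matrix)
# and y (N model outputs) are placeholders here.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
N, K = 40, 200                      # few samples, many basis terms (illustrative)
P = rng.standard_normal((N, K))     # placeholder for Phi_k(xi^(n))
c_true = np.zeros(K); c_true[[2, 7, 31]] = [1.0, -0.5, 0.25]
y = P @ c_true + 0.01 * rng.standard_normal(N)

# cross-validated choice of the regularization parameter
fit = LassoCV(cv=5, fit_intercept=False).fit(P, y)
c_cs = fit.coef_
print("retained terms:", np.flatnonzero(np.abs(c_cs) > 1e-8))
\end{verbatim}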
\begin{algorithm*}
\begin{algorithmic}[1]
\Statex \strut\textbf{input:} training data $\mathscr{D}_{\textrm{train}}$, number of iterations $L$
\Statex \textbf{output:} higher-order sparse basis ${\mathcal{I}}_d^{(L)}$, coefficient vector ${\boldsymbol{c}}^{(L)}$
\Statex
\Procedure{\textsf{construct\texttt{\_}basis}}{$\mathscr{D}_{\textrm{train}}$, $L$}
\State ${\mathcal{I}}_d^{(0)} \gets \{{\boldsymbol{0}}\}$ \Comment{initialize the index set}
\For{$\ell$ \textbf{from} 1 \textbf{to} $L$} \Comment{perform $L$ iterations}
\State ${\mathcal{I}}_\mathrm{new} \gets {\mathcal{I}}_d^{(\ell - 1)}$
\ForEach {${\mathfrak{u}} \in {\mathcal{I}}_d^{(\ell - 1)}$} \Comment{loop over each index in the index set}
\For{$j$ \textbf{from} 1 \textbf{to} $d$}
\State ${\mathcal{I}}_\mathrm{new} \gets {\mathcal{I}}_\mathrm{new} \cup \{{\mathfrak{u}} + {\boldsymbol{e}}_j\}$ \Comment{add all forward neighbors}
\EndFor
\EndFor
\State ${\boldsymbol{c}}^{(\ell)}, {\mathcal{I}}_d^{(\ell)} \gets \textsf{BCS}(\mathscr{D}_{\textrm{train}}, {\mathcal{I}}_\mathrm{new})$ \Comment{run BCS to obtain sparse basis}
\EndFor
\EndProcedure
\end{algorithmic}
\caption{Adaptive basis growth procedure from~\cite{sargsyan2014}.}
\label{alg:wibcs}
\end{algorithm*}
In Bayesian compressive sensing (BCS), the regression problem in~\eqref{eq:regularized_least_squares} is embedded in a Bayesian framework, see, e.g.,~\cite{ji2008,babacan2009,sargsyan2014}. In BCS, one constructs an \emph{uncertain} PCE surrogate model by assigning a prior probability distribution $p({\boldsymbol{c}})$ to the coefficient vector ${\boldsymbol{c}}$. The goal is then to obtain the posterior distribution $p({\boldsymbol{c}} | \mathscr{D}_\textrm{train})$, i.e., the probability distribution of the coefficient vector ${\boldsymbol{c}}$ after observing the training data $\mathscr{D}_\textrm{train}$. The posterior distribution is related to the prior probability via the likelihood function ${\mathcal{L}}_{\mathscr{D}_\textrm{train}}({\boldsymbol{c}})$, a consequence of Bayes' law, i.e.,
\begin{equation}
p({\boldsymbol{c}} | \mathscr{D}_\textrm{train}) \propto {\mathcal{L}}_{\mathscr{D}_\textrm{train}}({\boldsymbol{c}}) p({\boldsymbol{c}}).
\end{equation}
The likelihood expresses the goodness-of-fit of the PCE surrogate model with coefficients ${\boldsymbol{c}}$ to the available input-output data $\mathscr{D}_\textrm{train}$. Maximizing the (log of the) posterior probability, the optimal value for the coefficients is
\begin{equation}\label{eq:bayesian_compressive_sensing}
{\boldsymbol{c}}^{\mathrm{BCS}} = \argmax_{\boldsymbol{c}} \left(\log {\mathcal{L}}_{\mathscr{D}_\textrm{train}}({\boldsymbol{c}}) + \log p({\boldsymbol{c}})\right).
\end{equation}
This is clearly equivalent to the CS formulation in~\eqref{eq:regularized_least_squares}, where the objective function is the negative log-likelihood and the regularization term is the negative log-prior. In particular, combining an i.i.d.\,Gaussian likelihood with constant variance $\sigma^2$, i.e.,
\begin{align}
&{\mathcal{L}}_{\mathscr{D}_\textrm{train}}({\boldsymbol{c}}) = \\
&\frac{1}{(\sqrt{2\pi}\sigma)^N} \exp \left( - \frac{1}{2\sigma^2} \sum_{n=1}^{N} \left(f({\boldsymbol{x}}^{(n)}) - \sum_{k=1}^K c_k \Phi_k({\boldsymbol{\xi}}^{(n)})\right)^2 \right)
\end{align}
with a weighted Laplace prior
\begin{equation}
p({\boldsymbol{c}}) = \left(\frac{\gamma}{2}\right)^{K} \exp\left(- \gamma \sum_{k=1}^K |c_k|\right),
\end{equation}
the solution of~\eqref{eq:regularized_least_squares} can be recovered, see, e.g.,~\cite{sargsyan2014}. A key strength of the BCS approach in \cref{eq:bayesian_compressive_sensing}, however, is that, when only a small number of training samples are available, as will be the case in our numerical experiments later on, it leads to a probabilistic surrogate model that quantifies the additional uncertainty in the prediction through the posterior distribution $p({\boldsymbol{c}} | \mathscr{D}_\textrm{train})$ of the PCE coefficients.
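For completeness, the correspondence noted above can be made explicit. Taking the negative logarithm of the posterior and dropping terms that do not depend on ${\boldsymbol{c}}$ gives
\begin{equation}
-\log\left({\mathcal{L}}_{\mathscr{D}_\textrm{train}}({\boldsymbol{c}})\, p({\boldsymbol{c}})\right) = \frac{1}{2\sigma^2} \|{\boldsymbol{y}} - {\boldsymbol{P}}{\boldsymbol{c}}\|_2^2 + \gamma \|{\boldsymbol{c}}\|_1 + \mathrm{const},
\end{equation}
so that maximizing the posterior in~\eqref{eq:bayesian_compressive_sensing} is equivalent to solving~\eqref{eq:regularized_least_squares} with regularization parameter $2\sigma^2\gamma$ in place of $\gamma$.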
It can be shown that the BCS solution for the coefficients ${\boldsymbol{c}}^{\mathrm{BCS}}$ is a multivariate Gaussian posterior distribution, and a fast approximate solution can be obtained using techniques borrowed from the relevance vector machine (RVM) literature, see, e.g.,~\cite{tipping2003}.
An adaptive basis growth procedure to construct the PCE index set ${\mathcal{I}}_d$ has been suggested in~\cite{sargsyan2014}. Starting from an initial set of basis terms ${\mathcal{I}}_d^{(0)} = \{{\boldsymbol{0}}\}$, we gradually enrich the polynomial basis ${\mathcal{I}}_d^{(\ell)}$ for $\ell = 1, 2, \ldots, L$ with more terms, in each step running the BCS algorithm to reduce the set of basis vectors and only retain the relevant components. The full procedure is shown in~\cref{alg:wibcs}. The procedure is terminated when a predefined maximum number of iterations $L$ is reached. Alternatively, the procedure may be terminated early when there is no change in the index set ${\mathcal{I}}_d^{(\ell)}$ for two successive iterations $\ell - 1$ and $\ell$. In practice,~\cref{alg:wibcs} is repeated $R=3$ times, each time using a random partitioning of the data $\mathscr{D}$ into 90\% training samples and 10\% test samples, to avoid overfitting. The final index set is then computed as the intersection of the $R$ trial index sets ${\mathcal{I}}_d^{(r, L)}$, to avoid any remaining overfitting, i.e.,
\begin{equation}
{\mathcal{I}}_d = \bigcap_{r=1}^R {\mathcal{I}}_d^{(r, L)}.
\end{equation}
Once the final index set ${\mathcal{I}}_d$ has been obtained, we perform a final least-squares fit to compute the coefficients of the PCE with only the remaining basis terms.
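A schematic Python rendering of~\cref{alg:wibcs} together with the intersection step is given below; the routine \texttt{bcs\_prune} is a hypothetical placeholder for the BCS solver, and the sketch is meant only to clarify the control flow, not to reproduce the actual implementation.
\begin{verbatim}
# Schematic sketch of the adaptive basis growth of Algorithm 1 plus the
# intersection over R random data partitions.  `bcs_prune` is a hypothetical
# placeholder for the BCS solver that returns the retained multi-indices.
import numpy as np

def grow_basis(data_train, L, d, bcs_prune):
    index_set = {(0,) * d}                       # start from the constant term
    for _ in range(L):
        new_set = set(index_set)
        for u in index_set:                      # add all forward neighbors
            for j in range(d):
                v = list(u); v[j] += 1
                new_set.add(tuple(v))
        index_set = bcs_prune(data_train, new_set)   # keep relevant terms only
    return index_set

def final_index_set(data, L, d, bcs_prune, R=3, rng=np.random.default_rng(0)):
    sets = []
    for _ in range(R):                           # R random 90/10 partitions
        idx = rng.permutation(len(data))
        train = [data[i] for i in idx[: int(0.9 * len(data))]]
        sets.append(grow_basis(train, L, d, bcs_prune))
    return set.intersection(*sets)               # intersection of trial sets
\end{verbatim}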
\section{Conclusions and Future Work}\label{sec:conclusions_and_future_work}
Although there have been various attempts to model the mechanics involved in the material subsurface evolution of tungsten exposed to helium plasmas, a global sensitivity analysis framework to investigate the effect of model parameters in these models has hitherto been lacking. Yet, information about the sensitivity of model outputs to these input parameters is of crucial importance in order to improve understanding of the underlying complex physical processes and interactions that control the subsurface morphology, as well as to facilitate further model development, and ultimately to target new experiments that can be used to reduce the model uncertainty. As the computer codes used to simulate these models require several hours to days of computation time on HPC systems for even a single model evaluation, it is important to use efficient sensitivity analysis methods that limit the size of the ensemble of model evaluations, but provide sufficient accuracy to extract the needed information. In this work, we employed such a global sensitivity analysis framework using polynomial chaos surrogate models that allow extraction of the required sensitivity information directly from the polynomial coefficients. These sparse, high-dimensional nonlinear polynomial surrogates are constructed by relying on Bayesian compressive sensing methods~\cite{sargsyan2014}.
We used our framework to perform a global sensitivity analysis of \texttt{FTX}{}, a coupled code part of a larger multiscale, multiphysics modeling attempt to predict the material evolution of plasma-facing components in future fusion reactors~\cite{lasa2021}. We considered two problem settings: the \textsf{ITER-He}{} setting that mimics ITER-like conditions of early operation with a helium plasma, and the \textsf{PISCES-A}{} setting that mimics the experimental conditions inside the PISCES-A linear plasma device. Given specified uncertainty in model parameters/inputs, our analysis highlights the importance of individual parameters as measured by their fractional contribution to the variance in uncertain model outputs of interest. Our results indicate that, for the \textsf{ITER-He}{} setting, where we assume the ion energy-angle distribution (IEAD) is fixed, 99\% of the variability in the surface growth and helium retention can be explained by the variation in the migration energies $E_{\ce{He}_1}$, $E_{\ce{He}_2}$ and $E_{\ce{He}_4}$. In the \textsf{PISCES-A}{} setting, the most important parameters are the surface binding energy of tungsten, the ion impact energy and the migration energy $E_{\ce{He}_1}$. With this key set of important parameters identified, we constructed a more accurate surrogate model in only 5 dimensions. This surrogate model can be used to replace the coupled \texttt{FTX}{} code in the multiphysics modeling hierarchy from~\cite{lasa2020}. Model calibration may also be performed with this more accurate surrogate model, using the data reported in, e.g.,~\cite{woller2015}. This will be the topic of future research efforts.
In addition, we plan to use our framework to repeat the global sensitivity analysis study for the coupled \texttt{FTX}{} code, employing a recently developed non-stochastic bubble bursting model by the authors of~\cite{blondel2017a}. This new model should reduce the computational burden involved with running the coupled code, allowing larger ensemble sizes to be considered in the sensitivity analysis study. Finally, we mention that the more modern \texttt{RustBCA} code developed in~\cite{drobny2021} can be used as an alternative to the \texttt{F-TRIDYN}{} code we employed in this study. However, a code coupling between \texttt{RustBCA} and \texttt{Xolotl}{} similar to~\cite{lasa2020} is yet to be developed.
Finally, the UQ framework developed in this paper will be used to perform global sensitivity analysis and surrogate construction for a larger part of the multiphysics framework modeling plasma and surface material interactions under construction\cite{lasa2020}. In particular, ongoing efforts include a sensitivity analysis for the coupling of the impurity transport code \texttt{GITR}, see~\cite{younkin2021}, with the ion-solid interaction code \texttt{F-TRIDYN}{}, and for the coupled \texttt{GITR}--\texttt{FTX}{} simulation framework.
\section{Global Sensitivity Analysis Framework}\label{sec:global_sensitivity_analysis_framework}
In this section, we outline our framework for global sensitivity analysis using polynomial chaos expansion (PCE) surrogate models. As a well-established tool for uncertainty representation of and propagation through computational models, PCEs have gained widespread popularity over the past three decades, see, e.g.,~\cite{ghanem1991,lemaitre2010}. In this work, we resort to PCE surrogates because they offer two attractive features. First, they can be used as a surrogate to replace a computationally expensive model in studies that require intensive sampling. Second, owing to the orthogonality of the polynomials, PCE surrogates allow closed-form expressions for variance-based sensitivity indices. We exploit both features in our numerical results presented in~\cref{sec:results_and_discussion}.
\subsection{Polynomial chaos expansion surrogate models}\label{sec:polynomial_chaos_expansion_surrogate_models}
Suppose that, for a given physical model $f$, the relation between the model output $y$ and the $d$-dimensional input parameters ${\boldsymbol{x}} = (x_1, x_2, \ldots, x_d)$ can be written as
\begin{equation}\label{eq:model}
y = f({\boldsymbol{x}}).
\end{equation}
In what follows, we will refer to $y$ as the \emph{quantity of interest}. Furthermore, in the context of this paper, we assume that the only information available about any $x_j$ is its lower bound $a_j$ and upper bound $b_j$, $j=1, 2, \ldots, d$. In this case, each $x_j$ can be rescaled to a random variable $\xi_j \in [-1, 1]$, where
\begin{equation}\label{eq:mapping}
\xi_j = 2 \left( \frac{x_j - a_j}{b_j - a_j} \right) - 1, \quad j = 1, 2, \ldots, d.
\end{equation}
A PCE surrogate for $y$ can be written as
\begin{equation}\label{eq:pce_infinite}
\tilde{y} = \sum_{{\boldsymbol{u}} \in {\mathcal{I}}_d} c_{{\boldsymbol{u}}} \Phi_{\boldsymbol{u}}({\boldsymbol{\xi}}),
\end{equation}
where ${\boldsymbol{u}} = (u_1, u_2, \ldots, u_d) \in {\mathcal{I}}_d$ is a multi-index of length $d$, ${\mathcal{I}}_d \subset {\mathbb{N}}_0^d$ is an appropriately-chosen set of multi-indices, $\Phi_{\boldsymbol{u}}$ is a multivariate orthogonal polynomial expressed in terms of the rescaled inputs ${\boldsymbol{\xi}} = (\xi_1, \xi_2, \ldots, \xi_d)$, and the $c_{\boldsymbol{u}}$ are deterministic coefficients~\cite{ghanem2003, smith2013}.
In our setting, the multivariate orthogonal polynomials $\Phi_{\boldsymbol{u}}$ correspond to a product of (normalized) univariate Legendre polynomials, i.e.,
\begin{equation}
\Phi_{\boldsymbol{u}}({\boldsymbol{\xi}}) \coloneqq \prod_{j=1}^d \phi_{u_j}(\xi_j),
\end{equation}
where $\phi_{u_j}$ is the (normalized) one-dimensional Legendre polynomial of degree $u_j$, $j=1, 2, \ldots, d$, see, e.g.,~\cite{sargsyan2016}. The first four normalized univariate Legendre polynomials $\phi_m(\xi)$ are given by
\begin{align}
\phi_0(\xi) &= z_0,\\
\phi_1(\xi) &= z_1 \xi,\\
\phi_2(\xi) &= \frac{z_2}{2}(3\xi^2 - 1), \text{ and } \\
\phi_3(\xi) &= \frac{z_3}{2}(5\xi^3 - 3\xi),
\end{align}
where $z_m = \sqrt{2m+1}$ is the normalization constant. By convention, the \emph{order} $|u|$ of the multivariate polynomial $\Phi_{\boldsymbol{u}}$ is given as the sum of the orders of all univariate polynomials in the expansion, i.e., $|u| \coloneqq u_1 + u_2 + \ldots + u_d$.
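As a small illustration of these conventions (not part of the actual workflow), the sketch below rescales an input via~\eqref{eq:mapping} and verifies the orthonormality of the normalized Legendre polynomials with respect to the uniform density $1/2$ on $[-1, 1]$.
\begin{verbatim}
# Illustrative check: rescale an input via Eq. (mapping) and verify that
# phi_m(xi) = sqrt(2m+1) * Legendre_m(xi) are orthonormal with respect to
# the uniform density 1/2 on [-1, 1].
import numpy as np
from numpy.polynomial import legendre as leg

def rescale(x, a, b):                      # Eq. (mapping)
    return 2.0 * (x - a) / (b - a) - 1.0

def phi(m, xi):                            # normalized Legendre polynomial
    return np.sqrt(2 * m + 1) * leg.legval(xi, np.eye(m + 1)[m])

print(rescale(2.5, 1.0, 4.0))              # midpoint of [1, 4] maps to 0.0

xi, w = leg.leggauss(20)                   # Gauss-Legendre nodes and weights
for m in range(4):
    for n in range(4):
        inner = 0.5 * np.sum(w * phi(m, xi) * phi(n, xi))
        assert abs(inner - (m == n)) < 1e-12
print("first four normalized Legendre polynomials are orthonormal")
\end{verbatim}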
An essential part of the PCE construction process is the choice of the index set ${\mathcal{I}}_d$. Suppose we assign an index $k$ to each multi-index ${\boldsymbol{u}}^{(k)} = (u_1^{(k)}, u_2^{(k)}, \ldots, u_d^{(k)})$ in the index set ${\mathcal{I}}_d$ and let $K = |{\mathcal{I}}_d|$ denote the total number of terms in the expansion. In that case, we can write the PCE as
\begin{equation}\label{eq:pce_finite}
\tilde{y} = \sum_{k = 1}^K c_{k} \Phi_k({\boldsymbol{\xi}}),
\end{equation}
where the $c_{k}$ are the coefficients and $\Phi_k$ is the multivariate polynomial associated with multi-index ${\boldsymbol{u}}^{(k)}$. In a preprocessing step, one may then identify important parameters or important interaction terms to decide on suitable truncation rules for constructing the index set ${\mathcal{I}}_d$. The vector of coefficients ${\boldsymbol{c}} = (c_k)_{k=1}^K$ can subsequently be determined from a set of input-output pairs $\mathscr{D} = \{({\boldsymbol{x}}^{(n)}, y^{(n)})\}_{n=1}^N$ by solving a least-squares regression problem of the form
\begin{equation}
{\boldsymbol{c}}^{\mathrm{LS}} = \argmin_{\boldsymbol{c}} \sum_{n=1}^N \left( y^{(n)} - \sum_{k=1}^K c_k \Phi_k({\boldsymbol{\xi}}^{(n)}) \right)^2, \label{eq:least_squares}
\end{equation}
where ${\boldsymbol{\xi}}^{(n)}$ corresponds to the rescaled input parameters ${\boldsymbol{x}}^{(n)}$ defined in~\eqref{eq:mapping}.
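A minimal sketch of this least-squares fit is given below; the index set, inputs and outputs are illustrative placeholders rather than data from \texttt{FTX}{}.
\begin{verbatim}
# Minimal sketch of the least-squares fit in Eq. (least_squares).  The index
# set, the inputs and the model outputs below are illustrative placeholders.
import numpy as np
from numpy.polynomial import legendre as leg

def phi(m, x):                             # normalized Legendre polynomial
    return np.sqrt(2 * m + 1) * leg.legval(x, np.eye(m + 1)[m])

index_set = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)]    # hypothetical I_d, d = 2

def design_matrix(Xi, index_set):
    # P[n, k] = Phi_k(xi^(n)) = prod_j phi_{u_j}(xi_j^(n))
    return np.array([[np.prod([phi(uj, xi[j]) for j, uj in enumerate(u)])
                      for u in index_set] for xi in Xi])

rng = np.random.default_rng(1)
Xi = rng.uniform(-1.0, 1.0, size=(50, 2))               # rescaled inputs xi^(n)
y = 1.0 + 0.5 * Xi[:, 0] - 0.2 * Xi[:, 0] * Xi[:, 1]    # placeholder outputs
c_ls, *_ = np.linalg.lstsq(design_matrix(Xi, index_set), y, rcond=None)
print(dict(zip(index_set, np.round(c_ls, 3))))
\end{verbatim}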
In a high-dimensional setting, i.e., when the number of input parameters $d$ is large, the number of basis terms $K$ retained using such a predefined truncation rule may lead to an underdetermined regression problem, especially when only a small number of measurements $N$ are available. This naturally leads to overfitted model approximations, meaning that the model is too complex and has too many degrees of freedom for the data at hand, and eventually to inaccurate sensitivity information extracted from these models. While there are dimension truncation schemes, such as the Smolyak construction~\cite{smolyak1963,conrad2013}, hyperbolic cross construction~\cite{blatman2011} or dimension-adaptive construction~\cite{gerstner2003} that somewhat overcome the dramatic growth of the number of basis terms and work well for low to moderate dimensional problems, these methods are impractical for the 17-dimensional \texttt{FTX}{} setting at hand. In \cref{sec:bayeseian_compressive_sensing}, we discuss an adaptive approach for constructing the sparse, high-dimensional multi-index set ${\mathcal{I}}_d$ based on Compressive Sensing (CS), see~\cite{candes2006,donoho2006}, that overcomes the aforementioned overfitting problem.
\begin{table*}[t]
\centering
\begin{tabular}{lll S[table-format=3.3] S[table-format=3.3] S[table-format=3.3] l} \toprule
& parameter name & symbol & {nominal value} & {lower bound} & {upper bound} & unit \\ \midrule
\input{dat/coupled_parameters}
\end{tabular}
\caption{Specification of the uncertain parameters used in the \texttt{FTX}{} setting. The ion impact energy and incident angle are only relevant for the \textsf{PISCES-A}{} setting.}
\label{tab:uncertain_parameters}
\end{table*}
\subsection{Global sensitivity analysis using PCEs}\label{sec:global_sensitivity_analysis_using_PCEs}
Global sensitivity analysis (GSA) methods are used to quantify the sensitivity of the model output $y = f({\boldsymbol{x}})$ to the model input parameters ${\boldsymbol{x}}$. Variance-based sensitivity analysis using Sobol' indices is one of the most commonly used GSA methods, see~\cite{sobol2001}. The Sobol' indices capture the amount of variance in the model output attributed to a certain (set of) input parameter(s). In this section, we briefly recall the definition of the total effect sensitivity indices, and recall how they can be estimated from the PCE for $y$. For a more in-depth analysis, we refer to~\cite{saltelli2008}.
The \emph{total-effect} Sobol' sensitivity indices are defined as
\begin{equation}\label{eq:total_effect_sensitivity_index}
S^{\text{total}}_j = 1 - \frac{\V_{{\boldsymbol{x}}_{\shortminus{}j}}[\E_{x_j}[f({\boldsymbol{x}}|{\boldsymbol{x}}_{\shortminus{}j})]]}{\V[f({\boldsymbol{x}})]},
\end{equation}
where $\V_{{\boldsymbol{x}}_{\shortminus{}j}}[\;\cdot\;]$ is the variance with respect to all parameters ${\boldsymbol{x}}$ except the $j$th parameter $x_j$, and $\E_{x_j}[\;\cdot\;]$ is the expected value with respect to the $j$th parameter $x_j$. The total-effect sensitivity index is a measure of the fraction of the total variance of the model output that can be attributed to the $j$th parameter, including its interaction with other input variables, see~\cite{sobol2001}.
The total-effect Sobol' sensitivity indices allow for a natural way to order the parameters $x_j$, $j=1, 2, \ldots, d$, according to their relative contribution to the total output variance. We will exploit this property in \cref{sec:results_and_discussion} to identify a subset of important parameters in \texttt{FTX}{}.
The Sobol' sensitivities are usually computed using a random sampling approximation for~\eqref{eq:total_effect_sensitivity_index} written in integral form -- the so-called \emph{pick-and-freeze} approach, see, e.g.,~\cite{jansen1999, saltelli2002, saltelli2008}. However, if the model $f({\boldsymbol{x}})$ is computationally expensive to evaluate, this approach may be computationally infeasible, since typically many model evaluations will be required to achieve reasonable accuracy. Alternatively, the values for the sensitivity indices can be computed \emph{analytically} from the coefficients of a PCE surrogate, exploiting the orthogonality of the PCE basis functions. Thus, instead of directly computing the sensitivity indices in~\eqref{eq:total_effect_sensitivity_index} from the statistics derived from the random samples, we construct a PCE surrogate for the model fitted to these same samples, and extract the total effect indices as a post-processing step from the PCE. The advantage of the PCE approach is that it exploits a presumed smoothness in the quantity of interest. This smoothness assumption improves the precision of the estimated sensitivity indices extracted from the surrogate, but may introduce an additional bias due to the truncation of the polynomial order in the PCE, see, e.g.,~\cite{sudret2008, crestaux2009}.
In particular, the total-effect Sobol' sensitivity indices defined in \cref{eq:total_effect_sensitivity_index} can be obtained as
\begin{align}
S^{\text{total}}_j &\approx \frac{\sum_{{\boldsymbol{u}} \in {\mathcal{I}}_j^{\mathrm{total}}} c_{\boldsymbol{u}}^2}{\sum_{{\boldsymbol{u}} \in {\mathcal{I}}_d \setminus \{{\boldsymbol{0}}\}} c_{\boldsymbol{u}}^2} \label{eq:total_effect_sensitivity_index_with_pce}
\end{align}
with ${\mathcal{I}}_j^{\mathrm{total}} = \{{\boldsymbol{u}} \in {\mathcal{I}}_d : u_j > 0 \}$. Hence, having constructed a PCE surrogate, the sought-after sensitivity indices can be computed using the analytical expression in~\eqref{eq:total_effect_sensitivity_index_with_pce}.
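Given the index set and the coefficients, \eqref{eq:total_effect_sensitivity_index_with_pce} amounts to summing squared coefficients; the sketch below illustrates this with a hypothetical index set and coefficient vector.
\begin{verbatim}
# Sketch of Eq. (total-effect indices from PCE coefficients).  The index set
# and coefficient vector are hypothetical placeholders.
import numpy as np

index_set = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 2)]
c = np.array([2.0, 0.8, 0.3, 0.1, 0.05])

def total_sobol(index_set, c, j):
    var_total = sum(ck**2 for u, ck in zip(index_set, c) if any(u))
    var_j = sum(ck**2 for u, ck in zip(index_set, c) if u[j] > 0)
    return var_j / var_total

for j in range(3):
    print(f"S_total[{j}] = {total_sobol(index_set, c, j):.3f}")
\end{verbatim}
Note that the constant term ${\boldsymbol{u}}={\boldsymbol{0}}$ is excluded from the variance, as in~\eqref{eq:total_effect_sensitivity_index_with_pce}, and that the total-effect indices need not sum to one when interaction terms are present.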
\section{Introduction}\label{sec:introduction}
Nuclear fusion is a potential source of safe, carbon-free and virtually limitless energy. However, the design of next-generation tokamak fusion reactors like ITER and DEMO is challenged by difficulties in understanding and predicting material damage in plasma-facing components (PFCs)~\cite{meade2009}. Tungsten (\ce{W}) is a premier candidate for the reactor wall material, because it can withstand the harsh thermal and radiation loads present in these reactors. Unfortunately, its material properties degrade when exposed to the fusion plasma, causing detrimental effects on the operation of the fusion reactor~\cite{brezinsek2015}. Experiments with linear plasma devices, as well as in situ tokamak plasmas, have shown that the helium (\ce{He}) generated by the fusion reaction causes drastic changes in the microstructure of the tungsten wall, leading to erosion, material performance degradation and ultimately material failure~\cite{hammond2017,wright2013}.
The need for fundamental insight into the mechanics of the microstructure evolution in plasma-surface interactions (PSI) has motivated a variety of computational studies and modeling attempts. For example, in~\cite{sefta2013}, it was shown that helium atoms quickly form clusters that grow inside the tungsten lattice. Larger helium clusters diffuse below the tungsten surface, until they eventually become over-pressurized and burst. The presence of these subsurface gas bubbles produce high levels of stress in the tungsten matrix, causing a change in the mechanical properties of the surface material and the surface morphology. The evolution of the subsurface gas inventories at short time scales has been investigated in~\cite{sefta2013a, ito2014, hammond2018}. In these studies, Molecular Dynamics (MD) simulations, which model the physical movements of individual helium atoms in the tungsten lattice, are used as the main computational technique. Because of computational constraints, and despite acceleration attempts proposed in~\cite{sandoval2015}, these MD simulations are typically restricted to small spatial domains, large implantation fluxes and short timescales, conditions that are far away from those associated with the available experimental data.
One way to overcome these challenges is to use a continuum-scale reaction-diffusion cluster dynamics model, see, e.g.,~\cite{blondel2017, perez2017, blondel2018}. In this model, microstructure evolution is described in terms of concentration fields of helium and vacancy clusters, as opposed to the modeling of individual atoms in MD. In this work, we will use the cluster dynamics software \texttt{Xolotl}{}~\cite{xolotl}. It has been shown that \texttt{Xolotl}{} captures the essential physical behavior governing the tungsten subsurface evolution, and quantitatively reproduces the values predicted by representative MD simulation benchmarks~\cite{blondel2017, blondel2017a, maroudas2016}.
\texttt{Xolotl}{} has been combined with the ion-solid interaction software \texttt{F-TRIDYN}{} in~\cite{lasa2020}. \texttt{F-TRIDYN}{} is a particle code that predicts the helium implantation profile and sputtering yields of helium and tungsten, that are used as inputs in \texttt{Xolotl}{}~\cite{drobny2017}. In what follows, we will denote coupled \texttt{F-TRIDYN}{}-\texttt{Xolotl}{} as \texttt{FTX}{}.
The model implemented in \texttt{FTX}{} depends on 17 input parameters whose values have been obtained from accompanying MD simulations. However, a significant uncertainty is associated with each of these parameter values, resulting in a corresponding uncertainty in the predicted model outcomes. To increase the fidelity of the model predictions, it is important to improve understanding about the global sensitivity of model outputs with respect to specific uncertain parameters, as well as the contribution of parametric uncertainty to the overall predictive capability of the model.
Variance-based Global Sensitivity Analysis (GSA), involving the computation of Sobol' indices, is particularly attractive, because it allows one to attribute fractions of the model output variance to the variability in the input parameters~\cite{saltelli2008, sobol2001}. However, this approach typically involves the computation of conditional expectations, and, when these are computed using random sampling, requires a prohibitively large number of model evaluations to reach an acceptable level of accuracy. An alternate approach to obtain the Sobol' indices is to first construct a surrogate for the model response using a limited set of training samples, and to replace the expensive model with the surrogate when evaluating the integrals in the expectation. Of particular interest are polynomial chaos expansion (PCE) surrogate models~\cite{ghanem1991,ghanem1999,lemaitre2001,reagan2003,najm2009,ernst2012}, because they allow estimation of sensitivities directly from the PCE~\cite{sudret2008, crestaux2009}.
In this paper, we develop a GSA approach for \texttt{FTX}{}, focusing on the prediction of surface evolution and helium retention. The latter is the amount of subsurface helium retained in the tungsten material during exposure to the fusion plasma, and is an important performance metric of PFCs. Our framework is based on PCE surrogate-based sensitivity analysis, where we exploit recent advancements in sparse and adaptive constructions~\cite{sargsyan2014}. We use our framework to perform sensitivity analysis in two problem settings:
\begin{itemize}
\item \textsf{ITER-He}{}, a problem setting for ITER-like conditions using a helium plasma~\cite{lasa2020}, and
\item \textsf{PISCES-A}{}, a problem setting that mimics the PISCES-A linear plasma device~\cite{baldwin2010, doerner2001, baldwin2019}.
\end{itemize}
A similar sensitivity analysis for predicting the impact of plasma impurities in the \textsf{PISCES-A}{} setting using the impurity transport code \texttt{GITR}{}, see~\cite{younkin2021}, has been performed in~\cite{younkin2022}. However, no sensitivity analysis framework has yet been proposed for the formation of subsurface gas bubbles in tungsten materials exposed to helium plasma. This will be the focus of the present work.
The structure of this paper is as follows. First, in~\cref{sec:multiphysics_modeling_of_plasma-exposed_surfaces}, we give a brief overview of \texttt{F-TRIDYN}{} and \texttt{Xolotl}{}, the two codes used to predict the surface evolution of PFCs under exposure to helium plasmas. Next, in \cref{sec:global_sensitivity_analysis_framework}, we discuss our approach for sensitivity analysis, including the construction of PCE surrogate models. In \cref{sec:results_and_discussion}, we present our main results for both the \textsf{ITER-He}{} (\cref{sec:gsa_study_for_the_iter_he_setting}) and \textsf{PISCES-A}{} (\cref{sec:gsa_study_for_the_pisces_setting}) problem settings, followed by a discussion in \cref{sec:discussion}. Conclusions and pointers to future work are presented in \cref{sec:conclusions_and_future_work}.
\section{Multiphysics modeling of plasma-exposed surfaces}\label{sec:multiphysics_modeling_of_plasma-exposed_surfaces}
\texttt{FTX}{} is part of a larger multiphysics modeling framework for predicting the evolution of PFCs under irradiation, including physical phenomena spanning multiple decades in length and time scales, such as clustering and bursting of helium bubbles, tungsten wall material erosion, transport and redeposition of eroded impurities, and the evolution of the scrape-off layer plasma inside the reactor~\cite{lasa2020, lasa2021}. Many of these physical processes are strongly coupled, since the plasma response is strongly dependent on the material surface evolution and vice versa.
The multiscale framework from~\cite{lasa2020, lasa2021} involves a hierarchy of multiphysics codes, including \texttt{SOLPS} for the simulation of the steady-state background plasma~\cite{schneider2006}, \texttt{hPIC} to resolve the near-surface sheath effects~\cite{khaziev2015}, \texttt{GITR} to calculate the migration and redeposition of impurities eroded from the tungsten surface~\cite{younkin2021}, and finally \texttt{FTX}{} to compute the sputtering and implantation of ions impacting the material and to obtain the response of the wall surface under these plasma conditions~\cite{lasa2020, lasa2021}.
\begin{figure*}[t]
\centering
\small
\input{fig/coupling_ftridyn_xolotl.tex}
\caption{Schematic overview of the coupling between \texttt{F-TRIDYN}{} and \texttt{Xolotl}{}, exposing the relevant uncertain input parameters and the relevant model outputs. Input parameters are denoted by \textbullet\;{}, outputs are denoted by $\clubsuit$\;{}, and information exchanged between \texttt{F-TRIDYN}{} and \texttt{Xolotl}{} is denoted by $\spadesuit$\;{}. See also \cref{tab:uncertain_parameters} for an overview of each parameter.}
\label{fig:coupling_ftridyn_xolotl}
\end{figure*}
In the coupled \texttt{FTX}{} setting, the ion energy-angle distribution (IEAD), describing the distribution of the incoming kinetic energy as a function of the impact angle for each ion species in the plasma and computed by \texttt{hPIC}, is used as an input to \texttt{F-TRIDYN}{}. \texttt{F-TRIDYN}{} provides the implantation profiles of the incident helium plasma ions, i.e., the concentration of helium ions implanted in the material as a function of depth, as well as the sputtering yields of helium and tungsten, i.e., the average number of atoms removed from a target per incident helium particle, to \texttt{Xolotl}{}. \texttt{Xolotl}{} in turn then models the subsurface material composition as a function of time, based on the diffusion and reaction of subsurface helium atoms and defects. The time-dependent simulation of \texttt{FTX}{} involves running \texttt{F-TRIDYN}{} and \texttt{Xolotl}{} in tandem within each time step, as the subsurface composition predicted by \texttt{Xolotl}{} affects the implantation profile and sputtering yield predicted by \texttt{F-TRIDYN}{} and vice versa. We refer to~\cite{lasa2021} for more details regarding the different aspects of the code coupling. A schematic overview of the \texttt{FTX}{} coupling is shown in \cref{fig:coupling_ftridyn_xolotl}. We remark that the \texttt{FTX}{} workflow also uses particle and heat fluxes from \texttt{SOLPS}, the IEAD of tungsten particles from \texttt{hPIC}, as well as the plasma composition predicted by \texttt{GITR}. However, these dependencies are not shown in \cref{fig:coupling_ftridyn_xolotl} for clarity of exposition.
In the remainder of this section, we will briefly summarize the role of \texttt{F-TRIDYN}{} and \texttt{Xolotl}{} in the framework shown in \cref{fig:coupling_ftridyn_xolotl}, with an emphasis on the relevant uncertain parameters in both.
\subsection{The \texttt{F-TRIDYN}{} binary collision approximation code}\label{sec:the_xolotl_binary_collision_approximation_code}
An efficient simulation method to compute the penetration depth of, and defect production by, energetic particles in solids is the binary collision approximation (BCA). In BCA, particles are assumed to travel through the material in straight paths, undergoing elastic binary collisions with the atoms of the solid. In every collision, the classical scattering integral between the two colliding particles is solved in order to obtain the scattering angle and the amount of energy loss~\cite{smith1997}. A binding energy $E_s$ of the incoming particle to the material can also be taken into account, by adding it to the computed energy loss. The history of a moving particle of species $k$ is terminated when it has been slowed down to an energy below a predefined cutoff energy $E_{f, k}$ (equal to or lower than the surface binding energy).
In the Monte Carlo BCA method, the distance to the next collision, as well as the impact parameter of the collision, are chosen from a predefined probability distribution. By simulating the trajectory of many independent particles, one can obtain approximations for the sought-after material properties. This is the basis of the \texttt{TRIDYN} code introduced in~\cite{moller1988}. \texttt{TRIDYN} is based on the Monte Carlo program \texttt{TRIM}~\cite{biersack1980}. \texttt{F-TRIDYN}{} is a flexible code for simulating atomic-scale ion-surface interactions, and can be used to predict sputtering and implantation profiles in the context of plasma-surface interactions~\cite{drobny2018}. We note that \texttt{F-TRIDYN}{} improves upon \texttt{TRIDYN} by including a robust, fractal model of the surface and additional output modes for coupling to other plasma codes, such as \texttt{Xolotl}{}. Only this last capability is of interest in this work, however.
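To give a flavor of the Monte Carlo procedure, the following deliberately oversimplified, one-dimensional Python sketch mimics only its statistical structure (a random free path between collisions, a random energy loss per collision, and termination below a cutoff energy); it is not the \texttt{F-TRIDYN}{} algorithm, and all numerical values are placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Oversimplified 1-D caricature of Monte Carlo slowing-down.
E0 = 250.0            # initial particle energy (eV), placeholder
E_cutoff = 10.0       # cutoff energy E_f (eV), placeholder
mean_free_path = 1.0  # mean distance between collisions (nm), placeholder

def penetration_depth():
    energy, depth = E0, 0.0
    while energy > E_cutoff:
        depth += rng.exponential(mean_free_path)   # distance to next collision
        energy -= energy * rng.uniform(0.05, 0.5)  # fractional energy loss
    return depth

depths = np.array([penetration_depth() for _ in range(10_000)])
print(f"mean penetration depth ~ {depths.mean():.2f} nm")
\end{verbatim}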
In our numerical results presented in \cref{sec:results_and_discussion}, where we consider helium ions irradiated on a tungsten surface, we will define the surface binding energy of tungsten ($E_s$), as well as the cutoff energies of helium and tungsten ($E_{f, \ce{He}}$ and $E_{f, \ce{W}}$) as uncertain parameters. Furthermore, when considering the \textsf{PISCES-A}{} problem setting, the IEAD reduces to a single value for the energy $E_\mathrm{in}$ and angle $\alpha_\mathrm{in}$ of the incoming helium flux, which we will also consider as uncertain operating conditions of \texttt{FTX}{}.
Finally, we note that, since \texttt{F-TRIDYN}{} is a particle-based Monte Carlo code, it is highly parallelizable. We will exploit this property in our numerical experiments.
\subsection{The \texttt{Xolotl}{} cluster dynamics code}\label{sec:the_xolotl_cluster_dynamics_code}
\texttt{Xolotl}{} is a general-purpose cluster dynamics simulator used to predict gas bubble and defect evolution in solids. The code predicts the evolution of a concentration field $C_k({\boldsymbol{z}}, t)$ of a species $k$ as a function of space ${\boldsymbol{z}}$ and time $t$ by solving the spatio-temporal drift-diffusion reaction equations~\cite{faney2014}. We consider four different types of clusters: tungsten self-interstitials ($\ce{I}_l$), lattice vacancies ($\ce{V}_n$), free helium ($\ce{He}_m$), and vacancy-trapped helium ($\ce{He}_m\ce{V}_n$). In these expressions, subscripts indicate the number of atoms in each cluster. Let $\mathscr{K}$ denote the set of all possible species $k$. Assuming symmetry in directions perpendicular to the irradiation flux, the evolution of the concentration $C_k(z, t)$ of each species $k$ in the now one-dimensional geometry is governed by a system of partial differential equations (PDEs) of the form
\begingroup
\allowdisplaybreaks
\begin{align}\label{eq:xolotl_governing_equations}
\frac{\partial C_k}{\partial t}(z, t) &= D_k \frac{\partial^2 C_k}{\partial z^2}(z, t) \\
&+ \textrm{Generation}_k(z) \\
&+ \textrm{Reaction}_k(C_k(z, t) \textrm{ for each } k \in \mathscr{K}) \\
&+ \textrm{Dissociation}_k(C_k(z, t) \textrm{ for each } k \in \mathscr{K})
\end{align}
\endgroup
for $k \in \mathscr{K}$~\cite{faney2014}. In this expression, the first term on the right-hand side represents the diffusion of the species due to the concentration gradient, with $D_k$ the diffusion coefficient of species $k$. The second term is a generation or production term, representing the change in concentration due to irradiation. The third term represents the change in concentration due to interactions between different types of clusters, and the fourth term represents the change in concentration due to the splitting (thermal emission) of clusters into smaller ones.
Self-interstitials ($\ce{I}_l$), single vacancies ($\ce{V}_1$) and small helium clusters ($\ce{He}_1$ to $\ce{He}_7$) are modeled as mobile species, and diffuse isotropically in the bulk subsurface. Vacancy-trapped helium species ($\ce{He}_m\ce{V}_n$) represent the gas bubbles that grow in the subsurface of the material. The self-interstitials diffuse to the surface and become so-called \textit{adatoms} that grow the tungsten surface.
\Cref{eq:xolotl_governing_equations} is discretized using finite differences in space, on a variable-size grid with increasing resolution near the surface. An implicit Runge--Kutta time integration scheme has been used in \cite{faney2014} due to the stiffness of the diffusion and reaction operators, and will be used here. For fusion-relevant conditions, the number of species $|\mathscr{K}|$, and hence the number of cluster dynamics equations in \eqref{eq:xolotl_governing_equations}, can grow very large, as larger helium-vacancy clusters are required to resolve bubble growth. As noted in~\cite{faney2014}, the nonlinear system in \cref{eq:xolotl_governing_equations} can be split in two parts. The first part describes the movement of the limited number of small, mobile clusters. In this case, the system of equations is typically very small, and it does not grow with the number of species considered. The second part describes the vast number of larger, immobile clusters. In this case, the system of equations reduces to a system of ordinary differential equations (ODEs) that can be trivially parallelized over the different grid points in the domain. For more details on the governing equations underlying \texttt{Xolotl}{}, we refer to~\cite{maroudas2016, blondel2017}. In our numerical experiments in \cref{sec:results_and_discussion}, the governing equations of \texttt{Xolotl}{} are solved using \texttt{PETSc}, a scalable partial differential equation solver~\cite{petsc,petsc-web-page}.
The diffusion coefficient $D_k$ of a species $k$ is modeled using an Arrhenius relation,
\begin{equation}
D_{k} = D_{k, 0} \exp\left(-\frac{E_k}{k_BT}\right),
\end{equation}
where $E_k$ is the migration energy of the species, $D_{k, 0}$ is the pre-exponential factor, $k_B$ is the Boltzmann constant ($k_B \approx \SI{1.38e-23}{\joule\per\kelvin}$) and $T$ is temperature. In~\cite{faney2014}, the pre-exponential factor and migration energy for each cluster are estimated from large-scale MD simulations. We define the migration energies of all mobile species (i.e., $\ce{He}_1$ to $\ce{He}_7$, and $\ce{V}_1$) as uncertain. On the other hand, we ignore the uncertainties in the migration energies of the self-interstitials $\ce{I}_l$, which are also mobile, because they diffuse much faster compared to the other clusters and do not interact with helium in this model.
The reaction and dissociation terms in the right-hand side of~\eqref{eq:xolotl_governing_equations} are obtained from chemical reaction theory. For two reacting species $k_1$ and $k_2$, the reaction rate $\kappa_{1, 2}$ can be calculated using diffusion-limited reaction theory~\cite{faney2014}. In particular, we have that
\begin{equation}\label{eq:reaction_rates}
\kappa_{1, 2} = 4 \pi (r_{k_1} + r_{k_2}) (D_{k_1} + D_{k_2})
\end{equation}
where $D_{k_1}$ and $D_{k_2}$ are the diffusion coefficients, and $r_{k_1}$ and $r_{k_2}$ are the \emph{capture radii} of species $k_1$ and $k_2$. For pure helium clusters, the capture radius can be computed from that of the $\ce{He}_1$ cluster, while for interstitial clusters, the capture radius is computed from the capture radii of $\ce{He}_1$ and $\ce{V}_1$, see~\cite[equations (9) and (10)]{faney2014}. The capture radius of $\ce{V}_1$ is multiplied by an additional bias factor in order to reflect that interstitial clusters have a larger surrounding strain field. Both the radius of the $\ce{He}_1$ cluster ($r_{\ce{He}_1}$) and the bias factor ($b_{\ce{I}_l}$) are defined as uncertain parameters, because they directly influence the reaction rates $\kappa_{1, 2}$ in~\eqref{eq:reaction_rates}.
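For concreteness, the short Python sketch below evaluates the Arrhenius relation for the diffusion coefficients together with the diffusion-limited reaction rate of~\eqref{eq:reaction_rates}; the pre-exponential factors, migration energies and capture radii used here are hypothetical placeholders, not the calibrated \texttt{Xolotl}{} inputs.
\begin{verbatim}
import numpy as np

k_B = 1.380649e-23    # Boltzmann constant (J/K)
eV = 1.602176634e-19  # J per eV

def arrhenius_D(D0, E_mig_eV, T):
    """Diffusion coefficient D_k from the Arrhenius relation."""
    return D0 * np.exp(-E_mig_eV * eV / (k_B * T))

def reaction_rate(r1, r2, D1, D2):
    """Diffusion-limited rate kappa = 4*pi*(r1 + r2)*(D1 + D2)."""
    return 4.0 * np.pi * (r1 + r2) * (D1 + D2)

# Placeholder values; not the calibrated Xolotl parameters.
T = 1093.0                            # surface temperature (K)
D_He1 = arrhenius_D(2.9e-8, 0.13, T)  # m^2/s, hypothetical D0, E_mig
D_V1 = arrhenius_D(1.8e-7, 1.30, T)
kappa = reaction_rate(0.3e-9, 0.15e-9, D_He1, D_V1)  # radii in m
print(f"kappa(He1, V1) ~ {kappa:.3e} m^3/s")
\end{verbatim}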
A simplified helium bubble bursting model is included to take into account the gas release mechanisms from near-surface, over-pressurized bubbles. To model the bursting process of these over-pressurized bubbles as reported in \cite{sefta2013, hammond2018}, \texttt{Xolotl}{} uses a stochastic process. At the end of each time step of the finite difference scheme, the total quantity of mobile and trapped helium atoms $n_\mathrm{He}$ is computed at each grid point. Next, the radius of a helium bubble is computed as
\begin{equation}\label{eq:bubble_radius}
r_\mathrm{bubble} = a_0 \left( \frac{\sqrt{3}}{4} + \sqrt[3]{\frac{3 n_\mathrm{He}}{32\pi}} - \sqrt[3]{\frac{3}{8\pi}} \right),
\end{equation}
where $a_0$ is a lattice parameter, and assuming perfectly spherical bubbles~\cite{faney2014}. When the distance between the bubble boundary and the surface is non-positive, we assume the bubble to have burst. Otherwise, a bubble may burst with probability
\begin{equation}\label{eq:bubble_bursting_probability}
p_\mathrm{burst} \propto \left(\frac{r_\mathrm{bubble}}{r}\right) \cdot \gamma \cdot \min\left(1, \exp\left(-\frac{r - \tau_r}{2\tau_r}\right)\right),
\end{equation}
where $r$ is the distance of the center of the bubble to the surface, $\tau_r$ is a depth parameter that prevents bursting from happening deeper in the material, and $\gamma$ is a tuning parameter that ensures the bursting frequency matches the results predicted by a large-scale MD simulation~\cite{blondel2017}. In the present study, we will define $a_0$ as uncertain, while $\tau_r$ and $\gamma$ are defined as deterministic. The bubble bursts are assumed to be pinhole events, in the sense that, after a bubble bursting event occurs, the concentration of vacancy-trapped helium ($\ce{He}_m\ce{V}_n$) is transferred to the concentration of vacancies ($\ce{V}_n$) as if the helium dissipates instantaneously to the surface. Finally, we also include the initial vacancy concentration $C_{\ce{V}_1}$ as an uncertain parameter. The initial vacancy concentration is used as a proxy for material impurities, as commercial tungsten is not perfect.
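A minimal Python sketch of this bursting model, evaluating the bubble radius in~\eqref{eq:bubble_radius} and the un-normalized bursting probability in~\eqref{eq:bubble_bursting_probability}, is shown below; the values chosen for $a_0$, $\tau_r$, $\gamma$ and the helium content are illustrative only.
\begin{verbatim}
import numpy as np

def bubble_radius(n_He, a0):
    """Radius of a spherical helium bubble, cf. r_bubble above."""
    return a0 * (np.sqrt(3.0) / 4.0
                 + np.cbrt(3.0 * n_He / (32.0 * np.pi))
                 - np.cbrt(3.0 / (8.0 * np.pi)))

def bursting_probability(r_bubble, r, tau_r, gamma):
    """Un-normalized bursting probability, cf. p_burst above."""
    if r <= r_bubble:   # bubble boundary reaches the surface
        return 1.0
    factor = min(1.0, np.exp(-(r - tau_r) / (2.0 * tau_r)))
    return (r_bubble / r) * gamma * factor

# Illustrative numbers, not calibrated Xolotl parameters.
a0 = 0.317e-9   # lattice parameter (m)
r_b = bubble_radius(n_He=50, a0=a0)
p = bursting_probability(r_b, r=2.0e-9, tau_r=1.0e-9, gamma=0.1)
print(f"r_bubble = {r_b:.3e} m, un-normalized p_burst = {p:.3e}")
\end{verbatim}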
For computational tractability, the maximum cluster size of the species is limited. We express this limitation in terms of a \emph{network size}, equal to the allowed maximum number of vacancies $n$ in a cluster, imposed for both free vacancies ($\ce{V}_n$) and vacancy-trapped helium ($\ce{He}_m\ce{V}_n$). Finally, to avoid an exponential growth in the number of clusters to be tracked, \texttt{Xolotl}{} uses a grouping scheme in which clusters of similar size are grouped together into \emph{super-clusters} with averaged properties, and the governing equations are solved for these clusters instead. See also~\cite{kohnert2016} for the first-order moment scheme on which this grouping is based.
\section{Results and discussion}\label{sec:results_and_discussion}
In this section, we describe the results of our GSA study of \texttt{FTX}{}. First, in \cref{sec:problem_setup}, we provide additional details on the computational and problem setup. Next, in \cref{sec:gsa_study_for_the_iter_he_setting,sec:gsa_study_for_the_pisces_setting}, we present the sensitivity analysis results for the \textsf{ITER-He}{} and \textsf{PISCES-A}{} settings, respectively.
\subsection{Problem setup}\label{sec:problem_setup}
\texttt{F-TRIDYN}{} and \texttt{Xolotl}{} are coupled through the integrated plasma simulation (IPS) framework developed in~\cite{elwasif2010}. IPS is a high-performance computing framework providing resource and data management tools for loosely-coupled simulation components. \texttt{FTX}{} is connected to IPS with the help of a Python wrapper, which controls the time-stepping behavior of the coupled code, running \texttt{F-TRIDYN}{} and \texttt{Xolotl}{} sequentially, providing the necessary file-based coupling between them~\cite{ips-wrappers}. The wrapper also deals with code restarts, and with grid extension for \texttt{Xolotl}{} when needed. For convenience, we also developed \texttt{pyFTX}, a Python interface to the IPS wrapper, that allows us to control multiple \texttt{FTX}{} simulations at once, including possible checkpoint/restarts.
We construct PCE surrogates and perform GSA using the uncertainty quantification (UQ) toolkit (\texttt{UQTk})~\cite{debusschere2004, debusschere2017}. We also implemented the adaptive sparse basis growth procedure outlined in \Cref{alg:wibcs} in \cref{sec:bayeseian_compressive_sensing}, relying on the BCS procedure implemented in \texttt{UQTk}.
As stated in \cref{sec:global_sensitivity_analysis_using_PCEs}, the uncertain input parameters are assumed to follow a uniform distribution between a lower and upper bound given in \cref{tab:uncertain_parameters}. It should be noted that we construct the PCE surrogate models in the transformed space $[-1, 1]^d$ for convenience, and map them back to the original parameter space for the sensitivity analysis, following \cref{eq:mapping}.
In each problem setting, we construct a training data set $\mathscr{D}_\textrm{train} = \{({\boldsymbol{x}}_\textrm{train}^{(n)}, y_\textrm{train}^{(n)})\}_{n=1}^{N_\textrm{train}}$ with $N_\textrm{train}$ training samples and a test data set $\mathscr{D}_\textrm{test} = \{({\boldsymbol{x}}_\textrm{test}^{(n)}, y_\textrm{test}^{(n)})\}_{n=1}^{N_\textrm{test}}$ with $N_\textrm{test}$ test samples. Inspired by~\cite{bohn2018}, we choose the model inputs in the training data set according to a sparse Clenshaw--Curtis quadrature rule, because this may lead to faster convergence of the training error with respect to the number of training samples. However, because of the noise in the outputs, we do not use the associated quadrature rule, but treat these training points as random samples, arguing that this does not necessarily lead to a larger training error. The numbers of training and test samples in each problem are reported in \cref{tab:problem_setup}. To assess the accuracy of our PCE surrogate models, we compute the training error $e_\textrm{train}$ as
\begin{align}\label{eq:training_error}
e_\textrm{train} &= \sqrt{\frac{\sum_{n=1}^{N_\textrm{train}} \left(\sum_{k = 1}^K c_{k} \Phi_k({\boldsymbol{\xi}}_\textrm{train}^{(n)}) - y_\textrm{train}^{(n)}\right)^2}{\sum_{n=1}^{N_\textrm{train}} (y_\textrm{train}^{(n)})^2}}
\end{align}
where ${\boldsymbol{\xi}}_\textrm{train}^{(n)}$ corresponds to the rescaled training input parameters according to~\eqref{eq:mapping}. The test error $e_\textrm{test}$ is computed analogously.
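For clarity, the error definition in~\eqref{eq:training_error} amounts to the relative root-mean-square mismatch between surrogate predictions and model outputs, as in the following Python sketch; the synthetic data and the deliberately incomplete surrogate below are purely illustrative.
\begin{verbatim}
import numpy as np

def relative_error(surrogate, xi, y):
    """Relative RMS error between surrogate predictions and model
    outputs, following the definition of e_train / e_test."""
    residual = surrogate(xi) - y
    return np.sqrt(np.sum(residual ** 2) / np.sum(y ** 2))

# Toy example: synthetic model and an intentionally incomplete surrogate.
rng = np.random.default_rng(1)
xi = rng.uniform(-1.0, 1.0, size=(100, 2))
y = 1.0 + 0.5 * xi[:, 0] + 0.1 * xi[:, 0] * xi[:, 1]
surrogate = lambda z: 1.0 + 0.5 * z[:, 0]
print("e_train =", relative_error(surrogate, xi, y))
\end{verbatim}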
\begin{table}[t]
\centering
\small
\begin{tabular}{l l S[table-format=2] S[table-format=3] S[table-format=3] c} \toprule
{setting} & {code} & {$d$} & {$N_\textrm{train}$} & {$N_\textrm{test}$} & {cost} \\ \midrule
\multirow{3}{*}{\textsf{ITER-He}} & \texttt{F-TRIDYN}{} & 3 & 441 & 50 & \SI{1}{\hour} \\
& \texttt{Xolotl}{} & 12 & 313 & 50 & \SI{12}{\hour} \\
& \texttt{FTX}{} & 15 & 481 & 50 & \SI{47}{\hour} \\ \midrule
\multirow{2}{*}{\textsf{PISCES-A}} & \texttt{FTX}{} & 17 & 613 & 100 & \SI{95}{\hour} \\
& \texttt{FTX}{} & 5 & 801 & 100 & \SI{95}{\hour} \\ \bottomrule
\end{tabular}
\caption{Number of uncertain parameters $d$, number of training and test samples $N_\textrm{train}$ and $N_\textrm{test}$, and average computational cost per sample for each problem setting.}
\label{tab:problem_setup}
\end{table}
We will analyze three outputs of \texttt{F-TRIDYN}{}: the sputtering yields of tungsten and helium, and the implantation profile. The sputtering yields are scalar output quantities of interest, while the implantation profile changes with the material depth. In the latter case, the depth where the helium implantation reaches its maximum will be the quantity of interest we consider. This depth is obtained from a polynomial fit to a histogram of the particle positions at the simulation end time.
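This post-processing step can be sketched as follows in Python; the synthetic particle depths, the number of histogram bins and the polynomial degree are illustrative choices and do not reproduce the exact settings used for the \texttt{F-TRIDYN}{} output.
\begin{verbatim}
import numpy as np

# Sketch: extract the depth of maximum helium implantation from a list
# of particle depths via a polynomial fit to their histogram.
rng = np.random.default_rng(2)
depths = rng.gamma(shape=3.0, scale=2.0, size=100_000)  # synthetic (nm)

counts, edges = np.histogram(depths, bins=60)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit a low-order polynomial to the histogram and locate its maximum on
# a fine grid, which is less noisy than the argmax of the raw counts.
poly = np.polynomial.Polynomial.fit(centers, counts, deg=6)
fine = np.linspace(centers[0], centers[-1], 2001)
depth_of_max = fine[np.argmax(poly(fine))]
print(f"depth of maximum implantation ~ {depth_of_max:.2f} nm")
\end{verbatim}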
\begin{figure*}[t]
\centering
\tikzinput{iter_he_f_tridyn}
\caption{Comparison of the predicted outputs from the surrogate and the actual model outputs (\emph{top}) and total sensitivity indices (\emph{bottom}) for sputtering yields of helium (\emph{left}) and tungsten (\emph{middle}), and the depth where the helium implantation in the material is maximal (\emph{right}) for \texttt{F-TRIDYN}{} in the \textsf{ITER-He}{} setting. We indicate the relative training error $e_\mathrm{train}$, the relative test error $e_\mathrm{test}$, and the order of the PCE $|u|$.}
\label{fig:iter_he_f_tridyn}
\end{figure*}
Similarly, we will analyze two outputs of \texttt{Xolotl}{}: the surface growth and the helium retention. Both output quantities are time-dependent. For the surface growth, we construct a total of $M_t = 25$ surrogates at time stamps $t_m = m/M_t$ for $m=1, 2, \ldots, M_t$. Note that the initial height of the surface is assumed to be constant for all samples, so no surrogate needs to be constructed at $t = \SI{0}{\second}$. The helium retention is expressed as a percentage with respect to the implanted helium flux. Because it contains the effect of the individual bubble bursting events, we first smooth the output using a low-pass filter with a window size of $\delta t = \SI{0.1}{\second}$. Next, we construct $M_t = 25$ surrogates for the smoothed helium retention at time stamps $t_m = \delta t + (m - 1)(1 - \delta t)/(M_t - 1)$ for $m=1, 2, \ldots, M_t$.
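The pre-processing of the helium retention can be sketched as follows; only the moving-average window $\delta t$ and the time stamps $t_m$ follow the definitions above, while the raw sampling interval and the synthetic signal are placeholders.
\begin{verbatim}
import numpy as np

dt_raw = 1.0e-3   # spacing of raw output samples (s), placeholder
delta_t = 0.1     # moving-average window (s)
M_t = 25

t = np.arange(0.0, 1.0 + dt_raw, dt_raw)
rng = np.random.default_rng(3)
# Synthetic helium-retention signal in percent.
retention = 60.0 - 20.0 * t + rng.normal(0.0, 3.0, size=t.size)

# Moving-average (low-pass) filter with window delta_t.
window = int(round(delta_t / dt_raw))
smoothed = np.convolve(retention, np.ones(window) / window, mode="same")

# Time stamps t_m = delta_t + (m-1)(1-delta_t)/(M_t-1).
m = np.arange(1, M_t + 1)
t_stamps = delta_t + (m - 1) * (1.0 - delta_t) / (M_t - 1)
print(np.interp(t_stamps, t, smoothed)[:5])
\end{verbatim}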
In the \textsf{ITER-He}{} setting, we select an incoming helium flux of \SI{3.49e20}{\per\metre\squared\per\second} incident on a (100)-oriented tungsten surface at \SI{343}{\kelvin}, based on our previous integrated simulations of the early stage of planned ITER operation with a helium plasma~\cite{blondel2017a}. These conditions correspond to the location with peak plasma temperature predicted by simulation in~\cite{lasa2020}. We assume no heat flux is applied to the surface in this case, because the change in surface temperature for this location, once the steady state is reached, is less than \SI{2}{\kelvin}, which is negligible for our study. Moreover, the inclusion of surface heat flux would negatively impact the computational cost, see~\cite{lasa2020}. Furthermore, we use a material with a thickness of \SI{6}{\milli\metre} to mimic the distance between the PFC surface and the cooling channel of a divertor tile~\cite{pitts2017}. We obtain the IEAD, required by \texttt{F-TRIDYN}{}, from a simulation with \texttt{hPIC}~\cite{khaziev2015}.
In the \textsf{PISCES-A}{} setting, we select an incoming helium flux of \SI{540e20}{\per\metre\squared\per\second} incident on a (111)-oriented tungsten surface at \SI{1093}{\kelvin} with a thickness of \SI{5}{\micro\metre}, and with a biasing voltage of \SI{250}{\volt}. A value of \SI{5}{\micro\metre} was selected as it is much larger than the helium diffusion distance for this timescale. For comparison, the radius of the largest bubble we consider in this case is \SI{0.96}{\nano\metre}. In the \textsf{PISCES-A}{} setting, the IEAD reduces to a single value for the ion impact energy and angle, and both energy and angle are considered as additional uncertain operating conditions, see \cref{tab:uncertain_parameters}.
In both the \textsf{ITER-He}{} and \textsf{PISCES-A}{} settings, we simulate an exposure time of \SI{1}{\second}. \texttt{F-TRIDYN}{} is run for a total of $10^5$ particles. We solve the one-dimensional governing equations in \texttt{Xolotl}{} on a grid with 367 cells, and assuming a network size of at most 250 vacancies ($\ce{V}_n$), which amounts to 13,113 degrees of freedom for each cell.
\subsection{GSA study for the \textsf{ITER-He}{} setting}\label{sec:gsa_study_for_the_iter_he_setting}
We now report the results of our GSA study in the \textsf{ITER-He}{} setting. We first consider \texttt{F-TRIDYN}{} and \texttt{Xolotl}{} in isolation, before presenting the results for the coupled \texttt{FTX}{} case.
\subsubsection{Sensitivity analysis of \texttt{F-TRIDYN}{}}\label{sec:sensitivity_analysis_of_f-tridyn}
In the top row of \cref{fig:iter_he_f_tridyn}, we show a comparison of the predicted \texttt{F-TRIDYN}{} outputs from the PCE surrogates with the actual model outputs. We also report the corresponding relative training and test errors $e_\textrm{train}$ and $e_\textrm{test}$ computed using~\eqref{eq:training_error}, and the order of the PCE $|u|$. Because the training and test errors are close to each other, we conclude that the PCE surrogates are not overfitted. Furthermore, note that the surrogate models for the sputtering yields are more accurate than the surrogate model for the implantation profile. The total order of the PCE ($|u|=8$) for the latter quantity is much larger, indicating a strong nonlinear dependence of the output quantity on the respective input parameters.
In the bottom row of \cref{fig:iter_he_f_tridyn}, we report the total sensitivity indices for each output. For the sputtering yields, almost all the uncertainty in the output can be explained by the uncertainty in the surface binding energy of tungsten (note the logarithmic scale). For the depth where the helium implantation profile has a maximum, both the surface binding energy of tungsten and the cutoff energy of tungsten appear to be important.
To illustrate the efficacy of our method, we include an additional 500 training samples, and reconstruct the PCE for the maximum helium implantation depth, for an increasing number of training samples $N_{\textrm{train}}$. We consider a PCE surrogate with maximum polynomial order $|u|=1$ and $|u|=2$, as well as BCS with adaptive basis growth. In \cref{fig:iter_he_f_tridyn_convergence}, we check the convergence of the sensitivity indices of these PCEs as a function of $N_{\textrm{train}}$. Note how the error in the total sensitivity index obtained with BCS is consistently lower than the order-1 and order-2 PCE constructions. These results indicate the merits of our higher-order, adaptive basis growth construction in the presence of only a limited amount of training data.
\begin{figure*}
\centering
\tikzinput{iter_he_f_tridyn_convergence}
\caption{Error in the total effect Sobol' sensitivity index as a function of the number of training samples for $E_s$ (\emph{left}), $E_{f, \ce{W}}$ (\emph{middle}) and $E_{f, \ce{He}}$ (\emph{right}) for \texttt{F-TRIDYN}{} in the \textsf{ITER-He}{} setting. The error in the total sensitivity index predicted by BCS is much lower compared to the error predicted by a PCE of order 1 or 2 for the same number of training samples.}
\label{fig:iter_he_f_tridyn_convergence}
\end{figure*}
\subsubsection{Sensitivity analysis of \texttt{Xolotl}{}}\label{sec:sensitivity_analysis_of_xolotl}
In the top pane of~\cref{fig:iter_he_x} we show a comparison of the helium retention predicted by the PCE surrogate with the actual helium retention predicted by \texttt{Xolotl}{} as a function of time, as well as the corresponding relative training and test errors, and the PCE order for the surrogate with the largest test error across all time stamps.
In the bottom pane of~\cref{fig:iter_he_x}, we show the total-effect Sobol' sensitivity indices as a function of time, extracted from the PCE surrogates. Almost all uncertainty in the helium retention can be explained by the variation of the migration energy $E_{\ce{He}_1}$ (note the logarithmic $y$-axis), and this seems to be consistent across all time values considered in this case.
The solid line in the bottom pane of~\cref{fig:iter_he_x} is a measure of the reliability of the sensitivity indices in the presence of surrogate error. We computed this threshold by inflating the posterior on the PCE coefficients $c_{\boldsymbol{u}}$, obtained from BCS, such that the standard deviation of the model predictions with these (now uncertain) surrogates matches the (absolute) validation error. We then plot the maximum value of the standard deviation of the sensitivity predictions as a function of time. In doing so, the surrogate error is effectively reflected in the uncertainty of the sensitivity indices. Values of the sensitivity indices below the threshold are considered to be unreliable due to surrogate error.
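A schematic Python sketch of this inflation procedure, for a toy PCE with an orthonormal basis, is given below; the coefficient means, posterior standard deviations and validation error are illustrative stand-ins for the BCS posterior obtained with \texttt{UQTk}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

coeffs = np.array([1.0, 0.8, 0.3, 0.1])        # posterior mean (toy)
coeff_std = np.array([0.0, 0.02, 0.02, 0.02])  # posterior std (toy)
validation_error = 0.15                        # absolute error (toy)

# For an orthonormal basis, the prediction variance induced by
# independent coefficient uncertainty, averaged over the inputs, is the
# sum of the coefficient variances. Inflate to match the validation error.
scale = validation_error / np.sqrt(np.sum(coeff_std ** 2))

def total_index(c, active):
    """Total-effect index for the basis terms flagged as active."""
    return np.sum(c[active] ** 2) / np.sum(c[1:] ** 2)

active_1 = np.array([False, True, False, True])  # toy multi-index bookkeeping
samples = [total_index(rng.normal(coeffs, scale * coeff_std), active_1)
           for _ in range(2000)]
print("reliability threshold ~", np.std(samples))
\end{verbatim}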
As there was no significant change in the surface height within the simulated exposure time of \SI{1}{\second}, a GSA study of the surface growth is not required in this case.
\begin{figure}[t]
\centering
\tikzinput{iter_he_x}
\caption{Comparison of the predicted outputs from the surrogate and the actual model outputs for the helium retention (\emph{top}) and total sensitivity indices (\emph{bottom}) for \texttt{Xolotl}{} in the \textsf{ITER-He}{} setting.}
\label{fig:iter_he_x}
\end{figure}
\subsubsection{Sensitivity analysis of coupled \texttt{FTX}{}}\label{sec:sensitivity_analysis_of_coupled_ftx}
In the top row of~\cref{fig:iter_he_ftx}, we illustrate the accuracy of the surrogate models for both the surface growth and the helium retention predicted by \texttt{FTX}{}. Different colors indicate different time stamps. The maximum test error is observed at the end time ($\SI{1}{\second}$) for both output quantities of interest.
In the bottom row of~\cref{fig:iter_he_ftx}, we show the total sensitivity indices for the most important parameters as a function of time. The solid line is a measure of the reliability of the sensitivity indices in the presence of surrogate error. Again, the migration energy $E_{\ce{He}_1}$ is the most important parameter, followed by the migration energies $E_{\ce{He}_2}$ and $E_{\ce{He}_4}$, and the impurity radius. This is the case for both the surface growth and the helium retention outputs. The relative importance of $E_{\ce{He}_1}$ seems to decrease as a function of time. Note that this is also the most important parameter for \texttt{Xolotl}{} in isolation as shown above (\cref{sec:sensitivity_analysis_of_xolotl}). None of the \texttt{F-TRIDYN}{} parameters seem to have an impact on the surface growth and helium retention predicted by the coupled code in the \textsf{ITER-He}{} problem setting, indicating that most of the uncertainty in the output is due to variations in the \texttt{Xolotl}{} parameters. We believe the reason for this is twofold. First, the simulated exposure time of \SI{1}{\second} is not long enough to change the surface height considerably. This value is the best we could achieve given the computational constraints. As a consequence, the change in the value of the sputtering yields, predicted by \texttt{F-TRIDYN}{} and used as an input to \texttt{Xolotl}{}, is small. Second, no uncertainties are considered in the IEAD, an input to \texttt{F-TRIDYN}{}. Together, these two observations explain why the \texttt{F-TRIDYN}{} parameters have a negligible impact on the helium retention predicted by the \texttt{FTX}{} in the \textsf{ITER-He}{} setting.
\begin{figure*}[t]
\centering
\tikzinput{iter_he_ftx}
\caption{Comparison of the predicted outputs from the surrogate and the actual model outputs (\emph{top}) and total sensitivity indices as a function of time (\emph{bottom}) for surface growth (\emph{left}) and helium retention (\emph{right}) for \texttt{FTX}{} in the \textsf{ITER-He}{} setting.}
\label{fig:iter_he_ftx}
\end{figure*}
\subsection{GSA study for the \textsf{PISCES-A}{} setting}\label{sec:gsa_study_for_the_pisces_setting}
In this section, we report the results of our GSA study in the \textsf{PISCES-A}{} setting, focusing on \texttt{FTX}{}.
\subsubsection{Sensitivity analysis of coupled \texttt{FTX}{}}
In \cref{fig:pisces_ftx_outputs}, we illustrate the surface growth and helium retention as predicted by \texttt{FTX}{} as a function of time, for select parameter values. Note that the computational burden per sample in the \textsf{PISCES-A}{} setting, reported in \cref{tab:problem_setup}, is almost twice that of the \textsf{ITER-He}{} setting. This results from the higher incoming helium flux, which causes more bursting events with associated smaller time steps.
\begin{figure}[t]
\centering
\tikzinput{pisces_ftx_outputs}
\caption{Select realizations of the surface growth as a function of time (\emph{top}), and helium retention in the tungsten material expressed as a percentage of the implanted helium flux as a function of time (\emph{bottom}), for the \textsf{PISCES-A}{} setting. We use a moving average filter to smooth the helium retention before performing GSA.}
\label{fig:pisces_ftx_outputs}
\end{figure}
In the top row of \cref{fig:pisces_ftx}, we show a comparison between the actual \texttt{FTX}{} outputs and the PCE surrogate predictions. Notice how the surrogate models are now much less accurate compared to the surrogates for the coupled \texttt{FTX}{} code in the \textsf{ITER-He}{} setting. However, we deem the accuracy of the surrogates to be sufficient in order to identify at least the most important parameters. This inferior accuracy is probably due to the larger dimensionality of the input space (17 instead of 15) and the limited computational budget to obtain the training samples, as well as the increase in the number of bursting events compared to the \textsf{ITER-He}{} setting. Below, we will illustrate how a more accurate surrogate model can be constructed with a reduced number of parameters.
The corresponding sensitivity indices for each parameter are reported in the bottom row of \cref{fig:pisces_ftx}. Again, the solid line is a measure of the reliability of the sensitivity indices in the presence of surrogate error. For the surface growth, the most important parameters, sorted according to the sum of their sensitivity indices across time, are the surface binding energy of tungsten ($E_s$), the ion impact energy ($E_{\textrm{in}}$) and the migration energy $E_{\ce{He}_1}$. For the helium retention, the most important parameters are the migration energy $E_{\ce{He}_1}$, the ion impact energy ($E_{\textrm{in}}$) and the surface binding energy of tungsten ($E_s$).
In order to explain 99\% of the output variability across all time instances, it is sufficient to include those three parameters, together with the migration energy $E_{\ce{V}_1}$. With this key set of parameters identified, we construct a more accurate surrogate model using only this subset of parameters. This surrogate can then be used to replace the expensive evaluation of the coupled \texttt{FTX}{} code in the multiscale framework of~\cite{lasa2021}. To this end, we also add the ion impact angle $\alpha_{\textrm{in}}$ to the reduced set of parameters, because it is considered to be an important operating condition.
\begin{figure*}[t]
\centering
\tikzinput{pisces_ftx}
\caption{Comparison of the predicted outputs from the surrogate and the actual model outputs (\emph{top}) and total sensitivity indices as a function of time (\emph{bottom}) for surface growth (\emph{left}) and helium retention (\emph{right}) for \texttt{FTX}{} in the \textsf{PISCES-A}{} setting.}
\label{fig:pisces_ftx}
\end{figure*}
\begin{figure*}[t]
\centering
\tikzinput{pisces_ftx_reduced}
\caption{Comparison of the predicted outputs from the surrogate and the actual model outputs (\emph{top}) and total sensitivity indices as a function of time (\emph{bottom}) for surface growth (\emph{left}) and helium retention (\emph{right}) for \texttt{FTX}{} in the \textsf{PISCES-A}{} setting using only the 5 most important parameters.}
\label{fig:pisces_ftx_reduced}
\end{figure*}
A comparison between the actual \texttt{FTX}{} outputs and the surrogate predictions including only the 5 most important parameters is shown in the top row of~\cref{fig:pisces_ftx_reduced}. Note that the surrogate models are now much more accurate, reducing the test errors $e_{\textrm{test}}$ from 0.427 to 0.172 for the surface growth, and from 0.293 to 0.085 for the helium retention. The corresponding total sensitivity indices as a function of time are shown in the bottom row of~\cref{fig:pisces_ftx_reduced}. Again, the solid line in this plot is a measure of the reliability of the sensitivity indices in the presence of surrogate error. Note that, for the helium retention, the predicted sensitivity indices for the migration energy $E_{\ce{He}_1}$, the ion impact energy $E_{\textrm{in}}$, and the surface binding energy of tungsten $E_s$, agree with the ones obtained from the 17-dimensional surrogate in terms of their relative magnitudes. For the surface growth, the migration energy $E_{\ce{He}_1}$ has been attributed a larger share of the total output variance, compared to the predicted sensitivities from the 17-dimensional surrogate model. Also, the ion impact angle ($\alpha_{\textrm{in}}$) seems to be more important than the migration energy $E_{\ce{V}_1}$, something we did not observe with the 17-dimensional surrogate model. Note, however, that the sensitivity index for the ion impact angle is below the computed threshold reflecting surrogate accuracy, and its apparent importance may be due to the error in the surrogate.
\subsection{Discussion}\label{sec:discussion}
One of the insights gained from the GSA study for \texttt{F-TRIDYN}{} is the importance of the IEAD. For example, in the \textsf{PISCES-A}{} case discussed in \cref{sec:gsa_study_for_the_pisces_setting}, where ion impact energy $E_\textrm{in}$ and angle $\alpha_\textrm{in}$ are considered as uncertain parameters, these parameters are consistently identified as the most important parameters associated with \texttt{F-TRIDYN}{}. The ion impact energy $E_\textrm{in}$ seems to be more important than the ion impact angle $\alpha_\textrm{in}$, probably because of the small range (\SI{0}{\degree} -- \SI{30}{\degree}) used during the study, see \cref{tab:uncertain_parameters}. In effect, the sputtering yield reaches its maximum around \SI{60}{\degree} to \SI{70}{\degree} under this target-projectile combination, see~\cite{behrisch2007}, a value which is unrealistic in the \textsf{PISCES-A}{} case. On the other hand, in the \textsf{ITER-He}{} setting considered in \cref{sec:gsa_study_for_the_iter_he_setting}, the IEAD is provided by \texttt{hPIC}, another code in the multiscale code hierarchy, in the form of a distribution in the energy-angle space, see, e.g.,~\cite{lasa2020}. This is in contrast to the mono-energetic, mono-angle irradiation considered in the \textsf{PISCES-A}{} setting. In order to keep the number of input parameters tractable, we did not include the energy-angle distribution in the UQ study, but instead chose to consider the IEAD as a fixed, deterministic quantity. Under such conditions, the surface binding energy of tungsten was identified as the most important parameter. Including the IEAD in the GSA study will require a study of coupled \texttt{hPIC}-\texttt{FTX}. A code coupling effort for the latter is currently ongoing.
The migration energy of the \ce{He1} cluster, $E_{\ce{He}_1}$, is consistently identified as the most important parameter in predicting the helium retention, for both the \textsf{PISCES-A}{} and \textsf{ITER-He}{} settings we considered in this study. This makes sense, given that it is the most abundant and mobile species. We should also note that the migration energy of self-interstitials is not subject to uncertainty, because they are assumed to diffuse much faster than the other clusters.
Furthermore, we found that the importance of the migration energy of the \ce{He1} cluster, $E_{\ce{He}_1}$, diminishes over time, see \cref{fig:iter_he_ftx,fig:pisces_ftx}. A possible explanation could be that, as more clusters are formed, the tungsten lattice contains more sinks where the newly generated, mobile \ce{He1} clusters can get trapped. Initially, in the pristine tungsten lattice, the \ce{He1} clusters migrate over a longer period of time, until they eventually diffuse back to the surface and desorb, self-cluster or join an existing, small cluster. As more and more clusters appear over time, especially in the near-surface, the implanted \ce{He1} is shorter-lived, and hence its importance diminishes over time.
Considering the coupled \texttt{FTX}{} setup for the \textsf{PISCES-A}{} setting, the GSA results presented in \cref{fig:pisces_ftx} reveal that the surface binding energy of tungsten and the ion impact energy, two input parameters of \texttt{F-TRIDYN}{}, appear to be important for both the surface growth and the helium retention, through their effect on the sputtering yield. This observation confirms the advantage of considering the coupled setting, as such coupling is clearly crucial to the system behavior.
Also from \cref{fig:iter_he_ftx,fig:pisces_ftx}, the surface binding energy of tungsten only seems to be important in the \textsf{PISCES-A}{} case, and not in the \textsf{ITER-He}{} setting. In fact, none of the \texttt{F-TRIDYN}{} input parameters appear to be important for the \textsf{ITER-He}{} case. A possible explanation for this observation is as follows. The sputtering yield controls how quickly the near-surface \ce{He}-containing layer is removed. The surface binding energy determines how strongly a tungsten atom is bound to the surface, and directly affects the energy threshold required for a sputtering event, i.e., it determines the amount of energy an incoming ion needs to transfer to the surface atom to sputter it. If the energy of the incoming flux is larger than the energy threshold required for sputtering, as is the case in the \textsf{PISCES-A}{} setting, changes in the surface binding energy will be important. However, when the energy of the incoming flux is much lower than the energy threshold required for sputtering, as is the case in the \textsf{ITER-He}{} setting, changes in the surface binding energy will have little to no effect. Besides the secondary effect caused by changes in the sputtering yield, the ion impact energy also directly affects the implantation depth, which can impact the helium retention because helium implanted deeper in the substrate is less likely to undergo trap mutation and become immobile.
Furthermore, it appears that the migration energy of the \ce{He1} clusters $E_{\ce{He}_1}$, which was identified as the most important parameter for the helium retention, has little or no effect on the surface growth for the \textsf{PISCES-A}{} setting, in contrast to the \textsf{ITER-He}{} setting, see \cref{fig:pisces_ftx}. Again, a possible explanation may be the larger value for the incoming flux in the \textsf{PISCES-A}{} experiments compared to the relatively low plasma density anticipated for the early stage of ITER operation with a helium plasma. Only parameters that introduce a change in the trap mutation may impact the surface growth. In particular, in order for the surface to grow, we require a helium clustering event creating a vacancy and an interstitial. The latter interstitial may eventually diffuse towards the surface, leading to surface growth. Under a sufficiently large incoming helium flux, such as the one used in the \textsf{PISCES-A}{} setting, the implanted helium experiences a higher rate of self-clustering, or joining other clusters, relative to the timescale where the diffusion, determined by the migration energies, matters. As such, we expect the migration energies to be less important under high-flux conditions, such as the \textsf{PISCES-A}{} setting.
|
{
"arxiv_id": "2302.08653",
"language": "en",
"timestamp": "2023-02-20T02:05:26",
"url": "https://arxiv.org/abs/2302.08653",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
The Standard Model (SM) of particle physics has been tremendously successful in explaining a wide variety of experimental results and characterizing the fundamental particles and their interactions. However, there are still many unresolved issues that call for extending the SM with new particles and interactions, for instance through Higgs-sector extension models that introduce additional scalar fields~\cite{Gunion:1989we,Gunion:2002in,Carena:2002es,Lykken:2010mc}.
The neutral and doubly charged Higgs bosons are hypothetical particles predicted by certain extensions of the SM, such as the Left-Right Symmetric Model (LRSM)~\cite{Pati:1974yy,Mohapatra:1974gc,Senjanovic:1975rk,Mohapatra:1977mj,BhupalDev:2018vpr}, Two Higgs Doublet Model (2HDM)~\cite{Lee:1973iz,Gunion:1989we,Haber:1984rc,Gunion:1984yn,Branco:2011iw} and Higgs Triplet Model (HTM)~\cite{Konetschny:1977bn,Magg:1980ut,Schechter:1980gr,Cheng:1980qt,Lazarides:1980nt}. The new neutral (dubbed `$H_{3}$') and doubly-charged (dubbed `$H^{\pm\pm}$') scalar fields might couple to the SM charged leptons through Yukawa interactions:
\begin{align}
\mathcal{L}_{H_{3}} \ &\supset \ Y_{\alpha\beta}\overline{\ell_{\alpha}}H_{3}\ell_{\beta}+{\rm H.c.},\\
\mathcal{L}_{H^{++}} \ &\supset \ Y_{\alpha\beta}\overline{\ell^{C}_{\alpha}}H^{++}\ell_{\beta}+{\rm H.c.}.
\label{eq:Largangian}
\end{align}
For example, in LRSM, the physical fields $H_{3}$ and $H^{\pm\pm}$ come from the triplet Higgs fields $\Delta_{\rm L,R}$: $H_{3}\equiv {\rm Re}(\Delta^{0})$ and $H_{\rm L,R}^{\pm\pm}\equiv \Delta^{\pm\pm}_{\rm L,R}$, where
\begin{equation}
\Delta_{\rm L,R} \ = \ \begin{pmatrix}\Delta_{\rm L,R}^{+}/\sqrt{2}&\Delta_{\rm L,R}^{++}\\\Delta_{\rm L,R}^{0}&-\Delta_{\rm L,R}^{+}/\sqrt{2}\end{pmatrix},
\label{eq:triplet Higgs}
\end{equation}
and the triplet $\Delta_{\rm L,R}$ couples to the lepton doublets $L_{{\rm L},\alpha} = (\nu_{\rm L},e_{\rm L})_{\alpha}^{\sf T}$ and $L_{{\rm R},\alpha} = (\nu_{\rm R},e_{\rm R})_{\alpha}^{\sf T}$ through Yukawa interactions
\begin{align}
\mathcal{L}_{\rm Y} \ \supset \ & Y_{{\rm L},\alpha\beta}L_{{\rm L},\alpha}^{\sf T}C^{-1}\sigma_{2}\Delta_{\rm L}L_{{\rm L},\beta} \nonumber \\
& +Y_{{\rm R},\alpha\beta}L_{{\rm R},\alpha}^{\sf T}C^{-1}\sigma_{2}\Delta_{\rm R}L_{{\rm R},\beta}+{\rm H.c.},
\label{eq:Lagrangian LRSM}
\end{align}
where $\alpha$ and $\beta$ denote the lepton flavor $e,\mu{\rm \ or \ }\tau$ and $C$ is the charge conjugation matrix.
These Yukawa interactions are important in addressing the smallness of the neutrino masses and the baryon asymmetry of the universe, and are responsible for the origin of the electroweak symmetry breaking. The observation of such Yukawa couplings could have important implications for different beyond-the-SM (BSM) models, such as theories of neutrino mass and baryogenesis.
If the mass scale of the scalars is below a few TeV, lepton colliders (with $\sqrt{s}$ at the TeV scale) are especially well-suited for the detection of the scalar fields as well as the study of the corresponding BSM scenarios, because they provide cleaner backgrounds and signals than hadron colliders. Lepton colliders can also be run at a fixed center-of-mass energy, which allows for precise control of the collision energy and offers an excellent opportunity to study the new particles at high precision, including their masses, couplings, and decay modes.
In this work, ignoring the actual detailed form of the Yukawa interactions, I simply treat the overall Yukawa couplings as model-independent observables and study their discovery prospects at future lepton colliders such as the International Linear Collider (ILC)~\cite{ILC:2013jhg,Evans:2017rvt} with a center-of-mass energy of 1.0 TeV and the Compact Linear Collider (CLIC)~\cite{CLICPhysicsWorkingGroup:2004qvu,CLIC:2016zwp} with center-of-mass energies of 1.5 TeV and 3 TeV.
The rest of the paper is organized as follows: in Section~\ref{sec:theoretical}, I briefly discuss the characteristic cases and the related experimental constraints that are used in this paper. Section~\ref{sec:collider} presents the detailed analysis of the collider signal considered in this paper, as well as the background evaluation and different cuts that are used for different cases. The results and other discussions are summarized in Section~\ref{sec:results} and Section~\ref{sec:discussions}. Section~\ref{sec:conclusions} gives the conclusions. The related invariant mass distributions are shown in Appendix~\ref{sec:invariant mass}.
\section{Theoretical analysis}
\label{sec:theoretical}
For simplicity, I only consider the $e, \mu$ sector of the Yukawa matrices and simply assume the diagonal and off-diagonal terms are equal separately: $|Y_{e e}| = |Y_{\mu \mu}|$ and $|Y_{e \mu}| = |Y_{\mu e}|$.
Because of the extremely strong constraints of $\mu \to e e \Bar{e} \ (< 1.0 \times 10^{-12} {\rm \ at \ 90\% \ CL})$~\cite{SINDRUM:1987nra} and $\mu \to e \gamma \ (< 4.2 \times 10^{-13} {\rm \ at \ 90\% \ CL})$~\cite{MEG:2016leq}, the diagonal and off-diagonal terms cannot be large at the same time. Otherwise, these LFV rare lepton decay processes can happen at the tree level (for $\mu \to e e \Bar{e}$) and one-loop level (for $\mu \to e \gamma$) with a neutral or doubly-charged mediator in the diagrams.
Thus, I only consider four characteristic cases in this work:
\begin{enumerate}
\item [(a)] Neutral scalar $H_{3}$ with non-zero diagonal Yukawa couplings $|Y_{e e}| = |Y_{\mu \mu}|$.
\item [(b)] Neutral scalar $H_{3}$ with non-zero off-diagonal Yukawa couplings $|Y_{e \mu}| = |Y_{\mu e}|$.
\item [(c)] Doubly-charged scalar $H^{\pm\pm}$ with non-zero diagonal Yukawa couplings $|Y_{e e}| = |Y_{\mu \mu}|$.
\item [(d)] Doubly-charged scalar $H^{\pm\pm}$ with non-zero off-diagonal Yukawa couplings $|Y_{e \mu}| = |Y_{\mu e}|$.
\end{enumerate}
Throughout the rest of this paper, the letters (a), (b), (c), and (d) appearing in the equations, figures, and tables below respectively correspond to the four cases presented here.
For the possible experimental constraints in the four cases, I use the data from the rare LFV decays $\ell_\alpha \to \ell_\beta \ell_\gamma \ell_\delta$, $\ell_\alpha \to \ell_\beta \gamma$~\cite{Workman:2022ynf,HFLAV:2016hnz}, the muonium oscillation~\cite{Willmann:1998gd}, the LEP $e^+ e^- \to \ell^+ \ell^-$~\cite{DELPHI:2005wxt}, and the LHC multi-lepton~\cite{ATLAS:2021eyc,ATLAS:2022pbd}.
As for the electron~\cite{Parker2018MeasurementOT,morel2020determination} and muon~\cite{Muong-2:2021ojo} anomalous magnetic moments, I have checked and agree with the previous theoretical expressions~\cite{Queiroz:2014zfa,Lindner:2016bgg} for the $\Delta a_e$ and $\Delta a_\mu$ contributions induced by neutral and doubly-charged scalar fields. Specifically, the $(g-2)_\mu$ contributions in our case (b) and case (d) are:
\begin{subequations}
\begin{align}
\setcounter{equation}{1}
\Delta& a_{\mu}^{\rm (b)} = \frac{1}{8\pi^{2}}\frac{m^{2}_{\mu}}{m^{2}_{H_{3}}}\int_{0}^{1}{\rm d}x\frac{|Y_{e\mu}|^{2}x^{2}(1-x+\frac{m_{e}}{m_{\mu}})}{(1-x)(1-\frac{m_{\mu}^{2}}{m_{H_{3}}^{2}}x)+\frac{m_{e}^{2}}{m_{H_{3}}^{2}}x},\\
\setcounter{equation}{3}
\Delta& a_{\mu}^{\rm (d)} = -\frac{1}{\pi^{2}}\frac{m^{2}_{\mu}}{m^{2}_{H^{\pm\pm}}}\int_{0}^{1}{\rm d}x\frac{|Y_{e\mu}|^{2}x(1-x)(x+\frac{m_{e}}{m_{\mu}})}{\frac{m_{e}^{2}}{m_{H^{\pm\pm}}^{2}}(1-x)(1-\frac{m_{\mu}^{2}}{m_{e}^{2}}x)+x}\nonumber\\
&-\frac{1}{2\pi^{2}}\frac{m^{2}_{\mu}}{m^{2}_{H^{\pm\pm}}}\int_{0}^{1}{\rm d}x\frac{|Y_{e\mu}|^{2}x^{2}(1-x+\frac{m_{e}}{m_{\mu}})}{(1-x)(1-\frac{m_{\mu}^{2}}{m_{H^{\pm\pm}}^{2}}x)+\frac{m_{e}^{2}}{m_{H^{\pm\pm}}^{2}}x}.
\end{align}
\label{eq:gminus2}
\end{subequations}
The corresponding Feynman diagrams are shown in Fig.~\ref{fig:gminus2}. The Feynman diagrams and the expressions of the $(g-2)_\mu$ contribution in case (a) and case (c) can be easily obtained by changing all the $e$ indices to $\mu$ in Fig.~\ref{fig:gminus2} and Eq.~\eqref{eq:gminus2}. The Feynman diagrams and the expressions of the $(g-2)_e$ can be easily obtained by exchanging all the $e$ and $\mu$ indices in the corresponding $(g-2)_\mu$ figures and expressions.
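As an illustration of how these loop integrals are evaluated in practice, the following Python sketch computes $\Delta a_\mu$ for case (b) by numerical quadrature of the first expression in Eq.~\eqref{eq:gminus2}; the scalar mass and Yukawa coupling are arbitrary benchmark values, not fit results.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m_e, m_mu = 0.000511, 0.105658  # lepton masses (GeV)
m_H3 = 500.0                    # neutral scalar mass (GeV), benchmark
Y_emu = 0.5                     # |Y_emu|, benchmark value

r_mu = m_mu**2 / m_H3**2
r_e = m_e**2 / m_H3**2

def integrand(x):
    num = Y_emu**2 * x**2 * (1.0 - x + m_e / m_mu)
    den = (1.0 - x) * (1.0 - r_mu * x) + r_e * x
    return num / den

# The integrand is sharply peaked near x = 1, so allow more subdivisions.
integral, _ = quad(integrand, 0.0, 1.0, limit=500)
delta_a_mu = integral * m_mu**2 / (8.0 * np.pi**2 * m_H3**2)
print(f"Delta a_mu (case b) ~ {delta_a_mu:.2e}")
\end{verbatim}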
\begin{figure*}[t!]
\centering
\setcounter{subfigure}{0}\renewcommand{\thesubfigure}{b}\makeatletter
\subfloat[${H_3}$, $|Y_{e\mu}| \neq 0$]{\includegraphics[width=0.32\textwidth]{figures/gminus2_01.pdf}}
\setcounter{subfigure}{0}\renewcommand{\thesubfigure}{d\arabic{subfigure}}\makeatletter
\subfloat[${H^{\pm\pm}}$, $|Y_{e\mu}| \neq 0$]{\includegraphics[width=0.32\textwidth]{figures/gminus2_03.pdf}}
\subfloat[${H^{\pm\pm}}$, $|Y_{e\mu}| \neq 0$]{\includegraphics[width=0.32\textwidth]{figures/gminus2_02.pdf}}
\caption{Feynman diagrams for the $(g-2)_\mu$ in case (b): $H_{3}$, $|Y_{e\mu}| \neq 0$ and case (d): $H^{\pm\pm}$, $|Y_{e\mu}| \neq 0$. Feynman diagrams for the $(g-2)_\mu$ in case (a): $H_{3}$, $|Y_{ee}| = |Y_{\mu\mu}| \neq 0$ and case (c): $H^{\pm\pm}$, $|Y_{ee}| = |Y_{\mu\mu}| \neq 0$ can be obtained simply by changing all the $e$ indices to $\mu$ in the figures. Feynman diagrams for the $(g-2)_e$ can be obtained simply by exchanging all the $e$ and $\mu$ indices in the corresponding $(g-2)_\mu$ diagrams.}
\label{fig:gminus2}
\end{figure*}
\section{Signal and background analysis}
\label{sec:collider}
I focus on the ILC~\cite{Evans:2017rvt} and CLIC~\cite{CLIC:2016zwp} as two benchmark machines for future lepton colliders and present in Table~\ref{tab:colliders} their planned final center-of-mass energy $\sqrt{s}$ and the expected integrated luminosity $\mathscr{L}_{\rm int}$.
\begin{table}[th!]
\caption{The planned center-of-mass energy and expected integrated luminosity for the International Linear Collider (ILC) and the two stages of the Compact Linear Collider (CLIC).}
\centering
\begin{tabular}{c|cc}
\hline
\hline
Collider&$\sqrt{s}$ (TeV)&$\mathscr{L}_{\rm int}$ (ab$^{-1}$)\\ \hline
ILC&1.0&4.0\\ \hline
\multirow{2}*{CLIC}&
1.5&2.5\\
\cline{2-3}
&3.0&5.0\\ \hline\hline
\end{tabular}
\label{tab:colliders}
\end{table}
Based on the four cases mentioned above, I propose two collider signals that can be used to test the $(m_{\rm scalar},Y_{\alpha \beta})$ parameter space: $e^+ e^- \to e^+ e^- \mu^+ \mu^-$ and $e^+ e^- \to e^+ e^+ \mu^- \mu^- / e^- e^- \mu^+ \mu^+$. The second signal, which contains two same-sign dilepton pairs, violates lepton flavor conservation, and its SM background mainly comes from the misidentification of the lepton flavor in the final states. For lepton colliders, the mis-ID rate for electrons and muons is less than 0.5\%~\cite{Yu:2017mpx}. This makes $e^\pm e^\pm \mu^\mp \mu^\mp$ almost background free, and I find that the background would not have substantial effects on the estimates of the signal sensitivities. For simplicity, I neglect the $e^\pm e^\pm \mu^\mp \mu^\mp$ SM background for all the prospects below.
In this work, I perform a simulation for the signal and background processes using \textsc{MadGraph5\_aMC@NLO}\xspace~\cite{Alwall:2014hca}, requiring all leptons in the final states to satisfy the minimal trigger cuts $p_{\rm T}>10$ GeV, $|\eta|<2.5$, and $\Delta R>0.4$.
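A minimal Python sketch of this trigger-level selection is shown below; the event record format and the example kinematics are placeholders and do not correspond to the actual \textsc{MadGraph5\_aMC@NLO}\xspace event files.
\begin{verbatim}
import numpy as np

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R = sqrt(d_eta^2 + d_phi^2)."""
    dphi = abs(phi1 - phi2)
    dphi = 2.0 * np.pi - dphi if dphi > np.pi else dphi
    return np.sqrt((eta1 - eta2) ** 2 + dphi ** 2)

def passes_trigger(leptons, pt_min=10.0, eta_max=2.5, dr_min=0.4):
    """Apply the pT, |eta| and Delta R requirements to all leptons."""
    for lep in leptons:
        if lep["pt"] < pt_min or abs(lep["eta"]) > eta_max:
            return False
    for i in range(len(leptons)):
        for j in range(i + 1, len(leptons)):
            if delta_r(leptons[i]["eta"], leptons[i]["phi"],
                       leptons[j]["eta"], leptons[j]["phi"]) < dr_min:
                return False
    return True

# Placeholder event with four leptons (pt in GeV).
event = [{"pt": 120.0, "eta": 0.5, "phi": 0.1},
         {"pt": 80.0, "eta": -1.2, "phi": 2.9},
         {"pt": 35.0, "eta": 0.9, "phi": -2.0},
         {"pt": 22.0, "eta": 1.8, "phi": 1.0}]
print(passes_trigger(event))
\end{verbatim}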
The $e^+ e^- \mu^+ \mu^-$ and $e^\pm e^\pm \mu^\mp \mu^\mp$ signals come from the single production of $H_{3}$ or $H^{\pm\pm}$, while there are also contributions from the
Drell–Yan pair production of $H^{\pm\pm}$ which are dominant when $m_{H^{\pm\pm}} \lesssim \sqrt{s} / 2$ as shown in Fig.~\ref{fig:signal}.
\begin{figure*}[t!]
\centering
{\includegraphics[width=0.4\textwidth]{figures/signal01.pdf}}
{\includegraphics[width=0.4\textwidth]{figures/signal02.pdf}}
{\includegraphics[width=0.4\textwidth]{figures/signal04.pdf}}
{\includegraphics[width=0.4\textwidth]{figures/signal03.pdf}}
\caption{Representative Feynman diagrams for the single production of $H_{3}$, $H^{\pm\pm}$ and pair production of $H^{\pm\pm}$.}
\label{fig:signal}
\end{figure*}
I present in Table~\ref{tab:signals} the possible signal(s) for each case and the corresponding invariant mass distributions that could be used to distinguish the signal from the background. As a demonstration, Fig.~\ref{fig:invariant mass} shows the relevant signal and background invariant mass distributions in the $e^+ e^- \mu^+ \mu^-$ channel. The parameter values used in Fig.~\ref{fig:invariant mass} correspond to the values at the * marks in Fig.~\ref{fig:results}. The clean red peaks in Fig.~\ref{fig:invariant mass} mainly come from the prompt decay of the neutral or doubly-charged scalar. The invariant mass distributions in the $e^\pm e^\pm \mu^\mp \mu^\mp$ channel are not shown, since they are almost background free, and the signal distributions are very similar to the ones I present in Fig.~\ref{fig:invariant mass} except for the charge of the electron or muon. In Fig.~\ref{fig:invariant mass}(b1) or (d1), the signal events that mainly range from $0 \sim 500$~GeV correspond to the events in the peak of Fig.~\ref{fig:invariant mass}(b2) or (d2) and vice versa. For example, in Fig.~\ref{fig:invariant mass}(d1), the signal events ranging from $0 \sim 500$~GeV mainly come from the process $e^+ e^- \to e^+ \mu^+ (H_{\rm L}^{--} \to e^- \mu^-)$ and should peak around 950~GeV in Fig.~\ref{fig:invariant mass}(d2), because Fig.~\ref{fig:invariant mass}(d2) shows the invariant mass distribution of $e^- \mu^-$. This is a characteristic feature of the single production channel and would be useful to enhance the signal sensitivity, as one can choose a selection such as $M_{e^+ \mu^-} > 450$~GeV or $M_{e^- \mu^+} > 450$~GeV in Fig.~\ref{fig:invariant mass}(b), or $M_{e^+ \mu^+} > 900$~GeV or $M_{e^- \mu^-} > 900$~GeV in Fig.~\ref{fig:invariant mass}(d).
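The pair-invariant-mass selection described above amounts to the simple computation sketched below in Python, assuming massless leptons; the example kinematics are arbitrary and purely illustrative.
\begin{verbatim}
import numpy as np

def four_momentum(pt, eta, phi):
    """(E, px, py, pz) of a massless particle from (pt, eta, phi)."""
    return np.array([pt * np.cosh(eta), pt * np.cos(phi),
                     pt * np.sin(phi), pt * np.sinh(eta)])

def invariant_mass(p1, p2):
    p = p1 + p2
    m2 = p[0] ** 2 - np.sum(p[1:] ** 2)
    return np.sqrt(max(m2, 0.0))

ep = four_momentum(310.0, 0.4, 0.2)     # e+  (illustrative, GeV)
mum = four_momentum(290.0, -0.3, -2.8)  # mu- (illustrative, GeV)
m_pair = invariant_mass(ep, mum)
print(f"M(e+ mu-) = {m_pair:.1f} GeV, "
      f"passes 450 GeV cut: {m_pair > 450.0}")
\end{verbatim}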
\begin{table*}[t!]
\caption{Possible signal(s) for each case and the corresponding invariant mass distributions that could be used to distinguish the signal from the background. "--" means the signal is not possible (except for the mis-ID) in the corresponding case.}
\begin{tabular}{l|c|c}
\hline\hline
& $e^+ e^- \to e^+ e^- \mu^+ \mu^-$ & $e^+ e^- \to e^\pm e^\pm \mu^\mp \mu^\mp$ \\ \hline
(a) $H_{3},|Y_{ee}| = |Y_{\mu\mu}|$ & $M_{e^+ e^-}\&M_{\mu^+ \mu^-}$ & -- \\ \hline
(b) $H_{3},|Y_{e\mu}|$ & $M_{e^+ \mu^-}\&M_{e^- \mu^+}$ & $M_{e^+ \mu^-}\&M_{e^- \mu^+}$ \\ \hline
(c) $H^{\pm\pm},|Y_{ee}| = |Y_{\mu\mu}|$ & -- & $M_{e^+ e^+}\&M_{\mu^+ \mu^+}$ \\ \hline
(d) $H^{\pm\pm},|Y_{e\mu}|$ & $M_{e^+ \mu^+}\&M_{e^- \mu^-}$ & -- \\
\hline\hline
\end{tabular}
\label{tab:signals}
\end{table*}
In addition to the basic cuts mentioned above, I also apply some case-specific cuts to enhance the sensitivity, based on the characteristic signatures in each case:
\begin{enumerate}
\item [(a)] $H_{3}$ with diagonal couplings: Only the $e^+ e^- \mu^+ \mu^-$ final state is possible in this case, where $e^+ e^- \to e^+ e^- (H_3 \to \mu^+ \mu^-)$ or $e^+ e^- \to \mu^+ \mu^- (H_3 \to e^+ e^-)$. Another important channel comes from $e^+ e^- \to ZH_{3} \to e^+ e^- \mu^+ \mu^-$. We expect the dielectron and dimuon invariant masses $M_{e^+e^-}$ and $M_{\mu^+\mu^-}$ to peak at the $Z$ and $H_3$ masses; see Figs.~\ref{fig:invariant mass}(a1) and (a2). However, the SM background has similar $M_{e^+e^-}$ and $M_{\mu^+\mu^-}$ distributions around the $Z$ peak. I find that applying a cut on $M_{\mu^+\mu^-}$ or $M_{e^+e^-}$ does not improve the sensitivities much and even worsens them in the region where $m_{H_3} \approx m_Z$. Therefore, I do not apply any further cut by default for this case. As a comparison, I also show sensitivities with the cut $M_{\mu^+\mu^-} > 120$~GeV in Fig.~\ref{fig:results}(a). The red, yellow, and blue solid (dashed) contours show the 3$\sigma$ sensitivities of the signal without (with) the cut $M_{\mu^+\mu^-} > 120$~GeV in Fig.~\ref{fig:results}(a). As can be seen, the red, yellow, and blue dashed contours do not improve the sensitivities much and are not valid when $m_{H_3} \lesssim 120$~GeV because of the chosen cut.
\item [(b)] $H_{3}$ with off-diagonal couplings: Both the $e^+ e^- \mu^+ \mu^-$ and $e^\pm e^\pm \mu^\mp \mu^\mp$ final states are possible in this case. For the $e^+ e^- \mu^+ \mu^-$ final state, I further apply the cut $M_{e^+e^-},M_{\mu^+\mu^-} > 120$~GeV to reduce the events with a $Z$ boson decaying to a pair of leptons. We also expect the invariant mass $M_{e^\pm \mu^\mp}$ to peak at the $H_{3}$ mass; see Figs.~\ref{fig:invariant mass}(b1) and (b2). Since we do not know the mass of $H_{3}$ and cannot tell where the peak should be, I do not apply a cut on $M_{e^\pm \mu^\mp}$ for this case. Because $e^\pm e^\pm \mu^\mp \mu^\mp$ is almost background free, I do not apply any further cut in this channel. In Fig.~\ref{fig:results}(b), the red, yellow, and blue solid (dashed) contours now show the 3$\sigma$ sensitivities in the $e^+ e^- \mu^+ \mu^-$ ($e^\pm e^\pm \mu^\mp \mu^\mp$) channel. Because the background is small in the $e^\pm e^\pm \mu^\mp \mu^\mp$ channel (assumed to be zero in this work), it is not surprising that the red, yellow, and blue dashed contours perform better than the solid contours.
\item [(c)] $H^{\pm\pm}$ with diagonal couplings: Only the $e^\pm e^\pm \mu^\mp \mu^\mp$ final state is possible in this case. Because $e^\pm e^\pm \mu^\mp \mu^\mp$ is almost background free, I do not apply any further cut for this case.
\item [(d)] $H^{\pm\pm}$ with off-diagonal couplings: Only the $e^+ e^- \mu^+ \mu^-$ final state is possible in this case, where $e^+ e^- \to e^\mp \mu^\mp (H^{\pm\pm} \to e^\pm \mu^\pm)$. We expect the $e^\pm \mu^\pm$ invariant mass $M_{e^\pm \mu^\pm}$ to peak at the $H^{\pm\pm}$ mass; see Figs.~\ref{fig:invariant mass}(d1) and (d2). Because the pair production channel, which is independent of the Yukawa coupling, is dominant when $m_{H^{\pm\pm}} \lesssim \sqrt{s} / 2$, the sensitivity in the $(m_{H^{\pm\pm}},Y_{e \mu})$ parameter space is only valid in the region where $m_{H^{\pm\pm}} \gtrsim \sqrt{s} / 2$. Based on this feature, I further apply the cut $M_{e^\pm \mu^\pm} \gtrsim \sqrt{s} / 2$ to maximize the sensitivity. To be specific, I require $M_{e^\pm \mu^\pm} > 500$~GeV at the ILC 1.0~TeV stage, $M_{e^\pm \mu^\pm} > 750$~GeV at the CLIC 1.5~TeV stage, and $M_{e^\pm \mu^\pm} > 1400$~GeV at the CLIC 3.0~TeV stage.
\end{enumerate}
The further selections used in the analysis of the $e^+ e^- \mu^+ \mu^-$ channel are summarized in Table~\ref{tab:selection}.
\begin{table}[ht!]
\caption{Further selections applied in the analysis for each case in the $e^+ e^- \mu^+ \mu^-$ channel.}
\begin{tabular}{c|c}
\hline\hline
$e^+ e^- \mu^+ \mu^-$ & Selection \\ \hline
case (a) & -- or $M_{\mu^+ \mu^-} > 120~$GeV \\ \hline
case (b) & $M_{e^+ e^-},M_{\mu^+ \mu^-} > 120~$GeV \\ \hline
case (d) & $M_{e^+ \mu^+}$ or $M_{e^- \mu^-} \gtrsim \sqrt{s}/2$ \\ \hline\hline
\end{tabular}
\label{tab:selection}
\end{table}
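As a concrete illustration of the case-dependent selections summarized in Table~\ref{tab:selection}, the following Python sketch (hypothetical helper; the dilepton invariant masses are assumed to be precomputed in GeV) applies the additional cuts in the $e^+ e^- \mu^+ \mu^-$ channel.
\begin{verbatim}
def pass_case_selection(case, m, sqrt_s):
    """Additional invariant-mass selections of Table 2 for the e+ e- mu+ mu- channel.
    'm' is a dict of dilepton invariant masses in GeV, e.g. m['mumu'], m['ee'],
    m['e+mu+'], m['e-mu-'] (illustrative key names)."""
    if case == 'a':                       # H3, diagonal couplings (optional cut)
        return m['mumu'] > 120.0
    if case == 'b':                       # H3, off-diagonal couplings
        return m['ee'] > 120.0 and m['mumu'] > 120.0
    if case == 'd':                       # doubly-charged scalar, off-diagonal couplings
        # roughly sqrt(s)/2; the text uses 500, 750, and 1400 GeV thresholds
        return m['e+mu+'] > 0.5 * sqrt_s or m['e-mu-'] > 0.5 * sqrt_s
    return True                           # case (c) uses the same-sign channel instead
\end{verbatim}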
\section{Results} \label{sec:results}
\begin{figure*}[ht!]
\centering
\subfloat[$m_{H_3}$, $|Y_{ee}| = |Y_{\mu\mu}| \neq 0$]{\includegraphics[width=0.5\textwidth]{figures/H3_diagonal_new.pdf}}
\subfloat[$m_{H_3}$, $|Y_{e\mu}| \neq 0$]{\includegraphics[width=0.5\textwidth]{figures/H3_offdiagonal_new.pdf}}\\
\subfloat[$m_{H_{\rm L,R}^{\pm\pm}}$, $|Y_{ee}| = |Y_{\mu\mu}| \neq 0$]{\includegraphics[width=0.5\textwidth]{figures/HLR_diagonal_new.pdf}}
\subfloat[$m_{H_{\rm L,R}^{\pm\pm}}$, $|Y_{e\mu}| \neq 0$]{\includegraphics[width=0.5\textwidth]{figures/HLR_offdiagonal_new.pdf}}
\caption{Four cases in the $H_{3}$ or $H_{\rm L,R}^{\pm\pm}$ parameter space. The orange-shaded regions explain the $(g-2)_\mu$ anomaly at $2\sigma$ CL, while the brown-shaded regions display the $5\sigma$ disfavored regions of $\Delta a_\mu$. The purple-shaded regions with solid (dashed) boundaries are the $5\sigma$ disfavored regions of $\Delta a_e$ using Cs (Rb) measurements. The black solid and dashed contours are the 13~TeV limits derived from the different LHC multi-lepton searches discussed in the text. Other shaded regions show the relevant constraints on the parameter space from muonium oscillation (gray), LEP $ee \to ee$ (magenta), and LEP $ee \to \mu\mu$ (green). The red, yellow, and blue contours respectively show the $3\sigma$ sensitivities of the 1.0~TeV ILC, 1.5~TeV CLIC, and 3.0~TeV CLIC in the (a) $e^+ e^- \mu^+ \mu^-$ channel without (solid contours) or with (dashed contours) $M_{\mu^+ \mu^-} > 120$~GeV; (b) $e^+ e^- \mu^+ \mu^-$ channel (solid contours) and $e^\pm e^\pm \mu^\mp \mu^\mp$ channel (dashed contours); (c) $e^\pm e^\pm \mu^\mp \mu^\mp$ channel (solid contours for $H_{\rm L}^{\pm\pm}$ and dashed contours for $H_{\rm R}^{\pm\pm}$); (d) $e^+ e^- \mu^+ \mu^-$ channel (solid contours for $H_{\rm L}^{\pm\pm}$ and dashed contours for $H_{\rm R}^{\pm\pm}$). The * marks in the figures give the representative values of the corresponding parameters used in Fig.~\ref{fig:invariant mass}.}
\label{fig:results}
\end{figure*}
In the Left-Right Symmetric Model~\cite{Pati:1974yy,Mohapatra:1974gc,Senjanovic:1975rk,Mohapatra:1977mj,BhupalDev:2018vpr,BhupalDev:2018tox}, the triplet Higgs fields $\Delta_{\rm L}$ and $\Delta_{\rm R}$ transform as triplets under the $SU(2)_L$ and $SU(2)_R$ gauge symmetries, respectively. Both of them give rise to a doubly-charged Higgs, denoted $H^{\pm\pm}_{\rm L}$ and $H^{\pm\pm}_{\rm R}$. Moreover, the doubly-charged Higgs in the canonical type-II seesaw model~\cite{Akeroyd:2005gt,Hektor:2007uu,Chao:2008mq} is the same as $H^{\pm\pm}_{\rm L}$, and the doubly-charged scalar in the Zee–Babu neutrino mass model~\cite{Zee:1985rj,Zee:1985id,Babu:1988ki} has the same quantum numbers as $H^{\pm\pm}_{\rm R}$\footnote{I assume the $Z'$ is much heavier, so that the electroweak production of the doubly-charged scalar in the Zee–Babu model is the same as that of $H^{\pm\pm}_{\rm R}$.}. In case (c) and case (d) of this work, I consider both $H^{\pm\pm}_{\rm L}$ and $H^{\pm\pm}_{\rm R}$. Because they have different couplings to the $Z$ boson~\cite{Fuks:2019clu}, their sensitivities (the red, yellow, and blue solid or dashed contours in Figs.~\ref{fig:results}(c) and (d)) are only slightly different, but the LHC constraints derived from the Drell–Yan pair production process (the black solid or dashed contours in Figs.~\ref{fig:results}(c) and (d)) show a sizable difference\footnote{The cross section for the pair production of $H^{\pm\pm}_{\rm L}$ is roughly two times larger than that of $H^{\pm\pm}_{\rm R}$, which makes the constraints on $H^{\pm\pm}_{\rm L}$ stronger than those on $H^{\pm\pm}_{\rm R}$ in Figs.~\ref{fig:results}(c) and (d).}.
I consider all the four cases where $H_{3}$ or $H^{\pm\pm}_{\rm L,R}$ has non-zero diagonal or off-diagonal Yukawa couplings in the $e,\mu$ sector. In Fig.~\ref{fig:results}, the orange-shaded regions explain the $(g-2)_\mu$ anomaly~\cite{Muong-2:2021ojo} at $2\sigma$ CL, while the brown-shaded regions display the $5\sigma$ disfavored regions of $\Delta a_\mu$. As shown in Eq.~\eqref{eq:gminus2}, the doubly-charged scalar has an opposite contribution to $\Delta a_\mu$, so there is no $2\sigma$ favored region in Figs.~\ref{fig:results}(c) and (d). The purple-shaded regions in Fig.~\ref{fig:results} with solid (dashed) boundaries are the $5\sigma$ disfavored regions of $\Delta a_e$ using Cs~\cite{Parker2018MeasurementOT} (Rb~\cite{morel2020determination}) measurements. Again, because the doubly-charged scalar has an opposite contribution to $\Delta a_e$ compared with the neutral scalar, the $(g-2)_e^{\rm Cs}$ constraint is stronger in the $H_{3}$ parameter space (Figs.~\ref{fig:results}(a) and (b)), while the $(g-2)_e^{\rm Rb}$ constraint is stronger in the $H^{\pm\pm}_{\rm L,R}$ parameter space (Figs.~\ref{fig:results}(c) and (d)). I recast the ATLAS multilepton analysis~\cite{ATLAS:2021eyc} using the Signal Region $4\ell$ Off-Z with $M_{\rm inv} > (<) 400$~GeV and set new bounds on the neutral scalar $H_{3}$ shown as the black solid (dashed) contours in Figs.~\ref{fig:results}(a) and (b). The black solid (dashed) contours in Figs.~\ref{fig:results}(c) and (d) are the 95\% CL limits on $m_{H_{\rm L}^{\pm\pm}}$ ($m_{H_{\rm R}^{\pm\pm}}$) from the LHC multi-lepton search~\cite{ATLAS:2022pbd}, assuming $\Sigma_{\ell\ell^\prime}{\rm BR}(H^{\pm\pm} \to \ell^\pm \ell^{\prime\pm}) = 100\%$. Other shaded regions in Fig.~\ref{fig:results} show the relevant constraints on the parameter space from muonium oscillation~\cite{Willmann:1998gd} (gray), LEP $ee \to ee$~\cite{DELPHI:2005wxt} (magenta), and LEP $ee \to \mu\mu$~\cite{DELPHI:2005wxt} (green).
The red, yellow, and blue contours in Fig.~\ref{fig:results} show the $3\sigma$ sensitivities of the 1.0~TeV ILC, 1.5~TeV CLIC, and 3.0~TeV CLIC in the $ee\mu\mu$ channels, respectively. To be specific, in Fig.~\ref{fig:results}(a), $H_{3}$ with non-zero diagonal Yukawa couplings, the red, yellow, and blue solid (dashed) contours show the $3\sigma$ sensitivities in the $e^+ e^- \mu^+ \mu^-$ channel without (with) the cut $M_{\mu^+\mu^-} > 120$~GeV; in Fig.~\ref{fig:results}(b), $H_{3}$ with non-zero off-diagonal Yukawa couplings, the red, yellow, and blue solid (dashed) contours show the $3\sigma$ sensitivities in the $e^+ e^- \mu^+ \mu^-$ ($e^\pm e^\pm \mu^\mp \mu^\mp$) channel; in Fig.~\ref{fig:results}(c), $H_{\rm L}^{\pm\pm}$ ($H_{\rm R}^{\pm\pm}$) with non-zero diagonal Yukawa couplings, the red, yellow, and blue solid (dashed) contours show the $3\sigma$ sensitivities in the $e^\pm e^\pm \mu^\mp \mu^\mp$ channel; in Fig.~\ref{fig:results}(d), $H_{\rm L}^{\pm\pm}$ ($H_{\rm R}^{\pm\pm}$) with non-zero off-diagonal Yukawa couplings, the red, yellow, and blue solid (dashed) contours show the $3\sigma$ sensitivities in the $e^+ e^- \mu^+ \mu^-$ channel.
In Fig.~\ref{fig:results}, all the red, yellow, and blue contours asymptotically approach the line $m_{H_{3},H_{\rm L,R}^{\pm\pm}} = \sqrt{s}$ because this search targets the single-production channels of $H_{3}$ and $H_{\rm L,R}^{\pm\pm}$. In case (a), because there is another important channel for $H_3$ single production, $e^+ e^- \to Z H_3 \to e^+ e^- \mu^+ \mu^-$, the red, yellow, and blue contours in Fig.~\ref{fig:results}(a) have a kink at $m_{H_3} \approx \sqrt{s} - 90$~GeV, which reflects the on-shell $Z$ boson becoming off-shell as the mass of $H_3$ increases.
The red, yellow, blue, and black curves in Figs.~\ref{fig:results}(a) and (b) rise in the low-mass region because the final-state leptons there are soft and fail the corresponding $p_{\rm T}$ selection. Although the on-shell searches are not sensitive in the low-mass range because of the soft leptons, the EFT approach applied to $e e \to \ell \ell$ data (like the LEP constraints in Figs.~\ref{fig:results}(a) and (b)) gives a flat limit in the low-mass region.
\section{Discussions}
\label{sec:discussions}
$\bullet$ {\bf Other possible bounds:}
For the neutral Higgs cases, I cannot find any LHCb limit above 5~GeV. Additional bounds might come from dark matter direct detection experiments~\cite{Feng:2022inv}, but this requires the Higgs to couple to dark matter. As for the doubly-charged Higgs, there are also HL-LHC studies such as Ref.~\cite{CMS:2022iuu} using the Drell–Yan-like pair production channel, which can improve the limit on the doubly-charged Higgs mass to 1400~GeV at 95\% CL~\cite{CMS:2022iuu}.
$\bullet$ {\bf Electron and muon (g-2):}
Remarkably, we observe that in Figs.~\ref{fig:results}(a) and (c), the $(g-2)_\mu$ bounds are more stringent than the $(g-2)_e$ ones, whereas in Figs.~\ref{fig:results}(b) and (d), the $(g-2)_e$ bounds are stronger than the $(g-2)_\mu$ ones. Roughly speaking, for the off-diagonal Yukawa coupling case in Eq.~\eqref{eq:gminus2}, the dominant contribution to $(g-2)_\mu$ comes from $\Delta a_\mu^{\rm off-diagonal} \propto \frac{m_\mu^2}{m_H^2} \int_{0}^{1}{\rm d}x \ x^2 = \frac{m_\mu^2}{3m_H^2}$, while the dominant contribution to $(g-2)_e$ should be $\Delta a_e^{\rm off-diagonal} \propto \frac{m_e^2}{m_H^2} \int_{0}^{1}{\rm d}x \ \frac{x^2(m_\mu / m_e)}{1-x+m_\mu^2 / m_H^2} \approx -\frac{m_e m_\mu}{m_H^2}{\rm ln}(\frac{m_\mu^2}{m_H^2})$. This gives $\left(\frac{\Delta a_e}{\Delta a_\mu}\right)_{\rm off-diagonal} \sim -3\frac{m_e}{m_\mu}{\rm ln}(\frac{m_\mu^2}{m_H^2}) \sim {\cal O}(0.01)$ to ${\cal O}(0.1)$. For the diagonal Yukawa coupling case, however, the ratio is simply $\left(\frac{\Delta a_e}{\Delta a_\mu}\right)_{\rm diagonal} \sim \frac{m_e^2}{m_\mu^2} \sim {\cal O}(10^{-5})$. As a result, $(g-2)_e$ receives a relatively larger contribution in the off-diagonal coupling case than in the diagonal coupling case when compared with $(g-2)_\mu$, so the constraints from $(g-2)_e$ should be stronger in the off-diagonal case. This is why in Figs.~\ref{fig:results}(b) and (d) the $(g-2)_e$ constraints are stronger than the $(g-2)_\mu$ ones, while in Figs.~\ref{fig:results}(a) and (c) the $(g-2)_\mu$ constraints are stronger.
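These order-of-magnitude estimates can be evaluated numerically with a few lines of Python (the scalar masses below are representative choices for illustration only):
\begin{verbatim}
import math

m_e, m_mu = 0.000511, 0.10566            # lepton masses in GeV
for m_H in (10.0, 100.0, 1000.0):        # representative scalar masses in GeV
    r_diag = (m_e / m_mu) ** 2                                    # diagonal-coupling ratio
    r_off = -3.0 * (m_e / m_mu) * math.log(m_mu**2 / m_H**2)      # off-diagonal estimate
    print(m_H, r_diag, r_off)
\end{verbatim}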
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figures/mu_egamma01.pdf}
\includegraphics[width=0.4\textwidth]{figures/mu_egamma02.pdf}
\caption{$\mu \to e \gamma$ contributions in case (b) $H_{3}$, $Y_{e\mu} \neq 0$. However, the total amplitude square is zero.}
\label{fig:mu_to_egamma}
\end{figure*}
$\bullet$ {\bf New lattice results on muon (g-2):}
In this paper, my goal is not to address the $(g-2)_\mu$ anomaly but rather to focus on the discovery prospects of the neutral and doubly-charged scalars at future lepton colliders. However, in Fig.~\ref{fig:results}(a), there does exist a region of neutral-scalar parameter space, with mass ranging from $5 \sim 50$~GeV, that can explain $(g-2)_\mu$ and can be partly tested at CLIC in the $e^+ e^- \mu^+ \mu^-$ channel.
The discrepancy of $\Delta a_\mu$ used in this paper comes from the result of the Fermilab Muon $(g-2)$ experiment~\cite{Muong-2:2021ojo}, which, compared with the world average of the SM prediction using the ``R-ratio method''~\cite{Aoyama:2020ynm}, gives a discrepancy of $4.2\sigma$:
\begin{align}
\Delta a_\mu \equiv a_\mu^{\rm exp} - a_\mu^{\rm SM} = (251 \pm 59) \times 10^{-11} \, .
\label{eq:deltamu}
\end{align}
However, the lattice simulation result from the BMW collaboration~\cite{Borsanyi:2020mff} for the leading hadronic contribution to $\Delta a_\mu$ reduces the discrepancy to only $1.5\sigma$. Several new lattice results from other collaborations~\cite{Ce:2022kxy,Alexandrou:2022amy,Colangelo:2022vok,talkLehner2022,talkGottlieb2022,talkColangelo2022} now seem to agree with the BMW result and would correspond to a discrepancy of $\sim 3.3 \sigma$. If the discrepancy of $\Delta a_\mu$ is reduced, the $(g-2)_\mu$ constraints in Fig.~\ref{fig:results} would become stronger, while the $(g-2)_\mu$ $2\sigma$ favored region in Fig.~\ref{fig:results}(a) would become larger accordingly, because, for the same mass, a smaller coupling would be enough to generate the needed value of $\Delta a_\mu$. As a comparison, the lower orange boundary of the $(g-2)_\mu$ $2\sigma$ favored region in Fig.~\ref{fig:results}(a) or (b) corresponds to $\Delta a_\mu = 133 \times 10^{-11}$.
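The numbers quoted above can be checked with a two-line arithmetic sketch:
\begin{verbatim}
delta, sigma = 251.0, 59.0       # measured discrepancy and its uncertainty, in units of 1e-11
print(delta / sigma)             # ~4.25, i.e. the quoted 4.2 sigma discrepancy
print(delta - 2.0 * sigma)       # 133, the lower edge of the 2 sigma favored band
\end{verbatim}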
$\bullet$ {\bf $\bf{\mu \to e \gamma}$:}
The assumption that the diagonal and off-diagonal Yukawa couplings are non-zero only separately, never simultaneously, is crucial: it allows us not to worry about the strong constraint from $\mu \to e \gamma$. One might argue that in case (b), even if only the $Y_{e\mu}$ coupling of $H_{3}$ is non-zero, there are still contributions from the VEV of $H_3$; see Fig.~\ref{fig:mu_to_egamma}. Indeed, the spin-averaged amplitude square of each diagram in Fig.~\ref{fig:mu_to_egamma} is
\begin{equation}
\left\langle|M_1|^2\right\rangle \ = \ \left\langle|M_2|^2\right\rangle \ = \ 4e^{2}|Y_{e\mu}|^{2}v^{2}\frac{p_{e}\cdot p_{\mu}-2m_{e}m_{\mu}}{(m_{e}-m_{\mu})^{2}},
\end{equation}
and could be large. However, the interference terms cancel this contribution identically:
\begin{equation}
\left\langle M_1 M_2^*\right\rangle \ = \ \left\langle M_2 M_1^*\right\rangle \ = \ -4e^{2}|Y_{e\mu}|^{2}v^{2}\frac{p_{e}\cdot p_{\mu}-2m_{e}m_{\mu}}{(m_{e}-m_{\mu})^{2}}.
\end{equation}
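Summing the four terms therefore gives a vanishing total,
\begin{equation}
\left\langle|M_1+M_2|^2\right\rangle \ = \ \left\langle|M_1|^2\right\rangle + \left\langle|M_2|^2\right\rangle + \left\langle M_1 M_2^*\right\rangle + \left\langle M_2 M_1^*\right\rangle \ = \ 0,
\end{equation}
so no $\mu \to e \gamma$ rate is generated in this case.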
\section{Conclusions} \label{sec:conclusions}
Lepton colliders allow for precise measurements of new physics beyond the SM, producing cleaner collision events with less background noise than hadron colliders.
Focusing on the $e,\mu$ sector of the Yukawa coupling matrix of the neutral and doubly-charged scalars, I proposed four characteristic cases and analyzed their discovery prospects in the $e^+ e^- \mu^+ \mu^-$ and $e^\pm e^\pm \mu^\mp \mu^\mp$ channels at future lepton colliders in a model-independent way. The corresponding Yukawa couplings can be probed down to values ranging from $0.005 \sim 0.5$ at future lepton colliders, depending on the case, $\sqrt{s}$, the luminosity, and the mass of the scalar.
\acknowledgments
I am very grateful to Bhupal Dev and Yongchao Zhang for their useful discussions and collaboration during the early stage of this project. I thank Amarjit Soni for the information on the new lattice results of muon $(g-2)$. I also thank Ahmed Ismail for an important remark on $\mu \to e \gamma$ with Fig.~\ref{fig:mu_to_egamma}. This work is supported by the US Department of Energy under Grant No.~DE-SC0017987.
|
{
"arxiv_id": "2302.08594",
"language": "en",
"timestamp": "2023-02-22T02:01:36",
"url": "https://arxiv.org/abs/2302.08594",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
Range image-based convolutional neural networks (CNNs) \cite{squeezeseg, rangenet++, squeezesegv2, squeezesegv3, salsanext} are a critical alternative to point-based \cite{pointnet++, randla, kpconv} and partition-based methods \cite{cylinder3d, point2voxel, minkowski} for LiDAR point cloud semantic segmentation (LiDAR PCSS) when real-time inference is needed for target applications. \cite{salsanext, fidnet} achieve higher segmentation accuracy by applying domain-specific network architecture modifications and loss functions to CNNs borrowed from other 2D vision tasks. \cite{cenet, lite} alleviate the boundary-blurring problem of CNNs by adding a boundary loss. The GPU-accelerated KNN refiner \cite{rangenet++, salsanext, fidnet, cenet} is a common post-processing technique of image-based methods to reclassify points blocked by foreground points during spherical projection, i.e., to compensate for the quantization loss of spherical projection. However, image-based methods still fall behind state-of-the-art partition-based \cite{point2voxel} and hybrid methods \cite{rpvnet} in terms of segmentation accuracy, and the widely-used KNN refiner may incorrectly rectify points (see Fig. \ref{fig:p1}).
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{fig1.jpg}
\end{center}
\caption{The KNN refiner may incorrectly rectify some points that are correctly predicted by the CNN backbone. }
\label{fig:p1}
\end{figure}
In this paper, we propose a plug-and-play transformer-based refiner for image-based LiDAR PCSS to improve the segmentation accuracy by reclassifying uncertain points that are misclassified due to the boundary-blurring problem of CNNs and the quantization loss of spherical projection. Our pipeline combines image-based CNNs with point-based methods sequentially to benefit from each method efficiently via a semantic-augmented point representation. As shown in Fig.~\ref{fig:p2}, the image-based CNN first generates coarse semantic segmentation features. Following that, the uncertain points are selected in two ways: (i) choosing the points with a low probability for their coarse semantic segmentation labels \cite{pointrend}; (ii) selecting background points, i.e., points that are blocked by foreground points during spherical projection and are far away from the foreground points. We then use the geometry (e.g., coordinates) along with the locally augmented semantic segmentation features to train our learnable transformer-based refiner, which contains concatenated self-attention layers and fully connected layers for uncertain point classification.
The contributions of this paper are as follows: 1) We propose a plug-and-play transformer-based uncertain point refiner for image-based LiDAR PCSS, which is generic and can be easily integrated into existing image-based CNN backbones without any network architecture modifications. 2) Existing image-based LiDAR PCSS methods can be further improved using our TransUPR and achieve new state-of-the-art performance, i.e., 68.2\% mIoU, on Semantic-KITTI~\cite{semantickitti}.
\section{Methodology}
\label{sec:method}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=\linewidth]{fig2.jpg}
\end{center}
\caption{The Pipeline of TransUPR.}
\label{fig:p2}
\end{figure*}
As shown in Fig. \ref{fig:p2}, for a LiDAR point cloud $P\in\mathbb{R}^{N \times 3}$, we first convert each point $p_i = (x, y, z, remission)$ of $P$ into a 2D representation $R$, i.e., a range image, by spherical projection. Each pixel in $R$ contains five channels $(x, y, z, r, remission)$, which represent the geometry features of the foreground point only, i.e., the point that has the closest range to the origin. We then utilize an existing range image segmentation CNN as our backbone, which takes $R$ as the input to generate coarse semantic segmentation features. Following that, our TransUPR module selects the uncertain points, locally aggregates the point features for refinement, and produces point-wise fine-grained semantic labels. We explain the mechanism of our proposed TransUPR module from the following three aspects: (1) the local feature aggregation of geometry features and coarse semantic segmentation features; (2) the strategies for selecting uncertain points for training and testing; (3) the architecture of the learnable transformer-based network and self-attention layers.
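For concreteness, a minimal NumPy sketch of the spherical projection step is given below (the vertical field-of-view values are illustrative defaults of the usual KITTI-style setup, not values prescribed by this paper).
\begin{verbatim}
import numpy as np

def spherical_projection(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 4) array [x, y, z, remission] onto an (H, W, 5) range
    image with channels (x, y, z, range, remission); the closest point per
    pixel wins, matching the foreground/background convention of the text."""
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    x, y, z, rem = points[:, 0], points[:, 1], points[:, 2], points[:, 3]
    r = np.linalg.norm(points[:, :3], axis=1)
    yaw = -np.arctan2(y, x)
    pitch = np.arcsin(z / np.maximum(r, 1e-8))
    u = np.clip((0.5 * (yaw / np.pi + 1.0) * W).astype(np.int32), 0, W - 1)
    v = np.clip(((1.0 - (pitch - fov_down) / (fov_up - fov_down)) * H).astype(np.int32),
                0, H - 1)
    image = np.zeros((H, W, 5), dtype=np.float32)
    order = np.argsort(r)[::-1]        # far points first, so near points overwrite them
    image[v[order], u[order]] = np.stack(
        [x[order], y[order], z[order], r[order], rem[order]], axis=1)
    return image
\end{verbatim}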
\subsection{Local Feature Aggregation}
\label{sec:2.1}
We treat the output of the softmax layer in the CNN backbone as the pixel-wise class probabilities, i.e., the coarse semantic segmentation features \cite{pointrend}. The points projected into the same 2D pixel have duplicated coarse semantic segmentation features. Thus, we augment the features of each point by averaging the features of its neighboring points in 3D space to make the features distinguishable. Our implementation is similar to the original KNN \cite{rangenet++}, which means the process can be accelerated on a GPU without adding a heavy computational burden. At the same time, the local feature aggregation module contains a KNN module identical to that of \cite{rangenet++} to generate the refined $p_{c}$ (see Fig. \ref{fig:p2}).
In addition, we concatenate the point geometry attributes $(x, y, z, range, remission)$ and the augmented coarse semantic features as the input feature vectors for the following transformer. Therefore, the finalized feature vector of a 3D point has a size of $N_{geometry} + N_{classes}$, where $N_{geometry} = 5$ and $N_{classes} = 20$ for Semantic-KITTI \cite{semantickitti}.
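A minimal PyTorch sketch of this per-point feature construction is shown below (tensor names are illustrative, and the $k$ nearest neighbors in 3D are assumed to be precomputed).
\begin{verbatim}
import torch

def build_point_features(geometry, coarse_probs, knn_idx):
    """geometry:     (N, 5) attributes (x, y, z, range, remission).
    coarse_probs: (N, C) class probabilities copied from the range image.
    knn_idx:      (N, k) indices of each point's k nearest 3D neighbors.
    Returns (N, 5 + C) features with locally averaged semantic features."""
    aggregated = coarse_probs[knn_idx].mean(dim=1)        # (N, C) neighborhood average
    return torch.cat([geometry, aggregated], dim=1)       # (N, 5 + C)
\end{verbatim}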
\subsection{Strategies of Selecting Uncertain Points}
\label{sec:2.2}
Applying a typical transformer \cite{transformer} to a large-scale LiDAR point cloud imposes a severe GPU memory footprint \cite{pointtransformer}. However, we argue that image-based methods already produce reasonable semantic segmentation labels for most points, and only a small number of points are misclassified due to the boundary-blurring problem and the quantization loss of projection. Thus, we localize the uncertain points to reduce the input size for the following transformer network.
One strategy is to identify uncertain points in the 2D representation, where points are misclassified due to the boundary-blurring problem of CNNs. First, we rank the 2D pixels in ascending order of their top-2 class probability differences, where the class probabilities are the output of the softmax layer of the CNN. The 3D points projected into the same 2D pixel have identical top-2 class probability differences. A lower top-2 class probability difference implies that the assigned semantic label has low confidence. Then, we select the top 8192 points with the lowest top-2 class probability differences into the uncertain point pool.
Another way to localize uncertain points is to view all background points of the spherical projection as uncertain points in 3D space. However, we notice that most background points close to the foreground points share the same semantic label as the foreground points. Therefore, we set a cutoff $c_u$ to filter out the background points that are close to the foreground points and keep the remaining ones as uncertain points. If the distance between a background point and its corresponding foreground point is larger than $c_u$, i.e., $c_u = 1$ for Semantic-KITTI~\cite{semantickitti}, we put the background point into the uncertain point pool. We randomly select $N_u$ points from the uncertain point pool for the refiner training, where $N_u = 4096$ for Semantic-KITTI. All points in the uncertain point pool are refined during model inference via the transformer network.
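The two selection strategies can be sketched as follows (a rough PyTorch illustration; the distance to the corresponding foreground point is approximated here by the range difference, and all names are hypothetical).
\begin{verbatim}
import torch

def select_uncertain_points(coarse_probs, point_range, fg_range,
                            top_k=8192, c_u=1.0):
    """coarse_probs: (N, C) per-point class probabilities; point_range: (N,)
    range of each point; fg_range: (N,) range of the foreground point that
    occupies the same 2D pixel.  Returns indices of the uncertain point pool."""
    top2 = torch.topk(coarse_probs, k=2, dim=1).values
    margin = top2[:, 0] - top2[:, 1]                 # small margin = low confidence
    low_conf = torch.topk(-margin, k=min(top_k, margin.numel())).indices
    background = torch.nonzero(point_range - fg_range > c_u).squeeze(1)
    return torch.unique(torch.cat([low_conf, background]))
\end{verbatim}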
\subsection{Transformer Network and Attention Layers}
Given an input uncertain point $p_u \in \mathbb{R}^{N_{geometry} + N_{classes}}$, a $d_e$-dimensional feature embedding $F_e \in \mathbb{R}^{d_e}$ is first learned via two fully connected layers, where $d_e = 256$. We modify the typical transformer \cite{transformer}, i.e., the na\"ive point cloud transformer \cite{pointtransformer}, with four self-attention layers, where each uncertain point is viewed as a word. As shown in Fig. \ref{fig:p2}, the input of each self-attention layer is the output of the preceding self-attention layer, and the outputs of all self-attention layers are concatenated together for the following uncertain point classification. Let $\mathbf{Q}, \mathbf{K}, \mathbf{V}$ be the \textit{\textbf{query}}, \textit{\textbf{key}}, and \textit{\textbf{value}}. For each attention layer, $\mathbf{Q}$ and $\mathbf{K}$ are first learned by a shared fully connected layer $W_{p}$, and $\mathbf{V}$ is learned via another fully connected layer $W_{v}$,
\begin{equation}
\begin{split}
(\mathbf{Q}, \mathbf{K}) = W_{p}(F_{in}) \\
\mathbf{V} = W_{v}(F_{in}),
\end{split}
\end{equation}
where $F_{in}$ is the input feature embedding. The $\mathbf{Q}$ and $\mathbf{K}$ are utilized to calculate the correlation, i.e., attention weights, by the matrix dot-product, and the attention weights are normalized,
\begin{equation}
\bar{a}_{i, j} = \frac{\tilde{a}_{i, j}}{\sqrt{d}} = \frac{\mathbf{Q} \cdot \mathbf{K^{T}}}{\sqrt{d}},
\end{equation}
where $d$ is the dimension of the learned feature embedding of $\mathbf{Q}$ and $\mathbf{K}$. The output feature of an attention layer is the weighted sum of $\mathbf{V}$ using the attention weights,
\begin{equation}
F_{out} = \mathbf{A} \cdot \mathbf{V} = softmax(\bar{a}_{i, j}) \cdot \mathbf{V} = \frac{\exp{\bar{a}_{i, j}}}{\sum_{k}\exp{\bar{a}_{i, k}}} \cdot \mathbf{V}.
\end{equation}
The dimension of the output features is $256$ for all four self-attention layers. The concatenated features are fed into the following three fully-connected layers for point classification. The loss function of our TransUPR is the sum of a softmax cross-entropy loss and a \textit{Lov{\'a}sz-Softmax} \cite{lovasz} loss,
\begin{equation}
L = L_{wce}(y, \hat{y}) + L_{ls}(C),
\end{equation}
where $y$ is the ground truth label of an uncertain point, and $\hat{y}$ is the predicted label. $C$ is the number of semantic classes for model training. After reclassifying the semantic label of selected uncertain points, we replace the label of uncertain points in refined $p_c$ (see Fig. \ref{fig:p2}) with the label of refined $p_u$.
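A minimal PyTorch sketch of one such self-attention layer, following the three attention equations above, is given below; realizing the shared projection $W_p$ as a single linear layer whose output is split into $\mathbf{Q}$ and $\mathbf{K}$ is one possible reading of the first equation, and the layer size is illustrative.
\begin{verbatim}
import torch
import torch.nn as nn

class SelfAttentionLayer(nn.Module):
    """One self-attention layer over the N_u uncertain-point embeddings."""
    def __init__(self, d=256):
        super().__init__()
        self.w_p = nn.Linear(d, 2 * d)   # shared projection producing Q and K
        self.w_v = nn.Linear(d, d)       # separate projection producing V
        self.d = d

    def forward(self, f_in):             # f_in: (N_u, d)
        q, k = self.w_p(f_in).chunk(2, dim=-1)
        v = self.w_v(f_in)
        attn = torch.softmax(q @ k.transpose(-1, -2) / self.d ** 0.5, dim=-1)
        return attn @ v                  # (N_u, d) output features
\end{verbatim}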
\section{Experiments}
\subsection{Dataset and Experiment Settings}
We evaluate the performance of our proposed TransUPR on the large-scale, challenging Semantic-KITTI dataset~\cite{semantickitti}, which contains 22 sequences of different scenarios. Over 19k scans (sequences 00 to 10 except for 08) are used for training, and sequence 08 is used as the validation set. Note that we do not use sequence 08 for model fine-tuning to achieve higher performance on the benchmark. We combine our TransUPR with existing state-of-the-art image-based methods to demonstrate the efficiency and generality of our proposed uncertain point refiner. For the existing state-of-the-art methods, we utilize the available pre-trained models without fine-tuning on the validation set and freeze the model weights when we train our refiner. We resubmit the predictions of the baseline methods and of the methods with our TransUPR to the Semantic-KITTI benchmark for comparison.
We train our TransUPR for 50 epochs regardless of the CNN backbone. We quantitatively evaluate the results of the methods involved via the mean intersection-over-union (mIoU), which can be formulated as $mIoU = \frac{1}{C}\sum_{c=1}^{C}\frac{TP_{c}}{FN_{c} + FP_{c} + TP_{c}}$, where $TP_c$ is the number of true positive cases of the $c^{th}$ class, $FN_c$ is the number of false negative cases of the $c^{th}$ class, and $FP_c$ is the number of false positive cases of the $c^{th}$ class. The secondary evaluation metric, overall accuracy (oACC), is defined as $oACC = \frac{TP}{TP + FP}$.
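Both metrics can be computed from a $C \times C$ confusion matrix, as in the following sketch (rows are ground-truth classes and columns are predictions; the multi-class form of oACC below coincides with the definition above because every misclassified point is a false positive of exactly one class).
\begin{verbatim}
import numpy as np

def miou_and_oacc(conf):
    """conf: (C, C) confusion matrix, conf[t, p] = #points of class t predicted as p."""
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1e-12)   # per-class IoU
    oacc = tp.sum() / np.maximum(conf.sum(), 1e-12)
    return iou.mean(), oacc
\end{verbatim}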
\subsection{Experiment Results}
As the results in Table \ref{tab:1} show, existing image-based PCSS methods achieve clear improvements in mIoU and oACC with our proposed TransUPR and without any architecture modifications. By combining CENet-512 \cite{cenet} with our TransUPR, we achieve state-of-the-art performance, i.e., $68.2\%$ mIoU, among image-based methods, which is $+0.6\%$ in mIoU and $+0.4\%$ in oACC over the original state-of-the-art CENet-512 \cite{cenet}. Regarding the detailed class-wise IoU, our method improves the IoU in most of the classes except for \textit{motorcyclist} and \textit{traffic-sign}. At the same time, CENet-512 with our TransUPR can still achieve about 19 FPS on a single Nvidia RTX 3090, which is much faster than the scanning frequency, i.e., 10~Hz, of LiDAR sensors. Fig. \ref{fig:p3} shows the visualization results of CENet-512 with our TransUPR on the validation sequence 08.
\begin{table*}[h]
\centering
\scalebox{0.6}{
\begin{tabular}{c|ccccccccccccccccccc|cc}
\hline
Methods & \textit{\rotatebox[origin=c]{90}{car}} & \textit{\rotatebox[origin=c]{90}{bicycle}} & \textit{\rotatebox[origin=c]{90}{motorcycle}}& \textit{\rotatebox[origin=c]{90}{truck}} & \textit{\rotatebox[origin=c]{90}{other-vehicle}} & \textit{\rotatebox[origin=c]{90}{person}} & \textit{\rotatebox[origin=c]{90}{bicyclist}} & \textit{\rotatebox[origin=c]{90}{motorcyclist}} & \textit{\rotatebox[origin=c]{90}{road}} & \textit{\rotatebox[origin=c]{90}{parking}} & \textit{\rotatebox[origin=c]{90}{sidewalk}} & \textit{\rotatebox[origin=c]{90}{other-ground}} & \textit{\rotatebox[origin=c]{90}{building}} & \textit{\rotatebox[origin=c]{90}{fence}} & \textit{\rotatebox[origin=c]{90}{vegetation}} &
\textit{\rotatebox[origin=c]{90}{trunk}} & \textit{\rotatebox[origin=c]{90}{terrain}} &
\textit{\rotatebox[origin=c]{90}{pole}} & \textit{\rotatebox[origin=c]{90}{traffic-sign}} & \textit{\textbf{\rotatebox[origin=c]{90}{oACC}}} & \textit{\textbf{\rotatebox[origin=c]{90}{mIoU}}}\\
\hline \hline
SqueezeSeg\cite{squeezeseg} & 68.8 & 16.0 & 4.1 & 3.3 & 3.6 & 12.9 & 13.1 & 0.9 & 85.4 & 26.9 & 54.3 & 4.5 & 57.4 & 29.0 & 60.0 & 24.3 & 53.7 & 17.5 & 24.5 & - & 29.5\\
SqueezeSeg-CRF\cite{squeezeseg} & 68.3 & 18.1 & 5.1 & 4.1 & 4.8 & 16.5 & 17.3 & 1.2 & 84.9 & 28.4 & 54.7 & 4.6 & 61.5 & 29.2 & 59.6 & 25.5 & 54.7 & 11.2 & 36.3 & - & 30.8 \\
SequeezeSegV2 \cite{squeezesegv2} & 81.8 & 18.5 & 17.9 & 13.4 & 14.0 & 20.1 & 25.1 & 3.9 & 88.6 & 45.8 & 67.6 & 17.7 & 73.7 & 41.1 & 71.8 & 35.8 & 60.2 & 20.2 & 36.3 & - & 39.7\\
SequeezeSegV2-CRF \cite{squeezesegv2} & 82.7 & 21.0 & 22.6 & 14.5 & 15.9 & 20.2 & 24.3 & 2.9 & 88.5 & 42.4 & 65.5 & 18.7 & 73.8 & 41.0 & 68.5 & 36.9 & 58.9 & 12.9 & 41.0 & - & 39.6\\
SequeezeSegV3 \cite{squeezesegv3} & 92.5 & 38.7 & 36.5 & 29.6 & 33.0 & 45.6 & 46.2 & 20.1 & 91.7 & 63.4 & 74.8 & 26.4 & 89.0 & 59.4 & 82.0 & 58.7 & 65.4 & 49.6 & 58.9 & - & 55.9\\
RangeNet++ \cite{rangenet++} & 91.4 & 25.7& 34.4 &25.7& 23.0& 38.3 &38.8 &4.8 &91.8& 65.0& 75.2& 27.8& 87.4& 58.6& 80.5& 55.1& 64.6& 47.9& 55.9& - & 52.2\\
SalsaNext \cite{salsanext} & 91.9 & 46.0& 36.5& 31.6& 31.9& 60.9& 61.7& 19.4& 91.9 & 63.6 & \textbf{76.1} & 28.9 & 89.8 & 63.1 & 82.1 & 63.5 & 66.4 & 54.0 & 61.4 & 90.0 & 59.0 \\
FIDNet + KNN \cite{fidnet} & 92.3 & 45.6 & 41.9 & 28.0 & 32.6 & 62.1 & 56.8 & 30.6 & \textbf{90.9} & 58.4 & 74.9 & 20.5 & 88.6 & 59.1 & \textbf{83.1} & 64.6 & 67.8 & 53.1 & 60.1 & 89.6 & 58.5\\
CENet512\cite{cenet} & 93.0 & 51.2 & 67.2 & 58.4 & 62.4 & 69.1 & 70.0 & \textbf{54.3} & 90.3 & 70.0 & 74.5 & 40.8 & 88.6 & 61.3 & 81.6 & 63.5 & 69.2 & 55.5 & \textbf{63.1} & 89.9 & 67.6 \\
\hline
SalsaNext \cite{salsanext} + \textbf{TransUPR} & 92.4 & 45.3 & 37.0 & 30.9 & 31.5 & 61.9 & 63.2 & 16.2 & 92.0 & 64.0 & 76.2 & 28.9 & \textbf{90.3} & \textbf{63.9} & 83.0 & 66.0 & 67.2 & 55.2 & 60.3 & \textbf{90.4} & 59.2 \\
FIDNet\cite{fidnet} + \textbf{TransUPR} & 92.6 & 43.6 & 42.0 & 29.7 & 32.7 & 63.3 & 58.8 & 27.0 & 90.8 & 58.5 & 74.5 & 20.2 & 88.7 & 59.9 & 82.9 & \textbf{66.5} & 67.7 & 54.1 & 61.0 & 90.1 & 58.7 \\
CENet512\cite{cenet} + \textbf{TransUPR} & \textbf{93.3} & \textbf{53.6} & \textbf{68.1} & \textbf{58.6} & \textbf{63.7} & \textbf{70.0} & \textbf{70.6} & 52.3 & 90.4 & \textbf{70.2} & 74.8 & \textbf{42.1} & 89.1 & 62.8 & 82.3 & 64.6 & \textbf{69.4}& \textbf{56.4}& 63.0 & 90.3 & \textbf{68.2} \\
\hline
\end{tabular}}
\caption{Quantitative Results Comparison on Semantic KITTI Test Set (Sequences 11 to 21). Scores are given in percentage (\%). The input size is $64 \times 2048$ except for CENet-512, which takes range images with a size of $64 \times 512$ as inputs.}
\label{tab:1}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\textwidth, height=6.4cm]{fig3.jpg}
\end{center}
\caption{Visualization results on sequence 08. Column 1: point clouds with ground truth labels. Column 2: point clouds with predicted labels of CENet 512 with a KNN refiner. Column 3: point clouds with predicted labels of CENet with our TransUPR.}
\label{fig:p3}
\end{figure*}
\subsection{Ablation Study}
In addition to the geometry features discussed in Section \ref{sec:2.1}, FIDNet \cite{fidnet} calculates a normal vector for each point and concatenates the five-channel range image and the normal vectors as inputs for the CNN backbone. Thus, we explore the impact of normal vectors on the training of our TransUPR, and the quantitative results are shown in Table \ref{tab:2}. The normal vectors of the selected uncertain points do not influence the performance of our TransUPR. Both experiments with TransUPR achieve a 1.3\% improvement in mIoU on sequence 08 compared to the original FIDNet \cite{fidnet}.
Then, we explore the influence of the distance cutoff $c_u$ defined in Section \ref{sec:2.2} during inference with our TransUPR. Note that we set $c_u = 1$ for the training of TransUPR to generate more uncertain points. As Table \ref{tab:3} shows, the resulting mIoU of the methods with TransUPR decreases as $c_u$ increases, which means that when a background point has a distance greater than $1$ to its corresponding foreground point, its back-projected label has low confidence.
\begin{table}[h]
\centering
\begin{tabular}{c|c|c}
\hline
Methods& oACC & mIoU\\
\hline\hline
FIDNet + KNN & 90.2 & 58.6 \\
FIDNet + TransUPR w/o normal & 90.5 & 59.8 \\
FIDNet + TransUPR w/ normal & 90.3 & 59.8 \\
\hline
\end{tabular}
\caption{Comparison of FIDNet + TransUPR with or without normal vectors on sequence 08. }
\label{tab:2}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{c|c|c}
\hline
Methods& oACC & mIoU\\
\hline\hline
SalsaNext & 89.3 & 59.0 \\
SalsaNext + TransUPR $c_u=1$ & 89.9 & 60.6 \\
SalsaNext + TransUPR $c_u=3$ & 89.9 & 60.5 \\
SalsaNext + TransUPR $c_u=4$ & 89.9 & 60.4 \\
\hline
\end{tabular}
\caption{Quantitative results exploring the influence of $c_u$ on uncertain point selection (sequence 08).}
\label{tab:3}
\end{table}
\section{Conclusion}
We propose a plug-and-play transformer-based uncertain point refiner for LiDAR PCSS, i.e., TransUPR, to improve existing range image-based CNNs. Our TransUPR outperforms the existing methods and achieves state-of-the-art performance, i.e., 68.2\% mIoU, on the challenging Semantic-KITTI benchmark. Extensive experiments demonstrate that our method is generic and easy to combine with other existing image-based LiDAR PCSS approaches.
\newpage
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.08601",
"language": "en",
"timestamp": "2023-02-20T02:02:53",
"url": "https://arxiv.org/abs/2302.08601",
"yymm": "2302"
} |
\section{Introduction}
\label{sec:introduction}
Control barrier functions (CBFs) have been recently proposed as a systematic approach to ensure the forward invariance of control-affine systems \cite{xu2015robustness,ames2016control}. By including the CBF condition into a convex quadratic program (QP), a CBF-QP-based controller can act as a safety filter that modifies potentially unsafe control inputs in a minimally invasive fashion. However, most existing CBF works require precise model information, which is often challenging to obtain. Robust CBF control methods have been proposed to address this issue, ensuring safety in the presence of bounded model uncertainties \cite{garg2021robust,nguyen2021robust,verginis2021safety,buch2021robust,wang2022disturbance}. However, the design of a robust CBF controller relies on the bounds of the uncertainties or the Lipschitzness of the unknown dynamics, making it difficult to handle \emph{parametric uncertainties} that are generally unbounded.
Adaptive CBFs (aCBFs) have been proposed to ensure safety for control-affine systems in the presence of parametric uncertainties \cite{taylor2020adaptive,lopez2020robust,zhao2020adaptive,black2021fixed,isaly2021adaptive,cohen2022high,wang2022observer,wang2022robust,huang2022safety}. In contrast to the robust CBF-based methods that consider the worst-case of uncertainties, the aCBF-based approach estimates the unknown parameters online to generate a safe controller through solving a QP.
However, the aforementioned aCBF results only take into account parametric uncertainties in the drift terms. There is limited research that considers parametric uncertainties in the control-input matrix. For example, \cite{azimi2021exponential} uses a filtering-based concurrent learning algorithm in the CBF framework to design safe controllers for single-input-single-output systems with unknown control coefficients; the estimated parameter converges to the true value exponentially, but system safety is not guaranteed before the convergence of the parameter adaptations. In \cite{verginis2022funnel}, a zeroing CBF-based adaptive control algorithm is proposed to solve the funnel control problem for systems with parametrically uncertain control-input matrices, which can achieve tracking of a reference trajectory within a pre-defined funnel; however, this method may fail in singular configurations, as discussed in Remark 1 of that paper. Despite these early contributions, the aCBF-based control design for systems with parametric uncertainties in the control-input matrix terms is still an open field and merits further investigation.
\begin{figure}[!t]
\centering
\includegraphics[width=0.25\textwidth]{figure/overview.png}
\caption{Main results of this paper. }
\label{fig:illustraion}
\end{figure}
Consider a control-affine system $\dot{x} = f(x) + g(x) u$ where $f$, $g$ include parametric uncertainties (e.g., $f$ and $g$ are identified by universal approximators such as neural networks). The main challenge of stabilizing such a system using adaptive controllers arises from the so-called ``loss of controllability" problem; that is, although the system is controllable, the identification model may lose its controllability at some points in time, owing to parameter adaptations \cite{ioannou2012robust,bechlioulis2008robust}.
The same issue could happen in the aCBF-based control design, which will result in the emptiness of the admissible safe control set and therefore, the infeasibility of the QP.
To the best of our knowledge, a \emph{singularity-free} aCBF-based safe controller has not yet been developed in the literature, although relevant stabilizing adaptive control schemes have been proposed in \cite{ioannou2012robust,xu2003robust,velez2004adaptive,bechlioulis2008robust,krstic1995nonlinear}.
To bridge this gap, this paper proposes a singularity-free aCBF-based control design method for systems with parametric uncertainties in both $f$ and $g$. The safety constraint (i.e., the CBF condition) of the proposed method does not rely on the parameter estimations and thus, the non-emptiness of the admissible safe control set can be verified independent of the online parameter estimation process. The safe control is obtained by solving a \emph{nonlinear program} (NLP), which has a closed-form solution. Furthermore, a data-driven approach is developed to reduce the potential conservatism of the proposed controller by tightening the parameter bounds.
The effectiveness of the proposed control strategy is demonstrated by numerical simulations. Main results of this paper are shown in Fig. \ref{fig:illustraion}.
The rest of this paper is structured as follows. In Section \ref{sec:pre}, an introduction to CBFs and the problem formulation are provided; in Section \ref{sec:main}, the proposed aCBF-based control approaches are presented; in Section \ref{sec:data}, a data-driven method that aims to reduce the conservatism of the proposed control methods is developed; in Section \ref{sec:simulation}, numerical simulation results that validate the proposed methods are presented; and finally, the conclusion is drawn in Section \ref{sec:conclusion}.
\section{Preliminaries \& Problem Statement}
\label{sec:pre}
\subsection{Notation}
For a positive integer $n$, denote $[n]=\{1,2,\cdots,n\}$. For a column vector $x\in\mathbb{R}^{n}$ or a row vector $x\in\mathbb{R}^{1\times n}$, $x_i$ denotes the $i$-th entry of $x$ and $\|x\|$ represents its 2-norm. For a given matrix $A\in\mathbb{R}^{n\times m}$, $A_{ij}$ denotes the $(i,j)$-th entry of the matrix $A$ and $\|A\|$ represents its Frobenius norm. Denote ${\rm diag}(a_1,a_2,\cdots,a_n)\in\mathbb{R}^{n\times n}$ as a diagonal matrix with diagonal entries $a_1,a_2,\cdots,a_n\in\mathbb{R}$.
Given vectors $x,y\in\mathbb{R}^n$, $x\leq y$ is satisfied in the entry-wise sense, i.e., $x_i\leq y_i$, $\forall i\in[n]$, and $x\odot y$ represents the Hadamard product (element-wise product) \cite{horn2012matrix}.
Denote the set of intervals on $\mathbb{R}$ by $\mathbb{IR}$, the set of $n$-dimensional interval
vectors by $\mathbb{IR}^n$, and the set of $n\times m$-dimensional interval matrices by $\mathbb{IR}^{n\times m}$.
The definition of interval operations, e.g., addition, substraction, multiplication, etc., follows those in \cite{moore2009introduction}.
Given two vectors $x,y\in\mathbb{R}^n$ and $x\leq y$, $[x,y]=\left[[x_1,y_1]\ \cdots\ [x_n,y_n]\right]^\top\in\mathbb{IR}^{n}$ represents an interval vector.
Consider the gradient $h_x\triangleq\frac{\partial h}{\partial x} \in \mathbb{R}^{1\times n}$ as a row vector, where $x\in\mathbb{R}^n$ and $h:\mathbb{R}^n\to\mathbb{R}$ is a function of $x$.
\subsection{Control Barrier Function}
Consider a control affine system
\begin{equation}
\dot{x} = f(x) + g(x) u,\label{eqnsysp}
\end{equation}
where $x\in\mathbb{R}^n$ is the state, $u\in U\subset\mathbb{R}^m$ is the control input, $f: \mathbb{R}^n\to\mathbb{R}^n$ and $g:\mathbb{R}^n\to\mathbb{R}^{n\times m}$ are locally Lipschitz continuous functions.
Define a \emph{safe set} $\mathcal{C}=\{ x \in \mathbb{R}^n \mid h(x) \geq 0\}$
where $h$ is a continuously differentiable function.
The function $h$ is called a \emph{(zeroing) CBF} of relative degree 1, if there exists a constant $\gamma>0$ such that $\sup_{u \in U} \left[ L_f h(x) + L_g h(x) u + \gamma h(x)\right] \geq 0$
where $L_fh(x)=\frac{\partial h}{\partial x} f(x)$ and $L_gh(x)=\frac{\partial h}{\partial x} g(x)$ are Lie derivatives \cite{isidori1985nonlinear}. In this paper, we assume there is no constraint on the input $u$, i.e., $U=\mathbb{R}^m$. For any given $x\in\mathbb{R}^n$, the set of all control values that satisfy the CBF condition is defined as
$K(x) = \{ u \in U \mid L_f h(x) + L_g h(x) u + \gamma h(x) \geq 0\}.$
It was proven in \cite{Xu2015ADHS} that any Lipschitz continuous controller $u(x) \in K(x)$ will guarantee the forward invariance of $\mathcal{C}$, i.e., the \emph{safety} of the closed-loop system. The provably safe control law is obtained by solving a convex QP that includes the CBF condition as its constraint. The time-varying CBF with a general relative degree and its safety guarantee for a time-varying system are discussed in \cite{xu2018constrained}.
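For a single CBF constraint and no input bounds, this QP-based safety filter admits the familiar closed-form projection; a minimal sketch is given below (function and variable names are placeholders, and the nominal controller is assumed to be supplied elsewhere).
\begin{verbatim}
import numpy as np

def cbf_qp_filter(u_nom, Lfh, Lgh, h, gamma):
    """Minimally modify u_nom so that Lfh + Lgh @ u + gamma * h >= 0.
    Lgh is a length-m row vector; with a single affine constraint and
    U = R^m, argmin ||u - u_nom||^2 has the closed form below."""
    u_nom = np.asarray(u_nom, dtype=float)
    Lgh = np.asarray(Lgh, dtype=float).ravel()
    residual = Lfh + float(Lgh @ u_nom) + gamma * h
    if residual >= 0.0:
        return u_nom                           # nominal input already satisfies the CBF condition
    denom = float(Lgh @ Lgh)
    if denom < 1e-12:
        return u_nom                           # degenerate case: Lgh is (numerically) zero
    # project onto the boundary of the half-space defined by the CBF condition
    return u_nom - (residual / denom) * Lgh
\end{verbatim}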
\subsection{Problem Formulation}
Consider the following system:
\begin{IEEEeqnarray}{rCl}\label{eqnsys2}
\hspace{-2mm}
\begin{pmatrix}
\dot x_1\\ \dot x_2
\end{pmatrix} &=
f(x) \!+\!f_u(x) +
\begin{pmatrix}
{\bf 0}_{m}\\ f_{\theta}(x)
\end{pmatrix}
+ \begin{pmatrix}
{\bf 0}_{m\times n}\\ g(x) +g_{\lambda}(x)
\end{pmatrix}u,
\end{IEEEeqnarray}
where $x= \begin{pmatrix}x_1\\x_2\end{pmatrix}\in\mathbb{R}^{m+n}$ is the state with $x_1\in\mathbb{R}^m$ and $x_2\in\mathbb{R}^n$, $u\in\mathbb{R}^{n}$ is the control input,
$f:\mathbb{R}^{m+n}\to\mathbb{R}^{m+n}$ and $g:\mathbb{R}^{m+n}\to\mathbb{R}^{n\times n}$ are known Lipschitz functions, $f_u:\mathbb{R}^{m+n}\to\mathbb{R}^{m+n}$ is an unknown Lipschitz function, and $f_\theta:\mathbb{R}^{m+n}\to\mathbb{R}^{n}$ and $g_\lambda:\mathbb{R}^{m+n}\to\mathbb{R}^{n\times n}$ are parametric uncertainties. We assume that $f_\theta$, $g$, and $g_\lambda$ have the following forms:
\begin{IEEEeqnarray}{rCl}
\IEEEyesnumber \label{eqnstructure}
\IEEEyessubnumber
f_\theta(x)&=&\begin{bmatrix}\theta_1^\top\varphi_1(x),&\theta_2^\top\varphi_2(x),&
\cdots,&
\theta_{n}^\top\varphi_{n}(x)
\end{bmatrix}^\top,\\
\IEEEyessubnumber
g(x)&=&{\rm diag}(g_1(x),g_2(x),\cdots,g_n(x)),\\
\IEEEyessubnumber
g_\lambda(x)&=&{\rm diag}(\lambda_1^\top\psi_1(x),\lambda_2^\top\psi_2(x),\cdots,\lambda_{n}^\top\psi_{n}(x)),
\end{IEEEeqnarray}
where $g_i:\mathbb{R}^{m+n}\to\mathbb{R}$ is a known function, $\theta_i\in\mathbb{R}^{p_i}$ and $\lambda_{i}\in\mathbb{R}^{q_i}$ are unknown parameters, and $\varphi_i:\mathbb{R}^{m+n}\to\mathbb{R}^{p_i}$ and $\psi_i:\mathbb{R}^{m+n}\to\mathbb{R}^{q_i}$ are known functions (regressors) with $p_i,q_i$ appropriate positive integers and $i\in[n]$.
Define a safe set $\mathcal{C}\subset\mathbb{R}^{m+n}$ as
\begin{equation}\label{setc}
\mathcal{C}=\{x: h(x)\geq 0\},
\end{equation}
where $h:\mathbb{R}^{m+n}\to\mathbb{R}$ is a continuously differentiable function.
We also make the following two assumptions on the boundedness of the unknown function $f_u$ and the unknown parameters $\theta_i,\lambda_i$.
\begin{assumption}\label{assump:0}
There exist known functions $\underline{f}_u(x)$, $\overline{f}_u(x):\mathbb{R}^{m+n}\to\mathbb{R}^{m+n}$ such that $\underline{f}_u(x)\leq f_u(x)\leq\overline{f}_u(x)$.
\end{assumption}
\begin{assumption}\label{assump:1}
For every $i\in[n]$, there exist known vectors $\overline{\theta}_i, \underline{\theta}_i\in\mathbb{R}^{p_i}$ and $\overline{\lambda}_i,\underline{\lambda}_i\in\mathbb{R}^{q_i}$ such that $\underline{\theta}_i\leq\theta_i \leq\overline{\theta}_i$ and $\underline{\lambda}_i\leq \lambda_i\leq\overline{\lambda}_i$.
\end{assumption}
\begin{remark}\label{remark1}
In the adaptive stabilizing control design problem, bounds for the unknown parameters as given in Assumption \ref{assump:1} are not necessarily required to be known; for example, in \cite{fossen1991adaptive} a control law is designed to ensure the non-positiveness of the derivative of the Lyapunov function, by which the asymptotic stability of the closed-loop system can be proven using Barbalat's lemma \cite{slotine1991applied}. Because CBFs do not have the favourable positive definiteness property that Lyapunov functions have, the CBF-based safe control design is more challenging. Although an aCBF-based control approach is proposed in \cite{taylor2020adaptive} without assuming boundedness of the unknown parameters, its performance is conservative as the system only operates in a subset of the original safe set. In \cite{lopez2020robust}, a robust aCBF-based controller is developed under an assumption similar to Assumption \ref{assump:1}, i.e.,
the unknown parameters and the parameter estimation error both belong to known closed convex sets; however, the system model considered there does not include the parametric uncertainty $g_\lambda$ in the control-input matrix.
\end{remark}
The main problem that will be investigated in this paper is stated as follows.
\begin{problem}\label{prob1}
Consider the system \eqref{eqnsys2} with $f_\theta$, $g$, and $g_\lambda$ given in \eqref{eqnstructure} and the safe set defined in \eqref{setc} where $h$ has a relative degree 1. Suppose that Assumptions \ref{assump:0} and \ref{assump:1} hold. Design a feedback controller $u$ such that the closed-loop system is always safe, i.e., $h(x(t))\geq 0$ for all $t\geq 0$.
\end{problem}
We will propose an aCBF-NLP-based method for solving Problem \ref{prob1} in Section \ref{sec:mainadaptive} and generalize it to the case where $g$ and $g_\lambda$ are non-diagonal in Section \ref{sec:maingeneral}.
Moreover, although we only consider the CBF $h$ with a relative degree 1 in this work, our results can be easily extended to the higher relative degree cases by using techniques in \cite{xu2018constrained,nguyen2016exponential,tan2021high}; a mass-spring system that has a relative degree 2 will be shown in Example \ref{example3} of Section \ref{sec:simulation}.
\section{aCBF-NLP-based Safe Control Design}
\label{sec:main}
In this section, the main result of this work will be presented. In Section \ref{sec:mainadaptive}, an aCBF-NLP-based safe control design approach will be proposed to solve Problem \ref{prob1}; in Section \ref{sec:nlpsolution}, the closed-form solution to the NLP will be presented; in Section \ref{sec:maingeneral}, the proposed method is extended to a more general class of systems.
\subsection{aCBF-NLP-Based Control Design}
\label{sec:mainadaptive}
In this subsection, inspired by \cite{velez2004adaptive}, an aCBF-NLP-based control design method is proposed to solve Problem \ref{prob1}. Recall that $f_\theta$, $g$, $g_\lambda$ have the forms given in \eqref{eqnstructure} where $\theta_i\in\mathbb{R}^{p_i}$ and $\lambda_i\in\mathbb{R}^{q_i}$ are unknown parameters.
We choose arbitrary values $\theta^0_i\in\mathbb{R}^{p_i}$ and $\lambda^0_i\in\mathbb{R}^{q_i}$ satisfying $ \underline{\theta}_i\leq\theta^0_i\leq\overline{\theta}_i$ and $ \underline{\lambda}_i\leq\lambda^0_i\leq\overline{\lambda}_i$ as the nominal values for $\theta_i$ and $\lambda_i$, respectively. Furthermore, we define
\begin{align}\label{eqmunv}
\mu_i\triangleq\|\theta_i-\theta^0_i\|, \quad \nu_i\triangleq\|\lambda_i-\lambda_i^0\|,\quad \forall i\in[n].
\end{align}
According to Assumption \ref{assump:1} and the definition of 2-norm,
\begin{align*}
\mu_i\!\leq\! \bar{\mu}_i\!\triangleq\! \sqrt{\!\sum_{j=1}^{p_i}\max\{((\overline{\theta}_i)_j\!-\!({\theta}_i^0)_j)^2,((\underline{\theta}_i)_j\!-\!({\theta}_i^0)_j)^2\}},\\
\nu_i\!\leq\! \bar{\nu}_i\!\triangleq\!\sqrt{\!\sum_{j=1}^{q_i}\max\{((\overline{\lambda}_i)_j\!-\!({\lambda}_i^0)_j)^2,((\underline{\lambda}_i)_j\!-\!({\lambda}_i^0)_j)^2\}},
\end{align*}
where $(\overline \theta_i)_j$, $(\underline \theta_i)_j$, $(\overline \lambda_i)_j$, $(\underline \lambda_i)_j$ denote the $j$-th entry of $\overline\theta_i$, $\underline\theta_i$, $\overline \lambda_i$, $\underline\lambda_i$, respectively. Note that in this paper the adaptive laws are used to estimate parameters $\mu_i$ and $\nu_i$, which are \emph{scalars}, rather than parameters $\theta_i$ and $\lambda_i$, which are vectors.
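For completeness, the bounds $\bar{\mu}_i$ and $\bar{\nu}_i$ above can be evaluated directly from the interval bounds, e.g., as in the following sketch (illustrative helper; the same function applies to both $\theta_i$ and $\lambda_i$).
\begin{verbatim}
import numpy as np

def norm_bound(lower, upper, nominal):
    """Upper bound on ||p - nominal|| over all p with lower <= p <= upper
    (entry-wise), matching the definitions of mu_bar_i and nu_bar_i."""
    worst = np.maximum((upper - nominal) ** 2, (lower - nominal) ** 2)
    return float(np.sqrt(worst.sum()))
\end{verbatim}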
In the adaptive stabilizing control design problem considered in \cite{xu2003robust,velez2004adaptive}, assumptions are imposed to ensure that the system is uniformly strongly controllable, so that the loss of controllability problem can be avoided. Analogous to \cite[Assumption 1]{xu2003robust}, the following assumption requires that each diagonal entry of $g(x)+g_\lambda(x)$ be bounded away from zero.
\begin{assumption}\label{assump:2}
For any $i\in[n]$ and any $x\in\mathcal{C}$, there exists a positive constant $b_i>0$ such that $g_i(x)+\lambda_i^\top\psi_i(x)$, i.e., the $i$-th diagonal entry of $g(x)+g_\lambda(x)$, satisfies $|g_i(x)+\lambda_i^\top\psi_i(x)|\geq b_i$ and its sign is known. Without loss of generality, we assume that $g_i(x)+\lambda_i^\top\psi_i(x)>0$ for any $i\in[n]$.
\end{assumption}
The following theorem shows an aCBF-based controller that ensures the safety of system \eqref{eqnsys2}.
\begin{theorem}\label{theorem1}
Consider the system \eqref{eqnsys2} with $f_\theta,g,g_\lambda$ specified in \eqref{eqnstructure} and the safe set $\mathcal{C}$ defined in \eqref{setc}. Suppose that\\
(i) Assumptions \ref{assump:0}, \ref{assump:1} and \ref{assump:2} hold;\\
(ii) There exist positive constants $\gamma,\epsilon_1,\epsilon_2,\gamma^\theta_i,\gamma^{\lambda}_i>0$ where $i\in[n]$, such that the following set is non-empty
\begin{equation}\label{cbfcondition1}
K_{BF}(x)\triangleq\left\{{\mathfrak u}\in\mathbb{R}^n\mid \Psi_{0}(x)+\Psi_{1}(x) {\mathfrak u}\geq 0\right\},\;\forall x\in \mathcal{C},
\end{equation}
where $\Psi_{0}(x)=\mathcal{M}+\sum_{i=1}^{n}h_{x_2,i}\theta_i^{0\top}\varphi_i-n(\epsilon_1+\epsilon_2)+\gamma\left[h-\sum_{i=1}^{n}\left(\frac{\bar{\mu}_{i}^2}{2\gamma_{i}^\theta}+\frac{\bar{\nu}_{i}^2}{2\gamma_{i}^\lambda}\right)\right]$,
$\Psi_{1}(x)=[h_{x_2,1}^2(g_{1}+\lambda_1^{0\top} \psi_1),\\ h_{x_2,2}^2(g_{2}+\lambda_2^{0\top} \psi_2),\cdots,h_{x_2,n}^2(g_n+\lambda_{n}^{0\top} \psi_{n})]$,
$\mathcal{M}=h_{x}f+\sum_{j=1}^{m+n}\min\{h_{x,j}\underline{f}_{u,j},h_{x,j}\overline{f}_{u,j}\}$, $h_x=\frac{\partial h}{\partial x}$, $h_{x_2}=\frac{\partial h}{\partial x_2}$, and $h_{x,i}$, $h_{x_2,i}$, $\underline{f}_{u,i}$, $\overline{f}_{u,i}$ denotes the $i$-th entry of $h_x$, $h_{x_2}$, $\underline{f}_u$, $\overline{f}_u$, $i\in[n]$, respectively;\\
(iii) For any $i\in[n]$, $\hat\mu_i$ and $\hat\nu_i$ are estimated parameters governed by the following adaptive laws:
\begin{subequations}\label{adaptive:both}
\begin{align}
\dot {\hat{\mu}}_{i} &=-{\gamma} \hat\mu_{i}+\gamma^{\theta}_i|h_{x_2,i}|\|\varphi_i\|, \label{adaptivelaw1}\\
\dot {\hat{\nu}}_{i} &=-{\gamma} \hat\nu_{i}+\gamma^{\lambda}_ih_{x_2,i}^2|u_{0,i}|\|\psi_i\|,\label{adaptivelaw2}
\end{align}
\end{subequations}
where $\hat\mu_{i}(0)>0,\hat\nu_{i}(0)>0$ and $u_0=[u_{0,1},\cdots,u_{0,n}]^\top$ is a Lipschitz function satisfying $u_0 \in K_{BF}(x)$;\\
(iv) The following inequality holds:
$
h(x(0))\geq \sum_{i=1}^{n}\left( \frac{\hat\mu_i(0)^2+\bar{\mu}_{i}^2}{2\gamma^{\theta}_i}+\frac{\hat\nu_i(0)^2+\bar{\nu}_{i}^2}{2\gamma^{\lambda}_i} \right)
$;\\
\noindent Then, the control input $u=h_{x_2}^\top\odot s(u_0)\in\mathbb{R}^n$ will make $h(x(t))\geq 0$ for $t>0$, where
$s(u_0)\triangleq[s_1(u_{0,1}),s_2(u_{0,2}),\cdots,s_n(u_{0,n})]^\top$ and
\begin{equation}
s_i(u_{0,i})\triangleq u_{0,i}+\frac{\kappa_{1,i}}{b_i}+\frac{\kappa_{2,i}^2u^2_{0,i}}{b_i(\kappa_{2,i}|h_{x_2,i}||u_{0,i}|+\epsilon_2)},\label{adapcontrol}
\end{equation}
with $\kappa_{1,i}=\frac{\hat\mu_{i}^2\|\varphi_i\|^2}{\hat\mu_{i}\|\varphi_i\||h_{x_2,i}|+\epsilon_1}$, $\kappa_{2,i}=\hat\nu_{i}\|\psi_i\||h_{x_2,i}|$, $i\in[n]$.
\end{theorem}
\begin{proof}
From \eqref{adaptive:both}, $\dot {\hat{\mu}}_{i} \geq -{\gamma} \hat\mu_{i},\dot {\hat{\nu}}_{i} \geq -{\gamma} \hat\nu_{i}$ hold. Since $\hat\mu_{i}(0)>0,\hat\nu_{i}(0)>0$, it is easy to see that $\hat\mu_{i}(t)\geq 0$ and $\hat\nu_{i}(t)\geq 0$ for any $t>0$ by the Comparison Lemma \cite[Lemma 2.5]{khalil2002nonlinear}. Define a new candidate CBF $\bar h$ as
$\bar{h}(x,t)=h(x)-\sum_{i=1}^{n}\bigg(\frac{\tilde{\mu}_{i}^2}{2\gamma^{\theta}_i} +\frac{\tilde{\nu}_{i}^2}{2\gamma^{\lambda}_i} \bigg)$,
where $\tilde{\mu}_{i}={\mu}_{i}-\hat{\mu}_{i}$ and $\tilde{\nu}_i={\nu}_i-\hat{\nu}_i$.
It can be seen that $\bar{h}(x(0),0) = h(x(0))-\sum_{i=1}^{n}\bigg( \frac{(\mu_i-\hat\mu_i(0))^2}{2\gamma^{\theta}_i} +\frac{(\nu_i-\hat\nu_i(0))^2}{2\gamma^{\lambda}_i} \bigg)
\geq h(x(0))-\sum_{i=1}^{n}\bigg( \frac{\mu_i^2+\hat\mu_i^2(0)}{2\gamma^{\theta}_i} +\frac{\nu_i^2+\hat\nu_i^2(0)}{2\gamma^{\lambda}_i} \bigg)
\geq h(x(0))-\sum_{i=1}^{n}\bigg( \frac{\bar\mu_i^2+\hat\mu_i^2(0)}{2\gamma^{\theta}_i} +\frac{\bar \nu_i^2+\hat\nu_i^2(0)}{2\gamma^{\lambda}_i} \bigg)\geq 0$, where the first inequality comes from the fact that $\mu_i,\nu_i,\hat\mu_i(0),\hat\nu_i(0)\geq 0$, the second one arises from the definitions of $\bar{\mu}_i,\bar{\nu}_i$,
and the last one holds because of condition (iv).
We claim that $\dot{\bar{h}}\geq-\gamma \bar h$ where $\dot{\bar{h}}$ is the time derivative of $\bar h$. Indeed, $\dot{\bar{h}}
=h_x(f+f_u)+ \sum_{i=1}^{n} (h_{x_2,i}(\theta_i^\top\varphi_i+(g_{i}+\lambda_i^\top \psi_i)u_i)+\frac{\tilde{\mu}_{i}\dot{\hat\mu}_{i}}{\gamma^{\theta}_i} +\frac{\tilde{\nu}_{i}\dot{\hat{\nu}}_{i}}{\gamma^{\lambda}_i} )
\geq \mathcal{M}+\sum_{i=1}^{n} \left(h_{x_2,i}(\theta_i^\top\varphi_i+(g_{i}+\lambda_i^\top \psi_i)u_i)+\frac{\tilde{\mu}_{i}\dot{\hat{\mu}}_{i}}{\gamma^{\theta}_i} +\frac{\tilde{\nu}_{i}\dot{\hat{\nu}}_{i}}{\gamma^{\lambda}_i} \right)$. Substituting \eqref{adapcontrol} into the inequality above and recalling Assumption \ref{assump:2}, we have
\begin{IEEEeqnarray}{rCl}
\dot{\bar{h}}
&\geq&\mathcal{M}+\sum_{i=1}^{n}\bigg(h_{x_2,i}\theta_i^\top\varphi_i+\frac{\tilde{\mu}_{i}\dot{\hat{\mu}}_{i}}{\gamma^{\theta}_i} +\frac{\tilde{\nu}_{i}\dot{\hat{\nu}}_{i}}{\gamma^{\lambda}_i}\bigg)\nonumber\\
&&\!+\!\sum_{i=1}^{n}\!\!h_{x_2,i}^2\bigg((g_i\!+\!\lambda_i^\top \psi_i)u_{0,i}\!+\!\kappa_{1,i}\!+\!\frac{\kappa_{2,i}^2u^2_{0,i}}{\kappa_{2,i}|h_{x_2,i}||u_{0,i}|\!+\!\epsilon_2}\bigg)\nonumber\\
&\geq&\mathcal{M}+\sum_{i=1}^{n}\left(h_{x_2,i}\theta_i^{0\top}\varphi_i+h_{x_2,i}^2(g_{i}+\lambda_i^{0\top} \psi_i)u_{0,i}\right)\nonumber\\
&&\!+\!\sum_{i=1}^{n}\!\left(\!\frac{\tilde{\mu}_{i}\dot{\hat{\mu}}_{i}}{\gamma^{\theta}_i} \!+\!\frac{\tilde{\nu}_{i}\dot{\hat{\nu}}_{i}}{\gamma^{\lambda}_i}\! +\!h_{x_2,i}^2\!\left(\!\kappa_{1,i}\!+\!\frac{\kappa_{2,i}^2u^2_{0,i}}{\kappa_{2,i}|h_{x_2,i}||u_{0,i}|\!+\!\epsilon_2}\!\right)\!\right)\nonumber\\
&&+\sum_{i=1}^{n}(h_{x_2,i}
(\theta_i-\theta_i^0)^\top\varphi_i+h^2_{x_2,i}(\lambda_i-\lambda_i^0)^\top\psi_i u_{0,i}
)\nonumber\\
&\geq& \Psi_0+\Psi_1 u_0+n(\epsilon_1+\epsilon_2)-\gamma\bigg[h-\sum_{i=1}^{n}\bigg(\frac{\bar{\mu}_{i}^2}{2\gamma_{i}^\theta}+\frac{\bar{\nu}_{i}^2}{2\gamma_{i}^\lambda}\bigg)\bigg]\nonumber\\
&&+\sum_{i=1}^{n}\bigg(\!\frac{\tilde{\mu}_{i}\dot{\hat{\mu}}_{i}}{\gamma^{\theta}_i} \!+\!\frac{\tilde{\nu}_{i}\dot{\hat{\nu}}_i}{\gamma^{\lambda}_i}\! \bigg)+\sum_{i=1}^{n}\bigg(\!-\mu_{i}\|\varphi_i\||h_{x_2,i}|\!+\!\kappa_{1,i}h_{x_2,i}^2\nonumber\\
&&-\nu_{i}\|\psi_i\||u_{0,i}|h_{x_2,i}^2+\frac{\kappa_{2,i}^2 u_{0,i}^2 h_{x_2,i}^2}{\kappa_{2,i}|h_{x_2,i}||u_{0,i}|+\epsilon_2}\bigg),\label{dotbarh11}
\end{IEEEeqnarray}
where the third inequality arises from the Cauchy–Schwarz inequality. It is easy to check that $-\hat\mu_{i}\|\varphi_i\||h_{x_2,i}|+\kappa_{1,i}h_{x_2,i}^2=-\frac{\hat\mu_{i}\|\varphi_i\||h_{x_2,i}|\epsilon_1}{\hat\mu_{i}\|\varphi_i\||h_{x_2,i}|+\epsilon_1}\geq -\epsilon_1$ and $-\hat\nu_{i}\|\psi_i\||u_{0,i}|h_{x_2,i}^2+\frac{\kappa_{2,i}^2 u_{0,i}^2 h_{x_2,i}^2}{\kappa_{2,i}|h_{x_2,i}||u_{0,i}|+\epsilon_2}=-\frac{\kappa_{2,i}|h_{x_2,i}||u_{0,i}|\epsilon_2}{\kappa_{2,i}|h_{x_2,i}||u_{0,i}|+\epsilon_2}\geq -\epsilon_2$; furthermore, $\Psi_0+\Psi_1 u_0\geq 0$ because $u_0\in K_{BF}$. Based on these two facts and recalling that $\mu_i=\tilde\mu_i+\hat\mu_i$, ${\nu}_i=\tilde{\nu}_i+\hat{\nu}_i$, one can see that \eqref{dotbarh11} implies $\dot{\bar{h}}\geq\sum_{i=1}^{n}\!\left(\!
\tilde{\mu}_{i}\!\bigg(\!\frac{1}{\gamma_{i}^\theta}\dot{\hat{\mu}}_{i}\!-\!\|\varphi_i\||h_{x_2,i}|\!\right)\!+\!\tilde{\nu}_{i}\left(\!\frac{1}{\gamma_{i}^\lambda}\dot{\hat{\nu}}_{i}\!-\!\|\psi_i\||u_{0,i}|h_{x_2,i}^2\!\bigg)\!
\right)-\gamma\left[h-\sum_{i=1}^{n}\left(\frac{\bar{\mu}_{i}^2}{2\gamma_{i}^\theta}+\frac{\bar{\nu}_{i}^2}{2\gamma_{i}^\lambda}\right)\right]$. Substituting \eqref{adaptive:both} into the inequality above yields $\dot{\bar{h}}\geq -\gamma\sum_{i=1}^{n}\left(\frac{\tilde{\mu}_{i}\hat{\mu}_{i}}{\gamma_{i}^\theta} +\frac{\tilde{\nu}_{i}\hat{\nu}_{i}}{\gamma_{i}^\lambda} \right)-\gamma\left[h-\sum_{i=1}^{n}\left(\frac{\bar{\mu}_{i}^2}{2\gamma_{i}^\theta}+\frac{\bar{\nu}_{i}^2}{2\gamma_{i}^\lambda}\right)\right]$. Since $\hat{\mu}_{i}\tilde{\mu}_{i}=({\mu}_{i}-\tilde{\mu}_{i})\tilde{\mu}_{i}\leq \frac{\mu_{i}^2-\tilde{\mu}_{i}^2}{2}\leq \frac{\bar{\mu}_{i}^2-\tilde{\mu}_{i}^2}{2}$ and $\hat{\nu}_{i}\tilde{\nu}_{i}=({\nu}_{i}-\tilde{\nu}_{i})\tilde{\nu}_{i}\leq \frac{{\nu}_{i}^2-\tilde{\nu}_{i}^2}{2}\leq \frac{\bar{\nu}_{i}^2-\tilde{\nu}_{i}^2}{2}$,
we have $\dot{\bar{h}}\geq-\gamma\left[h-\sum_{i=1}^{n}\left(\frac{\tilde{\mu}_{i}^2}{2\gamma^{\theta}_i} +\frac{\tilde{\nu}_{i}^2}{2\gamma^{\lambda}_i} \right)\right]=-\gamma \bar h$, which shows the correctness of the claim.
Because $\bar h(x(0),0)\geq 0$, it is easy to see that $\bar h(t)\geq 0$ for $t>0$. Since $\bar h\leq h$ by definition, we have $h(t)\geq 0$ for $t>0$, which completes the proof.
\end{proof}
\begin{remark}\label{remark2Kbf}
It should be noted that the CBF condition $\Psi_{0}(x)+\Psi_{1}(x) {\mathfrak u}\geq 0$ in \eqref{cbfcondition1} is imposed on the intermediate variable $u_0$ instead of the actual control input $u$. Furthermore, the CBF condition \eqref{cbfcondition1} relies only on the nominal values of the unknown parameters, so that the non-emptiness of the set $K_{BF}$ can be verified conveniently by selecting the constants in Condition (ii) appropriately. In comparison, the CBF conditions in other aCBF-based works depend on online parameter estimates (i.e., $\Psi_0$ and $\Psi_1$ are functions of the estimated parameters in the adaptive laws), which makes it difficult to rule out singular configurations (i.e., an empty set $K_{BF}$) a priori \cite{taylor2020adaptive,lopez2020robust,verginis2022funnel}.
\end{remark}
\begin{remark}
The number of ODEs for parameter estimation in Theorem \ref{theorem1} is much smaller than that in other aCBF-based approaches such as \cite{taylor2020adaptive,lopez2020robust,verginis2022funnel}. As can be seen from the adaptive laws in \eqref{adaptive:both}, our method only requires solving $2n$ ODEs that estimate the \emph{scalars} $\mu_i\in\mathbb{R}$ and $\nu_i\in\mathbb{R}$ for $i\in[n]$ (cf. \eqref{eqmunv}); in contrast, other aCBF methods have to estimate the original unknown parameters $\theta_i\in\mathbb{R}^{p_i}$ and $\lambda_i\in\mathbb{R}^{q_i}$ for $i\in[n]$, which results in a total of $\sum_{j=1}^{n}(p_j+q_j)$ ODEs. This reduction in the number of ODEs is particularly useful when $p_i$ and $q_i$ are large, e.g., when $\theta_i$ and $\lambda_i$ are weights of deep neural networks.
\end{remark}
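For illustration, the adaptive laws \eqref{adaptive:both} can be propagated alongside the plant with any standard ODE solver. The following is a minimal numerical sketch (not part of the proposed theory, and not the code used in Section \ref{sec:simulation}); it assumes a forward-Euler discretization, and all function and variable names are ours. The gains may be scalars or per-channel arrays.
\begin{verbatim}
import numpy as np

def adaptive_law_step(mu_hat, nu_hat, h_x2, phi_norm, psi_norm, u0,
                      gamma, gamma_theta, gamma_lambda, dt):
    # One forward-Euler step of the adaptive laws in Theorem 1.
    # mu_hat, nu_hat: current estimates of mu_i, nu_i (arrays of shape (n,))
    # h_x2: entries of dh/dx2; phi_norm, psi_norm: ||varphi_i(x)||, ||psi_i(x)||
    # u0: intermediate control u_{0,i}; gamma, gamma_theta, gamma_lambda: gains
    mu_dot = -gamma * mu_hat + gamma_theta * np.abs(h_x2) * phi_norm
    nu_dot = -gamma * nu_hat + gamma_lambda * h_x2**2 * np.abs(u0) * psi_norm
    return mu_hat + dt * mu_dot, nu_hat + dt * nu_dot
\end{verbatim}
Since only the $2n$ scalars $\hat\mu_i$, $\hat\nu_i$ are propagated, the per-step computation does not grow with the dimensions $p_i$, $q_i$ of the unknown parameter vectors.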
The safe control law $u(x)\triangleq[u_1,\dots,u_n]^\top$ in Theorem \ref{theorem1} can be obtained pointwise for any $x\in\mathcal{C}$. Specifically, each $u_i,i\in[n],$ can be obtained by solving the following optimization problem:
\begin{align}
\min_{u_{i}\in\mathbb{R}} \quad & (u_i-u_{d,i})^2\label{cbfnlp1}\tag{aCBF-NLP}\\
\textrm{s.t.}\quad & \Phi^i_0(x)+\Phi^i_1(x) u_{0,i}\geq 0,\nonumber\\
\quad & u_i=h_{x_2,i} s_i(u_{0,i}),\nonumber
\end{align}
where $s_i(\cdot)$ is the function defined in \eqref{adapcontrol}, $u_{d,i}$ is the $i$-th entry of the nominal controller,
\begin{IEEEeqnarray}{rCl}
\IEEEyesnumber \label{psij}
\IEEEyessubnumber \label{psi0j}
\Phi_0^i &=&\begin{cases} \frac{\rho_ih_{x_2,i}^2|g_i+\lambda_i^{0\top} \psi_i|}{\sum_{j=1}^{n}\rho_jh_{x_2,j}^2|g_j+\lambda_j^{0\top} \psi_j|}\Psi_0, & \ {\rm if} \ \Psi_1\neq 0,\\
\Psi_0/n, & \ {\rm otherwise},
\end{cases}\\
\IEEEyessubnumber \label{psi1j}
\Phi_1^i &=& h_{x_2,i}^2(g_i+\lambda_i^{0\top} \psi_i),
\end{IEEEeqnarray}
with $\Psi_0$, $\Psi_1$ defined in Theorem \ref{theorem1}, and $\rho_i>0$, $i\in[n]$, are tuning parameters. Note that \eqref{psi0j} is well-defined since $\sum_{j=1}^{n}\rho_jh_{x_2,j}^2|g_{j}+\lambda_j^{0\top} \psi_j|\neq 0$ whenever $\Psi_1\neq 0$; moreover, $\sum_{i=1}^n\Phi_0^i=\Psi_0$ and $\sum_{i=1}^n\Phi_1^iu_{0,i}=\Psi_1 u_0$.
Different from the traditional CBF-QP formulation \cite{ames2016control,xu2015robustness}, the optimization \eqref{cbfnlp1} is an NLP because of the nonlinear function $s_i(\cdot)$. Solving an NLP is computationally challenging in general; however, optimization \eqref{cbfnlp1} has a \emph{closed-form} solution, which will be discussed in the next subsection.
\begin{remark}
An alternative optimization to obtain the safe control law $u(x)$ can be formulated as:
\begin{align}
\min_{u\in\mathbb{R}^n} \quad & \|u-u_d\|^2\label{cbfnlp0}\\
\textrm{s.t.}\quad & \Psi_0+\Psi_1 u_0\geq 0,\nonumber\\
\quad & u=h_{x_2}^\top\odot s(u_0),\nonumber
\end{align}
where $\Psi_0,\Psi_1,s(\cdot)$ are defined in Theorem \ref{theorem1} and $u_d$ is the nominal controller. The admissible set of $u_0$ in \eqref{cbfnlp0} is larger than that of \eqref{cbfnlp1}, but the existence of a closed-form solution to \eqref{cbfnlp0} is still unclear to us.
\end{remark}
\begin{remark}
The main idea behind the formulation of \eqref{cbfnlp1} is to split the set $K_{BF}$ into $n$ independent sets $K_{BF}^i\triangleq \{{\mathfrak u}\in\mathbb{R}: \Phi_0^i+\Phi_1^i{\mathfrak u} \geq 0\}$,
such that $u_{0,i}\in K_{BF}^i,\forall i\in[n]\implies u_0\in K_{BF}$.
It is easy to see that if $K_{BF}\neq \emptyset$, then $K_{BF}^i\neq\emptyset$ for any $i\in[n]$ and any $x\in\mathcal{C}$: if $\Psi_1\neq 0$ and $\Phi_1^i\neq 0$, then $K_{BF}^i$ is a half-line and thus non-empty; if $\Psi_1\neq 0$ and $\Phi_1^i=0$, then $\Phi_0^i=0$, so $\Phi^i_0(x)+\Phi^i_1(x) u_{0,i}\geq 0$ always holds; if $\Psi_1=0$, then $K_{BF}\neq \emptyset$ implies $\Psi_0\geq 0$, and hence $\Phi^i_1(x)=0$ and $\Phi_0^i=\frac{\Psi_0}{n}\geq 0$, so $\Phi^i_0(x)+\Phi^i_1(x) u_{0,i}\geq 0$ always holds.
\end{remark}
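As an illustration of this splitting, the quantities $\Phi_0^i$ and $\Phi_1^i$ in \eqref{psij} can be evaluated channel-wise from $\Psi_0$ and the nominal input gains. The sketch below is only illustrative (the function and variable names are ours); the array \texttt{g\_nom} stands for the diagonal entries $g_i+\lambda_i^{0\top}\psi_i$ evaluated at the current state.
\begin{verbatim}
import numpy as np

def split_cbf_condition(Psi0, rho, h_x2, g_nom):
    # Channel-wise split (Phi_0^i, Phi_1^i) of the CBF condition.
    # rho, h_x2, g_nom: arrays of shape (n,), g_nom[i] = g_i + lambda_i^0' psi_i.
    Phi1 = h_x2**2 * g_nom
    if np.any(Phi1 != 0.0):                 # Psi_1 is not the zero vector
        w = rho * h_x2**2 * np.abs(g_nom)
        Phi0 = w / w.sum() * Psi0
    else:                                   # Psi_1 = 0: split Psi_0 evenly
        Phi0 = np.full_like(Phi1, Psi0 / Phi1.size)
    return Phi0, Phi1
\end{verbatim}
By construction, $\sum_i\Phi_0^i=\Psi_0$ and $\sum_i\Phi_1^iu_{0,i}=\Psi_1u_0$, so channel-wise feasibility implies $u_0\in K_{BF}$.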
\subsection{Closed-form Solution to the aCBF-NLP}
\label{sec:nlpsolution}
In this subsection, we will discuss the closed-form solution to \eqref{cbfnlp1}. We will focus on the case $n=1$ because the $n>1$ case can be easily solved by considering the $n$ NLPs in \eqref{cbfnlp1} independently.
When $n=1$, the subscript $i$ for all relevant variables defined in Theorem \ref{theorem1} will be discarded for the sake of simplicity.
It is also easy to see that $\Phi_0=\Psi_0$, $\Phi_1=\Psi_1$, and that $h_{x_2}=0\implies u=0$ according to Theorem \ref{theorem1}. Thus, without loss of generality, we assume that $h_{x_2}\neq 0$ in the analysis of this subsection. By substituting $u=h_{x_2}s(u_0)$ into the objective function of \eqref{cbfnlp1}, it is easy to see that \eqref{cbfnlp1} is equivalent to the following optimization when $n=1$:
\begin{align}
\min_{u_0\in\mathbb{R}} \quad & (s(u_0)- \bar u_d)^2\label{cbfnlp}\\%\tag{CBF-NLP-1}\\
\textrm{s.t.}\quad & \Psi_0+\Psi_1 u_0\geq 0,\nonumber
\end{align}
where $\bar u_d=u_d/h_{x_2}$ and $\Psi_0,\Psi_1,s(\cdot)$ are defined in Theorem \ref{theorem1}.
Based on the properties of the function $s(\cdot)$ presented in Lemma \ref{lemma:1} (see Appendix), the optimal solution to \eqref{cbfnlp} can be derived, which in turn yields the closed-form solution to \eqref{cbfnlp1}, as shown in the following proposition.
\begin{proposition}\label{theorem2}
The closed-form solution to \eqref{cbfnlp1} can be represented as
\begin{equation*}
u=\begin{cases}
h_{x_2} \max \left( s\left(-\frac{\Psi_0}{\Psi_1}\right), \bar u_d\right), & {\rm if} \ \mathcal{A}_1\ {\rm holds},\\
h_{x_2}\min \left(s\left(-\frac{\Psi_0}{\Psi_1}\right),\bar u_d\right), & {\rm if} \ \mathcal{A}_2\ {\rm holds},\\
h_{x_2}\max(s(y^*), \bar u_d), & {\rm if}\ \mathcal{A}_3\ {\rm holds},\\
u_d, & {\rm if}\ \Psi_1=0\wedge\bar b-\kappa_2\geq 0,\\
0, &{\rm if} \ h_{x_2}=0,
\end{cases}
\end{equation*}
where $\kappa_2,\Psi_0,\Psi_1$ are given in Theorem \ref{theorem1}, $\bar b=b|h_{x_2}|$, $\bar\epsilon_2=\epsilon_2/|h_{x_2}|$, $y^*=\frac{\bar\epsilon_2[(\kappa_2-\bar b)-\sqrt{\kappa_2(\kappa_2-\bar b)}]}{\kappa_2(\kappa_2-\bar b)}$, $\mathcal{A}_1: h_{x_2} \neq 0\wedge ((\bar b -\kappa_2\geq 0\wedge \Psi_1>0)\vee (\bar b-\kappa_2<0\wedge \Psi_0+\Psi_1 y^*< 0))$, $\mathcal{A}_2: h_{x_2}\neq 0\wedge \bar b-\kappa_2\geq 0\wedge \Psi_1<0$, and $\mathcal{A}_3: h_{x_2}\neq 0\wedge \bar b-\kappa_2< 0\wedge \Psi_0+\Psi_1 y^*\geq 0$.
\end{proposition}
\begin{proof}
Note that similar to the aCBF-QPs presented in \cite{taylor2020adaptive,lopez2020robust}, the optimization \eqref{cbfnlp1} is solved pointwise for a given $(x,\hat\mu,\hat\nu)$, such that $\kappa_1$, $\kappa_2$ defined in \eqref{adapcontrol} and $h_{x_2}$ should be considered as constants when solving \eqref{cbfnlp}.
If $\bar b-\kappa_2\geq 0$, $s(y)$ is monotonically increasing, according to Lemma \ref{lemma:1}. When $\Psi_1>0$, one can see that $K_{BF}=\{{\mathfrak u}:{\mathfrak u}\geq -\frac{\Psi_0}{\Psi_1}\}$ and $s(u_0)\in\left[s\left(-\frac{\Psi_0}{\Psi_1}\right),+\infty\right]$ for any $u_0\in K_{BF}$. It is easy to verify that $s(u_0^*)=\bar u_d$ if $\bar u_d\geq s\left(-\frac{\Psi_0}{\Psi_1}\right)$ and $s(u_0^*)= s\left(-\frac{\Psi_0}{\Psi_1}\right)$ when $\bar u_d<s\left(-\frac{\Psi_0}{\Psi_1}\right)$, where $u_0^*$ denotes the solution to \eqref{cbfnlp}.
Hence, in conclusion, one has $s(u_0^*)=\max \left\{s\left(-\frac{\Psi_0}{\Psi_1}\right),\bar u_d\right\}$, such that the closed-form solution to \eqref{cbfnlp1} is $u=h_{x_2}\max \left\{s\left(-\frac{\Psi_0}{\Psi_1}\right),\bar u_d\right\}$. Performing a similar analysis, one can see that the closed-form solution to \eqref{cbfnlp1} is $u=h_{x_2}\min \left\{s\left(-\frac{\Psi_0}{\Psi_1}\right),\bar u_d\right\}$ when $\Psi_1<0$. If $\Psi_1=0$, then $K_{BF}=\mathbb{R}$ and $s(\cdot)$ maps $K_{BF}$ onto $\mathbb{R}$, such that $s(u_0^*)=\bar u_d$ and $u=u_d$.
On the other hand, if $\bar b-\kappa_2<0$, $s(y)$ attains its global minimum at $y^*$, according to Lemma \ref{lemma:1}. Note that $\Psi_0+\Psi_1y^*<0$ indicates $y^*\notin K_{BF}$, such that $s(u_0) \in \left[s\left(-\frac{\Psi_0}{\Psi_1}\right),+\infty\right)$ for any $u_0\in K_{BF}$ (note that $\Psi_1\neq 0$ in this case, since $\Psi_1=0$ together with the non-emptiness of $K_{BF}$ would give $\Psi_0\geq 0$ and thus $\Psi_0+\Psi_1y^*\geq 0$). Then, one can see that $s(u_0^*)=s\left(-\frac{\Psi_0}{\Psi_1}\right)$ if $\bar u_d\leq s\left(-\frac{\Psi_0}{\Psi_1}\right)$ and $s(u_0^*)=\bar u_d$ when $\bar u_d>s\left(-\frac{\Psi_0}{\Psi_1}\right)$, such that $s(u_0^*)=\max \left\{s\left(-\frac{\Psi_0}{\Psi_1}\right),\bar u_d\right\}$ and the closed-form solution to \eqref{cbfnlp1} is $u=h_{x_2}\max \left\{s\left(-\frac{\Psi_0}{\Psi_1}\right),\bar u_d\right\}$. Furthermore, $\Psi_0+\Psi_1 y^*\geq 0$ implies $y^*\in K_{BF}$, such that $s(u_0)\in [s(y^*), +\infty)$ for any $u_0\in K_{BF}$. Using a similar procedure, one can conclude that the closed-form solution to \eqref{cbfnlp1} is $u=h_{x_2}\max \left\{s(y^*),\bar u_d\right\}$.
\end{proof}
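For completeness, the case analysis of Proposition \ref{theorem2} is summarized in the following numerical sketch. It is only an illustration of the closed-form expression (the function names are ours and not part of the paper), and it assumes that $K_{BF}$ is non-empty, as required by Theorem \ref{theorem1}.
\begin{verbatim}
import numpy as np

def s_func(y, kappa1, kappa2, b, eps2, h_x2):
    # The scalar map s(.) of Theorem 1 (n = 1), with all gains frozen.
    return y + kappa1 / b \
           + kappa2**2 * y**2 / (b * (kappa2 * abs(h_x2) * abs(y) + eps2))

def acbf_nlp_solution(Psi0, Psi1, kappa1, kappa2, b, eps2, h_x2, u_d):
    # Pointwise closed-form solution of the scalar aCBF-NLP (Proposition 1).
    if h_x2 == 0.0:
        return 0.0
    s = lambda y: s_func(y, kappa1, kappa2, b, eps2, h_x2)
    bar_b, bar_ud = b * abs(h_x2), u_d / h_x2
    if bar_b - kappa2 >= 0.0:            # s is monotonically increasing
        if Psi1 > 0.0:
            return h_x2 * max(s(-Psi0 / Psi1), bar_ud)
        if Psi1 < 0.0:
            return h_x2 * min(s(-Psi0 / Psi1), bar_ud)
        return u_d                       # Psi1 = 0: constraint inactive
    # bar_b - kappa2 < 0: s has a global minimizer y_star (Lemma 1)
    bar_eps2 = eps2 / abs(h_x2)
    y_star = bar_eps2 * ((kappa2 - bar_b) - np.sqrt(kappa2 * (kappa2 - bar_b))) \
             / (kappa2 * (kappa2 - bar_b))
    if Psi0 + Psi1 * y_star >= 0.0:      # y_star feasible
        return h_x2 * max(s(y_star), bar_ud)
    return h_x2 * max(s(-Psi0 / Psi1), bar_ud)
\end{verbatim}
The evaluation involves only a handful of scalar operations, which is the main computational advantage over solving a QP or a generic NLP online.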
\subsection{Extension to More General Systems}\label{sec:maingeneral}
In this subsection, we will generalize the aCBF-based control design method proposed in Sec. \ref{sec:mainadaptive} to more general systems. Specifically, we will design a safe controller $u$ for the system \eqref{eqnsys2} with the same $f_\theta$, $\theta_i$, $\varphi_i$, $i\in[n]$, as those defined in \eqref{eqnstructure} and non-diagonal $g$ and $g_\lambda$
whose $(i,j)$-th entries can be expressed as
\begin{equation}\label{ggla}
(g)_{ij}=g_{ij}(x),\ (g_\lambda)_{ij}=\lambda^\top_{ij}\psi_{ij}(x),
\end{equation}
where $g_{ij}:\mathbb{R}^{m+n}\to\mathbb{R}$, $\psi_{ij}:\mathbb{R}^{m+n}\to\mathbb{R}^{q_{ij}}$ are known functions and $\lambda_{ij}\in\mathbb{R}^{q_{ij}}$ are vectors of unknown parameters, $i\in[n],j\in[n]$.
Similar to Assumption \ref{assump:1}, we assume that $\theta_i$ and $\lambda_{ij}$ are upper and lower bounded by known vectors.
\begin{assumption}\label{assump:3}
For every $i,j\in[n]$,
there exist known vectors $\overline{\theta}_i, \underline{\theta}_i\in\mathbb{R}^{p_i}$ and $\overline{\lambda}_{ij},\underline{\lambda}_{ij}\in\mathbb{R}^{q_{ij}}$, such that $\underline{\theta}_i\leq\theta_i \leq\overline{\theta}_{i}$ and $\underline{\lambda}_{ij}\leq \lambda_{ij}\leq\overline{\lambda}_{ij}$.
\end{assumption}
Similar to \cite[Assumption 1]{xu2003robust}, we assume that $\tilde g\triangleq g+g_\lambda$ is bounded away from singularity by requiring the smallest singular value of $\frac{\tilde g(x)+\tilde g^\top (x)}{2}$ to be lower bounded by a known positive constant.
\begin{assumption}\label{assump:4}
The matrix $\frac{\tilde g(x)+\tilde g^\top (x)}{2}$ is either uniformly positive definite or uniformly negative definite for all $x\in\mathcal{X}$, where $\tilde g=g+g_\lambda$ and $\mathcal{X}\subset\mathbb{R}^{m+n}$ is a compact set with $\mathcal{X}\supset \mathcal{C}$; i.e., there exists a positive constant $b^*>0$ such that $\underline{\sigma}\left(\frac{\tilde g(x)+\tilde g^\top (x)}{2}\right)\geq b^*,\forall x\in\mathcal{X}$,
where $\underline{\sigma}(\cdot)$ represents the smallest singular value of a matrix.
\end{assumption}
Without loss of generality, we assume that $\frac{\tilde g(x)+\tilde g^\top (x)}{2}$ is positive definite for any $x\in\mathcal{C}$ in this subsection. We select arbitrary values $\theta^0_i\in\mathbb{R}^{p_i}$, $\lambda^0_{ij}\in\mathbb{R}^{q_{ij}}$ satisfying $\underline{\theta}_i\leq\theta^0_i\leq \overline{\theta}_i$, $\underline{\lambda}_{ij}\leq\lambda^0_{ij}\leq \overline{\lambda}_{ij}$, $i,j\in[n]$, as the nominal values of $\theta_i$ and $\lambda_{ij}$, respectively. We define
\begin{IEEEeqnarray}{rCl}
\IEEEyesnumber \label{matrixdefine}
\IEEEyessubnumber
\hspace{-5mm}\Theta &=& \begin{bmatrix}
\theta_1^\top &\theta_2^\top &\cdots&\theta_{n}^\top
\end{bmatrix}^\top,\; \Theta^0 = \begin{bmatrix}
\theta_1^{0\top} &\theta_2^{0\top} &\cdots&\theta_{n}^{0\top}
\end{bmatrix}^\top,\\
\IEEEyessubnumber
\hspace{-5mm}\Lambda &=& \begin{bmatrix}
\lambda_{11}^\top &\lambda_{12}^\top &\cdots&\lambda_{nn}^\top
\end{bmatrix}^\top,\; \Lambda^0 = \begin{bmatrix}
\lambda_{11}^{0\top} &\lambda_{12}^{0\top} &\cdots&\lambda_{nn}^{0\top}
\end{bmatrix}^\top, \\
\IEEEyessubnumber
\hspace{-5mm}\Omega_\varphi &=& \begin{bmatrix}
\varphi_1^\top &\varphi_2^\top &\cdots&\varphi_{n}^\top
\end{bmatrix}^\top, \;
\Omega_\psi = \begin{bmatrix}
\psi_{11}^\top &\psi_{12}^\top &\cdots&\psi_{nn}^\top
\end{bmatrix}^\top,\\
\hspace{-5mm}f_\theta^0 &=&\begin{bmatrix}
\theta_1^{0\top}\varphi_1\\\theta_2^{0\top}\varphi_2\\\vdots\\\theta_{n}^{0\top}\varphi_{n}
\end{bmatrix}, \;
\IEEEyessubnumber \label{matrixdefine:gla}
g_\lambda^0=\begin{bmatrix}
\lambda_{11}^{0\top} \psi_{11}&\lambda_{12}^{0\top} \psi_{12}&\cdots&\lambda_{1n}^{0\top} \psi_{1n}\\
\lambda_{21}^{0\top} \psi_{21}&\lambda_{22}^{0\top} \psi_{22}&\cdots&\lambda_{2n}^{0\top} \psi_{2n}\\
\cdots &\cdots &\cdots&\cdots\\
\lambda_{n1}^{0\top} \psi_{n1}&\lambda_{n2}^{0\top} \psi_{n2}&\cdots&\lambda_{nn}^{0\top} \psi_{nn}
\end{bmatrix}.
\end{IEEEeqnarray}
and
\begin{equation}\label{munugeneral}
\mu=\|\Theta-\Theta^0\|,\; \nu=\|\Lambda-\Lambda^0\|.
\end{equation}
According to Assumption \ref{assump:3}, one can see that
\begin{align*}
\mu\leq \bar\mu\triangleq\sqrt{\sum_{j=1}^{M}\max\{(\overline\Theta_j-\Theta^0_j)^2,(\underline\Theta_j-\Theta^0_j)^2\}},\\
\nu\leq\bar \nu\triangleq\sqrt{\sum_{j=1}^{N}\max\{(\overline\Lambda_j-\Lambda^0_j)^2,(\underline\Lambda_j-\Lambda^0_j)^2\}},
\end{align*}
where
$M=\sum_{i=1}^{n}p_i$, $N=\sum_{j=1}^{n}\sum_{i=1}^{n}q_{ij}$, and $\overline\Theta$, $\underline\Theta$, $\overline\Lambda$, $\underline\Lambda$ denote the vectors obtained by stacking $\overline\theta_i$, $\underline\theta_i$, $\overline\lambda_{ij}$, $\underline\lambda_{ij}$ in the same order as in \eqref{matrixdefine}.
Analogous to Theorem \ref{theorem1}, the following theorem provides an aCBF-based controller that ensures the safety of system \eqref{eqnsys2} with $g$ and $g_\lambda$ defined in \eqref{ggla}.
\begin{theorem}\label{theorem3}
Consider the system \eqref{eqnsys2} with $f_\theta$ defined in \eqref{eqnstructure} and $g,g_\lambda$ defined in \eqref{ggla}, as well as the safe set $\mathcal{C}$ defined in \eqref{setc}. Suppose that\\
(i) Assumptions \ref{assump:0}, \ref{assump:3} and \ref{assump:4} hold;\\
(ii) There exist positive constants $\gamma,\epsilon_1,\epsilon_2,\gamma_\theta,
\gamma_{\lambda}>0$, such that the following set is non-empty:
\begin{equation}\label{cbfconditiongeneral1}
K_{BF}^{g}\triangleq\{{\mathfrak u}\in\mathbb{R}\mid \Psi_0+\Psi_1 {\mathfrak u}\geq 0\}, \ \forall x\in\mathcal{C},
\end{equation}
where
$\Psi_{0}=\mathcal{M}+h_{x_2} f_\theta^0 -(\epsilon_1+\epsilon_2)+\gamma\left(h-\frac{\bar\mu^2}{2\gamma_\theta}-\frac{\bar\nu^2}{2\gamma_\lambda}\right)$, $\Psi_{1}=h_{x_2}(g+g_\lambda^0)h_{x_2}^\top$, $h_{x_2}=\frac{\partial h}{\partial x_2}$, and $\mathcal{M}$ is the same as that defined in Theorem \ref{theorem1};\\
(iii) $\hat\mu$ and $\hat\nu$ are parameter estimations governed by the following adaptive laws:
\begin{IEEEeqnarray}{rCl}
\IEEEyesnumber \label{adaptive:bothgeneral}
\IEEEyessubnumber \label{adaptivelgeneralaw1}
\dot {\hat{\mu}} &=&-{\gamma} \hat\mu+\gamma_{\theta}\|h_{x_2}\|\|\Omega_\varphi\|, \\
\IEEEyessubnumber \label{adaptivelgeneralaw2}
\dot {\hat{\nu}} &=&-{\gamma} \hat\nu+\gamma_{\lambda}\|h_{x_2}\|^2|u_{0}|\|\Omega_\psi\|,
\end{IEEEeqnarray}
where $\hat\mu(0), \hat\nu(0)>0$ and $u_0\in\mathbb{R}$ is a Lipschitz function satisfying $u_0\in K_{BF}^g(x)$;\\
(iv) The following inequality holds: $h(x(0))\geq \frac{\hat\mu(0)^2+\bar{\mu}^2}{2\gamma_{\theta}}+\frac{\hat\nu(0)^2+\bar{\nu}^2}{2\gamma_{\lambda}}$.\\
Then, the control input $u=s_g(u_0)h_{x_2}^\top\in\mathbb{R}^n$ will make $h(x(t))\geq 0$ for any $t>0$, where
\begin{equation}\label{ugeneral}
s_g(u_0)\triangleq u_0+\frac{\kappa_{1,g}}{b^*}+\frac{\kappa_{2,g}^2u_0^2}{b^*(\kappa_{2,g}\|h_{ x_2}\||u_0|+\epsilon_2)},
\end{equation}
with $\kappa_{1,g}=\frac{\hat\mu^2\|\Omega_\varphi\|^2}{\hat\mu \|\Omega_\varphi\|\left\|h_{x_2}\right\|+\epsilon_1}$ and $\kappa_{2,g}=\hat\nu\|\Omega_\psi\|\|h_{x_2}\|$.
\end{theorem}
\begin{proof}
Assumption \ref{assump:4} indicates that, for any $v\in\mathbb{R}^{n}$, $v^\top (g+g_\lambda)v\geq b^*\|v\|^2$ \cite{xu2003robust}. Similar to the proof of Theorem \ref{theorem1}, one can see that $\hat\mu(t)\geq 0,\hat\nu(t)\geq 0,\forall t>0$. Define a candidate CBF $\bar h$ as $\bar h = h-\frac{1}{2\gamma_\theta}\tilde{\mu}^2-\frac{1}{2\gamma_\lambda}\tilde{\nu}^2$,
where $\tilde\mu = \mu - \hat\mu$ and $\tilde\nu = \nu - \hat\nu$.
We claim that $\dot{\bar{h}}\geq-\gamma\bar h$ where $\dot{\bar{h}}$ is the time derivative of $\bar h$. Indeed, it is easy to see that $\dot{\bar{h}}\geq \mathcal{M} +h_{x_2}(f_\theta + (g+g_\lambda) u )+\frac{1}{\gamma_\theta}\tilde\mu \dot{\hat{\mu}} +\frac{1}{\gamma_\lambda}\tilde\nu \dot{\hat{\nu}}$.
Substituting \eqref{ugeneral} into the inequality above yields $\dot{\bar{h}}\geq \mathcal{M}+ h_{x_2} f_\theta +h_{x_2} (g+g_\lambda)h_{x_2}^\top u_0 + \frac{1}{\gamma_\theta}\tilde\mu \dot{\hat{\mu}} +\frac{1}{\gamma_\lambda}\tilde\nu \dot{\hat{\nu}}+
h_{x_2} (g+g_\lambda)h_{x_2}^\top \bigg(\frac{\hat\mu^2\|\Omega_\varphi\|^2}{b^*(\hat\mu \|\Omega_\varphi\|\|h_{x_2}\|+\epsilon_1)}+\frac{\hat\nu^2\|\Omega_\psi\|^2\|h_{x_2}\|^2u_{0}^2}{b^*(\hat\nu|u_{0}|\|\Omega_\psi\|\|h_{x_2}\|^2+\epsilon_2)}\bigg)
\geq \mathcal{M}+h_{x_2}f^0_\theta +h_{x_2} (g+g^0_\lambda)h_{x_2}^\top u_0 + \frac{1}{\gamma_\theta}\tilde\mu \dot{\hat{\mu}} +\frac{1}{\gamma_\lambda}\tilde\nu \dot{\hat{\nu}}+h_{x_2} (f_\theta-f_\theta^0)
+h_{x_2}(g_\lambda-g_\lambda^0)h_{x_2}^\top u_0+\frac{\hat\mu^2\|h_{x_2}\|^2\|\Omega_\varphi\|^2}{\hat\mu \|\Omega_\varphi\|\|h_{x_2}\|+\epsilon_1}+\frac{\hat\nu^2\|\Omega_\psi\|^2\|h_{x_2}\|^4 u_{0}^2}{\hat\nu|u_{0}|\|\Omega_\psi\|\|h_{x_2}\|^2+\epsilon_2}
\geq \Psi_0+\Psi_1u_0 +(\epsilon_1+\epsilon_2)-\gamma\left(h-\frac{1}{2\gamma_\theta}\bar{\mu}^2-\frac{1}{2\gamma_\lambda}\bar{\nu}^2\right)+ \frac{1}{\gamma_\theta}\tilde\mu \dot{\hat{\mu}} +\frac{1}{\gamma_\lambda}\tilde\nu \dot{\hat{\nu}}-\mu\|h_{x_2}\|\|\Omega_\varphi\|
-\nu\|h_{x_2}\|^2\|\Omega_\psi\||u_0|+\frac{\hat\mu^2\|h_{x_2}\|^2\|\Omega_\varphi\|^2}{\hat\mu \|\Omega_\varphi\|\|h_{x_2}\|+\epsilon_1}+\frac{\hat\nu^2\|\Omega_\psi\|^2\|h_{x_2}\|^4u_{0}^2}{\hat\nu|u_{0}|\|\Omega_\psi\|\|h_{x_2}\|^2+\epsilon_2}$,
where the second inequality is from Assumption \ref{assump:4} and the third inequality comes from Lemma \ref{lemma1} shown in Appendix. Selecting $u_0\in K_{BF}^g$ we have $ \dot{\bar{h}}\geq \epsilon_1+\epsilon_2-\gamma\left(h-\frac{1}{2\gamma_\theta}\bar{\mu}^2-\frac{1}{2\gamma_\lambda}\bar{\nu}^2\right)+\frac{1}{\gamma_\theta}\tilde\mu\left(\dot{\hat{\mu}}-\gamma_\theta\|h_{x_2}\|\|\Omega_\varphi\|\right)+\frac{1}{\gamma_\lambda}\tilde\nu\left(\dot{\hat{\nu}}-\gamma_\lambda\|h_{x_2}\|^2\|\Omega_\psi\||u_0|\right)
-\hat\mu \|h_{x_2}\|\|\Omega_\varphi\|+\frac{\hat\mu^2\|h_{x_2}\|^2\|\Omega_\varphi\|^2}{\hat\mu \|\Omega_\varphi\|\|h_{x_2}\|+\epsilon_1}-\hat \nu\|h_{x_2}\|^2\|\Omega_\psi\||u_0| + \frac{\hat\nu^2\|\Omega_\psi\|^2\|h_{x_2}\|^4u_{0}^2}{\hat\nu|u_{0}|\|\Omega_\psi\|\|h_{x_2}\|^2+\epsilon_2}
\geq
-\gamma\left(h-\frac{1}{2\gamma_\theta}\bar{\mu}^2-\frac{1}{2\gamma_\lambda}\bar{\nu}^2\right)+\frac{1}{\gamma_\theta}\tilde\mu\left(\dot{\hat{\mu}}-\gamma_\theta\|h_{x_2}\|\|\Omega_\varphi\|\right)+\frac{1}{\gamma_\lambda}\tilde\nu\left(\dot{\hat{\nu}}-\gamma_\lambda\|h_{x_2}\|^2\|\Omega_\psi\||u_0|\right).$
Substituting \eqref{adaptive:bothgeneral} into the inequality above, we have $\dot{\bar{h}}\geq -\gamma\left(h-\frac{1}{2\gamma_\theta}\bar{\mu}^2-\frac{1}{2\gamma_\lambda}\bar{\nu}^2\right) -\frac{\gamma}{\gamma_\theta}\tilde\mu \hat\mu-\frac{\gamma}{\gamma_\lambda}\tilde\nu \hat\nu.$
Similar to the proof of Theorem \ref{theorem1}, one can see $\hat{\mu}\tilde{\mu}\leq \frac{\bar{\mu}^2}{2}-\frac{\tilde{\mu}^2}{2}$ and $\hat{\nu}\tilde{\nu}\leq \frac{\bar{\nu}^2}{2}-\frac{\tilde{\nu}^2}{2}$, which implies that $\dot{\bar{h}}\geq-\gamma\bar h$. Our claim is thus proven.
Note that $\bar h(x(0),0)\geq 0$ because of condition (iv). Hence, one can conclude that $\bar h(t)\geq 0,\forall t>0$, and thus, $h(x(t))\geq 0,\forall t>0$.
\end{proof}
\begin{remark}\label{remark:generaldisadvantage}
Compared with Theorem \ref{theorem1}, Theorem \ref{theorem3} provides a safety guarantee for a more general class of systems, but the resulting safe controller tends to have a more conservative performance. This is because the control $u\in \mathbb{R}^n$ is designed to have the particular structure $u=s_g(u_0)h_{x_2}^\top$, which forces $u$ to always be proportional to $h_{x_2}^\top$, in order to deal with the non-diagonal structures of $g$ and $g_\lambda$. How to improve the design so as to generate a less conservative safe controller will be our future work.
\end{remark}
The safe controller $u(x)$ in Theorem \ref{theorem3} can be obtained pointwise for any $x\in\mathcal{C}$ via solving the following optimization problem:
\begin{align}
\min_{u\in\mathbb{R}^n} \quad & \|u- u_d\|^2\label{cbfnlpgeneral}\\
\textrm{s.t.}\quad & \Psi_0+\Psi_1 u_0\geq 0,\nonumber\\
\quad & u=s_g(u_0)h_{x_2}^\top,\nonumber
\end{align}
where $\Psi_0$ and $\Psi_1$ are defined in Theorem \ref{theorem3}. The closed-form solution of \eqref{cbfnlpgeneral} can be obtained by using Proposition \ref{theorem2}.
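Although \eqref{cbfnlpgeneral} is posed over $u\in\mathbb{R}^n$, the constraint $u=s_g(u_0)h_{x_2}^\top$ makes it a scalar problem in $u_0$. One possible reduction to the scalar routine sketched after Proposition \ref{theorem2} is given below; it is our own illustrative construction (not stated in the paper) and assumes that the nominal value entering the case analysis is the least-squares coefficient of $u_d$ along the direction $h_{x_2}^\top$, with $|h_{x_2}|$ replaced by $\|h_{x_2}\|$ and $b$ by $b^*$.
\begin{verbatim}
import numpy as np

def acbf_nlp_solution_general(Psi0, Psi1, kap1g, kap2g, b_star, eps2,
                              h_x2_vec, u_d_vec):
    # Illustrative reduction of the optimization over u in R^n to the
    # scalar case handled by acbf_nlp_solution (see Proposition 1 sketch).
    nh = np.linalg.norm(h_x2_vec)
    if nh == 0.0:
        return np.zeros_like(u_d_vec)
    c_d = float(h_x2_vec @ u_d_vec) / nh**2      # coefficient of u_d along h_x2'
    u_scal = acbf_nlp_solution(Psi0, Psi1, kap1g, kap2g, b_star, eps2,
                               h_x2=nh, u_d=c_d * nh)
    return (u_scal / nh) * h_x2_vec              # u = s_g(u_0^*) h_x2'
\end{verbatim}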
\section{Tightening Parameter Bounds via a Data-driven Approach}\label{sec:data}
The controller design proposed in Section \ref{sec:main} relies on the bounds of unknown parameters as shown in Assumptions \ref{assump:1} and \ref{assump:3}. If the prior knowledge of the parameter bounds is poor, the control performance will tend to be conservative (see simulation examples in Section \ref{sec:simulation}). In this section, we present a data-driven approach to get tighter bounds and more accurate nominal values for the unknown parameters. Combining the aCBF-based control design and the data-driven parameter tightening approach provides a mechanism to achieve safety with less conservatism.
Our data-driven method is inspired by the differential inclusion technique proposed in \cite{verginis2021safety}. To better illustrate the main idea, we consider the system \eqref{eqnsys2} with $m=0$ and $n=1$ shown as follows:
\begin{equation}\label{eqnsyssimple}
\dot x = f(x)+f_u(x)+\theta^\top\varphi(x)+(g(x)+\lambda^\top \psi(x))u,
\end{equation}
where $x\in\mathbb{R}$ is the state, $u\in\mathbb{R}$ is the control input, $f:\mathbb{R}\to\mathbb{R}$ and $g:\mathbb{R}\to\mathbb{R}$ are known Lipschitz functions, $f_u:\mathbb{R}\to\mathbb{R}$ is an unknown Lipschitz function satisfying Assumption \ref{assump:0}, $\varphi:\mathbb{R}\to\mathbb{R}^p$, $\psi:\mathbb{R}\to\mathbb{R}^q$ are known functions, and $\theta\in\mathbb{R}^p,\lambda\in\mathbb{R}^q$ are unknown parameters. The proposed method can be readily extended to systems with multiple inputs by considering each control channel separately.
Recall that $x_i$ denotes the $i$-th entry of $x$ where $x$ is either a column or a row vector. Given a dataset $\mathcal{E}=\{x^i,\dot{x}^i,u^i\}_{i=1}^N$, the bounds of $\theta$, $\lambda$, and $f_u$ can be tightened as shown by the following theorem.
\begin{theorem}\label{theoremdatadriven}
Consider the system given in \eqref{eqnsyssimple}. Suppose that (i) Assumptions \ref{assump:0} and \ref{assump:1} hold; (ii) $f_{u}$ has a known Lipschitz constant $L$ for $x\in\mathcal{C}$; (iii) a dataset $\mathcal{E}=\{x^i,\dot{x}^i,u^i\}_{i=1}^N$ is given. Define intervals $\mathcal{P}^0=[\underline{\theta}, \overline{\theta}]^\top \in\mathbb{IR}^{1\times p}$ and $\mathcal{Q}^0=[\underline{\lambda}, \overline{\lambda}]^\top\in\mathbb{IR}^{1\times q}$. Let $x^0$ be an arbitrary state in $\mathcal{C}$ and define $\mathcal{F}^0=[ \underline{f}_{u}(x^0), \overline{f}_u(x^0)]$. For $i\in[N]$, $r\in[p]$, $s\in[q]$, define
\begin{IEEEeqnarray}{rCl}
\IEEEyesnumber
\IEEEyessubnumber
\hspace{-5mm}\mathcal{F}^{i} &=& \left(
\bigcap_{j=0}^{i-1}\{\mathcal{F}^{j}+L\|x^{i}-x^j\|[-1,1]\}
\right)\bigcap\nonumber\\
\IEEEyessubnumber
\hspace{-5mm}&& [\underline{f}_{u}(x^{i}), \overline{f}_u(x^{i})]\bigcap (y^{i}-\mathcal{P}^0\varphi^{i}-\mathcal{Q}^0\psi^iu^i),\label{boundfui}\\
\IEEEyessubnumber
\hspace{-5mm}v_0^i &=& (y^i-\mathcal{F}^{i}-\mathcal{Q}^0\psi^iu^i)\cap (\mathcal{P}^{i-1} \varphi^i),\\
\IEEEyessubnumber
\hspace{-5mm}v_{r}^i &=& (v_{r-1}^i-\mathcal{P}^{i-1}_{r}\varphi^i_{r})\cap \left(\sum_{l=r+1}^p\mathcal{P}^{i-1}_{l}\varphi^i_l\right),\\
\IEEEyessubnumber
\hspace{-5mm}\mathcal{P}^{i}_{r}\!&=&\!\begin{cases}
\!\!((v_{r-1}^i\!-\!\sum_{l=r+1}^p\!\!\mathcal{P}^{i-1}_{l}\!\varphi^i_l)\!\cap\!(\mathcal{P}^{i-1}_{r}\!\varphi^i_{r}))\!\frac{1}{\varphi^i_{r}}, \quad
&\!\!\!\!{\rm if} \ \varphi^i_r\neq 0,\\
\!\!\mathcal{P}^{i-1}_{r}, &\!\!\!\!{\rm otherwise},
\end{cases}
\end{IEEEeqnarray}
and
\begin{IEEEeqnarray}{rCl}
\IEEEyesnumber
\IEEEyessubnumber
\hspace{-3mm}w_0^i &=& (y^i-{\mathcal{F}}^i-\mathcal{P}^{N}\varphi^i)\cap (\mathcal{Q}^{i-1} \psi^iu^i),\\
\IEEEyessubnumber
\hspace{-3mm}w_{s}^i &=& (w_{s-1}^i-\mathcal{Q}^i_{s}\psi^i_s u^i)\cap\left(\sum_{l=s+1}^q\mathcal{Q}^{i-1}_{l}\psi^i_l u^i\right),\\
\IEEEyessubnumber
\hspace{-3mm}\mathcal{Q}^i_{s}&=&\begin{cases}
((w_{s-1}^i-\sum_{l=s+1}^q\mathcal{Q}^{i-1}_{l}\psi^i_l u^i)\cap\\
\indent\indent(\mathcal{Q}^{i-1}_{s}\psi^i_{s}u^i))\frac{1}{\psi^i_{s}u^i},
&{\rm if} \ \psi^i_{s}u^i\neq 0,\\
\mathcal{Q}^{i-1}_{s}, &{\rm otherwise},
\end{cases}
\end{IEEEeqnarray}
where $\varphi^i=\varphi(x^i)$, $\psi^i=\psi(x^i)$, and $y^i=\dot{x}^i-f(x^i)-g(x^i)u^i$. Then, $\theta^\top\in\mathcal{P}^N$, $\lambda^\top\in\mathcal{Q}^N$, and $f_u(x)\in\mathcal{F}(x)\triangleq\bigcap_{j=0}^N\{\mathcal{F}^j+L\|x-x^j\|[-1,1]\}$, for any $x\in\mathcal{C}$.
\end{theorem}
\begin{proof}
Note that $f_u(x^i)\in[\underline{f}_u(x^i),\overline{f}_u(x^i)]$ from Assumption \ref{assump:0} and $f_u(x^i)\in y^i-\mathcal{P}^0\varphi^i-\mathcal{Q}^0\psi^iu^i$ from
$f_u(x^i)=y^i-\theta^\top\varphi^i-\lambda^\top\psi^iu^i$, $\theta^\top\in\mathcal{P}^0$, and $\lambda^\top\in\mathcal{Q}^0$. Because of the Lipschitzness of $f_u$, it is obvious that for any $i\in[N]$, $f_u(x^i)\in\left(\bigcap_{j=0}^{i-1}\{f_u(x^j)+L\|x^i-x^j\|[-1,1]\}\right)\bigcap[\underline{f}_{u}(x^{i}), \overline{f}_u(x^{i})]\bigcap (y^{i}-\mathcal{P}^0\varphi^{i}-\mathcal{Q}^0\psi^iu^i)$ \cite{verginis2021safety}, which indicates $f_u(x^i)\in\mathcal{F}^i$ provided $f_u(x^k)\in\mathcal{F}^k$ for any $0\leq k<i$. Since $f_u(x^0)\in\mathcal{F}^0$,
using mathematical induction one can conclude that $f_u(x^i)\in\mathcal{F}^i$, $\forall i\in[N]$; thus, for any $x\in\mathcal{C}$, $f_u(x)\in\bigcap_{j=0}^N\{f_u(x^j)+L\|x-x^j\|[-1,1]\}\subset \mathcal{F}(x)$.
Next, we will prove that if $\theta^\top \in\mathcal{P}^{i-1}$, then
$\theta^\top \varphi^i-\sum_{l=1}^r\theta_l\varphi^i_l\in v^i_r$ holds for any $0\leq r\leq p$. When $r=0$, one can see that $\theta^\top\varphi^i\in v^i_0$ since $\theta^\top\varphi^i=y^i-f_u(x^i)-\lambda^\top \psi^iu^i\in y^i-\mathcal{F}^i -\mathcal{Q}^0\psi^iu^i$ and $\theta^\top \varphi^i\in\mathcal{P}^{i-1}\varphi^i$. Next, assume that $\theta^\top \varphi^i-\sum_{l=1}^{r-1}\theta_l\varphi^i_l\in v^i_{r-1}$ holds for some $r\in[p]$. It can be seen that $\theta^\top \varphi^i-\sum_{l=1}^{r}\theta_l\varphi^i_l=\theta^\top \varphi^i-\sum_{l=1}^{r-1}\theta_l\varphi^i_l-\theta_r\varphi^i_r\in v^i_{r-1}-\theta_r\varphi^i_r\subseteq v^i_{r-1}-\mathcal{P}^{i-1}_r\varphi^i_r$. On the other hand, one can see $\theta^\top \varphi^i-\sum_{l=1}^{r}\theta_l\varphi^i_l=\sum_{l=r+1}^p\theta_l\varphi^i_l\in \sum_{l=r+1}^p\mathcal{P}^{i-1}_l\varphi_l^i$. Summarizing the discussion above, one can conclude that $\theta^\top \varphi^i-\sum_{l=1}^r\theta_l\varphi^i_l\in (v_{r-1}^i-\mathcal{P}^{i-1}_{r}\varphi^i_{r})\cap(\sum_{l=r+1}^p\mathcal{P}^{i-1}_{l}\varphi^i_l)=v_{r}^i$.
Finally, we will prove $\theta^\top\in\mathcal{P}^i$ for any $0\leq i\leq N$ using mathematical induction. For $i=0$, $\theta^\top\in\mathcal{P}^0$ because of Assumption \ref{assump:1}. Then we assume $\theta^\top \in\mathcal{P}^{i-1}$.
Note that $\theta^\top\varphi^i=y^i-f_u(x^i)-\lambda^\top \psi^iu^i\in y^i-\mathcal{F}^i -\mathcal{Q}^0\psi^iu^i$ and $\theta^\top \varphi^i\in\mathcal{P}^{i-1}\varphi^i$, which implies that $\theta^\top\varphi^i\in v_0^i$. It can be seen that for any $r\in[p]$ one has
$\theta_r\varphi^i_r=\theta^\top\varphi^i-\sum_{l=1}^{r-1}\theta_l\varphi_l^i-\sum_{l=r+1}^p\theta_l\varphi_l^i\in v^i_{r-1}-\sum_{l=r+1}^p \mathcal{P}^{i-1}_l\varphi_l^i$. Moreover, noticing that $\theta_r\varphi^i_r\in\mathcal{P}^{i-1}_r\varphi_r^i$, we have $\theta_r\in\mathcal{P}^i_r$ for any $r\in[p]$, which indicates $\theta^\top \in\mathcal{P}^i$. Following a similar procedure, one can prove that $\lambda^\top \in\mathcal{Q}^i$.
\end{proof}
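To make the recursion of Theorem \ref{theoremdatadriven} concrete, the following sketch implements the $\theta$-part with naive interval arithmetic (the $\lambda$-part is analogous and omitted). It is an illustration only: the helper names are ours, intersections are assumed non-empty (which holds for data consistent with the model and the prior bounds), and the arbitrary point $x^0$ is simply taken to be the first data point.
\begin{verbatim}
import numpy as np

def I(lo, hi):                 # interval stored as [lo, hi]
    return np.array([min(lo, hi), max(lo, hi)], dtype=float)
def iadd(a, b):  return I(a[0] + b[0], a[1] + b[1])
def isub(a, b):  return I(a[0] - b[1], a[1] - b[0])
def iscale(a, c): return I(min(c * a[0], c * a[1]), max(c * a[0], c * a[1]))
def icap(a, b):  return np.array([max(a[0], b[0]), min(a[1], b[1])])
def idot(P, v):                # interval "row vector" P (k x 2) times v in R^k
    out = I(0.0, 0.0)
    for Pr, vr in zip(P, v):
        out = iadd(out, iscale(Pr, vr))
    return out

def tighten_theta(data, P0, Q0, F_prior, L, f, g, phi, psi):
    # data: list of (x, xdot, u); P0, Q0: prior bounds of shapes (p,2), (q,2)
    # F_prior(x): prior interval for f_u(x); L: Lipschitz constant of f_u
    P = P0.copy()
    x0 = data[0][0]
    F_list, X = [F_prior(x0)], [x0]     # F^0 at x^0 (here: first data point)
    for (x, xdot, u) in data:
        ph, ps = phi(x), psi(x)
        y = xdot - f(x) - g(x) * u
        Fi = F_prior(x)                 # prior /\ Lipschitz cones /\ residual
        for Fj, xj in zip(F_list, X):
            Fi = icap(Fi, iadd(Fj, I(-L * abs(x - xj), L * abs(x - xj))))
        Fi = icap(Fi, isub(I(y, y), iadd(idot(P0, ph), iscale(idot(Q0, ps), u))))
        F_list.append(Fi); X.append(x)
        # v_0^i and the entrywise refinement of the theta-bounds
        v = icap(isub(isub(I(y, y), Fi), iscale(idot(Q0, ps), u)), idot(P, ph))
        for r in range(len(P)):
            tail = idot(P[r + 1:], ph[r + 1:])
            if ph[r] != 0.0:
                Pr_new = iscale(icap(isub(v, tail), iscale(P[r], ph[r])),
                                1.0 / ph[r])
            else:
                Pr_new = P[r].copy()
            v = icap(isub(v, iscale(P[r], ph[r])), tail)
            P[r] = Pr_new
    return P, F_list
\end{verbatim}
An analogous pass over the data, with $\varphi^i$ replaced by $\psi^iu^i$ and $\mathcal{P}^0$ by the returned $\mathcal{P}^N$, produces the tightened bounds $\mathcal{Q}^N$ on $\lambda$ as in the theorem.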
\begin{remark}
With the tighter bounds on $\theta$, $\lambda$, and $f_u$ provided by Theorem \ref{theoremdatadriven}, a larger admissible set $K_{BF}(x)$ as defined in \eqref{cbfcondition1} can be obtained. As a result, the data-driven-augmented aCBF-NLP controller tends to have a better control performance while always ensuring safety. It is expected that the closed-loop performance will improve if the dataset $\mathcal{E}$ is large enough and the data in $\mathcal{E}$ are sufficiently ``diverse'' (i.e., the whole state space is sufficiently explored), but a formal proof is still under investigation.
The Lipschitz constant $L$ is needed in Theorem \ref{theoremdatadriven} to infer bounds on $f_u$ from a finite number of data points. Many existing results, such as \cite{wood1996estimation,fazlyab2019efficient}, can be leveraged to estimate the Lipschitz constant of an unknown function. Moreover, the data-driven approach can also be combined with the aCBF-based controller shown in \eqref{cbfnlpgeneral} to reduce its conservatism.
\end{remark}
\section{Simulation}
\label{sec:simulation}
In this section, three examples are provided to demonstrate the effectiveness of the proposed control method.
\begin{example}\label{example1}
\label{singleinput}
Consider the following single-input system:
\begin{equation}
\dot x=f_u+\theta_1\sin(x)+\theta_2 x^2+(\lambda_1+\lambda_2x^2)u,\label{sys:sim}
\end{equation}
where $x\in\mathbb{R}$ is the state and $u\in\mathbb{R}$ is the control input. The function $f_u=\cos(x)$ is unknown in the controller design; we choose the bounds of $f_u$ as $f_u\in[-2,2]$ such that Assumption \ref{assump:0} holds. The true values of the parameters $\theta_1=\theta_2=2,\lambda_1=1,\lambda_2=2$ are unknown in the controller design; we choose the bounds of these parameters as $\theta_1,\theta_2,\lambda_1,\lambda_2\in [-10,10]$ such that Assumption \ref{assump:1} holds. Note that deliberately loose bounds are chosen for the unknown parameters and the unknown function. It is easy to verify that Assumption \ref{assump:2} is satisfied with $b=0.5$. We choose the safe set as $\mathcal{C}=\{x: h(x)\geq 0\}$ where $h(x)=x-1$, that is, we aim to make $x(t)\geq 1$ for all $t\geq 0$. The initial condition of \eqref{sys:sim} is chosen as $x(0)=2$, the reference trajectory is selected as $x_d=3\sin(t)$, and the
nominal control $u_d$ is designed via feedback linearization.
First, we demonstrate the performance of the safe controller obtained from \eqref{cbfnlp1}. We choose the nominal values of the unknown parameters $\theta_1^0 = \theta_2^0=\lambda_2^0 = 0,\lambda_1^0=0.5$, which result in $\bar\mu=\bar\nu=15$, and positive constants $\epsilon_1=\epsilon_2=0.001,\gamma^\theta=550,\gamma^\lambda=300,\gamma=800$ and $\hat \mu(0)=\hat\nu(0)=0.1$. One can verify that Condition (iv) of Theorem \ref{theorem1} holds with the given initial condition, and Condition (ii) of Theorem \ref{theorem1} holds because $\Psi_1=\lambda_1^0+\lambda_2^0x^2 = 0.5> 0$ for any $x\in\mathcal{C}$, which implies that $K_{BF}\neq \emptyset$.
Therefore, all conditions of Theorem \ref{theorem1} are satisfied. Applying the safe controller obtained from \eqref{cbfnlp1}, the state evolution of the closed-loop system is shown as the blue line in Figure \ref{fig:1}(a). One can observe that safety is always guaranteed (i.e., the state does not cross the safety boundary represented by the dashed red line) in the presence of parameter uncertainties and the reference trajectory is well-tracked within the safe set. The evolution of the control input is shown as the blue line in Figure \ref{fig:1}(b).
Next, we consider the aCBF-NLP-based safe controller combined with the data-driven approach. We assume that $f_u$ has a Lipschitz constant $L=1$ and that a dataset of 10 points is given. By using Theorem \ref{theoremdatadriven}, bounds of the unknown parameters are tightened as $\theta_1\in[-0.5042, 4.8060]$, $\theta_2\in[1.8848, 2.0618]$, $\lambda_1\in[0.9050, 1.1550]$, and $\lambda_2\in[0.8471, 3.6036]$. With these new bounds, we choose the nominal parameters as $\theta_1^0=2.1509,\theta_2^0=1.9733,\lambda_1^0=1.03,\lambda_2^0=2.2254$, which result in $\bar\mu=2.6566,\bar\nu=1.3839$. We select $\gamma^\theta=60$ and $\gamma^\lambda=30$ in Theorem \ref{theorem1}, with the remaining parameters $\epsilon_1,\epsilon_2,\gamma,\hat\mu(0),\hat\nu(0)$ chosen the same as above. Applying the data-driven-augmented aCBF-NLP-based safe controller, the state evolution of the closed-loop system is shown as the pink line in Figure \ref{fig:1}(a). One can observe that this controller can also ensure safety and has a better tracking performance than the aCBF-NLP-based controller without the data-driven bound tightening, since it can better track the reference trajectory inside the safe region and it can be closer to the boundary of the safe set when the reference trajectory is outside the safe region. The evolution of the control input is shown as the pink line in Figure \ref{fig:1}(b).
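For readers who wish to reproduce the qualitative behaviour of this example, the following is a minimal closed-loop sketch (not the code used to generate Figure \ref{fig:1}). The nominal tracking term, its gain, and the simple feasible choice $u_0=\max\{u_{0,\mathrm{nom}},-\Psi_0/\Psi_1\}$ (used instead of solving \eqref{cbfnlp1}, which is possible here because $\Psi_1>0$ for all $x$) are illustrative assumptions of ours.
\begin{verbatim}
import numpy as np

# True plant of Example 1, used only to integrate the state
th_true, lam_true = np.array([2.0, 2.0]), np.array([1.0, 2.0])
phi = lambda x: np.array([np.sin(x), x**2])
psi = lambda x: np.array([1.0, x**2])

# Controller-side quantities from the example
th0, lam0 = np.array([0.0, 0.0]), np.array([0.5, 0.0])
mu_bar = nu_bar = 15.0
gam, gam_th, gam_lam, eps1, eps2, b = 800.0, 550.0, 300.0, 1e-3, 1e-3, 0.5
mu_hat, nu_hat, x, dt = 0.1, 0.1, 2.0, 1e-4

for k in range(int(10.0 / dt)):
    t = k * dt
    h, h_x2 = x - 1.0, 1.0                      # h(x) = x - 1
    ph, ps = phi(x), psi(x)
    # Quantities of Theorem 1 (m = 0, n = 1, f = 0, f_u in [-2,2] => M = -2)
    Psi0 = -2.0 + h_x2 * (th0 @ ph) - (eps1 + eps2) \
           + gam * (h - mu_bar**2 / (2*gam_th) - nu_bar**2 / (2*gam_lam))
    Psi1 = h_x2**2 * (lam0 @ ps)                # = 0.5 > 0 for all x
    # Illustrative nominal intermediate control, projected into K_BF
    u0_nom = (3*np.cos(t) - 10.0*(x - 3*np.sin(t))) / (lam0 @ ps)
    u0 = max(u0_nom, -Psi0 / Psi1)
    # Safe control u = h_x2 * s(u0)
    kap1 = mu_hat**2 * (ph @ ph) / (mu_hat*np.linalg.norm(ph)*abs(h_x2) + eps1)
    kap2 = nu_hat * np.linalg.norm(ps) * abs(h_x2)
    u = h_x2 * (u0 + kap1/b + kap2**2*u0**2/(b*(kap2*abs(h_x2)*abs(u0) + eps2)))
    # Adaptive laws and plant integration (forward Euler)
    mu_hat += dt * (-gam*mu_hat + gam_th*abs(h_x2)*np.linalg.norm(ph))
    nu_hat += dt * (-gam*nu_hat + gam_lam*h_x2**2*abs(u0)*np.linalg.norm(ps))
    x += dt * (np.cos(x) + th_true @ ph + (lam_true @ ps) * u)
\end{verbatim}
Recording $x$ along the loop, the state should remain above $1$ (up to the integration error), consistent with Figure \ref{fig:1}(a).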
\begin{figure}[!htbp]
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{figure/1dim/x.png}
\caption{Evolution of the state variable $x$}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{figure/1dim/u.png}
\caption{Evolution of the control input $u$}
\end{subfigure}
\caption{Simulation results of Example \ref{example1}. From (a) it can be seen that both the aCBF-NLP controller and the data-driven augmented aCBF-NLP controller can ensure safety as the trajectories of $x$ always stay in the safe region (i.e., above the dashed red line). One can also see that, when the data-driven techniques developed in Theorem \ref{theoremdatadriven} are adopted, the aCBF-NLP controller has a better control performance. }
\label{fig:1}
\end{figure}
\end{example}
\begin{example}\label{example2}
Consider the following adaptive cruise control system \cite{xu2015robustness}:
\begin{equation}\label{acceqn}
\frac{{\rm d}}{{\rm d} t}\begin{bmatrix}
D\\v_l\\v_f
\end{bmatrix}=\begin{bmatrix}
v_l-v_f\\a\\-\frac{1}{m}(f_0+f_1v_f+f_2v_f^2)
\end{bmatrix}+\begin{bmatrix}
0\\0\\\frac{1}{m}
\end{bmatrix}u,
\end{equation}
where $v_l$ and $v_f$ are the velocities of the lead car and the following car, respectively, $D$ is the distance between the two vehicles, $u$ is the control input,
$F_r\triangleq f_0+f_1v_f+f_2v_f^2$ is the aerodynamic drag term with constants $f_0,f_1,f_2$, and $m$ is the mass of the following car. The true values of the parameters $f_0=0.1\;{\rm N}$, $f_1=5\;{\rm N\cdot s/m}$, $f_2=0.25\;{\rm N\cdot s^2/m}$, $m=1650\;{\rm kg}$ are unknown in the controller design. We assume that $f_0\in[0,10]$, $f_1\in[0,50]$, $f_2\in[0,20]$, $m\in[100,3000]$, and let $\theta=\frac{1}{m}[-f_0\ -f_1\ -f_2]^\top$, $\varphi(v_f)=[1\ v_f\ v_f^2]^\top$, $\lambda=\frac{1}{m}$; one can easily see that Assumption \ref{assump:1} is satisfied with $\theta_1\in[-0.1,0]$, $\theta_2\in[-0.5,0]$, $\theta_3\in[-0.2,0]$, $\lambda\in[0.00033,0.01]$. Note that in \eqref{acceqn} $f_u=0$, so we select $\underline{f}_u=\overline{f}_u=0$, such that Assumption \ref{assump:0} is satisfied. Meanwhile, from \eqref{acceqn} one can easily verify that Assumption \ref{assump:2} holds with $b=1/3000$. The safety constraint of the following car is to keep a safe distance from the lead car, which can be expressed as $D/v_f\geq 1.8$, where $1.8\;{\rm s}$ is the desired time headway. Therefore, the safe set is $\mathcal{C}=\{x: h(x)\geq 0\}$ where $h=D-1.8v_f$. The nominal controller $u_d$ is designed to keep the following car at a desired speed $v_{f,des}=22\;{\rm m/s}$.
We first consider the aCBF-NLP-based controller. We choose the nominal parameters $\theta^0=[-0.05\ -0.5\ -0.2]$, $\lambda^0=1/3000$, which result in $\bar\mu=0.5408$, $\bar\nu=0.0097$, positive constants $\epsilon_1=\epsilon_2=0.001$, $\gamma=300$, $\gamma^\theta=0.01$, $\gamma^\lambda=0.0001$, and initial conditions $\hat \mu(0)=\hat\nu(0)=0.001$. One can verify that all conditions of Theorem \ref{theorem1} hold. Applying the safe controller obtained from \eqref{cbfnlp1}, the state and CBF evolution are shown as the blue lines in Figure \ref{fig:2}. Next, we consider the aCBF-NLP-based controller augmented with a dataset of 5 datapoints. By using Theorem \ref{theoremdatadriven}, the bounds of the unknown parameters are tightened as $\theta_1\in[-0.0013, 0]$, $\theta_2\in[-0.0036, 0]$, $\theta_3\in[-0.001, 0]$, $\lambda\in[0.0006, 0.0007]$. We select the nominal values of the unknown parameters as $\theta^0=[-0.00065\ -0.0018\ -0.0005]$, $\lambda^0=0.00065$, which result in $\bar \mu =0.002$, $\bar\nu=0.00005$. The state and CBF evolution of the closed-loop system with the data-driven-augmented aCBF-NLP controller are shown as the brown lines in Figure \ref{fig:2}. One can see that both controllers can ensure safety in the presence of parameter uncertainties since $h(t)\geq 0$ for any $t>0$; moreover, when the data-driven approach is employed, the aCBF-NLP controller has a better performance.
\begin{figure}[!htbp]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{figure/ACC/vf.png}
\caption{Evolution of $v_f$}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{figure/ACC/h.png}
\caption{Evolution of $h$}
\end{subfigure}
\caption{Simulation results of Example \ref{example2}. The aCBF-NLP controller, either with or without the data-driven techniques, can ensure safety of the system. When combined with the data-driven techniques, the aCBF-NLP controller has a slightly better control performance in terms of maintaining the desired velocity.}
\label{fig:2}
\end{figure}
\end{example}
\begin{example}\label{example3}
\label{sec:simmass}
Consider the mass-spring system as follows (shown in Fig. \ref{fig:mass}):
\begin{equation}\label{example:masseqn}
\frac{{\rm d}}{{\rm d} t}\begin{bmatrix}
x_1\\x_2\\\dot x_1\\\dot x_2
\end{bmatrix}=\begin{bmatrix}
0&0&1&0\\0&0&0&1\\
-\frac{k_1+k_2}{m_1}&\frac{k_2}{m_1}&0&0\\
\frac{k_2}{m_2}&-\frac{k_2}{m_2}&0&0
\end{bmatrix}\begin{bmatrix}
x_1\\x_2\\\dot x_1\\\dot x_2
\end{bmatrix}+\begin{bmatrix}
0&0\\0&0\\
\frac{1}{m_1}&0\\0&\frac{1}{m_2}
\end{bmatrix}\begin{bmatrix}
u_1\\u_2
\end{bmatrix},
\end{equation}
where $x_1, x_2\in\mathbb{R}$ denote the position of two mass points, $u_1,u_2\in\mathbb{R}$ are control inputs, $m_1=m_2=0.2$ represent the mass, and $k_1=k_2=1$ denote the stiffness of two springs. We assume that all functions in \eqref{example:masseqn} are known, that is, $f_u=0$.
Define $\theta_1=[-\frac{k_1+k_2}{m_1}\ \frac{k_2}{m_1}]^\top=[-10\ 5]^\top$, $\theta_2=[\frac{k_2}{m_2}\ -\frac{k_2}{m_2}]^\top=[5\ -5]^\top$, $\lambda_1=\frac{1}{m_1}=5$, and $\lambda_2=\frac{1}{m_2}=5$, which are unknown parameters in control design. One can easily verify that Assumption \ref{assump:1} is fulfilled
with $m_1,m_2\in[0.1,0.5]$ and $k_1,k_2\in[0,5]$, such that $[-100\ 0]^\top\leq\theta_1\leq [0 \ 50]^\top$,
$[0\ -50]^\top\leq\theta_2\leq [50 \ 0]^\top$, $\lambda_1\in[2,10]$, and $\lambda_2\in[2,10]$. It is obvious that Assumption \ref{assump:2} is fulfilled with $b_1=b_2=1$. The desired trajectories are selected as $x_{1d}=0$, $x_{2d}=1+\sin (t)$, with a nominal PD controller $u_d$ designed to track $x_{1d}$, $x_{2d}$.
The safe set is defined as $\mathcal{C}=\{x: h(x)\geq 0\}$ with $h=x_2-x_1-0.5$, which aims to keep the distance between the two masses no smaller than $0.5$. Note that the relative degree of $h$ is equal to 2.
We first consider the aCBF-NLP-based controller proposed in Theorem \ref{theorem1}. We select the nominal parameters $\theta_1^0=[-50\ 25]^\top$, $\theta_2^0=[25\ -25]^\top$, $\lambda_1^0=\lambda_2^0=6$ (which result in $\bar\mu_1=55.9$, $\bar\mu_2=35.36$, $\bar\nu_1=\bar\nu_2=4$), the positive constants $\gamma=300$, $\epsilon_1=\epsilon_2=0.001$, $\gamma^\theta_1=20000$, $\gamma^\theta_2=5000$, $\gamma^\lambda_1=\gamma^\lambda_2=200$, $\rho_1=\rho_2=0.5$, and the initial conditions $\hat\mu_1(0)=\hat\mu_2(0)=\hat\nu_1(0)=\hat\nu_2(0)=0.1$. It is easy to verify that all conditions in Theorem \ref{theorem1} are satisfied.
Applying the safe controller obtained from \eqref{cbfnlp1}, the state and CBF evolution are shown as the blue lines in Fig. \ref{fig:massresult}. Then, the aCBF-NLP-based controller is augmented with a dataset of 4 datapoints. Applying Theorem \ref{theoremdatadriven}, we get the tighter bounds of unknown parameters as $[-15\ 4.4]^\top \leq\theta_1\leq [-7\ 6]^\top$,
$[2.5\ -5.55]^\top\leq\theta_2\leq[6.5\ -4.0833]^\top$, $\lambda_1\in[3.55, 6.35]$, and $\lambda_2\in[2.7167, 6.85]$. The nominal values of the unknown parameters are selected as $\theta_1^0=[-11\ 5.2]^\top$, $\theta_2^0=[4.5\ -4.8]^\top$, $\lambda_1^0=4.95$, $\lambda_2^0=4.7833$, which result in $\bar\mu_1=4.2059$, $\bar\mu_2= 2.1302$, $\bar\nu_1=1.4$, $\bar\nu_2=2.0667$. The state and CBF evolution are represented by the brown lines in Fig. \ref{fig:massresult}. It can be seen that both controllers can guarantee safety since $h(x(t))\geq 0$ for any $t>0$, while the performance of the aCBF-NLP controller is improved if the data-driven approach is adopted.
Finally, we show how the results of Theorem \ref{theorem3} can be applied to \eqref{example:masseqn}. Here we no longer assume that $g$ and $g_\lambda$ are diagonal matrices. It is easy to verify that Assumptions \ref{assump:0}, \ref{assump:3}, \ref{assump:4} hold with $b^*=1$, $\lambda_{11},\lambda_{22}\in[2,10]$, $\lambda_{12},\lambda_{21}\in [-1,1]$, $[-100\ 0]^\top\leq\theta_1\leq [0 \ 50]^\top$, and
$[0\ -50]^\top\leq\theta_2\leq [50 \ 0]^\top$. We select the nominal parameters $\theta^0_1=[-50\ 25]^\top$, $\theta^0_2=[25\ -25]^\top$, $\lambda^0_{11}=\lambda^0_{22}=6$, $\lambda^0_{12}=\lambda^0_{21}=0$, which result in $\bar\mu=66.1438$, $\bar\nu=5.831$, the positive constants $\epsilon_1=\epsilon_2=0.001$, $\gamma_\theta=7000$, $\gamma_\lambda=70$, $\gamma=1000$, and the initial conditions $\hat\mu(0)=\hat\nu(0)=0.1$. One can easily verify that all conditions in Theorem \ref{theorem3} are satisfied. Applying the safe control law obtained from \eqref{cbfnlpgeneral}, the state and CBF evolution are shown in Fig. \ref{fig:massresult2}, from which one can see that safety is ensured since $h(x(t))\geq 0$ for any $t>0$. However, from Fig. \ref{fig:massresult2}(a) and \ref{fig:massresult2}(b), it can be seen that the control performance is conservative, i.e., the desired tracking performance is not well preserved inside the safe region. This phenomenon verifies what we discussed in Remark \ref{remark:generaldisadvantage}, i.e., $u$ might not be close to $u_d$ since it is always proportional to $h_{x_2}^\top$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\textwidth]{figure/illustrate.png}
\caption{Illustration of the mass-spring system in Example \ref{example3}.}
\label{fig:mass}
\end{figure}
\begin{figure}[!htbp]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{figure/mass/diagonal/x1.png}
\caption{Evolution of $x_1$}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{figure/mass/diagonal/x2.png}
\caption{Evolution of $x_2$}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{figure/mass/diagonal/h.png}
\caption{Evolution of $h$}
\end{subfigure}
\caption{Simulation results of Example \ref{example3} using the control scheme shown in \eqref{cbfnlp1}. From (c) it can be seen that the proposed aCBF-NLP-based controllers can guarantee safety as $h$ is always non-negative; from (a) it can be seen that, if the data-driven techniques are adopted, the control performance becomes less conservative since $x_1$ can track the reference trajectory better inside the safe region. }
\label{fig:massresult}
\end{figure}
\begin{figure}[!htbp]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{figure/mass/nondiagonal/x1.png}
\caption{Evolution of $x_1$}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{figure/mass/nondiagonal/x2.png}
\caption{Evolution of $x_2$}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{figure/mass/nondiagonal/h.png}
\caption{Evolution of $h$}
\end{subfigure}
\caption{Simulation results of Example \ref{example3} using the control strategy shown in \eqref{cbfnlpgeneral}. From (c) it can be seen that the aCBF-NLP-based controller obtained by solving \eqref{cbfnlpgeneral} can guarantee safety; however, the control performance is unsatisfactory (i.e., the tracking performance of the desired controller is not well-preserved) due to the intrinsic conservatism discussed in Remark \ref{remark:generaldisadvantage}. }
\label{fig:massresult2}
\end{figure}
\end{example}
\section{Conclusion}
\label{sec:conclusion}
This paper proposes a singularity-free aCBF-NLP-based control strategy for systems with parametric uncertainties in both drift terms and control-input matrices, where the solution to the aCBF-NLP has a closed form. Furthermore, a data-driven approach is developed to tighten the bounds of the unknown parameters and functions such that the performance of the proposed controller can be improved. Simulation results are also presented to validate the proposed approach. Future work includes relaxing the assumptions of this paper and integrating this control method into learning-based control frameworks.
\begin{appendices}
\begin{lemma}\label{lemma:1}
Define a function $s(\cdot)$ as
\begin{equation}
s(y)=y+\frac{\kappa_1}{b}+\frac{\kappa_2^2 y^2}{b(\kappa_2|h_{x_2}||y|+\epsilon_2)},
\end{equation}
where $\kappa_1,\kappa_2\geq 0$ and $b,\epsilon_2, |h_{x_2}|>0$ are considered as constants.
The function $s(\cdot)$ has the following properties:\\
(i) When $\bar b-\kappa_2\geq 0$, where $\bar b=b|h_{x_2}|$, $s(y)$ is monotonically increasing with respect to $y$, and $\lim_{y\to-\infty}s(y)=-\infty$, $\lim_{y\to+\infty}s(y)=+\infty$;\\
(ii) When $\bar b-\kappa_2<0$, $s(y)$ attains its global minimum at
$
y^*=\frac{\bar\epsilon_2[(\kappa_2-\bar b)-\sqrt{\kappa_2(\kappa_2-\bar b)}]}{\kappa_2(\kappa_2-\bar b)},
$
where $\bar\epsilon_2=\epsilon_2/|h_{x_2}|$, and $\lim_{y\to-\infty}s(y)=\lim_{y\to+\infty}s(y)=+\infty$.
\end{lemma}
\begin{proof}
Note that
the derivative of $s(y)$ with respect to $y$
can be expressed as
\begin{equation}
\frac{{\rm d} s}{{\rm d} y}=\begin{cases}
1+ \frac{\kappa_2^2y(\kappa_2 y+2\bar\epsilon_2)}{\bar b(\kappa_2 y+\bar\epsilon_2)^2}, & {\rm if} \ y\geq 0,\\
\frac{(\bar b-\kappa_2)\kappa_2^2y^2-2\kappa_2\bar\epsilon_2(\bar b-\kappa_2)y+\bar b\bar\epsilon_2^2}{\bar b(-\kappa_2 y+\bar\epsilon_2)^2}, & {\rm if} \ y<0.
\end{cases}
\end{equation}
(i) If $\kappa_2=0$, then $s(y)=\frac{\kappa_1}{b}+y$, from which one can see that the statement is true. We assume $\kappa_2\neq 0$ in the following analysis.
It can be seen that if $\bar b-\kappa_2\geq 0$, $\frac{{\rm d} s}{{\rm d} y}> 0$ for any $y\in \mathbb{R}$, such that $s(y)$ is monotonically increasing. Meanwhile, one can see $\lim_{y\to+\infty}s(y)=+\infty$ and $\lim_{y\to-\infty}s(y)=\lim_{y\to-\infty}\frac{y\bar b(-\kappa_2y+\bar\epsilon_2)+\kappa_2^2y^2}{\bar b(-\kappa_2y+\bar\epsilon_2)}+\frac{\kappa_1}{b}\xlongequal[]{z=-y} \lim_{z\to+\infty}\frac{-\bar b(\kappa_2z^2+\bar\epsilon_2z)+\kappa_2^2z^2}{\bar b(\kappa_2z+\bar\epsilon_2)}+\frac{\kappa_1}{b}$. Define $\alpha(z)=-\bar b(\kappa_2z^2+\bar\epsilon_2z)+\kappa_2^2z^2$, $\beta(z)=\bar b(\kappa_2z+\bar\epsilon_2)$, and $I=(0, +\infty)$, from which one can see $\lim_{y\to-\infty}s(y)=\frac{\kappa_1}{b}+\lim_{z\to+\infty}\frac{\alpha(z)}{\beta(z)}$. Since $\alpha$, $\beta$ are differentiable with respect to $z$ and $\beta'(z)\neq 0$ for any $z\in I$, applying L'Hôpital's rule gives
$\lim_{y\to-\infty}s(y)= \lim_{z\to+\infty}\frac{-2(\bar b-\kappa_2)\kappa_2z-\bar b\bar\epsilon_2}{\bar b\kappa_2}+\frac{\kappa_1}{b}=-\infty$.
(ii) If $\bar b-\kappa_2<0$, $\frac{{\rm d} s}{{\rm d} y}> 0$ still holds when $y\geq 0$. When $y< 0$, setting $\frac{{\rm d} s}{{\rm d} y}=0$ yields the unique stationary point $y^*$ on $(-\infty,0)$. It can be verified that $\frac{{\rm d}^2 s}{{\rm d} y^2}\bigg|_{y=y^*}=\frac{2(\kappa_2(\kappa_2-\bar b))^{\frac{3}{2}}}{\kappa_2\bar b\bar\epsilon_2}>0$, so $y^*$ is a local minimizer; since $y^*$ is the only stationary point of $s$ on $(-\infty,0)$ and $s$ is increasing on $[0,+\infty)$, $y^*$ is in fact the global minimizer. Similarly, one can prove $\lim_{y\to+\infty}s(y)=\infty$ and $ \lim_{y\to-\infty} s(y)\xlongequal[]{z=-y} \lim_{z\to+\infty}\frac{\kappa_2(\kappa_2-\bar b)z^2-\bar b\bar \epsilon_2 z}{\bar b(\bar\epsilon_2+\kappa_2 z)}+\frac{\kappa_1}{b}=\lim_{z\to+\infty}\frac{2\kappa_2(\kappa_2-\bar b)z-\bar b\bar\epsilon_2}{\bar b \kappa_2}+\frac{\kappa_1}{b}=+\infty,$
where the second equality arises from L'Hôpital's rule (the conditions of L'Hôpital's rule can be verified using a procedure similar to that in (i)).
\end{proof}
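As a complementary numerical illustration of Lemma~\ref{lemma:1} (not part of the proof), the following short Python sketch evaluates $s(y)$ on a dense grid for an arbitrary choice of constants with $\bar b-\kappa_2<0$ and compares the numerically located minimizer with the closed-form $y^*$; all parameter values are assumed purely for illustration.
\begin{verbatim}
import numpy as np

# Arbitrary illustrative constants chosen so that bbar - kappa2 < 0
kappa1, kappa2 = 0.5, 2.0
b, eps2, h_x2 = 0.6, 0.3, 1.2            # b, epsilon_2, |h_{x_2}| > 0
bbar, ebar2 = b * h_x2, eps2 / h_x2      # \bar b and \bar\epsilon_2

def s(y):
    # s(y) = y + kappa1/b + kappa2^2 y^2 / (b (kappa2 |h_{x_2}| |y| + eps2))
    return y + kappa1 / b + kappa2**2 * y**2 / (b * (kappa2 * h_x2 * np.abs(y) + eps2))

y = np.linspace(-50.0, 50.0, 2_000_001)
vals = s(y)

# Closed-form global minimizer from statement (ii) of the lemma
y_star = ebar2 * ((kappa2 - bbar) - np.sqrt(kappa2 * (kappa2 - bbar))) \
         / (kappa2 * (kappa2 - bbar))

print("numerical argmin of s  :", y[np.argmin(vals)])
print("closed-form y*         :", y_star)
print("s grows at both ends   :", vals[0] > vals.min() and vals[-1] > vals.min())
\end{verbatim}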
\begin{lemma}\label{lemma1}
For any $a\in\mathbb{R}^{n}$, $b\in\mathbb{R}^{n}$, $c\in\mathbb{R}$, the following inequalities hold:
\begin{IEEEeqnarray}{rCl}
\IEEEyesnumber \label{matrixcauchy}
\IEEEyessubnumber \label{matrixcauchy:1}
a^\top (f_\theta-f_\theta^0) &\geq& -\mu\|a\| \|\Omega_\varphi\|,\\
\IEEEyessubnumber\label{matrixcauchy:2}
b^\top (g_\lambda-g_\lambda^0) b c &\geq& -\nu \|\Omega_\psi\|\|b\|^2|c|,
\end{IEEEeqnarray}
where $f_\theta$ is defined in \eqref{eqnstructure},
$g_\lambda$ is defined in \eqref{ggla}, $f_\theta^0$, $g_\lambda^0$, $\Omega_\varphi$, $\Omega_\psi$ are defined in \eqref{matrixdefine}, and $\mu$, $\nu$ are defined in \eqref{munugeneral}.
\end{lemma}
\begin{proof}
One can verify that $a^\top (f_\theta-f_\theta^0)\geq -\|a\|\|f_\theta-f_\theta^0\|=-\|a\|\sqrt{\sum_{i=1}^{n}((\theta_i-\theta_i^0)^\top\varphi_i)^2}\geq -\|a\|\sqrt{\sum_{i=1}^{n}\|\theta_i-\theta_i^0\|^2\|\varphi_i\|^2}
\geq -\|a\|\sqrt{\sum_{i=1}^{n}\|\theta_i-\theta_i^0\|^2}\sqrt{\sum_{i=1}^{n}\|\varphi_i\|^2}=-\|a\|\|\Theta-\Theta^0\|\|\Omega_\varphi\|=-\mu\|a\|\|\Omega_\varphi\|$,
where the first and second inequalities follow from the Cauchy-Schwarz inequality and the third inequality comes from the fact $\sum_{k=1}^{n} x_k^2y_k^2 \leq \left(\sum_{k=1}^{n} x_k^2 \right)
\left(\sum_{k=1}^{n} y_k^2 \right), \ \forall x_k,y_k\in\mathbb{R}.$ Therefore, \eqref{matrixcauchy:1} holds.
Similarly, using the Cauchy-Schwarz inequality, one can get $b^\top (g_\lambda-g_\lambda^0)bc\geq -|b^\top (g_\lambda-g_\lambda^0) b||c|\geq -\|g_\lambda-g_\lambda^0\|\|b\|^2|c|.$
Invoking the definition of the Frobenius norm, $\|g_\lambda-g_\lambda^0\|$ satisfies $\|g_\lambda-g_\lambda^0\|=\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n}((\lambda_{ij}-\lambda_{ij}^0)^\top \psi_{ij})^2}\leq \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n}\|\lambda_{ij}-\lambda_{ij}^0\|^2 \|\psi_{ij}\|^2}
\leq
\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n}\|\lambda_{ij}-\lambda_{ij}^0\|^2}\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n}\|\psi_{ij}\|^2}=\|\Lambda-\Lambda^0\|\|\Omega_\psi\|=\nu\|\Omega_\psi\|$. Therefore, \eqref{matrixcauchy:2} holds.
\end{proof}
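A quick randomized sanity check of the bound \eqref{matrixcauchy:1} can be written in a few lines of Python; the sketch below assumes the structure $[f_\theta]_i=\theta_i^\top\varphi_i$ with $\Theta$ stacking the $\theta_i$ and $\Omega_\varphi$ stacking the $\varphi_i$, and all dimensions and data are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 7                              # number of components and basis size (assumed)

Theta  = rng.standard_normal((n, p))     # rows theta_i
Theta0 = rng.standard_normal((n, p))     # rows theta_i^0
Phi    = rng.standard_normal((n, p))     # rows varphi_i (stacked as Omega_varphi)
a      = rng.standard_normal(n)

f  = np.einsum('ij,ij->i', Theta,  Phi)  # [f_theta]_i   = theta_i^T varphi_i
f0 = np.einsum('ij,ij->i', Theta0, Phi)  # [f_theta^0]_i = (theta_i^0)^T varphi_i

mu    = np.linalg.norm(Theta - Theta0)   # Frobenius norm ||Theta - Theta^0||
omega = np.linalg.norm(Phi)              # Frobenius norm ||Omega_varphi||

lhs = a @ (f - f0)
rhs = -mu * np.linalg.norm(a) * omega
print(lhs >= rhs)                        # should print True for any random draw
\end{verbatim}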
\end{appendices}
\bibliographystyle{elsarticle-num}
\section{Introduction}
This bundle provides two classfiles, namely \verb+cas-sc.cls+ and
\verb+cas-dc.cls+, together with the corresponding template files, for typesetting
journal articles intended to go through Elsevier's updated workflow.
\verb+cas-sc.cls+ is meant for one-column layout and
\verb+cas-dc.cls+ for two-column layout. These are now accepted for
submitting articles both in Elsevier's electronic submission system and
elsewhere.
\subsection{Usage}
\begin{enumerate}
\item \verb+cas-sc.cls+ for single column journals.
\begin{vquote}
\documentclass[<options>]{cas-sc}
\end{vquote}
\item \verb+cas-dc.cls+ for double column journals.
\begin{vquote}
\documentclass[<options>]{cas-dc}
\end{vquote}
\end{enumerate}
Both classes have an option \verb+longmktitle+ to handle long front matter.
\section{Front matter}
\begin{vquote}
\title [mode = title]{This is a specimen $a_b$ title}
\tnotemark[1,2]
\tnotetext[1]{This document is the results of the research
project funded by the National Science Foundation.}
\tnotetext[2]{The second title footnote which is a longer text
matter to fill through the whole text width and overflow
into another line in the footnotes area of the first page.}
\end{vquote}
\begin{vquote}
\author[1,3]{J.K. Krishnan}[type=editor,
auid=000,bioid=1,
prefix=Sir,
role=Researcher,
orcid=0000-0001-0000-0000]
\cormark[1]
\fnmark[1]
\ead{jkk@example.in}
\ead[url]{www.jkkrishnan.in}
\credit{Conceptualization of this study,
Methodology, Software}
\affiliation[1]{organization={Department of Physics,
J.K. Institute of Science},
addressline={Jawahar Nagar},
city={Trivandrum},
postcode={695013},
state={Kerala},
country={India}}
\author[2,4]{Han Thane}[style=chinese]
\author[2,3]{William {J. Hansen}}[%
role=Co-ordinator,
suffix=Jr,
]
\fnmark[2]
\ead{wjh@example.org}
\ead[URL]{https://www.university.org}
\credit{Data curation, Writing - Original draft preparation}
\end{vquote}
\begin{vquote}
\affiliation[2]{organization={World Scientific University},
addressline={Street 29},
postcode={1011 NX},
postcodesep={},
city={Amsterdam},
country={The Netherlands}}
\author[1,3]{T. Rafeeq}
\cormark[2]
\fnmark[1,3]
\ead{t.rafeeq@example.in}
\ead[URL]{www.campus.in}
\affiliation[3]{organization={University of Intelligent
Studies},
addressline={Street 15},
city={Jabaldesh},
postcode={825001},
state={Orissa},
country={India}}
\cortext[cor1]{Corresponding author}
\cortext[cor2]{Principal corresponding author}
\fntext[fn1]{This is the first author footnote, but is common
to third author as well.}
\fntext[fn2]{Another author footnote, this is a very long
footnote and it should be a really long footnote. But this
footnote is not yet sufficiently long enough to make two
lines of footnote text.}
\nonumnote{This note has no numbers. In this work we
demonstrate $a_b$ the formation Y\_1 of a new type of
polariton on the interface between a cuprous oxide slab
and a polystyrene micro-sphere placed on the slab.
}
\end{vquote}
\begin{vquote}
\begin{abstract}[S U M M A R Y]
This template helps you to create a properly formatted
\LaTeX\ manuscript.
\begin{abstract} ... \end{abstract} and \begin{keyword}
... \end{keyword} which contain the abstract and keywords
respectively. Each keyword shall be separated by
a \sep command.
\end{abstract}
\begin{keywords}
quadrupole exciton \sep polariton \sep WGM \sep BEC
\end{keywords}
\maketitle
\end{vquote}
\subsection{Title}
The \verb+\title+ command has the following options:
\begin{enumerate}
\item \verb+title:+ Document title
\item \verb+alt:+ Alternate title
\item \verb+sub:+ Sub title
\item \verb+trans:+ Translated title
\item \verb+transsub:+ Translated sub title
\end{enumerate}
\begin{vquote}
\title[mode=title]{This is a title}
\title[mode=alt]{This is a alternate title}
\title[mode=sub]{This is a sub title}
\title[mode=trans]{This is a translated title}
\title[mode=transsub]{This is a translated sub title}
\end{vquote}
\subsection{Author}
The \verb+\author+ command has the following options:
\begin{enumerate}
\item \verb+auid:+ Author id
\item \verb+bioid:+ Biography id
\item \verb+alt:+ Alternate author
\item \verb+style:+ Style of author name, eg.\ chinese
\item \verb+prefix:+ Prefix, eg.\ Sir
\item \verb+suffix:+ Suffix
\item \verb+degree:+ Degree
\item \verb+role:+ Role
\item \verb+orcid:+ ORCID
\item \verb+collab:+ Collaboration
\item \verb+anon:+ Anonymous author
\item \verb+deceased:+ Deceased author
\item \verb+twitter:+ Twitter account
\item \verb+facebook:+ Facebook account
\item \verb+linkedin:+ LinkedIn account
\item \verb+plus:+ Google plus account
\item \verb+gplus:+ Google plus account
\end{enumerate}
\begin{vquote}
\author[1,3]{Author Name}[type=editor,
auid=000,bioid=1,
prefix=Sir,
role=Researcher,
orcid=0000-0001-0000-0000,
facebook=<facebook id>,
twitter=<twitter id>,
linkedin=<linkedin id>,
gplus=<gplus id>]
\end{vquote}
\begin{figure}
\includegraphics[width=\textwidth]{sc-sample.pdf}
\caption{Single column output (classfile: cas-sc.cls).}
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth]{dc-sample.pdf}
\caption{Double column output (classfile: cas-dc.cls).}
\end{figure}
\subsection{Various Marks in the Front Matter}
The front matter becomes complicated due to various kinds
of notes and marks to the title and author names. Marks in
the title will be denoted by a star ($\star$) mark;
footnotes are denoted by superscripted Arabic numerals, and the
corresponding author by an asterisk (*) mark.
\subsubsection{Title marks}
Title mark can be entered by the command, \verb+\tnotemark[<num>]+
and the corresponding text can be entered with the command
\verb+\tnotetext[<num>]+ \verb+{<text>}+. An example will be:
\begin{vquote}
\title[mode=title]{Leveraging social media news to predict
stock index movement using RNN-boost}
\tnotemark[1,2]
\tnotetext[1]{This document is the results of the research
project funded by the National Science Foundation.}
\tnotetext[2]{The second title footnote which is a longer
text matter to fill through the whole text width and
overflow into another line in the footnotes area of
the first page.}
\end{vquote}
\verb+\tnotemark+ and \verb+\tnotetext+ can be anywhere in
the front matter, but should be before \verb+\maketitle+ command.
\subsubsection{Author marks}
Author names can have many kinds of marks and notes:
\begin{vquote}
footnote mark : \fnmark[<num>]
footnote text : \fntext[<num>]{<text>}
affiliation mark : \author[<num>]
email : \ead{<emailid>}
url : \ead[url]{<url>}
corresponding author mark : \cormark[<num>]
corresponding author text : \cortext[<num>]{<text>}
\end{vquote}
\subsubsection{Other marks}
At times, authors want footnotes which leave no marks in
the author names. The note text shall be listed as part of
the front matter notes. The class files provide
\verb+\nonumnote+ for this purpose. The usage is
\begin{vquote}
\nonumnote{<text>}
\end{vquote}
\noindent and should be entered anywhere before the \verb+\maketitle+
command for this to take effect.
\subsection{Abstract and Keywords}
Abstract shall be entered in an environment that starts
with\break \verb+\begin{abstract}+ and ends with
\verb+\end{abstract}+. Longer abstracts spanning more than
one page are also possible in the class file, even in double
column mode. We need to invoke the \verb+longmktitle+ option in the
class loading line for this to happen smoothly.
The key words are enclosed in a \verb+{keyword}+
environment.
\begin{vquote}
\begin{abstract}
This is an abstract. \lipsum[3]
\end{abstract}
\begin{keywords}
First keyword \sep Second keyword \sep Third
keyword \sep Fourth keyword
\end{keywords}
\end{vquote}
\section{Main Matter}
Main matter contains sections, paragraphs, equations and floats like
tables, figures, textboxes etc.
\subsection{Tables}
\subsubsection{Normal tables}
\begin{vquote}
\begin{table}
\caption{This is a test caption.}
\begin{tabular*}{\tblwidth}{@{} LLLL@{} }
\toprule
Col 1 & Col 2\\
\midrule
12345 & 12345\\
12345 & 12345\\
12345 & 12345\\
12345 & 12345\\
12345 & 12345\\
12345 & 12345\\
\bottomrule
\end{tabular*}
\end{table}
\end{vquote}
\subsubsection{Span tables}
\begin{vquote}
\begin{table*}[width=.9\textwidth,cols=4,pos=h]
\caption{This is a test caption.}
\begin{tabular*}{\tblwidth}{@{} LLLLLL@{} }
\toprule
Col 1 & Col 2 & Col 3 & Col4 & Col5 & Col6 & Col7\\
\midrule
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
\bottomrule
\end{tabular*}
\end{table*}
\end{vquote}
\subsection{Figures}
\subsubsection{Normal figures}
\begin{vquote}
\begin{figure}
\centering
\includegraphics[scale=.75]{Fig1.pdf}
\caption{The evanescent light - $1S$ quadrupole coupling.
See also Fig. \protect\ref{FIG:2}).}
\label{FIG:1}
\end{figure}
\end{vquote}
\subsubsection{Span figures}
\begin{vquote}
\begin{figure*}
\centering
\includegraphics[width=\textwidth,height=2in]{Fig2.pdf}
\caption{Schematic of formation of the evanescent polariton on
linear chain of \PMS.}
\label{FIG:2}
\end{figure*}\end{vquote}
\subsection{Theorem and theorem like environments}
CAS class files provide a few hooks to format theorems and
theorem-like environments with ease. All the options that
are used with the \verb+\newtheorem+ command will work
exactly in the same manner. The class files provide three
commands to format theorem or theorem-like environments:
\begin{enumerate}
\item \verb+\newtheorem+ command formats a theorem in
\LaTeX's default style with italicized font for theorem
statement, bold weight for theorem heading and theorem
number typeset at the right of theorem heading. It also
optionally accepts an argument which will be printed as an
extra heading in parentheses. Here is an example coding and
output:
\begin{vquote}
\newtheorem{theorem}{Theorem}
\begin{theorem}\label{thm}
The \WGM evanescent field penetration depth into the
cuprous oxide adjacent crystal is much larger than the
\QE radius:
\begin{equation*}
\lambda_{1S}/2 \pi \left({\epsilon_{Cu2O}-1}
\right)^{1/2} = 414 \mbox{ \AA} \gg a_B = 4.6
\mbox{ \AA}
\end{equation*}
\end{theorem}
\end{vquote}
\item \verb+\newdefinition+ command does exactly the same
thing as \verb+\newtheorem+ except that the body font is upright instead
of italic. See the example below:
\begin{vquote}
\newdefinition{definition}{Definition}
\begin{definition}
The bulk and evanescent polaritons in cuprous oxide
are formed through the quadrupole part of the light-matter
interaction:
\begin{equation*}
H_{int} = \frac{i e }{m \omega_{1S}} {\bf E}_{i,s}
\cdot {\bf p}
\end{equation*}
\end{definition}
\end{vquote}
\item \verb+\newproof+ command helps to define proof and
custom proof environments without counters as provided in
the example code. Given below is an example of proof of
theorem kind.
\begin{vquote}
\newproof{pot}{Proof of Theorem \ref{thm}}
\begin{pot}
The photon part of the polariton trapped inside the \PMS
moves as it would move in a micro-cavity of the effective
modal volume $V \ll 4 \pi r_{0}^{3} /3$. Consequently, it
can escape through the evanescent field. This evanescent
field essentially has a quantum origin and is due to
tunneling through the potential caused by dielectric
mismatch on the \PMS surface. Therefore, we define the
\emph{evanescent} polariton (\EP) as an evanescent light -
\QE coherent superposition.
\end{pot}
\end{vquote}
\end{enumerate}
\subsection{Enumerated and Itemized Lists}
CAS class files provide extended list processing macros
which make the usage a bit more user friendly than the
default \LaTeX\ list macros. With an optional argument to the
\verb+\begin{enumerate}+ command, you can change the list
counter type and its attributes. You can see the coding and
the typeset copy below.
\begin{vquote}
\begin{enumerate}[1.]
\item The enumerate environment starts with an optional
argument `1.' so that the item counter will be suffixed
by a period as in the optional argument.
\item If you provide a closing parenthesis to the number in the
optional argument, the output will have closing
parenthesis for all the item counters.
\item You can use `(a)' for alphabetical counter and `(i)' for
roman counter.
\begin{enumerate}[a)]
\item Another level of list with alphabetical counter.
\item One more item before we start another.
\begin{enumerate}[(i)]
\item This item has roman numeral counter.
\end{vquote}
\begin{vquote}
\item Another one before we close the third level.
\end{enumerate}
\item Third item in second level.
\end{enumerate}
\item All list items conclude with this step.
\end{enumerate}
\section{Biography}
The \verb+\bio+ command has the following options:
\begin{enumerate}
\item \verb+width:+ Width of the author photo (default is 1in).
\item \verb+pos:+ Position of author photo.
\end{enumerate}
\begin{vquote}
\bio[width=10mm,pos=l]{tuglogo.jpg}
\textbf{Another Biography:}
Recent experimental \cite{HARA:2005} and theoretical
\cite{DEYCH:2006} studies have shown that the \WGM can travel
along the chain as "heavy photons". Therefore the \WGM
acquires the spatial dispersion, and the evanescent
quadrupole polariton has the form (See Fig.\ref{FIG:3}):
\endbio
\end{vquote}
\section[CRediT...]{CRediT authorship contribution statement}
Give the authorship contribution after each author as
\begin{vquote}
\credit{Conceptualization of this study, Methodology,
Software}
\end{vquote}
To print the details use \verb+\printcredits+
\begin{vquote}
\author[1,3]{J.K. Krishnan}[type=editor,
auid=000,bioid=1,
prefix=Sir,
role=Researcher,
orcid=0000-0001-0000-0000]
\end{vquote}
\begin{vquote}
\cormark[1]
\fnmark[1]
\ead{jkk@example.in}
\ead[url]{www.jkkrishnan.in}
\credit{Conceptualization of this study, Methodology, Software}
\affiliation[1]{organization={Department of Physics,
J.K. Institute of Science},
addressline={Jawahar Nagar},
city={Trivandrum},
postcode={695013},
state={Kerala},
country={India}}
\author[2,4]{Han Thane}[style=chinese]
\author[2,3]{William {J. Hansen}}[%
role=Co-ordinator,
suffix=Jr,
]
\fnmark[2]
\ead{wjh@example.org}
\ead[URL]{https://www.university.org}
\credit{Data curation, Writing - Original draft preparation}
. . .
. . .
. . .
\printcredits
\end{vquote}
\section{Bibliography}
For CAS categories, two reference models are recommended,
namely \file{model1-num-names.bst} and \file{cas-model2-names.bst}.
The former formats the reference list and citations according to a
numbered scheme, whereas the latter formats them according to the
name-date (author-year) style. Authors are requested to choose one of
these according to the journal style. These bst files are available
at the following location for download:
\url{https://support.stmdocs.in/wiki/index.php?title=Model-wise_bibliographic_style_files}
\hfill $\Box$
\end{document}
|
{
"arxiv_id": "2302.08698",
"language": "en",
"timestamp": "2023-02-20T02:07:05",
"url": "https://arxiv.org/abs/2302.08698",
"yymm": "2302"
} | \section{Introduction}
The strength of polycrystalline materials is known to increase with the reduction of the grain size \cite{cheng2005tensile,yin2018ultrastrong}, a trend known as the Hall-Petch relation. One possible strengthening mechanism is based on dislocation pileups at grain boundaries (GBs), which hinder dislocation motion \cite{xu_comparing_2017}. However, the dislocation-based deformation mechanism transitions to GB-dominated plasticity at extremely fine grain sizes \cite{schiotz1998softening,schiotz2003maximum,shen2007effect}, leading to an inverse Hall-Petch relation. Solute segregation is another strategy to increase the material strength \cite{pan2020heterogeneous,li2021role,hu2017grain,leyson2010quantitative}. The solute-dislocation interaction is believed to be a key factor in the solute strengthening effect \cite{li2021role,varvenne2017solute}. Pan and Sansoz \cite{pan2020heterogeneous} recently found that solute clusters formed by heterogeneous solute segregation can suppress strain localization and are responsible for solute strengthening. Moreover, Hu {\textit{et al}} \cite{hu2017grain} have demonstrated that Mo segregation can stabilize GBs in nanocrystalline Ni-Mo alloys, leading to an increased strength and a resurgence of Hall-Petch strengthening with grain size down to a few nanometers. Experimental results \cite{hu2017grain} also show that annealing can enhance the Hall-Petch strengthening. However, the atomistic strengthening mechanisms, particularly regarding the coupling effects of solute, annealing, and grain size, still need clarification because they are difficult to uncover in experiments \cite{hu2017grain}.
Due to their high computational cost, density functional theory (DFT) calculations have been limited to several hundred atoms. Therefore, higher-scale computational tools such as molecular dynamics (MD) have become an important approach to studying mechanical properties and the associated underlying mechanisms \cite{li2010dislocation,yamakov2002dislocation,zhang2017formation,antillon2019molecular} in metals and alloys. Atomistic simulations using linear-scaling interatomic potentials can potentially access large systems and long timescales. The accuracy of atomistic simulations largely depends on the interatomic potentials used \cite{chavoshi_addressing_2017}. Li {\textit{et al}} \cite{li2019regulating} reported a mechanism transition from GB-accommodated plastic deformation to dislocation-based plasticity below the optimal size for the maximum strength after segregating Mo solute atoms in Ni-Mo alloys, using MD simulations with an embedded atom method (EAM) potential. Sansoz {\textit{et al}} \cite{sansoz2022hall} recently used an EAM potential to systematically study the effects of solute concentration on the Hall-Petch strengthening limits with hybrid Monte-Carlo/MD (MC/MD) simulations in Ag-Cu alloys. The EAM potential, and broadly most classical interatomic potentials, are fitted mainly to elemental properties and thus generally perform poorly for alloys. Recently, the development of machine learning-based interatomic potentials (ML-IAPs) \cite{behler2007generalized,dragoni2018achieving,thompson2015spectral,shapeev2016moment,zuo2020performance} provides another possibility, which can reach near-DFT accuracy at a cost several orders of magnitude lower than DFT. In the last few years, ML-IAPs have been extensively applied to reveal the factors contributing to alloys' mechanical properties, including lattice distortion \cite{kostiuchenko2019impact,jafary2019applying}, short-range ordering \cite{yin2021atomistic,li2020complex}, defect and dislocation properties \cite{maresca2018screw,goryaeva2021efficient}, etc. To the best of our knowledge, prediction of the inverse Hall-Petch relation by an ML-IAP has not been realized yet. In addition, although plenty of theoretical studies have been performed on different strengthening mechanisms, the coupling between these mechanisms, e.g., the coupling between grain-size strengthening, solute effects, and annealing effects, remains elusive.
In this work, we utilize our previously developed spectral neighbor analysis potential (SNAP) \cite{li2018quantum} to investigate the grain size, solute, and annealing effects in Ni-Mo polycrystalline systems. We demonstrate that the ML-IAP can accurately predict the inverse Hall-Petch relation and reveal the different plasticity mechanisms, with either GBs or dislocations playing the dominant role at different grain sizes. Our results indicate that both solute doping and annealing can reduce the atomic von Mises strain of the polycrystals at yielding under uniaxial tensile strain, stabilize the GBs, and thus increase the strength of the polycrystals at small grain sizes, leading to a resurgence of Hall-Petch strengthening at grain sizes below 10 nanometers. For large grain-size polycrystals, solute doping can increase the dislocation density, giving rise to an enhancement of the polycrystal strength, while annealing, on the other hand, would induce sizable atomic strain at the GBs during plastic flow deformation, leading to a decrease in the strength of the polycrystals.
\section{Method}
\subsection{Polycrystal model setup}
We generated the initial Ni polycrystal models using the Voronoi tessellation method \cite{brostowConstructionVoronoiPolyhedra1978} implemented in the Atomsk \cite{hirelAtomskToolManipulating2015} code. A number of cubic supercells were constructed with different side lengths; six grains were then randomly inserted into each, giving rise to a series of polycrystals with different average grain diameters. We present six polycrystals with average grain diameters of 4.1, 6.1, 7.5, 8.8, 10.2 and 11.6 nm and corresponding edge lengths of 6, 9, 11, 13, 15, 17 nm, respectively. Periodic boundary conditions are imposed in all three dimensions. Neighboring atoms with a distance $<1.5$ \AA ~were removed at the GBs. The number of atoms in the polycrystals ranges from $\sim 2,000$ to $\sim 454,600$. The Ni-Mo polycrystalline models were constructed by randomly replacing 10$\%$ of the Ni atoms with Mo. This percentage is lower than the solubility limit of Mo in Ni \cite{hu2017grain}. Three atomistic models with different average grain diameters are shown in Fig.~\ref{fig:str}.
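To illustrate the Voronoi-based construction conceptually, the following Python sketch assigns candidate atomic sites to their nearest grain seed under periodic boundary conditions. This is a schematic example only, not the Atomsk implementation; the box size, the approximate Ni lattice constant, and the simple-cubic site grid (standing in for the randomly oriented per-grain FCC lattices of the real workflow) are assumed for illustration.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
L, a0, n_grains = 60.0, 3.52, 6          # box edge (A), approx. Ni lattice constant (A), grains

# Grain seed points and their periodic images
seeds  = rng.uniform(0.0, L, size=(n_grains, 3))
shifts = np.array([[i, j, k] for i in (-L, 0.0, L)
                             for j in (-L, 0.0, L)
                             for k in (-L, 0.0, L)])
images = (seeds[None, :, :] + shifts[:, None, :]).reshape(-1, 3)
tree   = cKDTree(images)

# Simple-cubic candidate sites as a stand-in for the per-grain FCC lattices;
# in the real workflow each grain carries its own randomly oriented FCC lattice.
grid  = np.arange(0.0, L, a0)
sites = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T

# Voronoi assignment: each site belongs to its nearest seed (modulo periodic images)
_, nearest = tree.query(sites)
grain_id   = nearest % n_grains
print("sites per grain:", np.bincount(grain_id, minlength=n_grains))
\end{verbatim}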
\begin{figure}[h]
\includegraphics[width=1.0\textwidth]{Fig1_str_new.pdf}
\caption{\label{fig:str}The polycrystalline models with different grain sizes. Configurations of nanostructured alloys with edge lengths of (a) 6 nm, (b) 11 nm, and (c) 17 nm and average grain size of (a) 4.1 nm, (b) 7.5 nm, and (c) 11.6 nm. Green spheres are atoms in local FCC structures, while white spheres are atoms in disordered structures, i.e., atoms within the GB regions. All atoms are Ni, except those blue ones which are Mo.}
\end{figure}
\subsection{Interatomic potential}
We performed the hybrid MD and Monte Carlo (MC) calculations using the large-scale atomic/molecular massively parallel simulator (LAMMPS) package \cite{plimptonFastParallelAlgorithms1995} with the Ni-Mo SNAP model \cite{li2018quantum}, previously developed by the lead author. The training structures of this ML-IAP include 1) Ni, Mo, the Ni$_3$Mo compound, the Ni$_4$Mo compound, and their distorted structures, 2) surface structures of Ni and Mo, 3) snapshots of {\textit{ab initio}} molecular dynamics (AIMD) simulations at different temperatures and distortions, and 4) alloy structures constructed from partial substitution. The reliability of this potential has been checked and validated against the following properties: 1) lattice constants, 2) surface energies, 3) elastic constants, 4) defect properties, 5) melting points, and 6) the phase diagram. Good agreement between the SNAP Ni-Mo model and DFT/experimental results has been achieved for these properties \cite{li2018quantum}.
\subsection{Annealing and tensile deformation}
The polycrystals were thermally equilibrated at 300 K, for 0.1 ns for the pure Ni polycrystals and 0.4 ns for the Ni-Mo polycrystals, via MD (for pure Ni) and MC/MD (for Ni-Mo with Mo solutes) simulations, respectively, in an isothermal-isobaric (NPT) ensemble. For annealing, the polycrystal was further annealed at the annealing temperature (600 K) for 0.75 ns, then quenched from the annealing temperature to room temperature in 0.15 ns, followed by another equilibration at room temperature for 50 ps. Uniaxial tensile deformation was then applied in the $z$-direction at a strain rate of 5$\times 10^8$ s$^{-1}$ for 0.2 ns at 300 K. To maintain zero lateral pressure (constant uniaxial strain rate), we use an NPT ensemble in the $x$- and $y$-directions during the deformation. The time step was set to 1 fs. We use OVITO \cite{stukowskiVisualizationAnalysisAtomistic2009} to visualize the atomic configurations and analyze
simulation results by identifying phase structures (common neighbor analysis \cite{honeycuttMolecularDynamicsStudy1987}) and calculating the atomic strains \cite{falk1998dynamics} from two atomic configurations, a current configuration (deformed one) and a reference configuration (initial one). To capture the randomness in the distribution of GBs and Mo atoms, we performed three simulations with different initial polycrystalline structures for each grain size.
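The loading parameters quoted above can be cross-checked with a few lines of Python; this is an illustrative consistency calculation only, not part of the simulation workflow, and simply reproduces the total applied strain and step count implied by the numbers in the text.
\begin{verbatim}
# Consistency check of the tensile-loading parameters quoted above
# (values taken from the text; illustrative arithmetic, not a LAMMPS input script).
strain_rate = 5e8        # 1/s
t_total     = 0.2e-9     # s   (0.2 ns of loading)
dt          = 1e-15      # s   (1 fs timestep)

total_strain = strain_rate * t_total
n_steps      = t_total / dt
print(f"total applied strain: {total_strain:.0%}")   # -> 10%
print(f"number of MD steps  : {n_steps:.0f}")        # -> 200000
\end{verbatim}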
\section{Results}
\subsection{Hall-Petch and inverse Hall-Petch relation}
To test the Hall-Petch and inverse Hall-Petch relations, we plotted the stress-strain curves for Ni polycrystals with varying grain sizes, as shown in Fig.~\ref{fig:hallpetch}a. Since we start from perfect, dislocation-free structures and dislocations need to be nucleated from the GBs at a large stress, no dislocations are observed at small strains. As the strain increases to around 4$\%$, dislocations appear and massive plastic deformation occurs, corresponding to a drop in stress. When the strain surpasses about 8$\%$, the stress becomes more steady. Hence, we calculate the average stress in the strain interval from 8 to 10\%, and take it as the flow stress. Fig.~\ref{fig:hallpetch}b shows that the flow stress depends strongly on the grain size. Specifically, as the grain size decreases from 11.6 nm, the flow stress first increases, and after reaching the maximum (at a grain size of around 7~nm), it decreases. These are the well-known Hall-Petch and inverse Hall-Petch relations \cite{schiotz1998softening,schiotz2003maximum,shen2007effect}.
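The flow-stress extraction described above amounts to a simple window average over the stress-strain data; a minimal Python sketch is given below, where the stress-strain curve is made up purely to illustrate the call.
\begin{verbatim}
import numpy as np

def flow_stress(strain, stress, window=(0.08, 0.10)):
    """Average stress over the given strain window (8-10% here)."""
    strain, stress = np.asarray(strain), np.asarray(stress)
    mask = (strain >= window[0]) & (strain <= window[1])
    return stress[mask].mean()

# Made-up toy stress-strain curve purely to illustrate the call
eps   = np.linspace(0.0, 0.10, 201)
sigma = 100.0 * eps - 400.0 * np.clip(eps - 0.04, 0.0, None)
print(flow_stress(eps, sigma))
\end{verbatim}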
\begin{figure}[t]
\includegraphics[width=1.0 \textwidth]{Fig2_hall-petch.pdf}
\caption{\label{fig:hallpetch}The grain-size dependence of the flow stress. (a) Stress-strain curves for three simulations with average grain sizes ranging from 4.1 nm to 11.6 nm for Ni polycrystals. (b) The flow stress of the pure Ni polycrystal (green) and the Ni-Mo polycrystal with 10$\%$ Mo doping (red), defined as the average stress in the strain interval from 8 to 10$\%$ deformation. The error bars indicate the standard deviation of the three calculations with different initial polycrystalline structures. A maximum in the flow stress is seen at a grain size of around 7 nm for clean Ni, slightly shifted to the left (smaller grain size) for Ni-Mo. The blue points are the flow stresses for the Ni-Mo polycrystal after annealing at 600 K. The dark dashed line is a guide for the eyes, indicating continuous Hall-Petch strengthening at even smaller grains after stabilizing the nano-polycrystals.}
\end{figure}
When the grain size is large (e.g., diameter $\geq 8$ nm in our simulations), the flow stress increases with decreasing grain size, which is consistent with the Hall-Petch relation. This is because, at large grain sizes, dislocation-based deformation dominates, as indicated by our stress-strain and dissociated dislocation-strain curves in Fig.~\ref{fig:analysis}a. The evolution of the stress is strongly correlated with that of the dissociated dislocations. As the grain size further decreases, GB-mediated plasticity plays a more important role. We plotted the average atomic strain at yielding (from 0 to 3$\%$ applied strain) as a function of the strain for different grain sizes, as shown in Fig.~\ref{fig:analysis}b. Note that at small applied strain, the induced atomic strain mainly comes from the GB region. We can see that a smaller grain size induces a larger atomic strain under small tensile strain, leading to a decrease in the flow stress. In particular, when the grain size is small (e.g., grain diameter $=$ 4.1 nm), we observe an abrupt increase in the atomic strain around 2.5$\%$ strain in the average atomic strain-tensile strain curve. This abrupt change can be identified exactly in the stress-strain curve as obvious plasticity, as shown in Fig.~\ref{fig:analysis}c, which indicates that GB-mediated plasticity dominates for polycrystals at small grain sizes and further demonstrates the validity of the SNAP model.
\begin{figure}[t]
\includegraphics[width=1.0 \textwidth]{Fig3_hall-petch_analysis.pdf}
\caption{\label{fig:analysis}The evolution of dissociated dislocation, stress, and atomic strain as a function of the tensile strain in the Ni polycrystals. (a) Stress-strain and dissociated dislocation-strain curves for polycrystal with large grain size (grain diameter $= 11.6$ nm). (b) The average atomic strain as a function of tensile strain for different grain sizes. (c) Stress-strain and average atomic strain-strain curves for Ni polycrystal with small grain size (grain diameter$=$4.1 nm). The black vertical lines indicate the locations of abrupt changes and are a guide for the eyes.}
\end{figure}
\subsection{Solute strengthening}
\begin{figure}[t]
\includegraphics[width=1.0 \textwidth]{Fig4_doping_analysis_new.pdf}
\caption{\label{fig:doping}GB analysis of the polycrystal model. (a) Mo percentage in GB as a function of grain size. (b) The average atomic strain as a function of tensile strain for small grain sizes w/ and w/o doping of Mo. (c) Normalized excess free volume, and (d) dislocation density for different grain sizes w/ and w/o doping of Mo at $9\%$ applied strain.}
\end{figure}
We further investigated the solute effects on the mechanical properties of the polycrystals by replacing 10\% of all Ni atoms with Mo. As shown by the red points in Fig.~\ref{fig:hallpetch}b, the flow stress increases by a significant amount over the whole range of grain sizes, compared with pure Ni. To find the underlying strengthening mechanism, we first analyze the segregation of Mo atoms. After the MC/MD simulations, solute Mo atoms segregate at the GBs, as shown in Fig.~\ref{fig:doping}a. This segregation effect becomes even stronger as the grain size decreases. Meanwhile, a significant number of Mo atoms still remain in the bulk region, which is consistent with the experimental results \cite{hu2017grain}. We further compare the average atomic strain of the doped systems with that of the systems without Mo doping at small grain sizes, as shown in Fig.~\ref{fig:doping}b. It is clear that after doping with 10$\%$ Mo, the atomic strain at yielding is significantly reduced for grain sizes of 4.1 nm and 6.1 nm. This doping effect weakens as the grain size increases (see Fig. S1 in the supplementary information). Therefore, for the small grain-size polycrystals, where GB-mediated plasticity dominates, solute doping can greatly reduce the atomic strain, leading to an increased flow stress. The calculations of the normalized excess free volume in the GB also support this conclusion. We define the normalized excess free volume (denoted as $k$) as below,
\begin{equation}
\begin{aligned}
k & = (V_\mathrm{GB}-N_\mathrm{Ni,GB}\times v_\mathrm{Ni}-N_\mathrm{Mo,GB}\times v_\mathrm{Mo})/(N_\mathrm{Ni,GB}\times v_\mathrm{Ni}+N_\mathrm{Mo,GB}\times v_\mathrm{Mo}) \label{eq1}
\end{aligned}
\end{equation}
where $V_\mathrm{GB}$ is the total volume of the GB atoms; $N_\mathrm{Ni,GB}$ and $N_\mathrm{Mo,GB}$ are the numbers of Ni and Mo atoms in the GBs, respectively; and $v_\mathrm{Ni}$ and $v_\mathrm{Mo}$ are the volumes of Ni and Mo atoms in the respective bulk. We use the common neighbor analysis algorithm in OVITO \cite{stukowskiVisualizationAnalysisAtomistic2009} to identify the GB atoms. In principle, GBs are less dense than a perfect crystal, leading to a positive value of $k$. As shown in Fig.~\ref{fig:doping}c, the excess volume is reduced after Mo solute doping for different grain sizes. This means that the GB is denser after 10$\%$ Mo doping, making GB atom jumps and free-volume migration more difficult. As a result, the resistance to GB-mediated plasticity increases, which contributes to the increase in flow stress.
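Eq.~\eqref{eq1} is straightforward to evaluate once the GB atoms have been identified; the following Python sketch uses made-up atom counts and approximate bulk atomic volumes of Ni and Mo purely for illustration.
\begin{verbatim}
def normalized_excess_free_volume(V_gb, N_ni_gb, N_mo_gb, v_ni, v_mo):
    """Normalized excess free volume k of the GB region, as defined in the text."""
    occupied = N_ni_gb * v_ni + N_mo_gb * v_mo
    return (V_gb - occupied) / occupied

# Made-up counts; bulk atomic volumes of roughly 10.9 A^3 (FCC Ni)
# and 15.6 A^3 (BCC Mo); GB volume assumed 4% above the occupied volume.
v_ni, v_mo = 10.9, 15.6
N_ni, N_mo = 10_000, 1_000
V_gb = 1.04 * (N_ni * v_ni + N_mo * v_mo)
print(normalized_excess_free_volume(V_gb, N_ni, N_mo, v_ni, v_mo))   # -> 0.04
\end{verbatim}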
For large grain-size polycrystals, dislocation-based deformation becomes much more important. We thus plot the intragranular dislocation density for large grain sizes, as shown in Fig.~\ref{fig:doping}d. We observe that the dislocation density increases after Mo doping for large grain sizes. An increase in dislocation density will in general result in an increase in strength due to dislocation interactions and entanglements. This explains why the flow stress increases after Mo doping for polycrystals with large grain sizes.
\subsection{Annealing effect}
\begin{figure}[t]
\includegraphics[width=1.0 \textwidth]{Fig5_anneal_new.pdf}
\caption{\label{fig:anneal}Annealing effects on small-size polycrystal model. (a) The average atomic strain at yielding as a function of tensile strain under different conditions for polycrystal with grain diameter 4.1 nm. (b) The stress-strain curves under different conditions for polycrystal with grain diameter 4.1 nm. Pure: as-prepared Ni polycrystal after MC/MD calculations at room temperature; Doping: Doping the Ni polycrystal with 10$\%$ Mo; Annealing: anneal the pure Ni polycrystal at 600 K; Doping-Annealing: anneal the 10$\%$ Mo doped polycrystal at 600 K.}
\end{figure}
To study the annealing effect on the mechanical properties of the polycrystals, we annealed both the pure Ni and the 10$\%$ Mo-doped polycrystals with a small grain size (grain diameter of 4.1 nm) at 600 K. We calculated the average atomic strain for all four polycrystals: pure Ni (Pure), Ni with 10$\%$ Mo doping (Doping), pure Ni after annealing at 600 K (Annealing), and Ni with 10$\%$ Mo doping after annealing at 600 K (Doping-Annealing). As shown in Fig.~\ref{fig:anneal}a, doping or annealing alone reduces the atomic strain at yielding by a considerable amount. With both doping and annealing, the atomic strain is reduced further compared with doping or annealing alone (also see Fig. S2 in the supplementary information for the strain distributions). The strain energy (defined as the potential energy minus the energy of the corresponding bulk, divided by the number of atoms) of the polycrystals is 0.154 eV/atom for pure Ni, reduced to 0.145 eV/atom after annealing and 0.122 eV/atom after Mo doping, and further reduced to 0.112 eV/atom with both doping and annealing. In other words, both doping and annealing can stabilize the GB, making GB-mediated plasticity more difficult. This can also be observed from the stress-strain curves in Fig.~\ref{fig:anneal}b. Plasticity occurs in the pure Ni polycrystal much earlier (red curve), followed by the doping-only and annealing-only polycrystals (green and blue curves), while the polycrystal with both doping and annealing has the stress increasing linearly over a much wider range of strain, resulting in the largest strength among the four polycrystals. Therefore, with both doping and annealing, the strength of the small-size polycrystals, e.g., those with grain diameters smaller than 7 nm, can be improved further (see blue points in Fig.~\ref{fig:hallpetch}b).
\begin{figure}[h]
\includegraphics[width=1.0 \textwidth]{Fig6_anneal_different_size.pdf}
\caption{\label{fig:anneal1}Annealing analysis of the polycrystal models with different grain sizes. The average atomic strain during plastic flow deformation as a function of tensile strain under different conditions for polycrystals with grain diameters of (a) 4.1 nm and (b) 7.5 nm, evaluated over the entire sample, the GB region, and the dissociated dislocation (DD) region, respectively.}
\end{figure}
On the other hand, annealing has an opposite effect on large-size polycrystals, e.g., those with grain diameters larger than 7 nm, as shown by the blue points in Fig.~\ref{fig:hallpetch}b. In other words, further annealing after doping reduces the strength of the large-size polycrystals. To reveal the underlying atomistic behavior, we plotted the average atomic strain during plastic flow deformation for both a small-size (grain diameter of 4.1 nm) and a large-size (grain diameter of 7.5 nm) polycrystal, as shown in Fig.~\ref{fig:anneal1}. It is clearly observed that the atomic strain during plastic deformation can be further reduced after annealing for the small-size polycrystal, while for the large-size polycrystal annealing increases the atomic strain significantly, mainly in the GB region (see Fig.~S3 in the supplementary information for the atomic strain distribution in the model structure).
\section{Discussion}
ML-IAPs have received plenty of interest in the field of materials science due to their high accuracy and good scalability \cite{mishin2021machine}. They have been used to investigate a variety of mechanical properties in metals and alloys, including dislocation properties \cite{yin2021atomistic,zhao2021anomalous,byggmastar2021modeling}, stacking faults \cite{bartok2018machine,wang2021generalized,li2020complex}, phase transitions \cite{zong2018developing,verdi2021thermal}, etc. However, few studies have been dedicated to the GB strengthening problem, e.g., different grain sizes in polycrystals. This is mainly due to the high computational cost of ML-IAPs compared to classical force fields, e.g., the EAM potential \cite{li2018quantum}. Here, with the help of high-performance computing clusters, we successfully study polycrystals using an ML-IAP because this type of IAP, like the classical ones, still scales linearly with the number of atoms and is orders of magnitude computationally cheaper than DFT calculations. We demonstrated that the ML-IAP can successfully reproduce the Hall-Petch and inverse Hall-Petch relations in polycrystalline metals and reveal their underlying mechanisms.
Complex alloy strengthening mechanisms have been extensively explored, including GB strengthening, solute strengthening, annealing strengthening, etc. GB strengthening, also known as Hall-Petch strengthening, fails at extremely fine grain sizes, e.g., nanometer sizes. When coupled with solute effects, however, the Hall-Petch maximum strengthening limit can be shifted to smaller grain sizes \cite{sansoz2022hall}. Annealing adds extra complications to the strengthening mechanisms: it further strengthens the alloys for small grain-size polycrystals while weakening the large-size ones by triggering large atomic movements in the GB region during plastic deformation (see Fig.~S3 in the supplementary information). The enhancement of the strength in small grain-size polycrystals by solute doping and annealing can lead to a resurgence of Hall-Petch strengthening down to even smaller grain sizes, as shown in Fig.~\ref{fig:hallpetch}b (dashed black line).
\section{Conclusion}
The present computational and theoretical study has shown that the Hall-Petch and inverse Hall-Petch relations can be well predicted by the ML-IAP. A switch of the dominant mechanism from dislocation-based to GB-mediated plasticity is observed as the grain size decreases, which is responsible for the Hall-Petch to inverse Hall-Petch transition. Atomistic analysis indicates that the atomic strain induced by uniaxial tensile loading can be significantly reduced by both solute doping and annealing for small grain-size polycrystals, resulting in a great enhancement of the strength of the polycrystals. However, for the doped large-size polycrystals, annealing would weaken the samples due to the increased atomic strain in the GB region. The combined effects of solute doping and annealing on polycrystals of different grain sizes can postpone the onset of the inverse Hall-Petch relation and maintain continuous strengthening down to even smaller grain sizes. Our atomistic simulations reveal the underlying complex strengthening mechanisms in nano-sized polycrystals, theoretically supporting the potential of achieving ultra-strong nanograined materials.
\section*{Acknowledgements}
XL, QZ, SL, and JS would like to acknowledge financial support from the Hundreds of Talents Program of Sun Yat-sen University.
\section{Introduction}
The strength of polycrystalline materials is known to increase with the reduction of the grain size \cite{cheng2005tensile,yin2018ultrastrong}, called the Hall-Petch relation. One possible strengthening mechanism is based on dislocation pileups at grain boundaries (GBs), which hinds dislocation motion \cite{xu_comparing_2017}. However, the dislocation-based deformation mechanism transits to GB-dominated plasticity at extremely fine grain sizes \cite{schiotz1998softening,schiotz2003maximum,shen2007effect}, leading to an inverse Hall-Petch relation. Solute segregation is another strategy to increase the material strength \cite{pan2020heterogeneous,li2021role,hu2017grain,leyson2010quantitative}. Solute-dislocation interaction is believed to be a key factor for the solute strengthening effect \cite{li2021role,varvenne2017solute}. Pan and Sansoz \cite{pan2020heterogeneous} recently found that the solute clusters from heterogeneous solute segregation can suppress strain localization, and are responsible for solute strengthening. Moreover, Hu {\textit{et al}} \cite{hu2017grain} have demonstrated that Mo segregation can stabilize GBs in nanocrystalline Ni-Mo alloys, leading to an increased strength and a resurgence of Hall-Petch strengthening with grain size down to a few nanometers. Experimental results \cite{hu2017grain} also show that annealing can enhance the Hall-Petch strengthening. However, the atomistic strengthening mechanisms, particularly regarding the coupling effects of solute, annealing, and grain sizes, still need clarification because they are difficult to uncover in experiments \cite{hu2017grain}.
Due to the high computational cost, the density functional theory (DFT) calculations have been limited to several hundreds of atoms. Therefore, higher-scale computational tools such as molecular dynamics (MD) have become an important approach to study mechanical properties and the associated underlying mechanisms \cite{li2010dislocation,yamakov2002dislocation,zhang2017formation,antillon2019molecular} in metals and alloys. Atomistic simulations using linear-scaling interatomic potentials can potentially access large systems and long timescales. The accuracy of the atomistic simulations largely depends on the interatomic potentials used \cite{chavoshi_addressing_2017}. Li {\textit{et al}} \cite{li2019regulating} reported a mechanism transition from GB-accommodated plastic deformation to dislocation-based plasticity below the optimal size for the maximum strength after segregating Mo solute atoms in Ni-Mo alloys using MD simulations with an embedded atom method (EAM) potential. Sansoz {\textit{et al}} \cite{sansoz2022hall} recently used an EAM potential to systematically study the effects of solute concentration on the Hall-Petch strengthening limits with hybrid Monte-Carlo/MD (MC/MD) simulations in Ag-Cu alloys. The EAM potential, and broadly most classical interatomic potentials, are fitted mainly to elemental properties and thus generally perform poorly for alloys. Recently, the development of machine learning-based interatomic potential (ML-IAP) \cite{behler2007generalized,dragoni2018achieving,thompson2015spectral,shapeev2016moment,zuo2020performance} provides another possibility, which can reach near-DFT accuracy at several of orders magnitude lower cost than DFT. In the last few years, ML-IAP has been extensively applied to revealing the contributing factors of alloys' mechanical properties, including the lattice distortion \cite{kostiuchenko2019impact,jafary2019applying}, short-range ordering \cite{yin2021atomistic,li2020complex}, defect and dislocation properties \cite{maresca2018screw,goryaeva2021efficient}, etc. To the best of our knowledge, prediction of the inverse Hall-Petch relation by ML-IAP has not been realized yet. In addition, although plenty of theoretical studies have been performed to study different strengthening mechanisms, the coupling between these mechanisms, e.g. coupling between grain size strengthening, solute and annealing effects, remains elusive.
In this work, we utilize our previously developed spectral neighbor analysis potential (SNAP) \cite{li2018quantum} to investigate the grain size, solute, and annealing effects in Ni-Mo polycrystalline systems. We demonstrate that the ML-IAP can accurately predict the inverse Hall-Petch relation and reveal the different plasticity mechanisms with the dominant role involving GB or dislocation at different grain sizes. Our results indicate that both solute doping and annealing can reduce the atomic von Mises strain of the polycrystals at yielding under uniaxial tensile strain, stabilize the GB, and thus increase the strength of the polycrystals at small grain sizes, leading to the resurgence of the Hall-Petch strengthening at grain sizes within 10 nanometers. For large grain-size polycrystals, solute doping can increase the dislocation density giving rise to the enhancement of the polycrstal strength, while annealing, on the other hand, would induce sizable atomic strain at the GB during plastic flow deformation, leading to a decrease in the strength of the polycrystals.
\section{Method}
\subsection{Polycrystal model setup}
We generated the initial Ni polycrystal models using the Voronoi tessellation method \cite{brostowConstructionVoronoiPolyhedra1978} implemented in the Atomsk \cite{hirelAtomskToolManipulating2015} code. A number of cubic supercells were constructed with different side lengths, six grains were then randomly inserted giving rise to a series of polycrystals with different average grain diameters. We present six polycrystals with average grain diameters of 4.1, 6.1, 7.5, 8.8, 10.2 and 11.6 nm and corresponding edge lengths of 6, 9, 11, 13, 15, 17 nm, respectively. Periodic boundary conditions are imposed on all three dimensions. Neighboring atoms with a distance $<1.5$ \AA ~were removed at the GBs. The number of atoms in the polycrystals ranges from $\sim 2,000$ to $\sim 454,600$. The Ni-Mo polycrystalline models were constructed by randomly replacing 10$\%$ of Ni atoms with Mo. This percentage is lower than the limit of solubility of Mo in Ni \cite{hu2017grain}. Three atomistic models with different average grain diameters are shown in Fig.~\ref{fig:str}.
\begin{figure}[h]
\includegraphics[width=1.0\textwidth]{Fig1_str_new.pdf}
\caption{\label{fig:str}The polycrystalline models with different grain sizes. Configurations of nanostructured alloys with edge lengths of (a) 6 nm, (b) 11 nm, and (c) 17 nm and average grain size of (a) 4.1 nm, (b) 7.5 nm, and (c) 11.6 nm. Green spheres are atoms in local FCC structures, while white spheres are atoms in disordered structures, i.e., atoms within the GB regions. All atoms are Ni, except those blue ones which are Mo.}
\end{figure}
\subsection{Interatomic potential}
We performed the hybrid MD and Monte Carlo (MC) calculations using the large-scale atomic/molecular massively parallel simulator (LAMMPS) package \cite{plimptonFastParallelAlgorithms1995} with the Ni-Mo SNAP model \cite{li2018quantum}, previously developed by the lead author. The training structures of this ML-IAP include 1) Ni, Mo, Ni$_3$Mo compound, Ni$_4$Mo compound, and their distorted structures, 2) surface structures of Ni and Mo, 3) snapshots of {\textit{ab initio}} molecular dynamics (AIMD) simulations at different temperatures and distortions, 4) alloy structures constructed from partial substitution. The reliability of this potential has been checked and validated with the following properties: 1) lattice constants, 2) surface energies, 3) elastic constants, 4) defect properties, 5) melting points, 6) phase diagram. A good agreement within these properties between the results from the SNAP Ni-Mo model and that from DFT/experiment has been achieved \cite{li2018quantum}.
\subsection{Annealing and tensile deformation}
The polycrystals were thermally equilibrated at 300 K for 0.1 ns for pure Ni polycrystal and 0.4 ns for Ni-Mo polycrystal, via MD (for pure Ni polycrystal) and MC/MD (for Ni-Mo polycrystal with Mo solutes) simulations, respectively, in an isothermal-isobaric NPT ensemble. For annealing, the polycrystal will be further annealed at annealing temperature (600 K) for 0.75 ns, and then quenched from the annealing temperature to room temperature in 0.15 ns, followed by another equilibrium at room temperature for 50 ps. Uniaxial tensile deformation was then applied in the $z$-direction at a strain rate of 5$\times 10^8$ s$^{-1}$ for 0.2 ns at 300 K. To maintain zero lateral pressure (constant uniaxial strain rate), we use NPT ensemble in the $x$- and $y$-directions during the deformation. The time step was set to 1 fs. We use OVITO \cite{stukowskiVisualizationAnalysisAtomistic2009} to visualize the atomic configurations and analyze
simulation results by identifying phase structures (common neighbor analysis \cite{honeycuttMolecularDynamicsStudy1987}) and calculating the atomic strains \cite{falk1998dynamics} from two atomic configurations, a current configuration (deformed one) and a reference configuration (initial one). To capture the randomness in the distribution of GBs and Mo atoms, we performed three simulations with different initial polycrystalline structures for each grain size.
\section{Results}
\subsection{Hall-Petch and inverse Hall-Petch relation}
To test the Hall-Petch and inverse Hall-Petch relations, we plotted the stress-strain curves with varying grain sizes of Ni polycrystal, as shown in Fig.~\ref{fig:hallpetch}a. Since we start from perfect dislocation-free structures and the dislocations need to be nucleated at a large stress from GBs, no dislocations are observed at small strains. As the strain increases to around 4$\%$, the dislocations appear and massive plastic deformation occurs, corresponding to a stress dropping. When the strain surpasses about 8$\%$, the stress becomes more steady. Hence, we calculate the average stress in the strain interval from 8 to 10\%, and take it as the flow stress. Fig.~\ref{fig:hallpetch}b shows that the flow stress depends strongly on the grain size. Specifically, as the grain size decreases from 11.6 nm, the flow stress first increases, and after reaching the maximum (at the grain size of around 7~nm), it decreases. This is the well-known Hall-Petch and inverse Hall-Petch relations \cite{schiotz1998softening,schiotz2003maximum,shen2007effect}.
\begin{figure}[t]
\includegraphics[width=1.0 \textwidth]{Fig2_hall-petch.pdf}
\caption{\label{fig:hallpetch}The grain-size dependence of the flow stress. (a) Stress-strain curves for three simulations with average grain sizes ranging from 4.1 nm to 11.6 nm for Ni polycrystals. (b) The flow stress of pure Ni polycrystal (green) and Ni-Mo polycrystal with 10$\%$ Mo doping (red), defined as the average stress in the strain interval from 8 to 10$\%$ deformation. The error bars indicate the standard deviation of the three calculations with different initial polycrystalline structures. A maximum in the flow stress is seen for the grain size of around 7 nm for clean Ni and slightly left shift (smaller grain size) for Ni-Mo. The blue point is the flow stress for Ni-Mo polycrystalafter annealing at 600 K. The dark dashed line is guided for eyes for continuous Hall-Petch strengthening at even smaller grains after stabilizing the nano-polycrystals.}
\end{figure}
When the grain size is large (e.g. diameter $\geq 8$ nm in our simulations), the flow stress increases with a decreasing grain size, which is consistent with the Hall-Petch relation. This is because, at large grain sizes, dislocation-based deformation dominates, which is indicated by our stress-strain and dissociated dislocation-strain curves in Fig.~\ref{fig:analysis}a. The evolution of the strength of the stress is strongly correlated with the dissociated dislocation. As the grain size further decreases, the GB-mediated plasticity plays a more important role. We plotted the average atomic strain at yielding (from 0 to 3$\%$ applied strain) as a function of the strain for different grain sizes, as shown in Fig.\ref{fig:analysis}b. Noting that at small applied strain, the induced atomic strain is mainly contributed from the GB region. We can see that smaller grain size will induce larger atomic strain under small tensile strain, leading to the decrease of the flow stress. In particular, when the grain size is small (e.g. the grain diameter$=$4.1 nm), we observe an abrupt increase in the atomic strain around 2.5$\%$ strain in the average atomic strain-tensile strain curve. This abrupt change can exactly be identified in the stress-strain curve with obvious plasticity, as shown in Fig.~\ref{fig:analysis}c, which indicates that the GB-mediated plasticity dominates for polycrystals at small grain sizes and further demonstrates the validity of the SNAP model.
\begin{figure}[t]
\includegraphics[width=1.0 \textwidth]{Fig3_hall-petch_analysis.pdf}
\caption{\label{fig:analysis}The evolution of dissociated dislocation, stress, and atomic strain as a function of the tensile strain in the Ni polycrystals. (a) Stress-strain and dissociated dislocation-strain curves for polycrystal with large grain size (grain diameter $= 11.6$ nm). (b) The average atomic strain as a function of tensile strain for different grain sizes. (c) Stress-strain and average atomic strain-strain curves for Ni polycrystal with small grain size (grain diameter$=$4.1 nm). The black vertical lines indicate the locations with abrupt changes and guide for eyes.}
\end{figure}
\subsection{Solute strengthening}
\begin{figure}[t]
\includegraphics[width=1.0 \textwidth]{Fig4_doping_analysis_new.pdf}
\caption{\label{fig:doping}GB analysis of the polycrystal model. (a) Mo percentage in GB as a function of grain size. (b) The average atomic strain as a function of tensile strain for small grain sizes w/ and w/o doping of Mo. (c) Normalized excess free volume, and (d) dislocation density for different grain sizes w/ and w/o doping of Mo at $9\%$ applied strain.}
\end{figure}
We further investigated the solute effects on the mechanical properties of the polycrystals by replacing 10\% of all Ni atoms with Mo. As shown by the red points in Fig.~\ref{fig:hallpetch}b, the flow stress increases by a significant amount for all ranges of grain sizes, compared with pure Ni. To find out the underlying strengthening mechanism, we first analyze the segregation of Mo atoms. After MC/MD simulations, solute Mo atoms segregate at GBs, as shown in Fig.~\ref{fig:doping}a. This segregation effect becomes even stronger as the grain size decreases. In the meanwhile there still exists a significant amount of Mo atoms in the bulk region, which is consistent with the experimental results \cite{hu2017grain}. We further compare the average atomic strain of the doped systems with those without Mo doping in small grain size systems, as shown in Fig.~\ref{fig:doping}b. It is clear that after doping 10$\%$ Mo, the atomic strain at yielding is significantly reduced for grain sizes of 4.1 nm and 6.1 nm. This doping effect is weakened as the grain size increases (see Fig. S1 in the supplementary information). Therefore, for the small grain-size polycrystals when the GB mediated plasticity dominates, solute doping can greatly reduce the atomic strain leading to an increased flow stress. The calculations of normalized excess free volume in the GB also support this conclusion. We define the normalized excess free volume (denoted as $k$) as below,
\begin{equation}
\begin{aligned}
k & = (V_\mathrm{GB}-N_\mathrm{Ni,GB}\times v_\mathrm{Ni}-N_\mathrm{Mo,GB}\times v_\mathrm{Mo})/(N_\mathrm{Ni,GB}\times v_\mathrm{Ni}+N_\mathrm{Mo,GB}\times v_\mathrm{Mo}) \label{eq1}
\end{aligned}
\end{equation}
where $V_\mathrm{GB}$ is the total volume of GB atoms; $N_\mathrm{Ni,GB}$ and $N_\mathrm{Mo,GB}$ are the numbers of Ni and Mo atoms in GBs, respectively; and $v_\mathrm{Ni}$ and $v_\mathrm{Mo}$ are the volumes of Ni and Mo atoms in the respective bulk. We use the common neighbor analysis algorithm in OVITO \cite{stukowskiVisualizationAnalysisAtomistic2009} to identify the GB atoms. In principle, GBs are less dense than a perfect crystal, leading to a positive value of $k$. As shown in Fig.~\ref{fig:doping}c, the excess volume is reduced after Mo solute doping for different grain sizes. This means that the GB is denser after 10$\%$ Mo doping, making atomic jumps and free-volume migration within the GB more difficult. As a result, the resistance to GB-mediated plasticity increases, which contributes to the increase in flow stress.
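The following minimal sketch illustrates how $k$ in Eq.~(\ref{eq1}) can be evaluated from per-atom data; the function and variable names are illustrative only, and it assumes that atomic (e.g., Voronoi) volumes and a GB flag per atom have already been exported, for instance from an OVITO analysis pipeline.
\begin{verbatim}
# Minimal sketch (illustrative, not the production analysis script): evaluate
# the normalized excess free volume k of the GB region defined in Eq. (1).
# Assumes per-atom tuples (species, atomic_volume, is_gb) are available,
# together with reference bulk atomic volumes v_bulk for Ni and Mo.

def normalized_excess_free_volume(atoms, v_bulk):
    V_gb = 0.0   # total volume occupied by GB atoms
    V_ref = 0.0  # volume the same atoms would occupy in the perfect bulk
    for species, volume, is_gb in atoms:
        if is_gb:
            V_gb += volume
            V_ref += v_bulk[species]
    return (V_gb - V_ref) / V_ref  # k > 0: GB less dense than the bulk

# Hypothetical usage with made-up numbers (nm^3 per atom):
atoms = [("Ni", 0.0125, True), ("Mo", 0.0170, True), ("Ni", 0.0109, False)]
print(normalized_excess_free_volume(atoms, {"Ni": 0.0109, "Mo": 0.0155}))
\end{verbatim}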
For large grain-size polycrystals, dislocation-based deformation becomes much more important. We therefore plot the intragranular dislocation density for large grain sizes, as shown in Fig.~\ref{fig:doping}d. We observe that the dislocation density increases after Mo doping for large grain sizes. An increase in dislocation density generally results in an increase in strength, owing to dislocation interactions and entanglements. This explains why the flow stress increases after Mo doping for polycrystals with large grain sizes.
\subsection{Annealing effect}
\begin{figure}[t]
\includegraphics[width=1.0 \textwidth]{Fig5_anneal_new.pdf}
\caption{\label{fig:anneal}Annealing effects on the small-size polycrystal model. (a) The average atomic strain at yielding as a function of tensile strain under different conditions for the polycrystal with grain diameter 4.1 nm. (b) The stress-strain curves under different conditions for the polycrystal with grain diameter 4.1 nm. Pure: as-prepared Ni polycrystal after MC/MD calculations at room temperature; Doping: Ni polycrystal doped with 10$\%$ Mo; Annealing: pure Ni polycrystal annealed at 600 K; Doping-Annealing: 10$\%$ Mo-doped polycrystal annealed at 600 K.}
\end{figure}
To study the annealing effect on the mechanical properties of the polycrystal, we annealed both the pure Ni and the 10$\%$ Mo-doped polycrystals with small grain size (grain diameter of 4.1 nm) at 600 K. We calculated the average atomic strain for all four polycrystals: pure Ni (Pure), Ni with 10$\%$ Mo doping (Doping), pure Ni after annealing at 600 K (Annealing), and Ni with 10$\%$ Mo doping after annealing at 600 K (Doping-Annealing). As shown in Fig.~\ref{fig:anneal}a, doping or annealing alone reduces the atomic strain at yielding by a considerable amount. With both doping and annealing, the atomic strain is reduced further compared with doping or annealing alone (see also Fig. S2 in the supplementary information for the strain distributions). The strain energy of the polycrystals, defined as (potential energy $-$ energy in the corresponding bulk)/number of atoms, is 0.154 eV/atom for pure Ni, reduced to 0.145 eV/atom after annealing and 0.122 eV/atom after Mo doping, and further reduced to 0.112 eV/atom with both doping and annealing. In other words, both doping and annealing stabilize the GB, making GB-mediated plasticity more difficult. This can also be observed from the stress-strain curves in Fig.~\ref{fig:anneal}b. Plasticity occurs much earlier in the pure Ni polycrystal (red curve), followed by the doping-only and annealing-only polycrystals (green and blue curves), while the polycrystal with both doping and annealing shows a linear stress increase over a much wider range of strain, resulting in the largest strength of the four polycrystals. Therefore, with both doping and annealing, the strength of small-size polycrystals, e.g., those with grain diameter smaller than 7 nm, can be improved further (see the blue points in Fig.~\ref{fig:hallpetch}b).
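In equation form, the per-atom strain energy quoted above can be written as (this display merely restates the parenthetical definition in the text):
\begin{displaymath}
E_{\mathrm{strain}} = \frac{E_{\mathrm{pot}} - N\, e_{\mathrm{bulk}}}{N},
\end{displaymath}
where $E_{\mathrm{pot}}$ is the total potential energy of the polycrystal, $N$ the number of atoms, and $e_{\mathrm{bulk}}$ the per-atom energy of the corresponding bulk crystal.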
\begin{figure}[h]
\includegraphics[width=1.0 \textwidth]{Fig6_anneal_different_size.pdf}
\caption{\label{fig:anneal1}Annealing analysis of the polycrystal models with different grain sizes. The average atomic strain during plastic flow deformation as a function of tensile strain under different conditions for polycrystals with grain diameters of (a) 4.1 nm and (b) 7.5 nm, in the entire sample, the GB region, and the dissociated dislocation (DD) region, respectively.}
\end{figure}
On the other hand, annealing has the opposite effect on large-size polycrystals, e.g., those with grain diameter larger than 7 nm, as shown by the blue points in Fig.~\ref{fig:hallpetch}b. In other words, further annealing after doping reduces the strength of the large-size polycrystals. To reveal the underlying atomistic behavior, we plotted the average atomic strain during plastic flow deformation for both small-size (grain diameter of 4.1 nm) and large-size (grain diameter of 7.5 nm) polycrystals, as shown in Fig.~\ref{fig:anneal1}. It is clearly observed that the atomic strain during plastic deformation is further reduced after annealing for the small-size polycrystal, while for the large-size polycrystal annealing increases the atomic strain significantly, mainly in the GB region (see Fig.~S3 in the supplementary information for the atomic strain distribution in the model structure).
\section{Discussion}
ML-IAPs have received considerable interest in the field of materials science due to their high accuracy and good scalability \cite{mishin2021machine}. They have been used to investigate a variety of mechanical properties of metals and alloys, including dislocation properties \cite{yin2021atomistic,zhao2021anomalous,byggmastar2021modeling}, stacking faults \cite{bartok2018machine,wang2021generalized,li2020complex}, phase transitions \cite{zong2018developing,verdi2021thermal}, etc. However, few studies have been dedicated to the GB strengthening problem, e.g., to polycrystals with different grain sizes. This is mainly due to the high computational cost of ML-IAPs compared to classical force fields such as the EAM potential \cite{li2018quantum}. Here, with the help of high-performance computing clusters, we successfully study polycrystals using an ML-IAP, because this type of IAP, like the classical ones, still scales linearly with the number of atoms and is orders of magnitude cheaper than DFT calculations. We demonstrated that the ML-IAP can successfully reproduce the Hall-Petch and inverse Hall-Petch relations in polycrystalline metals and reveal their underlying mechanisms.
Complex alloy strengthening mechanisms have been extensively explored, including GB strengthening, solute strengthening, and annealing effects. GB strengthening, also known as Hall-Petch strengthening, fails at extremely fine grain sizes, e.g., at the nanometer scale. When coupled with solute effects, however, the grain size at which Hall-Petch strengthening reaches its maximum can be shifted to smaller values \cite{sansoz2022hall}. Annealing adds further complexity to the strengthening mechanisms: it further strengthens small-grain-size polycrystals while weakening large-size ones by triggering large atomic movements in the GB region during plastic deformation (see Fig.~S3 in the supplementary information). The enhancement of the strength of small-grain-size polycrystals by solute doping and annealing can lead to the resurgence of Hall-Petch strengthening down to even smaller grain sizes, as shown in Fig.~\ref{fig:hallpetch}b (dashed black line).
\section{Conclusion}
The present computational and theoretical study has shown that the Hall-Petch and inverse Hall-Petch relations can be well predicted by the ML-IAP. A switch of the dominant mechanism from dislocation-based to GB-mediated plasticity is observed as the grain size decreases, which is responsible for the Hall-Petch to inverse Hall-Petch transition. Atomistic analysis indicates that the atomic strain induced by uniaxial tensile loading can be significantly reduced by both solute doping and annealing for small-grain-size polycrystals, resulting in a great enhancement of their strength. However, for the doped large-size polycrystals, annealing weakens the samples due to the increased atomic strain in the GB region. The combined effects of solute doping and annealing on polycrystals of different grain sizes can postpone the inverse Hall-Petch transition and maintain continuous strengthening down to even smaller grain sizes. Our atomistic simulations reveal the complex strengthening mechanisms underlying nano-sized polycrystals, theoretically supporting the potential of achieving ultra-strong nanograined materials.
\section*{Acknowledgements}
XL, QZ, SL, and JS would like to acknowledge financial support from the Hundreds of Talents Program of Sun Yat-sen University.
|
{
"arxiv_id": "2302.08699",
"language": "en",
"timestamp": "2023-02-20T02:07:05",
"url": "https://arxiv.org/abs/2302.08699",
"yymm": "2302"
} | \section{Introduction}
\subsection{Background}
Suppose $G$ is an algebraic group (or supergroup) over a field $k$. One can then consider the category $\Rep(G)$ of finite dimensional algebraic representations of $G$, which comes with a tensor product. This category satisfies the following conditions:
\begin{enumerate}
\item It is abelian and every object has finite length (i.e., a finite composition series).
\item The space of maps between two objects is a finite dimensional $k$-vector space.
\item Every object has a dual (i.e., the category is rigid).
\item If $\mathds{1}$ denotes the unit object for tensor product (i.e., the trivial representation) then $\End(\mathds{1})=k$.
\end{enumerate}
A \defn{pre-Tannakian category} is a $k$-linear symmetric tensor category satisfying these axioms (see \cite[\S 2.1]{ComesOstrik} for more details). An important problem within the field of tensor categories is to understand the extent to which pre-Tannakian categories go beyond classical representation categories.
Deligne \cite{Deligne} gave the first examples of pre-Tannakian categories not of the form $\Rep(G)$: he constructed a 1-parameter family of pre-Tannakian categories $\uRep(\mathfrak{S}_t)$ by ``interpolating'' the representation categories $\Rep(\mathfrak{S}_n)$ of symmetric groups. Knop \cite{Knop, Knop2} generalized Deligne's construction and interpolated other families of finite groups, such as finite linear groups. There has been much subsequent work in this direction, e.g., \cite{ComesOstrik1, ComesOstrik, CW, EntovaAizenbudHeidersdorf, Harman, Harman2}.
Recently, in joint work with Harman \cite{repst}, we gave a new construction of pre-Tannakian categories. Recall that an \defn{oligomorphic group} is a permutation group $(G, \Omega)$ such that $G$ has finitely many orbits on $\Omega^n$ for all $n \ge 0$. We introduced a notion of \defn{measure} for an oligomorphic group (reviewed in \S \ref{s:olig} below). Given a $k$-valued measure $\mu$ for $G$, we constructed a $k$-linear rigid tensor category $\uPerm(G; \mu)$ of ``permutation modules.'' Under certain hypotheses, we showed that this category admits an abelian envelope $\uRep(G; \mu)$ that is pre-Tannakian.
The simplest example of an oligomorphic group is the infinite symmetric group $\mathfrak{S}$. In this case, we showed in \cite{repst} that there is a 1-parameter family of measures $\mu_t$, and that the resulting category $\uRep(\mathfrak{S}; \mu_t)$ coincides with Deligne's interpolation category $\uRep(\mathfrak{S}_t)$. More generally, in all known cases of interpolation (such as those considered by Knop), the sequence of finite groups has an oligomorphic limit, and our theory yields the interpolation categories previously constructed.
In \cite{repst}, we also considered a handful of oligomorphic groups that do not arise as limits of finite groups. For example, we analyzed the oligomorphic group $\Aut(\mathbf{R},<)$ of order-preserving self-bijections of the real line. We showed that this group admits essentially four measures, and that one of these measures leads to a pre-Tannakian category. This category was studied in detail in \cite{delannoy}, where it was named the \defn{Delannoy category}; we found that it possesses several remarkable properties (e.g., the Adams operations are trivial on its Grothendieck group). A related case will be treated in the forthcoming paper \cite{delannoy2}.
The purpose of this paper is to add to the list of examples from \cite{repst}: we determine the measures for a certain infinite family of oligomorphic groups (which cannot be realized as limits of finite groups). This leads to a large number of new rigid tensor categories. We do not know if these categories have abelian envelopes.
\begin{remark}
If $\mathcal{C}$ is a class of finite relational structures (such as graphs, total orders, etc.), one can sometimes form the \defn{Fra\"iss\'e limit} $\Omega$ of $\mathcal{C}$, which is a countable structure that has an important homogeneity property. The automorphism group $G$ of $\Omega$ is often oligomorphic, and this construction is the main source of oligomorphic groups. We showed in \cite{repst} that a measure for $G$ is (essentially) a rule assigning to each member of $\mathcal{C}$ a value in $k$ such that certain identities hold. Thus understanding measures is really a combinatorial problem, and indeed, most of the work in this paper is combinatorial in nature.
\end{remark}
\subsection{Statement of results}
Let $\mathbf{S}$ be a countable set equipped with an everywhere dense cyclic order; for example, one can take $\mathbf{S}$ to be the roots of unity in the complex unit circle. Let $\Sigma$ be a non-empty finite set and let $\sigma \colon \mathbf{S} \to \Sigma$ be a function such that $\sigma^{-1}(a)$ is dense for each $a \in \Sigma$. We regard $\sigma$ as a coloring of $\mathbf{S}$. It turns out that $\mathbf{S}$ is the Fra\"iss\'e limit of the class of finite sets equipped with a cyclic order and $\Sigma$-coloring (see Proposition~\ref{prop:S-homo}); in particular, up to isomorphism, $\mathbf{S}$ is independent of the choice of coloring $\sigma$. It also follows that the automorphism group $G$ of $\mathbf{S}$ is oligomorphic.
The goal of this paper is to classify the measures for $G$. It is easy to see that there is a universal measure valued in a certain ring $\Theta(G)$, and the problem of classifying measures for $G$ amounts to computing the ring $\Theta(G)$. This is what our main theorem accomplishes:
\begin{theorem} \label{mainthm}
Given a directed tree $T$ with edges labeled by $\Sigma$, there is an associated $\mathbf{Z}$-valued measure $\mu_T$ for $G$. The product of these measures (over isomorphism classes of trees) defines a ring isomorphism $\Theta(G) \to \prod_T \mathbf{Z}$. In particular, $\Theta(G) \cong \mathbf{Z}^N$, where $N=2^n \cdot (n+1)^{n-2}$ and $n=\# \Sigma$.
\end{theorem}
Here, a ``tree'' is a connected simple graph with no cycles, ``directed'' means each edge has been given an orientation, and the labeling means that there is a given bijection between the edge set and $\Sigma$. An example is given in \S \ref{ss:ex}.
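For instance, simply evaluating the formula in Theorem~\ref{mainthm}: when $n=\# \Sigma=2$ there are $N=2^2 \cdot 3^0=4$ directed $\Sigma$-labeled trees (and thus four $\mathbf{Z}$-valued measures $\mu_T$), while $n=3$ gives $N=2^3 \cdot 4^1=32$.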
Fix a point $\infty \in \mathbf{S}$, and let $\mathbf{L}=\mathbf{S} \setminus \{\infty\}$. The set $\mathbf{L}$ is totally ordered and $\Sigma$-colored, and its automorphism group $H$ is oligomorphic. We also classify measures for $H$ (see \S \ref{s:line}). (We note that the group $H$ appears in \cite[\S 5.2]{Agarwal} and \cite{Laflamme}.)
\subsection{Summary of proof} \label{ss:plan}
The proof has essentially four main steps:
\begin{enumerate}
\item Using the classification of open subgroups of $H$ (Proposition~\ref{prop:subgp}), we first show that measures for $G$ are equivalent to $\Sigma$-measures. A $\Sigma$-measure is a rule $\nu$ that assigns to each $a,b \in \Sigma$ and word $w \in \Sigma^{\star}$ a quantity $\nu_{a,b}(w)$, such that certain relations hold (see Definition~\ref{defn:Sigma-meas}). This is an important reduction since $\Sigma$-measures are purely combinatorial objects.
\item We next show that $\Sigma$-measures are determined by their values on words of length one, and the defining relations are generated by those involving words of length at most two. We phrase this result as an equivalence between $\Sigma$-measures and another notion called $\Sigma$-symbols (see Definition~\ref{defn:symbol}). This is another important reduction since $\Sigma$-symbols are far simpler than $\Sigma$-measures.
\item Next, we essentially solve the defining equations for $\Sigma$-symbols. Assuming the coefficient ring is connected, $\Sigma$-symbols correspond to functions $S \colon \Sigma^2 \to \{0,1\}$ satisfying one relatively simple condition, namely, condition $(\ast)$ in \S \ref{ss:orbi}.
\item The functions $S$ appearing above are studied in \S \ref{s:bisect}, where they are called ``oriented bisection structures.'' We show that these functions naturally correspond to directed trees with edges labeled by $\Sigma$.
\end{enumerate}
\subsection{Tensor categories}
Let $k$ be a field. Given a tree $T$ as in Theorem~\ref{mainthm}, the machinery of \cite{repst} produces a $k$-linear rigid tensor category $\uPerm(G; \mu_T)$ (which is not abelian). This is potentially a very interesting example, especially in light of the results of \cite{delannoy, delannoy2} in the case $\# \Sigma=1$. An important problem is to determine if this category has an abelian envelope, as this would yield a new pre-Tannakian category. We know of no obstruction, but when $\# \Sigma>1$ the results in \cite{repst} on abelian envelopes do not apply (the measure $\mu_T$ is not quasi-regular).
\subsection{Outline}
In \S \ref{s:bisect}, we introduce the concept of a bisection structure, and show that they are equivalent to trees. In \S \ref{s:olig}, we review oligomorphic groups and measures in general. In \S \ref{s:group}, we introduce the main groups of interest, and determine some of their group-theoretic properties. In \S \ref{s:class}, we prove Theorem~\ref{mainthm}. Finally, in \S \ref{s:line}, we treat the group $H$.
\subsection{Notation}
The following is the most important notation:
\begin{description}[align=right,labelwidth=2.25cm,leftmargin=!]
\item[ $k$ ] the coefficient ring
\item[ $\Sigma$ ] the finite set of colors
\item[ $\Sigma^{\star}$ ] the set of words in the alphabet $\Sigma$
\item[ $\mathbf{S}$ ] the circle
\item[ $\mathbf{L}$ ] the line, defined as $\mathbf{S} \setminus \{\infty\}$
\item[ $\sigma$ ] the coloring of $\mathbf{S}$
\item[ $G$ ] the automorphism group of $\mathbf{S}$ (except in \S \ref{s:olig})
\item[ $H$ ] the automorphism group of $\mathbf{L}$
\item[ $G(A)$ ] the subgroup of $G$ fixing each element of $A$
\item[ {$w[i,j]$} ] a substring of the word $w$ (see \S \ref{ss:Sigma-meas})
\end{description}
\subsection*{Acknowledgments}
We thank Nate Harman and Steven Sam for helpful discussions.
\section{Bisection structures} \label{s:bisect}
\subsection{The definition} \label{ss:bisect-defn}
Suppose $x$ is a real number. Deleting $x$ from the real line cuts it into two pieces. We thus get an equivalence relation $R_x$ on $\mathbf{R} \setminus \{x\}$ by letting $R_x(y,z)$ mean ``$y$ and $z$ belong to the same connected component.'' If $x$, $y$, and $z$ are distinct real numbers then exactly one of $R_x(y,z)$, $R_y(x,z)$, and $R_z(x,y)$ is false. The following definition axiomatizes this situation, but allows for a bit more flexibility.
\begin{definition} \label{defn:bisect}
Let $X$ be a set. A \defn{bisection structure} on $X$ is a rule $R$ assigning to each $x \in X$ an equivalence relation $R_x$ on $X \setminus \{x\}$ such that the following conditions hold:
\begin{enumerate}
\item The equivalence relation $R_x$ has at most two equivalence classes.
\item If $x,y,z \in X$ are distinct then at most one of $R_x(y,z)$, $R_y(x, z)$, and $R_z(x, y)$ fails to hold. \qedhere
\end{enumerate}
\end{definition}
Trees also lead to bisection structures. For the purposes of this paper, a \defn{tree} is a finite simple graph that is connected and has no cycles, and an \defn{$X$-labeled tree} is a tree with a given bijection between $X$ and the edge set (i.e., the edges are labeled by $X$). Suppose $T$ is an $X$-labeled tree, and let $x$ be an edge of $T$. Deleting $x$ from $T$ (but not the vertices in $x$) yields a forest $T_x$ with at most two components. Let $R_x(y,z)$ mean ``$y$ and $z$ belong to the same component of $T_x$.'' Equivalently, $R_x(y,z)$ means that the geodesic joining $y$ and $z$ in $T$ does not include the edge $x$. One readily verifies that the $R_x$'s define a bisection structure on $X$. We denote this bisection structure by $R^T$.
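To make the construction concrete, here is a small illustrative sketch (ours, not from the paper) that computes $R^T_x(y,z)$ for an $X$-labeled tree given as a dictionary from edge labels to vertex pairs:
\begin{verbatim}
# Illustration: the bisection structure R^T of an X-labeled tree.
# Edges are a dict {label: (u, v)}; R_x(y, z) holds iff edges y and z lie in
# the same connected component after the edge labeled x is deleted.
def R(edges, x, y, z):
    adj = {}
    for lbl, (u, v) in edges.items():
        if lbl != x:                      # delete edge x, keep all vertices
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    start = edges[y][0]                   # explore y's component
    seen, stack = {start}, [start]
    while stack:
        for w in adj.get(stack.pop(), ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return edges[z][0] in seen            # does edge z lie in the same piece?

# Path a --1-- b --2-- c --3-- d with edges labeled by X = {1, 2, 3}:
tree = {1: ("a", "b"), 2: ("b", "c"), 3: ("c", "d")}
print(R(tree, 2, 1, 3))  # False: edge 2 lies on the geodesic joining 1 and 3
print(R(tree, 1, 2, 3))  # True: deleting edge 1 leaves edges 2 and 3 together
\end{verbatim}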
\begin{remark}
Bisection structures are closely related to the concept of \emph{betweenness}. Indeed, if $R$ is the bisection structure on the real line then $\lnot R_x(y,z)$ exactly means that $x$ is between $y$ and $z$; a similar observation holds for the bisection structures associated to trees. Bankston \cite{Bankston} has defined a general notion of betweenness, and discussed many examples. The betweenness relation on the vertices of a tree appears often in the literature; however, this does not lead to a bisection structure in general.
\end{remark}
\subsection{The main result}
The following is the main result we need on bisection structures:
\begin{theorem} \label{thm:bisect}
Let $R$ be a bisection structure on a finite set $X$. Then there exists an $X$-labeled tree $T$ such that $R=R^T$, and $T$ is unique up to isomorphism.
\end{theorem}
The proof will be divided into several lemmas. Say that distinct elements $x$ and $y$ of $X$ are \defn{adjacent} if for all $z \in X \setminus \{x,y\}$ we have $R_z(x,y)$. For each $x \in X$, let $E_x^+$ and $E_x^-$ be the two equivalence classes for $R_x$, labeled in an arbitrary manner; if there are fewer than two equivalence classes, take one or both of the sets to be empty. Let $\tilde{V} = X \times \{\pm\}$. We define a relation $\sim$ on $\tilde{V}$ by $(x,a) \sim (y,b)$ if $(x,a)=(y,b)$, or $x \ne y$ are adjacent and $x \in E_y^b$ and $y \in E_x^a$.
\begin{lemma}
$\sim$ is an equivalence relation on $\tilde{V}$.
\end{lemma}
\begin{proof}
We just need to verify transitivity. Thus suppose $(x,a) \sim (y,b)$ and $(y,b) \sim (z,c)$. We show $(x,a) \sim (z,c)$.
We first claim that $x$ and $z$ are adjacent. Let $w \in X \setminus \{x,z\}$ be given. If $w=y$ then since $x$ and $z$ both belong to the equivalence class $E_y^b$, the relation $R_w(x,z)$ holds. If $w \ne y$ then $R_w(x,y)$ and $R_w(y,z)$ hold, since $(x,y)$ and $(y,z)$ are adjacent pairs, and so $R_w(x,z)$ holds, since $R_w$ is an equivalence relation. This proves the claim.
Now, since $x$ and $y$ are adjacent, the relation $R_z(x,y)$ holds. Since $y$ belongs to $E_z^c$, it follows that $x$ does as well. Similarly, $z$ belongs to $E_x^a$. Thus $(x,a) \sim (z,c)$, as required.
\end{proof}
Let $V$ be the quotient $\tilde{V}/\sim$. We define a graph $T=T(R)$ with vertex set $V$ and edge set $X$. For $x \in X$, the two vertices of the edge $x$ are the classes of $(x,+)$ and $(x,-)$ in $V$; these two vertices are distinct by definition of the equivalence relation, and so $T$ has no loops. The following lemma shows that there are no 2-cycles, i.e., parallel edges. Thus $T$ is a simple graph. Note that if $x$ and $y$ are distinct elements of $X$ then they are adjacent (in the above sense) if and only if they share a vertex in $T$.
\begin{lemma}
$T$ has no cycles, i.e., it is a forest.
\end{lemma}
\begin{proof}
Suppose by way of contradiction that we have a cycle. Let $x_1, \ldots, x_n$ be the edges involved, so that $x_i$ is adjacent to $x_{i+1}$ for all $i \in \mathbf{Z}/n$. Since the labeling of equivalence classes was arbitrary, we may as well suppose that $(x_i, +) \sim (x_{i+1}, -)$ for all $i \in \mathbf{Z}/n$. We thus have $x_2 \in E^+_{x_1}$. Now, $x_1 \in E^-_{x_2}$ and $x_3 \in E^+_{x_2}$, so $\lnot R_{x_2}(x_1, x_3)$ holds; thus by Definition~\ref{defn:bisect}(b), we have $R_{x_1}(x_2,x_3)$, and so $x_3 \in E_{x_1}^+$. Continuing in this manner, we find $x_i \in E_{x_1}^+$ for all $2 \le i \le n$. However, since $(x_n, +) \sim (x_1, -)$, we have $x_n \in E_{x_1}^-$, a contradiction.
\end{proof}
Let $x \ne z$ be elements of $X$. We say that $y \in X \setminus \{x,z\}$ is \defn{between} $x$ and $z$ if $\lnot R_y(x,z)$ holds. We let $P(x,z)$ be the set of such elements $y$.
\begin{lemma}
$T$ is connected, and thus a tree. In fact, for $x \ne z$, the set $P(x,z)$ is the collection of edges in the shortest path joining $x$ and $z$.
\end{lemma}
\begin{proof}
We proceed by induction on $\# P(x,z)$. If $P(x,z)$ is empty then $x$ and $z$ are adjacent, and the statement is trivial. Suppose now that $\# P(x,z)>0$, and let $y$ be an element of this set. We claim that
\begin{displaymath}
P(x,z) = P(x,y) \cup \{y\} \cup P(y,z).
\end{displaymath}
Indeed, suppose that $w \in P(x,y)$, i.e., $\lnot R_w(x,y)$ holds. We have the following implications
\begin{displaymath}
\lnot R_w(x,y) \implies R_y(w,x) \implies \lnot R_y(w,z) \implies R_w(y,z) \implies \lnot R_w(x,z).
\end{displaymath}
The first implication comes from Definition~\ref{defn:bisect}(b); the second follows since $\lnot R_y(x,z)$ holds and $R_y$ is an equivalence relation; the third comes from Definition~\ref{defn:bisect}(b); and the fourth follows since $\lnot R_w(x,y)$ holds and $R_w$ is an equivalence relation. Thus $w \in P(x,z)$. We have thus shown that $P(x,y) \subset P(x,z)$, and by symmetry we have $P(y,z) \subset P(x,z)$ as well. This proves one of the containments above.
We now prove the reverse containment. Thus suppose $w \in P(x,z)$ and $w \ne y$. We must show $w$ belongs to either $P(x,y)$ or $P(y,z)$. Suppose it does not belong to $P(x,y)$. Then $\lnot R_w(x,z)$ holds, since $w \in P(x,z)$, and $R_w(x,y)$ holds, since $w \not\in P(x,y)$. Since $R_w$ is an equivalence relation, it follows that $\lnot R_w(y,z)$ holds, and so $w \in P(y,z)$, as required.
Now, $P(x,y)$ and $P(y,z)$ do not contain $y$, and thus are proper subsets of $P(x,z)$. Thus, by the inductive hypothesis, $P(x,y)$ is a path from $x$ to $y$, and $P(y,z)$ is a path from $y$ to $z$. It follows that $P(x,z)$ is a path from $x$ to $z$. As for the minimality of $P(x,z)$, choose $y \in P(x,z)$ adjacent to $x$ (which must exist). Then $P(x,z) = \{y\} \sqcup P(y,z)$. Since $P(y,z)$ is a minimal path between $y$ and $z$ (by induction), and $x \not\in P(y,z)$, it follows that $P(x,z)$ is a minimal path between $x$ and $z$.
\end{proof}
\begin{lemma} \label{lem:bisect-4}
We have $R=R^T$.
\end{lemma}
\begin{proof}
We have seen that $P(y,z)$ is the shortest path in $T$ from $y$ to $z$, and so $R^T_x(y,z)$ holds if and only if $x \not\in P(y,z)$. However, $x \not\in P(y,z)$ is equivalent to $R_x(y,z)$, by definition.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:bisect}]
Let $\mathscr{T}$ be the set of isomorphism classes of $X$-labeled trees, and let $\mathscr{B}$ be the set of bisection structures on $X$. We have a map $\Phi \colon \mathscr{T} \to \mathscr{B}$ by $\Phi(T)=R^T$. The construction $R \mapsto T(R)$ above yields a function $\Psi \colon \mathscr{B} \to \mathscr{T}$. Lemma~\ref{lem:bisect-4} shows that $\Phi \circ \Psi$ is the identity. It is easy to see directly that $\Psi \circ \Phi$ is also the identity; that is, if one starts with a tree $T$ then $T(R^T)$ is isomorphic to $T$.
\end{proof}
\subsection{Orientations} \label{ss:orbi}
We now discuss a variant of the above ideas. An \defn{oriented bisection structure} is a function
\begin{displaymath}
S \colon (X \times X) \setminus \Delta \to \{\pm \},
\end{displaymath}
where $\Delta$ denotes the diagonal, such that taking $R_x(y,z)$ to be the relation defined by $S(x,y)=S(x,z)$, the collection $R=\{R_x\}$ is a bisection structure on $X$. This $R$ automatically satisfies Definition~\ref{defn:bisect}(a), so one only needs to consider Definition~\ref{defn:bisect}(b). In terms of $S$, this amounts to the following condition:
\begin{itemize}
\item[($\ast$)] Given distinct $x,y,z \in X$, at most one of the equalities
\begin{displaymath}
S(x,y)=S(x,z), \qquad S(y,x)=S(y,z), \qquad S(z,x)=S(z,y)
\end{displaymath}
fails to hold.
\end{itemize}
More informally, an oriented bisection structure is simply a bisection structure $R$ where for each $x$ we have labeled the equivalence classes of $R_x$ as $+$ and $-$. We made use of exactly this kind of structure in the proof of Theorem~\ref{thm:bisect}.
Define a \defn{directed tree} to be a tree in which each edge has been given a direction. If $T$ is an $X$-labeled directed tree then it induces an oriented bisection structure $S^T$ on $X$, as follows. If we delete edge $x$ from $T$, there are (at most) two resulting components; the edge $x$ points towards one of these components, and away from the other. We put $S^T(x,y)=+$ if $x$ points towards $y$'s component, and put $S^T(x,y)=-$ otherwise. One readily verifies that $S^T$ is an oriented bisection structure, and that the analog of Theorem~\ref{thm:bisect} holds in this setting. We give an example in \S \ref{ss:ex}.
\subsection{Enumeration}
The following proposition counts the structures we have considered.
\begin{proposition} \label{prop:enum}
Let $X$ be a finite set with $n$ elements. Put
\begin{align*}
N_1 &= \text{the number of bisection structures on $X$} \\
N_2 &= \text{the number of $X$-labeled trees up to isomorphism} \\
N_3 &= \text{the number of oriented bisection structures on $X$} \\
N_4 &= \text{the number of directed $X$-labeled trees up to isomorphism}
\end{align*}
Then
\begin{displaymath}
N_1=N_2=(n+1)^{n-2}, \qquad N_3=N_4=2^n (n+1)^{n-2}.
\end{displaymath}
The first formula is valid for $n \ge 2$, while the second is valid for $n \ge 1$.
\end{proposition}
\begin{proof}
The equality $N_1=N_2$ follows from Theorem~\ref{thm:bisect}. The explicit formula for $N_2$ follows from Cayley's theorem on trees; see \cite[Proposition~2.1]{CameronTrees}. The equality $N_3=N_4$ follows from the oriented analog of Theorem~\ref{thm:bisect}. If $n \ge 2$ then there are no automorphisms of an $X$-labeled tree $T$, since the group of unlabeled automorphisms acts faithfully on the edges (see the proof of \cite[Proposition~2.1]{CameronTrees}). It follows that if we direct the edges of $T$ in two different ways, the resulting directed $X$-labeled trees are non-isomorphic. Hence $N_4=2^n \cdot N_2$. The formula for $N_4$ when $n=1$ is easily verified directly. (Note that if $T$ is an $X$-labeled tree with one edge then the two ways of directing this edge yield isomorphic directed $X$-labeled trees.)
\end{proof}
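As an independent sanity check (ours, not part of the paper's argument), one can verify the formula $N_3=2^n(n+1)^{n-2}$ for small $n$ by brute-force enumeration of the functions $S$ satisfying condition ($\ast$) of \S \ref{ss:orbi}:
\begin{verbatim}
# Sanity check (illustrative): count oriented bisection structures on
# X = {0, ..., n-1}, i.e., functions S: (X x X) \ diag -> {+1, -1} satisfying
# condition (*), and compare with the closed formula 2^n * (n+1)^(n-2).
from itertools import combinations, product

def count_oriented_bisection_structures(n):
    pairs = [(x, y) for x in range(n) for y in range(n) if x != y]
    count = 0
    for values in product((+1, -1), repeat=len(pairs)):
        S = dict(zip(pairs, values))
        ok = True
        for x, y, z in combinations(range(n), 3):
            failures = ((S[x, y] != S[x, z]) + (S[y, x] != S[y, z])
                        + (S[z, x] != S[z, y]))
            if failures > 1:       # condition (*): at most one equality fails
                ok = False
                break
        count += ok
    return count

for n in range(2, 5):
    print(n, count_oriented_bisection_structures(n), 2**n * (n + 1)**(n - 2))
# expected output: 2 4 4 / 3 32 32 / 4 400 400
\end{verbatim}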
\begin{remark}
The integer sequence defined by the formula $2^n \cdot (n+1)^{n-2}$ is discussed in \cite{OEIS}. We mention two other places where it occurs.
\begin{itemize}
\item Let $S_n=\mathbf{Q}[x_i,y_i,z_i]_{1 \le i \le n}$, and let $R_n$ be the quotient of $S_n$ by the ideal generated by homogeneous $\mathfrak{S}_n$-invariants of positive degree. Here the symmetric group $\mathfrak{S}_n$ acts on $S_n$ by permuting each set of variables in the obvious manner. Haiman \cite[Fact~2.8.1]{Haiman} observed that the dimension of $R_n$ as a $\mathbf{Q}$-vector space is $2^n (n+1)^{n-2}$ for $1 \le n \le 5$, and suggested this might be true for all $n$; as far as we know, this is still open.
\item Let $B_n(x)$ be the $n$th Morgan--Voyce polynomial. This is defined recursively by $B_0=B_1=1$ and $B_n=(x+2)B_{n-1} - B_{n-2}$. The discriminant of $B_{n+1}$ is $2^n(n+1)^{n-2}$ \cite[Table~5]{Florez}. \qedhere
\end{itemize}
\end{remark}
\section{Oligomorphic groups and measures} \label{s:olig}
\subsection{Oligomorphic groups}
An \defn{oligomorphic group} is a permutation group $(G, \Omega)$ such that $G$ has finitely many orbits on $\Omega^n$ for all $n \ge 1$. We refer to Cameron's book \cite{CameronBook} for general background on these groups.
Suppose we have an oligomorphic group $(G, \Omega)$. For a finite subset $A \subset \Omega$, let $G(A)$ be the subgroup of $G$ fixing each element of $A$. These subgroups form a neighborhood basis of the identity for a topology on $G$. This topology has three important properties: it is Hausdorff; it is non-archimedean (open subgroups form a neighborhood basis of the identity); and it is Roelcke pre-compact (if $U$ and $V$ are open subgroups then $U \backslash G/V$ is finite); see \cite[\S 2.2]{repst}. We say that a topological group is \defn{admissible} if it satisfies these three properties.
Although we ultimately care most about oligomorphic groups, our constructions only depend on the topology and not the specific permutation action, so we tend to work with admissible topological groups.
\subsection{Actions}
Let $G$ be an admissible topological group. We say that an action of $G$ on a set $X$ is \defn{smooth} if every point has open stabilizer. We use the term ``$G$-set'' to mean ``set equipped with a smooth action of $G$.'' We say that a $G$-set is \defn{finitary} if it has finitely many orbits. A product of two finitary $G$-sets is again a finitary $G$-set. See \cite[\S 2.3]{repst} for details.
A \defn{$\hat{G}$-set} is a $U$-set for some open subgroup $U$ of $G$, called a \defn{group of definition}; shrinking $U$ does not change the $\hat{G}$-set. A $\hat{G}$-set is called finitary if it is finitary with respect to some group of definition; this does not depend on the group of definition. If $f \colon X \to Y$ is a map of $G$-sets then the fiber over any point is a $\hat{G}$-set; this is one reason this concept is useful. The symbol $\hat{G}$ has no rigorous meaning on its own, but we think of it as an infinitesimal neighborhood of the identity. See \cite[\S 2.]{repst} for details.
\subsection{Measures}
Let $G$ be an admissible group. The following definition was introduced in \cite{repst}, and will be the primary concept studied in this paper:
\begin{definition} \label{defn:measure}
A \defn{measure} for $G$ valued in a commutative ring $k$ is a rule $\mu$ assigning to each finitary $\hat{G}$-set $X$ a quantity $\mu(X)$ in $k$ such that the following axioms hold (in which $X$ and $Y$ denote finitary $\hat{G}$-sets):
\begin{enumerate}
\item Isomorphism invariance: $\mu(X)=\mu(Y)$ if $X \cong Y$.
\item Normalization: $\mu(\mathbf{1})=1$, where $\mathbf{1}$ is the one-point $\hat{G}$-set.
\item Conjugation invariance: $\mu(X^g)=\mu(X)$, where $X^g$ is the conjugate of the $\hat{G}$-set $X$ by $g \in G$.
\item Additivity: $\mu(X \amalg Y)=\mu(X)+\mu(Y)$.
\item Multiplicativity in fibrations: if $X \to Y$ is a map of transitive $U$-sets, for some open subgroup $U$, with fiber $F$ (over some point) then $\mu(X)=\mu(F) \cdot \mu(Y)$.
\end{enumerate}
We let $\mathcal{M}_G(k)$ denote the set of $k$-valued measures for $G$.
\end{definition}
Given a measure, one obtains a theory of integration for functions on $G$-sets; see \cite[\S 3]{repst}. The construction of tensor categories in \cite{repst} is built on top of this theory of integration. For the present paper, however, we will not need integration.
There are a few concepts equivalent to the above notion of measure that we mention, simply to provide the reader with more intuition:
\begin{enumerate}
\item If $X \to Y$ is a map of $G$-sets then the fiber over any point is a $\hat{G}$-set, and every $\hat{G}$-set can be obtained in this manner. One can use this to reformulate the notion of measure as a rule that assigns to each such map $X \to Y$ (with $Y$ transitive and $X$ finitary) a quantity in $k$, such that certain conditions hold; see \cite[\S 4.5]{repst}. The advantage of this formulation is that it depends only on the category of $G$-sets.
\item A \defn{generalized index} is a rule assigning to each containment of open subgroups $U \subset V$ a quantity $\lbb U:V \rbb$ in $k$, satisfying properties similar to the usual index; see \cite[\S 3.6]{repst}. This concept is equivalent to measure, with $\mu$ corresponding to $\lbb -:- \rbb$ if $\lbb U:V \rbb=\mu(U/V)$ for all $V \subset U$.
\item Suppose that $\mathcal{C}$ is a Fra\"iss\'e class with limit $\Omega$, and $G=\Aut(\Omega)$ is oligomorphic. A measure for $G$ is then equivalent to a rule assigning to each inclusion $X \subset Y$ in $\mathcal{C}$ a value in $k$, such that certain conditions hold; see \cite[\S 6]{repst}. (Actually, this only gives a measure for $G$ relative to a stabilizer class.) This shows that measures are essentially combinatorial in nature.
\end{enumerate}
We will not use any of the above perspectives in this paper. However, we will give a combinatorial interpretation for our measures that is similar in spirit to (c).
There is one more concept connected to measures that we will use. Define a ring $\Theta(G)$ as follows: start with the polynomial ring in variables $[X]$, where $X$ varies over isomorphism classes of finitary $\hat{G}$-sets, and impose relations corresponding to Definition~\ref{defn:measure}(b,c,d,e). There is a measure $\mu_{\rm univ}$ valued in $\Theta(G)$ given by $\mu_{\rm univ}(X)=[X]$. This measure is universal, in the sense that if $\mu$ is a measure valued in some ring $k$ then there is a unique ring homomorphism $\phi \colon \Theta(G) \to k$ such that $\mu(X)=\phi(\mu_{\rm univ}(X))$. A complete understanding of measures for $G$ essentially amounts to computing the ring $\Theta(G)$.
\begin{example}
Let $\mathfrak{S}$ be the infinite symmetric group, and let $\Omega=\{1,2,3,\ldots\}$ be its domain. Given a complex number $t$, there is a unique $\mathbf{C}$-valued measure $\mu_t$ for $\mathfrak{S}$ such that $\mu_t(\Omega)=t$. This measure satisfies $\mu_t(\Omega^{(n)})=\binom{t}{n}$, where $\Omega^{(n)}$ denotes the set of $n$-element subsets of $\Omega$. The ring $\Theta(\mathfrak{S})$ is the ring of integer-valued polynomials in a single variable. These statements are proven in \cite[\S 15]{repst}.
\end{example}
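For the reader's convenience (this is standard and added only as a reminder), $\binom{t}{n}$ here denotes the usual polynomial extension of the binomial coefficient,
\begin{displaymath}
\binom{t}{n} = \frac{t(t-1)\cdots(t-n+1)}{n!},
\end{displaymath}
which makes sense for any complex number $t$, so that $\mu_t(\Omega^{(n)})$ is a polynomial in $t$.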
\subsection{Minimal maps}
Let $G$ be an admissible group and let $\phi \colon X \to Y$ be a map of transitive $G$-sets. We say that $\phi$ is \defn{minimal} if it is not an isomorphism and does not factor non-trivially, i.e., given $\phi=\beta \circ \alpha$, where $\beta$ and $\alpha$ are maps of transitive $G$-sets, either $\alpha$ or $\beta$ is an isomorphism. The following two results show the significance of this notion.
\begin{proposition} \label{prop:min-subgp}
Given an open subgroup $V$ of $G$, there are only finitely many subgroups $U$ of $G$ containing $V$.
\end{proposition}
\begin{proof}
If $U$ contains $V$ then $U$ is a union of double cosets of $V$. Since $V \backslash G/V$ is finite, there are thus only finitely many possibilities for $U$.
\end{proof}
\begin{proposition}
Any map of transitive $G$-sets that is not an isomorphism can be factored into a sequence of minimal maps.
\end{proposition}
\begin{proof}
It suffices to show that if $V \subset U$ is a proper inclusion of open subgroups then the natural map $G/V \to G/U$ admits such a factorization. Choose a strict chain of open subgroups
\begin{displaymath}
V=W_0 \subset W_1 \subset \cdots \subset W_n=U
\end{displaymath}
where $W_i$ is minimal over $W_{i-1}$; such a chain exists by Proposition~\ref{prop:min-subgp}. The map $G/V \to G/U$ factors as the composition of the minimal maps $G/W_{i-1} \to G/W_i$, which completes the proof.
\end{proof}
\subsection{Another view on measures} \label{ss:E-meas}
We now give a slight reformulation of the definition of measure that will be more convenient in our particular case. For an admissible group $G$, let $\Omega(G)$ be its \defn{Burnside ring}. This is the free $\mathbf{Z}$-module on the set of isomorphism classes of transitive $G$-sets. For a transitive $G$-set $X$, we let $\lbb X \rbb$ denote its class in $\Omega(G)$. For a general finitary $G$-set $X$, we define $\lbb X \rbb=\sum_{i=1}^n \lbb Y_i \rbb$, where $Y_1, \ldots, Y_n$ are the $G$-orbits on $X$. As the name suggests, $\Omega(G)$ is a ring, via $\lbb X \rbb \cdot \lbb Y \rbb=\lbb X \times Y \rbb$.
Let $\mathscr{E}$ be a collection of open subgroups of $G$ that is stable under conjugation, and such that every open subgroup contains some member of $\mathscr{E}$. We introduce the following notion:
\begin{definition} \label{defn:E-meas}
An \defn{$\mathscr{E}$-measure} valued in a ring $k$ is a rule $\mu_{\bullet}$ assigning to each $U \in \mathscr{E}$ an additive map $\mu_U \colon \Omega(U) \to k$ satisfying the following axioms:
\begin{enumerate}
\item We have $\mu_U(\mathbf{1})=1$ for any $U \in \mathscr{E}$.
\item Given subgroups $V \subset U$ in $\mathscr{E}$ and a finitary $U$-set $X$, we have $\mu_U(X)=\mu_V(X)$.
\item Given $U \in \mathscr{E}$, a finitary $U$-set $X$ and $g \in G$, we have $\mu_U(X)=\mu_{U^g}(X^g)$, where the superscript denotes conjugation.
\item Given $U \in \mathscr{E}$ and a map $\pi \colon X \to Y$ of transitive $U$-sets, we have $\mu_U(X)=\mu_U(Y) \mu_V(F)$, where $F=\pi^{-1}(y)$ for some $y \in Y$, and $V \in \mathscr{E}$ stabilizes $y$.
\end{enumerate}
We have written $\mu_U(X)$ in place of $\mu_U(\lbb X \rbb)$ above. Let $\mathcal{M}^{\mathscr{E}}_G(k)$ denote the set of $\mathscr{E}$-measures for $G$ valued in $k$.
\end{definition}
The following is the main result we require on this concept.
\begin{proposition} \label{prop:meas-E-meas}
We have a natural isomorphism $\mathcal{M}_G(k) \to \mathcal{M}^{\mathscr{E}}_G(k)$.
\end{proposition}
\begin{proof}
Suppose that $\mu$ is a measure for $G$. For an open subgroup $U$, define $\mu_U \colon \Omega(U) \to k$ by $\mu_U(X)=\mu(X)$. It is clear that $\mu_{\bullet}$ is an $\mathscr{E}$-measure, and that $\mu$ can be recovered from $\mu_{\bullet}$. We thus have an injective map $\Phi \colon \mathcal{M}_G(k) \to \mathcal{M}^{\mathscr{E}}_G(k)$.
We now show that $\Phi$ is surjective, which will complete the proof. Let $\mu_{\bullet}$ be a given $\mathscr{E}$-measure. We define a measure $\mu$, as follows. Let $X$ be a finitary $\hat{G}$-set. Choose a group of definition $U$ for $X$ that belongs to $\mathscr{E}$, and put $\mu(X)=\mu_U(X)$. This is independent of the choice of $U$. Indeed, suppose $U'$ is a second group of definition belonging to $\mathscr{E}$. Then $U \cap U'$ is an open subgroup of $G$, and thus contains some $V \in \mathscr{E}$ by hypothesis. We then have $\mu_U(X)=\mu_V(X)=\mu_{U'}(X)$, since $\mu_{\bullet}$ is compatible with restriction. Clearly, $\mu_{\bullet}=\Phi(\mu)$, provided that $\mu$ is a measure, so it suffices to show this. One easily sees that $\mu$ satisfies axioms (a)--(d) of Definition~\ref{defn:measure}.
We now verify axiom (e). Thus let $\pi \colon X \to Y$ be a map of transitive $U$-sets, for some open subgroup $U$, and let $F=\pi^{-1}(y)$ for some point $y \in Y$. Let $V \subset U$ be an open subgroup contained in $\mathscr{E}$. Let $Y=\bigsqcup_{i=1}^n Y_i$ be the decomposition of $Y$ into orbits of $V$ and let $X_i=\pi^{-1}(Y_i)$. Let $y_i \in Y_i$ be any point, and let $F_i=\pi^{-1}(y_i)$ be the fiber over it. Let $W \subset V$ be an open subgroup in $\mathscr{E}$ fixing each $y_i$. We have
\begin{displaymath}
\mu(X_i)=\mu_V(X_i)=\mu_V(Y_i) \mu_W(F_i) = \mu(Y_i) \mu(F_i) = \mu(Y_i) \mu(F).
\end{displaymath}
In the first step, we used the definition of $\mu$; in the second step, we used Definition~\ref{defn:E-meas}(d); in the third step, we again used the definition of $\mu$; and in the final step, we used that $F_i$ is conjugate to $F$ by $U$ (since $U$ acts transitively on $Y$), and that $\mu$ is conjugation invariant. Summing the above equation over $i$, we find $\mu(X)=\mu(Y) \mu(F)$, as required.
\end{proof}
We note that since measures are multiplicative, i.e., $\mu(X \times Y)=\mu(X) \cdot \mu(Y)$, the above proposition shows that if $\mu_{\bullet}$ is an $\mathscr{E}$-measure then each map $\mu_U \colon \Omega(U) \to k$ is a ring homomorphism. Our next result can simplify the task of verifying Definition~\ref{defn:E-meas}(d).
\begin{proposition} \label{prop:minimal}
Suppose that $\mathscr{E}$ is downwards closed, meaning that if $U \in \mathscr{E}$ and $V \subset U$ is an open subgroup then $V \in \mathscr{E}$. For each $U \in \mathscr{E}$, let $\mu_U \colon \Omega(U) \to k$ be an additive map. Suppose $\mu_{\bullet}$ satisfies Definition~\ref{defn:E-meas}(a,b,c) as well as the following condition:
\begin{itemize}
\item[(d')] Given $U \in \mathscr{E}$ and a minimal map $\pi \colon X \to Y$ of transitive $U$-sets, we have $\mu_U(X)=\mu_U(Y) \mu_V(F)$, where $F=\pi^{-1}(y)$ for some $y \in Y$, and $V \in \mathscr{E}$ stabilizes $y$.
\end{itemize}
Then $\mu_{\bullet}$ also satisfies Definition~\ref{defn:E-meas}(d), and is thus an $\mathscr{E}$-measure.
\end{proposition}
\begin{proof}
Given open subgroups $V \subset U$ of $G$, define $\delta(V \subset U)$ to be the maximal $n$ for which there exists a chain $V=W_0 \subsetneq \cdots \subsetneq W_n=U$ of subgroups. This is well-defined (i.e., the maximum exists) by Proposition~\ref{prop:min-subgp}. We note that $\delta(V \subset U)=0$ if and only if $V=U$, and $\delta(V \subset U)=1$ if and only if $V \ne U$ but there is no subgroup strictly between $V$ and $U$.
To prove Definition~\ref{defn:E-meas}(d), it suffices to show
\begin{displaymath}
\mu_U(U/W)=\mu_U(U/V) \mu_V(V/W)
\end{displaymath}
whenever $W \subset V \subset U$ are subgroups in $\mathscr{E}$. We proceed by induction on $\delta(W \subset V)$. If $\delta(W \subset V)=0$ the statement is clear. Suppose now that $\delta(W \subset V)$ is positive, and let $W'$ be a minimal subgroup over $W$ contained in $V$, so that $\delta(W' \subset V)<\delta(W \subset V)$. Note that $U/W \to U/W'$ is a minimal map of transitive $U$-sets, and $V/W \to V/W'$ is a minimal map of transitive $V$-sets. We have
\begin{align*}
\mu_U(U/W)
&= \mu_U(U/W') \mu_{W'}(W'/W) \\
&= \mu_U(U/V) \mu_V(V/W') \mu_{W'}(W'/W) \\
&= \mu_U(U/V) \mu_V(V/W)
\end{align*}
where in the first step we used (d'), in the second step the inductive hypothesis, and in the third (d') again. The result follows.
\end{proof}
\section{The colored circle and its symmetries} \label{s:group}
\subsection{The circle}
Fix a countable set $\mathbf{S}$ with an everywhere dense cyclic order; for example, one can take the roots of unity in the complex unit circle. Let $\Sigma$ be a non-empty finite set and let $\sigma \colon \mathbf{S} \to \Sigma$ be a function such that $\sigma^{-1}(a)$ is dense for every $a \in \Sigma$. We regard $\sigma$ as a coloring of $\mathbf{S}$. Let $G$ be the automorphism group of $\mathbf{S}$, i.e., the group of all self-bijections preserving the cyclic ordering and the coloring.
\begin{proposition} \label{prop:S-homo}
We have the following:
\begin{enumerate}
\item $\mathbf{S}$ is a homogeneous structure: if $X$ and $Y$ are finite subsets of $\mathbf{S}$ and $i \colon X \to Y$ is an isomorphism (i.e., a bijection preserving the induced cyclic orders and colorings) then there exists $g \in G$ such that $i = g \vert_X$.
\item $\mathbf{S}$ is the Fra\"iss\'e limit of the class of finite sets equipped with a cyclic order and $\Sigma$-coloring; in particular, $\mathbf{S}$ is independent of the choice of $\sigma$, up to isomorphism.
\item The group $G$ is oligomorphic (with respect to its action on $\mathbf{S}$).
\end{enumerate}
\end{proposition}
\begin{proof}
(a) For this proof, a ``structure'' means a set equipped with a cyclic order and a $\Sigma$-coloring. Suppose that $X \to Y$ is an embedding of finite structures and we have an embedding $\alpha \colon X \to \mathbf{S}$. We claim that $\alpha$ extends to an embedding $\beta \colon Y \to \mathbf{S}$. By an inductive argument, it suffices to treat the case where $Y$ has one more element than $X$. Thus suppose $Y=X \sqcup \{y\}$. If $\# X \le 1$, the claim is clear, so suppose $\# X \ge 2$. Write $X=\{x_1, \ldots, x_n\}$, where $x_1<x_2<\cdots<x_n<x_1$, and let $i$ be such that $x_i<y<x_{i+1}$ (where $i+1$ is taken modulo $n$). Choose a point $z \in \mathbf{S}$ between $\alpha(x_i)$ and $\alpha(x_{i+1})$ having the same color as $y$. Now simply define $\beta(x_j)=\alpha(x_j)$ for all $j$, and $\beta(y)=z$. Then $\beta$ is an embedding of $Y$ extending $\alpha$.
It now follows from a standard back-and-forth argument that $\mathbf{S}$ is homogeneous. To be a bit more precise, the previous paragraph shows that $\mathbf{S}$ is ``f-injective'' in the terminology of \cite[\S A.4]{homoten}. By \cite[Proposition~A.7]{homoten}, any f-injective object is homogeneous.
(b) Since $\mathbf{S}$ is a countable homogeneous structure into which every finite structure embeds, it is the Fra\"iss\'e limit of the class of finite structures. (To see that every structure embeds, simply note that the empty structure does and so the general case follows from f-injectivity.) The Fra\"iss\'e limit is unique up to isomorphism, which yields the uniqueness statement.
(c) Let $\mathbf{S}^{(n)}$ denote the set of $n$-element subsets of $\mathbf{S}$. If $x$ and $y$ are two points in $\mathbf{S}^{(n)}$ that are isomorphic (with their induced structures) then the homogeneity of $\mathbf{S}$ shows that they belong to the same $G$-orbit. Since there are finitely many structures of cardinality $n$, it follows that $G$ has finitely many orbits on $\mathbf{S}^{(n)}$. Since this holds for all $n$, it follows that $G$ is oligomorphic.
\end{proof}
\subsection{The line}
Fix a point $\infty \in \mathbf{S}$, and let $\mathbf{L} = \mathbf{S} \setminus \{\infty\}$. Then $\mathbf{L}$ carries a total order and a $\Sigma$-coloring. As a totally ordered set, $\mathbf{L}$ is isomorphic to the set of rational numbers with its standard order. An argument similar to the above shows that $\mathbf{L}$ is homogeneous, and the Fra\"iss\'e limit of the class of finite sets equipped with a total order and $\Sigma$-coloring. We let $H$ be the automorphism group of $\mathbf{L}$, which is also oligomorphic. Note that $H$ is simply the stabilizer of $\infty$ in $G$, and for a finite subset $A$ of $\mathbf{L}$, we have $H(A)=G(A \cup \{\infty\})$. (Here $H(A)$ denotes the subgroup of $H$ fixing each element of $A$.)
\subsection{Intervals}
Given $x \ne y$ in $\mathbf{S}$, consider the set $I=(x,y)$ of all points $z \in \mathbf{S}$ satisfying $x<z<y$. We refer to sets of the form $(x,y)$ as \defn{proper intervals}. We refer to $x$ and $y$ as the left and right endpoints of $I$. If $J=(x',y')$ is a second proper interval then $J=gI$ for some $g \in G$ if and only if $\sigma(x)=\sigma(x')$ and $\sigma(y)=\sigma(y')$. An \defn{improper interval} is one of the form $\mathbf{S} \setminus \{x\}$ for $x \in \mathbf{S}$.
Let $I$ be an interval (proper or improper). Then $I$ carries a total order and a $\Sigma$-coloring, and is easily seen to be the Fra\"iss\'e limit of the class of finite sets with a total order and $\Sigma$-coloring. Thus $I$ is abstractly isomorphic to $\mathbf{L}$. We let $H_I$ be the automorphism group of $I$. The pair $(H_I, I)$ is isomorphic to $(H, \mathbf{L})$; in particular, $H_I$ is oligomorphic.
Suppose that $A$ is a finite non-empty subset of $\mathbf{S}$, and write $\mathbf{S} \setminus A = I_1 \sqcup \cdots \sqcup I_r$, where $I_1, \ldots, I_r$ are intervals. Whenever we write such a decomposition, we assume that the indexing of the intervals is compatible with the natural cyclic order on them, i.e., $I_i$ is between $I_{i-1}$ and $I_{i+1}$ (where the indices are taken modulo $r$). An element of $G(A)$ preserves each $I_i$, and so there is a natural map
\begin{displaymath}
G(A) \to H_{I_1} \times \cdots \times H_{I_r}.
\end{displaymath}
One readily sees that this map is an isomorphism.
\subsection{Open subgroups}
We now classify the open subgroups of $H$.
\begin{proposition} \label{prop:subgp}
Every open subgroup of $H$ has the form $H(A)$ for some finite subset $A \subset \mathbf{L}$.
\end{proposition}
\begin{proof}
This is proved for $\# \Sigma=1$ in \cite[Proposition~17.1]{repst}. The general case follows from a similar argument.
\end{proof}
\begin{corollary} \label{cor:subgp}
Let $A$ be a non-empty finite subset of $\mathbf{S}$. Then every open subgroup of $G(A)$ has the form $G(B)$ for some finite subset $B$ of $\mathbf{S}$ containing $A$.
\end{corollary}
\begin{proof}
Since the choice of $\infty$ is arbitrary, we may as well assume it belongs to $A$. Thus $G(A) \subset H$, and the result now follows from the proposition.
\end{proof}
\begin{remark}
The $G(A)$'s do not account for all open subgroups of $G$. Let $G[A]$ denote the subgroup of $G$ that maps $A$ to itself (as a set). This can be larger than $G(A)$; for instance, if every point in $A$ has the same color, then the points of $A$ can be cyclically permuted, and $G[A] = \mathbf{Z}/n \ltimes G(A)$ where $n=\# A$. One can show that $G[A]$ is the normalizer of $G(A)$, and that $G[A]/G(A)$ is a finite cyclic group. One can furthermore show that every open subgroup of $G$ sits between $G(A)$ and $G[A]$ for some $A$. We will not need this result, however.
\end{remark}
\subsection{Actions}
Let $\Sigma^{\star}$ denote the set of all words in the alphabet $\Sigma$. Given a word $w=w_1 \cdots w_n$ in $\Sigma^{\star}$ and an interval $I$, we let $I^w$ denote the subset of $I^n$ consisting of those tuples $(x_1, \ldots, x_n)$ such that $x_1<\cdots<x_n$ and $\sigma(x_i)=w_i$. The group $H_I$ clearly acts on $I^w$, and this action is transitive by the homogeneity of $I$.
\begin{proposition} \label{prop:Ghat-sets}
Let $A$ be a non-empty finite subset of $\mathbf{S}$, and write $\mathbf{S} \setminus A = I_1 \sqcup \cdots \sqcup I_r$. Then every transitive $G(A)$-set is isomorphic to $I_1^{w_1} \times \cdots \times I_r^{w_r}$ for some $w_1, \ldots, w_r \in \Sigma^{\star}$.
\end{proposition}
\begin{proof}
Let $X$ be a transitive $G(A)$-set. Then $X$ is isomorphic to $G(A)/U$ for some open subgroup $U$ of $G(A)$. By Corollary~\ref{cor:subgp}, we have $U=G(B)$ for some finite subset $B$ of $\mathbf{S}$ containing $A$. Let $B_i = B \cap I_i$. Writing $B_i=\{x_{i,1}<\cdots<x_{i,n(i)}\}$, let $w_i=\sigma(x_{i,1}) \cdots \sigma(x_{i,n(i)})$. Under the isomorphism $G(A) = H_{I_1} \times \cdots \times H_{I_r}$, we have $G(B) = H_{I_1}(B_1) \times \cdots \times H_{I_r}(B_r)$. As $H_{I_i}$ acts transitively on $I_i^{w_i}$ with stabilizer $H_{I_i}(B_i)$, it follows that $H_{I_i}/H_{I_i}(B_i) \cong I_i^{w_i}$. Thus $X \cong I_1^{w_1} \times \cdots \times I_r^{w_r}$, as required.
\end{proof}
\begin{proposition} \label{prop:minimal-class}
Let $A$ be a non-empty finite subset of $\mathbf{S}$, and write $\mathbf{S} \setminus A = I_1 \sqcup \cdots \sqcup I_r$.
\begin{enumerate}
\item Let $w,w',w_2, \ldots, w_r \in \Sigma^{\star}$ and let $c \in \Sigma$. Let $\pi_1 \colon I_1^{wcw'} \to I_1^{ww'}$ be the map that projects away from the $c$ coordinate, and let
\begin{displaymath}
\pi \colon I_1^{wcw'} \times I_2^{w_2} \times \cdots \times I_r^{w_r} \to I_1^{ww'} \times I_2^{w_2} \times \cdots \times I_r^{w_r}
\end{displaymath}
be the map that is $\pi_1$ on the first factor and the identity on other factors. Then $\pi$ is a minimal map of transitive $G(A)$-sets.
\item The fiber of $\pi$ over any point is isomorphic to $J^c$, where $J$ is a subinterval of $I_1$ defined as follows. If $w$ is non-empty, let $y$ be a point in $I_1$ whose color is the final letter of $w$; otherwise, let $y$ be the left endpoint of $I_1$. If $w'$ is non-empty, let $y'$ be a point in $I_1$ with $y<y'$ whose color is the first letter of $w'$; otherwise, let $y'$ be the right endpoint of $I_1$. Then $J=(y,y')$.
\item Any minimal map of transitive $G(A)$-sets is isomorphic to one as in (a), after possibly re-indexing the intervals.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $A \subset B \subset C$ be finite subsets of $\mathbf{S}$ with $\# C=\# B + 1$. Then $G(B)$ is a minimal subgroup over $G(C)$, and so the natural map $G(A)/G(C) \to G(A)/G(B)$ is a minimal map of transitive $G(A)$-sets. Looking at the identifications in the proof of Proposition~\ref{prop:Ghat-sets}, we see that this map has the form stated in (a). Every minimal map has this form by the classification of open subgroups of $G(A)$.
We now explain statement (b). First note that the fiber of $\pi$ is isomorphic to the fiber of $\pi_1$, so we just consider this. Suppose $\ell(w)=n$ and $\ell(w')=m$. Let $p=(x_1, \ldots, x_n, z_1, \ldots, z_m)$ be a point in $I_1^{ww'}$. Let $J$ be the interval $(x_n, z_1)$, where $x_n$ is taken to be the left endpoint of $I_1$ if $w$ is empty, and $z_1$ is taken to be the right endpoint of $I_1$ if $w'$ is empty. Then $\pi_1^{-1}(p)=J^c$, and so (b) follows.
\end{proof}
\section{Classification of measures} \label{s:class}
\subsection{Combinatorial reformulation of measures} \label{ss:Sigma-meas}
In \S \ref{s:class}, we prove Theorem~\ref{mainthm} following the plan in \S \ref{ss:plan}. As a first step, we introduce $\Sigma$-measures and connect them to measures. For a word $w=w_1 \cdots w_n$ in $\Sigma^{\star}$, we let $w[i,j]$ denote the subword $w_i \cdots w_j$. We use parentheses to omit endpoints, e.g., $w[i,j)=w_i \cdots w_{j-1}$.
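As a small aid for readers who want to experiment with these conventions, the following sketch (illustrative only; the function names are ours) implements the 1-indexed subword notation:
\begin{verbatim}
# Subword conventions: words are 1-indexed, w[i,j] = w_i ... w_j,
# and a parenthesis omits the corresponding endpoint.
def closed_closed(w, i, j):    # w[i, j]
    return w[i - 1:j]

def open_closed(w, i, j):      # w(i, j]
    return w[i:j]

def closed_open(w, i, j):      # w[i, j)
    return w[i - 1:j - 1]

w = "abcab"
print(closed_closed(w, 1, 3))  # 'abc' = w[1,3]
print(open_closed(w, 2, 5))    # 'cab' = w(2,5]
print(closed_open(w, 2, 4))    # 'bc'  = w[2,4)
\end{verbatim}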
\begin{definition} \label{defn:Sigma-meas}
A \defn{$\Sigma$-measure} with values in $k$ is a rule $\nu$ assigning to each $a,b \in \Sigma$ and $w \in \Sigma^{\star}$ a quantity $\nu_{a,b}(w)$ in $k$ such that the following axioms hold:
\begin{enumerate}
\item $\nu_{a,b}(w)=1$ if $w$ is the empty word.
\item Let $w$ and $w'$ be words, put $r=\ell(w)$, and let $a,b,c \in \Sigma$. Then
\begin{displaymath}
\nu_{a,b}(w c w')=\nu_{a,b}(ww') \nu_{w_r,w'_1}(c).
\end{displaymath}
Here, we use the convention that $w_r=a$ if $w$ is empty, and $w'_1=b$ if $w'$ is empty.
\item Let $w$ be a word of length $n$, and let $a,b,c \in \Sigma$. Then
\begin{displaymath}
\nu_{a,b}(w) = \sum_{i=0}^n \nu_{a,c}(w[1,i]) \nu_{c,b}(w(i,n]) + \sum_{w_i=c} \nu_{a,c}(w[1,i)) \nu_{c,b}(w(i,n]).
\end{displaymath}
\end{enumerate}
We let $\mathcal{M}_{\Sigma}(k)$ denote the set of such measures.
\end{definition}
\begin{proposition} \label{prop:Sigma-meas}
We have a natural bijection $\mathcal{M}_G(k) \cong \mathcal{M}_{\Sigma}(k)$. Under this bijection, a measure $\mu$ for $G$ corresponds to a $\Sigma$-measure $\nu$ if and only if $\mu(I^w)=\nu_{a,b}(w)$ whenever $I$ is a proper interval with endpoints of color $a$ and $b$, and $w \in \Sigma^{\star}$.
\end{proposition}
The proof of the proposition will take the remainder of \S \ref{ss:Sigma-meas}. Define $R$ to be the commutative ring generated by symbols $x_{a,b}(w)$, where $a,b \in \Sigma$ and $w \in \Sigma^{\star}$, modulo the following relations:
\begin{enumerate}
\item $x_{a,b}(w)=1$ if $w$ is empty.
\item Let $w$ and $w'$ be words, put $r=\ell(w)$, and let $a,b,c \in \Sigma$. Then
\begin{displaymath}
x_{a,b}(w c w')=x_{a,b}(ww') x_{w_r,w'_1}(c),
\end{displaymath}
where we use conventions as in Definition~\ref{defn:Sigma-meas}.
\item Let $w$ be a word of length $n$, and let $a,b,c \in \Sigma$. Then
\begin{displaymath}
x_{a,b}(w) = \sum_{i=0}^n x_{a,c}(w[1,i]) x_{c,b}(w(i,n]) + \sum_{w_i=c} x_{a,c}(w[1,i)) x_{c,b}(w(i,n]).
\end{displaymath}
\end{enumerate}
Thus a $\Sigma$-measure is a homomorphism $R \to k$. With this language, we can reformulate Proposition~\ref{prop:Sigma-meas} as follows:
\begin{proposition} \label{prop:Sigma-meas-2}
There exists a ring isomorphism $\phi \colon \Theta(G) \to R$ satisfying $\phi([I^w])=x_{a,b}(w)$, where $I$ is an arbitrary proper interval, and $a$ and $b$ are the colors of the left and right endpoints of $I$.
\end{proposition}
We note that the classes $[I^w]$ generate $\Theta(G)$ by Proposition~\ref{prop:Ghat-sets}, so there is at most one ring isomorphism as in the proposition.
Let $\mathscr{E}$ be the set of subgroups of $G$ of the form $G(A)$ where $\# A \ge 2$. This satisfies the conditions of \S \ref{ss:E-meas}. We construct an $R$-valued $\mathscr{E}$-measure $\phi_{\bullet}$ for $G$. Let $A$ be a finite subset of $\mathbf{S}$ of cardinality at least~2, and write $\mathbf{S} \setminus A = I_1 \sqcup \cdots \sqcup I_r$. Recall that every transitive $G(A)$ set is isomorphic to one of the form $I_1^{w_1} \times \cdots \times I_r^{w_r}$ with $w_1, \ldots, w_r \in \Sigma^{\star}$ (Proposition~\ref{prop:Ghat-sets}). We define
\begin{displaymath}
\phi_{G(A)} \colon \Omega(G(A)) \to R
\end{displaymath}
to be the unique additive map satisfying
\begin{displaymath}
\phi_{G(A)}(I_1^{w_1} \times \cdots \times I_r^{w_r}) = x_{a_1,b_1}(w_1) \cdots x_{a_r,b_r}(w_r),
\end{displaymath}
where $a_i$ and $b_i$ are the colors of the left and right endpoints of $I_i$. We now verify that the system $\phi_{\bullet}$ is indeed an $\mathscr{E}$-measure. Conditions (a) and (c) of Definition~\ref{defn:E-meas} are clear.
\begin{lemma} \label{lem:Sigma-meas-1}
Let $I$ be an interval and let $z \in I$. Write $I=J \sqcup \{z\} \sqcup K$ for intervals $J$ and $K$, and let $c=\sigma(z)$. Then for a word $w \in \Sigma^{\star}$ of length $n$, we have a natural bijection
\begin{displaymath}
I^w = \big( \coprod_{i=0}^n J^{w[1,i]} \times K^{w(i,n]} \big) \amalg \big( \coprod_{w_i=c} J^{w[1,i)} \times K^{w(i,n]} \big)
\end{displaymath}
that is equivariant for the action of $H_I(z) = H_J \times H_K$.
\end{lemma}
\begin{proof}
Recall that $I^w$ consists of tuples $x=(x_1<\cdots<x_n)$ in $I^n$ such that $\sigma(x_i)=w_i$. Let $X$ be the subset of $I^w$ consisting of points $x$ such that no $x_i$ is equal to $z$, and let $Y$ be the complement. We have a decomposition $X=X_0 \sqcup \cdots \sqcup X_n$, where $X_i$ is the subset of $X$ consisting of points $x$ such that $x_i<z<x_{i+1}$ (and where we ignore conditions involving $x_0$ or $x_{n+1}$), and an isomorphism $X_i \cong J^{w[1,i]} \times K^{w(i,n]}$. We also have a decomposition $Y=\bigsqcup_{w_i=c} Y_i$, where $Y_i$ consists of points $x$ such that $x_i=z$, and an isomorphism $Y_i \cong J^{w[1,i)} \times K^{w(i,n]}$. This completes the proof.
\end{proof}
\begin{lemma}
The system $\phi_{\bullet}$ satisfies Definition~\ref{defn:E-meas}(b).
\end{lemma}
\begin{proof}
It suffices to treat the case where $V$ is a maximal subgroup of $U$. We can thus assume $V=G(B)$ and $U=G(A)$ where $B=A \cup \{z\}$ and $z$ is some element of $\mathbf{S} \setminus A$; let $c=\sigma(z)$ be the color of $z$. Write $\mathbf{S} \setminus A = I_1 \sqcup \cdots \sqcup I_r$ as above; cyclically rotating the labels, if necessary, we assume that $z \in I_1$. Write $I_1=J \sqcup \{z\} \sqcup K$. Let $X=I_1^{w_1} \times \cdots \times I_r^{w_r}$ be a transitive $G(A)$-set. Decomposing $I_1^{w_1}$ by Lemma~\ref{lem:Sigma-meas-1}, we find that $\phi_V(X)$ is equal to
\begin{displaymath}
\big( \sum_{i=0}^n x_{a_1,c}(w_1[1,i]) \, x_{c,b_1}(w_1(i,n]) + \sum_{(w_1)_i=c} x_{a_1,c}(w_1[1,i)) \, x_{c,b_1}(w_1(i,n]) \big) \cdot x_{a_2,b_2}(w_2) \cdots x_{a_r,b_r}(w_r),
\end{displaymath}
where $n=\ell(w_1)$. By the defining relation (c) of $R$, the first factor is equal to $x_{a_1,b_1}(w_1)$, and so the whole expression is equal to $\phi_U(X)$. Thus $\phi_V(X)=\phi_U(X)$, as required.
\end{proof}
\begin{lemma}
The system $\phi_{\bullet}$ satisfies Definition~\ref{defn:E-meas}(d).
\end{lemma}
\begin{proof}
It follows from the classification of open subgroups of $H$ (Proposition~\ref{prop:subgp}) that $\mathscr{E}$ satisfies the condition of Proposition~\ref{prop:minimal}. Thus, by that proposition, it is enough to verify Proposition~\ref{prop:minimal}(d'). Let $A$ be a finite subset of $\mathbf{S}$ of cardinality at least~2, and write $\mathbf{S} \setminus A = I_1 \sqcup \cdots \sqcup I_r$. Let $a_i$ and $b_i$ be the colors of the endpoints of $I_i$. Let $\pi \colon X \to Y$ be a minimal map of transitive $G(A)$-sets with fiber $F$. By Proposition~\ref{prop:minimal-class}, after possibly reindexing, $\pi$ is isomorphic to
\begin{displaymath}
\pi_1 \times \mathrm{id} \times \cdots \times \mathrm{id} \colon I_1^{wcw'} \times I_2^{w_2} \times \cdots \times I_r^{w_r} \to I_1^{ww'} \times I_2^{w_2} \times \cdots \times I_r^{w_r},
\end{displaymath}
where $\pi_1$ projects away from the $c$ coordinate. Additionally, $F$ is isomorphic to $J^c$, where $J$ is an interval with endpoints of colors $w_r$ and $w'_1$, with $r=\ell(w)$; here we use the convention that $w_r=a_1$ if $w=\emptyset$, and $w'_1=b_1$ if $w'=\emptyset$. The equation $\phi(X)=\phi(Y) \phi(F)$ thus becomes
\begin{displaymath}
x_{a_1,b_1}(wcw') x_{a_2,b_2}(w_2) \cdots x_{a_r,b_r}(w_r) = x_{a_1,b_1}(ww') x_{w_r,w_1'}(c) x_{a_2,b_2}(w_2) \cdots x_{a_r,b_r}(w_r),
\end{displaymath}
which does indeed hold in $R$: this is just the defining relation (b) of $R$, multiplied on each side by the same quantity.
\end{proof}
We have thus verified that the system $\phi_{\bullet}$ is an $\mathscr{E}$-measure. By Proposition~\ref{prop:meas-E-meas}, $\phi_{\bullet}$ corresponds to a measure for $G$ valued in $R$, i.e., a ring homomorphism $\phi \colon \Theta(G) \to R$. This homomorphism clearly satisfies $\phi([I^w])=x_{a,b}(w)$, where $a$ and $b$ are the colors of the endpoints of $I$. The following lemma completes the proof of the proposition.
\begin{lemma}
The map $\phi \colon \Theta(G) \to R$ is an isomorphism.
\end{lemma}
\begin{proof}
Let $\tilde{R}$ be the polynomial ring in the symbols $x_{a,b}(w)$. Define a ring homomorphism $\tilde{\psi} \colon \tilde{R} \to \Theta(G)$ by $\tilde{\psi}(x_{a,b}(w))=[I^w]$, where $I$ is any proper interval with endpoints of colors $a$ and $b$. This is well-defined since if $J$ is a second such interval then $I$ and $J$ are conjugate by an element of $G$, and so $[I^w]=[J^w]$ in $\Theta(G)$. By computations similar to the ones carried out above, we see that $\tilde{\psi}$ kills the defining relations of $R$, and thus induces a ring homomorphism $\psi \colon R \to \Theta(G)$. This is clearly inverse to $\phi$, and so the proof is complete.
\end{proof}
\subsection{Measures and symbols}
We just proved that measures for $G$ are equivalent to $\Sigma$-measures. This is a significant step forward since $\Sigma$-measures are purely combinatorial objects. However, they are still rather complicated: $\Sigma$-measures have infinitely many parameters and defining equations. We now introduce $\Sigma$-symbols, which have finitely many parameters and defining equations, and connect them to $\Sigma$-measures.
\begin{definition} \label{defn:symbol}
\begin{subequations}
A \defn{$\Sigma$-symbol} with values in $k$ is a function
\begin{displaymath}
\eta \colon \Sigma^3 \to k, \qquad (a, b, c) \mapsto \eta_{a,b}(c)
\end{displaymath}
satisfying the following two conditions, for all $a,b,c,d \in \Sigma$:
\begin{align}
\eta_{a,b}(c) \eta_{c,b}(d) &= \eta_{a,b}(d) \eta_{a,d}(c) \label{eq:symb-1} \\
\eta_{a,b}(c) &= \eta_{a,d}(c)+\eta_{d,b}(c)+\delta_{c,d} \label{eq:symb-2}
\end{align}
We let $\mathcal{S}_{\Sigma}(k)$ denote the set of all $\Sigma$-symbols.
\end{subequations}
\end{definition}
\begin{proposition}
We have a natural bijection $\mathcal{M}_{\Sigma}(k) \to \mathcal{S}_{\Sigma}(k)$ given by restricting measures to words of length~1. In other words, if $\nu$ is a $\Sigma$-measure then $(a,b,c) \mapsto \nu_{a,b}(c)$ is a $\Sigma$-symbol, and this construction is bijective.
\end{proposition}
We break the proof into several lemmas.
\begin{lemma}
Let $\nu$ be a $\Sigma$-measure and define $\eta \colon \Sigma^3 \to k$ by $\eta_{a,b}(c) = \nu_{a,b}(c)$. Then $\eta$ is a $\Sigma$-symbol, and $\nu$ can be recovered from $\eta$.
\end{lemma}
\begin{proof}
Making the substitution $(w,c,w') \to (c,d,\emptyset)$ in Definition~\ref{defn:Sigma-meas}(b), we find
\begin{displaymath}
\nu_{a,b}(c d) = \nu_{a,b}(c) \nu_{c,b}(d).
\end{displaymath}
Making the substitution $(w,c,w') \to (\emptyset,c,d)$ in the same axiom, we find
\begin{displaymath}
\nu_{a,b}(c d) = \nu_{a,b}(d) \nu_{a,d}(c).
\end{displaymath}
This gives \eqref{eq:symb-1}. Making the substitution $(w,c) \to (c,d)$ in Definition~\ref{defn:Sigma-meas}(c) gives \eqref{eq:symb-2}. Thus $\eta$ is a $\Sigma$-symbol. Applying Definition~\ref{defn:Sigma-meas}(b) iteratively, we see that $\nu$ is determined by its values on length~1 words. These values are recorded by $\eta$, and so $\nu$ can be recovered from $\eta$.
\end{proof}
The above lemma provides us with an injective function $\Phi \colon \mathcal{M}_{\Sigma}(k) \to \mathcal{S}_{\Sigma}(k)$. To complete the proof of the proposition, we must show that $\Phi$ is surjective. Let a $\Sigma$-symbol $\eta$ be given. We recursively define $\nu$ by $\nu_{a,b}(\emptyset)=1$ and
\begin{displaymath}
\nu_{a,b}(w_1 \cdots w_n)=\eta_{a,b}(w_n) \nu_{a,w_n}(w_1\cdots w_{n-1})
\end{displaymath}
for $n \ge 1$. We clearly have $\Phi(\nu)=\eta$, provided that $\nu$ is a $\Sigma$-measure. It thus suffices to show this, which we do in the next two lemmas.
\begin{lemma}
$\nu$ satisfies Definition~\ref{defn:Sigma-meas}(b).
\end{lemma}
\begin{proof}
We must show
\begin{displaymath}
\nu_{a,b}(w c w') = \nu_{a,b}(ww') \nu_{w_r,w'_1}(c)
\end{displaymath}
for all $a,b,c \in \Sigma$ and $w,w' \in \Sigma^{\star}$, where $r=\ell(w)$; recall the convention that $w_r=a$ if $w=\emptyset$ and $w'_1=b$ if $w'=\emptyset$. We proceed by induction on the length $n$ of the word $wcw'$. The base case ($n=1$) is trivial, and the $n=2$ case follows directly from \eqref{eq:symb-1}. Suppose now that $n \ge 3$ and the identity holds in length $n-1$.
First suppose that $w'$ is non-empty, and let $s=\ell(w')$. We have
\begin{align*}
\nu_{a,b}(wc w')
&= \eta_{a,b}(w'_s) \nu_{a,w'_s} (w c w'_1 \cdots w'_{s-1}) \\
&= \eta_{a,b}(w'_s) \nu_{a,w'_s}(w w'_1 \cdots w'_{s-1}) \nu_{w_r,w'_1}(c) \\
&= \nu_{a,b}(ww') \nu_{w_r,w'_1}(c).
\end{align*}
In the first and third steps, we used the definition of $\nu$, while in the second we used the inductive hypothesis.
Now suppose that $w'$ is empty. Since $n \ge 3$, we have $r \ge 2$. We have
\begin{align*}
\nu_{a,b}(w c)
&= \nu_{a,b}(w_2 \cdots w_r c) \nu_{a,w_2}(w_1) \\
&= \nu_{a,b}(w_2 \cdots w_r) \nu_{w_r,b}(c) \nu_{a,w_2}(w_1) \\
&= \nu_{a,b}(w) \nu_{w_r,b}(c).
\end{align*}
In the first step, we applied the case of the previous paragraph, with the triple $(w, c, w')$ there taken to be $(\emptyset, w_1, w_2 \cdots w_r c)$. In the final two steps, we used the inductive hypothesis. This completes the proof.
\end{proof}
\begin{lemma}
$\nu$ satisfies Definition~\ref{defn:Sigma-meas}(c).
\end{lemma}
\begin{proof}
For $a,b,c \in \Sigma$ and $w \in \Sigma^{\star}$, with $n=\ell(w)$, put
\begin{displaymath}
X_{a,b}^c(w) = \sum_{i=0}^n \nu_{a,c}(w[1,i]) \nu_{c,b}(w(i,n]) + \sum_{w_i=c} \nu_{a,c}(w[1,i)) \nu_{c,b}(w(i,n]).
\end{displaymath}
We must prove $X_{a,b}^c(w) = \nu_{a,b}(w)$. We proceed by induction on $n$. The case $n=0$ is clear. Thus suppose $n \ge 1$ and the identity holds for smaller $n$.
In the definition of $X_{a,b}^c(w)$, break off the $i=n$ terms from each sum, and then apply the recursive definition of $\nu$ to the second factors in the sum. This yields
\begin{align*}
X_{a,b}^c(w) =& \nu_{a,c}(w) + \eta_{c,b}(w_n) \sum_{i=0}^{n-1} \nu_{a,c}(w[1,i]) \nu_{c,w_n}(w(i,n)) \\ &+ \delta_{w_n,c} \nu_{a,c}(w[1,n)) + \eta_{c,b}(w_n) \sum_{1 \le i \le n-1,w_i=c} \nu_{a,c}(w[1,i)) \nu_{c,w_n}(w(i,n)),
\end{align*}
and so
\begin{displaymath}
X_{a,b}^c(w) = \nu_{a,c}(w) + \delta_{w_n,c} \nu_{a,c}(w[1,n)) + \eta_{c,b}(w_n) X^c_{a,w_n}(w[1,n))
\end{displaymath}
The final $X$ factor on the right is equal to $\nu_{a,w_n}(w[1,n))$ by the inductive hypothesis. Applying the definition of $\nu$ to the first term above, and replacing $c$ with $w_n$ in the second term (which is valid due to the Kronecker delta), we thus find
\begin{align*}
X_{a,b}^c(w)
&= \eta_{a,c}(w_n) \nu_{a,w_n}(w[1,n)) + \delta_{w_n,c} \nu_{a,w_n}(w[1,n)) + \eta_{c,b}(w_n) \nu_{a,w_n}(w[1,n)) \\
&= (\eta_{a,c}(w_n) + \delta_{w_n,c} + \eta_{c,b}(w_n)) \nu_{a,w_n}(w[1,n)) \\
&= \eta_{a,b}(w_n) \nu_{a,w_n}(w[1,n)) = \nu_{a,b}(w).
\end{align*}
In the penultimate step, we applied \eqref{eq:symb-2}, and in the final step the definition of $\nu$. This completes the proof.
\end{proof}
\subsection{Symbols and bisection structures}
We now relate $\Sigma$-symbols to the oriented bisection structures introduced in \S \ref{ss:orbi}. We use somewhat different conventions here, though. We will take our structures valued in $\{0,1\}$ instead of $\{\pm\}$. We will also extend them by zero to the diagonal. Thus an oriented bisection structure is a function
\begin{displaymath}
S \colon \Sigma \times \Sigma \to \{0,1\}
\end{displaymath}
satisfying condition $(\ast)$ of \S \ref{ss:orbi}, and $S_{a,a}=0$ for all $a \in \Sigma$; here, and in what follows, we write $S_{a,b}$ for the value of $S$ at $(a,b)$.
\begin{proposition} \label{prop:digraph}
Let $S$ be an oriented bisection structure on $\Sigma$. Define $\eta^S \colon \Sigma^3 \to k$ by
\begin{displaymath}
\eta^S_{a,b}(c) = S_{c,a} - S_{c,b} - \delta_{b,c}
\end{displaymath}
Then $\eta^S$ is a $\Sigma$-symbol. If the ring $k$ is connected then every $\Sigma$-symbol $\eta$ has the form $\eta^S$ for a unique $S$.
\end{proposition}
Recall that $k$ is \defn{connected} if it has exactly two idempotents, namely 0 and~1; in particular, this means $1 \ne 0$ in $k$. We break the proof into several lemmas. In the first two, $S$ denotes an oriented bisection structure on $\Sigma$.
\begin{lemma}
$\eta^S$ satisfies \eqref{eq:symb-1}.
\end{lemma}
\begin{proof}
Let $a,b,c,d \in \Sigma$. We show
\begin{displaymath}
\eta^S_{a,b}(c) \eta^S_{c,b}(d) = \eta^S_{a,b}(d) \eta^S_{a,d}(c)
\end{displaymath}
There are three Kronecker $\delta$'s appearing in the above equation, namely, $\delta_{c,d}$, $\delta_{c,b}$, and $\delta_{d,b}$. We proceed in cases to handle the possible values of these.
\textit{Case 1: $b$, $c$, and $d$ are distinct.} The identity is
\begin{displaymath}
(S_{c,a}-S_{c,b})(S_{d,c}-S_{d,b}) = (S_{d,a}-S_{d,b})(S_{c,a}-S_{c,d}).
\end{displaymath}
By $(\ast)$, we have $(S_{c,d}-S_{c,b})(S_{d,c}-S_{d,b})=0$, and so
\begin{displaymath}
(S_{c,a}-S_{c,b})(S_{d,c}-S_{d,b}) = (S_{c,a}-S_{c,d})(S_{d,c}-S_{d,b}).
\end{displaymath}
Similarly, we have
\begin{displaymath}
(S_{d,a}-S_{d,b})(S_{c,a}-S_{c,d}) = (S_{d,c}-S_{d,b})(S_{c,a}-S_{c,d}).
\end{displaymath}
We have thus established the identity.
\textit{Case 2: $c=d=b$.} The identity becomes
\begin{displaymath}
(-1)(S_{c,a}-1) = (S_{c,a}-1)^2,
\end{displaymath}
which is true since $S_{c,a}-1$ is either 0 or $-1$.
\textit{Case 3: $c=d$ and $c \ne b$.} The identity becomes
\begin{displaymath}
(S_{c,a}-S_{c,b})(-S_{c,b}) = (S_{c,a}-S_{c,b})(S_{c,a}-1)
\end{displaymath}
This is equivalent to
\begin{displaymath}
(S_{c,a}-S_{c,b})(S_{c,a}+S_{c,b}-1) = 0.
\end{displaymath}
If $S_{c,a}$ and $S_{c,b}$ coincide then the first factor vanishes; otherwise, one is~0 and one is~1, and so their sum is~1 and the second factor vanishes.
\textit{Case 4: $c=b$ and $c \ne d$.} Then $\eta^S_{c,b}(d)=0$, and the identity becomes
\begin{displaymath}
(S_{d,a}-S_{d,c})(S_{c,a}-S_{c,d})=0.
\end{displaymath}
This follows from $(\ast)$.
\textit{Case 5: $d=b$ and $c \ne d$.} The identity becomes
\begin{displaymath}
(S_{c,a}-S_{c,d})(S_{d,c}-1) = (S_{d,a}-1)(S_{c,a}-S_{c,d}),
\end{displaymath}
which is equivalent to
\begin{displaymath}
(S_{c,a}-S_{c,d})(S_{d,c}-S_{d,a})=0.
\end{displaymath}
This follows from $(\ast)$.
\end{proof}
\begin{lemma}
$\eta^S$ satisfies \eqref{eq:symb-2}.
\end{lemma}
\begin{proof}
Let $a,b,c,d \in \Sigma$. We have
\begin{align*}
& \eta^S_{a,d}(c)+\eta^S_{d,b}(c)+\delta_{c,d} \\
=& (S_{c,a} - S_{c,d} - \delta_{c,d}) + (S_{c,d}-S_{c,b} - \delta_{c,b}) + \delta_{c,d} \\
=& S_{c,a} - S_{c,b} - \delta_{c,b} = \eta^S_{a,b}(c).
\end{align*}
This completes the proof.
\end{proof}
\begin{lemma} \label{lem:digraph-3}
Let $\eta$ be a $\Sigma$-symbol and let $a,b,c \in \Sigma$.
\begin{enumerate}
\item We have $\eta_{a,a}(c)=-\delta_{a,c}$.
\item We have $\eta_{a,b}(c) = \eta_{c,b}(c)-\eta_{c,a}(c)-\delta_{c,a}$.
\item If $a \ne b$ then $\eta_{a,b}(a)^2 = -\eta_{a,b}(a)$.
\item If $a \ne b$ then $\eta_{c,b}(a) \eta_{c,a}(b)=0$.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) By \eqref{eq:symb-2}, we have $\eta_{a,a}(c)=2\eta_{a,a}(c)+\delta_{a,c}$.
(b) By \eqref{eq:symb-2}, we have $\eta_{c,b}(c) = \eta_{c,a}(c)+\eta_{a,b}(c)+\delta_{a,c}$.
(c) By \eqref{eq:symb-1}, we have $\eta_{a,b}(a) \eta_{a,b}(a) = \eta_{a,b}(a) \eta_{a,a}(a)$, and $\eta_{a,a}(a)=-1$ by (a).
(d) By \eqref{eq:symb-1}, the product in question is $\eta_{c,b}(b) \eta_{b,b}(a)$, which vanishes by (a).
\end{proof}
\begin{lemma} \label{lem:digraph-4}
Let $\eta$ be a $\Sigma$-symbol valued in a connected ring $k$. Then there exists a unique oriented bisection structure $S$ such that $\eta=\eta^S$.
\end{lemma}
\begin{proof}
Let $a \ne b$ be elements of $\Sigma$. By Lemma~\ref{lem:digraph-3}(c), we see that $-\eta_{a,b}(a)$ is an idempotent of $k$. Since $k$ is connected, it follows that it is either~0 or~1. Define $S$ by $S_{a,b}=-\eta_{a,b}(a)$ for $a \ne b$, and $S_{a,a}=0$. Since $-\eta_{a,a}(a)=1$ by Lemma~\ref{lem:digraph-3}(a), we see that $S_{a,b}=-\eta_{a,b}(a)-\delta_{a,b}$ is valid for all $a,b \in \Sigma$.
We now verify that $S$ satisfies condition $(\ast)$. Let $a$, $b$, and $c$ be distinct elements of $\Sigma$. We must show
\begin{displaymath}
(S_{a,b}-S_{a,c})(S_{b,a}-S_{b,c})=0.
\end{displaymath}
Up to signs, the first factor is $\eta_{c,b}(a)$ and the second is $\eta_{c,a}(b)$. The product of these vanishes by Lemma~\ref{lem:digraph-3}(d), and so the claim follows. It now follows from Lemma~\ref{lem:digraph-3}(b) that $\eta=\eta^S$. Since $S$ can be recovered from $\eta^S$ (as $1 \ne 0$ in $k$), uniqueness of $S$ follows.
\end{proof}
\subsection{Proof of Theorem~\ref{mainthm}}
Let $T$ be a directed $\Sigma$-labeled tree. This induces an oriented bisection structure on $\Sigma$ (\S \ref{ss:bisect-defn}) and thus, by the above results, a $\mathbf{Z}$-valued measure for $G$, i.e., a ring homomorphism $\mu_T \colon \Theta(G) \to \mathbf{Z}$. Consider the product of these measures
\begin{displaymath}
\phi \colon \Theta(G) \to \prod_T \mathbf{Z}.
\end{displaymath}
We must show that $\phi$ is an isomorphism. For any ring $k$, there is an induced map
\begin{displaymath}
\phi^* \colon \Hom(\prod_T \mathbf{Z}, k) \to \Hom(\Theta(G), k),
\end{displaymath}
where the $\Hom$'s are taken in the category of rings. By the results of this section (and Theorem~\ref{thm:bisect}), $\phi^*$ is bijective if $k$ is a connected ring. It follows that $\phi^*$ is also bijective if $k$ is a finite product of connected rings. It thus suffices to show that $\Theta(G)$ is such a product, for then Yoneda's lemma will show that $\phi$ is an isomorphism.
We now show that $\Theta(G)$ is a finite product of connected rings. It is equivalent to show that $\Theta(G)$ has finitely many idempotents, and for this it is sufficient to show that $\Theta(G)$ has finitely many minimal primes. Suppose $\mathfrak{p}$ is a minimal prime. Since $\Theta(G)/\mathfrak{p}$ is a connected ring, the quotient map $\Theta(G) \to \Theta(G)/\mathfrak{p}$ factors through some $\mu_T$. We thus find $\ker(\mu_T) \subset \mathfrak{p}$, and so $\mathfrak{p}=\ker(\mu_T)$ by the minimality of $\mathfrak{p}$. Hence $\Theta(G)$ has finitely many minimal primes.
\subsection{Description of measures}
Let $T$ be a directed $\Sigma$-labeled tree and let $\nu$ be the corresponding $\mathbf{Z}$-valued $\Sigma$-measure. We now explain how to compute $\nu$ directly from $T$. Let $S$ be the oriented bisection structure associated to $T$; we use the convention that $S_{a,b}$ is~1 if $a$ points towards $b$, and~0 if $a$ points away from $b$. Let $\eta=\eta^S$ be the symbol associated to $S$.
Let $a,b \in \Sigma$ and let $w \in \Sigma^{\star}$ be of length $n$. Put $w_0=a$ and $w_{n+1}=b$. We say that $(a,b,w)$ is \defn{monotonic} if for all $0 \le i<j<l \le n+1$ with $w_i$, $w_j$, and $w_l$ distinct, we have that $w_j$ belongs to the shortest path joining $w_i$ and $w_l$. This means that each $w_i$ lies on the shortest path between $a$ and $b$ (inclusive), and that as we go from $w_i$ to $w_{i+1}$ we either stay at the same edge or move closer to $b$ along this path.
We say that an edge on the path from $a$ to $b$ is \defn{positively oriented} if it points away from $a$ and towards $b$; otherwise we say that it is \defn{negatively oriented}. We also apply this terminology to $a$ and $b$ themselves: $a$ is positively oriented if it points towards $b$, and $b$ is positively oriented if it points away from $a$. (If $a=b$ then $a$ is considered positively oriented.)
We say that $(a,b,w)$ is \defn{good} if it is monotonic and $w_i$ is positively oriented whenever $w_i$ occurs more than once in $w_0 \cdots w_{n+1}$. Assuming $(a,b,w)$ is good, we put $\epsilon_{a,b}(w)=(-1)^m$, where $m$ is the number of $w_i$'s, for $1 \le i \le n$, that are positively oriented. The following is our main result:
\begin{proposition} \label{prop:explicit}
If $(a,b,w)$ is good then $\nu_{a,b}(w)=\epsilon_{a,b}(w)$; otherwise $\nu_{a,b}(w)=0$.
\end{proposition}
We require a few lemmas before giving the proof.
\begin{lemma} \label{lem:explicit-1}
For $a,b,c,d \in \Sigma$, we have
\begin{displaymath}
\eta_{a,c}(b) \eta_{a,d}(c) = \eta_{a,c}(b) \eta_{b,d}(c).
\end{displaymath}
\end{lemma}
\begin{proof}
The stated equation is equivalent to
\begin{displaymath}
\eta_{a,c}(b) (\eta_{a,d}(c)-\eta_{b,d}(c)) = 0.
\end{displaymath}
We have
\begin{displaymath}
\eta_{a,d}(c)-\eta_{b,d}(c) = \eta_{a,b}(c) + \delta_{b,c},
\end{displaymath}
and so we must show
\begin{displaymath}
\eta_{a,c}(b) (\eta_{a,b}(c)+\delta_{b,c}) = 0.
\end{displaymath}
If $b \ne c$ this follows from $(\ast)$, while if $b=c$ it follows since $\eta_{a,b}(b) \in \{-1,0\}$.
\end{proof}
\begin{lemma} \label{lem:explicit-2}
For $a,b \in \Sigma$ and $w \in \Sigma^{\star}$ of length $n$, we have
\begin{displaymath}
\nu_{a,b}(w) = \prod_{i=1}^n \eta_{w_{i-1},w_{i+1}}(w_i),
\end{displaymath}
where we put $w_0=a$ and $w_{n+1}=b$.
\end{lemma}
\begin{proof}
Write $\nu'_{a,b}(w)$ for the right side above. We show $\nu_{a,b}(w)=\nu'_{a,b}(w)$ by induction on the length $n$ of $w$. If $n \le 1$, the statement is clear. Now assume $n \ge 2$. We have
\begin{align*}
\nu_{a,b}(w)
&= \eta_{a,w_2}(w_1) \nu_{a,b}(w_2 \cdots w_n) \\
&= \eta_{a,w_2}(w_1) \eta_{a,w_3}(w_2) \eta_{w_2,w_4}(w_3) \cdots \\
&= \eta_{a,w_2}(w_1) \eta_{w_1,w_3}(w_2) \eta_{w_2,w_4}(w_3) \cdots = \nu'_{a,b}(w).
\end{align*}
In the first step, we used Definition~\ref{defn:Sigma-meas}(b) with $(w,c,w') \to (\emptyset, w_1, w[2,n])$; in the second step, we used the inductive hypothesis; and in the third step we applied Lemma~\ref{lem:explicit-1} to the first two factors, with $(a,b,c,d) \to (a,w_1,w_2,w_3)$. The result thus follows.
\end{proof}
\begin{lemma} \label{lem:explicit-3}
Let $a,b \in \Sigma$ be distinct. Then
\begin{displaymath}
\eta_{a,b}(a) = \begin{cases} -1 & \text{if $a$ points towards $b$} \\
0 & \text{otherwise} \end{cases} \qquad
\eta_{a,b}(b) = \begin{cases} -1 & \text{if $b$ points away from $a$} \\
0 & \text{otherwise} \end{cases}
\end{displaymath}
Let $a,b,c \in \Sigma$ be distinct. Then
\begin{displaymath}
\eta_{a,b}(c) = \begin{cases} -1 & \text{if $c$ is between $a$ and $b$, and points towards $b$} \\
1 & \text{if $c$ is between $a$ and $b$, and points towards $a$} \\
0 & \text{otherwise} \end{cases}
\end{displaymath}
\end{lemma}
\begin{proof}
These follow from direct computation. We explain the final formula. We have
\begin{displaymath}
\eta_{a,b}(c) = S_{c,a}-S_{c,b}
\end{displaymath}
since $\delta_{b,c}=0$ by assumption. The edge $c$ is between $a$ and $b$, i.e., on the shortest path joining $a$ and $b$, if and only if $S_{c,a} \ne S_{c,b}$. Thus $\eta_{a,b}(c)=0$ unless $c$ is between $a$ and $b$. Assume now that $c$ is between $a$ and $b$. If $c$ points towards $a$ then $S_{c,a}=1$ and $S_{c,b}=0$; thus $\eta_{a,b}(c)=1$ in this case. Similarly, if $c$ points towards $b$ then $\eta_{a,b}(c)=-1$.
\end{proof}
\begin{lemma} \label{lem:explicit-4}
Let $a,b \in \Sigma$ and let $w \in \Sigma^{\star}$ have length $n$. Then for $0 \le i<j<l \le n+1$ we have that $\eta_{w_i,w_l}(w_j)$ divides $\nu_{a,b}(w)$. Here we put $w_0=a$ and $w_{n+1}=b$.
\end{lemma}
\begin{proof}
If $j=i+1$ and $l=i+2$, this follows from Definition~\ref{defn:Sigma-meas}(b) with $(w,c,w') \to (w[1,i], w_{i+1}, w(i+1,n])$, which exhibits $\nu_{w_i,w_{i+2}}(w_{i+1})=\eta_{w_i,w_{i+2}}(w_{i+1})$ as a factor of $\nu_{a,b}(w)$. Suppose now that $j>i+1$ or $l>j+1$, and pick an index $m$ with $i<m<l$ and $m \ne j$. By Definition~\ref{defn:Sigma-meas}(b) with $(w,c,w') \to (w[1,m), w_m, w(m,n])$, we see that $\nu_{a,b}(w[1,m) w(m,n])$ divides $\nu_{a,b}(w)$. The former is divisible by $\eta_{w_i,w_l}(w_j)$ by induction on the length of $w$, which completes the proof.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:explicit}]
Suppose $(a,b,w)$ is not good; we show that $\nu_{a,b}(w)=0$. First suppose $(a,b,w)$ is not monotonic. Then there exists $0 \le i<j<l \le n+1$ with $w_i$, $w_j$, and $w_l$ distinct such that $w_j$ is not between $w_i$ and $w_l$. By Lemma~\ref{lem:explicit-3}, we have $\eta_{w_i,w_l}(w_j)=0$. By Lemma~\ref{lem:explicit-4}, this symbol divides $\nu_{a,b}(w)$, and so $\nu_{a,b}(w)=0$. Next suppose that there is a letter $c$ occurring more than once in $w_0 \cdots w_{n+1}$ that is negatively oriented. Let $i<j$ be such that $w_i=w_j=c$. If $c \ne a$ then Lemma~\ref{lem:explicit-3} shows that $\eta_{a,w_j}(w_i)=0$, while if $c \ne b$ then the same result shows that $\eta_{w_i,b}(w_j)=0$. By Lemma~\ref{lem:explicit-4}, these symbols divide $\nu_{a,b}(w)$, and so this vanishes as well.
Now suppose that $(a,b,w)$ is good. Let $1 \le i \le n$. If $w_{i-1}=w_i=w_{i+1}$ then $\eta_{w_{i-1},w_{i+1}}(w_i)=-1$. Otherwise, Lemma~\ref{lem:explicit-3} shows that $\eta_{w_{i-1},w_{i+1}}(w_i)$ is $-1$ if $w_i$ is positively oriented and $+1$ otherwise. Thus the result follows from Lemma~\ref{lem:explicit-2}.
\end{proof}
\subsection{An example} \label{ss:ex}
Let $\Sigma=\{\mathrm{a},\ldots,\mathrm{f}\}$ be a six element set. Consider the following directed $\Sigma$-labeled tree $T$:
\begin{displaymath}
\begin{tikzpicture}[decoration={markings,mark=at position 0.5 with {\arrow{>}}}]
\tikzset{leaf/.style={circle,fill=black,draw,minimum size=.75mm,inner sep=0pt}}
\node[leaf] (A) at (-1.866, .5) {};
\node[leaf] (B) at (-1.866, -.5) {};
\node[leaf] (C) at (-1,0) {};
\node[leaf] (D) at (0,0) {};
\node[leaf] (E) at (1,0) {};
\node[leaf] (F) at (1.866, .5) {};
\node[leaf] (G) at (1.866, -.5) {};
\draw[postaction={decorate}] (A) to node[align=center,xshift=1.5mm,yshift=2mm]{\tiny a} (C);
\draw[postaction={decorate}] (C) to node[align=center,xshift=1.5mm,yshift=-2mm]{\tiny b} (B);
\draw[postaction={decorate}] (C) to node[align=center,yshift=2mm]{\tiny c} (D);
\draw[postaction={decorate}] (E) to node[align=center,yshift=2mm]{\tiny d} (D);
\draw[postaction={decorate}] (E) to node[align=center,xshift=-1.5mm,yshift=2mm]{\tiny e} (F);
\draw[postaction={decorate}] (E) to node[align=center,xshift=-1.5mm,yshift=-2mm]{\tiny f} (G);
\end{tikzpicture}
\end{displaymath}
The oriented bisection structure $S$ is specified in the following table:
\begin{center}
\begin{tabular}{c|cccccc}
& a & b & c & d & e & f \\
\hline
a & 0 & 1 & 1 & 1 & 1 & 1 \\
b & 0 & 0 & 0 & 0 & 0 & 0 \\
c & 0 & 0 & 0 & 1 & 1 & 1 \\
d & 1 & 1 & 1 & 0 & 0 & 0 \\
e & 0 & 0 & 0 & 0 & 0 & 0 \\
f & 0 & 0 & 0 & 0 & 0 & 0
\end{tabular}
\end{center}
Here the row labeled ``$\mathrm{a}$'' specifies the values of $S_{\mathrm{a},-}$.
Consider $\nu_{\mathrm{b},\mathrm{f}}(w)$. For $(\mathrm{b},\mathrm{f},w)$ to be good, $w$ can only use the edges joining $\mathrm{b}$ and $\mathrm{f}$ (inclusively), i.e., $\mathrm{b}$, $\mathrm{c}$, $\mathrm{d}$, and $\mathrm{f}$, and they must appear in that order. In fact, $\mathrm{b}$ can not occur in $w$ since it is negatively oriented and is one of the endpoints, and $\mathrm{d}$ can only occur once. It follows that the good $w$'s in this case have the form $\mathrm{c}^i \mathrm{d}^j \mathrm{f}^l$ where $i,l \in \mathbf{N}$ and $j \in \{0,1\}$. Moreover, for such $i$, $j$, and $l$, we have
\begin{displaymath}
\nu_{\mathrm{b},\mathrm{f}}(\mathrm{c}^i \mathrm{d}^j \mathrm{f}^l) = (-1)^{i+l}.
\end{displaymath}
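As an illustrative check of Proposition~\ref{prop:explicit} in this example, take $w=\mathrm{c}\mathrm{d}\mathrm{f}$ (so $i=j=l=1$). By Lemma~\ref{lem:explicit-2},
\begin{displaymath}
\nu_{\mathrm{b},\mathrm{f}}(\mathrm{c}\mathrm{d}\mathrm{f}) = \eta_{\mathrm{b},\mathrm{d}}(\mathrm{c}) \, \eta_{\mathrm{c},\mathrm{f}}(\mathrm{d}) \, \eta_{\mathrm{d},\mathrm{f}}(\mathrm{f}).
\end{displaymath}
By Lemma~\ref{lem:explicit-3}, the first factor is $-1$ (as $\mathrm{c}$ lies between $\mathrm{b}$ and $\mathrm{d}$ and points towards $\mathrm{d}$), the second is $+1$ (as $\mathrm{d}$ lies between $\mathrm{c}$ and $\mathrm{f}$ and points towards $\mathrm{c}$), and the third is $-1$ (as $\mathrm{f}$ points away from $\mathrm{d}$). The product is $(-1)(+1)(-1)=1=(-1)^{1+1}$, in agreement with the formula above.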
\section{The case of the line} \label{s:line}
Recall that $\mathrm{L}=\mathbf{S} \setminus \{\infty\}$ is a homogeneous structure with a total order and $\Sigma$-coloring, and its automorphism group $H$ is oligomorphic. The following theorem describes measures for $H$.
\begin{theorem}
Given a directed $\Sigma$-labeled tree $T$ and two (possibly equal) vertices $x$ and $y$ of $T$, there is an associated $\mathbf{Z}$-valued measure $\mu_{T,x,y}$ for $H$. The product of these measures defines a ring isomorphism $\Theta(H) \to \prod_{T,x,y} \mathbf{Z}$. In particular, $\Theta(H) \cong \mathbf{Z}^M$, where $M=(2n+2)^n$ and $n=\# \Sigma$.
\end{theorem}
\begin{proof}
We simply indicate the main ideas of the proof. Put $\ol{\Sigma}=\Sigma \cup \{\pm \infty\}$. A \defn{$\ol{\Sigma}$-measure} is a rule assigning to $a \in \Sigma \cup \{-\infty\}$, $b \in \Sigma \cup \{+\infty\}$, and $w \in \Sigma^{\star}$ a value $\nu_{a,b}(w)$ in $k$ satisfying axioms similar to those in Definition~\ref{defn:Sigma-meas}. One first shows that measures for $H$ are equivalent to $\ol{\Sigma}$-measures: given a measure $\mu$ for $H$, the corresponding $\ol{\Sigma}$-measure is defined by $\nu_{a,b}(w)=\mu(I^w)$, where $I$ is an interval in $\mathrm{L}$ with endpoints of type $a$ and $b$.
Next, a \defn{$\ol{\Sigma}$-symbol} is a rule assigning to $a \in \Sigma \cup \{-\infty\}$, $b \in \Sigma \cup \{+\infty\}$, and $c \in \Sigma$ a value $\eta_{a,b}(c)$ in $k$ satisfying axioms similar to those in Definition~\ref{defn:symbol}. One shows that there is a bijective correspondence between $\ol{\Sigma}$-measures and $\ol{\Sigma}$-symbols.
Suppose $\eta$ is a $\ol{\Sigma}$-symbol valued in a connected ring $k$. Define
\begin{displaymath}
S \colon \Sigma \times \ol{\Sigma} \to \{0,1\}
\end{displaymath}
as follows. For $a,b \in \Sigma$, we define $S_{a,b}$ just as before. We put $S_{a,\infty}=-\eta_{a,\infty}(a)$ and $S_{a,-\infty}=\eta_{-\infty,a}(a)+1$. One shows using arguments similar to before that these values do belong to $\{0,1\}$, and that $\eta$ can be recovered from $S$. Moreover, $S$ satisfies the following condition: given $a,b \in \Sigma$ and $c \in \ol{\Sigma}$, at least one of the equalities $S_{a,b}=S_{a,c}$ or $S_{b,a}=S_{b,c}$ holds.
Restricting $S$ to $\Sigma \times \Sigma$ yields an oriented bisection structure, and thus a directed $\Sigma$-labeled tree $T$. One then shows that there is a unique vertex $x$ of $T$ such that $S_{a,\infty}$ is~1 if $a$ points to $x$, and~0 otherwise; similarly, one gets a vertex $y$ associated to $-\infty$. From this description, it is also clear how one can start with $(T,x,y)$ and then define $S$, $\eta$, $\nu$, and finally $\mu$. This is how one obtains the stated description for $\Theta(H)$.
Finally, we explain the enumeration. If $T$ is a tree with $n$ edges then $T$ has $n+1$ vertices, and so there are $(n+1)^2$ choices for $(x,y)$. Hence $M=(n+1)^2 \cdot N$, where $N$ is the number of choices for $T$. (Note that directed $\Sigma$-labeled trees have no non-trivial automorphisms.) We have already seen in Proposition~\ref{prop:enum} that $N=2^n \cdot (n+1)^{n-2}$.
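Indeed, $M=(n+1)^2 \cdot 2^n (n+1)^{n-2} = 2^n (n+1)^n = (2n+2)^n$, in agreement with the statement of the theorem.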
\end{proof}
|
{
"arxiv_id": "2302.08701",
"language": "en",
"timestamp": "2023-02-20T02:07:14",
"url": "https://arxiv.org/abs/2302.08701",
"yymm": "2302"
} | \section{Introduction}
Determining the demographic and socioeconomic characteristics that shape a society's overall picture is crucial. The distribution of social classes and groups within a society can greatly impact the region's stability, social welfare, and economic potential~\cite{holzer2003public,yang2022oversampling,arva2016spatial}. For example, \cite{barter2019manifold} has shown that, for some cities, university students and social deprivation are the most significant factors in explaining other social variables in census statistics. Other studies have revealed the impact of social, economic, and ethnic attributes on regional disparities, such as energy burdens in households~\cite{scheier2022measurement}, heterogeneity in epidemic vulnerability~\cite{elliot2000spatial}, and environmental inequality~\cite{brazil2022environmental}.
Census data or similar datasets are often used to identify these demographic characteristics by grouping a large number of social variables collected from hundreds of thousands of geographical units (thus, country-level) into a few independent spatial distributions~\cite{ratledge2022using}. However, due to the scale and complexity of these datasets, there can be challenges in practice. These may include limitations in processing only a selection of social variables on a large spatial scale~\cite{liu2019principal}, or utilizing a full collection of social variables for only a small region with expert knowledge~\cite{liu2019principal}.
We argue that the objectivity can be compromised when deriving these social variables or spatial regions from census data. For instance, defining geographical clustering of social groups for statistical analysis requires aggregating regions into specific areas. However, the modifiable areal unit problem (MAUP)~\cite{gehlke1934certain} challenges the possibility of such spatial aggregations being consistent across different social issues. Additionally, the heterogeneity of features across cities leads to a gap between local studies and the general significance of these features for all cities. Furthermore, researchers may have different perspectives on nominal attributes like race or religion, leading to a lack of consensus on the significance of these features. This can make it difficult to synthesize findings from different studies to identify critical socioeconomic characteristics.
To overcome these challenges, the Diffusion Maps (DM) manifold learning method can be applied to analyze census data. DM effectively captures the interplay of social indicators by assigning the social identities of locations as branches in the data manifold. Previous studies, such as~\cite{barter2019manifold}, have used DM to analyze similar cities and identify common social indicators, decomposing 1450-dimensional census data into the two most significant social indicators of Bristol and Edinburgh (university students and social deprivation). In fact, it forms a successful attempt to overcome the MAUP problem by defining a topic-specific continuous metric that is localized into small patches of important areas of census.
While this approach has been successful in identifying important social indicators in Bristol and Edinburgh, it may not accurately represent the entire population of England and Wales. To gain a comprehensive understanding of demographics, it is important to identify globally-consistent contributors while also taking into account the unique characteristics of small communities within a society. To achieve this, our study applies DM to the census data of England and Wales to identify geographical clustering and hotspots of inequality, providing a more nuanced understanding of demographics.
Our study utilizes Diffusion Maps to analyze the census data of England and Wales and identify geographical clustering and hotspots of inequality. The goal is to decompose high-dimensional social variables into branched, interdependent social factors, revealing patterns in space that would otherwise go unrecognized. A new method, the correlation table, is proposed to explain the derived social dynamics and provide a standard for structuring and analyzing any spatial collection of features while minimizing preassumed spatial correlations in large study areas, such as a densely organized country. Additionally, we introduce a localization metric to reveal the critical features of specific cities. Our method provides a comprehensive view of the descendingly important features of England and Wales and tracks where these features are locally highlighted. By combining the Diffusion Maps method with the correlation table and localization metric, our study offers a powerful tool for understanding demographics and uncovering patterns in social and economic data.
\section{Method and Data}
\subsection{The census data}
The 2011 UK Census collected by the Office for National Statistics of the United Kingdom (https://www.ons.gov.uk/census) provides a comprehensive picture of the population and households in England and Wales, with over 2,000 social variables (or features) on 181,408 Output Areas (OAs). OAs are the smallest geographical units used in the census; they are designed to be compact and homogeneous and contain between 125 and 650 households. However, the vast amount of data can make it difficult to extract meaningful insights. Our study aims to tackle this challenge by utilizing Diffusion Maps to decompose the high-dimensional social variables into branched social factors, revealing hidden patterns in space.
\subsection{Diffusion Maps}
Diffusion Maps (DM) is a nonlinear dimensionality reduction technique that leverages a random walk process on a sparse network of data points to uncover the structural differentiation within data. In urban science, it rests on the idea that locations are clusters of similar individuals; the similarity of two locations is therefore equivalent to their distance in the data space, which can in turn be used to define the network topology. This method allows a local perspective to be integrated into a broader understanding of urban dynamics, making it an ideal tool for our study.
The construction of the DM proceeds as follows. Suppose that each of the $N$ Output Areas $x$ is represented by an $M$-dimensional vector whose entries are its social variables. Here, $M$ is the number of social variables in the census dataset, and $N$ is the total number of data points. To account for the distributional heterogeneity of different social variables, we measure the similarity of an OA pair $x$, $y$ through their Spearman rank correlation
\begin{equation*}
\rho_{x,y}=\frac{R_x \cdot R_y - N^2/4}{||R_x-N/2||\cdot ||R_y-N/2||}
\end{equation*}
for each pair $(x, y)$ of OAs, where $R_x$ is the vector whose entries are the ranks of $x$ for each social variable. We denote by $\Sigma$ the rank correlation matrix, whose element $\Sigma_{x,y} = \rho_{x,y}$ is the correlation of the corresponding data points $x$ and $y$. The elements of $\Sigma$ are thus all valued between $-1$ and $1$, and nearby points in the data space have $\rho_{x,y}$ close to 1, following the framework in \cite{ryabov2022estimation}. To emphasize the structure of the most important links connecting the most similar data points, we define an alternative matrix $\widetilde{W}$ that keeps only the $k$ largest elements in each row of $\Sigma$ and sets the remaining elements to zero. Here, we choose $k=10$, which barely keeps the network connected, i.e., from each data point there exists at least one route to every other data point in the network. Next, we define an $N\times N$ normalization matrix $D$ whose diagonal elements are the row sums of $\widetilde{W}$. Then, we compute the eigenvalues and right eigenvectors of the normalized Laplacian matrix $A = I - D^{-1}\widetilde{W}$. The matrix $D^{-1}\widetilde{W}$ can be regarded as a Markovian transition matrix for a random walk process over the data points, and $A$ is the associated random-walk Laplacian. As $N\to \infty$ (with $k$ small), the random walk converges to a continuous-time diffusion process over the observable data manifold. The low-order eigenvectors of $A$ are then an approximate parameterization of the underlying manifold that hints at the actual urban dynamics.
As presented in~\cite{barter2019manifold}, the social features can be represented by the linear combination of the leading eigenvectors. The complete set of eigenvectors $\eta_j$ correspond to an increasing sequence of $A$’s eigenvalues $\lambda_0 \le \lambda_1 \le \dots \le \lambda_{N-1}$, and each of $\eta_j$ corresponds to a relatively independent dynamical variable, whose nonlinear combinations are explicit in the census as social variables. We then color code the output areas according to their corresponding elements in each eigenvector $\eta_j$, and generate spatial plots to visualize the spatial configurations of the dynamical variables.
In order to make sense of the dynamical variables identified through the eigenvectors, we perform a backward calculation to investigate the correlation between the eigenvectors $\eta_j$ and the census social variables. By identifying the social variables that are most positively and negatively correlated with a given eigenvector $\eta_j$, we can gain valuable insights into the significance of the corresponding eigenfeatures. This information, in combination with the visual representation of the eigenvectors through spatial plots, enables a comprehensive analysis of the underlying dynamics.
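To make the pipeline concrete, the following is a minimal sketch in Python (NumPy/SciPy). It is an illustration rather than our production code: the function name \texttt{diffusion\_map}, the assumption that the census table is available as an $N\times M$ array \texttt{X}, and the use of the standard mean-centred form of the Spearman correlation (in place of the explicit $N/2$ constants above) are choices made for the sketch.
\begin{verbatim}
import numpy as np
from scipy.stats import rankdata
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import eigs

def diffusion_map(X, k=10, n_components=10):
    # X: (N, M) array; rows are Output Areas, columns are social variables.
    N = X.shape[0]

    # Rank each social variable across the N Output Areas, so that the
    # Pearson correlation of the rank vectors is the Spearman correlation.
    R = np.apply_along_axis(rankdata, 0, X)
    Rc = R - R.mean(axis=1, keepdims=True)           # centre each OA's rank vector
    Rc /= np.linalg.norm(Rc, axis=1, keepdims=True)  # unit-normalise each row
    S = Rc @ Rc.T                                    # (N, N) rank-correlation matrix

    # Sparsify: keep only the k largest similarities in each row.
    np.fill_diagonal(S, -np.inf)                     # exclude self-links
    nn = np.argpartition(-S, k, axis=1)[:, :k]
    rows = np.repeat(np.arange(N), k)
    cols = nn.ravel()
    W = csr_matrix((S[rows, cols], (rows, cols)), shape=(N, N))

    # Row-normalise: P = D^{-1} W_tilde is the random-walk transition matrix,
    # and A = I - P shares its eigenvectors, with eigenvalues 1 - lambda_P.
    d = np.asarray(W.sum(axis=1)).ravel()
    P = diags(1.0 / d) @ W
    vals, vecs = eigs(P, k=n_components, which="LR")
    order = np.argsort(-vals.real)                   # largest lambda_P first
    return 1.0 - vals.real[order], vecs.real[:, order]
\end{verbatim}
The eigenvectors returned by such a sketch correspond to what we denote $\eta_j$; their entries can be joined back to the OA geometries for the spatial plots and correlated against the original census variables as described above.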
\subsection{Virtual similarity networks versus social hierarchy}
From the above analysis of DM, we are actually interpreting the census dataset as a weighted sparse network formed by 181,408 Output Areas (OAs) in a 1,450-dimensional space, with the `similarity' of OAs as the weights of the links. The sparsity of this census data network is required to extract the backbone of the features' joint structure, rather than being dominated by a few highly heterogeneously distributed social variables. A natural extension of the problem is then how sparse the network needs to be to recover the underlying structures and properties of the census data, represented by the social hierarchy and criticality~\cite{tadic2017mechanisms}. Consequently, the discussion on whether there are significant cross-scale features in the census data helps to justify the proposed network based on local metrics.
We here address the problem of sparsity by assuming that the census data network should be at its most informative, critical state, since society is widely regarded as a critical system~\cite{midgley1994ecology}.
As the census data network is formed by finding the $k$ most similar OAs for each OA, the sparsity of the network can be controlled through the value of $k$: a larger (smaller) $k$ represents stronger (weaker) network connectivity. We denote by $G_k$ the network of connectivity $k$.
To define the census data network's criticality, we specifically consider the degree distribution of the census data network for different network sparsity~\cite{larremore2011predicting}. Table~\ref{tab:powerlaw_diff_map} shows that as the similarity network $G_k$ is defined as more sparse (i.e., smaller $k$), the likelihood of $G_k$'s degree distribution being more similar to a powerlaw distribution increases. This suggests that as the adjacency threshold and network connectivity decrease, the network's powerlaw characteristics become more prominent. The analysis in this paper thus chooses $k=10$ to maximize the likelihood for the data network to be powerlaw-like.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$k$ & $\alpha$ & $x_{\min}$ & $R$ & $p$ \\ \hline
10 & 2.890 & 16. & 0.933 & 1.846$\times 10^{-9}$\\ \hline
20 & 2.877 & 30. & 0.0385 & 9.367$\times 10^{-1}$\\ \hline
30 & 2.878 & 44. & -2.726 & 1.381$\times 10^{-1}$\\ \hline
40 & 2.875 & 58. & -9.835 & 4.938$\times 10^{-3}$\\ \hline
50 & 2.867 & 73. & -27.09 & 3.493$\times 10^{0}$\\ \hline
\end{tabular}
\caption{The maximum-likelihood fitting methods combined with goodness-of-fit tests (based on the Kolmogorov-Smirnov statistic and likelihood ratios) of $G_k$'s degree distribution. Columns $\alpha$ and $x_{\min}$ represent the estimation of the powerlaw degree distribution of the census network of OAs with the form \(p(k)=k^{-\alpha}/\sum_{n=1}^{\infty} ( n+x_{\min} )^{-\alpha}\), where $x_{\min}$ is the minimum degree, and $\alpha$ is the powerlaw exponent. $R$ is the likelihood ratio test comparing the fit of the power laws curve and the log-normal curve. A more positive $R$ indicates a better fit of the powerlaw curve to the degree distribution over the log-normal curve. Finally the $p$ column is the $p$-value of the confidence of the powerlaw distribution.}
\label{tab:powerlaw_diff_map}
\end{table}
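For reference, the fits reported in Table~\ref{tab:powerlaw_diff_map} follow the standard maximum-likelihood/Kolmogorov--Smirnov methodology, for which one standard implementation is the \texttt{powerlaw} Python package. A minimal sketch is given below; it assumes the degree sequence of $G_k$ has already been extracted, and the function name is ours.
\begin{verbatim}
import numpy as np
import powerlaw  # maximum-likelihood fitting with goodness-of-fit tests

def powerlaw_vs_lognormal(degrees):
    # Fit a discrete power law to the degree sequence of G_k and compare
    # it against a log-normal alternative.
    fit = powerlaw.Fit(np.asarray(degrees), discrete=True)
    alpha = fit.power_law.alpha            # estimated exponent
    xmin = fit.power_law.xmin              # estimated lower cut-off
    # R > 0 favours the power law over the log-normal; p is the significance.
    R, p = fit.distribution_compare('power_law', 'lognormal')
    return alpha, xmin, R, p
\end{verbatim}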
\subsection{Localized participation ratio}
The DM eigenvectors are globally consistent features that play a significant role in the distribution of a variety of social variables found in census data. There are likely many factors that contribute to the society of England and Wales, and while the most dominant factors may be important, less dominant ones can also have significance in specific regions. This is illustrated in the example of Bristol and Edinburgh where university students have higher socioeconomic importance than social deprivation.
Here, we emphasize the importance of understanding how certain factors that are important on a global scale (as represented by the leading DM eigenvectors) are localized in specific cities. To accomplish this, we introduce a new metric called the local inverse participation ratio ($LIPR$), which allows us to trace the localization of an eigenfeature (i.e., a specific factor represented by an eigenvector) to a certain city. This is important because it can help to identify cities that are of special importance for a particular factor. Additionally, few metrics in the existing literature measure the local properties of global features, which makes it essential to introduce a new localization index such as the $LIPR$.
The $LIPR$ is an extension of the metric `inverse participation ratio' (IPR) from statistical physics~\cite{fyodorov1992analytical}, defined as~\begin{equation}
IPR_i = \sum_{k=1}^N \frac{(\eta_i^k)^4}{\sum_{j=1}^{N} (\eta_i^j)^2},
\end{equation} where $N$ is the number of entries of $\eta_i$ (one per Output Area), and $\eta_i = (\eta_i^1, \dots, \eta_i^N)^T$. Here, if a feature appears in one single area, i.e., $\eta_i = (0,\dots,0, 1, 0, \dots, 0)^T$, the corresponding $IPR_i = 1$; in the other limiting case, if a feature is uniformly distributed over all the areas, $\eta_i = (1/\sqrt{N},\dots, 1/\sqrt{N})^T$, the corresponding $IPR_i = 1/N$, which diminishes as $N$ grows. So a highly localized pattern corresponds to a large value of the $IPR$.
To capture whether an indicator clusters in an area, we extend the $IPR$ to local inverse participation ratio $LIPR$ of area $A$,~\begin{equation}
LIPR_i^A = \left( \frac{\sum_{j\in A} {\eta_i^j}^4}{||\eta_i||^2}\right) / \left( \frac{\sum_{j} {\eta_i^j}^4}{||\eta_i||^2}\right) = \frac{\sum_{j\in A} {\eta_i^j}^4}{\sum_{j} {\eta_i^j}^4}.
\end{equation} It is intended to be large when the distribution of eigenfeature $i$ is concentrated in the city $A$. A region with a high $LIPR$ indicates spatial clustering of small communities: social groups that are similar across the country but mainly localized within a few individual cities.
The $LIPR$ metric can be used to understand how localized an eigenfeature is in a certain city. Two limiting cases illustrate how it works. If an eigenfeature highlights a single output area, located in London, and assigns zero to every other area, then the $LIPR$ of London attains its maximal value of~1. Conversely, if an eigenfeature is spread uniformly over all output areas in the country, the $LIPR$ of London reduces to the fraction of output areas that lie in London and carries no information about localization. In general, a highly localized eigenfeature has a larger $LIPR$ value, and the metric can be used to pinpoint meaningful communities in more than one city.
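A minimal sketch of the two quantities in Python is given below. It assumes the eigenvector is available as a NumPy array and that a boolean mask selecting the OAs of the city of interest (e.g., London) has been prepared; the function names and the example mask are ours.
\begin{verbatim}
import numpy as np

def ipr(eta):
    # Inverse participation ratio of an eigenvector eta (one entry per OA).
    eta = np.asarray(eta, dtype=float)
    return np.sum(eta**4) / np.sum(eta**2)

def lipr(eta, area_mask):
    # Local inverse participation ratio of eta restricted to the Output
    # Areas selected by the boolean mask area_mask (e.g. the OAs of London).
    # The normalisation cancels, leaving sum_A eta^4 / sum eta^4.
    eta = np.asarray(eta, dtype=float)
    fourth = eta**4
    return fourth[area_mask].sum() / fourth.sum()

# Hypothetical usage: lipr(eta2, oa_in_london) gives the London
# localization of the second eigenfeature.
\end{verbatim}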
\section{General dominant features}
We begin with the eigenvectors corresponding to the smallest positive, and thus most important, Laplacian eigenvalues of the England and Wales diffusion mapping. Each eigenvector can be associated with a spatial plot, color-coded from the most negative to the most positive entries, representing the exposure of each OA to the corresponding demographic context.
\subsection{Urbanization properties}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\linewidth]{Figs/EF12.png}
\caption{The eigenvector map of $\eta_1$ and $\eta_2$, the representative eigenvectors that show global properties. The colors are assigned by the entries of an eigenvector, from the most positive (red) to the most negative (blue). Here, $\eta_1$ highlights the main cities; $\eta_2$ pinpoints the most skilled workers, who are mostly concentrated around the main airports. The label \textbf{a} in \textbf{B} is the Blackwall Tunnel, from where the symmetric pattern is broken between either side of the Thames; in \textbf{D}, \textbf{a}, \textbf{b}, and \textbf{c} are Hammersmith Hospital, Heathrow Airport, and HM Prison Isis, respectively.}
\label{fig:EF12}
\end{figure*}
The first eigenvector, $\eta_1$, can be used to identify patterns of urbanization in England and Wales~(Fig.~\ref{fig:EF12}\textbf{A} and \textbf{B}). $\eta_1$ is localized in the main cities of the country, and it highlights not only the largest cities such as London, Liverpool, and Manchester, but also smaller central places surrounded by forest and mountains, in the form of continuous patches of OAs represented by Porthmadog, Tregaron, and Newport. By analyzing only the London entries of $\eta_1$, we find working-class residential areas expanding along the River Thames, with a relatively north-south symmetrical pattern from west to east until the Blackwall Tunnel neighborhood, where tunnels replace walkable bridges as the connection between the riversides. We conclude that urbanized residential areas correspond to walkable neighborhoods, which form the most explanatory feature of the 2011 census. We recall the diffusion mapping results on city-level census data in~\cite{barter2019manifold}, which highlight universities and poverty as the dominant features of Bristol and Edinburgh. The eigenvector $\eta_1$ exhibits a more globalized spatial distribution of urbanization.
A natural question to follow is what elements, from the census perspective, determine the shape of a city identified by $\eta_1$. To this end, we compute the correlation of $\eta_1$ with all the census social variables. We found that the most correlated census variables of $\eta_1$ and the corresponding correlations are: \textit{living in a couple: Married or in a registered same-sex civil partnership} (0.82), \textit{two cars or vans in household} (0.80), \textit{Married} (0.78), and \textit{Occupancy rating (rooms) of +2 or more}, i.e. at least 2 rooms more than the basic standard (0.78). Meanwhile, $\eta_1$ is also highly negatively correlated with particular races and religions (\textit{Black African/Caribbean/Black British: African} -0.58, \textit{Muslim} -0.57). These social variables capture the typical community in a city in England and Wales. We note that urbanization is the most important dimension in the census, and urbanization is largely explained by the percentage of marriages and civil partnerships, vehicle ownership, and the occupancy status of the households in a neighborhood.
Eigenvector $\eta_2$ highlights similar areas to $\eta_1$ but exhibits a milder aggregation, with many clustered areas in medium-level regional centres~(Fig.~\ref{fig:EF12}\textbf{C} and \textbf{D}). Generally, $\eta_2$ picks out all the important airports in England and Wales with its highest entries, in addition to a general mapping of the working class in most cities and lower-level central places. We conclude that $\eta_2$ is mostly associated with skilled occupations, which can also be validated statistically by its most correlated census variables of degrees and diplomas: \textit{Degree (for example BA, BSc), Higher degree (for example MA, PhD, PGCE)} (0.85), \textit{two+ A levels/VCEs, 4+ AS levels, Higher School Certificate, Progression/Advanced Diploma, Welsh Baccalaureate Advanced Diploma} (0.81), \textit{Highest level of qualification: Level 4 qualifications and above} (0.81). $\eta_2$'s high correlation with education and its appearance as the second most dominant eigenvector indicate that education is one of the most clustered features of England and Wales, which widely explains other socioeconomic properties underlying the census data.
We then ask which areas are `most educated'. Zooming in on London, $\eta_2$ separates the city from northwest to southeast, similar to what is usually regarded as the separation of Old and New London. The most highlighted area of $\eta_2$ in London is Hammersmith Hospital. Surprisingly, $\eta_2$ also finds HM Prison Isis. We referred to the prison website and Wikipedia and learned that this prison provides education and vocational training in partnership with Kensington and Chelsea College.
Beyond education, $\eta_2$ is highly negatively correlated with \textit{Routine occupations} (-0.75), \textit{No British identity} (-0.60), and \textit{Bad health} (-0.59). These features indicate that education is one of the most important determinants of household clustering, as the education-related eigenvector appears as early as $\eta_2$. Here we compare the spatial distributions of $\eta_2$ and $\eta_6$, because visually $\eta_6$ finds almost every university in England and Wales. We conclude that $\eta_2$ is more about where university graduates settle and work, while the positive entries of $\eta_6$ find most of the university campuses. The census variables most correlated with $\eta_6$ are \textit{White: English/Welsh/Scottish/Northern Irish/British} (0.57), \textit{No religion} (0.55), and \textit{Born in UK} (0.53). These can be linked to the typical features of university neighborhoods in England and Wales.
\subsection{University neighborhoods}
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\linewidth]{Figs/EFuniv.png}
\caption{The eigenvector $\eta_6$ zoomed in on London, which finds A. University of East London, B. Queen Mary's Hospital, C. University of Greenwich, D. Brunel University, E. Smart College UK, F. The London College, G. Goldsmiths, University of London, H. Lewisham College, I. Harrow School, J. Northwick Park Hospital, K. King's College London Guy's Campus, L. Imperial College, M. Chelsea and Westminster Hospital, N. University of Roehampton London, O. Richmond University, P. Kingston University, Q. St Mary's University Twickenham London, R. Croydon College, S. Northumbria University - London, etc.}
\label{fig:london6}
\end{figure}
The spatial pattern of variable $\eta_6$ is associated with universities, which is not expected to be related to ethnicities. However, statistical analysis reveals differences in correlation with various ethnicities. The correlation coefficient between $\eta_6$ and the ethnic group of \textit{White: English/Welsh/Scottish/Northern Irish/British} is high at 0.570, while it has a negative correlation with \textit{British only identity}, \textit{self-employed individuals}, and the \textit{African language group of Somali}. These correlations are likely due to historical factors, as universities were established at a time when fewer immigrants came to the UK for education, and university communities tend to be selective or stable, with many graduates having a strong emphasis on education and research.
At a finer level of correlation, $\eta_6$'s correlation with individuals who identify as having \textit{No Religion} is 0.546. This can be explained by the high proportion of non-religious researchers in scientific or social studies, as well as the high proportion of non-religious international students in university-related areas. Other social variables that have correlations with $\eta_6$ that are greater than $0.5$ include \textit{Born in the UK} (0.530), \textit{Europe: Total} (0.508), and \textit{No British identity} (0.502). Census data was collected at the household level to identify households with pure British or foreign backgrounds in the highly correlated social variables. This household composition is representative of the typical characteristics of university staff and students, including middle-aged families established prior to recent globalization and young students in shared tenancy arrangements.
\subsection{Social security: prisons and military installations}
Eigenvector $\eta_3$ was found to have a high correlation with prison installations, as evidenced by its strong association with the social variables \textit{Other establishment: Prison Service} and \textit{Other establishment: Detention Centers and other detention} (correlation coefficient of 0.855). This correlation suggests that areas with population compositions similar to prisons are characterized by a unique pattern that may reflect societal instability.
To further validate this association, we examined the correlations of other social variables with $\eta_3$. Our analysis revealed that several factors, including race, education, and health, contribute to an area's stability. Specifically, we found that $\eta_3$ was positively correlated with \textit{White: English/Welsh/Scottish/Northern Irish/British} (0.128), \textit{No qualifications} (0.125), \textit{Routine occupations} (0.124), \textit{Fair health} (0.107), and \textit{Last worked before 2001} (0.103).
Of these social variables, health was found to have a particularly interesting relationship with $\eta_3$. Our analysis showed that \textit{medium health conditions}, rather than \textit{very good}, \textit{good}, \textit{bad} or \textit{very bad health}, were mainly related to $\eta_3$. This result is intuitive as individuals in perfect health are likely to have adequate income and those in poor health are less likely to commit a crime. Taken together, these findings provide further support for the hypothesis that $\eta_3$ is a marker of societal instability, and suggest that the distribution of population characteristics related to race, education, and health may play a role in shaping the spatial pattern of crime and prison. These implications are useful for policymakers and researchers seeking to understand and address the root causes of instability in society.
\section{Feature localization into cities}
\begin{table*}[ht]
\tiny
\begin{tabular}{|c|c|c|}
\hline
$\eta_i$ & $LIPR$ & Interpretation \\
\hline
3 & 0.010373 & Prison service \\ \hline
8 & 0.011114 & Educational establishment \\ \hline
4 & 0.022512 & Defense establishment \\ \hline
9 & 0.026599 & Retirement \\ \hline
10 & 0.030878 & Defense \\ \hline
7 & 0.058357 & Full time employee \\ \hline
5 & 0.064503 & One person household/Household spaces with no usual residents -\textgreater Tourist? \\ \hline
15 & 0.066020 & Multi-person household: All full-time students averaged household spaces\\ \hline
6 & 0.077067 & University \\ \hline
0 & 0.138103 & - \\ \hline
12 & 0.143874 & 1 car or van in household/Lower supervisory and technical occupations \\ \hline
14 & 0.179744 & Owned: Owned with a mortgage or loan/Economically active: Employee: Full-time \\ \hline
19 & 0.261445 & One family only: Married or same-sex civil partnership couple: All children non-dependent/ \\ & & Other households: Three or more adults and no children \\ \hline
17 & 0.264369 & Intermediate occupations/Multiple types of central heating \\ \hline
1 & 0.274687 & Marriage/many vehicles/redundant rooms \\ \hline
11 & 0.295386 & Skilled trades occupations/Lower supervisory and technical occupations/ \\ & & Caring, leisure and other service occupations \\ \hline
2 & 0.303946 & Higher degree -\textgreater Finance and technology \\ \hline
16 & 0.567684 & Tamil/Opposite: Yiddish/Israeli \\ \hline
18 & 0.716807 & Gas central heating/Solid fuel \\ \hline
13 & 0.743838 & Gas central heating, three or more adults and no children, highest level of education \\ \hline
\end{tabular}
\caption{The ranked $LIPR$s of eigenvectors restricted in London. The larger $LIPR$ of an $\eta$ indicates that the feature is more localized in London. Generally, the features with small $LIPR$s are infrastructures, while the features with larger $LIPR$s are the superlinear urban indicators.}
\label{tab:lipr}
\end{table*}
Diffusion maps are a useful method for evaluating and aggregating social variables across different regions. While a variable may hold global significance, it can still be crucial for understanding the behavior of specific cities if it displays unique characteristics within those areas (such as universities in \cite{barter2019manifold}). These maps can help identify key characteristics of specific areas and uncover minority groups that are concentrated in particular communities within a city. Such communities may act as country-wide hubs for that group, e.g., the London Chinatown for the Chinese community in London and its surrounding areas. To highlight these unique features, we introduce the concept of the Localized Inverse Participation Ratio ($LIPR$), which is described in detail in the Methods section.
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{Figs/EFlocal1318.png}
\caption{The most localized features in London, $\eta_{13}$ and $\eta_{18}$ for typical lifestyles. Here \textbf{A} and \textbf{B} are the spatial map of $\eta_{13}$ for England and Wales, and London, respectively; \textbf{D} and \textbf{C} are the spatial map of $\eta_{18}$ for England and Wales, and London, respectively.}
\label{fig:local_feat}
\end{figure*}
We investigate London to show how the $LIPR$ is used. First we take the first 20 eigenvectors as the study set, to pinpoint properties that are important aggregations of social variables valid for the whole of England and Wales. Then, for each eigenvector, we query the entries that correspond to the OAs in London and compute the $LIPR$ for the eigenvector--city pair (Table~\ref{tab:lipr}). A benchmark for $LIPR^\text{London}$ values is the uniform distribution, where a feature takes the same value of $1/\sqrt{N}$ in all the OAs in England and Wales, with $N=181,408$. In the Greater London region there are $N_\text{London}=24,927$ OAs, and the corresponding ``neutral'' $LIPR$ value is $LIPR^*=24927/181408=0.137408$. For an eigenvector $\eta_i$, if its $LIPR_i^\text{London}$ is greater than $LIPR^*$, it can be referred to as a \textit{localized} feature in London; otherwise, if $LIPR_i^\text{London}$ is smaller than $LIPR^*$, $\eta_i$ is not a localized feature in London (either not localized at all, or localized in other cities). A localized feature in London refers to a unique and distinguishable community that is highly concentrated within the city of London, setting it apart from its surrounding neighborhoods. Specifically, if an eigenvector has a high inverse participation ratio (IPR) but a low $LIPR$, the corresponding feature is globally significant but not localized in the city. On the other hand, if a feature (such as prisons) has a high IPR but a low $LIPR$ in a city, the feature usually corresponds to those rarely seen but essential elements of every city, i.e.\ infrastructure.
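As a minimal sketch (our own reading, since the exact formula lives in the Methods section not reproduced here), the $LIPR$ of a unit-norm eigenvector restricted to a city can be computed as the share of its squared mass falling inside that city, which reproduces the neutral benchmark $LIPR^*$ above for a uniform feature.
\begin{verbatim}
import numpy as np

def lipr(eta, city_mask):
    """Share of an eigenvector's squared mass inside one city.

    eta       : 1-D array, one entry per Output Area (assumed unit norm)
    city_mask : boolean array, True for OAs inside the city (e.g. Greater London)
    """
    eta = np.asarray(eta, dtype=float)
    return np.sum(eta[city_mask] ** 2) / np.sum(eta ** 2)

# Neutral benchmark for London: a perfectly uniform feature 1/sqrt(N)
N, N_london = 181_408, 24_927
lipr_star = N_london / N        # ~0.1374, as quoted in the text
# A feature with lipr(eta_i, london_mask) > lipr_star counts as localized in London.
\end{verbatim}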
The $LIPR$-ranking approach allows for a systematic investigation of the small social groups in the country that gather in London and have a significant impact there. The top localized features in London are $LIPR_{13} = 0.74$, $LIPR_{18}=0.72$, $LIPR_{16}=0.56$, $LIPR_2=0.30$, $LIPR_{11}=0.29$, $LIPR_1=0.27$, $LIPR_{17}=0.26$, $LIPR_{19}=0.26$, $LIPR_{14}=0.18$, and $LIPR_{12}=0.14$ (in descending order, cf.\ Table~\ref{tab:lipr}). The remaining eigenvectors may not be localized in London, but could be localized in other cities.
The correlation analysis of localized features in London reveals that $\eta_{13}$ and $\eta_{18}$ (Fig.~\ref{fig:local_feat}) are highly associated with central gas heating, highest level of education, and households consisting of three or more adults with no children. The correlation coefficients of these features with $\eta_{13}$ and $\eta_{18}$ are around 0.25, which highlights the demographic composition of the typical Londoner. Central heating is more prevalent in newer and more expensive homes, and these homes are more likely to be occupied by higher-educated and childless individuals. The concentration of such households in the affluent suburbs of London is consistent with the trend of urban gentrification and high demand for modern and comfortable living environments in urban areas. Our findings suggest that this demographic is characterized by well-educated individuals living in new build properties with central gas heating. The spatial distribution of $\eta_{13}$ highlights the affluent suburbs of London, which suggests that this area is perceived as desirable by wealthy families in business. This finding is supported by previous research studies~\cite{hanmer2017actors}, which have demonstrated a positive relationship between education level, household composition, and central heating system with wealth and urban development.
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\linewidth]{Figs/EFTamil.png}
\caption{The spatial clustering of Tamil-related people found by $\eta_{16}$ in England and Wales (\textbf{A}) and London (\textbf{B}). Here \textbf{a} is the Tamil Community Housing Association, and \textbf{b} is the Tamil Association of Brent. The darkest red regions are, however, the clusters of Yiddish and Hebrew speakers.}
\label{fig:tamil}
\end{figure}
The third highest localized feature in London, $\eta_{16}$, is associated with social variables related to the Tamil community and Yiddish, Israeli, and Hebrew speakers. Negative entries of $\eta_{16}$ indicate the presence of the Tamil community near the Tamil Community Housing Association, which supports refugees from Sri Lanka. The Tamil community in London has been growing since the Sri Lankan Civil War and is becoming distinct, as evidenced by high academic performance of Tamil children and a preference for having only children. Meanwhile, positive entries of $\eta_{16}$ mark areas with high concentrations of Yiddish, Israeli, and Hebrew speakers in Stamford Hill, North London. These areas tend to be isolated, as seen in the distribution of Yiddish newspapers aimed at audiences in Leeds, Manchester, and Gateshead, rather than being clustered in a distinct Yiddish neighborhood.
The two eigenvectors, $\eta_2$ and $\eta_{11}$, provide insight into the occupational landscape of London. $\eta_2$ is highly correlated with areas that demand degrees and higher education, such as BA, BSc, MA, PhD, and PGCE, with a correlation coefficient of 0.85. This indicates a strong presence of professional and highly educated individuals in these areas. On the other hand, $\eta_{11}$ marks communities with a higher concentration of lower supervisory and technical occupations, including mechanics, chefs, train drivers, plumbers, and electricians, with a correlation coefficient of 0.33. These are typically considered higher grade blue-collar jobs that require specialized skills.
It is worth noting that $\eta_2$ also has a negative correlation with South Asian language speakers, specifically those who speak Pakistani Pahari, Mirpuri, and Potwari, indicating a lack of assimilation into London's societies. This may suggest a potential barrier for these individuals in accessing higher education and professional opportunities.
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\linewidth]{Figs/EFGlobal.png}
\caption{Three of the most globalized eigenvectors: \textbf{A}. $\eta_3$ maps prison installations; \textbf{B}. $\eta_8$ maps educations before college; and \textbf{C}. $\eta_4$ highlights the military camps.}
\label{fig:global}
\end{figure}
The areas identified by $\eta_3$, which highlights prison installations~(Fig.~\ref{fig:global}A), are not unique to London, but can also be found in other cities. This eigenvector has the highest level of globalization among the first 20 (with an $LIPR$ of 0.010373), indicating that such features may be associated with broader infrastructure elements, such as $\eta_8$ for education~(Fig.~\ref{fig:global}B) or $\eta_4$ for national defense~(Fig.~\ref{fig:global}C). This highlights the importance of considering the broader contextual factors that influence local patterns and structures in cities, beyond just their specific local features.
\section{Discussion}\label{sec12}
In this article, we applied diffusion maps to analyze the synchronized variations in the census responses of England and Wales. Our study represents a novel attempt to decompose the British census as a whole, not just in urban areas. The results of our work demonstrate the effectiveness of diffusion maps in uncovering the underlying social structures in bulk, publicly accessible data. Our method ranks the relative importance of different features by themes and highlights the continuity of social aspects, such as educational levels, in the form of continuous indices.
The complex nature of demographic features calls for an efficient and scalable data analysis approach that can handle multiple scales and themes. Manifold learning methods, such as diffusion maps, are ideal for this purpose as they focus on local structures while preserving global information. To adapt the diffusion maps method to the bulk census data of England and Wales, we developed several techniques, including the preservation of a limited number of correlations that ensures connectedness and the use of Spearman rank correlation to measure similarity in the high-dimensional census data and account for heterogeneity in the distribution of social variables.
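For concreteness, the following is a small-scale, illustrative sketch of the pipeline just described -- Spearman rank correlations between Output Areas, a $k$-nearest-neighbour affinity graph kept sparse while remaining connected, and an eigendecomposition of the resulting diffusion operator. The kernel, the value of $k$, and the normalisation are our own illustrative choices, not necessarily those of the Methods section.
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

def diffusion_map(X, k=10, n_components=20):
    # X: (n_OAs, n_census_variables) matrix of census responses (small-scale demo;
    # the full data set of 181,408 OAs would require sparse implementations)
    rho, _ = spearmanr(X, axis=1)              # OA-by-OA Spearman rank correlations
    np.fill_diagonal(rho, 0.0)
    W = np.zeros_like(rho)
    for i in range(rho.shape[0]):              # keep each OA's k strongest links
        nbrs = np.argsort(rho[i])[-k:]
        W[i, nbrs] = np.clip(rho[i, nbrs], 0.0, None)
    W = np.maximum(W, W.T)                     # symmetrise the affinity graph
    P = W / (W.sum(axis=1, keepdims=True) + 1e-12)   # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)[:n_components]
    return vals.real[order], vecs.real[:, order]     # eta_0, eta_1, ... as columns
\end{verbatim}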
Networks whose degree distributions are closer to a power law have robust cross-scale characteristics, and thus local metrics can better reflect the heterogeneous distribution of their data noise. Diffusion maps can capture nonlinear features and local metrics in the data, so a single model can be used to deal with large cross-scale problems, revealing both the general picture and local hotspots.
The diffusion mapping eigenvectors provide insights into the urban structures of England and Wales and their impact on the cross-scale behaviors of British society. Our method uses a heuristic definition of the $k$ nearest neighbor network to ensure that these characteristics are globally sensible and applicable to all areas, not just cities. Furthermore, the local inverse participation ratio is used to identify minority groups localized in big cities and to classify features as sublinear or superlinear urban indicators using only one input dataset.
The proposed local inverse participation ratio ($LIPR$) is a method for identifying and characterizing minorities in urban areas. The $LIPR$ measures the concentration of a given feature in a specific region compared to its distribution across the entire study area. It calculates the fraction of the total variation in a feature that is captured by a limited number of Output Areas. The $LIPR$ values of each feature allow us to categorize the features as either sublinear or superlinear urban indicators. Features with high $LIPR$ values are highly concentrated in one region and classified as superlinear indicators, while features with low $LIPR$ values are widely distributed and classified as sublinear indicators.
Our findings extend the existing knowledge that some urban indicators, such as the number of university students, can be infrastructural in some cities but are urban outputs in others. The $LIPR$ provides valuable insights into the distributional patterns of demographic features in urban areas and can reveal the unique social, economic, and cultural characteristics of highly concentrated minority groups and their relationship to the broader urban population.
|
{
"arxiv_id": "2302.08709",
"language": "en",
"timestamp": "2023-02-20T02:07:28",
"url": "https://arxiv.org/abs/2302.08709",
"yymm": "2302"
} | \section{Introduction}
Since the beginning of this century,
significant progress has been made in the area of Sentiment Analysis and Opinion Mining on processing opinionated documents. Recent years have seen a surge of interest in processing a particular type of opinionated document: persuasive documents. Work in this area is typically done under the umbrella of Argument Mining, in which the core task is to uncover the argumentative structure of a persuasive document. Specifically, the goal is to (1) identify the main claim, the claims, and the premises (i.e., supporting evidence) expressed in the given document, and (2) determine the relationships among them (e.g., identify which premises support which claim).
Work on argument mining has so far focused on processing legal text \cite{legal, legal09, walker-etal-2018-evidence}, persuasive student essays \cite{persing-ng-2016-end,stab-gurevych-2017-parsing}, and Oxford-style debates \cite{orbach-etal-2020-echo, Slonim2021}.
Although persuasive in nature, propagandistic articles (i.e., articles that aim to influence public opinion via disseminating biased and/or misleading information) have received relatively little attention in Natural Language Processing (NLP). This is somewhat surprising
given the growing prevalence of {\em computational propaganda}, an "emergent form of political manipulation that occurs over the Internet" \cite{book:computational18}.
From a research perspective, automatic processing of propaganda
presents a number of challenges to AI researchers:
\paragraph{Multimodality.} One characteristic that distinguishes propaganda content from other persuasive texts is that the former is often multimodal, comprising both text and images. As the saying goes, a picture is worth a thousand words. In multimodal propaganda, it is often the images that are most eye-catching and which create the biggest psychological impact on the reader. Although the text usually plays a supporting role, there are many cases where the image(s) could not be understood properly without the supporting text. How to combine the information derived from the two modalities to properly understand propaganda is an open question.
\vspace{-2mm}
\paragraph{Deep understanding of text and images.} Propaganda processing takes argument mining to the next level of complexity. As noted above, argument mining involves (1) extracting the claims and premises from the associated text and (2) identifying the relationships (e.g., support, attack) among them.
For the kind of texts that NLP researchers have focused on so far (e.g., legal text,
Oxford-style debates),
the claims and premises are typically clearly stated. In contrast, the main claims and possibly some other supporting claims in propagandistic articles are often intentionally omitted, so we are faced with the additional challenge of recovering these hidden messages. Moreover, while the arguments in legal texts, essays, and debates can largely be interpreted literally, we often have to read between the lines when interpreting the text and images in propaganda content. For example, when given a picture of Russian soldiers killing Ukrainian civilians, current Computer Vision (CV) technologies would be able to produce a caption about this killing event, but if this picture appears in propagandistic articles, we probably need to infer the motive behind this picture (e.g., gaining the world's sympathy and support for Ukraine), which is currently beyond the reach of today's technology.
\vspace{-2mm}
\paragraph{The need for background knowledge.} Historical or cultural background knowledge may be needed to properly process propaganda content. For instance, given a propagandistic article with a picture showing a Palestinian social unrest event in the West Bank, the author may want to instill fear among the Israelis. However, without the
knowledge of the long-standing conflict between the Palestinians and the Israelis, one may not be able to
understand the author's intent.
\vspace{-2mm}
\paragraph{Persuasion by deception.} As noted above, argument mining researchers have focused on processing legal text, essays, and debates, where virtually all claims are established using persuasion strategies like logos (i.e., through logical reasoning), pathos (i.e., through an emotional appeal), and ethos (i.e., through the speaker's authority or status). In contrast, the persuasion strategies used in propaganda are
more sophisticated, often involving logical fallacies and framing.
Automatic processing of propaganda content could have important societal ramifications. In many cases, people are not aware that they are being brainwashed by propaganda campaigns, and this could lead to life-threatening consequences. One of the most compelling examples would be the ISIS propaganda and recruitment in 2014 \cite{farwell2014media}, in which ISIS successfully recruited many people from all over the world, particularly those from the European Union, to go to Syria to serve as soldiers and sex slaves.
Within the U.S., propaganda is typically manifested in the form of political manipulation campaigns with the goal of swaying public opinion. In fact, political manipulation campaigns have doubled since 2017 \cite{bradshaw2018challenging}, and increased efforts of disinformation should be expected as the U.S.\ midterm elections draw near \cite{newKnowledge}.
In this paper, we present the novel task of multimodal propaganda processing, where the goal is to analyze propaganda content by extracting relevant information from the different modalities present in the content, identifying the persuasion devices and tactics that are used in different portions of the content, and eventually generating the message(s) being conveyed. We believe that time is ripe for AI researchers to work on this task. From a societal perspective, given the increasing influence that propaganda content has on our daily lives, it is more important than ever for us to be able to understand propagandistic articles. From an AI perspective, deep learning technologies have enabled revolutionary advances in machine understanding.
It is time to examine how robust these technologies are when applied to a task as challenging as multimodal propaganda processing.
\section{Examples}
In this section, we explain why multimodal propaganda processing is interesting and challenging via two examples.
\label{sec:en-guardia}
\subsection{Example 1}
\label{sec:ussr-us}
Figure \ref{fig:ussr-us} presents an advertisement published by the White Motor Company in 1965 aiming to establish the superiority of the American truck transportation road networks to their Soviet counterparts. The advertisement shows on the left side
the stone-surfaced roads being used for transportation in the USSR and the paucity of paved roads in the country, and it shows on the right side the
modern highways along with a map of the USA that is full of road networks.
The text states that (1) the distribution system by trucks
was only one of the examples where the USA was superior to the USSR,
and (2) the restrictions imposed by many states in the USA concerning truck sizes prevented motor truck transportation from operating to its full potential.
\begin{figure}[t]
\centering
\includegraphics[width=.475\textwidth]{new-exp2.pdf}
\caption{Some pages of the \textit{En Guardia} magazine.
Figure~2(b) covers two pages.}
\label{fig:en-guardia}
\vspace{-2mm}
\end{figure}
\vspace{-2mm}
\paragraph{Human perception.} Human readers need to possess certain knowledge in order to process the propaganda content in this
advertisement. First, they have to have some geographic knowledge to discover that the two maps are drawn at different scales, which makes the USA look larger than the USSR in terms of land mass. Second, they need to be aware of the oversimplifying language "SOVIET WAY - USA WAY", which implies that the Soviets adopted the American way of transportation.
No evidence was provided to substantiate this claim, however.
Third, they need to pay attention to the deceptive language in the text. While the advertisement contains a road map of the USSR and an estimated length of paved road in the country, the sources were never given and hence the information could be far from accurate.
Several propaganda devices and tactics are involved in this
advertisement
(see Section~3 for the list of devices and tactics). First,
the large font size associated with the title and the subtitle as well as the oversimplifying language in them signal the use of the \textit{Binary Reduction}
tactic, which employs the false-dilemma logical fallacy. Second, the use of language to depict that the Americans are superior in many ways (e.g., "Our higher standard of living", "for all the superiority of our American highways", etc.) signals the \textit{All-encompassing}
tactic, which is a sort of rhetoric that often appears as window dressing for a larger point. Third, the sentence "America has the advantage of maximum efficiency and economy" employs the \textit{Cultural Signaling}
tactic, calling on America's values of efficiency and success. Finally, the text gives "best estimates" of the USSR's road network, which is a case of the \textit{Card Stacking}
device, where only partial facts are used to defend a statement.
\vspace{-2mm}
\paragraph{Challenges.}
It is non-trivial
to automatically process the propaganda content in this advertisement.
The key challenges stem from the need to (1)
process multimodal information extracted from the images and the text;
(2)
exploit background knowledge to unveil hidden information;
(3) interpret the use of different font styles and text sizes to highlight specific pieces of information; and (4) understand the hidden information conveyed in the images (e.g., the difference in scaling between the two maps).
\subsection{Example 2}
During World War II, the U.S.\ propagandists
sought
support from the Latin Americas by publishing a high-quality Spanish periodical \textit{En Guardia}. Figure \ref{fig:en-guardia} shows six pages
from the first issue of
\textit{En Guardia}.
Each
page has its own propaganda messaging.
Figure \ref{fig:en-guardia}(a) is the cover of the magazine, which shows two naval ships moving fast in the ocean. The title "En Guardia" and the subtitle "Para la defensa de las Am\'{e}ricas" translate into "On Guard" and "For the defense of the Americas" respectively. Figure \ref{fig:en-guardia}(b)
shows a naval ship in the ocean with a scope pointing at the ship. The main points in the caption of \ref{fig:en-guardia}(b) translate into "The American navy must and will keep the seas
free, and will protect merchant ships against the danger of bombardment". Figure \ref{fig:en-guardia}(c) shows a merchant ship and discusses the importance of merchant ships in delivering goods and troops to all parts of the Americas. The boldfaced sub-caption translates into "America means 21 nations". Figure \ref{fig:en-guardia}(d) shows a ship while Figure \ref{fig:en-guardia}(e) focuses on the training and the sheer size of the U.S.\ Navy.
Most importantly, these pages need to be considered as a {\em sequence} in order to obtain the full
messaging, which is that "Maritime commerce in the Americas is under imminent threat, and protecting the oceans from the enemy is vital to western hemisphere interests. However, the U.S. navy has the best equipment and personnel to deal with such a threat."
\vspace{-2mm}
\paragraph{Human perception.} A human reader with the appropriate background would interpret these pages as follows. First, the cover points out the main theme of the magazine by using the eye-catching subtitle "For the defense of the Americas", which proposes a sense of shared identity and immediate danger. The warship depicted on the cover is moving fast in the ocean, as the water below it is splashing high, suggesting something is happening immediately. Second, Figure \ref{fig:en-guardia}(b) shows an exaggerated scaling of a gun scope on the left and
a ship that is being pointed to by the scope on the right. This would naturally take all the attention of the reader. Third, Figure \ref{fig:en-guardia}(c) repeats the sense of a shared identity by saying "America means 21 nations". Finally, Figures \ref{fig:en-guardia}(d) and \ref{fig:en-guardia}(e) show that the U.S.\ has the equipment and the personnel to deal with the danger threatening free commerce.
Next, we analyze the propaganda devices and tactics used in these pages. Figures \ref{fig:en-guardia}(a), \ref{fig:en-guardia}(b), and \ref{fig:en-guardia}(d) all use the \textit{Visual Scaling}
tactic, which is concerned with evoking emotional understanding (e.g., fear, power, etc.) by using images. These images also use the \textit{Card Stacking}
device, as they do not explicitly point out who is threatening the Americas. Figure \ref{fig:en-guardia}(c) uses two
devices: (1) \textit{Band Wagon}, which implies that all of the countries in the Western Hemisphere are a collective and should work together; and (2) \textit{Glittering Generalities}, where a "virtue word" (in this case, the impressiveness of U.S.\ merchant ships) is being used to create positive emotion and acceptance
(of the U.S.\ military involvement)
without examination of evidence.
\vspace{-2mm}
\paragraph{Challenges.} Automatically processing propaganda in this example is even harder than that in the first example
since proper understanding depends heavily on visual clues rather than textual information. For instance, a machine needs to understand that (1) in Figure \ref{fig:en-guardia}(b), the scope was enlarged to an exaggerated size and was pointing at the ship; (2) in Figure \ref{fig:en-guardia}(d), the picture was taken at an angle that makes the ship look substantially larger than
other objects in the background, with the intent of showing off
the military might of the U.S.; and (3)
these images need to be considered as a {\em sequence} in order to get the full messaging.
\section{Corpus Creation and Annotation}
\label{sec:3}
In this section, we outline the initial steps
needed to address the task of multimodal propaganda processing.
\subsection{Corpus Creation}
Given the recent advances in CV and NLP, we propose to approach this problem by building a machine learning (ML) system. Appropriately annotated corpora are critical to the successful application of any ML systems.
Since the goal of the multimodal propaganda processing task is to analyze propaganda content, we need data instances that correspond to examples of propaganda.
Since we do not have a system for automatically identifying propaganda content, it would be best for us to begin data collection by looking for websites or publications that are known to publish propagandistic materials. A possible source of historical propaganda would be the \textit{En Guardia} magazine described in the previous section. So far, we have applied OCR to every page of every issue of this magazine and have used these articles to assemble the first version of our corpus.
In order to assemble a corpus that contains contemporary propagandistic articles,
we propose to exploit the content published on some fact-checking websites. For example, Politifact\footnote{https://www.politifact.com/} verifies the accuracy of claims made by elected officials. Those claims that were marked as inaccurate would constitute a good set of candidates of propagandistic articles.
We can then manually go through these candidates to identify propaganda content. Similar websites include Full Fact\footnote{https://fullfact.org/}, FactCheck\footnote{https://www.factcheck.org/}, and Media Bias/Fact Check\footnote{https://mediabiasfactcheck.com/}.
\begin{figure*}[t]
\centering
\includegraphics[width=.9\textwidth]{tree2.pdf}
\caption{Sample argument tree for the input taken from Figure~2(b). Given this figure, a possible caption generated for Task 6 is "A ship is sailing on the sea, while a large gun scope is pointing at it". R1: A gun scope is pointing at a ship, which creates a sense of danger, hence \textit{Visual Scaling}. However, who is holding the gun is unclear (partial information), hence \textit{Card Stacking}. R2: A sense of \textit{Fear} created by the gun scope. R3: the image depicts a (potential) crime of bombardment, hence \textit{Enemy Atrocities}. R4: Use the idea of the U.S.\ protecting maritime commerce to justify other U.S.\ military involvements, hence \textit{Transfer}. R5: "La armada norteamericana ... proteger\'{a} a los buques mercantes" means "the American army will protect merchant vessels", which suggests a sense of \textit{Security}. R6: "Los océanos no son barreras, sino las amplias rutas del comercio mundial" means "the oceans are the routes of world trades", which focuses on \textit{Economic Interests}. R7: Since the image suggests threats to maritime trades, while the text suggests the U.S.\ will protect maritime trades, we can get the overall message by combining them.}
\label{fig:exp}
\vspace{-2mm}
\end{figure*}
\subsection{Annotation Tasks}
Next, we define a set of annotation tasks that we believe would be helpful for analyzing propaganda content. The annotations we obtain via these tasks will provide the data needed to train models for processing propaganda content.
\vspace{-2mm}
\paragraph{Task 1: Propaganda device and tactic detection and rationale generation.}
The first task concerns identifying the propaganda device(s) and tactic(s) used in propaganda content.
The "Seven Propaganda Devices" \cite{childs_1936, authorship}, a well-known propaganda theory, defines seven propaganda devices that represent the seven persuasion strategies commonly used in propaganda, including:
\textit{Band Wagon}, \textit{Card Stacking}, \textit{Glittering Generalities}, \textit{Name-Calling}, \textit{Plain Folks}, \textit{Testimonial}, and \textit{Transfer}.%
\footnote{Other propaganda theories can also be used.}
We identify the device(s) used in the text portion and the image portion of the multimodal input separately. Note that more than one device can be used for a given piece of text or image.
In addition, we extend our annotation scheme by including a set of propaganda {\em tactics}, which serve to underscore the methods of employing the devices. The set of
tactics we have identified include: \textit{Extremism}, \textit{All-encompassing}, \textit{Repetition}, \textit{Visual scaling}, \textit{Binary Reduction}, \textit{Cultural Signaling}, \textit{Prestige Signaling}, \textit{Pandering}, and \textit{Innuendo}.
Finally, we provide the rationale behind each device label and each tactic label we assign to the given propaganda content. A rationale is a natural language description of why the corresponding device/tactic label is assigned based on the information extracted from the input. As we will see, rationale generation could improve model interpretability.
\vspace{-2mm}
\paragraph{Task 2: Domain-independent message detection with rationales.} Inspired by existing propaganda theory regarding the content of an article \cite{Ellul1973-na,9780205067169}, we detect the types of the (possibly hidden) messages conveyed by the author in our second task. The messaging in this task is domain-independent and attempts to communicate a broad idea to provoke an emotional reaction. An initial set of message types we identified includes: \textit{Might}, \textit{Fear}, \textit{Inspiration}, \textit{Urgency}, \textit{Unity}, \textit{Teamwork}, \textit{Patriotism}, \textit{Superiority}, \textit{Abundance}, \textit{Reciprocity}, \textit{Sacrifice}, \textit{Masculinity}, \textit{Ingenuity}, \textit{Virtue}, \textit{Progress}, \textit{Security}, \textit{Reassurance}, \textit{Fun}, and \textit{Sameness}. We expect this list to grow as we identify additional types.
Multiple message types may be applicable to a given propaganda content. As in Task~1, in this task the rationale behind each annotated message will be annotated.
\vspace{-2mm}
\paragraph{Task 3: Domain-specific message detection with rationales.} Like Task 2, Task 3 also concerns identifying the types of messages conveyed by the author, but the message types in this task are domain-specific and therefore would need to be redesigned for each new domain. For wartime propaganda such as those that appear in \textit{En Guardia}, the message types could include: \textit{Military Strength}, \textit{Industrial Production}, \textit{US-Latin American Cooperation}, \textit{US culture}, \textit{US Leadership}, \textit{WW2}, \textit{Pan-Americanism}, \textit{War preparation}, \textit{Economic Interests}, \textit{Gendered messaging}, \textit{Civilian contributions}, and \textit{Common Culture}.
Again, multiple message types may be applicable to a given propaganda content.
As in Tasks~1 and~2, in this task the rationale behind each annotated message will be annotated.
\vspace{-2mm}
\paragraph{Task 4: Main message generation with rationales.} This task concerns generating the main message conveyed by the author in natural language. As in the first three tasks, here the rationale behind the main message will be annotated.
\vspace{-2mm}
\paragraph{Task 5: Background knowledge.} As noted before, background
knowledge may be needed to properly interpret propaganda content. The background knowledge needed will be annotated in the form of natural language sentences.
\vspace{-2mm}
\paragraph{Task 6: Image captioning.}
Existing image encoders may fail to encode all the details of an image, particularly when the image contains abstract concepts.
To mitigate the difficulty of accurately extracting information from images, we propose an auxiliary task, image captioning, where we annotate the information present in an image in natural language so that the resulting caption is an equivalent textual representation of the image.
With these annotations, we can train a model to first caption an image and use the resulting caption in lieu of the original image for further processing.
\subsection{Sample Annotation}
We propose to annotate each propagandistic article in the form of an {\em argument tree}, which is the representation used by argument mining researchers to represent the argumentative structure of a persuasive document \cite{stab-gurevych-2014-annotating}. In an argument tree, the root node corresponds to the main claim of the document, and each child of a node corresponds to a piece of supporting evidence (which can be a claim or a premise) for the parent. In other words, each edge denotes a support relation. A leaf node always corresponds to a premise, which by definition does not need any support.
To enable the reader to understand how to annotate a propagandistic article as an argument tree, we show in Figure~\ref{fig:exp} the argument tree that should be produced for input taken from Figure~2(b).
As we can see, the root node contains the main message (see Task~4). It has five children, which implies that it is supported by five pieces of evidence: the domain-independent and domain-specific messages derived from the image, the domain-independent and domain-specific messages derived from the text, and the rationale associated with the main message, which is also derived from the text. Each of the first four children has two child nodes, one corresponding to its rationale and the other corresponding to the device(s) and tactic(s) used. The fifth child, which is a rationale, is a leaf node. Note that a rationale always appears in a leaf node, the reason being that rationales are derived directly from either the image or the text (or both) and therefore do not need any support. The remaining nodes in the tree can be interpreted in a similar fashion. Note that the annotations for Tasks~1--5 will always appear as nodes in the tree.
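As a minimal, hypothetical sketch (the node kinds and field names below are our own, not a released data format), such an annotated argument tree could be represented as a simple recursive structure:
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArgNode:
    kind: str                  # "main_message", "message", "device_tactic", "rationale"
    text: str                  # label(s) or natural-language content
    children: List["ArgNode"] = field(default_factory=list)

    def add(self, child: "ArgNode") -> "ArgNode":
        self.children.append(child)
        return child

# Fragment loosely following the tree for Figure 2(b):
root = ArgNode("main_message",
               "Maritime commerce is threatened; the U.S. navy will protect it")
img_msg = root.add(ArgNode("message", "Fear (domain-independent, from image)"))
img_msg.add(ArgNode("rationale", "A gun scope is pointing at a ship"))
img_msg.add(ArgNode("device_tactic", "Visual Scaling; Card Stacking"))
\end{verbatim}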
\comment{
Next, we give some sample annotations for all four sub-tasks using Figure \ref{fig:ussr-us} and \ref{fig:en-guardia} as examples. The input format for sub-tasks 1-4 is essentially the same, which can be an image, the caption of an image, a paragraph of the article, or a sequence of a combination of images and paragraphs. For text input, we propose that the rich formatting of the text should be encoded as a part of the input.
\begin{itemize}
\item Sub-task 1: A model needs to detect both the boundaries and types of propaganda devices and tactics.
\item Sub-task 2: Similar to sub-task 1, the output of sub-task 2 consists of boundaries and types of the domain-independent message conveyed by the author.
\item Sub-task 3: The output for sub-task 3 is essentially the same as sub-task 2, except that now the model needs to output domain-specific types.
\item Sub-task 4: The output for sub-task 4 is natural language sentences describing the high-level message conveyed by the author.
\item Sub-task 6: The input for sub-task 5 is an image, while the output is the image transcription.
\end{itemize}
}
\section{Models}
Given a dataset annotated using our annotation scheme, we can train a model to perform the six annotation tasks. Given the recent successes of neural models in NLP, we propose to employ neural models for our task. As a first step, we can employ existing models and design new models for this task if needed. There are several considerations.
\vspace{-2mm}
\paragraph{Multimodal vs. unimodal models.}
Since our input is multimodal and composed of text and image(s), it would be natural to train a multimodal model assuming three inputs: two of them correspond to the two modalities and the remaining one encodes the background knowledge base
(assembled using the background knowledge annotated for each training instance, for example). The images can be encoded using a visual encoder such as ResNet \cite{He2016DeepRL} and ViLBERT \cite{Lu2019ViLBERTPT}, whereas the text inputs (including the background knowledge base) could be encoded using a neural encoder such as SpanBERT \cite{joshi-etal-2020-spanbert}. The outputs from the encoders can then be concatenated together for further processing.
Alternatively, one can employ a unimodal model where we caption the image first (Task~6) with the help of an object detection system (e.g., YOLO \cite{yolo}) and possibly an off-the-shelf image captioning system (e.g., X-Transformer \cite{Pan_2020_CVPR}, VinVL \cite{Zhang_2021_CVPR}). As noted before, the caption is supposed to be an equivalent textual representation of the corresponding image. The caption can then be encoded by a text encoder, and the resulting representation can be concatenated with the encoded outputs from the text side for further processing.
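The following is a hedged sketch of the multimodal variant with three inputs (image, article text, background knowledge) fused by concatenation; the encoder choices, dimensions, and label space are placeholders rather than a committed design.
\begin{verbatim}
import torch
import torch.nn as nn

class MultimodalPropagandaClassifier(nn.Module):
    def __init__(self, image_encoder, text_encoder, img_dim, txt_dim, n_labels):
        super().__init__()
        self.image_encoder = image_encoder   # e.g. a ResNet backbone -> (batch, img_dim)
        self.text_encoder = text_encoder     # e.g. a SpanBERT-style encoder -> (batch, txt_dim)
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + 2 * txt_dim, 512),
            nn.ReLU(),
            nn.Linear(512, n_labels),        # multi-label device/tactic logits
        )

    def forward(self, image, article_tokens, background_tokens):
        v = self.image_encoder(image)
        t = self.text_encoder(article_tokens)
        k = self.text_encoder(background_tokens)   # encoded background knowledge
        fused = torch.cat([v, t, k], dim=-1)        # simple late fusion by concatenation
        return self.classifier(fused)
\end{verbatim}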
\vspace{-2mm}
\paragraph{Joint vs. pipeline models.}
Should we adopt a pipeline architecture where we first train a model for each task independently of the others and then apply the resulting models in a pipeline fashion? For instance, given multimodal propagandistic articles, we can first train a model to caption the image (Task~6), as described above. After that, we can train a model to identify the device(s) and another model to identify the tactic(s) (Task 1). To improve model interpretability, the rationales can be predicted jointly with the corresponding device(s)/tactic(s).
We can similarly train models to predict the domain-independent and domain-specific labels (Tasks 2 and 3) jointly with their rationales by using
all of the available information predicted so far (e.g., the tactics and devices, the caption). Finally, we can train a model to predict the main message (Task 4).
Recall that pipeline models are prone to error propagation, where errors made by an upstream model will propagate to a downstream model. To mitigate error propagation, we can consider joint models. Specifically, we can train {\em one} model to perform all of the six tasks jointly. Joint models allow the different tasks involved to benefit each other via a shared input representation layer. The major downside of a joint model is that the resulting network (and hence the corresponding learning task) is typically very complex.
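A joint model could, for instance, share one fused representation across several task-specific heads; the sketch below is illustrative only, and the generation heads for Tasks 4--6 are omitted.
\begin{verbatim}
import torch.nn as nn

class JointPropagandaModel(nn.Module):
    def __init__(self, fusion_encoder, hidden, n_devices, n_tactics,
                 n_dom_indep, n_dom_spec):
        super().__init__()
        self.fusion_encoder = fusion_encoder               # shared multimodal representation
        self.device_head = nn.Linear(hidden, n_devices)    # Task 1 (devices)
        self.tactic_head = nn.Linear(hidden, n_tactics)    # Task 1 (tactics)
        self.indep_head = nn.Linear(hidden, n_dom_indep)   # Task 2
        self.spec_head = nn.Linear(hidden, n_dom_spec)     # Task 3
        # Tasks 4-6 (messages, rationales, captions) would use decoder heads instead.

    def forward(self, *inputs):
        h = self.fusion_encoder(*inputs)
        return {"devices": self.device_head(h),
                "tactics": self.tactic_head(h),
                "domain_independent": self.indep_head(h),
                "domain_specific": self.spec_head(h)}
\end{verbatim}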
\vspace{-2mm}
\paragraph{Pre-trained models.} A key challenge in the automatic processing of propaganda is the need for background knowledge. While we have access to background knowledge through the manual annotations obtained as part of Task~5, it is conceivable that the amount of background knowledge we need will far exceed what these annotations can provide. A potential solution to this background knowledge acquisition bottleneck is {\em pre-training}.
More specifically, researchers in NLP have shown that a vast amount of general knowledge about language, including both linguistic and commonsense knowledge, can be acquired by (pre-)training a language model in a {\em task-agnostic} manner using {\em self-supervised} learning tasks. Self-supervised learning tasks are NLP tasks for which the label associated with a training instance can be derived automatically from the text itself.%
\footnote{A well-known self-supervised learning task is Masked Language Modeling (MLM)~\cite{devlin2019bert}. Given a sequence of word tokens in which a certain percentage of tokens is {\em masked} randomly, the goal of MLM is to predict the masked tokens.}
Because no human annotation is needed, a language model can be pre-trained on a large amount of labeled data that can be automatically generated, thereby acquiring a potentially vast amount of knowledge about language. Many pre-trained language models have been developed and widely used in NLP, such as BERT \cite{devlin2019bert}, XLNet \cite{yang2019xlnet}, RoBERTa \cite{liu2019roberta}, ELECTRA \cite{clark2019electra}, GPT-2 \cite{gpt2}, T5 \cite{raffel2020t5}, and BART \cite{lewis2020bart}. These models have been shown to offer considerable improvements on a variety of NLP tasks.
To acquire the background knowledge needed for processing the articles in {\em En Guardia}, for instance, we can pre-train a language model on as many {\em unannotated} articles in {\em En Guardia} as we can collect. The resulting model can then be optimized for a specific task by {\em fine-tuning} its parameters using the task-specific labeled data we obtained via our annotation process in the standard supervised fashion. While there has been a lot of work on developing pre-trained language models, the development of {\em multimodal} pre-trained models that can understand both text and images, which is what we need for multimodal propaganda processing, is a relatively unexplored area of research.
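As a hedged illustration of this idea, domain-adaptive MLM pre-training on the OCRed \textit{En Guardia} text could look roughly as follows with the HuggingFace \texttt{transformers} library; the checkpoint, the hyperparameters, and the toy in-line dataset are placeholders, not our actual setup.
\begin{verbatim}
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# A multilingual checkpoint is a reasonable starting point for Spanish-language text.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# In practice this would be the OCR output of every unannotated En Guardia page.
texts = ["Para la defensa de las Americas ...", "La armada norteamericana ..."]
tokenized_dataset = [tokenizer(t, truncation=True, max_length=128) for t in texts]

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="en-guardia-mlm",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args, data_collator=collator,
                  train_dataset=tokenized_dataset)
trainer.train()
# The adapted model can then be fine-tuned on the task-specific annotations (Tasks 1-4).
\end{verbatim}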
\comment{
\vspace{-2mm}
\paragraph{Deep vs. shallow understanding.} Since our ultimate goal is to train a model to predict the hidden message conveyed in the propaganda content (Task~4), we can take a {\em shallow} approach, where we train a model that takes the multimodal input and the background knowledge base and directly predicts the hidden message without predicting the results of the intermediate tasks (Tasks~1--4). Unlike the deep understanding approach where all the intermediate tasks will be learned, the shallow approach will not be interpretable (since no rationales will be outputted) and hence it will not learn to reason with and exploit the information provided by the intermediate tasks when predicting the hidden message. Nevertheless, the shallow approach could be sufficient as far as predicting the hidden message is concerned.
}
\section{Related Work}
\paragraph{Memes.} Memes are user-created pictures, often accompanied by text, that are used to express opinions on all kinds of topics.
Similar to propaganda messaging, memes typically require
background knowledge for proper interpretation.
Memes are widely used in political manipulation campaigns as a tool for conveying propaganda messaging \cite{farwell2014media,fhms15,bradshaw2018challenging,newKnowledge}. Hence, unveiling hidden information from memes is highly related to processing propaganda messaging from images and text. There has been recent work that aims to build a model to detect a rich set of propaganda techniques in memes \cite{dimitrov-etal-2021-detecting}.
\vspace{-2mm}
\paragraph{Document-level unimodal misinformation analysis.} Several publicly-available datasets
are composed of news articles labeled with whether they contain misinformation.
For example, in the \texttt{TSHP-17} dataset \cite{rashkin-etal-2017-truth}, each article is labeled with one of four classes: \textit{trusted}, \textit{satire}, \textit{hoax}, and \textit{propaganda}, whereas in the \texttt{QProp} dataset \cite{10.1016/j.ipm.2019.03.005}, only two labels are used: \textit{propaganda} and \textit{non-propaganda}.
\citet{da-san-martino-etal-2019-fine}, on the other hand, develop a corpus of news articles
labeled with the propaganda techniques used. Their corpus enables the study of the multi-label, multi-class classification task in propaganda identification.
\vspace{-2mm}
\paragraph{Multimodal misinformation classification.}
Some researchers have examined the task of multimodal propaganda identification.
For instance, \citet{Volkova_Ayton_Arendt_Huang_Hutchinson_2019} construct a dataset consisting of 500,000 Twitter posts classified into six categories (\textit{disinformation}, \textit{propaganda}, \textit{hoaxes}, \textit{conspiracies}, \textit{clickbait}, and \textit{satire}) and build models to detect misleading information in images and text.
While this attempt seeks to perform a shallow analysis of tweets, we propose to perform a deep analysis of propaganda content, which would lead to the generation of the hidden messages conveyed.
\section{Concluding Remarks}
We presented the task of multimodal propaganda processing and discussed the key challenges.
Below we conclude with several other
issues that
are also relevant to the task.
\vspace{-2mm}
\paragraph{Propaganda identification.} While we have focused on analyzing propaganda content,
it is equally important to
identify such content. Although we did not explicitly discuss how such content can be identified,
a system that can analyze propaganda content could also be used for identifying such content. More specifically, if the system determines that no persuasion devices and tactics were being used in the given content, it could imply that the content is not propagandistic. Another possibility would be to train a model to distinguish propaganda content from non-propaganda content on our corpus of propaganda articles and other non-propaganda articles collected from the Internet.
\vspace{-2mm}
\paragraph{Domain transferability.}
Since the models described thus far are trained on domain-specific annotations (i.e., the background knowledge from Task~5 and the domain-specific labels from Task~4), they are necessarily domain-specific. To facilitate their application to a new domain, especially when labeled training data in the new domain is scarce, we can explore domain adaptation techniques.
\vspace{-2mm}
\paragraph{Ethical considerations.}
Care should be taken to ensure that propaganda processing technologies are not misused: for instance, a propaganda processing system could be used intentionally to draw wrong conclusions about political opponents, or to generate propaganda content that advances a personal agenda.
|
{
"arxiv_id": "2302.08693",
"language": "en",
"timestamp": "2023-02-20T02:06:57",
"url": "https://arxiv.org/abs/2302.08693",
"yymm": "2302"
} | \section{\bf Introduction} \label{pre}
Stochastic differential equations (SDEs) driven by Wiener noise and L\'{e}vy noise have important applications in finance and physics. It is well known that Brownian motion is an $\alpha$-stable L\'{e}vy process with $\alpha=2$. For one-dimensional stochastic processes, it is known that an $\alpha$-stable process converges in distribution to a Brownian motion as $\alpha\rightarrow2$ (see Sato \cite{Sato}). A natural question then arises: for an SDE driven by an $\alpha$-stable process, does its solution converge in distribution to that of an SDE driven by a Brownian motion?
Liu \cite{Liu} studied weak convergence for the solution of an SDE driven by a symmetric stable process, and the $\alpha$-continuity of the solution of an SDE driven by an $\alpha$-stable process as $\alpha$ tends to $\alpha^* \in (1,2]$, but the case of non-symmetric $\alpha$-stable processes is not well studied. In this paper, we study the weak convergence behavior as $\alpha \rightarrow 2$ of the solutions of SDEs driven by non-symmetric $\alpha$-stable processes in the Skorokhod space $D_{J_1}([0,T], \mathbb{R}^d)$ (the space $D([0,T], \mathbb{R}^d)$ equipped with the $J_1$-metric, see also \cite{Bill}).
Let $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\geq 0},\mathbb{P})$ be
a filtered probability space, which satisfies the usual conditions. Consider the following equations,
\begin{align}
dX^{\alpha,\beta} _t&=b(X^{\alpha,\beta} _t)dt+ \sigma (X^{\alpha,\beta} _{t-}) dL^{\alpha,\beta}_t,
\quad X^{\alpha,\beta}_0=x\in \mathbb{R}^d, \label{e:0} \\
dX _t&=b(X_t)dt+ \sqrt{2}\sigma (X_{t}) dB_t,\quad X_0=x, \label{e2}
\end{align}
where $b(x)=(b_1(x),\cdots,b_d(x))$ is a vector-valued function and $\sigma(x)=(\sigma_{ij}(x))_{d\times d}$ is a $d\times d$ matrix-valued function. For the assumptions on the drift and diffusion coefficients, see Assumption \ref{asp1} and Assumption \ref{asp2}. The noise $B_t=(B_t^1, \cdots, B_t^d)$ is a $d$-dimensional Brownian motion, and $L^{\alpha,\beta}_t=(L^{\alpha,\beta_1}_t,\cdots,L^{\alpha,\beta_d}_t)$ is a $d$-dimensional cylindrical non-symmetric $\alpha$-stable process. For each $i\in\{1,2,...,d\}$, $L^{\alpha,\beta_i}_t$ is a one-dimensional non-symmetric $\alpha$-stable L\'{e}vy process and $N^{\alpha,\beta_i}$ is an $\mathcal{F}_t$-Poisson random measure with intensity measure $\nu^{\alpha,\beta_i}(dz)dt$. The L\'{e}vy measure $\nu^{\alpha,\beta_i}$ on $\mathbb{R}$ is defined as
\begin{equation}\label{e1}
\nu^{\alpha,\beta_i}(dz)=\left\{
\begin{array}{rcl}
\frac{C_1^idz}{|z|^{1+\alpha}},&&{z > 0},\\
\ \\
\frac{C_2^i dz}{|z|^{1+\alpha}},&&{z < 0},
\end{array}\right.
\end{equation}
with $\beta_i\in [-1,1]$, \ $C_1^i=K_\alpha^i\frac{1+\beta_i}{2}$ and $C_2^i=K_\alpha^i\frac{1-\beta_i}{2}$, where
\begin{equation}\nonumber
K_\alpha^i=\left\{
\begin{array}{rcl}
\frac{\alpha(1-\alpha)}{\Gamma(2-\alpha)\cos(\frac{\pi\alpha}{2})},&&{\alpha \neq 1},\\
\ \\
\frac{2}{\pi}\ ,&&{\alpha=1}.
\end{array}\right.
\end{equation}
In particular, $\nu^{\alpha,\beta_i}$ is said to be symmetric if $\beta_i = 0$.
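As a side remark (a quick consistency check of the normalization, not part of the original argument), the choice of $K_\alpha^i$ explains the factor $\sqrt{2}$ in the limit equation \eqref{e2}: since $(2-\alpha)\Gamma(2-\alpha)=\Gamma(3-\alpha)\rightarrow 1$ and $\cos(\tfrac{\pi\alpha}{2})\rightarrow -1$ as $\alpha\rightarrow 2$, we have
\begin{equation*}
\int_{|z|\leq 1} z^2\, \nu^{\alpha,\beta_i}(dz)
= (C_1^i+C_2^i)\int_0^1 z^{1-\alpha}\,dz
= \frac{K_\alpha^i}{2-\alpha} \longrightarrow 2, \qquad \alpha\rightarrow 2,
\end{equation*}
so the small-jump part of $L^{\alpha,\beta_i}$ has limiting variance $2$ per unit time, independently of $\beta_i$, matching the $\sqrt{2}B_t$ term in \eqref{e2}.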
Furthermore, the compensated Poisson random measure $\tilde{N}^{\alpha,\beta_i}$ is defined
by
\[
\tilde{N}^{\alpha,\beta_i}(dt,dz):=N^{\alpha,\beta_i}(dt,dz)- \nu^{\alpha,\beta_i}(dz)dt.
\]
According to
L\'{e}vy-It\^{o}'s decomposition, we have
\begin{equation*}\label{levy-ito-1}
L^{\alpha,\beta_i}_t = \int_0^t \int_{|z|\leq1} z
\tilde{N}^{\alpha,\beta_i}(ds,dz) + \int_0^t \int_{|z|>1} z
N^{\alpha,\beta_i}(ds,dz),\quad \alpha \in (1,2).
\end{equation*}
For every $i\in\{1,2,...,d\}$,
\begin{equation*}\label{e1-eqv}
dX^{\alpha,\beta_i} _t=b_i(X^{\alpha,\beta}_t)dt+\sum_{j=1}^d\int_{|z_j|\leq 1}\sigma_{ij}
(X^{\alpha,\beta}_{t-}) z_j \tilde{N}^{\alpha,\beta_j}(dt,dz_j) +\sum_{j=1}^d \int_{|z_j|>1}\sigma_{ij}(X^{\alpha,\beta}_{t-}) z_j
N^{\alpha,\beta_j}(dt,dz_j),
\end{equation*}
is the $i$th component of SDE \eqref{e:0}.
We will prove that the solution of Equation \eqref{e:0}, whose noise is a non-symmetric $\alpha$-stable process, converges weakly to the solution of Equation \eqref{e2} in the Skorokhod space (see Section 3); the proof relies on Aldous' criterion. To this purpose we need to establish the tightness of the approximating solutions, which can be obtained from the a priori estimate
$ \sup\limits_{\alpha\geq\alpha_0} \EX [\sup\limits_{0\leq t\leq T} |X^{
\alpha,\beta}_t|^\theta] <\infty. $
Secondly, by martingale characterization, we show that the limit (in the sense of distributions) of the solution of the approximating equation is the solution of an SDE driven by a Brownian motion.
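Before turning to the formal argument, the following rough numerical sketch (an illustration only, not part of the proof) simulates a scalar instance of \eqref{e:0} and \eqref{e2} with an Euler--Maruyama scheme. It assumes that \texttt{scipy.stats.levy\_stable}, with stability $\alpha$ and skewness $\beta$, is an acceptable stand-in for the noise $L^{\alpha,\beta}$ up to normalization, and the drift and diffusion coefficients below are hypothetical choices (a dissipative drift and a Lipschitz, bounded diffusion).
\begin{verbatim}
# Illustration only (not part of the proof): Euler scheme for a scalar instance of
#   dX = b(X) dt + sigma(X-) dL^{alpha,beta}   versus   dX = b(X) dt + sqrt(2) sigma(X) dB.
# Assumes scipy's levy_stable (stability alpha, skewness beta) approximates
# L^{alpha,beta} up to normalization; b and sigma are hypothetical choices.
import numpy as np
from scipy.stats import levy_stable

def b(x):      return -x                     # dissipative drift
def sigma(x):  return 1.0 + 0.1*np.cos(x)    # Lipschitz, bounded diffusion

def euler_stable(alpha, beta, x0=1.0, T=1.0, n=2000, rng=None):
    dt, x = T/n, x0
    dL = levy_stable.rvs(alpha, beta, scale=dt**(1.0/alpha), size=n, random_state=rng)
    for k in range(n):
        x = x + b(x)*dt + sigma(x)*dL[k]
    return x

def euler_bm(x0=1.0, T=1.0, n=2000, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    dt, x = T/n, x0
    dB = rng.normal(scale=np.sqrt(dt), size=n)
    for k in range(n):
        x = x + b(x)*dt + np.sqrt(2.0)*sigma(x)*dB[k]
    return x

rng = np.random.default_rng(0)
for alpha in (1.7, 1.9, 1.99):
    xs = [euler_stable(alpha, beta=0.5, rng=rng) for _ in range(2000)]
    print(alpha, np.mean(np.abs(xs)))        # estimate of E|X_T^{alpha,beta}|
print("BM:", np.mean(np.abs([euler_bm(rng=rng) for _ in range(2000)])))
\end{verbatim}
As $\alpha$ approaches 2, the printed statistics for the stable-driven scheme should approach the Brownian-driven value, consistent with the convergence described above.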
While the weak convergence is established for SDEs driven by non-symmetric $\alpha$-stable processes in the Skorokhod space, the rate of convergence is not easily derived. To our knowledge, many works study the convergence rate of the solution of an SDE in settings where
both the approximating equation and the limit equation carry the same type of noise (see, for instance, \cite{rate} and \cite{rate1}). Here we derive
explicit error bounds on the difference between the solution of \eqref{e:0} and the solution of the limit equation \eqref{e2}, where the noises appearing in the two equations are
of entirely different types. To this
end, we introduce the backward Kolmogorov equation associated with the solution of SDE \eqref{e2} and use it to construct
an evolution equation that describes the weak error. Furthermore, we need to ensure that the solution of SDE \eqref{e:0} converges weakly to the solution of SDE \eqref{e2} under Assumption \ref{asp2}. For simplicity, we only establish the uniform estimates on the solution of the approximating equation (see Lemma \ref{lem4.2}); the corresponding proofs of tightness and of the martingale characterization are omitted. Finally, to illustrate that the estimate \eqref{wr} is optimal, we present a special case in Example \ref{example}.
The organization of this paper is given as follows. We introduce some notations, basic settings and state the main results in Section 2. Section 3 contains the proof of the main results. Section 4 is devoted to the rate of weak convergence and an
illustrative example.
\section{\bf Notation and main results}
Denote $H=[0,T]\times \mathbb{R}^d,\ \mathbb{N}=\{0, 1, 2, ...\}$. For $(t,x)$ in $H$, $\eta=(\eta_1, \cdots, \eta_d)$ in $(\mathbb{N}^+)^d$ is a multi-index, $|\eta|=\eta_1 +\cdots+ \eta_d$, and $D^{\eta}
=\frac{\partial^{|\eta|}}{\partial x_1^{\eta_1}\cdots\partial x_d^{\eta_d}}$. We denote
\begin{align}\nonumber
&\partial_t u(t,x)=\frac{\partial}{\partial t}u(t,x), \ \ \ \ \ D^ku(t,x)=(D^{\eta}u(t,x))_{|\eta|=k}, k\in\mathbb{N},\\ \nonumber
&\partial_i u(t,x)=u_{x_i}(t,x), \ \ \ \ \ \partial^2_{ij} u(t,x)=u_{x_i x_j}(t,x), \ \ i, j \in \{1,\cdots, d\} \\ \nonumber
&\partial_xu(t,x)=\nabla u(t,x)=\nabla_x u(t,x)=(\partial_1 u(t,x),\cdots, \partial_du(t,x)). \nonumber
\end{align}
Let $\kappa \in \mathbb{N}^+$ and $0< \gamma \le 1$. Let $ C^{\kappa, \gamma}(H)$ denote the space of measurable functions on $H$ such that the derivatives $D_x^\eta u(t,x)$ are continuous with respect to $(t,x)$ for all $1\le|\eta|\le\kappa$ and are locally $\gamma$-H\"{o}lder continuous with respect to $x$ for $|\eta|=\kappa$. We denote its norm by
\begin{align} \nonumber
\|u\|_{\kappa,\gamma}:=\sum_{ |\eta|\le \kappa } |D_x^\eta u(t,x)|_0+\sup_{|\eta|=\kappa ,
\atop t,\ x \neq x^1 } \frac{|D_x^\eta u(t,x)- D_x^\eta u(t,x^1)|}{|x-x^1|^{\gamma}}, \nonumber
\end{align}
where $|u|_0 = \sup_{(t,x)\in H}|u(t,x)|$. We denote by $C^{\kappa, \gamma}(\mathbb{R}^d)$ the corresponding function space on $\mathbb{R}^d$.
The letter $C=C(\cdot, ... , \cdot)$ denotes a constant which depends only on the quantities appearing in parentheses. In a given context, the same letter is (generally) used to denote different constants depending on the same set of arguments. Moreover, all constants $C$ in this paper are independent of $\alpha$, although they may depend on $\beta$.
We denote by $H(f)$ the Hessian matrix of $f$ and let $H_x(u(t,x))$ be the Hessian matrix of $u$ with respect to $x$. Let $\sigma^*$ be the transpose of $\sigma$, and set
\begin{equation}\nonumber
\sigma=(\sigma_1,\cdots, \sigma_d),\ \ \sigma_i=(\sigma_{1i},\cdots,\sigma_{di})^* .
\end{equation}
We use the Einstein summation convention throughout this paper; for example, if $x,y \in \mathbb{R}^d$, then
\begin{equation}\nonumber
x y =\sum_{i=1}^d x_iy_i.
\end{equation}
Now, we make some assumptions on the drift and diffusion coefficients.
\begin{assumption} \label{asp1}
(a) There is a constant $K_{\sigma}$ such
that for every $x,y \in \mathbb{R}^d$,
\begin{align*}
\|\sigma(x)-\sigma(y)\|_{HS}&\leq K_\sigma|x-y|,
\end{align*}
where $||\cdot||_{\text{HS}}$ denotes the Hilbert-Schmidt norm of a
matrix.
(b) There exists a positive number $\delta_0$ such that for all $R>0$ and all $x,y\in\mathbb{R}^d$ with $\max{\{|y|,|x|\}}\le R$
and $|y-x|\le\delta_0$, we have
\begin{equation}\nonumber
\langle y-x , b(y)-b(x) \rangle \le K_{b} |y-x|^2, \nonumber
\end{equation}
where $K_b$ is a positive constant. Also, there exists a constant $C>0$ such that for all $x\in\mathbb{R}^d$,
\begin{equation}\nonumber
|b(x)|\le C(1+|x|^r), \nonumber
\end{equation}
where $r\in(0,+\infty)$.
\end{assumption}
Observe that, in view of the above assumptions, the drift $b$ is dissipative and locally bounded. The second item ensures the tightness of the approximating solutions (see Lemma \ref{lem2.4}), similarly to \cite{SGI}. Under these assumptions we have the following main result, which is proved in Section 3.
\begin{theorem}\label{thm1}
Suppose Assumption \ref{asp1} is satisfied, and let $X_t^{\alpha,\beta}$ be the solution of SDE \eqref{e:0}. Then there
exists a measure $\mu$ defined on $D([0,T],\mathbb{R}^d)$ such that
\begin{equation}\label{main1} \nonumber
\mathcal{L} (X^{\alpha,\beta}) \rightarrow \mu \quad \text{as} \quad
\alpha\rightarrow 2,
\end{equation}
in $D_{J_1}([0,T],\mathbb{R}^d)$. Moreover, $\mu$ is the law of
solution to SDE \eqref{e2}.
\end{theorem}
Note that this theorem differs from \cite[Theorem 1.1]{Liu}, where the noise is a symmetric, rotationally invariant $\alpha$-stable L\'{e}vy process and the drift coefficient satisfies a global Lipschitz condition, whereas in our setting the noise is a cylindrical non-symmetric $\alpha$-stable L\'{e}vy process and the drift is non-Lipschitz. In order to investigate the convergence rate, we need to impose the following assumptions on the coefficients $b$ and $\sigma$.
\begin{assumption}\label{asp2}
Let $0< \gamma \le 1$, $b_i(x) \in {C}^{2, \gamma}(\mathbb{R}^d)$, $\sigma_{ij}(x) \in {C}^{2, \gamma}(\mathbb{R}^d)$, for all $1\le i,j \le d.$
\end{assumption}
The above assumptions are similar to the conditions imposed on the drift and jump coefficients in \cite{rate}. They ensure existence and uniqueness for the backward Kolmogorov equation, which is used to derive the rate of weak convergence. We state the following theorem.
\begin{theorem} \label{thm2}
Suppose Assumption \ref{asp2} is satisfied, and let $X_t$ be the solution of SDE \eqref{e2}. Then for each $f\in {C}^{2,\gamma}(\mathbb{R}^d)$, we have
\begin{equation} \label{wr}
\EX[f(X_t^{\alpha,\beta})-f(X_t)]\le C(2-\alpha).
\end{equation}
\end{theorem}
This is our second main result; its proof is carried out in Section \ref{Weak Convergence}.
\section{\bf Proof of Theorem 2.1 } \label{proof1}
This section is devoted to the proof of Theorem \ref{thm1}. First, we state an existence and uniqueness result for SDE \eqref{e:0}. Then, by using uniform estimates of the approximating solution (Lemma \ref{lem2.3}) we prove the tightness of the approximating equation (Lemma \ref{lem2.4}). Finally, by martingale characterization, we show that the limit of the solution of the approximating equation is the solution of an SDE driven by a Brownian motion. Now we are ready to state that SDE \eqref{e:0} has a unique strong solution.
\begin{lemma}\label{lem2.1}
If Assumption \ref{asp1} is satisfied, then SDE \eqref{e:0}
admits a unique strong solution.
\end{lemma}
The proof of this result is similar to \cite[Corollary 2.9]{zhu}, where a standard truncated method is employed to show that the stochastic differential equation has a unique strong solution, and hence is omitted here. In this article, the following lemma plays a crucial role.
\begin{lemma}\label{lem2.2}
For every $0<\delta\le 1$, $l \in (1,2)$,
$\vartheta \in [1,l)$, and $\nu^{\alpha,\beta}$ defined as in \eqref{e1}, we have
\begin{equation} \label{eq3.1}
\sup_{\alpha\geq l} \int_{ |z|\leq \delta} |z|^{2}
\nu^{\alpha,\beta}(dz)+\sup_{\alpha\geq l}
\int_{ |z|> \delta} |z|^\vartheta \nu^{\alpha,\beta}(dz) = C<\infty
\end{equation}
and
\begin{equation} \label{eq3.2}
\lim_{\alpha \rightarrow 2}\int_{ |z|\leq \delta}|z|^2
\nu^{\alpha,\beta}(dz)=\lim_{\alpha \rightarrow 2}\frac{
K_\alpha}{2-\alpha}= 2.
\end{equation}
For every $\delta>0$ and $ \vartheta <2$, we have
\begin{equation} \label{eq3.3}
\lim_{\alpha \rightarrow 2}\int_{ |z|>\delta}|z|^\vartheta
\nu^{\alpha,\beta}(dz) =0.
\end{equation}
For every $\alpha$ in $(0,2)$, we have
\begin{equation} \label{eq3.4}
\lim_{\delta\rightarrow 0+}\delta^{\alpha-2}\int_{ |z|\leq \delta} |z|^{2}
\nu^{\alpha,\beta}(dz)=c <\infty, \quad \lim_{\delta\rightarrow 0+}\int_{ |z|\leq \delta} |z|^{2}
\nu^{\alpha,\beta}(dz)=0.
\end{equation}
Moreover, for $\alpha\in(\frac{3}{2}, 2)$, we have
\begin{equation} \label{eq3.5}
\Big|\int_{|z|\le\delta} |z|^2 \nu^{\alpha,\beta}(dz) - 2\Big| + \int_{|z|>\delta} |z| \nu^{\alpha,\beta}(dz) \le C(2-\alpha).
\end{equation}
\end{lemma}
We give a proof for this result in Section \ref{Appendix}. Next, we derive a uniform estimate in the following lemma.
\begin{lemma}\label{lem2.3}
If Assumption \ref{asp1} is satisfied, then
\begin{equation}
\sup_{\alpha\geq\alpha_0} \EX [\sup_{0\leq t\leq T} |X^{
\alpha,\beta}_t|^\theta] <\infty, \label{solution0}
\end{equation}
where $ \theta \in (0,1)$ and $\alpha_0 \in (\frac{3}{2} , 2)$.
\end{lemma}
\begin{proof} Let $f(x)=(1+|x|^2)^\frac{1}{2}$ and let $\delta>0$.
By It\^{o}'s formula, we have
\begin{align}\label{ito formula}
df(X^{\alpha,\beta}_t)&=\quad b(X^{\alpha,\beta}_t)\cdot \nabla f(X^{\alpha,\beta}_t)dt \nonumber \\
&\quad+\sum_{i=1}^{d}\int_{ |z_i|>\delta}[f(X^{\alpha,\beta}_{t-}+
\sigma_i(X^{\alpha,\beta}_{t-})z_i)-f(X^{\alpha,\beta}_{t-})]
\nu^{\alpha,\beta_i}(dz_i)dt\nonumber \\
&\quad +\sum_{i=1}^d\int_{|z_i|\leq \delta}[f(X^{\alpha,\beta}_{t-}+
\sigma_i(X^{\alpha,\beta}_{t-})z_i)-f(X^{\alpha,\beta}_{t-})\nonumber \\
&\quad\quad\quad\quad\quad\quad\quad- \sigma_i(X^{\alpha,\beta}_{t-})z_i\cdot
\nabla f(X^{\alpha,\beta}_{t-})] \nu^{\alpha,\beta_i}(dz_i)dt\nonumber \\
&\quad-\sum_{i=1}^d\int_{\delta<|z_i|\leq 1}\sigma_i(X^{\alpha,\beta}_{t-})z_i\cdot
\nabla f(X^{\alpha,\beta}_{t-}) \nu^{\alpha,\beta_i}(dz_i)dt\nonumber \\
&\quad \ +\sum_{i=1}^d\int_{ \mathbb{R}-\{0\}}[f(X^{\alpha,\beta}_{t-}+
\sigma_i(X^{\alpha,\beta}_{t-})z_i)-f(X^{\alpha,\beta}_{t-})]
\tilde{N}^{\alpha,\beta_i}(dt,dz_i)\nonumber \\
&:= I_1(X_t^{\alpha,\beta})dt+I_2(X_t^{\alpha,\beta})dt+I_3(X_t^{\alpha,\beta})dt+I_4(X_t^{\alpha,\beta})+dM_t.
\end{align}
For $I_1(X_t^{\alpha,\beta})$, we have
\begin{equation} \label{I1}
|I_1(X_t^{\alpha,\beta})|\le |b_i(X_t^{\alpha,\beta})\partial^i f(X_t^{\alpha,\beta})|=\frac{|\langle X_t^{\alpha,\beta}, b(X_t^{\alpha,\beta})\rangle|}{(1+|X_t^{\alpha,\beta}|^2)^{\frac{1}{2}}}\le C f(X_t^{\alpha,\beta}).
\end{equation}
We observe that, since $|\nabla f|\le 1$,
\begin{align*}
|f(x+y)-f(x)|\le|y|\int_0^1|\nabla f(x+sy)|ds\le |y|.
\end{align*}
Using Lemma \ref{lem2.2} and the above inequality, we deduce that
\begin{align} \label{I2}
|I_2(X_t^{\alpha,\beta})|&\le\sum_{i=1}^{d}\int_{|z_i|>\delta}|f(X_{t-}^{\alpha,\beta} +\sigma_i(X_{t-}^{\alpha,\beta})z_i)-f(X_{t-}^{\alpha,\beta})|\nu^{\alpha,\beta_i}(dz_i) \nonumber \\
&\le C\sum_{i=1}^{d}\int_{|z_i|>\delta}|\sigma_i(X_{t-}^{\alpha,\beta})z_i|\nu^{\alpha,\beta_i}(dz_i)\nonumber \\
&\le C \sum_{i=1}^{d}\int_{|z_i|>\delta}|z_i|\nu^{\alpha,\beta_i}(dz_i)(|X_t^{\alpha,\beta}|+1)\nonumber \\
&\le C (f(X_t^{\alpha,\beta})+1) .
\end{align}
For $I_3(X_{t}^{\alpha,\beta})$, we can choose a sufficiently small $\delta>0$ such that for all $|z_i|\le\delta$,
\begin{align*}
&\quad|f(X^{\alpha,\beta}_{t-}+
\sigma_i(X^{\alpha,\beta}_{t-})z_i)-f(X^{\alpha,\beta}_{t-})
- \sigma_i(X^{\alpha,\beta}_{t-})z_i\cdot
\nabla f(X^{\alpha,\beta}_{t-})|\\
&=|\int_0^1dr \int_0^r (\sigma_i(X_{t-}^{\alpha,\beta})z_i)H(f)(X^{\alpha,\beta}_{t-}+s\sigma_i(X_{t-}^{\alpha,\beta})z_i) \cdot(\sigma_i(X_{t-}^{\alpha,\beta})z_i)ds|\\
&\le \sup_{s\le1}\frac{C|\sigma_i(X_{t-}^{\alpha,\beta})z_i|^2}{(1+|X_{t-}^{\alpha,\beta} +s\sigma_i(X_{t-}^{\alpha,\beta})z_i|^2)^{\frac{1}{2}}}\\ \nonumber
&\le \frac{C|\sigma_i(X_{t-}^{\alpha,\beta})z_i|^2}{(1+|X_{t-}^{\alpha,\beta}|^2)^{\frac{1}{2}}}.
\end{align*}
In the last step, we used the fact that
\begin{align*}
&\quad\frac{1}{1+|X_{t-}^{\alpha,\beta} +s\sigma_i(X_{t-}^{\alpha,\beta})z_i|^2}\\
&\le \frac{1}{1+|\frac{X_{t-}^{\alpha,\beta}}{2}+2s\sigma_i(X_{t-}^{\alpha,\beta})z_i|^2+\frac{3|X_{t-}^{\alpha,\beta}|^2}{4}-3|s\sigma_i(X_{t-}^{\alpha,\beta})z_i|^2}\\
&\le \frac{C}{1+|X_{t-}^{\alpha,\beta}|^2}.
\end{align*}
Thus, we have
\begin{align} \label{I3}
&|I_3(X_{t}^{\alpha,\beta})|\le C\sum_{i=1}^d\int_{|z_i|\le \delta} \frac{|\sigma_i(X_{t-}^{\alpha,\beta})z_i|^2}{(1+|X_{t-}^{\alpha,\beta}|^2)^{\frac{1}{2}}}\nu^{\alpha,\beta_i}(dz_i) \nonumber \\
&\le C \sum_{i=1}^d\int_{|z_i|\le \delta} |z_i|^2\nu^{\alpha,\beta_i}(dz_i)(1+|X_{t-}^{\alpha,\beta}|^2)^{\frac{1}{2}}\nonumber \\
&\le C f(X_{t}^{\alpha,\beta}).
\end{align}
For $I_4(X_{t}^{\alpha,\beta})$, we have
\begin{align}\label{I4}
|I_4(X_{t}^{\alpha,\beta})|&\le \sum_{i=1}^d \int_{\delta \le |z_i| < 1} |\sigma_i(X^{\alpha,\beta}_{t-})z_i\cdot
\nabla f(X^{\alpha,\beta}_{t-})| \nu^{\alpha,\beta_i}(dz_i)\nonumber \\
&\le C \sum_{i=1}^d \int_{|z|>\delta} |z_i|f(X^{\alpha,\beta}_{t-}) \nu^{\alpha,\beta_i}(dz_i) \le Cf(X^{\alpha,\beta}_{t-}).
\end{align}
From \eqref{ito formula}-\eqref{I4}, we get
\begin{equation}
f(X_t^{\alpha,\beta})-f(x_0)\le C\int_0^t f(X_s^{\alpha,\beta})ds+C+M_t. \nonumber
\end{equation}
By using the stochastic Gronwall inequality of Xie and Zhang \cite[Lemma 3.8]{SGI}, for any $1<q<\infty$ we obtain
\begin{align*}
\EX \sup_{0\le t \le T}|X_t^{\alpha,\beta}|^\frac{1}{q}
&\le \EX \sup_{0\le t \le T}{f(X_t^{\alpha,\beta})}^\frac{1}{q}\\
&\le C(q,T)<\infty.
\end{align*}
Since the inequality holds for all $q\in(1,\infty)$, we have
\begin{equation}\label{solution0} \nonumber
\sup_{\alpha\geq\alpha_0} \EX [\sup_{0\leq t\leq T} |X^{
\alpha,\beta}_t|^\theta] <\infty, \nonumber
\end{equation}
where $\theta\in(0,1)$.\qed\end{proof}\bigskip
This uniform estimate is used below to establish the tightness of the approximating solutions.
\begin{lemma}\label{lem2.4}
If Assumption \ref{asp1} is satisfied, then the family
$\{X^{\alpha,\beta}\}_{\alpha \geq \alpha_0} $ is tight in the space
$D_{J_1}([0,T],\mathbb{R}^d)$.
\end{lemma}
\begin{proof}
According to Aldous' criterion \cite[Theorem 1.1]{Aldous}, it suffices to prove the following two items:
\((1)\) For every $\varepsilon >0$, there exists $L_\varepsilon$ such that
\begin{equation} \label{Aldous1}
\sup_{\alpha \geq \alpha_0}\PX(\sup_{0\leq t\leq T}|X^{\alpha,\beta}_t| >L_\varepsilon) \leq
\varepsilon.
\end{equation}
\((2)\) For every stopping time $\tau^\alpha \in [0,T]$, and every $K>0$, we
have
\begin{equation}\label{Aldous2}
\lim_{\varepsilon\rightarrow 0}
\sup_{\alpha \geq
\alpha_0}\PX(|X^{\alpha,\beta}_{T\wedge(\tau^\alpha+\varepsilon)}-X^{\alpha,\beta}_{\tau^\alpha}|
>K) =0.
\end{equation}
By using Chebyshev's inequality, we have
\begin{align}\label{Che}
\sup_{\alpha \geq \alpha_0}\PX(\sup_{0\leq t\leq T}|X^{\alpha,\beta}_t| >L) \leq\frac{1}{L^\theta} \sup_{\alpha\geq\alpha_0} \EX(\sup_{0\le t\le T}|X_t^{\alpha,\beta}|^{\theta})\le\frac{C}{L^\theta},
\end{align}
where $\theta \in (0, 1)$, and $\alpha_0 \in (\frac{3}{2}, 2)$. This immediately implies the estimate \eqref{Aldous1}.
For notational convenience, we set
$\tau^\alpha+\varepsilon:=T\wedge(\tau^\alpha+\varepsilon)$ and
obtain that
\begin{align*}
\quad\sup_{\alpha \geq \alpha_0}\PX(|X^{\alpha,\beta}_{\tau^\alpha+\varepsilon}-X^{\alpha,\beta}_{\tau^\alpha}|
>K)& \leq \sup_{\alpha \geq
\alpha_0}\PX(|\int_{\tau^\alpha}^{\tau^\alpha+\varepsilon}
b(X^{\alpha,\beta}_s)ds|>\frac{K}{3})\\
&\quad+ \sup_{\alpha \geq
\alpha_0}\PX(|\int_{\tau^\alpha}^{\tau^\alpha+\varepsilon}
\int_{ |z|\leq 1} z\sigma(X^{\alpha,\beta}_{s-}) \tilde{N}^{\alpha,\beta}(ds,dz)|>\frac{K}{3})\\
&\quad+ \sup_{\alpha \geq
\alpha_0}\PX(|\int_{\tau^\alpha}^{\tau^\alpha+\varepsilon}
\int_{ |z|>1} z\sigma(X^{\alpha,\beta}_{s-}) N^{\alpha,\beta}(ds,dz)|>\frac{K}{3})\\
&:= K_1+K_2+K_3.
\end{align*}
Let $p \in(0,1)$. For $K_1$, by using the inequality $(a+b)^p \le a^p + b^p$ and Lemma \ref{lem2.3}, we have
\begin{align*}
K_1
&\leq \sup_{\alpha \geq
\alpha_0}\PX(|\int_{\tau^\alpha}^{\tau^\alpha+\varepsilon}
C(1+|X^{\alpha,\beta}_s|^r)ds|>\frac{K}{3}) \\
&\leq \sup_{\alpha \geq
\alpha_0}\frac{C}{K^{\frac{1}{r+1}}}\EX|\int_{\tau^\alpha}^{\tau^\alpha+\varepsilon}
(1+|X^{\alpha,\beta}_s|^r)ds|^{\frac{1}{r+1}} \\
&\leq\frac{C\varepsilon^{\frac{1}{r+1}}}{K^{\frac{1}{r +1}}} \sup_{\alpha \geq
\alpha_0} \EX (1+ \sup_{0<s \leq T}|X^{\alpha,\beta}_s|^{\frac{r}{r +1}})\rightarrow 0,
\quad \text{as} \quad \varepsilon \rightarrow 0.
\end{align*}
By Burkholder's
inequality, we obtain that
\begin{align*}
K_2
&=\sup_{\alpha \geq
\alpha_0}\PX(|\int_{\tau^\alpha}^{\tau^\alpha+\varepsilon}
\int_{ |z|\leq 1} z\sigma( X^{\alpha,\beta}_{s-}) \tilde{N}^{\alpha,\beta}(ds,dz)|>\frac{K}{3})\\
&\leq \sup_{\alpha \geq
\alpha_0}\PX(|\int_{\tau^\alpha}^{\tau^\alpha+\varepsilon}
\int_{ |z|\leq 1} z\sigma( X^{\alpha,\beta}_{s-}) \tilde{N}^{\alpha,\beta}(ds,dz)|>\frac{K}{3}; \sup_{0\leq t\leq T}|X^{\alpha,\beta}_t|\leq M)\\
&\quad+ \PX(\sup_{0\leq t\leq T}|X^{\alpha,\beta}_t|\geq M)\\
&\leq \frac{3}{K} \sup_{\alpha \geq
\alpha_0}\EX|(\int_{\tau^\alpha}^{\tau^\alpha+\varepsilon} \int_{
|z|\leq
1} z\sigma(X^{\alpha,\beta}_{s-}) \tilde{N}^{\alpha,\beta}(ds,dz); \sup_{0\leq t\leq T}|X^{\alpha,\beta}_t|\leq
M)| \\
&\quad + \PX(\sup_{0\leq t\leq T}|X^{\alpha,\beta}_t|\geq M)\\
&\leq \frac{C}{K} \sup_{\alpha \geq \alpha_0}[\EX
\int_{\tau^\alpha}^{\tau^\alpha+\varepsilon} \int_{ |z|\leq 1} |z|^2 \nu^{\alpha,\beta}(dz)ds]^{\frac{1}{2}}+ \PX(\sup_{0\leq t\leq T}|X^{\alpha,\beta}_t|\geq M)\\
&\leq\frac{C\sqrt{\varepsilon}}{K} \sup_{\alpha \geq
\alpha_0}[\int_{ |z|\leq 1}
|z|^2 \nu^{\alpha,\beta}(dz)]^{\frac{1}{2}}+\frac{1}{M^\theta}\sup_{\alpha \geq \alpha_0}
\EX \sup_{0\leq t\leq T}|X^{\alpha,\beta}_t|^\theta.
\end{align*}
By choosing a sufficiently large positive number $M$ in the last line of the above chain of inequalities, the term
\[
\frac{1}{M^\theta}\sup\limits_{\alpha \geq \alpha_0} \EX \sup\limits_{0\leq t\leq
T}|X^{\alpha,\beta}_t|^\theta
\]
can be made sufficiently small. Then, by the fact that \ $\sup\limits_{\alpha \geq
\alpha_0}\int_{ |z|\leq 1}|z|^2 \nu^{\alpha,\beta}(dz)<\infty$\ , we deduce
that
\[
K_2 \rightarrow 0, \quad \text{as} \quad \varepsilon \rightarrow 0.
\]
For $K_3$, we have
\begin{align*}
K_3
&=\sup_{\alpha \geq
\alpha_0}\PX(|\int_{\tau^\alpha}^{\tau^\alpha+\varepsilon}
\int_{ |z|>1} z\sigma( X^{\alpha,\beta}_{s-}) N^{\alpha,\beta}(ds,dz)|>\frac{K}{3})\\
&\leq \sup_{\alpha \geq
\alpha_0}\PX(|\int_{\tau^\alpha}^{\tau^\alpha+\varepsilon}
\int_{ |z|>1} z\sigma( X^{\alpha,\beta}_{s-}) {N}^{\alpha,\beta}(ds,dz)|>\frac{K}{3}; \sup_{0\leq t\leq T}|X^{\alpha,\beta}_t|\leq M) + \PX(\sup_{0\leq t\leq T}|X^{\alpha,\beta}_t|\geq M)\\
&\leq \frac{C}{K} \sup_{\alpha
\geq \alpha_0}\EX( \int_{\tau^\alpha}^{\tau^\alpha+\varepsilon} \int_{ |z|>
1} |z| {N}^{\alpha,\beta}(ds,dz) )+ \PX(\sup_{0\leq t\leq T}|X^{\alpha,\beta}_t|\geq M)\\
&= \frac{C}{K} \sup_{\alpha
\geq \alpha_0}\EX( \int_{\tau^\alpha}^{\tau^\alpha+\varepsilon} \int_{ |z|>
1} |z| \nu^{\alpha,\beta}(dz)ds)+ \PX(\sup_{0\leq t\leq T}|X^{\alpha,\beta}_t|\geq M)\\
& \leq \frac{ C\varepsilon}{K} \sup_{\alpha \geq \alpha_0}
\int_{|z|\geq
1}|z|\nu^{\alpha,\beta}(dz)+\frac{1}{M^\theta}\sup_{\alpha \geq \alpha_0} \EX \sup_{0\leq t\leq
T}|X^{\alpha,\beta}_t|^\theta.
\end{align*}
In the same way, by choosing a sufficiently large positive number $M>0$ in the last line of the above chain of inequalities, the term
\[
\frac{1}{M^\theta}\sup_{\alpha \geq \alpha_0} \EX \sup_{0\leq t\leq
T}|X^{\alpha,\beta}_t|^\theta
\]
can be made sufficiently small. Then, by the fact that $ \sup\limits_{\alpha \geq
\alpha_0}\int_{|z|\geq1}|z|\nu^{\alpha,\beta}(dz) <\infty$, we deduce that
\[
K_3 \rightarrow 0, \quad \text{as} \quad \varepsilon \rightarrow 0.
\]
Combining the above arguments for $K_1$, $K_2$ and $K_3$, we get
\eqref{Aldous2}. Then, we can deduce that
the family $\{X^{\alpha,\beta}\}_{\alpha \geq \alpha_0}$ is tight in the
space $D_{J_1}([0,T],\mathbb{R}^d)$. \qed\end{proof}\bigskip
The following lemma characterizes the support of the limit of $\mathcal{L}(X^{\alpha,\beta})$, the law of $X^{\alpha,\beta}$.
\begin{lemma}\label{lem2.5}
Suppose Assumption \ref{asp1} is satisfied. Let
$\mu^{\alpha,\beta} :=\mathcal{L}(X^{\alpha,\beta})$ and let $\mu^*$ be the weak limit of any convergent subsequence
$\{\mu^{\alpha_n,\beta}\}$ $(\alpha_n \rightarrow 2$ as $n \rightarrow
\infty)$. Then $\mu^*$ is supported on $C_U([0,T],\mathbb{R}^d)$.
\end{lemma}
\begin{proof}
For any $\eta>0$, $M>0$, and $\theta \in (0,1)$, we have
\begin{align*}
\PX(\sup_{0\leq t\leq T}|X^{\alpha,\beta}_t-X^{\alpha,\beta}_{t-}| >\eta)
& \leq \PX(\sup_{0\leq t\leq T}|X^{\alpha,\beta}_t-X^{\alpha,\beta}_{t-}| >\eta;
\sup_{0\leq t\leq T}|X^{\alpha,\beta}_t|\leq M)\\
& \quad + \PX(\sup_{0\leq t\leq T}|X^{\alpha,\beta}_t|\geq M)\\
&\leq\frac{c}{\eta}\EX \int_0^T \int_{|z|\geq \eta}|z| N^{\alpha,\beta}(ds,dz)+\frac{1}{M^\theta} \EX \sup_{0\leq t\leq T}|X^{\alpha,\beta}_t|^\theta\\
& \leq \frac{ c}{\eta}\int_{|z|\geq
\eta}|z|\nu^{\alpha,\beta}(dz)+\frac{1}{M^\theta} \EX \sup_{0\leq t\leq
T}|X^{\alpha,\beta}_t|^\theta.
\end{align*}
Letting first $M\rightarrow \infty$ and then $\alpha \rightarrow 2$,
we obtain that
\[
\lim_{\alpha \rightarrow 2}\PX(\sup_{0\leq t\leq
T}|X^{\alpha,\beta}_t-X^{\alpha,\beta}_{t-}|
>\eta) =0.
\]
Then $\mu^*$ is supported by
$C_U([0,T],\mathbb{R}^d)$.
\qed\end{proof}\bigskip
~\\
By using It\^{o}'s formula for the solution $X^{\alpha,\beta}_t$ of SDE \eqref{e:0}, for each $f(x)\in C^2(\mathbb{R}^d)$, we have
\begin{equation}\label{M1}
\begin{aligned}
&\quad \ f(X^{\alpha,\beta}_t)-f(x)\nonumber\\
&=\sum_{i=1}^{d}\int_0^t b_i(X^{\alpha,\beta}_s)\cdot \partial_i f(X^{\alpha,\beta}_s)ds\nonumber\\
&\quad+\sum_{i=1}^{d}\int_0^t \int_{ |z_i|\leq 1}[f(X^{\alpha,\beta}_{s-}+
\sigma_i(X^{\alpha,\beta}_{s-})z_i)-f(X^{\alpha,\beta}_{s-})]
\tilde{N}^{\alpha,\beta_i}(ds,dz_i)\nonumber\\
&\quad+\sum_{i=1}^{d}\int_0^t \int_{ |z_i|>1}[f(X^{\alpha,\beta}_{s-}+
\sigma_i(X^{\alpha,\beta}_{s-})z_i)-f(X^{\alpha,\beta}_{s-})]
N^{\alpha,\beta_i}(ds,dz_i)\nonumber\\
&\quad+\sum_{i=1}^{d}\int_0^t \int_{|z_i|\leq 1}[f(X^{\alpha,\beta}_{s-}+
\sigma_i(X^{\alpha,\beta}_{s-})z_i)-f(X^{\alpha,\beta}_{s-})- \sigma_i(X^{\alpha,\beta}_{s-})z_i\cdot
\nabla f(X^{\alpha,\beta}_{s-})] \nu^{\alpha,\beta_i}(dz_i)ds\nonumber\\
&=\tilde{M}_t+\int_0^t\mathcal{L}_{b,\sigma}^{\alpha,\beta} f(X_{s}^{\alpha,\beta})\,ds,
\end{aligned}
\end{equation}
where $\tilde{M}_t$ is a martingale and
$\mathcal{L}_{b,\sigma}^{\alpha,\beta} f(u)$ is defined as follows:
\begin{align}\label{Lf-alpha}
\mathcal{L}_{b,\sigma}^{\alpha,\beta} f(u)
&:= b_i(u)\cdot \partial^i f(u)+\sum_{i=1}^{d}\int_{ |z_i|\leq 1}[f(u+
\sigma_i(u)z_i)-f(u)-\sigma_i(u)z_i\cdot \nabla f(u)]\nu^{\alpha,\beta_i}(dz_i)\\
&\quad+\sum_{i=1}^{d} \int_{ |z_i|>1}[f(u+
\sigma_i(u)z_i)-f(u)]\nu^{\alpha,\beta_i}(dz_i).\nonumber
\end{align}
Moreover, let
\begin{equation}\label{Lf}
\mathcal{L}_{b,\sigma}f(u):=b_i(u)\cdot \partial^i f(u)+\text{Tr}\left( \sigma \sigma^*
H(f)\right)(u),
\end{equation}
where $\text{Tr}\left(
\sigma \sigma^* H(f)\right):= \sum_{i,j}(\sigma
\sigma^*)_{ij}\frac{\partial^2 f}{\partial x_i \partial x_j}$.
\begin{lemma}\label{lem2.6}
Let $\mathcal{L}_{b,\sigma}^{\alpha,\beta}$ and $ \mathcal{L}_{b,\sigma}$ be
the operators defined in \eqref{Lf-alpha} and \eqref{Lf}, respectively. Suppose
Assumption \ref{asp1} is satisfied. Then for every $f\in
C^2_b (\mathbb{R}^d)$, we have $$\mathcal{L}_{b,\sigma}^{\alpha,\beta} f \rightarrow
\mathcal{L}_{b,\sigma} f$$ locally uniformly in $\mathbb{R}^d$ as
$\alpha$ tends to $2$.
\end{lemma}
\begin{proof}
For every $0<\delta<1$, we have
\begin{align}\label{g1}
\mathcal{L}_{b,\sigma}^{\alpha,\beta}
f(u)-\mathcal{L}_{b,\sigma}f(u)&=\sum_{i=1}^{d}\int_{ |z_i|\leq 1}[f(u+
\sigma_i(u)z_i)-f(u)-\sigma_i(u)z_i\cdot \nabla f(u)]\nu^{\alpha,\beta_i}(dz_i)\nonumber\\
&\quad+\sum_{i=1}^{d} \int_{ |z_i|>1}[f(u+
\sigma_i(u)z_i)-f(u)]\nu^{\alpha,\beta_i}(dz_i)-\text{Tr}\left( \sigma \sigma^*
H(f)\right)(u)\nonumber\\
&=\sum_{i=1}^{d}\int_{ |z_i|\leq \delta}[f(u+
\sigma_i(u)z_i)-f(u)-\sigma_i(u)z_i\cdot \nabla f(u)]\nu^{\alpha,\beta_i}(dz_i)\nonumber\\
&\quad+\sum_{i=1}^{d} \int_{ |z_i|>\delta}[f(u+
\sigma_i(u)z_i)-f(u)]\nu^{\alpha,\beta_i}(dz_i)\nonumber \\
&\quad-\sum_{i=1}^{d}\int_{\delta < |z_i| \leq 1}\sigma_i(u)z_i\cdot\nabla f(u)\nu^{\alpha,\beta_i}(dz_i)
-\text{Tr}\left( \sigma \sigma^*H(f)\right)(u).\nonumber
\end{align}
For convenience, we denote
\begin{align}\nonumber
\mathcal{A}f(u)&:=\sum_{i=1}^{d}\int_{ |z_i|\leq \delta}[f(u+
\sigma_i(u)z_i)-f(u)-\sigma_i(u)z_i\cdot \nabla f(u)]\nu^{\alpha,\beta_i}(dz_i),\nonumber\\
\mathcal{B}f(u)&:=\sum_{i=1}^{d} \int_{ |z_i|>\delta}[f(u+
\sigma_i(u)z_i)-f(u)]\nu^{\alpha,\beta_i}(dz_i),\nonumber \\
\mathcal{C}f(u)&:=\sum_{i=1}^{d}\int_{\delta < |z_i| \leq 1}\sigma_i(u)z_i\cdot\nabla f(u)\ \nu^{\alpha,\beta_i}(dz_i), \nonumber\\
\mathcal{D}f(u)&:=\sum_{i=1}^{d}\int_0^1dt \int_0^t \int_{ |z_i|\leq
\delta}[\sigma_i(u)z_iH(f)(u)\cdot
\sigma_i(u)z_i] \nu^{\alpha,\beta_i}(dz_i)ds.\nonumber
\end{align}
By Taylor's formula, we get
$$
f(u+v)-f(u)-v \cdot \nabla f(u)=\int_0^1dt \int_0^t vH(f)(u+sv)\cdot
vds.
$$
Then, we can choose a suitable $\delta\ll 1$ such that
\begin{align}\nonumber
&\quad \mathcal{A}f(u)-\mathcal{D}f(u)\\
&=\sum_{i=1}^{d}
\int_{ |z_i|\leq \delta}\int_0^1dt \int_0^t
\sigma_i(u)z_i[H(f)(u+\sigma_i(u)z_is)-H(f)(u)]\cdot \sigma_i(u)z_ids\nu^{\alpha,\beta_i}(dz_i)\nonumber
\end{align}
is sufficiently small. For a fixed $\delta>0$, we have
\begin{align*}
|\int_{|z_i|\le \delta}|z_i|^2 \nu^{\alpha,\beta_i}(dz_i)-2|\rightarrow 0, \ \ \ as \ \ \alpha\rightarrow 2.
\end{align*}
Then
\begin{center}
\begin{align}\nonumber
&\quad \mathcal{D}f(u)-\text{Tr}\left( \sigma \sigma^*H(f)\right)(u)\nonumber\\
&=\sum_{j,k=1}^d \sum_{i=1}^d\sigma_{ki}\sigma_{ji}f_{kj}(u)\int_0^1dt\int_0^tds(\int_{|z_i|\le \delta}|z_i|^2 \nu^{\alpha,\beta_i}(dz_i))-\sum_{j,k=1}^d \sum_{i=1}^d\sigma_{ki}\sigma_{ji}f_{kj}(u)\int_0^1dt\int_0^t 2ds\nonumber\\
&=\sum_{j,k=1}^d \sum_{i=1}^d\sigma_{ki}\sigma_{ji}f_{kj}(u)\int_0^1dt\int_0^t ds(\int_{|z_i|\le \delta}|z_i|^2 \nu^{\alpha,\beta_i}(dz_i)-2)\rightarrow 0, \ \ \ as \ \ \alpha\rightarrow 2.\nonumber
\end{align}
\end{center}
Similarly, it can be shown that
\begin{align}\nonumber
|\mathcal{B}f(u)|\le C \sum_{i=1}^d \int_{|z_i|>\delta}|z_i| \nu^{\alpha,\beta_i}(dz_i)\rightarrow 0,
\end{align}
and
\begin{equation} \nonumber
|\mathcal{C}f(u)|
\le C\sum_{i=1}^{d}\int_{\delta<|z_i|\leq1} |z_i| \nu^{\alpha,\beta_i}(dz_i)
\le C\sum_{i=1}^{d}\int_{|z_i|>\delta}|z_i| \nu^{\alpha,\beta_i}(dz_i)\rightarrow 0,
\end{equation}
as $\alpha$ tends to 2.
Then, we have
\begin{align}\nonumber
&\quad\mathcal{L}_{b,\sigma}^{\alpha,\beta}
f(u)-\mathcal{L}_{b,\sigma}f(u)\\\nonumber
&=\mathcal{A}f(u)+\mathcal{B}f(u)-\mathcal{C}f(u)-\text{Tr}\left( \sigma \sigma^*H(f)\right)(u)\nonumber\\
&=\mathcal{A}f(u)-\mathcal{D}f(u)+\mathcal{D}f(u)-\text{Tr}\left( \sigma \sigma^*H(f)\right)(u)+\mathcal{B}f(u)-\mathcal{C}f(u) \rightarrow 0,\nonumber
\end{align}
as $\alpha$ tends to 2.
\qed\end{proof}\bigskip
Now, by using the idea of Cerrai (see \cite[Theorem 6.2]{Cerrai}) and all of the lemmas presented in this section, we can prove Theorem \ref{thm1} as follows.
~\\
\textbf{\emph{Proof of Theorem 2.1.}}\ \ \
Suppose that a subsequence $\{\mu^{\alpha_n,\beta}\}_{n\in \mathbb{N}}$ converges weakly to $\mu^*$. If we are able to identify $\mu^*$ with $\mathcal{L}(X)$, where $\mathcal{L}(X)$ is the distribution of $X$, then we may conclude that the whole family $\{\mu^{\alpha,\beta}\}$ converges weakly to $\mathcal{L}(X)$ in $D_{J_1}([0,T],\mathbb{R}^d)$ as $\alpha\rightarrow 2$.
Let $\mathcal{L}_{b,\sigma}^{\alpha,\beta}$ and $ \mathcal{L}_{b,\sigma}$ be the operators defined in \eqref{Lf-alpha} and \eqref{Lf}, respectively. For any $f\in C_b^2(\mathbb{R}^d)$, we obtain that the process
\begin{align*}
f(X_t^{\alpha,\beta}) - f(x) - \int_0^t \mathcal{L}_{b,\sigma}^{\alpha,\beta}f(X_s^{\alpha,\beta})\,ds
\end{align*}
is an $\mathcal{F}_t$-martingale.
We denote by $\EX^{\mu^\alpha}$ the expectation with respect to the probability measure $\mu^\alpha$ and by $\EX^{\mu^*}$ the expectation with respect to the probability measure $\mu^*$. We denote by $\eta(t)$ the canonical process on $D_{J_1}([0,T],\mathbb{R}^d)$. For every $0\le s_0 < s_1 < \cdots < s_m \le s < t$ and $f_0, f_1, \cdots, f_m \in C_b (\mathbb{R}^d)$, we have
\begin{equation}\label{M2}
\EX^{\mu^{\alpha}}\left[ \left( f(\eta_t)-f(\eta_s)-\int_s^t \mathcal{L}_{b,\sigma}^{\alpha,\beta}f(\eta_r)dr \right) f_0(\eta_{s_0})\cdots f_m(\eta_{s_m})\right]=0.
\end{equation}
By \eqref{M2} , we have
\begin{align*}
&\quad \EX^{\mu^*}\left [ \left(f(\eta_t)-f(\eta_s)-\int_s^t\mathcal{L}_{b,\sigma}f(\eta_r)dr\right) f_0(\eta_{s_0})\cdots f_m(\eta_{s_m})\right]\\
& = \lim_{n \rightarrow \infty}\EX^{\mu^{\alpha_n}} \left[ \left(f(\eta_t)-f(\eta_s)-\int_s^t\mathcal{L}_{b,\sigma}f(\eta_r)dr\right) f_0(\eta_{s_0})\cdots f_m(\eta_{s_m})\right]\\
& = \lim_{n \rightarrow \infty}\EX^{\mu^{\alpha_n}} \Big[ \left(f(\eta_t)-f(\eta_s)-\int_s^t\mathcal{L}_{b,\sigma}f(\eta_r)dr\right) f_0(\eta_{s_0})\cdots f_m(\eta_{s_m})\\
& \quad \quad \quad\quad\quad\quad -\left(f(\eta_t)-f(\eta_s)-\int_s^t \mathcal{L}_{b,\sigma}^{\alpha_n,\beta} f(\eta_r)dr\right) f_0(\eta_{s_0})\cdots f_m(\eta_{s_m})\Big]\\
& = \lim_{n \rightarrow \infty}\EX^{\mu^{\alpha_n}} \left[ \left(\int_s^t \mathcal{L}_{b,\sigma}^{\alpha_n,\beta} f(\eta_r)-\mathcal{L}_{b,\sigma}f(\eta_r) dr \right) f_0(\eta_{s_0})\cdots f_m(\eta_{s_m}) \right].
\end{align*}
By Lemma \ref{lem2.4}, Lemma \ref{lem2.5} and Lemma \ref{lem2.6}, choosing a suitable $M>0$, we obtain
\begin{align}\label{M4}
&\quad\lim_{n\rightarrow \infty} \EX^{\mu^{\alpha_n}}\left[ \left(\int_s^t \mathcal{L}_{b,\sigma}^{\alpha_n, \beta}f(\eta_r)-\mathcal{L}_{b,\sigma}f(\eta_r) dr\right) f_0(\eta_{s_0})\cdots f_m(\eta_{s_m})\right]\nonumber \\
&\le\lim_{n\rightarrow \infty} C\EX^{\mu^{\alpha_n}}\left[\int_s^t | \mathcal{L}_{b,\sigma}^{\alpha_n, \beta}f(\eta_r)-\mathcal{L}_{b,\sigma}f(\eta_r) |dr \right] \nonumber \\
&\le \lim_{n\rightarrow \infty} C\EX^{\mu^{\alpha_n}}\left[\int_s^t | \mathcal{L}_{b,\sigma}^{\alpha_n, \beta}f(\eta_r)-\mathcal{L}_{b,\sigma}f(\eta_r) |dr; \sup_{s \le r \le t}|\eta_r| \le M\right] \nonumber \\
&\quad+\lim_{n \rightarrow \infty} C\EX^{\mu^{\alpha_n}}\left[\int_s^t |\mathcal{L}_{b,\sigma}^{\alpha_n, \beta}f(\eta_r)-\mathcal{L}_{b,\sigma}f(\eta_r) |dr; \sup_{s\le r \le t}|\eta_r|> M\right] \nonumber \\
&=0.
\end{align}
Hence, according to \eqref{M4}, we can conclude that
\begin{align*}
\EX^{\mu^*}\left[\left(f(\eta_t)-f(\eta_s)-\int_s^t\mathcal{L}_{b,\sigma}f(\eta_r)dr\right) f_0(\eta_{s_0})\cdots f_m(\eta_{s_m})\right]=0.
\end{align*}
This means that $f(\eta_t)-f(\eta_0)-\int_0^t\mathcal{L}_{b,\sigma}f(\eta_s)ds$ is a martingale under $\mu^*$. By an argument similar to \cite[Theorem 6.2]{Cerrai}, the probability measure $\mu^*$ solves the martingale problem associated with $\mathcal{L}_{b,\sigma}$, which induces a weak solution of the SDE \eqref{e2}. Thus, we conclude that $\mu^*=\mathcal{L}(X)$.
\qed\bigskip
\begin{remark}
We have shown that the solution of an SDE driven by a non-symmetric $\alpha$-stable process converges weakly to that of an SDE driven by a Brownian motion. From the proof of Theorem \ref{thm1}, we observe that
\begin{align*}
\int_0^\cdot \int_{|z|>1} zN^{\alpha,\beta}(dt,dz) \rightarrow 0, \quad \int_0^\cdot \int_{|z|\le 1} z\tilde{N}^{\alpha,\beta}(dt,dz) \rightarrow B_\cdot, \quad \text{as } \alpha\rightarrow 2,
\end{align*}
weakly in $D_{J_1}([0,T],\mathbb{R}^d)$. The process $\int_0^\cdot \int_{|z|\le 1} z\tilde{N}^{\alpha,\beta}(dt,dz)$ is the compensated sum
of small jumps; it may be called the ``central part'' and plays the dominant role. The process $\int_0^\cdot \int_{|z|>1} zN^{\alpha,\beta}(ds,dz)$ describes the ``large jumps'' and may be called the ``non-symmetric part''. In the case of non-symmetric noise, the compensated small-jump process $\int_0^\cdot \int_{|z|\le 1} z\tilde{N}^{\alpha,\beta}(dt,dz)$ is itself non-symmetric, yet it still converges in law to a Brownian motion in $D_{J_1}([0,T],\mathbb{R}^d)$ as $\alpha$ approaches 2. Consequently, the SDE driven by $\int_0^\cdot \int_{|z|\le 1} z\tilde{N}^{\alpha,\beta}(dt,dz)$ behaves, in law in $D_{J_1}([0,T],\mathbb{R}^d)$, like the SDE driven by a Brownian motion.
\end{remark}
\section{\bf Proof of Theorem 2.2} \label{Weak Convergence}
In this section we will derive the rate of weak convergence stated in Theorem \ref{thm2}. For this purpose we will use the backward Kolmogorov equation.
Let's consider the equation
\begin{equation}\label{Keq}
\begin{aligned}
&(\partial_t+\mathcal{L}_{b,\sigma})u(t,x)=0, t\in[0,T]\\
&u(T,x)=f(x),
\end{aligned}
\end{equation}
where
\begin{align*}
\mathcal{L}_{b,\sigma} u(t,x)= b_i(x)\partial^iu(t,x)+\text{Tr}\left( \sigma \sigma^*H_x\big(u(t,x)\big)\right),
\end{align*}
which denotes the generator of $X_t$.
\begin{lemma}\label{lem4.1}
If Assumption \ref{asp2} is satisfied, then for each $f\in {C}^{2,\gamma}(\mathbb{R}^d)$, equation \eqref{Keq} admits a unique solution $u\in {C}^{2,\gamma}(H)$.
\end{lemma}
A proof of this result can be found in R. Mikulevicius \cite[Theorem 4]{rate}. In order to establish the rate of weak convergence under Assumption \ref{asp2}, we first need to prove weak convergence. Since this proof is quite similar to the argument for Theorem \ref{thm1}, we only establish a uniform estimate in the following lemma.
\begin{lemma}\label{lem4.2}
If Assumption \ref{asp2} is satisfied, then for
every $ \theta \in (0,\alpha_0)$,
\begin{equation}\label{solution1}
\sup_{\alpha\geq\alpha_0} \EX [\sup_{0\leq t\leq T} |X^{
\alpha,\beta}_t|^\theta] <\infty,
\end{equation}
where $\alpha_0 \in (\frac{3}{2}, 2).$
\end{lemma}
\begin{proof}
We only need to prove \eqref{solution1} for $\theta \in [1,\alpha_0)$, because the case $\theta \in (0,1)$ then follows immediately from H\"{o}lder's inequality. In fact, we have
\begin{align*}
X_t^{\alpha,\beta}&=x+\int_0^t b(X_s^{\alpha,\beta})ds +\int_0^t\int_{|z|\le 1}\sigma(X_{s-}^{\alpha,\beta})z \tilde{N}^{\alpha,\beta}(ds,dz)
+\int_0^t\int_{|z|>1}\sigma(X_{s-}^{\alpha,\beta})zN^{\alpha,\beta}(ds,dz)\\
&=x+\int_0^t b(X_s^{\alpha,\beta})ds +\int_0^t\int_{|z|\le1}\sigma(X_{s-}^{\alpha,\beta})z \tilde{N}^{\alpha,\beta}(ds,dz)\\
&\quad\ +\int_0^t\int_{|z|>1}\sigma(X_{s-}^{\alpha,\beta})z\nu^{\alpha,\beta}(dz)ds+\int_0^t\int_{|z|>1}\sigma(X_{s-}^{\alpha,\beta})z\tilde{N}^{\alpha,\beta}(ds,dz).
\end{align*}
First, by Assumption \ref{asp2} there exists a positive constant $M$ such that $|b(x)|\le M$ and $|\sigma(x)|\le M$ for all $x\in\mathbb{R}^d$. We obtain
\begin{align}\label{E1}
&\quad\EX(\sup_{0\le t \le T}|\int_0^tb(X_s^{\alpha,\beta})ds|^{\theta})
\le M^\theta T^\theta < \infty.
\end{align}
By using Burkholder's inequality \cite[Theorem 26.12]{12}, and Lemma \ref{lem2.2}, we have
\begin{align}\label{E2}
&\quad\sup_{\alpha\geq \alpha_0}\EX\sup_{0\le t \le T}|\int_0^t\int_{|z|\le1} \sigma(X_{s-}^{\alpha,\beta})z\tilde{N}^{\alpha,\beta}(ds,dz)|^\theta \nonumber \\
&\le C_\theta \sup_{\alpha\geq\alpha_0}\EX|\int_0^T\int_{|z|\le 1} |\sigma(X_{s-}^{\alpha,\beta})z|^2 \nu^{\alpha,\beta}(dz)ds|^\frac{\theta}{2} \nonumber \\
&\le C_\theta M^\theta T^\frac{\theta}{2}\sup_{\alpha\geq \alpha_0}|\int_{|z|\le 1} |z|^2 \nu^{\alpha,\beta}(dz)|^\frac{\theta}{2} <\infty.
\end{align}
By applying \cite[Lemma 2.3]{Liu}, we obtain
\begin{align}\label{E3}
&\quad\sup_{\alpha\geq \alpha_0}\EX\sup_{0\le t \le T}|\int_0^t\int_{|z|>1} \sigma(X_{s-}^{\alpha,\beta})z\tilde{N}^{\alpha,\beta}(ds,dz)|^\theta \nonumber \\
&\le \sup_{\alpha\geq \alpha_0}C\EX \left(\int_0^T\int_{|z|>1} |\sigma(X_{s-}^{\alpha,\beta})z|^\theta \nu^{\alpha,\beta}(dz)ds\right)
+ \sup_{\alpha\geq \alpha_0}C\EX \left(\int_0^T\int_{|z|>1}|\sigma(X_{s-}^{\alpha,\beta})z| \nu^{\alpha,\beta}(dz)ds\right)^\theta \nonumber \\
&\le C M^\theta\sup_{\alpha\geq\alpha_0}[T\int_{|z|>1} |z|^\theta \nu^{\alpha,\beta}(dz) + T^\theta(\int_{|z|>1}|z|\nu^{\alpha,\beta}(dz))^\theta] < \infty
\end{align}
and
\begin{align}\label{E4}
&\quad\sup_{\alpha\geq\alpha_0}\EX\sup_{0\le t \le T} |\int_0^t\int_{|z|>1}\sigma(X_{s-}^{\alpha,\beta})z\nu^{\alpha,\beta}(dz)ds|^\theta\le T^\theta M^{\theta} \sup_{\alpha\geq\alpha_0}\left(\int_{|z|>1}|z|\nu^{\alpha,\beta}(dz)\right)^\theta < \infty.
\end{align}
By using $(x+y)^\theta \le C_{\theta}(x^\theta +y^\theta)$ for $\theta\in(1,2)$, we have
\begin{equation}\label{E5}
\begin{aligned}
\quad \sup_{\alpha\geq\alpha_0} \EX [\sup_{0\leq t\leq T} |X^{
\alpha,\beta}_t|^\theta]
\le C[\sup_{\alpha\geq\alpha_0}\EX(&\sup_{0\le t \le T}|\int_0^tb(X_s^{\alpha,\beta})ds|^{\theta} \\
\quad+\sup_{\alpha\geq \alpha_0}\EX&\sup_{0\le t \le T}|\int_0^t\int_{|z|\le1} \sigma(X_{s-}^{\alpha,\beta})z\tilde{N}^{\alpha,\beta}(ds,dz)|^\theta \\
\quad+\sup_{\alpha\geq \alpha_0}\EX&\sup_{0\le t \le T}|\int_0^t\int_{|z|>1} \sigma(X_{s-}^{\alpha,\beta})z\tilde{N}^{\alpha,\beta}(ds,dz)|^\theta \\
\quad+\sup_{\alpha\geq\alpha_0}\EX&\sup_{0\le t \le T} |\int_0^t\int_{|z|>1}\sigma(X_{s-}^{\alpha,\beta})z\nu^{\alpha,\beta}(dz)ds|^\theta].
\end{aligned}
\end{equation}
By formulas \eqref{E1}-\eqref{E5}, we obtain
\begin{align*}
\quad \sup_{\alpha\geq\alpha_0} \EX [\sup_{0\leq t\leq T} |X^{
\alpha,\beta}_t|^\theta] < \infty.
\end{align*}
The proof is complete.
\qed\bigskip\end{proof}
Now, we proceed to the proof of Theorem \ref{thm2}.
~\\
\textbf{\emph{Proof of Theorem 2.2.}}\ \ \
Let $\mathcal{L}^{\alpha,\beta}_{b,\sigma}$ be the generator of $X_t^{\alpha,\beta}$,
\begin{align*}
\mathcal{L}^{\alpha,\beta}_{b,\sigma} u(t,x)&=b_i(x)\partial^i u(t,x)+\sum_{i=1}^d\int_{|z_i|>1} [u(t,x+\sigma_i(x)z_i)-u(t,x)] \ \nu^{\alpha,\beta_i}(dz_i)\\
&\quad +\sum_{i=1}^d\int_{|z_i|\le 1} [u(t,x+\sigma_i(x)z_i)-u(t,x)-\sigma_i(x)z_i \cdot\nabla u(t,x)] \nu^{\alpha,\beta_i}(dz_i).
\end{align*}
According to Lemma \ref{lem4.1}, there exists a unique solution $u(t,x)\in {C}^{2,\gamma}(H)$ to \eqref{Keq}, for each $f(x)\in {C}^{2,\gamma}(\mathbb{R}^d)$. By using It\^{o} formula, we have
\begin{align*}
\EX[f(X_t^{\alpha,\beta})-f(X_t)]&=\EX[u(t,X_t^{\alpha,\beta})-u(0,x_0)]\\
&=\EX [\int_0^t \partial_tu(s,X_s^{\alpha,\beta})+ \mathcal{L}_{b,\sigma}^{\alpha,\beta}u(s,X_s^{\alpha,\beta}) ds]\\
&= \EX[\int_0^t -\mathcal{L}_{b,\sigma}u(s,X_s^{\alpha,\beta})+ \mathcal{L}_{b,\sigma}^{\alpha,\beta} u(s,X_s^{\alpha,\beta})ds ].
\end{align*}
Notice that $\sigma$ and $b$ are bounded; then for any $\delta>0$ we have
\begin{align*}
&\quad\ \int_0^t(\mathcal{L}_{b,\sigma}^{\alpha,\beta} -\mathcal{L}_{b,\sigma}) u(s,X_s^{\alpha,\beta})ds\\
&=\sum_{i=1}^d\int_0^t\int_{|z_i|\le 1} u(s,X_{s-}^{\alpha,\beta} +\sigma_i(X_{s-}^{\alpha,\beta})z_i) - u(s,X_{s-}^{\alpha,\beta})-\sigma_i(X_{s-}^{\alpha,\beta})z_i\cdot\partial_x u(s,X_{s-}^{\alpha,\beta}) \nu^{\alpha,\beta_i}(dz_i)ds\\
&\quad -\int_0^tTr[\sigma\sigma^*(X_{s-}^{\alpha,\beta})H_x\big(u(s,X_{s-}^{\alpha,\beta})\big)] ds\\
&\quad + \sum_{i=1}^d\int_0^t\int_{|z_i|>1} u(s,X_{s-}^{\alpha,\beta} +\sigma_i(X_{s-}^{\alpha,\beta})z_i) - u(s,X_{s-}^{\alpha,\beta}) \nu^{\alpha,\beta_i}(dz_i)ds\\
&= \sum_{i=1}^d\int_0^t\int_{|z_i|\le \delta} u(s,X_{s-}^{\alpha,\beta} +\sigma_i(X_{s-}^{\alpha,\beta})z_i) - u(s,X_{s-}^{\alpha,\beta})-\sigma_i(X_{s-}^{\alpha,\beta})z_i\cdot\partial_x u(s,X_{s-}^{\alpha,\beta}) \nu^{\alpha,\beta_i}(dz_i)ds \\
&\quad -\int_0^tTr[\sigma\sigma^*(X_{s-}^{\alpha,\beta})H_x\big(u(s,X_{s-}^{\alpha,\beta})\big)]ds \\
&\quad + \sum_{i=1}^d\int_0^t\int_{|z_i|>\delta} u(s,X_{s-}^{\alpha,\beta} +\sigma_i(X_{s-}^{\alpha,\beta})z_i)
- u(s,X_{s-}^{\alpha,\beta}) \nu^{\alpha,\beta_i}(dz_i)ds\\
&\quad - \sum_{i=1}^{d}\int_0^t\int_{\delta\le|z_i|\le1}\sigma_i(X_{s-}^{\alpha,\beta})z_i\cdot\partial_xu(s,X_{s-}^{\alpha,\beta}) \nu^{\alpha,\beta_i}(dz_i)ds.
\end{align*}
For convenience, we denote
\begin{align*}\nonumber
\mathcal{A}u(s,X_{s}^{\alpha,\beta})&=\sum_{i=1}^{d}\int_{ |z_i|\leq \delta}u(s,X_{s-}^{\alpha,\beta} +\sigma_i(X_{s-}^{\alpha,\beta})z_i) - u(s,X_{s-}^{\alpha,\beta})-\sigma_i(X_{s-}^{\alpha,\beta})z_i\cdot\partial_x u(s,X_{s-}^{\alpha,\beta}) \nu^{\alpha,\beta_i}(dz_i),\nonumber\\
\mathcal{B}u(s,X_{s}^{\alpha,\beta})&=\sum_{i=1}^{d} \int_{ |z_i|>\delta} u(s,X_{s-}^{\alpha,\beta} +\sigma_i(X_{s-}^{\alpha,\beta})z_i)
- u(s,X_{s-}^{\alpha,\beta})\nu^{\alpha,\beta_i}(dz_i),\nonumber \\
\mathcal{C}u(s,X_{s}^{\alpha,\beta})&=\sum_{i=1}^{d}\int_{\delta < |z_i| \leq 1}\sigma_i(X_{s-}^{\alpha,\beta})z_i
\cdot\partial_xu(s,X_{s-}^{\alpha,\beta})\nu^{\alpha,\beta_i}(dz_i), \nonumber\\
\mathcal{D}u(s,X_{s}^{\alpha,\beta})&=\sum_{i=1}^{d}\int_0^1dr \int_0^r \int_{ |z_i|\leq
\delta}[\sigma_i(X_{s-}^{\alpha,\beta})z_iH_x\big(u(s,X_{s-}^{\alpha,\beta})\big)\cdot
\sigma_i(X_{s-}^{\alpha,\beta})z_i] \nu^{\alpha,\beta_i}(dz_i)ds.\nonumber
\end{align*}
By Taylor's formula, we get
$$
u(t,x+y)-u(t,x)-y \cdot \nabla_x u(t,x)=\int_0^1dr \int_0^r yH_x\big(u(t,x+\xi y)\big)\cdot
yd\xi.
$$
Then, we have
\begin{align*}\nonumber
&\quad |\EX\int_0^t\mathcal{A}\big(u(s,X_{s}^{\alpha,\beta})\big)-\mathcal{D}\big(u(s,X_{s}^{\alpha,\beta})\big)ds|\\
&\le\sum_{i=1}^{d}\EX\int_0^t
\int_{ |z_i|\leq \delta}\int_0^1dr \int_0^r
\big|\sigma_i(X_{s-}^{\alpha,\beta})z_i\Big(H_x\big(u(s,X_{s-}^{\alpha,\beta}+\sigma_i(X_{s-}^{\alpha,\beta})z_i\xi)\big)\nonumber\\
&\quad\quad-H_x\big(u(s,X_{s-}^{\alpha,\beta})\big)\Big)\cdot \sigma_i(X_{s-}^{\alpha,\beta})z_i\big|d\xi \nu^{\alpha,\beta_i}(dz_i)ds\nonumber \\
&\le C\sum_{i=1}^{d}\EX\int_0^t
\int_{ |z_i|\leq \delta}\int_0^1dr \int_0^r
|z_i|^2\big|H_x\big(u(s,X_{s-}^{\alpha,\beta}+\sigma_i(X_{s-}^{\alpha,\beta})z_i\xi)\big)
-H_x\big(u(s,X_{s-}^{\alpha,\beta})\big)\big|d\xi \nu^{\alpha,\beta_i}(dz_i)ds.
\end{align*}
Since $u\in {C}^{2,\gamma}(H)$, we can choose a suitable $\delta\ll 1$ such that $\EX\int_0^t|\mathcal{A}\big(u(s,X_{s}^{\alpha,\beta})\big)-\mathcal{D}\big(u(s,X_{s}^{\alpha,\beta})\big)|ds$ is sufficiently small. For a fixed $\delta>0$, by using Lemma~\ref{lem2.2}, we obtain
\begin{equation}\nonumber
|\int_{|z_i|\le \delta}|z_i|^2 \nu^{\alpha,\beta_i}(dz_i)-2|\le C(2-\alpha).
\end{equation}
Then, we have
\begin{center}
\begin{align*}\nonumber
&\quad\ |\EX\int_0^t \mathcal{D}\big(u(s,X_s^{\alpha,\beta})\big)-\text{Tr}\left( \sigma \sigma^*(X_{s-}^{\alpha,\beta})H_x\big(u(s,X_{s-}^{\alpha,\beta})\big)\right)ds|\nonumber\\
&\le\sum_{j,k=1}^d \sum_{i=1}^d\int_0^t|\sigma_{ki}\sigma_{ji}\partial_{kj}^2u(s,X_{s-}^{\alpha,\beta})\int_0^1dr\int_0^r d\xi(\int_{|z_i|\le \delta}|z_i|^2 \nu^{\alpha,\beta_i}(dz_i)-2)|ds\nonumber\\
&\le C (2-\alpha).
\end{align*}
\end{center}
Furthermore, we have
\begin{align*}\nonumber
|\mathcal{B}\big(u(s,X_s^{\alpha,\beta})\big)|\le c \sum_{i=1}^d \int_{|z_i|>\delta}|z_i| \nu^{\alpha,\beta_i}(dz_i)\le C(2-\alpha),
\end{align*}
and
\begin{align*} \nonumber
\quad|\EX\int_0^t\ \mathcal{C}\big(u(s,X_{s}^{\alpha,\beta})\big)\ ds|&\le C\sum_{i=1}^{d}\int_{\delta<|z_i|\leq1} |z_i| \nu^{\alpha,\beta_i}(dz_i)\\
&\le C\sum_{i=1}^{d}\int_{|z_i|>\delta}|z_i| \nu^{\alpha,\beta_i}(dz_i)\le C(2-\alpha).
\end{align*}
Thus, we obtain
\begin{align*}\nonumber
&\quad\ \big|\EX\int_0^t \mathcal{L}_{b,\sigma}^{\alpha,\beta}
\big(u(s,X_s^{\alpha,\beta})\big)-\mathcal{L}_{b,\sigma}\big(u(s,X_s^{\alpha,\beta})\big) \ ds\big|\nonumber\\
&=\big|\EX\int_0^t(\mathcal{A}+\mathcal{B}-\mathcal{C})\big(u(s,X_s^{\alpha,\beta})\big)-\text{Tr}\left( \sigma \sigma^*(X_{s-}^{\alpha,\beta})H_x\big(u(s,X_{s-}^{\alpha,\beta})\big)\right)\ ds\big|\nonumber\\
&=\big|\EX\int_0^t(\mathcal{A}-\mathcal{D})\big(u(s,X_s^{\alpha,\beta})\big)ds\big|\\
&\quad+\big|\EX\int_0^t\mathcal{D}\big(u(s,X_s^{\alpha,\beta})\big)-\text{Tr}\left( \sigma \sigma^*(X_{s-}^{\alpha,\beta})H_x\big(u(s,X_s^{\alpha,\beta})\big)\right)ds\big|\\
&\quad+\big|\EX\int_0^t\mathcal{B}\big(u(s,X_s^{\alpha,\beta})\big)ds\big|+\big|\EX\int_0^t\mathcal{C}\big(u(s,X_s^{\alpha,\beta})\big)ds\big| \\
&\le C(2-\alpha).
\end{align*}
The proof is complete. \qed\bigskip
\begin{remark}
The estimate \eqref{wr} implies that the weak convergence rate is $2-\alpha$, and this rate is optimal, as the following example illustrates.
\end{remark}
\begin{example}\label{example}
Let $L_t^\alpha$ and $B_t$ be a one-dimensional symmetric $\alpha$-stable process ($\beta=0$) and a standard Brownian motion, respectively. Then, according to \cite{moment}, we have
\begin{align*}
\EX|L_t^\alpha|=\frac{\Gamma(1-\frac{1}{\alpha})}{\int_0^\infty u^{-2}\sin^2u \,du}\sqrt{t}.
\end{align*}
By using $\Gamma(1-x)\Gamma(x)=\frac{\pi}{\sin(\pi x)}, \ x\in(0,1)$, \ and \ $\Gamma(\frac{1}{2})=\sqrt{\pi}$, we have
\begin{align*}
\lim_{\alpha\rightarrow 2}\Big|\frac{\EX|L_t^\alpha|-\EX|B_t|}{2-\alpha}\Big|&=\lim_{\alpha\rightarrow 2}\Big|\frac{\Gamma(1-\frac{1}{\alpha})(\int_0^\infty u^{-2}\sin^2u\ du )^{-1}- \frac{2}{\sqrt{\pi}}}{2-\alpha}\Big|\sqrt{t}\\
&=\lim_{\alpha\rightarrow 2}\Big|\frac{\frac{2\pi}{\Gamma(\frac{1}{\alpha})\sin(\frac{\pi}{\alpha})}-2\sqrt{\pi}}{\pi(2-\alpha)}\Big|\sqrt{t}\\
&=\lim_{\alpha\rightarrow 2}\Big|\frac{2\big(\sqrt{\pi}-\Gamma(\frac{1}{\alpha})\sin\frac{\pi}{\alpha} \ \big)}{\pi(2-\alpha)}\Big|\sqrt{t}\\
&=\lim_{\alpha\rightarrow 2}\Big| \frac{2}{\pi \alpha^2} \big( \Gamma'(\tfrac{1}{\alpha})\sin\tfrac{\pi}{\alpha}+ \pi\Gamma(\tfrac{1}{\alpha})\cos\tfrac{\pi}{\alpha}\ \big)\Big|\sqrt{t}\\
&=\frac{|\Gamma'(\frac{1}{2})|}{2\pi}\sqrt{t}\\
&=\frac{\gamma+2\ln 2}{2\sqrt{\pi}}\sqrt{t},
\end{align*}
where $\gamma$ is the Euler--Mascheroni constant. Hence, for every $t\in[0,T]$, we obtain that there is a constant $C>0$ such that
\begin{equation}\label{optimal}
\big|\ \EX|L_t^\alpha|-\EX|B_t| \ \big| \geq C(2-\alpha).
\end{equation}
The estimate \eqref{optimal} implies that the optimal weak convergence rate is $2-\alpha$.
\end{example}
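As a quick numerical check of the limit above (an illustration only, not part of the example), one can evaluate the difference quotient directly; the snippet below assumes the normalization $\EX|L_1^\alpha|=\frac{2}{\pi}\Gamma(1-\frac{1}{\alpha})$ and $\EX|B_1|=\frac{2}{\sqrt{\pi}}$ used in the display above.
\begin{verbatim}
# Illustration only: numerical check of the limit computed in the example.
# gamma_E denotes the Euler-Mascheroni constant.
import numpy as np
from scipy.special import gamma

gamma_E = np.euler_gamma
closed_form = (gamma_E + 2*np.log(2.0)) / (2*np.sqrt(np.pi))

def diff_quotient(alpha):
    # |E|L_1^alpha| - E|B_1|| / (2 - alpha), with the normalization of the example
    EL = gamma(1.0 - 1.0/alpha) * 2.0/np.pi
    EB = 2.0/np.sqrt(np.pi)
    return abs(EL - EB) / (2.0 - alpha)

for alpha in (1.9, 1.99, 1.999):
    print(alpha, diff_quotient(alpha), closed_form)
\end{verbatim}
The difference quotient should approach $(\gamma+2\ln 2)/(2\sqrt{\pi})\approx 0.554$, in agreement with the displayed limit.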
\section{\bf Appendix: Proof of Lemma \ref{lem2.2} } \label{Appendix}
In this appendix we give the proof of Lemma \ref{lem2.2}.\\
\begin{proof} Let $\alpha \geq l$,
\begin{align*}\nonumber
\int_{|z|\le\delta} |z|^2 \nu^{\alpha,\beta}(dz)=\int_0 ^\delta \frac{ C_1 z^2}{|z|^{\alpha+1}} dz
+\int_{-\delta}^0 \frac{C_2 z^2}{|z|^{\alpha+1}} dz
=\frac{K_\alpha \delta^{2-\alpha}}{2-\alpha}, \nonumber
\end{align*}
\begin{align*} \nonumber
\int_{|z|>\delta}|z|^{\vartheta}\nu^{\alpha,\beta}(dz)=\int_{z>\delta} \frac{K_\alpha (1+\beta)}{2z^{\alpha+1-\vartheta}}dz + \int_{z< {-\delta}}\frac{K_\alpha(1-\beta)}{2|z|^{\alpha+1-\vartheta}}dz
=\frac{K_\alpha \delta^{\vartheta-\alpha}}{\alpha-\vartheta}. \nonumber
\end{align*}
Then by using\ $\lim\limits_{x \rightarrow 0}x\Gamma(x)=\lim\limits_{x\rightarrow 0} \frac{\pi x}{\Gamma(1-x) \sin \pi x}=1$, we have
\begin{align*} \nonumber
\lim_{\alpha \rightarrow 2}\int_{ |z|\leq \delta}|z|^2\nu^{\alpha,\beta}(dz)
=\lim_{\alpha \rightarrow 2}\frac{\alpha(1-\alpha) \delta^{2-\alpha}}{(2-\alpha)\Gamma(2-\alpha)\cos(\frac{\pi\alpha}{2})}=2, \nonumber
\end{align*}
\begin{align*} \nonumber
\lim_{\alpha \rightarrow 2} \int_{|z|>\delta}|z|^\vartheta \nu^{\alpha,\beta}(dz)
=\lim_{\alpha \rightarrow 2}\frac{2(2-\alpha) \delta^{\vartheta-\alpha}}{\alpha-\vartheta}=0. \nonumber
\end{align*}
Fix $\alpha\in(0,2)$,
\begin{align*} \nonumber
\lim_{\delta\rightarrow 0+}\delta^{\alpha-2}\int_{ |z|\leq \delta} |z|^{2}
\nu^{\alpha,\beta}(dz)= \lim_{\delta\rightarrow 0+}\frac{K_\alpha}{2-\alpha}<\infty. \nonumber
\end{align*}
Hence $ \lim\limits_{\delta\rightarrow 0+}\int_{ |z|\leq \delta} |z|^{2}
\nu^{\alpha,\beta}(dz)=0. $
Furthermore, by using L'Hospital's rule and $\lim\limits_{x \rightarrow 0} \frac{x\Gamma(x)}{x-1}=\lim\limits_{x\rightarrow 0}(x\Gamma(x))'=-1$, we have
\begin{align*}
\lim_{\alpha \rightarrow 2}\frac{\big|\int_{ |z|\leq \delta} |z|^{2}
\nu^{\alpha,\beta}(dz)-2\big|}{2-\alpha}=\lim_{\alpha \rightarrow 2} \big|\frac{\alpha(1-\alpha) \delta^{2-\alpha}-2(2-\alpha)\Gamma(2-\alpha)\cos(\frac{\pi\alpha}{2})}{(2-\alpha)}\big|=C.
\end{align*}
This result implies that
\begin{align*}
|\int_{ |z|\leq \delta} |z|^{2}
\nu^{\alpha,\beta}(dz)-2|\le C(2-\alpha).
\end{align*}
Setting $\vartheta=1$, we get
\begin{align*}
\int_{|z|>\delta}|z| \nu^{\alpha,\beta}(dz)\le C(2-\alpha).
\end{align*}
The proof is complete.\qed\end{proof}\bigskip
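As a numerical sanity check (an illustration only, not part of the proof), the closed forms derived above can be compared against direct quadrature of the L\'{e}vy density; since $C_1^i+C_2^i=K_\alpha^i$, the skewness $\beta_i$ cancels in these particular integrals.
\begin{verbatim}
# Illustration only: quadrature check of the closed forms derived in this appendix.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def K(alpha):
    return alpha*(1.0-alpha)/(gamma(2.0-alpha)*np.cos(np.pi*alpha/2.0))

def nu_moment(alpha, p, lo, hi):
    # int_{lo < |z| < hi} |z|^p nu^{alpha,beta}(dz) = K_alpha * int_lo^hi z^(p-1-alpha) dz
    val, _ = quad(lambda z: K(alpha)*z**(p - 1.0 - alpha), lo, hi)
    return val

delta = 1.0
for alpha in (1.9, 1.99, 1.999):
    m2 = nu_moment(alpha, p=2, lo=0.0, hi=delta)        # closed form: K_alpha/(2-alpha) -> 2
    m1 = nu_moment(alpha, p=1, lo=delta, hi=np.inf)     # closed form: K_alpha/(alpha-1) -> 0
    print(alpha, m2, K(alpha)/(2.0-alpha), m1, 2.0-alpha)
\end{verbatim}
Both $\big|\int_{|z|\le\delta}|z|^2\nu^{\alpha,\beta}(dz)-2\big|$ and $\int_{|z|>\delta}|z|\nu^{\alpha,\beta}(dz)$ should shrink roughly linearly in $2-\alpha$, in line with \eqref{eq3.5}.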
\section*{Acknowledgement}
The authors acknowledge support provided by the NNSF of China (Nos. 11971186, 11971367) and the Science and Technology
Research Project of Hubei Provincial Department of Education (No. B2022077).
|
{
"arxiv_id": "2302.08704",
"language": "en",
"timestamp": "2023-02-20T02:07:18",
"url": "https://arxiv.org/abs/2302.08704",
"yymm": "2302"
} | \section{Bias-Variance Analysis in the Gaussian Mixture Model}
\label{sec:appendix-theory}
\subsubsection{Overall Model}
Let's begin by computing the bias on $X^{priv}$ samples:
$$ \mathbb{E}[\hat{\mu}_{overall} - \mu_{priv}] =
\mathbb{E}[p_{priv}.\hat{\mu}_{priv} + p_{dis}.\hat{\mu}_{dis} - \mu_{priv}] $$
$$ =\mathbb{E}[p_{priv}.\hat{\mu}_{priv} + p_{dis}.\hat{\mu}_{dis} - p_{priv}.\mu_{priv} - p_{dis}\mu_{priv}] $$
\begin{equation}
p_{priv}\mathbb{E}[\hat{\mu}_{priv} - \mu_{priv}] + p_{dis}\mathbb{E}[\hat{\mu}_{dis} - \mu_{priv}]
\end{equation}
The first term goes to zero since $\hat{\mu}_{priv}$ is an unbiased estimator of $\mu_{priv}$. We can approximate the second term using the mean difference $\Delta \mu$, defined as follows (also shown pictorially in Figure \ref{fig:mean_estimation}):
\begin{equation}
\Delta \mu = |\mu_{priv} - \mu_{dis}|
\end{equation}
Plugging this into (7), we get:
$$ \mathbb{E}[\hat{\mu}_{overall} - \mu_{priv}] = p_{dis}\mathbb{E}[\hat{\mu}_{dis} - \mu_{priv}] $$
\begin{equation}
\approx p_{dis}\mathbb{E}[\mu_{dis} - \mu_{priv}] = p_{dis}.\mathbb{E}[\Delta \mu]
\end{equation}
Similarly, we can derive the expression for bias on $X^{dis}$ samples as:
\begin{equation}
\mathbb{E}[\hat{\mu}_{overall} - \mu_{dis}] = p_{priv}\mathbb{E}[\hat{\mu}_{priv} - \mu_{dis}] \approx p_{priv}.\mathbb{E}[\Delta \mu]
\end{equation}
This is a very intuitive result: we are paying for mis-specifying a single model instead of group-specific ones. The bias term is simply the product of the mean difference (how ``far'' the two distributions are from each other) and the proportion of samples from the ``wrong'' group (how many ``bad'' samples we used in our estimation).
Let's look at the variance (which does not depend on the true value) of this estimator next:
$$ \text{Var}(\hat{\mu}_{overall}) = \text{Var}(p_{priv}.\hat{\mu}_{priv} + p_{dis}.\hat{\mu}_{dis}) $$
\begin{equation}
= p_{priv}^2.\text{Var}(\hat{\mu}_{priv}) + p_{dis}^2.\text{Var}(\hat{\mu}_{dis}) + 2.p_{priv}.p_{dis}.\mathbb{E}[[\hat{\mu}_{priv} - \mathbb{E}\hat{\mu}_{priv}][\hat{\mu}_{dis} - \mathbb{E}\hat{\mu}_{dis}]]
\end{equation}
The third term goes to zero since $\hat{\mu}_{priv}$ and $\hat{\mu}_{dis}$ are estimated from disjoint groups of samples and are therefore independent. Let us write expressions for the variance of these two estimators now. Recall that these are both simply the MLE of their respective distribution, so we know their variance scales in the order of $1/n$:
\begin{equation}
\text{Var}(\hat{\mu}_{priv}) = \frac{\sigma^2_{priv}}{n_{priv}}
\end{equation}
\begin{equation}
\text{Var}(\hat{\mu}_{dis}) = \frac{\sigma^2_{dis}}{n_{dis}}
\end{equation}
Plugging this back into (11):
$$ \text{Var}(\hat{\mu_{overall}}) = p_{priv}^2.\frac{\sigma^2_{priv}}{n_{priv}} + p_{dis}^2.\frac{\sigma^2_{dis}}{n_{dis}}
$$
\begin{equation}
= \frac{1}{n} [\frac{n_{priv}}{n}.\sigma^2_{priv} + \frac{n_{dis}}{n}.\sigma^2_{dis}]
\end{equation}
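The following Monte Carlo sketch (an illustration, not from the paper; all numerical values are hypothetical choices) checks the bias and variance expressions above for the overall estimator.
\begin{verbatim}
# Illustrative Monte Carlo check (not from the paper) of the overall estimator's
# bias and variance in a two-component Gaussian mixture; all numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
mu_priv, mu_dis   = 1.0, -1.0            # true group means, Delta_mu = 2
sig_priv, sig_dis = 1.0, 2.0
n_priv, n_dis     = 800, 200
n                 = n_priv + n_dis
p_priv, p_dis     = n_priv/n, n_dis/n

overall = []
for _ in range(20000):
    x_priv = rng.normal(mu_priv, sig_priv, n_priv)
    x_dis  = rng.normal(mu_dis,  sig_dis,  n_dis)
    # pooled mean = p_priv*mu_hat_priv + p_dis*mu_hat_dis
    overall.append(np.concatenate([x_priv, x_dis]).mean())
overall = np.array(overall)

print("bias on priv:", overall.mean() - mu_priv, "theory:", -p_dis*(mu_priv - mu_dis))
print("variance:    ", overall.var(),
      "theory:", (p_priv*sig_priv**2 + p_dis*sig_dis**2)/n)
\end{verbatim}
With these (hypothetical) numbers, the empirical bias on the privileged group should be close to $p_{dis}\Delta\mu$ in magnitude, and the empirical variance close to $\frac{1}{n}[p_{priv}\sigma^2_{priv}+p_{dis}\sigma^2_{dis}]$.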
\subsubsection{Conditional Model}
The conditional model defined in Section \ref{sec:ciid-model} is by construction unbiased: we have fit unbiased models to the individual mixture components, and we use a priori information about mixture membership ($X_{protected}$) to decide which estimator to use.
Let's look at its variance: the conditional estimator uses $\hat{\mu}_{priv}$ to estimate the mean when samples come from $X^{priv}$, and $\hat{\mu}_{dis}$ when samples come from $X^{dis}$. So, the conditional model has a variance of $\sigma_{priv}^2/n_{priv}$ and $\sigma_{dis}^2/n_{dis}$ on samples from $X^{priv}$ and $X^{dis}$, respectively.
Although the difference between the i.i.d.\xspace and conditional-i.i.d.\xspace settings might seem subtle, we can already start seeing how different the resulting procedures are. In the i.i.d.\xspace case we paid heavily in terms of bias: if the distributions are very far apart ($\Delta \mu$ is large), and/or if samples are highly unequally drawn from the mixture components ($p_{priv}$ or $p_{dis}$ is close to 1 while the other is close to 0), then errors are large and unequally distributed: we do well on the distribution whose mean is closer to the estimated value, and do very poorly on the other mixture component. In the c-i.i.d.\xspace setting we have the opposite problem: we are able to construct unbiased estimators for both mixture components. However, in doing so, the variance of our estimator also becomes conditional on group membership: we pay in the order of $\sigma^2_{group}/n_{group}$. So, for groups/mixture components for which we see very little data, or very noisy data, we have large variance.
\subsubsection{Ensemble Model}
Recall the ensemble model defined in Section \ref{sec:ensemble-model}. This is a symmetric model, so the bias is the same on samples from the privileged group and the disadvantaged group. Writing it for the privileged group:
\begin{equation}
\mathbb{E}[\hat{\mu}_{ensemble} - \mu_{priv}] = \mathbb{E}[\frac{\hat{\mu}_{priv} + \hat{\mu}_{dis}}{2} - \mu_{priv}] = \frac{1}{2}\mathbb{E}[\hat{\mu}_{dis} - \mu_{priv}] \approx \frac{1}{2} \mathbb{E}[\Delta \mu]
\end{equation}
And, its variance is:
\begin{equation}
\text{Var}[\hat{\mu_{ensemble}}] = \text{Var}[\frac{\hat{\mu}_{priv} + \hat{\mu}_{dis}}{2}] = \frac{1}{4} [\text{Var}(\hat{\mu}_{priv}) + \text{Var}(\hat{\mu}_{dis})] = \frac{1}{4} (\frac{\sigma^2_{priv}}{n_{priv}} + \frac{\sigma^2_{dis}}{n_{dis}})
\end{equation}
\subsubsection{Dis Model}
Lastly, let's look at how we would perform if we only used the model trained on $X^{dis}$ samples, i.e., $\hat{\mu}_{dis}$.
We know that $\hat{\mu}_{dis}$ is an unbiased estimator of ${\mu}_{dis}$, so the bias on samples from $X^{dis}$ is zero. Let's look at the bias on samples from $X^{priv}$:
\begin{equation}
\mathbb{E}[\hat{\mu}_{dis} - \mu_{priv}] \approx \mathbb{E}[\mu_{dis} - \mu_{priv}] = \mathbb{E}[\Delta \mu].
\end{equation}
We already saw in (13) that the variance of this estimator is $\sigma^2_{dis}/n_{dis}$.
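A companion sketch (again an illustration with hypothetical parameters, matching the previous sketch) compares the conditional, ensemble, and dis-only estimators; note that the dis-only model coincides with the conditional estimator evaluated on the disadvantaged group.
\begin{verbatim}
# Illustration only: conditional, ensemble, and dis-only estimators under the
# same hypothetical mixture parameters as in the previous sketch.
import numpy as np

rng = np.random.default_rng(1)
mu  = {"priv": 1.0, "dis": -1.0}
sig = {"priv": 1.0, "dis": 2.0}
n   = {"priv": 800, "dis": 200}

cond, ens = {"priv": [], "dis": []}, []
for _ in range(20000):
    m = {g: rng.normal(mu[g], sig[g], n[g]).mean() for g in ("priv", "dis")}
    for g in ("priv", "dis"):
        cond[g].append(m[g])                 # conditional estimator used on group g
    ens.append(0.5*(m["priv"] + m["dis"]))   # ensemble estimator

for g in ("priv", "dis"):
    v = np.array(cond[g])
    print(f"conditional on {g}: bias={v.mean()-mu[g]:+.4f}  var={v.var():.5f}  "
          f"theory var={sig[g]**2/n[g]:.5f}")
v = np.array(ens)                            # the dis-only model equals cond['dis']
print(f"ensemble: bias on priv={v.mean()-mu['priv']:+.3f}  var={v.var():.5f}  "
      f"theory var={(sig['priv']**2/n['priv'] + sig['dis']**2/n['dis'])/4:.5f}")
\end{verbatim}
The printed variances should match $\sigma^2_{group}/n_{group}$ for the conditional estimators and $\frac{1}{4}(\sigma^2_{priv}/n_{priv}+\sigma^2_{dis}/n_{dis})$ for the ensemble.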
\section{Conditioning on Sensitive Attributes}
\label{sec:appendix-sensitive}
Several performance metrics (accuracy, TPR, FPR, FNR, TNR, selection rate, and positive rate) of different models on COMPAS and folktables are reported in Figures~\ref{fig:compas_clusters_groups_all_metrics} and~\ref{fig:folk_clusters_groups_all_metrics}. Colors denote different models, and the $x$-axis reports performance broken down by test subgroup. Train groups are constructed on the basis of sensitive attributes.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{icml2023/compas_sensitive_dtc_groups_all.png}
\caption{Conditioning on sensitive attributes, compas: Test performance of different models broken down by test subgroup.}
\label{fig:compas_clusters_groups_all_metrics}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{icml2023/folktables_sensitive_dtc_groups_all.png}
\caption{Conditioning on sensitive attributes, folktables: Test performance of different models broken down by test subgroup.}
\label{fig:folk_clusters_groups_all_metrics}
\end{figure*}
\section{Conditioning on Cluster Membership (blind)}
\label{sec:appendix-blind}
Several performance metrics (accuracy, TPR, FPR, FNR, TNR, selection rate, and positive rate) of different models on COMPAS and folktables are reported in Figures~\ref{fig:compas_clusters_groups} and~\ref{fig:folk_clusters_groups}. Colors denote different models, and the $x$-axis reports performance broken down by test subgroup. Train groups are constructed on the basis of cluster assignment.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{icml2023/compas_blind_dtc_groups.png}
\caption{Conditioning on cluster membership (blind), compas: Test performance of different models broken down by test subgroup.}
\label{fig:compas_clusters_groups}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{icml2023/folktables_blind_dtc_groups.png}
\caption{Conditioning on cluster membership (blind), folktables: Test performance of different models broken down by test subgroup.}
\label{fig:folk_clusters_groups}
\end{figure*}
\section{The Conditional-i.i.d Model}
\label{sec:ciid}
Assume we are given a dataset of covariates and targets $\mathcal{D}(X^i, Y^i)$, and we wish to learn an estimator $\hat{f}$ of $\mathbb{E}[Y|X=x]$, which takes the covariates of an unseen sample $x$ and returns its predicted label $\hat{y} = \hat{f}(x)$. Further, assume that the covariate vector $X^i$ can be partitioned into two types of features, $X_{relevant}$ and $X_{protected}$, and that demographic groups are constructed on the basis of $X_{protected}$.
Let $x^*$
be the value of the feature $X_{protected}$ for samples from the privileged group. Then:
$$ \mathcal{D}^{priv} = \{ (X^i,Y^i)| X^i_{protected} = x^* \}, i=1,2 \dots n $$
$$ \mathcal{D}^{dis} = \{ (X^i,Y^i) | X^i_{protected} \neq x^* \}, i=1,2 \dots n $$
$$ \mathcal{D} = \mathcal{D}^{priv} \cup \mathcal{D}^{dis}$$
Conventionally, we assume that samples $(X^i,Y^i)$ are i.i.d.\xspace:
$$\mathcal{D}, \mathcal{D}^{priv}, \mathcal{D}^{dis} \sim (\mathcal{X},\mathcal{Y})$$
Instead, in this paper we model the conditional-i.i.d.\xspace setting:
$$\mathcal{D}^{priv} \sim (\mathcal{X}^{priv},\mathcal{Y}^{priv}), \mathcal{D}^{dis} \sim (\mathcal{X}^{dis},\mathcal{Y}^{dis})$$
Under conditional-i.i.d.\xspace, the i.i.d.\xspace assumption holds after conditioning on the protected attributes.
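For concreteness, this partition can be computed directly from a tabular dataset. The following is a minimal Python sketch, where the column name \texttt{race} stands in for $X_{protected}$ and the value \texttt{Caucasian} for $x^*$; both, and the toy data, are purely illustrative.
\begin{verbatim}
import pandas as pd

# Toy data; "race" plays the role of X_protected and x_star of x*.
df = pd.DataFrame({
    "race": ["Caucasian", "African-American", "Caucasian", "Hispanic"],
    "age":  [25, 31, 47, 38],
    "y":    [0, 1, 1, 0],
})
x_star = "Caucasian"

D_priv = df[df["race"] == x_star]    # samples with X_protected = x*
D_dis  = df[df["race"] != x_star]    # samples with X_protected != x*
D      = pd.concat([D_priv, D_dis])  # the full dataset is their union
\end{verbatim}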
Different statistical measures correspond to different notions of fairness, but a particularly influential approach has been to
assume all samples are i.i.d.\xspace, fit an estimator to the full dataset (all groups), and then post-process predictions until the desired group-wise parity (fairness) constraint is met \cite{hardt_EOP2016}. The main observation we make here is that there is a global trade-off that the overall (i.i.d.\xspace) estimator $\hat{Y}$ is making.
The goal of fair-ML is to find a principled way to trade off comparable performance within groups with slightly worse performance overall.
In this paper, we interrogate this procedural tension:
adopting conditional-i.i.d.\xspace assumptions, we train group-specific models and investigate the trade-offs they make in terms of statistical bias and variance compared to the single (i.i.d.\xspace) model.
\section{Discussion}
\label{sec:discussion}
\paragraph{Worldview.}
Our work aligns with the
``We Are All Equal'' (WAE) worldview of~\citet{friedler_impossibility} that posits that people's qualifications are indistinguishable in the construct space, but, due to systematic skew (the effect of discrimination), they are no longer identically distributed in the observed space. From a philosophical perspective, the WAE worldview necessitates modelling beyond the i.i.d.\xspace setting.
\paragraph{Desert vs. Privilege.} From a moral perspective, it is desirable to use desert-based decision-making procedures in critical contexts such as criminal justice and employment. A desert-based procedure operates on whether the applicant deserves the outcome, whether good (positive employment status) or bad (high risk of recidivism). Our compelling empirical results on COMPAS and folktables in Section~\ref{sec:experiments} are an indication that real-world decision-making is influenced more by privilege than by desert. Here, model unfairness is evidence of the massive weight of privilege: our datasets are records of past decision-making. If this was purely desert-based, then seeing samples from socially privileged groups (such as sex$\_$priv and race$\_$priv) would not inherently make the procedure perform worse on samples from socially disadvantaged groups.
\paragraph{Rawls's Original Position.}
Philosophically, our proposed procedure resembles John Rawls's Original Position under a Veil of Ignorance. Rawls posits that people would select rules of social cooperation under a veil of ignorance, i.e.,\xspace without knowing where they are going to end up on the social ladder, in a way that agrees with his principles of justice~\cite{Rawls1971Justice}. The intuition is that, if nobody knows where they are going to end up, and could possibly end up on the bottom-most rung of the ladder, then everyone will set up society in a just manner, keeping in mind the position of the worst-off group. In our set-up, the most socially disadvantaged group is the intersectionally disadvantaged one. Hence, a procedure that trains an estimator on samples from that group (the sex$\_$race$\_$dis$\_$dis estimator in Section~\ref{sec:exp-sensitive}), and then applies it to everyone, irrespective of their morally irrelevant characteristics (sensitive features), formalizes the notion of fairness that Rawls posits in his broader theory of justice~\cite{Rawls1971Justice}.
\paragraph{Procedural Fairness.}
The conditional-i.i.d.\xspace model uses group-specific estimators during testing. For example, the race$\_$priv model is used to predict outcomes for white applicants, whereas the race$\_$dis model is applied to non-white applicants. While this gives us the improved predictive and computational gains we desired, from a legal standpoint the conditional-i.i.d.\xspace model is procedurally unfair, because it applies different decision-making procedures to applicants based on their sensitive features.
That being said, we also provide two conditional formulations that satisfy procedural fairness: the first is simply to use one of the conditional models on the entire population. For example, the procedure described under ``Rawls's Original Position'' uses only the sex$\_$race$\_$dis$\_$dis estimator to make predictions. This is no longer a violation of procedural fairness. Although the estimator only saw samples from one demographic group during training, it is the very same estimator being used to decide outcomes for all candidates, irrespective of their sensitive attributes, and hence this is a fair procedure. The blind classifier described in Section~\ref{sec:exp-blind} is another way to prevent procedural unfairness in the conditional-i.i.d.\xspace setting.
\section{Conclusions, Limitations and future work}
In this paper we challenged the suitability of the dominant i.i.d.\xspace setting for fair decision-making. Through our theoretical analysis in the Gaussian mixture model, and empirical evaluation on benchmark datasets, we hope to have demonstrated the suitability of an alternate formulation: the conditional-i.i.d.\xspace setting. We also instantiated the proposed conditional-i.i.d.\xspace model with two different procedures that improve ``fairness'' by conditioning out undesirable effects: first, by conditioning directly on sensitive attributes, and, second, by clustering samples into groups and conditioning on cluster membership.
Our work opens many avenues for exciting future work. One observation is that, in social applications, we have the luxury of modelling in the low-dimensional regime of feature space: there are 17 categorical features in folktables, and on the order of 200k samples; in COMPAS, there are 13 features, and on the order of 5k samples. The conditional-i.i.d.\xspace setting is suitable for social domains such as the tasks evaluated here precisely because we are in this low-dimensional regime. There is interesting future work to be done to investigate the efficacy of the conditional-i.i.d.\xspace model beyond tabular data, for example,
on image data with thousands of pixels, especially in medical contexts, where the number of features can be larger than the number of labelled samples.
Furthermore, in our experiments we showed that most of the conditional models are competitive with models that have seen the full dataset. Notably, even intersectionally group-specific models like the
sex$\_$race$\_$dis$\_$dis and sex$\_$race$\_$dis$\_$priv estimators --- which were trained on only 17.7$\%$ and 33.8$\%$ of folktables, and 49.1$\%$ and 32.1$\%$ of COMPAS, respectively --- are highly competitive. Interestingly, these are also the test groups on which most models perform the poorest! This is extremely telling about the informativeness of samples from the intersectionally disadvantaged groups. The deceptively simple point we demonstrated is that samples from marginalized groups are more likely to form the predictive margin. There is interesting future work to be done to connect this bias-variance analysis with margin analysis in the conditional-i.i.d.\xspace setting.
\section{Empirical Analysis}
\label{sec:experiments}
\subsection{Experimental Set-Up}
\label{sec:exp-setup}
\textbf{Datasets.} We used two fair-ml benchmark datasets in our evaluation, \href{https://github.com/zykls/folktables#1}{folktables} and \href{https://github.com/propublica/compas-analysis}{COMPAS}.
\begin{table*}[t!]
\centering
\small
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& sex$\_$race$\_$priv$\_$priv& sex$\_$race$\_$priv$\_$dis& sex$\_$race$\_$dis$\_$priv & sex$\_$race$\_$dis$\_$dis& sex$\_$priv& sex$\_$dis& race$\_$priv& race$\_$dis \\
\hline
Full & 0.322 & 0.161 & 0.338 & 0.177 & 0.484 & 0.516 & 0.661 & 0.339 \\
\hline
Group1 & 0.317 & 0.201 & 0.295 & 0.186 & 0.519 & 0.481 & 0.612 & 0.388 \\
\hline
Group2 & 0.291 & 0.248 & 0.227 & 0.234 & 0.539 & 0.461 & 0.518 & 0.482 \\
\hline
Group3 & 0.320 & 0.118 & 0.369 & 0.192 & 0.438 & 0.561 & 0.689 & 0.311 \\
\hline
Group4 & 0.353 & 0.114 & 0.404 & 0.128 & 0.468 & 0.532 & 0.757 & 0.243 \\
\hline
\end{tabular}
\caption{Demographic composition of the folktables dataset (left to right): sex$\_$race$\_$priv$\_$priv is white men, sex$\_$race$\_$priv$\_$dis is non-white men, sex$\_$race$\_$dis$\_$priv is white women, sex$\_$race$\_$dis$\_$dis is non-white women, sex$\_$priv is men, sex$\_$dis is women, race$\_$priv is whites, and race$\_$dis is non-whites. Full is reported on the entire test set. Groups 1-4 are the groups assigned from unsupervised clustering.}
\label{tab:folk-info}
\end{table*}
Folktables~\citep{DBLP:conf/nips/DingHMS21} is constructed from census data from 50 US states for the years 2014-2018. We report results on the ACSEmployment task: a binary classification task of predicting whether an individual is employed. We report our results on data from Georgia from 2016, but observe consistent trends on different states and years. The dataset has 16 covariates, including age, schooling, and disability status, and contains about 200k samples.
COMPAS~\citep{compas_propublica} is perhaps the most influential dataset in fair-ml, released for public use by ProPublica as part of their seminal report titled ``Machine Bias.'' COMPAS forms a binary classification task to predict violent recidivism.
Covariates include sex, age, and information on prior criminal justice involvement.
We use the version of COMPAS supported by \href{https://fairlearn.org/v0.4.6/auto_examples/plot_binary_classification_COMPAS.html}{fairlearn}. Fairlearn loads the dataset pre-split into training and test. We merge them into a single dataset and then perform different random splits. The full dataset has 5,278 samples.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{icml2023/compas_sensitive_dtc_groups.png}
\caption{Conditioning on sensitive attributes, COMPAS: Test performance of different models broken down by test subgroup.}
\label{fig:compas_groups}
\end{figure*}
\textbf{Protected Groups.} We define binary protected groups with respect to two features, sex and race. Males are the privileged ($priv$) group in folktables, while females are the privileged group in COMPAS. Whites are the privileged group in both folktables and COMPAS. We also look at intersectional groups constructed from sex and race: for example, (male,white) is the priv$\_$priv group in folktables.
The proportion of demographic groups in folktables and COMPAS is reported in Tables \ref{tab:folk-info} and~ \ref{tab:compas-info}, respectively.
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{icml2023/folktables_sensitive_dtc_groups.png}
\vspace{-0.75cm}
\caption{Conditioning on sensitive attributes, folktables: Test performance of different models broken down by test subgroup.}
\label{fig:folk_groups}
\end{figure*}
\textbf{Training.} We fit the following models on a training set $\mathcal{D} (X^i,Y^i)$:
\begin{enumerate}
\item Overall i.i.d.\xspace model is trained on the entire dataset, treating it as sampled from a single distribution.
\begin{equation}
\hat{f}_{overall}(\mathcal{D}) := \mathop{\mathrm{argmin}}_{f \in \mathcal{F}} \mathbb{E}_{\mathcal{D}}[\ell (f(X),Y)]
\end{equation}
\begin{equation}
\hat{y}_{iid}(X) = \hat{f}_{overall}(X_{relevant}, X_{protected})
\end{equation}
\item Conditional-i.i.d.\xspace models: We train two models, each on one subset of the data: the $priv$ model is fit on samples only from the privileged group. For example, the sex$\_$priv model in experiments on folktables denotes the estimator that was fit on samples only from male respondents to the census.
Conversely, the $dis$ model only sees samples from the disadvantaged group during training. For example, the race$\_$dis model corresponds to the estimator fit on samples from non-white applicants. We apply the same procedure for intersectional groups. For example, the sex$\_$race$\_$dis$\_$dis model on COMPAS corresponds to the estimator fit on samples from male, non-white defendants.
\begin{equation}
\hat{f}_{priv}(\mathcal{D}) := \mathop{\mathrm{argmin}}_{f \in \mathcal{F}} \mathbb{E}_{\mathcal{D}}[\ell (f(X^{priv}),Y^{priv})]
\end{equation}
\begin{equation}
\hat{f}_{dis}(\mathcal{D}) := \mathop{\mathrm{argmin}}_{f \in \mathcal{F}} \mathbb{E}_{\mathcal{D}}[\ell (f(X^{dis}),Y^{dis})]
\end{equation}
\begin{equation}
\begin{aligned}
\hat{y}_{ciid}(X) = \hat{f}_{priv}(X_{relevant}).\mathbbm{1} [X_{protected}=x^*] \\ + \hat{f}_{dis}(X_{relevant}).\mathbbm{1} [X_{protected} \neq x^*]
\end{aligned}
\end{equation}
We will also look at these models in isolation, i.e.,\xspace if we applied a single conditional model to the entire population:
\begin{equation}
\hat{y}_{priv}(X) = \hat{f}_{priv}(X_{relevant})
\end{equation}
\begin{equation}
\hat{y}_{dis}(X) = \hat{f}_{dis}(X_{relevant})
\end{equation}
\end{enumerate}
\textbf{Model Selection and Testing.}
\label{sec:model-training}
For each experiment, in one run we randomly split the data into train:test:validation (80:10:10), fit all the models described above, and report several predictive metrics: accuracy, true positive rate (TPR), and selection rate, on the test set. We chose these metrics because fairness measures are usually composed as differences and ratios of these performance metrics computed on different demographic groups in the test set~\cite{hardt_EOP2016, dwork_awareness, chouldechova_impossibility, Kleinberg_impossibility}. We use the validation set to tune hyperparameters once for each model type, for each dataset. Notably, the only difference between the models we compare is the sub-population of the dataset that they see during training. Everything else, crucially including the model type (architecture and hyperparameters), is held fixed. For each experiment we conduct 18 runs, each with a different random split of the dataset. We use scikit-learn's implementations of different predictors. We experimented with different model types, namely LogisticRegression, MLPClassifier, DecisionTreeClassifier and KNeighborsClassifier and observed comparable results across all. We report results from the DecisionTreeClassifier here.
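For concreteness, the following is a minimal scikit-learn sketch of this training and routing procedure on synthetic data. The synthetic features, the group indicator \texttt{g}, and the fixed \texttt{max\_depth} are illustrative stand-ins; they do not reproduce the preprocessing or tuned hyperparameters of our actual experiments.
\begin{verbatim}
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
g = rng.random(n) < 0.5                   # True iff sample is in the priv group
X = np.c_[rng.normal(size=(n, 3)), g]     # X_relevant (3 cols) plus X_protected
y = (X[:, 0] + 0.5 * g
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, g, test_size=0.2, random_state=0)

# Overall (i.i.d.) model: sees the full training set, including X_protected.
overall = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)

# Conditional-i.i.d. models: one per group, fit on X_relevant only.
f_priv = DecisionTreeClassifier(max_depth=5).fit(X_tr[g_tr][:, :3], y_tr[g_tr])
f_dis = DecisionTreeClassifier(max_depth=5).fit(X_tr[~g_tr][:, :3], y_tr[~g_tr])

# c-i.i.d. prediction: route each test sample to the model of its group.
y_hat_ciid = np.where(g_te,
                      f_priv.predict(X_te[:, :3]),
                      f_dis.predict(X_te[:, :3]))
\end{verbatim}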
\subsection{Experimental Results}
We run two sets of experiments: the first using training groups constructed on the basis of sensitive attributes (sex, race and sex$\_$race), and the second by clustering the samples into groups and constructing training groups based on cluster membership (blind to sensitive information).
We present our results in two ways: we first report the performance of all models on different test subgroups. From a fairness perspective it is important that models perform comparably well (for our chosen metric) on all demographic groups in the test set. Next, we compare the performance of different models on the overall test set.
\subsubsection{Conditioning on sensitive attributes}
\label{sec:exp-sensitive}
The accuracy, TPR, and selection rate of different models on COMPAS and folktables are reported in Figures~\ref{fig:compas_groups} and~\ref{fig:folk_groups}. Colors denote different models, and the $x$-axis reports performance broken down by test subgroup. We report several metrics on the overall test set in Figure \ref{fig:compas_all} (COMPAS) and \ref{fig:folk_all} (folktables). See Appendix~\ref{sec:appendix-sensitive} for results with other metrics.
Our empirical results demonstrate that both the overall performance and the per-group performance of group-specific models are comparable to those of a single model trained on the entire training set (see Figure~\ref{fig:folk_groups} for folktables and Figure \ref{fig:compas_groups} for COMPAS). The results suggest that if samples are coming from two (or possibly more) distributions, there is a significant overlap between their supports --- models trained on only one subset of the data (e.g.,\xspace race$\_$priv, trained only on white samples) perform competitively with models that see both groups during training (e.g.,\xspace the overall model), even on test groups that they never saw during training (e.g.,\xspace race$\_$dis, which corresponds to all non-white test samples). Furthermore, the conditional models are trained with at most half of the data compared to the overall model, and yet perform competitively --- a substantial computational gain. The proportions of samples from different groups are summarized in Tables~\ref{tab:folk-info} and~\ref{tab:compas-info}, and are indicative of the relative training set sizes of different conditional estimators.
From a fairness perspective, we desire that the model perform equally well on different demographic subgroups in the data, and disparities in performance metrics (such as accuracy, FPR, and selection rate) are quantified as measures of model unfairness. Conditional estimators trained only on samples from the \emph{dis} group are more ``fair'' than i.i.d.\xspace estimators, by construction: the race$\_$dis, sex$\_$dis and sex$\_$race$\_$dis$\_$dis models are the top-performing models on their respective demographic groups in the test set, as seen in Figure \ref{fig:folk_groups} (folktables) and Figure \ref{fig:compas_groups} (COMPAS). Additionally, these models are competitive even on \emph{priv} test groups that they did not see during training, and perform only marginally worse than the overall model. Hence, the disparity in the performance of these conditional models on \emph{dis} and \emph{priv} groups is smaller than that of the overall model, and we get the desired improvement in \emph{fairness}.
\subsubsection{Conditioning on cluster assignment (blind to protected group membership)}
\label{sec:exp-blind}
We also evaluate the efficacy of a conditional-i.i.d.\xspace model that is blind to protected group membership. In order to do so, we first cluster all the data points into a suitable number of groups and then train models after conditioning on cluster membership. For example, the model for Group1 is the estimator that only saw samples from the first cluster during training.
Notably, the choice of the number of clusters is a hyperparameter of this blind conditional model. We find that choosing 3 and 4 clusters for COMPAS and folktables, respectively, creates clusters of a reasonable size and with reasonable coverage of protected groups. The proportion of protected groups in each cluster is reported in Tables~\ref{tab:compas-info} (COMPAS) and~\ref{tab:folk-info} (folktables). For COMPAS, Group1, Group2 and Group3 constitute 56$\%$, 33.6$\%$ and 10.4$\%$ of the training dataset, respectively. For folktables, Groups 1-4 constitute 20.1$\%$, 22.1$\%$, 23.3$\%$, and 34.5$\%$ of the training data, respectively.
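For concreteness, the following is a minimal sketch of the blind conditioning step on synthetic data; $k$-means is an illustrative choice of clustering algorithm for this sketch, and the synthetic features and labels are placeholders.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_blind = rng.normal(size=(1000, 5))   # features, sensitive attributes excluded
y = (X_blind[:, 0] > 0).astype(int)    # synthetic binary labels

k = 3                                  # number of clusters: a hyperparameter
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_blind)

# One conditional model per cluster; at test time a sample is routed to the
# model of its assigned cluster.
group_models = {
    c: DecisionTreeClassifier().fit(X_blind[clusters == c], y[clusters == c])
    for c in range(k)
}
\end{verbatim}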
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{icml2023/folktables_blind_dtc_overall.png}
\vspace{-0.75cm}
\caption{Conditioning on cluster membership (blind), folktables: Test performance of different models on the overall test set.}
\label{fig:folk_clusters_all}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{icml2023/compas_blind_dtc_overall.png}
\vspace{-0.75cm}
\caption{Conditioning on cluster membership (blind), COMPAS: Test performance of different models on the overall test set.}
\label{fig:compas_clusters_all}
\end{figure*}
As before, we report several metrics on the overall test set in Figure \ref{fig:folk_clusters_all} (folktables) and Figure \ref{fig:compas_clusters_all} (COMPAS). We report additional metrics, as well as a breakdown by demographic groups in the test set, in Appendix~\ref{sec:appendix-blind}.
The results of this experiment demonstrate the efficacy of a race-blind and sex-blind classifier in the conditional-i.i.d.\xspace setting. However, we see a slightly different mechanism leading to gains in performance: unsupervised clustering before conditioning allows us to identify and isolate spurious samples. The Group1 estimator (in green) in Figure~\ref{fig:compas_clusters_groups} (COMPAS) and the Group1 estimator (in blue) in Figure~\ref{fig:folk_clusters_groups} (folktables) are clearly trained on spurious samples. In practice, conditional-i.i.d.\xspace allows us to identify, and thereby isolate, such spurious samples in the training set.
\section{Motivation}
\label{sec:intro}
In this paper we challenge the dominant modelling paradigm in fair machine learning, namely, that data samples are \emph{independent and identically distributed} (i.i.d.\xspace) from an underlying data generating process. Instead, we propose to construct fair estimators under conditional-i.i.d.\xspace assumptions.
As the name suggests, conditional-i.i.d.\xspace challenges the assumption that all samples come from the same distribution (i.e.,\xspace that they are identically distributed).
\begin{figure*}[h!]
\centering
\includegraphics[width=\linewidth]{icml2023/compas_sensitive_dtc_overall.png}
\vspace{-0.75cm}
\caption{Conditioning on sensitive attributes, COMPAS: Test performance of different models on the overall test set.}
\label{fig:compas_all}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=\linewidth]{icml2023/folktables_sensitive_dtc_overall.png}
\vspace{-0.75cm}
\caption{Conditioning on sensitive attributes, folktables: Test performance of different models on the overall test set.}
\label{fig:folk_all}
\end{figure*}
We are motivated by the widely-held sociological view
that the data from marginalized social and demographic groups tends to be ``noisier'' than the data from privileged social groups~\cite{kappelhof2017,schelter2019fairprep}.
In alignment with the \emph{WAE} worldview~\cite{friedler_impossibility}, we posit that, while people's
qualifications are in fact drawn from the same distribution (equal natural talent), social inequality affects different demographic groups differently (unequal treatment).
The composite effect of equal natural talent and unequal treatment results in distributions that are no longer identical.
The intuition is then that, since different social groups are assumed to be drawn from different distributions, we can improve on the bias-variance trade-offs of a single model that treats the data as i.i.d.\xspace by training multiple group-specific conditional models.
In this paper, we formally and empirically study the bias-variance trade-off under conditional-i.i.d.\xspace. Notably, it may be impractical --- or even illegal --- to train models for different demographic groups and then select a model to use at decision time based on an individual's group membership.\footnote{In the U.S. this is illegal in many domains based on the doctrine of disparate treatment, see \url{https://en.wikipedia.org/wiki/Disparate_treatment}.} We show that understanding the bias-variance trade-off under conditional-i.i.d.\xspace helps select a specific \emph{single model} to use at decision time, and, furthermore, that the best-performing model is not always the one that is trained under i.i.d.\xspace (i.e.,\xspace on the entire training dataset). Our results are highlighted in Figures~\ref{fig:compas_all} and~\ref{fig:folk_all}: Group-specific models that are only trained on samples from intersectionally disadvantaged groups, which form 17.7$\%$ of the folktables dataset~\cite{DBLP:conf/nips/DingHMS21} and 49.1$\%$ of the COMPAS dataset~\cite{compas_propublica}, respectively, are highly competitive on the overall test set, and outperform i.i.d.\xspace models on disadvantaged test groups.
{\bf In this paper, we make the following contributions:}
(1) We propose a new modelling paradigm for fair machine learning, namely, the conditional-i.i.d.\xspace (Section~\ref{sec:ciid}).\\
(2) We theoretically contrast the bias-variance trade-offs made in the dominant i.i.d.\xspace setting and in the new conditional-i.i.d.\xspace setting using the example of mean estimation in the Gaussian Mixture Model (Section \ref{sec:theory}). We find that the i.i.d.\xspace and conditional-i.i.d.\xspace models lie on opposite ends of the bias-variance spectrum: the i.i.d.\xspace model has low variance, but potentially unbounded bias in poorly specified settings, whereas the conditional-i.i.d.\xspace model is unbiased (on the demographic group it saw during training) by construction, but has large variance. However, in some decision-making contexts --- especially in critical social contexts --- sacrificing estimator variance to gain unbiasedness on protected groups of interest is a worthy trade-off.\\
(3) We empirically evaluate our approach on two real-world benchmarks, and show that conditional-i.i.d.\xspace models are competitive with i.i.d.\xspace models (Section \ref{sec:experiments}). We demonstrate two procedures that improve ``fairness'': (a) conditioning on the sensitive attributes and (b) clustering samples into groups and conditioning on cluster membership.
\section{Gaussian Mixture Model Mean Estimation}
\label{sec:theory}
\begin{table*}[t!]
\centering
\small
\caption{Summary of bias-variance trade-offs of different mean estimators}
\centering
\begin{tabular}{||c |c |c |c ||}
\hline
Model & Bias on $priv$ & Bias on $dis$ & Variance \\ [0.5ex]
\hline\hline
overall (i.i.d.\xspace) & $p_{dis}.\mathbb{E}[\Delta \mu]$ & $p_{priv}.\mathbb{E}[\Delta \mu]$ & $\frac{1}{n} [\frac{n_{priv}}{n}.\sigma^2_{priv} + \frac{n_{dis}}{n}.\sigma^2_{dis}]$ \\
\hline
ensemble & $\frac{1}{2} \mathbb{E} [\Delta \mu]$ & $\frac{1}{2} \mathbb{E} [\Delta \mu]$ & $\frac{1}{4} (\frac{\sigma^2_{priv}}{n_{priv}} + \frac{\sigma^2_{dis}}{n_{dis}})$ \\
\hline
disprivileged & $\mathbb{E} [\Delta \mu]$ & 0 & $\sigma^2_{dis}/n_{dis}$\\
\hline
conditional-i.i.d.\xspace & 0 & 0 & $\sigma^2_{priv}/n_{priv}$ (on priv), $\sigma^2_{dis}/n_{dis}$ (on dis) \\
\hline
\end{tabular}
\label{table:mean-results}
\end{table*}
A natural way to think about the conditional-i.i.d.\xspace setting is through the Gaussian mixture model (GMM), where the latent variable that determines which component of the mixture a sample is drawn from is no longer latent/unobserved, but is one of the covariates, specifically $X_{protected}$. We will now use mean estimation in the GMM to build intuition about the bias-variance trade-offs under conditional-i.i.d.\xspace.
\begin{figure}[b!]
\centering
\includegraphics[width=8cm]{Mean_estimation.png}
\caption{Mean estimation under conditional-i.i.d.\xspace.}
\label{fig:mean_estimation}
\end{figure}
\subsection{Data Generating Process}
Let there be two mixture components, or groups, in the GMM: the privileged group $X^{priv}$, and the disadvantaged group $X^{dis}$, defined as follows:
$$ X^{priv} := X | X_{protected}=x^* \sim \mathcal{N}(\mu_{priv}, \sigma^2_{priv}) $$
$$ X^{dis} := X | X_{protected} \neq x^* \sim \mathcal{N}(\mu_{dis}, \sigma^2_{dis}) $$
Assume we get $n$ samples from the data generating process, and for each sample we observe a scalar value $X$ sampled as described, along with a binary-valued protected attribute $X_{protected}$.
The set-up is shown graphically in Figure \ref{fig:mean_estimation}.
\subsection{Defining Estimators}
Let's now fit different mean estimators under different modelling assumptions, namely, i.i.d.\xspace and conditional-i.i.d.\xspace.
\subsubsection{Overall model (IID Assumption)}
\label{sec:iid-model}
In the i.i.d.\xspace setting, we will fit a single model on the entire dataset. Here, the MLE is the sample average.
\begin{equation}
\hat{\mu}_{overall} = \bar{X} = \frac{1}{n} \sum_{i=1}^{n} X^i
\end{equation}
\subsubsection{Group-specific models (C-IID Assumption)}
\label{sec:ciid-model}
In the conditional-i.i.d.\xspace setting, we assume that the data is a mixture coming from two different distributions, and that we know a priori which of the two distributions each sample is drawn from. To estimate the means of both distributions, we fit separate estimators for each mixture component. We assume that there are $n$ samples in total, and that $n_{priv}$ and $n_{dis}$ samples are drawn from $X^{priv}$ and $X^{dis}$, respectively.
\begin{equation}
\hat{\mu}_{priv} = \bar{X}^{priv} = \frac{1}{n_{priv}} \sum_{i=1}^{n} X^i.\mathbbm{1} [X^i_{protected}= x^*]
\end{equation}
\begin{equation}
\hat{\mu}_{dis} = \bar{X}^{dis} = \frac{1}{n_{dis}} \sum_{i=1}^{n} X^i.\mathbbm{1} [X^i_{protected}\neq x^*]
\end{equation}
\begin{equation}
\hat{\mu}_{ciid} = \hat{\mu}_{priv}.\mathbbm{1} [X_{protected}=x^*] + \hat{\mu}_{dis}.\mathbbm{1} [X_{protected}\neq x^*]
\end{equation}
We rewrite the overall model in terms of group-specific models.
Denoting $p_{priv} = \frac{n_{priv}}{n}$ and $p_{dis} = \frac{n_{dis}}{n}$:
\begin{equation}
\hat{\mu}_{overall} = \bar{X} =
p_{priv}.\hat{\mu}_{priv} + p_{dis}.\hat{\mu}_{dis}
\end{equation}
\subsubsection{Ensemble}
\label{sec:ensemble-model}
For comparison, we will also look at a simple ensemble model that averages the outputs of group-specific estimators.
\begin{equation}
\hat{\mu}_{ensemble} = \frac{\hat{\mu}_{priv} + \hat{\mu}_{dis}}{2}
\end{equation}
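The bias-variance behavior of these estimators, summarized in Table~\ref{table:mean-results} and analyzed below, can be sanity-checked with a small Monte Carlo simulation. The following Python sketch uses arbitrary illustrative parameter values (they are not taken from any dataset) and reports signed bias and variance on the $dis$ group.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mu_priv, mu_dis = 0.0, 2.0          # illustrative group means
sigma_priv, sigma_dis = 1.0, 3.0    # illustrative group standard deviations
n_priv, n_dis = 800, 200

est = {"overall": [], "priv": [], "dis": [], "ensemble": []}
for _ in range(5000):
    x_priv = rng.normal(mu_priv, sigma_priv, n_priv)
    x_dis = rng.normal(mu_dis, sigma_dis, n_dis)
    m_priv, m_dis = x_priv.mean(), x_dis.mean()
    est["overall"].append(np.concatenate([x_priv, x_dis]).mean())
    est["priv"].append(m_priv)
    est["dis"].append(m_dis)
    est["ensemble"].append(0.5 * (m_priv + m_dis))

for name, vals in est.items():
    vals = np.asarray(vals)
    # The summary table reports bias magnitudes; this prints the signed bias.
    print(f"{name:9s} bias on dis: {vals.mean() - mu_dis:+.3f} "
          f"variance: {vals.var():.4f}")
\end{verbatim}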
\begin{table*}[t!]
\centering
\small
\caption{Demographic composition of the compas dataset (left to right): sex$\_$race$\_$priv$\_$priv is white women, sex$\_$race$\_$priv$\_$dis is non-white women, sex$\_$race$\_$dis$\_$priv is white men, sex$\_$race$\_$dis$\_$dis is non-white men, sex$\_$priv is women, sex$\_$dis is men, race$\_$priv is whites, and race$\_$dis is non-whites. Full is reported on the entire test set. Groups 1-3 are the groups assigned from unsupervised clustering.}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& sex$\_$race$\_$priv$\_$priv& sex$\_$race$\_$priv$\_$dis& sex$\_$race$\_$dis$\_$priv & sex$\_$race$\_$dis$\_$dis& sex$\_$priv& sex$\_$dis& race$\_$priv& race$\_$dis \\
\hline
Full & 0.083 & 0.105 & 0.321 & 0.491 & 0.188 & 0.812 & 0.404 & 0.596 \\
\hline
Group1 & 0.078 & 0.114 & 0.305 & 0.502 & 0.192 & 0.808 & 0.384 & 0.616 \\
\hline
Group2 & 0.132 & 0.105 & 0.351 & 0.412 & 0.237 & 0.763 & 0.483 & 0.517\\
\hline
Group3 & 0.027 & 0.059 & 0.144 & 0.769 & 0.087 & 0.913 & 0.172 & 0.828 \\
\hline
\end{tabular}
\label{tab:compas-info}
\end{table*}
\subsection{Analysis of Trade-offs}
The bias-variance trade-offs of these estimators on different parts of the input data ($priv$ or $dis$ samples)
are summarized in Table \ref{table:mean-results}. Here, $\Delta \mu = |\mu_{priv} - \mu_{dis}|$ is the mean difference. See Appendix \ref{sec:appendix-theory} for a derivation.
Of all the models we analyze, the overall (i.i.d.\xspace) and the conditional-i.i.d.\xspace models fall on opposite ends of the spectrum. The i.i.d.\xspace model has potentially unbounded bias: the bias grows as the mean difference grows.
Further, if we overfit to the majority distribution using a single model, and observe only very few samples from the minority distribution during testing, we can overlook how biased the i.i.d.\xspace model actually is, because the bias on samples from the majority class is low, and so the overall error will be low as well.
On the other hand, the conditional-i.i.d.\xspace model is unbiased by construction, but we pay for this in terms of large variance. However, we posit that in some contexts sacrificing estimator variance to gain unbiasedness on protected groups is a worthy trade-off. We elaborate on this further, and will validate this conjecture experimentally in Section~\ref{sec:experiments}.
Considering the trade-offs made by the $dis$ model (trained on disprivileged group samples) in Table \ref{table:mean-results}, we build an intuition for when variance can be helpful, especially for ``fair'' classification. Over-fitting to the $dis$ group has two benefits: Firstly, we directly reduce the error on this demographic group. This is morally desirable from a fairness perspective, since we want the error on the disprivileged group to be small, or at least comparable, to the error rates on privileged groups.
Secondly, the large variance of the conditional estimator can be a good thing: it allows us to still search for/land on ``other good values'' to estimate the mean of the second group (that it did not see during training).
This is simply not possible with the overall model, because it is biased towards the majority mixture component (i.e.,\xspace the privileged group) and also drastically reduces the variance, and with it any potentially corrective effect that variance could have had.
We see that the bias terms of all the models (whose bias is non-zero) depend on the mean difference $\Delta \mu$. This frames fair classification in the conditional-i.i.d.\xspace model as a task of domain adaptation, with the goal of fitting to the domain (i.e.,\xspace demographic group) that is most informative. We posit that this will be the group with the highest noise variance (i.e.,\xspace the disprivileged group). Indeed, as we will see in Section~\ref{sec:experiments},
the model trained on the disprivileged group will be unbiased for that group, and it will also have large enough variance to be able to adapt and make reasonably good predictions for the groups unseen during training.
As we will further discuss in Section~\ref{sec:discussion}, randomness is also morally neutral from the philosophical perspective, and so the effects of large variance can be morally more acceptable than the effects of systematic skew in predictions.
Group-specific estimators are (by construction) unbiased on the population on which they were trained, and their large variance allows them to perform reasonably well for unseen groups. Hence, the net-effect of the large variance of conditional estimators can be \emph{fairness-enhancing}.
|
{
"arxiv_id": "2302.08652",
"language": "en",
"timestamp": "2023-02-20T02:05:08",
"url": "https://arxiv.org/abs/2302.08652",
"yymm": "2302"
} | \subsubsection*{\bibname}}
\usepackage{multicol}
\usepackage{amssymb, multirow, paralist, color}
\usepackage{textcase, booktabs,makecell}
\usepackage{enumitem}
\def \mathbf{y} {\mathbf{y}}
\def \mathrm{E} {\mathrm{E}}
\def \mathcal{E} {\mathcal{E}}
\def \mathbf{x} {\mathbf{x}}
\def \mathbf{g} {\mathbf{g}}
\def \mathbf{v} {\mathbf{v}}
\def \mathcal{L} {\mathcal{L}}
\def \mathcal{D} {\mathcal{D}}
\def \mathbf{z} {\mathbf{z}}
\def \mathbf{u} {\mathbf{u}}
\def \mathcal{H} {\mathcal{H}}
\def \mathbf{w} {\mathbf{w}}
\def \mathbb{R} {\mathbb{R}}
\def \mathbf{m} {\mathbf{m}}
\def \boldsymbol\ell {\boldsymbol\ell}
\DeclareMathOperator{\diag}{diag}
\newtheorem{prop}{Proposition}
\def \mbox{regret} {\mbox{regret}}
\def \widehat{U} {\widehat{U}}
\def \mathcal{P} {\mathcal{P}}
\def {\textnormal{Exp}} {{\textnormal{Exp}}}
\def \textnormal{Exp}^{-1} {\textnormal{Exp}^{-1}}
\def \tilde{X}_{t-1} {\tilde{X}_{t-1}}
\def \Gamma_t {\Gamma_t}
\def \Gamma_{t+1} {\Gamma_{t+1}}
\def \tilde{X} {\tilde{X}}
\def \delta G {\delta G}
\def \mathcal{N}_{\delta M} {\mathcal{N}_{\delta M}}
\def \mathcal{N}_\delta {\mathcal{N}_\delta}
\def \mathcal{F} {\mathcal{F}}
\def \mathcal{G} {\mathcal{G}}
\def \mathcal{Q} {\mathcal{Q}}
\def \mathcal{W} {\mathcal{W}}
\def \mathcal{N} {\mathbb{N}}
\def \mathcal{A} {\mathcal{A}}
\def \mathbf{q} {\mathbf{q}}
\def \mathbf{v} {\mathbf{v}}
\def \mathcal{M} {\mathcal{M}}
\def \mathbf{c} {\mathbf{c}}
\def \widehat{p} {\widehat{p}}
\def \mathbf{d} {\mathbf{d}}
\def \mathbf{p} {\mathbf{p}}
\def \mathbf{q} {\mathbf{q}}
\def \bar{\d} {\bar{\mathbf{d}}}
\def \bar{d} {\bar{d}}
\def \mathcal{I} {\mathcal{I}}
\def \widetilde{x} {\widetilde{\mathbf{x}}}
\def \mathbf{f} {\mathbf{f}}
\def \mathbf{a} {\mathbf{a}}
\def \mathbf{b} {\mathbf{b}}
\def \widetilde{f} {\widetilde{\mathbf{f}}}
\def \widetilde{\b} {\widetilde{\mathbf{b}}}
\def \mathbf{h} {\mathbf{h}}
\def \mathcal{B} {\mathbf{B}}
\def \mathbb{B} {\mathbb{B}}
\def \widetilde{b} {\widetilde{b}}
\def \widetilde{f} {\widetilde{f}}
\def \widehat{G} {\widehat{G}}
\def \widehat{b} {\widehat{b}}
\def \widehat{f} {\widehat{f}}
\def \bar{\u} {\bar{\mathbf{u}}}
\def \bar{\v} {\bar{\mathbf{v}}}
\def \widetilde{\z} {\widetilde{\mathbf{z}}}
\def \widetilde{z} {\widetilde{z}}
\def \widehat{\g} {\widehat{\mathbf{g}}}
\def \widehat{\v} {\widehat{\mathbf{v}}}
\def \overline{P} {\overline{P}}
\def \mathcal{M} {\mathcal{M}}
\def \mathcal{N} {\mathcal{N}}
\def \widehat{\rho} {\widehat{\rho}}
\def \widehat{\h} {\widehat{\mathbf{h}}}
\def \mathcal{C} {\mathcal{C}}
\def \mathsf{grad}\kern 0.1em {\mathsf{grad}\kern 0.1em}
\def \mathsf{Hess}\kern 0.1em {\mathsf{Hess}\kern 0.1em}
\def \mathcal{o} {\mathcal{o}}
\def \mathbf{0} {\mathbf{0}}
\def \mathcal{V} {\mathcal{V}}
\def \mathbf{t} {\mathbf{t}}
\def \mathbf{m} {\mathbf{m}}
\def \widehat{\x} {\widehat{\mathbf{x}}}
\def \bar{\x} {\bar{\mathbf{x}}}
\def \widetilde{U} {\widetilde{U}}
\def \widetilde{\w} {\widetilde{\mathbf{w}}}
\def \widehat{T} {\widehat{T}}
\def \tilde{\mathcal{O}} {\tilde{\mathcal{O}}}
\def \widehat{\nabla} {\widehat{\nabla}}
\def \mathcal{K} {\mathcal{K}}
\def \mathcal{T} {\mathcal{T}}
\def \mathcal{F} {\mathcal{F}}
\def \widetilde{f}{\widetilde{f}}
\def \widetilde{x} {\widetilde{x}}
\def \mathcal{R} {\mathcal{R}}
\def \mathcal{V} {\mathcal{V}}
\def \bar{\Rt} {\bar{\mathcal{R}}}
\def \bar{\w} {\bar{\mathbf{w}}}
\def \widehat{f} {\widehat{f}}
\def \widehat{\w} {\widehat{\mathbf{w}}}
\def \widehat{\lambda} {\widehat{\lambda}}
\def \mathbf{e} {\mathbf{e}}
\def \mathcal{B} {\mathcal{B}}
\def \mathcal{X} {\mathcal{X}}
\def \mathbf{s} {\mathbf{s}}
\def \mathcal{S} {\mathcal{S}}
\def \D(\kappa_2) {\mathcal{D}(\kappa_2)}
\def \zeta_{\mathcal{M}} {\zeta_{\mathcal{M}}}
\def \xi_{\mathcal{M}} {\xi_{\mathcal{M}}}
\def \left(1+\frac{2c(\zet+1)}{\lambda}\right) {\left(1+\frac{2c(\zeta_{\mathcal{M}}+1)}{\lambda}\right)}
\def \Gamma {\Gamma}
\def \left\langle {\left\langle}
\def \right\rangle {\right\rangle}
\def \bar{\x} {\bar{\mathbf{x}}}
\def \lceil {\lceil}
\def \rceil {\rceil}
\def \left {\left}
\def \right {\right}
\def \sum_{t=1}^T {\sum_{t=1}^T}
\def \sum_{t=2}^T {\sum_{t=2}^T}
\def \kappa_l {\kappa_l}
\def \kappa_u {\kappa_u}
\newtheorem{Proposition}{Proposition}
\newtheorem{ass}{Assumption}
\newtheorem{thm}{Theorem}
\newtheorem{cor}{Corollary}
\newtheorem{lemma}{Lemma}
\newtheorem{defn}{Definition}
\newtheorem{remark}{Remark}
\DeclareMathOperator\arctanh{arctanh}
\DeclareMathOperator\arcosh{arcosh}
\DeclareMathOperator*{\Reg}{Regret}
\DeclareMathOperator*{\WAReg}{WA-Regret}
\DeclareMathOperator*{\SAReg}{SA-Regret}
\DeclareMathOperator*{\DReg}{D-Regret}
\DeclareMathOperator*{\poly}{poly}
\DeclareMathOperator*{\argmin}{argmin}
\DeclareMathOperator*{\argmax}{argmax}
\newcommand{\RomanNumeralCaps}[1]
{\MakeUppercase{\romannumeral #1}}
\usepackage{microtype}
\usepackage{graphicx}
\usepackage{booktabs}
\subsubsection*{\bibname}}
\def\textsc{Radar}\xspace{\textsc{Radar}\xspace}
\def\textsc{Radar}\textsubscript{v}\xspace{\textsc{Radar}\textsubscript{v}\xspace}
\def\textsc{Radar}\textsubscript{s}\xspace{\textsc{Radar}\textsubscript{s}\xspace}
\def\textsc{Radar}\textsubscript{b}\xspace{\textsc{Radar}\textsubscript{b}\xspace}
\SetKwComment{Comment}{/* }{ */}
\input{command.tex}
\usepackage{times}
\begin{document}
\begin{center}
{\bf{\Large{Minimizing Dynamic Regret on Geodesic Metric Spaces}}}
\vspace*{.2in}
{\large{
\begin{tabular}{ccc}
Zihao Hu$^{\dagger}$ &Guanghui Wang$^{\dagger}$ & Jacob Abernethy$^{\dagger}$ \\
\end{tabular}}}
\vspace*{.05in}
\begin{tabular}{c}
College of Computing$^{\dagger}$\\
Georgia Institute of Technology\\
Atlanta, GA 30339 \\
\texttt{\{zihaohu,gwang369,prof\}@gatech.edu} \\
\end{tabular}
\vspace*{.2in}
\date{}
\end{center}
\begin{abstract}%
In this paper, we consider the sequential decision problem where the goal is to minimize the \emph{general dynamic regret} on a complete Riemannian manifold. The task of offline optimization on such a domain, also known as a \emph{geodesic metric space}, has recently received significant attention. The online setting has received far less attention, and it has remained an open question whether the body of results that holds in the Euclidean setting can be transplanted to Riemannian manifolds, where new challenges (e.g., \emph{curvature}) come into play.
In this paper, we show how to obtain optimistic regret bounds on manifolds with non-positive curvature whenever improper learning is allowed, and propose an array of adaptive no-regret algorithms. To the best of our knowledge, this is the first work that considers general dynamic regret and develops ``optimistic'' online learning algorithms that can be employed on geodesic metric spaces.
\end{abstract}
\section{Introduction}
Online convex optimization (OCO) in Euclidean space is a well-developed area with numerous applications. In each round, the learner takes an action from a decision set, while the \textit{adversary} chooses a loss function. The long-term performance metric of the learner is (static) regret, which is defined as the difference between the learner's cumulative loss and the loss of playing the best-fixed decision in hindsight. As the name suggests, OCO requires both the losses and the decision set to be \emph{convex}. From the theoretical perspective, convex functions and sets are well-behaved objects with many desirable properties that are generally required to obtain tight regret bounds.
Typical algorithms in OCO, such as \textit{mirror descent}, determine how one should adjust parameter estimates in response to arriving data, typically by shifting parameters against the gradient of the loss. But in many cases of interest, the underlying parameter space is not only non-convex but non-Euclidean. The \textit{hyperboloid}, for example, arising from the solution set of a degree-two polynomial, is a Riemannian manifold that has garnered interest as a tool in tree-embedding tasks \citep{lou2020differentiating}. On such manifolds, we do have a generalized notion of convexity, known as \textit{geodesic convexity} \citep{udriste2013convex}. There are many popular problems of interest \citep{hosseini2015matrix,vishnoi2018geodesic,sra2018geodesically} where the underlying objective function is geodesically convex (gsc-convex) under a suitable Riemannian metric. But there has thus far been little research on how to do \textit{adaptive learning} in such spaces, or on when regret bounds are obtainable.
Let $\mathcal{N}$ be a gsc-convex subset of a geodesic metric space $\mathcal{M}$. In this paper, we consider the problem of minimizing the \emph{general dynamic regret} on $\mathcal{N}$, defined as
$$
\textstyle \text{D-Regret}_T := \sum_{t=1}^T f_t(\mathbf{x}_t)-\sum_{t=1}^T f_t(\mathbf{u}_t),
$$
where $\mathbf{x}_1, \ldots, \mathbf{x}_T \in \mathcal{N}$ is the sequence of actions taken by the learner, whose loss is evaluated relative to the sequence of ``comparator'' points $\mathbf{u}_1,\dots,\mathbf{u}_T\in\mathcal{N}$. There has been recent work establishing that sublinear regret is possible as long as $\mathcal{N}$ and the $f_t$'s are gsc-convex, for example using a Riemannian variant of Online Gradient Descent \citep{wang2021no}. But so far there are no such results that elicit better D-Regret using \textit{adaptive} algorithms.
What do we mean by ``adaptive'' in this context? In the online learning literature there have emerged three key quantities of interest in the context of sequential decision making: the \textit{comparator path length}, the \textit{gradient variation}, and the \textit{comparator loss}, defined respectively as:
\begin{eqnarray}
\textstyle P_T & := &\textstyle\sum_{t=2}^T d(\mathbf{u}_t,\mathbf{u}_{t-1}),\label{path} \\
\textstyle V_T & := & \textstyle\sum_{t=2}^T\sup_{\mathbf{x}\in\mathcal{N}}\|\nabla f_{t-1}(\mathbf{x})-\nabla f_t(\mathbf{x})\|^2, \nonumber \\
\textstyle F_T & := & \textstyle\sum_{t=1}^T f_t(\mathbf{u}_t). \nonumber
\end{eqnarray}
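To make these quantities concrete, the following toy computation evaluates them for scalar quadratic losses $f_t(x)=(x-c_t)^2$ on the real line (a flat manifold), with the comparators $\mathbf{u}_t$ chosen as the per-round minimizers; the numbers are arbitrary and purely illustrative.
\begin{verbatim}
import numpy as np

# Adversary's targets c_t for losses f_t(x) = (x - c_t)^2 (illustrative).
c = np.array([0.1, 0.4, 0.35, 0.9])
u = c.copy()                          # comparators: per-round minimizers

P_T = np.abs(np.diff(u)).sum()        # comparator path length
# For these losses, sup_x |f'_{t-1}(x) - f'_t(x)|^2 = 4 (c_t - c_{t-1})^2.
V_T = np.sum(4.0 * np.diff(c) ** 2)   # gradient variation
F_T = np.sum((u - c) ** 2)            # comparator loss (zero here)
print(P_T, V_T, F_T)
\end{verbatim}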
Let us start by considering regret minimization with respect to path length. While it has been observed that $O(\sqrt{(1+P_T)T})$ is optimal in a minimax sense~\citep{zhang2018adaptive}, a great deal of research for the Euclidean setting \citep{srebro2010smoothness,chiang2012online,rakhlin2013online} has shown that significantly smaller regret is achievable when any one of the above quantities is small. These are not just cosmetic improvements either: many fundamental applications of online learning rely on these adaptive methods and bounds. We give a thorough overview in Section~\ref{sec:related}.
\begin{table*}[t]\label{table-1}
\centering
\caption{Summary of bounds. $\delta$ describes the discrepancy between the decision set and the comparator set. We define $\zeta$ in Def.~\ref{def1}. The column ``Match?'' specifies whether the bound matches its Euclidean counterpart. Let $B_T\coloneqq\min\{V_T,F_T\}$.}
\begin{tabular}{cccc}
\toprule[1pt]
Algorithm& Type & Dynamic regret & Match? \\
\hline
\textsc{Radar}\xspace& Standard & $O(\sqrt{{\zeta}(1+P_T)T})$ & Yes \\
& Minimax lower bound &$\Omega(\sqrt{(1+P_T)T})$ & Yes \\
\textsc{Radar}\textsubscript{v}\xspace& Gradient-variation & $O(\sqrt{{\zeta}(\frac{1+P_T}{\delta^2}
+V_T)(P_T+1)})$ & Yes \\
\textsc{Radar}\textsubscript{s}\xspace& Small-loss & $O(\sqrt{{\zeta}((1+P_T){\zeta}+F_T)(P_T+1)})$ & Yes \\
\textsc{Radar}\textsubscript{b}\xspace& Best-of-both-worlds & {\small $O\left(\sqrt{\zeta( P_T(\zeta+\frac{1}{\delta^2})+B_T+1)(P_T+1)+ B_T\ln T}\right)$} & No \\
\bottomrule[1pt]
\end{tabular}
\end{table*}
The goal of the present paper is to translate to the Riemannian setting an array of adaptive regret algorithms and prove corresponding bounds. We propose a family of algorithms which we call \textsc{Radar}\xspace, for \underline{R}iemannian \underline{a}daptive \underline{d}yn\underline{a}mic \underline{r}egret. The three important variants of \textsc{Radar}\xspace are \textsc{Radar}\textsubscript{v}\xspace, \textsc{Radar}\textsubscript{s}\xspace, and \textsc{Radar}\textsubscript{b}\xspace; we prove regret bounds for each, summarized in Table \ref{table-1}. We allow \emph{improper learning} for the gradient-variation bound, which means the player can choose $\mathbf{x}_1,\dots,\mathbf{x}_T$ from a slightly larger set $\mathcal{N}_{\delta G}$ (formally defined in Definition \ref{def2}).
As a general matter, convex constraints on a Riemannian manifold introduce new difficulties in optimization that are not typically present in the Euclidean case, and there has been limited work on addressing these.
To the best of our knowledge, there are only three papers
considering how to incorporate constraints on manifolds, and these all make further assumptions on either the curvature or the diameter of the feasible set. \cite{martinez2022global} only applies to hyperbolic and spherical spaces. \cite{criscitiello2022negative} works for complete Riemannian manifolds with sectional curvature in $[-K,K]$, but requires the diameter of the decision set to be at most $\textstyle O(\frac{1}{\sqrt{K}})$.
\cite{martinez2022accelerated} mainly works for locally symmetric Hadamard manifolds. Our paper is the first to handle the projective distortion in the \emph{online} setting: our approach applies to all Hadamard manifolds, as long as improper learning is allowed, without imposing further constraints on the diameter or the curvature.
Obtaining adaptive regret guarantees in the Riemannian setting is by no means a trivial task, as the new geometry introduces various additional technical challenges. Here is but one example: whereas the cost of a (Bregman) projection into a feasible region can be controlled using a generalized ``Pythagorean'' theorem in the Euclidean setting, this same issue becomes more difficult on a manifold as we encounter geometric distortion due to curvature. To better appreciate this, for a Hadamard manifold $\mathcal{M}$, assume the projection of $\mathbf{x}\in \mathcal{M}$ onto a convex subset $\mathcal{N}\subset\mathcal{M}$ is ${\mathbf{z}}$. While it is true that for any $\mathbf{y}\in\mathcal{N}\setminus \{\mathbf{z}\}$ the angle between geodesics $\overline{\mathbf{z}\mathbf{x}}$ and $\overline{\mathbf{z}\mathbf{y}}$
is obtuse, this property is only relevant at the tangent space of $\mathbf{z}$, yet we need to analyze gradients at the tangent space of ${\mathbf{x}}$. The use of \textit{parallel transport} between $\mathbf{x}$ and $\mathbf{z}$ unavoidably incurs extra distortion and could potentially lead to $O(T)$ regret.
The last challenge comes from averaging on manifolds. For example, many adaptive OCO algorithms rely on the \textit{meta-expert} framework \citep{van2016metagrad,zhang2018adaptive}, that runs several learning algorithms in parallel and combines them through appropriately-weighted averaging. There is not, unfortunately, a single way to take convex combinations in a geodesic metric space, and all such averaging schemes need to account for the curvature of the manifold and incorporate the associated costs. We finally find the \emph{Fr\'echet mean} to be a desirable choice, but the analysis must proceed with care.
The key contributions of our work can be summarized as follows:
\begin{itemize}
\item We develop the optimistic mirror descent (OMD) algorithm on Hadamard manifolds\footnote{We focus on Hadamard manifolds in the main paper and extend the guarantee to CAT$(\kappa)$ spaces in Appendix \ref{cat-proof}.} in the online improper learning setting. Interestingly, we also show that Optimistic Hedge, a variant of OMD, works for gsc-convex losses. We believe these tools may have significant applications to research in online learning and Riemannian optimization.
\item We combine our analysis on OMD with the \emph{meta-expert framework} \citep{van2016metagrad,zhang2018adaptive} to get several adaptive regret bounds, as shown in Table \ref{table-1}.
\item We develop the minimax dynamic regret lower bound, which renders our $O(\sqrt{\zeta(1+P_T)T})$ bound to be tight up to the geometric constant $\zeta$.
\end{itemize}
\section{Preliminaries}
In this section, we introduce background knowledge of OCO and Riemannian manifolds.
\paragraph{OCO in Euclidean space.} We first formally describe OCO in Euclidean space. For each round $t=1,\dots,T$, the learner makes a decision $\mathbf{x}_t\in\mathcal{X}$ based on historical losses $f_1,\dots,f_{t-1}$ where $\mathcal{X}$ is a convex decision set, and then the adversary reveals a convex loss function $f_t$. The goal of the learner is to minimize the difference between the cumulative loss and that of the best-fixed decision in hindsight: $
\text{Regret}_T=\sum_{t=1}^Tf_t(\mathbf{x}_t)-\min_{\mathbf{x}\in\mathcal{X}}\sum_{t=1}^Tf_t(\mathbf{x}),
$
which is usually referred to as the \emph{static regret}, since the comparator is a fixed decision.
In the literature, there exists a large number of algorithms \citep{hazan2016introduction} for minimizing the static regret. However, the underlying assumption of static regret is that the adversary's behavior does not change drastically, which can be unrealistic in real applications. To address this issue, dynamic regret has been proposed, which is defined as \citep{zinkevich2003online}
$$
\textstyle\text{Regret}_T(\mathbf{u}_1,\dots,\mathbf{u}_T)=\sum_{t=1}^Tf_t(\mathbf{x}_t)-\sum_{t=1}^Tf_t(\mathbf{u}_t),
$$
where $\mathbf{u}_1,\dots,\mathbf{u}_T\in\mathcal{X}$ is a comparator sequence. Dynamic regret has received considerable attention recently \citep{besbes2015non,jadbabaie2015online,mokhtari2016online,NIPS:2017:Zhang,zhang2018adaptive,zhao2020dynamic,wan2021projection,L4DC:2021:Zhao,pmlr-v134-baby21a} due to its flexibility. However, dynamic regret can be as large as $O(T)$ in general. Thus, regularizations need to be imposed on the comparator sequence to ensure no-regret online learning. A common assumption \citep{zinkevich2003online} is that the path-length (see Equation \ref{path}) of the comparator sequence is bounded. We refer to the corresponding dynamic regret as \textit{general dynamic regret} because any assignment of $\mathbf{u}_1,\dots,\mathbf{u}_T$ subject to the path-length constraint is feasible.
\textbf{Riemannian manifolds.} Here, we give a brief overview of Riemannian geometry, but this is a complex subject, and we refer the reader to, e.g., \cite{petersen2006riemannian} for a full treatment. A Riemannian manifold $(\mathcal{M},g)$ is a smooth manifold $\mathcal{M}$ equipped with a Riemannian metric $g$. The tangent space $T_{\mathbf{x}}\mathcal{M}\cong\mathbb{R}^d$, generalizing the concept of the tangent plane, contains vectors tangent to any curve passing through $\mathbf{x}$. The Riemannian metric $g$ induces the inner product $\langle\mathbf{u},\mathbf{v}\rangle_{\mathbf{x}}$ and the Riemannian norm $\|\mathbf{u}\|_{\mathbf{x}}=\sqrt{\langle\mathbf{u},\mathbf{u}\rangle_{\mathbf{x}}}$ where $\mathbf{u},\mathbf{v}\in T_{\mathbf{x}}\mathcal{M}$ (we omit the reference point $\mathbf{x}$ when it is clear from the context). We use $d(\mathbf{x},\mathbf{y})$ to denote the Riemannian distance between $\mathbf{x},\mathbf{y}\in\mathcal{M}$, which is the greatest lower bound of the length of all piecewise smooth curves joining $\mathbf{x}$ and $\mathbf{y}$.
A curve connecting $\mathbf{x},\mathbf{y}\in\mathcal{M}$ is a geodesic if it is locally length-minimizing. For two points $\mathbf{x},\mathbf{y}\in\mathcal{M}$, suppose there exists a geodesic $\gamma(t):[0,1]\rightarrow\mathcal{M}$ such that $\gamma(0)=\mathbf{x},\gamma(1)=\mathbf{y}$ and $\gamma'(0)=\mathbf{v}\in T_{\mathbf{x}}\mathcal{M}$. The exponential map ${\textnormal{Exp}}_{\mathbf{x}}(\cdot):T_{\mathbf{x}}\mathcal{M}\rightarrow\mathcal{M}$ maps $\mathbf{v}\in T_{\mathbf{x}}\mathcal{M}$ to $\mathbf{y}\in \mathcal{M}$. Correspondingly, the inverse exponential map ${\textnormal{Exp}}_{\mathbf{x}}^{-1}(\cdot):\mathcal{M}\rightarrow T_{\mathbf{x}}\mathcal{M}$ maps $\mathbf{y}\in\mathcal{M}$ to $\mathbf{v}\in T_{\mathbf{x}}\mathcal{M}$.
Since traveling along a geodesic is of constant velocity, we indeed have $d(\mathbf{x},\mathbf{y})=\|{\textnormal{Exp}}_{\mathbf{x}}^{-1}\mathbf{y}\|_{\mathbf{x}}$. Also, it is useful to compare tangent vectors in different tangent spaces. Parallel transport $\Gamma_{\mathbf{x}}^{\mathbf{y}}\mathbf{u}$ translates $\mathbf{u}$ from $T_{\mathbf{x}}\mathcal{M}$ to $T_{\mathbf{y}}\mathcal{M}$ and preserves the inner product, i.e., $\langle \mathbf{u},\mathbf{v}\rangle_{\mathbf{x}} = \langle \Gamma_{\mathbf{x}}^{\mathbf{y}}\mathbf{u},\Gamma_{\mathbf{x}}^{\mathbf{y}}\mathbf{v}\rangle_{\mathbf{y}}$.
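For readers less familiar with these maps, the following is a minimal sketch of the exponential map, the inverse exponential map, and the Riemannian distance on the hyperboloid (Lorentz) model, a concrete Hadamard manifold of constant curvature $-1$; the formulas are the standard ones for this model, and the numerical values are illustrative.
\begin{verbatim}
import numpy as np

def minkowski_dot(u, v):
    # Lorentzian inner product <u, v>_L = -u_0 v_0 + sum_{i>=1} u_i v_i.
    return -u[0] * v[0] + np.dot(u[1:], v[1:])

def exp_map(x, v):
    # Exp_x(v) on the hyperboloid {x : <x, x>_L = -1, x_0 > 0},
    # for a tangent vector v at x (so <x, v>_L = 0).
    norm_v = np.sqrt(max(minkowski_dot(v, v), 0.0))
    if norm_v < 1e-12:
        return x
    return np.cosh(norm_v) * x + np.sinh(norm_v) * (v / norm_v)

def log_map(x, y):
    # Exp_x^{-1}(y): the tangent vector at x pointing towards y whose
    # length equals the Riemannian distance d(x, y) = arccosh(-<x, y>_L).
    alpha = -minkowski_dot(x, y)
    d = np.arccosh(np.clip(alpha, 1.0, None))
    u = y - alpha * x                  # component of y orthogonal to x
    norm_u = np.sqrt(max(minkowski_dot(u, u), 0.0))
    return np.zeros_like(x) if norm_u < 1e-12 else d * u / norm_u

# Sanity check: d(x, Exp_x(v)) should equal the Lorentzian norm of v.
x = np.array([1.0, 0.0, 0.0])          # base point of the 2D hyperboloid
v = np.array([0.0, 0.3, -0.2])         # tangent vector at x (<x, v>_L = 0)
y = exp_map(x, v)
print(np.arccosh(-minkowski_dot(x, y)), np.sqrt(minkowski_dot(v, v)))
\end{verbatim}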
The curvature of Riemannian manifolds reflects the extent to which the manifold differs from a Euclidean surface. For optimization purposes, it usually suffices to consider the sectional curvature. Following \cite{zhang2016first,wang2021no}, in this paper we mainly consider Hadamard manifolds, which are complete, simply connected manifolds with non-positive sectional curvature. On such manifolds, every two points are connected by a unique and distance-minimizing geodesic by the Hopf--Rinow theorem \citep{petersen2006riemannian}.
A subset $\mathcal{N}$ of $\mathcal{M}$ is gsc-convex if for any $\mathbf{x},\mathbf{y}\in\mathcal{N}$, there exists a geodesic connecting $\mathbf{x}$ and $\mathbf{y}$ that lies entirely in $\mathcal{N}$. A function $f:\mathcal{N}\rightarrow \mathbb{R}$ is gsc-convex if $\mathcal{N}$ is gsc-convex and the composition $f(\gamma(t))$ satisfies
$
f(\gamma(t))\leq (1-t)f(\gamma(0))+tf(\gamma(1))
$
for any geodesic $\gamma(t)\subseteq\mathcal{N}$ and $t\in[0,1]$. An alternative definition of geodesic convexity is
$$
\textstyle f(\mathbf{y})\geq f(\mathbf{x})+\langle\nabla f(\mathbf{x}),{\textnormal{Exp}}_{\mathbf{x}}^{-1}\mathbf{y}\rangle,\qquad\forall\;\mathbf{x},\mathbf{y}\in\mathcal{N},
$$
where the Riemannian gradient $\nabla f(\mathbf{x})\in T_{\mathbf{x}}\mathcal{M}$ is the unique vector determined by
$
D f(\mathbf{x})[\mathbf{v}] = \langle\mathbf{v}, \nabla f(\mathbf{x})\rangle
$
and $D f(\mathbf{x})[\mathbf{v}]$ is the differential of $f$ along $\mathbf{v}\in T_{\mathbf{x}}\mathcal{M}$.
Similarly, an $L$-geodesically-smooth ($L$-gsc-smooth) function $f$ satisfies
$\|\Gamma_{\mathbf{y}}^{\mathbf{x}}\nabla f(\mathbf{y})-\nabla f(\mathbf{x})\|\leq L\cdot d(\mathbf{x},\mathbf{y})$ for all $\mathbf{x},\mathbf{y}\in\mathcal{N}$, or
$$
\textstyle f(\mathbf{y})\leq f(\mathbf{x})+\langle\nabla f(\mathbf{x}),{\textnormal{Exp}}_{\mathbf{x}}^{-1}\mathbf{y}\rangle+\frac{L}{2}d(\mathbf{x},\mathbf{y})^2.
$$
\section{Related Work} \label{sec:related}
In this part, we briefly review past work on OCO in Euclidean space and optimization on Riemannian manifolds.
\subsection{OCO in Euclidean Space}
\paragraph{Static regret.} We first consider work on static regret. In Euclidean space, it is well known that online gradient descent (OGD) guarantees $O(\sqrt{T})$ and $O(\log T)$ regret for convex and strongly convex losses \citep{hazan2016introduction}, which are also minimax optimal \citep{abernethy2008optimal}. However, the aforementioned bounds are not fully adaptive due to the dependence on $T$. Therefore, there is a tendency to replace $T$ with problem-dependent quantities. \cite{srebro2010smoothness} first notice that smooth and non-negative losses satisfy the self-bounding property, thus establishing the small-loss bound $O(\sqrt{F_T^\star})$, where $F_T^{\star}=\sum_{t=1}^Tf_t(\mathbf{x}^\star)$ is the cumulative loss of the best action in hindsight. \cite{chiang2012online} propose an extra-gradient method
to obtain an $O(\sqrt{V_T})$ gradient-variation regret bound for convex and smooth losses, where $V_T=\sum_{t=2}^T\sup_{\mathbf{x}\in\mathcal{X}}\|\nabla f_{t-1}(\mathbf{x})-\nabla f_t(\mathbf{x})\|_2^2$.
\cite{rakhlin2013online} generalize the work of \cite{chiang2012online} and propose optimistic mirror descent, which has become a standard tool in online learning since then.
\paragraph{Dynamic regret.} Now we switch to the related work on dynamic regret. \cite{zinkevich2003online} propose to use OGD to get an $O\left(\eta T+\frac{1+P_T}{\eta}\right)$ regret bound where $\eta$ is the step size, but the result turns out to be $O((1+P_T)\sqrt{T})$ since the value of $P_T$ is unknown to the learner. To address this issue, \cite{zhang2018adaptive} utilize Hedge to combine the outputs of OGD instances with different step sizes, and show an $O\left(\sqrt{(1+P_T)T}\right)$ regret. A matching lower bound is also established therein. \cite{zhao2020dynamic} utilize smoothness to get several state-of-the-art results, including a gradient-variation bound, a small-loss bound, and a best-of-both-worlds bound in Euclidean space.
\subsection{Optimization on Riemannian Manifolds}
We start from the offline setting. \cite{zhang2016first} propose Riemannian gradient descent (RGD) and show that its theoretical guarantee is similar to its Euclidean counterpart. \cite{liu2017accelerated} consider how to accelerate RGD for the first time, but their algorithms rely on the solution of non-linear equations, and it is unclear if these equations admit solutions. \cite{zhang2018estimate} show local acceleration rates for gsc-smooth and strongly gsc-convex functions via a novel estimate sequence. \cite{ahn2020nesterov} generalize the idea of \cite{zhang2018estimate} and show that full acceleration is attained when the iterate is close to the optimum. However, \cite{liu2017accelerated,zhang2018estimate,ahn2020nesterov} all assume the iterates stay in a pre-specified gsc-convex set. \cite{martinez2022global} first propose an algorithm that enforces the gsc-convex constraint and enjoys global acceleration in hyperbolic and spherical spaces. \cite{criscitiello2022negative} show the lower bound for Riemannian acceleration and provide a reduction to convert the Riemannian acceleration problem to its Euclidean counterpart when the diameter of the feasible set is small. \cite{martinez2022accelerated} adopt the reduction to achieve global acceleration on locally symmetric Hadamard manifolds. From the methodological perspective, \cite{martinez2022global,criscitiello2022negative,martinez2022accelerated}
pull back the objective function defined on the manifold to Euclidean space, whereas we work on the Riemannian manifold to bound the projective distortion directly.
In the online setting, \cite{becigneul2018riemannian} consider adaptive stochastic optimization on Riemannian manifolds, but their results only apply to the Cartesian product of one-manifolds.
\cite{maass2022tracking}
study the \emph{restricted dynamic regret} on Hadamard manifolds under the gradient-free setting and provide an $O(\sqrt{T}+P_T^\star)$ bound for strongly gsc-convex and gsc-smooth functions, where $P_T^\star$ is the path-length of the comparator formed by $\mathbf{u}_t=\argmin_{\mathbf{x}\in\mathcal{X}}f_t(\mathbf{x})$. On Hadamard manifolds, \cite{wang2021no} apply Riemannian OGD (R-OGD) to get an $O(\sqrt{T})$ upper bound and an $\Omega(\sqrt{T})$ randomized lower bound. Comparatively, we focus on general and adaptive dynamic regret on Hadamard manifolds. Our minimax lower bound is also novel.
There also exist algorithms considering optimism on Riemannian manifolds. \cite{zhang2022minimax} propose Riemannian Corrected Extra Gradient (RCEG) for unconstrained minimax optimization on manifolds. \cite{karimi2022riemannian} consider a Robbins-Monro framework on Hadamard manifolds which subsumes Riemannian stochastic extra-gradient. By imposing weak asymptotic coercivity and using a decaying step size, the trajectory is guaranteed to be finite \citep{karimi2022riemannian}. However, our paper is the first to consider the constrained case and the online setting. For the improper learning setting, we show that a constant step size achieves the same guarantee as in Euclidean space.
\section{Minimax Dynamic Regret Bound on Manifolds}
\label{radar}
In this section, we present the results related to the minimax path-length bound on manifolds. Before diving into the details, following previous work \citep{zinkevich2003online,wang2021no}, we introduce some standard assumptions and definitions.
\begin{ass}\label{hada}
$\mathcal{M}$ is a Hadamard manifold and its sectional curvature is lower bounded by $\kappa\leq 0$.
\end{ass}
\begin{ass}\label{diam}
The decision set $\mathcal{N}$ is a gsc-convex compact subset of $\mathcal{M}$ with diameter upper bounded by $D$, i.e., $\sup_{\mathbf{x},\mathbf{y}\in\mathcal{N}} d(\mathbf{x},\mathbf{y})\leq D$. For optimistic online learning, we allow the player to choose decisions from $\mathcal{N}_{\delta M}$, which is defined in Definition \ref{def2}; the diameter then becomes $(D+2\delta M)$.
\end{ass}
\begin{ass}\label{grad}
The norms of the Riemannian gradients are bounded by $G$, i.e., $\sup_{\mathbf{x}\in\mathcal{N}}\|\nabla f_t(\mathbf{x})\|\leq G.$ When improper learning is allowed, we assume $\sup_{\mathbf{x}\in\mathcal{N}_{\delta M}}\|\nabla f_t(\mathbf{x})\|\leq G.$
\end{ass}
\begin{defn}\label{def1}
Under Assumptions \ref{hada}, \ref{diam}, we denote
${\zeta}\coloneqq\sqrt{-\kappa} D \operatorname{coth}(\sqrt{-\kappa} D)$. When improper learning is allowed, ${\zeta}\coloneqq\sqrt{-\kappa} (D+2\delta M) \operatorname{coth}(\sqrt{-\kappa} (D+2\delta M))$, where $M$ is in Definition \ref{def2}.
\end{defn}
\begin{algorithm2e}[t]
\caption{\textsc{Radar}\xspace: Meta Algorithm}\label{alg:meta}
\KwData{Learning rate $\beta$, set of step sizes $\mathcal{H}$, initial weights $w_{1,i}=\frac{N+1}{i(i+1)N}$}
\For{$t=1,\dots,T$}{
Receive $\mathbf{x}_{t,i}$ from experts with step size $\eta_i$\\
$\mathbf{x}_t=\argmin_{\mathbf{x}\in\mathcal{N}}\sum_{i\in[N]}w_{t,i} d(\mathbf{x},\mathbf{x}_{t,i})^2$\\
Observe the loss function $f_t$\\
Update $w_{t+1,i}=\frac{w_{t,i} e^{-\beta f_{t}(\mathbf{x}_{t,i})}}{\sum_{j \in [N]} w_{t,j} e^{-\beta f_{t}(\mathbf{x}_{t,j})}}$\\
Send gradient $\nabla f_t(\mathbf{x}_{t,i})$ to each expert
}
\end{algorithm2e}
\hfill
\begin{algorithm2e}[t]
\caption{\textsc{Radar}\xspace: Expert Algorithm}\label{alg:expert}
\KwData{ A step size $\eta_i$}
Let $\mathbf{x}_{1,i}$ be any point in $\mathcal{N}$\\
\For{$t=1,\dots,T$} {
Submit $\mathbf{x}_{t,i}$ to the meta-algorithm\\
Receive gradient $\nabla f_t(\mathbf{x}_{t,i})$ from the meta-algorithm\\
Update:\\
$\mathbf{x}_{t+1,i}=\Pi_{\mathcal{N}}{\textnormal{Exp}}_{\mathbf{x}_{t,i}}(-\eta_i\nabla f_t(\mathbf{x}_{t,i}))$\\
}
\end{algorithm2e}
The seminal work of \cite{zinkevich2003online} shows that the classical OGD algorithm can minimize the general dynamic regret in Euclidean space. Motivated by this, we consider the Riemannian OGD (R-OGD) algorithm \citep{wang2021no}:
\begin{equation}
\label{alg:rogd}
\mathbf{x}_{t+1}=\Pi_{\mathcal{N}}{\textnormal{Exp}}_{\mathbf{x}_t}(-\eta\nabla f_t(\mathbf{x}_t)),
\end{equation}
which is a natural extension of OGD to the manifold setting. We show that R-OGD can also minimize the general dynamic regret on manifolds. Due to the page limit, we postpone the details to Appendix \ref{app:radar}.
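As a toy illustration (ours, assuming the hyperboloid helpers \texttt{minkowski}, \texttt{dist}, \texttt{exp\_map} and \texttt{log\_map} from the sketch in the preliminaries are in scope), the update in Equation \ref{alg:rogd} can be simulated as follows, taking the feasible set to be a geodesic ball whose metric projection has a simple closed form.
\begin{verbatim}
import numpy as np   # assumes minkowski, dist, exp_map, log_map from the earlier sketch

def project_ball(o, R, x):
    # metric projection onto the gsc-convex geodesic ball B(o, R)
    d = dist(o, x)
    if d <= R:
        return x
    return exp_map(o, (R / d) * log_map(o, x))

def r_ogd(grad, x1, eta, o, R, T):
    # R-OGD: x_{t+1} = Proj_N Exp_{x_t}(-eta * grad f_t(x_t))
    xs, x = [], x1
    for t in range(T):
        xs.append(x)
        x = project_ball(o, R, exp_map(x, -eta * grad(t, x)))
    return xs

# toy losses f_t(x) = 0.5 * d(x, z_t)^2 with drifting targets; grad f_t(x) = -Exp_x^{-1}(z_t)
o = np.array([1.0, 0.0, 0.0])
targets = [exp_map(o, 0.01 * t * np.array([0.0, 1.0, 0.0])) for t in range(200)]
iterates = r_ogd(lambda t, x: -log_map(x, targets[t]), x1=o, eta=0.1, o=o, R=2.0, T=200)
\end{verbatim}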
\begin{thm}\label{ROGD}
Suppose Assumptions \ref{hada}, \ref{diam} and \ref{grad} hold. Then the general dynamic regret of R-OGD defined in Equation \ref{alg:rogd} satisfies
\begin{equation}
\textstyle\text{D-Regret}_T\leq \frac{D^2+2DP_T}{2\eta}+\frac{\eta{\zeta} G^2T}{2}.
\end{equation}
\end{thm}
Theorem \ref{ROGD} implies that R-OGD yields an $O(\frac{P_T+1}{\eta}+\eta T)$ general dynamic regret bound, which means the optimal step size is $\eta= O{\scriptstyle\left(\sqrt{\frac{1+P_T}{T}}\right)}$.
However, this configuration of $\eta$ is invalid, as $P_T$ is unknown to the learner. Although a sub-optimal choice for $\eta$, i.e., $\textstyle{\eta=O\left(\frac{1}{\sqrt{T}}\right)}$, is accessible, the resulting algorithm suffers an $O((1+P_T)\sqrt{T})$ regret.
In Euclidean space, to resolve this problem, the seminal work of \cite{zhang2018adaptive} proposes an algorithm based on the learning-with-expert-advice framework. Their algorithm, Ader, has a two-level hierarchical structure consisting of a meta-algorithm and expert algorithms. In every round $t$, each expert runs OGD with a different step size, and the meta-algorithm applies Hedge to learn the best weights. The step sizes used by the experts are carefully designed so that there always exists an expert that is almost optimal. The regret of Ader is $O(\sqrt{(1+P_T)T})$, which is minimax-optimal in Euclidean space \citep{zhang2018adaptive}.
However, at first glance it is unclear how to extend Ader to manifolds, since we need to figure out the ``correct'' way to perform averaging on them. In this paper, we successfully resolve this problem using the \emph{Fr\'echet mean} and the \emph{geodesic mean}. Our proposed algorithm, called \textsc{Radar}\xspace, consists of $N$ instances of the expert algorithm (Algorithm \ref{alg:expert}), each of which runs R-OGD with a different step size, and a meta-algorithm (Algorithm \ref{alg:meta}), whose regret is approximately the same as that of the best expert. We denote the set of all step sizes $\{\eta_i\}$ by $\mathcal{H}$. In the $t$-th round, the expert algorithms submit all $\mathbf{x}_{t,i}$'s ($i=1,\dots,N$) to the meta-algorithm. Then the meta-algorithm computes either the Fr\'{e}chet mean or the geodesic mean (see Algorithm \ref{alg:gsc} in Appendix \ref{app:lem} for details) as $\mathbf{x}_t$. After receiving $f_t$, the meta-algorithm updates the weight of each expert $w_{t+1,i}$ via Hedge and sends $\nabla f_t(\mathbf{x}_{t,i})$ to the $i$-th expert, which computes $\mathbf{x}_{t+1,i}$ by R-OGD. The regret of the meta-algorithm of \textsc{Radar}\xspace can be bounded by Lemma \ref{meta-rader}.
\begin{lemma}\label{meta-rader}
Under Assumptions \ref{hada}, \ref{diam}, \ref{grad}, and setting $\beta=\sqrt{\frac{8}{G^2D^2T}}$, the regret of Algorithm \ref{alg:meta} satisfies
$$
\textstyle\sum_{t=1}^Tf_t(\mathbf{x}_t)-\sum_{t=1}^Tf_t(\mathbf{x}_{t,i})\leq\sqrt{\frac{G^2D^2T}{8}}\left(1+\ln\frac{1}{w_{1,i}}\right).
$$
\end{lemma}
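A compact sketch (again ours, reusing the hyperboloid helpers from the earlier sketches) of one meta-round of \textsc{Radar}\xspace: the Fr\'echet mean is approximated by a fixed-point iteration and the expert weights are updated by Hedge; the projection of the mean onto $\mathcal{N}$ is omitted for brevity.
\begin{verbatim}
import numpy as np   # assumes dist, exp_map, log_map from the earlier sketch

def frechet_mean(points, weights, iters=50):
    # approximate argmin_x sum_i w_i d(x, x_i)^2 by a Riemannian fixed-point iteration
    x = points[0]
    for _ in range(iters):
        step = sum(w * log_map(x, p) for w, p in zip(weights, points))
        x = exp_map(x, step)
    return x

def hedge_update(w, losses, beta):
    # multiplicative-weights (Hedge) update used by the meta-algorithm
    w = w * np.exp(-beta * losses)
    return w / w.sum()

# one meta round: aggregate the experts' points, then reweight them by their losses
N = 5
w = np.array([(N + 1) / (i * (i + 1) * N) for i in range(1, N + 1)])   # initial weights
o = np.array([1.0, 0.0, 0.0])
expert_points = [exp_map(o, 0.1 * i * np.array([0.0, 1.0, 0.0])) for i in range(N)]
x_t = frechet_mean(expert_points, w)
expert_losses = np.array([0.5 * dist(p, expert_points[2]) ** 2 for p in expert_points])
w = hedge_update(w, expert_losses, beta=1.0)
\end{verbatim}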
We show that, by configuring the step sizes in $\mathcal{H}$ carefully, \textsc{Radar}\xspace ensures an $O(\sqrt{(1+P_T)T})$ bound on geodesic metric spaces.
\begin{thm}\label{RAder}
Set $\textstyle\mathcal{H}=\left\{\eta_i=2^{i-1}\sqrt{\frac{D^2}{G^2{\zeta} T}}\big| i\in[N]\right\}$
where $N=\lceil\frac{1}{2}\log_2(1+2T)\rceil+1$ and $\beta=\sqrt{\frac{8}{G^2D^2T}}$. Under Assumptions \ref{hada}, \ref{diam}, \ref{grad}, for any comparator sequence $\mathbf{u}_1,\dots,\mathbf{u}_T\in\mathcal{N}$, the general dynamic regret of \textsc{Radar}\xspace satisfies
\begin{equation*}
\text{D-Regret}_T=O(\sqrt{{\zeta}(1+P_T)T}).
\end{equation*}
\end{thm}
\begin{remark}
Note that if $\mathcal{M}$ is Euclidean space, then ${\zeta}=1$ and we get $O(\sqrt{(1+P_T)T})$ regret, which is the same as in \cite{zhang2018adaptive}.
\end{remark}
Using the Busemann function as a bridge, we show the following minimax dynamic lower bound in Appendix \ref{app:lb}.
\begin{thm}\label{lb_dynamic}
There exists a comparator sequence that satisfies $\sum_{t=2}^T d(\mathbf{u}_t,\mathbf{u}_{t-1})\leq P_T$ and incurs an $\Omega(\sqrt{(1+P_T)T})$
minimax dynamic regret on Hadamard manifolds.
\end{thm}
Although the regret guarantee in Theorem \ref{RAder}
is optimal up to constants in view of the corresponding lower bound, it still depends on $T$ and thus cannot adapt to mild environments. In Euclidean space, the smoothness of losses induces adaptive regret bounds, including the gradient-variation bound \citep{chiang2012online} and the small-loss bound \citep{srebro2010smoothness}. It is then natural to ask whether similar bounds can be established on manifolds by assuming gsc-smoothness. We provide an affirmative answer to this question and show how to obtain problem-dependent bounds under the \textsc{Radar}\xspace framework.
\section{Gradient-variation Bound on Manifolds}
\label{radarv}
In this section, we show how to obtain the gradient-variation bound on manifolds under the \textsc{Radar}\xspace framework with alternative expert and meta algorithms.
\paragraph{Expert Algorithm.} For minimax optimization on Riemannian manifolds, \cite{zhang2022minimax} propose Riemannian Corrected Extra Gradient (RCEG), which performs the following iterates:
\begin{equation*}
\begin{split}
\textstyle&\mathbf{x}_{t}={\textnormal{Exp}}_{\mathbf{y}_t}(-\eta \nabla f_{t-1}(\mathbf{y}_t))\\
&\mathbf{y}_{t+1}={\textnormal{Exp}}_{\mathbf{x}_t}\left(-\eta\nabla f_t(\mathbf{x}_t)+{\textnormal{Exp}}_{\mathbf{x}_t}^{-1}(\mathbf{y}_t)\right).
\end{split}
\end{equation*}
However, this algorithm does not work in the constrained case, which has been left as an open problem \citep{zhang2022minimax}. We find the projective distortion can be bounded elegantly in the improper learning setting, and
generalize RCEG to incorporate an optimism term $M_t\in T_{\mathbf{y}_t}\mathcal{M}$.
\begin{defn}\label{def2}
We use $M_t$ to denote the optimism at round $t$ and assume there exists $M$ such that $\|M_t\|\leq M$ for all $t$. We define $\mathcal{N}_c= \{\mathbf{x}\,|\,d(\mathbf{x},\mathcal{N})\leq c\}$, where $d(\mathbf{x},\mathcal{N})\coloneqq\inf_{\mathbf{y}\in\mathcal{N}}d(\mathbf{x},\mathbf{y})$. In the improper setting, we allow the player to choose decisions from $\mathcal{N}_{\delta M}$.
\end{defn}
\begin{thm}\label{var-expert}
Suppose all losses $f_t$ are $L$-gsc-smooth on $\mathcal{M}$. Under Assumptions \ref{hada}, \ref{diam}, \ref{grad}, the iterates
\begin{equation}\label{omd}
\begin{split}
\textstyle &\mathbf{x}_{t}'={\textnormal{Exp}}_{\mathbf{y}_t}(-\eta M_t)\\
&\mathbf{x}_t=\Pi_{\mathcal{N}_{\delta M}}\mathbf{x}_t'\\
&\mathbf{y}_{t+1}=\Pi_{\mathcal{N}}{\textnormal{Exp}}_{\mathbf{x}_t'}\left(-\eta\nabla f_t(\mathbf{x}_t')+{\textnormal{Exp}}_{\mathbf{x}_t'}^{-1}(\mathbf{y}_t)\right).
\end{split}
\end{equation}
satisfy
\begin{equation*}
\begin{split}
\textstyle\sum_{t=1}^Tf_t(\mathbf{x}_t)-\sum_{t=1}^Tf_t(\mathbf{u}_t)\leq\eta{\zeta}\sum_{t=1}^T\|\nabla f_t(\mathbf{y}_t)-M_t\|^2+\frac{D^2+2DP_T}{2\eta}.
\end{split}
\end{equation*}
for any $\mathbf{u}_1,\dots,\mathbf{u}_T\in\mathcal{N}$ and $\eta\leq\frac{\delta M}{G+(G^2+2\zeta\delta^2M^2L^2)^{\frac{1}{2}}}$. Specifically, we achieve $\eta{\zeta}V_T+\frac{D^2+2DP_T}{2\eta}$ regret by choosing $M_t=\nabla f_{t-1}(\mathbf{y}_t)$. In this case, $M=G$ and we need $\eta\leq\frac{\delta}{1+(1+2\zeta\delta^2L^2)^{\frac{1}{2}}}$.
\end{thm}
\begin{remark}
Note that although we allow the player to make improper decisions, $V_T$ is still defined on $\mathcal{N}$ instead of $\mathcal{N}_{\delta G}$. For the static setting, $P_T=0$ and the resulting regret bound is $O(\sqrt{V_T}+\frac{1}{\delta})$.
\end{remark}
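The following sketch (ours, reusing the helpers and the geodesic-ball projection from the earlier sketches) illustrates one round of the optimistic update in Equation \ref{omd} with the choice $M_t=\nabla f_{t-1}(\mathbf{y}_t)$; the enlarged set $\mathcal{N}_{\delta M}$ is modeled as a geodesic ball of radius $R+\delta M$, which is an assumption made only for this illustration.
\begin{verbatim}
import numpy as np   # assumes exp_map, log_map and project_ball from the earlier sketches

def optimistic_step(y_t, grad_prev, grad_t, eta, o, R, delta_M):
    # one round of the optimistic update: optimism step, improper decision, corrected step
    M_t = grad_prev(y_t)                                  # optimism M_t in T_{y_t}M
    x_prime = exp_map(y_t, -eta * M_t)
    x_t = project_ball(o, R + delta_M, x_prime)           # improper decision in N_{delta M}
    v = -eta * grad_t(x_prime) + log_map(x_prime, y_t)
    y_next = project_ball(o, R, exp_map(x_prime, v))
    return x_t, y_next

# toy use with f_t(x) = 0.5 * d(x, z_t)^2, so that grad f_t(x) = -Exp_x^{-1}(z_t)
o = np.array([1.0, 0.0, 0.0])
z_prev = exp_map(o, np.array([0.0, 0.30, 0.0]))
z_curr = exp_map(o, np.array([0.0, 0.35, 0.0]))
x_t, y_next = optimistic_step(o, lambda y: -log_map(y, z_prev),
                              lambda x: -log_map(x, z_curr),
                              eta=0.1, o=o, R=1.0, delta_M=0.2)
\end{verbatim}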
\paragraph{Meta-algorithm.} Intuitively, we can run OMD with different step sizes and
apply a meta-algorithm to estimate the optimal step size. Previous studies on learning with multiple step sizes usually adopt Hedge to aggregate the experts' advice. However, the regret of Hedge is $O(\sqrt{T\ln N})$, which is undesirable for our purpose. Inspired by optimistic online learning \citep{rakhlin2013online,syrgkanis2015fast}, \cite{zhao2020dynamic} adopt Optimistic Hedge as the meta-algorithm to get an $O(\sqrt{(V_T+P_T)P_T})$ gradient-variation bound. After a careful analysis, we show that Optimistic Hedge works for gsc-convex losses regardless of the geometric distortion, and we obtain the desired gradient-variation bound.
\begin{algorithm2e}[H]
\caption{\textsc{Radar}\textsubscript{v}\xspace: Expert Algorithm}\label{alg:expert_v}
\KwData{A step size $\eta_i$}
Let $\mathbf{x}_{1,i}$ be any point in $\mathcal{N}$\\
\For{$t=1,\dots,T$} {
Submit $\mathbf{x}_{t,i}$ to the meta-algorithm\\
Receive gradient $\nabla f_t(\cdot)$ from the meta-algorithm\\
Each expert runs the update in Equation \ref{omd} with $M_t=\nabla f_{t-1}(\mathbf{y}_t)$, $M=G$ and step size $\eta_i$\\
}
\end{algorithm2e}
\begin{algorithm2e}[H]
\caption{\textsc{Radar}\textsubscript{v}\xspace: Meta Algorithm}\label{alg:meta_v}
\KwData{A learning rate $\beta$, a set of step sizes $\mathcal{H}$, initial weights $w_{1,i}=w_{0,i}=\frac{1}{N}$}
\For{$t=1,\dots,T$} {
Receive all $\mathbf{x}_{t,i}$'s from experts with step size $\eta_i$\\
$\bar{\x}_t=\argmin_{\mathbf{x}\in\mathcal{N}_{\delta G}}\sum_{i\in[N]}w_{t-1,i} d(\mathbf{x},\mathbf{x}_{t,i})^2$\\
Update $w_{t,i}\propto\exp\left(-\beta\left(\sum_{s=1}^{t-1}\ell_{s,i}+m_{t,i}\right)\right)$ by Equation \ref{radarv_opt}\\
$\mathbf{x}_t=\argmin_{\mathbf{x}\in\mathcal{N}_{\delta G}}\sum_{i\in[N]}w_{t,i} d(\mathbf{x},\mathbf{x}_{t,i})^2$\\
Observe $f_t(\cdot)$ and send $\nabla f_t(\cdot)$ to experts\\
}
\end{algorithm2e}
We denote by $\boldsymbol\ell_t,\mathbf{m}_t\in\mathbb{R}^N$ the surrogate loss and the optimism at round $t$. The update of Optimistic Hedge is: $\textstyle w_{t,i}\propto\exp\left(-\beta\left(\sum_{s=1}^{t-1}\ell_{s,i}+m_{t,i}\right)\right)$, which achieves adaptive regret due to the optimism. The following technical lemma \citep{syrgkanis2015fast} is critical for our analysis of Optimistic Hedge, and we include its proof in Appendix \ref{app:lem:opthedge} for completeness.
\begin{lemma}\label{optHedge}
For any $i\in[N]$, Optimistic Hedge satisfies
\begin{equation*}
\begin{split}
\textstyle \sum_{t=1}^T\left\langle\mathbf{w}_t,\boldsymbol\ell_t\right\rangle-\ell_{t,i}\leq\frac{2+\ln N}{\beta}+\beta\sum_{t=1}^T\|\boldsymbol\ell_t
-\mathbf{m}_t\|_{\infty}^2-\frac{1}{4\beta}\sum_{t=2}^T\|\mathbf{w}_t-\mathbf{w}_{t-1}\|_1^{2}
\end{split}
\end{equation*}
\end{lemma}
Unfortunately, the optimism term proposed in \cite{zhao2020dynamic} is incompatible with the Riemannian metric and thus does not work on manifolds. We instead propose to use the following surrogate loss and optimism:
\begin{equation}\label{radarv_opt}
\begin{split}
&\ell_{t,i}=\left\langle\nabla f_t(\mathbf{x}_t),{\textnormal{Exp}}_{\mathbf{x}_t}^{-1}\mathbf{x}_{t,i}\right\rangle \\
&m_{t,i}=\left\langle\nabla f_{t-1}(\bar{\mathbf{x}}_t),{\textnormal{Exp}}_{\bar{\x}_t}^{-1}\mathbf{x}_{t,i}\right\rangle \\
\end{split}
\end{equation}
where $\mathbf{x}_t$ and $\bar{\x}_t$ are the Fr\'echet means of the $\mathbf{x}_{t,i}$'s w.r.t. the weight vectors $\mathbf{w}_{t}$ and $\mathbf{w}_{t-1}$, respectively.
We emphasize that the design of the surrogate loss and the optimism is highly non-trivial. As we will see in the proof of Theorem \ref{opthedge-reg}, the combination of the surrogate loss and the gradient-vanishing property of the Fr\'{e}chet mean ensures that Lemma \ref{optHedge} can be invoked to upper bound the regret of the meta-algorithm. However, $\mathbf{m}_{t}$ cannot rely on $\mathbf{x}_t$; thus, we need to design the optimism in the tangent space of ${\bar{\x}_t}$, which incurs an extra cost. Luckily, we bound the distortion via the sensitivity of the Fr\'{e}chet mean (Lemma \ref{frechet}) and apply the negative term in Lemma \ref{optHedge} to eliminate it. Algorithms \ref{alg:expert_v} and \ref{alg:meta_v} describe the expert algorithm and the meta-algorithm of \textsc{Radar}\textsubscript{v}\xspace. We show
the meta-regret and the total regret of \textsc{Radar}\textsubscript{v}\xspace in Theorems \ref{opthedge-reg} and \ref{var-reg}, respectively. Detailed proofs for this section are deferred to Appendix \ref{app:radarv}.
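For intuition, a minimal sketch (ours, reusing the hyperboloid helpers and the Fr\'echet-mean routine from the earlier sketches) of the Optimistic Hedge meta-update with the surrogate loss and optimism of Equation \ref{radarv_opt} reads as follows. Feeding the returned weights and the gradient oracle back to the experts, which run the update of Equation \ref{omd}, completes one round of \textsc{Radar}\textsubscript{v}\xspace.
\begin{verbatim}
import numpy as np   # assumes minkowski, log_map and frechet_mean from the earlier sketches

def surrogate(grad_at, base, points):
    # l_i = < grad(base), Exp_base^{-1} x_i >_base for every expert point x_i
    g = grad_at(base)
    return np.array([minkowski(g, log_map(base, p)) for p in points])

def optimistic_hedge_round(cum_loss, w_prev, points, grad_t, grad_prev, beta):
    x_bar = frechet_mean(points, w_prev)               # mean under last round's weights
    m = surrogate(grad_prev, x_bar, points)            # optimism m_{t,i}
    w = np.exp(-beta * (cum_loss + m)); w /= w.sum()   # Optimistic Hedge weights
    x_t = frechet_mean(points, w)                      # decision of the meta-algorithm
    ell = surrogate(grad_t, x_t, points)               # surrogate loss l_{t,i}
    return w, x_t, cum_loss + ell
\end{verbatim}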
\begin{thm}\label{opthedge-reg}
Assume all losses are $L$-gsc-smooth on $\mathcal{M}$. Then under Assumptions \ref{hada}, \ref{diam}, \ref{grad}, the regret of Algorithm \ref{alg:meta_v} satisfies:
\begin{equation*}
\begin{split}
\textstyle & \sum_{t=1}^Tf_t(\mathbf{x}_t)-\sum_{t=1}^Tf_t(\mathbf{x}_{t,i})\\
\leq&\frac{2+\ln N}{\beta}+3D^2\beta (V_T+G^2)+\sum_{t=2}^T\left(3\beta (D^4L^2+D^2G^2{\zeta}^2)-\frac{1}{4\beta}\right)\|\mathbf{w}_t-\mathbf{w}_{t-1}\|_1^2.
\end{split}
\end{equation*}
\end{thm}
\begin{thm}\label{var-reg}
Let $\textstyle\beta=\min\left\{\sqrt{\frac{2+\ln N}{3D^2V_T}},\frac{1}{\sqrt{12(D^4L^2+D^2G^2{\zeta}^2)}}\right\}$, $\textstyle\mathcal{H}=\left\{\eta_i=2^{i-1}\sqrt{\frac{D^2}{8{\zeta} G^2T}}\big|i\in[N]\right\}$, where $\textstyle N=\left\lceil\frac{1}{2}\log\frac{8\zeta\delta^2G^2T}{({1+(1+2\zeta\delta^2L^2)^{\frac{1}{2}}})^2}\right\rceil+1$. Assume all losses are $L$-gsc-smooth on $\mathcal{M}$ and allow improper learning. Under Assumptions \ref{hada}, \ref{diam} and \ref{grad},
the regret of \textsc{Radar}\textsubscript{v}\xspace satisfies
$$
\text{D-Regret}_T={O}\left(\sqrt{{\zeta}(V_T+(1+P_T)/\delta^2)(1+P_T)}\right).
$$
\end{thm}
\section{Small-loss Bound on Manifolds}
\label{radars}
For dynamic regret, the small-loss bound replaces the dependence on $T$ by $F_T=\sum_{t=1}^Tf_t(\mathbf{u}_t)$, which adapts to the function values of the comparator sequence.
In Euclidean space, \cite{srebro2010smoothness} show this adaptive regret by combining OGD with the self-bounding property of smooth and non-negative functions, which reads
$
\|\nabla f(\mathbf{x})\|_2^2\leq 4L\cdot f(\mathbf{x})
$
where $L$ is the smoothness constant. We show a similar conclusion on manifolds and defer the proof details of this part to Appendix \ref{app:radars}.
\begin{lemma}\label{self-bound}
Suppose $f:\mathcal{M}\rightarrow\mathbb{R}$ is both $L$-gsc-smooth and non-negative on its domain where $\mathcal{M}$ is a Hadamard manifold, then we have
$\|\nabla f(\mathbf{x})\|^2\leq 2L\cdot f(\mathbf{x})$.
\end{lemma}
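As a quick numerical sanity check (ours), the self-bounding inequality of Lemma \ref{self-bound} can be verified on the hyperboloid for $f(\mathbf{x})=\frac{1}{2}d(\mathbf{x},\mathbf{x}_0)^2$, whose Riemannian gradient is $-{\textnormal{Exp}}_{\mathbf{x}}^{-1}\mathbf{x}_0$ and whose gsc-smoothness constant on a ball of diameter $D$ is at most $\zeta$ (with $\kappa=-1$); the helpers from the earlier sketches are assumed to be in scope.
\begin{verbatim}
import numpy as np   # assumes minkowski, dist, exp_map, log_map from the earlier sketches

x0 = np.array([1.0, 0.0, 0.0])
rng = np.random.default_rng(0)
D = 2.0
zeta = D / np.tanh(D)          # zeta = sqrt(-kappa) D coth(sqrt(-kappa) D) with kappa = -1
for _ in range(100):
    v = np.concatenate([[0.0], rng.normal(size=2)])    # random tangent direction at x0
    v *= rng.uniform(0.0, D / 2) / np.sqrt(minkowski(v, v))
    x = exp_map(x0, v)
    f = 0.5 * dist(x, x0) ** 2
    grad = -log_map(x, x0)                             # Riemannian gradient of f at x
    assert minkowski(grad, grad) <= 2.0 * zeta * f + 1e-9
\end{verbatim}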
To facilitate the discussion, we denote $\bar{F}_T=\sum_{t=1}^T f_t(\mathbf{x}_t)$ and $\bar{F}_{T,i}=\sum_{t=1}^T f_t(\mathbf{x}_{t,i})$.
We use R-OGD as the expert algorithm (Algorithm \ref{alg:expert}) and Hedge with surrogate loss
$$
\ell_{t,i}=\left\langle\nabla f_t(\mathbf{x}_t),\textnormal{Exp}^{-1}_{\mathbf{x}_t}\mathbf{x}_{t,i}\right\rangle
$$
as the meta-algorithm (Algorithm \ref{alg:meta}). The following Lemma
considers the regret of a single expert and shows that R-OGD achieves a small-loss dynamic regret on geodesic metric spaces.
\begin{lemma}\label{sml-expert}
Suppose all losses are $L$-gsc-smooth and non-negative on $\mathcal{M}$. Under Assumptions \ref{hada}, \ref{diam}, by choosing any step size $\eta\leq \frac{1}{2{\zeta}L}$, R-OGD achieves $O\left(\frac{P_T}{\eta}+\eta F_T\right)$ regret.
\end{lemma}
Again, we cannot directly set $\textstyle\eta=O\left(\sqrt{\frac{1+P_T}{F_T}}\right)$ because $P_T$ is unknown, which is precisely why we need the meta-algorithm. The meta-regret of Hedge is as follows.
\begin{lemma}\label{sml-meta}
Suppose all losses are $L$-gsc-smooth and non-negative on $\mathcal{M}$.
Under Assumptions \ref{hada}, \ref{diam}, by setting the learning rate of Hedge as $\beta=\sqrt{\frac{(2+\ln N)}{D^2\bar{F}_T}}$, the regret of the meta-algorithm satisfies
\begin{equation*}
\begin{split}
\textstyle\sum_{t=1}^Tf_t(\mathbf{x}_t)-\sum_{t=1}^Tf_t(\mathbf{x}_{t,i})\leq 8D^2L(2+\ln N)
+\sqrt{8D^2L(2+\ln N)\bar{F}_{T,i}}.
\end{split}
\end{equation*}
\end{lemma}
Now as we have the guarantee for both the expert algorithm and the meta-algorithm, a direct combination yields the following general dynamic regret guarantee.
\begin{thm}\label{sml-reg}
Suppose all losses are $L$-gsc-smooth and non-negative on $\mathcal{M}$, and Assumptions \ref{hada}, \ref{diam} hold. Set $\mathcal{H}=\left\{\eta_i=2^{i-1}\sqrt{\frac{D}{2 {\zeta} LGT}}\big|i\in[N]\right\}$, where $N=\lceil\frac{1}{2}\log\frac{GT}{2 LD{\zeta}}\rceil+1$, and {\small $\beta=\sqrt{\frac{(2+\ln N)}{D^2\bar{F}_T}}$}. Then for any comparator sequence $\mathbf{u}_1,\dots,\mathbf{u}_T\in\mathcal{N}$, we have
$$
\text{D-Regret}_T=O(\sqrt{\zeta({\zeta}(P_T+1)+F_T)(P_T+1)}).
$$
\end{thm}
\begin{remark}
If we take $\mathcal{M}$ as Euclidean space, the regret guarantee shown in Theorem \ref{sml-reg} becomes { $O(\sqrt{(P_T+F_T+1)(P_T+1)})$}, which matches
the result of \cite{zhao2020dynamic}.
\end{remark}
\section{Best-of-both-worlds Bound on Manifolds}
\label{radarb}
So far we have achieved the gradient-variation bound and the small-loss bound on manifolds. To highlight the differences between them, we provide an example in Appendix \ref{bbw} showing that, in certain scenarios, one bound can be much tighter than the other.
The next natural question is: is it possible to get a best-of-both-worlds bound on manifolds?
We initialize $N\coloneqq N^v+N^s$ experts as shown in Theorems \ref{var-reg} and \ref{sml-reg} where $N^v$ and $N^s$ are the numbers of experts running OMD and R-OGD, respectively. For each expert $i\in [N]$, the surrogate loss and the optimism are
\begin{equation}\label{radarb_opt}
\begin{split}
&\ell_{t,i}=\left\langle\nabla f_t(\mathbf{x}_t),{\textnormal{Exp}}_{\mathbf{x}_t}^{-1}\mathbf{x}_{t,i}\right\rangle \\
&m_{t,i}=\gamma_t\left\langle\nabla f_{t-1}(\bar{\mathbf{x}}_t),{\textnormal{Exp}}_{\bar{\x}_t}^{-1}\mathbf{x}_{t,i}\right\rangle. \\
\end{split}
\end{equation}
$\gamma_t$ controls the optimism used in the meta-algorithm. When $\gamma_t=1$, the optimism for the gradient-variation bound is recovered, and $\gamma_t=0$ corresponds to the optimism for the small-loss bound.
Following \cite{zhao2020dynamic}, we use Hedge over two experts to get a best-of-both-worlds bound. The analysis therein relies on the strong convexity of $\|\nabla f_t(\mathbf{x}_t)-\mathbf{m}\|_2^2$ in $\mathbf{m}$, which generally fails on manifolds, so an alternative scheme is needed. We denote
\begin{equation}
\begin{split}
&m_{t,i}^v=\left\langle\nabla f_{t-1}(\bar{\x}_t),\textnormal{Exp}^{-1}_{\bar{\x}_t}\mathbf{x}_{t,i}\right\rangle\\
&m_{t,i}^s=0,\\
\end{split}
\end{equation}
and let $\mathbf{m}_t^v$ and $\mathbf{m}_t^s$ be the corresponding vectors, respectively. Then $\mathbf{m}_t=\gamma_t\mathbf{m}_t^v+(1-\gamma_t)\mathbf{m}_t^s$, which is exactly the combination rule of Hedge. The function $\|\boldsymbol\ell_t-\mathbf{m}\|_{\infty}^2$ is convex but not strongly convex with respect to $\mathbf{m}$, so we instead use $d_t(\mathbf{m})\coloneqq \|\boldsymbol\ell_t-\mathbf{m}\|_{2}^2$ for Hedge, and the weight $\gamma_t$ is updated as
{\small
\begin{equation}\label{gamma}
{\textstyle\gamma_t=\frac{\exp\left(-\tau\left(\sum_{r=1}^{t-1}d_r(\mathbf{m}_r^v)\right)\right)}{\exp\left(-\tau\left(\sum_{r=1}^{t-1}d_r(\mathbf{m}_r^v)\right)\right)+\exp\left(-\tau\left(\sum_{r=1}^{t-1}d_r(\mathbf{m}_r^s)\right)\right)} }
\end{equation}
}
Algorithm \ref{alg:meta_b} summarizes the meta-algorithm as well as the expert algorithm for \textsc{Radar}\textsubscript{b}\xspace.
\begin{algorithm2e}[t]
\caption{\textsc{Radar}\textsubscript{b}\xspace: Algorithm}\label{alg:meta_b}
\KwData{Learning rate $\beta$ for Optimistic Hedge, weight $\gamma_t$ for hedging the two experts, $\mathcal{H}=\{\eta_i\}$ consisting of $N=N^v+N^s$ step sizes, $\tau=\frac{1}{8NG^2D^2}$}
\For{$t=1,\dots,T$} {
Run Algorithms \ref{alg:expert_v} and \ref{alg:expert} on the first $N^v$ experts and the remaining $N^s$ experts, respectively\\
$\bar{\x}_t=\argmin_{\mathbf{x}\in\mathcal{N}_{\delta G}}\sum_{i\in[N]}w_{t-1,i} d(\mathbf{x},\mathbf{x}_{t,i})^2$\\
Update $\gamma_t$ as in Equation \ref{gamma}\\
Update $w_{t,i}\propto\exp\left(-\beta\left(\sum_{s=1}^{t-1}\ell_{s,i}+m_{t,i}\right)\right)$ by Equation \ref{radarb_opt}\\
$\mathbf{x}_t=\argmin_{\mathbf{x}\in\mathcal{N}_{\delta G}}\sum_{i\in[N]}w_{t,i} d(\mathbf{x},\mathbf{x}_{t,i})^2$\\
Observe $f_t$ and send $\nabla f_t(\cdot)$ to each expert
}
\end{algorithm2e}
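A small standalone sketch (ours) of the hedging step in Equation \ref{gamma}: $\gamma_t$ is a softmax over the cumulative squared deviations $d_r(\mathbf{m})=\|\boldsymbol\ell_r-\mathbf{m}\|_2^2$ of the two optimism choices, with $\mathbf{m}^s_r=0$; the loss vectors below are synthetic placeholders.
\begin{verbatim}
import numpy as np

def gamma_weight(cum_dv, cum_ds, tau):
    # gamma_t from the cumulative deviations of the two optimism choices
    a, b = np.exp(-tau * cum_dv), np.exp(-tau * cum_ds)
    return a / (a + b)

# toy run: ell_t is the surrogate loss vector, m^v_t the gradient-variation optimism, m^s_t = 0
tau, cum_dv, cum_ds = 0.05, 0.0, 0.0
rng = np.random.default_rng(0)
for t in range(10):
    ell_t = rng.normal(size=4)
    m_v_t = ell_t + 0.1                 # pretend the gradient-variation optimism is accurate
    gamma_t = gamma_weight(cum_dv, cum_ds, tau)
    cum_dv += np.sum((ell_t - m_v_t) ** 2)
    cum_ds += np.sum(ell_t ** 2)
\end{verbatim}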
In Theorem \ref{best-meta} we show the guarantee of the meta-algorithm of \textsc{Radar}\textsubscript{b}\xspace and postpone proof details of this section to Appendix \ref{app:radarb}.
\begin{thm}\label{best-meta}
Setting learning rates $\tau=\frac{1}{8NG^2D^2}$ and
\begin{equation*}
\begin{split}
\textstyle\beta=\min\left\{\sqrt{\frac{2+\ln N}{N(D^2\min\{3(V_T+G^2),\bar{F}_T\}+8G^2D^2\ln 2)} }, \frac{1}{\sqrt{12(D^4L^2+D^2G^2{\zeta}^2)}}\right\}.
\end{split}
\end{equation*}
Suppose all losses are $L$-gsc-smooth and non-negative on $\mathcal{M}$. Under Assumptions \ref{hada}, \ref{diam}, \ref{grad}, the regret of the meta-algorithm satisfies
$$
\textstyle\sum_{t=1}^T f_t(\mathbf{x}_t)-\sum_{t=1}^T f_t(\mathbf{x}_{t,i})=O\left(\sqrt{\ln T\min\{V_T,\bar{F}_T\}}\right)
$$
where $N=N^v+N^s$ and $\bar{F}_T=\sum_{t=1}^T f_t(\mathbf{x}_t)$.
\end{thm}
Finally, we show the regret of \textsc{Radar}\textsubscript{b}\xspace is bounded by the smaller of two problem-dependent bounds as follows.
\begin{thm}\label{best-reg}
Suppose all losses are $L$-gsc-smooth and non-negative on $\mathcal{M}$, and improper learning is allowed. Under Assumptions \ref{hada}, \ref{diam}, \ref{grad}, if we set the set of candidate step sizes as
\begin{equation}\label{radar_b}
\mathcal{H}=\mathcal{H}^v\cup\mathcal{H}^s,
\end{equation}
where $\mathcal{H}^v$ and $\mathcal{H}^s$ are the sets of step sizes in Theorem \ref{var-reg} with $N=N^v$ and in Theorem \ref{sml-reg} with $N=N^s$, respectively, then Algorithm \ref{alg:meta_b} satisfies
{\small $$
\text{D-Regret}_T= O\left(\sqrt{\zeta( P_T(\zeta+1/\delta^2)
+B_T+1)(1+P_T)+\ln T\cdot B_T}\right)
$$}
where $B_T\coloneqq\min\{V_T,F_T\}$.
\end{thm}
\begin{remark}
Compared to the result in \cite{zhao2020dynamic}, the bound in Theorem \ref{best-reg} has an additional $\sqrt{\ln T}$ factor, which comes from our construction of hedging two experts. It would be interesting to remove this dependence.
\end{remark}
\section{Conclusion}
In this paper, we consider adaptive online learning on Riemannian manifolds. Equipped with the idea of learning with multiple step sizes and optimistic mirror descent, we derive a series of no-regret algorithms that adapt to quantities reflecting the intrinsic difficulty of the online learning problem in different aspects. In the future, it would be interesting to study adaptive online learning on manifolds in the proper setting and in the bandit setting.
\bibliographystyle{icml2021}
|
{
"arxiv_id": "2302.08651",
"language": "en",
"timestamp": "2023-02-20T02:05:00",
"url": "https://arxiv.org/abs/2302.08651",
"yymm": "2302"
} | \section{Introduction}
Recent works \citep{Luo:2019iby,Luo:2021zpi} point to possible relations between Perelman's formalism of the Ricci flow and some fundamental problems in quantum spacetime and quantum gravity, for instance, the trace anomaly and the cosmological constant problem. Perelman's seminal work (Section 5 of \citep{perelman2002entropy}) and its further development by Li \citep{2012Perelman,2013arXiv1303.5193L} also suggest deep relations between the Ricci flow and thermodynamics, not only the irreversible non-equilibrium thermodynamics but also the thermal equilibrium thermodynamics of some underlying microscopic system. In \citep{perelman2002entropy} Perelman also introduced a partition function alongside his functionals without specifying what the underlying microscopic ensemble really is (in physics). So far it is not clear whether these beautiful thermodynamic analogies are physical or pure coincidences. On the other hand, inspired by the surprising analogies between black holes and thermodynamic systems, it is generally believed that a black hole possesses a temperature and an entropy. Works along this line have also shown, in many respects, that gravitational systems are profoundly related to thermodynamic systems (see the recent review \citep{Page:2004xp} and references therein), and it is generally conjectured that there exists some underlying statistical theory for the microscopic quantum degrees of freedom of gravity. This has gradually become one of the touchstones for a theory of quantum gravity.
The motivations of this paper are, first, to propose an underlying statistical fields theory for Perelman's seminal thermodynamic analogies in his formalism of the Ricci flow, and second, to understand the possible microscopic origin of spacetime thermodynamics, especially for the Schwarzschild black hole. We hope the paper pushes forward the understanding of the possible interplay between Perelman's mysterious formalism of the Ricci flow and quantum spacetime and gravity. To our knowledge, several tentative works have been devoted to this goal, see e.g. \citep{Shu:2006bk,Headrick:2006ti,Samuel:2007ak,Samuel:2007zz}, but frankly speaking, the physical picture underlying the Ricci flow remains unclear as long as a fundamental physical theory underlying the Ricci flow and a fundamental theory of quantum spacetime are lacking.
Based on our previous works \citep{Luo2014The,Luo2015Dark,Luo:2015pca,Luo:2019iby,Luo:2021wdh,Luo:2021zpi,Luo:2022goc,Luo:2022ywl} on the quantum reference frame and its relation to Perelman's formalism of the Ricci flow, we propose a statistical fields theory of the quantum reference frame as a possible underlying theory of Perelman's seminal analogies between his geometric functionals and thermodynamic functions. In Section II, we review the theory of the quantum reference frame based on a $d=4-\epsilon$ quantum non-linear sigma model; at the Gaussian approximation of the quantization, we obtain a density matrix of the frame fields system as a physical foundation for the statistical interpretation of the theory. The induced Ricci flow of the frame fields and the Ricci-DeTurck flow of the frame fields associated with the density matrix are deduced. In Section III, we discuss the diffeomorphism and the related trace anomaly of the quantum frame fields theory and their profound implications for the irreversible non-equilibrium thermodynamics of the frame fields, for instance, the statistical entropy and an analogous H-theorem of the frame fields, and for the effective gravity theory at the cosmic scale (especially the emergence of the cosmological constant). In Section IV, the thermal equilibrium state of the frame fields as a flow-limit configuration (the Gradient Shrinking Ricci Soliton) is discussed, in which the density matrix recovers the thermal equilibrium canonical ensemble density; this section gives a physical foundation to Perelman's seminal thermodynamic analogies. In Section V, the framework gives a possible microscopic understanding of the thermodynamics of the Schwarzschild black hole. Finally, we summarize the paper and give conclusions in Section VI.
\section{Quantum Reference Frame}
The reference frame is one of the most fundamental notions in physics. Whenever a measurement in physics is performed or described, a reference frame is explicitly or implicitly used. In classical physics, the reference frame is used in an idealized way via classical rulers and clocks that label the spacetime coordinates, which are classical, external, and rigid without any fluctuation. Even in textbook quantum mechanics or quantum fields theory, the spacetime coordinates are still classical. But quantum principles tell us that all physical measuring instruments, including rulers and clocks, are inescapably subject to quantum fluctuations. Such an idealized and classical treatment of the reference frame works reasonably well in quantum mechanics and quantum fields theory. To a large extent, this is because general coordinate transformations and gravitational effects are not seriously taken into account. As expected, when the quantum principles are seriously applied to spacetime itself and to gravitational phenomena, severe difficulties arise, e.g. information loss (non-unitarity), the diffeomorphism anomaly and the cosmological constant problem, etc.
The quantum reference frame is a recurring theme in the literature (for an incomplete list, see e.g. \citep{1984PhRvD..30..368A,rovelli1991quantum,dickson2004view,2007Reference,2007Fundamental,angelo2011physics,Flaminia2019Quantum,Hoehn:2020epv} and references therein), based on various physical motivations, from quantum foundations to quantum information and quantum communication, to quantum gravity. For example, in Ref.\citep{1984PhRvD..30..368A}, the author suggests a general relation between superselection rules and the lack of a reference frame. Ref.\citep{2007Reference} shows, more practically, that extra assumptions about superselection rules cannot be avoided from the viewpoint of quantum information and quantum communication theory if local observers do not share common information about their relative phase, Cartesian frames, etc. These extra assumptions about superselection rules may also be viewed as a weakness of textbook quantum mechanics, which can be overcome by introducing an appropriate quantum reference frame. Many models of the quantum reference frame (e.g. \citep{rovelli1991quantum,angelo2011physics,Flaminia2019Quantum}) and relational descriptions of the quantum system and the quantum reference frame as a whole have been suggested in the study of quantum foundations. In recent works \citep{Hoehn:2020epv} and the references therein, the authors review three approaches to relational quantum dynamics (relational Dirac observables, the Page-Wootters formalism and quantum deparameterizations) and suggest their equivalence. Other authors focus on the possible role of the quantum reference frame in decoherence in quantum gravity \citep{Poulin:2005dn,2007Fundamental}. Certainly, this list of works is far from complete, and a full review is beyond the scope of the present paper.
Fundamentally, our work shares a similar philosophical viewpoint on the role of the quantum reference frame in quantum mechanics: an appropriate materialized (but idealized) reference frame obeying the same laws of quantum mechanics must be taken into account, and in the full quantum theory a relational description based on the entanglement of a quantum system and the quantum reference frame as a whole must play a fundamental role. However, there are some differences from the past literature that we consider more important. First, we do not simply treat the quantum clock as a quantum mechanical system (\citep{Flaminia2019Quantum,Hoehn:2020epv}), which is simpler and has fewer degrees of freedom to deal with, as discussed in most of the quantum reference frame literature; in fact, our early works (\citep{Luo2014The,Luo2015Dark}) also started from an operational treatment of the quantum clock to obtain some general conclusions on the vacuum energy and the cosmological constant problem. Instead, in this paper we put both the quantum space-rod and the clock-time on an equal footing in the framework of quantum statistical fields, which makes the theory more suitable for incorporating gravity, under the assumption of a quantum version of the equivalence principle. In our understanding, the quantum clock can be viewed as a first-step model and is far from a full theory. Second, based on the quantum spacetime reference frame model (i.e. the $d=4-\epsilon$ non-linear sigma model), our paper does not work with genuinely relational quantities from the very beginning (as most of the literature tends to announce); rather, we prepare the quantum frame fields of reference in a laboratory frame (the $d=4-\epsilon$ base spacetime of the non-linear sigma model) as the starting reference, and quantum events are then described relative to the prepared quantum frame fields. In this sense, the framework equivalently assumes the existence of an external, classical and rigid (free from quantum fluctuation and volume fixed) reference frame playing the role of the laboratory frame, since the non-linear sigma model allows us to assign a quantum state of the spacetime reference (the target spacetime) to the base spacetime to arbitrary precision. It can, however, be easily verified that the theory is independent of the laboratory frame (metric, signature, etc.) in the non-linear sigma model. The notion of an external and classical laboratory frame is just for convenience, since a quantum statistical fields theory is historically (and perhaps most appropriately) defined on an inertial frame (flat spacetime). The relational quantities describing the relation between the quantum system and the quantum spacetime reference system are thus essential in the framework. Third, also because of the base spacetime independence of the non-linear sigma model, whose Hamiltonian is trivial, the theory of the spacetime reference frame is more properly quantized by the path integral or functional method rather than by operator methods (e.g. the relational Dirac observables quantization or the relational Schrodinger picture in the Page-Wootters formalism). Fourth, there is a fundamentally non-unitary relation between two spacetime reference frames under a coordinate transformation, due to an irreversible Ricci flow of the spacetime reference frame, unlike most approaches in which the coordinate transformation between different reference frames is assumed to be unitary. This is considered a key ingredient of the quantum spacetime reference frame, which is intrinsically ensemble-statistical and thermal.
Generally speaking, our approach follows the general philosophy of the quantum reference frame but is independent of the details of the past literature. The framework is associated with several elegant physical and mathematical structures that are not discussed in the past literature, such as the non-linear sigma model, the Shannon entropy, the Ricci flow and density Riemannian geometry. Our previous works \citep{Luo2014The,Luo2015Dark,Luo:2015pca,Luo:2019iby,Luo:2021wdh,Luo:2021zpi,Luo:2022goc,Luo:2022ywl} have revealed very rich consequences of the framework (e.g. the accelerating expansion of the late-epoch universe, the cosmological constant, the diffeomorphism anomaly, the inflationary early universe, local conformal stability and non-collapsibility, modified gravity, etc.), but frankly speaking, the possible consequences of the quantum reference frame are still far from fully explored. The main motivation here for a quantum treatment of the reference frame system is that it might form a foundation for constructing a theory of quantum spacetime and quantum gravity, analogous to the way the classical reference frame is used to construct classical general relativity, and that it is crucial for understanding the microscopic origin of spacetime thermodynamics.
\subsection{Definition}
In this section, we propose a quantum fields theory of the reference frame as a starting point for studying a quantum theory of spacetime and quantum gravity, based on an Equivalence Principle extended to reference frames described by quantum states (discussed via a paradox in Section V-B and in the conclusion of the paper). The generalization of the Equivalence Principle to the quantum level might form another foundation for a quantum reference frame and quantum gravity. How the Equivalence Principle behaves at the quantum level has been discussed extensively and has a long history (e.g. \citep{1980The,1983Is,1994On,2010arXiv1011.3719G,2019arXiv190301289H} and references therein, and \citep{2013arXiv1308.6289K,2016arXiv161001968S} for an extended thermal version). The Equivalence Principle is the physical foundation for measuring spacetime with a physical material reference frame even at the quantum level, and it is the bridge between the geometric curved spacetime and gravity; hence gravity is simply a relational phenomenon in which the motion of a test particle in a gravitational field is manifested as a relative motion w.r.t. the (quantum) material reference frame. Without the Equivalence Principle, we would lose the physical foundation of all these concepts. Therefore, the basic argument of the paper is that several pieces of support (e.g. the uniform quantum origin of the accelerating expansion of the universe posited in our previous works \citep{Luo:2015pca,Luo:2019iby,Luo:2021zpi}, and the consistent incorporation of the thermodynamics of spacetime shown in this work), together with the self-consistency of the framework, are all possible evidence for the validity of the quantum reference frame.
In this framework, the to-be-studied quantum system, described by a state $|\psi\rangle$, and the spacetime reference system, described by $|X\rangle$, are both quantum. The state of the whole system is given by an entangled state
\begin{equation}
|\psi[X]\rangle=\sum_{ij}\alpha_{ij}|\psi\rangle_{i}\otimes|X\rangle_{j}\label{eq:entangle}
\end{equation}
in their direct product Hilbert space $\mathcal{H}_{\psi}\otimes\mathcal{H}_{X}$. The state (\ref{eq:entangle}) of the to-be-studied system and the reference frame system is an entangled state rather than a trivial direct product state, for the reason of calibration between them. Usually, a quantum measurement is performed as follows. In the preparation step of a quantum measurement, a one-to-one correlation between a quantum system $|\psi\rangle_{i}$ and a reference system $|X\rangle_{j}$ (a quantum instrument or ruler) is prepared; this is called calibration. In the usual sense, this step is a comparison and adjustment of the measuring instrument $|X\rangle_{j}$ against a calibration standard $|\psi_{standard}\rangle_{i}$ which is physically similar to the to-be-studied system, $|\psi\rangle_{i}\doteq|\psi_{standard}\rangle_{i}$. A well-calibrated entangled state $\sum_{ij}\alpha_{ij}|\psi_{standard}\rangle_{i}\otimes|X\rangle_{j}$ can then be used to measure the to-be-studied system $|\psi\rangle_{i}$ with reference to the quantum instrument $|X\rangle_{j}$. In essence, the measurement indirectly performs a comparison between $|\psi\rangle_{i}$ and the fiducial state $|\psi_{standard}\rangle_{i}$. The entangled state $|\psi[X]\rangle$ is thus a superposition of all possible one-to-one correlations. According to the standard Copenhagen interpretation of a quantum state, the to-be-studied quantum system collapses into a state $|\psi\rangle_{i}$ together with the collapse of the quantum reference system into the corresponding $|X\rangle_{j}$ with the joint probability $|\alpha_{ij}|^{2}$, meaning that when the state of the quantum instrument is read out to be $|X\rangle_{j}$, the to-be-studied system is inferred to be in the corresponding $|\psi\rangle_{i}$. A simple and practical example is the Stern-Gerlach experiment (see \citep{Luo:2019iby}). The entangled state generalizes the textbook quantum description of a state $|\psi(x)\rangle$ w.r.t. an idealized parameter $x$ of a classical reference system free from quantum fluctuations (in quantum mechanics $x$ is the Newtonian time, in quantum fields theory $x_{a}$ are the Minkowskian spacetime coordinates).
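A toy numerical illustration (ours, not from the paper) of this relational reading of the entangled state: the joint outcome $(i,j)$ occurs with probability $|\alpha_{ij}|^{2}$, and reading the reference in $|X\rangle_{j}$ infers the system state with the conditional probability $p(i|j)$; the amplitudes below are arbitrary placeholders.
\begin{verbatim}
import numpy as np

alpha = np.array([[0.6, 0.1],
                  [0.1, 0.6]], dtype=complex)
alpha /= np.linalg.norm(alpha)          # normalize so that sum_ij |alpha_ij|^2 = 1

p_joint = np.abs(alpha) ** 2            # joint probability |alpha_ij|^2
p_ref = p_joint.sum(axis=0)             # probability of reading the reference in |X_j>
p_sys_given_ref = p_joint / p_ref       # p(i | j): inferred state of the system
print(p_joint, p_ref, p_sys_given_ref, sep="\n")
\end{verbatim}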
The entangled state $|\psi[X]\rangle$ is inseparable, so the state can only be interpreted in a relational manner, i.e. the entangled state describes the ``relation'' between $|\psi\rangle$ and $|X\rangle$, rather than each absolute state individually. The individual state $|\psi\rangle$ has physical meaning only with reference to the $|X\rangle$ entangled with it. When quantum mechanics is reformulated on this new foundation of the relational quantum state (the entangled state), describing the ``relation'' between the state of the to-be-studied quantum system and the state of the quantum reference system, a gravitational theory is automatically contained in the quantum framework without extra assumptions.
Since the state of reference $|X\rangle$ is also subject to quantum fluctuations, mathematically speaking, the state $|\psi[X]\rangle$ can be seen as the state $|\psi(x)\rangle$ with smeared spacetime coordinates, instead of the textbook state $|\psi(x)\rangle$ with definite and classical spacetime coordinates. The state $|\psi[X]\rangle$ recovers the textbook state $|\psi(x)\rangle$ only when the quantum fluctuation of the reference system is small enough to be ignored. More precisely, the 2nd order central moment (and higher order central moments) fluctuations of the spacetime coordinate $\langle\delta X^{2}\rangle$ (the variance) can be ignored compared with its 1st order moment of the quadratic distance $\langle\Delta X\rangle^{2}$ (the squared mean), where $\langle...\rangle$ represents the quantum expectation value in the state of the reference system $|X\rangle$. In this 1st order approximation, the quantum framework recovers the standard textbook quantum mechanics without gravity. When the quantum fluctuation $\langle\delta X^{2}\rangle$, as the 2nd order correction of the reference frame system, is important and taken into account, gravity emerges in the quantum framework as a next-order effect, as if one had introduced gravitation into the standard textbook quantum mechanics; details are given below and in our previous works.
To find the state $|X\rangle\in\mathcal{H}_{X}$ of the quantum reference system, a quantum theory of the reference frame must be introduced. If the quantum spacetime reference frame $|X^{\mu}\rangle$ ($\mu=0,1,2,...D-1$) itself is considered as the to-be-studied quantum system, w.r.t. the fiducial lab spacetime $|x_{a}\rangle$ as the reference system, ($a=0,1,2,...d-1$), the entangled state $|X(x)\rangle=\sum_{ij}\alpha_{ij}|X\rangle_{i}\otimes|x\rangle_{j}$ can be constructed by a mapping between the two states, i.e. $|x\rangle\rightarrow|X\rangle$. From the mathematical point of view, to define a $D$-dimensional manifold we need to construct a non-linear differentiable mapping $X(x)$ from a local coordinate patch $x\in\mathbb{R}^{d}$ to a $D$-manifold $X\in M^{D}$. In physics, the mapping is usually realized by a kind of fields theory for $X(x)$, the non-linear sigma model (NLSM) \citep{gell1960axial,friedan1980nonlinear,Friedan1980,zinn2002quantum,codello2009fixed,percacci2009asymptotic,ketov2013quantum,2015Non}
\begin{equation}
S[X]=\frac{1}{2}\lambda\sum_{\mu,\nu=0}^{D-1}\int d^{d}xg_{\mu\nu}\sum_{a=0}^{d-1}\frac{\partial X^{\mu}}{\partial x_{a}}\frac{\partial X^{\nu}}{\partial x_{a}},\label{eq:NLSM}
\end{equation}
where $\lambda$ is a constant with dimension of energy density $[L^{-d}]$
taking the value of the critical density (\ref{eq:critical density})
of the universe.
In the action, $x_{a}$ $(a=0,1,2,...,d-1)$, with dimension of length $[L]$, is called the base space in NLSM terminology, representing the coordinates of the local patch. It will be interpreted as the lab wall-and-clock frame serving as the starting reference, which is considered fiducial and classical with infinite precision. Since a quantum fields theory must be formulated in a classical inertial frame, i.e. flat Minkowskian or Euclidean spacetime, the base space is considered flat. Without loss of generality, we take the base space to be Euclidean, i.e. $x\in\mathbb{R}^{d}$, which is better defined when one tries to quantize the theory. The differentiable mapping $X_{\mu}(x)$, $(\mu=0,1,2,...,D-1)$, with dimension of length $[L]$, gives the coordinates of a general Riemannian or Lorentzian manifold $M^{D}$ (depending on the boundary condition) with curved metric $g_{\mu\nu}$, called the target space in NLSM terminology. We will work with real-valued coordinates for the target spacetime, and the Wick rotated version has been included in the general coordinate transformation of the time component. In the language of quantum fields theory, $X_{\mu}(x)$ or $X^{\mu}(x)=\sum_{\nu=0}^{D-1}g^{\mu\nu}X_{\nu}(x)$ are real scalar frame fields. Here, if not specifically mentioned, we use the Einstein summation convention to sum over an index variable that appears twice (a Latin index for the lab frame running from 0 to $d-1$ and a Greek index for the spacetime running from 0 to $D-1$), dropping the summation sign.
From the physical point of view, the reference frame fields can be interpreted as a physical coordinate system built from particle/field signals, for instance, a multi-wire proportional chamber that measures the coordinates of an event in a lab. To build a coordinate system, we first need to orient, align and order the array of multi-wires with reference to the wall of the lab $x_{a}$, ($a=1,2,3$). The electron fields (ignoring the spin) in this array of multi-wires are considered the scalar frame fields. With reference to the wall of the lab, locating the position of an event requires receiving and reading at least three electron signals $X_{1},X_{2},X_{3}$ in three orthogonal directions. The location information can be extracted from the wave function of the electron fields, e.g. from phase counting or particle number counting. Usually we can consider the electrons in the wires to be free, and the field intensity not very large, so that the intensity can be seen as a linear function of the coordinates of the lab's wall, $X_{\mu}(x)=\sum_{a=1}^{3}e_{\mu}^{a}x_{a}$, $(\mu=1,2,3)$; for instance, $e_{\mu}^{a}=\delta_{\mu}^{a}$ is the intensity of the signals in each orthogonal direction, meaning that when the direction $\mu$ coincides with the lab's wall direction $a$, the intensity of the electron beam is 1, and otherwise it is 0. Similarly, one needs to read an extra electron signal $X_{0}$ to know when the event happens, with reference to the lab's clock $x_{0}$. Thus, the fields of these 3+1 electron signals can be given by
\begin{equation}
X_{\mu}(x)=\sum_{a=0}^{3}e_{\mu}^{a}x_{a},\quad(\mu=0,1,2,3).\label{eq:X=00003Dex}
\end{equation}
The intensity of the fields $e_{\mu}^{a}$ is in fact the vierbein,
describing a mapping from the lab coordinate $x_{a}$ to the frame
fields $X_{\mu}$.
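As a small numerical check (ours), for the linear solution (\ref{eq:X=00003Dex}) the induced metric of the target spacetime is $g_{\mu\nu}=\frac{\partial X_{\mu}}{\partial x_{a}}\frac{\partial X_{\nu}}{\partial x_{a}}=e_{\mu}^{a}e_{\nu}^{a}$, i.e. $g=ee^{T}$ in matrix form; the vierbein below is a hypothetical example chosen only for illustration.
\begin{verbatim}
import numpy as np

# a hypothetical vierbein e_mu^a (rows: mu, columns: a); the identity reproduces a flat metric
e = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.2],
              [0.0, 0.0, 0.0, 1.0]])
g = e @ e.T                              # induced metric g_{mu nu} = e_mu^a e_nu^a
print(g)
\end{verbatim}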
When the event happens at a long distance beyond the lab's scale, for instance, at the scale of the earth or the solar system, we can imagine that extrapolating the multi-wire chamber to such a long distance scale still seems fine, only replacing the electron beams in the wires by light beams. However, if the scale is much larger than the solar system, for instance, at the galactic or cosmic scale, when the signal travels over such a long distance and is read by an observer, the broadening of the light beam fields or other particle fields gradually becomes non-negligible. More precisely, the 2nd (or higher) order central moment fluctuations of the frame fields signals cannot be neglected, and the distance of the Riemannian/Lorentzian spacetime as a quadratic form must be modified by the 2nd moment fluctuation or variance $\langle\delta X^{2}\rangle$ of the coordinates
\begin{equation}
\left\langle \left(\varDelta X\right)^{2}\right\rangle =\langle\varDelta X\rangle^{2}+\langle\delta X^{2}\rangle.\label{eq:dx^2}
\end{equation}
A local distance element in spacetime is given by the local metric tensor at the point, so it is convenient to think of the location point $X$ as being fixed, and to interpret the variance of the coordinates as affecting only the metric tensor $g_{\mu\nu}$ at that point. As a consequence, the expectation value of the metric tensor $g_{\mu\nu}$ is corrected by the 2nd central moment quantum fluctuation of the frame fields
\begin{equation}
\langle g_{\mu\nu}\rangle=\left\langle \frac{\partial X_{\mu}}{\partial x_{a}}\frac{\partial X_{\nu}}{\partial x_{a}}\right\rangle =\left\langle \frac{\partial X_{\mu}}{\partial x_{a}}\right\rangle \left\langle \frac{\partial X_{\nu}}{\partial x_{a}}\right\rangle +\frac{1}{2}\frac{\partial^{2}}{\partial x_{a}^{2}}\left\langle \delta X_{\mu}\delta X_{\nu}\right\rangle =g_{\mu\nu}^{(1)}(X)-\delta g_{\mu\nu}^{(2)}(X),\label{eq:g=00003Dg(1)+dg(2)}
\end{equation}
where
\begin{equation}
g_{\mu\nu}^{(1)}(X)=\left\langle \frac{\partial X_{\mu}}{\partial x_{a}}\right\rangle \left\langle \frac{\partial X_{\nu}}{\partial x_{a}}\right\rangle =\langle e_{\mu}^{a}\rangle\langle e_{\nu}^{a}\rangle
\end{equation}
is the 1st order moment (mean value) contribution to the classical
spacetime. Due to the contribution of the 2nd order central moment
$\delta g_{\mu\nu}^{(2)}$ (the variance), the expectation value of
the metric generally tends to be curved and deformed; the longer the
distance scale, the more important the broadening of the frame fields,
so the spacetime geometry gradually deforms and flows at long distance scales.
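As an elementary illustration of the 1st moment contribution (a consistency
check rather than a new result), for the ideal readout $e_{\mu}^{a}=\delta_{\mu}^{a}$
of the lab instruments with negligible fluctuations one recovers the flat lab metric,
\begin{equation}
g_{\mu\nu}^{(1)}=\langle e_{\mu}^{a}\rangle\langle e_{\nu}^{a}\rangle=\sum_{a}\delta_{\mu}^{a}\delta_{\nu}^{a}=\delta_{\mu\nu},
\end{equation}
so any deviation of $\langle g_{\mu\nu}\rangle$ from $\delta_{\mu\nu}$ comes
either from a non-trivial mean intensity pattern $\langle e_{\mu}^{a}\rangle$
or from the 2nd moment broadening $\delta g_{\mu\nu}^{(2)}$.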
Since the classical solution (\ref{eq:X=00003Dex}) of the frame fields,
given by the vierbein, satisfies the classical equation of motion
of the NLSM, this provides a frame fields interpretation of the NLSM in a lab:
the base space of the NLSM is interpreted as the starting reference given by the
lab's walls and clock, and the frame fields $X(x)$ on the lab are the
physical instruments measuring the spacetime coordinates. In this
interpretation we take $d=4-\epsilon$, $(0<\epsilon\ll1)$ in
(\ref{eq:NLSM}), and $D=4$ is the least number of frame fields.
There are several reasons why $d$ is not precisely 4 but very close to
it in the quantum frame fields interpretation of the NLSM. First, $d$ must be
very close to 4 because, at the lab scale, this is simply our common
experience. Second, consider the entangled system $\mathcal{H}_{\psi}\otimes\mathcal{H}_{X}$
between the to-be-studied physical system and the reference frame
fields system; without loss of generality we take a scalar
field $\psi$ as the to-be-studied (matter) system, sharing the
common base space with the frame fields, so that the total action of the two
entangled systems is a direct sum of the individual actions
\begin{equation}
S[\psi,X]=\int d^{d}x\left[\frac{1}{2}\frac{\partial\psi}{\partial x_{a}}\frac{\partial\psi}{\partial x_{a}}-V(\psi)+\frac{1}{2}\lambda g_{\mu\nu}\frac{\partial X^{\mu}}{\partial x_{a}}\frac{\partial X^{\nu}}{\partial x_{a}}\right],
\end{equation}
where $V(\psi)$ is some potential of the $\psi$ field. The action can be
interpreted as that of a quantum field $\psi$ on general spacetime
coordinates $X$. Since both the $\psi$ field and the frame fields $X$
share the same base space $x$, they are here described with respect to the
lab spacetime $x$, as in textbook quantum field theory defined on the
inertial frame $x$. If we interpret the frame fields as the physical
general spacetime coordinates, the coordinates of the $\psi$ field must
be transformed from the inertial frame $x$ to the general coordinates $X$.
At the semi-classical level, i.e. the 1st moment approximation in which the
fluctuations of $X$ are ignored, this is simply a classical coordinate
transformation
\begin{align}
S[\psi,X]\overset{(1)}{\approx}S[\psi(X)] & =\int d^{4}X\sqrt{|\det g^{(1)}|}\left[\frac{1}{4}\left\langle g_{\mu\nu}^{(1)}\frac{\partial X^{\mu}}{\partial x_{a}}\frac{\partial X^{\nu}}{\partial x_{a}}\right\rangle \left(\frac{1}{2}g^{(1)\mu\nu}\frac{\delta\psi}{\delta X^{\mu}}\frac{\delta\psi}{\delta X^{\nu}}+2\lambda\right)-V(\psi)\right]\nonumber \\
& =\int d^{4}X\sqrt{|\det g^{(1)}|}\left[\frac{1}{2}g^{(1)\mu\nu}\frac{\delta\psi}{\delta X^{\mu}}\frac{\delta\psi}{\delta X^{\nu}}-V(\psi)+2\lambda\right],\label{eq:coupled gravity (1)}
\end{align}
in which $\overset{(1)}{\approx}$ stands for the 1st moment or semi-classical
approximation, and $\frac{1}{4}\left\langle g_{\mu\nu}^{(1)}\frac{\partial X^{\mu}}{\partial x_{a}}\frac{\partial X^{\nu}}{\partial x_{a}}\right\rangle =\frac{1}{4}\left\langle g_{\mu\nu}^{(1)}g^{(1)\mu\nu}\right\rangle =\frac{1}{4}D=1$
has been used. It is easy to see that at the semi-classical level, i.e.
keeping only the 1st moment of $X$ while ignoring the 2nd moment fluctuations,
the (classical) coordinate transformation reproduces
the scalar field action in the general coordinates $X$ up to a constant
$2\lambda$, with the derivative $\frac{\partial}{\partial x_{a}}$
formally replaced by the functional derivative $\frac{\delta}{\delta X^{\mu}}$.
$\sqrt{|\det g^{(1)}|}$ is the Jacobian determinant of the coordinate
transformation; note that the determinant requires the coordinate
transformation matrix to be square, so at the semi-classical
level $d$ must be very close to $D=4$. This is not necessarily
true beyond the semi-classical level, when the 2nd moment quantum
fluctuations become important. For instance, since $d$ is not merely a
fixed parameter but effectively an observable of the theory, at the quantum
level it need not even be an integer but could be effectively fractal.
That $d$ is not precisely 4 is due to quantum and topological reasons. To
see this, note that quantization depends on the homotopy
group $\pi_{d}(M^{D})$ of the mapping $X(x):\mathbb{R}^{d}\rightarrow M^{D}$.
If, for simplicity, the (Wick rotated) spacetime $M^{D}$ is taken to be topologically
$S^{D}$, the homotopy group is trivial for all
$d<D=4$; in other words, when $d<4$ the mapping $X(x)$ is
free from any unphysical singularities for topological reasons, and
the target spacetime is always mathematically well-defined.
However, the situation $d=4$ is a little subtle: since $\pi_{4}(S^{4})=\mathbb{Z}$
is non-trivial, the mapping may meet an intrinsic topological obstruction
and become singular, i.e. a singular spacetime configuration. Once
the quantum principle is taken into account this situation cannot
be avoided, and under its RG flow the spacetime may be deformed
into intrinsic singularities, rendering the theory ill-defined at the
quantum level and non-renormalizable (the RG flow does not converge). So at
the quantum level $d$ should not be precisely 4, and we have to assume
$d=4-\epsilon$ when the quantum principle applies, while at the classical
or semi-classical level taking $d=4$ poses no serious problem.
The above argument differs from the conventional simple power-counting
argument, which claims that the NLSM is perturbatively non-renormalizable
for $d>2$. This is not necessarily the case: numerical calculations support
that the NLSM at $d=3$ and $d=4-\epsilon$ is non-perturbatively renormalizable
and well-defined at the quantum level.
\subsection{Beyond the Semi-Classical Level: Gaussian Approximation}
Going beyond the semi-classical or 1st order moment approximation,
we need to quantize the theory at least to the next-to-leading order.
If the 2nd order central moment quantum fluctuations are taken as
the most important next-to-leading contribution (compared with
higher order moments), we call this the Gaussian approximation or 2nd
order central moment approximation, while all the higher order moments
are called non-Gaussian fluctuations. The latter may become important
near local singularities of the spacetime where local phase transitions
happen, although intrinsic global singularities can be avoided
by keeping the global homotopy group trivial.
In the Gaussian approximation, $\delta g_{\mu\nu}^{(2)}$ can be obtained
from a perturbative one-loop calculation \citep{codello2009fixed,percacci2009asymptotic}
of the NLSM, provided it is small compared with $g_{\mu\nu}^{(1)}$
\begin{equation}
\delta g_{\mu\nu}^{(2)}(X)=\frac{R_{\mu\nu}^{(1)}(X)}{32\pi^{2}\lambda}\delta k^{2},\label{eq:dg(2)}
\end{equation}
where $R_{\mu\nu}^{(1)}$ is the Ricci curvature of the 1st order
metric $g_{\mu\nu}^{(1)}$, and $k^{2}$ is the cutoff energy scale of
the Fourier components of the frame fields. The validity condition of the
perturbative calculation, $R^{(1)}\delta k^{2}\ll\lambda$, is also the
validity condition of the Gaussian approximation, which can be seen as
follows. It will be shown in a later section that $\lambda$ is nothing but
the critical density $\rho_{c}$ of the universe, $\lambda\sim O(H_{0}^{2}/G)$,
with $H_{0}$ the Hubble constant and $G$ Newton's constant. Thus, for pure
gravity with matter ignored, the condition $R^{(1)}\delta k^{2}\ll\lambda$
is equivalent to $\delta k^{2}\ll1/G$, which is reliable except where
local singularities develop and the Gaussian approximation fails.
The equation (\ref{eq:dg(2)}) is nothing but an RG equation, known
as the Ricci flow equation (for reviews see e.g. \citep{chow2004ricci,chow2006hamilton,topping2006lectures})
\begin{equation}
\frac{\partial g_{\mu\nu}}{\partial t}=-2R_{\mu\nu},\label{eq:ricci flow}
\end{equation}
with flow parameter $\delta t=-\frac{1}{64\pi^{2}\lambda}\delta k^{2}$,
which has dimension of length squared $[L^{2}]$; the flow continuously
deforms the spacetime metric, driven by its Ricci curvature.
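A standard worked example, quoted from the Ricci flow literature purely
as an illustration, is the round sphere $g_{\mu\nu}=r^{2}(t)\hat{g}_{\mu\nu}$,
with $\hat{g}$ the unit-sphere metric and $R_{\mu\nu}=(D-1)\hat{g}_{\mu\nu}$.
The Ricci flow equation then reduces to
\begin{equation}
\frac{dr^{2}}{dt}=-2(D-1),\qquad r^{2}(t)=r^{2}(0)-2(D-1)t,
\end{equation}
so the sphere shrinks uniformly toward the IR (increasing $t$) while existing
for all $t\in(-\infty,0)$, a simple instance of the ancient solutions mentioned below.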
Since the Ricci curvature is non-linear in the metric, the Ricci flow
equation is a non-linear version of a heat equation for the metric,
and flowing along $t$ introduces an averaging or coarse-graining process
into the intrinsically non-linear gravitational system, which is highly non-trivial
\citep{carfora1995renormalization,piotrkowska1995averaging,carfora2008ricci,Zalaletdinov:2008ts,Paranjape:2009zu}.
In general, if the flow is free from local singularities, there exists
a long-flow-time solution on $t\in(-\infty,0)$, often called an
ancient solution in the mathematical literature. This range of the $t$-parameter
corresponds to $k\in(0,\infty)$, i.e. running from $t=-\infty$,
the short distance (high energy) UV scale $k=\infty$, forward to
$t=0$, the long distance (low energy) IR scale $k=0$. The metric
at a given scale $t$ is obtained by averaging out the shorter-distance
details, which produces an effective correction to the metric at that
scale. Thus along $t$ the manifold loses its shorter-distance information,
so the flow is irreversible, i.e. it generally has no backwards
solution, which is the underlying reason for the non-unitarity and the
existence of the entropy of a spacetime.
As shown in (\ref{eq:dx^2}) and (\ref{eq:g=00003Dg(1)+dg(2)}),
the 2nd order moment fluctuation modifies the local (quadratic) distance
of the spacetime, so the flow is non-isometric. This important
feature, worth stressing, is the underlying reason for the anomaly.
The non-isometry does not affect the topology, so along $t$ the
flow preserves the topology of the spacetime while its local metric,
shape and size (volume) change. There also exists a very special
solution of the Ricci flow called the Ricci Soliton, which only changes
the local volume while keeping the local shape. The Ricci Soliton, and
its generalized version, the Gradient Ricci Soliton, as flow limits,
generalize the notion of a fixed point in the sense of
the RG flow. The Ricci Soliton is an important notion for understanding
gravity at the cosmic scale and for studying the thermodynamics of
the Ricci flow at equilibrium.
The Ricci flow was initially introduced in the 1980s by Friedan \citep{friedan1980nonlinear,Friedan1980}
in the $d=2+\epsilon$ NLSM and independently by Hamilton in mathematics
\citep{Hamilton1982Three,hamilton1986four}. The main motivation for
introducing it from the mathematical point of view was to classify
manifolds, a specific goal being to prove the Poincare conjecture. Hamilton
used it as a tool to gradually deform a manifold into a more
and more ``simple and good'' one whose topology can be readily
recognized in some simple cases. A general realization of the program
was achieved by Perelman around 2003 \citep{perelman2002entropy,perelman2003ricci,perelman307245finite},
who introduced several monotonic functionals to successfully deal
with the local singularities that may develop in more general
cases. The Ricci flow approach is powerful not only for compact
geometries (as Hamilton's and Perelman's seminal works have shown) but
also for non-compact \citep{1989Ricci,1989Deforming,2005Uniqueness}
and Lorentzian geometries \citep{2010arXiv1007.3397B,2011arXiv1106.2924B,Ruchin:2013azz,2014arXiv1403.4400B,2019arXiv190703541B,2020AnPhy.42368333B,2020EPJC...80..639V,Luo:2022goc}.
\subsection{The Wavefunction and Density Matrix at the Gaussian Approximation}
So far we have not explicitly defined the quantum state of the reference
frame $|X\rangle$ in (\ref{eq:entangle}). In fact, the previous
(2nd order) results, e.g. (\ref{eq:g=00003Dg(1)+dg(2)}) and (\ref{eq:dg(2)}),
and hence the Ricci flow (\ref{eq:ricci flow}), can equivalently
be obtained from the expectation value $\langle O\rangle=\langle X|O|X\rangle$
by explicitly writing down the wavefunction $\Psi(X)$ of the frame
fields in the Gaussian approximation. Note that at the semi-classical
level the frame field $X$ is delta-distributed, peaked at
its mean value; furthermore, the action of the NLSM resembles
a collection of harmonic oscillators, so at the Gaussian approximation
level a finite Gaussian width, i.e. a 2nd moment fluctuation of $X$, must be
introduced. When one canonically quantizes the NLSM
at the Gaussian approximation level, the fundamental solution of the
wave function(al) (as a functional of the frame fields $X^{\mu}$)
of the NLSM takes the Gaussian form, i.e. a coherent state
\begin{equation}
\Psi[X^{\mu}(x)]=\frac{1}{\sqrt{\lambda}(2\pi)^{D/4}}\frac{\left|\det\sigma_{\mu\nu}\right|^{1/4}}{|\det g_{\mu\nu}|^{1/4}}\exp\left[-\frac{1}{4}\left|X^{\mu}(x)\sigma_{\mu\nu}(x)X^{\nu}(x)\right|\right],
\end{equation}
where the covariance matrix $\sigma_{\mu\nu}(x)$, playing the role
of the Gaussian width, is the inverse of the 2nd order central moment
fluctuations of the frame fields at the point $x$
\begin{equation}
\sigma_{\mu\nu}(x)=\frac{1}{\sigma^{\mu\nu}(x)}=\frac{1}{\left\langle \delta X^{\mu}(x)\delta X^{\nu}(x)\right\rangle },
\end{equation}
which is also given by a perturbative one-loop calculation, up to a diffeomorphism
of $X$. The absolute value $\left|X^{\mu}\sigma_{\mu\nu}X^{\nu}\right|$
in the exponential is used to guarantee that the quadratic form, and hence
the determinant of $\sigma_{\mu\nu}$ induced by the Gaussian integral
over $X$, remains positive even in the Lorentzian signature.
We can also define a dimensionless density matrix corresponding to
the fundamental solution of the wavefunction
\begin{equation}
u[X^{\mu}(x)]=\Psi^{*}(X)\Psi(X)=\frac{1}{\lambda(2\pi)^{D/2}}\frac{\sqrt{\left|\det\sigma_{\mu\nu}\right|}}{\sqrt{|\det g_{\mu\nu}|}}\exp\left[-\frac{1}{2}\left|X^{\mu}(x)\sigma_{\mu\nu}X^{\nu}(x)\right|\right],\label{eq:u}
\end{equation}
and $\frac{1}{\lambda(2\pi)^{D/2}}\frac{\sqrt{\left|\det\sigma_{\mu\nu}\right|}}{\sqrt{|\det g_{\mu\nu}|}}$
is a normalization parameter, so that
\begin{equation}
\lambda\int d^{D}X\Psi^{*}(X)\Psi(X)=\lambda\int d^{D}Xu(X)=1,\label{eq:u-normalization}
\end{equation}
in which we often attribute the flow of the volume form $d^{D}X_{t}$
to the flow of the metric $g_{t}$, since the volume element is $d^{D}X_{t}\equiv dV_{t}(X^{\mu})\equiv\sqrt{|g_{t}|}dX^{0}dX^{1}dX^{2}dX^{3}$.
The expectation values $\langle O\rangle$ can then be understood
as $\lambda\int d^{D}X_{t}uO$. Since the quantum frame fields $X$ are
q-numbers in the theory, strictly speaking their integral should,
in principle, be a functional integral. Here the formal c-number
integral $\int d^{D}X_{t}...$ follows the convention of
the Ricci flow literature, in which $X$ is a coarse-grained c-number
coordinate of the manifold at scale $t$. The exact functional integral
over $X$ is considered in calculating the partition function and the related
anomaly of the theory in section III.
Under a diffeomorphism of the metric, the transformation of $u(X)$
is given by a diffeomorphism of the covariance matrix ($h$ being a certain
function)
\begin{equation}
\sigma_{\mu\nu}\rightarrow\hat{\sigma}_{\mu\nu}=\sigma_{\mu\nu}+\nabla_{\mu}\nabla_{\nu}h.
\end{equation}
So there is an arbitrariness in the density $u(X)$ corresponding to different
choices of diffeomorphism/gauge.
According to the statistical interpretation of the wavefunction with the
normalization condition (\ref{eq:u-normalization}), $u(X^{0},X^{1},X^{2},X^{3})$
describes the probability density of finding these frame particles
in the volume $dV_{t}(X^{\mu})$. As the spacetime $X$ flows along
$t$, the volume $\Delta V_{t}$ over which the density is averaged also
flows, so the density at the corresponding scale is coarse-grained.
If we take the volume of the lab, i.e. the base space, to be rigid
and fixed by $\lambda\int d^{4}x=1$, then by noting (\ref{eq:u-normalization})
we have
\begin{equation}
u[X^{\mu}(x),t]=\frac{d^{4}x}{d^{D}X_{t}}=\lim_{\Delta V_{t}\rightarrow0}\frac{1}{\Delta V_{t}}\int_{\Delta V_{t}}1\cdot d^{4}x.\label{eq:coarse-grain density}
\end{equation}
We can see that the density $u(X,t)$ can be interpreted as a coarse-grained
density in the volume element $\Delta V_{t}\rightarrow0$ relative to a
fine-grained unit density in the lab volume element $d^{4}x$ at the UV,
$t\rightarrow-\infty$.
In this sense, the coarse-grained density $u(X,t)$ is analogous
to Boltzmann's distribution function, so it should satisfy an
analogous irreversible Boltzmann equation and give rise to an
analogous monotonic Boltzmann H-functional. In the following sections,
we deduce this equation and functional of $u(X,t)$. The
coarse-grained density $u(X,t)$ has profound physical and geometric
meaning; it plays a central role in analyzing the statistical physics
of the frame fields and generalizes the manifold to a density manifold.
\subsection{Ricci-DeTurck Flow}
In the previous subsection, from the viewpoint of the frame fields particles,
$u(X^{\mu},t)$ has a coarse-grained particle density interpretation;
eq.(\ref{eq:coarse-grain density}) can also be interpreted as
a manifold density \citep{2016arXiv160208000W} from the geometric
point of view. That is, $u(X,t)$ associates a manifold density
or density bundle to each point $X$ of a manifold and measures the
fuzziness of the ``point''. It is worth stressing that the manifold
density $u(X,t)$ is not simply a conformal rescaling of the metric
by this factor, since if it were, the integration measures of the $D=4$-volume
and the 3-volume in the expectation $\langle O\rangle=\lambda\int d^{D}XuO$
would scale by different powers. There are various useful generalizations
of the Ricci curvature to density manifolds; a widely accepted
version is the Bakry-Emery generalization \citep{1985Diffusions}
\begin{equation}
R_{\mu\nu}\rightarrow R_{\mu\nu}-\nabla_{\mu}\nabla_{\nu}\log u,
\end{equation}
which is also used in Perelman's seminal paper. The density normalized
Ricci curvature is bounded from below
\begin{equation}
R_{\mu\nu}-\nabla_{\mu}\nabla_{\nu}\log u\ge\sigma_{\mu\nu},\label{eq:density ric bound}
\end{equation}
if the density manifold has finite volume.
As a consequence, replacing the Ricci curvature by the density normalized
one, we get the Ricci-DeTurck flow \citep{deturck1983deforming}
\begin{equation}
\frac{\partial g_{\mu\nu}}{\partial t}=-2\left(R_{\mu\nu}-\nabla_{\mu}\nabla_{\nu}\log u\right),\label{eq:ricci-deturk}
\end{equation}
which is equivalent to the standard Ricci flow equation (\ref{eq:ricci flow})
up to a diffeomorphism. Mathematically, the Ricci-DeTurck flow has
the advantage that it turns out to be the gradient flow of certain monotonic
functionals introduced by Perelman, which have profound physical meanings,
as shown later.
Eqs.(\ref{eq:u-normalization}) and (\ref{eq:coarse-grain density})
also impose a volume constraint on the fiducial spacetime (the lab):
the coarse-grained density $u(X,t)$ cancels the flow of the volume
element $\sqrt{|\det g_{\mu\nu}|}$, so
\begin{align}
\frac{\partial}{\partial t}\left(u\sqrt{|\det g_{\mu\nu}|}\right) & =0.\label{eq:volume constraint}
\end{align}
Together with the Ricci-DeTurck flow equation (\ref{eq:ricci-deturk}),
we have the flow equation of the density
\begin{equation}
\frac{\partial u}{\partial t}=\left(R-\Delta_{X}\right)u,
\end{equation}
which is analogous to the irreversible Boltzmann equation for his
distribution function. $\Delta_{X}$ is the Laplacian operator with respect to
the manifold coordinates $X$. Note the minus sign in front
of the Laplacian: it is a backwards heat-like equation. Naively,
a solution of the backwards heat flow does not exist. However, one can
let the Ricci flow run to a certain IR scale
$t_{*}$, choose at $t_{*}$ an appropriate $u(t_{*})=u_{0}$
arbitrarily (up to a diffeomorphism gauge), and flow it backwards
in $\tau=t_{*}-t$ to obtain a solution $u(\tau)$ of the backwards
equation. Since the flow is considered free from global singularities
owing to the triviality of the homotopy group, we may simply choose
$t_{*}=0$ and define
\begin{equation}
\tau=-t=\frac{1}{64\pi^{2}\lambda}k^{2}\in(0,\infty).\label{eq:tau}
\end{equation}
In this case, the density satisfies the heat-like equation
\begin{equation}
\frac{\partial u}{\partial\tau}=\left(\Delta_{X}-R\right)u,\label{eq:u-equation}
\end{equation}
which does admit a solution along $\tau$, often called the conjugate
heat equation in mathematical literature.
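As a minimal check of (\ref{eq:u-equation}) in the flat limit $R=0$ (outside
the main case of interest, but fixing the normalization), a Gaussian density
of the form $u\propto\tau^{-D/2}e^{-|X|^{2}/(4\tau)}$, cf. (\ref{eq:u*}) below,
is just the standard heat kernel:
\begin{equation}
\frac{\partial u}{\partial\tau}=\left(\frac{|X|^{2}}{4\tau^{2}}-\frac{D}{2\tau}\right)u=\Delta_{X}u,
\end{equation}
so along $\tau$ the density indeed spreads out as a forward heat flow,
consistent with the coarse-graining picture.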
So far, by (\ref{eq:u-equation}) together with (\ref{eq:ricci-deturk}),
the mathematical problem of the Ricci flow of a Riemannian/Lorentzian
manifold has been transformed into the coupled equations
\begin{equation}
\begin{cases}
\frac{\partial g_{\mu\nu}}{\partial t}=-2\left(R_{\mu\nu}-\nabla_{\mu}\nabla_{\nu}\log u\right)\\
\frac{\partial u}{\partial\tau}=\left(\Delta_{X}-R\right)u\\
\frac{d\tau}{dt}=-1
\end{cases}
\end{equation}
and the manifold $(M^{D},g)$ is generalized to a density manifold
$(M^{D},g,u)$ \citep{Morgan2009Manifolds,2016arXiv160208000W,Corwin2017Differential}
with the constraint (\ref{eq:u-normalization}).
\section{The Anomaly and its Implications}
In the semi-classical approximation, see eq.(\ref{eq:coupled gravity (1)}),
when the quantum fluctuations of the frame fields or spacetime coordinates
are ignored, the general coordinate transformation is just a classical
coordinate transformation. We will show that when the quantum fluctuations
are taken into account in the general coordinate transformation beyond
the semi-classical approximation, a quantum anomaly emerges. As seen
in the previous section, the quantum fluctuation, and hence the coarse-graining
process of the Ricci flow, does not preserve the quadratic distance
of a geometry, see (\ref{eq:dx^2}) and (\ref{eq:g=00003Dg(1)+dg(2)}).
The non-isometry of the quantum fluctuation induces a breakdown of
diffeomorphism or general coordinate invariance at the quantum
level, namely the diffeomorphism anomaly. In this section we derive
the diffeomorphism anomaly of the theory and show its relation to the
Shannon entropy, whose monotonicity gives an analogous H-theorem for
the frame fields system and the Ricci flow. Furthermore, since the quantum
frame fields theory describes a quantum spacetime, together with the
generalized quantum Equivalence Principle the anomaly-induced effective
action in terms of the Shannon entropy can also be interpreted as
a gravity theory, which in the low energy expansion is a scale-dependent
Einstein-Hilbert action plus a cosmological constant. This part has
a certain overlap with the previous work \citep{Luo:2021zpi}; for the
self-containedness of the paper, we hope this section provides a general
background and lays the foundation for the subsequent thermodynamic
and statistical interpretation of the theory.
\subsection{Diffeomorphism at the Quantum Level}
First we consider the functional quantization of the pure frame fields
without explicitly incorporating the matter source. The partition
function is
\begin{equation}
Z(M^{D})=\int[\mathcal{D}X]\exp\left(-S[X]\right)=\int[\mathcal{D}X]\exp\left(-\frac{1}{2}\lambda\int d^{4}xg^{\mu\nu}\partial_{a}X_{\mu}\partial_{a}X_{\nu}\right),\label{eq:partition of NLSM}
\end{equation}
where $M^{D}$ is the target spacetime, and the base space can be
either Euclidean or Minkowskian. The action, like the
volume element $d^{4}x\equiv d^{4}x\det e$ ($\det e$ being a Jacobian),
does not pick up any imaginary factor $i$ whether the base space is
Minkowskian or Euclidean, since if one takes $dx_{0}^{(E)}\rightarrow idx_{0}^{(M)}$
then $\det e^{(E)}\rightarrow-i\det e^{(M)}$. Without loss of
generality we therefore use the Euclidean base spacetime in the following discussions,
and note that the result is the same in the Minkowskian case.
Note that a general coordinate transformation
\begin{equation}
X_{\mu}\rightarrow\hat{X}_{\mu}=\frac{\partial\hat{X}_{\mu}}{\partial X_{\nu}}X_{\nu}=e_{\mu}^{\nu}X_{\nu}
\end{equation}
does not change the action $S[X]=S[\hat{X}]$, but the measure of
the functional integral changes
\begin{align}
\mathcal{D}\hat{X} & =\prod_{x}\prod_{\mu=0}^{D-1}d\hat{X}_{\mu}(x)=\prod_{x}\epsilon_{\mu\nu\rho\sigma}e_{\mu}^{0}e_{\nu}^{1}e_{\rho}^{2}e_{\sigma}^{3}dX_{0}(x)dX_{1}(x)dX_{2}(x)dX_{3}(x)\nonumber \\
& =\prod_{x}\left|\det e(x)\right|\prod_{x}\prod_{a=0}^{D-1}dX_{a}(x)=\left(\prod_{x}\left|\det e(x)\right|\right)\mathcal{D}X,
\end{align}
where
\begin{equation}
\epsilon_{\mu\nu\rho\sigma}e_{\mu}^{0}e_{\nu}^{1}e_{\rho}^{2}e_{\sigma}^{3}=\left|\det e_{\mu}^{a}\right|=\sqrt{\left|\det g_{\mu\nu}\right|}
\end{equation}
is the Jacobian of the diffeomorphism. The Jacobian is nothing but
the local relative (covariant basis) volume element $dV(\hat{X}_{\mu})$
with respect to the fiducial volume $dV(X_{a})$. Note that the normalization
condition (\ref{eq:u-normalization}) also defines a fiducial volume
element $ud^{4}X\equiv udV(\hat{X}_{\mu})$, so the Jacobian is directly
related to the frame fields density matrix
\begin{equation}
u(\hat{X}_{\mu})=\frac{dV(X_{a})}{dV(\hat{X}_{\mu})}=\left|\det e_{a}^{\mu}\right|=\frac{1}{\left|\det e_{\mu}^{a}\right|}.\label{eq:volume form}
\end{equation}
Here the absolute value of the determinant is taken because the density
$u$ and the volume element are kept positive definite even in the
Lorentzian signature. Otherwise, in the Lorentzian signature one would have to
introduce an extra imaginary factor $i$ into (\ref{eq:parametrize u})
to maintain the condition (\ref{eq:u-normalization}). The density so
defined, subject to (\ref{eq:u-normalization}), is an explicit generalization
of the standard 3-space density to a 4-spacetime version. It is
this definition of the volume form and the manifold density that keeps
the formalism of the framework formally the same as Perelman's
standard form even in the Lorentzian signature. The manifold density
encodes the most important information of a Riemannian or Lorentzian
geometry, namely the local volume comparison.
In this case, if we parameterize a dimensionless solution $u$ of
the conjugate heat equation as
\begin{equation}
u(\hat{X})=\frac{1}{\lambda(4\pi\tau)^{D/2}}e^{-f(\hat{X})},\label{eq:parametrize u}
\end{equation}
then the partition function $Z(M^{D})$ is transformed to
\begin{align}
Z(\hat{M}^{D}) & =\int[\mathcal{D}\hat{X}]\exp\left(-S[\hat{X}]\right)=\int\left(\prod_{x}\left|\det e\right|\right)[\mathcal{D}X]\exp\left(-S[X]\right)\nonumber \\
& =\int\left(\prod_{x}e^{f+\frac{D}{2}\log(4\pi\tau)}\right)[\mathcal{D}X]\exp\left(-S[X]\right)\nonumber \\
& =\exp\left(\lambda\int d^{4}x\left[f+\frac{D}{2}\log(4\pi\tau)\right]\right)\int[\mathcal{D}X]\exp\left(-S[X]\right)\nonumber \\
& =\exp\left(\lambda\int_{\hat{M}^{D}}d^{D}Xu\left[f+\frac{D}{2}\log(4\pi\tau)\right]\right)\int[\mathcal{D}X]\exp\left(-S[X]\right).
\end{align}
Note that $N(\hat{M}^{D})$ in the exponential of the change of the
partition function
\begin{equation}
Z(\hat{M}^{D})=e^{\lambda N(\hat{M}^{D})}Z(M^{D})\label{eq:Z->Zhat}
\end{equation}
is nothing but a purely real Shannon entropy in terms of the density
matrix $u$
\begin{equation}
N(\hat{M}^{D})=\int_{\hat{M}^{D}}d^{D}Xu\left[f+\frac{D}{2}\log(4\pi\tau)\right]=-\int_{\hat{M}^{D}}d^{D}Xu\log u.
\end{equation}
The classical action $S[X]$ is invariant under the general coordinate
transformation or diffeomorphism, but the quantum partition function
is no longer invariant; this is called the diffeomorphism anomaly, meaning
a breakdown of diffeomorphism invariance at the quantum level. The diffeomorphism
anomaly is purely due to the quantum fluctuations and the Ricci flow of
the frame fields, which do not preserve the functional integral measure
and change the spacetime volume at the quantum level. The diffeomorphism
anomaly has many profound consequences for the theory of the quantum reference
frame, e.g. non-unitarity, the trace anomaly, the notion of entropy,
irreversibility, and the cosmological constant.
The non-unitarity is indicated by the purely real anomaly term, which
is induced by the non-isometry or volume change and, consequently,
by the non-invariance of the functional integral measure during the Ricci
flow. Because the volume form (\ref{eq:volume form}) is real-defined
for both the Euclidean and the Lorentzian signature, the purely real contribution
of the anomaly, and hence the non-unitarity, holds not only for
spacetimes with Euclidean but also with Lorentzian signature; it
is a rather general consequence of the Ricci flow of spacetime. Essentially,
the reason for the non-unitarity is that we have enlarged
the Hilbert space of the reference frame, from a rigid classical frame
to a fluctuating quantum frame. The non-unitarity implies the breakdown
of the fundamental Schrodinger equation, which is valid only with respect
to the classical time of an inertial frame and whose solution lives in
$\mathcal{H}_{\psi}$.
A fundamental equation playing the role of the Schrodinger equation,
in which any (quantum) physical system may be chosen arbitrarily as time
or reference frame, must be replaced in a certain sense by a Wheeler-DeWitt-like
equation \citep{Luo2014The}, whose solution lives instead
in $\mathcal{H}_{\psi}\otimes\mathcal{H}_{X}$. In this fundamental
equation, the quantum fluctuation of physical time and frame, and more
generally of a general physical coordinate system, necessarily breaks unitarity.
We know that in quantum field theory on a curved spacetime or in an accelerating
frame, the vacuum states of the quantum fields in different diffeomorphism-equivalent
coordinate systems are unitarily inequivalent. The Unruh
effect is a well known example: accelerating observers in the vacuum
measure a thermal bath of particles. The Unruh effect shows
how a general coordinate transformation (e.g. from an inertial to
an accelerating frame) leads to a non-unitary anomaly (particle
creation and hence particle number non-conservation), and how the
anomaly relates to a thermodynamic system (a thermal bath). In
fact, like the Unruh effect, the Hawking effect \citep{1977Trace}
and all non-unitary particle creation effects in a curved spacetime
or accelerating frame are related to the anomaly of a generally covariant
or gravitational system. All of this implies that the diffeomorphism anomaly
has a deep thermodynamic interpretation, which is the central issue
of the paper.
Without loss of generality, if we simply identify the untransformed
coordinates $X_{\mu}$ with the coordinates of the fiducial
lab $x_{a}$, which can be treated as classical parameter coordinates,
then the classical action of the NLSM is just a topological
invariant, namely half the dimension of the target spacetime
\begin{equation}
\exp\left(-S_{cl}\right)=\exp\left(-\frac{1}{2}\lambda\int d^{4}xg^{\mu\nu}\partial_{a}x_{\mu}\partial_{a}x_{\nu}\right)=\exp\left(-\frac{1}{2}\lambda\int d^{4}xg^{\mu\nu}g_{\mu\nu}\right)=e^{-\frac{D}{2}}.
\end{equation}
Thus the total partition function of the frame fields takes a simple
form
\begin{equation}
Z(\hat{M}^{D})=e^{\lambda N(\hat{M}^{D})-\frac{D}{2}}.\label{eq:frame-partition}
\end{equation}
\subsection{The Trace Anomaly}
The partition function is now non-invariant (\ref{eq:Z->Zhat}) under
diffeomorphism at the quantum level, so if one deduces the stress
tensor from $\langle\mathcal{T}_{\mu\nu}\rangle=-\frac{2}{\sqrt{|g|}}\frac{\delta\log Z}{\delta g^{\mu\nu}}$,
its trace $\langle g^{\mu\nu}\rangle\langle\mathcal{T}_{\mu\nu}\rangle=0$
differs from $\langle\mathcal{T}_{\mu}^{\mu}\rangle=\langle g^{\mu\nu}\mathcal{T}_{\mu\nu}\rangle$
\begin{equation}
\langle\Delta\mathcal{T}_{\mu}^{\mu}\rangle=\langle g^{\mu\nu}\rangle\langle\mathcal{T}_{\mu\nu}\rangle-\langle g^{\mu\nu}\mathcal{T}_{\mu\nu}\rangle=\lambda N(M^{D})\label{eq:trace anomaly}
\end{equation}
known as the trace anomaly. Cardy conjectured \citep{1988Is} that
in a $d=4$ theory quantities like $\langle\mathcal{T}_{\mu}^{\mu}\rangle$
could be a higher dimensional generalization of the monotonic Zamolodchikov
c-function of $d=2$ conformal theories, leading to the suggestion of
the a-theorem \citep{2011On} in $d=4$ and other proposals (e.g.
\citep{1993Quantum,Gaite:1995yg}). In the following subsections,
we will show that the Shannon entropy $N$ and its generalization $\tilde{N}$
are indeed monotonic, which may have further advantages, e.g. being suitable
for a Lorentzian target spacetime and for general $D$.
Note that the Shannon entropy $N(M^{D})$ can be expanded at small
$\tau$
\begin{equation}
\lambda N(\hat{M}^{D})=\lambda\sum_{n=0}^{\infty}B_{n}\tau^{n}=\lambda\left(B_{0}+B_{1}\tau+B_{2}\tau^{2}+...\right)\quad(\tau\rightarrow0).
\end{equation}
For $D=4$ the first few coefficients are
\begin{equation}
B_{0}=\lim_{\tau\rightarrow0}N=\frac{D}{2\lambda}\left[1+\log\left(\sqrt{\lambda}4\pi\tau\right)\right],
\end{equation}
\begin{equation}
B_{1}=\lim_{\tau\rightarrow0}\frac{dN}{d\tau}=\int_{\hat{M}^{4}}d^{4}X\sqrt{|g|}\left(R+\frac{D}{2\tau}\right),
\end{equation}
\begin{equation}
B_{2}=\lim_{\tau\rightarrow0}\frac{1}{2}\frac{d^{2}N}{d^{2}\tau}=-\int_{\hat{M}^{4}}d^{4}X\sqrt{|g|}\left|R_{\mu\nu}+\frac{1}{2\tau}g_{\mu\nu}\right|^{2},
\end{equation}
in which $B_{0}$ can be renormalized away, and a renormalized $B_{1}$
contributes to the effective Einstein-Hilbert action of gravity,
see subsection D below. $B_{2}$, as a portion of the full
anomaly, plays the role of the conformal/Weyl anomaly up to some total
divergence terms, for instance $\varDelta R$ terms and the Gauss-Bonnet
invariant. That is, a non-vanishing $B_{2}$ term measures the breakdown
of the conformal invariance of $M^{D=4}$; conversely, a vanishing
$B_{2}$ means that the manifold is a gradient steady Ricci soliton,
the fixed point of the Ricci-DeTurck flow, which preserves its shape
(conformal invariance) during the flow.
We note that $B_{2}$, as the only dimensionless coefficient, measures
the anomalous conformal modes, and in this sense $N(M^{D})$ indeed relates
to a certain entropy. However, since the conformal transformation is
just a special coordinate transformation, it is clear that
the single coefficient $B_{2}$ does not measure the total (general
coordinate transformation) anomalous modes. Obviously the theory
at $2<d=4-\epsilon$ is not conformally invariant, so as the theory
flows along $t$ the degrees of freedom are gradually coarse-grained
and the mode counting changes with the flow and
the scale; as a consequence, all the coefficients $B_{n}$ in the series,
and hence the total partition function $e^{\lambda N(M^{D})}$,
measure the total anomalous modes at a given scale $\tau$, leading
to the full entropy and anomaly.
This is different from classically conformally invariant theories, e.g.
string theory, in which one only needs to cancel a single scale-independent
$B_{k}$ coefficient in order to avoid the conformal anomaly. As the theory
above 2 dimensions is not conformally invariant, the full scale-dependent
anomaly $N(M^{D})$ has to be canceled at a given scale. Fortunately,
it will be shown in a later subsection that the normalized full anomaly $\lambda\tilde{N}(M^{D})$
converges at UV owing to its monotonicity, thus giving rise to a finite
counter term of order $O(\lambda)$ playing the role of a correct
cosmological constant. The idea that the trace anomaly might be related
to the cosmological constant is a recurring subject in the
literature \citep{Bilic:2007gr,Tomboulis:1988gw,Antoniadis:1991fa,Antoniadis:1992hz,Salehi:2000eu};
in the present framework the cosmological constant naturally emerges in
this way, as the counter term of the trace anomaly (see subsection D
or \citep{Luo:2021zpi}).
\subsection{Relative Shannon Entropy and an H-Theorem for Non-Equilibrium Frame Fields}
In the Ricci flow limit, i.e. the Gradient Shrinking Ricci Soliton
(GSRS) configuration, the Shannon entropy $N$ takes its maximum
value $N_{*}$; this is similar to a thermodynamic system
in a thermal equilibrium state, where its entropy is also maximal.
In the mathematical literature on Ricci flow, one often defines a series
of relative formulae with respect to the extreme values taken at the flow
limit GSRS, or the analogous thermal equilibrium state, denoted by a subscript
{*}.
In a GSRS, the covariance matrix $\sigma^{\mu\nu}$, the 2nd central moment
of the frame fields with an IR cutoff $k$, is simply proportional to
the metric
\begin{equation}
\frac{1}{2}\sigma_{*}^{\mu\nu}=\frac{1}{2}\langle\delta X^{\mu}\delta X^{\nu}\rangle=\frac{1}{2\lambda}g^{\mu\nu}\int_{0}^{|p|=k}\frac{d^{4}p}{(2\pi)^{4}}\frac{1}{p^{2}}=\frac{k^{2}}{64\pi^{2}\lambda}g^{\mu\nu}=\tau g^{\mu\nu},\label{eq:sigma=00003Dtau*g}
\end{equation}
and then
\begin{equation}
\sigma_{*\mu\nu}=\left(\sigma_{*}^{\mu\nu}\right)^{-1}=\frac{1}{2\tau}g_{\mu\nu},
\end{equation}
which means that a uniform Gaussian broadening is achieved. In this
gauge, only the longitudinal part of the fluctuation exists.
When the density normalized Ricci curvature is completely given by
the longitudinal fluctuation $\sigma_{\mu\nu}$, i.e. the inequality
(\ref{eq:density ric bound}) saturates, we obtain the Gradient Shrinking
Ricci Soliton (GSRS) equation
\begin{equation}
R_{\mu\nu}+\nabla_{\mu}\nabla_{\nu}f=\frac{1}{2\tau}g_{\mu\nu}.\label{eq:shrinker}
\end{equation}
This means, on the one hand, that for $f(X)=\frac{1}{2}\left|\sigma_{\mu\nu}X^{\mu}X^{\nu}\right|$
with $\sigma_{\mu\nu}=\frac{1}{2\tau}g_{\mu\nu}$, the Ricci curvature $R_{\mu\nu}$
effectively vanishes, so the standard Ricci flow equation
(\ref{eq:ricci flow}) terminates; on the other hand, the Ricci-DeTurck
flow (\ref{eq:ricci-deturk}) only changes the longitudinal size or
volume of the manifold while keeping its shape unchanged, so the GSRS
can be regarded as having stopped changing, up to a rescaling of size or volume.
Thus the GSRS is a flow limit and can be viewed as a generalized RG
fixed point.
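A minimal explicit example of (\ref{eq:shrinker}), the flat ``Gaussian
shrinker'' of the Ricci flow literature quoted here only as an illustration,
is flat space with $R_{\mu\nu}=0$ and
\begin{equation}
f(X)=\frac{|X|^{2}}{4\tau},\qquad\nabla_{\mu}\nabla_{\nu}f=\frac{1}{2\tau}g_{\mu\nu},
\end{equation}
which saturates (\ref{eq:density ric bound}) and reproduces exactly the
Gaussian density $u_{*}$ of (\ref{eq:u*}).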
In the following we consider relative quantities with respect to the GSRS
configuration. Consider a general Gaussian density matrix
\begin{equation}
u(X)=\frac{1}{\lambda(2\pi)^{D/2}}\frac{\sqrt{\left|\det\sigma_{\mu\nu}\right|}}{\sqrt{|\det g_{\mu\nu}|}}\exp\left(-\frac{1}{2}\left|X^{\mu}\sigma_{\mu\nu}X^{\nu}\right|\right),\label{eq:general u}
\end{equation}
in the GSRS limit it becomes
\begin{equation}
u_{*}(X)=\frac{1}{\lambda(4\pi\tau)^{D/2}}\exp\left(-\frac{1}{4\tau}\left|X\right|^{2}\right).\label{eq:u*}
\end{equation}
Therefore a relative density can be defined as the general
Gaussian density $u(X)$ relative to the GSRS density $u_{*}(X)$,
\begin{equation}
\tilde{u}(X)=\frac{u}{u_{*}}.
\end{equation}
By using the relative density, a relative Shannon entropy $\tilde{N}$
can be defined by
\begin{equation}
\tilde{N}(M^{D})=-\int d^{D}X\tilde{u}\log\tilde{u}=-\int d^{D}Xu\log u+\int d^{D}Xu_{*}\log u_{*}=N-N_{*}=-\log Z_{P}\le0,\label{eq:perelman-partition}
\end{equation}
where $Z_{P}$ is nothing but Perelman's partition function
\begin{equation}
\log Z_{P}=\int_{M^{D}}d^{D}Xu\left(\frac{D}{2}-f\right)\ge0,
\end{equation}
and $N_{*}$ is the maximum Shannon entropy
\begin{equation}
N_{*}=-\int d^{D}Xu_{*}\log u_{*}=\int d^{D}Xu_{*}\frac{D}{2}\left[1+\log(\sqrt{\lambda}4\pi\tau)\right]=\frac{D}{2\lambda}\left[1+\log(\sqrt{\lambda}4\pi\tau)\right].
\end{equation}
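This expression can be checked by a direct Gaussian integral (no new input
beyond the normalization $\lambda\int d^{D}Xu_{*}=1$): using $\log u_{*}=-\log\lambda-\frac{D}{2}\log(4\pi\tau)-\frac{|X|^{2}}{4\tau}$
and $\lambda\int d^{D}Xu_{*}\frac{|X|^{2}}{4\tau}=\frac{D}{2}$, one finds
\begin{equation}
N_{*}=\frac{1}{\lambda}\left[\log\lambda+\frac{D}{2}\log(4\pi\tau)+\frac{D}{2}\right]=\frac{D}{2\lambda}\left[1+\log\left(\lambda^{2/D}4\pi\tau\right)\right],
\end{equation}
which coincides with the quoted formula for $D=4$, where $\lambda^{2/D}=\sqrt{\lambda}$.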
Since the relative Shannon entropy and the anomaly term are purely real,
the change of the partition function under diffeomorphism is non-unitary.
Owing to the coarse-graining nature of the density $u$, it is proved that
the relative Shannon entropy is monotonically non-decreasing along the
Ricci flow (along $t$),
\begin{equation}
\frac{d\tilde{N}(\hat{M}^{D})}{dt}=-\tilde{\mathcal{F}}\ge0,\label{eq:analog H-theorem}
\end{equation}
where $\tilde{\mathcal{F}}=\mathcal{F}-\mathcal{F}_{*}\le0$ is the
GSRS-normalized F-functional of Perelman
\begin{equation}
\mathcal{F}=\frac{dN}{d\tau}=\int_{M^{D}}d^{D}Xu\left(R+\left|\nabla f\right|^{2}\right)
\end{equation}
with the maximum value (at GSRS limit)
\begin{equation}
\mathcal{F}_{*}\equiv\mathcal{F}(u_{*})=\frac{dN_{*}}{d\tau}=\frac{D}{2\lambda\tau}.\label{eq:F*}
\end{equation}
The inequality (\ref{eq:analog H-theorem}) gives an analogous H-theorem
for the non-equilibrium frame fields and the irreversible Ricci flow.
The entropy is non-decreasing along the Ricci flow, making the flow
irreversible in many respects, similar to the processes of irreversible
thermodynamics: as the observation scale of the spacetime
flows from short to long distances, the process loses information
and the Shannon entropy increases. The equality in (\ref{eq:analog H-theorem})
is attained when the spacetime configuration has flowed to the limit
known as a Gradient Shrinking Ricci Soliton (GSRS), at which the Shannon
entropy takes its maximum value. Similarly, at the flow limit the
density matrix $u_{*}$, eq.(\ref{eq:u*}), takes the form of the analogous standard
Maxwell-Boltzmann distribution.
\subsection{Effective Gravity at Cosmic Scale and the Cosmological Constant}
In terms of the relative Shannon entropy, the total partition function
(\ref{eq:frame-partition}) of the frame fields is normalized by the
GSRS extreme value
\begin{equation}
Z(M^{D})=\frac{e^{\lambda N-\frac{D}{2}}}{e^{\lambda N_{*}}}=e^{\lambda\tilde{N}-\frac{D}{2}}=Z_{P}^{-\lambda}e^{-\frac{D}{2}}=\exp\left[\lambda\int_{M^{D}}d^{D}Xu\left(f-D\right)\right].\label{eq:relative-partition}
\end{equation}
The relative Shannon entropy $\tilde{N}$, as the anomaly, vanishes
at the GSRS or IR scale; however, it is non-zero at the ordinary lab scale
up to the UV, where the fiducial volume of the lab is considered fixed,
$\lambda\int d^{4}x=1$. The cancellation of the anomaly at the lab
scale up to the UV is physically required, which leads to the counter
term $\nu(M_{\tau=\infty}^{D})$, i.e. the cosmological constant. The monotonicity
of $\tilde{N}$, eq.(\ref{eq:analog H-theorem}), and the W-functional
imply \citep{perelman2002entropy,2004The}
\begin{equation}
\nu(M_{\tau=\infty}^{D})=\lim_{\tau\rightarrow\infty}\lambda\tilde{N}(M^{D},u,\tau)=\lim_{\tau\rightarrow\infty}\lambda\mathcal{W}(M^{D},u,\tau)=\inf_{\tau}\lambda\mathcal{W}(M^{D},u,\tau)<0,
\end{equation}
where $\mathcal{W}$, Perelman's W-functional, is the Legendre
transformation of $\tilde{N}$ with respect to $\tau^{-1}$,
\begin{equation}
\mathcal{W}\equiv\tau\frac{\partial\tilde{N}}{\partial\tau}+\tilde{N}=\tau\tilde{\mathcal{F}}+\tilde{N}=\frac{d}{d\tau}\left(\tau\tilde{N}\right).\label{eq:W=00003DtauF+N}
\end{equation}
In other words, the difference between the effective actions (relative
Shannon entropies) at UV and IR is finite
\begin{equation}
\nu=\lambda(\tilde{N}_{UV}-\tilde{N}_{IR})<0.
\end{equation}
Perelman used the analogies: temperature $T\sim\tau$, (relative)
internal energy $U\sim-\tau^{2}\tilde{\mathcal{F}}$, thermodynamic
entropy $S\sim-\mathcal{W}$, and free energy $F\sim\tau\tilde{N}$,
up to proportionality factors balancing the dimensions on both sides of $\sim$;
the equation (\ref{eq:W=00003DtauF+N}) is then in analogy with the thermodynamic
relation $U-TS=F$. In this sense the W-functional is also called
the W-entropy. Whether the thermodynamic analogies are real and physical,
or just pure coincidences, is an important issue discussed in the
next sections.
In fact $e^{\nu}<1$ (usually called the Gaussian density \citep{cao2004gaussian,cao2009recent})
is the relative volume or reduced volume $\tilde{V}(M_{\tau=\infty}^{D})$
of the backwards limit manifold introduced by Perelman, or the inverse
of the initial condition of the manifold density, $u_{\tau=0}^{-1}$.
A finite value of it lets an initial spacetime with unit volume flow from
the UV and converge to a finite $u_{\tau=0}$, so that the manifold
finally converges to a finite relative/reduced volume instead
of shrinking to a singular point at $\tau=0$.
As an example, consider a homogeneous and isotropic universe in which
the sizes of space and time (with a ``ball'' radius $a_{\tau}$)
are on an equal footing, i.e. a late-epoch FRW-like metric $ds^{2}=a_{\tau}^{2}(-dx_{0}^{2}+dx_{1}^{2}+dx_{2}^{2}+dx_{3}^{2})$,
which is a Lorentzian shrinking soliton configuration. Note that the
shrinking soliton equation $R_{\mu\nu}=\frac{1}{2\tau}g_{\mu\nu}$
it satisfies and its volume form (\ref{eq:volume form}) are independent
of the signature, so $\nu$ can be approximated by the 4-ball value
$\nu(B_{\infty}^{4})\approx-0.8$ \citep{Luo:2019iby,Luo:2021zpi}.
So the partition function, with the anomaly canceled at the UV and with
a fixed-volume fiducial lab, is
\begin{equation}
Z(M^{D})=e^{\lambda\tilde{N}-\frac{D}{2}-\nu}.
\end{equation}
Since $\lim_{\tau\rightarrow0}\tilde{N}(M^{D})=0$, at small $\tau$
the entropy $\tilde{N}(M^{D})$ can be expanded in powers of $\tau$
\begin{align}
\tilde{N}(M^{D}) & =\frac{\partial\tilde{N}}{\partial\tau}\tau+O(\tau^{2})=\tau\tilde{\mathcal{F}}+O(\tau^{2})\nonumber \\
& =\int_{M^{D}}d^{D}Xu_{\tau\rightarrow0}\left[\left(R_{\tau\rightarrow0}+\left|\nabla f_{\tau\rightarrow0}\right|^{2}-\frac{D}{2\tau}\right)\tau\right]+O(\tau^{2})\nonumber \\
& =\int_{M^{D}}d^{D}Xu_{0}R_{0}\tau+O(\tau^{2}),
\end{align}
in which $\lambda\int d^{D}Xu_{\tau\rightarrow0}\tau\left|\nabla f_{\tau\rightarrow0}\right|^{2}=\frac{D}{2}$
(at GSRS) has been used.
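This input can be checked explicitly for the Gaussian GSRS form (\ref{eq:u*})
at small $\tau$: with $f=\frac{|X|^{2}}{4\tau}$ one has $\tau\left|\nabla f\right|^{2}=\frac{|X|^{2}}{4\tau}$,
and the same Gaussian moment as in the verification of $N_{*}$ above gives
\begin{equation}
\lambda\int d^{D}Xu_{*}\,\tau\left|\nabla f\right|^{2}=\lambda\int d^{D}Xu_{*}\frac{|X|^{2}}{4\tau}=\frac{D}{2}.
\end{equation}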
For $D=4$ and small $\tau$, the effective action of $Z(M^{4})$
can be given by
\begin{equation}
-\log Z(M^{4})=S_{eff}\approx\int_{M^{4}}d^{4}Xu_{0}\left(2\lambda-\lambda R_{0}\tau+\lambda\nu\right)\quad(\textrm{small\,\ensuremath{\tau}}).
\end{equation}
Considering that $u_{0}d^{4}X=\sqrt{|g_{t}|}dV=\sqrt{|g_{t}|}dX^{0}dX^{1}dX^{2}dX^{3}$
is the invariant volume element, and using (\ref{eq:tau}) to replace
$t$ or $\tau$ by the cutoff scale $k$, we have
\begin{equation}
S_{eff}=\int_{M^{4}}dV\sqrt{|g_{k}|}\left(2\lambda-\frac{R_{0}}{64\pi^{2}}k^{2}+\lambda\nu\right)\quad(\textrm{small\,k}).\label{eq:eff-EH+cc}
\end{equation}
The effective action can be interpreted as a low energy effective
action of pure gravity. As the cutoff scale $k$ ranges from the lab
scale to the solar system scale ($k>0$), the action must recover
the well-tested Einstein-Hilbert (EH) action. At the cosmic scale
($k\rightarrow0$), however, we know that the EH action deviates from observations
and the cosmological constant becomes important. In this picture,
as $k\rightarrow0$ the remaining action $2\lambda+\lambda\nu$ should
play the role of the standard EH action with a limiting constant background
scalar curvature $R_{0}$ plus the cosmological constant, so
\begin{equation}
2\lambda+\lambda\nu=\frac{R_{0}-2\Lambda}{16\pi G}.
\end{equation}
At $k\rightarrow\infty$, where $\lambda\tilde{N}\rightarrow\nu$,
the action leaves only the fiducial Lagrangian $\frac{D}{2}\lambda=2\lambda$,
which should be interpreted as a constant EH action without the cosmological
constant,
\begin{equation}
2\lambda=\frac{R_{0}}{16\pi G}.
\end{equation}
Thus we have the cosmological term
\begin{equation}
\lambda\nu=\frac{-2\Lambda}{16\pi G}=-\rho_{\Lambda}.
\end{equation}
The action can be rewritten as an effective EH action plus a cosmological
term
\begin{equation}
S_{eff}=\int_{M^{4}}dV\sqrt{|g_{k}|}\left(\frac{R_{k}}{16\pi G}+\lambda\nu\right)\quad(\textrm{small\,k}),
\end{equation}
where
\begin{equation}
\frac{R_{k}}{16\pi G}=2\lambda-\frac{R_{0}}{64\pi^{2}}k^{2},\label{eq:eff-R}
\end{equation}
which is nothing but the flow equation of the scalar curvature \citep{topping2006lectures}
\begin{equation}
R_{k}=\frac{R_{0}}{1+\frac{1}{4\pi}Gk^{2}},\quad\textrm{or}\quad R_{\tau}=\frac{R_{0}}{1+\frac{2}{D}R_{0}\tau}.
\end{equation}
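The consistency of the two forms can be checked in one line, using (\ref{eq:tau})
and the identification $2\lambda=\frac{R_{0}}{16\pi G}$ made above: for $D=4$,
\begin{equation}
\frac{2}{D}R_{0}\tau=\frac{R_{0}k^{2}}{128\pi^{2}\lambda}=\frac{1}{4\pi}Gk^{2},
\end{equation}
and expanding $R_{k}=R_{0}\left(1+\frac{1}{4\pi}Gk^{2}\right)^{-1}\approx R_{0}\left(1-\frac{1}{4\pi}Gk^{2}\right)$
at small $k$ reproduces (\ref{eq:eff-R}).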
At the cosmic scale $k\rightarrow0$ the effective scalar curvature
is bounded by $R_{0}$, which can be measured through the ``Hubble constant''
$H_{0}$ at the cosmic scale,
\begin{equation}
R_{0}=D(D-1)H_{0}^{2}=12H_{0}^{2},
\end{equation}
so $\lambda$ is nothing but the critical density of the 4-spacetime
Universe
\begin{equation}
\lambda=\frac{3H_{0}^{2}}{8\pi G}=\rho_{c},\label{eq:critical density}
\end{equation}
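The identification is a one-line substitution: combining $2\lambda=\frac{R_{0}}{16\pi G}$
from above with $R_{0}=D(D-1)H_{0}^{2}=12H_{0}^{2}$ gives
\begin{equation}
\lambda=\frac{R_{0}}{32\pi G}=\frac{12H_{0}^{2}}{32\pi G}=\frac{3H_{0}^{2}}{8\pi G},
\end{equation}
the standard critical density of a spatially flat FRW universe.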
so the cosmological constant is always of the order of the critical density,
with a ``dark energy'' fraction
\begin{equation}
\Omega_{\Lambda}=\frac{\rho_{\Lambda}}{\rho_{c}}=-\nu\approx0.8,
\end{equation}
which is close to the observational value. Detailed discussions
of the cosmological constant problem and the observational effects
in cosmology, especially the modification of the distance-redshift
relation leading to the acceleration parameter $q_{0}\approx-0.68$,
can be found in \citep{Luo2015Dark,Luo:2015pca,Luo:2019iby,Luo:2021zpi}.
To incorporate matter into the gravity theory, consider the
entangled system in $\mathcal{H}_{\psi}\otimes\mathcal{H}_{X}$ between
the to-be-studied quantum system (matter) and the quantum reference
frame fields system (gravity). The $2\lambda$ term in eq.(\ref{eq:coupled gravity (1)})
is normalized by the Ricci flow; using eq.(\ref{eq:eff-EH+cc})
and eq.(\ref{eq:eff-R}), a matter-coupled gravity emerges from
the Ricci flow
\begin{align}
S[\psi,X]\overset{(2)}{\approx} & \int dV\sqrt{|g_{k}|}\left[\frac{1}{2}g^{\mu\nu}\frac{\delta\psi}{\delta X^{\mu}}\frac{\delta\psi}{\delta X^{\nu}}-V(\psi)+2\lambda-\frac{R_{0}}{64\pi^{2}}k^{2}+\lambda\nu\right]\nonumber \\
= & \int dV\sqrt{|g_{k}|}\left[\frac{1}{2}g^{\mu\nu}\frac{\delta\psi}{\delta X^{\mu}}\frac{\delta\psi}{\delta X^{\nu}}-V(\psi)+\frac{R_{k}}{16\pi G}+\lambda\nu\right]
\end{align}
\section{Thermal Equilibrium State}
A Gradient Shrinking Ricci Soliton (GSRS) configuration, as a Ricci
flow limit, extremizes the Shannon entropy $N$ and the W-functional.
Similarly, a thermal equilibrium state extremizes Boltzmann's H-functional
and the thermodynamic entropy. Thus the process of a
generic Ricci flow flowing into a GSRS limit is analogous to a
non-equilibrium state evolving into a thermal equilibrium state; they
are not merely similar but even equivalent when the thermal system
is nothing but the frame fields system. In this section, following
the previous discussions of the non-equilibrium state of the frame
fields in 4 dimensions, and for a proper choice of time, we discuss
the thermal equilibrium state of the frame particle system as a GSRS
configuration in the lower 3 dimensions, for which the temperature and several
thermodynamic functions of the system can be explicitly calculated
and the manifold density can be interpreted as the thermal ensemble
density of the frame fields particles, giving a statistical interpretation
to Perelman's thermodynamic analogies of the Ricci flow.
\subsection{A Temporal Static Shrinking Ricci Soliton as a Thermal Equilibrium
State}
When the shrinking Ricci soliton $M^{4}$ is static in the temporal
direction, i.e. a product manifold $M^{4}=M^{3}\times\mathbb{R}$
with $\delta\mathbf{X}/\delta X_{0}=0$, where $X_{0}\in\mathbb{R}$
is the physical time and $\mathbf{X}=(X_{1},X_{2},X_{3})\in M^{3}$ is
a lower-dimensional 3-space gradient shrinking Ricci soliton, we
can show that the temporally static spatial part $M^{3}$ is in
thermal equilibrium with the flow parameter $\tau$ proportional to
its temperature, and that the manifold density $u$ of $M^{3}$ can be
interpreted as the thermal equilibrium ensemble density.
According to Matsubara's formalism of thermal field theory, the thermal
equilibrium of the spatial frame fields can be defined by the periodicity
$\mathbf{X}(\mathbf{x},0)=\mathbf{X}(\mathbf{x},\beta)$ in the
Euclidean time of the lab (recall that we start from a Euclidean
base space for the frame fields theory), where $\beta=1/T$ is the
inverse temperature. The frame fields are now a mapping $\mathbb{R}^{3}\times S^{1}\rightarrow M^{3}\times\mathbb{R}$.
In such a configuration, the $\tau$ parameter of the 3-space shrinking
soliton $M^{3}$ becomes
\begin{equation}
\tau=\frac{1}{2\lambda}\int\frac{d^{3}\mathbf{p}d\omega_{n}}{(2\pi)^{4}}\frac{1}{\mathbf{p}^{2}+\omega_{n}^{2}}=\frac{1}{2\lambda}T\sum_{n}\int\frac{d^{3}\mathbf{p}}{(2\pi)^{3}}\frac{1}{\mathbf{p}^{2}+(2\pi nT)^{2}},
\end{equation}
where $\omega_{n}=2\pi nT$ and $\int\frac{d\omega_{n}}{2\pi}=T\sum_{n}$
have been used. The calculation is a periodic-Euclidean-time version
of the general eq.(\ref{eq:sigma=00003Dtau*g}). Since the density
matrix eq.(\ref{eq:u*}) of the frame fields $X_{\mu}$ is Gaussian,
i.e. a coherent state whose oscillators are almost condensed in
the central peak, the $\omega_{0}=0$ mode dominates the Matsubara sum,
\begin{equation}
\tau=\frac{1}{2\lambda}T\int\frac{d^{3}\mathbf{p}}{(2\pi)^{3}}\frac{1}{\mathbf{p}^{2}}.
\end{equation}
At the classical level, the naive notion of ``temporally static'' is defined
with respect to the physical clock $X_{0}$ of the quantum
reference frame, i.e. $\langle\frac{\delta\mathbf{X}}{\delta X_{0}}\rangle=0$.
At the quantum level, however, this notion is a little subtle,
because there is no ``absolute static'' at the quantum
or microscopic level: at such microscopic scales the modes are
always in motion or vibrating with respect to the infinitely precise lab time
$x_{0}$, i.e. $\frac{\partial\mathbf{X}(x)}{\partial x_{0}}\neq0$.
Indeed, $\partial\mathbf{X}/\partial x_{0}$ is in general non-zero
even though its oscillation degrees of freedom are almost frozen (the Matsubara
frequency $\omega_{n}$ is zero for the Gaussian wavefunction), because
the center of the Gaussian wave packet of $\mathbf{X}$ is in translational
motion, $\mathbf{p}\neq0$, so its expectation value is in general
finite; for instance, $\langle\frac{\partial\mathbf{X}(x)}{\partial x_{0}}\rangle\sim\frac{3}{2}T<\infty$
by the equipartition energy of the translational motion in
3-space. In general, whether or not the modes of the spatial frame
fields are temporally static depends on the scale at which the average
of the physical clock $\langle X_{0}\rangle$ is evaluated. The notion of ``thermally
static'' in the sense of statistical physics is approximate at a
macroscopic rather than a microscopic scale, since at the microscopic scale
the molecules are always in motion (and so is the physical clock $X_{0}$).
The macroscopic scale of a thermally static system corresponds to such a long
physical time scale $\delta\langle X_{0}\rangle\gg\delta x_{0}$ that
the averaged physical clock is almost frozen, $\frac{\partial x_{0}}{\partial\langle X_{0}\rangle}\rightarrow0$,
with respect to the infinitely precise lab time $x_{0}$, so that the thermally
static condition $\langle\frac{\delta\mathbf{X}}{\delta X_{0}}\rangle=\langle\frac{\partial\mathbf{X}}{\partial x_{0}}\rangle\cdot\frac{\partial x_{0}}{\partial\langle X_{0}\rangle}\rightarrow0$
can be achieved.
More precisely, when we say that the 3-space is macroscopically ``temporally
static'', an IR cutoff, for example $H_{0}$ as a macroscopic Hubble
scale, should be taken into account. The fluctuation modes of the 3-space
outside the Hubble scale, $0<|\mathbf{p}|<H_{0}$, are frozen and temporally
static, while the modes $|\mathbf{p}|>H_{0}$ inside the Hubble
horizon are dynamical. With this cutoff scale we then have
\begin{equation}
\tau=\frac{1}{2\lambda}T\int_{0}^{|\mathbf{p}|=H_{0}}\frac{d^{3}\mathbf{p}}{(2\pi)^{3}}\frac{1}{\mathbf{p}^{2}}=\frac{C_{3}}{2\lambda}TH_{0}=\frac{1}{\lambda_{3}}T=\frac{1}{\lambda_{3}\beta},\label{eq:tau-T}
\end{equation}
where the 3-space energy density is $\lambda_{3}=\frac{\lambda}{\frac{1}{2}C_{3}H_{0}}=\frac{12\pi^{2}\lambda}{H_{0}}$.
Note that if the temporal integral is also cut off at a long physical
time scale, e.g. the age of the universe $O(1/H_{0})$, and the temporal
direction is normalized as $\frac{1}{12\pi^{2}}\int_{0}^{12\pi^{2}/H_{0}}dx_{0}H_{0}=1$,
then the condition $\int d^{4}x\lambda\equiv1$ gives its 3-space
version
\begin{equation}
\int d^{3}x\lambda_{3}=1,\label{eq:3-density normalize}
\end{equation}
which defines $\lambda_{3}$ on a 3-space slice, generalizing
the critical density $\lambda$ of the 4-spacetime covariant theory.
It is worth stressing that, since the spatial slice depends on the
definition of time, the value of $\lambda_{3}$ is not universal
(it is not necessarily equal to the above $\frac{12\pi^{2}\lambda}{H_{0}}$
for another frame or cutoff, unlike the universal 4-spacetime critical
density $\lambda$) but frame dependent. If a specific gauge of time
or frame is chosen, $\lambda_{3}$ can be considered fixed and
used as the proportionality constant relating the $\tau$ parameter to the
temperature of the temporally static frame fields configuration in that
gauge of time. The 3-space energy density $\lambda_{3}$
is very useful when we consider a temporally static GSRS spacetime, or the
corresponding thermal equilibrium frame fields ensemble, in later discussions.
In summary, an important observation is that when $M^{3}$ is a shrinking
Ricci soliton in a temporally static product shrinking soliton $M^{3}\times\mathbb{R}$,
the global $\tau$ parameter of $M^{3}$ can be interpreted as a thermal
equilibrium temperature defined by the Euclidean-time periodicity of
the frame fields, up to a proportionality constant, the 3-space energy density
$\lambda_{3}$ (satisfying eq.(\ref{eq:3-density normalize})), which balances
the dimensions between $\tau$ and $T$. Since the temperature $T$ is
frame dependent, so is the proportionality constant $\lambda_{3}$. This observation
gives us a reason why in Perelman's paper $\tau$ can be considered analogous
to the temperature $T$. The same results can also be obtained if
one uses the Lorentzian signature for the lab or base spacetime of
the frame fields theory (\ref{eq:NLSM}). In this case the thermal
equilibrium of the spatial frame fields is instead subject to periodicity
in the imaginary Minkowskian time, $\mathbf{X}(\mathbf{x},0)=\mathbf{X}(\mathbf{x},i\beta)$,
but even though the base spacetime is Wick rotated, the path integral
does not pick up any imaginary $i$ factor in front of the action in
(\ref{eq:partition of NLSM}) as the starting point, so the main results
of the discussion remain independent of the signature of the base
spacetime.
\subsection{Thermodynamic Functions}
For the thermodynamic interpretation of the quantum reference frame
and gravity theory, in this subsection we derive the other thermodynamic
functions of the system besides the temperature obtained in the previous subsection;
they are similar to those of an ideal gas. The frame fields system in
the Gaussian approximation can thus be seen as a gas of frame fields,
which manifests an underlying statistical picture of Perelman's
thermodynamic analogies for his functionals. As a convention, we
take the temperature $T=\lambda_{3}\tau$, eq.(\ref{eq:tau-T}), with $D=3$
and $\lambda$ replaced by $\lambda_{3}$; this is equivalent to choosing
a specific gauge of time for the thermal-equilibrium frame-field
configuration.
When the spatial shrinking soliton $M^{3}$ is temporally static,
$dX_{0}=0$, and in thermal equilibrium, the partition function of
the thermal ensemble of the frame fields $\mathbf{X}$ is given
by the trace/integration of the density matrix,
\begin{equation}
Z_{*}(\tau)=\lambda_{3}\int d^{3}\mathbf{X}u(\mathbf{X})=\lambda_{3}\int d^{3}\mathbf{X}e^{-\frac{\mathbf{X}^{2}}{4\tau}}=\lambda_{3}(4\pi\tau)^{3/2},\label{eq:Z*}
\end{equation}
and the normalized $u$ density is given by the 3-dimensional version
of eq.(\ref{eq:u*}),
\begin{equation}
u_{*}(\mathbf{X})=\frac{1}{Z_{*}}u(\mathbf{X})=\frac{1}{\lambda_{3}(4\pi\tau)^{3/2}}e^{-\frac{\mathbf{X}^{2}}{4\tau}}.
\end{equation}
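For completeness, a short worked step (added here; it is not part of the original derivation) makes the Gaussian integral in eq.(\ref{eq:Z*}) explicit:
\begin{equation*}
\int d^{3}\mathbf{X}\,e^{-\frac{\mathbf{X}^{2}}{4\tau}}=\left(\int_{-\infty}^{\infty}dX\,e^{-\frac{X^{2}}{4\tau}}\right)^{3}=\left(4\pi\tau\right)^{3/2},
\end{equation*}
so that $\lambda_{3}\int d^{3}\mathbf{X}\,u_{*}(\mathbf{X})=1$, i.e. $u_{*}$ is correctly normalized as a probability density.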
The partition function can also be obtained consistently from (\ref{eq:frame-partition})
with $D=3$ in thermal equilibrium; hence the partition function
of the frame fields in the shrinking-soliton configuration is
\begin{equation}
Z_{*}(\tau)=e^{\lambda_{3}N_{*}(M^{3})-\frac{3}{2}}=\exp\left[-\lambda_{3}\int_{M^{3}}d^{3}Xu_{*}\log u_{*}-\frac{3}{2}\right]=\lambda_{3}(4\pi\tau)^{3/2}=V_{3}\left(\frac{4\pi\lambda_{3}^{1/3}}{\beta}\right)^{3/2}=Z_{*}(\beta),\label{eq:thermo partition function}
\end{equation}
where $V_{3}=\int d^{3}x$ is the 3-volume with the constraint $\lambda_{3}V_{3}=1$.
This partition function is identified with that of
the canonical ensemble of an ideal gas (i.e. a non-interacting frame-field
gas in the lab) of temperature $1/\beta$ and gas-particle mass $\lambda_{3}^{1/3}$.
The interactions are effectively absorbed into the broadening of the
density matrix and the normalized mass of the frame-field gas particles.
The physical picture of a frame-field gas in thermal equilibrium lays
a statistical and physical foundation for Perelman's analogies between
his functionals and the thermodynamic functions, as follows.
The internal energy of the frame-field gas is analogous to
the standard internal energy $\frac{3}{2}T$ of an ideal gas, given by
the equipartition energy of translational motion in 3-space. Taking
$\beta$ as the (periodic Euclidean) time of the flat lab, the internal energy
seen by an observer in the lab is
\begin{equation}
E_{*}=-\frac{\partial\log Z_{*}}{\partial\beta}=\lambda_{3}^{2}\tau^{2}\frac{\partial N_{*}}{\partial\tau}=\lambda_{3}^{2}\tau^{2}\mathcal{F}_{*}=\frac{3}{2}\lambda_{3}\tau=\frac{3}{2}T,\label{eq:internal energy}
\end{equation}
in which (\ref{eq:F*}) with $D=3$ and $\lambda\rightarrow\lambda_{3}$
has been used.
The fluctuation of the internal energy is given by
\begin{equation}
\left\langle E_{*}^{2}\right\rangle -\left\langle E_{*}\right\rangle ^{2}=\frac{\partial^{2}\log Z_{*}}{\partial\beta^{2}}=\frac{3}{2}\lambda_{3}^{2}\tau^{2}=\frac{3}{2}T^{2}.
\end{equation}
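Both results follow immediately from eq.(\ref{eq:thermo partition function}): in a short worked step (added here for clarity), substituting $\tau=1/(\lambda_{3}\beta)$ gives $\log Z_{*}=\mathrm{const}-\frac{3}{2}\log\beta$, so that
\begin{equation*}
E_{*}=-\frac{\partial\log Z_{*}}{\partial\beta}=\frac{3}{2\beta}=\frac{3}{2}T,\qquad\left\langle E_{*}^{2}\right\rangle -\left\langle E_{*}\right\rangle ^{2}=\frac{\partial^{2}\log Z_{*}}{\partial\beta^{2}}=\frac{3}{2\beta^{2}}=\frac{3}{2}T^{2}.
\end{equation*}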
The Fourier transformation of the density $u_{*}(\mathbf{X})$ is
given by
\begin{equation}
u_{*}(\mathbf{K})=\int d^{3}Xu_{*}(\mathbf{X})e^{-i\mathbf{K}\cdot\mathbf{X}}=e^{-\tau\mathbf{K}^{2}},
\end{equation}
Since $u$ satisfies the conjugate heat equation (\ref{eq:u-equation}),
$\mathbf{K}^{2}$ is an eigenvalue of the operator $-4\Delta_{X}+R$
on the 3-space, taking the value of the F-functional,
\begin{equation}
\mathbf{K}^{2}=\lambda_{3}\int d^{3}X\left(R|\Psi|^{2}+4|\nabla\Psi|^{2}\right)=\lambda_{3}\mathcal{F},
\end{equation}
so
\begin{equation}
u_{*}(\mathbf{K}^{2})=e^{-\lambda_{3}\tau\mathcal{F}}.
\end{equation}
For a state with energy $\lambda_{3}^{2}\tau^{2}\mathcal{F}=E$,
the probability density of the state can be rewritten as
\begin{equation}
u_{*}(E)=e^{-\frac{E}{\lambda_{3}\tau}}=e^{-\frac{E}{T}},\label{eq:ensemble density}
\end{equation}
which is the standard Boltzmann probability distribution of the
state. We thus see that the (Fourier-transformed) manifold density
can be interpreted as the thermal-equilibrium canonical ensemble density
of the frame fields.
The free energy is given by
\begin{equation}
F_{*}=-\frac{1}{\beta}\log Z_{*}=-\lambda_{3}\tau\log Z_{*}=-\frac{3}{2}\lambda_{3}\tau\log(4\pi\tau),\label{eq:free-energy}
\end{equation}
similar to the standard free energy of an ideal gas, $-\frac{3}{2}T\log T$,
up to a constant.
The negative of Boltzmann's H-functional in the equilibrium limit, i.e. the
thermal entropy of the frame-field gas, is given by the Shannon
entropy
\begin{equation}
\lambda_{3}N_{*}=S_{*}=-\lambda_{3}\int d^{3}Xu_{*}\log u_{*}=\frac{3}{2}\left[1+\log(4\pi\tau)\right],\label{eq:S*}
\end{equation}
similar to the thermal entropy of a fixed-volume ideal gas, $\frac{3}{2}\log T+\frac{3}{2}$,
up to a constant. The thermal entropy can also be obtained consistently
from the standard formula
\begin{equation}
S_{*}=\log Z_{*}-\beta\frac{\partial\log Z_{*}}{\partial\beta}=\frac{3}{2}\left[1+\log(4\pi\tau)\right].\label{eq:bulk entropy}
\end{equation}
This is in analogy with the fact that the W-functional is the Legendre
transform of the relative Shannon entropy w.r.t. $\tau^{-1}$.
For this reason, the W-functional is also an entropy function, related
to the (negative) thermodynamic entropy.
In summary, we have seen that, under a general frame-field (coordinate)
transformation, the Shannon entropy anomaly $N$ appearing in the partition
function (\ref{eq:Z->Zhat}) (or the relative Shannon entropy $\tilde{N}$
w.r.t. $N_{*}$) has a profound thermodynamic interpretation. The
Ricci flow of the frame fields leads to non-equilibrium and equilibrium
thermodynamics of the quantum spacetime; we summarize the comparisons
between them in Tables I and II.
\begin{table}[H]
\centering{
\begin{tabular}{|c|c|}
\hline
Frame fields at non-Ricci-flow-limit & Non-equilibrium thermodynamics\tabularnewline
\hline
\hline
Relative Shannon entropy: $\tilde{N}=-\int d^{3}\mathbf{X}\tilde{u}(\mathbf{X},t)\log\tilde{u}(\mathbf{X},t)$ & Boltzmannian H-function: $H(t)=\int d^{3}\mathbf{v}\rho(\mathbf{v},t)\log\rho(\mathbf{v},t)$\tabularnewline
\hline
Ricci flow parameter: $t$ & Newtonian time: $t$\tabularnewline
\hline
Monotonicity: $\frac{d\tilde{N}}{dt}=-\tilde{\mathcal{F}}\ge0$ & H theorem: $\frac{dH}{dt}\le0$\tabularnewline
\hline
conjugate heat equation: $\frac{\partial u}{\partial t}=\left(-\varDelta+R\right)u$ & Boltzmann equation of ideal gas: $\frac{\partial\rho}{\partial t}=-\mathbf{v}\cdot\boldsymbol{\nabla}\rho$\tabularnewline
\hline
\end{tabular}\caption{Frame fields under general Ricci flow away from the flow limit, and non-equilibrium
thermodynamics.}
\end{table}
\begin{table}[H]
\begin{centering}
\begin{tabular}{|c|c|}
\hline
Frame fields at the Ricci flow limit (GSRS) & Equilibrium thermodynamics of ideal gas\tabularnewline
\hline
\hline
partition function: $Z_{*}(\tau)=\lambda_{3}(4\pi\tau)^{3/2}$ & partition function: $Z(T)=V_{3}(2\pi mT)^{3/2}$\tabularnewline
\hline
GSRS flow parameter: $\lambda_{3}\tau$ & temperature: $T=\beta^{-1}$\tabularnewline
\hline
$\lambda_{3}^{2}\tau^{2}\mathcal{F}_{*}=\frac{3}{2}\lambda_{3}\tau$ & internal energy: $E_{*}=-\frac{\partial\log Z}{\partial\beta}=\frac{3}{2}T$\tabularnewline
\hline
manifold density: $u_{*}(\mathbf{K})=e^{-\tau\mathbf{K}^{2}}=e^{-\lambda_{3}\tau\mathcal{F}}$ & canonical ensemble density: $\rho=e^{-\frac{E}{T}}$\tabularnewline
\hline
$-\lambda_{3}\tau\log Z_{*}=-\frac{3}{2}\lambda_{3}\tau\log(4\pi\tau)$ & free energy: $F_{*}=-T\log Z(T)=-\frac{3}{2}T\log T$\tabularnewline
\hline
Shannon entropy: $\lambda_{3}N_{*}=\frac{3}{2}\left[1+\log(4\pi\tau)\right]$ & thermodynamic entropy: $S_{*}=\frac{3}{2}\left(1+\log T\right)$\tabularnewline
\hline
W functional: $\mathcal{W}=\tau\frac{d\tilde{N}}{d\tau}+\tilde{N}$ & first law of thermodynamics: $E_{*}-TS_{*}=F_{*}$\tabularnewline
\hline
Monotonicity: $\frac{d\tilde{N}}{dt}\ge0$ & second law of thermodynamics: $\delta S\ge0$\tabularnewline
\hline
\end{tabular}\caption{Frame fields in Gradient Shrinking Ricci Soliton (GSRS) configuration
and the equilibrium thermodynamics of ideal gas.}
\par\end{centering}
\end{table}
\section{Application to the Schwarzschild Black Hole}
In this section, we apply the general statistical and thermodynamic
interpretation of the quantum frame fields to a physical gravitational
system, as one of the touchstones of quantum gravity, i.e. to understand
the statistical origin of the thermodynamics of the Schwarzschild
black hole.
\subsection{The Temperature of a Schwarzschild Black Hole}
The region in the vicinity of the origin of a Schwarzschild black
hole is an example of a classical static shrinking Ricci soliton. A
distant rest observer sees an approximate metric $M^{3}\times\mathbb{R}$,
in which the region near the origin of the spatial part
$M^{3}$ is a shrinking Ricci soliton. The reason is as follows: the
black hole satisfies Einstein's equation
\begin{equation}
R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi G\mathcal{T}_{\mu\nu},
\end{equation}
where the stress tensor is that of point-like matter at rest with
mass $m$ at the origin $x=0$ (as seen by the distant rest observer)
\begin{equation}
\mathcal{T}_{00}=m\delta^{(3)}(\mathbf{x}),\quad\mathcal{T}_{ij}=0\quad(i,j=1,2,3),
\end{equation}
where the Latin indices $i,j$ denote spatial indices in the following. So
we have
\begin{equation}
R(\mathbf{x})=-8\pi G\mathcal{T}_{\mu}^{\mu}=8\pi Gm\delta^{(3)}(\mathbf{x}).\label{eq:scalar curvature of bh}
\end{equation}
From Einstein's equation, the Ricci curvature of $M^{3}$
is proportional to the metric of $M^{3}$
\begin{equation}
R_{ij}(\mathbf{x})=8\pi G\mathcal{T}_{ij}+\frac{1}{2}g_{ij}R=\frac{1}{2}8\pi Gm\delta^{(3)}(\mathbf{x})g_{ij}\quad(i,j=1,2,3).
\end{equation}
The equation is nothing but a normalized shrinking Ricci soliton equation
(\ref{eq:shrinker}) for $M^{3}$
\begin{equation}
R_{ij}(\mathbf{x})=\frac{1}{2\tau}g_{ij}(\mathbf{x})\quad(\mathbf{x}\approx0)\label{eq:3-shrinking-soliton}
\end{equation}
with
\begin{equation}
\delta^{(3)}(\mathbf{x})\tau=\frac{1}{8\pi Gm},
\end{equation}
where $\delta^{(3)}(x)$ plays the role of the 3-space energy density
$\lambda_{3}$ in the vicinity of the origin, satisfying $\int d^{3}x\delta^{(3)}(x)=1$
as in eq.(\ref{eq:3-density normalize}). Using the relation between
$\tau$ and the temperature $T$, eq.(\ref{eq:tau-T}), we can read off
directly that the temperature seen by the lab's infinitely distant
rest observer is
\begin{equation}
T=\delta^{(3)}(\mathbf{x})\tau=\frac{1}{8\pi Gm},\label{eq:BH temperature}
\end{equation}
which is the standard Hawking temperature of the Schwarzschild black
hole seen by a distant rest observer.
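For comparison with the textbook derivation (a consistency check added here, in units $\hbar=c=k_{B}=1$), the same value follows from the horizon radius and surface gravity of the Schwarzschild solution:
\begin{equation*}
r_{H}=2Gm,\qquad\kappa=\frac{1}{2r_{H}}=\frac{1}{4Gm},\qquad T_{H}=\frac{\kappa}{2\pi}=\frac{1}{8\pi Gm},
\end{equation*}
in agreement with eq.(\ref{eq:BH temperature}).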
Is the vacuum region outside the origin of the black hole also a shrinking
Ricci soliton? One may naively think the answer is no, since
at the classical level $R_{ij}=0$ outside the origin (not a shrinking soliton,
eq.(\ref{eq:3-shrinking-soliton})), the exterior being just
vacuum. But as discussed in the next subsection, we argue that
this is not true at the quantum level: if the vacuum and the region near
the origin are in thermal equilibrium, they must be a shrinking
Ricci soliton as a whole, i.e. $\langle R_{ij}\rangle=\frac{1}{2\tau}g_{ij}\neq0$,
eq.(\ref{eq:<Rij>=00003D(1/2tau)*gij}), in the ``vacuum''. The above
result can thus be extended to the ``vacuum'' region outside the origin;
the price to pay is that the ``vacuum'' is full of internal energy
corresponding to the Hawking temperature. If the whole spacetime has
not yet reached thermal equilibrium, the configuration must irreversibly
flow to a common thermal-equilibrium fixed point (a global
shrinking Ricci soliton), leading to a globally maximized entropy, as
the H-theorem asserts.
\subsection{The Energy of a Schwarzschild Black Hole}
In classical general relativity, the mass $m$ is often referred to
as the ADM energy of the black hole
\begin{equation}
m=\int d^{3}\mathbf{x}\mathcal{T}_{00}=\int d^{3}\mathbf{x}m\delta^{(3)}(\mathbf{x}),
\end{equation}
seen by the distant rest observer (w.r.t. the lab time $x_{0}$).
Here, at the quantum level, the coordinates or frame fields, and hence the spacetime,
fluctuate quantum mechanically, which gives rise to an internal energy related
to the periodicity of the (Euclidean) lab time $x_{0}$ (i.e. $\beta=\frac{1}{T}$).
Mathematically speaking, the trace anomaly of the stress
tensor modifies the total ADM mass at the quantum level, see (\ref{eq:trace anomaly}).
Since the anomaly of the frame-field action $\lambda_{3}N_{*}$,
representing the spacetime part, is always real, the internal energy
of the frame fields is given by (\ref{eq:internal energy})
\begin{equation}
E_{*}=-\frac{\partial\log Z_{*}}{\partial\beta}=\frac{3}{2}T=\frac{3}{16\pi Gm},\label{eq:internal energy of BH}
\end{equation}
in which we have assumed that the 3-space volume $V_{3}$ outside the
origin is in thermal equilibrium with the Hawking temperature at
the origin, eq.(\ref{eq:BH temperature}), i.e. the same equilibrium
temperature $T$ is shared throughout the 3-volume $V_{3}$.
We can see that the internal energy $E_{*}$ is an extra contribution
to the total energy of the (black hole + ``vacuum'') system seen
by the distant rest observer. Essentially, this term can be seen as
a quantum correction, or as part of the trace-anomaly contribution to
the stress tensor; thus the total energy of the black hole, including
the classical ADM energy and the quantum fluctuating internal energy
of the metric, is
\begin{equation}
m_{BH}=\int d^{3}\mathbf{x}\left\langle \mathcal{T}_{00}\right\rangle =m+E_{*}=m+\frac{3}{16\pi G}\frac{1}{m},\label{eq:BH total mass}
\end{equation}
where the classical stress tensor $\mathcal{T}_{00}$ is formally
replaced by its quantum expectation value
\begin{equation}
\langle\mathcal{T}_{00}\rangle=m\delta^{(3)}(\mathbf{x})+\frac{3}{2}\frac{T}{V_{3}}.\label{eq:<Tij>}
\end{equation}
A quantum Equivalence Principle should assert that the total energy,
rather than only the classical ADM mass, contributes to gravitation.
For a macroscopic classical black hole, $m\gg\sqrt{\frac{1}{G}}$,
the first, ADM-energy term dominates eq.(\ref{eq:BH total mass}),
\begin{equation}
m_{BH}\approx m.
\end{equation}
The second, internal-energy term becomes non-negligible for a
microscopic quantum black hole. An important consequence of the
second term in (\ref{eq:BH total mass}) is that, for a microscopic
quantum black hole, the total energy is bounded from below; the
minimal energy is of the order of the Planck mass,
\begin{equation}
m_{BH}\ge\sqrt{\frac{3}{4\pi G}}\sim O(m_{p}),
\end{equation}
which seems to prevent the black hole from evaporating into nothing.
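The bound follows from a one-line minimization of eq.(\ref{eq:BH total mass}) over $m$ (a worked step added here for completeness):
\begin{equation*}
\frac{dm_{BH}}{dm}=1-\frac{3}{16\pi G}\frac{1}{m^{2}}=0\;\Longrightarrow\;m=\sqrt{\frac{3}{16\pi G}},\qquad m_{BH}\geq2\sqrt{\frac{3}{16\pi G}}=\sqrt{\frac{3}{4\pi G}}.
\end{equation*}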
Furthermore, the internal-energy term $\frac{3}{2}T$ contributing
to the total energy $m_{BH}$ and to gravitation also demands that not
only the vicinity of the origin of the black hole be a shrinking soliton
(as the previous subsection claims), but that at the quantum level the whole 3-space
be the same shrinking soliton (i.e. satisfying eq.(\ref{eq:3-shrinking-soliton})
with an identical $\tau$ globally and hence the same temperature
$T$ everywhere in the 3-space). Replacing the $\delta^{(3)}$-density
in eq.(\ref{eq:BH temperature}) by the $\lambda_{3}$-density, which
extends the $\delta^{(3)}$-density at the origin to the outside region
(the ``vacuum''), we have
\begin{equation}
T=\lambda_{3}\tau=\frac{1}{8\pi Gm},\quad\mathrm{with}\quad\int d^{3}\mathbf{x}\lambda_{3}=\int d^{3}\mathbf{x}\frac{\left\langle \mathcal{T}_{00}\right\rangle }{m_{BH}}=1
\end{equation}
for the whole thermal-equilibrium 3-space, although at the classical
level the vacuum, $R_{ij}(x\neq0)=0$, seems not to be a shrinking soliton
outside the origin. The physical reason is transparent: the internal-energy
contribution $\frac{3T}{2V_{3}}$ in $\langle\mathcal{T}_{00}\rangle$
also plays the role of an additional source of gravity outside the
origin. For the whole 3-space with $\langle\mathcal{T}_{00}\rangle\neq0$
and $\langle\mathcal{T}_{ij}\rangle=0$, Einstein's equation for
the whole 3-space is nothing but the shrinking Ricci soliton equation
(\ref{eq:3-shrinking-soliton}):
\begin{equation}
\langle R_{ij}\rangle=\frac{1}{2}\left\langle R\right\rangle g_{ij}=\frac{1}{2}8\pi G\left\langle \mathcal{T}_{00}\right\rangle g_{ij}\approx\frac{1}{2}8\pi Gm\frac{\left\langle \mathcal{T}_{00}\right\rangle }{m_{BH}}g_{ij}=\frac{1}{2T}\lambda_{3}g_{ij}=\frac{1}{2\tau}g_{ij}\neq0\label{eq:<Rij>=00003D(1/2tau)*gij}
\end{equation}
in which $\langle R\rangle=-8\pi G\langle\mathcal{T}_{\mu}^{\mu}\rangle=8\pi G\langle\mathcal{T}_{00}\rangle\neq0$
is used in the ``vacuum'' outside the origin. The equation is in
fact the spatial component of the Gradient Shrinking Ricci Soliton
equation (\ref{eq:shrinker}) with $\langle R_{ij}\rangle=R_{ij}+\nabla_{i}\nabla_{j}f$,
i.e. the Gaussian/thermal broadening of the density matrix $u$ contributes
to the classical curvature. The vicinity of the origin plus
the ``vacuum'' outside it, taken as a whole,
is nothing but a global shrinking Ricci soliton. The \textquotedblleft vacuum\textquotedblright{}
is not completely empty at the quantum level but full of thermal
particles, $\langle\mathcal{T}_{00}(x\neq0)\rangle\neq0$. The Hawking
temperature is essentially an Unruh effect; in a certain sense, the
Gradient Shrinking Ricci Soliton equation, eq.(\ref{eq:<Rij>=00003D(1/2tau)*gij}),
might play a more fundamental role than Unruh's formula, since it
determines how local acceleration or gravitation gives rise to temperature.
The internal energy of the spacetime frame fields is an additional
and necessary source of gravity: although macroscopically it is too
small to contribute, at the quantum level its contribution is crucial
for the 3-space in thermal equilibrium to be exactly a global shrinking
Ricci soliton. The thermal internal energy coming from the quantum
fluctuation of the 3-space gravitates normally, as the quantum Equivalence
Principle asserts. Otherwise, we would face the following paradox.
Consider a frame $x$ with $\mathcal{T}_{\mu\nu}(x)=0$ everywhere,
so that according to classical gravity $R_{\mu\nu}(x)=0$ everywhere.
If we transform it to another, accelerating frame $x^{\prime}$, one
expects $\mathcal{T}_{\mu\nu}(x)\rightarrow\mathcal{T}_{\mu\nu}^{\prime}(x^{\prime})=0$,
and hence $R_{\mu\nu}^{\prime}(x^{\prime})=0$ everywhere. However,
according to the Equivalence Principle, in the accelerating frame
$x^{\prime}$ one should feel an equivalent gravity, $R_{\mu\nu}^{\prime}(x^{\prime})\neq0$.
Clearly something is missing, and a new dimension of the Equivalence
Principle must be considered. In order to resolve the paradox and retain
the Equivalence Principle, a quantum effect (in fact the effect of
the diffeomorphism anomaly, such as the trace anomaly or the Unruh effect)
must be introduced, so that the accelerating frame sees particles
created from the ``vacuum'' and becomes thermal; these play the role
of an equivalent gravitational source making $R_{\mu\nu}^{\prime}(x^{\prime})\neq0$.
The Hawking temperature in the internal-energy term of eq.(\ref{eq:BH total mass})
is in essence the Unruh temperature playing such a role. In this sense,
the validity of the Equivalence Principle should be extended to
reference frames described by quantum states.
\subsection{The Entropy of a Schwarzschild Black Hole}
In the general framework, the entropy of the black hole comes from
the uncertainty, or quantum fluctuation moment, of the frame fields
given by the manifold density $u$. More precisely, the thermalized
black hole entropy is measured by the maximized Shannon entropy
of the probability distribution $u$ of the frame fields in
the background of the black hole. In this subsection, we therefore calculate
the $u$ density distributed around the Schwarzschild black hole and
then evaluate the corresponding entropy as a measure of the black
hole entropy. After a proper definition of the zero point of the Shannon
entropy, it gives the standard Bekenstein-Hawking entropy.
For an observer in the distant lab rest frame, the contributions to
the temporally static $u$ density around the black hole are twofold.
Besides the thermal distribution $u_{*}$ in the ``vacuum'' or bulk
outside the black hole horizon, which gives rise to the ideal gas
entropy (\ref{eq:S*}) as the background entropy, there is an additional
$\tilde{u}$ density distributed mostly in a thin exterior shell near
the horizon, and sparsely in the bulk outside the horizon, on which we
will focus. The reason is as follows. The $\tilde{u}$ density
satisfies the conjugate heat equation (\ref{eq:u-equation}) on the
classical background of the black hole; since the classical scalar
curvature vanishes, $R=0$, outside the horizon, and the temperature (equivalently
the parameter $\tau$ and the mass) can be taken as unchanged for the thermalized
black hole, i.e. $\frac{\partial\tilde{u}}{\partial\tau}=0$,
the conjugate heat equation for $\tilde{u}$ is approximately given
by the 4-Laplace equation on the Schwarzschild background
\begin{equation}
\Delta_{X}\tilde{u}(X)=0,\quad(|\mathbf{X}|\ge r_{H}).
\end{equation}
Now the temporally static density $\tilde{u}(X)$ plays a role similar
to a solution of the Klein-Gordon equation on the static background
of the black hole. The approximation by the conjugate heat equation
is equivalent to interpreting the Klein-Gordon modes as a ``first''-quantized
probability density (not second-quantized fields). As is well known,
there are modes falling into the black hole horizon and hence disappearing
from the outside observer's view, just like negative Klein-Gordon
modes falling into the negative-energy states below the ground state.
In a flat background, the amplitudes of the modes falling into and
going out of the horizon are identical, so in second quantization
the negative mode falling into the horizon can be reinterpreted as
a single anti-particle with positive-energy modes going out of the
horizon with identical amplitude. However, in a curved background,
for instance the spacetime near the black hole horizon, this statement
is no longer true: the two amplitudes differ from each other by a
non-unitary factor. Thus the negative modes falling into
the black hole horizon can no longer be reinterpreted as a single
anti-particle mode going out, but rather as a multi-particle thermal ensemble.
In this situation, the density $\tilde{u}$ describes the ensemble
density of modes exterior to the horizon, $|\mathbf{X}|\ge r_{H}$,
which can be seen by an outside observer.
As a starting point, we perform a routine calculation of the solution near the
exterior horizon, where the metric resembles a Rindler metric. We denote
the solution by $\tilde{u}_{\mathbf{k}}(\rho)$, in which $\mathbf{k}$
represents the Fourier components/momenta in the directions
orthogonal to the radial direction, with $\rho=\log(r-r_{H})$,
$r$ the radius, and $r_{H}=2Gm$ the radius of the horizon. The equation
becomes
\begin{equation}
-\frac{\partial^{2}\tilde{u}_{\mathbf{k}}}{\partial\rho^{2}}+\mathbf{k}^{2}e^{2\rho}\tilde{u}_{\mathbf{k}}=\omega^{2}\tilde{u}_{\mathbf{k}},
\end{equation}
where $\omega$ is the eigen-energy of the modes. Using the natural
boundary condition that $\tilde{u}$ vanishes at infinity, we can
see that each transverse Fourier mode $\tilde{u}_{\mathbf{k}}$ can
be considered as a free 1+1 dimensional quantum field confined in
a box: one wall of the box is the reflecting boundary $\rho_{0}=\log\epsilon_{0}$
with $\epsilon_{0}\approx0$, and the other wall of the box is provided
by the potential
\begin{equation}
V(\rho)=\mathbf{k}^{2}e^{2\rho},
\end{equation}
which becomes large, $V(\rho)\gg1$, at $\rho>-\log\mathbf{k}$. We
can therefore approximate the potential by a second wall at $\rho_{w}=-\log\mathbf{k}$,
so the length of the box is given by
\begin{equation}
\Delta\rho=\rho_{w}-\rho_{0}=-\log(\epsilon_{0}\mathbf{k}).
\end{equation}
Thus the thickness of the horizon is about $\Delta r\sim e^{\Delta\rho}\sim\epsilon_{0}\mathbf{k}$.
The density $\tilde{u}_{\mathbf{k}}(\rho)$ is located in the box
$\rho\in(\rho_{0},\rho_{w})$; in other words, the $\tilde{u}$
density is located mainly in a thin shell near the horizon, $r\in(r_{H},r_{H}+\epsilon_{0}\mathbf{k})$.
Furthermore, the modes $\mathbf{k}$ are assumed to be normally distributed
(with a tiny width described by the parameter $\tau$). In this picture,
without solving the equation, we can approximately write down the
natural solution as $\tilde{u}_{\mathbf{k}}(r)\overset{\tau\rightarrow0}{\approx}\delta(|\mathbf{k}|)\delta(r-r_{H})$,
while for finite and small $\tau$ we have the nearly Gaussian form
\begin{equation}
\tilde{u}_{\mathbf{k}}(r)\approx\delta(|\mathbf{k}|)\cdot\frac{1}{(4\pi\tau)^{1/2}}e^{-\frac{(r-r_{H})^{2}}{4\tau}}\approx\frac{1}{(4\pi|\mathbf{k}|^{2}\tau)^{1/2}}e^{-\frac{(r-r_{H})^{2}}{4\tau}},\quad(r>r_{H})
\end{equation}
The exterior-horizon solution can be considered as a standing-wave
solution, the superposition of the modes falling into and coming
out of the black hole horizon. Then we have (up to a constant)
\begin{equation}
\log\tilde{u}_{\mathbf{k}}(r)\overset{r\sim r_{H}}{\approx}-\frac{1}{2}\log\left(|\mathbf{k}|^{2}\tau\right).
\end{equation}
A routine calculation of the relative Shannon entropy or W-functional
gives the entropy of each k-mode in the limit in which the width $\tau$
is very small,
\begin{align}
\lambda_{3}\tilde{N}(\tilde{u}_{\mathbf{k}}) & =-\lambda_{3}\int d^{3}X\tilde{u}_{\mathbf{k}}\log\tilde{u}_{\mathbf{k}}\nonumber \\
& =\delta(|\mathbf{k}|)\int_{r_{H}}^{\infty}4\pi r^{2}dr\frac{1}{(4\pi\tau)^{1/2}}e^{-\frac{(r-r_{H})^{2}}{4\tau}}\frac{1}{2}\log\left(|\mathbf{k}|^{2}\tau\right)\nonumber \\
& \overset{\tau\sim0}{=}\delta(|\mathbf{k}|)\frac{1}{4}A\log\left(|\mathbf{k}|^{2}\tau\right),\label{eq:entropy-3-volume-integral}
\end{align}
where $A=4\pi r_{H}^{2}$ is the area of the horizon.
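The prefactor $\frac{1}{4}A$ in the last line can be traced as follows (a worked step added here for clarity): as $\tau\rightarrow0$ the Gaussian concentrates at $r=r_{H}$ and only the half-line $r>r_{H}$ is integrated over, so
\begin{equation*}
\int_{r_{H}}^{\infty}4\pi r^{2}dr\,\frac{1}{(4\pi\tau)^{1/2}}e^{-\frac{(r-r_{H})^{2}}{4\tau}}\;\overset{\tau\rightarrow0}{\longrightarrow}\;4\pi r_{H}^{2}\int_{0}^{\infty}\frac{ds}{(4\pi\tau)^{1/2}}e^{-\frac{s^{2}}{4\tau}}=\frac{1}{2}A,
\end{equation*}
which, multiplied by the factor $\frac{1}{2}\log(|\mathbf{k}|^{2}\tau)$, gives the quoted $\frac{1}{4}A\log(|\mathbf{k}|^{2}\tau)$.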
It is natural to assume that the momentum $\mathbf{k}$ in the horizon
shell is homogeneous,
\begin{equation}
|\mathbf{k}|=|k_{r}|=|k_{\perp}|,\label{eq:momentum in shell}
\end{equation}
where $k_{r}$ is the momentum in the radial direction and $k_{\perp}$
that in the transverse directions along the horizon. Integrating over
all k-modes, we obtain the total relative Shannon entropy, which depends
only weakly on $\tau$:
\begin{align}
\lambda_{3}\tilde{N}(\tilde{u}) & =\lambda_{3}\int d^{3}\mathbf{k}\tilde{N}(\tilde{u}_{\mathbf{k}})\nonumber \\
& =\frac{1}{4}A\int\frac{d^{2}k_{\perp}}{(2\pi)^{2}}\log\left(|k_{\perp}|^{2}\tau\right)\int dk_{r}\delta(k_{r})\nonumber \\
& \approx\frac{1}{4}A\int_{0}^{1/\epsilon}\frac{2\pi k_{\perp}dk_{\perp}}{(2\pi)^{2}}\log\left(|k_{\perp}|^{2}\tau\right)\nonumber \\
& =\frac{1}{4}A\times\frac{1}{2\pi\tau}\left[-\frac{\tau}{2\epsilon^{2}}\left(1-\log\frac{\tau}{\epsilon^{2}}\right)\right]\nonumber \\
& \approx-\frac{A}{16\pi\epsilon^{2}},\label{eq:entropy-3-momentum-integral}
\end{align}
in which the transverse momentum is effectively cut off at the inverse
of a fundamental UV length scale $\epsilon^{2}$.
The relative Shannon entropy thus gives an area law for the black hole entropy.
To determine the UV length cutoff $\epsilon^{2}$, we need to consider
the scale at which the relative entropy is defined to be zero (not
only is the black hole in local thermal equilibrium, but the asymptotic
background spacetime is also in global thermal equilibrium); thus we need
to consider the flow of the asymptotic background spacetime. A natural
choice of a thermal-equilibrium Ricci flow limit of the background
spacetime (in which the black hole is embedded) is an asymptotically homogeneous
and isotropic Hubble universe with scalar curvature $R_{0}=D(D-1)H_{0}^{2}=12H_{0}^{2}$
at the scale $t_{UV}$, where we can normalize the relative
entropy to zero (leaving only the background ideal gas entropy),
since there is no information about local shape distortions in such
a GSRS background, its Weyl curvature vanishing, while
the global curvature is non-zero and encodes the information of its
global volume shrinking. Under this definition, using the normalized
shrinking Ricci soliton equation (\ref{eq:shrinker}) and (\ref{eq:tau}),
we have
\begin{equation}
\tau_{UV}=-t_{UV}=\frac{D}{2R_{0}}=\frac{1}{64\pi^{2}\lambda}k_{UV}^{2},
\end{equation}
by using the critical density (\ref{eq:critical density}), which
gives a natural cutoff corresponding to the scale $t_{UV}$,
\begin{equation}
\epsilon^{2}=k_{UV}^{-2}=\frac{1}{D\pi}G=\frac{1}{4\pi}G.
\end{equation}
This is exactly the Planck scale, a natural cutoff scale
induced from the Hubble scale $H_{0}$ and $\lambda$ within the framework.
However, it is worth stressing that the Planck scale is not the absolute
fundamental scale of the theory; it only has meaning w.r.t. the asymptotic
Hubble scale. The only fundamental scale of the theory is the critical
density $\lambda$, which is given by a combination of the Planck
scale and the Hubble scale, while neither the Planck nor the Hubble scale
has absolute meaning individually. The UV (Planck) cutoff scale could
tend to infinity while the complementary (Hubble) scale correspondingly
tends to zero (asymptotically flat background), keeping $\lambda$ finite
and fixed.
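As a worked check of these numbers (added here; it assumes that the critical density of eq.(\ref{eq:critical density}) takes the standard form $\lambda=\frac{3H_{0}^{2}}{8\pi G}$ and that $D=4$ is used in $R_{0}=D(D-1)H_{0}^{2}$):
\begin{equation*}
\tau_{UV}=\frac{4}{2\cdot12H_{0}^{2}}=\frac{1}{6H_{0}^{2}},\qquad k_{UV}^{2}=64\pi^{2}\lambda\,\tau_{UV}=\frac{64\pi^{2}}{6H_{0}^{2}}\cdot\frac{3H_{0}^{2}}{8\pi G}=\frac{4\pi}{G},\qquad\epsilon^{2}=k_{UV}^{-2}=\frac{G}{4\pi},
\end{equation*}
so that the area law above becomes $\frac{A}{16\pi\epsilon^{2}}=\frac{A}{4G}$, anticipating the Bekenstein-Hawking form obtained below.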
At this point, if we define the zero of the relative entropy with respect to an asymptotic
Hubble universe of scalar curvature $R_{0}$, then the black hole
in this asymptotic background has a non-zero thermodynamic entropy
\begin{equation}
S=-\lambda_{3}\tilde{N}(\tilde{u})=\frac{A}{4G},\label{eq:S=00003DA/4}
\end{equation}
up to the bulk background entropy $\lambda_{3}N_{*}=S_{*}\ll S$,
eq.(\ref{eq:bulk entropy}). Combining the relative Shannon entropy
$\tilde{N}$ and the bulk thermal background entropy $N_{*}$, and
using the total partition function of eq.(\ref{eq:frame-partition}),
$Z(M^{3})=e^{\lambda_{3}N-\frac{3}{2}}=e^{\lambda_{3}(\tilde{N}+N_{*})-\frac{3}{2}}$,
we can also reproduce the total energy of the black hole in (\ref{eq:BH total mass})
\begin{equation}
m_{BH}=-\frac{\partial\log Z}{\partial\beta}=m+\frac{3}{2}T,
\end{equation}
in which eq.(\ref{eq:perelman-partition}) and $A=4\pi r_{H}^{2}=16\pi G^{2}m^{2}=\frac{\beta^{2}}{4\pi}$
have been used.
Different from the holographic idea that the information or entropy
is encoded in the (infinitely thin, 2-dimensional) horizon or boundary
of a gravitational system, in this framework the coordinates
of the spacetime geometry are smeared by quantum fluctuations; as a
consequence, there is in general no mathematically precise notion of an infinitely
thin boundary in a ``density manifold'', which is only
a semi-classical concept. Note that the manifold density $u$ is mainly
distributed at the horizon with a finite (although very small) thickness,
which contributes most of the anomaly and entropy of the black
hole, so although the entropy (\ref{eq:S=00003DA/4}) is proportional
to the area, the geometric gravitational entropy given by the framework
essentially comes from the 3-volume (note the 3d integrals in eq.($\ref{eq:entropy-3-volume-integral}$)
and eq.(\ref{eq:entropy-3-momentum-integral})) rather than from the 2-surface
boundary. In other words, here the area of the horizon
is fluctuating (due to its finite thickness) rather than fixed, while the
total energy and hence the temperature are fixed. In this sense, it
is a canonical ensemble rather than an area ensemble as some ideas might suggest.
\section{Conclusions}
In this paper, we have proposed a statistical field theory underlying
Perelman's seminal analogies between his geometric functionals and
the thermodynamic functions. The theory is based on a $d=4-\epsilon$
quantum non-linear sigma model, interpreted as a quantum reference
frame. When we quantize the theory in the Gaussian approximation,
the wavefunction $\Psi(X)$ and hence the density matrix $u(X)=\Psi^{*}(X)\Psi(X)$,
eq.(\ref{eq:u}), can be written down explicitly. Based on the density
matrix, the Ricci flow of the frame fields (\ref{eq:ricci flow})
and the generalized Ricci-DeTurck flow (\ref{eq:ricci-deturk}) of
the frame fields endowed with the density matrix are discussed.
Furthermore, we find that the density matrix has profound statistical
and geometric meanings: by using it, the spacetime $(M^{D},g)$ as
the target space of the NLSM is generalized to a density spacetime $(M^{D},g,u)$.
The density matrix $u(X,\tau)$, satisfying the conjugate heat equation
(\ref{eq:u-equation}), not only describes a (coarse-grained) probability
density of finding the frame fields in a local volume, but also describes
a volume comparison between a local volume and the fiducial one.
Owing to the non-isometric nature of the Ricci or Ricci-DeTurck flow, the
classical diffeomorphism invariance is broken at the quantum level. By the
functional-integral quantization method, the change of the measure
of the functional integral can be expressed through a Shannon entropy
$N$ in terms of the density matrix $u(X,\tau)$. The induced trace
anomaly and its relation to the anomalies in conventional gravity
theories are also discussed. As the Shannon entropy flows monotonically
to its maximal value $N_{*}$ in a limit called the Gradient Shrinking
Ricci Soliton (GSRS), a relative density $\tilde{u}$ and a relative
Shannon entropy $\tilde{N}=N-N_{*}$ can be defined w.r.t. the flow
limit. The relative Shannon entropy gives a statistical interpretation
underlying Perelman's partition function (\ref{eq:perelman-partition}),
and the monotonicity of $\tilde{N}$ along the Ricci flow gives an
analogous H-theorem (\ref{eq:analog H-theorem}) for the frame fields
system. As a byproduct, the meaning of the gravitational side of
the theory is also discussed, in which a cosmological constant $-\lambda\nu(B_{\infty}^{4})\approx0.8\rho_{c}$
as a UV counterterm of the anomaly must be introduced.
We find that a temporally static GSRS, $M^{3}$, as a 3-space slice
of the 4-spacetime GSRS $M^{4}=M^{3}\times\mathbb{R}$, is in a thermal
equilibrium state, in which the temperature is proportional to the
global $\tau$ parameter of $M^{3}$, eq.(\ref{eq:tau-T}), up to a 3-space
energy density $\lambda_{3}$ with normalization $\int d^{3}x\lambda_{3}=1$.
The temperature and $\lambda_{3}$ both depend on the choice of the time
direction $\mathbb{R}$. In the sense that $M^{3}$ is thermal, its Ricci
soliton equation, eq.(\ref{eq:3-shrinking-soliton}), or quantum (indistinguishable
from thermal) fluctuation, eq.(\ref{eq:sigma=00003Dtau*g}), can be
considered as a generalization of Unruh's formula, relating the
temperature to local acceleration or gravitation. Based on the statistical
interpretation of the density matrix $u(X,\tau)$, we find that the
thermodynamic partition function (\ref{eq:Z*}) in the Gaussian approximation
is just the partition function of an ideal gas of frame fields. In
this physical picture of a canonical ensemble of frame-field gas, several
thermodynamic functions, including the internal energy (\ref{eq:internal energy}),
the free energy (\ref{eq:free-energy}), the thermodynamic entropy
(\ref{eq:S*}), and the ensemble density (\ref{eq:ensemble density}),
can be calculated explicitly, in agreement with Perelman's formulae,
which gives an underlying statistical foundation to Perelman's analogous
functionals.
We find that the statistical field theory of quantum reference frames
can be used to give a possible underlying microscopic origin of
spacetime thermodynamics. The standard results of the thermodynamics
of the Schwarzschild black hole, including the Hawking temperature,
the energy and the Bekenstein-Hawking entropy, can be successfully reproduced
in the framework. We also find that when the fluctuating internal energy
of the metric is taken into account in the total energy, the energy
of the black hole has a lower bound of the order of the Planck energy,
which prevents the quantum black hole from evaporating into nothing. The internal
energy, or the related temperature, of the spacetime frame fields is an
additional source of gravity; although macroscopically it is very
small, at the quantum level its contribution is necessary for a thermal-equilibrium
3-space to be exactly a GSRS, otherwise the Equivalence
Principle would break down. In this paper, the extended quantum Equivalence
Principle plays a fundamental role as a bridge from the quantum reference
frame theory (as a statistical or quantum field theory on
the base/lab spacetime) to quantum gravity.
To sum up, the paper can be seen as an attempt to discuss the deep
relations between three fundamental themes: the diffeomorphism
anomaly, gravity, and spacetime thermodynamics, based on the statistical
field theory of the quantum spacetime reference frame and the quantum
Equivalence Principle. In the spirit of classical general relativity,
if we trust the Equivalence Principle, one cannot in principle figure
out whether one is in an absolute accelerating frame or in an absolute
gravitational background, which leads to the general covariance principle
or diffeomorphism invariance of the gravitational theory. However,
at the quantum level, the issue is a little subtle. An observer
in an accelerating frame sees the Unruh effect, i.e. thermal particles
being created in the ``vacuum'', which seems to lead to unitary
inequivalence between the vacua of, for instance, an inertial frame
and an accelerating frame, and hence the diffeomorphism invariance
appears broken; this is discussed as the anomaly in the paper. The treatment
of the anomaly in the paper is that the anomaly is only canceled
in an observer's lab up to the UV scale, where the frame can be considered
classical, rigid and cold, while at a general scale the anomaly is not
completely canceled. Can one then figure out that he/she is in
an absolute accelerating frame by detecting the anomaly (the Shannon $\tilde{N}$
term) at a general scale (e.g. by thermodynamic experiments detecting
the vacuum thermal particle creation and hence finding the non-unitarity)?
We argue that if the answer is still ``NO!'' in the spirit of
general relativity, then the anomaly term coming from a quantum general
coordinate transformation must also be equivalently interpreted as
an effect of spacetime thermodynamics and gravity. This is because the 2nd
order moment fluctuation of the quantum coordinates, or a non-trivial
manifold density $u$, which gives rise to the diffeomorphism anomaly,
also contributes to other 2nd order quantities (series coefficients
at second spacetime derivatives) such as (i) the acceleration (second
time derivative of the coordinates, e.g. leading to uniform acceleration
expansion or other acceleration discrepancies in the universe \citep{Luo:2019iby}),
(ii) the gravity or curvature (second spacetime derivative of the metric,
e.g. see (\ref{eq:dg(2)}) and (\ref{eq:density ric bound})) and
(iii) the thermal broadening (second spatial derivative of the manifold
density or the ensemble density, e.g. see (\ref{eq:sigma=00003Dtau*g})
and (\ref{eq:tau-T})), all at the same (2nd) order. In this sense, the
validity of the classical Equivalence Principle is generalized
to the quantum level to incorporate the effects of the quantum fluctuation
of the spacetime coordinates or frame fields, so that one in principle
still cannot figure out or distinguish whether he/she is in an accelerating
frame, in a gravitational field, or in a thermal spacetime (a
new dimension of the Equivalence Principle); these three things have
no absolute physical meaning and are no longer distinguishable in
the framework. The classical Equivalence Principle asserts the equivalence
of the first two at first order (the mean level); the quantum
Equivalence Principle asserts the equivalence of all three even at second
order (the variance level), and indeed at higher orders.
\section*{Data availability statement}
All data that support the findings of this study are included within
the article.
\begin{acknowledgments}
This work was supported in part by the National Science Foundation
of China (NSFC) under Grant No.11205149, and the Scientific Research
Foundation of Jiangsu University for Young Scholars under Grant No.15JDG153.
\end{acknowledgments}
\bibliographystyle{plain}
|
{
"arxiv_id": "2302.08627",
"language": "en",
"timestamp": "2023-02-20T02:04:04",
"url": "https://arxiv.org/abs/2302.08627",
"yymm": "2302"
} | \section{Introduction}
Large-scale dynamical systems, including electric circuit design, structural mechanics, and microelectromechanical systems, are often described by considerably high-dimensional mathematical models and hence may require formidable computational demands for numerical simulations, control design, and optimization.
This spurs the development of various model reduction techniques to generate lower-dimensional approximates, or reduced-order models, that allow us to capture the pertinent characteristics of large-scale systems in a computationally manageable fashion. These reduced-order models can further replace their complex counterparts in the control and optimization process, resulting in a shorter design cycle time, see \cite{obinata2012model,antoulas2005approximation,morAntSG01,Gugercin2004Survey} for an overview.
Constructing reduced-order models that closely approximate the input-output mapping of the original complex systems while preserving their essential structural features presents a fundamental problem.
Many physical systems possess structural properties such as stability, passivity, or dissipativity, which are critical in determining the system behaviour and play a crucial role in the control and optimization of the systems. Hence, it is highly desirable that the reduced-order models also preserve the structural properties of the system and thereby provide an adequate and lean basis for control and optimization, retaining the physical characteristics, see e.g., \cite{wu2018reduced,zheng2015finite,sadamoto2020fast,cheng2021review}.
Structure-preserving model reduction has attracted extensive attention, and in the existing literature, different methods have been developed for reducing large-scale dynamical systems with specific structural properties. The balanced truncation approach \cite{moore1981principal} naturally retains system stability and minimality through a reduction process, which has been extended to retain the positive-real and bounded-real properties of a linear system
\cite{Gugercin2004Survey,antoulas2005approximation,reis2010positive}. Further developments in e.g., \cite{phillips2003guaranteed,guiver2013error,kawano2018structure,wu2018reduced,salehi2021passivity,breiten2021error} extend the balanced truncation approach to preserve dissipativity or the port-Hamiltonian structure. The other model reduction methods, including moment matching or Krylov subspace approaches, generally do not guarantee the preservation of stability, while the work in \cite{gugercin2008iterative} shows that by combining the features of the singular value decomposition and Krylov-based approximation, the stability and moment matching can be achieved simultaneously. A generalization of this method is then presented in \cite{selga2012stability} by introducing the notions of contractivity and matrix measure. Furthermore, in e.g., \cite{antoulas2005passive,sorensen2005passivity,ionutiu2008passivity,gugercin2012pH,polyuga2010pH,wolf2010passivity,Ionescu2013,hauschild2019pH}, the Krylov subspace approaches have been applied to the model reduction of passive systems or port-Hamiltonian systems, where reduced-order models are produced with the preservation of the passivity and hence the stability. Besides, a reduction method for dissipative Hamiltonian systems is presented in \cite{afkham2019structure}, which adopts a symplectic time integrator to conserve the total energy of a Hamiltonian system. Although the aforementioned methods have demonstrated their effectiveness in structure-preserving model reduction, these methods barely guarantee any optimality with respect to a certain criterion. Furthermore, the methods based on the iterative rational Krylov algorithm (IRKA) do not guarantee convergence to local or global optima \cite{flagg2012convergence}. Recently, \cite{Schwerdtner2020structure,breiten2021passivity} present new developments in passivity-preserving model reduction via techniques such as parameter optimization and spectral factorization.
In contrast to above methods, the works in e.g., \cite{Yu1999approximate,vanDooren2008H2,beattie2007krylov,beattie2009trust,gugercin2008H2,sato2017structure,sato2018pHsys,sato2021book,Jiang2019model,moser2020new} formulate model reduction problem in the context of nonconvex $\mathcal{H}_2$ optimization, which aims to minimize the $\mathcal{H}_2$ error between original and reduced-order models.
In \cite{beattie2007krylov,beattie2009trust,gugercin2008H2}, the first-order optimality conditions are evaluated on the basis of poles and residues of the transfer matrices of original and reduced-order models, and thereby the rational Krylov algorithm \cite{beattie2007krylov,gugercin2008H2} or the trust region method \cite{beattie2009trust} can be applied to converge to a reduced-order model satisfying first-order optimality conditions with respect to the $\mathcal{H}_2$ error criteria. Nevertheless, structure preservation is not the primary concern of these methods.
Within the pole-residue framework, \cite{moser2020new} provides an optimal model reduction method that is capable of preserving the port-Hamiltonian structure, although it is restricted to single-input single-output systems.
The other nonconvex optimization framework for model reduction is formulated in e.g., \cite{Yu1999approximate,sato2017structure,sato2018pHsys,sato2021book,Jiang2019model} and the references therein. In this framework, the first-order conditions for (local) $\mathcal{H}_2$ optimality are established via coupled Lyapunov equations, and then gradient-based algorithms are devised to search for reduced-order models on the Stiefel manifold and the Riemannian manifold, respectively, leading to structure-preserving reduced-order models. However, these methods in \cite{Yu1999approximate,sato2017structure,sato2018pHsys,sato2021book} require the system state matrix or its symmetric part to be negative definite. The current paper follows this Lyapunov framework but requires only the stability of the state matrices and hence can handle more general systems with different structural properties.
In this paper, we focus on the Petrov-Galerkin projection framework striving to find an optimal reduced-order model that not only admits the $\mathcal{H}_2$ optimality in reduction error but also preserves structural properties of interest. To this end, the $(Q,S,R)$-\textit{dissipativity} of a linear state-space model is considered and characterized by a linear matrix inequality, whose solution is called a structure matrix and incorporated into constructing the project matrix. It can lead to certain degrees of freedom by altering the structure matrix to maintain desired structural properties including dissipativity, passivity, and bounded realness. Then, the model reduction problem is formulated as a nonconvex optimization on the basis of a noncompact Stiefel manifold. By using the controllability and observability Gramian matrices of the error system, we derive the explicit expression of the gradient of its objective function on the matrix manifold and provide two gradient descent algorithms to find a (local) optimal projection yielding the smallest reduction error. Unlike the previous works which only preserve one structural feature, this method allows retaining multiple structural properties in one unified framework by simply altering the structure matrix. Furthermore, compared to \cite{Yu1999approximate}, we formulate the optimal projection problem on a
noncompact Stiefel manifold, which enables the algorithm to search for solutions in a larger topological space.
The paper is organized as follows. In Section~\ref{sec:Preliminaries} we recapitulate some preliminaries on the $\mathcal{H}_2$ norm and introduce the problem setting for the structure-preserving model reduction. The main results are presented in Section~\ref{sec:projection} and Section~\ref{sec:gradient}, where we formulate the model reduction problem in an optimization framework and solve it using gradient-based algorithms. The proposed methods are illustrated numerically in Section~\ref{sec:example}, and some concluding remarks are made in Section~\ref{sec:conclusion}.
\section{Preliminaries \& Problem Setting}
\label{sec:Preliminaries}
Some important notations and fundamental concepts are defined in this section, and the structure-preserving model reduction problem is then introduced.
\subsection{Notations}
The symbol $\mathbb{R}$ denotes the set of real numbers. For a matrix $A \in \mathbb{R}^{n \times m}$, the Frobenius norm of $A$ is denoted by $\|A\|^2_F = \langle A, A \rangle = \mathrm{Tr}(A^\top A)$. The symbol $\star$ in a square symmetric matrix stands for the symmetric counterpart, namely, we denote a symmetric matrix
$
\begin{bmatrix}
A & B \\ B^\top & C
\end{bmatrix}
$
simply by
$
\begin{bmatrix}
A & B \\ \star & C
\end{bmatrix}
$.
Let $\mathbb{S}^{n \times r}_*$ denote the set of all $n \times r$ matrices whose columns are linearly independent, i.e.
\begin{equation} \label{eq:Stiefel}
\mathbb{S}^{n \times r}_* = \{ V \in \mathbb{R}^{n \times r} \mid \mathrm{Rank} (V) = r \}.
\end{equation}
This is referred to
as the manifold $\mathbb{S}^{n \times r}_*$, or the \textit{noncompact Stiefel manifold} of full-rank $n \times r$
matrices. The set $\mathbb{S}^{n \times r}_*$ is an open subset of $\mathbb{R}^{n \times r}$ since its complement $\{X \in \mathbb{R}^{n \times r} : \det(X^\top X) = 0\}$ is closed. Therefore, $\mathbb{S}^{n \times r}_*$ admits a structure
of an open submanifold of $\mathbb{R}^{n \times r}$. We refer to \cite{Absil2009optimization} for more details.
\subsection{Gramians and $\mathcal{H}_2$ Norm of Linear Stable Systems}
We recapitulate the concept of the $\mathcal{H}_2$ norm of a stable linear time-invariant system with the following state-space representation.
\begin{equation}\label{sys:orig}
\mathbf{\Sigma}: \
\begin{cases}
\dot{x}(t) = A x(t) + B u(t), \\
y(t) = C x(t) + D u(t),
\end{cases}
\end{equation}
where $x(t) \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^p$, and $y(t) \in \mathbb{R}^{q}$ are the vectors of the states, inputs, and outputs of the system $\mathbf{\Sigma}$. $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times p}$, $C \in \mathbb{R}^{q \times n}$, and $D \in \mathbb{R}^{q \times p}$ are constant matrices. If $A$ is Hurwitz, the system $\mathbf{\Sigma}$ is (asymptotically) stable. Then,
the \textit{controllability and observability Gramians} are properly defined as
\begin{equation*}
\label{defn:Gramians}
\mathcal{P} = \int_{0}^{\infty}
e^{A t} BB^\top e^{A^\top t} \mathrm{d}{t},
\quad
\mathcal{Q} = \int_{0}^{\infty}
e^{A^\top t} C^\top C e^{A t} \mathrm{d}{t},
\end{equation*}
respectively \cite{antoulas2005approximation,gugercin2008H2}. Furthermore, the two positive semidefinite Gramian matrices are obtained as the unique solutions of the following Lyapunov equations:
\begin{align} \label{eq:gramins}
A \mathcal{P} + \mathcal{P} A^\top + B B^\top =0, \quad A^\top \mathcal{Q} + \mathcal{Q} A + C^\top C =0.
\end{align}
Let the system $\mathbf{\Sigma}$ in \eqref{sys:orig} be strictly causal, i.e. $D = 0$, and $G(s):=C(sI-A)^{-1} B$ be the transfer matrix of $\mathbf{\Sigma}$. Then, the $\mathcal{H}_2$ norm of $\mathbf{\Sigma}$ is defined and computed as
\begin{align*}
\|G(s)\|_{\mathcal{H}_2}^2
& = \int_{0}^{\infty} \mathrm{Tr} \left( C e^{At} B B^\top e^{A^\top t} C^\top \right) \mathrm{d}{t}
\\
& = \mathrm{Tr}(C \mathcal{P} C^\top) = \mathrm{Tr}(B^\top \mathcal{Q} B).
\end{align*}
In this paper, the $\mathcal{H}_2$ norm is chosen to be a metric for evaluating the approximation accuracy of a reduced-order model such that an $\mathcal{H}_2$ optimal model reduction problem is formulated, where the objective function and its gradient can be formulated on the matrix manifold \eqref{eq:Stiefel}.
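To make the computation concrete, the following numerical sketch (not part of the original paper; the test system is randomly generated and purely hypothetical) solves the Lyapunov equations \eqref{eq:gramins} for the Gramians and evaluates the two equivalent expressions of the $\mathcal{H}_2$ norm.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable test system (A is made Hurwitz by shifting its spectrum).
rng = np.random.default_rng(0)
n, p, q = 6, 2, 2
A = rng.standard_normal((n, n))
A = A - (np.max(np.abs(np.linalg.eigvals(A).real)) + 1.0) * np.eye(n)
B = rng.standard_normal((n, p))
C = rng.standard_normal((q, n))

# Controllability and observability Gramians:
#   A P + P A^T + B B^T = 0,   A^T Q + Q A + C^T C = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# The two equivalent H2-norm expressions: Tr(C P C^T) = Tr(B^T Q B).
h2_from_P = np.sqrt(np.trace(C @ P @ C.T))
h2_from_Q = np.sqrt(np.trace(B.T @ Q @ B))
print(h2_from_P, h2_from_Q)
\end{verbatim}
The two printed values should agree up to numerical round-off, which provides a convenient sanity check for any Gramian-based implementation.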
\subsection{Structure-Preserving Model Reduction}
\label{sec:problem}
The overall aim of this paper is to develop a general model reduction methodology that is applicable to a wide range of systems with specific structural properties, including passivity (or positive realness) and finite-gain $L_2$ stability (or bounded realness). To this end, we consider the concept of the $(Q, S, R)$-dissipativity, and the aforementioned structural properties can be considered as special cases of the $(Q, S, R)$-dissipativity.
\begin{definition}[$(Q, S, R)$-Dissipativity]\cite{willems2007dissipative,brogliato2007dissipative,xie1998robust,xia2016passivity}
For given matrices $Q = Q^\top$, $R = R^\top$
and $S$, a system $\mathbf{\Sigma}$ in \eqref{sys:orig} is called $(Q,S,R)$-
dissipative if
\begin{equation}
\int_{0}^{\tau} y(t)^\top Q y(t) + u(t)^\top R u(t) + 2 u(t)^\top S y(t) \mathrm{d}{t} \geq 0
\end{equation}
holds for all admissible $u(t)$ and all $\tau \geq 0$.
\end{definition}
It is further shown in \cite{willems2007dissipative,xia2016passivity,kottenstette2014relationships} that the $(Q, S, R)$-dissipativity of a linear state-space model \eqref{sys:orig} can be characterized by a linear matrix inequality.
\begin{lemma} \label{lem:QSRdisip}
A linear system \eqref{sys:orig} is $(Q, S, R)$-dissipative if and only if there exists
a matrix $K > 0$ such that
\begin{equation} \label{eq:LMI-QSR}
\Gamma = \begin{bmatrix}
A^\top K + K A & KB - C^\top S & - C^\top Q^{\frac{1}{2}} \\
\star & - R - D^\top S - S^\top D & -D^\top Q^{\frac{1}{2}} \\
\star & \star & - I
\end{bmatrix} \leq 0.
\end{equation}
If $\Gamma$ is strictly negative definite, then the system \eqref{sys:orig} is asymptotically stable and $(Q, S, R)$-dissipative.
\end{lemma}
This work focuses on the notion of $(Q, S, R)$-dissipativity as it is a rather general property that can be reduced, with particular weighting matrices $Q$, $S$ and $R$, to passivity and finite-gain $L_2$ stability.
\begin{enumerate}
\item Let $Q = 0$, $S = I$, $R = 0$. Then, the $(Q, S, R)$-dissipativity reduces to the \textit{passivity} (or \textit{positive realness}) as $
\int_{0}^{\tau} u(t)^\top y(t) \mathrm{d}{t} \geq 0
$
for all admissible $u(t)$ and all $\tau \geq 0$. The inequality \eqref{eq:LMI-QSR} is simplified as
\begin{equation} \label{eq:Kpassiv}
\begin{bmatrix}
A^\top K + K A & KB - C^\top
\\
\star & {-D^\top - D}
\end{bmatrix} \leq 0,
\end{equation}
\item Let $Q = -\gamma^{-1} I$, $S = 0$, $R = \gamma I$, then
the $(Q, S, R)$-dissipativity becomes the \textit{finite-gain $L_2$ stability} (or \textit{bounded realness}) as
$$
\int_{0}^{\tau} y(t)^\top y(t) \mathrm{d}{t} \leq \gamma^2 \int_{0}^{\tau} u(t)^\top u(t) \mathrm{d}{t}
$$ for all admissible $u(t)$ and all $\tau \geq 0$, and meanwhile \eqref{eq:LMI-QSR} is simplified by the Schur complement lemma as
\begin{equation} \label{eq:KL2}
\begin{bmatrix}
A^\top K + K A & KB
\\
\star & - \gamma I
\end{bmatrix} + \frac{1}{\gamma}
\begin{bmatrix}
C^\top
\\
D^\top
\end{bmatrix}
\begin{bmatrix}
C & D
\end{bmatrix}
\leq 0.
\end{equation}
\end{enumerate}
\begin{remark}
We emphasize that the inequality in \eqref{eq:LMI-QSR} only provides a characterization of the $(Q,S,R)$-dissipativity, which is a general concept including passivity and bounded realness. However, the LMI in \eqref{eq:LMI-QSR} does not always have to be solved numerically to obtain a solution $K$. For instance, to reduce port-Hamiltonian systems as in e.g., \cite{gugercin2012pH,polyuga2010pH,wolf2010passivity,Ionescu2013,sato2018pHsys,Jiang2019model}, we can simply use the positive definite Hamiltonian matrix as $K$. To preserve the stability of an observable system, $K$ can be chosen as the observability Gramian of the system as in \cite{gugercin2008iterative}.
\end{remark}
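For illustration, given a candidate structure matrix $K$ (e.g., the Hamiltonian matrix of a port-Hamiltonian system), the passivity LMI \eqref{eq:Kpassiv} can be checked numerically by a simple eigenvalue test; the Python sketch below is a minimal check of this kind, where the tolerance is an illustrative choice and a square feedthrough matrix $D$ (i.e. $p = q$) is assumed.
\begin{verbatim}
# Sketch: check whether K > 0 certifies the passivity LMI (eq:Kpassiv).
import numpy as np

def certifies_passivity(A, B, C, D, K, tol=1e-9):
    top = np.hstack([A.T @ K + K @ A, K @ B - C.T])
    bot = np.hstack([B.T @ K - C,     -D.T - D])
    lmi = np.vstack([top, bot])
    lmi = 0.5 * (lmi + lmi.T)            # symmetrize before the eigenvalue test
    K_pos   = np.min(np.linalg.eigvalsh(0.5 * (K + K.T))) > tol
    lmi_neg = np.max(np.linalg.eigvalsh(lmi)) <= tol
    return K_pos and lmi_neg
\end{verbatim}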
In this paper, we consider a full-order model $\mathbf{\Sigma}$ in \eqref{sys:orig}, which is asymptotically stable and $(Q, S, R)$-dissipative, i.e. $A$, $B$, and $C$ satisfy the inequality \eqref{eq:LMI-QSR}.
Then, the problem of structure-preserving model reduction in this paper is formulated as follows.
\begin{problem}
Given a fixed positive integer $r$ ($ < n$), we aim to construct a low-dimensional model
\begin{equation}\label{sys:red0}
\mathbf{\hat{\Sigma}}: \
\begin{cases}
\dot{\hat{x}}(t) = \hat{A} \hat{x}(t) + \hat{B} u(t), \\
\hat{y}(t) = \hat{C} \hat{x}(t) + \hat{D} u(t),
\end{cases}
\end{equation}
where $\hat{A} \in \mathbb{R}^{r \times r}$, $\hat{B}\in \mathbb{R}^{r \times p}$, $\hat{C} \in \mathbb{R}^{q \times r}$, and $\hat{D} \in \mathbb{R}^{q \times p}$ are reduced matrices such that
\begin{enumerate}
\item the reduction procedure guarantees the retention of the stability and the $(Q, S, R)$-dissipativity in the reduced-order model $\mathbf{\hat{\Sigma}}$, and
\item the reduced-order model $\mathbf{\hat{\Sigma}}$ best approximates the original model $\mathbf{\Sigma}$ in the sense that the $\mathcal{H}_2$ norm of the error between $\mathbf{\Sigma}$ and $\mathbf{\hat{\Sigma}}$ is minimized.
\end{enumerate}
\end{problem}
Objective (1) guarantees that the structural properties of interest are retained in the reduced-order model. Under this constraint, objective (2) aims to minimize the approximation error in the $\mathcal{H}_2$ norm.
In this work, a unified framework will be presented to solve the structure-preserving model reduction problem, where the reduced-order model is built based on the Petrov-Galerkin projection. The model reduction problem is thereby reformulated as a nonconvex optimization problem, which is solved efficiently by a gradient descent method.
\section{Projection-Based Model Reduction With Structure Preservation}
\label{sec:projection}
Consider a full-order model in \eqref{sys:orig} with any structural property discussed in Section~\ref{sec:problem}. To preserve the desired structure in the reduced-order model \eqref{sys:red0}, a novel Petrov-Galerkin projection is adopted to generate a reduced-order model.
Let $X = X^\top > 0$, $V \in \mathbb{S}^{n \times r}_*$, and denote \begin{equation}\label{eq:pinv}
V^\dagger : = (V^\top X V)^{-1} V^\top X,
\end{equation}
in which $V^\top X V$ is invertible since $V$ has full column rank, and $X \in \mathbb{R}^{n \times n}$ is a positive definite matrix. $V^\dagger$ is called a \textit{reflexive generalized inverse} of $V$. The projection matrix is thereby defined as $\Pi = V V^\dagger $, by which a reduced-order model is constructed as follows.
\begin{equation}\label{sys:red}
\mathbf{\hat{\Sigma}}: \
\begin{cases}
\dot{\hat{x}}(t) = \underbrace{V^\dagger A V}_{\hat{A}} \hat{x}(t) + \underbrace{V^\dagger B}_{\hat{B}} u(t), \\
\hat{y}(t) = \underbrace{C V}_{\hat C} \hat{x}(t) + D u(t),
\end{cases}
\end{equation}
where $\hat{x} \in \mathbb{R}^{r}$ is the reduced state vector. The matrix $X$ is called a \textit{structure matrix}, which allows us to preserve certain desired structural properties. The following result shows that this particular construction of a reduced-order model can lead to structure preservation.
\begin{lemma}[Structure Preservation]
Consider the original system $\mathbf{\Sigma}$ \eqref{sys:orig} and its reduced-order model $\mathbf{\hat{\Sigma}}$ defined in \eqref{sys:red}. If $\mathbf{\Sigma}$ is $(Q, S, R)$-dissipative and $X = K > 0$ in \eqref{sys:red} is a solution of \eqref{eq:LMI-QSR}, then $\mathbf{\hat{\Sigma}}$ in \eqref{sys:red} is also $(Q, S, R)$-dissipative.
\end{lemma}
\begin{proof}
To verify the $(Q, S, R)$-dissipativity of the reduced-order model $\mathbf{\hat{\Sigma}}$ in \eqref{sys:red}, we let $\hat{K} = V^\top X V = V^\top K V$. Then, the following equations hold.
\begin{align*}
\hat{A}^\top \hat{K} + \hat{K} \hat{A}
= & V^\top A^\top X V + V^\top X A V \\
= & V^\top (A^\top K + K A) V
\\
\hat{K} \hat{B} - \hat{C}^\top S = & V^\top (KB - C^\top S),
\\ \hat{C}^\top Q^{\frac{1}{2}} = & V^\top C^\top Q^{\frac{1}{2}}.
\end{align*}
Based on that, we have
\begin{align*}
& \begin{bmatrix}
\hat{A}^\top \hat{K} + \hat{K} \hat{A} & \hat{K}\hat{B} - \hat{C}^\top S & - \hat{C}^\top Q^{\frac{1}{2}} \\
\star & - R - D^\top S - S^\top D & -D^\top Q^{\frac{1}{2}} \\
\star & \star & - I
\end{bmatrix} \\
& = \begin{bmatrix}
V^\top & & \\ & I & \\ & & I
\end{bmatrix}
\Gamma
\begin{bmatrix}
V & & \\ & I & \\ & & I
\end{bmatrix} \leq 0,
\end{align*}
where $\Gamma$ defined in \eqref{eq:LMI-QSR} is negative semidefinite due to the $(Q, S, R)$-dissipativity of the original model $\mathbf{\Sigma}$.
Therefore, it follows from Lemma~\ref{lem:QSRdisip} that the reduced-order model $\mathbf{\hat{\Sigma}}$ remains $(Q, S, R)$-dissipative.
\end{proof}
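As an illustration of the construction, the reflexive generalized inverse \eqref{eq:pinv} and the reduced matrices in \eqref{sys:red} can be assembled as in the following NumPy sketch (the function name is only for illustration).
\begin{verbatim}
# Sketch: Petrov-Galerkin projection with structure matrix X > 0.
import numpy as np

def reduce_model(A, B, C, V, X):
    """Return (Ahat, Bhat, Chat) of the reduced model (sys:red)."""
    Vdag = np.linalg.solve(V.T @ X @ V, V.T @ X)   # V^+ = (V^T X V)^{-1} V^T X
    return Vdag @ A @ V, Vdag @ B, C @ V
\end{verbatim}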
Different structural properties can be preserved in the reduced-order model \eqref{sys:red} by altering the structure matrix $X$. More precisely, letting $X$ be a solution of \eqref{eq:Kpassiv}, \eqref{eq:KL2}, or the inequality $A^\top X + X A < 0$, we retain passivity, finite-gain $L_2$ stability, or stability, respectively. Therefore, a unified framework is presented here to address model reduction problems with different structural constraints. In particular, this work provides a novel method for the passivity-preserving model reduction of port-Hamiltonian systems, which has been extensively studied in the literature, see e.g.,
\cite{antoulas2005passive,ionutiu2008passivity,gugercin2012pH,polyuga2010pH,wolf2010passivity,Ionescu2013,hauschild2019pH,phillips2003guaranteed,salehi2021passivity,sato2018pHsys,moser2020new}.
In our approach, the structure matrix $X$ in \eqref{eq:pinv} can be chosen as the energy matrix $K$ of the system, i.e., $\frac{1}{2}x(t)^\top K x(t)$ represents the total energy (Hamiltonian) of the system. Consequently, the reduced-order model has the reduced energy matrix $V^\top K V$.
\begin{remark}[Multiple Structural
Properties]
The proposed model reduction framework can also be applied to preserve multiple structural properties, while $X$ in \eqref{eq:pinv} is required to be a common solution of the corresponding matrix inequalities or equations. For instance, if there exists $K$ satisfying \eqref{eq:Kpassiv} and \eqref{eq:KL2} simultaneously, then we let $X = K$ in \eqref{eq:pinv} that preserves the passivity and finite-gain $L_2$ stability in the reduced-order model \eqref{sys:red}.
\end{remark}
\begin{remark}
Instead of the $(Q, S, R)$-dissipativity, we may consider retaining controllability or observability of a given system $\mathbf{\Sigma}$. In this regard, we let the structure matrix $X$ be the solution of
\begin{equation*}
A^\top X + X A + X BB^\top X = 0,
\end{equation*}
or
\begin{equation*}
A^\top X + X A + C^\top C = 0,
\end{equation*}
and constructing the reduced-order model \eqref{sys:red} with this choice of $X$ then preserves controllability or observability, respectively. Furthermore, if $X$ is chosen as the observability Gramian of the system $\mathbf{\Sigma}$, then stability is simultaneously preserved in the reduced-order model \eqref{sys:red} \cite{gugercin2008iterative}.
\end{remark}
The reduced-order model in \eqref{sys:red} guarantees the preservation of a desirable structural property by selecting a specific matrix $X$. Next, we aim to minimize the approximation error between the original and reduced models by tuning $V$. To this end, we define the error system as
\[
G_e(s) = C(sI_n - A)^{-1} B - C V (sI_r - V^\dagger A V)^{-1} V^\dagger B,
\]
where the feedthrough gain $D$ in the two systems has been canceled out. The system $G_e(s)$ has the following state-space representation
\begin{align} \label{sys:err}
\begin{cases}
\dot{z}(t) = {A}_e z(t) + {B}_eu(t )\\
y_e (t) = {C}_e z(t)
\end{cases}
\end{align}
where $z(t) = [x(t)^\top \ \hat{x}(t)^\top]^\top \in \mathbb{R}^{n+r}$, and
\begin{align*}
{A}_e = \begin{bmatrix}
A & 0 \\ 0 & V^\dagger A V
\end{bmatrix}, \
{B}_e = \begin{bmatrix}
B \\ V^\dagger B
\end{bmatrix}, \
{C}_e = \begin{bmatrix}
C & - CV
\end{bmatrix}.
\end{align*}
To guarantee the stability of the error system, we assume the original system $\mathbf{\Sigma}$ to be asymptotically stable, i.e. $A$ in \eqref{sys:orig} is Hurwitz. Thereby, let the structure matrix $X$ additionally be a solution of $\Gamma < 0$, instead of $\Gamma \leq 0$ in \eqref{eq:LMI-QSR}. Note that $\Gamma < 0$ implies
\begin{equation*}
A^\top X + X A < 0,
\end{equation*}
which enforces the asymptotic stability of the reduced-order model $\mathbf{\hat{\Sigma}}$. Then, the squared $\mathcal{H}_2$ norm of the approximation error is computed as
\begin{equation}
\| G_e(s) \|_{\mathcal{H}_2}^2 =
\mathrm{Tr}(C_e \mathcal{P} C_e^\top) = \mathrm{Tr}(B_e^\top \mathcal{Q} B_e),
\end{equation}
where $\mathcal{P}$ and $\mathcal{Q}$ are the controllability and observability Gramians of the error system \eqref{sys:err}. Algebraically, $\mathcal{P}$ and $\mathcal{Q}$ are solved as the unique solutions of the following Lyapunov equations:
\begin{subequations}\label{eq_Gramians_err}
\begin{align}
\label{W_Lyap}
A_e \mathcal{P}+\mathcal{P} A_e^\top+ B_e B_e^\top &=0, \\
\label{M_Lyap}
A_e^\top\mathcal{Q}+\mathcal{Q} A_e + C_e^\top C_e &=0.
\end{align}
\end{subequations}
Observe that given a structure matrix $X$, the reduced-order model \eqref{sys:red} is parameterized in the matrix $V$.
We formulate an optimization problem in terms of minimizing the $\mathcal{H}_2$ norm of the error system $G_e(s)$:
\begin{align}
\label{pro:optimization}
& \min_{V,\mathcal{Q}} \; \mathrm{Tr}(B_e^\top \mathcal{Q} B_e) \\
& \;\; \text{s.t.:} \;\; V^\top V > 0, \quad V \in \mathbb{R}^{n \times r},
\nonumber
\\ & \;\; \quad \quad A_e^\top \mathcal{Q} + \mathcal{Q} A_e + C_e^\top C_e =0.
\nonumber
\end{align}
This optimization problem is highly nonconvex and has a continuous variable $\mathcal{Q} \in {\mathbb R}^{(n+r) \times (n+r)}$ and a full column rank matrix $V \in \mathbb{S}^{n \times r}_*$, where $\mathcal{Q} =\mathcal{Q}(V)$ is an implicit function of $V$, being the unique solution of the Lyapunov equation \eqref{M_Lyap}. Hence, for any $V \in \mathbb{S}^{n \times r}_*$, the partial minimization over $\mathcal{Q}$ leads to the optimization problem
\begin{align}
\label{eq:optimization1}
\min_{V \in \mathbb{S}^{n\times r}_*} \; J(V) \; : & = \mathrm{Tr} \left( B_e(V)^\top \mathcal{Q}(V) B_e(V) \right) \\
& = \mathrm{Tr} \left( \mathcal{Q}(V) B_e(V) B_e(V)^\top \right),\nonumber
\end{align}
where the set $\mathbb{S}^{n\times r}_*$ is defined in \eqref{eq:Stiefel}.
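For completeness, the objective $J(V)$ can be evaluated numerically by forming the error system \eqref{sys:err} and solving the Lyapunov equation \eqref{M_Lyap}; the sketch below reuses the \texttt{reduce\_model} helper sketched earlier and is only meant as an illustration for moderate $n$.
\begin{verbatim}
# Sketch: evaluating J(V) = Tr(B_e^T Q(V) B_e) for a given basis V.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def objective_J(A, B, C, V, X):
    Ahat, Bhat, Chat = reduce_model(A, B, C, V, X)
    n, r = V.shape
    Ae = np.block([[A, np.zeros((n, r))], [np.zeros((r, n)), Ahat]])
    Be = np.vstack([B, Bhat])
    Ce = np.hstack([C, -Chat])
    Qe = solve_continuous_lyapunov(Ae.T, -Ce.T @ Ce)   # equation (M_Lyap)
    return np.trace(Be.T @ Qe @ Be)
\end{verbatim}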
\section{Gradient-Based Methods for Optimal Projection}
\label{sec:gradient}
This section presents gradient-based methods for solving the model reduction problem as a nonconvex optimization problem. The explicit expression for the gradient of the objective function $J(V)$ is derived, and furthermore, gradient descent algorithms are proposed to minimize $J(V)$ over both noncompact and compact Stiefel manifolds.
\subsection{Gradient Analysis}
To solve the optimization problem \eqref{eq:optimization1}, we analyze the key properties of the objective function $J(V)$ such as smoothness and differentiability, based on which we can apply gradient-based methods for solving \eqref{eq:optimization1} over the manifold $\mathbb{S}^{n \times r}_*$. In particular, we show in the next theorem that $J$ is a differentiable function and derive an explicit expression for its gradient.
Before proceeding, we partition the controllability and observability Gramians $\mathcal{P}$ and $\mathcal{Q}$ according to the block structure of the error system as:
\begin{equation} \label{eq:blockMW}
\mathcal{P} = \begin{bmatrix}
\mathcal{P}_{11} & \mathcal{P}_{12} \\ \mathcal{P}_{12}^\top & \mathcal{P}_{22}
\end{bmatrix}, \quad
\mathcal{Q} = \begin{bmatrix}
\mathcal{Q}_{11} & \mathcal{Q}_{12} \\ \mathcal{Q}_{12}^\top & \mathcal{Q}_{22}
\end{bmatrix},
\end{equation}
where $\mathcal{P}_{11}$ and $\mathcal{Q}_{11}$ are the Gramians of the high-order system \eqref{sys:orig}, and $\mathcal{P}_{22}$ and $\mathcal{Q}_{22}$ are the Gramians of the reduced-order model \eqref{sys:red}. Here, the submatrices $\mathcal{P}_{22}$, $\mathcal{Q}_{22}$, $\mathcal{P}_{12}$, and $\mathcal{Q}_{12}$ are characterized by the following Sylvester equations:
\begin{subequations} \label{eq:Sylv}
\begin{align}
\hat{A} \mathcal{P}_{22} + \mathcal{P}_{22}\hat{A}^\top + \hat{B} \hat{B}^\top = 0,
\\
\hat{A}^\top \mathcal{Q}_{22} + \mathcal{Q}_{22}\hat{A} + \hat{C}^\top \hat{C} = 0,
\\
{A} \mathcal{P}_{12} + \mathcal{P}_{12} \hat{A}^\top + {B} \hat{B}^\top = 0,
\label{eq:Sylv_c}
\\
{A}^\top \mathcal{Q}_{12} + \mathcal{Q}_{12}\hat{A} - {C}^\top \hat{C} = 0,
\label{eq:Sylv_d}
\end{align}
\end{subequations}
where $\hat{A}$, $\hat{B}$, and $\hat{C}$ are computed as in \eqref{sys:red}. Then, the following theorem holds.
\begin{theorem}[Gradient on Matrix Manifold]
\label{thm:gradJ}
The objective function $J(V)$ in \eqref{eq:optimization1} is differentiable, and the gradient has the following expression:
\begin{align} \label{eq:nablaJ}
\nabla J(V)
= & 2 X (I - V V^\dagger) \left[ \mathcal{F}_A (V) + \mathcal{F}_B (V)^\top \right] (V^\top X V)^{-1}
\nonumber \\
& - 2(V^\dagger)^\top \left[ \mathcal{F}_A (V)^\top + \mathcal{F}_B (V) \right] (V^\dagger)^\top
\nonumber
\\& + 2 A^\top (V^\dagger)^\top \Sigma_{22}^\top + 2 \mathcal{F}_C (V),
\end{align}
where $\Sigma_{22}: = \mathcal{P}_{12}^\top\mathcal{Q}_{12} + \mathcal{P}_{22}\mathcal{Q}_{22}$, and
\begin{subequations}\label{eq:FAFBFC}
\begin{align}
\mathcal{F}_A (V) : & = A V \Sigma_{22}, \label{eq:F_A}\\
\mathcal{F}_B (V) : & = \mathcal{Q}_{12}^\top BB^\top + \mathcal{Q}_{22} V^\dagger BB^\top, \label{eq:F_B}\\
\mathcal{F}_C (V) : & = C^\top C V \mathcal{P}_{22} - C^\top C\mathcal{P}_{12}.
\end{align}
\end{subequations}
\end{theorem}
%
\begin{proof}
Observe that
$
V^\top X = (V^\top X V) V^\dagger.
$
We take the differential on both sides, leading to
\begin{align*}
\mathrm{d} V^\top X = &\mathrm{d}((V^\top X V) V^\dagger)
\\
=& \mathrm{d} (V^\top X V) V^\dagger + (V^\top X V) \mathrm{d} V^\dagger
\\
= & (\mathrm{d} V^\top X V + V^\top X \mathrm{d} V) V^\dagger + (V^\top X V) \mathrm{d} (V^\dagger),
\end{align*}
from which, the differential $ \mathrm{d} (V^\dagger)$ is solved as
\begin{equation}\label{eq:differential0}
\mathrm{d}(V^\dagger) = (V^\top X V)^{-1} \mathrm{d} V^\top X (I - V V^\dagger) - V^\dagger \mathrm{d} V V^\dagger.
\end{equation}
To compute the gradient $\nabla J(V)$, we write the derivative $J'(V) \mathrm{d}{V}$ for some $\mathrm{d}{V} \in \mathbb R^{n \times r }$ in a gradient form in terms of trace as follows.
\begin{align*}
J'(V) \mathrm{d}{V} = \mathrm{Tr} \left( \nabla J(V)^\top \mathrm{d}{V} \right).
\end{align*}
Considering \eqref{eq:optimization1}, $J'(V) \mathrm{d}{V}$ can be specified as
\begin{align} \label{diff_JV}
J'(V) \mathrm{d}{V}
& = \mathrm{Tr} \left( \mathcal{Q}(V) \mathrm{d}{\mathcal{B}}(V) + \mathrm{d} \mathcal{Q}(V) {\mathcal{B}}(V) \right)
\nonumber\\
& = \mathrm{Tr}\left(\mathcal{Q}(V) \mathrm{d}{\mathcal{B}}(V) \right)
+ \mathrm{Tr} \left( \mathrm{d} \mathcal{Q}(V) {\mathcal{B}}(V) \right) ,
\end{align}
where ${\mathcal{B}}(V)$ is denoted for notational simplicity as
\begin{align}\label{eq:BeBe}
\mathcal{B}(V) = B_e B_e^\top = \begin{bmatrix}
B B^\top & B B^\top (V^\dagger)^\top \\
V^\dagger B B^\top & V^\dagger B B^\top (V^\dagger)^\top
\end{bmatrix}.
\end{align}
In the following, the two terms in \eqref{diff_JV} are computed separately.
First, we analyze the first term in \eqref{diff_JV}. From \eqref{eq:BeBe}, it follows that
\begin{align*}
\mathrm{d} \mathcal{B} (V)
= \begin{bmatrix}
0 & BB^\top \mathrm{d} (V^\dagger)^\top
\\ \mathrm{d} (V^\dagger) BB^\top& \mathrm{d} (\hat{B} \hat{B}^\top)
\end{bmatrix},
\end{align*}
where $\mathrm{d} (\hat{B} \hat{B}^\top) = \mathrm{d} (V^\dagger) BB^\top (V^\dagger)^\top + V^\dagger BB^\top \mathrm{d} (V^\dagger)^\top$.
Thus, the first term in \eqref{diff_JV} can be specified as
\begin{align}\label{eq:trace_MdiffB0}
& \mathrm{Tr} \left( \mathcal{Q}(V) \mathrm{d} \mathcal{B} (V) \right)
\\ = & \mathrm{Tr} \left(\mathcal{Q}_{12} \mathrm{d}(V^\dagger) BB^\top + \mathcal{Q}_{12}^\top BB^\top \mathrm{d}(V^\dagger)^\top \right.
\nonumber\\ & \left. \quad+ \mathcal{Q}_{22} \mathrm{d}(V^\dagger) BB^\top (V^\dagger)^\top + \mathcal{Q}_{22} V^\dagger BB^\top \mathrm{d}(V^\dagger)^\top \right).
\nonumber
\end{align}
Using the standard properties of trace, we have
\begin{align*}
\mathrm{Tr} \left(\mathcal{Q}_{12} \mathrm{d}(V^\dagger) BB^\top\right) & = \mathrm{Tr} \left(BB^\top \mathrm{d}(V^\dagger)^\top \mathcal{Q}_{12}^\top \right) \\
& = \mathrm{Tr} \left(\mathcal{Q}_{12}^\top BB^\top \mathrm{d}(V^\dagger)^\top \right),
\end{align*}
and
\begin{align*}
\mathrm{Tr} \left(\mathcal{Q}_{22} \mathrm{d}(V^\dagger) BB^\top (V^\dagger)^\top \right) & = \mathrm{Tr} \left( V^\dagger BB^\top \mathrm{d}(V^\dagger)^\top \mathcal{Q}_{22} \right) \\
& = \mathrm{Tr} \left( \mathcal{Q}_{22} V^\dagger BB^\top \mathrm{d}(V^\dagger)^\top \right).
\end{align*}
Consequently, \eqref{eq:trace_MdiffB0} is simplified as
\begin{align*}
\mathrm{Tr} \left( \mathcal{Q}(V) \mathrm{d} \mathcal{B} (V) \right)
= 2\mathrm{Tr} \left( \mathcal{F}_B(V) \mathrm{d}(V^\dagger)^\top\right).
\nonumber
\end{align*}
with $\mathcal{F}_B(V)$ defined in \eqref{eq:F_B}.
Substituting the differential $\mathrm{d}(V^\dagger)$ in \eqref{eq:differential0} into the above equation then yields
\begin{align} \label{trace_MdiffB}
& \mathrm{Tr} \left( \mathcal{Q}(V) \mathrm{d} \mathcal{B} (V) \right)
\\ = & 2 \mathrm{Tr} \left[ \mathcal{F}_B(V)(I - (VV^\dagger)^\top) X \mathrm{d} V (V^\top X V)^{-1} \right. \nonumber
\\ & \left. \quad - \mathcal{F}_B(V) (V^\dagger)^\top \mathrm{d}(V)^\top (V^\dagger)^\top
\right]
\nonumber\\
= & 2\mathrm{Tr} \left[ (V^\top X V)^{-1} \mathcal{F}_B(V)(I - (VV^\dagger)^\top) X \mathrm{d} V \right. \nonumber
\\ & \left. \quad - V^\dagger \mathcal{F}_B(V)^\top V^\dagger \mathrm{d} V
\right].
\nonumber
\end{align}
Next, we derive an explicit expression for the second term in \eqref{diff_JV}, i.e. $\mathrm{Tr} \left( \mathrm{d} \mathcal{Q}(V) {\mathcal{B}}(V) \right)$. To this end, we define the function:
\begin{equation} \label{eq:PhiVM}
\Phi(V,\mathcal{Q}) = A_e^\top \mathcal{Q} + \mathcal{Q} A_e + \mathcal{C}(V) = 0,
\end{equation}
whose partial derivatives with respect to $V$ and $\mathcal{Q}$ are given as
\begin{equation*}
\Phi_V'(V, \mathcal{Q}) \mathrm{d}{V} = (\mathrm{d} A_e(V))^\top {\mathcal{Q}} + {\mathcal{Q}} \mathrm{d} A_e(V) + \mathrm{d} \mathcal{C}(V),
\end{equation*}
and
\begin{equation*}
\Phi_\mathcal{Q}' (V, \mathcal{Q}) \mathrm{d}{\mathcal{Q}} = (A_e(V))^\top \mathrm{d}{\mathcal{Q}} + \mathrm{d}{\mathcal{Q}} A_e(V),
\end{equation*}
where
\begin{align} \label{eq:CeCe}
\mathcal{C}(V) = C_e^\top C_e = \begin{bmatrix}
C^\top C & - C^\top C V \\
-V^\top C^\top C & V^\top C^\top CV
\end{bmatrix}.
\end{align}
Since $\Phi(V,\mathcal{Q})$ is a multivariable function of both $V$ and $\mathcal{Q}$ with $\Phi(V,\mathcal{Q}) = 0$, applying the Implicit Function Theorem yields $ \Phi_V'(V, \mathcal{Q}) \mathrm{d}{V} + \Phi_\mathcal{Q}' (V, \mathcal{Q}) \mathrm{d}{\mathcal{Q}} = 0$, i.e.
\begin{multline*}
(\mathrm{d} A_e)^\top \mathcal{Q}(V) + \mathcal{Q}(V) \mathrm{d} A_e(V) + \mathrm{d} \mathcal{C}(V) \\
+ (A_e(V))^\top \mathrm{d}{\mathcal{Q}} (V) + \mathrm{d}{\mathcal{Q}}(V) A_e(V) = 0.
\end{multline*}
Multiplying by $\mathcal{P}(V)$ from the left then gives
\begin{multline} \label{diff_Lyap}
\mathcal{P}(\mathrm{d} A_e)^\top \mathcal{Q} + \mathcal{P} \mathcal{Q} \mathrm{d} A_e + \mathcal{P} \mathrm{d} \mathcal{C} \\
+ \mathcal{P} A_e^\top \mathrm{d}{\mathcal{Q}} + \mathcal{P} \mathrm{d}{\mathcal{Q}} A_e = 0,
\end{multline}
where we suppress the dependence on $V$ for ease of notation. Furthermore, notice that the controllability Gramian $\mathcal{P}(V)$ is the unique solution of the Lyapunov equation \eqref{W_Lyap}, from which we obtain
\begin{equation}
\label{W_Lyap-P}
A_e \mathcal{P} \mathrm{d}{\mathcal{Q}}+ \mathcal{P} A_e^\top \mathrm{d}{\mathcal{Q}} = -\mathcal{B} \mathrm{d}{\mathcal{Q}}.
\end{equation}
Subtracting \eqref{W_Lyap-P} from \eqref{diff_Lyap} leads to
\begin{multline*}
\mathcal{P} A_e^\top \mathrm{d}{\mathcal{Q}} + \mathcal{P} \mathrm{d}{\mathcal{Q}} A_e
- A_e \mathcal{P} \mathrm{d}{\mathcal{Q}} - \mathcal{P} A_e^\top \mathrm{d}{\mathcal{Q}} \nonumber
\\ +
\mathcal{P}(\mathrm{d} A_e)^\top \mathcal{Q} + \mathcal{P} \mathcal{Q} \mathrm{d} A_e
- \mathcal{B} \mathrm{d}{\mathcal{Q}}
+ \mathcal{P} \mathrm{d} \mathcal{C} = 0.
\end{multline*}
Then, taking the trace of the above equation cancels the first four terms and yields an expression for the second term in \eqref{diff_JV} as
\begin{equation}
\label{trace_diffMB}
\mathrm{Tr} \left( \mathrm{d} \mathcal{Q} \mathcal{B} \right)
=
2\mathrm{Tr} \left( \mathcal{P} \mathcal{Q} \mathrm{d} A_e \right)
+ \mathrm{Tr} \left( \mathcal{P} \mathrm{d} \mathcal{C}
\right),
\end{equation}
where
\begin{align*}
\mathrm{d} A_e(V) = \begin{bmatrix}
0 & 0 \\
0 & \mathrm{d} \hat{A}(V)
\end{bmatrix},
\end{align*}
with $\mathrm{d} \hat{A}(V) = \mathrm{d} ( V^\dagger A V) = \mathrm{d}(V^\dagger) A V + V^\dagger A \mathrm{d} V$.
Applying \eqref{eq:differential0}, we therefore obtain
\begin{align} \label{eq:trace_diffMB1}
& \mathrm{Tr} \left( \mathcal{P} \mathcal{Q} \mathrm{d} A_e \right)
\\
= & \mathrm{Tr} \left[ \Sigma_{22} \mathrm{d}(V^\dagger) A V + \Sigma_{22} V^\dagger A \mathrm{d} V \right]\nonumber\\
= &
\mathrm{Tr} \left[ \Sigma_{22} (V^\top X V)^{-1} \mathrm{d} V^\top X (I - V V^\dagger) A V \right. \nonumber\\
& \quad \left. - \Sigma_{22} V^\dagger \mathrm{d} V V^\dagger AV + \Sigma_{22} V^\dagger A \mathrm{d} V \right]
\nonumber\\
= &
\mathrm{Tr} \left[ (V^\top X V)^{-1} \mathcal{F}_A(V)^\top (I - (V V^\dagger)^\top) X \mathrm{d} V \right. \nonumber\\
& \quad \left. - V^\dagger \mathcal{F}_A(V) V^\dagger \mathrm{d} V + \Sigma_{22} V^\dagger A \mathrm{d} V \right],\nonumber
\end{align}
with $\mathcal{F}_A(V)$ defined in \eqref{eq:F_A}.
Moreover, taking into account the structure of $\mathcal{C}(V)$ in \eqref{eq:CeCe}, we expand the second term in \eqref{trace_diffMB} as
\begin{align}\label{eq:trace_diffMB2}
&\mathrm{Tr}(\mathcal{P} \mathrm{d} \mathcal{C})
\\= &
- \mathrm{Tr}(\mathcal{P}_{12} \mathrm{d}{V}^\top C^\top C) - \mathrm{Tr}(\mathcal{P}_{12}^\top C^\top C \mathrm{d}{V}) \nonumber \\
& + \mathrm{Tr}\left[ \mathcal{P}_{22} (\mathrm{d}{V}^\top C^\top C V + {V}^\top C^\top C \mathrm{d} V) \right]
\nonumber\\
= & 2 \mathrm{Tr}( -\mathcal{P}_{12}^\top C^\top C \mathrm{d}{V} + \mathcal{P}_{22} {V}^\top C^\top C \mathrm{d} V) \nonumber
\\
= & 2 \mathrm{Tr}(\mathcal{F}_C(V)^\top \mathrm{d}{V}).
\nonumber
\end{align}
Finally, we combine the three equations in \eqref{trace_MdiffB}, \eqref{eq:trace_diffMB1}, and \eqref{eq:trace_diffMB2} to obtain the gradient in \eqref{eq:nablaJ}.
\end{proof}
\begin{remark}
Different from the methods in e.g., \cite{vanDooren2008H2,sato2018pHsys}, which provide three gradient expressions over the variables $\hat{A}$, $\hat{B}$, and $\hat{C}$, respectively, this paper aims to find the optimal reduced basis matrix $V$ for the projection, which is the only optimization variable in the optimization problem \eqref{eq:optimization1}. Theorem~\ref{thm:gradJ} thereby provides the expression of the gradient over $V$, which is clearly different from the aforementioned methods.
\end{remark}
\begin{remark}
Note that the computation of the gradient $\nabla J$ does not need the Gramian matrices $\mathcal{P}_{11} \in \mathbb{R}^{n \times n}$ and $\mathcal{Q}_{11} \in \mathbb{R}^{n \times n}$ of the original system. It only requires computing the Gramians of the reduced-order model, $\mathcal{P}_{22} \in \mathbb{R}^{r \times r}$ and $\mathcal{Q}_{22} \in \mathbb{R}^{r \times r}$, and the cross terms $\mathcal{P}_{12} \in \mathbb{R}^{n \times r}$ and $\mathcal{Q}_{12} \in \mathbb{R}^{n \times r}$. The computation of $\mathcal{P}_{22}$ and $\mathcal{Q}_{22}$ can be efficient, as the reduced order $r$ is typically small in model reduction problems. Nevertheless, solving the Sylvester equations in \eqref{eq:Sylv} can still be computationally demanding, since \eqref{eq:Sylv_c} and \eqref{eq:Sylv_d} contain the matrices of the original system. To make the computation of $\mathcal{P}_{12}$ and $\mathcal{Q}_{12}$ more tractable, we follow \cite{sorensen2002sylvester} and derive explicit expressions for $\mathcal{P}_{12}$ and $\mathcal{Q}_{12}$ as sums of rank-one matrices:
\begin{subequations}
\label{eq:expSylv}
\begin{align}
\mathcal{P}_{12} & = \sum_{j=1}^{r} (\mu_j I_n + {A})^{-1} B \hat{B}^\top w_j \hat{w}_j^*, \\
\mathcal{Q}_{12} & = \sum_{j=1}^{r} (\mu_j I_n + {A}^\top)^{-1} {C}^\top \hat{C} \hat{w}_j w_j^*,
\end{align}
\end{subequations}
where the vectors $w_j$ and $\hat{w}_j$ are the right and left eigenvectors corresponding to the eigenvalue $\mu_j$ of $\hat{A}$, respectively. Computing \eqref{eq:expSylv} now requires the eigendecomposition of the reduced matrix $\hat{A}$ followed by $r$ inversions of $n \times n$ matrices $\mu_j I + A$, which may be less expensive than directly applying standard Sylvester equation solvers, particularly when $A$ is sparse, and $r$ is small.
To further accelerate the method, it is advisable to adopt a low-rank method, see e.g., \cite{jbilou2006low-rank,benner2014low-rank}, to achieve approximate solutions with cheap computations for extremely large-scale systems.
\end{remark}
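For concreteness, the gradient \eqref{eq:nablaJ} can be assembled directly from the Sylvester equations \eqref{eq:Sylv}; the sketch below uses SciPy's dense Sylvester solver and is intended only for moderate problem sizes, while for large sparse $A$ the expansion \eqref{eq:expSylv} or a low-rank solver would be substituted.
\begin{verbatim}
# Sketch: assembling nabla J(V) from Theorem (thm:gradJ).
import numpy as np
from scipy.linalg import solve_sylvester

def gradient_J(A, B, C, V, X):
    n = A.shape[0]
    Vdag = np.linalg.solve(V.T @ X @ V, V.T @ X)
    Ahat, Bhat, Chat = Vdag @ A @ V, Vdag @ B, C @ V
    # Reduced and cross Gramians from (eq:Sylv)
    P22 = solve_sylvester(Ahat, Ahat.T, -Bhat @ Bhat.T)
    Q22 = solve_sylvester(Ahat.T, Ahat, -Chat.T @ Chat)
    P12 = solve_sylvester(A, Ahat.T, -B @ Bhat.T)
    Q12 = solve_sylvester(A.T, Ahat, C.T @ Chat)
    S22 = P12.T @ Q12 + P22 @ Q22                   # Sigma_22
    FA = A @ V @ S22                                # (eq:F_A)
    FB = Q12.T @ B @ B.T + Q22 @ Vdag @ B @ B.T     # (eq:F_B)
    FC = C.T @ C @ V @ P22 - C.T @ C @ P12          # F_C
    M = np.linalg.inv(V.T @ X @ V)
    return (2 * X @ (np.eye(n) - V @ Vdag) @ (FA + FB.T) @ M
            - 2 * Vdag.T @ (FA.T + FB) @ Vdag.T
            + 2 * A.T @ Vdag.T @ S22.T
            + 2 * FC)
\end{verbatim}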
Having the formula for the gradient $\nabla J$ in \eqref{eq:nablaJ}, we further define a gradient flow of $J(V)$ as
\begin{align} \label{eq:gradflow}
\frac{\mathrm{d}}{\mathrm{d} t}{V}(t)
= & -2 X (I - V V^\dagger) \left[ \mathcal{F}_A (V) + \mathcal{F}_B (V)^\top \right] (V^\top X V)^{-1}
\nonumber \\
& + 2(V^\dagger)^\top \left[ \mathcal{F}_A (V)^\top + \mathcal{F}_B (V) \right] (V^\dagger)^\top
\nonumber
\\& - 2 A^\top (V^\dagger)^\top \Sigma_{22}^\top - 2 \mathcal{F}_C (V),
\end{align}
with $\mathcal{F}_A (V)$, $\mathcal{F}_B (V)$, and $\mathcal{F}_C (V)$ defined in \eqref{eq:FAFBFC}, and then we analyze the key properties of the gradient flow.
\begin{theorem} \label{thm:gradflow}
Let the initial condition of \eqref{eq:gradflow} be given by $V(0) = V_0 \in \mathbb{S}^{n \times r}_*$. Then, the following statements hold.
\begin{enumerate}
\item $V(t)^\top \frac{\mathrm{d}}{\mathrm{d} t}{V(t)} = 0$, and $V(t)^\top V(t)$ is invariant in the gradient flow \eqref{eq:gradflow}.
\item $V(t) \in \mathbb{S}^{n \times r}_*$ for all $t \geq 0$.
\item The objective function is nonincreasing along $V(t)$ with
\begin{align} \label{eq:nonincreasing}
J(V(t_1)) \geq J(V(t_2)),
\end{align}
for all $0 \leq t_1 \leq t_2$.
\item Let $V$ be any local minimizer of problem \eqref{eq:optimization1}; then the following implication holds
\begin{equation} \label{eq:JV0}
\nabla J(V) = 0 \ \Leftrightarrow \ \nabla J(V) V^\top - V \nabla J(V)^\top = 0.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
We first prove $V^\top \frac{\mathrm{d}}{\mathrm{d} t}{V} = 0$ for any $V \in \mathbb{S}^{n \times r}_*$. We rewrite \eqref{eq:gradflow} as
$\frac{\mathrm{d}}{\mathrm{d} t}{V} = 2(\mathcal{J}_1 + \mathcal{J}_2)$, where
\begin{align*}
\mathcal{J}_1
= & - X (I - V V^\dagger) \left[ \mathcal{F}_A (V) + \mathcal{F}_B (V)^\top \right] (V^\top X V)^{-1},
\\
\mathcal{J}_2 = & (V^\dagger)^\top \left[ \mathcal{F}_A (V)^\top + \mathcal{F}_B (V) \right] (V^\dagger)^\top
\nonumber
\\& - A^\top (V^\dagger)^\top \Sigma_{22}^\top - \mathcal{F}_C (V),
\end{align*}
Note that
$
V^\top X (I - V V^\dagger) = V^\top X - V^\top X V V^\dagger = 0,
$
which leads to
\begin{equation} \label{eq:VJ1}
V^\top \mathcal{J}_1 = 0.
\end{equation}
Furthermore, we have
\begin{align} \label{eq:VJ2}
V^\top \mathcal{J}_2 = & \Sigma_{22}^\top \hat{A} + \mathcal{Q}_{12}^\top B \hat{B}^\top + \mathcal{Q}_{22} \hat{B} \hat{B}^\top
\nonumber \\
& - \hat{A}^\top \Sigma_{22} - \hat{C}^\top \hat{C} \mathcal{P}_{22} + \hat{C}^\top C \mathcal{P}_{12}
\end{align}
and it is obtained from the Sylvester equations in \eqref{eq:Sylv} that
\begin{align*}
- \mathcal{Q}_{12}^\top B \hat{B}^\top & = \mathcal{Q}_{12}^\top A \mathcal{P}_{12} + \mathcal{Q}_{12}^\top \mathcal{P}_{12} \hat{A}^\top,
\\
- \mathcal{Q}_{22} \hat{B} \hat{B}^\top
& = \mathcal{Q}_{22} \hat{A} \mathcal{P}_{22} + \mathcal{Q}_{22} \mathcal{P}_{22} \hat{A}^\top,
\\
\hat{C}^\top \hat{C} \mathcal{P}_{22}
& = - \hat{A}^\top \mathcal{Q}_{22} \mathcal{P}_{22} - \mathcal{Q}_{22} \hat{A} \mathcal{P}_{22},
\\
- \hat{C}^\top C \mathcal{P}_{12}
& = - \mathcal{Q}_{12}^\top A \mathcal{P}_{12} - \hat{A}^\top \mathcal{Q}_{12}^\top \mathcal{P}_{12}.
\end{align*}
Substituting the above equations into \eqref{eq:VJ2} then gives
\begin{align} \label{eq:VJ2-2}
V^\top \mathcal{J}_2 = & \Sigma_{22}^\top \hat{A} - \mathcal{Q}_{12}^\top \mathcal{P}_{12} \hat{A}^\top - \mathcal{Q}_{22} \mathcal{P}_{22} \hat{A}^\top
\nonumber \\
& - \hat{A}^\top \Sigma_{22} + \hat{A}^\top \mathcal{Q}_{22}\mathcal{P}_{22} + \hat{A}^\top \mathcal{Q}_{12}^\top \mathcal{P}_{12}
= 0.
\end{align}
Therefore, it follows from \eqref{eq:VJ1} and \eqref{eq:VJ2-2} that
\begin{equation} \label{eq:VdV0}
V^\top \frac{\mathrm{d}}{\mathrm{d} t}{V} = 2 V^\top \mathcal{J}_1 + 2 V^\top \mathcal{J}_2 = 0.
\end{equation}
Furthermore,
$$\frac{\mathrm{d}}{\mathrm{d} t} (V(t)^\top V(t)) = \frac{\mathrm{d}}{\mathrm{d} t}{V}(t)^\top V(t) + V(t)^\top \frac{\mathrm{d}}{\mathrm{d} t}{V}(t) = 0,$$ which proves the first statement.
The second statement can be shown with the invariance of $V(t)^\top V(t)$ in the gradient flow.
It is clear that
$
V(t)^\top V(t) = V(0)^\top V(0) = V_0^\top V_0,
$
which gives
\begin{equation*}
\mathrm{Rank} (V(t)) =
\mathrm{Rank} (V_0).
\end{equation*}
Hence, for an arbitrary initial condition $V_0 \in \mathbb{S}^{n \times r}_*$, the matrix $V(t)$ stays in $\mathbb{S}^{n \times r}_*$ for all $t \geq 0$.
Next, we prove the third statement by writing the derivative of $J(V)$ as follows.
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d} t}{J} (V(t)) = \langle \nabla J(V), \frac{\mathrm{d}}{\mathrm{d} t}{V} \rangle = - \| \nabla J(V) \|^2_F \leq 0.
\end{equation*}
Therefore, $J(V)$ is nonincreasing, i.e. \eqref{eq:nonincreasing} holds.
Finally, denote $W: = \nabla J(V) V^\top - V \nabla J(V)^\top$, and it is clear that $W = 0$ if $\nabla J(V) = 0$. Moreover, we have $\nabla J(V)^\top V = 0$, $\forall V \in \mathbb{S}^{n \times r}_*$ due to \eqref{eq:VdV0}, then
$$ W V (V^\top V)^{-1} = \nabla J(V),
$$
and thus $\nabla J(V) = 0$ if $W = 0$. The implication \eqref{eq:JV0} is obtained.
\end{proof}
These features of the gradient flow \eqref{eq:gradflow} show that, for any given initial condition $V_0 \in \mathbb{S}^{n \times r}_*$, the solution of the ordinary differential equation \eqref{eq:gradflow} lies on the manifold $\mathbb{S}^{n \times r}_*$ for all time. Moreover, since the objective function $J(V)$ is nonincreasing along $V(t)$ and bounded from below by zero, the solution converges to a critical point of $J$ on $\mathbb{S}^{n \times r}_*$. In particular, if $J(V)$ has only isolated minimum points, the solution $V(t)$ converges to one of them.
With the matrix differential equation \eqref{eq:gradflow}, we can apply numerical algorithms that search along a geodesic to solve this gradient
flow problem as in e.g., \cite{Yu1999approximate,edelman1998geometry,Jiang2015h2}.
However, this procedure may require considerable computational resources and hence is not pursued in this paper. Alternatively, we provide iterative algorithms based on gradient descent to solve the nonconvex optimization problem \eqref{eq:optimization1} in a more efficient manner.
\subsection{Gradient-Based Algorithms}
With the explicit expression of the gradient $\nabla J(V)$,
the gradient descent method can be directly used, which has the following simple iteration:
\begin{algorithm}
\caption{Gradient Descent Algorithm}
\begin{algorithmic}[1]
\State Let $X$ be chosen based on the structural property of the original system \eqref{sys:orig} and $V_0 \in \mathbb{S}^{n \times r}_*$ be an initial solution.
\Repeat
\State Solve the Sylvester equations in \eqref{eq:Sylv}.
\State Compute the gradient $\nabla J(V_k)$ in \eqref{eq:nablaJ}.
\State Update $V_{k+1} = V_k - \alpha_k \nabla J(V_k)$.
\Until{$\| V_{k+1} - V_k \|_{F}^2 \leq \epsilon$}
\end{algorithmic}
\label{alg}
\end{algorithm}
Algorithm~\ref{alg} terminates when the difference between $V_{k+1}$ and $V_k$ is small enough, i.e. their error norm is less than a small positive scalar $\epsilon$. More detailed discussions on how to choose the stepsize $\alpha_k > 0$ in each iteration $k$ and an initial solution $V_0$ are provided in Remark~\ref{rem:stepsize} and Remark~\ref{rem:initialization}, respectively. Generally, the stepsize $\alpha_k$ should be chosen such that the iteration in Algorithm~\ref{alg} converges. However, the convergence of Algorithm~\ref{alg} is hard to analyze theoretically since $\mathbb{S}^{n \times r}_*$ is a noncompact set.
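A direct implementation of Algorithm~\ref{alg} with a fixed stepsize is sketched below; the \texttt{gradient\_J} helper is the one sketched earlier, and the stepsize and tolerance values are purely illustrative.
\begin{verbatim}
# Sketch of Algorithm 1 (gradient descent with a fixed stepsize).
import numpy as np

def spop_gradient_descent(A, B, C, X, V0, alpha=1e-2, eps=1e-8, max_iter=500):
    V = V0.copy()
    for _ in range(max_iter):
        V_new = V - alpha * gradient_J(A, B, C, V, X)
        if np.linalg.norm(V_new - V, 'fro') ** 2 <= eps:
            return V_new
        V = V_new
    return V
\end{verbatim}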
\begin{coro}
\label{coro:iter}
Let $V_0 \in \mathbb{S}^{n \times r}_*$ be the initial condition and $\{V_k\}$ be the generated sequence in Algorithm~\ref{alg}. Then, $V_k \in \mathbb{S}^{n \times r}_*$ for each iteration $k$.
\end{coro}
\begin{proof}
By induction, it suffices to show that $V_{k+1}^\top V_{k+1} > 0$ if $V_{k}^\top V_{k} > 0$.
Note that
\begin{align*}
V_{k+1}^\top V_{k+1} = & V_{k}^\top V_{k} + \alpha_k^2 \nabla J(V_k)^\top \nabla J(V_k) \\
&- \alpha_k V_{k}^\top \nabla J(V_k) - \alpha_k \nabla J(V_k)^\top V_{k}.
\end{align*}
We can follow the proof of Theorem~\ref{thm:gradflow} to show that
\begin{equation}
V_{k}^\top \nabla J(V_k) = 0, \ \forall k,
\end{equation}
which gives
\begin{equation*}
V_{k+1}^\top V_{k+1} = V_{k}^\top V_{k} + \alpha_k^2 \nabla J(V_k)^\top \nabla J(V_k) > 0.
\end{equation*}
Therefore, $V_{k}$ stays in the manifold $\mathbb{S}^{n \times r}_*$ given $V_0 \in \mathbb{S}^{n \times r}_*$.
\end{proof}
\begin{remark}
\label{rem:stepsize}
The stepsize $\alpha_k$ in Algorithm~\ref{alg} can be fixed. Alternatively, we apply the steepest descent method \cite{Meza2010steepest} to determine the stepsize via an exact line search as follows:
\begin{equation}\label{eq:steepest}
\alpha_k = \argmin\limits_{\alpha > 0} J(V_k - \alpha \nabla J(V_k)),
\end{equation}
which aims to find the best stepsize along the direction of the gradient at each iteration. We refer to \cite{Meza2010steepest} for more details.
\end{remark}
It is noted that the noncompact Stiefel manifold $\mathbb{S}^{n \times r}_*$ is unbounded, and thus it is difficult to show the Lipschitz continuity of the gradient $\nabla J(V)$.
To establish a convergence guarantee that can be theoretically proven,
we discuss in the following an iterative optimization algorithm based on a curvilinear search that can generate a sequence of feasible points on an invariant submanifold of $\mathbb{S}^{n \times r}_*$:
\begin{equation} \label{eq:Stiefel_Comp}
\mathcal{M} = \{V \in \mathbb{S}^{n \times r}_* \mid V^\top V = M \},
\end{equation}
where $M \in \mathbb{R}^{r \times r}$ is positive definite. In particular, if $M = I_r$, then $\mathcal{M}$ is the compact \textit{Stiefel manifold}, denoted by $\mathrm{St}(n, r)$.
To this end, we employ the so-called Cayley transform that computes a parametric curve at the point $V \in \mathcal{M}$, with the following closed form:
\begin{equation} \label{eq:Cayley}
\mathcal{V} (\tau) = \left(I + \frac{\tau}{2} W\right)^{-1} \left(I - \frac{\tau}{2} W\right) V ,
\end{equation}
where $\tau$ is
a parameter that represents the length on the curve, and $W$ is a skew-symmetric matrix given as
\begin{equation} \label{eq:W}
W = \nabla J(V) V^\top - V \nabla J(V)^\top.
\end{equation}
We refer to e.g., \cite{wen2013feasible,Lanlin2020h2} for more details. It is observed that the Cayley transform \eqref{eq:Cayley} leads to interesting properties: $\mathcal{V} (\tau)$ is a smooth function of $\tau$ with $\mathcal{V} (0) = V$, and moreover,
$
\mathcal{V}(\tau)^\top \mathcal{V} (\tau) = V^\top V,
$
for all $\tau \in \mathbb{R}$.
However, the Cayley transform \eqref{eq:Cayley} involves the computation of a matrix inversion of dimension $n$, and hence the iterative algorithm can be computationally expensive when the original system \eqref{sys:orig} is large-scale. To reduce the computational effort, we make use of the property $\nabla J(V)^\top V = 0$ proven in Theorem~\ref{thm:gradflow} and provide a new update scheme as follows. For simplicity, $\nabla J(V)$ is briefly denoted by $\nabla J$ here.
\begin{lemma}
Consider $\mathcal{V}(\tau)$ in the Cayley transform \eqref{eq:Cayley} at any point $V \in \mathbb{S}^{n \times r}_*$ with $\tau \in \mathbb{R}$ and the gradient $\nabla J$ in \eqref{eq:nablaJ}. Then, $\mathcal{V}(\tau)$ can be equivalently computed as
\begin{equation} \label{eq:Vt_new}
\mathcal{V}(\tau) = V - \tau \left(I+\frac{\tau}{2} V \nabla J^\top \right) \nabla J \mathcal{U}(\tau)^{-1} V^\top V,
\end{equation}
where
$ \mathcal{U}(\tau) = I + \frac{\tau^2}{4} V^\top V \nabla J^\top \nabla J.
$
If $V \in \mathrm{St}(n, r)$, the expression \eqref{eq:Vt_new} is simplified as
\begin{equation} \label{eq:Vt_new1}
\mathcal{V}(\tau) = V - \tau \left(I+\frac{\tau}{2} V \nabla J^\top \right) \nabla J \left(I + \frac{\tau^2}{4} \nabla J^\top \nabla J\right)^{-1}.
\end{equation}
\end{lemma}
\begin{proof}
Observe that the matrix $W$ in \eqref{eq:W} can be expressed as the outer product of two low-rank matrices as $W = W_L W_R^\top$ with
\begin{equation} \label{eq:WLWR}
W_L = [
\nabla J (V), \ -V
], \ \text{and} \ W_R = [
V, \ \nabla J (V)].
\end{equation}
Then, it follows from the matrix inversion lemma that
\begin{equation*}
\left(I + \frac{\tau}{2} W\right)^{-1} = I - \frac{\tau}{2} W_L \left(I + \frac{\tau}{2} W_R^\top W_L \right)^{-1} W_R^\top.
\end{equation*}
Therefore, we have
\begin{align} \label{eq:Vt}
\mathcal{V}(\tau) & = \left(I + \frac{\tau}{2} W\right)^{-1} \left(I - \frac{\tau}{2} W\right) V
\nonumber\\
& = \left[ 2\left(I + \frac{\tau}{2} W \right)^{-1} - I \right] V
\nonumber\\
& = \left[I - \tau W_L \left(I + \frac{\tau}{2} W_R^\top W_L \right)^{-1} W_R^\top \right] V.
\end{align}
Denote $Z: = W_L (I + \frac{\tau}{2} W_R^\top W_L )^{-1} W_R^\top V$. Substituting the expressions in \eqref{eq:WLWR} into \eqref{eq:Vt}, $Z$ can be rewritten as
\begin{align*}
Z & =
\begin{bmatrix}
\nabla J & - V
\end{bmatrix}
\begin{bmatrix}
I & -\frac{\tau}{2} V^\top V \\
\frac{\tau}{2} \nabla J^\top \nabla J & I
\end{bmatrix}^{-1}
\begin{bmatrix}
V^\top V \\ 0
\end{bmatrix}
\\
& = \begin{bmatrix}
\nabla J & - V
\end{bmatrix}
\begin{bmatrix}
\mathcal{U}(\tau)^{-1} V^\top V \\
-\frac{\tau}{2}\nabla J^\top \nabla J \mathcal{U}(\tau)^{-1} V^\top V
\end{bmatrix}
\\
& = \left(I + \frac{\tau}{2} V \nabla J^\top \right) \nabla J \mathcal{U}(\tau)^{-1} V^\top V,
\end{align*}
which is substituted into \eqref{eq:Vt} and then yields \eqref{eq:Vt_new}.
\end{proof}
The merit of this result is that, rather than computing the inverse of the $n \times n$ matrix $I + \frac{\tau}{2}W$ in \eqref{eq:Cayley}, it presents an alternative way to compute $\mathcal{V}(\tau)$ that only requires the inverse of the $r \times r$ matrix $\mathcal{U}(\tau)$. Since the reduced order $r$ is typically much smaller than $n$ in model reduction problems, the new update scheme in \eqref{eq:Vt_new} can significantly improve computational efficiency.
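The saving can be made concrete by the following sketch of a single update step based on \eqref{eq:Vt_new1} (assuming $V^\top V = I_r$); only an $r \times r$ matrix is inverted.
\begin{verbatim}
# Sketch: one curvilinear step (eq:Vt_new1) on the compact Stiefel manifold.
import numpy as np

def cayley_step(V, G, tau):
    """V: current point with V^T V = I_r; G: gradient nabla J(V)."""
    r = V.shape[1]
    GtG = G.T @ G
    Uinv = np.linalg.inv(np.eye(r) + (tau ** 2 / 4.0) * GtG)  # r x r inverse only
    return V - tau * (G @ Uinv + (tau / 2.0) * V @ (GtG @ Uinv))
\end{verbatim}
One can verify numerically that this update preserves the orthogonality $V^\top V = I_r$, in line with the invariance property of the Cayley transform.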
With the new curve expression \eqref{eq:Vt_new}, the following curvilinear search approach is thereby provided to solve the nonconvex optimization problem \eqref{eq:optimization1}, and the solution $V_k$ in each iteration $k$ remains in the manifold $\mathcal{M}$.
\begin{algorithm}
\caption{Iterative Algorithm Based on Curvilinear Search}
\begin{algorithmic}[1]
\State Let $X$ be chosen based on the structural property of the original system \eqref{sys:orig} and $V_0 \in \mathbb{S}^{n \times r}_*$ be an initial solution.
\State $M = V_0^\top V_0$.
\Repeat
\State Solve the Sylvester equations in \eqref{eq:Sylv}.
\State Compute the gradient $\nabla J(V_k)$ in \eqref{eq:nablaJ}.
\State Update
$$V_{k+1} = V_k - \tau_k \left(I+\frac{\tau_k}{2} V_k \nabla J(V_k)^\top \right) \nabla J(V_k) \mathcal{U}_k^{-1}M,$$
where $\mathcal{U}_k: = I + \frac{\tau_k^2}{4} M \nabla J(V_k)^\top \nabla J(V_k)$.
\Until{$\| \nabla J(V_k) \|_{F}^2 \leq \epsilon$}
\end{algorithmic}
\label{alg2}
\end{algorithm}
Note that Algorithm~\ref{alg2} can treat the optimization problem on the Stiefel manifold $\mathrm{St}(n, r)$ as a special case. The algorithm is simply initialized with any $V_0 \in \mathrm{St}(n,r)$ and then applies \eqref{eq:Vt_new1} for the update, so that the orthogonality constraint $V_k^\top V_k = I_r$ is retained for all $k$.
\begin{remark}
\label{rem:stepsize2}
At iteration $k$, $\mathcal{V}_k(\tau)$ is the function \eqref{eq:Vt_new} obtained at the point $V_k$, and we can select the stepsize $\tau_k \geq 0$ by minimizing $J(\mathcal{V}_k(\tau))$ along the curve $\mathcal{V}_k(\tau)$ with respect to $\tau$. However, it is computationally demanding to
obtain the minimizer. Therefore, we can also select the stepsize $\tau_k$ to fulfill the following Armijo-Wolfe conditions \cite{jorge2006numerical,wen2013feasible}:
\begin{align}
J(\mathcal{V}_k(\tau_k)) & \leq J(\mathcal{V}_k(0)) + \rho_1 \tau_k J_\tau'(\mathcal{V}_k(0)),
\label{eq:Wolfe}\\
J_\tau'(\mathcal{V}_k(\tau_k)) & \geq \rho_2 J_\tau'(\mathcal{V}_k(0)),
\nonumber
\end{align}
where $0 <\rho_1 < \rho_2 < 1$ are two parameters, and $J_\tau'(\mathcal{V}(\tau))$ denotes the derivative of $J(\mathcal{V}(\tau))$ with respect to $\tau$.
It is worth emphasizing that there always exists a stepsize $\tau_k$
satisfying the above
conditions since $J(\mathcal{V}(\tau))$ is continuously differentiable and bounded, see \cite{jorge2006numerical} for more details.
\end{remark}
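A simple way to obtain a stepsize satisfying the sufficient-decrease part of \eqref{eq:Wolfe} is backtracking; the sketch below illustrates this (the curvature condition is omitted for brevity, and the parameter values are illustrative). Here \texttt{J\_curve} evaluates $J(\mathcal{V}_k(\tau))$ along the curve and \texttt{slope0} is the derivative $J_\tau'(\mathcal{V}_k(0))$.
\begin{verbatim}
# Sketch: backtracking search for tau_k enforcing the Armijo condition.
def backtracking_stepsize(J_curve, slope0, tau0=1.0, rho1=1e-4,
                          shrink=0.5, max_halvings=30):
    tau, J0 = tau0, J_curve(0.0)
    for _ in range(max_halvings):
        if J_curve(tau) <= J0 + rho1 * tau * slope0:
            return tau
        tau *= shrink
    return tau
\end{verbatim}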
Finally, the following corollary is presented to formalize the convergence of Algorithm~\ref{alg2}.
\begin{coro}
Let $V_0 \in \mathbb{S}^{n \times r}_*$ be the initial condition and $\{V_k\}$ be the generated sequence in Algorithm~\ref{alg2}. Denote $M: = V_0^\top V_0$. If the stepsize $\tau_k$ satisfies the Armijo-Wolfe condition \eqref{eq:Wolfe}, then the sequence $\{V_k\}$ has the following properties.
\begin{enumerate}
\item $V_k \in \mathbb{S}^{n \times r}_*$, and $V_k^\top V_k = M$ for each iteration $k$.
\item the objective function $J(V)$ is non-increasing along the iterations as
\begin{align} \label{eq:convergence}
J(V_{k+1}) & \leq J(V_k) - \rho_1 \tau_k \| V_k \nabla J(V_k)^\top \|_F^2 \\
& \leq J(V_k) - \rho_1 \tau_k \sigma_{\mathrm{min}}(M) \|\nabla J(V_k)^\top \|_F^2 \nonumber
\end{align}
for all $k \geq 0$, where $\sigma_{\mathrm{min}}(M)$ denotes the smallest singular value of the constant matrix $M$.
\item The sequence $\{V_k\}$ converges to a critical point, i.e. $\lim\limits_{k \rightarrow \infty} \| \nabla J(V_k) \|_F = \lim\limits_{k \rightarrow \infty} \| V_{k+1} - V_k \|_F = 0$.
\item It takes at most
\[
\kappa = \left\lceil \frac{J(V_0) - J(V_*)}{ \rho_1 \nu \sigma_{\mathrm{min}}(M) \epsilon} \right\rceil
\] iterations for Algorithm~\ref{alg2} to terminate, where $\nu: = \inf\{\tau_0, \tau_1, ..., \tau_{\kappa-1} \}$ is strictly positive, and $\lceil \cdot \rceil$ returns the ceiling of a number.
\end{enumerate}
\end{coro}
\begin{proof}
The first statement is a direct result of the invariance property of the Cayley transform \eqref{eq:Cayley}. The convergence of Algorithm~\ref{alg2} is guaranteed by the Wolfe condition \eqref{eq:Wolfe}. Using the chain rule,
\begin{equation}
J_\tau'(\mathcal{V}(\tau)) = \mathrm{Tr} \left( \frac{\partial J(\mathcal{V}(\tau))}{\partial \mathcal{V}(\tau)}^\top
\cdot \frac{\partial \mathcal{V}(\tau)}{\partial \tau} \right),
\end{equation}
where
\begin{equation*}
\frac{\partial \mathcal{V}(\tau)}{\partial \tau} = - (\nabla J V^\top - V \nabla J^\top ) V = - \nabla J V^\top V,
\end{equation*}
and $\frac{\partial J(\mathcal{V}(0))}{\partial \mathcal{V}(0)} = \nabla J(V)$ as $\mathcal{V}(0) = V$.
Hence,
$$J_\tau'(\mathcal{V}(0)) = - \mathrm{Tr} (\nabla J^\top \nabla J V^\top V) =
- \mathrm{Tr} (V \nabla J^\top \nabla J V^\top )
.$$ Then the first inequality in \eqref{eq:convergence} is obtained from the condition \eqref{eq:Wolfe}. The second inequality in \eqref{eq:convergence} follows from the fact that
$\mathrm{Tr} (\nabla J^\top \nabla J V^\top V) = \mathrm{Tr} ( \nabla J M \nabla J^\top) \geq \sigma_{\mathrm{min}}(M) \mathrm{Tr} (\nabla J^\top \nabla J)$.
Since $\rho_1$, $\tau_k$, and $\sigma_{\mathrm{min}}(M)$ are all positive scalars, $J(V_k)$ is monotonically decreasing in $k$. Besides, $J(V_{k})$ is always nonnegative by its definition \eqref{eq:optimization1}. Therefore, the sequence $\{J(V_{k})\}$ converges, which implies $\lim_{k \rightarrow \infty}\left[ J(V_k) - J(V_{k+1})\right] = 0$ and hence $
\lim_{k \to \infty} \| \nabla J(V_k) \|_F = 0
$ from \eqref{eq:convergence}. As a result, $\mathcal{U}_k$ in Algorithm~\ref{alg2} converges to the identity matrix as $k \rightarrow \infty$, which leads to $\lim_{k \rightarrow \infty} \| V_{k+1} - V_k \|_F = 0$. We also refer to \cite{goldfarb2009curvilinear} for the convergence analysis of general curvilinear search methods. The analysis of the convergence rate in item (4) follows similar reasoning as in Corollary~\ref{coro:iter} and is hence omitted here.
\end{proof}
\begin{remark}
Algorithm~\ref{alg2} can also be viewed as a Riemannian optimization method due to the property $\nabla J(V)^\top V = 0$, for all $V \in \mathbb{S}^{n \times r}_*$. Following \cite{Absil2009optimization}, the Riemannian gradient can be written as
\begin{equation*}
\mathrm{grad} J(V) : = \nabla J(V) - \frac{1}{2} V (V^\top \nabla J(V) + \nabla J(V)^\top V),
\end{equation*}
which is identical to $\nabla J(V)$ whenever $\nabla J(V)^\top V = 0$. Furthermore, the Cayley transformation in \eqref{eq:Cayley} or \eqref{eq:Vt_new} is adopted to preserve the constraint $V_k^\top V_k = M$. This is actually equivalent to a retraction map used in Riemannian gradient descent.
\end{remark}
\begin{remark}
\label{rem:initialization}
The initialization of $V_0$ in Algorithms~\ref{alg} and \ref{alg2} is discussed. Generally, we can choose $V_0$ as a random $n \times r$ matrix with full column rank. Then, $V_k$ in each iteration of Algorithm~\ref{alg} will have full column rank, and furthermore, $V_k^\top V_k$ will remain constant in Algorithm~\ref{alg2}.
However, a random initial condition might result in a slow convergence and produce a local optimum with the objective function value much larger than the global minimum value. In contrast, it is preferable to choose the initial point $V_0$ as an output of the balanced truncation or the Krylov subspace methods. In this manner, the presented approach can be viewed as a posterior procedure of the balanced truncation or the Krylov subspace methods for improving the accuracy of reduced-order models. We will demonstrate this point in the following numerical example.
\end{remark}
%
\section{Numerical Experiments}
\label{sec:example}
In this section, the preceding theoretical results of the proposed optimal projection-based model reduction method are illustrated. In the first part, we implement the proposed framework for the stability-preserving model reduction of a building model, which is compared with the standard balanced truncation \cite{morAntSG01} and the projection-based method
\cite{selga2012stability}. In the second part, a model reduction problem for port-Hamiltonian systems is addressed, and comparisons are made among the proposed method and the existing approaches in \cite{polyuga2012effort,gugercin2012pH,sato2018pHsys}.
\subsection{Stability-Preserving Model Reduction of Building Model}
In this example, we aim to preserve the asymptotic stability of a system. Consider a model reduction benchmark example in \cite{morAntSG01} that describes the displacement of a multi-storey building. The full-order model is asymptotically stable and has $48$ state variables, $1$ input, and $1$ output. The objective in this example is to build reduced-order models that remain asymptotically stable. Note that the optimization-based approaches in \cite{Yu1999approximate,sato2017structure} are not applicable to this example, since the symmetric part of the system state matrix, i.e. $\frac{1}{2}(A + A^\top)$ with $A$ in \eqref{sys:orig}, is not negative definite.
For comparison, three projection-based methods are applied: (i) the standard balanced truncation (BT) that naturally preserves stability \cite{morAntSG01}, (ii) the iterative SVD-rational Krylov-based model reduction approach (ISRK) in \cite{gugercin2008iterative}, and (iii) the proposed method based on structure-preserving optimal projection (SPOP) in Algorithm~\ref{alg}, where $X$ in \eqref{eq:pinv} is equal to the observability Gramian of the original system, and the stepsize of each iteration is determined by \eqref{eq:steepest}. Meanwhile, to demonstrate the performance of Algorithm~\ref{alg}, we adopt three different strategies for its initialization. First, we use five randomly generated matrices as initial points to run the iterations, and the one that produces the smallest error is marked as SPOP-R. Besides, we also choose the initial points $V_0$ of Algorithm~\ref{alg} as the outputs of BT and ISRK, and the corresponding schemes are denoted by SPOP-B and SPOP-K, respectively.
For each reduced order $r = 3, 6, 9, 12, 15$, we compute and show in Table~\ref{tab:building} the relative errors in terms of the $\mathcal{H}_2$ norm, i.e. $\| {G}(s) - \hat{G}(s) \|_{\mathcal{H}_2}/ \| {G}(s) \|_{\mathcal{H}_2}$ with ${G}(s)$, $\hat{G}(s)$ the transfer matrices of the full-order and reduced-order models, respectively.
\begin{table}[t]
\centering
\caption{Relative $\mathcal{H}_2$ errors as the reduced order $r$ varies}
\begin{tabular}[t]{lccccc}
\toprule
Order & $r=3$ & $r=6$ & $r=9$ & $r=12$ & $r=15$\\
\midrule
BT & 0.7170 & 0.2905 & 0.2217 & 0.1650 & 0.1644\\
ISRK & 0.7945 & 0.2486 & 0.2010 & 0.1399 & 0.1132\\
SPOP-R & 0.7153 & 0.2607 & 0.2063 & 0.1730 & 0.1583\\
SPOP-B & \textbf{0.7145} & \textbf{0.2460} & \textbf{0.1761} & 0.1367 & 0.0971\\
SPOP-K & 0.7809 & \textbf{0.2460} & 0.1780 & \textbf{0.1364} & \textbf{0.0921}\\
\bottomrule
\end{tabular}
\label{tab:building}
\end{table}%
From Table~\ref{tab:building}, we see that Algorithm~\ref{alg} is sensitive to the choice of its initial point $V_0$: with different initial points, the algorithm can converge to distinct locally optimal solutions that give different relative errors. When $r = 12$ and $r = 15$, the random initialization of $V_0$ in Algorithm~\ref{alg} produces reduced-order models with greater relative $\mathcal{H}_2$ errors than those generated by ISRK (and, for $r = 12$, also by BT). Therefore, the selection of the initial point $V_0$ in Algorithm~\ref{alg} is crucial for achieving a small reduction error.
\begin{figure}[t]
\centering \includegraphics[width=0.4\textwidth]{fig/bode.pdf}
\caption{Comparison of the relative error magnitudes versus frequency in different methods when $r = 15$.}
\label{fig:bode}
\end{figure}
Furthermore, when the outputs of BT or ISRK are taken as the initial points, Algorithm~\ref{alg} presents better results than both BT and ISRK. This is because the projections generated by the BT and ISRK methods are generally not optimal in minimizing the approximation error, while the proposed scheme can take their outputs as initial points and leads to a local optimal projection via the iterations on the noncompact Stiefel manifold. Particularly, when $r = 15$, SPOP-B and SPOP-K further reduce the relative errors by $41.0\%$ and $18.6\%$, compared to BT and ISRK, respectively. The detailed comparison of the approximation error magnitudes at different frequencies is presented in Fig.~\ref{fig:bode}, which shows that the SPOP-B and SPOP-K methods outperform the BT or ISRK approaches for most frequencies, particularly in the lower frequency range. This shows the potential of the presented method as a posterior procedure of the balanced truncation or the Krylov subspace methods, as it is able to further improve the quality of the projection and hence gives a better reduced-order model.
\subsection{Passivity-Preserving Model Reduction of Mass-Spring-Damper System}
We further show the effectiveness of the proposed framework in the passivity-preserving model reduction problem for port-Hamiltonian systems, which has received considerable attention from the model reduction literature. We adopt the benchmark example of a mass-spring-damper system as shown in Fig.~\ref{fig:msd}, with masses $m_i$, spring coefficients $k_i$ and damping constants $c_i$, $i = 1,...,\ell$. The displacement of the mass $m_i$ is indicated by $q_i$. The same model parameters are used as in \cite{gugercin2012pH}, which considers $\ell = 50$ masses and hence $100$ states in the full-order model. Furthermore, the system inputs are two external forces $u_1$, $u_2$ applied to the masses $m_1$, $m_2$, whose velocities $\dot{q}_1$ and $\dot{q}_2$ are measured as the outputs $y_1$, $y_2$ of the system. We refer to \cite{gugercin2012pH} for the detailed state-space formulation of this mass-spring-damper system.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.9, transform shape, every node/.style={outer sep=0pt},thick,
mass/.style={draw,thick},
spring/.style={thick,decorate,decoration={zigzag,pre length=0.3cm,post
length=0.3cm,segment length=6}},
ground/.style={fill,pattern=north east lines,draw=none,minimum
width=0.75cm,minimum height=0.3cm},
dampic/.pic={\fill[white] (-0.1,-0.2) rectangle (0.2,0.2);
\draw (-0.3,0.2) -| (0.2,-0.2) -- (-0.3,-0.2);
\draw[line width=1mm] (-0.1,-0.2) -- (-0.1,0.2);}]
\node[mass,minimum width=1.1cm,minimum height=0.9cm,fill={rgb:orange,1;yellow,2;pink,5}] (m1) {$m_1$};
\node[mass,minimum width=1.1cm,minimum height=0.9cm,fill={rgb:orange,1;yellow,2;pink,5},right=1.4cm of
m1] (m2) {$m_2$};
\node[mass,minimum width=1.1cm,minimum height=0.9cm,fill={rgb:orange,1;yellow,2;pink,5},right=2.1cm of
m2] (m3) {$m_{\ell}$};
\node[right=1.4cm of m2] (dots) {$\cdots$};
\node[right=1.4cm of m3,ground,minimum width=3mm,minimum height=2cm] (g1){};
\draw[thick] (g1.south west) -- (g1.north west);
\draw[spring] ([yshift=3mm]g1.west) coordinate(aux)
-- (m3.east|-aux)
node[midway,above=1mm]{$k_{\ell}$};
\draw[spring] (m1.east|-aux) -- (m2.west|-aux) node[midway,above=1mm]{$k_1$};
\draw[spring] (m2.east|-aux) -- (dots.west|-aux) node[midway,above=1mm]{$k_2$};
\draw ([yshift=-3mm]g1.west) coordinate(aux')
-- (m3.east|-aux') pic[midway]{dampic} node[midway,below=3mm]{$c_\ell$}
(m1.east|-aux') -- (m2.west|-aux') pic[midway]{dampic} node[midway,below=3mm]{$c_1$}
(m2.east|-aux') -- (dots.west|-aux') pic[midway]{dampic} node[midway,below=3mm]{$c_2$};
\foreach \X in {1,2}
{
\draw (m\X.north) -- ++ (0,0.5) coordinate[midway](aux\X);
\draw[latex-] (aux\X) -- ++ (-0.5,0) node[above]{$u_\X$};
\draw[thin,dashed] (m\X.south) -- ++ (0,-0.8) coordinate[pos=0.7](aux'\X);
\draw[-latex] (aux'\X) -- ++ (1,0) node[midway,below]{$q_\X$};
}
\draw[thin,dashed] (m3.south) -- ++ (0,-0.8) coordinate[pos=0.7](aux'3);
\draw[-latex] (aux'3) -- ++ (1,0) node[midway,below]{$q_{\ell}$};
\end{tikzpicture}
\caption{A mass-spring-damper system with $\ell$ masses; the corresponding full-order model has dimension $n = 2 \ell$.}
\label{fig:msd}
\end{figure}
To preserve the passivity of the full-order model, we let $X$ in \eqref{eq:pinv} be chosen as the Hamiltonian matrix of the system, which satisfies the LMI \eqref{eq:LMI-QSR}. Our optimal projection approach in Algorithm~\ref{alg}, shortened as SPOP, is then compared with two other projection-based methods, namely, the balanced truncation method for port-Hamiltonian systems (BT-PH) in \cite{kawano2018structure} and the iterative tangential rational interpolation method (IRKA-PH) for port-Hamiltonian systems in \cite{gugercin2012pH}. In Algorithm~\ref{alg}, we fix the number of iterations at $100$ and adopt the steepest descent strategy in \eqref{eq:steepest} to compute a stepsize at each iteration. Furthermore, the starting point of the SPOP algorithm is initialized by one-step interpolation. Specifically,
we choose sets of initial interpolation points $\{s_1, ..., s_r \}$ that are logarithmically spaced between $10^{-3}$ and $10^{-1}$ and tangential directions $\{b_1 , ... , b_r \}$ as the dominant right singular vectors of the transfer matrix $G(s)$ at each interpolation point. Then an initial point is constructed as follows:
\begin{equation}
V_0 = \begin{bmatrix}
(s_1 I - A)^{-1}B b_1, ... ,(s_r I - A)^{-1} B b_r
\end{bmatrix},
\end{equation}
which is replaced with a real basis if the columns of $V_0$ occur in conjugate pairs, see \cite{gugercin2012pH} for more details.
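To make the initialization concrete, the construction of $V_0$ can be sketched in a few lines of Python/NumPy. This is only an illustrative sketch, not the implementation used in our experiments; it assumes a standard state-space realisation $(A,B,C,D)$ of the transfer matrix $G(s)$ given as dense arrays, and all function and variable names are ours.
\begin{verbatim}
import numpy as np

def initial_projection(A, B, C, D, r, s_min=1e-3, s_max=1e-1):
    """Build the initial basis V0 by one-step tangential interpolation.

    - r interpolation points, logarithmically spaced in [s_min, s_max];
    - tangential direction b_i: dominant right singular vector of
      G(s_i) = C (s_i I - A)^{-1} B + D;
    - i-th column of V0: (s_i I - A)^{-1} B b_i.
    """
    n = A.shape[0]
    points = np.logspace(np.log10(s_min), np.log10(s_max), r)
    cols = []
    for s in points:
        X = np.linalg.solve(s * np.eye(n) - A, B)  # (s I - A)^{-1} B
        G = C @ X + D                              # transfer matrix at s
        _, _, Vh = np.linalg.svd(G)
        b = Vh[0].conj()                           # dominant right singular vector
        cols.append(X @ b)
    V0 = np.column_stack(cols)
    # With real interpolation points no conjugate pairs occur; otherwise V0
    # would be replaced by a real basis of its column span.
    return np.real_if_close(V0)
\end{verbatim}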
The same interpolation points and directions are used to initialize the IRKA-PH algorithm for comparison. We adopt each of the methods to reduce the full-order model to reduced-order models with orders from $r = 2$ to $r = 20$. The resulting relative $\mathcal{H}_2$ errors for different reduced orders are illustrated in Fig.~\ref{fig:errcomp}, from which we observe that the proposed method achieves superior performance to both BT-PH and IRKA-PH, especially when the reduced model dimension is between $6$ and $16$. For instance, when $r = 10$, our method achieves relative $\mathcal{H}_2$ errors that are $49.55\%$ and $46.96\%$ smaller than those of BT-PH and IRKA-PH, respectively.
The convergence behavior of the presented iterative method is illustrated in Fig.~\ref{fig:converg}, in which we observe a monotone decay of the objective function in each case as the number of iterations increases; the algorithm gradually converges to a local optimum.
\begin{figure}[t]
\centering \includegraphics[width=0.35\textwidth]{fig/errorcomp.pdf}
\caption{The magnitudes of the relative $\mathcal{H}_2$ errors as the reduced dimension $r$ varies.}
\label{fig:errcomp}
\end{figure}
\begin{figure}[t]
\centering \includegraphics[width=0.4\textwidth]{fig/converg.pdf}
\caption{Decay of the objective function value of Algorithm~\ref{alg} over the iterations.}
\label{fig:converg}
\end{figure}
Next, we illustrate both Algorithm~\ref{alg} and Algorithm~\ref{alg2} in reducing a model with $\ell = 1000$, i.e., $n = 2000$, to a reduced-order model with $r = 10$. In both algorithms, fixed stepsizes are imposed, i.e., $\alpha_k = 0.21$ in Algorithm~\ref{alg} and $\tau_k = 0.21$ in Algorithm~\ref{alg2} at each iteration. Furthermore, we compare the two algorithms with the Riemannian trust-region method \cite{sato2018pHsys}, where each algorithm is implemented for 100 iterations with the initial point given by the output of IRKA-PH. Fig.~\ref{fig:bode2} depicts the amplitude plot of the reduction errors, and Table~\ref{tab:comp} shows the relative $\mathcal{H}_2$ error and total running time on an Intel Core i7-7500U (2.70GHz) CPU.
From Fig.~\ref{fig:bode2} and Table~\ref{tab:comp}, we see that Algorithm~\ref{alg} and Algorithm~\ref{alg2} achieve nearly the same approximation error, but Algorithm~\ref{alg2} needs more computation time since it involves the matrix inversion in \eqref{eq:Vt_new} at each iteration. The Riemannian optimization approach does not impose the Petrov-Galerkin projection and thus achieves a better approximation than Algorithm~\ref{alg} and Algorithm~\ref{alg2}. However, it consumes more than three times the computation time of the proposed algorithms.
\begin{figure}[t]
\centering \includegraphics[width=0.4\textwidth]{fig/errorcomp2.pdf}
\caption{Comparison of the relative error magnitudes versus frequency in the three algorithms.}
\label{fig:bode2}
\end{figure}
\begin{table}[t]
\centering
\caption{Comparison of three algorithms with 100 iterations}
\footnotesize \begin{tabular}[t]{lcc}
\toprule
Algorithms & Relative Error & Running Time \\
\midrule
Algorithm~\ref{alg} & 0.1616 & 99.46s \\
Algorithm~\ref{alg2} & 0.1618 & 109.34s\\
Method in \cite{sato2018pHsys} & 0.1088 & 344.81s\\
\bottomrule
\end{tabular}
\label{tab:comp}
\end{table}%
\section{Conclusions}
\label{sec:conclusion}
In this paper, we have presented a novel model reduction method that incorporates nonconvex optimization with the Petrov-Galerkin projection, which provides a unified framework for reducing linear systems with different properties, including stability, passivity, and bounded realness. An optimization problem has been
formulated on a noncompact Stiefel manifold, aiming to find an optimal projection such that the resulting approximation error is minimized in terms of the $\mathcal{H}_2$ norm.
The gradient expression for the objective function has been derived, leading to a gradient descent algorithm that produces a locally optimal solution. The convergence property of the algorithm has also been analyzed. Finally, the feasibility and performance of the proposed method have been illustrated by two numerical examples, which show that the method can be applied to both stability-preserving and passivity-preserving model reduction problems and achieves smaller approximation errors than the existing balanced truncation and Krylov subspace approaches.
\bibliographystyle{abbrv}
|
{
"arxiv_id": "2302.08664",
"language": "en",
"timestamp": "2023-02-20T02:05:50",
"url": "https://arxiv.org/abs/2302.08664",
"yymm": "2302"
} | \section{Introduction}
Online social networks (OSNs) are an integral part of modern society, influencing a wide range of aspects of our daily lives. The vast quantity of personal information shared on these platforms makes them a treasure trove for companies seeking to reach out to potential customers and for individuals looking to grow their social circle or entertain themselves. However, like any software, OSNs are prone to bugs and technical issues; consequences range from poor user experience~\cite{brodkin2022socialapp,day2015socialbugs} to massive data breaches affecting billions of individuals~\cite{morahan2022dataleaks}.
The recent rise of social bugs in software systems has prompted the need for social testing~\cite{ahlgren2020wes}. However, for the research community, social testing poses several challenges.
First and foremost, obtaining data from online social networks (OSNs) can be time-consuming and resource-intensive, and requires specialised expertise that non-specialists may lack~\cite{https://doi.org/10.48550/arxiv.1612.04666}. For example, privacy policies and community guidelines may restrict access to data, making it unavailable to researchers. Furthermore, descriptions of data extraction methods are often omitted in many studies~\cite{info:doi/10.2196/13544}.
Second, researchers may be limited in their ability to conduct experiments on OSN systems if they are built using proprietary software platforms~\cite{ahlgren2020wes}.
Finally, the sheer size and complexity of OSN systems can result in significant operational costs, which can impede researchers' ability to conduct large-scale experiments, limiting the scope and depth of their research.
To overcome these challenges, technology companies develop tools such as Web-Enabled Simulation (WES, by Facebook/Meta)~\cite{ahlgren2020wes}, which allow developers to test code updates and new features in a simulated environment without risking real user data.
Social testing, which involves simulating interactions among a large community of users, can be used to uncover faults in online social networks.
However, these tools are not widely available to the public and may have limitations in simulating the full spectrum of user behaviours that could result in uncovering bugs.
To address these limitations, we introduce Socialz, an approach for social fuzz testing, which makes the following key contributions:
\setlist[itemize]{leftmargin=7mm}
\setlist[enumerate]{leftmargin=8.5mm}
\begin{enumerate}
\item characterisation of users of a real social network,
\item evolutionary diversification of community interaction with respect to multiple, non-trivial features,
and
\item a workflow for executing interactions in the system and collecting performance data.
\end{enumerate}
Socialz aims to advance the field of social testing through diversity-based user behaviour. Our approach involves evolving diverse sets of virtual users that are diversified and unbiased across a non-trivial feature space. This allows us to cover a wider range of behaviours compared to real users, and increases the likelihood of uncovering bugs that may not be detected by a set of similar and biased virtual users.
\section{Related Work}
Software testing has evolved into a vast field that includes many different methods and techniques for assessing the performance, usability, and other attributes of software systems~\cite{7814898}. Testing is performed at various levels of abstraction, ranging from unit testing to system testing.
Recently, the concept of social testing has emerged as a new level of abstraction, potentially positioned above system testing~\cite{ahlgren2020wes}. This is due to the recognition that social bugs can arise through community interactions and may not be uncovered by traditional testing that focuses solely on single user journeys~\cite{harman2018journeys}.
In the following overview, we will briefly examine prior work in the testing of social networks, the application of evolutionary methods in fuzz testing, and diversity optimisation.
\subsection{Testing of Social Networks}
Search-Based Software Testing (SBST) is a technique that leverages optimization search algorithms to solve software testing problems~\cite{5954405}. This approach is widely used in both industrial and academic sectors, including at Facebook, where it is employed to test the behaviour of both the system and its users~\cite{10.1007/978-3-319-99241-9_1}.
In social networks, testing goes beyond simply assessing the behaviour of the system and involves evaluating the interactions between users facilitated by the platform. To that end, Web-Enabled Simulation (WES) simulates the behaviours of a community of users on a software platform using a set of bots~\cite{ahlgren2020wes}. Traditional tests, on the other hand, involve executing a predetermined series of input steps. WES is run in-vivo but in a shadow copy of the system, hence as a separate ``digital twin'', which allows for testing without risk to real user data~\cite{ahlgren2021facebook}. Both ``digital twins'' and WES have widespread industrial applications, not only in OSNs but also in robotics, manufacturing, healthcare, transport, and the energy sector, among others~\cite{jiang2021industrial,8972429}.
In addition to testing, there are also methods for formally verifying the correctness of social network software and systems. Formal verification of social network protocols and algorithms ensures security and reliability by ensuring access to content is subject to both user-specified and system-specified policies~\cite{cheng2012user,kafali2014detecting}. Model checking can be applied at various levels of abstraction, from high-level network properties to the implementation of individual components~\cite{pardo2017model}. This approach can be used in combination with other testing techniques, such as simulation or testing with software-controlled bots, to provide a more comprehensive evaluation of a social network's behaviour and performance~\cite{pedersen2021social}.
\subsection{Evolutionary Fuzzing}
We adopt definitions from~\citet{https://doi.org/10.48550/arxiv.1812.00140}, who define fuzzing as a software testing methodology that involves injecting unexpected or randomised input data into a program under test (PUT) to uncover defects or bugs. A specific application of fuzzing, called fuzz testing, evaluates the security policy violations of the PUT. The tool used to perform fuzz testing is known as a fuzzer.
The inputs used in fuzzing can be selected either randomly, with each element having an equal chance of being chosen, or through guided methods such as syntactic or semantic models~\cite{MARIANI2015bbtesting}.
Evolutionary fuzz testing is a variant of fuzz testing that leverages evolutionary algorithms to optimise the input data. Research has shown that it is effective in discovering a wide range of vulnerabilities, including those that are difficult to detect with traditional testing methods. For example, \citet{li2019v} demonstrate V-Fuzz, a vulnerability-oriented evolutionary fuzzing framework that combines vulnerability prediction with evolutionary fuzzing to
reach potentially vulnerable code. They test various open-source Linux applications and fuzzing benchmarks.
For Android applications, \citet{Cotroneo_2019} introduce a coverage-guided fuzzing platform that was demonstrated to be more efficient than blind fuzzing.
Similarly, \citet{9847081} employ a constrained neural network evolutionary search method to optimise the testing process and efficiently search for traffic violations.
Although fuzzing is widely used, to the best of our knowledge, no research has been conducted to fuzz test a social network system.
\subsection{Diversity Optimisation}\label{sec:diversityopt}
When it comes to social fuzz testing, the actions a user takes need to be determined, such as ``follow a particular person''. In such a case, defining a user that achieves exactly this goal is straightforward, as a developer can easily translate the desired outcome into a specific interaction.
However, creating a user with a more complex behaviour, such as ``a virtual user should be highly active but not very central to everything that is going on'', is not as simple. Mapping the desired interaction between the virtual user and its environment is challenging due to the intricate interplay, even though the activity and centrality calculations may not be black boxes by definition.
This issue becomes even more complex when designing a group of virtual users within a social network that can interact with each other in various ways.
We propose a practical solution to addressing such issues by treating the functions as black boxes and utilising heuristic approaches like novelty search~\cite{DBLP:conf/gecco/RisiVHS09} or evolutionary diversity optimisation~\cite{DBLP:conf/gecco/UlrichT11}.
This is made possible by recent algorithmic advancements in diversity optimisation with multiple features.
First, the discrepancy-based approach by \citet{neumann2018discrepancy} aims to minimise the discrepancy of the solution set in the feature space. The discrepancy measure is effectively an indicator function that assesses the set of given solutions in the feature space.
Second, \citet{neumann2019moo} present approaches based on space transformations and multi-objective performance indicators. Among others, these formulations are used to create (1) sets of images (with 149x149 pixels)
that are diverse with respect to aesthetic or general features and (2) sets of instances for the travelling salesperson problem
(with up to 2000 cities)
that are diverse with respect to features like hull ratio and minimum spanning tree depth~\cite{bossek2019tsplarge}.
\section{The Methodology of Socialz: Overview}\label{sec:methodology}
In the following, we present our three-stage approach for fuzz testing OSNs. Each of the following three sections is dedicated to one of the stages, and each of them has a distinct methodology.
In \textit{Stage 1/3: Characterisation of User Behaviour}, we detail the process of obtaining and analysing data from a real OSNs to understand the behaviour of users on the network.
\textit{Stage 2/3: Evolutionary Diversification of Community Interaction} employs evolutionary algorithms to improve the diversity of user behaviour in the network, and
\textit{Stage 3/3: Execution} involves evaluating the evolved network.
\paragraph{Target platform.} \label{sec:gitlab}
To demonstrate Socialz, we need to choose a server as the PUT. The server needs to meet certain requirements to ensure that the case study can be conducted:
\begin{itemize}
\item A freely available, open-source server that can be self-hosted.
\item An API connection or similar to impersonate users on a network and to simulate events being created by them, necessary for accurately modelling the behaviour of users in a network.
\item The ability to gather system performance data.
\item The option to easily restart the system in order to achieve comparable conditions for the eventual evaluation.
\item A well-documented application for easy troubleshooting of problems that may arise during installation, maintenance, and operation of the system.
\end{itemize}
GitLab Community Edition (GitLab~CE) is the best fit for these requirements.
GitLab~CE is the free and open source edition of GitLab, which is a platform that has over 30 million registered users~\cite{gitlabusers}.
Despite being free, GitLab~CE provides a comprehensive set of performance metrics that are continually stored in an internal Prometheus time-series database. This database can be queried through a set of pre-defined Grafana dashboards~\cite{gitlabdocumentation}, providing a wide range of performance metrics for our purposes (see Figure~\ref{fig:grafana} for an example).
\begin{figure}
\centering
\includegraphics[width=\linewidth,trim=0 0 0 0,clip]{grafana.png}\vspace{-1mm}
\caption{Example of a Grafana dashboard, showing statistics of our running GitLab~CE server.}
\label{fig:grafana}
\end{figure}
To easily reset the system to well-defined states, we run GitLab~CE 15.5 in a Docker container, allowing us to delete the data and to restart the container to achieve a reproducible, initial state. Our system runs on a Ubuntu 22.04 virtual machine with 4 CPU cores, 100 GB of hard drive disk space, and 8 GB of RAM.
\section{Characterisation of User Behaviour (Stage 1/3)}
As the starting point for this project, we populate our GitLab server with real-world data from GitHub. We use GitHub as our data source because it offers the largest repository of information with the kind of data that our platform needs and because it has a large community of over 100 million developers~\cite{dohmke2023million}, as well as comprehensive documentation that makes it easy to get started, troubleshoot issues, and find help. There are also several Git server alternatives, such as Gitea and Gogs\footnote{\url{https://gitea.io}, \url{https://gogs.io}. Retrieved 25 January 2023.}, that can be self-hosted and have similar features (but not the server performance metrics that GitLab offers), making it easy to transfer solutions from GitHub to them in terms of data. Additionally, projects exist that provide access to user interaction data similar to that found in online social networks (OSNs). These interactions include creating and annotating content, creating networks (e.g. starring a repository, linking comments), as well as ``malicious'' interactions (e.g. intentionally or unintentionally submitting bugs, spamming, or violating privacy). One such project that provides interaction data is GH~Archive\footnote{\url{https://www.gharchive.org}. Retrieved 25 January 2023.}. It is a public dataset available on Google BigQuery that has been recording the public GitHub timeline since 2011, and it makes this data easily accessible through SQL-like queries of JSON-encoded events as reported by the GitHub API.
To narrow down the immense amount of data available in GH~Archive\footnote{As of December 2022, the GH~Archive data stored on Google BigQuery totals more than 17.23 terabytes.}, we choose a sub-community within GitHub, and after considering various sub-communities based on their size, we find that COBOL's is small enough to enable us to conduct a thorough analysis of the data. This is an intentional choice, because we target a ``complete'' subset of the data, i.e. not a random sample of nodes or edges from GH~Archive that are not interconnected. We model the data as a graph, with users and repositories as nodes and GitHub events as edges. In this model, repositories can be thought of as groups on an online social network (OSN) where users share and contribute content.
To focus on content creation, content annotation, and network creation, we select the following five types of GitHub events:
\begin{itemize}
\item WatchEvents and ForkEvents can be likened to liking a public profile page.
\item PushEvents can be thought of as being invited to a group with permission to publish some content.
\item PullRequests can be thought of as requesting permission to publish something to a group.
\item FollowEvents represent establishing a connection or friendship with another user. Unfortunately, FollowEvents stopped being recorded in GH~Archive as of December 2013, so we create a workaround in which connections between users are based on their similarity. To ensure consistency in our analysis, we disregard the existing FollowEvents in the GH~Archive data and instead rely only on our own approach (see Section~\ref{sec:solrep}).
\end{itemize}
Finally, to ensure that our dataset is as complete as possible, we further adjust its size by filtering it to only include events from the years 2011 to 2016. This decision allows us to compile a relatively complete dataset (i.e. starting from the beginning of GH~Archive's records and going up to a particular date), rather than having more recent but incomplete data (e.g. considering the last six years until today) in which possibly all relevant events would have occurred before the starting date of the snapshot.
Table~\ref{tab:original} summarises the events in the original dataset, which we represent internally as an edge list containing 6,742 events between users and repositories.
\begin{table}
\centering
\caption{Original dataset: 1,523 users created a total of 6,742 events involving 156 repositories and forks (2011--2016).}
\label{tab:original}\vspace{-1mm}
\begin{tabular}{cc}
\toprule
\textbf{Event type} & \textbf{Number of events} \\
\midrule
PushEvent & 4234 \\
WatchEvent & 1206 \\
PullRequestEvent & 852 \\
ForkEvent & 450 \\
\bottomrule
\end{tabular}
\end{table}
\section{Evolutionary Diversification of Community Interaction (Stage 2/3)}
In this section, we outline the components of our evolutionary approach and how they are used to diversify a set of virtual users, which are less biased than their real-world counterparts. This process has the potential to reveal anomalies or unexpected behaviours that would otherwise be difficult to detect in sets of bots that are similar and biased.
\subsection{Features of Community Interaction}
To characterise users, we investigate three features that we consider to be non-trivial in the sense introduced in Section~\ref{sec:diversityopt}: given these features, a developer may struggle to manually design a set of virtual users that exhibit a spread of desired community interactions.
Our chosen features allow us to characterise how active a user is, what their relative importance is, and what kinds of events they are involved in:
\begin{itemize}
\item The degree centrality of a node quantifies how active a user is in the network, i.e. how many events they submit. This measure is often used as a notion of popularity in social networks~\cite{https://doi.org/10.48550/arxiv.2011.01627}, as nodes with a large number of relationships are considered more powerful and central, but it has the limitation of only taking into account local knowledge of the network topology. Hence, we introduce an additional centrality metric next to supplement it.
\item To assess the relative importance of a particular user on the network, we utilise the PageRank algorithm~\cite{page1999pagerank}: it is fast to compute, well suited for a directed network such as ours and has been proven to be effective in characterising users~\cite{9420317}.
\item To characterise the types of actions a user performs --- for example, a user may submit only PushEvents, or only ForkEvents and PullRequestEvents --- we represent each of the 15 possible combinations as a binary vector and then consider the corresponding decimal value as that user's ``event type'' (see the sketch after this list).\footnote{Because the number of users is much larger than the number of possible combinations, and because we aim for diversity, we conjecture that variations to this mapping procedure only have minor effects on the overall outcomes.} We consider 15 combinations, as we have four event types that are not FollowEvents, and the combination of ``user does not interact at all'' is not allowed.
\end{itemize}
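The event-type mapping can be illustrated with the following minimal Python sketch. The concrete ordering of the four event types inside the binary vector is an assumption made for illustration; as conjectured in the footnote above, the particular choice should have little effect.
\begin{verbatim}
# Ordering of the four non-Follow event types inside the binary vector
# (this particular ordering is an assumption made for illustration).
EVENT_TYPES = ["PushEvent", "WatchEvent", "PullRequestEvent", "ForkEvent"]

def event_type_feature(user_events):
    """Map the set of event types a user is involved in to a value in 1..15."""
    bits = ["1" if t in user_events else "0" for t in EVENT_TYPES]
    value = int("".join(bits), 2)
    assert value > 0, "a user must be involved in at least one event type"
    return value

# Example: a user who only submits PushEvents and ForkEvents -> 0b1001 = 9
print(event_type_feature({"PushEvent", "ForkEvent"}))
\end{verbatim}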
It is critical to emphasise that these three metrics are features, not objectives: no user is ``better'' or ``worse'' than another one, neither in a single-objective sense, nor in a multi-objective sense.
\subsection{Solution Evaluation}\label{sec:solrep}
In our evolutionary setup, each individual is an interaction graph that represents how virtual users interact in a social network.
In particular, each individual is an edge list that contains all the necessary information of our graph: the source node, the target node, and the type of event.
\begin{table}
\caption{Original dataset: matrix representation.
Users 658 and 659 are chosen to show non-zero data, as the matrix is sparsely populated.}
\label{tab:solutionrep}\vspace{-3mm}\setlength{\tabcolsep}{0.5mm}
\centering
\begin{tabular}{c|ccc:cccccc|}
& repo$_1$ & $\cdots$ & \multicolumn{1}{c}{repo$_{156}$} & user$_1$ & $\cdots$ & user$_{658}$ & user$_{659}$ & $\cdots$ & \multicolumn{1}{c}{user$_{1523}$} \\ \hline
repo$_1$ & 0 & & 0 & 0 & & 0&4 & & 0 \\
$\vdots$ & & & & & & & & & \\
repo$_{156}$ & 0 & & 0 & 0 & & 0 & 0 & & 0 \\ \cdashline{2-10}
user$_1$ & 0 & & 0 & 0 & & 1 & 0 & & 0 \\
$\vdots$ &&&&&&&&&\\
user$_{658}$ & 0 & & 0 & 1 & & 0 &0 & & 0 \\
user$_{659}$ & 4 & & 0 & 0 & & 0 & 0 & & 0 \\
$\vdots$ & & & & & & & & & \\
user$_{1523}$ & 0 & & 0 & 0 & & 0 & 0 & & 0 \\ \cline{2-10}
\end{tabular}
\end{table}
To evaluate our graphs, we have defined the following steps:
\begin{enumerate}
\item We first transform the edge list into a two-dimensional adjacency matrix. This adjacency matrix has four areas that reflect the interactions repo-repo, repo-user, user-repo, and user-user (see Table~\ref{tab:solutionrep}).
\item We record the interactions between users and repositories in the repo-user and user-repo areas by summing the number of events that occur between each repository and user. Each event has a weight of one, regardless of its type.
At present, as we do not further differentiate, the repo-user and the user-repo areas are mirrored versions of each other.
\item As there are no interactions between repositories in our approach, the repo-repo area is always filled with zeros.
\item We use the user-user area to store FollowEvents. It is initially empty, and we fill it with data by evaluating the cosine similarity of the users based on their interactions with repositories (areas user-repo, repo-user).
If the cosine similarity is greater than zero for a pair of two users, then we set the respective entry to 1.
The diagonal is set to 0 as users cannot follow themselves.
\item With the FollowEvents incorporated into our edge list, we calculate the PageRank score and degree of centrality of each node. For the event type feature, we filter our data to only include user nodes and map each user node to the combination of events they were involved in.
\item Finally, we calculate the star-discrepancy score for the interaction graph.
\end{enumerate}
Our rationale for the decision to create FollowEvents between two users based on the simple criterion that the cosine similarity across all events (for these two users) is greater than zero is three-fold and mostly based on practical considerations: (1) we make the assumption that users who create similar events may be likely to follow each other, (2) the approach is deterministic and thus saves memory (at the cost of computation time), and (3) it reduces the search space by allowing us to generate community interactions.
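A minimal sketch of this similarity-based construction of FollowEvents is given below in Python/NumPy. It assumes the user-repo area of the adjacency matrix is available as a dense interaction-count matrix; the function and variable names are ours.
\begin{verbatim}
import numpy as np

def add_follow_edges(user_repo):
    """Return a binary user-user matrix with entry (i, j) = 1 if users i and j
    have positive cosine similarity over their repository interactions."""
    norms = np.linalg.norm(user_repo, axis=1, keepdims=True)
    norms[norms == 0] = 1.0            # inactive users keep a zero vector
    unit = user_repo / norms
    sim = unit @ unit.T                # pairwise cosine similarities
    follows = (sim > 0).astype(int)
    np.fill_diagonal(follows, 0)       # users cannot follow themselves
    return follows
\end{verbatim}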
\subsection{Evolutionary Algorithm}
We employ a diversity optimisation approach using the star-discrepancy measure, based on~\cite{neumann2018discrepancy}. The star-discrepancy measures the regularity with which points are distributed in a hypercube, and in particular with respect to all axis-parallel boxes $\left[0,\ b\right]$, $b\ \in\left[0,\ 1\right]^d$ that are anchored in the origin. Hence, this metric helps us evaluate how evenly the points are distributed in the feature space. In our case, each point represents a user with its coordinates defined by the three above-described metrics.
We linearly scale all three metrics into $\left[ 0,1 \right]$.
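Since computing the star-discrepancy exactly is expensive, the following grid-based approximation conveys the idea; this Python sketch is ours and is not the implementation used in our experiments.
\begin{verbatim}
import itertools
import numpy as np

def star_discrepancy_approx(points, grid_steps=20):
    """Approximate the star-discrepancy of points in [0,1]^d by scanning
    axis-parallel boxes [0, b] anchored at the origin, with b on a grid."""
    points = np.asarray(points, dtype=float)
    _, d = points.shape
    grid = np.linspace(0.0, 1.0, grid_steps + 1)[1:]      # exclude the empty box
    worst = 0.0
    for b in itertools.product(grid, repeat=d):
        b = np.array(b)
        empirical = np.mean(np.all(points <= b, axis=1))  # points in [0, b]
        volume = np.prod(b)                               # measure of [0, b]
        worst = max(worst, abs(empirical - volume))
    return worst
\end{verbatim}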
We use a $\left(1+20\right)$-EA, and in each mutation, we randomly add and delete edges, where the particular action and the particular edge are chosen uniformly at random.
When deleting edges, we do not allow to disconnect nodes, in which case we resample.
To aid the convergence, we utilise a success-based multiplicative update scheme that can lead to faster convergence~\cite{doerr2018parameterselection}. This scheme provides a dynamic mutation rate for the EA based on the performance of the offspring. If an iteration is successful, meaning an offspring is not worse than the current solution, the mutation rate is increased by a constant factor $A=2$; otherwise, it is decreased by a constant factor $b=0.5$. The initial per-edge mutation rate is $1/n$, where $n$ is the number of edges, i.e. the total number of events that are not FollowEvents.
In summary, our evolutionary approach to diversified community interaction works as follows. We pass an interaction graph as an edge list to our evolutionary algorithm, mutate the edges as described, compute the user’s similarity to add FollowEvents and then create our graph to compute PageRank and centrality for each user and map the users to the combination of events they were involved with. With these three features, we compute a graph's star-discrepancy score and, by means of our evolutionary algorithm, iteratively keep improving the graph.
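The outer loop of this evolutionary approach can be sketched as follows. The callables \texttt{mutate\_edges} and \texttt{star\_discrepancy} stand in for the mutation and evaluation steps described above; they, and the variable names, are assumptions of this illustration rather than part of our released code.
\begin{verbatim}
import copy

def evolve(edge_list, star_discrepancy, mutate_edges,
           generations=1000, offspring=20, A=2.0, b=0.5):
    """(1 + 20)-EA minimising the star-discrepancy of an interaction graph."""
    parent, parent_score = edge_list, star_discrepancy(edge_list)
    # Per-edge mutation rate; the paper uses 1/n with n the number of
    # non-Follow events, simplified here to the length of the edge list.
    rate = 1.0 / len(edge_list)
    for _ in range(generations):
        children = [mutate_edges(copy.deepcopy(parent), rate)
                    for _ in range(offspring)]
        scored = [(star_discrepancy(c), c) for c in children]
        best_score, best_child = min(scored, key=lambda sc: sc[0])
        if best_score <= parent_score:   # success: offspring not worse
            parent, parent_score = best_child, best_score
            rate = min(1.0, rate * A)
        else:                            # otherwise: decrease the rate
            rate = max(1e-6, rate * b)
    return parent
\end{verbatim}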
\subsection{Diversified Community Interaction}
We start by using the original edge list of 6,742 events to calculate the similarity between the 1,523 users, resulting in an edge list of 201,983 events with a star-discrepancy score of 0.540 in the three-dimensional feature space. Then, we use this as the initial definition of the community interaction on our server, and apply the previously described evolutionary approach. After 1,000 generations\footnote{Taking 1.5 hours on an Apple MacBook Pro (2020, M1 Chip ARM64 CPU 8 cores, 16 GB RAM). We implement multiprocessing when evaluating candidates to take advantage of all CPU cores of our machine.} the process results in an evolved edge list of 533,543 events with a corresponding star-discrepancy score of 0.086 for 1,523 users.
Figure~\ref{fig:convergence} shows the evolution over time. We note that the diversity improves quickly due to a large number of mutations. We also note that the number of mutations decreases but remains relatively high (at about 100), which indicates that diversity improvements are still frequent, as otherwise the number of mutations would be close to the minimal (enforced) 1.
\begin{figure}
\centering
\includegraphics[width=\linewidth,trim=0 0 0 0,clip]{fig13.pdf}
\vspace{-7mm}
\caption{Evolution of the interaction graph: discrepancy (of user behaviour) and number of mutations performed. As the mutation rate only provides a probability of mutation, the actual number of the mutations typically differs from multiples of two.}
\label{fig:convergence}
\end{figure}
\section{Execution (Stage 3/3)}\label{sec:eval}
In this section, we present our approach for executing and evaluating community interaction.
\subsection{Benchmarking the Evolutionary Approach}
A natural question is how to compare the Original and Evolved edge lists, given that they are of noticeably different sizes. To compare the two, we craft additional datasets using two approaches. The first approach creates a larger version (called ``Simple'') of the original edge list by copying only the existing events until the size of this simple version matches that of the evolved edge list. The second approach generates new connections at random until the edge list reaches the same size as the evolved one; the resulting community interaction is called ``Random''. In both approaches, we ensure that the number of FollowEvents is the same, as these events are considerably more numerous than the other types of events.
By reducing the potential impact of differences in size on the validity of the results, these approaches allow for the creation of comparable versions of the Original and Evolved interactions.
\begin{table}
\centering
\caption{Dataset comparison, frequency of events}\vspace{-2mm}
\label{table:alldatasets}
\begin{tabular}{ccccc}
\toprule
\textbf{Event type} & \textbf{Original} & \textbf{Simple} & \textbf{Random} & \textbf{Evolved}\\
\midrule
FollowEvent & 195241 & 515841 & 515841 & 515841 \\
PushEvent & 4234 & 11179 & 6943 & 6330\\
WatchEvent & 1206 & 3163 & 3815 & 4123\\
PullRequestEvent & 852 & 2186 & 3681 & 3718\\
ForkEvent & 450 & 1174 & 3263 & 3531\\
\midrule
\textbf{Total} & \textbf{201983} & \textbf{533543} & \textbf{533543} & \textbf{533543} \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering\vspace{-2mm}%
\includegraphics[width=0.41\linewidth,trim=650 0 0 0,clip]{Original_network_1k.png}
\includegraphics[width=0.395\linewidth,trim=720 0 0 20,clip]{Simple_network_1k.png}\\
(a) Original \hspace{20mm} (b) Simple \hspace{2mm}\;\\
\includegraphics[width=0.385\linewidth,trim=0 0 0 0,clip]{Random_network_1k.png}
\includegraphics[width=0.4\linewidth,trim=0 0 0 0,clip]{Evolved_network_1k.png}\\
(c) Random \hspace{19mm} (d) Evolved
\vspace{-1mm}\caption{Projection of interaction graphs into 2d. The Original and Simple graphs each contain a connected component with only two users that is not shown here.}
\label{fig:projection}
\end{figure}
\begin{figure}
\centering
\rotatebox{90}{\hspace{1mm} Evolved \qquad Random \qquad Simple \qquad Original}\
\includegraphics[width=0.961\linewidth,trim=0 0 0 0,clip]{3Dplus2D_All.png}%
\vspace{-3mm}%
\caption{Dataset comparison: user interaction based on interaction features. The 2d plots are projections of the 3d plots in the leftmost column. The value ranges are always $\left[0,1\right]$.}
\label{fig:featurespace}
\end{figure}
Table~\ref{table:alldatasets} shows a first comparison between the four interaction graphs: the Original one that is directly based on GitHub data, its Simple (but larger) version, the Random version, and the Evolved one.
Figures~\ref{fig:projection} and~\ref{fig:featurespace} add to this by presenting visualisations of the four communities.
Figure~\ref{fig:projection} shows a projection of the graphs into 2d. Edges refer to interactions between users and repositories.
Dot size represents the degree centrality of a node with larger dots indicating a higher degree centrality.
In addition, the colour of each dot represents the PageRank score of the node, where the colours range from green (less important) to yellow (higher score).
As we can see, the Original and the Simple ones are (subjectively) close in structure, while the Evolved and the Random ones are also similar due to the high level of connectedness, but the Evolved one is much more diverse in terms of the distribution of PageRank scores and degree centrality.
Figure~\ref{fig:featurespace} complements these observations by showing the three features used to calculate the star-discrepancy score for each dataset, i.e. the degree of centrality of each user, their PageRank score, and the combination of events they are involved with. The visualisations show that users in the Original and Simple datasets tend to cluster together and occupy a smaller space, while users in the evolved edge list and the random version of the original edge list appear to be more evenly distributed throughout the space. Interestingly, even the random version achieves a fairly diverse set of interactions, although degree centrality is much less covered by the random dataset than by the evolved dataset. Overall, this suggests that our evolutionary algorithm effectively improves the distribution of users in the feature space.
\subsection{Observing effects of community interactions}
To assess the impact that the different datasets have on the server, we consider the processing of the community interaction as an actual benchmark in itself: as the hundreds of thousands of events are processed between the 1,523 users on the server, we observe how the system behaves.
In the following, we present the workflow used when executing the event and we present our observations.
\subsubsection{Methodology}
We require an elaborate workflow (see Figure~\ref{fig:image3}) as the randomised events create a broad range of situations that need to be dealt with; they would otherwise simply result in a myriad of errors. Essentially, all types of events are first validated by checking GitLab~CE's database to see if the user triggering the event exists. If not, the user is created. The same process is followed for the user and/or repository targeted in the event. The flow then proceeds to the corresponding action for that event.
\begin{figure*}
\centering\vspace{-4mm}
\includegraphics[width=\linewidth]{Flowchart.pdf}
\vspace{-13mm}
\caption{Workflow used when processing interaction graphs in GitLab.}
\label{fig:image3}
\end{figure*}
A complex logic is required for pull request events, where we select or create a branch to submit a pull request. If there is already an open pull request on that branch, we try to merge it. Otherwise, we close the event. If the pull request is closed, we reopen it. To add some realism,
we use a corpus of words that we extracted from the original dataset, so when we create a commit or a pull request, we add a random text from this corpus, allowing the system to check how many lines of text were added or deleted following Git logic.
On the technical side, we use the previously described setup with GitLab~CE and the virtual machine.
Our GitLab API wrapper code implements the workflow and is used to load our datasets (Original, Evolved, Simple, and Random).
During the processing, we collect performance data from the internal Grafana dashboard panels and from the Prometheus database (see Section~\ref{sec:methodology}) included in the GitLab installation. As substantial development effort has gone into developing the evaluation environment, we make the four Docker images publicly available at \url{https://removed.for.review}.
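For completeness, performance data can be pulled from the bundled Prometheus instance through its HTTP API; the following Python sketch shows the general pattern, with the query string being only an example rather than the exact metric set we collect.
\begin{verbatim}
import requests

def fetch_metric(prometheus_url, query, start, end, step="60s"):
    """Query a range of samples from Prometheus' HTTP API."""
    resp = requests.get(
        f"{prometheus_url}/api/v1/query_range",
        params={"query": query, "start": start, "end": end, "step": step},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["result"]

# Example (metric names may differ per setup): CPU usage rate over one hour.
# series = fetch_metric("http://localhost:9090",
#                       'rate(node_cpu_seconds_total{mode!="idle"}[5m])',
#                       start=1676851200, end=1676854800)
\end{verbatim}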
The processing of all community interactions is time-consuming: the execution of the Original/Evolved/Simple/Random datasets on four identical virtual machines takes 9.9/33.3/27.2/31.7 hours.
\subsubsection{Effects on the system}
\begin{figure}
\centering\vspace{-1mm}%
\includegraphics[height=33mm,width=\linewidth]{processingCommunityInteractionsOVERTIME.png}%
\vspace{-3mm}
\caption{Processing of community interactions. The sharp drops at the rightmost end of some data series are due to data aggregation used by the Prometheus database.}
\label{fig:prometheus}
\end{figure}
First, Figure~\ref{fig:prometheus} shows --- over time --- the memory utilisation, the CPU saturation, and the requests per second.
As we can see, all four interaction graphs result in different workloads over time.
We can make a few major observations.
First, considering the memory consumption, the Evolved data set appears to result in fewer sharp increases than the other three; the same observation holds for the CPU saturation, where Simple and Random appear to have more and larger spikes than Evolved.
Second, while the Evolved and Random ones appear to affect the system in similar ways (with the respective data being very similar), the Simple one results in a higher CPU saturation while also being able to process more requests per second; this appears to be contradictory at first, and we can only conjecture that the Simple one exercises the system differently, possibly due to the higher proportion of PushEvents, but other explanations are possible, too.
Third, regarding the major drop in memory consumption by 4--8\% at about 12 hours into the processing, we conjecture that this is due to scheduled maintenance; interestingly, this drop does not seem to have obvious effects on the CPU saturation or on the number of processed requests per second.
\begin{figure}
\centering%
\includegraphics[width=\linewidth,trim=0 0 0 0,clip]{corr1.png}\\
\includegraphics[width=\linewidth,trim=0 0 0 0,clip]{corr2.png}
\vspace{-5mm}
\caption{Correlation of user features and resource utilisation. Shown are the averages for each of the 1523 users.}
\label{fig:correlation}\vspace{-2mm}
\end{figure}
In order to gain further insight into user behaviour and its impact on the system, we present a correlation analysis in Figure~\ref{fig:correlation} between three feature metrics (PageRank, Degree centrality, and Event types) and three resource utilisation metrics (CPU saturation, Memory saturation, and Latency), averaging the results for each user. Despite the fluctuation in performance over time illustrated in Figure~\ref{fig:prometheus}, our analysis identifies several statistically significant differences.
For instance, while all three community interactions involve changes to the original dataset, our findings demonstrate that sometimes these evolved interactions impact the server in a manner that is more similar to the original dataset rather than simple or random modifications.
Specifically, we observe strong negative Spearman correlations (coefficients $-0.69 \leq r \leq -0.62 $, $p<1\%$) between PageRank and CPU, Memory, and Latency in the original data, which is maintained in the evolved dataset. Conversely, no correlation ($|r|<0.1$) is observed in effects of Simple and Random. This observation leads us to hypothesise that a high number of connections between irrelevant users may increase system load, as the system has to execute expensive tasks such as creating these users and connections. On the other hand, relevant users with many connections have already executed several tasks that do not need to be repeated, such as creating new repositories, which could reduce system load.
With regards to degree centrality, while we can make out visual differences in the data, we do not observe any outstanding relationships when considering the Spearman correlation; we only observe similar ranges of CPU and memory utilisation and latency between the original, simple, and evolved datasets. The random dataset occupies a smaller range, while the evolved dataset looks more spread out due to the optimisation. Finally, the data on the event types overlaps considerably due to the discrete nature of the event types; the correlations are weak (if present at all) at $|r|<0.2$.
\subsubsection{Characterisation of the Performance Indicators}
In our first attempt to evaluate the potential usefulness of performance indicators during the evolution of community interaction in-vivo on a server, we have identified three main observations.
First, we have noticed that neighboring data points in Figure~\ref{fig:prometheus} often show variation due to what seems to be random noise, making it challenging to compare marginal differences that may be impacted by factors outside of our control. In addition, we can see from Figure~\ref{fig:correlation} that users who are similar, such as having the same PageRank or degree centrality, can still have different experiences in terms of server load, making it potentially difficult to develop a surrogate model.
Second, we have observed that disruptive events can be triggered by processes running on the virtual machine or by GitLab CE's own management, which can have a significant impact on the performance of the server.
Third, we have noted significant changes in the performance of the server at the start of the evaluation, where there is a sudden change observed after just a few minutes or hours. These changes may be indicative of major state changes that occur during the evolution of community interaction on the server.
\subsubsection{Found limitation}
Our case study has uncovered a crucial issue with the server that could not have been detected through a simple repetition of the Simple dataset. During our investigations, we found that GitLab imposes limits to ensure optimal performance. In particular, our initial experiments revealed that the maximum number of followed users is restricted to 300. This was confirmed by a review of public issue comments in GitLab, which indicated that this limit was in place to prevent the activity page for followed users from failing to load~\cite{Issue360755}. In our case, tests resulted in HTTP 304 errors when trying to load follow events for users who were following more than 300 users, causing the event creation to fail.
To resolve this issue, we manually edited GitLab's source code to increase the limit to 1,523, the total number of users in our dataset.
It is important to mention that for our use case, we do not access the activity page for followed users, so we conjecture that our modification of the source code does not have any adverse impact on our results. Hence, we see this as a good illustration of how diverse data can outperform original data for testing configurations: a broader range of data points offers more chances of uncovering issues, making it a valuable asset for any testing process.
\section{Efficient evolution and evaluation}
In conclusion of this first case study of Socialz, we would like to bring attention to two important research questions: how can we efficiently evolve and evaluate community interaction?
Efficiently evolving community interaction is crucial as it allows for iterative improvements and explorations based on observed data, increasing the chances of identifying social bugs.
It may be beneficial either
(1) to evolve interactions in-vivo, i.e. to evaluate small-scale interactions ``live'' on a running server, or
(2) to run comprehensive evaluations for each large community interaction, akin to those performed in Section~\ref{sec:eval}.
However, small-scale evaluations come with the challenge of affecting the virtual machine and creating unintended side-effects, such as triggering memory clean-ups or altering the system's performance over time, as seen in Figure~\ref{fig:prometheus}.
In contrast to this, comprehensive analyses of the entire community interaction (which may involve hundreds of thousands of events like in our case) are time-consuming, taking over one day each.
As a middle way between these two extremes, differential evaluations may be a solution, but only if the effects of mutations can be attributed efficiently and accurately. Currently, this presents a significant challenge both in practice and algorithmically.
The question of how much to evolve community interaction is closely tied to the question of which performance indicators should be used. As our data analysis has shown, there is a significant amount of noise present in the server under examination. This is a common issue in complex systems such as Android phones~\cite{bokhari2020validation}, where the targeted application shares resources with multiple processes and modern multi-core hardware. Existing validation methods, such as complete rollbacks to known states or extended repetition, are not practical due to the time and resources required. Therefore, there is a need for schemes that allow for reliable and efficient attribution of community interactions to their effects.
\section{Conclusions and Future Work}
This study presents a new social fuzz testing method called Socialz, which is based on publicly accessible data from GitHub. The approach uses evolutionary computation to diversify community interactions, and then evaluates the results on a GitLab~CE server.
The key takeaways of this research are:
\begin{itemize}
\item Social fuzz testing is a feasible approach, although the initial setup requires significant effort.
\item Evolutionary diversity optimisation can generate community interactions that are significantly different from the original data or random data, potentially uncovering social bugs.
\item Our testing also revealed a limitation that simple data replay could not.
\end{itemize}
In addition to the already outlined research directions on efficient evolution and evaluation, future work in this area offers endless possibilities, such as:
\begin{itemize}
\item Further characterisation and hybridisation of sub-communities.
\item Exploration of additional community interactions and the related features.
\item Integration of Socialz with traditional fuzz testing techniques that target code-level or system-level interactions.
\end{itemize}
To support future research in social testing, the code, data, and virtual machines used in this study are available for public use at \url{https://removed.for.review}.
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.08675",
"language": "en",
"timestamp": "2023-02-20T02:06:19",
"url": "https://arxiv.org/abs/2302.08675",
"yymm": "2302"
} | \section{Introduction}
Document-level relation extraction (DocRE) has been recognized as a more realistic and challenging task compared with its sentence-level counterpart~\cite{peng-etal-2017-cross,verga-etal-2018-simultaneously,yao-etal-2019-docred}.
In DocRE, an entity can have multiple mentions scattered throughout a document, and relationships can exist between entities in different sentences.
Therefore, DocRE models are expected to apply information filtering to long texts by focusing more on sentences relevant to the current decision of relation extraction (RE) and less on irrelevant ones.
To this end, existing studies retrieve \textit{supporting evidence} (evidence hereafter, \citealp{yao-etal-2019-docred}), a set of sentences necessary for humans to identify the relation between an entity pair~\cite{huang-etal-2021-entity,huang-etal-2021-three,xie-etal-2022-eider,xiao-etal-2022-sais,xu-etal-2022-document}.
As shown in Figure~\ref{fig:example}, to decide the \textit{present in work} relation between \textit{Prince Edmund} and \textit{Blackadder}, reading sentences 1 and 2 should be sufficient.
Although sentences 5 and 6 also mention the subject, they are irrelevant to the relation decision.
Evidence of the relation triple (\textit{Prince Edmund}, \textit{present in work}, \textit{Blackadder}) is thus defined as sentences 1 and 2.
\begin{figure}[t]
\centering
\includegraphics[width=.46\textwidth]{figures/example.pdf}
\caption{Example document and one of the relation triples from DocRED, where the $i$-th sentence is marked with [i] in the beginning. Mentions in bold italics are those of subjects and objects, whereas entity mentions other than subject and object are underlined.}
\label{fig:example}
\end{figure}
Despite the usefulness of evidence, automatic evidence retrieval (ER) faces two major issues.
Firstly, the existing approaches for ER are memory-inefficient.
Previous systems tackle ER and DocRE as separate tasks, introducing extra neural network layers to learn ER with DocRE jointly~\cite{huang-etal-2021-entity,xie-etal-2022-eider,xiao-etal-2022-sais}.
The ER module typically involves a bilinear classifier that receives entity-pair-specific embeddings and sentence embeddings as the input.
To compute the evidence score of each sentence for each entity pair, the module must walk through all (entity pair, sentence) combinations.
The computations significantly increase memory consumption, particularly in documents with numerous sentences and entities.
Secondly, the availability of human annotations of evidence is limited.
To make matters worse, gold training data for DocRE are more expensive to annotate than those for their sentence-level counterpart.
Despite the difficulty of obtaining human annotations, acquiring evidence annotations at a low cost has been underexplored.
Although silver training data for RE can be collected automatically by distant supervision~\cite{mintz-etal-2009-distant,yao-etal-2019-docred}, locating evidence for a silver RE instance in the document is nontrivial.
This work aims at alleviating these issues to improve the usage of ER in DocRE.
To reduce the memory consumption, we propose \textbf{D}ocument-level \textbf{R}elation \textbf{E}xtraction with \textbf{E}vidence-guided \textbf{A}ttention \textbf{M}echanism (DREEAM), a memory-efficient approach for incorporating DocRE with ER.
We adopt ATLOP~\cite{zhou2021atlop}, a Transformer-based DocRE system widely used in previous studies~\cite{xie-etal-2022-eider,tan-etal-2022-document,xiao-etal-2022-sais}, as the backbone.
Instead of introducing an external ER module, we directly guide the DocRE system to focus on evidence.
Specifically, we supervise the computation of entity-pair-specific local context embeddings.
The local context embedding, formed as a weighted sum among all token embeddings based on attention from the encoder, is trained to assign higher weights to evidence and lower weights otherwise.
To compensate for the shortage of evidence annotations, we propose performing ER under a weakly-supervised setting.
Specifically, we design a strategy to perform self-training with DREEAM on massive, unlabeled data.
The data is obtained from distant supervision (distantly-supervised data hereafter) and thus is automatically annotated with relation labels but not evidence labels.
We expect the knowledge about ER learned from the human-annotated data to generalize and grow on the distantly-supervised data.
To enable self-training, we first adopt a teacher model trained on human-annotated data to retrieve silver evidence from distantly-supervised data.
Next, we train a student model on the data for RE while learning ER from the silver evidence.
The student model is further finetuned on the human-annotated data to refine its knowledge.
Experiments on the DocRED benchmark~\cite{yao-etal-2019-docred} show that with the help of ER self-training, DREEAM exhibits state-of-the-art performance on both RE and ER.
In short, the contributions of this work are: (1) We propose DREEAM, a memory-efficient approach to incorporate evidence information into Transformer-based DocRE systems by directly guiding the attention.
DREEAM does not introduce any extra trainable parameters for ER while achieving good performance on both RE and ER.
(2) We propose incorporating distantly-supervised RE training with ER self-training, which improves the performance on both tasks.
To the best of our knowledge, DREEAM is the first DocRE system that enables joint training of ER and RE under a weakly-supervised setting.
\section{Preliminary}
\subsection{Problem Formulation}
Given a document $D$ containing sentences $\mathcal{X}_D = \{x_i\}_{i=1}^{|\mathcal{X}_D|}$ and entities $\mathcal{E}_D = \{e_i\}_{i=1}^{|\mathcal{E}_D|}$, DocRE aims to predict all possible relations between every entity pair.
Each entity $e\in \mathcal{E}_D$ is mentioned at least once in $D$, with all its proper-noun mentions denoted as $\mathcal{M}_e=\{m_i\}_{i=1}^{|\mathcal{M}_e|}$.
Each entity pair $(e_s, e_o)$ can hold multiple relations, comprising a set $\mathcal{R}_{s,o} \subset \mathcal{R}$, where $\mathcal{R}$ is a pre-defined relation set.
We let the set $\mathcal{R}$ include $\epsilon$, which stands for \textit{no-relation}.
Additionally, if an entity pair $(e_s, e_o)$ carries a valid relation $r\in \mathcal{R} \backslash \{\epsilon\}$, ER aims to retrieve the supporting evidence $\mathcal{V}_{s,r,o} \subseteq \mathcal{X}_D$ that are sufficient to predict the triplet $(e_s, r, e_o)$.
\subsection{ATLOP}
\label{sec:model}
This section reviews ATLOP, the backbone of our proposed method.
\paragraph{Text Encoding} Before encoding, a special token \texttt{*} is inserted at the beginning and the end of each entity mention.
Then, tokens $\mathcal{T}_D=\{t_i\}_{i=1}^{|\mathcal{T}_D|}$ within document $D$ are encoded with a Transformer-based pretrained language model (PLM, \citealp{NIPS2017attention}) to obtain token embeddings and cross-token dependencies.
Although the original ATLOP adopts only the last layer, this work takes the average of the last three layers\footnote{Pilot experiments showed that using the last 3 layers yields better performance than using only the last layer.}.
Specifically, for a PLM with $d$ hidden dimensions at each transformer layer, the token embeddings $\bm{H}$ and cross-token dependencies $\bm{A}$ are computed as:
\begin{equation}
\label{eq:text_enc}
\bm{H}, \bm{A} = \textrm{PLM}(\mathcal{T}_D),
\end{equation}
where $\bm{H} \in \mathbb{R}^{|\mathcal{T}_D| \times d}$ averages over hidden states of each token from the last three layers and $\bm{A} \in \mathbb{R}^{|\mathcal{T}_D| \times |\mathcal{T}_D|}$ averages over attention weights of all attention heads from the last three layers.
\paragraph{Entity Embedding} The entity embedding $\bm{h}_{e} \in \mathbb{R}^d$ for each entity $e$ with mentions $\mathcal{M}_{e}=\{m_i\}_{i=1}^{|\mathcal{M}_e|}$ is computed by collecting information from all its mentions.
Specifically, \texttt{logsumexp} pooling, which has been empirically shown to be effective in previous studies~\cite{jia-etal-2019-document}, is adopted as: $\bm{h}_{e} = \log\sum_{i=1}^{|\mathcal{M}_e|}\exp(\bm{H}_{m_i})$, where $\bm{H}_{m_i}$ is the embedding of the special token \texttt{*} at the starting position of mention $m_i$.
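For illustration, a minimal PyTorch-style sketch of this pooling step is shown below; the tensor names and shapes are our own assumptions for exposition rather than identifiers from any released implementation.
\begin{verbatim}
import torch

def entity_embedding(H: torch.Tensor, mention_starts: list) -> torch.Tensor:
    """logsumexp pooling over the '*' tokens of an entity's mentions.

    H              : (n_tok, d) token embeddings from the encoder
    mention_starts : indices of the '*' token preceding each mention
    """
    mention_embs = H[mention_starts]             # (|M_e|, d)
    return torch.logsumexp(mention_embs, dim=0)  # (d,)
\end{verbatim}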
\paragraph{Localized Context Embedding} To better utilize information from long texts, ATLOP introduces entity-pair specified localized context embeddings.
Intuitively, for entity pair $(e_s,e_o)$, tokens important to both $e_s$ and $e_o$ should contribute more to the embedding.
The importance of each token is determined by the cross-token dependencies $\bm{A}$ obtained from Equation~\ref{eq:text_enc}.
For entity $e_s$, the importance of each token is computed using the cross-token dependencies of all its mentions $\mathcal{M}_{e_s}$.
First, ATLOP collects and averages over the attention $\bm{A}_{m_i} \in \mathbb{R}^{|\mathcal{T}_D|}$ at the special token \texttt{*} before each mention $m_i \in \mathcal{M}_{e_s}$ to get $\bm{a}_s \in \mathbb{R}^{|\mathcal{T}_D|}$ as the importance of each token for entity $e_s$.
Then, the importance of each token for an entity pair $(e_s,e_o)$, noted as $\bm{q}^{(s,o)} \in \mathbb{R}^{|\mathcal{T}_D|}$, is computed from $\bm{a}_s$ and $\bm{a}_o$ as:
\begin{equation}
\bm{q}^{(s,o)} = \frac{\bm{a}_s \circ \bm{a}_o}{\bm{a}_s^\top \bm{a}_o}, \label{eq:context_emb_weight}
\end{equation}
where $\circ$ stands for the Hadamard product.
$\bm{q}^{(s,o)}$ is thus a distribution that reveals the importance of each token for entity pair $(e_s, e_o)$.
Subsequently, ATLOP performs a localized context pooling,
\begin{equation}
\bm{c}^{(s,o)} = \bm{H}^\top \bm{q}^{(s,o)},\label{eq:context_rep}
\end{equation}
where $\bm{c}^{(s,o)}\in \mathbb{R}^d$ is a weighted average over all token embeddings.
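A minimal sketch of Equations~\ref{eq:context_emb_weight} and~\ref{eq:context_rep} is given below, assuming the head-averaged attention matrix $\bm{A}$ and the token embeddings $\bm{H}$ are already available; the variable names are illustrative only.
\begin{verbatim}
import torch

def localized_context(H, A, starts_s, starts_o, eps=1e-10):
    """Localized context pooling for one entity pair (e_s, e_o).

    H : (n_tok, d)     token embeddings
    A : (n_tok, n_tok) head-averaged attention weights
    starts_s, starts_o : '*' token indices of the two entities' mentions
    """
    a_s = A[starts_s].mean(dim=0)   # (n_tok,) token importance for e_s
    a_o = A[starts_o].mean(dim=0)   # (n_tok,) token importance for e_o
    q = a_s * a_o                   # Hadamard product
    q = q / (q.sum() + eps)         # normalized q^{(s,o)}
    c = H.T @ q                     # (d,) localized context embedding
    return q, c
\end{verbatim}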
\paragraph{Relation Classification} To predict the relation between entity pair $(e_s, e_o)$, ATLOP first generates context-aware subject and object representations:
\begin{align}
& \bm{z}_s = \tanh(\bm{W}_s[\bm{h}_{e_s};\bm{c}^{(s,o)}] + \bm{b}_s) \\
& \bm{z}_o = \tanh(\bm{W}_o[\bm{h}_{e_o};\bm{c}^{(s,o)}] + \bm{b}_o),
\end{align}
where $[\cdot;\cdot]$ represents the concatenation of two vectors and $\bm{W}_s, \bm{W}_o \in \mathbb{R}^{d \times 2d}, \bm{b}_s, \bm{b}_o \in \mathbb{R}^d$ are trainable parameters.
Then, a bilinear classifier\footnote{In practice, a grouped bilinear classifier~\cite{Zheng2019LearningDB} is applied to save memory.} is applied on the context-aware representations to compute the relation scores $\bm{y}^{(s,o)} \in \mathbb{R}^{|\mathcal{R}|}$:
\begin{equation}
\bm{y}^{(s,o)} = \bm{z}_s^\top\mathsf{W}_r\bm{z}_o + \bm{b}_r,
\end{equation}
where $\mathsf{W}_r \in \mathbb{R}^{|\mathcal{R}| \times d\times d}$ and $\bm{b}_r \in \mathbb{R}^{|\mathcal{R}|}$ are trainable parameters.
The probability that relation $r\in \mathcal{R}$ holds between entity $e_s$ and $e_o$ is thus $\mathrm{P}(r|s,o) = \sigma(y^{(s,o)}_r)$, where $\sigma$ is the sigmoid function.
\paragraph{Loss Function} ATLOP proposes Adaptive Thresholding Loss (ATL) that learns a dummy threshold class TH during training, serving as a dynamic threshold for each relation class $r\in\mathcal{R}$.
For each entity pair $(e_s, e_o)$, ATL forces the model to yield scores above TH for positive relation classes $\mathcal{R}_P$ and scores below TH for negative relation classes $\mathcal{R}_N$, formulated as below:
\begin{equation}
\label{eq:atl}
\begin{split}
\mathcal{L}_{\mathrm{RE}} = & - \sum_{s \neq o}\sum_{r \in \mathcal{R}_P} \log\frac{\exp(y^{(s,o)}_r)}{\sum_{r' \in \mathcal{R}_P \cup \{\mathrm{TH}\}}\exp(y_{r'}^{(s,o)})}\\
& - \log\frac{\exp(y^{(s,o)}_{\mathrm{TH}})}{\sum_{r' \in \mathcal{R}_N \cup \{\mathrm{TH}\} }\exp(y_{r'}^{(s,o)})}.
\end{split}
\end{equation}
The idea of setting a threshold class is similar to the Flexible Threshold~\cite{chen-etal-2020-hierarchical}.
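The following PyTorch-style sketch illustrates one way to implement ATL with class masking; averaging over entity pairs is a design choice of this sketch, not necessarily of the original implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def adaptive_thresholding_loss(logits, labels, th_index=0):
    """Sketch of ATL for a batch of entity pairs.

    logits : (n_pairs, n_classes) relation scores; column th_index is TH
    labels : (n_pairs, n_classes) float multi-hot gold labels (TH column 0)
    """
    th_mask = torch.zeros_like(labels)
    th_mask[:, th_index] = 1.0

    # positive part: rank every positive class above TH
    pos_mask = labels + th_mask                              # R_P plus TH
    pos_logits = logits.masked_fill(pos_mask < 0.5, -1e30)
    loss_pos = -(F.log_softmax(pos_logits, dim=-1) * labels).sum(dim=-1)

    # negative part: rank TH above every negative class
    neg_mask = 1.0 - labels                                  # R_N plus TH
    neg_logits = logits.masked_fill(neg_mask < 0.5, -1e30)
    loss_neg = -F.log_softmax(neg_logits, dim=-1)[:, th_index]

    return (loss_pos + loss_neg).mean()
\end{verbatim}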
\section{Proposed Method: DREEAM}
\begin{figure*}
\centering
\subfigure[Model architecture of DREEAM.]{\includegraphics[width=.47\textwidth]{figures/model_.pdf}}
\hspace{.03\textwidth}
\subfigure[Information flow of self-training using DREEAM.]{\includegraphics[width=.47\textwidth]{figures/flow.pdf}}
\caption{Model architecture and the information flow during self-training. In (a), gold/silver evidence distributions come from human-annotations/the teacher model. In (b), arrows represent the direction of knowledge transfer.}
\label{fig:model}
\end{figure*}
To perform information filtering, ATLOP computes a localized context embedding based on attention weights from the Transformer-based encoder.
The rationale is that cross-token dependencies are encoded as attention weights in Transformer layers.
In this work, we propose DREEAM to enhance ATLOP with evidence.
In addition to the automatically-learned cross-token dependencies, the attention modules are supervised to concentrate more on evidence sentences and less on others.
DREEAM can be employed for both supervised and self-training, sharing the same architecture with different supervisory signals, as shown in Figure~\ref{fig:model} (a).
Inspired by~\citet{tan-etal-2022-document}, we propose a pipeline to enable self-training of ER, with the data flow shown in Figure~\ref{fig:model} (b).
First, we train a teacher model on human-annotated data with gold relations and evidence labels.
Next, we apply the trained teacher model to predict silver evidence for the distantly-supervised data.
Then, we train a student model on the distantly-supervised data, with ER supervised by the silver evidence.
Finally, we finetune the student model on the human-annotated data to refine its knowledge.
The rest of this section introduces the training processes of the teacher and student models, followed by the inference process.
\subsection{Teacher Model}
\label{sec:teacher}
For each entity pair $(s,o)$, we guide $\bm{q}^{(s,o)}$ with an evidence distribution to help generate an evidence-centered localized context embedding.
While $\bm{q}^{(s,o)}$ yields token-level importance for $e_s$ and $e_o$, we can obtain only sentence-level evidence from human annotations, as shown in Figure~\ref{fig:example}.
To alleviate this gap, we sum the weight of each token within a sentence.
Specifically, for a sentence $x_i \in \mathcal{X}_D$ consisting of tokens $t_{\textrm{START}(x_i)}, \dots, t_{\textrm{END}(x_i)}$, we obtain the sentence-level importance as:
\begin{equation}
p_i^{(s,o)} = \sum_{j=\textrm{START}(x_i)}^{\textrm{END}(x_i)}q_j^{(s,o)}.
\label{eq:sent_imp}
\end{equation}
Collecting the importance of all sentences yields a distribution $\bm{p}^{(s,o)} \in \mathbb{R}^{|\mathcal{X}_D|}$ that expresses the importance of each sentence within the document.
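As a small illustration of Equation~\ref{eq:sent_imp}, the token-level weights can be aggregated per sentence as follows; sentence spans are assumed to be given as half-open token offsets.
\begin{verbatim}
import torch

def sentence_importance(q, sent_spans):
    """Aggregate token importance q^{(s,o)} into sentence importance p^{(s,o)}.

    q          : (n_tok,) token-level importance for one entity pair
    sent_spans : list of (start, end) token offsets per sentence, end exclusive
    """
    return torch.stack([q[start:end].sum() for start, end in sent_spans])
\end{verbatim}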
We further supervise $\bm{p}^{(s,o)}$ for each entity pair $(e_s, e_o)$ using a human-annotated evidence distribution computed from gold evidence.
First, we define a binary vector $\bm{v}^{(s,r,o)} \in \mathbb{R}^{|\mathcal{X}_D|}$ for each valid relation label $r \in \mathcal{R}_{s,o} \subset \mathcal{R} \backslash \{\epsilon\}$ that records whether each sentence $x_i \in \mathcal{X}_D$ is evidence of the relation triple $(e_s, r, e_o)$ or not.
For example, if $x_i$ is evidence of $(e_s, r, e_o)$, then $v^{(s,r,o)}_i$ is set to 1, and to 0 otherwise.
Next, we marginalize over all valid relations and normalize the marginalized vector to obtain $\bm{v}^{(s,o)}$:
\begin{equation}
\bm{v}^{(s,o)} = \frac{\sum_{r\in \mathcal{R}_{s,o}}\bm{v}^{(s,r,o)}}{\sum_{r\in\mathcal{R}_{s,o}}{\bm{1}^\top \bm{v}^{(s,r,o)}}},
\label{eq:evi_human}
\end{equation}
where $\bm{1} = (1,1,\dots,1) \in \mathbb{R}^{|\mathcal{X}_D|}$ is an all-ones vector.
The rationale behind Equation~\ref{eq:evi_human} is that modules before the relation classifier are not explicitly aware of specific relation types.
We thus guide attention modules within the encoder to produce relation-agnostic token dependencies.
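A sketch of Equation~\ref{eq:evi_human} is shown below; the binary evidence vectors are assumed to be stacked into a matrix with one row per valid relation.
\begin{verbatim}
import torch

def gold_evidence_distribution(evidence_masks):
    """Relation-agnostic evidence distribution v^{(s,o)} for one entity pair.

    evidence_masks : (n_valid_rel, n_sent) binary matrix; row r marks the
                     evidence sentences of the triple (e_s, r, e_o)
    """
    marginal = evidence_masks.sum(dim=0)   # marginalize over valid relations
    return marginal / marginal.sum()       # normalize to a distribution
\end{verbatim}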
\paragraph{Loss Function} Our purpose is to guide $\bm{p}^{(s,o)}$ with human evidence $\bm{v}^{(s,o)}$ to generate an evidence-focused localized context embedding $\bm{c}^{(s,o)}$.
To achieve this, we train the model with Kullback-Leibler (KL) Divergence loss, minimizing the statistical distance between $\bm{p}^{(s,o)}$ and $\bm{v}^{(s,o)}$:
\begin{equation}
\mathcal{L}_{\mathrm{ER}}^{\rm gold} = D_{\rm KL}( \bm{v}^{(s,o)} || \bm{p}^{(s,o)}).
\label{eq:loss_evi}
\end{equation}
During training, we balance the effect of ER loss with RE loss using a hyper-parameter $\lambda$:
\begin{equation}
\mathcal{L}^{\rm gold} = \mathcal{L}_{\mathrm{RE}} + \lambda\mathcal{L}_{\mathrm{ER}}^{\rm gold}.\label{eq:loss_gold}
\end{equation}
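A minimal sketch of the gold ER loss and its combination with the RE loss is given below; averaging the KL divergence over entity pairs is an assumption of this sketch.
\begin{verbatim}
import torch

def teacher_loss(p_sent, v_gold, re_loss, lam=0.1, eps=1e-10):
    """Gold ER loss (KL divergence) combined with the RE loss.

    p_sent  : (n_pairs, n_sent) predicted sentence importance p^{(s,o)}
    v_gold  : (n_pairs, n_sent) gold evidence distributions v^{(s,o)}
    re_loss : scalar relation-extraction loss (e.g., ATL)
    lam     : balancing hyper-parameter lambda
    """
    # D_KL(v || p) = sum_i v_i * (log v_i - log p_i), averaged over pairs
    kl = (v_gold * ((v_gold + eps).log() - (p_sent + eps).log())).sum(dim=-1)
    return re_loss + lam * kl.mean()
\end{verbatim}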
\subsection{Student Model}
We employ the system trained on human-annotated data as a teacher model to support ER self-training on massive data.
The data, obtained from relation distant-supervision~\cite{mintz-etal-2009-distant}, contains noisy labels for RE but no information for ER.
We train a student model on the data.
Supervision of the student model, similar to that of the teacher model, consists of two parts:
an RE binary cross-entropy loss and an ER self-training loss.
In general, predictions from the teacher model are adopted as the supervisory signal for ER training.
First, we let the teacher model infer on the distantly-supervised data, thereby yielding an evidence distribution over tokens $\hat{\bm{q}}^{(s,o)}$ for each entity pair $(e_s,e_o)$.
Next, we train the student model to reproduce $\hat{\bm{q}}^{(s,o)}$ for each entity pair $(e_s,e_o)$.
\paragraph{Loss Function} The objectives of self-training are identical to those of supervised training.
We train ER of the student model using a KL-divergence loss similar to Equation~\ref{eq:loss_evi}:
\begin{equation}
\mathcal{L}^{\rm silver}_{\mathrm{ER}} = D_{\rm KL} (\hat{\bm{q}}^{(s,o)} || \bm{q}^{(s,o)}),
\label{eq:loss_evi_silver}
\end{equation}
where $\bm{q}^{(s,o)}$ is the student model's evidence distribution over tokens regarding entity pair $(e_s,e_o)$, computed from Equation~\ref{eq:context_emb_weight}.
There are two notable differences between $\mathcal{L}^{\rm silver}_{\mathrm{ER}}$ and $\mathcal{L}_{\mathrm{ER}}^\mathrm{gold}$.
Firstly, the supervisory signal of $\mathcal{L}_{\mathrm{ER}}^\mathrm{gold}$ is sentence-level, while that of $\mathcal{L}^{\rm silver}_{\mathrm{ER}}$ is token-level.
The gap results from the availability of token-level evidence distributions.
On human-annotated data, it is non-trivial to obtain token-level evidence distributions from sentence-level annotations.
On distantly-supervised data, however, the evidence distribution over tokens can be easily obtained from predictions of the teacher model.
We thus adopt token-level evidence distributions to provide supervision from a micro perspective for ER self-training.
Secondly, $\mathcal{L}_{\mathrm{ER}}^\mathrm{gold}$ is computed only on entity pairs with valid relation(s), while $\mathcal{L}_{\mathrm{ER}}^\mathrm{silver}$ is computed over all entity pairs within the document.
The design choice is based on the low reliability of relation labels on distantly-supervised data.
As these relation labels are collected automatically, it is possible that some of the annotated relations are irrelevant to the document.
Therefore, it is hard to tell which relations are valid and which are not from the automatic annotations.
For this reason, we compute the loss from all entity pairs to prevent missing important instances.
The overall loss is balanced by the same hyper-parameter $\lambda$ in Equation~\ref{eq:loss_gold}:
\begin{equation}
\mathcal{L}^{\rm silver} = \mathcal{L}_{\mathrm{RE}} + \lambda\mathcal{L}_{\mathrm{ER}}^{\rm silver}.
\end{equation}
After training on the distantly-supervised data, the student model is further finetuned using the human-annotated data to refine its knowledge about DocRE and ER with reliable supervisory signals.
\subsection{Inference}
Following~\citet{zhou2021atlop}, we apply adaptive thresholding to obtain RE predictions, selecting relation classes with scores higher than the threshold class as predictions.
For ER, we apply static thresholding and choose sentences with importance higher than a pre-defined threshold as evidence.
We further incorporate the \textbf{inference-stage fusion} strategy proposed by~\citet{xie-etal-2022-eider}.
Specifically, for each predicted relation triple $(e_s, r, e_o)$ associated with evidence prediction $\mathcal{V}_{s,r,o}$, we create a pseudo-document $\hat{D}_{s,r,o}$ by collecting only evidence sentences $x_i \in \mathcal{V}_{s,r,o}$.
Then, we feed pseudo-documents into the trained model to re-score the relation triples.
To aggregate the predictions from the pseudo-documents and the entire document, we adopt a blending layer that contains only one parameter $\tau$ representing a threshold.
Each triple $(e_s, r, e_o)$ is chosen as the final prediction only if the summation of its scores on the entire document and pseudo-documents is higher than $\tau$.
We adjust $\tau$ to minimize the binary cross-entropy loss of RE on the development set.
For more details, we refer the readers to the original paper~\cite{xie-etal-2022-eider}.
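For concreteness, the blending decision can be sketched as follows; the dictionary-based interface is a simplification for exposition, not the original implementation.
\begin{verbatim}
def fuse_predictions(score_doc, score_pseudo, tau):
    """Blend scores from the full document and the pseudo-documents.

    score_doc, score_pseudo : dicts mapping a triple (s, r, o) to its score
    tau                     : blending threshold tuned on the dev set
    """
    triples = set(score_doc) | set(score_pseudo)
    return {t for t in triples
            if score_doc.get(t, 0.0) + score_pseudo.get(t, 0.0) > tau}
\end{verbatim}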
\section{Experiments}
To evaluate DREEAM, we conduct experiments under supervised and weakly-supervised settings.
\subsection{Setting}
\begin{table}[t]
\centering
\small
\begin{tabular}{l|rr}
\Xhline{3\arrayrulewidth}
\multicolumn{1}{c|}{\textbf{Statistics}} & \multicolumn{1}{c}{Human} & \multicolumn{1}{c}{Distant} \\
\Xhline{2\arrayrulewidth}
\# of documents & 3,053/998/1,000 & 101,873\\
\# of relation types & 97 & 97 \\
Avg. \# of ent. per doc. & 19.5 & 19.3 \\
Avg. \# of sent. per doc. & 8.0 & 8.1\\
Avg. \# of ment. per ent. & 1.3 & 1.3 \\
Avg. \# of rel. per doc. & 12.5 & 14.8 \\
Avg. \# of evi. per rel. & 1.6 & -\\
\Xhline{3\arrayrulewidth}
\end{tabular}
\caption{Data statistics of DocRED. \textit{Human} stands for human-annotated data and \textit{Distant} stands for distantly-supervised data. The abbreviations \textit{doc.}, \textit{ent.}, \textit{sent.}, \textit{ment.}, \textit{rel.}, and \textit{evi.} stand for document, entity, sentence, mention, relation, and evidence sentences, respectively.}
\label{tab:dataset}
\end{table}
\begin{table*}[!t]
\centering
\small
\begin{tabular}{lccccccc}
\Xhline{3\arrayrulewidth}
& & \multicolumn{3}{c}{Dev} & \multicolumn{3}{c}{Test} \\
Method & PLM & Ign F1 & F1 & Evi F1 & Ign F1 & F1 & Evi F1 \\
\Xhline{3\arrayrulewidth}
\multicolumn{8}{l}{\textbf{(a) without Distantly-Supervised Data}} \\
SSAN~\cite{xu-etal-2021-ssan} & BERT\textsubscript{base} & 57.03 & 59.19 & - & 55.84 & 58.16 & -\\
ATLOP~\cite{zhou2021atlop} & BERT\textsubscript{base} & 59.22 & 61.09 & - & 59.31 & 61.30 & -\\
E2GRE~\cite{huang-etal-2021-entity} & BERT\textsubscript{base} & 55.22 & 58.72 & 47.12 & - & - & - \\
DocuNet~\cite{zhang-etal-2021-document} & BERT\textsubscript{base} & 59.86 & 61.83 & - & 59.93 & 61.86 & - \\
EIDER~\cite{xie-etal-2022-eider} & BERT\textsubscript{base} & 60.51 & 62.48 & 50.71 & 60.42 & 62.47 & 51.27 \\
SAIS~\cite{xiao-etal-2022-sais} & BERT\textsubscript{base} & 59.98 & 62.96 & 53.70 & 60.96 & 62.77 & 52.88 \\
\hdashline
DREEAM (teacher) & \multirow{2}{*}{BERT\textsubscript{base}} & 59.60\textsubscript{\textpm{0.15}} & 61.42\textsubscript{\textpm{0.15}} & \multirow{2}{*}{52.08\textsubscript{\textpm{0.10}}} & 59.12 & 61.13 & \multirow{2}{*}{51.71} \\
+ Inference-stage Fusion & & 60.51\textsubscript{\textpm{0.06}} & 62.55\textsubscript{\textpm{0.06}} & & 60.03 & 62.49 & \\
\hline
SSAN~\cite{xu-etal-2021-ssan} & RoBERTa\textsubscript{large} & 60.25 & 62.08 & - & 59.47 & 61.42 & -\\
ATLOP~\cite{zhou2021atlop} & RoBERTa\textsubscript{large} & 61.32 & 63.18 & - & 61.39 & 63.40 & - \\
DocuNet~\cite{zhang-etal-2021-document} & RoBERTa\textsubscript{large} & 62.23 & 64.12 & - & 62.39 & 64.55 & -\\
EIDER~\cite{xie-etal-2022-eider} & RoBERTa\textsubscript{large} & 62.34 & 64.27 & 52.54 & 62.85 & 64.79 & 53.01 \\
SAIS~\cite{xiao-etal-2022-sais} & RoBERTa\textsubscript{large} & 62.23 & 65.17 & 55.84 & 63.44 & 65.11 & 55.67 \\
\hdashline
DREEAM (teacher) & \multirow{2}{*}{RoBERTa\textsubscript{large}} & 61.71\textsubscript{\textpm{0.09}} & 63.49\textsubscript{\textpm{0.10}} & \multirow{2}{*}{54.15\textsubscript{\textpm{0.11}}} & 61.62 & 63.55 & \multirow{2}{*}{54.01} \\
+ Inference-stage Fusion & & 62.29\textsubscript{\textpm{0.23}} & 64.20\textsubscript{\textpm{0.23}} & & 62.12 & 64.27 & \\
\Xhline{2\arrayrulewidth}
\multicolumn{8}{l}{\textbf{(b) with Distantly-Supervised Data}} \\
KD-DocRE~\cite{tan-etal-2022-document} & BERT\textsubscript{base} & 63.38 & 64.81 & - & 62.56 & 64.76 & -\\
\hdashline
DREEAM (student) & \multirow{2}{*}{BERT\textsubscript{base}} & 63.47\textsubscript{\textpm{0.02}} & 65.30\textsubscript{\textpm{0.03}} & \multirow{2}{*}{\textbf{55.68}\textsubscript{\textpm{0.04}}} & 63.31 & 65.30 & \multirow{2}{*}{\textbf{55.43}}\\
+ Inference-Stage Fusion & & \textbf{63.92}\textsubscript{\textpm{0.02}} & \textbf{65.83}\textsubscript{\textpm{0.04}} & & \textbf{63.73} & \textbf{65.87} & \\
\hline
SSAN~\cite{xu-etal-2021-ssan} & RoBERTa\textsubscript{large} & 63.76 & 65.69 & - & 63.78 & 65.92 & - \\
KD-DocRE~\cite{tan-etal-2022-document} & RoBERTa\textsubscript{large} & 65.27 & 67.12 & - & 65.24 & 67.28 & - \\
\hdashline
DREEAM (student) & \multirow{2}{*}{RoBERTa\textsubscript{large}} & 65.24\textsubscript{\textpm{0.07}} & 67.09\textsubscript{\textpm{0.07}} & \multirow{2}{*}{\textbf{57.55}\textsubscript{\textpm{0.07}}} & 65.20 & 67.22 & \multirow{2}{*}{\textbf{57.34}} \\
+ Inference-Stage Fusion & & \textbf{65.52}\textsubscript{\textpm{0.07}} & \textbf{67.41}\textsubscript{\textpm{0.04}} & & \textbf{65.47} & \textbf{67.53} & \\
\Xhline{3\arrayrulewidth}
\end{tabular}
\caption{Evaluation results on development and test sets of DocRED, with best scores \textbf{bolded}. The scores of existing methods are borrowed from corresponding papers. We group the methods first by whether they utilize the distantly-supervised data or not, then by the PLM encoder. }
\label{tab:main_results}
\end{table*}
\paragraph{Dataset} We conduct experiments on DocRED~\cite{yao-etal-2019-docred}\footnote{\url{https://github.com/thunlp/DocRED}}, the largest dataset for DocRE.
As shown in Table~\ref{tab:dataset}, DocRED contains a small portion of human-annotated data and a large portion of distantly-supervised data made by aligning Wikipedia articles with the Wikidata knowledge base~\cite{wikidata}.
This work directly adopts the distantly-supervised data provided in DocRED.
\paragraph{Configuration} We implement DREEAM based on Hugging Face's Transformers~\cite{wolf-etal-2020-transformers}.
Following previous work, we evaluate the performance of DREEAM using BERT\textsubscript{base}~\cite{devlin-etal-2019-bert} and RoBERTa\textsubscript{large}~\cite{Liu2019RoBERTaAR} as the PLM encoder.
The hyper-parameter $\lambda$ balancing the ER loss with the RE loss is set to 0.1 for BERT\textsubscript{base} and 0.05 for RoBERTa\textsubscript{large} when training both the teacher and the student model, chosen by a grid search over $\lambda \in \{0.05, 0.1, 0.2, 0.3\}$.
We train and evaluate DREEAM on a single Tesla V100 16GB GPU when utilizing BERT\textsubscript{base} and on a single NVIDIA A100 40GB GPU when utilizing RoBERTa\textsubscript{large}.
Details about hyper-parameters and running time are provided in Appendix~\ref{sec:hyparam}.
\paragraph{Evaluation} During inference, sentences $x_i$ with $p_i > 0.2$ computed from Equation~\ref{eq:sent_imp} are retrieved as evidence.
For the evaluation, we adopt official evaluation metrics of DocRED~\cite{yao-etal-2019-docred}: Ign F1 and F1 for RE and Evi F1 for ER.
Ign F1 is measured by removing relations present in the annotated training set from the development and test sets.
We train our system five times, initialized with different random seeds, and report the average scores and standard error of these runs.
\subsection{Main Results}
\label{sec:main_results}
Table~\ref{tab:main_results} lists the performance of the proposed and existing methods.
We select the best-performing model on the development set to make predictions on the test set and submit the predictions to CodaLab for evaluation\footnote{\url{https://codalab.lisn.upsaclay.fr/competitions/365}. Submissions under username \texttt{kgmr15} are from this work.}.
\paragraph{Performance of the Student Model} Table~\ref{tab:main_results} shows that the student model outperforms existing systems on RE by utilizing the distantly-supervised data.
In particular, when adopting BERT\textsubscript{base} as the PLM encoder, DREEAM performs better than KD-DocRE~\cite{tan-etal-2022-document}, the previous state-of-the-art system, by 0.6/1.0 points on Ign F1/F1 for the development set.
On the test set, the improvement reaches 1.1 F1 points on both Ign F1 and F1.
Notably, DREEAM utilizing BERT\textsubscript{base} even performs comparably with SSAN utilizing RoBERTa\textsubscript{large} under the weakly-supervised setting~\cite{xu-etal-2021-ssan}.
When adopting RoBERTa\textsubscript{large} as the PLM encoder, the advantage of DREEAM remains on both development and test sets.
These results support our hypothesis that ER self-training improves RE, which has not been demonstrated by any previous work.
\paragraph{Performance of the Teacher Model} The upper half of Table~\ref{tab:main_results} shows that the teacher model trained on human-annotated data exhibits comparable performance to EIDER~\cite{xie-etal-2022-eider} on both RE and ER.
Although there is a performance gap between DREEAM and SAIS, we attribute it to the difference in supervisory signals.
While DREEAM incorporates RE with only relation-agnostic ER, SAIS is trained under three more tasks: coreference resolution, entity typing, and relation-specific ER~\cite{xiao-etal-2022-sais}.
These extra supervisory signals possibly contribute to the high performance of SAIS.
Apart from the performance, our method has a critical advantage over previous ER-incorporated DocRE systems in memory efficiency.
We provide a detailed discussion in Section~\ref{sec:memory}.
\paragraph{Effectiveness of ER Self-Training} Additionally, we observe that the student model leads the existing systems by a large margin on ER.
As the first approach enabling weakly-supervised ER training, DREEAM utilizes considerable amounts of data without evidence annotation via self-training.
The experimental results reveal that DREEAM improves over the state-of-the-art supervised approaches by approximately 2.0 points on Evi F1.
Therefore, we conclude that our approach to ER self-training succeeds in acquiring evidence knowledge from the relation-distantly-supervised data with no evidence annotation.
\subsection{Ablation Studies}
\begin{table}[t]
\centering
\small
\begin{tabular}{lccc}
\Xhline{3\arrayrulewidth}
Setting & Ign F1 & F1 & Evi F1 \\
\Xhline{3\arrayrulewidth}
\multicolumn{4}{l}{\textbf{(a) Teacher Model}} \\
DREEAM & \textbf{59.60}\textsubscript{\textpm{0.15}} & \textbf{61.42}\textsubscript{\textpm{0.15}} & \textbf{52.08}\textsubscript{\textpm{0.10}} \\
\textit{w/o} ER training & 59.21\textsubscript{\textpm{0.19}} & 61.01\textsubscript{\textpm{0.20}} & 42.79\textsubscript{\textpm{1.65}}\\
\Xhline{2\arrayrulewidth}
\multicolumn{4}{l}{\textbf{(b) Student Model}} \\
DREEAM & \textbf{63.47}\textsubscript{\textpm{0.02}} & 65.30\textsubscript{\textpm{0.03}} & \textbf{55.68}\textsubscript{\textpm{0.04}} \\
\textit{w/o} ER self-training & 61.96\textsubscript{\textpm{0.39}} & 63.77\textsubscript{\textpm{0.44}} & 53.72\textsubscript{\textpm{0.43}}\\
\textit{w/o} ER fine-tuning & 63.34\textsubscript{\textpm{0.02}} & \textbf{65.50}\textsubscript{\textpm{0.02}} & 55.27\textsubscript{\textpm{0.05}}\\
\textit{w/o} both & 62.13\textsubscript{\textpm{0.07}} & 63.82\textsubscript{\textpm{0.08}} & 47.13\textsubscript{\textpm{0.12}} \\
\Xhline{3\arrayrulewidth}
\end{tabular}
\caption{Ablation studies evaluated on the DocRED development set.}
\label{tab:ablation}
\end{table}
This subsection investigates the effect of evidence-guided attention by ablation studies.
All subsequent experiments adopt BERT\textsubscript{base} as the PLM encoder.
We report scores without the inference-stage fusion strategy~\cite{xie-etal-2022-eider}.
\paragraph{Teacher Model} Firstly, we examine how guiding attention with evidence helps RE training on human-annotated data.
We train a variant of our teacher model without ER training and evaluate its performance on the DocRED development set.
In general, disabling ER training reduces the model to a baseline similar to ATLOP~\cite{zhou2021atlop}\footnote{The difference between ATLOP and our baseline is that our baseline utilizes the last three layers of PLM to obtain embeddings, whereas ATLOP adopts only the final layer.}.
As presented in Table~\ref{tab:ablation} (a), the RE performance of our system decreases without ER training.
This observation supports the hypothesis that guiding attention with evidence is beneficial to improving RE.
We further visualize the token importance $\bm{q}^{(s,o)}$ for some instances to investigate the effect of evidence-guided training and find that our method succeeds in guiding the attention to focus more on relevant contexts.
The details can be found in Appendix~\ref{sec:visualization}.
Additionally, we retrieve evidence from the ER-disabled model as sentences with importance higher than the pre-defined threshold.
By doing so, we find that the Evi F1 is not far from its evidence-aware counterpart.
This observation indicates that ER is a task highly coupled with RE.
\paragraph{Student Model} Next, we investigate the student model trained on distantly-supervised data and finetuned on human-annotated data.
The aim is to examine the effect of guiding attention with evidence at various stages of training.
To this end, we remove ER supervisory signals from the student model during the training on distantly-supervised and human-annotated data.
The baseline excludes ER supervision from both stages: it is pre-trained on distantly-supervised data and then finetuned on human-annotated data for RE only.
As shown in Table~\ref{tab:ablation} (b), DREEAM without ER self-training performs comparably to the baseline, while DREEAM without ER fine-tuning performs comparably to the original model with no ablations.
These results indicate that ER self-training is more essential than ER fine-tuning for the student model.
On the one hand, we observe that disabling ER self-training on massive data causes a huge loss of evidence knowledge that cannot be recovered by finetuning on the much smaller evidence-annotated dataset.
On the other hand, we can conclude that DREEAM succeeds in retrieving evidence knowledge from the data without any evidence annotation, demonstrating the effectiveness of our ER self-training strategy.
\subsection{Memory Efficiency}
\label{sec:memory}
\begin{table}[t]
\centering
\small
\begin{tabular}{lcc}
\Xhline{3\arrayrulewidth}
Method & Memory & Trainable \\
& (GiB) & Params. (M) \\
\Xhline{3\arrayrulewidth}
\multicolumn{3}{l}{\textbf{(a) without ER Module}} \\
ATLOP~\cite{zhou2021atlop} & 10.8 & 115.4\\
SSAN~\cite{xu-etal-2021-ssan} & 6.9 & 113.5\\
KD-DocRE~\cite{tan-etal-2022-document} & 15.2 & 200.1 \\
\Xhline{2\arrayrulewidth}
\multicolumn{2}{l}{\textbf{(b) with ER Module}} \\
EIDER~\cite{xie-etal-2022-eider} & 43.1 & 120.2 \\
SAIS~\cite{xiao-etal-2022-sais} & 46.2 & 118.0 \\
DREEAM (proposed) & 11.8 & 115.4 \\
\Xhline{3\arrayrulewidth}
\end{tabular}
\caption{Memory consumption and the number of trainable parameters of DREEAM and existing methods.}
\label{tab:memory}
\end{table}
This subsection discusses the memory inefficiency issue in previous ER approaches and shows how DREEAM solves it.
Previous approaches regard ER as a separate task from RE that requires extra neural network layers to solve~\cite{huang-etal-2021-entity,xie-etal-2022-eider,xiao-etal-2022-sais}.
To perform ER, all of them introduce a bilinear evidence classifier that receives an entity-pair-specific embedding and a sentence embedding as inputs.
For example, EIDER computes an evidence score for sentence $x_i$ with regard to entity pair $(e_s,e_o)$ as below:
\begin{equation}
\mathrm{P}(x_i|e_s,e_o) = \sigma (\bm{x}_i\mathsf{W}\bm{c}^{(s,o)} + \bm{b}),
\label{eq:eider}
\end{equation}
where $\bm{x}_i$ is a sentence embedding, $\bm{c}^{(s,o)}$ is the localized context embedding computed from Equation~\ref{eq:context_rep}, $\mathsf{W}$ and $\bm{b}$ are trainable parameters.
EIDER and other existing systems thus need to compute over all combinations of (sentence, entity pair).
Specifically, consider a document $D$ with $n$ sentences $\mathcal{X}_D=\{x_1,x_2,\dots,x_n\}$ and $m$ entities $\mathcal{E}_D=\{e_1,e_2,\dots,e_m\}$, yielding $m\times(m-1)$ entity pairs.
To obtain evidence scores, EIDER must perform bilinear classification $n\times m\times(m-1)$ times via Equation~\ref{eq:eider}, resulting in huge memory consumption.
In contrast, DREEAM takes the summations of attention weights over tokens as evidence scores, thus introducing neither new trainable parameters nor expensive matrix computations.
Hence, we see that DREEAM is more memory-efficient than its competitors.
Table~\ref{tab:memory} summarizes the memory consumption and the number of trainable parameters when utilizing BERT\textsubscript{base} as the PLM encoder for existing and proposed methods.
Values are measured when training the systems using the corresponding official repositories with a batch size of four\footnote{The value of EIDER is different from the original paper because we enable ER evaluations during training.}.
We observe that the memory consumption of DREEAM is only 27.4\% of EIDER and 25.5\% of SAIS.
Notably, DREEAM also consumes less memory than KD-DocRE, underscoring the memory efficiency of our proposed method.
\subsection{Performance on Re-DocRED}
\begin{table}[!t]
\centering
\small
\begin{tabular}{lrr}
\Xhline{3\arrayrulewidth}
\multicolumn{1}{c}{\textbf{Statistics}} & \multicolumn{1}{c}{DocRED} & \multicolumn{1}{c}{Re-DocRED} \\
\Xhline{2\arrayrulewidth}
\# rel. & 38,180 & 85,932 \\
\# rel. w/o evi. & 1,421 (3.7\%) & 38,672 (45.0\%)\\
\Xhline{3\arrayrulewidth}
\end{tabular}
\caption{Statistics of relation triples in the training set of DocRED and Re-DocRED. \textit{rel.} stands for relation triples and \textit{rel. w/o evi.} stands for relation triples without evidence sentences.}
\label{tab:evi_redocred}
\end{table}
\begin{table}[!t]
\small
\centering
\begin{tabular}{lcc}
\Xhline{3\arrayrulewidth}
Method & Ign F1 & F1 \\
\Xhline{2\arrayrulewidth}
\multicolumn{3}{l}{\textbf{(a) without Distantly-Supervised Data}}\\
ATLOP~\cite{zhou2021atlop} & 76.82 & 77.56 \\
DocuNet~\cite{zhang-etal-2021-document} & 77.26 & 77.87 \\
KD-DocRE~\cite{tan-etal-2022-document} & 77.60 & 78.28 \\
\hdashline
DREEAM & 77.34\textsubscript{\textpm{0.19}} & 77.94\textsubscript{\textpm{0.15}} \\
+ Inference-Stage Fusion & 79.66\textsubscript{\textpm{0.39}} & 80.73\textsubscript{\textpm{0.38}} \\
\Xhline{2\arrayrulewidth}
\multicolumn{3}{l}{\textbf{(b) with Distantly-Supervised Data}} \\
ATLOP~\cite{zhou2021atlop} & 78.52 & 79.46 \\
DocuNet~\cite{zhang-etal-2021-document} & 78.52 & 79.46 \\
KD-DocRE~\cite{tan-etal-2022-document} & 80.32 & 81.04 \\
\hdashline
DREEAM & 78.67\textsubscript{\textpm{0.17}} & 79.35\textsubscript{\textpm{0.18}} \\
+Inference-Stage Fusion & \textbf{80.39}\textsubscript{\textpm{0.03}} & \textbf{81.44}\textsubscript{\textpm{0.04}} \\
\Xhline{3\arrayrulewidth}
\end{tabular}
\caption{Evaluation results on the test set of Re-DocRED, with best scores \textbf{bolded}. PLM encoder is aligned to RoBERTa-large. The scores of existing methods are borrowed from~\citet{redocred}.}
\label{tab:re-docred}
\end{table}
Although DocRED is a widely used benchmark, recent works have pointed out that annotations of the dataset are incomplete~\cite{huang-etal-2022-recommend,xie-etal-2022-eider,redocred}.
To paraphrase, many relation triples in DocRED are missing in human annotations, biasing the dataset with many false negatives.
\citet{redocred} thus proposed Re-DocRED, a more reliable benchmark for DocRE that revises DocRED to alleviate the false negative issue.
In this subsection, we evaluate DREEAM on Re-DocRED to verify the soundness of our proposed method.
Similar to Section~\ref{sec:main_results}, we conducted experiments under two different settings:
(a) a fully-supervised setting without distantly-supervised data and (b) a weakly-supervised setting utilizing distantly-supervised data.
Notably, Re-DocRED introduces new relation triples without providing accurate evidence sentences.
As shown in Table~\ref{tab:evi_redocred}, compared with DocRED, the training set of Re-DocRED contains many more relation triples without evidence sentences.
DREEAM trained on Re-DocRED could thus be inaccurate on ER, biased by the considerable amount of missing evidence.
Therefore, during ER self-training of the student model, we adopt token evidence distributions predicted by a teacher model trained on DocRED as the supervisory signal.
The student model is further finetuned on Re-DocRED to obtain more reliable knowledge about RE.
Table~\ref{tab:re-docred} compares the performance of DREEAM against existing methods.
We observe that DREEAM outperforms existing methods under both the fully-supervised setting and the weakly-supervised setting.
The observation indicates the soundness of our proposed method.
\section{Related Work}
\paragraph{DocRE} Recent work has extended the scope of the relation extraction task from sentences to documents~\cite{peng-etal-2017-cross,quirk-poon-2017-distant,yao-etal-2019-docred}.
Compared with its sentence-level counterpart, DocRE is a more realistic and challenging setting, aiming at extracting both intra-sentence and inter-sentence relations.
Although commonly-used benchmarks for DocRE include DocRED~\cite{yao-etal-2019-docred}, CDR~\cite{li2016CDR} and GDA~\cite{wu2019GDA}, only DocRED contains evidence annotation and massive pre-processed data obtained from relation distant supervision.
Therefore, we adopt DocRED as our test bed.
\paragraph{Transformer-based DocRE} Modeling DocRE with a Transformer-based system has been a popular and promising approach, outperforming its graph-based counterparts~\cite{zeng-etal-2020-double,zeng-etal-2021-sire,xu-etal-2021-reconstruct}.
One of the major topics of these systems is a better utilization of long-distance token dependencies captured by the PLM encoder.
\citet{zhang-etal-2021-document} formulate DocRE as a semantic segmentation task and introduce a U-Net~\cite{ronneberger-etal-2015-unet} on top of the PLM encoder to capture local and global dependencies between entities.
\citet{zhou2021atlop} propose localized contextual pooling to focus on tokens relevant to each entity pair.
Based on their work, \citet{tan-etal-2022-document} adopt an axial attention module to perform two-hop reasoning and capture the dependencies between relation triples.
These designs provide no supervision on token dependencies, expecting the model to capture them implicitly during training.
In contrast, we provide explicit supervision for token dependencies by utilizing evidence information.
\paragraph{ER in DocRE} This study is not the first to incorporate evidence information into DocRE.
\citet{huang-etal-2021-three} first report that heuristically selecting evidence sentences boosts the performance of DocRE systems.
\citet{huang-etal-2021-entity}, \citet{xie-etal-2022-eider} and \citet{xiao-etal-2022-sais} train neural classifiers to automatically retrieve evidence together with RE.
However, we perform ER with neither heuristic rules nor neural classifiers.
Furthermore, our approach can be used for ER self-training on data without evidence annotations.
\paragraph{Distant Supervision} Distant supervision has been widely adopted as a technique to generate automatically-labeled data for RE~\cite{mintz-etal-2009-distant,quirk-poon-2017-distant,xiao-etal-2020-denoising}.
The method assumes that if a sentence contains an entity pair that participates in a known relation in a knowledge base (KB), the sentence probably expresses that relation.
Thus unlabeled text can be aligned with a KB using entities as anchors, with each match distantly supervised by the relation described in the KB.
\citet{yao-etal-2019-docred} apply the technique to annotate relations in documents automatically.
In this work, we directly adopt those documents for ER self-training.
\section{Conclusion}
We have introduced methods to improve the usage of ER in DocRE.
First, we propose DREEAM, a memory-efficient method to reduce the computation cost of ER.
Unlike existing approaches that train an evidence classifier for ER, DREEAM directly supervises the attention to concentrate more on evidence than on others.
Next, we propose to employ DREEAM in a weakly-supervised setting to compensate for the shortage of human annotations.
Instead of gold evidence annotated by humans, we adopt evidence predictions from a teacher model trained on human-annotated data as the supervisory signal to realize ER self-training on unlabeled data.
Experiments on the DocRED benchmark show that DREEAM exhibits state-of-the-art performance on both RE and ER, with the help of weakly-supervised training on data obtained from distant supervision of relations.
Compared with existing approaches, DREEAM performs ER with zero trainable parameters introduced, thereby reducing the memory usage to 27\% or less.
The soundness of DREEAM is confirmed by conducting experiments on Re-DocRED, a revised version of DocRED.
In the future, we plan to transfer the evidence knowledge of DREEAM trained on DocRED to other DocRE datasets.
\section*{Limitations}
A major limitation of this work is that our method can only retrieve relation-agnostic evidence.
Unlike~\citet{xiao-etal-2022-sais}, DREEAM cannot specify evidence sentences for each relation label.
Therefore, when an entity pair holds multiple relations, DREEAM retrieves the same evidence regardless of the relation type, even though the evidence may be correct for some of the relations but not for others.
\section*{Ethics Statement}
In this work, we have proposed a method for incorporating ER into DocRE.
Our approach directly supervises the weights of attention modules within a Transformer-based PLM encoder. Within the research community, we hope our approach can provide a new viewpoint on the explainability of document-level relation extraction systems.
Furthermore, a better DocRE system will benefit the research on other tasks, such as question answering and reading comprehension.
In the real world, a DocRE system with good performance can help extract useful information from unstructured text, reducing human efforts and expenses.
Furthermore, as our method is memory-efficient, it is also friendly to the environment.
We also have demonstrated a use case of our method in ER self-training, utilizing massive data obtained from relation distant-supervision.
Although in this work, we directly adopt the data provided by~\citet{yao-etal-2019-docred}, it is possible to extend the scale of data by utilizing numerous unstructured texts.
Utilizing a wide range of unstructured texts may expose our system to the risk of vulnerable data, potentially biasing our system in the wrong direction.
To mitigate the problem, we encourage performing data pre-processing to detect and remove harmful contents before training.
\section*{Acknowledgements}
This paper is based on results obtained from a project, JPNP18002, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
We appreciate Marco Cognetta for the generous help and valuable discussions.
|
{
"arxiv_id": "2302.08659",
"language": "en",
"timestamp": "2023-02-20T02:05:44",
"url": "https://arxiv.org/abs/2302.08659",
"yymm": "2302"
} | \section{Introduction}
Neural sequence labeling (NSL) is one of the fundamental tasks in natural language processing (NLP) with a broad range of applications, including named entity recognition (NER)~\cite{Li2022A} and slot filling~\cite{Liu2022Cross}, which aims at classifying language tokens into a pre-defined set of classes~\cite{Shen2021Locate, Agarwal2022Towards}.
Previous works have achieved satisfactory performance by developing well-designed deep architectures and/or fine-tuning pre-trained language models (PLMs)~\cite{Ma2016End, Li2020A, Zhou2022Distantly, Zhang2022Exploring}. Yet, these approaches heavily depend on massive labeled data, which is problematic in real-world scenarios that offer only limited labeled data (leading to over-fitting) or impose privacy constraints.
Recently, a branch of the semi-supervised learning (SSL) paradigm~\cite{Nitesh2005Learning, Jesper2020A, Yang2020A} has arisen to bypass the aforementioned issues, aiming to effectively utilize large-scale unlabeled data in addition to the few-shot labeled data. Self-training is one of the typical SSL techniques and can be viewed as teacher-student training~\cite{Grandvalet2004Semi, Amini2022Self}. Concretely, a~\emph{teacher} model is trained over the labeled data and then used to generate pseudo labels on the unlabeled data. After that, a~\emph{student} model can be optimized on the augmented data and used to initialize a new~\emph{teacher}.
Benefiting from self-training, previous methods have achieved remarkable success on a series of~\emph{instance-level classification} tasks, such as image classification~\cite{Zhou2021Instant, Wang2022Double, Liu2021Unbiased} and text classification~\cite{Meng2020Text, Mukherjee2020Uncertainty, Yu2021ATM, Hu2021Uncertainty, Tsai2022Contrast, Kim2022LST}.
In contrast to instance-level classification, we observe that there are two challenges in applying standard self-training to NSL. On one hand, the task of NSL is based on the~\emph{token-level classification}, which requires the model to capture the inherent token-wise label dependency.
On the other hand, the teacher model inevitably generates some noisy labels that cause error accumulation~\cite{Wang2021Meta}.
Some sample selection strategies (e.g., model confidence, uncertainty estimation) and consistency regularization mitigate the effect of noisy labels and alleviate the problem of confirmation bias~\cite{Do2021Semi, Cao2021Uncertainty, Rizve2021In, Wang2021Combating, Andersen2022Efficient}. However, it is unclear how these methods can be applied to token-level classification.
To remedy this dilemma, we develop~{\model}, a novel semi-supervised learning framework for NSL, which improves standard self-training via two decomposed processes, i.e., reliable token selection and robust learning.
Specifically, we first pseudo annotate the unlabeled data.
Then, a Monte Carlo (MC) dropout technique, which is the approximation technique in Bayesian Neural Network (BNN)~\cite{Gal2016Dropout, Wang2016Towards}, is used to estimate the uncertainty of each language token derived from the teacher model.
We judiciously select the reliable tokens from each unlabeled sentence based on the model confidence and certainty.
Finally, we introduce two training objectives to improve the model robustness, i.e., Masked Sequence Labeling (MSL) and Gaussian-based Consistency Regularization (GCR).
In MSL, we generate a masked matrix to make the model only focus on the selected reliable tokens in each sentence. We also utilize partially huberised cross-entropy (PHCE) loss to explicitly mitigate the effect of label noises.
In GCR, we assume each selected token embedding follows the Gaussian distribution and perturbs token embeddings to encourage consistency between perturbed embeddings and original representations. This technique effectively reduces the risk of over-fitting.
We perform extensive experiments over multiple benchmarks with very few labeled data (10/20/50/100 examples per class). We adopt BERT-base~\cite{Devlin2019BERT} as our backbone. Results show that our~{\model} outperforms strong baselines by a large margin in low-resource settings.
In a nutshell, we make the following main contributions:
\begin{itemize}
\item We develop a novel semi-supervised neural sequence labeling framework~{\model} to alleviate the problem of data scarcity in low-resource scenarios.
\item We propose the token-level uncertainty estimation to empower self-training. Furthermore, two training objectives are introduced to improve the model robustness, including Masked Sequence Labeling and Gaussian-based Consistency Regularization.
\item Extensive experiments over multiple benchmark datasets demonstrate that our proposed framework achieves substantial performance improvement.
\end{itemize}
\section{Related Work}
\subsection{Semi-supervised Learning and Self-training}
SSL aims to utilize effectively unlabeled data in addition to labeled data, which has been widely used in the NLP community~\cite{Yang2017Improved, Gururangan2019Variational, Xie2020Unsupervised, Chen2020MixText}.
For instance, \citet{Yang2017Improved, Gururangan2019Variational} utilize variational autoencoders (VAEs) for sequence classification and labeling.
Unsupervised data augmentation (UDA)~\cite{Xie2020Unsupervised} generates augmented data by back translation, and leverages consistency regularization for unlabeled data.
\citet{Chen2020MixText} proposes {MixText} to mix labeled, unlabeled and augmented data, and performs similar consistency training as UDA.
Self-training is one of the earliest methods in SSL that has recently shown state-of-the-art performances~\cite{Chen2020MixText, Meng2020Text, Li2021Semi}. Recent works improve standard self-training by considering sample selection strategies, including model confidence~\cite{Bengio2009Curriculum, Kumar2010Self} and uncertainty estimation~\cite{Cao2021Uncertainty, Tsai2022Contrast}.
For example, \citet{Cao2021Uncertainty} presents uncertainty-aware self-training (UST) to sample pseudo-labeled data by BNN. \citet{Tsai2022Contrast} propose CEST, which leverages graph-based contrast induction to address confirmation bias and data smoothness on the selected data.
However, these approaches mainly focus on instance-level classification. It would be highly desirable if they could be applied to token-level classification.
\subsection{Low-resource Neural Sequence Labeling}
Low-resource NSL aims at classifying the input language tokens with very few labeled data. Prior studies address this problem via meta-learning~\cite{Ziyadi2020Example, Ma2022Decomposed}
and augmentation learning~\cite{Ding2020DAGA, Zhou2022MELM, Wang2022PromDA}. For instance, \citet{Ziyadi2020Example} leverage a prototypical network~\cite{Snell2017Prototypical} to learn adaptable knowledge from few-shot episode data.
\citet{Wang2022PromDA} and \citet{Zhou2022MELM} aim to generate the in-domain augmented data through masked language modeling (MLM)~\cite{Devlin2019BERT}.
Yet, these methods ignore the informative semantics of unlabeled data. To reach this goal, a series of previous works~\cite{Miller2004Name, Peters2017Semi} focus on token representations enhancement by pre-training word embeddings on unlabeled data.
Another line of research focuses on latent variable modeling~\cite{Chen2018Variational}, adversarial training method SeqVAT~\cite{Chen2020SeqVAT} and cross-view training method CVT~\cite{Clark2018Semi}.
Recently, MetaST~\cite{Wang2021Meta} attempts to apply self-training to NSL by two techniques, i.e., adaptive validation set construction by uncertainty estimation, and noisy data re-weighting via student loss.
In contrast, we perform reliable token selection by the joint estimations of the teacher confidence and uncertainty, and two well-designed training objectives support robust training.
Compared with previous methods, we achieve the best overall performance with very few labeled data.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{framework}
\caption{The framework overview. 1) We first fine-tune a teacher model over the labeled data. Then, 2) we utilize MC dropout to perform uncertainty estimation and select reliable tokens for each unlabeled sentence. 3) Two novel training objectives are proposed to improve the model robustness and alleviate the over-fitting issue. (Best viewed in color.)}.
\label{fig:model}
\end{figure*}
\section{Preliminaries}
We first present the notations and then introduce the background knowledge of the Bayesian neural network.
\subsection{Notations}
We represent a labeled dataset and an unlabeled dataset as $\mathcal{D}_{l}=\{X_i, Y_i\}_{i=1}^{N_l}$ and $\mathcal{D}_{u}=\{X_i\}_{i=1}^{N_u}$, respectively. $X_i=\{x_{ij}\}_{j=1}^{L}$ denotes the input sentence with $L$ language tokens. $Y_i=\{y_{ij}\}_{j=1}^{L}$ is the label sequence and $y_{ij}\in\mathcal{Y}$ is the tag of token $x_{ij}\in\mathcal{X}$, $\mathcal{X}$ and $\mathcal{Y}$ denote the token vocabulary of the PLM and the label space, respectively. $N_l$ and $N_u$ represent the numbers of labeled and unlabeled data, respectively ($N_l\ll N_u$).
The goal of semi-supervised NSL is to learn a neural mapping function $f^{W}:\mathcal{X}\rightarrow\mathcal{Y}$ over labeled data $\mathcal{D}_{l}$ and unlabeled data $\mathcal{D}_{u}$, where $W$ is the collection of model parameters.
\subsection{Bayesian Neural Network (BNN)}
Similar to CEST~\cite{Tsai2022Contrast}, in this part we briefly describe BNN.
Given a neural model $f^{W}$, the vanilla BNN assumes a prior distribution over its model parameters $W$. In other words, BNN averages over all the possible weights instead of directly optimizing for the weights~\cite{Mukherjee2020Uncertainty}.
Given a labeled dataset $\mathcal{D}_{l}$, the parameter can be optimized by the posterior distribution $p(W|\mathcal{D}_{l})$.
During model inference, given one unlabeled example $X_i\in\mathcal{D}_{u}$, the probability distribution of each token $x_{ij}\in X_i$ for class $c\in\mathcal{Y}$ is $p(y=c|x_{ij}, X_i)=\int_{W}p(y=c|f^{W}(x_{ij}), X_i)\,p(W|\mathcal{D}_{u})\,dW$.
Yet, it is intractable in practice for calculation.
To make it tractable, we can find a surrogate tractable distribution $q_{\theta}(W)$ that makes the model posterior easy to compute.
Thus, we consider $q_{\theta}(W)$ to be the dropout distribution~\cite{Srivastava2014Dropout} which aims to sample $T$ masked model weights $\{\widetilde{W}_t\}_{t=1}^{T}\sim q_{\theta}(W)$ from the current model. The approximate posterior for each token is:
\begin{equation}
\begin{aligned}
p(y=c|x_{ij}, X_i)\approx\frac{1}{T}\sum_{t=1}^{T}p(y=c|f^{\widetilde{W}_t}(x_{ij}), X_i).
\label{eql:bnn-posterior}
\end{aligned}
\end{equation}
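In practice, Eq.~\ref{eql:bnn-posterior} can be approximated by keeping dropout active and averaging several stochastic forward passes, as in the following sketch; the model interface is assumed for illustration.
\begin{verbatim}
import torch

def mc_dropout_posterior(model, batch, T=10):
    """Approximate the posterior by averaging T stochastic forward passes.

    model : token classifier whose forward returns (n_tok, n_classes) logits
    batch : one encoded sentence X_i
    """
    model.train()   # keep dropout active to sample masked weights
    with torch.no_grad():
        probs = [torch.softmax(model(batch), dim=-1) for _ in range(T)]
    return torch.stack(probs).mean(dim=0)   # (n_tok, n_classes)
\end{verbatim}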
\section{Methodology}
In this section, we propose the~{\model} framework to improve the self-training paradigm for low-resource NSL.
The framework overview is illustrated in Fig~\ref{fig:model}.
\subsection{Pseudo Annotation}
In the initial stage, a vanilla PLM (e.g., BERT)
can be fine-tuned over the labeled dataset $\mathcal{D}_{l}$ to form a \emph{teacher} model $f^{W^*}$, where $W^*$ is the collection of parameters.
The hard label $\tilde{y}_{ij}$ of each token $x_{ij}$ in the given unlabeled sentence $X_i\in\mathcal{D}_{u}$ can be pseudo annotated by the teacher model $f^{W^*}$. Formally, we have:
\begin{equation}
\begin{aligned}
\tilde{y}_{ij}=\arg\max_{c}p(y=c|f^{W^{*}}(x_{ij}), X_i),
\label{eql:pseudo-label}
\end{aligned}
\end{equation}
where $p(\cdot)$ is the probability distribution, which can be modeled as a softmax classifier or a conditional random field (CRF)~\cite{Lafferty2001Conditional} layer.
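A minimal sketch of this annotation step is shown below, assuming a softmax classification head; the interface is illustrative only.
\begin{verbatim}
import torch

def pseudo_annotate(teacher, batch):
    """Hard pseudo labels for one unlabeled sentence.

    teacher : fine-tuned token classifier returning (n_tok, n_classes) logits
    batch   : one encoded unlabeled sentence X_i
    """
    teacher.eval()
    with torch.no_grad():
        probs = torch.softmax(teacher(batch), dim=-1)
    return probs.argmax(dim=-1)   # (n_tok,) pseudo tags
\end{verbatim}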
\subsection{Reliable Token Selection}
Prior works~\cite{Mukherjee2020Uncertainty, Tsai2022Contrast, Rizve2021In} improve self-training by the instance-level selection strategy, which aims to utilize BNN to perform uncertainty estimation for each example, and then select the reliable examples from the whole unlabeled dataset that the model is most certain about.
Different from them, we focus on the token-level uncertainty estimation, and aim to select reliable tokens from each sentence.
\noindent\textbf{Token-level Uncertainty Estimation}.
We assume that each sentence is independent of another and can be measured individually.
Specifically, we follow~\cite{Houlsby2011Bayesian, Gal2017Deep} to leverage information gain of the model parameters to estimate how certain the model is to the pseudo-labeled tokens with respect to the true labels.
Formally, given one input sentence $X_i\in\mathcal{D}_{u}$, we have:
\begin{equation}
\begin{aligned}
\mathbb{B}(\tilde{y}_{ij}, W|x_{ij}, \mathcal{D}_{u}) =
& \mathbb{H}(\tilde{y}_{ij}|x_{ij}, \mathcal{D}_{u}) - \\
& \mathbb{E}_{p(W|\mathcal{D}_{u})}[\mathbb{H}(\tilde{y}_{ij}|x_{ij}, W)],
\label{eql:information-gain}
\end{aligned}
\end{equation}
where $\mathbb{H}(\cdot)$ is the entropy function, $x_{ij}\in X_i$ and $\tilde{y}_{ij}\in\widetilde{Y}_i$ denote the token and tag, respectively.
$\mathbb{B}(\tilde{y}_{ij}, W|x_{ij}, \mathcal{D}_{u})$ denotes the information gain which is the difference between $\mathbb{H}(\tilde{y}_{ij}|x_{ij}, \mathcal{D}_{u})$ (the final entropy after seeing all tokens from unlabeled sentences) and $\mathbb{H}(\tilde{y}_{ij}|x_{ij}, W)$ (the current entropy for the token $x_{ij}$).
$p(W|\mathcal{D}_{u})$ is the posterior distribution.
As the calculation of Eq.~\ref{eql:information-gain} is intractable, we utilize MC dropout in BNN to perform approximation. Specifically, we assume that the posterior distribution $p(W|\mathcal{D}_{u})$ can be replaced with dropout distribution $q_{\theta}(W)$. Thus, we can sample $T$ masked model weight $\{\widetilde{W}_t\}_{t=1}^{T}\sim q_{\theta}(W)$, and calculate the approximation value of $\mathbb{B}$ as:
\begin{equation}
\begin{aligned}
\hat{\mathbb{B}}(\tilde{y}_{ij}, W|x_{ij}, \mathcal{D}_{u}) =
&- \sum_{c\in\mathcal{Y}}(\frac{1}{T}\sum_{t=1}^{T}\hat{p}_c^t)\log(\frac{1}{T}\sum_{t=1}^{T}\hat{p}_c^t) \\
&+ \frac{1}{T}\sum_{t=1}^{T}\sum_{c\in\mathcal{Y}}\hat{p}_c^t\log(\hat{p}_c^t),
\label{eql:information-gain-mc}
\end{aligned}
\end{equation}
where $\hat{p}_c^t=p(\tilde{y}_{ij}=c|f^{\widetilde{W}_t}(x_{ij}), X_i)$ is the predict probability for the token $x_{ij}$ derived from the $t$-th masked model.
Thus, a lower $\hat{\mathbb{B}}(\tilde{y}_{ij}, W|x_{ij}, \mathcal{D}_{u})$ value means that the model is more certain about the prediction, as higher certainty corresponds to lower information gain.
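Concretely, the approximation in Eq.~\ref{eql:information-gain-mc} can be computed from the $T$ sampled probability tensors as in the following sketch; the tensor layout is an assumption for exposition.
\begin{verbatim}
import torch

def token_information_gain(prob_samples, eps=1e-10):
    """Per-token information gain estimated from T MC-dropout passes.

    prob_samples : (T, n_tok, n_classes) class probabilities produced by
                   T stochastically masked models
    """
    mean_p = prob_samples.mean(dim=0)                    # (n_tok, n_classes)
    predictive_entropy = -(mean_p * (mean_p + eps).log()).sum(dim=-1)
    expected_entropy = -(prob_samples * (prob_samples + eps).log()) \
        .sum(dim=-1).mean(dim=0)
    return predictive_entropy - expected_entropy         # (n_tok,)
\end{verbatim}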
\noindent\textbf{Tokens Sampling}.
For reliable token selection, we jointly consider the confidence and certainty value.
For the confidence of the prediction $\tilde{y}_{ij}$, we have:
\begin{equation}
\begin{aligned}
s_{ij}^{cf}=\frac{1}{T}\sum_{t=1}^{T}p(y=\tilde{y}_{ij}|f^{\widetilde{W}_t}(x_{ij}), X_i).
\label{eql:confidence-score}
\end{aligned}
\end{equation}
A higher confidence value $s_{ij}^{cf}$ means the model is more confident for the pseudo label $\tilde{y}_{ij}$.
Theoretically, selecting language tokens with high confidence predictions moves decision boundaries to low-density regions, which is satisfied with the low-density assumption~\cite{Rizve2021In}.
However, many of these selected tokens with higher confidence are incorrect due to the poor calibration of neural networks~\cite{Guo2017On}, which brings the conformation bias problem.
To reduce the wrong labels, we additionally design a certainty score $s_{ij}^{ct}$ based on the uncertainty estimation as:
\begin{equation}
\begin{aligned}
s_{ij}^{ct}=1 - \hat{\mathbb{B}}(\tilde{y}_{ij}, W|x_{ij}, \mathcal{D}_{u}),
\label{eql:certainty-score}
\end{aligned}
\end{equation}
Intuitively, if the model is always certain about some tokens, these tokens might be too easy to contribute any additional information.
To this end, we can obtain the final sampling weight for each token as:
\begin{equation}
\begin{aligned}
s_{ij}=\frac{s_{ij}^{cf}\times s_{ij}^{ct}}{\sum_{x_{ij}\in X_i}s_{ij}^{cf}\times s_{ij}^{ct}}.
\label{eql:sampling-weight}
\end{aligned}
\end{equation}
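A minimal sketch of this selection step is given below; the non-negativity clamp on the certainty score is a safeguard added for this illustration and is not part of the formulation above.
\begin{verbatim}
import torch

def token_sampling_weights(prob_samples, pseudo_labels, info_gain):
    """Combine confidence and certainty into normalized sampling weights.

    prob_samples  : (T, n_tok, n_classes) MC-dropout probabilities
    pseudo_labels : (n_tok,) long tensor of hard pseudo labels
    info_gain     : (n_tok,) estimated information gain B_hat
    """
    mean_p = prob_samples.mean(dim=0)
    s_cf = mean_p.gather(1, pseudo_labels.unsqueeze(1)).squeeze(1)  # confidence
    s_ct = (1.0 - info_gain).clamp(min=0.0)   # certainty, kept non-negative
    weights = s_cf * s_ct
    return weights / weights.sum()

# Example usage: sample k reliable tokens and build the masked matrix M_i.
# picked = torch.multinomial(weights, k, replacement=False)
# mask = torch.zeros_like(weights); mask[picked] = 1.0
\end{verbatim}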
\subsection{Robust Learning for NSL}
\noindent\textbf{Masked Sequence Labeling (MSL).}
With the measure of model confidence and certainty, we can use them to sample reliable tokens for each sentence.
However, since each token is subject to positional and label dependency constraints, we cannot simply remove the tokens that are not sampled.
Thus, we define a masked matrix to record the sampling results, i.e.,
\begin{equation}
\begin{aligned}
M_{i, j} = \left\{
\begin{array}{rcl}
1 & x_{ij}~\text{has been sampled}; \\
0 & x_{ij}~\text{has not been sampled}; \\
\end{array}
\right.
\label{eql:visible-matrix}
\end{aligned}
\end{equation}
When $M_{ij}=0$, the corresponding label may be noisy and should be masked during self-training.
Generally, we can define the following cross-entropy function as the training loss:
\begin{equation}
\begin{aligned}
l(X_i, \widetilde{Y}_{i}, M_i)=-\frac{1}{L_i'}\sum_{j=1}^{L}\mathbb{I}(M_{ij}=1)\log p_{W}(x_{ij}, \tilde{y}_{ij}),
\label{eql:loss-cross-entropy}
\end{aligned}
\end{equation}
where $p_{W}(x_{ij}, \tilde{y}_{ij})=p(y=\tilde{y}_{ij}|f^{W}(x_{ij}), X_i)$ is the prediction probability derived from the student model, and $L_i'=\sum_{j=1}^{L}\mathbb{I}(M_{ij}=1)$ is the number of selected tokens in $X_i$. $\mathbb{I}(\cdot)$ denotes the indicator function.
However, even the selected tokens may carry incorrect pseudo labels despite their higher confidence and certainty.
To explicitly mitigate the effect of label noise, we follow~\cite{Tsai2022Contrast} and utilize the partially Huberised cross-entropy (PHCE) loss as our noise-robust loss, which is based on a simple variant of gradient clipping for classification losses such as cross-entropy.
Hence, the loss function in Eq.~\ref{eql:loss-cross-entropy} can be modified as:
\begin{equation}
\begin{aligned}
l(X_i, \widetilde{Y}_{i}, M_i)=\frac{1}{L_i'}\sum_{j=1}^{L}\mathbb{I}(M_{ij}=1)\phi(x_{ij}, \tilde{y}_{ij}),
\label{eql:loss-1}
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
\phi(x, y) = \left\{
\begin{array}{rcl}
-\tau p_{W}(x, y)
+ \log\tau + 1 & p_{W}(x, y)\leq 1/\tau; \\
-\log p_{W}(x, y) & p_{W}(x, y)>1/\tau; \\
\end{array}
\right.
\label{eql:loss-phce}
\end{aligned}
\end{equation}
is the PHCE loss function and $\tau>1$ is a hyper-parameter. Thus, a model trained with the loss in Eq.~\ref{eql:loss-1} is more robust to noisily labeled tokens than one trained with the common cross-entropy.
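The following is a small sketch of the resulting masked PHCE loss for a single sentence; the function signature and the default value of $\tau$ are illustrative assumptions.
\begin{lstlisting}[language=Python]
import math
import torch

def masked_phce_loss(log_probs, pseudo_labels, mask, tau=10.0):
    """Masked sequence labeling loss with the PHCE loss.

    log_probs:     (L, C) log-probabilities of the student model.
    pseudo_labels: (L,) hard pseudo labels from the teacher.
    mask:          (L,) 0/1 entries of the mask matrix M_i.
    tau:           PHCE hyper-parameter (tau > 1); the default is illustrative.
    """
    mask = mask.float()
    # p_W(x_ij, y~_ij): probability of the pseudo label under the student.
    p = log_probs.gather(1, pseudo_labels.unsqueeze(1)).squeeze(1).exp()
    phce = torch.where(p <= 1.0 / tau,
                       -tau * p + math.log(tau) + 1.0,  # linearized branch for small p
                       -p.clamp_min(1e-12).log())       # standard -log p branch
    return (phce * mask).sum() / mask.sum().clamp_min(1.0)
\end{lstlisting}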
\begin{algorithm}[t]
\caption{Self-training Procedure of {\model}}
\label{alg:train}
\begin{small}
\begin{algorithmic}[1]
\REQUIRE Neural model $f^{W_{0}}$, labeled data $\mathcal{D}_{l}$, unlabeled data $\mathcal{D}_{u}$.
\STATE Initialize a teacher model $f^{W^{*}}=f^{W_0}$;
\WHILE{not converged}
\STATE Fine-tune the teacher model $f^{W^{*}}$ over the labeled data $\mathcal{D}_{l}$;
\STATE Pseudo annotate each unlabeled sentence $X_i\in\mathcal{D}_{u}$ by Eq.~\ref{eql:pseudo-label} to obtain the hard labels $\widetilde{Y}_i$;
\STATE Initialize a student model $f^{W}=f^{W_{0}}$;
\FOR{$X_i\subseteq\mathcal{D}_{u}$}
\STATE Obtain the confidence score $s_{ij}^{cf}$ for $x_{ij}\in X_i$ by Eq.~\ref{eql:confidence-score};
\STATE Obtain the certainty score $s_{ij}^{ct}$ for $x_{ij}\in X_i$ by Eq.~\ref{eql:certainty-score};
\STATE Sample reliable tokens by the sampling weight in Eq.~\ref{eql:sampling-weight}, and generate a masked matrix $M_{i}$;
\STATE Calculate the training loss $l(X_{i}, \widetilde{Y}_{i}, M_{i})$ in Eq.~\ref{eql:loss-1};
\STATE Calculate the regularization loss $R(f^{W}, X_i)$ in Eq.~\ref{eql:loss-regularization};
\STATE Update the model $f^{W}$ by minimizing $\mathcal{L}(W)$ in Eq.~\ref{eql:loss};
\ENDFOR
\STATE Update the teacher model $f^{W^{*}}=f^{W}$;
\ENDWHILE
\RETURN The teacher model $f^{W^{*}}$.
\end{algorithmic}
\end{small}
\end{algorithm}
\noindent\textbf{Gaussian-based Consistency Regularization (GCR).}
During iterative self-training, it is possible that the model biases the sampling process toward picking \emph{easier} tokens, which have higher confidence and certainty. This inevitably leads to the student model over-fitting on these frequently selected samples.
Previous methods~\cite{Chen2020MixText, Xie2020Unsupervised} address this problem with back translation~\cite{Sennrich2016Sennrich}, which generates instance-level augmented data by translating each sentence into a different language and back. However, this cannot be directly applied to token-level tasks~\cite{Lee2021Learning}.
Instead, we design Gaussian-based consistency regularization (GCR), which perturbs token embeddings in the hidden space and requires the model predictions to remain consistent.
Given one pseudo-labeled sentence $X_i=\{x_{ij}\}_{j=1}^{L}$, we can obtain the corresponding contextual representations $\mathbf{H}_i=\{\mathbf{h}_{ij}\}_{j=1}^{L}$ derived from the final hidden layer output of the student model (i.e., BERT). $\mathbf{h}_{ij}\in\mathbb{R}^{h}$ denotes the representations of the token $x_{ij}$, where $h$ is the hidden size.
We assume that token embeddings follow Gaussian distributions~\cite{Das2022CONTaiNER, Lee2021Learning}, i.e., $\mathbf{h}_{ij}\sim\mathcal{N}(\bm{\mu}_{ij}, \bm{\Sigma}_{ij})$. Specifically, we use two projection networks $\mathcal{F}_{\mu}(\cdot)$ and $\mathcal{F}_{\Sigma}(\cdot)$ with ReLU activation to produce the Gaussian parameters $\bm{\mu}_{ij}$ and $\bm{\Sigma}_{ij}$ for each token $x_{ij}$. Formally, we have:
\begin{equation}
\begin{aligned}
\bm{\mu}_{ij} = \mathcal{F}_{\mu}(\mathbf{h}_{ij}),
\bm{\Sigma}_{ij} = \mathcal{F}_{\Sigma}(\mathbf{h}_{ij}),
\label{eql:gaussian-parameters}
\end{aligned}
\end{equation}
where $\bm{\mu}_{ij}\in\mathbb{R}^{h}$, $\bm{\Sigma}_{ij}\in\mathbb{R}^{h}$ represent mean and diagonal covariance of the Gaussian embedding, respectively.
We leverage the reparameterization trick~\cite{Kingma2014Auto} to perturb the input $x_{ij}$ with the sampled noise without altering its semantics. Specifically, we generate $K$ perturbed representations $\{\hat{\mathbf{h}}_{ij}^{(k)}\}_{k=1}^{K}$, where
\begin{equation}
\begin{aligned}
\hat{\mathbf{h}}_{ij}^{(k)} = \mathbf{h}_{ij}\odot(\bm{\mu}_{ij} + \bm{\Sigma}_{ij}\odot\epsilon^{(k)}).
\label{eql:gaussian-perturbation}
\end{aligned}
\end{equation}
Here, $\epsilon^{(k)}\sim\mathcal{N}(\mathbf{0}, \mathbf{I}_{h})$, where $\mathbf{0}\in\mathbb{R}^{h}$ is the zero vector and $\mathbf{I}_{h}\in\mathbb{R}^{h\times h}$ is the identity matrix. $\odot$ denotes element-wise multiplication.
Afterward, a KL divergence objective is used to enforce consistency between the probability distributions of the original semantic representation and its perturbations for each token $x_{ij}$:
\begin{equation}
\begin{aligned}
R(f^{W}, X_{i}) = \frac{1}{L\cdot K}\sum_{j=1}^{L}\sum_{k=1}^{K}D_{\text{KL}}\big(p_{W}(y|\mathbf{h}_{ij})|| p_{W}(y|\hat{\mathbf{h}}_{ij}^{(k)})\big)
\label{eql:loss-regularization}
\end{aligned}
\end{equation}
where $D_{\text{KL}}(\cdot||\cdot)$ denotes the KL divergence function, and $p_{W}(y|\cdot)$ is the probability distribution derived from the student model.
Finally, we update the student model by minimizing the following objective:
\begin{equation}
\begin{aligned}
\mathcal{L}(W) = \sum_{X_i\in\mathcal{D}_u}\big(l(X_i, \widetilde{Y}_i, M_i) + \lambda R(f^{W}, X_i)\big),
\label{eql:loss}
\end{aligned}
\end{equation}
where $\lambda$ is the pre-defined hyper-parameter balancing the regularization term.
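A minimal sketch of GCR is shown below, assuming single-layer projection networks and a generic token classifier; the class name, network sizes, and default $K$ are illustrative rather than the released implementation. The returned value corresponds to $R(f^{W}, X_i)$ and is added to the masked loss with weight $\lambda$ to obtain $\mathcal{L}(W)$.
\begin{lstlisting}[language=Python]
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianConsistency(nn.Module):
    """Sketch of Gaussian-based consistency regularization (GCR)."""

    def __init__(self, hidden_size):
        super().__init__()
        self.f_mu = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.ReLU())
        self.f_sigma = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.ReLU())

    def forward(self, h, classifier, k=2):
        """h: (L, hidden) token representations; classifier maps them to logits."""
        mu, sigma = self.f_mu(h), self.f_sigma(h)          # Gaussian parameters per token
        p_orig = F.softmax(classifier(h), dim=-1)          # distribution on the original h
        reg = 0.0
        for _ in range(k):                                 # K perturbations per token
            eps = torch.randn_like(h)                      # reparameterization trick
            h_pert = h * (mu + sigma * eps)                # perturbed representation
            log_p_pert = F.log_softmax(classifier(h_pert), dim=-1)
            # KL(p(y|h) || p(y|h_pert)), averaged over the L tokens
            reg = reg + F.kl_div(log_p_pert, p_orig, reduction='batchmean')
        return reg / k
\end{lstlisting}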
\subsection{Self-training Procedure}
The training algorithm of our~{\model} framework is shown in Algorithm~\ref{alg:train}. Specifically, at each iteration, we fine-tune a teacher model $f^{W^{*}}$ on the few-shot labeled data $\mathcal{D}_{l}$ and pseudo annotate the unlabeled data $\mathcal{D}_{u}$ (Algorithm~\ref{alg:train}, Lines 3-4). Then, we select reliable tokens in each sentence based on model confidence and certainty (Lines 7-9), and use them to update the parameters of the student model $f^{W}$ by minimizing the final loss $\mathcal{L}(W)$ (Lines 5, 10-12).
Finally, the trained student model $f^{W}$ is used to initialize a new teacher $f^{W^{*}}$, and the above steps are repeated until convergence.
\section{Experiments}
\subsection{Datasets and Implementation Details}
We choose six widely used benchmarks to evaluate our~{\model} framework, including SNIPS~\footnote{\url{https://github.com/sonos/nlu-benchmark/tree/master/2017-06-custom-intent-engines}.}~\cite{Coucke2018Snips} and Multiwoz~\footnote{\url{https://github.com/budzianowski/multiwoz}.}~\cite{Budzianowski2018MultiWOZ} for slot filling, MIT Movie~\footnote{\url{https://groups.csail.mit.edu/sls/downloads/movie/}.}~\cite{Liu2013Query}, MIT Restaurant~\footnote{\url{https://groups.csail.mit.edu/sls/downloads/restaurant/}.}~\cite{Liu2013Asgard}, CoNLL-03~\cite{Sang2003Introduction} and OntoNotes~\cite{weischedel2013ontonotes} for NER. The statistics of each dataset are shown in Table~\ref{tab:datasets}.
For each dataset, we use a greedy-based sampling strategy to randomly select 10-shot labeled data per class for the few-shot labeled training set and validation set, while the remaining data are viewed as unlabeled data.
During self-training, the teacher and student models share the same architecture. By default, we choose BERT-base-uncased~\cite{Devlin2019BERT} from HuggingFace\footnote{\url{https://huggingface.co/transformers}.}
with a softmax layer as the base encoder. We use grid search to tune the hyper-parameters.
We use five different random seeds among $\{12, 21, 42, 87, 100\}$ for the dataset split and training settings.
We report the averaged F1 scores with the standard deviation on the whole testing set.
We implement our framework in PyTorch 1.8 and run all experiments on NVIDIA V100 GPUs.
\subsection{Baselines}
We adopt several state-of-the-art semi-supervised methods as our strong baselines~\footnote{We do not consider $N$-way $K$-shot few-shot NSL baselines because their learning settings differ from ours.}.
\textbf{Standard self-training (SST)}~\cite{Huang2021Few} is a simple method that generates hard pseudo labels and uses them to train a student model.
\textbf{VAT}~\cite{Miyato2019Virtual} and \textbf{SeqVAT}~\cite{Chen2020SeqVAT} utilize adversarial training with consistency learning to improve robustness.
\textbf{CVT}~\cite{Clark2018Semi} is based on cross-view training for semi-supervised sequence labeling.
\textbf{MetaST}~\cite{Wang2021Meta} aims to select reliable validation data and trains the student model by re-weighting strategy.
We also include the standard \textbf{Fine-tuning} as the supervised learning baseline.
We reproduce their results with the same settings.
\begin{table}
\centering
\resizebox{\linewidth}{!}{
\begin{small}
\begin{tabular}{l | ccccc}
\toprule
\bf Datasets &\bf Domain &\bf Type &\bf \#Class &\bf \#Train & \bf \#Test \\
\midrule
SNIPS & Dialogue & Slot Filling & 54 & 13.6k & 0.7k \\
Multiwoz & Dialogue & Slot Filling & 15 & 20.3k & 2.8k \\
MIT Movie & Review & NER & 13 & 7.8k & 2.0k \\
MIT Restaurant & Review & NER & 9 & 7.7k & 1.5k \\
CoNLL-03 & News & NER & 5 & 14.0k & 3.5k \\
OntoNotes & General & NER & 19 & 60.0k & 8.3k \\
\bottomrule
\end{tabular}
\end{small}
}
\caption{The statistics of each dataset.}
\label{tab:datasets}
\end{table}
\begin{table*}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{lcccccccc}
\toprule
\bf Baselines & \bf SNIPS & \bf Multiwoz & \bf MIT Movie & \bf MIT Restaurant & \bf CoNLL-03 & \bf OntoNotes & \bf Avg. & $\Delta$ \\
\midrule
\multicolumn{9}{l}{\textit{\textbf{Full Data}}}\\
Fine-tuning & 97.00 & 88.00 & 87.30 & 79.00 & 91.90 & 89.20 & 88.73 & - \\
\midrule
\multicolumn{9}{l}{\textit{\textbf{Few Labeled Data (10-shot)}}}\\
Fine-tuning & 79.10\small{\textpm0.38} & 71.40\small{\textpm0.25} & 69.50\small{\textpm0.40} & 53.75\small{\textpm0.19} & 71.82\small{\textpm0.28} & 73.15\small{\textpm0.25} & 69.79 & +0.00 \\
\midrule
\multicolumn{9}{l}{\textit{\textbf{Few Labeled Data (10-shot) + Unlabeled Data}}}\\
SST & 81.07\small{\textpm0.40} & 72.25\small{\textpm0.11} & 71.14\small{\textpm0.22} & 55.13\small{\textpm0.29} & 72.86\small{\textpm0.34} & 75.07\small{\textpm0.18} & 71.25 & +1.46\\
VAT & 79.70\small{\textpm0.22} & 72.08\small{\textpm0.30} & 68.80\small{\textpm0.39} & 54.26\small{\textpm0.33} & 72.60\small{\textpm0.33} & 73.38\small{\textpm0.27} & 70.14 & +0.35 \\
SeqVAT & 79.62\small{\textpm0.41} & 72.17\small{\textpm0.34} & 68.83\small{\textpm0.35} & 54.02\small{\textpm0.40} & 73.18\small{\textpm0.28} & 73.71\small{\textpm0.30} & 70.23 & +0.44\\
CVT & 79.22\small{\textpm0.44} & 71.42\small{\textpm0.38} & 69.03\small{\textpm0.51} & 54.17\small{\textpm0.42} & 71.70\small{\textpm0.72} & 73.88\small{\textpm0.40} & 69.90 & +0.11\\
MetaST & \underline{86.74\small{\textpm0.33}} & \underline{77.34\small{\textpm0.51}} & \underline{77.52\small{\textpm0.39}} & \underline{63.02\small{\textpm0.29}} & \underline{76.88\small{\textpm0.41}} & \underline{77.69\small{\textpm0.24}} & \underline{76.53} & \underline{+6.74}\\
\midrule
\textbf{SeqUST} & \bf 87.33\small{\textpm0.30} & \bf 77.98\small{\textpm0.26} & \bf 77.73\small{\textpm0.22} & \bf 64.19\small{\textpm0.36} & \bf 79.10\small{\textpm0.27} & \bf 80.33\small{\textpm0.44} & \bf 77.78 & \bf +7.99\\
\bottomrule
\end{tabular}
}
\caption{The performance comparison of F1 scores (\%) with standard deviations on six benchmarks. $\Delta$ denotes the improvement over the few-shot fine-tuning baseline. All models (except fine-tuning with full data) are trained with 10-shot labeled samples per class, and the overall F1 is aggregated over five runs with different random seeds.}
\label{tab:main-result}
\end{table*}
\begin{table}
\centering
\resizebox{\linewidth}{!}{
\begin{small}
\begin{tabular}{l | ccc}
\toprule
\bf Models &\bf SNIPS &\bf MIT Movie &\bf CONLL-03 \\
\midrule
SST & 81.07\small{\textpm0.40} & 72.25\small{\textpm0.11} & 72.86\small{\textpm0.34} \\
\textbf{\model} & \bf 87.33\small{\textpm0.30} & \bf 77.73\small{\textpm0.22} & \bf 79.10\small{\textpm0.27} \\
\midrule
\quad w/o. selection & 82.37\small{\textpm0.39} & 70.12\small{\textpm0.29} & 73.40 \small{\textpm0.37} \\
\quad w/o. confidence & 86.98\small{\textpm0.33} & 76.99\small{\textpm0.29} & 77.60 \small{\textpm0.29} \\
\quad w/o. certainty & 83.33\small{\textpm0.30} & 72.29\small{\textpm0.33} & 73.16\small{\textpm0.31} \\
\quad w/o. PHCE loss & 87.01\small{\textpm0.34} & 77.32\small{\textpm0.27} & 78.31\small{\textpm0.35} \\
\quad w/o. GCR & 87.11\small{\textpm0.31} & 77.25\small{\textpm0.23} & 77.93\small{\textpm0.31} \\
\bottomrule
\end{tabular}
\end{small}
}
\caption{The ablation study results over 10-shot labeled data per class (F1 score \%).}
\label{tab:ablation}
\end{table}
\subsection{Main Results}
Table~\ref{tab:main-result} illustrates the main results of our framework~{\model} compared with other baselines.
The results of fine-tuning over full data are the ceiling performance.
With only 10 labeled training data per class, we achieve the best averaged F1 score of 77.78\%. In addition, {\model} outperforms the few-shot fine-tuning and standard self-training by 7.99\% and 6.36\%, respectively.
The vanilla fine-tuning method without any unlabeled data performs worst, as expected, since the amount of labeled data is far too small for adequate parameter updates.
The performance of the self-training-based approaches (including SST, MetaST, and SeqUST) is consistently better than that of the others, which indicates the effectiveness of the self-training paradigm.
{MetaST} is a strong baseline that performs a token-level re-weighting strategy in self-training. Yet, {MetaST} does not perform robust learning for noisy labels that receive high weights and does not consider consistency regularization, so its performance is sub-optimal compared with ours.
Results show that {\model} improves over the state-of-the-art {MetaST} by 1.25\%.
\subsection{Ablation Study}
In Table~\ref{tab:ablation}, we randomly choose three datasets to conduct an ablation study to investigate the characteristics of the main components in~{\model}.
The results show that removing any module affects the model performance, while all variants still outperform standard self-training.
When removing the reliable token selection (w/o. selection), the performance declines considerably because many noisy labels hurt the model's effectiveness.
We also observe that the result of w/o. certainty is lower than that of w/o. confidence, which demonstrates that model uncertainty is more useful for label denoising and greatly alleviates the confirmation bias issue.
Moreover, the use of robust loss (i.e., the PHCE loss in MSL) and consistency regularization consistently contribute to the robustness improvement when training on pseudo-labeled data.
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{l | ccc | ccc}
\toprule
& \multicolumn{3}{c|}{\textbf{SeqUST}} & \multicolumn{3}{c}{\textbf{MetaST}} \\
\bf \#-shot$\longrightarrow$ &\bf 20 &\bf 50 &\bf 100 &\bf 20 &\bf 50 &\bf 100 \\
\midrule
SNIPS & 92.13 & 93.44 & 95.60 & 91.99 & 92.82 & 95.10 \\
Multiwoz & 79.70 & 82.05 & 83.36 & 79.45 & 81.34 & 84.27 \\
MIT Movie & 80.80 & 83.16 & 84.98 & 80.29 & 82.75 & 84.35 \\
MIT Restaurant & 69.02 & 73.95 & 75.70 & 67.93 & 72.83 & 75.28 \\
CoNLL-03 & 81.74 & 83.20 & 85.59 & 78.54 & 82.34 & 85.10 \\
OntoNotes & 82.26 & 84.00 & 85.77 & 80.78 & 82.44 & 85.01 \\
\bottomrule
\end{tabular}
}
\caption{The F1 score with different numbers (20/50/100 examples per class) of labeled data.}
\label{tab:data_efficiency}
\end{table}
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{small}
\begin{tabular}{l | ccc}
\toprule
\bf Selection Strategy &\bf SNIPS &\bf MIT Movie &\bf CONLL-03 \\
\midrule
None & 45.75 & 51.77 & 49.88 \\
Confidence & 43.18 & 48.06 & 45.22 \\
Certainty & 20.42 & 27.60 & 29.10 \\
\bf Confidence + Certainty & \bf 17.76 & \bf 23.10 & \bf 24.63 \\
\bottomrule
\end{tabular}
\end{small}
}
\caption{The error rate (\%) of different selection strategies. None means that no token selection is performed.}
\label{tab:selection-noise-rate}
\end{table}
\vspace{-.25em}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\linewidth]{case}
\caption{Cases of pseudo-labeled texts. The language tokens in \textbf{bold} are sampled as reliable tokens.}
\label{fig:cases}
\end{figure*}
\subsection{Further Analyses}
\noindent\textbf{Data Efficiency.}
We further explore the model's performance with different numbers of labeled examples per class (\#-shot) among $\{20, 50, 100\}$.
Results in Table~\ref{tab:data_efficiency} illustrate that the performance gradually improves as the number of labeled data increases, as expected.
In addition, we also find that our~{\model} consistently outperforms the strong baseline~{MetaST} regardless of the number of labeled training examples. This can be attributed to the introduction of uncertainty-aware self-training with well-designed robust learning approaches.
\begin{figure}
\centering
\begin{tabular}{cc}
\begin{minipage}[t]{0.326\linewidth}
\includegraphics[width = 1\linewidth]{robust-1}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width = 1\linewidth]{robust-2}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width = 1\linewidth]{robust-3}
\end{minipage}
\end{tabular}
\begin{tabular}{cc}
\begin{minipage}[t]{0.326\linewidth}
\includegraphics[width = 1\linewidth]{robust-4}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width = 1\linewidth]{robust-5}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\includegraphics[width = 1\linewidth]{robust-6}
\end{minipage}
\end{tabular}
\caption{The performance (F1 score \%) of different self-training iterations over six benchmarks.}
\label{fig:robust}
\vspace{-.25em}
\end{figure}
\noindent\textbf{Effectiveness of Reliable Token Selection.}
To demonstrate the effectiveness of reliable token selection, we choose three datasets to investigate the error rate on the unlabeled data. As shown in Table~\ref{tab:selection-noise-rate}, we obtain the lowest error rate with both token-level confidence and certainty, indicating the merit of discarding noisy pseudo labels.
In addition, we respectively choose two cases from MIT Movie and CONLL-03 to show the pseudo labels and selected tokens. The results in Fig.~\ref{fig:cases} show that 1) most tokens can be correctly predicted and 2) our selection strategy can identify some noisy labels.
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\begin{minipage}[t]{0.4\linewidth}
\includegraphics[width = 1\linewidth]{visualization-1.png}
\end{minipage}
\begin{minipage}[t]{0.4\linewidth}
\includegraphics[width = 1\linewidth]{visualization-2.png}
\end{minipage}
\end{tabular}
\caption{The t-SNE visualization of~{\model} w/o. GCR (left) and w/ GCR (right).}
\label{fig:visualization}
\end{figure}
\noindent\textbf{Effectiveness of the Robust Learning.}
We first demonstrate the importance of PHCE loss for robust learning in the masked sequence labeling task.
As shown in Fig.~\ref{fig:robust}, our framework consistently outperforms the standard self-training by a large margin, which does not use robust learning methods.
Moreover, {MetaST} does not outperform our approach, indicating that the model trained with explicit noise masking and the PHCE loss is more robust than one trained with the re-weighting strategy.
In Fig.~\ref{fig:visualization}, we choose the CoNLL-03 testing set and use t-SNE~\cite{van2008visualizing} tool to demonstrate the token representations in the semantic space.
The model trained with Gaussian-based consistency regularization produces clearer boundaries between classes, corroborating our conclusion that GCR alleviates the over-fitting problem and yields better generalization capability.
\begin{table}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{l | cc | cc}
\toprule
\multirow{2}{*}{\textbf{Datasets}} & \multicolumn{2}{c|}{\textbf{BERT}} & \multicolumn{2}{c}{\textbf{BiLSTM}} \\
& SoftMax & CRF & SoftMax & CRF \\
\midrule
SNIPS & 87.33\small{\textpm0.30} & 86.95\small{\textpm0.25} & 84.94\small{\textpm0.20} & 85.16\small{\textpm0.28} \\
Multiwoz & 77.98\small{\textpm0.26} & 78.20\small{\textpm0.38} & 73.13\small{\textpm0.19} & 73.92\small{\textpm0.20} \\
MIT Movie & 77.73\small{\textpm0.22} & 78.02\small{\textpm0.24} & 72.05\small{\textpm0.26} & 73.12\small{\textpm0.31} \\
MIT Restaurant & 64.19\small{\textpm0.36} & 66.17\small{\textpm0.26} & 60.83\small{\textpm0.42} & 61.11\small{\textpm0.47} \\
CoNLL-03 & 79.10\small{\textpm0.27} & 78.21\small{\textpm0.35} & 75.53\small{\textpm0.22} & 75.22\small{\textpm0.23} \\
OntoNotes & 80.33\small{\textpm0.44} & 79.74\small{\textpm0.40} & 76.02\small{\textpm0.34} & 76.28\small{\textpm0.34} \\
\midrule
Avg. & 77.78 & 77.88 & 73.75 & 74.14 \\
\bottomrule
\end{tabular}
}
\caption{The F1 score (\%) of different base encoders.}
\label{tab:base-encoder}
\end{table}
\subsection{Performance of Different Base Encoders}
We end this section with a comparison with other base encoders in Table~\ref{tab:base-encoder}, including BERT+SoftMax, BERT+CRF, BiLSTM+SoftMax, and BiLSTM+CRF.
Results show that our framework can be applied to arbitrary encoders.
In addition, CRF is able to exploit label dependencies despite the limited data and further improves the overall performance of BERT and BiLSTM by 0.10\% and 0.39\%, respectively.
\section{Conclusion}
We propose a novel~{\model} framework for semi-supervised neural sequence labeling based on uncertainty-aware self-training.
We utilize token-level model confidence and certainty to judiciously select reliable tokens in each unlabeled sentence. To mitigate noisy labels, we introduce a masked sequence labeling task with a noise-robust PHCE loss. We also present Gaussian-based consistency regularization to alleviate the over-fitting problem.
Extensive experiments over multiple benchmarks show that our framework consistently outperforms strong baselines in low-resource settings.
In the future, we will further improve the performance of denoising and apply our framework to other token-level scenarios and NLP tasks, such as extractive question answering, event extraction, etc.
\section*{Acknowledgements}
This work was partially supported by the National Natural Science Foundation of China under Grant No. U1911203,
Alibaba Group through the Alibaba Innovation Research Program,
the National Natural Science Foundation of China under Grant No. 61877018,
the Research Project of Shanghai Science and Technology Commission (20dz2260300) and The Fundamental Research Funds for the Central Universities.
|
{
"arxiv_id": "2302.08676",
"language": "en",
"timestamp": "2023-02-20T02:06:20",
"url": "https://arxiv.org/abs/2302.08676",
"yymm": "2302"
} |
\section{Notation and Background}\label{sec:background}
A database schema $\schemasymb(D) = \{\mathrm{Sch}(R_1), \ldots, \mathrm{Sch}(R_n)\}$ is a set of relation schemas $\mathrm{Sch}(R_i) = \tuple{A_1,\; \ldots,\; A_n}$.
Use $\arity{\schemasymb(R)}$ to denote the number of attributes in $\schemasymb(R)$. An instance $D$ for schema $\schemasymb(D)$ is a set of relation instances with one relation per schema in $\schemasymb(D)$: $D=\{R_1, \dots, R_n\}$.
Assuming a universal value domain $\mathbb{D}$, a tuple with schema $\schemasymb(R)$ is an element from $\mathbb{D}^{\arity{\schemasymb(R)}}$.
A
$\ensuremath{\mathcal{K}}\xspace$-relation~\cite{Green:2007:PS:1265530.1265535} annotates each tuple with an element of a (commutative) semiring.
In this paper, we focus on
$\ensuremath{\mathbb{N}}\xspace$-relations. An $\ensuremath{\mathbb{N}}\xspace$-relation of arity $n$ is a function that maps each tuple ($\mathbb{D}^n$) in the relation to an annotation in $\ensuremath{\mathbb{N}}\xspace$ representing the tuple's multiplicity. Tuples not in the relation are mapped to multiplicity $0$.
$\ensuremath{\mathbb{N}}\xspace$-relations have finite support (only finitely many tuples are mapped to a non-zero multiplicity).
Since $\ensuremath{\mathcal{K}}\xspace$-relations are functions from tuples to annotations, it is customary to denote the annotation of a tuple $t$ in relation $R$ as $R(t)$.
A $\ensuremath{\mathcal{K}}\xspace$-database is a set of $\ensuremath{\mathcal{K}}\xspace$-relations.
Green et al.~\cite{Green:2007:PS:1265530.1265535} used the semiring operations to express positive relational algebra ($\ensuremath{\mathcal{RA}^{+}}$) operations over $\ensuremath{\mathcal{K}}\xspace$-relations as shown in \Cref{fig:semKLift}.
Notably for us, for the natural numbers semiring $\ensuremath{\mathbb{N}}\xspace = \tuple{\semN, +, \times, 0, 1}$,
these semantics are equivalent to those of positive bag-relational algebra.
\begin{figure}
\centering
\noindent
\begin{minipage}{0.45\columnwidth}
$$\liftsemiringeval{\pi_A(R)}(t) = \sum_{t':\; t = \pi_A t'} R(t')$$
$$\liftsemiringeval{R \cup S}(t) = R(t) + S(t)$$
\end{minipage}
\begin{minipage}{0.45\columnwidth}
$$\liftsemiringeval{\sigma_\theta(R)}(t) = \begin{cases}
R(t)&\textbf{ if }\theta(t)\\
0 &\textbf{ otherwise}
\end{cases}$$
$$\liftsemiringeval{R \times S}(t) = R(t) \times S(t)$$
\end{minipage}
\\[-2mm]
\setlength{\belowcaptionskip}{-15pt}
\caption{Evaluation semantics $\liftsemiringeval{\cdot}$ that lift the operations of a semiring $\ensuremath{\mathcal{K}}\xspace$ to $\ensuremath{\mathcal{RA}^{+}}$ operations over $\ensuremath{\mathcal{K}}\xspace$-relations.}
\label{fig:semKLift}
\end{figure}
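To illustrate these lifted operations, the following is a minimal Python sketch that represents an $\ensuremath{\mathbb{N}}\xspace$-relation as a dictionary from tuples to multiplicities; the helper names are our own and not part of any system discussed here.
\begin{lstlisting}[language=Python]
from collections import defaultdict

# An N-relation: finite map from tuples to multiplicities (missing tuples -> 0).
R = {('a', 1): 2, ('b', 3): 1}
S = {('a', 1): 1}

def project(rel, attrs):
    """pi_A: sum the multiplicities of all tuples that agree on the kept attributes."""
    out = defaultdict(int)
    for t, n in rel.items():
        out[tuple(t[i] for i in attrs)] += n
    return dict(out)

def select(rel, theta):
    """sigma_theta: a tuple keeps its multiplicity if theta holds, otherwise 0."""
    return {t: n for t, n in rel.items() if theta(t)}

def union(r, s):
    """Bag union: multiplicities are added."""
    out = defaultdict(int)
    for rel in (r, s):
        for t, n in rel.items():
            out[t] += n
    return dict(out)

def product(r, s):
    """Cross product: multiplicities are multiplied."""
    return {t1 + t2: n1 * n2 for t1, n1 in r.items() for t2, n2 in s.items()}

# E.g., project(union(R, S), [0]) yields {('a',): 3, ('b',): 1}.
\end{lstlisting}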
\subsection{Incomplete N-Relations}
\label{sec:incompl-prob-k}
An incomplete $\ensuremath{\mathbb{N}}\xspace$-database $\mathcal{D} = \{D_1, \ldots, D_n\}$ (resp., incomplete $\ensuremath{\mathbb{N}}\xspace$-relation $\mathcal R = \{R_1, \ldots, R_n\}$) is a set of $\ensuremath{\mathbb{N}}\xspace$-databases $D_i$ (resp., $\ensuremath{\mathbb{N}}\xspace$-relations $R_i$) called possible worlds.
Queries over incomplete $\ensuremath{\mathbb{N}}\xspace$-databases use possible world semantics:
The result of a query $Q$ over an incomplete $\ensuremath{\mathbb{N}}\xspace$-database $\mathcal{D}$ is the set of relations $\mathcal{R}$ (possible worlds) derived by evaluating $Q$ over every world in $\mathcal{D}$ using the semantics of \Cref{fig:semKLift}.
In addition to enumerating all possible query results, past work has introduced the concept of \textit{certain} and \textit{possible answers} for set semantics, which are respectively the set of tuples present in all worlds or in at least one world.
Certain and possible answers have been generalized~\cite{GL19a,FH19} to bag semantics as the extrema of the tuple's annotations across all possible worlds.
Formally, the certain and possible annotations of a tuple $t$ in $\mathcal R$ are:
\\[-10mm]
\begin{center}
\begin{align*}
{\certainName}_{\semN}(\mathcal R,t) &\coloneqq \min(\{R(t) \mid R \in \mathcal R \})\\
{\possibleName}_{\semN}(\mathcal R,t) &\coloneqq \max(\{R(t) \mid R \in \mathcal R \})
\end{align*}
\end{center}
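As a small illustration (reusing the dictionary representation of $\ensuremath{\mathbb{N}}\xspace$-relations sketched above), the certain and possible multiplicities are simply the minimum and maximum annotation across the worlds.
\begin{lstlisting}[language=Python]
def certain_and_possible(worlds, t):
    """Certain/possible multiplicity of tuple t over a list of possible worlds,
    each world being a dict from tuples to multiplicities (missing -> 0)."""
    counts = [w.get(t, 0) for w in worlds]
    return min(counts), max(counts)

# E.g., over the worlds [{('a',): 2}, {('a',): 1, ('b',): 1}] the tuple ('a',)
# has certain multiplicity 1 and possible multiplicity 2.
\end{lstlisting}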
\subsection{AU-Databases (\abbrUAADBs)} \label{sec:AU-model}
Using $\ensuremath{\mathcal{K}}\xspace$-relations, we introduced \textit{\abbrUAADBs}~\cite{FH19} (\textit{attribute-annotated uncertain databases}), a special type of $\ensuremath{\mathcal{K}}\xspace$-relation that summarizes an incomplete $\ensuremath{\mathcal{K}}\xspace$-relation by bounding its set of possible worlds.
An \abbrUAADB differs from the classical relational model in two key ways:
First, tuples are not defined as individual points in $\mathbb{D}^n$, but rather as bounding hypercubes specified by lower and upper bounds (and a selected-guess value) for each attribute.
Every such hypercube can represent zero or more tuples contained inside it.
Second, the annotation of each hypercube tuple is also a range of possible annotations (e.g., multiplicities for range-annotated $\ensuremath{\mathbb{N}}\xspace$-relations).
Intuitively, an \abbrUAADB \textit{bounds} a possible world if the hypercubes of its tuples contain all of the possible world's tuples, and the total multiplicity of tuples in the possible world fall into the range annotating the hypercube.
An \abbrUAADB \textit{bounds} an incomplete $\ensuremath{\mathcal{K}}\xspace$-database $\mathcal{D}$ if it bounds all of $\mathcal{D}$'s possible worlds.
\revm{To be able to model, e.g., the choice of repair made by a heuristic data repair algorithm, }
the value and annotation domains of an \abbrUAADB also contain a third component: a \textit{selected-guess} (SGW\xspace) that encodes one world.
Formally, in an \abbrUAADB, attribute values are \textit{range-annotated values} $c = \uv{\lbMarker{c}}{\bgMarker{c}}{\ubMarker{c}}$ from a \textit{range-annotated domain} ${\dataDomain_{\rangeName}}$ that encodes the selected-guess value $\bgMarker{c} \in \mathbb{D}$ and two values ($\lbMarker{c}$, $\ubMarker{c} \in \mathbb{D}$) that bound $\bgMarker{c}$ from below and above.
For any $c \in {\dataDomain_{\rangeName}}$ we have $\lbMarker{c} \leq \bgMarker{c} \leq \ubMarker{c}$.
We call a value $c \in {\dataDomain_{\rangeName}}$ certain if $\lbMarker{c}=\bgMarker{c}=\ubMarker{c}$.
\abbrUAADBs encode bounds on the multiplicities of tuples by using $\semq{\semN} = \tuple{\semq{\semN}, \addOf{\semq{\semN}}, \multOf{\semq{\semN}}, \zeroOf{\semq{\semN}}, \oneOf{\semq{\semN}}}$ annotations on tuples in ${\dataDomain_{\rangeName}}^n$.
The annotation $\ut{\lbMarker{k}}{\bgMarker{k}}{\ubMarker{k}}$ encodes a lower bound on the certain multiplicity of the tuple, the multiplicity of the tuple in the SGW\xspace, and an over-approximation of the tuple's possible multiplicity.
\iftechreport{We note that an \abbrUAADB can be encoded in a relational database by encoding each annotated value as three columns encoding the lower bound, selected-guess and upper bound value.}
\revm{Consider the AU-DB relation $\mathbf{\rela}(A,B)$ with a tuple $(\uv{1}{3}{5}, \uv{a}{a}{a})$ annotated with $\ut{1}{1}{2}$. This tuple represents the fact that each world contains either 1 or 2 tuples with $B=a$ and $A$ between $1$ and $5$. The SGW\xspace contains a tuple $(3,a)$ with multiplicity 1.}
\paragraph{Bounding Databases}
As noted above, an \abbrUAADB summarizes an incomplete $\ensuremath{\mathbb{N}}\xspace$-relation by defining bounds over the possible worlds that comprise it.
To formalize bounds over $\ensuremath{\mathbb{N}}\xspace$-relations, we first define what it means for a range-annotated tuple to bound a set of deterministic tuples.
Let $\textbf{\sffamily \tup}$ be a range-annotated tuple with schema $\tuple{a_1, \ldots, a_n}$ and $t$ be a tuple with the same schema as $\textbf{\sffamily \tup}$.
$\textbf{\sffamily \tup}$ bounds $t$ (denoted $t \sqsubseteq \textbf{\sffamily \tup}$) iff
$\forall{i \in \{1, \ldots ,n\}}:
\lbMarker{\textbf{\sffamily \tup}.a_i} \leq t.a_i \leq \ubMarker{\textbf{\sffamily \tup}.a_i}
$
Note that a single \abbrUAADB tuple may bound multiple deterministic tuples, and conversely that a single deterministic tuple may be bound by multiple \abbrUAADB tuples.
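A minimal sketch of this bounding check, encoding each range-annotated value as a (lower bound, selected guess, upper bound) triple, is shown below; the encoding is our own illustration.
\begin{lstlisting}[language=Python]
def bounds_tuple(au_tuple, t):
    """Check whether the range-annotated tuple au_tuple bounds the deterministic
    tuple t: every attribute value of t must lie within the [lb, ub] interval of
    the corresponding range-annotated value, encoded here as a (lb, sg, ub) triple."""
    return all(lb <= a <= ub for (lb, sg, ub), a in zip(au_tuple, t))

# The AU-tuple from the example above, ((1, 3, 5), ('a', 'a', 'a')),
# bounds (1, 'a'), (3, 'a'), and (5, 'a'), but not (6, 'a').
\end{lstlisting}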
Informally, an AU-relation\xspace bounds a possible world if we can distribute the multiplicity of each tuple in the possible world over the AU-relation\xspace's tuples.
This idea is formalized through \textit{tuple matchings}.
A tuple matching $\mathcal{TM}$ from an $n$-ary AU-relation\xspace $\mathbf{\rela}$ to an $n$-ary relation $R$ is a function $({\dataDomain_{\rangeName}})^{n} \times \mathbb{D}^n \to \ensuremath{\mathbb{N}}\xspace$ that fully allocates the multiplicity of every tuple of $R$:
\begin{align*}
&\forall \textbf{\sffamily \tup} \in {\dataDomain_{\rangeName}}^n: \forall t \not\sqsubseteq \textbf{\sffamily \tup}: \mathcal{TM}(\textbf{\sffamily \tup}, t) = 0\\
&\forall t \in \mathbb{D}^n: \hspace*{-1mm}\sum_{\textbf{\sffamily \tup} \in {\dataDomain_{\rangeName}}^n} \hspace*{-1mm}\mathcal{TM}(\textbf{\sffamily \tup}, t)=R(t)
\end{align*}
$\mathbf{\rela}$ bounds $R$ (denoted $R \sqsubset \mathbf{\rela}$) iff there exists a tuple matching $\mathcal{TM}$ where the total multiplicity allocated to each $\textbf{\sffamily \tup} \in \mathbf{\rela}$ falls within the bounds annotating $\textbf{\sffamily \tup}$:
\begin{align*}
&\forall \textbf{\sffamily \tup} \in {\dataDomain_{\rangeName}}^n:\sum_{t \in \mathbb{D}^n} \mathcal{TM}(\textbf{\sffamily \tup},t) \geq \lbMarker{\mathbf{\rela}(\textbf{\sffamily \tup})}
\;\text{\textbf{and}}\;
\sum_{t \in \mathbb{D}^n} \mathcal{TM}(\textbf{\sffamily \tup},t) \leq \ubMarker{\mathbf{\rela}(\textbf{\sffamily \tup})}
\end{align*}
An \abbrUAADB relation $\mathbf{\rela}$ bounds an incomplete $\ensuremath{\mathbb{N}}\xspace$-relation $\mathcal{R}$ (denoted $\mathcal{R} \sqsubset \mathbf{\rela}$) iff it bounds every possible world (i.e., $\forall R \in \mathcal{R}: R \sqsubset \mathbf{\rela}$), and if projecting down to the selected guess attribute of $\mathbf{\rela}$ results in a possible world of $\mathcal{R}$.
As
shown in~\cite{FH19,FH21}, (i) \abbrUAADB query semantics is closed under $\ensuremath{\mathcal{RA}^{+}}$, set difference and aggregations, and (ii) queries preserve bounds.
That is, if every relation $\mathbf{\rela}_i \in {\mathbf{\db}}$ bounds the corresponding relation of an incomplete database $\mathcal{R}_i \in \mathcal{D}$ (i.e., $\forall i: \mathcal{R}_i \sqsubset \mathbf{\rela}_i$), then for any query $Q$, the results over ${\mathbf{\db}}$ bound the results over $\mathcal{D}$ (i.e., $Q(\mathcal{D}) \sqsubset Q({\mathbf{\db}})$).
\paragraph{Expression Evaluation}
In~\cite{FH21},
we defined a semantics $\seval{e}{\textbf{\sffamily \tup}}$ for evaluating primitive-valued expressions $e$ over the attributes of a range tuple $\textbf{\sffamily \tup}$. These semantics
preserve bounds:
given any expression $e$ and any deterministic tuple $t$ bounded by $\textbf{\sffamily \tup}$ (i.e., $t \sqsubseteq \textbf{\sffamily \tup}$), the result of deterministically evaluating the expression ($\seval{e}{t}$) is guaranteed to be bounded by the ranged evaluation $\seval{e}{\textbf{\sffamily \tup}}$.
$$\forall t \sqsubseteq \textbf{\sffamily \tup} : c = \seval{e}{t}, \ut{\lbMarker{c}}{\bgMarker{c}}{\ubMarker{c}} = \seval{e}{\textbf{\sffamily \tup}} \;\; \rightarrow \;\; \lbMarker{c} \leq c \leq \ubMarker{c}$$
\cite{FH21} proved this property for any $e$ composed of attributes, constants, arithmetic and boolean operators, and comparisons.
\revm{For example,
$\uv{\lbMarker{a}}{\bgMarker{a}}{ \ubMarker{a}} + \uv{\lbMarker{b}}{\bgMarker{b}}{ \ubMarker{b}} = \uv{\lbMarker{a} + \lbMarker{b}}{\bgMarker{a} + \bgMarker{b}}{\ubMarker{a} + \ubMarker{b}}$}
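The sketch below illustrates such bound-preserving evaluation over (lb, sg, ub) triples for addition, as in the example above, and, as our own extrapolation, for multiplication.
\begin{lstlisting}[language=Python]
def rv_add(a, b):
    """Bound-preserving addition: each of the three components is added."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def rv_mul(a, b):
    """Bound-preserving multiplication (our own illustration): the new bounds
    are the extrema over all combinations of the input bounds."""
    corners = [a[0] * b[0], a[0] * b[2], a[2] * b[0], a[2] * b[2]]
    return (min(corners), a[1] * b[1], max(corners))

# For any deterministic x in [a[0], a[2]] and y in [b[0], b[2]],
# x + y falls within rv_add(a, b) and x * y falls within rv_mul(a, b).
\end{lstlisting}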
\section{Conclusions and Future Work}
\label{sec:concl-future-work}
In this work, we present an efficient approach for under-ap\-prox\-i\-mat\-ing certain answers and over-ap\-prox\-i\-mat\-ing possible answers for top-k, sorting, and windowed aggregation queries over incomplete databases.
Our approach based on AU-DBs~\cite{FH21} is unique in that it supports windowed aggregation, is closed under full relational algebra with aggregation, and is implemented as efficient one-pass algorithms in Postgres.
Our approach significantly outperforms existing algorithms for ranking uncertain data while being applicable to more expressive queries and bounding all certain and possible query answers.
\iftechreport{Thus, our approach enables the efficient evaluation of complex queries involving sorting over incomplete databases. We present a SQL-based implementation as well as the aforementioned one-pass algorithms. Using an implementation of these algorithms in Postgres, we demonstrate that our approach significantly outperforms the SQL-based implementation and, for windowed aggregation, achieves performance close to a sampling-based approach with 10 samples, while 10 samples produce low recall compared with the accuracy of our approach. Furthermore, our approach significantly outperforms an existing algorithm for ranking uncertain data while being applicable to more expressive queries and bounding all certain and possible query answers.}
In future work, we plan to extend our approach to deal with more expressive classes of queries, e.g., recursive queries, and will investigate index structures for AU-DBs to further improve performance.
\section{Deterministic Semantics}
\label{sec:det-materialized-sort}
Before introducing the AU-DB semantics for ranking and windowed aggregation, we first formalize the corresponding deterministic algebra operators that
materialize sort positions of rows \emph{as data}.
\mypar{Sort order}
Assume a total order $<$ for the domains of all attributes.
For simplicity, we only consider sorting in ascending order.
The extension for supporting both ascending and descending order is straightforward.
For any two tuples $t$ and $t'$ with schema $(A_1, \ldots, A_n)$ and sort attributes $O = (A_{i_1}, \ldots, A_{i_m})$ we define:\\[-5mm]
\begin{align*}
t \les{O} t' \Leftrightarrow \exists &j \in \{ 1, \ldots, m \} : \\
&\big(\forall k \in \{1, \ldots, j-1 \}: t.A_{i_k} = t'.A_{i_k}\big) \land t.A_{i_j} < t'.A_{i_j}
\end{align*}\\[-4mm]
The less-than or equals comparison operator $\lest{O}$ generalizes this definition in the usual way. Note that SQL sorting (\lstinline!ORDER BY!) and some window bounds (\lstinline!ROWS BETWEEN ...!) may be non-deterministic.
For instance, consider a relation $R$ with schema $(A,B)$ with two rows $t_1 = (1,1)$ and $t_2 = (1,2)$ each with multiplicity 1;
Sorting this relation on attribute $A$ (the tuples are indistinguishable on this attribute), can return the tuples in either order.
Without loss of generality, we ensure a fully deterministic semantics (up to tuple equivalence) by extending the ordering on attributes $O$, using the remaining attributes of the relation as a tiebreaker:
The total order $t \fles{O} t'$ for tuples from a relation $R$ is defined as $t \les{O,\mathrm{Sch}(R)-O} t'$ (assuming some arbitrary order of the attributes in $\mathrm{Sch}(R)$).
\iftechreport{
\begin{Example}[Sorting]\label{ex:ranking}
Consider the relation $R$ shown on the left below. The multiplicity from \ensuremath{\mathbb{N}}\xspace assigned to each tuple is shown on the right. The result of sorting the relation on attribute $A$ using our deterministic semantics and storing the sort positions in column \texttt{pos} is shown below on the right. Note the order of tuples $t_1 = (3,15)$ and $t_2 = (1,1)$ is made deterministic by $t_2 \fles{A} t_1$, because $t_2.B < t_1.B$. Note also that the two copies of $t_2$ are each assigned a different position.
\begin{minipage}{0.35\linewidth}
\centering
\begin{tabular}{c|cc}
\thead{A} & \thead{B} & \ensuremath{\mathbb{N}}\xspace \\
\hline
3 & 15 & 1 \\
1 & 1 & 2\\
\end{tabular}
\end{minipage}
%
\begin{minipage}{0.64\linewidth}
\centering
\begin{tabular}{c|c|cc}
\thead{A} & \thead{B} & \thead{pos} & \ensuremath{\mathbb{N}}\xspace\\
\hline
1 & 1 & 0 & 1\\
1 & 1 & 1 & 1\\
3 & 15 & 2 & 1\\
\end{tabular}
\end{minipage}
\end{Example}
}
We first introduce operators for windowed aggregation, because sorting
can be defined as a special case of windowed aggregation.
\subsection{Windowed Aggregation}
\label{sec:windowed-aggregation}
A windowed aggregate is defined by an aggregate function, a sort order (\lstinline!ORDER BY!), and a window bound specification.
A window boundary is specified relative to the defining tuple, either by the order-by attribute value (\lstinline!RANGE BETWEEN ...!) or by position (\lstinline!ROWS BETWEEN ...!).
In the interest of space, we will limit our discussion to row-based windows, as range-based windows are strictly simpler.
A window includes every tuple within a specified interval of the defining tuple.
Windowed aggregation extends each input tuple with the aggregate value computed over the tuple's window.
If a \lstinline!PARTITION BY! clause is present, then window boundaries are evaluated within a tuple's partition.
In SQL, a single query may define a separate window for each aggregate function (SQL's \lstinline!OVER! clause).
This can be modeled by applying multiple window operators in sequence.
\begin{Example}[Row-Based Windows]\label{ex:row-based-windows}
Consider the bag relation below and consider the windowed aggregation $sum(B)$ sorting on $A$ with bounds $[-2,0]$ (including the two preceding tuples and the tuple itself).
The window for the first duplicate of $t_1 = (a,5,3)$ contains tuple $t_1$ with multiplicity $1$, the window for the second duplicate of $t_1$ contains $t_1$ with multiplicity $2$ and so on.
Because each duplicate of $t_1$ ends up in a different window, there are three result tuples produced for $t_1$, each with a different $sum(B)$ value. Furthermore, tuples $t_2 = (b,3,1)$ and $t_3 = (b,3,4)$ have the same position in the sort order, demonstrating the need to use $\fles{O}$ to avoid non-determinism in what their windows are. We have $t_2 \fles{O} t_3$ and, thus, the window for $t_2$ contains $t_2$ with multiplicity $1$ and $t_1$ with multiplicity $2$ while the window for $t_3$ contains $t_1$, $t_2$ and $t_3$ each with multiplicity $1$.
\begin{minipage}{0.35 \linewidth}
\centering
\scalebox{0.8}{
\begin{tabular}{c|c|cc}
\thead{A} & \thead{B} & \thead{C} & \thead{$\ensuremath{\mathbb{N}}\xspace$}\\ \hline
a & 5 & 3 & 3 \\
b & 3 & 1 & 1 \\
b & 3 & 4 & 1\\
\end{tabular}
}
\end{minipage}
\begin{minipage}{0.64 \linewidth}
\centering
\scalebox{0.8}{
\begin{tabular}{c|c|c|cc}
\thead{A} & \thead{B} & \thead{C} & \thead{sum(B)} & \thead{$\ensuremath{\mathbb{N}}\xspace$}\\ \hline
a & 5 & 3 & 5 & 1 \\
a & 5 & 3 & 10 & 1 \\
a & 5 & 3 & 15 & 1 \\
b & 3 & 1 & 13 & 1\\
b & 3 & 4 & 11 & 1\\
\end{tabular}
}
\end{minipage}
\end{Example}
The semantics of the row-based window aggregate operator $\omega$ is shown in \Cref{fig:windowed-aggregation-oper}.
The parameters of $\omega$ are partition-by attributes $G$, order-by attributes $O$, an aggregate function $f(A)$ with $A \subseteq \mathrm{Sch}(R)$, and an interval $[l,u]$. For simplicity, we hide some arguments ($G$,$O$,$l$,$u$) in the definitions and assume they are passed to intermediate definitions where needed.
The operator outputs a relation with schema $\mathrm{Sch}(R) \circ X$.
The heavy lifting occurs in the definition of relation
$\mathcal{ROW}(R)$, which ``explodes'' relation $R$, adding an attribute $i$ to replace each tuple of multiplicity $n$ with $n$ distinct tuples.
$\mathcal{ROW}(R)$ computes the windowed aggregate over the window defined for the pair $(t, i)$, denoted as
$\windowshort{R}{t}{i}(t')$.
To construct this window, we define the range of the sort positions the tuple $t$ covers (
$\tcovershort{R}{t}$), and the range of positions in its window ($\tboundsshort{R}{t}{i}$).
The partition of $t$ (denoted $\partitionshort{R}{t}$) restricts $R$ to the tuples that agree with $t$ on $G$, and the multiplicity of tuple $t'$ in the window of $(t,i)$ is the size of the overlap between the bounds of $(t,i)$ and the cover of $t'$, both computed within that partition.
\iftechreport{
$\Omega$ computes the dense-rank windowed aggregate.
$\trank{R}{O}{t}$ computes the dense rank of tuple $t \in R$ using $\les{O}$: the number of tuple groups (tuples with matching values of $O$) preceding $t$.
We define the window for tuple $t$ (denoted $\awindow{t}$) point-wise for each tuple by considering the tuple's dense rank within its partition (denoted $\apartition{t}$).
Because dense-rank windows are computed over tuple groups, the multiplicity of each tuple in the window is taken directly from the relation.
For sparse rank windows, we need to ensure that the window contains exactly the desired number of rows, e.g., a window with bounds $[-2,0]$ should contain exactly 3 rows, e.g., 1 row with multiplicity 2 and one row with multiplicity 1.
Since we need to ensure that windows contain a fixed number of tuples, it may be the case that a tuple will be included with a multiplicity in the window that is less than the tuple's multiplicity in the input relation. Furthermore, the window for one duplicate of a tuple may differ from the window of another duplicate of the same tuple.
}
\begin{figure}[t]
\centering
\iftechreport{
\begin{align*}
\winrange{\aggf(\agga)}{X}{\gbAttrs}{\sortattrs}{l}{u}(R)(t) &=
\begin{cases}
R(t') &\text{\textbf{if}}\,\, t = t' \ensuremath{\circ} f(\pi_{A}(\awindow{t'})) \\
0 &\text{\textbf{otherwise}}\\
\end{cases} \\
\awindow{t}(t') &=
\begin{cases}
R(t') &\text{\textbf{if}}\,\, (\trank{\apartition{t}}{O}{t} \\
& \hspace{6mm} - \trank{\apartition{t}}{O}{t'}) \in [l,u]\\
0 &\text{\textbf{otherwise}}\\
\end{cases}\\
\trank{R}{O}{t} &= \card{ \{\; t'.O \mid R(t') > 0 \land t' \les{O} t\; \}}
\end{align*}
}
\begin{align*}
\winrow{\aggf(\agga)}{X}{\gbAttrs}{\sortattrs}{l}{u}(R)(t) &= \pi_{\mathrm{Sch}(R),X}(\mathcal{ROW}(R))\\
\mathcal{ROW}(R)(t) &=
\begin{cases}
1 &\text{\textbf{if}}\,\, t = t' \ensuremath{\circ} f(\pi_{A}(\windowshort{R}{t'}{i})) \ensuremath{\circ} i\\ &\hspace{6mm} \land i \in [0,R(t')-1]\\
0 &\text{\textbf{otherwise}}\\
\end{cases} \\
\partitionshort{R}{t}(t') &=
\begin{cases}
R(t') &\text{\textbf{if}}\,\, t'.G = t.G\\
0 &\text{\textbf{otherwise}}\\
\end{cases}
\\
\windowshort{R}{t}{i}(t') &= \vert\, \tcovershort{\partitionshort{R}{t}}{t'} \cap \tboundsshort{\partitionshort{R}{t}}{t}{i} \,\vert\\
\tposshort{R}{t}{i} &= i + \sum_{t' \fles{O} t} R(t')\\
\tcovershort{R}{t} &= [ \tposshort{R}{t}{0}, \tposshort{R}{t}{R(t) - 1} ]\\
\tboundsshort{R}{t}{i} &= [ \tposshort{R}{t}{i} +l, \tposshort{R}{t}{i} + u ]
\end{align*}
\\[-4.5mm]
\setlength{\belowcaptionskip}{-10pt}
\caption{Windowed Aggregation}\label{fig:windowed-aggregation-oper}
\end{figure}
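To make these definitions concrete, the following Python sketch implements the row-based window semantics over a bag relation represented as a list of (tuple, multiplicity) pairs; the function name and representation are our own. Run on the relation of Example~\ref{ex:row-based-windows} with bounds $[-2,0]$, no partitioning, and $sum$ over $B$, it yields the aggregate values shown there.
\begin{lstlisting}[language=Python]
from itertools import groupby

def window_agg_rows(rel, group_idx, order_idx, agg_idx, l, u, agg=sum):
    """Row-based window aggregate over a bag relation.

    rel:       list of (tuple, multiplicity) pairs (an N-relation).
    group_idx: attribute positions used for PARTITION BY (G).
    order_idx: attribute positions used for ORDER BY (O).
    agg_idx:   position of the aggregated attribute (A).
    l, u:      row-based window bounds relative to the current row.
    Returns one (tuple + (aggregate,), 1) pair per exploded input row.
    """
    # Explode multiplicities: each duplicate becomes its own row (the ROW(R) relation).
    rows = [t for t, n in rel for _ in range(n)]
    part_key = lambda t: tuple(t[i] for i in group_idx)
    # Total order: order-by attributes first, remaining attributes as tie-breaker.
    order_key = lambda t: (tuple(t[i] for i in order_idx),
                           tuple(v for i, v in enumerate(t) if i not in order_idx))
    out = []
    ordered = sorted(rows, key=lambda t: (part_key(t), order_key(t)))
    for _, part in groupby(ordered, key=part_key):              # one partition at a time
        part = list(part)
        for pos, t in enumerate(part):
            window = part[max(pos + l, 0): max(pos + u + 1, 0)]  # rows in [pos+l, pos+u]
            out.append((t + (agg(r[agg_idx] for r in window),), 1))
    return out

# Example (cf. the relation above):
# window_agg_rows([(('a',5,3),3), (('b',3,1),1), (('b',3,4),1)],
#                 group_idx=(), order_idx=(0,), agg_idx=1, l=-2, u=0)
\end{lstlisting}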
\subsection{Sort Operator}\label{sec:ranking-operato}
We now define a sort operator $\rank{O}{\tau}(R)$ which extends each row of $R$ with an attribute $\tau$ that stores the position of this row in $R$ according to $\fles{O}$. This operator is just ``syntactic sugar'' as it can be expressed using windowed aggregation.
\begin{Definition}[Sort Operator]\label{def:rank-operator}
Consider a relation $R$ with schema $(A_1, \ldots, A_n)$, list of attributes $O = (B_1, \ldots, B_m)$ where each $B_i$ is in $\mathrm{Sch}(R)$. The \emph{sort operator} $\rank{O}{\tau}(R)$ returns a relation with schema $(A_1, \ldots, A_n, \tau)$ as defined below.
\begin{align*}
\rank{O}{\tau}(R) &= \pi_{\mathrm{Sch}(R), \tau - 1 \to \tau}( \winrow{count(1)}{\tau}{\emptyset}{O}{-\infty}{0}(R))
\end{align*}
\end{Definition}
Top-k queries can be expressed using the sort operator followed by a selection. For instance,
the SQL query shown below can be written as $\pi_{A,B}(\sigma_{r \leq 3}(\rank{A}{r}(R)))$.
\begin{lstlisting}
SELECT A,B FROM R ORDER BY A LIMIT 3;
\end{lstlisting}
\section{Experiments}
\label{sec:experiment}
\newcommand{\textit{MCDB10}\xspace}{\textit{MCDB10}\xspace}
\newcommand{\textit{MCDB20}\xspace}{\textit{MCDB20}\xspace}
\newcommand{\textit{MCDB}\xspace}{\textit{MCDB}\xspace}
\newcommand{\textit{Det}\xspace}{\textit{Det}\xspace}
\newcommand{\textit{Rewr}\xspace}{\textit{Rewr}\xspace}
\newcommand{\textit{Rewr(Index)}\xspace}{\textit{Rewr(Index)}\xspace}
\newcommand{\textit{Imp}\xspace}{\textit{Imp}\xspace}
\newcommand{\textit{Symb}\xspace}{\textit{Symb}\xspace}
\newcommand{\textit{PT-k}\xspace}{\textit{PT-k}\xspace}
\newcommand{\competitorImp/\competitorRewr\xspace}{\textit{Imp}\xspace/\textit{Rewr}\xspace\xspace}
We evaluate the efficiency of our rewrite-based approach and of the native implementation of the algorithms presented in \Cref{sec:native-algorithms-au} \revm{in Postgres}, as well as the accuracy of the approximations they produce.
\mypar{Compared Algorithms}
We compare against several baselines: \textit{Det}\xspace evaluates queries deterministically ignoring uncertainty in the data. We present these results to show the overhead of the different incomplete query evaluation semantics wrt. deterministic query evaluation;
\textit{MCDB}~\cite{jampani2008mcdb} evaluates queries over a given number of possible worlds sampled from the input incomplete database using deterministic query evaluation. \textit{MCDB10}\xspace and \textit{MCDB20}\xspace are MCDB with 10 and 20 sampled worlds, respectively. \revb{For tests, we treat the highest and lowest possible value for all samples as the upper and lower bounds and compare against the tight bounds produced by the compared algorithms (since computing optimal bounds is often intractable). Given a tightest bound $[c,d]$, we define the recall of a bound $[a,b]$ as $\frac{min(b,d)-max(a,c)}{d-c}$ and the accuracy of $[a,b]$ as $\frac{max(b,d)-min(a,c)}{min(b,d)-max(a,c)}$. The recall/accuracy for a relation is then the average recall/accuracy of all tuples.} %
\iftechreport{
\textit{PT-k}\xspace~\cite{10.1145/1376616.1376685} only supports sorting and returns all answers with a probability larger than a user-provided threshold of being among the top-k answers. By setting the threshold to 1 (0) we can use this approach to compute all certain (possible) answers.
\textit{Symb}\xspace represents aggregation results, rank of tuples, and window membership as symbolic expressions which compactly encode the incomplete database produced by possible world semantics using the model from \cite{amsterdamer2011provenance} for representing aggregation results and a representation similar to \cite{AB14} to encode uncertainty in the rank of tuples. We use an SMT solver (Z3~\cite{moura-08-z}) to compute tight bounds on the possible ranks / aggregation results for tuples.
\textit{Rewr}\xspace is a rewrite-based approach we implemented that uses self-unions for sorting queries and self-joins for windowed aggregation queries.
\textit{Imp}\xspace is the native implementation of our algorithms in Postgres.
All experiments are run on a 2$\times$6 core 3300MHz 8MB cache AMD Opteron 4238 CPUs, 128GB RAM, 4$\times$1TB 7.2K HDs (RAID 5), with the exception of \textit{PT-k}\xspace, which was provided by the authors as a binary for Windows only. We run \textit{PT-k}\xspace on a separate Windows machine with an 8-core 3800MHz 32MB cache AMD Ryzen 5800x CPU, 64G RAM, and a 2TB HD. Because the \textit{PT-k}\xspace implementation is single-threaded and in-memory, we consider these comparisons to be in favor of \textit{PT-k}\xspace. We implement our algorithms as an extension for Postgres 13.3 and evaluate all algorithms on Postgres. We report the average of 10 runs.
} %
\ifnottechreport{ %
For \textit{PT-k}\xspace~\cite{10.1145/1376616.1376685}, we set its threshold to 1 (0) to compute all certain (possible) answers.
\textit{Symb}\xspace represents ranking and aggregation results as symbolic expressions similar to ~\cite{amsterdamer2011provenance,AB14}. We use an SMT solver (Z3~\cite{moura-08-z}) to compute tight bounds on the possible sort positions / aggregation results for tuples.
\textit{Rewr}\xspace is our rewrite-based approach~\cite{techreport} that processes the input relation twice for sorting and uses self-joins to determine the content of windows.
\textit{Imp}\xspace is implemented as a native extension for Postgres 13.3.
All experiments are run on a 2$\times$6 core 3300MHz 8MB cache AMD Opteron 4238 CPUs, 128GB RAM, 4$\times$1TB 7.2K HDs (RAID 5) with the exception of \textit{PT-k}\xspace which was provided by the authors as a Windows binary. We run \textit{PT-k}\xspace on a separate Windows machine with an 8-core 3800MHz 32MB cache AMD Ryzen 5800x CPU, 64G RAM, and 2TB HD. \textit{PT-k}\xspace is single-threaded and in-memory. Since we deactivated intra-query parallelism in Postgres, but still have to go to disk, the comparison is in favor of \textit{PT-k}\xspace. We report the average of 10 runs.
}
\subsection{Microbenchmarks on Synthetic Data}
To evaluate how specific characteristics of the data affect our system's performance and accuracy, we generated synthetic data consisting of a single table with 2 attributes for sorting and 3 attributes for windowed aggregation.
Attribute values are uniformly distributed at random.
Except where noted, we default to 50k rows and 5\% uncertainty with maximum 1k attribute range on uncertain values.
\subsubsection{Sorting and Top-k Queries}
\vspace{-2mm}
\mypar{Scaling Data Size}
\Cref{fig:rankaltsize} shows the runtime of sorting,
varying the dataset size.
Since \textit{Symb}\xspace and \textit{PT-k}\xspace perform significantly worse, we only include these methods for smaller datasets (\Cref{fig:rank-altsize-low}).
MCDB and our techniques significantly outperform \textit{Symb}\xspace and \textit{PT-k}\xspace (by roughly two or more orders of magnitude). \textit{Rewr}\xspace is roughly on par with \textit{MCDB20}\xspace, while \textit{Imp}\xspace outperforms \textit{MCDB10}\xspace.
Given their poor performance and their lack of support for windowed aggregation, we exclude \textit{Symb}\xspace and \textit{PT-k}\xspace from the remaining microbenchmarks.
\mypar{Varying k, Ranges, and Rate}
\Cref{fig:rankmicro} shows runtime of top-k (k is specified) and sorting queries (k is not specified) when varying (i) the number of tuples returned $k$, (ii) the size of the ranges of uncertain order-by attributes (\emph{range}), and (iii) the fraction of tuples with uncertain order-by attributes.
\textit{Imp}\xspace is the fastest method, with an overhead over deterministic query processing of between 3.5$\times$ (top-k) and 10$\times$ (full sorting).
\textit{Rewr}\xspace has higher overhead over \textit{Det}\xspace than \textit{MCDB}\xspace.
Notably, the performance of \textit{MCDB}\xspace and \textit{Rewr}\xspace is independent of all three varied parameters. Uncertainty and \emph{range} have small impact on the performance of \textit{Imp}\xspace while computing top-k results is significantly faster than full sorting when $k$ is small.
\input{microbenchmark-figs}
\mypar{Accuracy}
\Cref{fig:microrankacc} shows the error of the bounds generated by \textit{Imp}\xspace (\textit{Rewr}\xspace produces identical outputs), and \textit{MCDB}\xspace.
Recall that \textit{Imp}\xspace is guaranteed to over-approximate the correct bounds, while \textit{MCDB}\xspace is guaranteed to under-approximate the bounds, \revm{because it does not compute all possible results}. We measure the size of the bounds relative to the size of the correct bound (as computed by \textit{Symb}\xspace and \textit{PT-k}\xspace), and then take the average over all normalized bound sizes.
In all cases our approach produces bounds that are closer to the exact bounds than \textit{MCDB}\xspace ($\sim$30\% over-approximation versus $\sim$70\% under-approximation in the worst case).
We further note that an over-approximation \revm{of possible answers is often} preferable to an under-approximation because no possible results will be missed.
\subsubsection{Windowed Aggregation}
\mypar{Scaling Data Size}
\Cref{fig:windowaltsize} shows the runtime of windowed aggregation when varying dataset size.
We compare two variants of our rewrite-based approach, which use a range-overlap join to determine which tuples could possibly belong to a window.
\textit{Rewr(Index)}\xspace uses a range index supported by Postgres.
\revm{We show index creation time and query time separately}.
We exclude \textit{Symb}\xspace, because for more than 1k tuples, Z3 exceeds the maximal allowable call stack depth and crashes.
The performance of \textit{Imp}\xspace is roughly on par with \textit{MCDB10}\xspace.
\textit{Rewr(Index)}\xspace is almost as fast as \textit{MCDB20}\xspace, but is 5 $\times$ slower than \textit{Imp}\xspace.
\mypar{Varying window spec, Ranges, and Rate}
\Cref{fig:microw} shows the runtime of windowed aggregation when varying the ranges of uncertain attribute values (on all columns), the percentage of uncertain tuples, and the window size. For \textit{Imp}\xspace
(\Cref{fig:microw-orderby})
we use a query without partition-by. We also compare the runtime of our rewriting-based approach
(\Cref{fig:microw-partitionby})
using
both partition-by and order-by on 8k rows.
\textit{Imp}\xspace exhibits similar runtime to \textit{MCDB10}\xspace and outperforms \textit{MCDB20}\xspace.
\iftechreport{Doubling the window size has only a slight impact (about 10\%) on the performance of our implementation.}
\textit{Rewr}\xspace is slower than \textit{MCDB}\xspace by several orders of magnitude due to the range-overlap join.
\iftechreport{Our techniques are not significantly affected by the range and uncertainty rate.}
\begin{figure}[t]
\centering
\resizebox{1\linewidth}{!}{
\begin{tabular}{cc|r|r|r|r|r|r}
\multicolumn{2}{c|}{\thead{Datasets}} & \multicolumn{1}{|c}{\thead{\textit{Imp}\xspace}} & \multicolumn{1}{|c}{\thead{\textit{Det}\xspace}} & \multicolumn{1}{|c}{\thead{\textit{MCDB20}\xspace}} & \multicolumn{1}{|c}{\thead{\textit{Rewr}\xspace}} & \multicolumn{1}{|c}{\thead{\textit{Symb}\xspace}} & \multicolumn{1}{|c}{\thead{\textit{PT-k}\xspace}} \\
\multicolumn{2}{c|}{\thead{\& Queries}} & \multicolumn{1}{|c}{\thead{(time)}} & \multicolumn{1}{|c}{\thead{(time)}} & \multicolumn{1}{|c}{\thead{(time)}} & \multicolumn{1}{|c}{\thead{(time)}} & \multicolumn{1}{|c}{\thead{(time)}} & \multicolumn{1}{|c}{\thead{(time)}} \\ \hline
\multirow{2}{*}{\rotatebox[origin=c]{0}{\parbox[c]{2cm}{\centering \textbf{Iceberg}~\cite{icebergdata} \\(1.1\%, 167K)}}}
& Rank & 0.816ms & 0.123ms & 2.337ms & 1.269ms & 278ms & 1s \\
\cline{2-8}
& Window & 2.964ms & 0.363ms & 7.582ms & 1.046ms & 589ms & N.A. \\
\hline
\multirow{2}{*}{\rotatebox[origin=c]{0}{\parbox[c]{2cm}{\centering \textbf{Crimes}~\cite{crimesdata} \\(0.1\%, 1.45M)}}}
& Rank & 1043.505ms & 94.306ms & 2001.12ms & 14787.723ms & >10min & >10min \\
\cline{2-8}
& Window & 3.050ms & 0.416ms & 8.337ms & 2.226ms & >10min & N.A. \\
\hline
\multirow{2}{*}{\rotatebox[origin=c]{0}{\parbox[c]{2cm}{\centering \textbf{Healthcare}~\cite{healthcaredata} \\(1.0\%, 171K)}}}
& Rank & 287.515ms & 72.289ms & 1451.232ms & 4226.260ms & 15s & 8s \\
\cline{2-8}
& Window & 130.496ms & 15.212ms & 323.911ms & 13713.218ms & >10min & N.A. \\
\hline
\end{tabular}
}}\\[-4mm]
\setlength{\belowcaptionskip}{2pt}
\bfcaption{\footnotesize Real world data - performance}
\label{fig:realq_perf}
\end{figure}
\begin{figure}[t]
\centering
\resizebox{0.75\linewidth}{!}{
\begin{tabular}{cc|l|l|l}
\multicolumn{2}{c|}{\thead{Datasets \& Measures}} & \multicolumn{1}{|c}{\thead{\competitorImp/\competitorRewr\xspace}} & \multicolumn{1}{|c}{\thead{\textit{MCDB20}\xspace}} & \multicolumn{1}{|c}{\thead{\textit{PT-k}\xspace/\textit{Symb}\xspace}} \\ \hline
\multirow{2}{*}{\rotatebox[origin=c]{0}{\parbox[c]{1.5cm}{\centering \textbf{Iceberg}\\~\cite{icebergdata} }}}
& bound accuracy & 0.891 & 1 & 1 \\
\cline{2-5}
& bound recall & 1 & 0.765 & 1\\
\hline
\multirow{2}{*}{\rotatebox[origin=c]{0}{\parbox[c]{1.5cm}{\centering \textbf{Crimes}\\~\cite{crimesdata} }}}
& bound accuracy & 0.996 & 1 & 1 \\
\cline{2-5}
& bound recall & 1 & 0.919 & 1 \\
\hline
\multirow{2}{*}{\rotatebox[origin=c]{0}{\parbox[c]{2cm}{\centering \textbf{Healthcare}\\~\cite{healthcaredata} }}}
& bound accuracy & 0.990 & 1 & 1 \\
\cline{2-5}
& bound recall & 1 & 0.767 & 1 \\
\hline
\end{tabular}
}}\\[-4mm]
\setlength{\belowcaptionskip}{2pt}
\bfcaption{\footnotesize Real world data - sort position accuracy and recall}
\label{fig:realq-rankacc}
\end{figure}
\begin{figure}[t]
\centering
\resizebox{1\linewidth}{!}{
\begin{tabular}{cc|r|r|r|r}
\multicolumn{2}{c|}{\thead{Datasets}} & \multicolumn{1}{|c}{\thead{Grouping/Order}} & \multicolumn{1}{|c}{\thead{Grouping/Order}} & \multicolumn{1}{|c}{\thead{Aggregation}} & \multicolumn{1}{|c}{\thead{Aggregation}} \\
\multicolumn{2}{c|}{\thead{\& Methods}} & \multicolumn{1}{|c}{\thead{accuracy}} & \multicolumn{1}{|c}{\thead{recall}} & \multicolumn{1}{|c}{\thead{accuracy}} & \multicolumn{1}{|c}{\thead{recall}} \\ \hline
\multirow{3}{*}{\rotatebox[origin=c]{0}{\parbox[c]{1.5cm}{\centering \textbf{Iceberg}\\~\cite{icebergdata} }}}
& \competitorImp/\competitorRewr\xspace & 0.977 & 1 & 0.925 & 1 \\
\cline{2-6}
& \textit{MCDB20}\xspace & 1 & 0.745 & 1 & 0.604 \\
\cline{2-6}
& \textit{Symb}\xspace & 1 & 1 & 1 & 1 \\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{0}{\parbox[c]{1.5cm}{\centering \textbf{Crimes}\\~\cite{crimesdata} }}}
& \competitorImp/\competitorRewr\xspace & 0.995 & 1 & 0.989 & 1 \\
\cline{2-6}
& \textit{MCDB20}\xspace & 1 & 0.916 & 1 & 0.825 \\
\cline{2-6}
& \textit{Symb}\xspace & 1 & 1 & 1 & 1 \\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{0}{\parbox[c]{2cm}{\centering \textbf{Healthcare}\\~\cite{healthcaredata} }}}
& \competitorImp/\competitorRewr\xspace & 0.998 & 1 & 0.998 & 1 \\
\cline{2-6}
& \textit{MCDB20}\xspace & 1 & 0.967 & 1 & 0.967 \\
\cline{2-6}
& \textit{Symb}\xspace & 1 & 1 & 1 & 1 \\
\hline
\end{tabular}
}
\setlength{\belowcaptionskip}{-2pt}
\bfcaption{\footnotesize Real world data - windowed aggregation accuracy and recall}
\label{fig:realq-windacc}
\end{figure}
\subsection{Real World Datasets}
We evaluate our approach on real datasets (Iceberg~\cite{icebergdata}, Chicago crime data~\cite{crimesdata}, and Medicare provider data~\cite{healthcaredata}) using realistic sorting and windowed aggregation queries~\cite{aval}. To prepare the datasets, we apply data cleaning methods (entity resolution and missing value imputation) that output an AU-DB encoding of the space of possible repairs. \Cref{fig:realq_perf} shows the performance of real queries on these datasets, reporting basic statistics (uncertainty and \#rows).
\iftechreport{
\input{uaa-expr-realqs.tex}
}
For sorting and top-k queries \revb{that contain aggregation, which is common in real use cases, we only measure the performance of the sorting/top-k part over pre-aggregated data} (see~\cite{FH21} for an evaluation of the performance of aggregation over AU-DBs).
In general, our approach (\textit{Imp}\xspace) is faster than \textit{MCDB20}\xspace. \textit{Symb}\xspace and \textit{PT-k}\xspace are significantly more expensive. \Cref{fig:realq-rankacc} shows the approximation quality for our approach and \textit{MCDB}\xspace. Our approach has precision close to 100\% except for sorting on the Iceberg dataset which has a larger fraction of uncertain tuples and wider ranges of uncertain attribute values due to the pre-aggregation. \textit{MCDB}\xspace has lower recall on Iceberg and Healthcare sorting queries since these two datasets have more uncertain tuples (10 times more than the Crimes dataset). \Cref{fig:realq-windacc} shows the approximation quality of our approach and \textit{MCDB}\xspace for windowed aggregation queries.
We measured both the approximation quality of grouping of tuples to windows and for the aggregation result values.
For Crimes and Iceberg, the aggregation accuracy is affected by the partition-by/order-by attribute accuracy and the uncertainty of the aggregation attribute itself. The healthcare query computes a count, i.e., there is no uncertainty in the aggregation attribute and approximation quality is similar to the one for sorting. Overall, we provide good approximation quality at a significantly lower cost than the two exact competitors.
\section{Introduction}\label{sec:introduction}
Many application domains need to deal with uncertainty arising from data entry/extraction errors~\cite{jeffery-06-dssdc,sarawagi2008information}, \revm{data lost because of node failures~\cite{DBLP:conf/sigmod/LangNRN14}}, ambiguous data integration~\cite{OP13,AS10,HR06a}, heuristic data wrangling~\cite{Yang:2015:LOA:2824032.2824055,F08,Beskales:2014:SRC:2581628.2581635}\revm{, and
bias in machine learning training data~\cite{DBLP:conf/deem/GrafbergerGS22, SP22}}.
\revm{Incomplete and probabilistic} databases~\cite{DBLP:conf/pods/ConsoleGLT20, suciu2011probabilistic}
\revm{model uncertainty as a set} of so-called possible worlds.
Each world is a deterministic database representing one possible state of the real world.
\revm{The commonly used \textit{possible world semantics}~\cite{suciu2011probabilistic} returns for each world the (deterministic) query answer in this world.
Instead of this set of possible answer relations,} most systems produce either
\textit{certain answers}~\cite{DBLP:journals/jacm/ImielinskiL84} (result tuples that are returned in every world), or \textit{possible answers}~\cite{DBLP:journals/jacm/ImielinskiL84} (result tuples that are returned in at least one world).
Unfortunately, incomplete databases lack the expressiveness of deterministic databases and have high computational complexity.
Notably, uncertain versions of order-based operators like \texttt{SORT / LIMIT} (i.e., Top-K)
have been studied extensively in the past~\cite{4221738,4812412,4221737,10.14778/1687627.1687685}.
However, the resulting semantics often
lacks \emph{closure}. That is, composing such operators with other operators typically requires a complete rethinking of the entire system~\cite{DBLP:journals/tods/SolimanIC08}, \revm{because the model that the operator expects its \emph{inputs} to be encoded with differs from the model encoding the operator's \emph{outputs}.}
In \cite{FH19,FH21},
we started addressing the \revm{linked challenges of computational complexity, closure, and expressiveness in incomplete database systems,} by proposing \textbf{AU-DBs}, an approach to uncertainty management \revm{that can be competitive with deterministic query processing.
Rather than trying to encode a set of possible worlds losslessly, each \abbrUAADB tuple is defined by one range of possible values for each of its attributes and a range of (bag) multiplicities.
Each tuple of an \abbrUAADB is a hypercube that bounds a region of the attribute space, and together, the tuples bound the set of possible worlds between} an \textit{under-approximation of certain answers} and an \textit{over-approximation of possible answers}.
This model is closed under relational algebra~\cite{FH19} with aggregates~\cite{FH21} ($\ensuremath{\mathcal{RA}^{agg}}$).
That is, if an \abbrUAADB $D$ bounds a set of possible worlds, the result of any $\mathcal{RA}^{agg}$ query over $D$ bounds the set of possible query results.
We refer to this correctness criterion as \textbf{bound preservation}.
In this paper, we add support for bounds-preserving order-based operators to the \abbrUAADB model, along with a set of (nontrivial) operator implementations that make this extension efficient. \revm{The closure of the \abbrUAADB model under $\ensuremath{\mathcal{RA}^{agg}}$, its efficiency, its property of bounding certain and possible answers, and its capability to compactly represent large sets of possible tuples using attribute-level uncertainty are the main factors in our choice to extend this model in this work.}
When sorting uncertain attribute values, the possible order-by attribute values of two tuples $t_1$ and $t_2$ may overlap, which leads to multiple possible sort orders.
Supporting order-based operators over \abbrUAADBs requires encoding multiple possible sort orders.
Unfortunately, a dataset can only have one physical ordering.
\revm{We address this limitation by introducing a \textbf{position} attribute, decoupling the \emph{physical} order in which the tuples are stored from the set of possible \emph{logical} orderings.}
With a tuple's position in a sort order encoded as a numerical attribute, operations that act on this order (i.e., \lstinline!LIMIT!) can be redefined in terms of standard relational operators, which, crucially, already have well-defined semantics \revm{in the \abbrUAADB model}.
\revm{In short, by virtualizing sort order into a position attribute}, the existing \abbrUAADB model is sufficient to express the output of SQL's order-dependent operations \revm{in the presence of uncertainty}.
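As a concrete (purely illustrative) sketch in Python, assume each row carries materialized position bounds in hypothetical fields \texttt{tau\_lb} and \texttt{tau\_ub} and a lower multiplicity bound \texttt{mult\_lb}; then \lstinline!LIMIT k! reduces to a selection over these fields:
\begin{lstlisting}
def topk_from_positions(rows, k):
    # keep rows that may rank among the first k; rows that are not
    # guaranteed to do so become possible only (lower multiplicity 0)
    out = []
    for r in rows:
        if r["tau_lb"] < k:          # lowest possible position is within the top-k
            r = dict(r)
            if r["tau_ub"] >= k:     # highest possible position may fall outside
                r["mult_lb"] = 0
            out.append(r)
    return out
\end{lstlisting}
This is only an illustration of the idea; the actual operator semantics and implementations are developed in the following sections.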
\revm{We start this paper by (i) formalizing uncertain orders within the \abbrUAADB model and presenting a semantics of sorting and windowed aggregation operations that can be implemented as query rewrites.
When combined with existing \abbrUAADB rewrites~\cite{FH19,FH21}, any $\mathcal{RA}^{agg}$ query with order-based operations can be executed using a deterministic DBMS.
Unfortunately, these rewrites introduce SQL constructs that necessitate computationally expensive operations, driving a central contribution of this paper: (ii) new algorithms for sort, top-k, and windowed aggregation operators for \abbrUAADBs.}
To understand the intuition behind these operators, consider the logical sort operator, which extends each input row with a new attribute storing the row's position with respect to ordering the input relation on a list $O$ of order-by attributes.
If the order-by attributes' values are uncertain, we have to reason about each tuple $t$'s lowest possible position (the number of tuples that certainly precede it over all possible worlds), and highest possible position (the number of tuples that possibly precede it in at least one possible world).
\revm{We can naively compute a lower (resp., upper) bound by joining every tuple $t$ with every other tuple, counting pairs where $t$ is certainly (resp., possibly) preceded by its pairing.}
\revm{We refer to this approach as} the \emph{rewrite method}, \revm{as it} can be implemented in SQL.
However, the rewrite approach has quadratic runtime.
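For illustration only (not our implementation), the pairwise counting behind the rewrite method can be sketched in Python as follows, assuming unit multiplicities and that each order-by value lies in the interval $[\texttt{lb}(t),\texttt{ub}(t)]$:
\begin{lstlisting}
def naive_position_bounds(rows, lb, ub):
    bounds = []
    for t in rows:
        # tuples whose largest value is below t's smallest value certainly precede t
        certain = sum(1 for s in rows if s is not t and ub(s) < lb(t))
        # tuples whose smallest value does not exceed t's largest value may precede t
        possible = sum(1 for s in rows if s is not t and lb(s) <= ub(t))
        bounds.append((t, certain, possible))  # lowest / highest possible position
    return bounds
\end{lstlisting}
Every tuple is compared against every other tuple, which is the source of the quadratic runtime that the one-pass algorithm avoids.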
Inspired by techniques for aggregation over interval-temporal databases such as \cite{10.1007/978-3-319-64367-0_7}, we propose a one-pass algorithm to compute the bounds on a tuple's position that also supports top-k queries.
\begin{Example}[Uncertain Sorting and Top-k]\label{ex:uncertain-ranking}
\revm{
\Cref{fig:an-uncertain-brand-databa} shows a sales DB, extracted from 3 press releases.
Uncertainty
arises for a variety of reasons, including extraction errors (e.g., $D_3$ includes term 5) or missing information (e.g., only preliminary data is available for the 4th term in $D_1$).
The task of finding the two terms with the most sales is semantically ambiguous for uncertain data.
Several attempts to define semantics include
(i) U-top~\cite{4221738} (\Cref{fig:u-top-query-result}), which returns the most likely ranked order;
(ii) U-rank~\cite{4221738} (\Cref{fig:u-rank-query-result}), which returns the most likely tuple at each position (term 4 is more likely than any other value for both the 1st and 2nd position); or
(iii) Probabilistic threshold queries (PT-k)~\cite{10.1145/1376616.1376685,4498380}, which return tuples that appear in the top-k with a probability exceeding a threshold (PT), generalizing both possible (PT $>$ 0; \Cref{fig:pt0-query-result}) and certain (PT $\geq$ 1; \Cref{fig:pt1-query-result-tuple}) answers.
}
\end{Example}
With the exception of U-Top, none of these semantics
\revm{
return both information about certain and possible results, making it difficult for users to gauge the (i) trustworthiness or (ii) completeness of an answer.
Risk assessment on the resulting data is difficult, preventing its use for critical applications, e.g., in the medical, engineering, or financial domains.
}
\revm{
Furthermore, the outputs of uncertain ranking operators like U-Top are not valid as inputs to further uncertainty-aware queries, because they lose information about uncertainty in the source data.
}
\revm{
These factors motivate our choice of the \abbrUAADB data model.
First, the data model naturally encodes query result reliability. By providing each attribute value (and tuple multiplicity) as a range, users can quickly assess the precision of each answer.
Second, the data model is complete: the full set of possible answers is represented.
Finally, the model admits a closed, efficiently computable, and bounds-preserving semantics for $\mathcal{RA}^{agg}$.
}
\iftechreport{
\begin{figure*}[t]
\centering
\begin{minipage}{0.49\linewidth}
\begin{subfigure}{1\linewidth}
\begin{minipage}{\linewidth}
\centering
\begin{minipage}{0.3\linewidth}
\centering
\scalebox{0.7}{
\begin{tabular}{ c|c|c}
\footnotesize{$D_1$} & \textbf{Term} & \textbf{Sales} \\
\cline{1-3}
& $1$ & $2$ \\
& $2$ & $3$ \\
\rowcolor{grey}
& $3$ & $7$ \\
\rowcolor{grey}
& $4$ & $4$ \\
\end{tabular}
}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\centering
\scalebox{0.7}{
\begin{tabular}{ c|c|c}
\footnotesize{$D_2$} & \textbf{Term} & \textbf{Sales} \\
\cline{1-3}
& $1$ & $3$ \\
& $2$ & $2$ \\
\rowcolor{grey}
& $3$ & $4$ \\
\rowcolor{grey}
& $4$ & $6$ \\
\end{tabular}
}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\centering
\scalebox{0.7}{
\begin{tabular}{ c|c|c}
\footnotesize{$D_3$} & \textbf{Term} & \textbf{Sales} \\
\cline{1-3}
& $1$ & $2$ \\
& $2$ & $2$ \\
\rowcolor{grey}
& $5$ & $4$ \\
\rowcolor{grey}
& $4$ & $7$ \\
\end{tabular}
}
\end{minipage}
\end{minipage}
\begin{minipage}{\linewidth}
\centering
\begin{minipage}{0.3\linewidth}
\centering
\scalebox{0.7}{
\begin{tabular}{ c|c|c}
\textbf{Term} & Sales & \textbf{Sum} \\
\cline{1-3}
$1$ & $2$ & $5$ \\
$2$ & $3$ & $10$ \\
$3$ & $7$ & $11$ \\
$4$ & $4$ & $4$ \\
\end{tabular}
}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\centering
\scalebox{0.7}{
\begin{tabular}{ c|c|c}
\textbf{Term} & Sales & \textbf{Sum} \\
\cline{1-3}
$1$ & $3$ & $5$ \\
$2$ & $2$ & $6$ \\
$3$ & $4$ & $10$ \\
$4$ & $6$ & $6$ \\
\end{tabular}
}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\centering
\scalebox{0.7}{
\begin{tabular}{ c|c|c}
\textbf{Term} & Sales & \textbf{Sum} \\
\cline{1-3}
$1$ & $2$ & $4$ \\
$2$ & $2$ & $9$ \\
$5$ & $4$ & $4$ \\
$4$ & $7$ & $11$ \\
\end{tabular}
}
\end{minipage}
\end{minipage}
\\[-3mm]
\caption{An uncertain sales database with three possible worlds (with probability .4, .3, and .3, respectively), with the top-2 highest-selling terms in each world highlighted, and the result of the rolling sum of sales for the current and next term.}\label{fig:an-uncertain-brand-databa}
\end{subfigure} %
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\begin{minipage}{0.49\linewidth}
\centering
\begin{subfigure}{\linewidth}
\centering
\begin{tabular}{ c}
\textbf{Term} \\
\cline{1-1}
$4$ \\
$3$ \\
\end{tabular}
\\[-3mm]
\caption{U-Top Top-k result}\label{fig:u-top-query-result}
\end{subfigure}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\begin{subfigure}{\linewidth}
\centering
\begin{tabular}{ c}
\textbf{Term} \\
\cline{1-1}
$4$ \\
$4$ \\
\end{tabular}
\\[-3mm]
\caption{U-Rank query result}\label{fig:u-rank-query-result}
\end{subfigure}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\begin{subfigure}{\linewidth}
\captionsetup{width=.7\linewidth}
\centering
\begin{tabular}{ c}
\textbf{Term} \\
\cline{1-1}
$3$ \\
$4$ \\
$5$ \\
\end{tabular}
\\[-3mm]
\caption{PT(0) query result (possible answers)}\label{fig:pt0-query-result}
\end{subfigure}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\begin{subfigure}{\linewidth}
\captionsetup{width=.7\linewidth}
\centering
\begin{tabular}{ c}
\textbf{Term} \\
\cline{1-1}
$4$ \\
$ $ \\
$ $ \\
\end{tabular}
\\[-3mm]
\caption{PT(1) Top-k result (certain answers)}\label{fig:pt1-query-result-tuple}
\end{subfigure}
\end{minipage}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\begin{subfigure}{1\linewidth}
\centering
\begin{minipage}{0.4\linewidth}
\centering
{\small
\begin{tabular}{c|c|c}
\textbf{Term} & \textbf{Sales} & $\ensuremath{\mathbb{N}}\xspace^3$ \\
\cline{1-3}
$1$ & $\uv{2}{2}{3}$ & \ut{1}{1}{1} \\
$2$ & $\uv{2}{3}{3}$ & \ut{1}{1}{1} \\
\rowcolor{grey}
$\uv{3}{3}{5}$ & $\uv{4}{7}{7}$ & \ut{1}{1}{1} \\
\rowcolor{grey}
$4$ & $\uv{4}{4}{7}$ & \ut{1}{1}{1} \\
\end{tabular}
}
\end{minipage}
\begin{minipage}{0.59\linewidth}
\centering
{\small
\begin{tabular}{c|c|cc}
\textbf{Term} & \textbf{Sales} & \textbf{Position} & $\ensuremath{\mathbb{N}}\xspace^3$ \\
\cline{1-3}
$1$ & $\uv{2}{2}{3}$ & \uv{2}{3}{3} & \ut{0}{0}{0} \\
$2$ & $\uv{2}{3}{3}$ & \uv{2}{2}{3} & \ut{0}{0}{0} \\
\rowcolor{grey}
$\uv{3}{3}{5}$ & $\uv{4}{7}{7}$ & \uv{0}{0}{1} & \ut{1}{1}{1} \\
\rowcolor{grey}
$4$ & $\uv{4}{4}{7}$ & \uv{0}{1}{1} & \ut{1}{1}{1} \\
\end{tabular}
}
\end{minipage}
\\[-2mm]
\caption{\abbrUAADB bounding the worlds and top-2 result produced by our approach}\label{fig:au-db-result-topk}
\end{subfigure}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\begin{subfigure}{1\linewidth}
\centering
\captionsetup{width=.8\linewidth}
{\small
\begin{tabular}{ c|c|c|c}
\textbf{Term} & \textbf{Sales} & \textbf{Sum} & $\ensuremath{\mathbb{N}}\xspace^3$ \\
\cline{1-4}
$1$ & $\uv{2}{2}{3}$ & $\uv{4}{5}{6}$ & \ut{1}{1}{1} \\
$2$ & $\uv{2}{3}{3}$ & $\uv{6}{10}{10}$ & \ut{1}{1}{1} \\
$\uv{3}{3}{5}$ & $\uv{4}{7}{7}$ & $\uv{4}{11}{14}$ & \ut{1}{1}{1} \\
$4$ & $\uv{4}{4}{7}$ & $\uv{4}{4}{14}$ & \ut{1}{1}{1} \\
\end{tabular}
}
\\[-2mm]
\caption{\abbrUAADB windowed aggregation result produced by our approach}\label{fig:au-db-result-window}
\end{subfigure}
\end{minipage}
\\[-3mm]
\setlength{\belowcaptionskip}{-15pt}
\caption{Ranking, Top-k, and Window Queries over an Incomplete (Probabilistic) Database. We get different results for the various semantics proposed in related work. Our approach stands out in that it bounds both certain and possible answers and is closed not just under these specific query types, but also \ensuremath{\mathcal{RA}^{agg}}.}\label{fig:ranking-top-k-and-window-}
\end{figure*}
}
\ifnottechreport{
\begin{figure}[t]
\centering
\begin{minipage}{0.99\linewidth}
\begin{subfigure}{1\linewidth}
\begin{minipage}{\linewidth}
\centering
\begin{minipage}{0.3\linewidth}
\centering
\scalebox{0.7}{
\begin{tabular}{ c|c|c}
\footnotesize{$D_1$} & \textbf{Term} & \textbf{Sales} \\
\cline{1-3}
& $1$ & $2$ \\
& $2$ & $3$ \\
\rowcolor{grey}
& $3$ & $7$ \\
\rowcolor{grey}
& $4$ & $4$ \\
\end{tabular}
}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\centering
\scalebox{0.7}{
\begin{tabular}{ c|c|c}
\footnotesize{$D_2$} & \textbf{Term} & \textbf{Sales} \\
\cline{1-3}
& $1$ & $3$ \\
& $2$ & $2$ \\
\rowcolor{grey}
& $3$ & $4$ \\
\rowcolor{grey}
& $4$ & $6$ \\
\end{tabular}
}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\centering
\scalebox{0.7}{
\begin{tabular}{ c|c|c}
\footnotesize{$D_3$} & \textbf{Term} & \textbf{Sales} \\
\cline{1-3}
& $1$ & $2$ \\
& $2$ & $2$ \\
\rowcolor{grey}
& $5$ & $4$ \\
\rowcolor{grey}
& $4$ & $7$ \\
\end{tabular}
}
\end{minipage}
\end{minipage}
\begin{minipage}{\linewidth}
\centering
\begin{minipage}{0.3\linewidth}
\centering
\scalebox{0.7}{
\begin{tabular}{ c|c|c}
\textbf{Term} & Sales & \textbf{Sum} \\
\cline{1-3}
$1$ & $2$ & $5$ \\
$2$ & $3$ & $10$ \\
$3$ & $7$ & $11$ \\
$4$ & $4$ & $4$ \\
\end{tabular}
}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\centering
\scalebox{0.7}{
\begin{tabular}{ c|c|c}
\textbf{Term} & Sales & \textbf{Sum} \\
\cline{1-3}
$1$ & $3$ & $5$ \\
$2$ & $2$ & $6$ \\
$3$ & $4$ & $10$ \\
$4$ & $6$ & $6$ \\
\end{tabular}
}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\centering
\scalebox{0.7}{
\begin{tabular}{ c|c|c}
\textbf{Term} & Sales & \textbf{Sum} \\
\cline{1-3}
$1$ & $2$ & $4$ \\
$2$ & $2$ & $9$ \\
$5$ & $4$ & $4$ \\
$4$ & $7$ & $11$ \\
\end{tabular}
}
\end{minipage}
\end{minipage}
\\[-3mm]
\caption{An uncertain sales database with three possible worlds (with probability .4, .3, and .3, respectively), with the top-2 highest-selling terms highlighted, and the result of the rolling sum of sales for the current and next term.}\label{fig:an-uncertain-brand-databa}
\end{subfigure} %
\end{minipage}
\begin{minipage}{0.99\linewidth}
\centering
\begin{minipage}{0.24\linewidth}
\centering
\begin{subfigure}{\linewidth}
\centering
\begin{tabular}{ c}
\textbf{Term} \\
\cline{1-1}
$4$ \\
$3$ \\
\end{tabular}
\\[-3mm]
\caption{U-Top}\label{fig:u-top-query-result}
\end{subfigure}
\end{minipage}
\begin{minipage}{0.24\linewidth}
\centering
\begin{subfigure}{\linewidth}
\centering
\begin{tabular}{ c}
\textbf{Term} \\
\cline{1-1}
$4$ \\
$4$ \\
\end{tabular}
\\[-3mm]
\caption{U-Rank}\label{fig:u-rank-query-result}
\end{subfigure}
\end{minipage}
\begin{minipage}{0.24\linewidth}
\centering
\begin{subfigure}{\linewidth}
\captionsetup{width=.7\linewidth}
\centering
\scalebox{0.8}{
\begin{tabular}{ c}
\textbf{Term} \\
\cline{1-1}
$3$ \\
$4$ \\
$5$ \\
\end{tabular}
}
\\[-3mm]
\caption{PT(0)}\label{fig:pt0-query-result}
\end{subfigure}
\end{minipage}
\begin{minipage}{0.24\linewidth}
\centering
\begin{subfigure}{\linewidth}
\captionsetup{width=.7\linewidth}
\centering
\begin{tabular}{ c}
\textbf{Term} \\
\cline{1-1}
$4$ \\
$ $ \\
\end{tabular}
\\[-3mm]
\caption{PT(1)}\label{fig:pt1-query-result-tuple}
\end{subfigure}
\end{minipage}
\end{minipage}
\begin{minipage}{0.99\linewidth}
\begin{subfigure}{1\linewidth}
\centering
\begin{minipage}{0.4\linewidth}
\centering
{\small
\scalebox{0.9}{
\begin{tabular}{c|c|c}
\textbf{Term} & \textbf{Sales} & $\ensuremath{\mathbb{N}}\xspace^3$ \\
\cline{1-3}
$1$ & $\uv{2}{2}{3}$ & \ut{1}{1}{1} \\
$2$ & $\uv{2}{3}{3}$ & \ut{1}{1}{1} \\
\rowcolor{grey}
$\uv{3}{3}{5}$ & $\uv{4}{7}{7}$ & \ut{1}{1}{1} \\
\rowcolor{grey}
$4$ & $\uv{4}{4}{7}$ & \ut{1}{1}{1} \\
\end{tabular}
}}
\end{minipage}
\begin{minipage}{0.59\linewidth}
\centering
{\small
\scalebox{0.9}{
\begin{tabular}{c|c|cc}
\textbf{Term} & \textbf{Sales} & \texttt{Position} & $\ensuremath{\mathbb{N}}\xspace^3$ \\
\cline{1-4}
$1$ & $\uv{2}{2}{3}$ & \uv{2}{3}{3} & \ut{0}{0}{0} \\
$2$ & $\uv{2}{3}{3}$ & \uv{2}{2}{3} & \ut{0}{0}{0} \\
\rowcolor{grey}
$\uv{3}{3}{5}$ & $\uv{4}{7}{7}$ & \uv{0}{0}{1} & \ut{1}{1}{1} \\
\rowcolor{grey}
$4$ & $\uv{4}{4}{7}$ & \uv{0}{1}{1} & \ut{1}{1}{1} \\
\end{tabular}
}
}
\end{minipage}
\\[-2mm]
\caption{\abbrUAADB bounding the worlds and top-2 result produced by our approach}\label{fig:au-db-result-topk}
\end{subfigure}
\end{minipage}
\begin{minipage}{0.99\linewidth}
\begin{subfigure}{1\linewidth}
\centering
\captionsetup{width=.9\linewidth}
{\small
\begin{tabular}{ c|c|c|c}
\textbf{Term} & \textbf{Sales} & \textbf{Sum} & $\ensuremath{\mathbb{N}}\xspace^3$ \\
\cline{1-4}
$1$ & $\uv{2}{2}{3}$ & $\uv{4}{5}{6}$ & \ut{1}{1}{1} \\
$2$ & $\uv{2}{3}{3}$ & $\uv{6}{10}{10}$ & \ut{1}{1}{1} \\
$\uv{3}{3}{5}$ & $\uv{4}{7}{7}$ & $\uv{4}{11}{14}$ & \ut{1}{1}{1} \\
$4$ & $\uv{4}{4}{7}$ & $\uv{4}{4}{14}$ & \ut{1}{1}{1} \\
\end{tabular}
}
\\[-2mm]
\caption{\abbrUAADB windowed aggregation result produced by our approach}\label{fig:au-db-result-window}
\end{subfigure}
\end{minipage}
\\[-3mm]
\setlength{\belowcaptionskip}{-15pt}
\caption{Ranking, top-k, and windowed aggregation queries over an incomplete (probabilistic) database, including \abbrUAADBs.
}
\end{figure}
}
\begin{Example}[\abbrUAADB top-2 query]\label{ex:au-db-top-2-query}
\Cref{fig:au-db-result-topk} (left) shows an \abbrUAADB, \revm{which uses triples, consisting of a lower bound, a selected-guess value (defined shortly), and an upper bound to bound the value range of an attribute (\textbf{Term}, \textbf{Sales}) and the multiplicity of a tuple ($\mathbb N^3$)}.
The \abbrUAADB \emph{bounds} all of the possible worlds of our running example. Intuitively, each world's tuples fit into the ranges defined by the \abbrUAADB.
The selected-guess values encode one distinguished world (here, $D_1$)
\revm{--- supplementing the bounds with an educated guess about which possible world correctly reflects the real world~\footnote{\revm{
The process of obtaining a selected-guess world is domain-specific, but \cite{FH19,FH21} suggest the most likely world, if it can be feasibly obtained.
}}, providing backwards compatibility with existing systems, and a convenient reference point for users~\cite{BS20,kumari:2016:qdb:communicating}.}
\Cref{fig:au-db-result-topk} (right) shows the result of computing the top-2 answers sorted on \emph{term}.
The rows marked in grey encode all tuples that could exist in the top-2 result in \revm{some possible world. For example, the tuples $\tuple{3, 4}$ ($D_1$), $\tuple{3, 7}$ ($D_2$), and $\tuple{5, 7}$ ($D_3$) are all encoded by the \abbrUAADB tuple $\tuple{\uv{3}{3}{5}, \uv{4}{7}{7}} \rightarrow \ut{1}{1}{1}$}.
Results with a row multiplicity range of \ut{0}{0}{0} are certainly not in the result.
The \abbrUAADB compactly represents an \emph{under-approximation} of the \emph{certain answers} and an \emph{over-approximation} of all the \emph{possible answers}; e.g., for our example, the \abbrUAADB admits additional worlds
with 5 sales in term 4.
\end{Example}
Implementing windowed aggregation requires determining the (uncertain) membership of each window, \revm{which may be affected both by uncertainty in sort position, and in group-by attributes. Furthermore, we have to reason about which of the tuples possibly belonging to a window minimize / maximize the aggregation function result.
}
It is possible to implement this reasoning in SQL, albeit at the cost of range self-joins on the relation (%
this \emph{rewrite method} is discussed in detail in~\cite{techreport} and
evaluated in \Cref{sec:experiment}).
\revm{We propose a one-pass algorithm for windowed aggregation over \abbrUAADBs, which we will refer to as the \emph{native method}.}
The intuition behind our algorithm is to share state between multiple windows.
For example, consider the window \lstinline!ROWS BETWEEN 3! \lstinline!PRECEDING AND CURRENT ROW!.
In the deterministic case, with each new window one row enters the window and one row leaves.
Sum-based aggregates (\lstinline{sum}, \lstinline{count}, \lstinline{average}) can leverage commutativity and associativity of addition, i.e., updating the window requires only constant time.
\revm{Similar techniques~\cite{DBLP:conf/sigmod/AlbrightDGGLKSSW08} can maintain} \lstinline{min}/\lstinline{max} aggregates in time logarithmic in the window size.
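For reference, the deterministic constant-time update can be sketched as follows (a minimal Python illustration, not part of our system), for a frame of $n$ preceding rows and the current row:
\begin{lstlisting}
from collections import deque

def rolling_sums(values, n):
    window, total, out = deque(), 0, []
    for v in values:
        window.append(v); total += v
        if len(window) > n + 1:      # one row enters, at most one row leaves
            total -= window.popleft()
        out.append(total)
    return out
\end{lstlisting}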
Non-determinism in the row position makes such resource sharing problematic.
First, tuples with non-deterministic positions \revm{do not necessarily leave the window in FIFO order}; we need to iterate over tuples sorted on both the upper and lower bounds of their position.
Second, \revm{the number of tuples that} could \emph{possibly} belong to the window may be significantly larger than the window size.
\revm{Considering all possible rows for a $k$-row window (using the naive \abbrUAADB aggregation operator~\cite{FH21}) results in a looser bound than if only} subsets of size $k$ \revm{are considered}.
For that, we need access to the rows possibly in a window sorted on the bounds of the aggregation attribute values (e.g., to find the $k$-subset with the minimal/maximal sum), both in decreasing order of their upper bound and in increasing order of their lower bound. Furthermore, we have to \revm{separately maintain} the tuples that certainly belong to a window (\revm{which must contribute} to both bounds).
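The following sketch (our illustration, not the actual \texttt{compBounds} function used later) shows how such bounds can be derived for a windowed \lstinline{sum} from the tuples certainly in a window and the tuples possibly in it, assuming the aggregation attribute of each tuple $t$ lies in $[\texttt{lb}(t),\texttt{ub}(t)]$ and at most \texttt{size} tuples fit into the window:
\begin{lstlisting}
def window_sum_bounds(certain, possible, size, lb, ub):
    capacity = max(size - len(certain), 0)   # slots left for possible tuples
    lo = sum(lb(t) for t in certain)         # certain tuples always contribute
    hi = sum(ub(t) for t in certain)
    # only negative lower bounds can decrease the sum, only positive upper
    # bounds can increase it; a possible tuple may also be absent entirely
    lo += sum(sorted(lb(t) for t in possible if lb(t) < 0)[:capacity])
    hi += sum(sorted((ub(t) for t in possible if ub(t) > 0), reverse=True)[:capacity])
    return lo, hi
\end{lstlisting}
The resulting bounds are conservative: they cover every aggregate value achievable in some possible world, but are not necessarily tight.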
To efficiently maintain sets of tuples such that they can be accessed in several sort orders efficiently, we develop a new data structure which we refer to as a \emph{connected heap}.
A connected heap is a set of heaps
where an element popped from one heap can be efficiently \revm{($O(\log n)$)} removed from the other heaps even if their sort orders differ from the heap we popped the element from.
This data structure allows us to efficiently maintain sufficient state for computing \abbrUAADB results for windowed aggregation.
\revm{In preliminary experiments, we demonstrated that connected heaps significantly outperform a solution based on classical heaps.}
\begin{Example}[Windowed Aggregation]\label{ex:windowed-aggregation}
Consider the following windowed aggregation query:
\begin{lstlisting}
SELECT *, sum(Sales) OVER (ORDER BY term ASC ROWS
BETWEEN CURRENT ROW AND 1 FOLLOWING) as sum FROM R;
\end{lstlisting}
\Cref{fig:au-db-result-window} shows the result of this query over our running example \abbrUAADB. The column \texttt{Sum} bounds all possible windowed aggregation results for each \abbrUAADB tuple, and the entire \abbrUAADB relation bounds the windowed aggregation result for all possible worlds. Notice that \abbrUAADBs ignore correlations, which causes an over-approximation of ranges in the result. For example, term 1 has a maximum aggregation result value of 6 according to the \abbrUAADB representation, but the maximum possible aggregation value across all possible worlds is 5.
\end{Example}
\section{Native Algorithms}
\label{sec:native-algorithms-au}
\newcommand\mycommfont[1]{\footnotesize\ttfamily\textcolor{blue}{#1}}
\SetCommentSty{mycommfont}
\SetKwComment{tcc}{// }{}
\SetAlFnt{\small}
\newcommand{\texttt{compBounds}\xspace}{\texttt{compBounds}\xspace}
We now introduce optimized algorithms for ranking and windowed aggregation over \abbrUAADBs that are more efficient than their SQL counterparts presented in~\cite{techreport}.
Through a \emph{connected heap} data structure, these algorithms leverage the fact that the lower and upper position bounds are typically close approximations of one another to avoid performing multiple passes over the data.
\revm{We assume a physical encoding of an \abbrUAADB relation $\mathbf{\rela}$ as a classical relation~\cite{FH21} where each range-annotated value of an attribute $A$ is stored as three attributes $\lbMarker{A}$, $\bgMarker{A}$, and $\ubMarker{A}$. In this encoding, attributes $\lbMarker{\textbf{\sffamily \tup}.\#}$, $\bgMarker{\textbf{\sffamily \tup}.\#}$, and $\ubMarker{\textbf{\sffamily \tup}.\#}$ store the tuple's multiplicity bounds.}
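For example, under this encoding the \abbrUAADB tuple $\tuple{\uv{3}{3}{5}, \uv{4}{7}{7}} \rightarrow \ut{1}{1}{1}$ from \Cref{fig:au-db-result-topk} can be stored as the following flat record (a Python illustration with hypothetical column names):
\begin{lstlisting}
row = {
    "Term_lb": 3, "Term_bg": 3, "Term_ub": 5,     # range-annotated attribute Term
    "Sales_lb": 4, "Sales_bg": 7, "Sales_ub": 7,  # range-annotated attribute Sales
    "mult_lb": 1, "mult_bg": 1, "mult_ub": 1,     # tuple multiplicity bounds
}
\end{lstlisting}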
\subsection{Non-deterministic Sort, Top-k}\label{sec:nondet-sort-topk}
\Cref{alg:topkOnepass} sorts an input \abbrUAADB $\mathbf{\rela}$.
The algorithm assigns to each tuple its position $\tau$
given as lower and upper bounds: $\texttt{t}.\lbMarker{\tau}, \texttt{t}.\ubMarker{\tau}$, respectively\footnote{
The selected guess position $\bgMarker{\tau}$ is trivially obtained using an additional linked heap, and omitted here for clarity.
}.
Given a parameter $\texttt{k}$, the algorithm can also be used to find the top-\texttt{k} elements; otherwise we set $\texttt{k} = \revm{\card{R}}$ (the size of the input relation).
\revm{\Cref{alg:topkOnepass} \iftechreport{(\Cref{fig:sortOnePassLifecycle})} takes as input the relational encoding of an \abbrUAADB relation $\mathbf{\rela}$ sorted on $\lbMarker{O}$, the lower-bound of the sort order attributes.
Recall from \Cref{eq:upos-lb} that to determine a lower bound on the sort position of a tuple $\textbf{\sffamily \tup}$ we have to sum up the smallest multiplicity of tuples $\rangeOf{s}$ that are certainly sorted before $\textbf{\sffamily \tup}$, i.e., where $\ubMarker{\rangeOf{s}.O} \fles{O} \lbMarker{\textbf{\sffamily \tup}.O}$. Since $\lbMarker{\rangeOf{s}.O} \fles{O} \ubMarker{\rangeOf{s}.O}$ holds for any tuple, we know that these tuples are visited by \Cref{alg:topkOnepass} before $\textbf{\sffamily \tup}$. We store tuples in a min-heap \texttt{todo} sorted on $\ubMarker{O}$ and maintain a variable $\lbMarker{\texttt{rank}}$ to store the current lower bound. For every incoming tuple $\textbf{\sffamily \tup}$, we first determine all tuples $\rangeOf{s}$ from \texttt{todo} certainly preceding $\textbf{\sffamily \tup}$ ($\ubMarker{\rangeOf{s}.O} < \lbMarker{\textbf{\sffamily \tup}.O}$) and update $\lbMarker{\texttt{rank}}$ with their multiplicity. Since $\textbf{\sffamily \tup}$ is the first tuple certainly ranked after any such tuple $\rangeOf{s}$ and all tuples following $\textbf{\sffamily \tup}$ will also certainly be ranked after $\rangeOf{s}$, we can now determine the upper bound on $\rangeOf{s}$'s position. Based on \Cref{eq:upos-ub} this is the sum of the maximal multiplicity of all tuples that may precede $\rangeOf{s}$. These are all tuples $\rangeOf{u}$ such that $\ubMarker{\rangeOf{s}.O} \geq \lbMarker{\rangeOf{u}.O}$, i.e., all tuples we have processed so far. We store the sum of the maximal multiplicity of these tuples in a variable $\ubMarker{\texttt{rank}}$ which is updated for every incoming tuple. We use a function \texttt{emit} to compute $\rangeOf{s}$'s upper bound sort position, adapt $\lbMarker{\rangeOf{s}.\#}$ (for a top-k query, $\rangeOf{s}$ may not exist in the result if its position may be larger than $k$), add $\rangeOf{s}$ to the result, and adapt $\lbMarker{\texttt{rank}}$ (all tuples processed in the following are certainly ranked higher than $\rangeOf{s}$). Function \texttt{split} splits a tuple with $\textbf{\sffamily \tup}.\# > 1$ into multiple tuples as required by \Cref{def:au-db-sorting-operat}.
If we are only interested in the top-k results, then we can stop processing the input once $\lbMarker{\texttt{rank}}$ is larger than $k$, because all following tuples will certainly not be in the top-k. Once all inputs have been processed, the heap may still contain tuples whose relative sort position wrt.\ each other is uncertain. We flush these tuples at the end.
}
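The following simplified Python sketch (our illustration; unit multiplicities, no selected-guess positions, and no top-k cut-off) captures this sweep: it reports, for every row, the number of rows certainly preceding it and the number of rows possibly preceding it, i.e., bounds on its 0-indexed sort position.
\begin{lstlisting}
import heapq

def sort_position_bounds(rows, lb, ub):
    # rows are sorted ascending on lb(r); each order-by value lies in [lb(r), ub(r)]
    todo, emitted, seen, out = [], 0, 0, []
    for i, r in enumerate(rows):
        # rows whose upper bound lies below r's lower bound certainly precede r;
        # their position bounds are now final
        while todo and todo[0][0] < lb(r):
            _, _, s, certainly_before = heapq.heappop(todo)
            out.append((s, certainly_before, seen - 1))
            emitted += 1
        heapq.heappush(todo, (ub(r), i, r, emitted))  # 'emitted' rows certainly precede r
        seen += 1
    while todo:   # flush rows whose relative order remains uncertain
        _, _, s, certainly_before = heapq.heappop(todo)
        out.append((s, certainly_before, seen - 1))
    return out

# Example: "a"/"b" may swap (positions [0,1]); "c"/"d" may swap (positions [2,3])
rows = [("a", 0, 1), ("b", 0, 2), ("c", 3, 5), ("d", 4, 4)]
print(sort_position_bounds(rows, lb=lambda r: r[1], ub=lambda r: r[2]))
\end{lstlisting}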
\setlength{\textfloatsep}{3pt}
\setlength{\floatsep}{0pt}
\begin{algorithm}[t]
\SetKwProg{Fn}{def}{}{}
\KwIn{$\mathbf{\rela}$ (sorted by $\lbMarker{O}$), $\texttt{k} \in \semN$ (or $\texttt{k} = \textbar R\textbar$)}
$\texttt{todo} \gets \texttt{minheap}(\ubMarker{O})$ ;\ $\lbMarker{\texttt{rank}} \gets 0$ ;\ $\ubMarker{\texttt{rank}} \gets 0$ ;\ $\texttt{res} \gets \emptyset$ \\
\For{$\texttt{\textbf{\sffamily \tup}} \in \mathbf{\rela}$}{
\While(\tcc*[f]{emit tuples}){$\texttt{todo}.|peek|().\ubMarker{O} < \textbf{\sffamily \tup}.\lbMarker{O}$}{
$|emit|(\texttt{todo}.|pop|())$\\
\If(\tcc*[f]{tuples certainly out of top-k?}){$\lbMarker{\texttt{rank}}>\texttt{k}$}{
\Return{\texttt{res}}
}
}
$\rangeOf{t}.\lbMarker{\tau} \gets \lbMarker{\texttt{rank}}$ \tcc*[f]{set position lower bound}\\
$\texttt{todo}.|insert|(\textbf{\sffamily \tup})$ \tcc*[f]{insert into todo heap}\\
$\ubMarker{\texttt{rank}}\ \texttt{+=}\;\ubMarker{\textbf{\sffamily \tup}.\#}$ \tcc*[f]{update position upper bound}
}
\While(\tcc*[f]{flush remaining tuples}){\textbf{\upshape not} $\texttt{todo}.|isEmpty|()$}{
$|emit|(\texttt{todo}.|pop|())$\\
}
\Return{\texttt{res}}\\
\BlankLine
\Fn{\texttt{emit}$(\rangeOf{s})$}{
$\rangeOf{s}.\ubMarker{\tau} \gets \texttt{min}(\texttt{k},\;\ubMarker{\texttt{rank}})$ \tcc*[f]{position upper bound capped at $k$}\\
\If(\tcc*[f]{$\rangeOf{s}$ may not be in result if $\rangeOf{s}.\ubMarker{\tau} > k$}){$\ubMarker{\texttt{rank}} >\texttt{k}$}{
$\rangeOf{s}.\lbMarker{\#} \gets 0$
}
$\texttt{res} \gets \texttt{res} \cup |split|(\{\rangeOf{s}\})$ \\
$\lbMarker{\texttt{rank}}\ \texttt{+=}\ \lbMarker{\rangeOf{s}.\#}$ \tcc*[f]{update position lower bound}
}
\caption{Non-deterministic sort on $O$ (top-k)}
\label{alg:topkOnepass}
\end{algorithm}
\iftechreport{
\begin{algorithm}[h]
\SetKwProg{Fn}{Function}{:}{end}
\Fn{|split|(t)}{
\For{$i \in [1,\ubMarker{\texttt{t}.\textbf{\sffamily \tup}.\#}]$}{
$\texttt{t}_i = |copy|(t)$ \;
$\bgMarker{\texttt{t}_i.\textbf{\sffamily \tup}.\#} = \bgMarker{\texttt{t}.\textbf{\sffamily \tup}.\#}<i:1?0$ \;
$\ubMarker{\texttt{t}_i.\textbf{\sffamily \tup}.\#} = 1$ \;
$\lbMarker{\texttt{t}_i.\textbf{\sffamily \tup}.\#} = \lbMarker{\texttt{t}.\textbf{\sffamily \tup}.\#}<i:1?0$ \;
$\ubMarker{\texttt{t}_i.\tau} += i$ \;
$\lbMarker{\texttt{t}_i.\tau} += i$ \;
\Return{$\texttt{t}_i$} \;}}
\caption{\revm{Split bag tuple}}
\label{alg:topkSplit}
\end{algorithm}
}
\iftechreport{
\begin{figure}
\includegraphics[width=0.6\columnwidth]{figs/sort-lifecycle.pdf}
\vspace{-3mm}
\setlength{\belowcaptionskip}{-2pt}
\caption{The lifecycle of tuples in \Cref{alg:topkOnepass}}
\label{fig:sortOnePassLifecycle}
\end{figure}
}
\iftechreport{
\revm{
\Cref{alg:topkSplit} defines the function $split(\textbf{\sffamily \tup})$, which splits the multiplicity of a range tuple $\textbf{\sffamily \tup}$ into multiplicities of one, following the semantics in \Cref{fig:uadbSortSemantics}.
}
}
\ifnottechreport{
\mypar{Complexity Analysis}
\revm{Let $n = \card{\mathbf{\rela}}$. The algorithm requires $O(n \cdot \log n)$ to sort the input. Since the heap size is bounded by $n$ and \texttt{pop} and \texttt{insert} are logarithmic in the heap size, the overall worst-case runtime is $O(n \cdot \log n)$. \Cref{alg:topkOnepass}'s worst-case memory requirement is $O(n)$.}
}
\iftechreport{
\mypar{Complexity Analysis}
\revm{Let $n = \card{\mathbf{\rela}}$. The algorithm requires $O(n \cdot \log n)$ to sort the input. It then processes the data in a single pass. For each tuple, we compare $\textbf{\sffamily \tup}.\ubMarker{O}$ in $O(1)$ with the root of the heap and insert the tuple into the heap in $O(\log \card{heap})$. Tuples are removed from the heap just once in $O(\log \card{heap})$. In the worst case, if the sort positions of all tuples may be less than $k$, then the heap will contain all $n$ tuples at the end before flushing. Thus, $\card{heap}$ is bounded by $n$ and we get $O(n \cdot \log n)$ as the worst-case runtime complexity for our algorithm, which requires $O(n)$ memory. However, in practice, heap sizes are typically much smaller.}
}
\subsection{Connected Heaps}\label{sec:connected-heaps}
\revm{In our algorithm for windowed aggregation that we will present in \cref{sec:ranged-window-agg}, we need to maintain the tuples possibly in a window ordered increasingly on $\ubMarker{\tau}$ (for fast eviction), sorted on $\lbMarker{A}$ to compute $\winbotk{\mathbf{\rela}}{A}$, and sorted decreasingly on $\ubMarker{A}$ to compute $\wintopk{\mathbf{\rela}}{A}$.
We could use separate heaps to access the smallest element(s) wrt. to any of these orders efficiently.
However, if a tuple needs to be deleted, it will likely not be the root element in all heaps, which means we have to remove non-root elements from some heaps, which is inefficient (linear in the heap size). Of course, it would be possible to use other order-maintaining data structures such as balanced binary trees. However, such data structures do not achieve the $O(1)$ lookup performance for the smallest element that heaps provide. \iftechreport{Furthermore, trees are typically not as efficient in practice as heaps, which can be implemented as arrays.}
Instead,
we introduce a simple, yet effective, data structure we refer to as a \emph{connected heap}.
}
A \textit{connected heap} consists of $H$ heaps that store pointers to a shared set of records. Each heap has its own sort order. A record stored in a connected heap consists of a tuple (the payload) and $H$ backwards pointers that point to the nodes of the individual heaps storing this tuple. These backward pointers enable efficient deletion \revm{($O(H \cdot \log n)$)} of a tuple from all heaps when it is popped as the root of one of the component heaps. \ifnottechreport{\revm{In \cite{techreport} we explain how the standard sift-up and sift-down heap operations are used to restore the heap property in $O(\log n)$ when removing a non-root element from a component heap.}}
When a tuple is inserted into a connected heap, it is inserted into each component heap in \revm{$O(\log n)$} in the usual way with the exception that the backwards pointers
are populated.
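A minimal Python sketch of this data structure (our illustration, not the system's implementation) with array-based component heaps and per-record back pointers is shown below; popping the root of one component heap deletes the record from all $H$ heaps in $O(H \cdot \log n)$:
\begin{lstlisting}
class Record:
    """A shared record; pos[i] is its index inside component heap i."""
    def __init__(self, payload, num_heaps):
        self.payload = payload
        self.pos = [None] * num_heaps

class ConnectedHeap:
    def __init__(self, *keys):              # one key function per component min-heap
        self.keys = keys
        self.heaps = [[] for _ in keys]     # array-based binary heaps of Records

    def insert(self, payload):
        rec = Record(payload, len(self.keys))
        for h in range(len(self.heaps)):
            self.heaps[h].append(rec)
            rec.pos[h] = len(self.heaps[h]) - 1
            self._sift_up(h, rec.pos[h])
        return rec

    def peek(self, h):
        return self.heaps[h][0].payload if self.heaps[h] else None

    def pop(self, h):
        """Pop the minimum of heap h and delete the record from every heap."""
        rec = self.heaps[h][0]
        for g in range(len(self.heaps)):    # back pointers locate rec in each heap
            self._delete_at(g, rec.pos[g])
        return rec.payload

    def _less(self, h, i, j):
        key = self.keys[h]
        return key(self.heaps[h][i].payload) < key(self.heaps[h][j].payload)

    def _swap(self, h, i, j):
        heap = self.heaps[h]
        heap[i], heap[j] = heap[j], heap[i]
        heap[i].pos[h], heap[j].pos[h] = i, j

    def _sift_up(self, h, i):
        while i > 0 and self._less(h, i, (i - 1) // 2):
            self._swap(h, i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, h, i):
        n = len(self.heaps[h])
        while True:
            smallest, l, r = i, 2 * i + 1, 2 * i + 2
            if l < n and self._less(h, l, smallest): smallest = l
            if r < n and self._less(h, r, smallest): smallest = r
            if smallest == i:
                return
            self._swap(h, i, smallest)
            i = smallest

    def _delete_at(self, h, i):
        """Replace node i by the right-most leaf, then restore the heap property."""
        heap = self.heaps[h]
        last = heap.pop()
        if i < len(heap):
            heap[i] = last
            last.pos[h] = i
            self._sift_up(h, i)
            if last.pos[h] == i:            # did not move up, so try moving down
                self._sift_down(h, i)

# Popping h1's root also removes the record from h2
ch = ConnectedHeap(lambda t: t[0], lambda t: t[1])
for t in [(1, 3), (2, 6), (3, 2), (4, 1)]:
    ch.insert(t)
assert ch.pop(0) == (1, 3)
assert ch.peek(1) == (4, 1)
\end{lstlisting}
The short usage snippet at the end mirrors \Cref{ex:connected-heap}: popping the root of $h_1$ also removes the corresponding record from $h_2$.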
\ifnottechreport{
\revb{
In~\cite{techreport}, we experimentally compare heaps with connected heaps. Even for small databases (10k tuples) and a small fraction of tuples with uncertain order-by values (1\%), connected heaps outperform classical heaps by a factor of $\sim 2$. Larger databases / more uncertain data result in larger heaps and, thus, even better performance.
}
}
\iftechreport{
\mypar{Preliminary experiments}
To measure the impact of the backpointers in connected heaps on performance, we did a preliminary experimental comparison with using a set of independent heaps.
Without the backlinks, removing a non-root element from a heap is linear in the size of the heap in the worst case, because it may require a search over the whole heap to find the position of such an element. Afterwards, the element can be deleted and the heap property can be restored in $O(\log n)$. Using the backlinks, finding the positions of an element in other heaps is $O(1)$, and so popping the root element of one heap and removing it from all other heaps is in $O((\log n) \cdot m)$ where $n$ is the size of the largest heap and $m$ is the number of heaps. The table below shows the execution times in milliseconds using connected heaps (back pointers) versus classical unconnected heaps (linear search).
This experiment was run on a database with 50k tuples and 1\%-5\% uncertainty (the fraction of tuples that are uncertain), varying the size of the ranges for the attribute we are aggregating over. The main factor distinguishing linear search performance from back pointers is the heap size, which for our windowed aggregation algorithm is affected by the attribute range size, the percentage of tuples with uncertain order-by values, and the data size. Even though in this experiment the amount of uncertain data and the database size are quite low, we already see improvements ranging from 25\% up to a factor of $\sim 10$. For larger databases or a larger percentage of uncertain data, the sizes of the heaps will increase and, thus, we will see even more significant performance improvements.
{\centering
\begin{tabular}{ l|l|c|c }
\textbf{Uncert} & \textbf{Range} & \textbf{Connected heaps} & \textbf{Unconnected heaps} \\
& & \textbf{(Back pointers)(ms)} & \textbf{(Linear search)(ms)} \\
\hline
1\% & 2000 & 1979.272 & 3479.042\\
1\% & 15000 & 2045.162 & 6676.732\\
1\% & 30000 & 2103.974 & 9646.330\\
5\% & 2000 & 1976.651 & 4078.487\\
5\% & 15000 & 2149.990 & 15186.657\\
5\% & 30000 & 2191.823 & 22866.713\\
\end{tabular}
}
}
\iftechreport{
\mypar{Deletion from a connected heap} When a node is popped from one of the component heaps
the nodes of the other heaps storing the tuple are identified in $O(H)$ using the backwards pointers. Like in standard deletion of nodes from a heap, a deleted node is replaced with the right-most node at the leaf level. Standard sift-down and sift-up are then used to restore the heap property in $O(\log n)$.
Recall that the heap property for a min-heap requires that for each node in the heap its value is larger than the value of its parent. Insertion of a new node $v$ into a heap places the new element at the next free position at the leaf level. This may violate the heap property. The heap property can be restored in $O(\log n)$ using sift-up (repeatedly replacing a node with its parent). Similarly, to delete the root of a heap, we replace the root with the right-most child. This again may violate the heap property if the new root is larger than one of its children. The heap property can be restored in $O(\log n)$ steps using sift-down, i.e., replacing a node that is larger than a child with the smaller of its children.
For a connected heap, deletion may cause a node to be deleted that is currently not the root of the heap. Like in standard heaps, we replace the node $v$ to be deleted with the right-most node $v_l$ from the leaf level. This may violate the heap property (every child is larger than its parent) in two possible ways: either $v_l$ is smaller than the parent of $v$ or $v_l$ is larger than one of the children of $v$. Note that it is not possible for both cases to occur at the same time, because the heap was valid before and, thus, if $v_l$ is larger than a child of $v$, then it has to be larger than the parent of $v$.
If $v_l$ is smaller than the parent of $v$, then it has to be smaller than all other nodes in the subtree rooted at $v$. We can restore the heap property by sifting up $v_l$.
%
Now consider the case where $v_l$ is larger than one of the children of $v$ and let $v_c$ denote that child (or the smaller child if $v_l$ is larger than both children). Note that the subtree rooted at $v$ was a valid heap. Thus, replacing $v$ with $v_l$ is replacing the root element of this subheap and the heap property for the subheap can be restored using sift-down. Since $v_l$ is larger than the parent of $v$ this restores the heap property for the whole heap.}
\tikzset{
treenode/.style = {align=center, inner sep=0pt, text centered,
font=\sffamily, circle, black, draw=black,
text width=1.5em, very thick},
tuple/.style = {align=center, inner sep=3pt, text centered,
font=\sffamily, rectangle, rounded corners, black, draw=black, thick, rectangle split, rectangle split horizontal,
rectangle split parts=2},
link/.style = {thick, dashed,<->},
}
\begin{Example}[Connected heap]\label{ex:connected-heap}
Consider the connected heap shown below on the left storing tuples $t_1 = (1,3)$, $t_2 = (2,6)$, $t_3 = (3,2)$, and $t_4=(4,1)$. Heap $h_1$ ($h_2$) is sorted on the first (second) attribute.
Calling \texttt{pop()} on $h_1$ removes $t_1$ from $h_1$. Using the backwards pointer from $t_1$ to the corresponding node in $h_2$ (shown in red), we also remove $t_1$ from $h_2$. The node pointing to $t_1$ from $h_2$ is replaced with the right most leaf node of $h_2$ (pointing to $t_2$). In this case the heap property is not violated and, thus, no sift-down / up is required.
\begin{minipage}{1\linewidth}
\centering
\begin{minipage}{0.5\linewidth}
\centering
\scalebox{0.6}{
\begin{tikzpicture}[level/.style = {sibling distance = 1.2cm,
level distance = 1cm}]]
\node [treenode, label=left:$\bf h_1$] (l1) at (0,1.5) {1}
child { node [treenode] (l2) {2} edge from parent[very thick]
child { node [treenode] (l4) {4} edge from parent[very thick] }
}
child { node [treenode] (l3) {3} edge from parent[very thick]};
\node [tuple, label=left:t1] (t1) at (2.1,1.5) {
\nodepart{one} 1
\nodepart{two} 3
};
\node [tuple, label=left:t2] (t2) at (2.1,1) {
\nodepart{one} 2
\nodepart{two} 6
};
\node [tuple, label=left:t3] (t3) at (2.1,0.5) {
\nodepart{one} 3
\nodepart{two} 2
};
\node [tuple, label=left:t4] (t4) at (2.1,0) {
\nodepart{one} 4
\nodepart{two} 1
};
\node [treenode, label=left:$\bf h_2$] (r1) at (4,1.5) {1}
child { node [treenode] (r2) {2} edge from parent[very thick]
child {node [treenode] (r6) {6} edge from parent[very thick] }
}
child { node [treenode] (r3) {3} edge from parent[very thick]};
\draw [link, red] (l1.north) to[out=50, in=125] (t1.one north);
\draw [link, red] (t1.two north) to[out=50, in=90, distance=1.4cm] (r3.north);
\draw [link, black] (t2.north west) to[out=160, in=30] (l2.east);
\draw [link, black] (t2.east) to[out=10, in=180] (r6.west);
\draw [link, black] (t3.south west) to[out=-170, in=-20] (l3.south east);
\draw [link, black] (t3.south east) to[out=-30, in=-120] (r2.west);
\draw [link, black] (t4.one south) to[out=-100, in=0] (l4.east);
\draw [link, black] (t4.two south) to[out=-45, in=-90, distance=2.2cm] (r1.south);
\end{tikzpicture}
}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\textbf{Result of \texttt{h1.pop()}}
\scalebox{0.6}{
\begin{tikzpicture}[level/.style = {sibling distance = 1.2cm,
level distance = 1cm}]]
\node [treenode, label=left:$\bf h_1$] (l2) at (0,1.5) {2}
child { node [treenode] (l4) {4} edge from parent[very thick]}
child { node [treenode] (l3) {3} edge from parent[very thick]};
\node [tuple, label=left:t2] (t2) at (2.2,1.5) {
\nodepart{one} 2
\nodepart{two} 6
};
\node [tuple, label=left:t3] (t3) at (2.2,1) {
\nodepart{one} 3
\nodepart{two} 2
};
\node [tuple, label=left:t4] (t4) at (2.2,0.5) {
\nodepart{one} 4
\nodepart{two} 1
};
\node [treenode, label=left:$\bf h_2$] (r1) at (4,1.5) {1}
child { node [treenode] (r2) {2} edge from parent[very thick]}
child { node [treenode] (r6) {6} edge from parent[very thick]};
\draw [link, black] (t2.north west) to[out=160, in=30] (l2.north east);
\draw [link, black] (t2.north east) to[out=35, in=90, looseness = 1.3] (r6.north);
\draw [link, black] (t3.south west) to[out=-150, in=0] (l3.east);
\draw [link, black] (t3.south east) to[out=-30, in=-180, looseness = 1.4] (r2.west);
\draw [link, black] (t4.south west) to[out=-140, in=-20] (l4.south east);
\draw [link, black] (t4.south east) to[out=-20, in=-90, distance=1.4cm] (r1.south);
\end{tikzpicture}
}
\end{minipage}
\end{minipage}
\\[-5mm]
\end{Example}
\subsection{Ranged Windowed Aggregation}\label{sec:ranged-window-agg}
\newcommand{\lbMarker{\texttt{pagg}}\xspace}{\lbMarker{\texttt{pagg}}\xspace}
\newcommand{\ubMarker{\texttt{pagg}}\xspace}{\ubMarker{\texttt{pagg}}\xspace}
\newcommand{\texttt{cert}\xspace}{\texttt{cert}\xspace}
\newcommand{\texttt{poss}\xspace}{\texttt{poss}\xspace}
\newcommand{\lbMarker{\texttt{c-rank}}\xspace}{\lbMarker{\texttt{c-rank}}\xspace}
\newcommand{\lbMarker{X}}{\lbMarker{X}}
\newcommand{\ubMarker{X}}{\ubMarker{X}}
\newcommand{\texttt{ub\_\pheap}}{\texttt{ub\_\texttt{poss}\xspace}}
\newcommand{\texttt{lb\_\pheap}}{\texttt{lb\_\texttt{poss}\xspace}}
\newcommand{\texttt{openw}\xspace}{\texttt{openw}\xspace}
Without loss of generality, we focus on window specifications with only a \lstinline|ROWS PRECEDING| clause; a \lstinline|FOLLOWING| clause can be mimicked by offsetting the window, i.e., \revm{a window bound of $[-N,0]$.}
\revm{\Cref{alg:window_precOnePass} uses a function \texttt{compBounds}\xspace to compute the bounds on the aggregation function result based on the certain and possible content of a window.}
\ifnottechreport{
We present the definition of this function for several aggregation functions in~\cite{techreport}.}
\iftechreport{
\revm{We discuss the code for these functions below for aggregation functions \ensuremath{\mathbf{min}}\xspace, \ensuremath{\mathbf{max}}\xspace, and \ensuremath{\mathbf{sum}}\xspace (\ensuremath{\mathbf{count}}\xspace uses the same algorithm as \ensuremath{\mathbf{sum}}\xspace using $\uv{1}{1}{1}$ instead of the values of an attribute $A$).}
}
\revm{\Cref{alg:window_precOnePass} follows a sweeping pattern similar to \Cref{alg:topkOnepass} to compute the windowed aggregate in a single pass over the data, which has been preprocessed by applying $\rank{O}{\tau}(\mathbf{\rela})$ and then sorted on $\lbMarker{\tau}$. The algorithm uses a minheap \texttt{openw}\xspace sorted on $\ubMarker{\tau}$ to store tuples for which we have not yet seen all tuples that could belong to their window. Additionally, the algorithm maintains the following data structures:
\texttt{cert}\xspace is a map from a sort position $i$ to a tree storing tuples $\textbf{\sffamily \tup}$ that certainly exist and for which $\textbf{\sffamily \tup}.\lbMarker{\tau} = i$, sorted on $\ubMarker{\tau}$. This data structure is used to determine which tuples certainly belong to the window of a tuple; $(\texttt{poss}\xspace,\lbMarker{\texttt{pagg}}\xspace,\ubMarker{\texttt{pagg}}\xspace)$ is a connected minheap whose components $\texttt{poss}\xspace$, \lbMarker{\texttt{pagg}}\xspace, and \ubMarker{\texttt{pagg}}\xspace are sorted on $\ubMarker{\tau}$, $\lbMarker{A}$, and $\ubMarker{A}$, respectively. This connected heap stores the tuples possibly in a window. The different sort orders are needed to compute bounds on the aggregation function result for a window efficiently (we will expand on this later). Finally, we maintain a watermark \lbMarker{\texttt{c-rank}}\xspace for the lower bound of the certain part of windows.
}
\begin{figure}[t]
\centering
\begin{tikzpicture}
[
edg/.style={->,line width=0.4mm},
]
\def0.5{0.6}
\def0.4{0.4}
\def-6 * \yscaler{-5.5 * 0.4}
\draw[|->, thick] (-0,-6 * \yscaler) -- (10 * 0.5,-6 * \yscaler);
\foreach \x in {0,...,9}
\draw[thick] (\x * 0.5,0.3 * 0.4 + -6 * \yscaler) -- (\x * 0.5,-0.3 * 0.4 + -6 * \yscaler) node[below] {\x};
\draw[edg,blue,|-|] (5 * 0.5,-1 * 0.4 * 1) node[left,black]{} -- node[above,black]{$\rangeOf{s}$} (7 * 0.5,-1 * 0.4 * 1);
\draw[edg,darkgreen,|-|] (0 * 0.5, -1 * 0.4 * 1.7) -- node[above,black]{\textcolor{darkgreen}{\texttt{poss}}} (7 * 0.5, -1 * 0.4 * 1.7);
\draw[edg,red,|-|] (2 * 0.5, -1 * 0.4 * 1.7) -- node[below,black]{\textcolor{red}{cert}} (5 * 0.5, -1 * 0.4 * 1.7);
\fill[darkgreen, very nearly transparent] (0,-1 * 0.4 * 1.7) rectangle (7 * 0.5,-6 * \yscaler);
\fill[red, very nearly transparent] (2 * 0.5,-1 * 0.4 * 1.7) rectangle (5 * 0.5,-6 * \yscaler);
\draw[thick,blue,|-|] (0 * 0.5,-1 * 0.4 * 2.7) node[left,black]{} -- node[above,black]{$\rangeOf{t_1}$} (3 * 0.5,-1 * 0.4 * 2.7);
\draw[thick,blue,|-|] (2 * 0.5,-1 * 0.4 * 3.7) node[left,black]{} -- node[above,black]{$\rangeOf{t_2}$} (3 * 0.5,-1 * 0.4 * 3.7);
\draw[black,<-] (3.1 * 0.5,-1 * 0.4 * 3.7) node[left,black]{} -- node[above right,black,xshift=3.5mm]{\footnotesize\texttt{cert}[2]} (4.5 * 0.5,-1 * 0.4 * 2.7);
\draw[thick,blue,|-|] (3 * 0.5,-1 * 0.4 * 4.7) node[left,black]{} -- node[above,black]{$\rangeOf{t_3}$} (4 * 0.5,-1 * 0.4 * 4.7);
\draw[black,<-] (4.1 * 0.5,-1 * 0.4 * 4.7) node[left,black]{} -- node[above right,black,xshift=3.5mm]{\footnotesize\texttt{cert}[3]} (5.5 * 0.5,-1 * 0.4 * 3.7);
\draw[thick,blue,|-|] (8 * 0.5,-1 * 0.4 * 4) node[left,black]{} -- node[above,black]{$\rangeOf{t_4}$} (9 * 0.5,-1 * 0.4 * 4);
\draw[black,->] (1 * 0.5, -6 * \yscaler+0.5) node[left,black]{} -- node[above left,black,xshift=-1.5mm,yshift=1.5mm]{\footnotesize\texttt{{c-rank}\lbMarker{}}} (1.9 * 0.5, -6 * \yscaler+0.1);
\end{tikzpicture}
\vspace{-10pt}
\caption{\revb{State example for \Cref{alg:window_precOnePass}, N=5, $\lbMarker{\texttt{c-rank}}$=2.}}\label{fig:wind-alg-state-example}
\end{figure}
\begin{algorithm}[t]
\SetKwProg{Fn}{def}{}{}
\SetSideCommentLeft{}{}{}
\KwIn{$f$, $X$,$O$, $N$, $A$, $\rank{O}{\tau}(\mathbf{\rela})$ \textbf{sorted on} $\lbMarker{\tau}$}
$\texttt{openw}\xspace \gets |minheap|(\ubMarker{\tau})$ \tcc*[f]{tuples with open windows}\\
$\texttt{cert}\xspace \gets |Map|(|int|, |Tree|(\ubMarker{\tau}))$ \tcc*[f]{certain window members by pos.} \\
$(\texttt{poss}\xspace,\lbMarker{\texttt{pagg}}\xspace,\ubMarker{\texttt{pagg}}\xspace) \gets \texttt{connected-minheap}(\ubMarker{\tau}, \lbMarker{A}, \ubMarker{A})$ \\
$\lbMarker{\texttt{c-rank}}\xspace \gets 0$ \tcc*[f]{watermark for certain window}\\
$|res| \gets \emptyset$\\
\For{$\textbf{\sffamily \tup} \in \mathbf{\rela}$}{
$|\texttt{openw}\xspace|.|insert|(\textbf{\sffamily \tup})$ \label{algl:nwin-insert-openw}\\
\If(\tcc*[f]{insert into potential certain window}){$\lbMarker{\rangeOf{t}.\#} > 0$}{
$\texttt{cert}\xspace[\lbMarker{\textbf{\sffamily \tup}.\tau}].|insert|(\textbf{\sffamily \tup}) $ \\
}
\While(\tcc*[f]{close windows}){$|\texttt{openw}\xspace|.|peek|().\ubMarker{\tau} < \textbf{\sffamily \tup}.\lbMarker{\tau}$}{
$\rangeOf{s} \gets |\texttt{openw}\xspace|.|pop|()$ \label{algl:nwin-emit}\\
\While(\tcc*[f]{evict certain win.}){$\lbMarker{\texttt{c-rank}}\xspace < \rangeOf{s}.\ubMarker{\tau} - N$}{ \label{algl:nwin-cert-evict}
$\texttt{cert}\xspace[\lbMarker{\texttt{c-rank}}\xspace] = \NULL$ \\
$\lbMarker{\texttt{c-rank}}\xspace++$\\
}
$\rangeOf{s}.|X| \gets |\texttt{compBounds}\xspace|(f,\rangeOf{s},|\texttt{cert}\xspace|, |\texttt{poss}\xspace|)$ \tcc*[f]{compute agg.} \label{algl:nwin-calc-agg}\\
\While(\tcc*[f]{evict poss. win.}){$|\texttt{poss}\xspace|.|peek|().\ubMarker{\tau} < \rangeOf{s}.\lbMarker{\tau} -N$}{
$|\texttt{poss}\xspace|.|pop|()$\\
}
$|res| \gets |res| \cup \{\rangeOf{s}\}$\\
}
$|\texttt{poss}\xspace|.|insert|(\textbf{\sffamily \tup})$ \tcc*[f]{insert into poss. win.}\\
}
\caption{Aggregate $f(A) \to X$, sort on $O$, $N$ preceding}
\label{alg:window_precOnePass}
\end{algorithm}
\input{agg-algos}
\revm{
\Cref{alg:window_precOnePass} first inserts each incoming tuple into \texttt{openw}\xspace
(\Cref{algl:nwin-insert-openw}). If the tuple certainly exists, it is inserted
into the tree of certain tuples whose lower bound position is
$\textbf{\sffamily \tup}.\lbMarker{\tau}$. Note that each of these trees is sorted on
$\ubMarker{\tau}$, which will be relevant later. Next, the algorithm determines
which tuples from \texttt{openw}\xspace have had their windows fully observed. These are
all tuples $\rangeOf{s}$ which are certainly ordered before the tuple
$\textbf{\sffamily \tup}$ we are processing in this iteration ($\rangeOf{s}.\ubMarker{\tau}
< \textbf{\sffamily \tup}.\lbMarker{\tau}$). To see why this is the case first observe that
(i) we are processing input tuples in increasing order of $\lbMarker{\tau}$
and (ii) tuples are ``finalized'' by computing the aggregation bounds in
monotonically increasing order of $\ubMarker{\tau}$. Given that we are using a
window bound $[-N,0]$, all tuples $\rangeOf{s}$ that could possibly belong to
the window of a tuple $\textbf{\sffamily \tup}$ have to have $\rangeOf{s}.\lbMarker{\tau}
\leq \textbf{\sffamily \tup}.\ubMarker{\tau}$. Based on these observations, once we
have processed a tuple $\textbf{\sffamily \tup}$ with $\textbf{\sffamily \tup}.\lbMarker{\tau} >
\rangeOf{s}.\ubMarker{\tau}$ for a tuple $\rangeOf{s}$ in \texttt{openw}\xspace, we know that
no tuples that we will process in the future can belong to the window for
$\rangeOf{s}$. In \Cref{algl:nwin-emit} we iteratively pop such tuples from
\texttt{openw}\xspace. For each such tuple $\rangeOf{s}$ we evict tuples from \texttt{cert}\xspace and update
the high watermark \lbMarker{\texttt{c-rank}}\xspace (\Cref{algl:nwin-cert-evict}). Recall that for a
tuple $\rangeOf{u}$ to certainly belong to the window for $\rangeOf{s}$ we have
to have $\rangeOf{u}.\lbMarker{\tau} \geq \rangeOf{s}.\ubMarker{\tau} - N$.
Thus, we update \lbMarker{\texttt{c-rank}}\xspace to $\rangeOf{s}.\ubMarker{\tau} -N$ and evict from
\texttt{cert}\xspace all trees storing tuples for sort positions smaller than
$\rangeOf{s}.\ubMarker{\tau} -N$. Afterwards, we compute the bounds on the
aggregation result for $\rangeOf{s}$ using \texttt{cert}\xspace and \texttt{poss}\xspace (we will describe
this step in more detail in the following). Finally, we evict tuples from \texttt{poss}\xspace
(and, thus, also \lbMarker{\texttt{pagg}}\xspace and \ubMarker{\texttt{pagg}}\xspace) which cannot belong to any windows we will
close in the future. These are tuples which are certainly ordered before the
lowest possible position in the window of $\rangeOf{s}$, i.e., tuples
$\rangeOf{u}$ with $\rangeOf{u}.\ubMarker{\tau} < \rangeOf{s}.\lbMarker{\tau} -
N$ (see \Cref{fig:possible-and-certain-wind}). Evicting tuples from \texttt{poss}\xspace based
on the tuple for which we are currently computing the aggregation result bounds
is safe because we are emitting tuples in increasing order of
$\ubMarker{\tau}$, i.e., for all tuples $\rangeOf{u}$ emitted after
$\rangeOf{s}$ we have $\rangeOf{u}.\ubMarker{\tau} >
\rangeOf{s}.\ubMarker{\tau}$.
\revb{\Cref{fig:wind-alg-state-example} shows an example state of the algorithm when tuple $\rangeOf{s}$ is about to be emitted. Tuples fully included in the red region ($\rangeOf{t_2}$ and $\rangeOf{t_3}$) are currently in $\texttt{cert}\xspace[i]$ for sort positions certainly in the window for $\rangeOf{s}$. Tuples whose sort position ranges overlap with the green region are in the possible window (these tuples are stored in $\texttt{poss}\xspace$). Tuples like $\rangeOf{t_4}$, whose upper-bound position is higher than that of $\rangeOf{s}$, will be popped and processed after $\rangeOf{s}$.} Once all input tuples have been processed, we
have to close the windows for all tuples remaining in \texttt{openw}\xspace. This process is the same as emitting tuples before all inputs have been processed and, thus, is omitted from \Cref{alg:window_precOnePass}.
}
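To summarize the control flow, here is a compact Python sketch of the sweep described above for a window of the $N$ preceding rows. It reuses the simplified \texttt{RTuple} layout from the earlier sketch and leaves the bound computation abstract (a possible \texttt{comp\_bounds} is sketched further below); it is an illustration of the idea, not the implementation evaluated in this paper.
\begin{verbatim}
import heapq
from collections import defaultdict

def windowed_agg_one_pass(tuples, N, comp_bounds):
    """tuples must arrive sorted on tau_lb; returns (tuple, (lb, ub)) pairs."""
    openw, poss, cert = [], [], defaultdict(list)
    c_rank_lb, results = 0, []

    def close_ready_windows(current_tau_lb):
        nonlocal c_rank_lb
        while openw and openw[0][0] < current_tau_lb:
            _, _, s = heapq.heappop(openw)        # s's window is fully observed
            while c_rank_lb < s.tau_ub - N:       # evict stale certain positions
                cert.pop(c_rank_lb, None)
                c_rank_lb += 1
            results.append((s, comp_bounds(s, cert, poss, N)))
            while poss and poss[0][0] < s.tau_lb - N:  # evict stale possible tuples
                heapq.heappop(poss)

    for i, t in enumerate(tuples):
        heapq.heappush(openw, (t.tau_ub, i, t))   # t's window is still open
        if t.mult_lb > 0:                         # t certainly exists
            cert[t.tau_lb].append(t)
        close_ready_windows(t.tau_lb)
        heapq.heappush(poss, (t.tau_ub, i, t))    # t may belong to later windows

    close_ready_windows(float("inf"))             # flush remaining open windows
    return results
\end{verbatim}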
\ifnottechreport{
\revm{
\Cref{alg:window_precOnePass} uses function \texttt{compBounds}\xspace to compute the bounds on the aggregation function result for a tuple $\textbf{\sffamily \tup}$ using \texttt{cert}\xspace, \lbMarker{\texttt{pagg}}\xspace, and \ubMarker{\texttt{pagg}}\xspace, following the definition from \cref{sec:audb-win-agg-semantics}. First, we fetch from \texttt{cert}\xspace all tuples that are certainly in the window, based on the sort positions that certainly belong to the window for $\textbf{\sffamily \tup}$ ($[\textbf{\sffamily \tup}.\ubMarker{\tau} - N, \textbf{\sffamily \tup}.\lbMarker{\tau}]$), and aggregate their $A$ bounds. Afterwards, we use \lbMarker{\texttt{pagg}}\xspace and \ubMarker{\texttt{pagg}}\xspace to efficiently fetch up to $\certaincount{\mathbf{\rela}, \textbf{\sffamily \tup}}$ tuples possibly in the window for $\textbf{\sffamily \tup}$ to calculate the final bounds based on \textsf{max-k}\xspace and \textsf{min-k}\xspace. As mentioned before, the detailed algorithm and further explanations are presented in~\cite{techreport}.
}
}
\iftechreport{
\Cref{alg:window_precOnePass} uses function \texttt{compBounds}\xspace to compute the bounds on the aggregation function result for a tuple $\textbf{\sffamily \tup}$ using \texttt{cert}\xspace, \lbMarker{\texttt{pagg}}\xspace and \ubMarker{\texttt{pagg}}\xspace following the definition from \cref{sec:audb-win-agg-semantics}.
}
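As an illustration of how such a bound computation can look for \ensuremath{\mathbf{sum}}\xspace, the sketch below aggregates the $A$-bounds of the tuples certainly in the window and then adds the values of up to $\certaincount{\mathbf{\rela},\textbf{\sffamily \tup}}$ tuples possibly in the window. For simplicity it scans \texttt{poss} linearly instead of using the connected heaps; this is our own simplification, not the actual implementation.
\begin{verbatim}
def comp_bounds_sum(s, cert, poss, N):
    """Bounds on sum(A) over the window [-N, 0] of tuple s (simplified sketch)."""
    # tuples at sort positions certainly inside s's window
    certain = [u for p in range(s.tau_ub - N, s.tau_lb + 1)
                 for u in cert.get(p, [])]
    lb = sum(u.a_lb for u in certain)
    ub = sum(u.a_ub for u in certain)
    slots = max(0, (N + 1) - len(certain))        # room left in the window
    certain_ids = {id(u) for u in certain}
    # tuples possibly (but not already counted as certainly) in the window
    possible = [u for _, _, u in poss
                if id(u) not in certain_ids
                and u.tau_ub >= s.tau_lb - N and u.tau_lb <= s.tau_ub]
    neg = sorted(u.a_lb for u in possible if u.a_lb < 0)[:slots]
    pos = sorted((u.a_ub for u in possible if u.a_ub > 0), reverse=True)[:slots]
    return lb + sum(neg), ub + sum(pos)
\end{verbatim}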
\BG{openw is minheap on $\tau^{\uparrow}$ cert is maxheap on $\tau^{\downarrow}$ and poss is minheap on $\tau^{\uparrow}$}
\mypar{Complexity Analysis}
\revm{
\Cref{alg:window_precOnePass} first sorts the input in $O(n \log n)$ time using \Cref{alg:topkOnepass}, followed by a deterministic sort on $\lbMarker{\tau}$. Each tuple is inserted into \texttt{openw}\xspace, \texttt{poss}\xspace, and \texttt{cert}\xspace at most once and popped from \texttt{openw}\xspace exactly once. The size of the heaps maintained by the algorithm is at most $n$ at all times. To compute the aggregation function bounds, we have to look at the certain tuples in $\texttt{cert}\xspace[i]$ for at most $N+1$ sort positions $i$ (the size of the window $[-N,0]$) and at most $N+1$ tuples from $\texttt{poss}\xspace$, which can be accessed using the connected heaps in $O(N \cdot \log n)$ time. Thus, the overall runtime of the algorithm is $O(N \cdot n \cdot \log n)$.
}
\subsection{Incomplete K-Relations}
\iftechreport{
\subsubsection{Certain and Possible Annotations}
\label{sec:poss-worlds-cert}
For incomplete $\ensuremath{\mathcal{K}}\xspace$-relations, we define the certain and possible annotations of tuples as a generalization of certain and possible answers in classical incomplete databases. For these concepts to be well-defined we require that $\ensuremath{\mathcal{K}}\xspace$ is an l-semiring~\cite{DBLP:conf/icdt/KostylevB12} which means that the natural order forms a lattice. Most commonly considered semirings (e.g., sets, bags, most provenance semirings, \ldots) are l-semirings.
The \emph{certain annotation} of a tuple is the greatest lower bound (\emph{glb}) of its annotations across all possible worlds, while the \emph{possible annotation} is the least upper bound (\emph{lub}) of these annotations.
We use $\glbKof{\semK}$ (glb) and $\lubKof{\semK}$ (lub)
to denote the $\sqcap$ and $\sqcup$
operations for a semiring $\ensuremath{\mathcal{K}}\xspace$.
The certain (possible)
annotation ${\certainName}_{\semK}(\mathcal{D}, t)$ (${\possibleName}_{\semK}(\mathcal{D}, t)$) of a tuple $t$ in an incomplete $\ensuremath{\mathcal{K}}\xspace$-database $\mathcal{D}$ is defined as the glb (lub) over the annotations of tuple $t$ across all possible worlds of $\mathcal{D}$:
\begin{center}
\begin{align*}
{\certainName}_{\semK}(\mathcal{D},t) &\coloneqq \glbKof{\semK}(\{D(t) \mid D \in \mathcal{D} \})\\
{\possibleName}_{\semK}(\mathcal{D},t) &\coloneqq \lubKof{\semK}(\{D(t) \mid D \in \mathcal{D} \})
\end{align*}
\end{center}
Importantly, this coincides with the standard definition of certain and possible answers for set semantics ($\ensuremath{\mathbb{B}}\xspace$): the natural order of the set semiring $\ensuremath{\mathbb{B}}\xspace$ is $\bot \ordOf{\ensuremath{\mathbb{B}}\xspace} \top$, $k_1 \lubKof{\ensuremath{\mathbb{B}}\xspace} k_2 = k_1 \vee k_2$,
and $k_1 \glbKof{\ensuremath{\mathbb{B}}\xspace} k_2 = k_1 \wedge k_2$. That is, a tuple is certain (has certain annotation $\top$) if it exists (is annotated with $\top$) in every possible world and possible if it exists in at least one possible world (is annotated with $\top$ in one or more worlds). The natural order of $\ensuremath{\mathbb{N}}\xspace$ is the standard order of natural numbers. We get $\pwCertOf{\ensuremath{\mathbb{N}}\xspace} = \min$ and $\pwPossOf{\ensuremath{\mathbb{N}}\xspace} = \max$.
This coincides with
the definition of certain and possible multiplicity for bag semantics from~\cite{GL16,CG19,DBLP:conf/pods/ConsoleGLT20}.
}
\section{Related Work}
\label{sec:related-work}
\iftechreport{
We build on prior research in
incomplete and probabilistic databases, uncertain aggregation, uncertain top-k, uncertain sorting, and temporal databases.}
\ifnottechreport{
We build on prior research in
incomplete and probabilistic databases, uncertain aggregation, uncertain top-k and uncertain sorting.
}
\mypar{Probabilistic/Incomplete databases}
Certain answer semantics~\cite{DBLP:journals/jacm/ImielinskiL84,AK91,L16a,GL16,GL17,L79a} only returns answers that are guaranteed to be correct.
Computing certain answers is coNP-complete\xspace in data-complexity~\cite{AK91,DBLP:journals/jacm/ImielinskiL84}. However, under-approximations~\cite{R86,GL17,L16a,GL16,CG19,FH19} can be computed in \texttt{PTIME}\xspace.
AU-DBs\xspace~\cite{FH21} build on the selected-guess and lower bounds-based approach of~\cite{FH19}, adding an upper bound on possible answers and attribute-level uncertainty with ranges
to support aggregation.
MCDB~\cite{jampani2008mcdb} and Pip~\cite{5447879} sample from the set of possible worlds to generate expectations of possible outcomes, but can not generally obtain bounds on their estimates.
Queries over symbolic models for incomplete data like C-tables~\cite{DBLP:journals/jacm/ImielinskiL84} and
m-tables~\cite{sundarmurthy_et_al:LIPIcs:2017:7061} often have \texttt{PTIME}\xspace data complexity, but obtaining
certain answers from query results
is
intractable.
\mypar{Aggregation in Incomplete/Probabilistic Databases}
General solutions for non-windowed aggregation over uncertain data remain an open problem~\cite{DBLP:conf/pods/ConsoleGLT20}.
Due to the complexity of uncertain aggregation, most approaches focus on identifying tractable cases and producing lossy representations~\cite{5447879,DBLP:journals/tkde/MurthyIW11,DBLP:conf/icdt/AbiteboulCKNS10,DBLP:journals/tods/SolimanIC08,CC96,DBLP:conf/soda/JayramKV07,DBLP:journals/vldb/BurdickDJRV07,DBLP:conf/sigmod/YangWCK11, liang-20-frmdcanp}.
These result encodings are not closed (i.e., not useful for subsequent queries), and are also expensive to compute (often NP-hard\xspace).
Symbolic models~\cite{amsterdamer2011provenance,DBLP:journals/pvldb/FinkHO12,DBLP:journals/jiis/LechtenborgerSV02} that are closed under aggregation permit
\texttt{PTIME}\xspace data complexity, but extracting certain / possible answers is still intractable.
We proposed \abbrUAADBs~\cite{FH21} which are closed under $\ensuremath{\mathcal{RA}^{agg}}$ and achieve efficiency through approximation.
\mypar{Uncertain Top-k}
A key challenge in uncertain top-k ranking is defining a meaningful semantics.
The set of tuples certainly (resp., possibly) in the top-k may have fewer (more) than k tuples.
U-Top~\cite{4221738} picks the top-k set with the highest probability.
U-Rank~\cite{4221738} assigns to each rank the tuple which is most-likely to have this rank.
Global-Topk~\cite{4498380} first ranks tuples by their probability of being in the top-k and returns the k most likely tuples.
Probabilistic threshold top-k (PT-k)~\cite{10.1145/1376616.1376685} returns all tuples that have a probability of being in the top-k that exceeds a pre-defined threshold.
Expected rank~\cite{4812412} calculates the expected rank for each tuple across all possible worlds and picks the k tuples with the highest expected rank.
R\'{e} et al.~\cite{4221737} proposed a multi-simulation algorithm that stops as soon as the top-k answer can be guaranteed. Soliman et al.~\cite{10.1145/1386118.1386119} proposed a framework that integrates tuple retrieval, grouping, aggregation, uncertainty management, and ranking in a pipelined fashion.
\iftechreport{Li et al.~\cite{10.14778/1687627.1687685} proposed a unified ranking approach for top-k based on generating functions which use and/xor trees to reason about complex correlations.}
Each of these generalizations necessarily breaks some intuitions about top-k, producing more (or fewer) than k tuples, or producing results that are not the top-k in any world.
\mypar{Uncertain Order}
Amarilli et al.\ extend the relational model with a partial order to encode uncertainty in the sort order of a relation~\cite{AB17, AB19}.
For more general use cases where posets cannot represent all possible worlds, Amarilli et al.\ also develop a symbolic model of provenance~\cite{AB14} whose expressions encode possible orders.
Both approaches are limited to set semantics.
\iftechreport{
\mypar{Temporal Aggregation}
Temporal databases must reason about tuples associated with partially overlapping intervals.
For example, a window aggregate
may be recast as an interval self-join, where one table defines the set of windows and each of its tuples is joined with the tuples in the window.
We take inspiration from temporal databases for our own operator implementations.
\BG{If we need to save space, then we can limit the discussion of temporal aggregation.}
The first temporal aggregation algorithm was given in~\cite{tuma1993implementing}.
Moon et al.~\cite{839401} proposed a balanced tree algorithm for \ensuremath{\mathbf{count}}\xspace, \ensuremath{\mathbf{sum}}\xspace and \ensuremath{\mathbf{avg}}\xspace aggregates, and a divide-and-conquer algorithm for \ensuremath{\mathbf{min}}\xspace and \ensuremath{\mathbf{max}}\xspace.
Kline and Snodgrass proposed the \textit{aggregation tree}~\cite{380389}, an in-memory data structure that supports incremental computation of temporal aggregates.
Yang et al.~\cite{914813} proposed a materialized version called the \textit{SB-tree} that can be used as an index for incremental temporal aggregation computations.
The \textit{MVSB-tree}~\cite{10.1145/375551.375600} is an extension of the SB-tree that supports predicates in the aggregation query.
Piatov and Helmer~\cite{10.1007/978-3-319-64367-0_7} proposed a sweep-line-based approach that reduces the space needed to compute \ensuremath{\mathbf{min}}\xspace and \ensuremath{\mathbf{max}}\xspace aggregates over temporal data.
}
\subsection{K-Relations} \label{sec:data_provenance}
\needRewrite{
The generalization of incomplete databases we use here is based on \textbf{$\ensuremath{\mathcal{K}}\xspace$-relations}~\cite{Green:2007:PS:1265530.1265535}.
In this framework, relations are annotated with elements from the domain $K$ of a (commutative) semiring $\ensuremath{\mathcal{K}}\xspace=\tuple{K,\addsymbol_{\semK},\multsymb_{\semK},\onesymbol_{\semK},\zerosymbol_{\semK}}$, i.e., a mathematical structure with commutative and associative addition ($\addsymbol_{\semK}$) and product ($\multsymb_{\semK}$) operations where $\multsymb_{\semK}$ distributes over $\addsymbol_{\semK}$ and $k \multsymb_{\semK} \zerosymbol_{\semK} = \zerosymbol_{\semK}$ for all $k \in K$.
An $n$-ary $\ensuremath{\mathcal{K}}\xspace$-relation is a function that maps tuples
to elements from $K$.
Tuples that are not in the relation are annotated with $\zerosymbol_{\semK}$.
Only finitely many tuples may be mapped to an element other than $\zerosymbol_{\semK}$.
Since $\ensuremath{\mathcal{K}}\xspace$-relations are functions from tuples to annotations, it is customary to denote the annotation of a tuple $t$ in relation $R$ as $R(t)$. }
\ifnottechreport{
Operators of positive relational algebra ($\ensuremath{\mathcal{RA}^{+}}$) over $\ensuremath{\mathbb{N}}\xspace$-relations
combine input annotations using
$+$ and $\cdot$.
\begin{align*}
\textbf{Union: } & (R_1 \cup R_2)(t) = R_1(t) + R_2(t) \\
\textbf{Join: } & (R_1 \Join R_2)(t) = R_1(t[\mathrm{Sch}(R_1)]) \cdot R_2(t[\mathrm{Sch}(R_2)]) \\
\textbf{Projection: } & (\pi_U (R))(t) = \sum_{t=t'[U]}R(t')\\
\textbf{Selection: } & (\sigma_\theta(R))(t) = R(t) \cdot \theta(t)
\end{align*}
}
\iftechreport{
\mypar{Query Semantics}
Operators of the positive relational algebra ($\ensuremath{\mathcal{RA}^{+}}$) over $\ensuremath{\mathcal{K}}\xspace$-relations are defined by combining input annotations using operations $\addsymbol_{\semK}$ and $\multsymb_{\semK}$.
\begin{align*}
\textbf{Union: } & (R_1 \cup R_2)(t) = R_1(t) \addsymbol_{\semK} R_2(t) \\
\textbf{Join: } & (R_1 \Join R_2)(t) = R_1(t[\schemasymb(R_1)]) \multsymb_{\semK} R_2(t[\schemasymb(R_2)]) \\
\textbf{Projection: } & (\pi_U (R))(t) = \sum_{t=t'[U]}R(t')\\
\textbf{Selection: } & (\sigma_\theta(R))(t) = R(t) \multsymb_{\semK} \theta(t)
\end{align*}
}
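For illustration (our own encoding, not part of the formal development), these operators can be phrased directly over $\ensuremath{\mathbb{N}}\xspace$-relations represented as Python dictionaries that map value tuples to multiplicities; schemas are given as tuples of attribute names.
\begin{verbatim}
from collections import defaultdict

def union(r1, r2):
    out = defaultdict(int)
    for r in (r1, r2):
        for t, n in r.items():
            out[t] += n                       # annotations are added
    return dict(out)

def join(r1, sch1, r2, sch2):
    out = defaultdict(int)
    common = [a for a in sch1 if a in sch2]
    for t1, n1 in r1.items():
        for t2, n2 in r2.items():
            d1, d2 = dict(zip(sch1, t1)), dict(zip(sch2, t2))
            if all(d1[a] == d2[a] for a in common):
                merged = {**d1, **d2}
                t = tuple(merged[a] for a in dict.fromkeys(sch1 + sch2))
                out[t] += n1 * n2             # annotations are multiplied
    return dict(out)

def projection(r, sch, attrs):
    out = defaultdict(int)
    for t, n in r.items():
        d = dict(zip(sch, t))
        out[tuple(d[a] for a in attrs)] += n  # sum over tuples with the same image
    return dict(out)

def selection(r, theta):
    return {t: n for t, n in r.items() if theta(t)}   # theta(t) acts as 0 / 1
\end{verbatim}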
\iftechreport{
For simplicity, we assume in the definition above that tuples are of a compatible schema (e.g., $\schemasymb(R_1)$ for a union $R_1 \cup R_2$).
We use $\theta(t)$ to denote a function that returns $\onesymbol_{\semK}$ iff $\theta$ evaluates to true over tuple $t$ and $\zerosymbol_{\semK}$ otherwise.
}
\iftechreport{
A \textbf{homomorphism} is a mapping $h: \ensuremath{\mathcal{K}}\xspace \to \ensuremath{\mathcal{K}}\xspace'$ from a semiring $\ensuremath{\mathcal{K}}\xspace$ to a semiring $\ensuremath{\mathcal{K}}\xspace'$ that maps $\zeroOf{\ensuremath{\mathcal{K}}\xspace}$ and $\oneOf{\ensuremath{\mathcal{K}}\xspace}$ to their counterparts in $\ensuremath{\mathcal{K}}\xspace'$ and distributes over sum and product (e.g., $h(k \addsymbol_{\semK} k') = h(k) \addOf{\ensuremath{\mathcal{K}}\xspace'} h(k')$).
Any homomorphism $h$ can be lifted from semirings to $\ensuremath{\mathcal{K}}\xspace$-relations or $\ensuremath{\mathcal{K}}\xspace$-databases by applying $h$ to the annotation of every tuple $t$: $h(R)(t) = h(R(t))$. We will use the same symbol for a homomorphism and its lifted variants.
Any homomorphisms $h$ can be lifted from semirings to $\ensuremath{\mathcal{K}}\xspace$-relations or $\ensuremath{\mathcal{K}}\xspace$-databases by applying $h$ to the annotation of every tuple $t$: $h(R)(t) = h(R(t))$. We will use the same symbol for a homomorphism and its lifted variants.
Importantly, queries commute with semiring homomorphisms: $h(Q(D)) = Q(h(D))$.
}
We will make use of the so called \textit{natural order} $\ordersymbol_{\semK}$ for a semiring $\ensuremath{\mathcal{K}}\xspace$
which is the standard order $\leq$ of natural numbers for $\ensuremath{\mathbb{N}}\xspace$.
\iftechreport{
Formally, $k \ordersymbol_{\semK} k'$ if it is possible to obtain $k'$ by adding to $k$: $\exists k'': k \addsymbol_{\semK} k'' = k'$.
Semirings for which the natural order is a partial order are called \textit{naturally ordered}~\cite{Geerts:2010bz}.
\begin{align}
\forall k, k' \in K: \big(k \ordersymbol_{\semK} k'\big) \Leftrightarrow \big(\exists k'' \in K: k \addsymbol_{\semK} k'' = k'\big)
\end{align}
}
\section{AU-DB Sorting and Top-k Semantics}\label{sec:ranking}
We now develop a bound-preserving semantics for sorting and top-k queries over AU-DBs.
Recall that each tuple in an \abbrUAADB is annotated with a triple of multiplicities and that each (range-annotated) value is likewise a triple. Elements of a range-annotated value $\mathbf{c} = \uv{c_1}{c_2}{c_3}$ or multiplicity triple $\ut{n_1}{n_2}{n_3}$ are accessed as: $\lbMarker{\mathbf{c}} = c_1$, $\bgMarker{\mathbf{c}} = c_2$, and $\ubMarker{\mathbf{c}} = c_3$.
\revm{We use bold face to denote range-annotated tuples, relations, values, and databases.}
Both the uncertainty of a tuple's multiplicity and the uncertainty of the values of its order-by attributes create uncertainty in the tuple's position in the sort order. \revm{The former, because the number of duplicates of a tuple that appear in the sort order affects the positions of tuples that may be ordered after it wrt.\ the sort order; the latter, because it affects which tuples are ordered before the tuple wrt.\ the sort order.}
As mentioned before, a top-k query is a selection over the result of a sort operator \revm{which checks that the sort position of a tuple is less than or equal to $k$}.
A bound-preserving semantics for selection was already presented in~\cite{FH21}.
Thus, we focus on sorting and use the existing selection semantics
for top-k queries.
\mypar{Comparison of Uncertain Values}
Before introducing sorting over \abbrUAADBs, we first discuss \revm{the evaluation of} $\les{O}$ over tuples with uncertain values (recall that $\fles{O}$ is defined in terms of $\les{O}$).
Per \cite{FH21}, a Boolean expression over range-annotated values evaluates to a bounding triple (using the order $\bot < \top$ \revm{where $\bot$ denotes false and $\top$ denotes true}). \revm{The result of an evaluation of an expression $e$ is denoted as $\seval{e}{}$.}
For instance, $\seval{\uv{1}{1}{3} < \uv{2}{2}{2}} = \uv{\bot}{\top}{\top}$, because the expression may evaluate to false (e.g., if the first value is $3$ and the second value is $2$), evaluates to true in the selected-guess world, and may evaluate to true (if the $1^{st}$ value is $1$ and the $2^{nd}$ value is $2$). The extension of $<$ to comparison of tuples on attributes $O$ using $\les{O}$ is shown below.
\revm{For example, consider tuples $\rangeOf{t_1} = (\uv{1}{1}{3}, \uv{a}{a}{a})$ and $\rangeOf{t_2} = (\uv{2}{2}{2}, \uv{b}{b}{b})$ over schema $R(A,B)$. We have $\rangeOf{t_1} \les{A,B} \rangeOf{t_2} = \uv{\bot}{\top}{\top}$, because $\rangeOf{t_1}$ could be ordered before $\rangeOf{t_2}$ (if $\rangeOf{t_1}.A$ is 1), is ordered before $\rangeOf{t_2}$ in the selected-guess world ($1 < 2$), and may be ordered after $\rangeOf{t_2}$ (if $A$ is 3).}
\vspace{-3mm}
\begin{equation*}
\begin{aligned}
\lbMarker{ \seval{\textbf{\sffamily \tup} \les{O} \rangeOf{t'}}{} } &= \exists i \in \{ 1, \ldots, n \}: \forall j \in \{1, \ldots, i-1 \}: \\
&\hspace{10mm}\lbMarker{\seval{\textbf{\sffamily \tup}.A_j = \rangeOf{t'}.A_j}{}} \land \lbMarker{\seval{\textbf{\sffamily \tup}.A_i < \rangeOf{t'}.A_i}{}}\\
\bgMarker{ \seval{\textbf{\sffamily \tup} \les{O} \rangeOf{t'}}{} } &= \exists i \in \{ 1, \ldots, n \}: \forall j \in \{1, \ldots, i-1 \}: \\
&\hspace{10mm}\bgMarker{\seval{\textbf{\sffamily \tup}.A_j = \rangeOf{t'}.A_j}{}} \land \bgMarker{\seval{\textbf{\sffamily \tup}.A_i < \rangeOf{t'}.A_i}{}}\\
\ubMarker{ \seval{\textbf{\sffamily \tup} \les{O} \rangeOf{t'}}{} } &= \exists i \in \{ 1, \ldots, n \}: \forall j \in \{1, \ldots, i-1 \}: \\
&\hspace{10mm}\ubMarker{\seval{\textbf{\sffamily \tup}.A_j = \rangeOf{t'}.A_j}{}} \land \ubMarker{\seval{\textbf{\sffamily \tup}.A_i < \rangeOf{t'}.A_i}{}}\\
\end{aligned}
\end{equation*}
\vspace{-1mm}
\noindent
To simplify notation, we will use $\textbf{\sffamily \tup} \les{O} \rangeOf{t'}$ instead of $\seval{\textbf{\sffamily \tup} \les{O} \rangeOf{t'}}{}$.
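The following Python sketch (an illustration under our own encoding of range-annotated values as \texttt{(lb, sg, ub)} triples) mirrors this definition: \texttt{lt\_triple} and \texttt{eq\_triple} evaluate $<$ and $=$ on single values, and \texttt{less\_on} lifts them to the lexicographic comparison.
\begin{verbatim}
def lt_triple(c, d):
    # (certainly <, < in the selected-guess world, possibly <)
    return (c[2] < d[0], c[1] < d[1], c[0] < d[2])

def eq_triple(c, d):
    # (certainly =, = in the selected-guess world, possibly =)
    return (c[0] == c[2] == d[0] == d[2], c[1] == d[1],
            c[0] <= d[2] and d[0] <= c[2])

def less_on(t, tp):
    """Lexicographic t < t' over lists of (lb, sg, ub) value triples."""
    out = []
    for k in range(3):              # 0: certain, 1: selected guess, 2: possible
        hit = False
        for i in range(len(t)):
            if all(eq_triple(t[j], tp[j])[k] for j in range(i)) \
               and lt_triple(t[i], tp[i])[k]:
                hit = True
                break
        out.append(hit)
    return tuple(out)

# mirrors the example above: [1,1,3] < [2,2,2] evaluates to (false, true, true)
assert less_on([(1, 1, 3)], [(2, 2, 2)]) == (False, True, True)
\end{verbatim}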
\BG{DELETED (see whether the second part needs to be merged into the following sections)
each tuple attribute values
Recall that for ranking on \abbrUAADBs, each tuple may have a range of possible rankings. In order to get an over-estimation of the top-K result, computing top-K for \abbrUAADB may return more than K tuples. In this section we define top-K semantics for \abbrUAADBs (where sorting is a special case of top-K where $K=\mid \mathbf{\rela} \mid $) and propose a one pass algorithm that implement ranking and top-K on \abbrUAADB relational encodings.}
\mypar{Tuple Rank and Position}
To define windowed aggregation and sorting over \abbrUAADBs, we generalize
$\textsf{pos}$ using the uncertain version of $\les{O}$.
The lowest possible position of the first duplicate of a tuple $\textbf{\sffamily \tup}$ in an \abbrUAADB relation $\mathbf{\rela}$ is the total multiplicity of tuples $\rangeOf{t'}$ that certainly exist ($\lbMarker{\mathbf{\rela}(\rangeOf{t'})} > 0$) and are certainly smaller than $\textbf{\sffamily \tup}$ (i.e., $\lbMarker{\seval{\rangeOf{t'} \les{O} \textbf{\sffamily \tup}}{}} = \top$).
The selected-guess position of a tuple is the position of the tuple in the selected-guess world, and the greatest possible position of $\textbf{\sffamily \tup}$ is the total multiplicity of tuples that possibly exist ($\ubMarker{\mathbf{\rela}(\rangeOf{t'})} > 0$) and possibly precede $\textbf{\sffamily \tup}$ (i.e., $\ubMarker{\seval{\rangeOf{t'} \les{O} \textbf{\sffamily \tup}}{}} = \top$). \revm{The sort position of the $i^{th}$ duplicate (with the first duplicate being $0$) is computed by adding $i$ to the position bounds of the first duplicate.}
\vspace{-1mm}
\begin{align}
\lbMarker{\tpos{\mathbf{\rela}}{O}{\textbf{\sffamily \tup}}{i}} &= i + \textstyle\sum_{\lbMarker{ ( \revb{ \rangeOf{t'} \les{O} \textbf{\sffamily \tup} } ) }} \lbMarker{ \mathbf{\rela}(\rangeOf{t'}) } \label{eq:upos-lb}\\
\bgMarker{\tpos{\mathbf{\rela}}{O}{\textbf{\sffamily \tup}}{i}} &= i + \textstyle\sum_{ \bgMarker{ ( \revb{ \rangeOf{t'} \les{O} \textbf{\sffamily \tup} } ) }} \bgMarker{ \mathbf{\rela}(\rangeOf{t'}) } \label{eq:upos-sg}\\
\ubMarker{\tpos{\mathbf{\rela}}{O}{\textbf{\sffamily \tup}}{i}} &= i + \textstyle\sum_{ \ubMarker{ ( \revb{ \rangeOf{t'} \les{O} \textbf{\sffamily \tup} } ) }} \ubMarker{ \mathbf{\rela}(\rangeOf{t'}) }\label{eq:upos-ub}
\end{align}
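A direct (quadratic) reading of these equations in Python, reusing \texttt{less\_on} from the previous sketch; the representation of the relation as a list of (attribute-value map, multiplicity triple) pairs is an assumption of this illustration:
\begin{verbatim}
def position_bounds(rel, order, t, i):
    """Bounds on the sort position of the i-th duplicate of t.

    rel: list of (values, mult) pairs, where values maps each attribute to an
    (lb, sg, ub) triple and mult is an (lb, sg, ub) multiplicity triple.
    """
    lb = sg = ub = i
    tvals = [t[0][a] for a in order]
    for vals, mult in rel:
        cmp = less_on([vals[a] for a in order], tvals)
        if cmp[0]:            # certainly ordered before t
            lb += mult[0]
        if cmp[1]:            # ordered before t in the selected-guess world
            sg += mult[1]
        if cmp[2]:            # possibly ordered before t
            ub += mult[2]
    return lb, sg, ub
\end{verbatim}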
\vspace{-6mm}
\subsection{\abbrUAADB Sorting Semantics}
To define \abbrUAADB sorting, we split the \revm{possible duplicates} of a tuple and extend the resulting tuples with a range-annotated value denoting the tuple's (possible) positions in the sort order.
The certain multiplicity of the $i^{th}$ duplicate of a tuple $\textbf{\sffamily \tup}$ in the result is $1$ for duplicates that are guaranteed to exist ($i < \lbMarker{\mathbf{\rela}(\textbf{\sffamily \tup})}$) and $0$ otherwise. \revm{The selected-guess multiplicity is $1$ for duplicates that do not certainly exist (in some possible world there may be fewer than $i$ duplicates of the tuple), but are in the selected-guess world (the selected-guess world has $i$ or more duplicates of the tuple). Finally, the possible multiplicity is always $1$.}
\begin{Definition}[\abbrUAADB Sorting Operator]\label{def:au-db-sorting-operat}
Let $\mathbf{\rela}$ be an \abbrUAADB relation and $O \subseteq \mathrm{Sch}(\mathbf{\rela})$. The result of applying the sort operator $\rank{O}{\tau}$ to $\mathbf{\rela}$ is defined in \Cref{fig:uadbSortSemantics}.
\end{Definition}
\begin{figure}
\begin{align*}
\rank{O}{\tau}&(\mathbf{\rela})(\textbf{\sffamily \tup}) = \\ & \hspace{-15pt}
\begin{cases}
(1,1,1) & \text{\textbf{if}}\,\, \textbf{\sffamily \tup} = \parbox[t]{.5\textwidth}{$
\textbf{\sffamily \tup}' \circ \tpos{\mathbf{\rela}}{O}{\textbf{\sffamily \tup}'}{i}
\land i \in \left[0, \lbMarker{\mathbf{\rela}(\rangeOf{t'})} \right)$}\\
(0,1,1) & \text{\textbf{if}}\,\, \textbf{\sffamily \tup} =\parbox[t]{.5\textwidth}{$
\textbf{\sffamily \tup}' \circ \tpos{\mathbf{\rela}}{O}{\textbf{\sffamily \tup}'}{i}
\land i \in \left[\lbMarker{\mathbf{\rela}(\rangeOf{t'})}, \bgMarker{\mathbf{\rela}(\rangeOf{t'})} \right) $} \\
(0,0,1) & \text{\textbf{if}}\,\, \textbf{\sffamily \tup} = \parbox[t]{.5\textwidth}{$
\textbf{\sffamily \tup}' \circ \tpos{\mathbf{\rela}}{O}{\textbf{\sffamily \tup}'}{i}
\land \revb{ i \in \left[\bgMarker{\mathbf{\rela}(\rangeOf{t'})}, \ubMarker{\mathbf{\rela}(\rangeOf{t'})} \right) }
$}\\
(0,0,0) & \text{\textbf{otherwise}}\\
\end{cases}
\end{align*}
\vspace*{-5mm}
\setlength{\belowcaptionskip}{-15pt}
\caption{Range-annotated sort operator semantics.}
\label{fig:uadbSortSemantics}
\end{figure}
\revm{Every tuple in the result of sorting is constructed by extending an input tuple $\textbf{\sffamily \tup}'$ with the range of positions $\tpos{\mathbf{\rela}}{O}{\textbf{\sffamily \tup}'}{i}$ it may occupy wrt. the sort order.}
The definition decomposes $\textbf{\sffamily \tup}$ into a base tuple $\textbf{\sffamily \tup}'$, and a position triple for each
duplicate of $\textbf{\sffamily \tup}$ in $\mathbf{\rela}$.
We annotate all certain duplicates as certain $\ut{1}{1}{1}$, remaining selected-guess (but uncertain) duplicates as uncertain $\ut{0}{1}{1}$ and non-selected guess duplicates as possible $\ut{0}{0}{1}$.
\BG{The example below illustrates the semantics of $\ensuremath{\text{\upshape{\textsc{sort}}}}\xspace$ over an \abbrUAADB relation.}
\begin{Example}[\abbrUAADB Sorting]\label{ex:audb-ranking}
Consider the \abbrUAADB relation $\mathbf{\rela}$ shown on the left below with certain, selected guess and possible multiplicities from $\ensuremath{\mathbb{N}}\xspace^3$ assigned to each tuple. For values or multiplicities that are certain, we write only the certain value instead of the triple.
The result of sorting the relation on attributes $A,B$ using \abbrUAADB sorting semantics and storing the sort positions in column \texttt{pos} (\revm{$\rank{A,B}{pos}(\mathbf{\rela})$}) is shown below on the right. \revm{Observe how the $1^{st}$ input tuple $\rangeOf{t_1} = (1, \uv{1}{1}{3})$ was split into two result tuples occupying adjacent sort positions. The $3^{rd}$ input tuple $\rangeOf{t_3} = (\uv{1}{1}{2},2)$ could be the $1^{st}$ in sort order (if its $A$ value is $1$ and the $B$ values of the duplicates of $\rangeOf{t_1}$ are equal to $3$) or be at the $3^{rd}$ position if two duplicates of $\rangeOf{t_1}$ exist and either $A$ is $2$ or the $B$ values of $\rangeOf{t_1}$ are all $<3$.}
\noindent
\resizebox{1\linewidth}{!}{
\begin{minipage}[t]{1.09\linewidth}
\begin{minipage}{0.4\linewidth}
\begin{tabular}{c|cc}
\thead{A} & \thead{B} & $\ensuremath{\mathbb{N}}\xspace^3$ \\
\hline
1 & \uv{1}{1}{3} & (1,1,2)\\
\uv{2}{3}{3} & 15 & (0,1,1) \\
\uv{1}{1}{2} & 2 & (1,1,1)\\
\end{tabular}
\end{minipage}
%
\begin{minipage}{0.63\linewidth}
\centering
\begin{tabular}{c|c|cc}
\thead{A} & \thead{B} & \thead{pos} & $\ensuremath{\mathbb{N}}\xspace^3$ \\
\hline
1 & \uv{1}{1}{3} & \uv{0}{0}{1} & (1,1,1)\\
1 & \uv{1}{1}{3} & \uv{1}{1}{2} & (0,0,1)\\
\uv{1}{1}{2} & 2 & \uv{0}{1}{2} & (1,1,1)\\
\uv{2}{3}{3} & 15 & \uv{2}{2}{3} & (0,1,1)\\
\end{tabular}
\end{minipage}
\end{minipage}
}
\end{Example}
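To make the construction concrete, the sketch below (again an illustration under the list-based encoding used earlier, reusing \texttt{position\_bounds}) enumerates the sort result: every input tuple is split into one output tuple per possible duplicate, carrying its position triple and a multiplicity triple of $(1,1,1)$, $(0,1,1)$, or $(0,0,1)$.
\begin{verbatim}
def audb_sort(rel, order):
    """rel: list of (values, (n_lb, n_sg, n_ub)) pairs.

    Returns a list of (values, position_triple, multiplicity_triple)."""
    out = []
    for t in rel:
        n_lb, n_sg, n_ub = t[1]
        for i in range(n_ub):
            if i < n_lb:
                mult = (1, 1, 1)   # duplicate certainly exists
            elif i < n_sg:
                mult = (0, 1, 1)   # duplicate exists in the selected-guess world
            else:
                mult = (0, 0, 1)   # duplicate only possibly exists
            out.append((t[0], position_bounds(rel, order, t, i), mult))
    return out
\end{verbatim}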
\vspace{-3mm}
\subsection{Bound Preservation}
We now prove that our semantics for the sorting operator on AU-DB relations is bound preserving, i.e., given an AU-DB $\mathbf{\rela}$ that bounds an incomplete bag database $\mathcal{R}$, the result of a sort operator $\rank{O}{\tau}$ applied to $\mathbf{\rela}$ bounds the result of $\rank{O}{\tau}$ evaluated over $\mathcal{R}$.
\begin{Theorem}[Bound Preservation of Sorting]\label{theo:bound-preservation-o}
Given an \abbrUAADB relation $\mathbf{\rela}$ and an incomplete bag relation $\mathcal{R}$
such that $\mathcal{R} \sqsubset \mathbf{\rela}$, and $O \subseteq \mathrm{Sch}(\mathcal{R})$, we have:
\vspace{-2mm}
\[\rank{O}{\tau}(\mathcal{R}) \sqsubset \rank{O}{\tau}(\mathbf{\rela})\]
\end{Theorem}
\vspace{-2mm}
\detailedproof{
We prove the theorem by taking a tuple matching $\mathcal{TM}$ for each possible world $R$ in the input (that is guaranteed to exist, because $\mathcal{R} \sqsubset \mathbf{\rela}$) and construct a tuple matching $\mathcal{TM}'$ for the output of sorting based on which $\rank{O}{\tau}(\mathcal{R}) \sqsubset \rank{O}{\tau}(\mathbf{\rela})$ holds. In the proof we make use of the fact that the sort operator distributes the multiplicity of an input tuple $t$ to multiple output tuples which each are extensions of $t$ with a sort position, keeping all other attributes the same as in the input.}{
Since $\mathcal{R} \sqsubset \mathbf{\rela}$, for every possible world $R \in \mathcal{R}$, there has to exist a tuple matching $\mathcal{TM}$ (see~\Cref{sec:AU-model}) based on which this property holds. We will show that based on $\mathcal{TM}$ we can generate a tuple matching for $\rank{O}{\tau}(\mathbf{\rela})$ and $\rank{O}{\tau}(\mathcal{R})$. The existence of such a tuple matching for every $\rank{O}{\tau}(R) \in \rank{O}{\tau}(\mathcal{R})$ implies that $\rank{O}{\tau}(\mathcal{R}) \sqsubset \rank{O}{\tau}(\mathbf{\rela})$. Define $S(t)$ as the set of tuples smaller than $t$ in sort order. WLOG consider one tuple $\textbf{\sffamily \tup}$ with $\mathbf{\rela}(\textbf{\sffamily \tup}) > 0$ and let $t_1, \ldots, t_n$ be the tuples in $R$ for which $\mathcal{TM}(\textbf{\sffamily \tup},t_i) > 0$. Recall that the sort operator splits each input tuple $t_i$ with $R(t_i) = n$ into $n$ output tuples $t_{ij}$ with multiplicity $1$ each. Thus, the sum of the multiplicities of tuples $t_{ij}$ for $j \in [0,R(t_i) - 1]$ is equal to $R(t_i)$. Both the deterministic and the \abbrUAADB version of $\rank{}{}$ preserve all attribute values of input tuples and add another attribute storing the (range-annotated) sort position. We first prove that $\{G(t_i)\} \sqsubset G(\textbf{\sffamily \tup})$. For all instances $G(t_i)$ where $t_i \in t_1, \ldots, t_n$, we have $G(t_i) \sqsubset G(\textbf{\sffamily \tup})$ given $t_i \les{O} t_x \sqsubset \textbf{\sffamily \tup} \les{O} \textbf{\sffamily \tup}_x$ if $t_i \sqsubset \textbf{\sffamily \tup}$ and $t_x \sqsubset \textbf{\sffamily \tup}_x$. Let $\textbf{p}$ be the range-annotated sort position computed for $\textbf{\sffamily \tup}$ and $p_i$ the sort position computed for $t_i$. By the definition of range-annotated sorting, we have $p_i \sqsubset \textbf{p}$. So $t_i \circ p_i \sqsubseteq \textbf{\sffamily \tup} \circ \textbf{p}$ holds for all $i$. Thus, the output tuple matching $\mathcal{TM}(\textbf{\sffamily \tup} \circ \textbf{p}, t_i \circ p_i) = \mathcal{TM}(\textbf{\sffamily \tup}, t_i)$ exists.
}
\section{AU-DB Windowed Aggregation}\label{sec:audb-windowed-agg-sem}
\BG{For windowed aggregation, a tuple's membership in a partition may be uncertain, and the membership or multiplicity of a tuple in a window may be uncertain.
Furthermore, both types of uncertainty and the uncertainty of attribute values we are aggregating over may cause uncertainty of an aggregation function result.}
We now introduce a bound preserving semantics for windowed aggregation over \abbrUAADBs.
We have to account for three types of uncertainty:
(i) uncertain partition membership if a tuple may not exist ($\lbMarker{\mathbf{\rela}(\textbf{\sffamily \tup})} = 0$) or has uncertain partition attributes;
(ii) uncertain window membership if a tuple's partition membership, position, or multiplicity are uncertain; and
(iii) uncertain aggregation results from either preceding type of uncertainty, or if we are aggregating over uncertain values.
\revm{We compute the windowed aggregation result for each input tuple in multiple steps: (i) we first use \abbrUAADB sorting to split each input tuple into tuples whose multiplicities are at most one. This is necessary, because the aggregation function result may differ among the duplicates of a tuple (as is already the case for deterministic windowed aggregation); (ii) we then compute for each tuple $\textbf{\sffamily \tup}$ an \abbrUAADB relation $\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}$ storing the tuples that certainly and possibly belong to the partition for that tuple; (iii) we then compute an \abbrUAADB relation $\awindowshorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}$ encoding which tuples certainly and possibly belong to the tuple's window; (iv) since row-based windows contain a fixed number of tuples, we then determine from the tuples that possibly belong to the window, the subset that together with the tuples that certainly belong to the window (these tuples will be in the window in every possible world) minimizes / maximizes the aggregation function result. This then enables us to bound the aggregation result for each input tuple from below and above. For instance, for a row-based window $[-2,0]$, we know that the window for a tuple $\textbf{\sffamily \tup}$ will never contain more than 3 tuples. If we know that two tuples certainly belong to the window, then at most one of the possible tuples can be part of the window.}
\subsection{Windowed Aggregation Semantics}\label{sec:audb-win-agg-semantics}
As before, we omit windowed aggregation parameters ($G$,$O$,$l$,$u$,$f$,$A$) from the arguments of intermediate constructs and assume they are passed along where needed.
\mypar{Partitions}
\revm{We start by defining the \abbrUAADB relation $\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}$ which encodes the multiplicity with which a tuple $\rangeOf{t'}$ belongs to the partition for $\textbf{\sffamily \tup}$ based on the partition-by attributes $G$. This is achieved using selection, comparing a tuple's values in $G$ with the values $\textbf{\sffamily \tup}.G$ for equality. \abbrUAADB selection sets the certain, selected-guess, and possible multiplicity of a tuple to $0$ if the tuple possibly, in the selected-guess world, or certainly does not fulfill the selection condition, respectively.}
\vspace{-1mm}
$$
\revm{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}} = \seval{\sigma_{G = \textbf{\sffamily \tup}.G}(\mathbf{\rela})}{}}
$$
\vspace{-1mm}
\mypar{Certain and Possible Windows}
We need to be able to reason about which tuples (and with which multiplicity) belong certainly to the window for a tuple and which tuples (with which multiplicity) could possibly belong to a window.
For a tuple $\textbf{\sffamily \tup}$, \revm{we model the window's tuples as an \abbrUAADB relation $\awindowshorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}$ where a tuple's lower bound multiplicity encodes the number of duplicates of the tuple that are certainly in the window, the selected-guess multiplicity encodes the multiplicity of the tuple in the selected-guess world, and the upper bound encodes the largest possible multiplicity with which the tuple may occur in the window minus the certain multiplicity. In the remainder of this paper we omit the definition of the selected-guess multiplicity, because it can be computed using the deterministic semantics for windowed aggregation. For completeness, we include it in the extended version of this paper~\cite{techreport}.
We formally define $\awindowshorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}$ in \Cref{fig:certain-and-poss-row-based-audb-win}.} \revm{Recall that in the first step we used sort to split the duplicates of each tuple into tuples with multiplicity upper bounds of $1$. Thus, the windows we are constructing here are for tuples instead of for individual duplicates of a tuple.} A tuple $\rangeOf{t'}$ is guaranteed to belong to the window for a tuple $\textbf{\sffamily \tup}$ with a multiplicity of $n = \lbMarker{\mathbf{\rela}(\rangeOf{t'})}$ (the number of duplicates of the tuple that certainly exist)
if the tuple certainly belongs to the partition for $\textbf{\sffamily \tup}$ and all possible positions that these $n$ duplicates of the tuple occupy in the sort order are guaranteed to be contained in the smallest possible interval of sort positions contained in the bounds of the window for $\textbf{\sffamily \tup}$. Tuple $\rangeOf{t'}$ possibly belongs to the window
of $\textbf{\sffamily \tup}$ if any of its possible positions falls within the interval of sort positions possibly covered by the window
of $\textbf{\sffamily \tup}$. \revm{As an example, consider \Cref{fig:possible-and-certain-wind}, which shows the sort positions that certainly (red) and possibly (green) belong to tuple $\textbf{\sffamily \tup}$'s window (window bounds [-1,4]). For any window $[l,u]$, the sort positions certainly covered by the window start at the latest possible starting position of $\textbf{\sffamily \tup}$'s window, which is $\ubMarker{\textbf{\sffamily \tup}.\tau} + l$ ($6 + (-1) = 5$ in our example), and end at the earliest possible upper bound of the window, which is
$\lbMarker{\textbf{\sffamily \tup}.\tau} + u$ ($4 + 4 = 8$ in our example).
Furthermore, \Cref{fig:possible-and-certain-wind} shows the membership of three tuples in the window. Tuple $\rangeOf{t_1}$ certainly does not belong to the window, because none of its possible sort positions are in the window's set of possible sort positions; $\rangeOf{t_{2}}$ certainly belongs to the window, because all of its possible sort positions are in the set of positions certainly in the window. Finally, $\rangeOf{t_{3}}$ possibly belongs to the window, because some of its sort positions are in the set of positions possibly covered by the window.}
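These membership tests boil down to interval containment and overlap checks on position triples; the small sketch below (our own illustration) reproduces the situation of \Cref{fig:possible-and-certain-wind}.
\begin{verbatim}
def certainly_in_window(pos_u, pos_t, l, u):
    # all possible positions of the candidate lie within the positions
    # certainly covered by t's window
    lo, hi = pos_t[2] + l, pos_t[0] + u
    return lo <= pos_u[0] and pos_u[2] <= hi

def possibly_in_window(pos_u, pos_t, l, u):
    # some possible position of the candidate overlaps the positions
    # possibly covered by t's window
    lo, hi = pos_t[0] + l, pos_t[2] + u
    return pos_u[0] <= hi and lo <= pos_u[2]

# window spec [-1, 4]; t occupies sort positions 4..6 as in the figure
pos_t = (4, 5, 6)
assert not possibly_in_window((0, 1, 2), pos_t, -1, 4)   # t1: not in the window
assert certainly_in_window((5, 6, 7), pos_t, -1, 4)      # t2: certainly in
assert possibly_in_window((9, 10, 12), pos_t, -1, 4)     # t3: possibly in
\end{verbatim}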
\begin{figure}[t]
\centering
\begin{tikzpicture}
[
edg/.style={->,line width=0.4mm},
]
\def0.5{0.5}
\def0.4{0.4}
\def-6 * \yscaler{-6 * 0.4}
\draw[|->, thick] (-0,-6 * \yscaler) -- (13 * 0.5,-6 * \yscaler);
\foreach \x in {0,...,12}
\draw[thick] (\x * 0.5,0.3 * 0.4 + -6 * \yscaler) -- (\x * 0.5,-0.3 * 0.4 + -6 * \yscaler) node[below] {\x};
\draw[thick,blue,|-|] (4 * 0.5,-1 * 0.4 * 1) node[left,black]{} -- node[above,black]{\textbf{\sffamily \tup}} (6 * 0.5,-1 * 0.4 * 1)
;
\draw[edg,darkgreen,|-|] (3 * 0.5, -1 * 0.4 * 2) -- node[below,black]{\textcolor{darkgreen}{possible window}} (10 * 0.5, -1 * 0.4 * 2);
\draw[edg,red,|-|] (5 * 0.5, -1 * 0.4 * 2) -- node[above,black]{\textcolor{red}{certain window}} (8 * 0.5, -1 * 0.4 * 2);
\foreach \b/\s/\pos/\expl/t in {0/2/3/not in window/$\rangeOf{t_1}$,5/7/4/certainly in/$\rangeOf{t_2}$,9/12/4.3/possibly in/$\rangeOf{t_3}$}
\draw[thick,blue,|-|] (\b * 0.5,-1 * 0.4 * \pos) node[left,black]{} -- node[below,black]{\expl} node[above,black]{t} (\s * 0.5,-1 * 0.4 * \pos) node[right,black]{}
;
\end{tikzpicture}
\vspace{-10pt}
\setlength{\belowcaptionskip}{-15pt}
\caption{Possible and certain window membership of tuples in window of $\textbf{\sffamily \tup}$ based on their possible sort positions for window spec [-1,4].}\label{fig:possible-and-certain-wind}
\end{figure}
\begin{figure*}[t]
\centering
\begin{align*}
\lbMarker{\awindowshorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}(\rangeOf{t'})} & =
\begin{cases}
\revb{\lbMarker{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}(\rangeOf{t'})}} & \text{\textbf{if}}\,\, \revb{[\lbMarker{\tpos{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}}{O}{\rangeOf{t'}}{0}}, \ubMarker{\tpos{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}}{O}{\rangeOf{t'}}{0}}] \subseteq [\ubMarker{\tpos{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}}{O}{\textbf{\sffamily \tup}}{0}} +l, \lbMarker{\tpos{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}}{O}{\textbf{\sffamily \tup}}{0}} + u] } \\
0 &\text{\textbf{otherwise}} \\
\end{cases}\\
\ubMarker{\awindowshorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}(\rangeOf{t'})} & =
\begin{cases}
\revb{\ubMarker{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}(\rangeOf{t'})} - \lbMarker{\awindowshort{\mathbf{\rela}}{\textbf{\sffamily \tup}}{i}(\rangeOf{t'})}}
&\text{\textbf{if}}\,\, \revb{([\lbMarker{\tpos{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}}{O}{\rangeOf{t'}}{0}}, \ubMarker{\tpos{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}}{O}{\rangeOf{t'}}{0}}]
\cap [\lbMarker{\tpos{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}}{O}{\textbf{\sffamily \tup}}{0}} +l, \ubMarker{\tpos{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}}{O}{\textbf{\sffamily \tup}}{0}} + u]) \neq \emptyset } \\
0 &\text{\textbf{otherwise}}\\
\end{cases}
\iftechreport{
\\
\bgMarker{\awindowshorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}(\rangeOf{t'})} & =
\begin{cases}
\bgMarker{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}(\rangeOf{t'})} &\text{\textbf{if}}\,\, \bgMarker{\tpos{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}}{O}{\rangeOf{t'}}{0}}
\subseteq [\bgMarker{\tpos{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}}{O}{\textbf{\sffamily \tup}}{0}} +l, \bgMarker{\tpos{\shortpart{\textbf{\sffamily \tup}}{\mathbf{\rela}}}{O}{\textbf{\sffamily \tup}}{0}} + u] \\
0 &\text{\textbf{otherwise}}\\
\end{cases}
}
\end{align*}
\vspace*{-5mm}
\setlength{\belowcaptionskip}{-10pt}
\caption{Certain and possible windows for row-based windowed aggregation over \abbrUAADBs}
\label{fig:certain-and-poss-row-based-audb-win}
\end{figure*}
\mypar{Combining and Filtering Certain and Possible Windows}
As mentioned above, row-based windows contain a fixed maximal number of tuples based on their bounds. We use $\wsize{l}{u}$ to denote the size of a window with bounds $[l,u]$, i.e., $\wsize{l}{u} = (u - l) + 1$. This limit on the number of tuples in a window should be taken into account when computing bounds on the result of an aggregation function.
For that, we combine the tuples certainly in the window
\revm{(say there are $m$ such tuples) with a selected bag of up to $\wsize{l}{u} - m$ rows possibly} in the window that minimizes (for the lower aggregation result bound) or maximizes (for the upper aggregation result bound) the aggregation function result for an input tuple. Let us use $\certaincount{\mathbf{\rela}, \textbf{\sffamily \tup}}$ to denote $\wsize{l}{u} - m$:
\revm{
\[
\certaincount{\mathbf{\rela},\textbf{\sffamily \tup}} = \wsize{l}{u} - \sum_{\rangeOf{t'}}{\lbMarker{\awindowshorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}(\rangeOf{t'})}}
\]
}
Which bag of up to $\certaincount{\mathbf{\rela}, \textbf{\sffamily \tup}}$ tuples minimizes / maximizes the aggregation result depends on what aggregation function is applied.
For \ensuremath{\mathbf{sum}}\xspace, the up to \revm{$\certaincount{\mathbf{\rela}, \textbf{\sffamily \tup}}$} rows with the smallest negative values are included in the lower bound and the up to \revm{$\certaincount{\mathbf{\rela},\textbf{\sffamily \tup}}$} rows with the greatest positive values in the upper bound.
For \ensuremath{\mathbf{count}}\xspace, no additional rows are included for the lower bound and up to \revm{$\certaincount{\mathbf{\rela}, \textbf{\sffamily \tup}}$} rows for the upper bound.
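For example, for a window $[-2,0]$ (at most three tuples) with two tuples certainly in the window, only one possible tuple can still contribute. The sketch below (our own illustration) computes \ensuremath{\mathbf{sum}}\xspace bounds in this fashion from the $A$-bounds of the certain and possible window members.
\begin{verbatim}
def sum_bounds(certain_vals, possible_vals, l, u):
    """certain_vals / possible_vals: (lb, ub) pairs of A for tuples certainly /
    only possibly in the window; the window holds at most (u - l) + 1 tuples."""
    slots = max(0, (u - l) + 1 - len(certain_vals))
    lb = sum(v[0] for v in certain_vals) \
         + sum(sorted(v[0] for v in possible_vals if v[0] < 0)[:slots])
    ub = sum(v[1] for v in certain_vals) \
         + sum(sorted((v[1] for v in possible_vals if v[1] > 0),
                      reverse=True)[:slots])
    return lb, ub

# window [-2, 0]: two certain tuples leave room for one possible tuple
assert sum_bounds([(1, 1), (2, 2)], [(-5, -5), (4, 4)], -2, 0) == (-2, 7)
\end{verbatim}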
\revm{ For each tuple $\textbf{\sffamily \tup}$, we define \abbrUAADB relation $\restrwinishorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}$ where each tuple's lower/upper bound multiplicities encode the multiplicity of this tuple contributing to the lower and upper bound aggregation result, respectively.
We only show the definition for \ensuremath{\mathbf{sum}}\xspace; the definitions for other aggregation functions are similar. In the definition, we make use of $\lbMarker{\mathbf{\rela}}$ and $\ubMarker{\mathbf{\rela}}$:}
\revm{
\begin{align*}
\mathbf{\rela}^{\downarrow}(\textbf{\sffamily \tup}) &= \mathbf{\rela}(\textbf{\sffamily \tup})^{\downarrow} &
\mathbf{\rela}^{\uparrow}(\textbf{\sffamily \tup}) &= \mathbf{\rela}(\textbf{\sffamily \tup})^{\uparrow}
\end{align*}
}
\revm{
Note that $\mathbf{\rela}^{\downarrow}$ and $\mathbf{\rela}^{\uparrow}$ are bags ($\ensuremath{\mathbb{N}}\xspace$-relations) over range-annotated tuples.
Furthermore, we define $\winbotk{\mathbf{\rela}}{A}$ (and $\wintopk{\mathbf{\rela}}{A}$), which restrict $\mathbf{\rela}$ to the tuples with the smallest negative lower bounds (respectively, the largest positive upper bounds) on attribute $A$ that could contribute to the aggregation, keeping tuples with a total multiplicity of up to $\certaincount{\mathbf{\rela},\textbf{\sffamily \tup}}$. Note that the deterministic conditions / expressions in the definitions of $\winbotk{\mathbf{\rela}}{A}$ (and $\wintopk{\mathbf{\rela}}{A}$) are well-defined, because single values are extracted from all range-annotated values.
For \ensuremath{\mathbf{max}}\xspace (resp., \ensuremath{\mathbf{min}}\xspace) and similar idempotent aggregates, it suffices to know the greatest (resp., least) value possibly in the window.}
\vspace{-2mm}
\revm{
\begin{align*}
\lbMarker{\restrwinishorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}(\rangeOf{t'})} &= \lbMarker{\awindowshorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}(\rangeOf{t'})} +\winbotk{\awindowshorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}}{A}(\rangeOf{t'}) \\
\ubMarker{\restrwinishorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}(\rangeOf{t'})} &=
\lbMarker{\awindowshorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}(\rangeOf{t'})} +
\wintopk{\awindowshorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}}{A}(\rangeOf{t'})\\
\iftechreport{
\bgMarker{\restrwinishorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}(\rangeOf{t'})} &= \bgMarker{\awindowshorter{\mathbf{\rela}}{\textbf{\sffamily \tup}}(\rangeOf{t'})} \\
}
\winbotk{\mathbf{\rela}}{A} & = \sigma_{\tau<\certaincount{\mathbf{\rela},\textbf{\sffamily \tup}}}(\rank{\lbMarker{A}}{\tau}(\sigma_{\lbMarker{A}<0}(\lbMarker{\mathbf{\rela}}))) \\
\wintopk{\mathbf{\rela}}{A} & = \sigma_{\tau<\certaincount{\mathbf{\rela}, \textbf{\sffamily \tup}}}(\rank{- \ubMarker{A}}{\tau}(\sigma_{\ubMarker{A}>0}(\ubMarker{\mathbf{\rela}}))) \\
\end{align*}
}
\\[-8mm]
\mypar{Windowed Aggregation}
Using the filtered combined windows, we are ready to define row-based windowed aggregation over \abbrUAADBs.
\BG{In the definition we use $\bgMarker{\mathbf{\rela}}$ to denote the select-guess world encoded by $\mathbf{\rela}$, i.e., replacing each uncertain attribute value $\uv{c_1}{c_2}{c_3}$ with $c_2$ and tuple annotation $(n_1,n_2,n_3)$ with $n_2$.}
\revm{To compute aggregation results, we utilize the operation $\mysmbNAU{f}$ defined in \cite{FH21} for aggregation function $f$ that combines the range-annotated values of tuples in the aggregation attribute with the tuple's multiplicity bounds. For instance, for \ensuremath{\mathbf{sum}}\xspace, $\mysmbNAU{\ensuremath{\mathbf{sum}}\xspace}$ is multiplication, e.g., if a tuple with $A$ value $\uv{10}{20}{30}$ has multiplicity $\ut{1}{2}{3}$ it contributes $\uv{10}{40}{90}$ to the sum. Here, $\bigoplus$ denotes the application of the aggregation function over a set of elements (e.g., $\sum$ for \ensuremath{\mathbf{sum}}\xspace). Note that, as explained above, the purpose of $\flattern{\mathbf{\rela}}$ is to split a tuple with $n$ possible duplicates into $n$ tuples with a multiplicity of $1$. Furthermore, note that the bounds on the aggregation result may be the same for the $i^{th}$ and $j^{th}$ duplicate of a tuple. To deal with that we apply a final projection to merge such duplicate result tuples.
}
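The per-tuple contribution for \ensuremath{\mathbf{sum}}\xspace from the $\uv{10}{20}{30}$ example can be computed as below. The sketch is our reading of $\mysmbNAU{\ensuremath{\mathbf{sum}}\xspace}$: each value bound is paired with the multiplicity bound that extremizes the product, which reduces to component-wise multiplication for non-negative values.
\begin{verbatim}
def sum_contribution(val, mult):
    """Contribution of one tuple to the sum bounds; val and mult are
    (lb, sg, ub) triples and multiplicities are non-negative."""
    lb = min(val[0] * mult[0], val[0] * mult[2])
    ub = max(val[2] * mult[0], val[2] * mult[2])
    return (lb, val[1] * mult[1], ub)

assert sum_contribution((10, 20, 30), (1, 2, 3)) == (10, 40, 90)
\end{verbatim}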
\begin{Definition}[Row-based Windowed Aggregation]\label{def:row-based-windowed-a}
Let $\mathbf{\rela}$ be an \abbrUAADB relation. We define the window operator $\winrow{\aggf(\agga)}{X}{\gbAttrs}{\sortattrs}{l}{u}$ as:
\revm{\begin{align*}
\winrow{\aggf(\agga)}{X}{\gbAttrs}{\sortattrs}{l}{u}(\mathbf{\rela})(\textbf{\sffamily \tup}) &= \pi_{\mathrm{Sch}(\mathbf{\rela}),X}(\mathcal{ROW}(\mathbf{\rela}))\\
\mathcal{ROW}(\mathbf{\rela})(\textbf{\sffamily \tup} \circ \textendaggishorter{\textbf{\sffamily \tup}}) &= \flattern{\mathbf{\rela}}(\textbf{\sffamily \tup}) \\
\textendaggishorter{\textbf{\sffamily \tup}} &= \bigoplus_{\rangeOf{t'}} \rangeOf{t'}.A \mysmbNAU{f} \restrwinishorter{\flattern{\mathbf{\rela}}}{\textbf{\sffamily \tup}}(\rangeOf{t'}) \\
\flattern{\mathbf{\rela}} & = \pi_{\mathrm{Sch}(\mathbf{\rela}),\tau_{id}}(\rank{\mathrm{Sch}(\rangeOf{R})}{\tau_{id}}(\mathbf{\rela}))
\end{align*}}
\end{Definition}
\vspace{-5mm}
\begin{Example}[\abbrUAADB Windowed Aggregation]\label{ex:audbwindow}
Consider the \abbrUAADB relation $\mathbf{\rela}$ shown below and query $\winrow{sum(C)}{SumC}{A}{B}{-1}{0}(\mathbf{\rela})$, i.e.,
windowed aggregation partitioning by $A$, ordering on $B$, and computing $sum(C)$ over windows including $1$ preceding and the current row. \revm{For convenience we show an identifier for each tuple on the left. As mentioned above, we first expand each tuple with a possible multiplicity larger than one using sorting. Consider tuple $\rangeOf{t_3}$. Both $\rangeOf{t_1}$ and $\rangeOf{t_2}$ may belong to the same partition as $\rangeOf{t_3}$ as their $A$ value ranges overlap. There is no tuple that certainly belongs to the same partition as $\rangeOf{t_3}$. Thus, only tuple $\rangeOf{t_{3}}$ itself will certainly belong to the window. To compute the bounds on the aggregation result, we first determine which tuples (in the expansion created through sorting) may belong to the window for $\rangeOf{t_{3}}$. These are the two tuples corresponding to the duplicates of $\rangeOf{t_1}$, because these tuples may belong to the partition for $\rangeOf{t_{3}}$ and their possible sort positions ($\uv{0}{0}{1}$ and $\uv{1}{1}{2}$) overlap with the sort positions possibly covered by the window for $\rangeOf{t_{3}}$ ($\uv{0}{1}{2}$). Since the size of the window is 2 tuples, the bounds on the sum are computed using the lower / upper bound on the $C$ value of $\rangeOf{t_{3}}$ ($\uv{2}{4}{5}$) and no additional tuple from the possible window (because the $C$ value of $\rangeOf{t_{1}}$ is positive) for the lower bound, and the largest possible $C$ value of one copy (we can only fit one additional tuple into the window) of $\rangeOf{t_1}$ ($7$) for the upper bound. Thus, we get the aggregation result $\uv{2}{11}{11}$ as shown below.
}
\begin{minipage}{0.99\linewidth}
\centering
\scalebox{0.8}{
\begin{tabular}{cc|c|cc}
&\thead{A} & \thead{B} & \thead{C} & $\ensuremath{\mathbb{N}}\xspace^3$ \\
\cline{2-4}
$\rangeOf{t_1}$& 1 & \uv{1}{1}{3} & 7 & (1,1,2)\\
$\rangeOf{t_2}$& \uv{2}{3}{3} & 15 & 4 & (0,1,1) \\
$\rangeOf{t_3}$& \uv{1}{1}{2} & 2 & \uv{2}{4}{5} & 1\\
\end{tabular}
}
\end{minipage}
\begin{minipage}{0.99\linewidth}
\centering
\scalebox{0.8}{
\begin{tabular}{cc|c|c|cc}
&\thead{A} & \thead{B} & \thead{C} & \thead{SumC} & $\ensuremath{\mathbb{N}}\xspace^3$ \\
\cline{2-5}
$\rangeOf{r_1}$& 1 & \uv{1}{1}{3} & 7 & \uv{7}{7}{14} & 1\\
$\rangeOf{r_2}$& 1 & \uv{1}{1}{3} & 7 & \uv{7}{7}{14} & (0,0,1)\\
$\rangeOf{r_3}$& \uv{1}{1}{2} & 2 & \uv{2}{4}{5} & \uv{2}{11}{11} & 1\\
$\rangeOf{r_4}$& \uv{2}{3}{3} & 15 & 4 & \uv{4}{4}{9} & (0,1,1)\\
\end{tabular}
}
\end{minipage}
\end{Example}
\subsection{Bound Preservation}\label{sec:audb-windowed-agg-bound-preservation}
We now prove this semantics for group-based and row-based windowed aggregation over \abbrUAADBs to be bound preserving.
\begin{Theorem}[Bound Preservation for Windowed Aggregation]\label{theo:bound-preservation-f}
Given an \abbrUAADB relation $\mathbf{\rela}$ and an incomplete bag relation $\mathcal{R}$
such that $\mathcal{R} \sqsubset \mathbf{\rela}$, and $O \subseteq \mathrm{Sch}(\mathcal{R})$, for any
row-based windowed aggregation $\winrow{\aggf(\agga)}{X}{\gbAttrs}{\sortattrs}{l}{u}$, we have:
\begin{align*}
\winrow{\aggf(\agga)}{X}{\gbAttrs}{\sortattrs}{l}{u}(\mathcal{R}) &\sqsubset \winrow{\aggf(\agga)}{X}{\gbAttrs}{\sortattrs}{l}{u}(\mathbf{\rela})
\end{align*}\BG{If we need a shorter version, we can replace this with ``Group-based and row-based windowed aggregation over \abbrUAADBs is bound preserving.}
\end{Theorem}
\detailedproof{ %
As in the proof for sorting over \abbrUAADBs, we consider WLOG one of the possible worlds $R \in \mathcal{R}$ and a tuple matching $\mathcal{TM}$ based on which $\mathbf{\rela}$ bounds $R$. We then construct a tuple matching $\mathcal{TM}'$ for the output of windowed aggregation. In the proof, we utilize the fact that windowed aggregation produces one output tuple for each input tuple that has the same multiplicity as the input tuple and extends the input tuple with the aggregation result for the tuple's window. Thus, we only need to show that the bounds on the aggregation function result bound the values in the result for the possible world $R$ and that tuples with multiplicity $n$ are split into $n$ output tuples with multiplicity $1$. We show that for each of these output tuples, the corresponding range-annotated output tuples bound these tuples' aggregation results.}{ %
For a range-annotated tuple $\textbf{\sffamily \tup} \in \mathbf{\rela}$, define the set of tuples in the possible window of $\textbf{\sffamily \tup}$ as $ W_\textbf{\sffamily \tup} = \{ \textbf{\sffamily \tup}' \mid \aurwindowshort{\mathbf{\rela}}{\textbf{\sffamily \tup}}{i}(\rangeOf{t'})+\alrwindowshort{\mathbf{\rela}}{\textbf{\sffamily \tup}}{i}(\rangeOf{t'}) > 0 \}$. We first prove that $\bigcup_{\mathcal{TM}(\textbf{\sffamily \tup}, t_i) > 0} W_{t_i} \sqsubset W_\textbf{\sffamily \tup}$. By the definition of $\textsf{pos}$ we know that $\textsf{pos}(t_i) \subseteq \textsf{pos}(\textbf{\sffamily \tup})$. Thus, for $\mathcal{TM}(\textbf{\sffamily \tup}_1, t_{1i}) > 0$ and $\mathcal{TM}(\textbf{\sffamily \tup}_2, t_{2i}) > 0$ we have $W_{\textbf{\sffamily \tup}_1}(\textbf{\sffamily \tup}_2) \geq W_{t_{1i}}(t_{2i})$, and therefore $\bigcup_{\mathcal{TM}(\textbf{\sffamily \tup}, t_i) > 0} W_{t_i} \sqsubset W_\textbf{\sffamily \tup}$. Hence, $f(A)$ computed over the top $u$ tuples of $W_\textbf{\sffamily \tup}$ is greater than or equal to $f(A)$ computed over the top $u$ tuples in any possible world. Consider $\texttt{sum}$ and the computation of the lower bound, and assume $\mathcal{TM}(\textbf{\sffamily \tup}, t) > 0$. If $F < \lbMarker{\mathbb{F}}$, then there must exist $W_{t} > 0$ s.t. $\forall_{W_{\textbf{\sffamily \tup}} >0}:t_x.A < \lbMarker{\textbf{\sffamily \tup}.A}$, which implies $\bigcup_{\mathcal{TM}(\textbf{\sffamily \tup}, t) = 0}$, contradicting our assumption. So $F \geq \lbMarker{\mathbb{F}}$. Symmetrically, we can prove that $F \leq \ubMarker{\mathbb{F}}$, i.e., $\mathbb{F}$ bounds $F$, and thus the tuple matching $\mathcal{TM}(\textbf{\sffamily \tup} \circ \mathbb{F}, t_i \circ F)=\mathcal{TM}(\textbf{\sffamily \tup}, t)$ exists. Other aggregation functions can be proved in a similar fashion.
}
\section{SQL-based Implementation}
\label{sec:sql-based-operator}
We now present a rewrite-based implementation of the bound-preserving sorting and windowed aggregation semantics presented above as relational algebra expressions.
Specifically, we define a function \rewrUAA{Q} that translates a query into an equivalent query that operates over a relational encoding of \abbrUAADBs.
Using the encoding of \abbrUAADBs from~\cite{FH21}, each uncertain attribute is encoded as a triple of attributes storing the attribute's lower bound, selected-guess value, and upper bound. Furthermore, three additional attributes store the triple of row multiplicities $\uv{\lbMarker{\textbf{\sffamily \tup}.\#}}{\textbf{\sffamily \tup}.\#}{\ubMarker{\textbf{\sffamily \tup}.\#}}$.
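For illustration, a two-attribute \abbrUAADB relation could be stored in this encoding roughly as follows (a sketch; the table and column names are ours and the names used by the actual implementation may differ):
\begin{lstlisting}
-- Each uncertain attribute becomes three columns (lower bound,
-- selected guess, upper bound); three more columns encode the
-- row multiplicity triple.
CREATE TABLE R_enc (
  A_lb numeric, A_bg numeric, A_ub numeric,
  B_lb numeric, B_bg numeric, B_ub numeric,
  mult_lb int,  mult_bg int,  mult_ub int
);
\end{lstlisting}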
\newcommand{\textsf{isend}}{\textsf{isend}}
\newcommand{\query_{lower}}{Q_{lower}}
\newcommand{\query_{upper}}{Q_{upper}}
\newcommand{\query_{sg}}{Q_{sg}}
\newcommand{\query_{bounds}}{Q_{bounds}}
\begin{figure}
\ifnottechreport{
{\small
\begin{align*}
\rewrUAA{\rank{O}{\tau}(Q)} & \coloneqq \gamma_{\mathrm{Sch}(\rewrUAA{Q}),e_{pos}}(\query_{bounds})
\\
%
%
e_{pos} & \coloneqq \ensuremath{\mathbf{sum}}\xspace(\lbMarker{pos}) \rightarrow \lbMarker{\tau}, \ensuremath{\mathbf{sum}}\xspace(\ubMarker{pos}) \rightarrow \ubMarker{\tau}
\\
%
%
\query_{bounds} & \coloneqq \omega^{[-\infty,0]}_{\lbMarker{e} \rightarrow \lbMarker{pos}, \ubMarker{e} \rightarrow \ubMarker{pos}; \emptyset; pt}(\query_{lower} \cup \query_{upper})
\\
%
%
\lbMarker{e} & \coloneqq \ifte{\textsf{isend}=0}{\ensuremath{\mathbf{sum}}\xspace(\textsf{isend} \cdot \lbMarker{\textbf{\sffamily \tup}.\#})}{0}
\\
\ubMarker{e} & \coloneqq \ifte{\textsf{isend}=1}{\ensuremath{\mathbf{sum}}\xspace((1-\textsf{isend}) \cdot \ubMarker{\textbf{\sffamily \tup}.\#})}{0}
\\
\query_{lower} & \coloneqq \pi_{\mathrm{Sch}(\rewrUAA{Q}), \lbMarker{O} \rightarrow pt, 0 \rightarrow \textsf{isend}}(\rewrUAA{Q})
\\
\query_{upper} & \coloneqq \pi_{\mathrm{Sch}(\rewrUAA{Q}), \ubMarker{O} \rightarrow pt, 1 \rightarrow \textsf{isend}}(\rewrUAA{Q})
\end{align*}
}\\[-4mm]
} %
\iftechreport{
{\small
\begin{align*}
\rewrUAA{\rank{O}{\tau}(Q)} & \coloneqq \gamma_{\mathrm{Sch}(\rewrUAA{Q}),e_{pos}}(\query_{bounds})
\\
%
%
e_{pos} & \coloneqq \ensuremath{\mathbf{sum}}\xspace(\lbMarker{pos}) \rightarrow \lbMarker{\tau}, \ensuremath{\mathbf{sum}}\xspace(\bgMarker{pos}) \rightarrow \bgMarker{\tau}, \ensuremath{\mathbf{sum}}\xspace(\ubMarker{pos}) \rightarrow \ubMarker{\tau}
\\
%
%
\query_{bounds} & \coloneqq \omega^{[-\infty,0]}_{\lbMarker{e} \rightarrow \lbMarker{pos}, \bgMarker{e} \rightarrow \bgMarker{pos}, \ubMarker{e} \rightarrow \ubMarker{pos}; \emptyset; pt}(\query_{lower} \cup \query_{upper} \cup \query_{sg})
\\
%
%
\lbMarker{e} & \coloneqq \ifte{\textsf{isend}=0}{\ensuremath{\mathbf{sum}}\xspace(\textsf{isend} \cdot \lbMarker{\textbf{\sffamily \tup}.\#})}{0}
\\
\bgMarker{e} & \coloneqq \ifte{\textsf{isend}=-1}{\ensuremath{\mathbf{sum}}\xspace(\bgMarker{\textbf{\sffamily \tup}.\#})}{0}
\\
\ubMarker{e} & \coloneqq \ifte{\textsf{isend}=1}{\ensuremath{\mathbf{sum}}\xspace((1-\textsf{isend}) \cdot \ubMarker{\textbf{\sffamily \tup}.\#})}{0}
\\
\query_{lower} & \coloneqq \pi_{\mathrm{Sch}(\rewrUAA{Q}), \lbMarker{O} \rightarrow pt, 0 \rightarrow \textsf{isend}}(\rewrUAA{Q})
\\
\query_{sg} &\coloneqq \pi_{\mathrm{Sch}(\rewrUAA{Q}), \bgMarker{O} \rightarrow pt, -1 \rightarrow \textsf{isend}}(\rewrUAA{Q})
\\
\query_{upper} & \coloneqq \pi_{\mathrm{Sch}(\rewrUAA{Q}), \ubMarker{O} \rightarrow pt, 1 \rightarrow \textsf{isend}}(\rewrUAA{Q})
\end{align*}
}
}
\caption{Rewriting the sorting operator; the selected-guess position computation is omitted for clarity.}
\label{fig:orderByRewrite}
\end{figure}
\subsection{Rewriting Sorting Queries}\label{sec:sql-based-order}
The query rewrite rule for our ordering operator is shown in \Cref{fig:orderByRewrite}:
This approach creates two copies of the input relation: $\query_{lower}$ containing the lower bounds of the sort attributes, and $\query_{upper}$ with the upper bounds.
These tuples represent endpoints of value ranges for $O$, and so we refer to the tuples of the former as start tuples, and the latter as end tuples.
The lower bound on the tuple's position wrt. $\fles{O}$ is the number of tuples that certainly precede it: For a given start tuple, this is the total certain multiplicity ($\lbMarker{\textbf{\sffamily \tup}.\#}$) of end tuples that appear before it.
The upper bound is computed similarly from the total possible multiplicity ($\ubMarker{\textbf{\sffamily \tup}.\#}$) of start tuples that precede an end tuple.
$\query_{bounds}$ computes these values for each start and end tuple using windowed aggregation to find all end / start tuples that precede a point and then sums up their certain / possible multiplicities. Note that we use windowed aggregation with more than one aggregation function here, which can be expressed as two windowed aggregation operators in our formalism. The resulting start tuples store lower bounds (resp., upper bounds for end tuples). The final rewrite is obtained by merging the start and end tuples back together using a group-by aggregate.
The selected-guess position, not shown above, is computed analogously using a second window specification as part of $\query_{bounds}$ (see \cite{techreport} for the full version).
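To make the rewrite more concrete, the following SQL sketch applies it to a relation encoded as above with a single sort attribute \lstinline!O! (stored as \lstinline!O_lb!, \lstinline!O_bg!, \lstinline!O_ub!) and the multiplicity columns \lstinline!mult_lb!, \lstinline!mult_bg!, \lstinline!mult_ub!; all names are ours and, as in \Cref{fig:orderByRewrite}, the selected-guess position is omitted:
\begin{lstlisting}
WITH endpoints AS (      -- start (isend=0) and end (isend=1) tuples
  SELECT O_lb, O_bg, O_ub, mult_lb, mult_ub,
         O_lb AS pt, 0 AS isend FROM R_enc
  UNION ALL
  SELECT O_lb, O_bg, O_ub, mult_lb, mult_ub,
         O_ub AS pt, 1 AS isend FROM R_enc
),
bounds AS (              -- count preceding end / start tuples
  SELECT *,
         CASE WHEN isend = 0
              THEN sum(isend * mult_lb)       OVER w
              ELSE 0 END AS pos_lb,
         CASE WHEN isend = 1
              THEN sum((1 - isend) * mult_ub) OVER w
              ELSE 0 END AS pos_ub
  FROM endpoints
  WINDOW w AS (ORDER BY pt ROWS BETWEEN UNBOUNDED PRECEDING
                                    AND CURRENT ROW)
)
SELECT O_lb, O_bg, O_ub, mult_lb, mult_ub,  -- merge start/end tuples
       sum(pos_lb) AS tau_lb, sum(pos_ub) AS tau_ub
FROM bounds
GROUP BY O_lb, O_bg, O_ub, mult_lb, mult_ub;
\end{lstlisting}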
\subsection{Rewriting Windowed Aggregation Queries}
\label{subsec:sql-based-window}
\newcommand{\ensuremath{\query_{\lbMarker{pos}}}}{\ensuremath{Q_{\lbMarker{pos}}}}
\newcommand{\ensuremath{\query_{\ubMarker{pos}}}}{\ensuremath{Q_{\ubMarker{pos}}}}
\newcommand{\ensuremath{\query_{bnds}}}{\ensuremath{Q_{bnds}}}
\newcommand{\ensuremath{\query_{endpoints}}}{\ensuremath{Q_{endpoints}}}
\newcommand{\ensuremath{\query_{part}}}{\ensuremath{Q_{part}}}
\newcommand{\ensuremath{\query_{pos}}}{\ensuremath{Q_{pos}}}
\newcommand{\ensuremath{\query_{withselfpos}}}{\ensuremath{Q_{withselfpos}}}
\newcommand{\ensuremath{\query_{selfpos}}}{\ensuremath{Q_{selfpos}}}
\newcommand{\ensuremath{\query_{winposs}}}{\ensuremath{Q_{winposs}}}
\newcommand{\ensuremath{\query_{markcert}}}{\ensuremath{Q_{markcert}}}
\newcommand{\ensuremath{\query_{agglower}}}{\ensuremath{Q_{agglower}}}
\newcommand{\ensuremath{\query_{aggupper}}}{\ensuremath{Q_{aggupper}}}
\newcommand{\ensuremath{\query_{inwindow}}}{\ensuremath{Q_{inwindow}}}
\newcommand{\ensuremath{\query_{aggbnds}}}{\ensuremath{Q_{aggbnds}}}
\newcommand{\ensuremath{selfpos}}{\ensuremath{selfpos}}
\newcommand{\ensuremath{e_{isposs_w}}}{\ensuremath{e_{isposs_w}}}
\newcommand{\ensuremath{e_{iscert_w}}}{\ensuremath{e_{iscert_w}}}
\newcommand{\ensuremath{e_{iscert_p}}}{\ensuremath{e_{iscert_p}}}
\newcommand{\ensuremath{e_{iscert}}}{\ensuremath{e_{iscert}}}
\newcommand{\winrow{\aggsum(A)}{X}{\gbAttrs}{\sortattrs}{l}{u}}{\winrow{\ensuremath{\mathbf{sum}}\xspace(A)}{X}{G}{O}{l}{u}}
\begin{figure}
{\small
\begin{align*}
\hspace*{-7mm}
\rewrUAA{\winrow{\aggsum(A)}{X}{\gbAttrs}{\sortattrs}{l}{u}(Q)} \coloneqq &\;
\pi_{\mathrm{Sch}(\ensuremath{\query_{aggbnds}}),X}(\\ &\hspace{5mm}\ensuremath{\query_{aggbnds}} \Join_{id_1=id} \winrow{\aggsum(A)}{X}{\gbAttrs}{\sortattrs}{l}{u}(Q))
\end{align*}
\begin{align*}
\ensuremath{\query_{aggbnds}}
\coloneqq &\; \gamma_{\mathrm{Sch}(\rewrUAA{Q});\,\ensuremath{\mathbf{sum}}\xspace(e_{lb}) \rightarrow \lbMarker{X},\ensuremath{\mathbf{sum}}\xspace(e_{ub}) \rightarrow \ubMarker{X}}(\ensuremath{\query_{inwindow}})
\\
e_{lb} \coloneqq &\; {\bf if}\,{
\lbMarker{pos}<l+u \wedge (iscert \vee \lbMarker{A}<0)
}\\&\;{\bf then}\,{
(\ifte{\lbMarker{A} < 0}{ \lbMarker{A} \cdot \ubMarker{\textbf{\sffamily \tup}.\#_2}}{ \lbMarker{A} \cdot \lbMarker{\textbf{\sffamily \tup}.\#_2}})
}\,{\bf else}\,{0}
\\
e_{ub} \coloneqq &\; {\bf if}\,{
\ubMarker{pos}<l+u \wedge (iscert \vee \ubMarker{A}>0)
}\\&\;{\bf then}\,{
(\ifte{\ubMarker{A} > 0}{ \ubMarker{A} \cdot \ubMarker{\textbf{\sffamily \tup}.\#_2}}{ \ubMarker{A} \cdot \lbMarker{\textbf{\sffamily \tup}.\#_2}})
}\,{\bf else}\,{0}
\\ \hline
\ensuremath{\query_{inwindow}} \coloneqq &\; \sigma_{\lbMarker{pos}<l+u \;\lor\; \ubMarker{pos}<l+u}(\ensuremath{\query_{aggupper}})
\\
\ensuremath{\query_{aggupper}} \coloneqq & \winrow{count(*)}{\ubMarker{pos}}{id_1}{iscert,\ubMarker{A}}{-\infty}{0}(\ensuremath{\query_{agglower}}) \\
\ensuremath{\query_{agglower}} \coloneqq & \winrow{count(*)}{\lbMarker{pos}}{id_1}{iscert,\lbMarker{A}}{-\infty}{0}(\ensuremath{\query_{markcert}})\\
\hline
\end{align*}
\begin{align*}
\ensuremath{\query_{markcert}} \coloneqq &\; \pi_{\mathrm{Sch}(\ensuremath{\query_{winposs}}),\ensuremath{e_{iscert}} \rightarrow iscert}(\ensuremath{\query_{winposs}})
\\
\ensuremath{e_{iscert}} \coloneqq &\; \ensuremath{e_{iscert_p}} \wedge \ensuremath{e_{iscert_w}}
\\
\ensuremath{e_{iscert_p}} \coloneqq &\; \lbMarker{Q_1.G}=\ubMarker{Q_1.G}=\lbMarker{Q_2.G}=\ubMarker{Q_2.G}
\\
\ensuremath{e_{iscert_w}} \coloneqq &\; \lbMarker{\tau} \geq \ubMarker{\ensuremath{selfpos}}-l \wedge \ubMarker{\tau} \leq \lbMarker{\ensuremath{selfpos}}+u
\\
\ensuremath{\query_{winposs}} \coloneqq &\; \sigma_{\ensuremath{e_{isposs_w}}}(\ensuremath{\query_{withselfpos}})
\\
\ensuremath{e_{isposs_w}} \coloneqq &\; \lbMarker{\ensuremath{selfpos}}-l \leq \ubMarker{\tau} \wedge \ubMarker{\ensuremath{selfpos}}+u \geq \lbMarker{\tau}
\\
\ensuremath{\query_{withselfpos}} \coloneqq &\; \ensuremath{\query_{pos}} \Join_{id_1=id} \ensuremath{\query_{selfpos}}
\\
\ensuremath{\query_{selfpos}} \coloneqq &\; \pi_{id_1 \rightarrow id,\lbMarker{\tau} \rightarrow \lbMarker{\ensuremath{selfpos}},\ubMarker{\tau} \rightarrow \ubMarker{\ensuremath{selfpos}}}(\sigma_{id_1=id_2}(\ensuremath{\query_{pos}})) \\
\hline
\end{align*}
\begin{align*}
\ensuremath{\query_{pos}} \coloneqq &\; \aggregation{\mathrm{Sch}(\ensuremath{\query_{bnds}})}{e_{pos}}(\ensuremath{\query_{bnds}})
\\
e_{pos} \coloneqq &\; \ensuremath{\mathbf{sum}}\xspace(\lbMarker{pos}) \rightarrow \lbMarker{\tau},\ensuremath{\mathbf{sum}}\xspace(\ubMarker{pos}) \rightarrow \ubMarker{\tau}\\
\ensuremath{\query_{bnds}} \coloneqq &\; \omega_{e_{c} \rightarrow \lbMarker{pos}, e_{p} \rightarrow \ubMarker{pos};id_1;pt}(\ensuremath{\query_{endpoints}})
\\
e_{c} \coloneqq &\; \ifte{\textsf{isend}=1}{\ensuremath{\mathbf{sum}}\xspace(\textsf{isend} \cdot \lbMarker{\textbf{\sffamily \tup}.\#_2})}{0}
\\
e_{p} \coloneqq &\; \ifte{\textsf{isend}=1}{\ensuremath{\mathbf{sum}}\xspace((1-\textsf{isend}) \cdot \ubMarker{\textbf{\sffamily \tup}.\#_2})}{0}
\\
\ensuremath{\query_{endpoints}} \coloneqq &\; \ensuremath{\query_{\lbMarker{pos}}} \cup \ensuremath{\query_{\ubMarker{pos}}}
\\
\ensuremath{\query_{\lbMarker{pos}}} \coloneqq &\; \pi_{\mathrm{Sch}(\ensuremath{\query_{part}}), \lbMarker{O} \rightarrow pt, 0 \rightarrow \textsf{isend}}(\ensuremath{\query_{part}})
\\
\ensuremath{\query_{\ubMarker{pos}}} \coloneqq &\; \pi_{\mathrm{Sch}(\ensuremath{\query_{part}}), \ubMarker{O} \rightarrow pt, 1 \rightarrow \textsf{isend}}(\ensuremath{\query_{part}})
\\
\ensuremath{\query_{part}} \coloneqq &\; \rho_{\{B_1 \gets B \mid B \in \mathrm{Sch}(\rewrUAA{Q}) \}} (\rewrUAA{Q}) \\ &\hspace{3mm}\Join_{\theta_{join}} \rho_{\{B_2 \gets B \mid B \in \mathrm{Sch}(\rewrUAA{Q}) \} }(\rewrUAA{Q})
\\
\theta_{join} \coloneqq &\; \lbMarker{Q_1.G} \leq \ubMarker{Q_2.G} \wedge \ubMarker{Q_1.G} \geq \lbMarker{Q_2.G}
\end{align*}
}
\caption{Rewrite rule for windowed aggregation operator}
\label{fig:windowAggregateRewrite}
\end{figure}
The rewrite rule for the windowed aggregation operator is shown in \Cref{fig:windowAggregateRewrite}.
The rewrite uses a range-overlap self-join on the partition-by attributes ($Q_{part}$) to link each partition-defining tuple from $Q_1$ with the potential members of its partition from $Q_2$.
Tuple positions within each partition ($Q_{pos}$) are then computed in a manner analogous to \Cref{fig:orderByRewrite}.
Next, the window for each tuple is constructed by first filtering out all tuples that certainly do not belong to the window ($Q_{winposs}$) and then labeling the remaining tuples with whether they certainly belong to the window ($Q_{markcert}$).
Finally, $Q_{aggbnds}$ computes bounds on the window aggregate, and the rewritten query joins this result with a computation of the selected-guess result.
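For instance, for the running example schema $R(A,B,C)$, the range-overlap self-join on the partition-by attribute $A$ could be expressed over the encoding as follows (a sketch; we assume each encoded tuple carries an identifier column \lstinline!id!):
\begin{lstlisting}
-- Pair each partition-defining tuple t1 with every tuple t2
-- that may belong to the same partition (overlapping A ranges).
SELECT t1.id AS id_1, t2.*
FROM R_enc t1
JOIN R_enc t2
  ON  t1.A_lb <= t2.A_ub
  AND t1.A_ub >= t2.A_lb;
\end{lstlisting}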
\subsection{AU-Databases} \label{sec:AU-model}
Using $\ensuremath{\mathcal{K}}\xspace$-relations and UA-DBs\xspace, Feng et al.~\cite{FH19} introduced \textit{\abbrUAADBs} (\textit{attribute-annotated uncertain databases}), a special type of $\ensuremath{\mathcal{K}}\xspace$-relation over \textit{range-annotated domains}. An \abbrUAADB encodes attribute-level and tuple-level under- and over-ap\-prox\-i\-ma\-tions of the certain and possible multiplicities of tuples from an incomplete $\ensuremath{\mathbb{N}}\xspace$-database $\mathcal{D}$ (bag semantics).
In an \abbrUAADB, a \textit{range-annotated value} $c = \uv{\lbMarker{c}}{\bgMarker{c}}{\ubMarker{c}}$ from a \textit{range-annotated domain} ${\dataDomain_{\rangeName}}$ encodes a value $\bgMarker{c} \in \mathbb{D}$ and two values ($\lbMarker{c}$ and $\ubMarker{c}$) that bound $\bgMarker{c}$ from below and above. We call a value $c \in {\dataDomain_{\rangeName}}$ certain if $\lbMarker{c}=\bgMarker{c}=\ubMarker{c}$.
For any $c \in {\dataDomain_{\rangeName}}$ we have $\lbMarker{c} \leq \bgMarker{c} \leq \ubMarker{c}$. For each tuple, AU-DBs\xspace use annotations from $\semq{\semN} = \tuple{\semq{\semN}, \addOf{\semq{\semN}}, \multOf{\semq{\semN}}, \zeroOf{\semq{\semN}}, \oneOf{\semq{\semN}}}$, i.e., triples $\ut{\lbMarker{k}}{\bgMarker{k}}{\ubMarker{k}}$ encoding a lower bound on the certain multiplicity of the tuple, the multiplicity of the tuple in the SGW\xspace, and an over-approximation of the tuple's possible multiplicity.
\subsubsection{\abbrUAADB Expression evaluation}
Let $e$ be an expression. Given a range valuation $\tilde{\varphi}: \textsc{vars}(e) \rightarrow {\dataDomain_{\rangeName}}$,
the corresponding selected-guess valuation is defined as
$\bgMarker{\tilde{\varphi}}(x) \coloneqq \bgMarker{\tilde{\varphi}(x)}$.
The result of expression $e$ over $\tilde{\varphi}$, denoted as $\seval{e}{\tilde{\varphi}}$, is defined as:
\begin{align*}
\seval{x}{\tilde{\varphi}} &\coloneqq \uv{\lbMarker{\tilde{\varphi}(x)}}{\bgMarker{\tilde{\varphi}(x)}}{\ubMarker{\tilde{\varphi}(x)}} &
\seval{c}{\tilde{\varphi}} &\coloneqq \uv{c}{c}{c}
\end{align*}
\ifnottechreport{
The selected-guess result of any of the following expressions is defined as $\bgMarker{\seval{e}{\tilde{\varphi}}} \coloneqq \seval{e}{\bgMarker{\tilde{\varphi}}}$. Let $\seval{e_1}{\tilde{\varphi}} = a$, $\seval{e_2}{\tilde{\varphi}} = b$, and $\seval{e_3}{\tilde{\varphi}} = c$. All expressions omitted below are defined point-wise (e.g., $\lbMarker{\seval{e_1 + e_2}{\tilde{\varphi}}} \coloneqq \lbMarker{a} + \lbMarker{b}$).
\begin{align*}
\lbMarker{\seval{\neg e_1}{\tilde{\varphi}}} &\coloneqq \neg\, \ubMarker{a} &
\ubMarker{\seval{\neg e_1}{\tilde{\varphi}}} &\coloneqq \neg\, \lbMarker{a}
\end{align*}\\[-10mm]
\begin{align*}
\lbMarker{\seval{e_1 \cdot e_2}{\tilde{\varphi}}} &\coloneqq \min(\;\;\ubMarker{a} \cdot \ubMarker{b},\;\;
\ubMarker{a} \cdot \lbMarker{b},\;\;
\lbMarker{a} \cdot \ubMarker{b},\;\;
\lbMarker{a} \cdot \lbMarker{b}\;\;)\\
\ubMarker{\seval{e_1 \cdot e_2}{\tilde{\varphi}}} &
\coloneqq \max(\;\;\ubMarker{a} \cdot \ubMarker{b},\;\;
\ubMarker{a} \cdot \lbMarker{b},\;\;
\lbMarker{a} \cdot \ubMarker{b},\;\;
\lbMarker{a} \cdot \lbMarker{b}\;\;)
\end{align*}\\[-10mm]
\begin{align*}
\lbMarker{\seval{e_1 \leq e_2}{\tilde{\varphi}}} &\coloneqq \ubMarker{a} \leq \lbMarker{b} &
\lbMarker{\seval{e_1=e_2}{\tilde{\varphi}}} &\coloneqq \ubMarker{a}=\lbMarker{b} \wedge \ubMarker{b}=\lbMarker{a} \\
\ubMarker{\seval{e_1 \leq e_2}{\tilde{\varphi}}} &\coloneqq \lbMarker{a} \leq \ubMarker{b} &
\ubMarker{\seval{e_1=e_2}{\tilde{\varphi}}} &\coloneqq \lbMarker{a} \leq \ubMarker{b} \wedge \lbMarker{b} \leq \ubMarker{a}
\end{align*}\\[-10mm]
\begin{align*}
\lbMarker{\seval{\ifte{e_1}{e_2}{e_3}}{\tilde{\varphi}}} &\coloneqq
\begin{cases}
\lbMarker{b} & \text{if } \lbMarker{a}= \ubMarker{a}=\top \\
\lbMarker{c} & \text{if } \lbMarker{a}= \ubMarker{a}=\bot \\
\min(\lbMarker{b},\lbMarker{c}) & \text{else}
\end{cases}\\
\ubMarker{\seval{\ifte{e_1}{e_2}{e_3}}{\tilde{\varphi}}} &\coloneqq
\begin{cases}
\ubMarker{b} & \text{if } \lbMarker{a}= \ubMarker{a}=\top \\
\ubMarker{c} & \text{if } \lbMarker{a}= \ubMarker{a}=\bot \\
\max(\ubMarker{b},\ubMarker{c}) & \text{else}
\end{cases}
\end{align*}
}
\subsubsection{Aggregation monoids}
\abbrUAADBs use commutative monoids~\cite{amsterdamer2011provenance} $\tuple{M, \madd{\monoid}, \mzero{\monoid}}$ to define an aggregation semantics for $\ensuremath{\mathcal{K}}\xspace$-relations: the values from $M$ that are the input to aggregation are combined through the operation $\madd{\monoid}$, which has a neutral element $\mzero{\monoid}$. Consider a monoid $M \in \{\text{\upshape\textsf{SUM}}, \text{\upshape\textsf{MIN}}, \text{\upshape\textsf{MAX}}\}$.
Let $\uv{\lbMarker{m}}{m}{\ubMarker{m}}$ be a range-annotated value from ${\dataDomain_{\rangeName}}$ and $\ut{\lbMarker{k}}{k}{\ubMarker{k}} \in \uaaK{\ensuremath{\mathbb{N}}\xspace}$.
The operation combining a tuple annotation with a range-annotated monoid value is defined as $\ut{\lbMarker{k}}{k}{\ubMarker{k}} \mysmbNAU{\monoid} \uv{\lbMarker{m}}{m}{\ubMarker{m}} =$
\begin{align*}
[& min(\lbMarker{k} \smbN{M} \lbMarker{m},\lbMarker{k} \smbN{M} \ubMarker{m},\ubMarker{k} \smbN{M} \lbMarker{m},\ubMarker{k} \smbN{M} \ubMarker{m}), \\
& {k} \smbN{M} {m}, \\
& max(\lbMarker{k} \smbN{M} \lbMarker{m},\lbMarker{k} \smbN{M} \ubMarker{m},\ubMarker{k} \smbN{M} \lbMarker{m},\ubMarker{k} \smbN{M} \ubMarker{m})]
\end{align*}
\subsubsection{\abbrUAADB Aggregation}
In order to define uncertain window functions, we build on notation from \abbrUAADB aggregation. In contrast to aggregation, window functions intuitively define an (uncertain) group of tuples for each individual tuple and compute an aggregation result over each such group, i.e., they use a different grouping strategy.
The \abbrUAADB aggregation semantics first fixes a grouping strategy that assigns input tuples to output groups:
\begin{align*}
\stratGrps(G, \mathbf{\rela}) & \coloneqq \{ t.G \mid \exists \textbf{\sffamily \tup}: \bgMarker{\textbf{\sffamily \tup}} = t \wedge \mathbf{\rela}(\textbf{\sffamily \tup}) \neq \uaaZero{\ensuremath{\mathbb{N}}\xspace} \} &
\stratPoss(\textbf{\sffamily \tup}) & \coloneqq \bgMarker{\textbf{\sffamily \tup}.G}
\end{align*}
The bounds on the group-by attribute values of an output tuple $\textbf{\sffamily \tup}_{i}$ are then defined as:
\begin{align*}
\lbMarker{\textbf{\sffamily \tup}_{i}.a} &= \min_{\textbf{\sffamily \tup}: \alpha(\textbf{\sffamily \tup}) = i} \lbMarker{\textbf{\sffamily \tup}.a} &
\bgMarker{\textbf{\sffamily \tup}_{i}.a} &= g.a &
\ubMarker{\textbf{\sffamily \tup}_{i}.a} &= \max_{\textbf{\sffamily \tup}: \alpha(\textbf{\sffamily \tup}) = i} \ubMarker{\textbf{\sffamily \tup}.a}
\end{align*}
The set $\mathbb{g}(i)$ of input tuples that possibly belong to group $i$ is defined as:
\begin{align*}
\mathbb{g}(i) \coloneqq \{ \textbf{\sffamily \tup} \mid \mathbf{\rela}(\textbf{\sffamily \tup}) \neq \uaaZero{\ensuremath{\mathbb{N}}\xspace} \wedge \textbf{\sffamily \tup}.G \sqcap \textbf{\sffamily \tup}_{i}.G\}
\end{align*}
The aggregation function result bounds for tuple $\textbf{\sffamily \tup}_{i}$ are defined as: \\[-3mm]
\begin{align*}
\lbMarker{\textbf{\sffamily \tup}_{i}.f(A)} &= \sum_{\textbf{\sffamily \tup} \in \mathbb{g}(i)} \lbagg{\textbf{\sffamily \tup}}
&\ubMarker{\textbf{\sffamily \tup}_{i}.f(A)} &= \sum_{\textbf{\sffamily \tup} \in \mathbb{g}(i)} \ubagg{\textbf{\sffamily \tup}}
\end{align*}\\[-6mm]
\begin{align*}
\bgMarker{\textbf{\sffamily \tup}_{i}.f(A)} &= \sum_{\textbf{\sffamily \tup} \in \mathbb{g}(i)} \bgMarker{(\mathbf{\rela}(\textbf{\sffamily \tup}) \mysmbNAU{\monoid} \textbf{\sffamily \tup}.A)}
\end{align*}\\[-6mm]
\begin{align*}
\lbagg{\textbf{\sffamily \tup}} &= \begin{cases}
min(\mzero{\monoid},\lbMarker{(\mathbf{\rela}(\textbf{\sffamily \tup}) \mysmbNAU{\monoid} \textbf{\sffamily \tup}.A)}) &\mathtext{if }\uncertg{G}{\mathbf{\rela}}{\textbf{\sffamily \tup}} \\
\lbMarker{(\mathbf{\rela}(\textbf{\sffamily \tup}) \mysmbNAU{\monoid} \textbf{\sffamily \tup}.A)} &\mathtext{otherwise}\\
\end{cases}\\
\ubagg{\textbf{\sffamily \tup}} &= \begin{cases}
max(\mzero{\monoid},\ubMarker{(\mathbf{\rela}(\textbf{\sffamily \tup}) \mysmbNAU{\monoid} \textbf{\sffamily \tup}.A)}) &\mathtext{if }\uncertg{G}{\mathbf{\rela}}{\textbf{\sffamily \tup}} \\
\ubMarker{(\mathbf{\rela}(\textbf{\sffamily \tup}) \mysmbNAU{\monoid} \textbf{\sffamily \tup}.A)}&\mathtext{otherwise}\\
\end{cases} \\
\uncertg{G}{\mathbf{\rela}}{\textbf{\sffamily \tup}} & \coloneqq (\exists a \in G: \lbMarker{\textbf{\sffamily \tup}.a} \neq \ubMarker{\textbf{\sffamily \tup}.a}) \vee \lbMarker{\mathbf{\rela}(\textbf{\sffamily \tup})} = 0 \\
\textbf{\sffamily \tup} \sqcap \textbf{\sffamily \tup}' & \coloneqq \bigwedge_{i \in \{1, \ldots, n\}} [ \lbMarker{\textbf{\sffamily \tup}.A_i}, \ubMarker{\textbf{\sffamily \tup}.A_i} ] \cap [ \lbMarker{\textbf{\sffamily \tup}'.A_i}, \ubMarker{\textbf{\sffamily \tup}'.A_i} ] \neq \emptyset
\end{align*}
\subsubsection{Preservation of bounds}
AU-DBs\xspace define what it means to bound an incomplete $\ensuremath{\mathbb{N}}\xspace$-relation by first defining how range-annotated tuples bound deterministic tuples.
Let $\textbf{\sffamily \tup}$ be a range-annotated tuple with schema $\tuple{a_1, \ldots, a_n}$ and $t$ be a tuple with the same schema as $\textbf{\sffamily \tup}$. We say that $\textbf{\sffamily \tup}$ bounds $t$ written as $t \sqsubseteq \textbf{\sffamily \tup}$ iff\\[-3mm]
%
\[
\forall{i \in \{1, \ldots ,n\}}:
\lbMarker{\textbf{\sffamily \tup}.a_i} \leq t.a_i \leq \ubMarker{\textbf{\sffamily \tup}.a_i}
\]
One AU-DB\xspace tuple may bound multiple conventional tuples and vice versa. We define \textit{tuple matchings} as a way to match the multiplicities of tuples of a $\uaaK{\semN}$-database (or relation) with that of one possible world of an incomplete $\ensuremath{\mathbb{N}}\xspace$-database (or relation).
Based on tuple matchings we then define how to bound possible worlds.
Let $\mathbf{\rela}$ be an $n$-ary AU-relation\xspace and $R$ an $n$-ary relation. A \emph{tuple matching} $\mathcal{TM}$ for $\mathbf{\rela}$ and $R$ is a function $({\dataDomain_{\rangeName}})^{n} \times \mathbb{D}^n \to \ensuremath{\mathbb{N}}\xspace$ s.t. the following conditions hold:\\[-3mm]
\begin{align*}
&\forall \textbf{\sffamily \tup} \in {\dataDomain_{\rangeName}}^n: \forall t \not\sqsubseteq \textbf{\sffamily \tup}: \mathcal{TM}(\textbf{\sffamily \tup}, t) = 0
&\forall t \in \mathbb{D}^n: \hspace*{-1mm}\sum_{\textbf{\sffamily \tup} \in {\dataDomain_{\rangeName}}^n} \hspace*{-1mm}\mathcal{TM}(\textbf{\sffamily \tup}, t)=R(t)
\end{align*}
A tuple matching distributes the multiplicity of a tuple from $R$ over one or more matching tuples from $\mathbf{\rela}$. Multiple tuples from an AU-DB\xspace may encode the same tuple when the multidimensional rectangles of their attribute-level range annotations overlap, as with the first two tuples of the AU-DB\xspace in \Cref{table:UAAR_overall} and the SGW\xspace.
Given an n-ary AU-DB\xspace relation $\mathbf{\rela}$ and an n-ary deterministic $\ensuremath{\mathbb{N}}\xspace$-relation $R$ (a possible world), relation $\mathbf{\rela}$ bounds $R$ (denoted $R \sqsubset \mathbf{\rela}$) iff there exists a tuple matching $\mathcal{TM}$ for $\mathbf{\rela}$ and $R$ s.t. \\[-5mm]
\begin{align*}
&\forall \textbf{\sffamily \tup} \in {\dataDomain_{\rangeName}}^n:\sum_{t \in \mathbb{D}^n} \mathcal{TM}(\textbf{\sffamily \tup},t) \geq \lbMarker{\mathbf{\rela}(\textbf{\sffamily \tup})}
\;\text{\textbf{and}}\;
\sum_{t \in \mathbb{D}^n} \mathcal{TM}(\textbf{\sffamily \tup},t) \leq \ubMarker{\mathbf{\rela}(\textbf{\sffamily \tup})}
\end{align*}
Having defined when a possible world is bound by a $\uaaK{\semN}$-relation, we are ready to define bounding of incomplete $\ensuremath{\mathbb{N}}\xspace$-relations.
Given an incomplete $\ensuremath{\mathbb{N}}\xspace$-relation $\mathcal{R}$ and a $\uaaK{\semN}$-relation $\mathbf{\rela}$, we say that $\mathbf{\rela}$ bounds $\mathcal{R}$, written as $\mathcal{R} \sqsubset \mathbf{\rela}$
iff
\begin{align*}
&\forall R \in \mathcal{R}: R \sqsubset \mathbf{\rela}
&&\exists R \in \mathcal{R}: R = \bgOf{\mathbf{\rela}}
\end{align*}
The \abbrUAADB query semantics is defined for and closed under $\ensuremath{\mathcal{RA}^{+}}$, set difference, and aggregation, and bound preservation was proven for all of these operations~\cite{FH19}.
\subsubsection{Relational encodings}
\abbrUAADBs can be encoded in a relational database by representing each range-annotated value as three columns that store the lower bound, the best guess value, and the upper bound, respectively.
\section{Aggregation}\label{sec:aggregation}
We now introduce a bound preserving semantics for aggregation over $\uaaK{\semN}$-relations
that is based on results from~\cite{amsterdamer2011provenance}, which we review below. We leave a generalization to other semirings to future work; see~\cite{techreport} for a discussion of the challenges involved. Importantly, our semantics has \texttt{PTIME}\xspace data complexity.
\mypar{Aggregation Monoids}
The semantics for aggregation queries
over $\ensuremath{\mathcal{K}}\xspace$-relations from~\cite{amsterdamer2011provenance} deals with aggregation functions that
are commutative monoids $\tuple{M, \madd{\monoid}, \mzero{\monoid}}$, i.e., where
the values from
$M$ that are the input to aggregation are combined through an operation
$\madd{\monoid}$, which has a neutral element $\mzero{\monoid}$. Abusing notation, we will use $M$ to denote both the monoid and its domain.
Standard aggregation functions \ensuremath{\mathbf{sum}}\xspace, \ensuremath{\mathbf{min}}\xspace, \ensuremath{\mathbf{max}}\xspace, and \ensuremath{\mathbf{count}}\xspace are
monoids. \ensuremath{\mathbf{avg}}\xspace can be derived from \ensuremath{\mathbf{sum}}\xspace and \ensuremath{\mathbf{count}}\xspace.
As an example, consider the monoids for $\ensuremath{\mathbf{sum}}\xspace$ and $\ensuremath{\mathbf{min}}\xspace$:
$\text{\upshape\textsf{SUM}} \coloneqq \tuple{\mathbb{R}, +, 0}$ and $\text{\upshape\textsf{MIN}} \coloneqq \tuple{\mathbb{R}, \min,
\infty}$.
For $M \in \{\text{\upshape\textsf{SUM}}, \text{\upshape\textsf{MIN}}, \text{\upshape\textsf{MAX}}\}$ (\ensuremath{\mathbf{count}}\xspace uses \text{\upshape\textsf{SUM}}), we define a corresponding monoid ${\monoid}_{I}$ using
range-annotated expression semantics (\Cref{sec:expression}). Note that this gives us aggregation functions which can be applied to range-annotated values and are bound preserving, i.e., the result of the aggregation function bounds all possible results for any set of values bound by the inputs.
For example, $\ensuremath{\mathbf{min}}\xspace$ is expressed as $\min(v,w) \coloneqq \ifte{v \leq w}{v}{w}$.
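Instantiating the range-annotated expression semantics for this definition of $\ensuremath{\mathbf{min}}\xspace$, the bounds of the minimum of two range-annotated values $v$ and $w$ could be computed over the relational encoding as follows (a sketch; the table \lstinline!value_pairs! and its column names are ours):
\begin{lstlisting}
SELECT CASE WHEN v_ub <= w_lb THEN v_lb   -- v <= w is certainly true
            WHEN v_lb >  w_ub THEN w_lb   -- v <= w is certainly false
            ELSE LEAST(v_lb, w_lb) END    AS min_lb,
       LEAST(v_bg, w_bg)                  AS min_bg,  -- selected guess
       CASE WHEN v_ub <= w_lb THEN v_ub
            WHEN v_lb >  w_ub THEN w_ub
            ELSE GREATEST(v_ub, w_ub) END AS min_ub
FROM value_pairs;
\end{lstlisting}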
\begin{Lemma}\label{lem:range-annotated-monoids}
$\rangeMof{\text{\upshape\textsf{SUM}}}$, $\rangeMof{\text{\upshape\textsf{MIN}}}$, $\rangeMof{\text{\upshape\textsf{MAX}}}$ are monoids.
\end{Lemma}
\begin{proof}[Proof Sketch]
Proven by unfolding the definitions to show that the addition operations of these monoids are pointwise applications of the operations of the monoids they originate from.
\end{proof}
\mypar{Semimodules}
Aggregation over
$\ensuremath{\mathcal{K}}\xspace$-relations has to take the annotations of tuples into account for computing
aggregation results. For instance, consider an $\ensuremath{\mathbb{N}}\xspace$-relation $R(A)$ with
tuples $\tuple{30} \mapsto 2$ and $\tuple{40} \mapsto 3$ (i.e., with 2 and 3 duplicates, respectively).
The sum over $A$ should be $30 \cdot 2 + 40 \cdot 3 = 180$.
More generally speaking, we
need an operation $\ast_{M}: \ensuremath{\mathbb{N}}\xspace \times M \rightarrow M$
that combines the tuple multiplicities with domain values.
As observed in~\cite{amsterdamer2011provenance}
this operation has to fulfill the algebraic laws of semimodules.
Note that $\ast$ is well-defined
for $\ensuremath{\mathbb{N}}\xspace$ and all of the monoids we consider:\\[-6mm]
\begin{align*}
k \smbN{\text{\upshape\textsf{SUM}}} m & = k \cdot m &
k \smbN{\text{\upshape\textsf{MIN}}} m = k \smbN{\text{\upshape\textsf{MAX}}} m & =
\begin{cases}
m & \text{if}\,\,\,k \neq 0 \\
0 & \text{else} \\
\end{cases}
\end{align*}
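For example, for a bag relation stored with an explicit multiplicity column, these operations amount to weighting or filtering values by their multiplicity (a sketch; the table \lstinline!R_bag! and its columns are ours):
\begin{lstlisting}
-- <30> |-> 2 and <40> |-> 3: sum is 30*2 + 40*3 = 180
SELECT sum(A * mult) AS sum_A FROM R_bag;
-- min/max only consider values with non-zero multiplicity
SELECT min(A) AS min_A FROM R_bag WHERE mult <> 0;
\end{lstlisting}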
Unfortunately, as the following lemma shows, it is not possible to use semimodules for aggregation over $\uaaK{\semN}$-relations, because such semimodules, if they exist, cannot be bound preserving.
\begin{Lemma}\label{lem:bound-preserving-impossible}
$\ast_{\uaaK{\ensuremath{\mathbb{N}}\xspace},\text{\upshape\textsf{SUM}}}$, if it is well-defined, is not bound preserving.
\end{Lemma}
\begin{proof}[Proof Sketch]
Assume that this semimodule exists and is bound preserving. We lead this assumption to a contradiction by deriving two different results for the expression $\ut{1}{1}{2} \smbNAU{\text{\upshape\textsf{SUM}}} \uv{0}{0}{0}$ using semimodule laws: (i) the obvious $\uv{0}{0}{0}$ and (ii) $\uv{-1}{0}{1} = \ut{1}{1}{2} \smbNAU{\text{\upshape\textsf{SUM}}} (\uv{-1}{-1}{-1} + \uv{1}{1}{1})$
(full proof in \cite{techreport}).
\end{proof}
\iftechreport{Intuitively, an aggregate's bounds may be affected by order of operations, becoming weaker than strictly necessary.
While a stronger constraint is not efficiently enforceable, we can still require that the aggregate preserve bounds.
A naive non-semimodule definition of $\mysmbNAU{\monoid}$ might use point-wise application of
$\smbN{M}$. However, this does not always preserve bounds. For instance, consider sum
aggregation ($\text{\upshape\textsf{SUM}}$) over
a $\uaaK{\ensuremath{\mathbb{N}}\xspace}$-relation $\mathbf{\rela}(A)$ with a single tuple $\tuple{\uv{-3}{-2}{-1}}$
with annotation $\ut{0}{1}{1}$ and query $\aggnogb{sum(A)} (R)$. If
$\mysmbNAU{\text{\upshape\textsf{SUM}}}$ would be defined as point-wise application of
$\smbN{\text{\upshape\textsf{SUM}}}$, then the result of aggregation would be
$\uv{-3 \cdot 0}{-2 \cdot 1}{-1 \cdot 1} = \uv{0}{-2}{-1}$ which is not a
valid range-annotated value because $0 > -2$.
}
In spite of this negative result, not everything is lost. Observe that it is not necessary for the operation that combines semiring elements (tuple annotations) with elements of the aggregation monoid to obey the semimodule laws. After all, what we care about is that the operation is bound-preserving.
Below we define operations $\mysmbNAU{M}$ that are not semimodules, but are bound-preserving.
To achieve bound-preservation we can rely on the bound-preserving expression semantics we have defined in~\Cref{sec:expression}.
For example,
since $\smbN{\text{\upshape\textsf{SUM}}}$ is multiplication, we can define
$\mysmbNAU{\text{\upshape\textsf{SUM}}}$ using multiplication for range-annotated values.
This approach of computing the bounds as the minimum and maximum over all pair-wise combinations of value and tuple-annotation bounds also works for $\text{\upshape\textsf{MIN}}$ and $\text{\upshape\textsf{MAX}}$.
In~\cite{techreport} we prove that $\mysmbNAU{\monoid}$ is in fact bound preserving.
\begin{Definition}\label{def:range-semimodules}
Consider a monoid $M \in \{\text{\upshape\textsf{SUM}}, \text{\upshape\textsf{MIN}}, \text{\upshape\textsf{MAX}}\}$.
Let $\uv{\lbMarker{m}}{m}{\ubMarker{m}}$ be a range-annotated value from ${\dataDomain_{\rangeName}}$ and $\ut{\lbMarker{k}}{k}{\ubMarker{k}} \in \uaaK{\ensuremath{\mathbb{N}}\xspace}$.
We define $\ut{\lbMarker{k}}{k}{\ubMarker{k}} \mysmbNAU{\monoid} \uv{\lbMarker{m}}{m}{\ubMarker{m}} =$
\begin{align*}
[& min(\lbMarker{k} \smbN{M} \lbMarker{m},\lbMarker{k} \smbN{M} \ubMarker{m},\ubMarker{k} \smbN{M} \lbMarker{m},\ubMarker{k} \smbN{M} \ubMarker{m}), \\
& {k} \smbN{M} {m}, \\
& max(\lbMarker{k} \smbN{M} \lbMarker{m},\lbMarker{k} \smbN{M} \ubMarker{m},\ubMarker{k} \smbN{M} \lbMarker{m},\ubMarker{k} \smbN{M} \ubMarker{m})]
\end{align*}
\end{Definition}
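In the relational encoding, $\mysmbNAU{\text{\upshape\textsf{SUM}}}$ can be computed by taking the minimum and maximum over the four combinations of value and multiplicity bounds, e.g., using \lstinline!LEAST! / \lstinline!GREATEST! as available in PostgreSQL (a sketch; column names are ours):
\begin{lstlisting}
SELECT LEAST(mult_lb * A_lb, mult_lb * A_ub,
             mult_ub * A_lb, mult_ub * A_ub)    AS contrib_lb,
       mult_bg * A_bg                           AS contrib_bg,
       GREATEST(mult_lb * A_lb, mult_lb * A_ub,
                mult_ub * A_lb, mult_ub * A_ub) AS contrib_ub
FROM R_enc;
\end{lstlisting}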
\subsection{Bound-Preserving Aggregation}
\label{sec:bound-pres-aggr}
We now define a bound preserving aggregation semantics based on the
$\mysmbNAU{\monoid}$ operations.
Since AU-DBs\xspace can be used to encode an arbitrary number of groups as a single
tuple, we need to decide how to trade conciseness of the representation for
accuracy. Furthermore, we need to ensure that the aggregation result in the
SGW\xspace is encoded by the result. There are many possible strategies for how to
group possible aggregation results. In this paper, we limit the discussion to a default strategy that we introduce next (see~\cite{techreport} for a general treatment).
Afterwards, we demonstrate how to calculate group-by attribute and aggregation result ranges for output tuples.
\mypar{Grouping Strategy}
Our \textit{grouping strategy}
takes as input an n-ary $\uaaK{\ensuremath{\mathbb{N}}\xspace}$-relation $\mathbf{\rela}$ and a list of group-by attributes $G$ and returns a pair $(\mathcal{G}, \alpha)$ where $\mathcal{G}(G, \mathbf{\rela})$ is a set of output tuples --- one for every SG\xspace group (an input tuple's group-by values in the SGW\xspace), and
$\alpha$ assigns each input tuple to one output based on its SG\xspace group-by values.
Even if the SG\xspace annotation of an input tuple is $0$, we still use its SG\xspace values
to assign it to an output tuple.
Only tuples that are not possible (annotated with $\uaaZero{\ensuremath{\mathbb{N}}\xspace} = \ut{0}{0}{0}$) are excluded.
Since output tuples are identified by their SG\xspace group-by values, we will use these values to identify elements from $\mathcal{G}$.
\begin{Definition
\label{def:default-grouping-strategy}
Consider a query $Q \coloneqq \aggregation{G}{f(A)}(\mathbf{\rela})$.
Let $\textbf{\sffamily \tup} \in {\dataDomain_{\rangeName}}^n$.
The default grouping strategy $\gstrat_{def} \coloneqq (\stratGrps, \stratPoss)$ is defined as:
\begin{align*}
\stratGrps(G, \mathbf{\rela}) & \coloneqq \{ t.G \mid \exists \textbf{\sffamily \tup}: \bgMarker{\textbf{\sffamily \tup}} = t \wedge \mathbf{\rela}(\textbf{\sffamily \tup}) \neq \uaaZero{\ensuremath{\mathbb{N}}\xspace} \} &
\stratPoss(\textbf{\sffamily \tup}) & \coloneqq \bgMarker{\textbf{\sffamily \tup}.G}
\end{align*}
\end{Definition}
For instance, consider
three tuples $\textbf{\sffamily \tup}_1 \coloneqq \tuple{\uv{1}{2}{2}}$ and
$\textbf{\sffamily \tup}_2 \coloneqq \tuple{\uv{2}{2}{4}}$ and $\textbf{\sffamily \tup}_3 \coloneqq \tuple{\uv{2}{3}{4}}$ of a relation $\mathbf{\rela}(A)$.
Furthermore, assume that $\mathbf{\rela}(\textbf{\sffamily \tup}_1) = \ut{1}{1}{1}$, $\mathbf{\rela}(\textbf{\sffamily \tup}_2) = \ut{0}{0}{1}$, and
$\mathbf{\rela}(\textbf{\sffamily \tup}_3) = \ut{0}{0}{3}$.
Grouping on $A$,
the default strategy will generate two output groups $g_1$ for SG\xspace group $(2)$ and
$g_2$ for SG\xspace group $(3)$.
Based on their SG\xspace group-by values, $\stratPoss$ assigns $\textbf{\sffamily \tup}_1$ and $\textbf{\sffamily \tup}_2$ to $g_1$ and $\textbf{\sffamily \tup}_3$ to $g_2$.
\mypar{Aggregation Semantics}
We now introduce an aggregation semantics based on this grouping strategy. For
simplicity we define aggregation without group-by as a special case of
aggregation with group-by (the only difference is how tuple annotations are
handled). We first define how to construct a result tuple $\textbf{\sffamily \tup}_{i}$ for each
output group $i$ returned by the grouping strategy and then present how to
calculate tuple annotations. The construction of an output tuple is divided into
two steps: (i) determine range annotations for the group-by attributes and (ii)
determine range annotations for the aggregation function result attributes. To
ensure that all possible groups that an input tuple $\textbf{\sffamily \tup}$ with
$\stratPoss(\textbf{\sffamily \tup}) = i$ belongs to are contained in $\textbf{\sffamily \tup}_{i}.G$ we
have to merge the group-by attribute bounds of all of these tuples. Furthermore,
we use the unique SG\xspace group-by values common to all input tuples assigned to $\textbf{\sffamily \tup}_{i}$ (i.e., $\bgMarker{\textbf{\sffamily \tup}_{i}.G} = i$) as the output's SG\xspace group-by value.
\begin{Definition
\label{def:range-bounded-groups}
Consider a result group $i \in \stratGrps(G, \mathbf{\rela})$ for an aggregation with group-by attributes $G$ over a $\uaaK{\ensuremath{\mathbb{N}}\xspace}$-relation $\mathbf{\rela}$. The bounds for the group-by attributes values of $\textbf{\sffamily \tup}_{i}$ are defined as shown below.
For all $a \in G$ we define:\\[-5mm]
\begin{align*}
\lbMarker{\textbf{\sffamily \tup}_{i}.a} &= \min_{\textbf{\sffamily \tup}: \alpha(\textbf{\sffamily \tup}) = i} \lbMarker{\textbf{\sffamily \tup}.a} &
\bgMarker{\textbf{\sffamily \tup}_{i}.a} &= g.a &
\ubMarker{\textbf{\sffamily \tup}_{i}.a} &= \max_{\textbf{\sffamily \tup}: \alpha(\textbf{\sffamily \tup}) = i} \ubMarker{\textbf{\sffamily \tup}.a}
\end{align*}
\end{Definition}
Note that in the definition above, $\min$ and $\max$ are the minimum and maximum
wrt. to the order over the data domain $\mathbb{D}$ which we used to define
range-annotated values. Reconsider the three example tuples and two result groups from above.
The group-by range annotation for output tuple $\outof{g_1}$ is
$\uv{\min(1,2)}{2}{\max(2,4)} = \uv{1}{2}{4}$. Observe that $\uv{1}{2}{4}$ bounds
every group $\textbf{\sffamily \tup}_1$ and $\textbf{\sffamily \tup}_2$ may belong to in some possible world.
To calculate bounds on the result of an aggregation function for one group, we have to reason about the minimum and maximum possible aggregation function result based on the bounds of aggregation function input values, their associated row annotations, and their possible and guaranteed group memberships.
To calculate a conservative lower bound of the aggregation function result for an output tuple $\textbf{\sffamily \tup}_{i}$, we use $\mysmbNAU{\monoid}$ to pair the aggregation function value of each tuple $\textbf{\sffamily \tup}$ for which $\alpha(\textbf{\sffamily \tup}) = i$ holds with the tuple's annotation and then extract the lower bound from the resulting range-annotated value.
The group membership of a contributing tuple is uncertain if either its group-by values are uncertain or if it need not exist in all possible worlds (its certain multiplicity is $0$).
We thus take the minimum of the neutral element of the aggregation monoid and the result of $\mysmbNAU{\monoid}$ for such tuples. We introduce an \emph{uncertain group} predicate $\uncertg{G}{\mathbf{\rela}}{\textbf{\sffamily \tup}}$ for this purpose:\\[-5mm]
\begin{align*}
\uncertg{G}{\mathbf{\rela}}{\textbf{\sffamily \tup}} \coloneqq (\exists a \in G: \lbMarker{\textbf{\sffamily \tup}.a} \neq \ubMarker{\textbf{\sffamily \tup}.a}) \vee \lbMarker{\mathbf{\rela}(\textbf{\sffamily \tup})} = 0
\end{align*}
We then sum up the resulting values in the aggregation monoid. Note that here summation is addition in $M$.
The upper bound calculation is analogous (using the upper bound and maximum instead). The SG\xspace result is calculated using standard $\ensuremath{\mathbb{N}}\xspace$-relational semantics. We use $\textbf{\sffamily \tup} \sqcap \textbf{\sffamily \tup}'$ to denote that the range annotations of tuples $\textbf{\sffamily \tup}$ and $\textbf{\sffamily \tup}'$ with the same schema $(A_1, \ldots, A_n)$ overlap on each attribute $A_i$, i.e.,\\[-4mm]
\begin{align*}
\textbf{\sffamily \tup} \sqcap \textbf{\sffamily \tup}' \coloneqq \bigwedge_{i \in \{1, \ldots, n\}} [ \lbMarker{\textbf{\sffamily \tup}.A_i}, \ubMarker{\textbf{\sffamily \tup}.A_i} ] \cap [ \lbMarker{\textbf{\sffamily \tup}'.A_i}, \ubMarker{\textbf{\sffamily \tup}'.A_i} ] \neq \emptyset
\end{align*}
\begin{Definition
\label{def:agg-function-bounds}
Consider input $\mathbf{\rela}$, set of group-by
attributes $G$, an output $i \in \stratGrps(G, \mathbf{\rela})$,
and aggregation function $f(A)$ with monoid $M$.
We use $\mathbb{g}(i)$ to denote the set of inputs with group-by
attribute bounds overlapping $\textbf{\sffamily \tup}_{i}.G$, i.e., belonging to a
group represented by $\textbf{\sffamily \tup}_{i}$:\\[-5mm]
\begin{align*}
\mathbb{g}(i) \coloneqq \{ \textbf{\sffamily \tup} \mid \mathbf{\rela}(\textbf{\sffamily \tup}) \neq \uaaZero{\ensuremath{\mathbb{N}}\xspace} \wedge \textbf{\sffamily \tup}.G \sqcap \textbf{\sffamily \tup}_{i}.G\}
\end{align*}
The aggregation function result bounds for tuple $\textbf{\sffamily \tup}_{i}$ are defined as: \\[-3mm]
\begin{align*}
\lbMarker{\textbf{\sffamily \tup}_{i}.f(A)} &= \sum_{\textbf{\sffamily \tup} \in \mathbb{g}(i)} \lbagg{\textbf{\sffamily \tup}}
&\ubMarker{\textbf{\sffamily \tup}_{i}.f(A)} &= \sum_{\textbf{\sffamily \tup} \in \mathbb{g}(i)} \ubagg{\textbf{\sffamily \tup}}
\end{align*}\\[-6mm]
\begin{align*}
\bgMarker{\textbf{\sffamily \tup}_{i}.f(A)} &= \sum_{\textbf{\sffamily \tup} \in \mathbb{g}(i)} \bgMarker{(\mathbf{\rela}(\textbf{\sffamily \tup}) \mysmbNAU{\monoid} \textbf{\sffamily \tup}.A)}
\end{align*}\\[-6mm]
\begin{align*}
\lbagg{\textbf{\sffamily \tup}} &= \begin{cases}
min(\mzero{\monoid},\lbMarker{(\mathbf{\rela}(\textbf{\sffamily \tup}) \mysmbNAU{\monoid} \textbf{\sffamily \tup}.A)}) &\mathtext{if }\uncertg{G}{\mathbf{\rela}}{\textbf{\sffamily \tup}} \\
\lbMarker{(\mathbf{\rela}(\textbf{\sffamily \tup}) \mysmbNAU{\monoid} \textbf{\sffamily \tup}.A)} &\mathtext{otherwise}\\
\end{cases}\\
\ubagg{\textbf{\sffamily \tup}} &= \begin{cases}
max(\mzero{\monoid},\ubMarker{(\mathbf{\rela}(\textbf{\sffamily \tup}) \mysmbNAU{\monoid} \textbf{\sffamily \tup}.A)}) &\mathtext{if }\uncertg{G}{\mathbf{\rela}}{\textbf{\sffamily \tup}} \\
\ubMarker{(\mathbf{\rela}(\textbf{\sffamily \tup}) \mysmbNAU{\monoid} \textbf{\sffamily \tup}.A)}&\mathtext{otherwise}\\
\end{cases}
\end{align*}
\end{Definition}
\begin{Example}
For instance, consider calculating the sum of $A$ grouping on $B$ for a relation $R(A,B)$, which consists of two tuples $\textbf{\sffamily \tup}_1 \coloneqq \tuple{\uv{3}{5}{10},\uv{3}{3}{3}}$ and $\textbf{\sffamily \tup}_2 \coloneqq \tuple{\uv{-4}{-3}{-3},\uv{2}{3}{4}}$ that are both annotated with \ut{1}{2}{2} (appear certainly once and may appear twice). Consider calculating the aggregation function result bounds for the result tuple $\textbf{\sffamily \tup}_{i}$ for the output group $i \coloneqq \tuple{3}$. The lower bound $\sum_{\textbf{\sffamily \tup} \in \mathbb{g}(i)} \lbagg{\textbf{\sffamily \tup}}$ on {\upshape \lstinline!sum(A)!} is calculated (\Cref{def:range-semimodules}) as:
\begin{align*}
= &\lbMarker{\left(\ut{1}{2}{2} \cdot \uv{3}{5}{10} \right)} + min(0, \lbMarker{\left(\ut{1}{2}{2} \cdot \uv{-4}{-3}{-3}\right)})\\
= &\lbMarker{\uv{3}{10}{20}} + min(0, \lbMarker{\uv{-8}{-6}{-3}})
=3 + min(0,-8) = -5
\end{align*}
The aggregation result is guaranteed to be greater than or equal to $-5$ since $\textbf{\sffamily \tup}_1$ certainly belongs to $i$ (no minimum operation is applied), because its group-by attribute value $\uv{3}{3}{3}$ is certain and the tuple certainly exists ($\lbMarker{\ut{1}{2}{2}} > 0$). This tuple contributes at least $3$ to the sum and $\textbf{\sffamily \tup}_2$ contributes at least $-8$. While it is possible that $\textbf{\sffamily \tup}_2$ does not belong to $i$, this can only increase the final result ($3 + 0 > 3 + (-8)$).
\end{Example}
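Over the relational encoding, the lower bound computation of \Cref{def:agg-function-bounds} for this example could be sketched in SQL as follows (column names are ours; \lstinline!grp_uncertain! encodes the predicate $\uncertg{G}{\mathbf{\rela}}{\textbf{\sffamily \tup}}$ and the \lstinline!WHERE! clause restricts the input to tuples whose group-by bounds overlap those of the output group):
\begin{lstlisting}
SELECT sum(CASE WHEN grp_uncertain
                THEN LEAST(0, contrib_lb)
                ELSE contrib_lb END) AS sum_A_lb  -- yields -5 here
FROM (SELECT LEAST(mult_lb * A_lb, mult_lb * A_ub,
                   mult_ub * A_lb, mult_ub * A_ub) AS contrib_lb,
             (B_lb <> B_ub OR mult_lb = 0)         AS grp_uncertain
      FROM R_enc
      -- keep tuples whose group-by bounds overlap the output
      -- group's bounds [2,4]
      WHERE B_lb <= 4 AND B_ub >= 2) g;
\end{lstlisting}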
We still need to calculate the multiplicity annotation for each result tuple. For aggregation without group-by, there is always exactly one result tuple.
In this case there exists a single possible SG\xspace output group (the empty tuple $\tuple{}$ ) and all input tuples are assigned to it through $\stratPoss$. Let $\textbf{\sffamily \tup}_{\tuple{}}$ denote this single output tuple.
Recalling that all remaining tuples have multiplicity $0$, we define:
\begin{align*}
\lbMarker{\aggregation{}{f(A)}(\mathbf{\rela})(\textbf{\sffamily \tup})}
= \bgMarker{\aggregation{}{f(A)}(\mathbf{\rela})(\textbf{\sffamily \tup})}
= \ubMarker{\aggregation{}{f(A)}(\mathbf{\rela})(\textbf{\sffamily \tup})}
\coloneqq \begin{cases}
1 &\text{if}\,\,\, \textbf{\sffamily \tup} = \textbf{\sffamily \tup}_{\tuple{}}\\
0 & \text{otherwise}
\end{cases}
\end{align*}
In order to calculate the upper bound on the possible multiplicity for a result tuple of a group-by aggregation, we have to determine the maximum number of distinct groups this output tuple could correspond to.
We compute the bound for an output $\textbf{\sffamily \tup}_{i}$ based on $\mathcal{G}$ making the worst-case assumption that
(i) each input tuple $\textbf{\sffamily \tup}$ with $\stratPoss(\textbf{\sffamily \tup}) = i$ has the maximal multiplicity possible ($\ubMarker{\mathbf{\rela}(\textbf{\sffamily \tup})}$);
(ii) each tuple $t$ encoded by $\textbf{\sffamily \tup}$ is in a separate group; and
(iii) groups produced from two inputs $\textbf{\sffamily \tup}$ and $\textbf{\sffamily \tup}'$ do not overlap.
We can improve this bound by partitioning the input into two sets: tuples with uncertain group-by values and tuples with certain group membership.
When calculating the maximum number of groups for an output $\textbf{\sffamily \tup}_{i}$, the set of input tuples with certain group-by values that fall into the group-by bounds of $\textbf{\sffamily \tup}_{i}$ only contribute the number of distinct SG\xspace group-by values from this set to the bound.
For the first set we still apply the worst-case assumption.
To determine the lower bound on the certain annotation of a tuple we have to reason about which input tuples certainly belong to a group. These are inputs whose group-by attributes are certain. For such tuples we sum up their tuple annotation lower bounds.
We then need to derive the annotation of a result tuple from relevant input tuple annotations. \cite{amsterdamer2011provenance} extended semirings with a duplicate elimination operator $\delta_\ensuremath{\mathbb{N}}\xspace$ for this purpose which is
defined as $\delta_\ensuremath{\mathbb{N}}\xspace(k) = 0$ if $k = 0$ and $\delta_{\ensuremath{\mathbb{N}}\xspace}(k) = 1$ otherwise.
\begin{Definition
\label{def:aggr-op-semantics-gb}
Let $Q \coloneqq \aggregation{G}{f(A)}(\mathbf{\rela})$ and $\gstrat_{def}(\mathbf{\rela},G) = (\stratGrps, \stratPoss)$.
Consider a tuple $\textbf{\sffamily \tup}$ such that $\exists i \in \stratGrps$ with $\textbf{\sffamily \tup} = \textbf{\sffamily \tup}_{i}$. Then,\\[-4mm]
\begin{align*}
\lbMarker{\aggregation{G}{f(A)}(\mathbf{\rela})(\textbf{\sffamily \tup})} &\coloneqq \delta_{\ensuremath{\mathbb{N}}\xspace}\left(\sum_{\textbf{\sffamily \tup}': \stratPoss(\textbf{\sffamily \tup}') = i \wedge \neg\, \uncertg{G}{\mathbf{\rela}}{\textbf{\sffamily \tup}'}}\lbMarker{\mathbf{\rela}(\textbf{\sffamily \tup}')}\right) \\
\bgMarker{\aggregation{G}{f(A)}(\mathbf{\rela})(\textbf{\sffamily \tup})} &\coloneqq \delta_{\ensuremath{\mathbb{N}}\xspace}\left(\sum_{\textbf{\sffamily \tup}': \stratPoss(\textbf{\sffamily \tup}') = i} \bgMarker{\mathbf{\rela}(\textbf{\sffamily \tup}')}\right)\\
\ubMarker{\aggregation{G}{f(A)}(\mathbf{\rela})(\textbf{\sffamily \tup})} &\coloneqq
\sum_{\textbf{\sffamily \tup}': \stratPoss(\textbf{\sffamily \tup}') = i} \ubMarker{\mathbf{\rela}(\textbf{\sffamily \tup}')}
\end{align*}
For any tuple $\textbf{\sffamily \tup}$ such that $\neg \exists i \in \stratGrps$ with $\textbf{\sffamily \tup} = \textbf{\sffamily \tup}_{i}$, we define
\[
\lbMarker{\aggregation{G}{f(A)}(\mathbf{\rela})(\textbf{\sffamily \tup})} =
\bgMarker{\aggregation{G}{f(A)}(\mathbf{\rela})(\textbf{\sffamily \tup})} =
\ubMarker{\aggregation{G}{f(A)}(\mathbf{\rela})(\textbf{\sffamily \tup})}
= 0
\]
\end{Definition}
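The multiplicity bounds of \Cref{def:aggr-op-semantics-gb} can be sketched analogously over the encoding, grouping on the selected-guess group-by value (again with our own column names; \lstinline!grp_uncertain! as above):
\begin{lstlisting}
SELECT B_bg,
       CASE WHEN sum(CASE WHEN grp_uncertain THEN 0
                          ELSE mult_lb END) > 0
            THEN 1 ELSE 0 END AS out_mult_lb,  -- delta_N(...)
       CASE WHEN sum(mult_bg) > 0
            THEN 1 ELSE 0 END AS out_mult_bg,  -- delta_N(...)
       sum(mult_ub)           AS out_mult_ub
FROM (SELECT *, (B_lb <> B_ub OR mult_lb = 0) AS grp_uncertain
      FROM R_enc) t
GROUP BY B_bg;
\end{lstlisting}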
\begin{figure}[t]
\centering
\begin{subtable}{\linewidth}
\centering
\begin{tabular}{ c|c|cc}
\textbf{street} & \textbf{number} & \thead{\#inhab} & \underline{$\uaaK{\ensuremath{\mathbb{N}}\xspace}$} \\
\cline{1-3}
\cMarker{Canal} & $\uv{165}{165}{165}$ & $\uv{1}{1}{1}$ & (1,1,2) \\
\uMarker{Canal} & $\uv{154}{153}{156}$ & $\uv{1}{2}{2}$ & (1,1,1) \\
\cMarker{State} & $\uv{623}{623}{629}$ & $\uv{2}{2}{2}$ & (2,2,3) \\
\cMarker{Monroe} & $\uv{3574}{3550}{3585}$ & $\uv{2}{3}{4}$ & (0,0,1) \\
\end{tabular}\\[-1mm]
\caption{Input relation address}
\label{fig:AU-agg-example-input}
\end{subtable}\\[-1mm]
\begin{subtable}{\linewidth}
\centering
\begin{minipage}{0.49\linewidth}
\begin{lstlisting}
SELECT street,
count(*) AS cnt
FROM address
GROUP BY street;
\end{lstlisting}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\begin{tabular}{ c|cc}
\textbf{street} & \textbf{cnt} & \underline{$\uaaK{\ensuremath{\mathbb{N}}\xspace}$} \\
\cline{1-2}
\uMarker{Canal} & $\uv{1}{2}{3}$ & (1,1,2) \\
\cMarker{State} & $\uv{2}{2}{4}$ & (1,1,1) \\
\cMarker{Monroe} & $\uv{1}{1}{2}$ & (0,0,1) \\
\end{tabular}
\end{minipage}\\[-3mm]
\caption{Aggregation with Group-by}
\label{fig:AU-agg-example-GB}
\end{subtable}\\[-3mm]
\caption{Aggregation over AU-DBs\xspace}
\label{fig:AU-aggregation}
\vspace*{-5mm}
\end{figure}
The following example illustrates the application of the aggregation semantics we have defined in this section.
\begin{Example}\label{ex:aggregation}
Consider the relation shown in \Cref{fig:AU-aggregation} which records addresses (street, street number, number of inhabitants).
Following~\cite{Yang:2015:LOA:2824032.2824055}, uncertain text fields are marked in red to indicate that their bound encompasses the whole domain.
Street values $v$ in black are certain, i.e., $\lbMarker{v} = \bgMarker{v} = \ubMarker{v}$.
We are uncertain about particular street numbers, number of inhabitants at certain addresses, and one street name. Furthermore, several tuples may represent more than one address.
For the query shown in \Cref{fig:AU-agg-example-GB}
consider the second result tuple (group \textit{State}). This tuple certainly exists since the 3rd tuple in the input appears twice in every world and its group-by value is certain. Thus, the count for group \textit{State} is at least two. The second input tuple could also belong to this group and, thus, the count could be $4$ (the upper bound).
Note also the multiplicity of the first output tuple (group \textit{Canal}): \uv{1}{1}{2}.
In the SGW\xspace the first two input tuples belong to this group.
However, the second input tuple need not be in this group in all worlds, and in fact may not belong to any existing group.
Thus, in some possible worlds this one \abbrUAADB tuple may represent 2 distinct output tuples.
\end{Example}
We are ready to state the main result of this section: our aggregation semantics for AU-DBs\xspace is bound-preserving.
\begin{Theorem}\label{theo:aggregation-preserves-bounds}
Let $Q \coloneqq \aggregation{G}{f(A)}(R)$ or $Q \coloneqq \aggregation{}{f(A)}(R)$.
$\uaaK{\semN}$-rel\-ational query semantics preserves bounds for $Q$.
\end{Theorem}
\begin{proof}[Proof Sketch]
The claim is proven by demonstrating that for every possible world $R$ of the input incomplete $\ensuremath{\mathbb{N}}\xspace$-relation $\mathcal{R}$ and a tuple matching $\mathcal{TM}$ based on which $\mathbf{\rela}$ bounds $R$, we can construct a tuple matching based on which $Q(\mathbf{\rela})$ bounds $Q(R)$. For each aggregation result $t$ in $R$, we show that there is a result in $Q(\mathbf{\rela})$ that bounds the values of $t$, because our aggregation semantics computes aggregation function results based on the minimum and maximum contribution of each possible input tuple, and group-by attribute bounds are generated by conservatively merging the group-by bounds of input tuples.
See~\cite{techreport} for the full proof.
\end{proof}
Our main technical result follows from \Cref{theo:aggregation-preserves-bounds}\iftechreport{, \Cref{theo:set-difference-prese},}\ifnottechreport{,~\cite[Thm. 3]{techreport} (bound preservation for set difference)} and \Cref{lem:ra-plus-preserves-bounds}:
\begin{Corollary}
\label{cor:preservation-of-boun}
$\uaaK{\semN}$-semantics preserves bounds for $\ensuremath{\mathcal{RA}^{agg}}$.
\end{Corollary}
\section{Scalar Expressions}
\label{sec:expression}
\ifnottechreport{Consider a domain $\mathbb{D}$ that consists of $\mathbb{R}$ and the boolean values ($\bot$ and $\top$).}
\iftechreport{Recall that $\mathbb{D}$ denotes a domain of values. For this section we assume that $\mathbb{D}$ that consists of $\mathbb{R}$ and the boolean values ($\bot$ and $\top$).}
Furthermore, let $\mathbb{V}$ denote a countable set of variables.
\iftechreport{ %
\begin{Definition}[Expression Syntax]\label{def:expr-syntax}
For any variable $x \in \mathbb{V}$, $x$ is an expression and for any constant $c \in \mathbb{D}$, $c$ is an expression.
If $e_1$, $e_2$ and $e_3$ are expressions, then \ldots
\begin{align*}
&e_1 \wedge e_2
&&e_1 \vee e_2
&& \neg e_1
&&e_1 = e_2
&&e_1 \neq e_2
&&e_1 \leq e_2
\end{align*}
\begin{align*}
&&e_1 + e_2
&&e_1 \cdot e_2
&&\frac{1}{e_1}
&&\ifte{e_1}{e_2}{e_3}\\
\end{align*}
are also expressions. Given an expression $e$, we denote the variables in $e$ by $\textsc{vars}(e)$. \end{Definition}
}
\ifnottechreport{
For any variable $x \in \mathbb{V}$, $x$ is an expression and for any constant $c \in \mathbb{D}$, $c$ is an expression.
If $e_1$, $e_2$ and $e_3$ are expressions, then \ldots\\[-6mm]
\begin{align*}
&e_1 \wedge e_2
&&e_1 \vee e_2
&& \neg e_1
&&e_1 = e_2
&&e_1 \neq e_2
&&e_1 \leq e_2
\end{align*}\\[-11mm]
\begin{align*}
&&e_1 + e_2
&&e_1 \cdot e_2
&&\ifte{e_1}{e_2}{e_3}
\end{align*}\\[-4mm]
are also expressions. Given an expression $e$, we denote the variables of $e$ by $\textsc{vars}(e)$.
}
We will also use $\neq$, $\geq$, $<$, $-$, and $>$ since these operators can be defined using the expression syntax above, e.g., $e_1 > e_2 = \neg\, (e_1 \leq e_2)$. \iftechreport{Assuming that $\mathbb{D}$ contains negative numbers, subtraction can be expressed using addition and multiplication.}
For an expression $e$, given a valuation $\varphi$ that maps variables from $\textsc{vars}(e)$ to constants from $\mathbb{D}$, the expression evaluates to a constant \ifnottechreport{$\seval{e}{\varphi}$} from $\mathbb{D}$. \ifnottechreport{The semantics of these expressions are standard (see \cite{techreport} for explicit definitions).}
\iftechreport{The semantics of expression evaluation is defined below.
\begin{Definition}[Expression Semantics]\label{def:expr-semantics}
Let $e$ be an expression. Given a valuation $\varphi: \textsc{vars}(e) \rightarrow \mathbb{D}$, the result of expression $e$ over $\varphi$ is denoted as $\seval{e}{\varphi}$. Note that $\seval{\frac{1}{e}}{\varphi}$ is undefined if $\seval{e}{\varphi} = 0$. The semantics of expression evaluation is defined as shown below:
\begin{align*}
\seval{x}{\varphi} & \coloneqq \varphi(x)
& \seval{c}{\varphi} & \coloneqq c &
\seval{\neg e_1}{\varphi} & \coloneqq \neg \seval{e_1}{\varphi}
\end{align*}
\begin{align*}
\seval{e_1 \wedge e_2}{\varphi} & \coloneqq \seval{e_1}{\varphi} \wedge \seval{e_2}{\varphi} &
\seval{e_1 \vee e_2}{\varphi} & \coloneqq \seval{e_1}{\varphi} \vee \seval{e_2}{\varphi} \\
\seval{e_1 + e_2}{\varphi} & \coloneqq \seval{e_1}{\varphi} + \seval{e_2}{\varphi} &
\seval{e_1 \cdot e_2}{\varphi} & \coloneqq \seval{e_1}{\varphi} \cdot \seval{e_2}{\varphi} \\
\seval{\frac{1}{e_1}}{\varphi} &\coloneqq \frac{1}{\seval{e_1}{\varphi}}\\
\seval{e_1 = e_2}{\varphi} & \coloneqq \seval{e_1}{\varphi} = \seval{e_2}{\varphi} &
\seval{e_1 \leq e_2}{\varphi} & \coloneqq \seval{e_1}{\varphi} \leq \seval{e_2}{\varphi}
\end{align*}
\begin{align*}
\seval{\ifte{e_1}{e_2}{e_3}}{\varphi} & \coloneqq
\begin{cases}
\seval{e_2}{\varphi} &\mathtext{if}\; \seval{e_1}{\varphi}\\
\seval{e_3}{\varphi} &\mathtext{otherwise}
\end{cases}
\end{align*}
\end{Definition}
}
\subsection{Incomplete Expression Evaluation}\label{sec:expr-incomplete}
We now define evaluation of expressions over incomplete valuations, which are sets of valuations.
Each valuation in such a set, called a possible world, represents one possible input for the expression.
The semantics of expression evaluation are then defined using
possible worlds semantics: the result of evaluating an expression $e$ over an incomplete valuation ${\Phi} = \{ \varphi_1, \ldots, \varphi_n \}$
\ifnottechreport{ denoted as $\seval{e}{{\Phi}}$
is the set of results obtained by evaluating $e$ over each $\varphi_i$ using deterministic expression evaluation semantics:
$\seval{e}{{\Phi}} \coloneqq \{ \seval{e}{\varphi} \mid \varphi \in {\Phi} \}$.
Consider an expression $e \coloneqq x+y$ and an incomplete valuation ${\Phi} = \{ (x=1,y=4), (x=2,y=4), (x=1,y=5) \}$.
We get $\seval{e}{{\Phi}} = \{ 1+4, 2+4, 1+5 \} = \{5,6\}$. }
\iftechreport{
is the set of results obtained by evaluating $e$ over each $\varphi_i$ using the deterministic expression evaluation semantics defined above.
\begin{Definition}[Incomplete Expression Semantics]\label{def:incomplete-expr-sem}
An incomplete valuation ${\Phi}$ is a set $\{ \varphi_1, \ldots, \varphi_n \}$ where each $\varphi_i$ is a valuation.
The result of evaluating an expression $e$ over ${\Phi}$ denoted as $\seval{e}{{\Phi}}$ is:
$$\seval{e}{{\Phi}} \coloneqq \{ \seval{e}{\varphi} \mid \varphi \in {\Phi} \}$$
\end{Definition}
\begin{Example}
Consider an expression $e \coloneqq x+y$ and an incomplete valuation with possible bindings ${\Phi} = \{ (x=1,y=4), (x=2,y=4), (x=1,y=5) \}$. Applying deterministic evaluation semantics to each of the three valuations from ${\Phi}$, we get $1+4=5$, $2+4=6$, and $1+5=6$. Thus, the possible outcomes of this expression under this valuation are: $\seval{e}{{\Phi}} = \{5,6\}$.
\end{Example}
}
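The following is a minimal Python sketch (illustrative only, not part of the formal development; all helper names are ad hoc) of possible-worlds expression evaluation: an incomplete valuation is represented as a list of ordinary valuations, and the result is the set of per-world results.
\begin{verbatim}
# Possible-worlds evaluation: apply the deterministic semantics in
# every world and collect the distinct results.
def eval_incomplete(expr, worlds):
    return {expr(phi) for phi in worlds}

e = lambda phi: phi["x"] + phi["y"]
Phi = [{"x": 1, "y": 4}, {"x": 2, "y": 4}, {"x": 1, "y": 5}]
print(eval_incomplete(e, Phi))   # {5, 6}
\end{verbatim}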
\ifnottechreport{
\subsection{Range-Annotated Domains}\label{sec:range-dom}
}
\iftechreport{
\subsection{Range-Annotated Domains}\label{sec:range-dom}
}
We now define \textit{range-annotated values}, which are domain values that are annotated with an interval that bounds the value from above and below.
We define an expression semantics for valuations that maps variables to range-annotated values and then prove that if the input bounds an incomplete valuation, then the range-annotated output produced by this semantics bounds the possible outcomes of the incomplete expression.
\begin{Definition
\label{def:range-domain}
Let $\mathbb{D}$ be a domain and let $\leq$ denote a total order over its elements. Then the \emph{range-annotated domain} ${\dataDomain_{\rangeName}}$ is defined as:
$$\left\{ \uv{\lbMarker{c}}{\bgMarker{c}}{\ubMarker{c}} \mid \lbMarker{c}, \bgMarker{c}, \ubMarker{c} \in \mathbb{D} \wedge \lbMarker{c} \leq \bgMarker{c} \leq \ubMarker{c} \right\}$$
\end{Definition}
A value $c = \uv{\lbMarker{c}}{\bgMarker{c}}{\ubMarker{c}}$ from ${\dataDomain_{\rangeName}}$ encodes a value $\bgMarker{c} \in \mathbb{D}$ and two values ($\lbMarker{c}$ and $\ubMarker{c}$) that bound $\bgMarker{c}$ from below and above.
We call a value $c \in {\dataDomain_{\rangeName}}$ \emph{certain} if $\lbMarker{c}=\bgMarker{c}=\ubMarker{c}$.
Observe that the definition requires that for any $c \in {\dataDomain_{\rangeName}}$ we have $\lbMarker{c} \leq \bgMarker{c} \leq \ubMarker{c}$.
\iftechreport{
\begin{Example}
For the boolean domain $\mathbb{D}=\{\bot,\top\}$ with order $\bot < \top$, the corresponding range annotated domain is:
$${\dataDomain_{\rangeName}}=\{\uv{\top}{\top}{\top},\uv{\bot}{\top}{\top},\uv{\bot}{\bot}{\top},\uv{\bot}{\bot}{\bot}\}$$
\end{Example}
}
We use valuations that map the variables of an expression to elements from ${\dataDomain_{\rangeName}}$
to
bound
incomplete valuations.
\iftechreport{
\begin{Definition}[Range-annotated valuation]\label{def:range-val}
Let $e$ be an expression. A \emph{range-annotated valuation} $\tilde{\varphi}$ for $e$ is a mapping $\textsc{vars}(e) \to {\dataDomain_{\rangeName}}$.
\end{Definition}
}
\begin{Definition
\label{def:range-expr-bound}
\ifnottechreport{A \emph{range-annotated valuation} $\tilde{\varphi}$ for an expression $e$ is a mapping $\textsc{vars}(e) \to {\dataDomain_{\rangeName}}$.}
Given an incomplete valuation ${\Phi}$
and a range-annotated valuation $\tilde{\varphi}$ for $e$,
we say that $\tilde{\varphi}$ bounds ${\Phi}$ iff
\begin{align*}
\forall x \in \textsc{vars}(e): \forall \varphi \in {\Phi}: \lbMarker{\tilde{\varphi}(x)} \leq \varphi(x) \leq \ubMarker{\tilde{\varphi}(x)}\\
\exists \varphi \in {\Phi}: \forall x \in \textsc{vars}(e): \varphi(x) = \bgMarker{\tilde{\varphi}(x)}
\end{align*}
\end{Definition}
\ifnottechreport{
Consider the incomplete valuation ${\Phi} = \{(x=1),(x=2),(x=3)\}$. The range-annotated valuation $x=\uv{0}{2}{3}$ is a bound for ${\Phi}$, while $x=\uv{0}{2}{2}$ is not a bound.
}
\iftechreport{
\begin{Example}
Consider the incomplete valuation ${\Phi} = \{(x=1),(x=2),(x=3)\}$. The range-annotated valuation $x=\uv{0}{2}{3}$ is a bound for ${\Phi}$, while $x=\uv{0}{2}{2}$ is not a bound.
\end{Example}
}
\iftechreport{
\subsection{Range-annotated Expression Evaluation}\label{sec:range-expr-eval}
}
We now define a semantics for evaluating expressions over range-annotated valuations. We then demonstrate that this semantics preserves bounds.
\begin{Definition}\iftechreport{[Range-annotated expression evaluation]}
\label{def:range-expr-eval}
Let $e$ be an expression. Given a range valuation $\tilde{\varphi}: \textsc{vars}(e) \rightarrow {\dataDomain_{\rangeName}}$,
we define
$\bgMarker{\tilde{\varphi}}(x) \coloneqq \bgMarker{\tilde{\varphi}(x)}$.
The result of expression $e$ over $\tilde{\varphi}$ denoted as $\seval{e}{\tilde{\varphi}}$ is defined as:
\begin{align*}
\seval{x}{\tilde{\varphi}} &\coloneqq \uv{\lbMarker{\tilde{\varphi}(x)}}{\bgMarker{\tilde{\varphi}(x)}}{\ubMarker{\tilde{\varphi}(x)}} &
\seval{c}{\tilde{\varphi}} &\coloneqq \uv{c}{c}{c}
\end{align*}
\ifnottechreport{
For any of the following expressions we define $\bgMarker{\seval{e}{\tilde{\varphi}}} \coloneqq \seval{e}{\bgMarker{\tilde{\varphi}}}$. Let $\seval{e_1}{\tilde{\varphi}} = a$, $\seval{e_2}{\tilde{\varphi}} = b$, and $\seval{e_3}{\tilde{\varphi}} = c$. All expressions omitted below are defined point-wise (e.g., $\lbMarker{\seval{e_1 + e_2}{\tilde{\varphi}}} \coloneqq \lbMarker{a} + \lbMarker{b}$).
\begin{align*}
\lbMarker{\seval{\neg e_1}{\tilde{\varphi}}} &\coloneqq \neg\, \ubMarker{a} &
\ubMarker{\seval{\neg e_1}{\tilde{\varphi}}} &\coloneqq \neg\, \lbMarker{a}
\end{align*}\\[-10mm]
\begin{align*}
\lbMarker{\seval{e_1 \cdot e_2}{\tilde{\varphi}}} &\coloneqq \min(\;\;\ubMarker{a} \cdot \ubMarker{b},\;\;
\ubMarker{a} \cdot \lbMarker{b},\;\;
\lbMarker{a} \cdot \ubMarker{b},\;\;
\lbMarker{a} \cdot \lbMarker{b}\;\;)\\
\ubMarker{\seval{e_1 \cdot e_2}{\tilde{\varphi}}} &
\coloneqq \max(\;\;\ubMarker{a} \cdot \ubMarker{b},\;\;
\ubMarker{a} \cdot \lbMarker{b},\;\;
\lbMarker{a} \cdot \ubMarker{b},\;\;
\lbMarker{a} \cdot \lbMarker{b}\;\;)
\end{align*}\\[-10mm]
\begin{align*}
\lbMarker{\seval{e_1 \leq e_2}{\tilde{\varphi}}} &\coloneqq \ubMarker{a} \leq \lbMarker{b} &
\lbMarker{\seval{e_1=e_2}{\tilde{\varphi}}} &\coloneqq \ubMarker{a}=\lbMarker{b} \wedge \ubMarker{b}=\lbMarker{a} \\
\ubMarker{\seval{e_1 \leq e_2}{\tilde{\varphi}}} &\coloneqq \lbMarker{a} \leq \ubMarker{b} &
\ubMarker{\seval{e_1=e_2}{\tilde{\varphi}}} &\coloneqq \lbMarker{a} \leq \ubMarker{b} \wedge \lbMarker{b} \leq \ubMarker{a}
\end{align*}\\[-10mm]
\begin{align*}
\lbMarker{\seval{\ifte{e_1}{e_2}{e_3}}{\tilde{\varphi}}} &\coloneqq
\begin{cases}
\lbMarker{b} & \text{if } \lbMarker{a}= \ubMarker{a}=\top \\
\lbMarker{c} & \text{if } \lbMarker{a}= \ubMarker{a}=\bot \\
\min(\lbMarker{b},\lbMarker{c}) & \text{else}
\end{cases}\\
\ubMarker{\seval{\ifte{e_1}{e_2}{e_3}}{\tilde{\varphi}}} &\coloneqq
\begin{cases}
\ubMarker{b} & \text{if } \lbMarker{a}= \ubMarker{a}=\top \\
\ubMarker{c} & \text{if } \lbMarker{a}= \ubMarker{a}=\bot \\
\max(\ubMarker{b},\ubMarker{c}) & \text{else}
\end{cases}
\end{align*}
}
\iftechreport{
Note that $\seval{\frac{1}{e}}{\tilde{\varphi}}$ is undefined if $\lbMarker{\seval{e}{\tilde{\varphi}}} \leq 0$ and $\ubMarker{\seval{e}{\tilde{\varphi}}} \geq 0$, because then $\tilde{\varphi}$ may bound a valuation $\varphi$ where $\seval{e}{\varphi} = 0$.
For any of the following expressions we define $\bgMarker{\seval{e}{\tilde{\varphi}}} \coloneqq \seval{e}{\bgMarker{\tilde{\varphi}}}$. Let $\seval{e_1}{\tilde{\varphi}} = a$, $\seval{e_2}{\tilde{\varphi}} = b$, and $\seval{e_3}{\tilde{\varphi}} = c$. Then,
\begin{align*}
\lbMarker{\seval{e_1 \wedge e_2}{\tilde{\varphi}}} &\coloneqq \lbMarker{a} \wedge \lbMarker{b} &
\ubMarker{\seval{e_1 \wedge e_2}{\tilde{\varphi}}} &\coloneqq \ubMarker{a} \wedge \ubMarker{b}\\
\lbMarker{\seval{e_1 \vee e_2}{\tilde{\varphi}}} &\coloneqq \lbMarker{a} \vee \lbMarker{b} &
\ubMarker{\seval{e_1 \vee e_2}{\tilde{\varphi}}} &\coloneqq \ubMarker{a} \vee \ubMarker{b} \\
\lbMarker{\seval{\neg e_1}{\tilde{\varphi}}} &\coloneqq \neg\, \ubMarker{a} &
\ubMarker{\seval{\neg e_1}{\tilde{\varphi}}} &\coloneqq \neg\, \lbMarker{a} \\
\lbMarker{\seval{e_1 + e_2}{\tilde{\varphi}}} &\coloneqq \lbMarker{a} + \lbMarker{b} &
\ubMarker{\seval{e_1 + e_2}{\tilde{\varphi}}} &\coloneqq \ubMarker{a} + \ubMarker{b}
\end{align*}\\[-10mm]
\begin{align*}
\lbMarker{\seval{e_1 \cdot e_2}{\tilde{\varphi}}} &\coloneqq \min(\ubMarker{a} \cdot \ubMarker{b},\ubMarker{a} \cdot \lbMarker{b},\lbMarker{a} \cdot \ubMarker{b},\lbMarker{a} \cdot \lbMarker{b})\\
\ubMarker{\seval{e_1 \cdot e_2}{\tilde{\varphi}}} &\coloneqq \max(\ubMarker{a} \cdot \ubMarker{b},\ubMarker{a} \cdot \lbMarker{b},\lbMarker{a} \cdot \ubMarker{b},\lbMarker{a} \cdot \lbMarker{b})\\
\lbMarker{\seval{\frac{1}{e_1}}{\tilde{\varphi}}} &\coloneqq \frac{1}{\ubMarker{a}}\\
\ubMarker{\seval{\frac{1}{e_1}}{\tilde{\varphi}}} &\coloneqq \frac{1}{\lbMarker{a}}
\end{align*}
\begin{align*}
\lbMarker{\seval{e_1 \leq e_2}{\tilde{\varphi}}} &\coloneqq \ubMarker{a} \leq \lbMarker{b} &
\ubMarker{\seval{e_1 \leq e_2}{\tilde{\varphi}}} &\coloneqq \lbMarker{a} \leq \ubMarker{b} \\
\lbMarker{\seval{e_1=e_2}{\tilde{\varphi}}} &\coloneqq (\ubMarker{a}=\lbMarker{b} \wedge \ubMarker{b}=\lbMarker{a} ) &
\ubMarker{\seval{e_1=e_2}{\tilde{\varphi}}} &\coloneqq \lbMarker{a} \leq \ubMarker{b} \wedge \lbMarker{b} \leq \ubMarker{a}
\end{align*}\\[-10mm]
\begin{align*}
\lbMarker{\seval{\ifte{e_1}{e_2}{e_3}}{\tilde{\varphi}}} &\coloneqq
\begin{cases}
\lbMarker{b} & \text{if } \lbMarker{a}= \ubMarker{a}=\top \\
\lbMarker{c} & \text{if } \lbMarker{a}= \ubMarker{a}=\bot \\
\min(\lbMarker{b},\lbMarker{c}) & \text{else}
\end{cases}\\
\ubMarker{\seval{\ifte{e_1}{e_2}{e_3}}{\tilde{\varphi}}} &\coloneqq
\begin{cases}
\ubMarker{b} & \text{if } \lbMarker{a}= \ubMarker{a}=\top \\
\ubMarker{c} & \text{if } \lbMarker{a}= \ubMarker{a}=\bot \\
\max(\ubMarker{b},\ubMarker{c}) & \text{else}
\end{cases}
\end{align*}
}
\end{Definition}
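The following is a minimal Python sketch (illustrative only, not part of the formal development; helper names are ad hoc) of this semantics for a few operations, representing a range-annotated value as a triple \texttt{(lb, sg, ub)}.
\begin{verbatim}
# Range-annotated evaluation for +, *, <=, and if-then-else.
def radd(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def rmul(a, b):
    prods = [a[0]*b[0], a[0]*b[2], a[2]*b[0], a[2]*b[2]]
    return (min(prods), a[1]*b[1], max(prods))

def rleq(a, b):
    # certainly true only if a's upper bound is below b's lower bound
    return (a[2] <= b[0], a[1] <= b[1], a[0] <= b[2])

def rif(c, t, e):
    if c[0] and c[2]:            # condition is certainly true
        return t
    if not c[0] and not c[2]:    # condition is certainly false
        return e
    return (min(t[0], e[0]), t[1] if c[1] else e[1], max(t[2], e[2]))

x = (0, 2, 3)                    # bounds the incomplete valuation {1, 2, 3}
y = (-1, -1, 4)
print(rmul(x, y))                # (-3, -2, 12)
\end{verbatim}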
\iftechreport{
\subsection{Preservation of Bounds}\label{sec:range-expr-eval-preserves-bounds}
}
Assuming that an input range-annotated valuation bounds an incomplete valuation, we need to prove that the output of range-annotated expression evaluation also bounds the possible outcomes.
\begin{Definition
\label{def:bounding-vals}
A value $c \in {\dataDomain_{\rangeName}}$
bounds a set of values $S \subseteq \mathbb{D}$ if:
\begin{align*}
&\forall c_i \in S: \lbMarker{c} \leq c_i \leq \ubMarker{c}
&&\exists c_i \in S: c_i = \bgMarker{c}
\end{align*}
\end{Definition}
\begin{Theorem
\label{theo:expr-bound}
Let $\tilde{\varphi}$ be a range-annotated valuation that bounds an incomplete valuation ${\Phi}$ for an expression $e$,
then $\seval{e}{\tilde{\varphi}}$ bounds $\seval{e}{{\Phi}}$.
\end{Theorem}
\ifnottechreport{
\begin{proof}[Proof Sketch]
Proven by straightforward induction over the structure of formulas. We present the full proof in \cite{techreport}.
\end{proof}
}
\iftechreport{
\begin{proof}
We prove this theorem through induction over the structure of an expression under the assumption that $\tilde{\varphi}$ bounds ${\Phi}$.
\proofpara{Base case}
If $e \coloneqq c$ for a constant $c$, then $\lbMarker{e} = \bgMarker{e} = \ubMarker{e} = c$ which is also the result of $e$ in any possible world of ${\Phi}$. If $e \coloneqq x$ for a variable $x$, then since $\tilde{\varphi}$ bounds ${\Phi}$, the value of $x$ in any possible world is bounded by $\tilde{\varphi}(x)$.
\proofpara{Induction step}
Assume that for expressions $e_1$, $e_2$, and $e_3$, we have that their results under ${\Phi}$ are bounded by their result under $\tilde{\varphi}$:
\begin{align*}
\forall i \in \{1,2,3\}: \forall c \in \seval{e_i}{{\Phi}}: \lbMarker{\seval{e_i}{\tilde{\varphi}}} \leq c \leq \ubMarker{\seval{e_i}{\tilde{\varphi}}}\\
\exists \varphi \in {\Phi}: \forall i \in \{1,2,3\}: \bgMarker{\seval{e_i}{\tilde{\varphi}}} = \seval{e_i}{\varphi}
\end{align*}
Note that the second condition trivially holds since $\bgMarker{\seval{e}{\tilde{\varphi}}}$ was defined as applying deterministic expression semantics to $\bgMarker{\tilde{\varphi}}$. We, thus, only have to prove that the lower and upper bounds are preserved for all expressions $e$ that combine these expressions using one of the scalar, conditional, or logical operators.
\proofpara{$e \coloneqq e_1 + e_2$}
Inequalities are preserved under addition. Thus, for any $\varphi \in {\Phi}$ we have $\lbMarker{\seval{e_1}{\tilde{\varphi}}} + \lbMarker{\seval{e_2}{\tilde{\varphi}}} \leq \seval{e_1}{\varphi} + \seval{e_2}{\varphi} \leq \ubMarker{\seval{e_1}{\tilde{\varphi}}} + \ubMarker{\seval{e_2}{\tilde{\varphi}}}$.
\proofpara{$e \coloneqq e_1 \cdot e_2$}
We distinguish sixteen cases based on which of $\lbMarker{\seval{e_1}{\tilde{\varphi}}}$, $\lbMarker{\seval{e_2}{\tilde{\varphi}}}$, $\ubMarker{\seval{e_1}{\tilde{\varphi}}}$, and $\ubMarker{\seval{e_2}{\tilde{\varphi}}}$ are negative. For instance, if all numbers are positive, then clearly $\lbMarker{\seval{e_1}{\tilde{\varphi}}} \cdot \lbMarker{\seval{e_2}{\tilde{\varphi}}} \leq \seval{e_1}{\varphi} \cdot \seval{e_2}{\varphi}$. While there are sixteen cases, there are only four possible combinations of lower and upper bounds we have to consider. Thus, if we take the minimal (maximal) value across all these cases, we get a lower (upper) bound on $e$.
\proofpara{$e \coloneqq \frac{1}{e_1}$}
For any pair of numbers $c_1$ and $c_2$ that are either both positive or both negative, we have $c_1 \leq c_2$ implies $\frac{1}{c_1} \geq \frac{1}{c_2}$. Thus, $\frac{1}{\ubMarker{a}}$ is a lower bound on $\frac{1}{c}$ for any $c$ bounded by $a$. Analogously, $\frac{1}{\lbMarker{a}}$ is an upper bound.
\proofpara{$e \coloneqq e_1 \wedge e_2$ and $e \coloneqq e_1 \vee e_2$}
Both $\vee$ and $\wedge$ are monotone in their arguments wrt.\ the order $\bot \ordOf{\semB} \top$. Thus, applying these operations to combine lower (upper) bounds preserves these bounds.
\proofpara{$e \coloneqq \neg\,e_1$}
We distinguish three cases: (i) $\seval{e_1}{\varphi} = \bot$ for all $\varphi \in {\Phi}$; (ii) $\seval{e_1}{\varphi} = \top$ for some $\varphi \in {\Phi}$ and $\seval{e_1}{\varphi} = \bot$ for some $\varphi \in {\Phi}$; and (iii) $\seval{e_1}{\varphi} = \top$ for all $\varphi \in {\Phi}$. In case (i), for $\tilde{\varphi}$ to bound the input, either $\seval{e_1}{\tilde{\varphi}} = \uv{\bot}{\bot}{\bot}$, in which case $\seval{e}{\tilde{\varphi}} = \uv{\top}{\top}{\top}$, or $\seval{e_1}{\tilde{\varphi}} = \uv{\bot}{\bot}{\top}$ and $\seval{e}{\tilde{\varphi}} = \uv{\bot}{\top}{\top}$. We have $\seval{e}{\varphi} = \top$ for all $\varphi \in {\Phi}$ and, thus, in either case $\seval{e}{\tilde{\varphi}}$ bounds $\seval{e}{{\Phi}}$. In case (ii), $\lbMarker{\seval{e}{\tilde{\varphi}}} = \bot$ and $\ubMarker{\seval{e}{\tilde{\varphi}}} = \top$, which trivially bound $\seval{e}{{\Phi}}$. The last case is symmetric to (i).
\proofpara{$e \coloneqq e_1 \leq e_2$}
Recall that $\bot \leq \top$. $e_1 \leq e_2$ is guaranteed to evaluate to true in every possible world if the upper bound of $e_1$ is lower than or equal to the lower bound of $e_2$. In this case it is safe to set $\lbMarker{\seval{e}{\tilde{\varphi}}} = \top$. Otherwise, there may exist a possible world where $e_1 \leq e_2$ evaluates to false and we have to set $\lbMarker{\seval{e}{\tilde{\varphi}}} = \bot$. Similarly, if the lower bound of $e_1$ is larger than the upper bound of $e_2$ then $e_1 \leq e_2$ evaluates to false in every possible world and $\ubMarker{\seval{e}{\tilde{\varphi}}} = \bot$ is an upper bound. Otherwise, there may exist a world where $e_1 \leq e_2$ holds and we have to set $\ubMarker{\seval{e}{\tilde{\varphi}}} = \top$.
\proofpara{$\ifte{e_1}{e_2}{e_3}$}
When $e_1$ is certainly true ($\lbMarker{\seval{e_1}{\tilde{\varphi}}} = \ubMarker{\seval{e_1}{\tilde{\varphi}}} = \top$) or certainly false ($\lbMarker{\seval{e_1}{\tilde{\varphi}}} = \ubMarker{\seval{e_1}{\tilde{\varphi}}} = \bot$) then the bounds $e_2$ (certainly true) or $e_3$ (certainly false) are bounds for $e$. Otherwise, $e$ may evaluate to $e_2$ in some worlds and to $e_3$ in others. Taking the minimum (maximum) of the bounds for $e_2$ and $e_3$ is guaranteed to bound $e$ from below (above) in any possible world.
We conclude that the result of range-annotated expression evaluation under $\tilde{\varphi}$ which bounds an incomplete valuation ${\Phi}$ bounds the result of incomplete expression evaluation for any expression $e$.
\end{proof}
}
Range-annotated expression evaluation is thus bound-preserving for any totally ordered domain. \iftechreport{
We can add support for additional scalar operations as long as a bound-preserving evaluation semantics can be defined.
An important class of scalar operations are operations $\diamond$ that preserve the order over the domain, i.e., for any domain values $a \leq b$ and $c \leq d$ we have $a \diamond c \leq b \diamond d$. For instance, this is the case for addition over natural numbers. For any such operation, point-wise application to range-annotated domain values is bound preserving. }
For categorical values where no sensible order can be defined, we impose an arbitrary order.
\iftechreport{Note that in the worst-case, we can just annotate a value with the range covering the whole domain to indicate that it is completely uncertain.}
\section{AU-DBs\xspace}
\label{sec:uaa-range}
We define attribute-annotated uncertain databases\xspace (AU-DBs\xspace) as a special type of $\ensuremath{\mathcal{K}}\xspace$-relation over range-annotated domains and demonstrate how to bound an incomplete $\ensuremath{\mathcal{K}}\xspace$-relation using this model.
Afterwards, we define a metric for how precise the bounds of an incomplete $\ensuremath{\mathcal{K}}\xspace$-database encoded by an AU-DB\xspace are, and define a query semantics for AU-DBs\xspace that preserves bounds.
We limit the discussion to semiring $\ensuremath{\mathbb{N}}\xspace$ (bag semantics). See~\cite{techreport} for the generalization to other semirings including $\ensuremath{\mathbb{B}}\xspace$ (set semantics).
\subsection{$\uaaK{\semN}$-relations}\label{sec:uaadbs}
In addition to allowing for range-annotated values, AU-DBs\xspace also differ from UA-DBs\xspace in that they encode an upper bound of the possible annotation of each tuple. We use $\semq{\semN}$ annotations to encode for each tuple: a lower bound on the certain multiplicity of the tuple, the multiplicity of the tuple in the SGW\xspace, and an over-approximation of the tuple's possible multiplicity.
\begin{Definition
\label{def:uaa-tuple-annot}
The
range-annotated multiplicity domain $\domN_{{\uaaName}}$ is defined as:\\[-6mm]
$$\{ \ut{\lbMarker{k}}{k}{\ubMarker{k}} \mid \lbMarker{k}, k, \ubMarker{k} \in \ensuremath{\mathbb{N}}\xspace \wedge \lbMarker{k} \leq k \leq \ubMarker{k} \}$$
We use $\uaaK{\semN}$ to denote semiring $\semq{\semN}$ restricted to elements from $\domN_{{\uaaName}}$.
\end{Definition}
Note that $\uaaK{\semN}$ is a semiring since when combining two elements of
$\uaaK{\semN}$ with $\addOf{\semq{\semN}}$ and $\multOf{\semq{\semN}}$, the result
$\ut{\lbMarker{k}}{\bgMarker{k}}{\ubMarker{k}}$ fulfills the requirement $\lbMarker{k} \leq \bgMarker{k} \leq \ubMarker{k}$. This is
the case because addition and multiplication in $\ensuremath{\mathbb{N}}\xspace$ preserve
order~\cite{FH19}, and these operations in $\semq{\semN}$ are pointwise
application of $+$ and $\cdot$.
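As a minimal illustration (not part of the formal development), the following Python sketch implements $\addOf{\semq{\semN}}$ and $\multOf{\semq{\semN}}$ on such triples; pointwise $+$ and $\cdot$ preserve $\lbMarker{k} \leq \bgMarker{k} \leq \ubMarker{k}$ because both operations are monotone over the naturals.
\begin{verbatim}
# Addition and multiplication of range-annotated multiplicities (lb, sg, ub).
def k_add(k1, k2):
    return tuple(a + b for a, b in zip(k1, k2))

def k_mult(k1, k2):
    return tuple(a * b for a, b in zip(k1, k2))

print(k_add((2, 2, 3), (2, 3, 3)))   # (4, 5, 6)
print(k_mult((2, 2, 3), (0, 1, 1)))  # (0, 2, 3)
\end{verbatim}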
\begin{Definition
\label{def:uaa-rels}
A $\uaaK{\semN}$-relation of arity $n$ is a function $\mathbf{\rela}: {\dataDomain_{\rangeName}}^n \rightarrow \uaaK{\semN}$ with finite support $\{ \textbf{\sffamily \tup} \mid \mathbf{\rela}(\textbf{\sffamily \tup}) \neq \ut{0}{0}{0} \}$.
\end{Definition}
\subsection{Extracting Selected-Guess Worlds\xspace}\label{sec:uaab-get-bgw}
$\uaaK{\semN}$-relations permit multiple tuples with identical values in the selected-guess world\xspace (SGW\xspace).
We extract the selected-guess world\xspace encoded by a $\uaaK{\semN}$-relation by grouping tuples based on their SG\xspace values and summing up their SG\xspace multiplicities.
\begin{Definition}\label{def:bool-to-K}
We lift operator $sg$ from values to tuples: $sg: {\dataDomain_{\rangeName}}^n \to \mathbb{D}^n$, i.e., given an AU-DB\xspace tuple $\textbf{\sffamily \tup} = \tuple{c_1, \ldots, c_n}$,
$\bgOf{\textbf{\sffamily \tup}} \coloneqq \tuple{\bgMarker{c_1}, \ldots, \bgMarker{c_n}}$. For a $\uaaK{\semN}$-relation $\mathbf{\rela}$, $\bgOf{\mathbf{\rela}}$, the SGW\xspace encoded by $\mathbf{\rela}$, is then defined as:\\[-5mm]
$$\bgOf{\mathbf{\rela}}(t) \coloneqq \sum_{\bgOf{\textbf{\sffamily \tup}} = t} \bgOf{\left(\mathbf{\rela}(\textbf{\sffamily \tup})\right)}$$
\end{Definition}
\begin{Example}\label{ex:audb-instance}
\Cref{table:UAAR_inst} shows an instance of a $\uaaK{\semN}$-relation $R$ where each attribute is a triple showing the lower bound, selected-guess\xspace, and upper bound of the value. Each tuple is annotated by a triple showing the lower multiplicity bound, selected-guess\xspace multiplicity, and the upper multiplicity bound of the tuple.
For example, the first tuple represents a tuple $\tuple{1,1}$ that appears at least twice in every possible world (its lower multiplicity bound is $2$), appears twice in the SGW\xspace, and appears no more than thrice in any world.
\Cref{table:UAAR_bg} shows the SGW\xspace encoded by this AU-DB\xspace which is computed by summing up the multiplicities of tuples with identical SG\xspace values. For instance, the first two tuples both represent tuple $\tuple{1,1}$ in the SGW\xspace and their annotations sum up to $5$ (i.e., the tuple $\tuple{1,1}$ appears five times).
Conversely, the first two tuples of $D_2$ match different AU-DB\xspace tuples.
\end{Example}
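The following is a minimal Python sketch (illustrative only; helper names are ad hoc) of this extraction for the instance from \Cref{table:UAAR_inst}: group the AU-DB\xspace tuples by their SG\xspace values and sum their SG\xspace multiplicities.
\begin{verbatim}
# Extracting the selected-guess world from an AU-DB relation, where each
# attribute value and each multiplicity is a (lb, sg, ub) triple.
from collections import defaultdict

def sg_world(audb):
    world = defaultdict(int)
    for rtup, (lb, sg, ub) in audb:
        det_tuple = tuple(v[1] for v in rtup)   # keep the SG component
        world[det_tuple] += sg
    return dict(world)

R = [(((1, 1, 1), (1, 1, 1)), (2, 2, 3)),
     (((1, 1, 1), (1, 1, 3)), (2, 3, 3)),
     (((1, 2, 2), (3, 3, 3)), (1, 1, 1))]
print(sg_world(R))   # {(1, 1): 5, (2, 3): 1}
\end{verbatim}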
\begin{figure}[t]
\centering
\begin{minipage}{0.465\linewidth}
\begin{subtable}{\linewidth}
\centering
\begin{tabular}{ c|cc}
\textbf{A} & \textbf{B} & \underline{\semq{\semN}} \\
\cline{1-2}
$\uv{1}{1}{1}$ & $\uv{1}{1}{1}$ & \ut{2}{2}{3}\\
$\uv{1}{1}{1}$ & $\uv{1}{1}{3}$ & \ut{2}{3}{3}\\
$\uv{1}{2}{2}$ & $\uv{3}{3}{3}$ & \ut{1}{1}{1}\\
\end{tabular}
\caption{Example AU-DB\xspace instance}
\label{table:UAAR_inst}
\end{subtable}
\end{minipage}
\begin{minipage}{0.505\linewidth}
\begin{subtable}{\linewidth}
\centering
\begin{minipage}{0.490\linewidth}
\centering
\underline{$D_1$} \\
\begin{tabular}{lc|cc}
& \textbf{A} & \textbf{B} & \underline{$\ensuremath{\mathbb{N}}\xspace$} \\
\cline{2-3}
$t_1$ & $1$ & $1$ & 5 \\
$t_2$ & $2$ & $3$ & 1 \\
\end{tabular}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\underline{$D_2$} \\
\begin{tabular}{lc|cc}
& \textbf{A} & \textbf{B} & \underline{$\ensuremath{\mathbb{N}}\xspace$} \\
\cline{2-3}
$t_3$ & $1$ & $1$ & 2 \\
$t_4$ & 1 & 3 & 2 \\
$t_5$ & $2$ & $3$ & 1 \\
\end{tabular}
\end{minipage}
\caption{Worlds $D_1$ (SGW\xspace) and $D_2$}
\label{table:UAAR_bg}
\end{subtable}
\end{minipage}\\[-3mm]
\caption{Example AU-DB\xspace relation and two worlds it bounds}
\label{table:UAAR_overall}
\end{figure}
\subsection{Encoding Bounds}\label{sec:uaadb-bounds}
We now define what it means for an AU-DB\xspace to bound an incomplete $\ensuremath{\mathbb{N}}\xspace$-relation. For that, we first define bounding of deterministic tuples by range-annotated tuples.
\begin{Definition
\label{def:bounding-tuples}
Let $\textbf{\sffamily \tup}$ be a range-annotated tuple with schema $\tuple{a_1, \ldots, a_n}$ and $t$ be a tuple with the same schema as $\textbf{\sffamily \tup}$. We say that $\textbf{\sffamily \tup}$ bounds $t$ written as $t \sqsubseteq \textbf{\sffamily \tup}$ iff\\[-3mm]
%
\[
\forall{i \in \{1, \ldots ,n\}}:
\lbMarker{\textbf{\sffamily \tup}.a_i} \leq t.a_i \leq \ubMarker{\textbf{\sffamily \tup}.a_i}
\]
\end{Definition}
One AU-DB\xspace tuple may bound multiple conventional tuples and vice versa. We define \textit{tuple matchings} as a way to match the multiplicities of tuples of a $\uaaK{\semN}$-database (or relation) with that of one possible world of an incomplete $\ensuremath{\mathbb{N}}\xspace$-database (or relation).
Based on tuple matchings we then define how to bound possible worlds.
\begin{Definition
\label{def:tuple-matching}
Let $\mathbf{\rela}$ be an $n$-ary AU-relation\xspace and $R$ an $n$-ary relation. A \emph{tuple matching} $\mathcal{TM}$ for $\mathbf{\rela}$ and $R$ is a function $({\dataDomain_{\rangeName}})^{n} \times \mathbb{D}^n \to \ensuremath{\mathbb{N}}\xspace$
s.t. the following conditions hold:\\[-3mm]
\begin{align*}
&\forall \textbf{\sffamily \tup} \in {\dataDomain_{\rangeName}}^n: \forall t \not\sqsubseteq \textbf{\sffamily \tup}: \mathcal{TM}(\textbf{\sffamily \tup}, t) = 0
&\forall t \in \mathbb{D}^n: \hspace*{-1mm}\sum_{\textbf{\sffamily \tup} \in {\dataDomain_{\rangeName}}^n} \hspace*{-1mm}\mathcal{TM}(\textbf{\sffamily \tup}, t)=R(t)
\end{align*}
\end{Definition}
A tuple matching distributes the multiplicity of a tuple from $R$ over one or more matching tuples from $\mathbf{\rela}$. Multiple tuples from an AU-DB\xspace may encode the same tuple when the multidimensional rectangles of their attribute-level range annotations overlap, as with the first two tuples of the AU-DB\xspace in \Cref{table:UAAR_overall} and the SGW\xspace.
\begin{Definition}\label{def:bounding-worlds}
Given an n-ary AU-DB\xspace relation $\mathbf{\rela}$ and an n-ary deterministic $\ensuremath{\mathbb{N}}\xspace$-relation $R$ (a possible world), relation $\mathbf{\rela}$ bounds $R$ (denoted $R \sqsubset \mathbf{\rela}$) iff there exists a tuple matching $\mathcal{TM}$ for $\mathbf{\rela}$ and $R$ s.t. \\[-5mm]
\begin{align*}
&\forall \textbf{\sffamily \tup} \in {\dataDomain_{\rangeName}}^n:\sum_{t \in \mathbb{D}^n} \mathcal{TM}(\textbf{\sffamily \tup},t) \geq \lbMarker{\mathbf{\rela}(\textbf{\sffamily \tup})}
\;\text{\textbf{and}}\;
\sum_{t \in \mathbb{D}^n} \mathcal{TM}(\textbf{\sffamily \tup},t) \leq \ubMarker{\mathbf{\rela}(\textbf{\sffamily \tup})}
\end{align*}
\end{Definition}
Having defined when a possible world is bound by a $\uaaK{\semN}$-relation, we are ready to define bounding of incomplete $\ensuremath{\mathbb{N}}\xspace$-relations.
\begin{Definition}\label{def:bounding-incomplete-dbs}
Given an incomplete $\ensuremath{\mathbb{N}}\xspace$-relation $\mathcal{R}$ and a $\uaaK{\semN}$-relation $\mathbf{\rela}$, we say that $\mathbf{\rela}$ bounds $\mathcal{R}$, written as $\mathcal{R} \sqsubset \mathbf{\rela}$
iff
\begin{align*}
&\forall R \in \mathcal{R}: R \sqsubset \mathbf{\rela}
&&\exists R \in \mathcal{R}: R = \bgOf{\mathbf{\rela}}
\end{align*}
\end{Definition}
\noindent These definitions are extended to databases in the obvious way.
\begin{Example}\label{ex:bounding-incomplete-dbs}
Consider the AU-DB\xspace of \Cref{ex:audb-instance} and the worlds of~\Cref{table:UAAR_bg}.
The AU-DB\xspace bounds these worlds, since there exist tuple matchings that provide a lower and an upper bound for the annotations of the tuples of each world. For instance, denoting the tuples from this example as
%
\begin{align*}
\textbf{\sffamily \tup}_1 & \coloneqq \tuple{\uv{1}{1}{1}, \uv{1}{1}{1}}
& \textbf{\sffamily \tup}_2 & \coloneqq \tuple{\uv{1}{1}{1}, \uv{1}{1}{3}}\\
\textbf{\sffamily \tup}_3 & \coloneqq \tuple{\uv{1}{2}{2}, \uv{3}{3}{3}}
\end{align*} %
\iftechreport{Tuple matchings $\mathcal{TM}_1$ and $\mathcal{TM}_2$, shown below, bound $D_1$ and $D_2$, respectively.}
\ifnottechreport{Tuple matching $\mathcal{TM}_1$ (shown below) bounds $D_1$.}
\begin{align*}
\mathcal{TM}_1(\textbf{\sffamily \tup}_1, t_1) & = 2
& \mathcal{TM}_1(\textbf{\sffamily \tup}_2, t_1) & = 3
& \mathcal{TM}_1(\textbf{\sffamily \tup}_3, t_1) & = 0 \\
\mathcal{TM}_1(\textbf{\sffamily \tup}_1, t_2) & = 0
& \mathcal{TM}_1(\textbf{\sffamily \tup}_2, t_2) & = 0
& \mathcal{TM}_1(\textbf{\sffamily \tup}_3, t_2) & = 1
\iftechreport{\\[2mm]
\mathcal{TM}_2(\textbf{\sffamily \tup}_1, t_3) & = 2
& \mathcal{TM}_2(\textbf{\sffamily \tup}_2, t_3) & = 0
& \mathcal{TM}_2(\textbf{\sffamily \tup}_3, t_3) & = 0 \\
\mathcal{TM}_2(\textbf{\sffamily \tup}_1, t_4) & = 0
& \mathcal{TM}_2(\textbf{\sffamily \tup}_2, t_4) & = 2
& \mathcal{TM}_2(\textbf{\sffamily \tup}_3, t_4) & = 0 \\
\mathcal{TM}_2(\textbf{\sffamily \tup}_1, t_5) & = 0
& \mathcal{TM}_2(\textbf{\sffamily \tup}_2, t_5) & = 0
& \mathcal{TM}_2(\textbf{\sffamily \tup}_3, t_5) & = 1
}
\end{align*}
\end{Example}
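The following is a minimal Python sketch (illustrative only, not part of the formal development) that checks the conditions of \Cref{def:tuple-matching} and \Cref{def:bounding-worlds} for $\mathcal{TM}_1$ and world $D_1$ from the example above.
\begin{verbatim}
# Verify that a tuple matching witnesses that an AU-DB relation bounds a world.
def bounds_tuple(rtup, t):
    return all(lo <= v <= hi for (lo, _, hi), v in zip(rtup, t))

def is_valid_matching(tm, audb, world):
    # only matching tuples may receive non-zero multiplicity
    if any(m > 0 and not bounds_tuple(audb[i][0], t)
           for (i, t), m in tm.items()):
        return False
    # the matching has to account for the full multiplicity of every tuple
    if any(sum(m for (i, s), m in tm.items() if s == t) != mult
           for t, mult in world.items()):
        return False
    # the total assigned to each AU-DB tuple lies within its [lb, ub] bounds
    return all(lb <= sum(m for (j, _), m in tm.items() if j == i) <= ub
               for i, (_, (lb, _, ub)) in enumerate(audb))

audb = [(((1, 1, 1), (1, 1, 1)), (2, 2, 3)),
        (((1, 1, 1), (1, 1, 3)), (2, 3, 3)),
        (((1, 2, 2), (3, 3, 3)), (1, 1, 1))]
D1 = {(1, 1): 5, (2, 3): 1}
TM1 = {(0, (1, 1)): 2, (1, (1, 1)): 3, (2, (2, 3)): 1}
print(is_valid_matching(TM1, audb, D1))   # True
\end{verbatim}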
\subsection{Tightness of Bounds}
\label{sec:approximation-factor}
Given an incomplete database, there are many possible AU-DBs\xspace that bound it of varying tightness. For instance, both $\textbf{\sffamily \tup}_1 \coloneqq \tuple{\uv{1}{15}{100}}$ and $\textbf{\sffamily \tup}_2 \coloneqq \tuple{\uv{13}{14}{15}}$ bound tuple $\tuple{15}$, but intuitively the bounds provided by $\textbf{\sffamily \tup}_2$ are tighter. We develop a metric for the tightness of the approximation provided by an AU-DB\xspace and prove that finding an AU-DB\xspace that maximizes tightness
is intractable. Given two AU-DBs\xspace ${\mathbf{\db}}$ and ${\mathbf{\db}}'$ that both bound an incomplete $\ensuremath{\mathbb{N}}\xspace$-database $\mathcal{D}$, ${\mathbf{\db}}$ is a tighter bound than ${\mathbf{\db}}'$ if the set of worlds bound by ${\mathbf{\db}}$ is a subset of the set of worlds bound by ${\mathbf{\db}}'$.
\begin{Definition
\label{def:bound-tightness}
Consider two $\uaaK{\semN}$-databases ${\mathbf{\db}}$ and ${\mathbf{\db}}'$ over the same schema
$S$. We say that ${\mathbf{\db}}$ is at least as tight as ${\mathbf{\db}}'$, written as ${\mathbf{\db}} \preceq_{\rangeName} {\mathbf{\db}}'$, if for all $\ensuremath{\mathbb{N}}\xspace$-databases $D$ with schema $S$ we have:
\begin{align*}
D \sqsubset {\mathbf{\db}} \rightarrow D \sqsubset {\mathbf{\db}}'
\end{align*}
We say that ${\mathbf{\db}}$ is strictly tighter than ${\mathbf{\db}}'$, written as ${\mathbf{\db}} \prec_{\rangeName} {\mathbf{\db}}'$ if ${\mathbf{\db}} \preceq_{\rangeName} {\mathbf{\db}}'$ and there exists $D \sqsubset {\mathbf{\db}}'$ with $D \not\sqsubset {\mathbf{\db}}$. Furthermore, we call ${\mathbf{\db}}$ a maximally tight bound for an incomplete $\ensuremath{\mathbb{N}}\xspace$-database $\mathcal{D}$ if:
\begin{align*}
\mathcal{D} &\sqsubset {\mathbf{\db}}
&\not \exists {\mathbf{\db}}':\; \mathcal{D} \sqsubset {\mathbf{\db}}' \wedge {\mathbf{\db}}' \prec_{\rangeName} {\mathbf{\db}}
\end{align*}
\end{Definition}
This notion of tightness is well-defined even if the data domain $\mathbb{D}$ is infinite.
In general, AU-DBs\xspace that provide tighter bounds are preferable. However, computing a maximally tight bound is intractable.
\begin{Theorem}\label{def:intractability-of-tight-bounds}
Let $\mathcal{D}$ be an incomplete $\ensuremath{\mathbb{N}}\xspace$-database encoded as a C-table\xspace~\cite{DBLP:journals/jacm/ImielinskiL84}.
Computing a maximally tight bound ${\mathbf{\db}}$ for $\mathcal{D}$ is NP-hard\xspace.
\end{Theorem}
\begin{proof}[Proof Sketch]
\ifnottechreport{
Proven by reduction from 3-colorability~\cite{techreport}.
We construct a C-table with one tuple and encode the 3-colorability of the input graph $G$ in the local condition of the tuple, i.e., the tuple is possible iff $G$ is 3-colorable.
The maximally tight upper multiplicity bound for this tuple is $1$ iff graph $G$ is 3-colorable.
}
\iftechreport{
We prove the claim through a reduction from 3-colorability.
}
\end{proof}
\section{Introduction}
One of the important motivations for developing quantum computers is their potential to simulate strongly correlated many-body systems efficiently \cite{aspuru-guzik_2005, whitfield_biamonte_aspuru-guzik_2011}. Algorithms that exactly diagonalize the electronic Hamiltonian, known as the full configuration interaction approach, scale exponentially with system size because the dimension of the Hilbert space grows exponentially, making them applicable to only very small problems \cite{fci} on classical computers. The configuration interaction (CI) method offers an approximate solution by truncating the Hilbert space to include only the most important basis states. However, the energy calculated by the truncated CI method does not scale properly with the size of the system when applied to molecules of varying sizes, nor does it predict the dissociation energy correctly, because it cannot produce factorized atomic states. The coupled cluster (CC) method addresses these issues by being both size consistent and size extensive. It is also memory efficient because it does not explicitly construct the energy eigenstate. Instead, the set of amplitudes for the CC ansatz is calculated iteratively via the so-called amplitude equations \cite{bartlett_purvis_1978, cc, purvis_bartlett_1982}, which correspond to zeroing out the row (or column) of the Hamiltonian matrix that corresponds to the initial single-reference state. The CC method with single, double, and (perturbative) triple excitations is regarded as the ``gold standard'' of computational chemistry \cite{aspuruguzik_uccreview}.
Quantum computers have been proposed as being capable of solving a set of quantum chemistry problems that are otherwise difficult or very challenging on classical machines: namely, molecules that contain both weakly and strongly correlated electrons. One of the most promising algorithms for the noisy intermediate-scale quantum (NISQ) era is the variational quantum eigensolver (VQE), where the trial wave function is prepared on the quantum hardware and the expectation value of the energy is measured there as well; the parameters in the eigenstate are optimized variationally on classical machines \cite{vqe, preskill_2018}. The conventional coupled-cluster ansatz is given as $\ket{\psi_{CC}}=e^{\hat T}\ket{\psi_{ref}}$, where $\ket{\psi_{ref}}$ is a trial wave function (often chosen to be the single-reference Hartree-Fock state), and $\hat T=\sum_{k=1}^n \hat T_k$ is the cluster operator consisting of up to rank-$n$ excitations ($n$ electrons are removed from the Hartree-Fock state and replaced by $n$ electrons in virtual orbitals). The excitation operator is given as
\begin{equation}
\hat T_k =\frac{1}{(k!)^2}\sum_{ij\cdots} ^{occ} \sum_{ab\cdots} ^{vir} t_{ij\cdots} ^{ab\cdots} \hat A_{ij\cdots} ^{ab\cdots},
\end{equation}
and $\hat A_{ij\cdots} ^{ab\cdots} = \hat a_a^\dagger \hat a_b^\dagger \cdots \hat a_j \hat a_i$, where $\hat a_a^\dagger$ is the creation operator acting on virtual orbital $a$ and $\hat a_i$ is the annihilation operator acting on occupied orbital $i$. Traditionally, the CC method employs a similarity-transformed Hamiltonian to obtain a set of equations to determine the amplitudes $t$:
\begin{align}
\bra{\psi_{ref}} e^{-\hat T}\hat H e^{\hat T} \ket{\psi_{ref}} = E \label{eq:amplitude1}\\
\bra{\psi_\mu} e^{-\hat T}\hat H e^{\hat T} \ket{\psi_{ref}} = 0 \label{eq:amplitude2}
\end{align}
where $\bra{\psi_\mu} = \bra{\psi_{ref}}\hat A_\mu$. In practice, this set of amplitude equations is solved iteratively, which yields the energy without needing to construct the energy eigenstate. The total number of amplitude equations is given by the number of amplitudes in the expansion of the $\hat T$ operator, which is much smaller than the total number of Slater determinants in the $\ket{\psi_{CC}}$ (which is typically exponentially larger). The properties of size consistency and size extensivity for the CC ansatz stem from the facts that the similarity-transformed Hamiltonian $e^{-\hat T}\hat H e^{\hat T}$ is additively separable and the term $e^{T}$ is multiplicatively separable. Notice that the electronic Hamiltonian for the molecule (in second quantization) is given by
\begin{equation}
H = \sum_{ij} h_{ij}\hat a_i^\dagger \hat a_j +\frac{1}{2}\sum_{ijkl}g_{ijkl} \hat a_i^\dagger\hat a_j^\dagger \hat a_k \hat a_l,
\end{equation}
where $h_{ij}$ are the one-electron integrals and $g_{ijkl}$ are the two-electron integrals:
\begin{align}
h_{ij} = \int dr_1 \phi_i^* (r_1) \Bigg(-\frac{1}{2}\nabla_{r_1} ^2 - \sum_{I=1} ^M \frac{Z_I}{R_{1I}} \Bigg)\phi_j (r_1) \\
g_{ijkl} = \int dr_1 dr_2 \phi_i^*(r_1)\phi_j^*(r_2)\frac{1}{r_{12}}\phi_k(r_1)\phi_l(r_2).
\end{align}
Here, $M$ is the number of atoms in the system, $Z_I$ are their atomic numbers, $R_{1I} = \abs{r_1 - R_I}$, $r_{12} = \abs{r_1 - r_2}$, and $\phi(r)$ are the single-particle optimized orbitals from the HF solution \cite{szabo_ostlund_2006, taketa_huzinaga_o-ohata_1966}. In order to solve the amplitude equations (\ref{eq:amplitude1}) and (\ref{eq:amplitude2}), we need to explicitly compute the similarity-transformed Hamiltonian. Using the Hadamard lemma, we can rewrite the transformed Hamiltonian as
\begin{align}
e^{-\hat T}\hat H e^{\hat T} = &\hat H + [\hat H, \hat T] + \frac{1}{2!} [[\hat H, \hat T],\hat T] + \frac{1}{3!}[[[\hat H, \hat T], \hat T],\hat T] \nonumber\\
+ &\frac{1}{4!}[[[[\hat H, \hat T], \hat T], \hat T], \hat T] + \cdots \label{eq:bch}
\end{align}
Conveniently, the series truncates at the fourth order due to the Hamiltonian having only one- and two-body interaction terms \cite{shavitt_bartlett_2009, cc} and the excitations always being from real to virtual orbitals. Traditionally, this projective method to determine the CC amplitudes is preferred over variational methods due to the non-unitarity of the $e^{\hat T}$ operator \cite{aspuruguzik_uccreview, cc}.
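To make the ingredients above concrete, the following is a minimal sketch (illustrative only, not taken from any of the cited works) of how the one- and two-electron integrals $h_{ij}$ and $g_{ijkl}$ might be obtained with the PySCF package for H$_2$; note that PySCF returns two-electron integrals in chemists' notation $(ij|kl)$, which must be permuted into the physicists' convention $\langle ij|kl\rangle$ used here.
\begin{verbatim}
# Hedged example: assumes PySCF is installed; H2 in a minimal STO-3G basis.
import numpy as np
from pyscf import gto, scf, ao2mo

mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")
mf = scf.RHF(mol).run()              # Hartree-Fock reference
C = mf.mo_coeff                      # AO -> MO coefficients

# one-electron integrals (kinetic + nuclear attraction) in the MO basis
h_ao = mol.intor("int1e_kin") + mol.intor("int1e_nuc")
h_mo = C.T @ h_ao @ C

# two-electron integrals (ij|kl) in the MO basis, restored to 4 indices,
# then permuted to physicists' notation <ij|kl> = (ik|jl)
n = C.shape[1]
eri_chem = ao2mo.restore(1, ao2mo.kernel(mol, C), n)
g_phys = eri_chem.transpose(0, 2, 1, 3)
\end{verbatim}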
Despite its success, the lack of unitarity prevents the CC operators from being implemented on quantum computers. This suggests using the unitary coupled-cluster (UCC) ansatz, whose cluster operator now includes the excitation \textit{minus} the deexcitation operator, $\hat T - \hat T^\dagger$ \cite{bartlett_kucharski_noga_1989, schaefer_2013}. Similar to the CC approximation, only the low-rank cluster operators such as singles and doubles are usually selected for the variational eigenstate ansatz; but for more strongly correlated systems, one expects that higher-rank factors will also be needed. In practice, a projective method like the one used in the CC calculation does not work with the UCC ansatz because the similarity-transformed Hamiltonian no longer truncates after the fourth term. Common strategies for carrying it out on classical computers include truncating the Hadamard lemma series at a fixed order \cite{bartlett_kucharski_noga_1989}, expanding the exponential operator in a power series and then truncating it when the higher-rank terms no longer change the eigenfunction \cite{cooper_knowles_2010}, and using an exact operator identity of the factorized form of the UCC to allow the wavefunction to be constructed in a tree structure \cite{chen_cheng_freericks_2021}. However, there exists no simple method to work directly with the UCC ansatz in its original form. Since we are working with non-commuting fermionic operators $\hat a_a^\dagger \hat a_b^\dagger \cdots \hat a_j \hat a_i - \hat a_i^\dagger \hat a_j^\dagger \cdots \hat a_b \hat a_a$ in the exponent, one common way to decompose such a function is to adopt a Trotter product formula:
\begin{equation}
\hat U_{UCC} = e^{\sum_k \theta_k (\hat T_k - \hat T_k^\dagger)} = \lim_{N\rightarrow \infty}\Bigg(\prod_k e^{\frac{\theta_k}{N}(\hat T_k - \hat T_k^\dagger)} \Bigg)^N.
\label{eq:ucctrotter}
\end{equation}
Another useful method is to express the ansatz in a factorized form, given by
\begin{equation}
\hat U^\prime_{UCC} = \prod_k e^{\theta_k (\hat T_k - \hat T_k^\dagger)},
\end{equation}
which corresponds to the first-order approximation of the Trotter product formula in Eq.~(\ref{eq:ucctrotter}). The benefit of only using the $N=1$ extreme case is two-fold: the quantum resources required to prepare the factorized UCC ansatz are much smaller than higher-order approximations and the Trotter errors of the first-order approximation can be ameliorated by the fact that the calculation is variational \cite{romero_babbush_mcclean_hempel_love_aspuru-guzik_2018, barkoutsos_2018}. Within the classical computational chemistry framework, work by Chen, \textit{et al.} \cite{chen_cheng_freericks_2021} created an algorithm using the factorized form of the UCC that produces significantly better results for strongly correlated systems and comparable results in terms of accuracy for weakly correlated systems.
To implement the factorized UCC ansatz on quantum computers, one needs to transform the cluster operators $\hat T - \hat T^\dagger$ expressed in the fermionic language into a spin language (via the Jordan-Wigner transformation, or other fermionic encodings). A common realization of this approach is to exactly simulate the individual exponentials of Pauli strings found after the JW transformation of $e^{\hat T - \hat T^\dagger}$ \cite{romero_babbush_mcclean_hempel_love_aspuru-guzik_2018, barkoutsos_2018}. This is possible because the different $2^{2n-2}$ Pauli strings (for a rank-$n$ UCC factor) commute with each other. In our previous work, we found a way of reducing the number of control-NOT (\textsc{Cnot}) gates in quantum circuits for the factorized UCC ansatz by introducing extra ancilla qubits \cite{xu_lee_freericks_2022}, with the largest reductions for the higher-rank factors. In this work, we introduce a method to directly simulate the sum of terms obtained from a hidden SU($2$) symmetry of the first-order Trotter product that greatly reduces the number of multi-qubit entanglement gates of factorized UCC circuits.
\section{Background}
\subsection{SU(2) identity for individual UCC factors}
Recall that the rank-$k$ cluster operator is defined as
\begin{equation}
\hat T_k =\frac{1}{(k!)^2}\sum_{ij\cdots} ^{occ} \sum_{ab\cdots} ^{vir} \theta_{ij\cdots} ^{ab\cdots} \Big(\hat A_{ij\cdots} ^{ab\cdots}-\hat A_{ab\cdots} ^{ij\cdots} \Big).
\end{equation}
The first two ranks are
\begin{align}
\hat T_1 &= \sum_{ia}\theta_i ^a \big( \hat a_a ^\dagger \hat a_i - \hat a_i ^\dagger \hat a_a\big) = \sum_{ia} \theta_i^a \big(\hat A_i^a - \hat A_a^i\big)\\
\hat T_2 &= \frac{1}{2}\sum_{ijab} \theta_{ij}^{ab} \big(\hat a_a ^\dagger \hat a_b ^\dagger \hat a_j \hat a_i-\hat a_i ^\dagger \hat a_j ^\dagger \hat a_b \hat a_a\big) \nonumber\\
&= \frac{1}{2}\sum_{ijab} \theta_{ij}^{ab} \Big(\hat A_{ij}^{ab} - \hat A_{ab}^{ij}\Big),
\end{align}
where $\hat a_a^{\dagger}$ is the fermionic creation operator on the virtual orbital $a$ and $\hat a_i$ is the fermionic annihilation operator on the real orbital $i$, and they obey the standard anti-commutation relations given by
\begin{equation}\label{ccr}
\{\hat a_i, \hat a_j\} = 0; \{\hat a_i^{\dagger}, \hat a_j^{\dagger}\} = 0; \{\hat a_i, \hat a_j^\dagger\} = \delta_{ij}
\end{equation}
where $\{A, B\}=AB+BA$ and $\delta_{ij}$ is the Kronecker delta function. First, we note that because $\{i,j,k,\cdots\}$ and $\{a,b,c,\cdots\}$ are disjoint sets, $\hat{A}^2=0=\hat{A}^{\dagger 2}$, so the squared term becomes
\begin{align}
(\hat A - \hat A^\dagger)^2 &= -\hat{A}\hat A^\dagger - \hat A^\dagger \hat{A} \nonumber \\
&= -\hat n_{a_1}\hat n_{a_2}\cdots\hat n_{a_n}(1-\hat n_{i_1})(1-\hat n_{i_2})\cdots(1-\hat n_{i_n}) \nonumber\\
& -(1-\hat n_{a_1})(1-\hat n_{a_2})\cdots(1-\hat n_{a_n})\hat n_{i_1}\hat n_{i_2}\cdots\hat n_{i_n},
\label{eq:square_identity}
\end{align}
where $\hat n_\alpha = \hat a_\alpha ^\dagger \hat a_\alpha$ is the number operator for spin-orbital $\alpha$. The cubed term then becomes
\begin{equation}
(\hat A-\hat A^\dagger)^3 =\hat A^\dagger\hat A\hat A^\dagger-\hat A\hat A^\dagger\hat A=-(\hat A-\hat A^\dagger),
\label{eq:cube_identity}
\end{equation}
because the projection operators $\hat{n}$ and $1-\hat{n}$ evaluate to one when they act on the corresponding fermionic operators.
For any UCC factor, the power series expansion is given as
\begin{equation}
e^{\theta (\hat A - \hat A^\dagger)} = \sum_{n=0}^{\infty} \frac{\theta ^n}{n!} (\hat A - \hat A^\dagger)^n.
\label{eq:uccpowerseries}
\end{equation}
Combining with equations (\ref{eq:square_identity}) and (\ref{eq:cube_identity}), we can then exactly write the sum as
\begin{align}
e^{\theta (\hat A - \hat A^\dagger)} = &\hat I + \sin{\theta}(\hat A - \hat A^\dagger)
+(\cos{\theta} - 1)(\hat n_{a_1}\hat n_{a_2}\cdots\hat n_{a_n}\nonumber\\
&\times(1-\hat n_{i_1})(1-\hat n_{i_2})\cdots(1-\hat n_{i_n}) +(1-\hat n_{a_1})\nonumber\\
&\times(1-\hat n_{a_2})\cdots(1-\hat n_{a_n})\hat n_{i_1}\hat n_{i_2}\cdots\hat n_{i_n}),
\label{eq:uccidentity}
\end{align}
for any given set of occupied orbitals $\{i_1\cdots i_n \}$ and virtual orbitals $\{a_1\cdots a_n \}$ of rank $n$ \cite{xu_lee_freericks_2020, freericks_2022, chen_cheng_freericks_2021}. This identity gives a clear picture of what is happening after a UCC factor is applied to a state. If the state is neither excited by $\hat A$ nor deexcited by $\hat A^\dagger$, the state is unchanged by the UCC factor. Otherwise, the UCC factor acting on the state is equivalent to a cosine multiplied by the original state plus a sine multiplied by the excited (or deexcited) state, just as we would expect from a rotation in the many-body configuration space.
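As a numerical sanity check (illustrative only, not part of the derivation above), the following Python sketch verifies Eq.~(\ref{eq:uccidentity}) for a rank-$1$ factor on two spin-orbitals, using the Jordan-Wigner matrices introduced in the next subsection.
\begin{verbatim}
# Verify exp(theta (A - A^dag)) = I + sin(theta) (A - A^dag)
#                                 + (cos(theta)-1)(n_a(1-n_i) + (1-n_a)n_i)
# for A = a_1^dag a_0 on two spin-orbitals (qubit 0 = occupied orbital i,
# qubit 1 = virtual orbital a).
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
sp = (X + 1j * Y) / 2                        # lowering part of the JW mapping

a0 = np.kron(sp, Z)                          # a_0 with its trailing Z string
a1 = np.kron(I2, sp)                         # a_1
n0 = a0.conj().T @ a0
n1 = a1.conj().T @ a1

theta = 0.37
A = a1.conj().T @ a0                         # excitation 0 -> 1
G = A - A.conj().T
lhs = expm(theta * G)
rhs = (np.eye(4) + np.sin(theta) * G
       + (np.cos(theta) - 1) * (n1 @ (np.eye(4) - n0)
                                + (np.eye(4) - n1) @ n0))
print(np.allclose(lhs, rhs))                 # True
\end{verbatim}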
\subsection{Jordan-Wigner transformation of the SU(2) identity}
Hamiltonians written in fermionic terms need to be re-expressed in terms of spin operators in order to be implemented by quantum computers. In this work, we choose to work with the JW transformation for the fermionic encoding. This transformation is given by
\begin{align}
\hat a_k &= \frac{1}{2}(X_k + iY_k)\otimes Z_{k+1}\otimes Z_{k+2}\otimes\cdots\otimes Z_N\label{eq:jw1}\\
\hat a_k ^\dagger &= \frac{1}{2}(X_k - iY_k)\otimes Z_{k+1}\otimes Z_{k+2}\otimes\cdots\otimes Z_N\label{eq:jw2}\\
\hat n_k &= \hat a_k ^\dagger \hat a_k = \frac{1}{2}(1-Z_k)\label{eq:jw3},
\end{align}
where $ X$, $ Y$, and $ Z$ are the standard Pauli matrices, and $0 \leq k \leq N-1$, for the $N$ qubits that describe the molecule. The qubit state $|0\rangle$ has no electrons and $|1\rangle$ has one electron. The SU($2$) identity for a UCC factor, as shown in Eq.~ (\ref{eq:uccidentity}), can be reexpressed in terms of the Pauli operators using Eqs.~(\ref{eq:jw1}), (\ref{eq:jw2}), and (\ref{eq:jw3}). For a factorized UCC double (UCCD) operator, the transformation is as follows
\begin{align}
\hat U(\theta) = &\exp\Big(\theta \big(\hat a_i ^\dagger \hat a_j ^\dagger \hat a_k \hat a_l - \hat a_l ^\dagger \hat a_k ^\dagger \hat a_j \hat a_i\big) \Big) \nonumber\\
= &\hat I + \sin{\theta}(\hat a_i ^\dagger \hat a_j ^\dagger \hat a_k \hat a_l - \hat a_l ^\dagger \hat a_k ^\dagger \hat a_j \hat a_i) + (\cos{\theta}-1)\nonumber\\
&\times\big(\hat n_l \hat n_k (1-\hat n_i)(1-\hat n_j) + (1-\hat n_l)(1-\hat n_k)\hat n_i \hat n_j\big)\nonumber\\
= & \hat I + \frac{i\sin{\theta}}{8} \bigotimes _{a=l+1} ^{k-1} Z_{a}\bigotimes _{b=j+1} ^{i-1} Z_{b} \times \nonumber\\
\bigg( &X_l X_k Y_j X_i + Y_l X_k Y_j Y_i + X_l Y_k Y_j Y_i + X_l X_k X_j Y_i \nonumber \\
- &Y_l X_k X_j X_i - X_l Y_k X_j X_i - Y_l Y_k Y_j X_i - Y_l Y_k X_j Y_i\bigg) \nonumber\\
+ &\frac{1}{8}(\cos{\theta}-1)(\hat I + Z_i Z_j + Z_l Z_k - Z_j Z_l - Z_j Z_k\nonumber\\
- &Z_i Z_l - Z_i Z_k + Z_i Z_j Z_k Z_l).
\label{eq:su2jw}
\end{align}
Note that the JW strings simplify, because $Z_k^2=\mathbb{I}$ for all cases where two strings overlap. This expression is a unitary operator, but it is also here expressed as a sum over unitary operators, because Pauli strings are both Hermitian and unitary.
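The following is a minimal sketch (not from this paper's own code), assuming the OpenFermion package is available, of the Jordan-Wigner transformation of a doubles generator; the output is the set of eight $XY$-type Pauli strings (with their JW $Z$-chains) that appear in Eq.~(\ref{eq:su2jw}).
\begin{verbatim}
# Jordan-Wigner transform of a_3^dag a_2^dag a_1 a_0 - a_0^dag a_1^dag a_2 a_3.
from openfermion import FermionOperator, jordan_wigner

excite = FermionOperator('3^ 2^ 1 0')
generator = excite - FermionOperator('0^ 1^ 2 3')
print(jordan_wigner(generator))   # eight Pauli strings with coefficients +-i/8
\end{verbatim}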
\subsection{Linear combination of unitaries}
To simulate the sum in Eq.~(\ref{eq:su2jw}) on a quantum computer, we use the linear combination of unitaries (LCU) query model \cite{childs_wiebe_2012, berry_childs_cleve_kothari_somma_2015}. Given an input operator $\hat{U}$ represented by a sum of unitaries $\hat U = \sum_n \alpha_n \hat U_n$, with coefficients $\alpha_n$ for each unitary operator $\hat U_n$, the LCU technique will create a circuit to evaluate this operator acting on a state. It first prepares an ancilla bank with coefficients based on the coefficients in the linear combination:
\begin{equation}
\hat{B}\ket{0} = \frac{1}{\sqrt{s}} \sum_n \sqrt{\alpha_n} \ket{n}.
\label{eq:lcuprepare}
\end{equation}
Here, $\frac{1}{\sqrt{s}}$ is a normalization factor, $\ket{0}$ is the initial state of the ancilla bank, and $\ket{n}$ is the product state that will later encode the unitaries in the LCU procedure. The operator \textsc{Select}($\hat U$) is then used to create entanglement between the ancilla bank and system states
\begin{equation}
\textsc{Select}(U)\ket{n}\otimes\ket{\psi} = \ket{n}\otimes U_n\ket{\psi}.
\label{eq:selectdef}
\end{equation}
One of the hallmarks of the LCU approach is that if the original operator $\hat U$ is unitary and $s\leq 2$, then a single step of oblivious amplitude amplification is able to exactly apply $\hat U$ to the state \cite{childs_wiebe_2012}. Note that in our case the UCC factor, given in Eq.~(\ref{eq:su2jw}), is unitary and $s = \cos{\theta} + \sin{\theta} \leq 2$ for all $\theta$, so it always satisfies this criterion. Hence, the LCU treatment of the sum is exact. The oblivious amplitude amplification is given by
\begin{equation}
-\hat{W}\hat{R}\hat{W}^\dagger \hat{R}\hat{W}\ket{0}\otimes\ket{\psi} = \ket{0}\otimes\hat{U}\ket{\psi},
\end{equation}
where the $\hat{W}$ and $\hat{R}$ operators are defined as
\begin{align}
\hat{W} := &(\hat{B}^\dagger\otimes 1) \textsc{Select}(\hat U)(\hat{B}\otimes 1), \nonumber\\
\hat{R} := &1 - 2(\ket{0}\bra{0}\otimes 1).
\end{align}
The main source of circuit complexity of the LCU query model comes from the unitary transformation $\hat{W}$ because it involves applying \textsc{Select}($\hat U$), which itself can contain a substantial number of multi-qubit gates and quickly outgrows the capability of near-term hardware. One efficient circuit implementation of the \textsc{Select}($\hat U$) subroutine for a generic fermionic Hamiltonian uses $\mathcal{O}(\eta)$ Clifford and $T$ gates, with Clifford gates running in $\mathcal{O}(\log^2 \eta)$ layers and $T$ gates in $\mathcal{O}(\eta)$ layers. Here, $\eta$ is the number of spin orbitals \cite{wan_2021}. The ancilla preparation operator $\hat{B}$ is often implemented by rotations and controlled rotations on the target qubits, followed by $\text{Hadamard} ^{\otimes \eta}$ gates that create the required entanglement state for the ancilla bank.
\section{Circuit construction}\label{sec:circuitexample}
We begin by illustrating the circuit implementation of the \textsc{Prepare} and \textsc{Select}($\hat U$) subroutines present in the LCU adaptation of the UCC factors for doubles. The doubles are the most ubiquitous terms in the low-rank representation of a UCC ansatz. Later in this section, we will show that UCC factors of arbitrary rank-$n$ can be implemented via a similar algorithm. High-rank factors are necessary to generate an accurate correlation energy in strongly correlated systems.
\subsection{\textsc{Prepare} subroutine}\label{sec:prepare}
The unitary transformation $\hat{B}$ is used to generate the required entangled state in the ancilla bank, as shown in Eq.~(\ref{eq:lcuprepare}). The operator in Eq.~(\ref{eq:su2jw}) lends itself to a binary encoding, in which we create a linear combination of basis states multiplied by amplitudes: that is, the sum $\alpha_1\ket{0000}+ \alpha_2\ket{0001}+\cdots+\alpha_{16}\ket{1111}$. Because there are only three distinct coefficients present in the UCC factor regardless of the rank, the binary encoding allows us to keep the ancilla bank logarithmically small, so that it grows linearly with the rank rather than exponentially.
\begin{table}[h!]
\[
\begin{array}{c}
\Qcircuit @C=1.2em @R=1.5em{
\lstick{\ket{0}_1}&\gate{R_{X_1}} &\ctrl{1}&\ctrlo{1}&\ctrlo{1} &\ctrlo{1}&\ctrlo{1}&\ctrlo{1} &\qw \\
\lstick{\ket{0}_2}&\qw &\multigate{2}{H}&\gate{R_{Y_2}}&\ctrl{1}&\ctrlo{1}&\ctrlo{1}&\ctrlo{1} &\qw\\
\lstick{\ket{0}_3}&\qw &\ghost{H}&\qw&\multigate{1}{H}&\gate{R_{Y_3}}&\ctrl{1}&\ctrlo{1} &\qw\\
\lstick{\ket{0}_4}&\qw &\ghost{H}&\qw&\ghost{H}&\qw&\gate{H} &\gate{R_{Y_4}}&\qw\\}
\end{array}
\]
\captionsetup{justification=raggedright}
\caption{Quantum circuit for preparing the ancilla bank of the LCU query for a rank-$2$ UCC factor (so-called doubles).}
\label{table:preparedouble}
\end{table}
A \textsc{Prepare} circuit for the doubles factor is shown in Tab.~\ref{table:preparedouble}. The $H$ gates are Hadamard operators, and the $R_{X_i}$ and $R_{Y_i}$ gates are rotations by an angle $\Theta_i$ about the $X$ and $Y$ axes, respectively. For a UCC doubles operator, $n=2$, and there are three distinct coefficients: eight terms with $\frac{i\sin{\theta}}{8}$, seven terms with $\frac{\cos{\theta}-1}{8}$, and one term with $\frac{\cos{\theta}+7}{8}$. The four angles used in the circuit shown in Tab.~\ref{table:preparedouble} can be found analytically:
\begin{align}
&\Theta_1 = \arcsin{\bigg(-\frac{\sqrt{2}}{4}\sin{\theta}\bigg)} \\
&\Theta_2 = \arcsin{\bigg(\frac{\cos{\theta}-1}{\sqrt{14+2\cos^2{\theta}}}\bigg)}\\
&\Theta_3 = \arcsin{\bigg(\frac{\sqrt{2}}{2}\frac{\cos{\theta}-1}{\sqrt{13+\cos^2{\theta}+2\cos{\theta}}}\bigg)}\\
&\Theta_4 = \arcsin{\bigg(\frac{\sqrt{2}}{2}\frac{\cos{\theta}-1}{\sqrt{25+\cos^2{\theta}+6\cos{\theta}}}\bigg)}
\end{align}
The magnitudes of these four angles are shown in Fig.~\ref{fig:uccd_prepare_angles}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.45\textwidth]{double_prepare_angles.png}
\caption{The four angles used in the \textsc{Prepare} subroutine for a UCC doubles factor with an amplitude given by $\theta$.}
\label{fig:uccd_prepare_angles}
\end{figure}
The \textsc{Prepare} subroutine is implemented by encoding the amplitudes $1+\frac{\cos\theta - 1}{2^{2n-1}}$ and $\frac{\cos\theta - 1}{2^{2n-1}}$ in $2^{2n-1}$ states that always have one different binary digit than those encoding the amplitude $\frac{i\sin\theta}{2^{2n-1}}$. For example, in the doubles circuit, we encode the amplitudes $1+\frac{\cos\theta - 1}{2^3}$ and $\frac{\cos\theta - 1}{2^3}$ in states $\ket{0000}$ and $\ket{0001} \cdots \ket{0111}$, and encode the amplitude $\frac{i\sin\theta}{2^3}$ in states $\ket{1000} \cdots \ket{1111}$.
The \textsc{Prepare} circuit for a doubles UCC factor can be straightforwardly generalized to one for a rank-$n$ operator. In this case, we require $2n$ ancilla qubits, where there are $2^{2n-1}$ subterms with coefficients $\frac{i\sin{\theta}}{2^{2n-1}}$, $2^{2n-1}-1$ subterms with coefficients $\frac{\cos{\theta}-1}{{2^{2n-1}}}$, and one term with coefficient $1+\frac{\cos{\theta}-1}{2^{2n-1}}$. The hierarchical structure of the circuit is a simple generalization of the doubles circuit to higher rank, where the first element, which sets the $\sin$ terms is the same, while the remaining factors are created by extending the hierarchy with multiple controlled Hadamards followed by multiply controlled rotations. The angles for each rotation in the algorithm are
\begin{align}
&\Theta_1 = \arcsin{\bigg(-\frac{1}{\sqrt{2^{2n-1}} }\sin\theta\bigg)} \\
&\Theta_k = \arcsin{\bigg(\frac{\cos\theta - 1}{\sqrt{2^{2n-2+k}-2^k+2+2\cos^2\theta +(2^k-4)\cos\theta}} \bigg)}
\end{align} for $2\leq k\leq 2n$.
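As a consistency check, the general expressions above reduce to the four doubles angles quoted earlier when $n=2$. The following Python sketch (written by us for illustration and assuming only NumPy) compares the two sets of formulas numerically:
\begin{verbatim}
import numpy as np

def theta_general(k, n, t):
    # General rotation angles for the rank-n Prepare circuit (k = 1 .. 2n)
    c = np.cos(t)
    if k == 1:
        return np.arcsin(-np.sin(t) / np.sqrt(2 ** (2 * n - 1)))
    denom = np.sqrt(2 ** (2 * n - 2 + k) - 2 ** k + 2
                    + 2 * c ** 2 + (2 ** k - 4) * c)
    return np.arcsin((c - 1) / denom)

def theta_doubles(k, t):
    # Doubles (n = 2) angles in their explicit form
    c = np.cos(t)
    if k == 1:
        return np.arcsin(-np.sqrt(2) / 4 * np.sin(t))
    if k == 2:
        return np.arcsin((c - 1) / np.sqrt(14 + 2 * c ** 2))
    if k == 3:
        return np.arcsin(np.sqrt(2) / 2 * (c - 1) / np.sqrt(13 + c ** 2 + 2 * c))
    return np.arcsin(np.sqrt(2) / 2 * (c - 1) / np.sqrt(25 + c ** 2 + 6 * c))

for t in np.linspace(0.0, np.pi, 7):
    for k in range(1, 5):
        assert np.isclose(theta_general(k, 2, t), theta_doubles(k, t))
\end{verbatim}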
Most of the quantum computing cost on near-term quantum computers comes from the \textsc{Cnot} circuit elements; thus, we focus on counting the number of these gates to estimate the total cost of the circuits. The controlled-$H$ gate, \textsc{CH}, is implemented by one \textsc{Cnot} sandwiched between one $R_Y(\pi/4)$ gate and one $R_Y(-\pi/4)$ gate. The controlled-$R_Y(\theta)$ operator is implemented with two \textsc{Cnot} gates and two half-angle rotations. We use the linear-depth method proposed in Ref.~\cite{silva_park_2022} to decompose the multi-qubit controlled operators into standard \textsc{Cnot} and one-qubit gates. The \textsc{Cnot} cost of an $n$-qubit controlled operator is $8n-12$ for all $n\geq 2$. The circuit needs $2(2n-k+1)(8k-12)$ \textsc{Cnot} gates for each $k$-qubit controlled operation. For $k=1$, the \textsc{Cnot} cost is $2n$. To implement the circuit for an arbitrary rank $n$, we need to employ the modular sub-circuit shown in Tab.~\ref{table:module} on $2n-1$ ancilla qubits. The upper bound for the number of control qubits used in any module is $2n-1$. The total cost of running the \textsc{Prepare} subroutine is then $2n+2\sum_{k=2} ^{2n-1} (8k-12)(2n+1-k)$, which can be further simplified to $\frac{8}{3}(8n^3-6n^2-\frac{41n}{4}+9)$. Hence, the cost scales like the cube of the rank.
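The closed-form expression for the \textsc{Prepare} cost can be checked against the explicit sum; a minimal Python sketch of this check (our own illustration, using only the standard library) is
\begin{verbatim}
import math

def prepare_cnot_sum(n):
    # CNOT count of Prepare via the explicit sum (rank n >= 2)
    return 2 * n + 2 * sum((8 * k - 12) * (2 * n + 1 - k)
                           for k in range(2, 2 * n))

def prepare_cnot_closed(n):
    # Simplified cubic expression quoted in the text
    return 8 / 3 * (8 * n ** 3 - 6 * n ** 2 - 41 * n / 4 + 9)

for n in range(2, 12):
    assert math.isclose(prepare_cnot_sum(n), prepare_cnot_closed(n))
# e.g. n = 2 (doubles) gives 76 CNOT gates for Prepare
\end{verbatim}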
\begin{table}[h!]
\[
\begin{array}{c}
\Qcircuit @C=1.5em @R=1.5em {
\lstick{\ket{q}_1}& \multigate{5}{\mathcal{M}(\ket{q}_n)}&\qw & & &\ctrlo{2}&\ctrlo{2}&\qw \\
\vdots& & & & & & & \vdots \\
\lstick{\ket{q}_m}&\ghost{\mathcal{M}(\ket{q}_n)} & \qw & & &\ctrl{1}&\ctrlo{1}&\qw \\
\lstick{\ket{q}_n}&\ghost{\mathcal{M}(\ket{q}_n)} &\qw & = & &\multigate{2}{H}&\gate{R_Y(\Theta)}&\qw \\
\vdots& & & & & & & \vdots \\
\lstick{\ket{q}_l}&\ghost{\mathcal{M}(\ket{q}_n)} &\qw & & &\ghost{H}&\qw&\qw
}
\end{array}
\]
\captionsetup{justification=raggedright}
\caption{Quantum circuit for the modules of qubit-$\ket{q}_n$ used in the \textsc{Prepare} subroutine. The Hadamard operator is anti-controlled by the qubits between $\ket{q}_1$ and $\ket{q}_m$ and controlled by the qubit $\ket{q}_m$. The $R_Y(\Theta)$ operator is anti-controlled by qubits $\ket{q}_1$ to $\ket{q}_m$.}
\label{table:module}
\end{table}
\begin{table}[h!]
\[
\begin{array}{c}
\Qcircuit @C=1em @R=1.5em{
\lstick{\ket{0}_1}&\gate{R_{X_1}} &\multigate{4}{\mathcal{M}(\ket{0}_2)}&\multigate{4}{\mathcal{M}(\ket{0}_3)}&\qw&\cdots& &\multigate{4}{\mathcal{M}(\ket{0}_l)}&\qw\\
\lstick{\ket{0}_2}&\qw &\ghost{\mathcal{M}(\ket{0}_2)}&\ghost{\mathcal{M}(\ket{0}_3)}&\qw&\cdots& &\ghost{\mathcal{M}(\ket{0}_l)}&\qw\\
\lstick{\ket{0}_3}&\qw &\ghost{\mathcal{M}(\ket{0}_2)}&\ghost{\mathcal{M}(\ket{0}_3)}&\qw&\cdots& &\ghost{\mathcal{M}(\ket{0}_l)}&\qw\\
\vdots \\
\lstick{\ket{0}_l}&\qw &\ghost{\mathcal{M}(\ket{0}_2)}&\ghost{\mathcal{M}(\ket{0}_3)}&\qw&\cdots& &\ghost{\mathcal{M}(\ket{0}_l)}&\qw
}
\end{array}
\]
\captionsetup{justification=raggedright}
\caption{Quantum circuit for the \textsc{Prepare} subroutine of arbitrary rank, built from the modules introduced in Tab.~\ref{table:module}.}
\label{table:generalwithmodules}
\end{table}
\subsection{\textsc{Select}(\texorpdfstring{$\hat U$}{Lg}) subroutine for rank-2 factors}\label{sec:select}
In this section, we introduce a quantum circuit for implementing the $\textsc{Select}(\hat U)$ operation, illustrated in Eq.~(\ref{eq:selectdef}), for the UCCD factors, where the unitary $\hat U$ is the operator given in Eq.~(\ref{eq:su2jw}). The first step of the \textsc{Select}($\hat U$) circuit is to create one of the Pauli strings from the pool of the excitation operators ($XY$ strings) and one of the Pauli strings from the projection pool ($IZ$ strings). In this example, we opt to create the $Y_l X_k X_j X_i$ and $I_l Z_k Z_j I_i$ strings as shown in Tab.~\ref{table:selectstep1}; however, any arbitrary Pauli string can serve as the candidate for this step. It is important to note that we use control operations for the $Y_l X_k X_j X_i$ terms, whereas we use anticontrol operations for the $I_l Z_k Z_j I_i$ terms. This is because we are partitioning the ancilla bank into two sectors, one for the $XY$ strings and one for the $IZ$ strings. In this case, ancilla-bank states $\ket{1000}$ to $\ket{1111}$ are used to create the $XY$ strings, and ancilla-bank states $\ket{0000}$ to $\ket{0111}$ are used to create the $IZ$ strings. The resulting state, omitting the corresponding coefficients, which are prepared in the previous step, becomes
\begin{align}
&(\ket{0000} + \cdots + \ket{0111})\ket{Z_k Z_j}\nonumber \\
+ &(\ket{1000}+\cdots + \ket{1111})\bigotimes _{a=l+1} ^{k-1} Z_{a}\bigotimes _{b=j+1} ^{i-1} Z_{b}\ket{Y_l X_k X_j X_i}
\end{align}
\begin{table}[h!]
\[
\begin{array}{c}
\Qcircuit @C=1.2em @R=1.2em{
\lstick{\ket{0}_1} & \ctrl{1}&\ctrl{5}&\ctrl{1} &\ctrl{3} &\ctrl{5}& \ctrl{7} &\ctrlo{3} &\ctrlo{5}&\qw \\
\lstick{l} &\multigate{2}{Z_{k-1} ^{l+1}}&\qw&\gate{Y} &\qw&\qw&\qw&\qw&\qw&\qw \\
\vdots \\
\lstick{k} &\ghost{Z_{k-1} ^{l+1}} &\qw&\qw &\gate{X}&\qw &\qw&\gate{Z}&\qw&\qw\\
\vdots \\
\lstick{j} &\qw &\multigate{2}{Z_{i-1} ^{j+1}}&\qw&\qw &\gate{X}&\qw &\qw&\gate{Z}&\qw\\
\vdots \\
\lstick{i} &\qw &\ghost{Z_{i-1} ^{j+1}}&\qw&\qw &\qw & \gate{X}&\qw&\qw&\qw\\
}
\end{array}
\]
\captionsetup{justification=raggedright}
\caption{Circuit to create $Y_l X_k X_j X_i$ and $I_l Z_k Z_j I_i$. $\ket{0}_1$ denotes the first qubit of the ancilla bank. The first two circuit components are the controlled Pauli $Z$ gates applied to the qubits strictly between indices $l$ and $k$, and strictly between indices $j$ and $i$.}
\label{table:selectstep1}
\end{table}
The construction of the controlled Pauli $Z$ gates shown in Tab.~\ref{table:selectstep1} is described next.
\begin{table}[h!]
\[
\begin{array}{c}
\Qcircuit @C=1.5em @R=1.5em {
& \ctrl{1}&\qw & & &\ctrl{2}&\ctrl{3}&\qw \\
\lstick{l} &\multigate{3}{Z_{i-1} ^{j+1}} & \qw & & &\qw&\qw&\qw \\
&\ghost{Z_{i-1} ^{j+1}} &\qw & = & &\gate{Z}&\qw&\qw \\
&\ghost{Z_{i-1} ^{j+1}} &\qw & & &\qw&\gate{Z}&\qw\\
\lstick{k} &\ghost{Z_{i-1} ^{j+1}} & \qw& & &\qw&\qw&\qw
}
\end{array}
\]
\captionsetup{justification=raggedright}
\caption{The circuit for implementing the boxed controlled Pauli $Z$ operators shown in Tab.~\ref{table:selectstep1}. The Pauli $Z$ operators are applied to the qubits $l+1$ to $k-1$.}
\end{table}
With the starting reference states prepared, we can then create the entire state exactly. The first step is to apply a single-qubit controlled Pauli $Z$ operator on qubits $l$ and $k$, where the control is to be conditioned on the last qubit of the ancilla bank. The second step is similar, in that a single-qubit controlled Pauli $Z$ operator is applied on qubits $j$ and $i$. The control qubit of the second step is the second to last qubit of the ancilla bank. The final step is to apply the single-qubit controlled Pauli $Z$ on qubits $k$ and $j$, with the control qubit being the second qubit of the ancilla bank. The circuit diagram is illustrated in Tab.~\ref{table:selectstep2}.
\begin{table}[h!]
\[
\begin{array}{c}
\Qcircuit @C=1.6em @R=1.2em{
\lstick{\ket{0}_2}&\qw&\qw&\qw&\qw&\ctrl{5}&\ctrl{7}&\qw\\
\lstick{\ket{0}_3}&\qw&\qw&\ctrl{6}&\ctrl{8}&\qw&\qw&\qw\\
\lstick{\ket{0}_4} & \ctrl{1}&\ctrl{3}&\qw &\qw &\qw& \qw &\qw \\
\lstick{l} &\gate{Z}&\qw&\qw &\qw&\qw&\qw&\qw \\
\vdots \\
\lstick{k} &\qw&\gate{Z}&\qw &\qw &\gate{Z} &\qw&\qw\\
\vdots \\
\lstick{j} &\qw &\qw&\gate{Z}&\qw &\qw&\gate{Z} &\qw\\
\vdots \\
\lstick{i} &\qw &\qw&\qw&\gate{Z} &\qw & \qw&\qw\\
}
\end{array}
\]
\captionsetup{justification=raggedright}
\caption{Circuit to create the state shown in Eq.~(\ref{eq:su2jw}). $\ket{0}_i$ denotes the $i$th qubit of the ancilla bank.}
\label{table:selectstep2}
\end{table}
The Pauli strings and their corresponding states in the ancilla bank are shown in Tab.~\ref{table:correspondingtable}.
\begin{table}[h!]
\centering
\begin{tabular}{c|c|c}
$\ket{q_2 q_3 q_4}$ & $q_1 = 1$ & $q_1 = 0$\\
\hline
$\ket{000}$ &$YXXX$ & $IZZI$\\
$\ket{001}$ &$XYXX$ & $ZIZI$\\
$\ket{010}$ &$YXYY$ & $IZIZ$\\
$\ket{011}$ &$XYYY$ & $ZIIZ$\\
$\ket{100}$ &$YYYX$ & $IIII$\\
$\ket{101}$ &$XXYX$ & $ZZII$\\
$\ket{110}$ &$YYXY$ & $IIZZ$\\
$\ket{111}$ &$XXXY$ & $ZZZZ$\\
\end{tabular}
\caption{The $16$ Pauli strings created by schemes shown in Tabs.~\ref{table:selectstep1} and \ref{table:selectstep2}, and their associated binary encodings in the ancilla bank. }
\label{table:correspondingtable}
\end{table}
The qubits on which the control operations are conditioned are chosen specifically for this table. In practice, when applying this algorithm, one needs to predetermine a table similar to Tab.~\ref{table:correspondingtable} for all the binary encodings and their corresponding Pauli substrings, and choose accordingly the starting reference states and the control qubits to be used in the approach illustrated in Tabs.~\ref{table:selectstep1} and \ref{table:selectstep2}.
\subsection{\textsc{Select}(\texorpdfstring{$\hat U$}{Lg}) for arbitrary rank-$n$}
In this section, we demonstrate the algorithm for the rank-$n$ UCC factor by generalizing the algorithms shown in Secs.~\ref{sec:prepare} and \ref{sec:select}. First, let us re-examine the case of rank $2$; that is, the doubles. Define groups $\mathbf{G_1} = \{G_{11},G_{12}\}$ and $\mathbf{G_2} = \{G_{21},G_{22}\}$, with elements $G = \sigma_{i} \otimes \sigma_{j}$, where $\sigma_{i}$ and $\sigma_j$ are two different Pauli operators acting on different qubits. Additionally, the group elements satisfy the following identities
\begin{align}
&G_{11} \cdot (\sigma_{z}\otimes \sigma_{z}) = G_{12}, G_{12} \cdot (\sigma_{z}\otimes \sigma_{z}) = G_{11} \label{eq:group1}\\
&G_{11} \cdot (\mathds{1}\otimes \sigma_{z}) = G_{22}, G_{11} \cdot (\sigma_{z}\otimes \mathds{1}) = G_{21} \label{eq:group2}\\
&G_{21} \cdot (\sigma_{z}\otimes \sigma_{z}) = G_{22}, G_{22} \cdot (\sigma_{z}\otimes \sigma_{z}) = G_{21} \label{eq:group3}\\
&G_{21} \cdot(\mathds{1}\otimes \sigma_{z}) = G_{12}, G_{21} \cdot (\sigma_{z}\otimes \mathds{1}) = G_{11}\label{eq:group4},
\end{align}
where $\sigma_{z}$ is the Pauli $Z$ operator applied on qubit $a_1$. It should be clear that the expression in Eq.~(\ref{eq:su2jw}) is of the schematic form $\sum_{p\neq q}\sum_r G_{pr}\otimes G_{qr} + G^\prime_{rp}\otimes G^\prime_{rq}$, omitting the coefficients, where the $\mathbf{G}$ groups contain $XY$ subterms and the $\mathbf{G^\prime}$ groups contain $IZ$ subterms. In the example shown in Secs.~\ref{sec:prepare} and \ref{sec:select}, the corresponding groups are $\mathbf{G_1} = \{G_{11}=YX, G_{12}=XY \}$, $\mathbf{G_2} = \{G_{21}=XX, G_{22}=YY \}$, $\mathbf{G_1^\prime} = \{G_{11}^\prime=IZ, G_{12}^\prime=ZI \}$, and $\mathbf{G_2^\prime} = \{G_{21}^\prime=II, G_{22}^\prime=ZZ \}$. In total, three steps are needed to create all the required Pauli strings; hence we opted to use three digits for the binary encoding in the ancilla bank.
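These identities can be verified directly, up to global phases, with explicit Pauli matrices. The snippet below is an illustrative check written for this purpose (it assumes NumPy and is not part of the circuit construction itself):
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0 + 0j])

def equal_up_to_phase(A, B):
    idx = np.unravel_index(np.argmax(np.abs(B)), B.shape)
    return np.allclose(A, (A[idx] / B[idx]) * B)

# Rank-2 groups: G1 = {YX, XY}, G2 = {XX, YY}
G11, G12 = np.kron(Y, X), np.kron(X, Y)
G21, G22 = np.kron(X, X), np.kron(Y, Y)

assert equal_up_to_phase(G11 @ np.kron(Z, Z), G12)   # Eq. (group1)
assert equal_up_to_phase(G21 @ np.kron(Z, Z), G22)   # Eq. (group3)
assert equal_up_to_phase(G11 @ np.kron(I2, Z), G22)  # Eq. (group2)
assert equal_up_to_phase(G11 @ np.kron(Z, I2), G21)  # Eq. (group2)
\end{verbatim}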
\begin{table}[h!]
\centering
\begin{tabular}{c|cccc}
$\otimes$&$G_{11}$ & $G_{12}$ &$G_{21}$ &$G_{22}$ \\
\hline
$G_{11}$ & 0&0 &\cellcolor[HTML]{89CFF0}$G_{11}G_{21}$&\cellcolor[HTML]{F4C2C2}$G_{11}G_{22}$\\
$G_{12}$ &0 &0 &\cellcolor[HTML]{FFE135}$G_{12}G_{21}$&\cellcolor[HTML]{F4C2C2}$G_{12}G_{22}$\\
$G_{21}$ &\cellcolor[HTML]{7FFFD0}$G_{21}G_{11}$&\cellcolor[HTML]{7FFFD0}$G_{21}G_{12}$&0 &0 \\
$G_{22}$ &\cellcolor[HTML]{7FFFD0}$G_{22}G_{11}$&\cellcolor[HTML]{7FFFD0}$G_{22}G_{12}$&0 &0 \\
\end{tabular}
\caption{Matrix that shows the scheme for the $XY$ subterms presented in the Sec.~\ref{sec:select}. The entry colored in blue, $G_{11}G_{21}$ is the initial reference state. After the first step, the yellow entry $G_{12}G_{21}$ is created. The pink entries are created after the second step. The first two steps use the identities in Eqs.~(\ref{eq:group1}) and (\ref{eq:group3}). The final step is to use the identities in Eqs.~(\ref{eq:group2}) and (\ref{eq:group4}) to create the green block.}
\label{table:doublestable}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{c|cccc}
$\otimes$&$G_{11}^\prime$ & $G_{12}^\prime$ &$G_{21}^\prime$ &$G_{22}^\prime$ \\
\hline
$G_{11}^\prime$ & \cellcolor[HTML]{F4C2C2}$G_{11}^\prime G_{11}^\prime$&\cellcolor[HTML]{89CFF0}$G_{11}^\prime G_{12}^\prime$&0&0\\
$G_{12}^\prime$ &\cellcolor[HTML]{F4C2C2}$G_{12}^\prime G_{11}^\prime$&\cellcolor[HTML]{FFE135}$G_{12}^\prime G_{12}^\prime$&0&0\\
$G_{21}^\prime$ &0&0&\cellcolor[HTML]{7FFFD0}$G_{21}^\prime G_{21}^\prime$&\cellcolor[HTML]{7FFFD0}$G_{21}^\prime G_{22}^\prime$ \\
$G_{22}^\prime$ &0&0&\cellcolor[HTML]{7FFFD0}$G_{22}^\prime G_{21}^\prime$ &\cellcolor[HTML]{7FFFD0}$G_{22}^\prime G_{22}^\prime$ \\
\end{tabular}
\caption{Matrix that shows the scheme for the $IZ$ subterms presented in Sec.~\ref{sec:select}. The entry colored in blue, $G_{11}^\prime G_{12}^\prime$ is chosen to be the starting reference state. After the first step, the yellow entry $G_{12}^\prime G_{12}^\prime$ is created. The pink entries result after the second step. The first two steps use the identities in Eqs.~(\ref{eq:group1}) and (\ref{eq:group3}). The final step is to use the identities in Eqs.~(\ref{eq:group2}) and (\ref{eq:group4}) to create the entire green block.}
\label{table:doublestableprime}
\end{table}
Similar to the definitions for the rank-$2$ factors, for rank $3$ we define groups $\mathbf{G_1} = \{G_{11},G_{12},G_{13},G_{14}\}$ and $\mathbf{G_2} = \{G_{21},G_{22},G_{23},G_{24}\}$; therefore, the JW-transformed UCC triples factor takes the form $\sum_{p\neq q}\sum_r G_{pr}\otimes G_{qr} + G^\prime_{rp}\otimes G^\prime_{rq}$. The set of identities for rank $3$ is
\begin{align}
&G_{11} (\sigma_{z}\otimes \sigma_{z}\otimes \mathds{1}) = G_{14}, G_{14} (\sigma_{z}\otimes \sigma_{z}\otimes \mathds{1}) = G_{11} \label{eq:triplegroup1}\\
&G_{12} (\sigma_{z}\otimes \sigma_{z}\otimes \mathds{1}) = G_{13}, G_{13} (\sigma_{z}\otimes \sigma_{z}\otimes \mathds{1}) = G_{12} \label{eq:triplegroup9}\\
&G_{11} (\mathds{1}\otimes \sigma_{z}\otimes \sigma_{z}) = G_{12}, G_{12} (\mathds{1}\otimes \sigma_{z}\otimes \sigma_{z}) = G_{11} \label{eq:triplegroup2}\\
&G_{13} (\mathds{1}\otimes \sigma_{z}\otimes \sigma_{z}) = G_{14}, G_{14} (\mathds{1}\otimes \sigma_{z}\otimes \sigma_{z}) = G_{13} \label{eq:triplegroup10}\\
&G_{21} (\sigma_{z}\otimes \sigma_{z}\otimes \mathds{1}) = G_{24}, G_{24} (\sigma_{z}\otimes \sigma_{z}\otimes \mathds{1}) = G_{21} \label{eq:triplegroup3}\\
&G_{22} (\sigma_{z}\otimes \sigma_{z}\otimes \mathds{1}) = G_{23}, G_{23} (\sigma_{z}\otimes \sigma_{z}\otimes \mathds{1}) = G_{22} \label{eq:triplegroup11}\\
&G_{21} (\mathds{1}\otimes \sigma_{z}\otimes \sigma_{z}) = G_{22}, G_{22} (\mathds{1}\otimes \sigma_{z}\otimes \sigma_{z}) = G_{21} \label{eq:triplegroup4}\\
&G_{23} (\mathds{1}\otimes \sigma_{z}\otimes \sigma_{z}) = G_{24}, G_{24} (\mathds{1}\otimes \sigma_{z}\otimes \sigma_{z}) = G_{23} \label{eq:triplegroup12}\\
&G_{11} (\mathds{1}\otimes \mathds{1}\otimes \sigma_{z}) = G_{21}, G_{12} (\mathds{1}\otimes \mathds{1}\otimes \sigma_{z}) = G_{22} \label{eq:triplegroup5}\\
&G_{13} (\mathds{1}\otimes \mathds{1}\otimes \sigma_{z}) = G_{23}, G_{14} (\mathds{1}\otimes \mathds{1}\otimes \sigma_{z}) = G_{24} \label{eq:triplegroup6}\\
&G_{21} (\sigma_{z}\otimes\mathds{1}\otimes \mathds{1}) = G_{13}, G_{22} (\sigma_{z}\otimes\mathds{1}\otimes \mathds{1}) = G_{14} \label{eq:triplegroup7}\\
&G_{23} (\sigma_{z}\otimes\mathds{1}\otimes \mathds{1}) = G_{11}, G_{24} (\sigma_{z}\otimes\mathds{1}\otimes \mathds{1}) = G_{12} \label{eq:triplegroup8}.
\end{align}
As shown in Tab.~\ref{table:triplestable}, five steps in total are needed to create the exact JW-transformed unitary; thus we use five out of six digits for the binary encoding in the ancilla bank.
\begin{table*}[h!]
\centering
\begin{tabular}{c|cccccccc}
$\otimes$&$G_{11}$ & $G_{12}$ &$G_{13}$ &$G_{14}$&$G_{21}$ & $G_{22}$ &$G_{23}$ &$G_{24}$ \\
\hline
$G_{11}$ & 0&0 &0&0&\cellcolor[HTML]{89CFF0}$G_{11}G_{21}$&\cellcolor[HTML]{ED872D}$G_{11}G_{22}$&\cellcolor[HTML]{ED872D}$G_{11}G_{23}$&\cellcolor[HTML]{F4C2C2}$G_{11}G_{24}$\\
$G_{12}$ &0 &0& 0&0&\cellcolor[HTML]{FFE135}$G_{12}G_{21}$&\cellcolor[HTML]{ED872D}$G_{12}G_{22}$&\cellcolor[HTML]{ED872D}$G_{12}G_{23}$&\cellcolor[HTML]{F4C2C2}$G_{12}G_{24}$\\
$G_{13}$ &0 &0& 0&0&\cellcolor[HTML]{FFE135}$G_{13}G_{21}$&\cellcolor[HTML]{ED872D}$G_{13}G_{22}$&\cellcolor[HTML]{ED872D}$G_{13}G_{23}$&\cellcolor[HTML]{F4C2C2}$G_{13}G_{24}$\\
$G_{14}$ &0 &0& 0&0&\cellcolor[HTML]{91A3B0}$G_{14}G_{21}$&\cellcolor[HTML]{ED872D}$G_{14}G_{22}$&\cellcolor[HTML]{ED872D}$G_{14}G_{23}$&\cellcolor[HTML]{F4C2C2}$G_{14}G_{24}$\\
$G_{21}$ &\cellcolor[HTML]{7FFFD0}$G_{21}G_{11}$&\cellcolor[HTML]{7FFFD0}$G_{21}G_{12}$&\cellcolor[HTML]{7FFFD0}$G_{21}G_{13}$&\cellcolor[HTML]{7FFFD0}$G_{21}G_{14}$&0 &0& 0&0 \\
$G_{22}$ &\cellcolor[HTML]{7FFFD0}$G_{22}G_{11}$&\cellcolor[HTML]{7FFFD0}$G_{22}G_{12}$&\cellcolor[HTML]{7FFFD0}$G_{22}G_{13}$&\cellcolor[HTML]{7FFFD0}$G_{22}G_{14}$&0 &0& 0&0 \\
$G_{23}$ &\cellcolor[HTML]{7FFFD0}$G_{23}G_{11}$&\cellcolor[HTML]{7FFFD0}$G_{23}G_{12}$&\cellcolor[HTML]{7FFFD0}$G_{23}G_{13}$&\cellcolor[HTML]{7FFFD0}$G_{23}G_{14}$&0 &0& 0&0 \\
$G_{24}$ &\cellcolor[HTML]{7FFFD0}$G_{24}G_{11}$&\cellcolor[HTML]{7FFFD0}$G_{24}G_{12}$&\cellcolor[HTML]{7FFFD0}$G_{24}G_{13}$&\cellcolor[HTML]{7FFFD0}$G_{24}G_{14}$&0 &0& 0&0
\end{tabular}
\caption{Matrix illustrating the scheme for constructing $XY$ substrings in the JW-transformed unitary of the UCCT factors. The blue entry is chosen to be the starting reference state. After the first step using identity (\ref{eq:triplegroup1}), grey entry $G_{14}G_{21}$ is created. Using the identities in Eqs.~(\ref{eq:triplegroup9}) and (\ref{eq:triplegroup2}), the yellow entries are created after the second step. The orange and pink columns are created after the next two steps via the identities in Eqs.~(\ref{eq:triplegroup10}) through (\ref{eq:triplegroup4}). The final step is to apply the identities in Eqs.~(\ref{eq:triplegroup5}) through (\ref{eq:triplegroup8}) to create the entire green block.}
\label{table:triplestable}
\end{table*}
\begin{table*}[h!]
\centering
\begin{tabular}{c|cccccccc}
$\otimes$&$G_{11}^\prime $ & $G_{12}^\prime $ &$G_{13}^\prime $ &$G_{14}^\prime $&$G_{21}^\prime $ & $G_{22}^\prime $ &$G_{23}^\prime $ &$G_{24}^\prime $ \\
\hline
$G_{11}^\prime $ & \cellcolor[HTML]{F4C2C2}$G_{11}^\prime G_{11}^\prime $&\cellcolor[HTML]{ED872D}$G_{11}^\prime G_{12}^\prime $&\cellcolor[HTML]{ED872D}$G_{11}^\prime G_{13}^\prime $&\cellcolor[HTML]{89CFF0}$G_{11}^\prime G_{14}^\prime $&0&0&0&0\\
$G_{12}^\prime $ &\cellcolor[HTML]{F4C2C2}$G_{12}^\prime G_{11}^\prime $&\cellcolor[HTML]{ED872D}$G_{12}^\prime G_{12}^\prime $& \cellcolor[HTML]{ED872D}$G_{12}^\prime G_{13}^\prime $&\cellcolor[HTML]{FFE135}$G_{12}^\prime G_{14}^\prime $&0&0&0&0\\
$G_{13}^\prime $ &\cellcolor[HTML]{F4C2C2}$G_{13}^\prime G_{11}^\prime $&\cellcolor[HTML]{ED872D}$G_{13}^\prime G_{12}^\prime $& \cellcolor[HTML]{ED872D}$G_{13}^\prime G_{13}^\prime $&\cellcolor[HTML]{FFE135}$G_{13}^\prime G_{14}^\prime $&0&0&0&0\\
$G_{14}^\prime $ &\cellcolor[HTML]{F4C2C2}$G_{14}^\prime G_{11}^\prime $&\cellcolor[HTML]{ED872D}$G_{14}^\prime G_{12}^\prime $&\cellcolor[HTML]{ED872D}$G_{14}^\prime G_{13}^\prime $&\cellcolor[HTML]{91A3B0}$G_{14}^\prime G_{14}^\prime $&0&0&0&0\\
$G_{21}^\prime $ &0 &0& 0&0&\cellcolor[HTML]{7FFFD0}$G_{21}^\prime G_{21}^\prime $&\cellcolor[HTML]{7FFFD0}$G_{21}^\prime G_{22}^\prime $&\cellcolor[HTML]{7FFFD0}$G_{21}^\prime G_{23}^\prime $&\cellcolor[HTML]{7FFFD0}$G_{21}^\prime G_{24}^\prime $\\
$G_{22}^\prime $ &0 &0& 0&0&\cellcolor[HTML]{7FFFD0}$G_{22}^\prime G_{21}^\prime $&\cellcolor[HTML]{7FFFD0}$G_{22}^\prime G_{22}^\prime $&\cellcolor[HTML]{7FFFD0}$G_{22}^\prime G_{23}^\prime $&\cellcolor[HTML]{7FFFD0}$G_{22}^\prime G_{24}^\prime $\\
$G_{23}^\prime $ &0 &0& 0&0&\cellcolor[HTML]{7FFFD0}$G_{23}^\prime G_{21}^\prime $&\cellcolor[HTML]{7FFFD0}$G_{23}^\prime G_{22}^\prime $&\cellcolor[HTML]{7FFFD0}$G_{23}^\prime G_{23}^\prime $&\cellcolor[HTML]{7FFFD0}$G_{23}^\prime G_{24}^\prime $\\
$G_{24}^\prime $ &0 &0& 0&0&\cellcolor[HTML]{7FFFD0}$G_{24}^\prime G_{21}^\prime $&\cellcolor[HTML]{7FFFD0}$G_{24}^\prime G_{22}^\prime $&\cellcolor[HTML]{7FFFD0}$G_{24}^\prime G_{23}^\prime $&\cellcolor[HTML]{7FFFD0}$G_{24}^\prime G_{24}^\prime $\\
\end{tabular}
\caption{Matrix illustrating the scheme for constructing $IZ$ substrings in the JW-transformed unitary of the UCCT factors. The blue entry is the starting reference state. After the first step using the identity in Eq.~(\ref{eq:triplegroup1}), the grey entry $G_{14}^\prime G_{14}^\prime$ is created. Using the identities in Eqs.~(\ref{eq:triplegroup9}) and (\ref{eq:triplegroup2}), the yellow entries are created after the second step. The orange and pink columns are created after the next two steps via the identities in Eqs.~(\ref{eq:triplegroup10}) through (\ref{eq:triplegroup4}). The final step is to apply the identities in Eqs.~(\ref{eq:triplegroup5}) through (\ref{eq:triplegroup8}) to create the entire green block.}
\label{table:triplestableprime}
\end{table*}
For a UCC factor with arbitrary rank $n$, a total of $n-1$ transformations are required to complete the first column of the matrix. An additional $n-1$ transformations are then required to complete the rest of the diagonal or off-diagonal block, depending on whether the subterms are $XY$ strings or $IZ$ strings. The final step is to perform the ``flip'' transformation to fill in the entire matrix. Therefore, for any UCC factor of rank $n$, a total of $2n-1$ steps are needed to construct the JW-transformed unitary operator.
\begin{table*}[h!]
\centering
\begin{tabular}{c|cccccccc}
$\otimes$&$G_{21}$ & $G_{22}$ &$G_{23}$ &$G_{24}$&$G_{25}$ & $G_{26}$ &$G_{27}$ &$G_{28}$ \\
\hline
$G_{11}$ &\cellcolor[HTML]{89CFF0}$G_{11}G_{21}$&\cellcolor[HTML]{ED872D}$G_{11}G_{22}$&\cellcolor[HTML]{F4BBFF}$G_{11}G_{23}$&\cellcolor[HTML]{F4BBFF}$G_{11}G_{24}$&\cellcolor[HTML]{F4BBFF}$G_{11}G_{25}$&\cellcolor[HTML]{F4BBFF}$G_{11}G_{26}$&\cellcolor[HTML]{ED872D}$G_{11}G_{27}$&\cellcolor[HTML]{F4C2C2}$G_{11}G_{28}$\\
$G_{12}$ &\cellcolor[HTML]{7FFFD0}$G_{12}G_{21}$&\cellcolor[HTML]{ED872D}$G_{12}G_{22}$&\cellcolor[HTML]{F4BBFF}$G_{12}G_{23}$&\cellcolor[HTML]{F4BBFF}$G_{12}G_{24}$&\cellcolor[HTML]{F4BBFF}$G_{12}G_{25}$&\cellcolor[HTML]{F4BBFF}$G_{12}G_{26}$&\cellcolor[HTML]{ED872D}$G_{12}G_{27}$&\cellcolor[HTML]{F4C2C2}$G_{12}G_{28}$\\
$G_{13}$ &\cellcolor[HTML]{FFE135}$G_{13}G_{21}$&\cellcolor[HTML]{ED872D}$G_{13}G_{22}$&\cellcolor[HTML]{F4BBFF}$G_{13}G_{23}$&\cellcolor[HTML]{F4BBFF}$G_{13}G_{24}$&\cellcolor[HTML]{F4BBFF}$G_{13}G_{25}$&\cellcolor[HTML]{F4BBFF}$G_{13}G_{26}$&\cellcolor[HTML]{ED872D}$G_{13}G_{27}$&\cellcolor[HTML]{F4C2C2}$G_{13}G_{28}$\\
$G_{14}$ &\cellcolor[HTML]{FFE135}$G_{14}G_{21}$&\cellcolor[HTML]{ED872D}$G_{14}G_{22}$&\cellcolor[HTML]{F4BBFF}$G_{14}G_{23}$&\cellcolor[HTML]{F4BBFF}$G_{14}G_{24}$&\cellcolor[HTML]{F4BBFF}$G_{14}G_{25}$&\cellcolor[HTML]{F4BBFF}$G_{14}G_{26}$&\cellcolor[HTML]{ED872D}$G_{14}G_{27}$&\cellcolor[HTML]{F4C2C2}$G_{14}G_{28}$\\
$G_{15}$ &\cellcolor[HTML]{FFE135}$G_{15}G_{21}$&\cellcolor[HTML]{ED872D}$G_{15}G_{22}$&\cellcolor[HTML]{F4BBFF}$G_{15}G_{23}$&\cellcolor[HTML]{F4BBFF}$G_{15}G_{24}$&\cellcolor[HTML]{F4BBFF}$G_{15}G_{25}$&\cellcolor[HTML]{F4BBFF}$G_{15}G_{26}$&\cellcolor[HTML]{ED872D}$G_{15}G_{27}$&\cellcolor[HTML]{F4C2C2}$G_{15}G_{28}$\\
$G_{16}$ &\cellcolor[HTML]{FFE135}$G_{16}G_{21}$&\cellcolor[HTML]{ED872D}$G_{16}G_{22}$&\cellcolor[HTML]{F4BBFF}$G_{16}G_{23}$&\cellcolor[HTML]{F4BBFF}$G_{16}G_{24}$&\cellcolor[HTML]{F4BBFF}$G_{16}G_{25}$&\cellcolor[HTML]{F4BBFF}$G_{16}G_{26}$&\cellcolor[HTML]{ED872D}$G_{16}G_{27}$&\cellcolor[HTML]{F4C2C2}$G_{16}G_{28}$\\
$G_{17}$ &\cellcolor[HTML]{7FFFD0}$G_{17}G_{21}$&\cellcolor[HTML]{ED872D}$G_{17}G_{22}$&\cellcolor[HTML]{F4BBFF}$G_{17}G_{23}$&\cellcolor[HTML]{F4BBFF}$G_{17}G_{24}$&\cellcolor[HTML]{F4BBFF}$G_{17}G_{25}$&\cellcolor[HTML]{F4BBFF}$G_{17}G_{26}$&\cellcolor[HTML]{ED872D}$G_{17}G_{27}$&\cellcolor[HTML]{F4C2C2}$G_{17}G_{28}$\\
$G_{18}$ &\cellcolor[HTML]{B2BEB5}$G_{18}G_{21}$&\cellcolor[HTML]{ED872D}$G_{18}G_{22}$&\cellcolor[HTML]{F4BBFF}$G_{18}G_{23}$&\cellcolor[HTML]{F4BBFF}$G_{18}G_{24}$&\cellcolor[HTML]{F4BBFF}$G_{18}G_{25}$&\cellcolor[HTML]{F4BBFF}$G_{18}G_{26}$&\cellcolor[HTML]{ED872D}$G_{18}G_{27}$&\cellcolor[HTML]{F4C2C2}$G_{18}G_{28}$\\
\end{tabular}
\caption{Off-diagonal block of a matrix illustrating the scheme for constructing $XY$ substrings in the JW-transformed unitary of the UCCQ factors. The blue entry of the first column is chosen to be the starting reference state. The grey entry is created after the first step, followed by the green entries in the second step and the yellow entries in the third step. The pink entries of the last column are created after the fourth step, followed by the orange columns in the fifth step and purple columns in the sixth step.}
\label{table:quadoffblock}
\end{table*}
\subsection{Gate counts}
The LCU framework for rank-$n$ UCC factors is given by a simple circuit implementation of the JW-transformed unitary in the form of
\begin{equation}
\sum_{p\neq q}\sum_r G_{pr}\otimes G_{qr} + G^\prime_{rp}\otimes G^\prime_{rq},
\end{equation}
where each group $\mathbf{G_{p}}$ contains $2^{n-1}$ elements that commute with every other element in the same group but anti-commute with elements in the other group. The total number of steps in the \textsc{Select}($\hat U$) subroutine is $2n-1$ for rank-$n$ factors. Within each step of the \textsc{Select}($\hat U$) subroutine, two single-qubit controlled Pauli $Z$ operators are needed, making the total number of \textsc{Cnot} gates $4n-2$. In addition, $4n+\sum_{i=1}^{2n-2} \rho_i$ \textsc{Cnot} gates are needed for initializing the reference Jordan-Wigner strings. Here, $\rho_i$ is the number of qubits between the $i$th pair of the active orbitals. In the case of the UCCD factor discussed in Secs.~\ref{sec:prepare} and \ref{sec:select}, $\rho_1$ is the number of qubits between qubits $k$ and $l$ and $\rho_2$ is the number of qubits between qubits $j$ and $i$. The circuit for preparing the ancilla bank hosts the majority of the complexity, where the total \textsc{Cnot} cost is $\frac{8}{3}(8n^3-6n^2-\frac{41n}{4}+9)$. The total number of \textsc{Cnot} gates used in the LCU circuit for preparing the JW-transformed unitary is then $6\cdot (\frac{8}{3}(8n^3-6n^2-\frac{41n}{4}+9))+3\cdot (8n-2+\sum_{i=1}^{2n-2}\rho_i)$, which can be further simplified to $128n^3-96n^2-140n+138+3\sum_{i=1}^{2n-2}\rho_i$. Note that this has a large prefactor in the scaling with the rank. The total number of ancilla qubits required for this framework is $2n$. For a UCC factor with an arbitrary set of active orbitals, the \textsc{Cnot} cost of the circuits proposed in our work eventually becomes favorable compared with other existing methods, including the one proposed in the authors' previous work \cite{xu_lee_freericks_2022}, when the rank becomes large ($n\geq 9$); for low-rank factors, the fermionic-excitation-based (FEB) algorithm proposed in Ref.~\cite{magoulas_evangelista_2023} is more efficient.
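A quick numerical check of this simplification (our own illustrative Python sketch; the list of gaps $\rho_i$ below is a placeholder, not data from a specific molecule) is
\begin{verbatim}
import math

def total_cnot(n, rho):
    # Term-by-term count: 6 x Prepare cost + 3 x Select cost
    prepare = 8 / 3 * (8 * n ** 3 - 6 * n ** 2 - 41 * n / 4 + 9)
    select = 8 * n - 2 + sum(rho)   # (4n - 2) + (4n + sum of rho_i)
    return 6 * prepare + 3 * select

def total_cnot_simplified(n, rho):
    # Expanded polynomial quoted in the text
    return 128 * n ** 3 - 96 * n ** 2 - 140 * n + 138 + 3 * sum(rho)

for n in range(2, 10):
    rho = list(range(1, 2 * n - 1))   # 2n - 2 placeholder gaps
    assert math.isclose(total_cnot(n, rho), total_cnot_simplified(n, rho))
\end{verbatim}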
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{cnot_comparison.png}
\caption{\textsc{Cnot} gate counts for three different algorithms: the standard \textsc{Cnot} cascading circuits, the FEB circuits \cite{magoulas_evangelista_2023}, and the linear-combination-of-unitary query circuits introduced in this work.}
\label{fig:cnotcomparison}
\end{figure}
\section{Discussion}
In summary, we have introduced an application of the LCU query model that efficiently simulates the factorized UCC ansatze with a scaling that goes like the cube of the rank. We have demonstrated the quantum circuits for the \textsc{Prepare} and the \textsc{Select}($\hat U$) subroutines: the \textsc{Cnot} count of the \textsc{Select}($\hat U$) circuit scales linearly with the rank of the UCC ansatze and the number of active spin-orbitals, while the \textsc{Prepare} operator employs a quantum circuit whose cost scales as the cube of the rank of the UCC factor and requires a linear number of ancilla. The proposed LCU framework greatly reduces the number of two-qubit gates for high-rank UCC factors, which are needed for simulating strongly correlated systems on NISQ devices and future fault-tolerant quantum computers \cite{freericks_2022}. Alternatively, a low-rank factorization method does exist for the electronic Hamiltonian and the UCC operator \cite{motta_ye_mcclean_li_minnich_babbush_chan_2021}, but it is not clear how well it extends to high rank, so we do not compare the scaling of their method with ours. Circuits that implement the \textsc{Select}($\hat U$) subroutine for more general Jordan-Wigner strings with linear scaling have also been developed \cite{wan_2021}. Other approaches that optimize the \textsc{Cnot} cascading circuits of Ref.~\cite{romero_babbush_mcclean_hempel_love_aspuru-guzik_2018} for low-rank factors have also been developed recently \cite{wang_li_monroe_nam_2021, cowtan_dilkes_duncan_simmons_sivarajah_2020, magoulas_evangelista_2023}.
\section{Acknowledgements}
We acknowledge helpful discussions with Joseph T. Lee. L. Xu and J. K. Freericks were supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR), Quantum Computing Application Teams (QCATS) program, under field work proposal number ERKJ347. J. K. Freericks was also supported by the McDevitt bequest at Georgetown University.
|
{
"arxiv_id": "2302.08690",
"language": "en",
"timestamp": "2023-02-20T02:06:50",
"url": "https://arxiv.org/abs/2302.08690",
"yymm": "2302"
} | \section{\label{sec:level1}Introduction}
Reliability is an unavoidable crux in the quest for beyond-classical computational capabilities. In circuit-based quantum computation, improving the reliability of an entire computational task decomposes into a series of subtasks, among which implementing high-fidelity single-qubit gates is an important component. For example, both noisy intermediate-scale quantum (NISQ) applications and fault-tolerant quantum computation, i.e. the near-term and the ultimate goals of circuit-based quantum computation, impose requirements on gate fidelities that exceed state-of-the-art values. As a result, considerable efforts have been invested in realizing high-fidelity single-qubit gates in leading platforms for quantum information processing.
In superconducting quantum computation, great progress has been made over the past two decades, including the realization of accurate and precise quantum gates. The single- and two-qubit gate errors in transmon qubits are below $10^{-3}$ and $10^{-2}$ \cite{jurcevic2020,somoroff2021,sungRealizationHighFidelityCZ2021a,stehlik2021PRL,kandala2021PRL,wei2022PRL,acharyaQEC2022,bao2022PRL}, respectively, and the single-qubit gate error in the fluxonium qubit is below $10^{-4}$ owing to a millisecond coherence time \cite{somoroffMillisecondCoherenceSuperconducting2021}. To further improve the single-qubit gate fidelity, identifying the nature of the dominant errors is particularly important. Moreover, the fidelity of single-qubit gates can be seen as an upper bound for that of two-qubit gates.
One of the essential requirements for the reliable implementation of circuit-based quantum computation is a sufficiently large ratio between the coherence time and the gate length. To increase this ratio, one prevalent approach is to increase the coherence times of the quantum devices, including the energy relaxation time and the dephasing time. For the superconducting transmon system, the detrimental impact of two-level-system (TLS) defects in dielectrics on the energy-relaxation time has been extensively studied~\cite{melville2020APL, murray2021MSaERR, woods2019PRA}. To relieve this impact, two alternative directions have been explored: one is to suppress the coupling between the TLS defects and the transmon by optimizing the geometry~\cite{martinis2022nQI}, and the other is to lower the density of the TLS defects by appropriate materials and recipes~\cite{Place2017nc,wang2022nQI}.
On the other hand, environmental noise, such as fluctuations of the magnetic flux and residual thermal photons, can cause qubit dephasing and thus decrease the dephasing time \cite{bialczak2007PRL,goetz2017PRL, tomonaga2021PRB, yan2018PRL}. A possible solution is to keep the transmon qubit well separated from its environment, e.g. by mitigating its couplings to the drive lines, the readout cavity, or the neighboring qubits. However, this is obviously in contradiction with the implementation of fast qubit control, which requires the qubit to be easily driven. As a result, balancing the coherence times and the gate length becomes particularly important for the implementation of high-fidelity single-qubit gates.
Besides, to mitigate the impact of spurious reflection signals due to impedance mismatch in the line, a short buffer should be added after each single-qubit operation to avoid residual pulse overlap \cite{chow2009PRL}.
In this work, we design and fabricate a superconducting device consisting of superconducting transmons, which have long coherence times and support strong driving. With this device, we implement a set of single-qubit gates constructed from the $X_{\frac{\pi}{2}}$ gate and virtual Z gates. The average gate fidelity, as well as the fidelities for $\pi/2$-pulses, is benchmarked to be higher than $99.99\%$, exceeding the state-of-the-art record in superconducting transmon-qubit systems. We also analyse the sources of the residual errors, including the incoherent error and the leakage rate per Clifford gate. As a cross validation, we experimentally obtain process matrices for the identity and $\pi/2$-pulses with gate-set tomography. Besides extending the computational upper limit of a transmon-qubit processor, our experiment also indicates that the bottleneck to further increasing the reliability might be suppressing the non-Markovian effect.
\section{\label{sec:level2}Experiment}
We implement high-fidelity single-qubit gates on a fixed-frequency transmon qubit fabricated with tantalum films~\cite{Place2017nc,wang2022nQI}. As shown in the inset of Fig.~\ref{fig:gate_characterization}(a), the qubit, labelled $Q_5$, is embedded in a superconducting device consisting of five separate transmon qubits, each of which is coupled to a readout cavity sharing one transmission line. The $Q_5$ qubit is coupled to a microwave control line, which facilitates fast single-qubit operations. The transition frequency between $|0\rangle$ and $|1\rangle$ and the anharmonicity are $\omega_{01} = 2\pi\times4.631\:\mathrm{GHz}$ and $\Delta = -2\pi\times240 \:\mathrm{MHz}$, respectively. As for the coherence times, the energy relaxation time and the dephasing time are measured to be $T_{1} = 231$ $\mu$s and $T^E_{2}=204$ $\mu$s. These long coherence times indicate that the qubit is well isolated from its external environment. The electronics for the control and measurement system are also shown in Fig.~\ref{fig:gate_characterization}(a). At this coherence-time level, single-qubit gates with a gate length of $20$ ns and single-shot measurement with high fidelities (see Supplementary Materials (SM)~\cite{supplementary}) can be achieved.
\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{wiring_experiment_221210.pdf}
\caption{\label{fig:gate_characterization}(a) The optical image of the qubit device and the schematic of the experiment setup.
(b) Calibration of the pulse amplitude $\Omega_{0}$ for the $X_{\pi/2}$ gate.
The inset shows the pseudo-identity sequence. (c) Optimization of the pulse detuning. The pseudo-identity sequence is shown in the inset. The curves show the excited-state population as a function of the detuning $\delta{f}$ for $N$ = 50, 100, 200 pairs. (d) $X_{\pi}$ and $X_{-\pi}$ gate sequence used to check the trailing edge of the pulse. There is no population change along the duration of the buffer $t_{\rm{buff}}$ of the pseudo-identity sequence.
}
\end{figure}
Specifically, we use the single-sideband (SSB) technology to modulate the control pulses, where the microwave signals are generated by a Digital-to-Analog Converter (DAC), a Signal Generator (SG1) and a mixer. To suppress local leakage and unwanted spurious signals, like the image or reflection of the control signals due to impedance mismatch, a bandpass filter and an isolator are introduced before the control signals access the qubit. The readout pulses are generated by a radio frequency-DAC (RF-DAC) with the sampling rate of 25 GS/s and fed into the fridge from the positive ($+$) port, while an out-of-phase signal from the negative ($-$) port is used as reference. Finally, the down-converted signals, going out of the fridge, are collected by an Analog-to-Digital Converter (ADC).
In this experiment, we construct arbitrary single-qubit rotations with $X_{\frac{\pi}{2}}$ pulses and virtual Z gates~\cite{mckay2017PRA}. As the virtual Z gates can be treated as faultless, $X_{\frac{\pi}{2}}$ is the only gate that needs precise calibration. Here we implement the $X_{\frac{\pi}{2}}$ gate by a microwave pulse with a cosine-shaped envelope $\Omega(t) = \Omega_0\left(1-\cos(2\pi t/t_g)\right)$ with the gate length $t_g=20~{\rm ns}$. To suppress leakage and phase errors, we introduce the derivative reduction by adiabatic gate (DRAG) scheme~\cite{gambetta2011PRA, motzoi2009PRL, chen2016PRL} for the pulse envelope, i.e. $\Omega_{\rm DRAG}(t)=e^{i2\pi\delta f t}\left(\Omega(t)-i\alpha\frac{\dot\Omega(t)}{\Delta}\right)$ with $\Delta$ being the anharmonicity of the transmon qubit, where $\alpha$ and $\delta f$ are the DRAG weighting and detuning, respectively. Considering the fact that long periodic sequences can boost the sensitivity to certain coherent errors, we design and implement long-sequence calibration schemes to determine the exact values of the pulse amplitude $\Omega_0$ and the DRAG detuning $\delta f$. To be specific, $\Omega_0$ is calibrated with $N$ repetitions of the $X_\pi$ pulse, each of which is composed of two $X_{\frac{\pi}{2}}$ pulses. With $N$ being a large and odd number, we sweep the value of $\Omega_0$, with the optimum marked by a peak in the population of the $\ket{1}$ state, denoted as $P_1$. The calibration results are shown in Fig.~\ref{fig:gate_characterization}(b). The periodic calibration sequences for $\delta f$ consist of $N$ pseudo-identity operators, each of which is composed of a composite $X_\pi$ pulse and its inverse gate, i.e. the $X_{-\pi}$ gate. As shown in Fig.~\ref{fig:gate_characterization}(c), the minimum in $P_1$ when sweeping $\delta f$ gives the optimum of $\delta f$. The experimental results clearly demonstrate the increasing sensitivity as the sequence length, which is $\propto N$, increases. The final step is to determine the buffer length beyond which the spurious signals can be neglected. In order to shorten the buffer time, we measure the trailing edge of the driving pulse by directly using the qubit~\cite{gustavsson2013PRL} and then correct the signal by pre-distortion. The efficacy of this technique is reflected in the experimental data shown in Fig.~\ref{fig:gate_characterization}(d). Starting from the $\ket{0}$ state, we implement a sequence consisting of 50 pairs of $\left(X_{\pi}, X_{-\pi}\right)$ pulses and measure the population in the $\ket{1}$ state. We collect experimental data by sweeping the duration of the buffer and the DRAG weighting parameter $\alpha$, with the variation of the latter providing an oscillating pattern. Although we observe no visible movement of the pattern as the buffer time increases, we still choose $t_{\rm buff}=2$ ns to guarantee that there is no overlap between pulses, i.e. the gate length of a single $X_{\frac{\pi}{2}}$ gate is 22 ns.
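For concreteness, a minimal numerical sketch of how such a cosine DRAG envelope can be generated is given below (our own illustration assuming NumPy; the parameter values are placeholders rather than the calibrated ones used in the experiment):
\begin{verbatim}
import numpy as np

def drag_pulse(t, t_g, omega0, alpha, delta_f, anharm):
    # Cosine envelope with DRAG correction and detuning, as defined in the text
    env = omega0 * (1 - np.cos(2 * np.pi * t / t_g))
    denv = omega0 * (2 * np.pi / t_g) * np.sin(2 * np.pi * t / t_g)
    return np.exp(1j * 2 * np.pi * delta_f * t) * (env - 1j * alpha * denv / anharm)

t_g = 20e-9                          # 20 ns gate length
t = np.linspace(0.0, t_g, 201)       # waveform sample times
anharm = -2 * np.pi * 240e6          # anharmonicity Delta (rad/s)
waveform = drag_pulse(t, t_g, omega0=1.0, alpha=0.5, delta_f=0.0, anharm=anharm)
\end{verbatim}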
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{RB99992_230215_bootstrap_noXY.pdf}
\caption{\label{fig:rb_exp} Average sequence fidelity as a function of the number of Clifford gates for the reference and interleaved RB. Each data point is averaged over 20 different random sequences, with the error bars being the standard error of the mean. The average error rate, obtained from the reference RB, and the error rates of the identity and the $\pm\pi/2$ pulses are listed in the legend, which are all smaller than $10^{-4}$, and the uncertainties are obtained by bootstrapping. For comparison, the yellow dashed line indicates the exponential decay for the average gate fidelity of 0.9999.
}
\end{figure}
The Clifford-based RB~\cite{emerson2007symmetrized, knill2008randomized}, as well as the interleaved RB~\cite{barends2014N}, is widely adopted to characterize the performance of quantum devices and is approximately independent of errors in the state preparation and measurement (SPAM). Essentially, the RB experiment uses the fidelities of random pseudo-identity sequences, consisting of $m$ randomly sampled Clifford gates and their inverse, to estimate the average gate fidelity. With the original RB sequences as reference, the interleaved RB sequences have the target gate, which should also be a Clifford gate, interleaved between the Clifford gates in the reference sequences, with the last inverse gate recalculated to guarantee pseudo-identity. The waveforms corresponding to all sequences are collected first, and then the DAC outputs them sequentially. To get better statistics, we generate 20 different random sequences for each value of $m$ and repeat each sequence 1024 times. The maximum number of Clifford gates is 4500, which is chosen to achieve a credible precision ($\mathcal{O}(1/N)$). To partly resist the impact of the unstable readout and the temporal drift of the system parameters, we perform the reference and interleaved RB sequences alternately with respect to the Clifford gate number $m$.
The measured average sequence fidelities for the reference and interleaved RB experiments are shown in Fig.~\ref{fig:rb_exp}, which are fitted by the exponential model $F_{\rm seq}(m) = Ap^m + B$, with $m$ being the sequence length, to obtain the reference and interleaved decay parameters, denoted as $p_{\rm ref}$ and $p_{\rm int}$, respectively. Here $A$ and $B$ are fitting parameters to accommodate the errors in SPAM and the last gate. The error per Clifford (EPC) is then extracted by $r_{\rm{clif}} = (1-p_{\rm{ref}})(1-1/d)$, with $d=2$ for a single qubit. Combined with the fact that in this experiment each Clifford gate consists of 2.2083 $\frac{\pi}{2}$-pulses on average, the average error per gate (EPG) is given by $r_{\rm avg}=r_{\rm clif}/2.2083=(7.42\pm0.04)\times10^{-5}$. To assess the performance of each of the single-qubit gates in ${\mathcal G}\equiv\left\{I, X_{\pm\frac{\pi}{2}}, Y_{\pm\frac{\pi}{2}}\right\}$, we continue to perform interleaved RB experiments, where the gate sequences are derived from those of the reference RB experiment by adding the specific gate right after each of the $m$ random Clifford gates. The EPG for the interleaved gate $G\in{\mathcal G}$ is given by $r_G=(1 - p_{\rm int}/p_{\rm ref})(1 - 1/d)$. As shown in Fig.~\ref{fig:rb_exp}, the average EPG and those of the specific gates in ${\mathcal G}$ are all lower than $10^{-4}$, with the EPG of the identity operator $r_I$ being the lowest, indicating the coherence limit of the transmon qubit.
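The fitting and error-extraction steps described above are straightforward to reproduce; the following minimal sketch (our own illustration assuming NumPy and SciPy, with synthetic placeholder data in place of the measured sequence fidelities) summarizes them:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    return A * p ** m + B

# Placeholder sequence lengths and synthetic average sequence fidelities
m = np.array([1, 100, 500, 1000, 2000, 3000, 4500])
F_ref = 0.5 + 0.5 * 0.99983 ** m      # stands in for reference-RB data
F_int = 0.5 + 0.5 * 0.99975 ** m      # stands in for interleaved-RB data

(_, p_ref, _), _ = curve_fit(rb_decay, m, F_ref, p0=[0.5, 0.999, 0.5])
(_, p_int, _), _ = curve_fit(rb_decay, m, F_int, p0=[0.5, 0.999, 0.5])

d = 2                                    # single qubit
r_clif = (1 - p_ref) * (1 - 1 / d)       # error per Clifford (EPC)
r_avg = r_clif / 2.2083                  # average error per pi/2 pulse (EPG)
r_G = (1 - p_int / p_ref) * (1 - 1 / d)  # EPG of the interleaved gate
\end{verbatim}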
\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{PurityRB99995_230215_1.pdf}
\caption{\label{fig:pb_exp}(a) The sequence purity averaged over 100 random sequences as a function of the number of Cliffords $m$ for the reference. (b) The RB sequence fidelity (100 averages) as a function of the number of Cliffords $m$ for the reference. (c) $|2\rangle$-state population versus the gate number, showing the accumulation of leakage with the number of gates.
}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=1.0\textwidth]{gstsimrb_matrix_230215_noXY.pdf}
\caption{\label{fig:gst_mv_mat}
(a) Reconstructed process matrices estimated by GST. The process-matrix estimates of the $I$, $X_{\frac{\pi}{2}}$ and $Y_{\frac{\pi}{2}}$ gates are shown as superoperators in the Pauli basis, respectively. (b) RB data simulated using the gate set $\mathcal{G}_0$ reconstructed from the experimental GST results. The reconstructed process matrices are used to generate the single-qubit Clifford group, simulate the RB experiment, and analyse the gate fidelity.
}
\end{figure*}
To investigate error sources, we implement the purity benchmarking (PB) experiment~\cite{wallman2015NJP} and monitor the $\ket{2}$-state population, to characterize the imperfections caused by decoherence and leakage. PB is an RB-based technique, which is deliberately designed to be insensitive to coherent control errors. The random Clifford gate sequences used in PB are generated in the same way as those in RB, with the final projective measurement replaced by quantum state tomography. The experimental data of the average sequence purity, i.e. $P_{\rm seq}(m)=\left\langle{\rm Tr}\left[\hat\rho^2\right]\right\rangle_m$ with the average taken over random sequences of length $m$, are then fitted by the exponential model $P_{\rm seq}(m)=Au^{m-1}+B$ to get the average purity decay parameter $u$, which gives a lower bound for the average decoherence EPC $r_{\rm clif}^{\rm dec}=(1-\sqrt{u})(1-1/d)$. To obtain the leakage rate per Clifford $\Gamma_{\rm clif}$, we fit the measurement results of the $\ket{2}$-state population by a discrete rate equation $P_{\ket{2},m}=P_{\ket{2},\infty}\left(1-\exp\left(-\Gamma_{\rm clif} m\right)\right)+P_{\ket{2},0}\exp\left(-\Gamma_{\rm clif} m\right)$. With the experimental results in Figs.~\ref{fig:pb_exp}(a) and (c), the lower bound of the average decoherence EPC and the leakage rate per Clifford are estimated to be $r_{\rm clif}^{\rm dec}=\left(1.02\pm0.01\right)\times10^{-4}$ and $\Gamma_{\rm clif}=\left(2.57\pm0.08\right)\times10^{-5}$, respectively. The lower bound of the decoherence EPG and the leakage rate per gate are $r_{\rm avg}^{\rm dec} = r_{\rm clif}^{\rm dec}/2.2083 = \left(4.62\pm0.04\right)\times10^{-5}$ and $\Gamma_{\rm avg}=\Gamma_{\rm clif}/2.2083=(1.16\pm0.04)\times10^{-5}$, with the former consistent with the expected EPG in the decoherence limit, i.e. $4.77\times10^{-5}$, given by numerically solving the master equation. Together with the average EPG obtained using the same set of measurement results, i.e. $r'_{\rm avg}=r'_{\rm clif}/2.2083=\left(9.42\pm0.09\right)\times10^{-5}$ as shown in Fig.~\ref{fig:pb_exp}(b), we estimate that the lower bound of the incoherent error contribution is $r_{\rm avg}^{\rm dec}/r'_{\rm avg} = 49.04\%$.
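The purity and leakage fits can be reproduced in the same spirit; a minimal sketch (our own illustration with synthetic placeholder data, assuming NumPy and SciPy) is
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def purity_decay(m, A, u, B):
    return A * u ** (m - 1) + B

def leakage_model(m, P_inf, Gamma, P_0):
    return P_inf * (1 - np.exp(-Gamma * m)) + P_0 * np.exp(-Gamma * m)

m = np.array([1, 100, 500, 1000, 2000, 3000, 4500])
purity = 0.5 + 0.5 * 0.99978 ** (m - 1)     # stands in for measured purities
p2 = 1e-3 * (1 - np.exp(-2.6e-5 * m))       # stands in for |2> populations

(_, u, _), _ = curve_fit(purity_decay, m, purity, p0=[0.5, 0.999, 0.5])
(_, Gamma_clif, _), _ = curve_fit(leakage_model, m, p2, p0=[1e-3, 1e-5, 0.0])

d = 2
r_clif_dec = (1 - np.sqrt(u)) * (1 - 1 / d)  # decoherence EPC lower bound
r_avg_dec = r_clif_dec / 2.2083              # per pi/2 pulse
Gamma_avg = Gamma_clif / 2.2083              # leakage rate per gate
\end{verbatim}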
Besides the average gate fidelity and the error budget, experimentally obtained process matrices undoubtedly provide more information on and insights into the implemented quantum operations, the underlying physical platform, and the associated control system. Here we introduce GST to obtain the process matrices of a single-qubit gate set ${\mathcal G}\equiv\left\{I, X_{\frac{\pi}{2}}, Y_{\frac{\pi}{2}}\right\}$. Compared to RB and quantum process tomography (QPT), the advantages of GST are threefold. First, it provides process matrices of quantum operations in the same context, which are optimized simultaneously to fit the experimental outcomes. Next, the SPAM errors are separated from those in the noisy quantum gates by the introduction of the Gram matrix, i.e. a mathematical description of the SPAM. Finally, an accuracy of $10^{-5}$ can be obtained with an experimentally feasible number of repetitions by using long and periodic gate sequences.
To facilitate the GST experiment and the following data analysis, we take advantage of the python package pyGSTi~\cite{nielsen2020QST}. The procedure begins with constructing a fiducial set ${\mathcal F}\equiv\{\emptyset, X_{\frac{\pi}{2}}, Y_{\frac{\pi}{2}}, X_{\frac{\pi}{2}}X_{\frac{\pi}{2}}X_{\frac{\pi}{2}}, Y_{\frac{\pi}{2}}Y_{\frac{\pi}{2}}Y_{\frac{\pi}{2}}, X_{\frac{\pi}{2}}X_{\frac{\pi}{2}}\}$, where $\emptyset$ denotes the null sequence. Acting on the initial state $\ket{0}$ and before the measurement on the $z$-basis, the operations in ${\mathcal F}$ span a symmetric and informationally-complete reference frame, in which arbitrary quantum operations can be unambiguously determined. To achieve Heisenberg-limited accuracy with respect to the circuit depth, a set of short circuits, i.e. the germs, should be elaborately chosen to amplify deviations in all parameters. The germ set contains 12 short circuits, i.e. $\{I, X_{\frac{\pi}{2}},Y_{\frac{\pi}{2}},X_{\frac{\pi}{2}}Y_{\frac{\pi}{2}},X_{\frac{\pi}{2}}X_{\frac{\pi}{2}}Y_{\frac{\pi}{2}},X_ {\frac{\pi}{2}}Y_{\frac{\pi}{2}}Y_{\frac{\pi}{2}},X_{\frac{\pi}{2}}Y_{\frac{\pi}{2}}I,X_{\frac{\pi}{2}}I\\Y_{\frac{\pi}{2}},X_{\frac{\pi}{2}}II,Y_{\frac{\pi}{2}}II,X_{\frac{\pi}{2}}Y_{\frac{\pi}{2}}Y_{\frac{\pi}{2}}I,X_{\frac{\pi}{2}}X_{\frac{\pi}{2}}Y_{\frac{\pi}{2}}X_{\frac{\pi}{2}}Y_{\frac{\pi}{2}}Y_{\frac{\pi}{2}}\}$, each of which is then repeated up to a length in the logarithmically spaced set $L\in\left\{1, 2, 4,\ldots, 4096\right\}$ to construct periodic gate sequences. The maximum depth is chosen such that, together with the repetition time for each circuit being $M=1024$, it leads to an estimated accuracy of $\sim\frac{1}{\sqrt{M}L}<10^{-5}$. The collected experimental data are analysed by the maximum likelihood estimation, leading to process matrices for the gate set which maximizes the log-likelihood function. The reconstructed process matrices are shown in Fig.~\ref{fig:gst_mv_mat} (a). With these matrices, we simulate the gate sequences used in the RB experiment shown in Fig.~\ref{fig:rb_exp} and obtain the simulated EPG $r^{\rm{sim}}_{\rm{avg}} = (8.97\pm0.03)\times10^{-5}$, as shown in Fig. \ref{fig:gst_mv_mat} (b). Note that these experiments, i.e. RB and GST, are executed in a relatively long period of time. The consistency of these two values shows the stability of our superconducting device, which is capable of implementing high-fidelity quantum gates over a relatively long period of time, while the deviation between them gives an intuitive quantification of the temporal fluctuation, which is on the level of $\sim 10^{-5}$.
Besides the estimated process matrices, the GST experiment also reveals the temporal-correlation property, i.e. the non-Markovianity, of the physical platform in terms of model violations~\cite{nielsen2021Q}. GST models a noisy quantum device with completely positive and trace-preserving quantum channels, which are expressed by process matrices. Model violations emerge when there are deviations between the probabilities predicted by the optimal process matrices obtained from the GST and the observed frequencies in experiments, probably due to the existence of time-correlated errors, e.g. the slow drifting of system parameters. In pyGSTi, model violation is quantified by the log-likelihood score, which is operationally defined as the distance between the optimized log-likelihood function and its mean, measured in units of its standard deviation, with the statistics given by the $\chi^2$-distribution widely used in hypothesis tests. As to the GST experiment carried out on our device, the individual log-likelihood scores (see SM~\cite{supplementary}) show statistically significant model violation mostly in long sequences with depth $\geq 256$. In other words, the non-Markovian effect can be measured at the accuracy level of $\sim10^{-4}$, which is comparable to the average EPG given by the RB experiment.
\section{\label{sec:level3}Discussion}
Although the non-Markovian errors defy accurate and reliable error analysis of the benchmarking results, direct monitoring of the control parameters still provides useful information about the error sources of the experimental platform. We consider two of the possible noise sources, i.e. classical noise from the electronics in the control system and the fluctuation of the transmon frequency, where the former corresponds to the AWG amplitude fluctuation in our experiment. These two fluctuations are monitored to be about $0.3\%$ and $0.1$ MHz (see SM~\cite{supplementary}), and their contributions to the EPG are estimated to be $0.2\times10^{-5}$ and $0.1\times10^{-5}$, respectively. Specifically, we numerically simulate the evolution of the Schr\"odinger equation, with the pulse amplitude or the qubit frequency randomly sampled from normal distributions, whose mean and variance are determined by the target values and the monitored fluctuation strengths. As the estimated contributions to the EPG are much lower than the coherent part of the measured EPG, we conclude that the non-Markovianity is the main obstacle to further improving the fidelity.
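To illustrate the kind of Monte Carlo estimate described above, the toy sketch below propagates a single $X_{\pi/2}$ pulse under amplitude noise and averages the resulting gate infidelity (our own simplified illustration: it ignores the DRAG correction, leakage, and detuning, so the number it prints is only indicative and is not the calibrated simulation referred to in the text):
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)

def simulate_xhalf(amp_scale=1.0, t_g=20e-9, steps=400):
    # Rotating-frame evolution of one cosine-envelope X_{pi/2} pulse
    dt = t_g / steps
    t = (np.arange(steps) + 0.5) * dt
    omega0 = np.pi / (2 * t_g)                 # total pulse area pi/2
    omega = amp_scale * omega0 * (1 - np.cos(2 * np.pi * t / t_g))
    U = np.eye(2, dtype=complex)
    for w in omega:
        angle = w * dt / 2
        U = (np.cos(angle) * np.eye(2) - 1j * np.sin(angle) * sx) @ U
    return U

U_ideal = simulate_xhalf()
rng = np.random.default_rng(0)
infid = []
for _ in range(200):
    U = simulate_xhalf(amp_scale=1 + rng.normal(0.0, 0.003))  # 0.3% noise
    tr = np.trace(U_ideal.conj().T @ U)
    infid.append(1 - (abs(tr) ** 2 + 2) / 6)   # average gate infidelity
print(np.mean(infid))
\end{verbatim}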
\begin{acknowledgments}
This work is supported by the NSFC of China (Grants No. 11890704, 12004042, 11674376), the NSF of Beijing (Grants No. Z190012), National Key Research and Development Program of China (Grant No. 2016YFA0301800) and the Key-Area Research and Development Program of Guang-Dong Province (Grants No. 2018B030326001).
\end{acknowledgments}
|
{
"arxiv_id": "2302.08621",
"language": "en",
"timestamp": "2023-02-20T02:03:43",
"url": "https://arxiv.org/abs/2302.08621",
"yymm": "2302"
} | \section{Introduction}
\label{section:introduction}
Structured data is ubiquitous across various application domains, with examples such as viewing molecules as graphs with node attributes or citation networks as directed graphs.
Developing metrics and efficient algorithms to compare (directed) graphs has traditionally been studied in fields such as graph theory and theoretical computer science.
In the last two decades, this problem has also received tremendous attention in the graph learning and optimization community, especially for comparing (directed) graphs with node attributes (which we will call {\bf labeled graphs}).
In particular, two ideas have become prominent in the modern treatment of labeled graphs. The first idea is to leverage the so-called Weisfeiler-Lehman (WL) graph isomorphism test \citep{leman1968reduction}.
Indeed, the WL test \citep{leman1968reduction} is a classic graph isomorphism test that, in linear time, can distinguish a large family of graphs \citep{babai1979canonical,babai1983canonical}. It has recently gained renewed interest both in designing WL-inspired graph kernels \citep{shervashidze2011weisfeiler,togninalli2019wasserstein} and as a tool for analyzing Message Passing Graph Neural Networks (MP-GNNs) \citep{xu2018powerful,azizian2020expressive}.
The second idea in the modern treatment of graphs is to treat labeled graphs (or related structured data) as suitable discrete measure spaces and then use the idea of Optimal Transport (OT) to compare them. Examples include the Wasserstein WL (WWL) kernel \citep{togninalli2019wasserstein}, the Fused Gromov-Wasserstein (FGW) distance \citep{titouan2019optimal}, and the WL-test-based tree-mover distance \citep{chuangtree}.
Very recently, several studies took a less combinatorial approach and proposed to view graphs as Markov chains: \cite{chen2022weisfeilerlehman} introduced the Weisfeiler-Lehman (WL) distance, which generalizes the graph comparison problem to the Markov chain comparison problem through a WL-like process in a natural way. This distance has been found to be more discriminative than the previously popular WWL graph kernel.
Around the same time, the optimal transition coupling (OTC) distance was proposed by \citet{o2022optimal} for comparing stationary Markov chains, i.e., Markov chains with stationary distributions and the study was followed by \cite{yi2021alignment} with applications in comparing graphs.
The WL distance proposed in \citep{chen2022weisfeilerlehman} and the OTC distance in \citep{o2022optimal} represent two state-of-the-art approaches for comparing labeled graphs (i.e., graphs with node attributes). In fact, both of them compare more general Markov chain-like objects. The Markov chain perspective not only relieves the difficulty of handling the combinatorial structure of graphs but also provides a natural and unified way of modelling both directed and undirected graphs. To broaden the use of these distances, especially in graph learning and optimization (e.g., to use such a distance as a graph reconstruction loss in a generative model), it is crucial that we are able to differentiate such distances w.r.t. changes in the input graphs. However, differentiating these distances appears to be challenging.
\paragraph{Our contributions.}
We propose in \sectionref{generalized-optimal-transport-Markov-distances} a unified framework to generate distances between Markov chains (and thus also for labeled graphs), which we call the \emph{Optimal Transport Markov (OTM)} distances.
We establish that the framework of OTM distances encompasses both the WL distance and the OTC distance and in particular, we prove that the two distances serve as extreme points in the family of OTM distances.
We further identify a special one-parameter family of distances within our general framework of OTM distances and we call our new distance \emph{the \gls{dWLdelta}} (for a parameter $\delta >0$) in \sectionref{sec:wldelta}.
Not only do we unveil succinct connections between our discounted WL distance and both the WL and the OTC distances, but we also show that the discounted WL distance has better theoretical properties than the other two distances:
\begin{enumerate}\itemsep 0pt\parsep=1pt\partopsep 0pt
\item Contrary to the WL and the OTC distances, the discounted WL distance can be used to compare non-stationary Markov chains.
\item The discounted WL distance has the same discriminative power as the OTC distance and possibly stronger discriminative power than the WL distance. %
\item All three types of distances are computed via iterative schemes. We devise an algorithm for the discounted WL distance that provably converges faster than the one for the WL distance introduced in \citep{chen2022weisfeilerlehman}; to the best of our knowledge, there is no known study of the convergence rate of the OTC distance.
\item Furthermore, contrary to both the OTC and the WL distances, the \gls{dWLdelta}s
can be differentiated with respect to their inputs,
enabling a range of possible applications as a loss in machine learning or in other optimization tasks.
In \sectionref{differentiation}, we give a simple formula to compute its gradients.
\end{enumerate}
Note that the effectiveness of the WL distance has already been shown in \citep{chen2022weisfeilerlehman}, where it is favorably compared with other graph kernels. As we show in this paper, our discounted WL distance is provably more discriminative (e.g., Proposition \ref{prop:dwl_lower_bound}), and thus we expect it to lead to even better performance in practice.
\paragraph{Relation to the fused-GW (FGW) distance of \citet{titouan2019optimal}.}
We note that the fused-GW (FGW) distance also leverages the optimal transport idea, and in fact uses the Gromov-Wasserstein distance to compare two graphs (equipped with metric structures at nodes). The authors also developed a heuristic algorithm to approximate this distance in practice. While the algorithm seems to work well in practice \citep{vincent2021online,titouan2019optimal}, there are no theoretical guarantees for it, and in fact the FGW algorithm is only proven to converge to a local minimum (of a provably non-convex function). Current methods for minimizing FGW as a loss rely on a kind of block coordinate descent, alternately updating the
OT matching (using the FGW algorithm) and the parameters by gradient descent with the matching fixed~\citep{vincent2021online,brogat2022learning}.
In contrast, we can compute our $\delta$-discounted WL distance, together with its gradient, exactly, allowing us to easily optimize it.
We further remark that the FGW distance is defined by considering graphs as metric spaces. In particular, the original framework was only defined for symmetric graphs
(which can be viewed as metric spaces with the shortest-path distance). It is, of course, possible to extend its formula to deal with asymmetric graphs, but there is no known theoretical understanding (such as metric properties) in that setting yet.
Our OTM distances (including the $\delta$-discounted WL distance), however, handle directed graphs easily since they are defined for Markov chains.
\section{Preliminary}
We provide a Glossary of all the distance functions and notations we use in
|
{
"arxiv_id": "2302.08640",
"language": "en",
"timestamp": "2023-02-21T02:20:51",
"url": "https://arxiv.org/abs/2302.08640",
"yymm": "2302"
} | \section{Introduction}
The Efimov effect, as predicted in 1970\;\cite{Efimov1970,Efimov1973}, has attracted broad interest in atomic and nuclear physics\;\cite{Kraemer2006,Thomas2010,Berninger2011,Huang2014,Zaccanti2009,Bloom2013,Hu2014,Maier2015,Kato2017,Pires2014,Tung2014Dec,HUlmanis2016,hafner2017role,
Barontini2009,blume2012few,Greene2017universal,Jensen2004structure,Kolganova2011,gong2019compensating,yao2018combined}. When the two-body s-wave scattering length $a$ is tuned to a value large compared to the characteristic range $r_{0}$ of the two-body interaction potential, an infinite series of trimer states is formed. With all three pairs of interactions in the resonant limit, these Efimov trimer states follow a discrete scale invariance, $E_{n} = \lambda^{2}E_{n+1}$, where $n$ denotes the $n$th Efimov state. Here $\lambda = e^{\pi/s_{0}}$ is the scaling constant, and $s_0$ depends on the number of resonant interactions as well as the quantum statistics and mass ratio of a trimer's constituents\;\cite{Eric2006,Naidon2017,IncaoMass2006,Incao2005}.
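As a concrete illustration of this geometric scaling, for three identical bosons with all three pairwise interactions resonant one has $s_{0}\approx 1.00624$, so that
\begin{equation*}
\lambda = e^{\pi/s_{0}} \approx 22.7, \qquad \lambda^{2} \approx 515,
\end{equation*}
i.e., consecutive Efimov trimers differ in binding energy by roughly a factor of 515; for mass-imbalanced systems with two heavy and one light constituent, $s_{0}$ is larger and the spectrum is correspondingly denser.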
In ultracold gases, where the two-body scattering length $a$ can be tuned to any value using Feshbach resonances, the Efimov effect can be observed by measuring the three-body event rate constant, which is resonantly enhanced when an Efimov trimer state approaches zero binding energy at a negative $a$ denoted $a_{\scriptscriptstyle-}$. On the other side of the Feshbach resonance, interference effects
lead to minima in the three-body rate coefficient at a positive $a$ denoted $a_{\scriptscriptstyle+}$; the scattering length where an Efimov trimer state becomes degenerate with the atom-molecule threshold is denoted $a_{\scriptscriptstyle*}$\;\cite{Naidon2017,D_Incao_2018}. These Efimov features have been observed in experiments with ultracold homonuclear Bose, three-component Fermi, and heteronuclear Bose-Fermi gases\;\cite{Kraemer2006,Nakajima2010,Lompe2010,Pollack2009,Knoop2009,Floerchinger2009,Zaccanti2009,Bloom2013,Hu2014,Maier2015,Kato2017,Pires2014,Tung2014Dec,HUlmanis2016,hafner2017role}. The universal relationship
between these features for the case of three identical bosons was first studied by Braaten and Hammer in Ref.\;\cite{Eric2006} and later expanded upon in Ref.\;\cite{Gogolin2008Apr}. Therefore, with these universal relationships, the ground Efimov state energy $E_{0}$ and the first Efimov resonance position can fully determine the Efimov spectrum; these factors are usually considered as the three-body parameters (3BP). Recent measurements of Efimov resonances for a number of ultracold atom species have revealed an unexpected universality, in which the first Efimov resonance $a^{\scriptscriptstyle(1)}_{\scriptscriptstyle-}$ is determined by the van der Waals (vdW) length $r_{\scriptscriptstyle\textsl{vdW}}$: $a^{\scriptscriptstyle(1)}_{\scriptscriptstyle-}=-(8.9 \pm 1.8)r_{\scriptscriptstyle\textsl{vdW}}$\;\cite{D_Incao_2018,Thomas2010,Berninger2011,Huang2014,Ottenstein2008,Scott2009,Chin2010}, where $r_{\scriptscriptstyle\textsl{vdW}}=(2\mu_{2b}C_{6})^{1/4}/2$, which characterizes the length scale between two neutral atoms interacting via the van der Waals interaction $-C_{6}/r^{6}$ with two-body reduced mass $\mu_{2b}$. The calculations from theories based on a single-channel van der Waals model confirmed this vdW universality for broad Feshbach resonances\;\cite{WangJia2012,WangYujun2012,NaidonMar2014,NaidonAug2014,Huang2014Oct}. Similarly, universality of the 3BP has been proposed for heteronuclear atomic systems such as BBX that consist of two heavy atoms B resonantly interacting with a light one X, with an additional dependence on the intraspecies interaction between the atoms B\;\cite{WangYujun2012}.
There are several experimental groups working on Efimov unfavored systems of K-Rb admixtures\;\cite{Barontini2009,Bloom2013,Hu2014,Wacker2016KRb}. The initial experiment in $^{41}$K-$^{87}$Rb Bose-Bose mixtures reported an unexpected nonuniversal behavior of Efimov resonances $a_{\scriptscriptstyle-}=-246\,a_{0}$\;\cite{Barontini2009}, where $a_{0}$ denotes Bohr radius. It thus prompted several further experiments in $^{40}$K-$^{87}$Rb Bose-Fermi mixtures, which all surprisingly showed an Efimov-type resonance around $a_{\scriptscriptstyle*}=230(30)\,a_{0}$ for collisions between atoms and dimers but not for atomic three-body recombination\;\cite{Zirbel2008,Bloom2013,Kato2017}. Moreover, the predicted universal relationships for the
location of Efimov features due to the same trimer state are $|a_{\scriptscriptstyle-}|/a_{\scriptscriptstyle*} = 1.96 $\;\cite{Helfrich2010}. Given the observed
resonance at $a_{\scriptscriptstyle*}=230(30)\,a_{0}$, these relations yield $a_{\scriptscriptstyle-}=-450\,a_{0}$, which is
excluded by the measurements mentioned above. Therefore, the universal relationship between the Efimov resonance and interference features breaks down in K-Rb mixtures under the zero-range theory, which assumes $a_{\scriptscriptstyle\textsl{BB}}\rightarrow 0$.
In experiments with ultracold mixtures of Cesium and Lithium atoms, the absence of a ground-state resonance for positive Cs-Cs scattering length was observed, which suggests that two Efimov branches may exist for $a_{\scriptscriptstyle\textsl{BB}}>0$ ($a_{\scriptscriptstyle\textsl{BB}}$ represents the B-B scattering length) in the BBX system, with different microscopic mechanisms governing the value of the 3BP\;\cite{Tung2014Dec,HUlmanis2016,hafner2017role}. The existence of two Efimov branches may
help explain the breakdown of universal relationships between resonance and interference features in K-Rb mixtures. The atom-dimer features in the RbRb + K channel would shed new light on universal few-body scenarios in mixed systems. Ref.\;\cite{Acharya2016} has derived a set of universal relations that connect the different Efimov features, including the effects of the finite interspecies effective range and the finite intraspecies scattering length, which requires two three-body observables as input parameters. For the $^{87}$Rb-$^{87}$Rb-$^{40}$K system, the $a_{\scriptscriptstyle*}=230(30)\,a_{0}$ resonance is a reliable parameter that has been observed by several experiments. Thus, data on $a_{\scriptscriptstyle+}$ or $a_{\scriptscriptstyle-}$ are important for elucidating the universal relationships between different Efimov features as well as the role of finite-range corrections. The Efimov resonance $a_{\scriptscriptstyle-}< -30000\,a_{0}$ was suggested by Wang \textit{et al.}\;\cite{WangYujun2012}, yet is far outside the range of previous measurements. It is also very difficult for theory to obtain the three-body recombination rate at such a large s-wave scattering length because of numerical stability issues. Thus, we focus on the positive side of the Rb-K Feshbach resonance, where interference effects lead to minima in the three-body rate coefficient at $a_{\scriptscriptstyle+}$. The predicted recombination minimum at $a_{\scriptscriptstyle+}=2800\,a_{0}$\;\cite{WangYujun2012} for $^{40}$K-$^{87}$Rb-$^{87}$Rb was excluded by a later experiment\;\cite{Bloom2013}, whose analysis suggested that $a_{\scriptscriptstyle+}$ should be shifted to a larger scattering length.
In this paper, we calculate the Efimov features in $^{87}$Rb-$^{40}$K Bose-Fermi mixtures by investigating the three-body recombination and $^{40}$K-$^{87}$Rb$^{87}$Rb elastic collision processes with $^{87}$Rb-$^{87}$Rb interaction fixed at $a_{\scriptscriptstyle\textsl{RbRb}} = 100\,a_{0}$, which is close to the experimental condition. We use the R-matrix propagation method in the hyperspherical coordinate frame based on the Lennard-Jones model potential for the interactions between atoms. The hyperradius is divided into two regions: short distances, where the slow-variable-discretization (SVD)\;\cite{Tolstikhin1996SVD} method is applied to overcome the numerical difficulties at sharp nonadiabatic avoided crossings; and large distances, where the traditional adiabatic hyperspherical method is employed to avoid the large memory and central processing unit time needed in SVD. Then, the $\underline{\mathcal{R}}$ matrix was propagated from short to large distances. The scattering properties are obtained through the $\underline{\mathcal{S}}$ matrix by matching the $\underline{\mathcal{R}}$ matrix with asymptotic functions and boundary conditions\;\cite{WangJia2011}.
The paper is organized as follows.
In Sec. II, our calculation method and all necessary formulas for calculations are presented.
In Sec. III, we discuss the results and the universal relationship between the different Efimov features.
Finally, we provide a brief summary. Atomic units are
applied throughout the paper unless stated otherwise. \\
\section{Theoretical formalism}
In this work, we calculate the collision properties of the $^{87}$Rb-$^{87}$Rb-$^{40}$K system in $J^{\Pi}=0^{+}$ symmetry. $m_{i}$(i=1,2,3) represent the masses of the three atoms $^{87}$Rb, $^{87}$Rb and $^{40}$K, respectively. We employ Delves's hyperspherical coordinates and introduce the mass-scaled Jacobi coordinates. The first Jacobi vector $\vec{\rho}_{1}$ is chosen to be the vector from atom 1 to atom 2, with reduced mass $\mu_{1}$, and the second Jacobi vector $\vec{\rho}_{2}$ goes from the diatom center of mass to the third atom, with reduced mass $\mu_{2}$. $\theta$ is the angle
between $\vec{\rho}_{1}$ and $\vec{\rho}_{2}$. The hyperradius $R$ and hyperangle $\phi$ are defined as\\
\begin{equation}
\label{1}
\mu R^{2}=\mu_{1}\rho_{1}^{2}+\mu_{2}\rho_{2}^{2} ,
\end{equation}
and\\
\begin{equation}
\label{2}
\tan\phi=\sqrt{\frac{\mu_{2}}{\mu_{1}}}\frac{\rho_{2}}{\rho_{1}},\;\; 0 \leq\phi\leq\frac{\pi}{2},
\end{equation}
respectively, where $R$ is the only coordinate with the dimension of length and represents the overall size of the three-body system. The angles $\theta$, $\phi$, and the three Euler angles $(\alpha, \beta, \gamma)$ are represented collectively by $\Omega$ $[\Omega \equiv (\theta, \phi, \alpha, \beta, \gamma)]$; the Euler angles describe the orientation of the plane that contains the three particles. $\mu$ is an arbitrary scaling factor that is chosen as $\mu=\sqrt{\mu_{1}\mu_{2}}$ in our calculation.\\
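As a minimal illustration of these definitions (assuming NumPy), the mass-scaled Jacobi radii can be mapped to the hyperradius and hyperangle of Eqs.\;(\ref{1}) and (\ref{2}) as follows:
\begin{verbatim}
import numpy as np

def hyperspherical(rho1, rho2, mu1, mu2):
    # Eqs. (1)-(2): mu*R^2 = mu1*rho1^2 + mu2*rho2^2,
    # tan(phi) = sqrt(mu2/mu1)*rho2/rho1, with mu = sqrt(mu1*mu2).
    mu = np.sqrt(mu1 * mu2)
    R = np.sqrt((mu1 * rho1**2 + mu2 * rho2**2) / mu)
    phi = np.arctan2(np.sqrt(mu2 / mu1) * rho2, rho1)  # 0 <= phi <= pi/2
    return R, phi
\end{verbatim}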
In hyperspherical coordinates, the Schr$\mathrm{\ddot{o}}$dinger equation can be written after rescaling the three-body wave function $\Psi_{\upsilon'}$ as $\psi_{\upsilon'}(R;\theta,\phi)=\Psi_{\upsilon'}(R;\theta,\phi) R^{5/2} \sin\phi \cos\phi$:\\
\begin{equation}
\label{3}
\left[ -\frac{1}{2\mu}\frac{d^{2}}{dR^{2}}+\left( \frac{\Lambda^{2}-\frac{1}{4}}{2\mu R^{2}}+V(R;\theta,\phi)\right) \right] \psi_{\upsilon'}(R;\Omega)=E\psi_{\upsilon'}(R;\Omega),
\end{equation}
where $\Lambda^{2}$ is the squared ``grand angular momentum operator'', and its expression is as given in Ref.\;\cite{lin1995}. The three-body interaction potential $V(R;\theta,\phi)$ is expressed as a sum of three pairwise two-body interaction potentials $v(r_{ij})$:\\
\begin{equation}
\label{4}
V(R;\theta,\phi)=v(r_{12})+v(r_{13})+v(r_{23})\,,
\end{equation}
where $r_{ij}$ is the interparticle distance. Here, we use the Lennard-Jones potential with a vdW tail to model the interactions between two atoms \;\cite{WangJia2012}, which was proven to be an excellent model potential to explore van der Waals universality in Efimov physics\;\cite{WangYujun2012,WangJia2012,HUlmanis2016,NaidonAug2014}. The potential is expressed in the form of \\
\begin{equation}
\label{5}
v(r_{ij})=-\frac{C_{6,ij}}{r_{ij}^{6}}\left[1-\frac{1}{2}\left(\frac{\lambda_{ij}}{r_{ij}}\right)^{6}\right],
\end{equation}
where the $i$ and $j$ indices in $r_{ij}$ label particles $i$ and $j$. The two-body scattering length $a_{ij}$ is changed by adjusting the $\lambda_{ij}$, and $C_{6,ij}$ is the dispersion coefficient. The values of $C_{6,RbRb}$ and $C_{6,RbK}$ used here are 4698 from Ref.\;\cite{C6RbRb2014} and 4106.5 from Ref.\;\cite{C6RbK1999}, respectively.\\
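As a consistency check on these parameters, the van der Waals lengths used later in the text can be reproduced directly from the quoted $C_{6}$ coefficients via $r_{\scriptscriptstyle\textsl{vdW}}=(2\mu_{2b}C_{6})^{1/4}/2$. The following minimal sketch (in atomic units, with approximate isotope masses and the $\mathrm{u}\rightarrow$ a.u. conversion factor assumed here only for illustration) reproduces the Rb-Rb value of about $83\,a_{0}$ and gives roughly $71$--$72\,a_{0}$ for Rb-K.
\begin{verbatim}
# Minimal sketch (atomic units): r_vdW = (2*mu_2b*C6)**0.25 / 2.
# Isotope masses in u and the u -> a.u. conversion are assumed values.
AMU  = 1822.888          # 1 u in electron masses
m_Rb = 86.909 * AMU
m_K  = 39.964 * AMU

def r_vdw(mu_2b, C6):
    return (2.0 * mu_2b * C6) ** 0.25 / 2.0

print(r_vdw(m_Rb / 2.0, 4698.0))                 # Rb-Rb: ~83 a0
print(r_vdw(m_Rb * m_K / (m_Rb + m_K), 4106.5))  # Rb-K : ~71-72 a0
\end{verbatim}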
The three-body wave function $\psi_{\upsilon^{'}}$ can be expanded with the complete, orthonormal set of angular wave function $\Phi_{\nu}$ and radial wave functions $F_{\nu\upsilon'}$ as\\
\begin{equation}
\label{6}
\psi_{\upsilon'}(R;\varOmega)=\sum\limits_{\nu=0}^{\infty}F_{\nu\upsilon'}(R)\Phi_{\nu}(R;\varOmega).
\end{equation}
The adiabatic potentials $U_{\nu}(R)$ and channel functions $\Phi_{\nu}(R;\varOmega)$ at fixed $R$ can be obtained by solving the following adiabatic eigenvalue equation:\\
\begin{equation}
\label{7}
\left( \frac{\Lambda^{2}-\frac{1}{4}}{2\mu R^{2}}+V(R;\theta,\phi)\right) \Phi_{\nu}(R;\varOmega)=U_{\nu}(R) \Phi_{\nu}(R;\varOmega).
\end{equation}
Then, we solve Eq.\;(\ref{3}) using the R-matrix propagation method. First, we divide the hyperradius into $(N-1)$ intervals with a set of grid points $R_{1}<R_{2}< ...<R_{N}$ according to the method of Ref.\;\cite{yao2019}. At a short distance, we use the SVD method to solve Eq.\;(\ref{3}) in the interval $[R_{i},R_{i+1}]$. The total wave function $\psi_{\upsilon'}(R;\Omega)$ is expanded in terms of the discrete variable representation (DVR) basis $\pi_{i}$ and the channel functions $\Phi_{\nu}(R;\Omega)$ as\\
\begin{equation}
\label{8}
\psi_{\upsilon'}(R;\Omega)=\sum\limits_{i}^{N_{DVR}}\sum\limits_{\nu}^{N_{chan}}C_{i\nu}^{\upsilon'}\pi_{i}(R)\Phi_{\nu}(R;\Omega),\\
\end{equation}
where $N_{DVR}$ is the number of DVR basis functions and $N_{chan}$ is the number of included channel functions. Inserting Eq.\;(\ref{8}) into Eq.\;(\ref{3}), the three-body Schr$\mathrm{\ddot{o}}$dinger equation can be rewritten as:
\begin{equation}
\label{9}
\sum\limits_{j}^{N_{DVR}}\sum\limits_{\mu}^{N_{chan}}T_{ij}O_{i\nu,j\mu}C_{j\mu}^{\upsilon'}+U_{\nu}(R_{i})C_{i\nu}^{\upsilon'}=E^{\upsilon'}C_{i\nu}^{\upsilon'},\\
\end{equation}
where the kinetic energy matrix elements can be expressed as\\
\begin{equation}
\label{10}
T_{ij}=\frac{1}{2\mu}\int_{R_{i}}^{R_{i+1}}\frac{d}{dR}\pi_{i}(R)\frac{d}{dR}\pi_{j}(R)dR,\\
\end{equation}
$R_{i}$ and $R_{i+1}$ are the boundaries of the calculation box, and the overlap matrix elements between the adiabatic channels defined at different quadrature points are:\\
\begin{equation}
\label{11}
O_{i\nu,j\mu}=\langle\Phi_{\nu}(R_{i};\Omega)|\Phi_{\mu}(R_{j};\Omega)\rangle.\\
\end{equation}
At large distances, we use the traditional adiabatic hyperspherical method to solve Eq.\;(\ref{3}). Substituting Eq.\;(\ref{6}) into Eq.\;(\ref{3}), a set of coupled ordinary differential equations are obtained:\\
\begin{equation}
\label{12}
\left[-\frac{1}{2\mu}\frac{d^{2}}{dR^{2}}+U_{\nu}(R)-E\right]F_{\nu,\upsilon'}(R)-\frac{1}{2\mu}\sum\limits_{\mu}\left[2P_{\mu\nu}(R)\frac{d}{dR}+Q_{\mu\nu}(R)\right]F_{\mu\upsilon'}(R)=0,\\
\end{equation}
where the nonadiabatic couplings $P_{\mu\nu}$ and $Q_{\mu\nu}$ that control the inelastic transition and the width of resonance supported by adiabatic potential $U_{\nu}(R)$ are \\
\begin{equation}
\label{13}
P_{\mu\nu}(R)=\int\Phi_{\mu}(R;\Omega)^{*}\frac{\partial}{\partial R}\Phi_{\nu}(R;\Omega)d\Omega,\\
\end{equation}
and\\
\begin{equation}
\label{14}
Q_{\mu\nu}(R)=\int\Phi_{\mu}(R;\Omega)^{*}\frac{\partial^{2}}{\partial R^{2}}\Phi_{\nu}(R;\Omega)d\Omega.\\
\end{equation}
respectively.
The effective potentials are usually more physical than adiabatic potentials and can thus give physical asymptotic behaviors of the system at large R, which are defined as:\\
\begin{equation}
\label{15}
W_{\nu\nu}(R)=U_{\nu}(R)-\frac{\hbar^{2}}{2\mu}P_{\nu\nu}^{2}(R).\\
\end{equation}
During the scattering calculation, the $\underline{\mathcal{R}}$ matrix is a fundamental quantity that can subsequently be used to determine the scattering matrix $\underline{\mathcal{S}}$, which is the main goal of the study. We use the $R$-matrix propagation method over the interval $[a_{1},a_{2}]$ to calculate the relative $\underline{\mathcal{R}}$ matrix from $a_1$ to $a_2$ as follows:
\begin{equation}
\label{16}
\underline{\mathcal{R}}(a_2)=\underline{\mathcal{R}}_{22}-\underline{\mathcal{R}}_{21}\left[\underline{\mathcal{R}}_{11}+\underline{\mathcal{R}}(a_1)\right]^{-1}\underline{\mathcal{R}}_{12}.
\end{equation}
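For concreteness, a minimal sketch (assuming NumPy, and assuming the sector matrices $\underline{\mathcal{R}}_{11}$, $\underline{\mathcal{R}}_{12}$, $\underline{\mathcal{R}}_{21}$, and $\underline{\mathcal{R}}_{22}$ have already been assembled from the sector solutions) of one propagation step of Eq.\;(\ref{16}) is:
\begin{verbatim}
import numpy as np

def propagate_R(R_a1, R11, R12, R21, R22):
    # Eq. (16): R(a2) = R22 - R21 [R11 + R(a1)]^{-1} R12
    return R22 - R21 @ np.linalg.solve(R11 + R_a1, R12)
\end{verbatim}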
After we have the $\underline{\mathcal{R}}$ matrix at large distances, the physical scattering matrix $\underline{\mathcal{S}}$ and the relative reaction matrix $\underline{\mathcal{K}}$ can be determined by applying asymptotic boundary conditions:\\
\begin{equation}
\label{17}
\underline{\mathcal{K}}=(\underline{f}-\underline{f}'\underline{\mathcal{R}})(\underline{g}-\underline{g}'\underline{\mathcal{R}})^{-1},
\end{equation}
\begin{equation}
\label{18}
\underline{\mathcal{S}}=(\underline{1}+i \underline{\mathcal{K}})(\underline{1}-i\underline{\mathcal{K}})^{-1}.\\
\end{equation}
where $\underline{f}$, $\underline{f}'$, $\underline{g}$, and $\underline{g}'$ are diagonal matrices with matrix elements given by energy-normalized asymptotic solutions $f_{\nu\nu'}$ and $g_{\nu\nu'}$ and derivatives $f'_{\nu\nu'}$ and $g'_{\nu\nu'}$, respectively. $f_{\nu\nu'}$ and $g_{\nu\nu'}$ are described in terms of spherical Bessel and Neumann functions: \\
\begin{equation}
\label{19}
f_{\nu\nu'}=\sqrt{\frac{2\mu k_{\nu}}{\pi}}R j_{l_{\nu}}(k_{\nu}R)\delta_{\nu\nu'},\\
\end{equation}
and\\
\begin{equation}
\label{20}
g_{\nu\nu'}=\sqrt{\frac{2\mu k_{\nu}}{\pi}}R n_{l_{\nu}}(k_{\nu}R)\delta_{\nu\nu'}.\\
\end{equation}
respectively. For the $\nu$th recombination channel, $l_\nu$ is the angular momentum of the third atom relative to the dimer and is given by Eq.\;(\ref{21}), and $k_{\nu}=\sqrt{2 \mu\left(E-E_{2 b}\right)}$.
For the $\nu$th entrance channel, $l_{\nu}=\lambda_{\nu}+3/2$, and $k_{\nu}=\sqrt{2 \mu E}$.\\
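As a minimal illustration of this matching step (assuming NumPy; the diagonal matrices $\underline{f}$, $\underline{f}'$, $\underline{g}$, and $\underline{g}'$ are assumed to have been evaluated from Eqs.\;(\ref{19}) and (\ref{20}) at the matching distance), Eqs.\;(\ref{17}) and (\ref{18}) can be implemented as:
\begin{verbatim}
import numpy as np

def k_and_s_matrices(R, f, fp, g, gp):
    # Eq. (17): K = (f - f' R)(g - g' R)^{-1}
    # Eq. (18): S = (1 + iK)(1 - iK)^{-1}
    I = np.eye(R.shape[0])
    K = (f - fp @ R) @ np.linalg.inv(g - gp @ R)
    S = (I + 1j * K) @ np.linalg.inv(I - 1j * K)
    return K, S
\end{verbatim}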
\section{Results and Discussion}
\subsection{Atom-dimer elastic scattering cross sections}
In this section, we focus on the case where $^{87}$Rb-$^{87}$Rb has a finite positive scattering length $100\,a_{0}$ and the Rb-K scattering length is varied at the negative side of the Feshbach resonance. Figure\;\ref{fig1a} shows the hyperspherical potential curves for the $^{87}$Rb-$^{87}$Rb-$^{40}$K system with $a_{\scriptscriptstyle\textsl{RbK}}=-450\,a_{0}$ and $a_{\scriptscriptstyle\textsl{RbRb}}=100\,a_{0}$. The black line in Fig.\;\ref{fig1} denotes the recombination channel (labeled by f), which corresponds to a K atom relative to the RbRb dimer. The effective potentials for the recombination channels have asymptotic behavior at large R as,\\
\begin{equation}
\label{21}
W=\frac{l_{f}(l_{f}+1)}{2\mu R^{2}}+E_{2b},\\
\end{equation}
where $E_{2b}$ is the two-body bound state energy and $l_{f}$ is the corresponding angular momentum of the third atom relative to the dimer. The dashed lines represent the three-body continuum channels (labeled i), which correspond to the case in which the three atoms are far away from each other as $R \to \infty$. The corresponding potential curves behave as\\
\begin{equation}
\label{22}
W=\frac{\lambda(\lambda+4)+15/4}{2\mu R^{2}}.\\
\end{equation}
Here, $\lambda(\lambda+4)$ is the eigenvalue of the grand angular momentum operator $\Lambda^{2}$, and $\lambda$ takes nonnegative integer values, which are restricted by the requirements of permutation symmetry.\\
\begin{figure}[htbp]
\centering
\subfigure{
\includegraphics[width=7.8cm]{1a.eps}
\label{fig1a}
}
\subfigure{
\includegraphics[width=7.8cm]{1b.eps}
\label{fig1b}
}
\caption{(Color online) The $J^{\Pi}=0^{+}$ adiabatic hyperspherical potential curves U$_{\nu}$(R) for $^{87}$Rb-$^{87}$Rb-$^{40}$K with $a_{\scriptscriptstyle\textsl{RbRb}}=100\,a_{0}$. The values of $\lambda$ indicate the asymptotic behavior of the potential curves representing the three-body continuum channels, as given in Eq.\;(\ref{22}). (a) The potential curves obtained with $a_{\scriptscriptstyle\textsl{RbK}}=-450\,a_{0}$. The lowest adiabatic potential curve denotes a recombination channel that corresponds to an RbRb dimer and a K atom. (b) The potential curves with $a_{\scriptscriptstyle\textsl{RbK}}=178\,a_{0}$. The lowest adiabatic potential curve denotes the RbRb + K channel, and the red solid line is the RbK + Rb channel.}
\label{fig1}
\end{figure}
In these calculations, basis sets with $N_{\theta}=168$ and $N_{\phi}=434$ are chosen, and the potential curves are converged to at least six significant digits.
We denote the atom-dimer scattering length by $a_{ad}$. The cross-section for atom-dimer scattering is expressed in terms of the $S$ matrix as\\
\begin{equation}
\label{23}
\sigma_{2}=\sum\limits_{J,\Pi}\sigma_{2}^{J,\Pi}=\sum\limits_{J,\Pi}\frac{(2J+1)\pi}{k_{ad}^{2}}|S_{f\leftarrow i}^{J,\Pi}-1|^{2},\\
\end{equation}
where $\sigma_{2}^{J,\Pi}$ is the partial atom-dimer elastic scattering cross section corresponding to the $J^{\Pi}$ symmetry, $k_{ad}=\sqrt{2\mu_{ad}(E-E_{d})}$ is the atom-dimer wavenumber, $\mu_{ad}$ is the atom-dimer reduced mass, and $E_{d}$ is the binding energy of the weakly bound molecular state. The threshold behavior of $\sigma_{2}^{J,\Pi}$ is determined solely by $J$ and follows the standard Wigner threshold law\;\cite{Esry2001Dec}\\
\begin{equation}
\label{24}
\sigma_{2}^{J,\Pi} \sim (E-E_{d})^{2J}.\\
\end{equation}
The atom-dimer scattering length is defined as\\
\begin{equation}
\label{25}
a_{ad}=-\lim\limits_{k_{ad} \to 0}\frac{\tan\delta_{0}}{k_{ad}},\\
\end{equation}
where $\delta_{0}$ is the phase shift for atom-dimer elastic scattering and is related to the diagonal $S$-matrix element by the formula\\
\begin{equation}
\label{26}
S_{f\leftarrow i}^{0+}=\exp{(2i\delta_{0})}.\\
\end{equation}
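To connect these formulas, a minimal sketch (assuming NumPy) of extracting the elastic phase shift and the atom-dimer scattering length from the diagonal $S$-matrix element of the atom-dimer channel, at a small but finite collision energy that approximates the $k_{ad}\rightarrow 0$ limit of Eq.\;(\ref{25}), reads:
\begin{verbatim}
import numpy as np

def atom_dimer_length(S00, E, E_d, mu_ad):
    # Eq. (26): S = exp(2 i delta_0)  ->  delta_0 = arg(S)/2
    # Eq. (25): a_ad = -tan(delta_0)/k_ad in the k_ad -> 0 limit
    k_ad = np.sqrt(2.0 * mu_ad * (E - E_d))
    delta0 = 0.5 * np.angle(S00)
    return -np.tan(delta0) / k_ad
\end{verbatim}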
We typically use $15$ channels and $349$ sectors distributed from $R = 18\; a_{0}$ to $R = 3100\; a_{0}$. In the matching procedure of this section, the asymptotic wave functions are expressed in rotated Jacobi coordinates, which improves the convergence of observables as a function of matching distance\;\cite{cyzhao2022}. The convergence of the scattering length with respect to the final matching distance $R_{m}$ is shown in Table\;\ref{tableI} when the Rb-K scattering length is $-450\,a_{0}$. The atom-dimer scattering length reaches its converged value at $R_{m}$ = 2974 $a_{0}$.
\begin{table*}
\caption{\label{t4} Convergence of the atom-dimer scattering length $a_{ad}$ between K and RbRb with respect to the matching distance $R_{m}$. The Rb-K and Rb-Rb scattering lengths are $a_{\scriptscriptstyle\textsl{RbK}}=-450\,a_{0}$ and $a_{\scriptscriptstyle\textsl{RbRb}}=100 \,a_{0}$, respectively.}
\begin{ruledtabular}
\begin{tabular}{cllllllll}
&\multicolumn{1}{c}{$R_{m}$}&\multicolumn{1}{c}{2543}&\multicolumn{1}{c}{2752}&\multicolumn{1}{c}{2974} \\
\hline
& $a_{ad}$ &467.02 &467.34 &467.36 \\
\end{tabular}
\end{ruledtabular}
\label{tableI}
\end{table*}
In Fig.\;\ref{fig2} we present cross sections for $^{40}$K + $^{87}$Rb$^{87}$Rb elastic scattering as a function of the collision energy $E$ with $a_{\scriptscriptstyle\textsl{RbRb}}=100\,a_{0}$ and $a_{\scriptscriptstyle\textsl{RbK}}=-450\,a_{0}$. The data show that $\sigma_{2}^{0+}$ obeys the threshold behavior as $\sigma_{2}^{0+} \propto E^{0}$ and approaches the constant value $ 4\pi a_{ad}^{2}$ in the ultracold limit.\\
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\linewidth]{2.eps}
\caption{Cross sections $\sigma_{2}$ for $^{40}$K + $^{87}$Rb$^{87}$Rb elastic scattering as a function of the collision energy $E$ in $J^{\Pi}=0^{+}$ symmetry with $a_{\scriptscriptstyle\textsl{RbRb}}=100\,a_{0}$ and $a_{\scriptscriptstyle\textsl{RbK}}=-450\,a_{0}$.}
\label{fig2}
\end{figure}
Fig.\;\ref{fig3a} shows the cross section for $^{40}$K + $^{87}$Rb$^{87}$Rb elastic scattering as a function of $^{87}$Rb-$^{40}$K scattering length at the negative side of the Feshbach resonance. A peak at approximately $a_{\scriptscriptstyle\textsl{RbK}}= - 155\,a_{0}$ is shown. We interpret the peak as being caused by the appearance of a three-body bound state at the atom-dimer threshold ($^{40}$K + $^{87}$Rb$^{87}$Rb). Owing
to the presence of the near-threshold trimer state, the atom-dimer scattering length is expected to diverge. As illustrated in Fig.\;\ref{fig3b}, the $^{40}$K + $^{87}$Rb$^{87}$Rb elastic scattering lengths $a_{ad}$ are divergent at the same Rb-K scattering length value. A schematic diagram of the relevant three-body energy spectrum as a function of Rb-K scattering length is shown in Fig.\;\ref{fig3c}, illustrating the energies of trimer states (the blue solid curve) and atom-dimer thresholds (the red dashed curve). The energy spectrum gives clear signatures at the threshold locations, which are consistent with the peaks of the cross section as well as the divergence position of the atom-dimer scattering length. Thus, intriguing questions are whether the obtained three-body bound state supported by the lowest hyperspherical potential curve is an Efimov state, and whether it is related to the observed atom-dimer resonance at positive Rb-K scattering length (or emerges from it). It is known that Efimov states have universal properties that generally require scattering lengths much larger than $r_{\scriptscriptstyle\textsl{vdW}}$ and energies much smaller than $E_{\scriptscriptstyle\textsl{vdW}}$. The Rb-Rb-K system can be characterized by a length $r_{\scriptscriptstyle\textsl{vdW,RbRb}}=83\,a_{0}$ and a corresponding energy $E_{\scriptscriptstyle\textsl{vdW,RbRb}}= 9.25\times 10^{-10}$ a.u. The three-body state is located at $|a_{\scriptscriptstyle\textsl{RbK}}|= 155\, a_{0}< 2\, r_{\scriptscriptstyle\textsl{vdW,RbRb}} $, with a binding energy larger than $E_{\scriptscriptstyle\textsl{vdW,RbRb}}$ by one order of magnitude. Thus, the three-body state is not an Efimov state, nor is it related to the Efimov resonance at positive Rb-K scattering length. Analogous to a typical two-body Feshbach resonance, such a three-body bound state could offer the possibility to tune the atom-dimer interaction from attractive to repulsive by adjusting the atomic two-body scattering length over a small range.
\begin{figure}[htbp]
\centering
\subfigure{
\includegraphics[width=5.0cm]{3a.eps}
\label{fig3a}
}
\subfigure{
\includegraphics[width=5.0cm]{3b.eps}
\label{fig3b}
}
\subfigure{
\includegraphics[width=5.4cm]{3c.eps}
\label{fig3c}
}
\caption{(Color online)
The $^{40}$K + $^{87}$Rb$^{87}$Rb elastic scattering observables with the $J^{\Pi}=0^{+}$ symmetry as a function of Rb-K scattering lengths when $a_{\scriptscriptstyle\textsl{RbRb}} = 100\,a_{0}$.
All the data are obtained at a low collision energy $E=100$ nK. (a) Cross sections; the red dashed line shows the position of the peak. (b) The atom-dimer scattering lengths; the red dashed line shows the divergence position. (c) Three-body energy spectrum; the red dashed line is the RbRb dimer energy, and the solid line is the three-body state energy. The inset shows the Rb-K scattering length at which the three-body state disappears.}
\label{fig3}
\end{figure}
\subsection{Three-body recombination rates}
We now consider three-body recombination, which is widely used to reveal Efimov physics in cold atoms. The event rate constant for three-body recombination is \\
\begin{equation}
\label{27}
K_{3}^{(f\leftarrow i)}=\sum\limits_{J,\Pi}K_{3}^{J\Pi}=2!\sum\limits_{J,\Pi}\sum\limits_{f,i}\frac{32(2J+1)\pi^{2}}{\mu k^{4}}|S_{f \leftarrow i}^{J,\Pi}|^{2}.\\
\end{equation}
Here, $K_{3}^{J\Pi}$ is the partial recombination rate corresponding to the $J^{\Pi}$ symmetry, and $k=(2\mu E)^{1/2}$ is the hyperradial wavenumber in the incident channel. $S_{f \leftarrow i}^{J,\Pi}$ represents the scattering matrix element from the initial three-body continuum channels $i$ to the final atom-dimer channel $f$ for the $J^{\Pi}$ symmetry. The factor (2!) derives from the number of indistinguishable bosonic particles. The asymptotic form of the effective hyperradial potentials in $R\to \infty$ allows for generalization of Wigner's threshold law\;\cite{Esry2001Dec,Incao2005Jun} for recombination (i.e., the low-energy behavior of the recombination rates), which leads to the partial recombination rates $K_{3}^{J\Pi}$ satisfying\\
\begin{equation}
\label{28}
K_{3}^{J\Pi}\propto E^{\lambda_{min}}
\end{equation}
near threshold, where $\lambda_{min}$ is the minimum value of $\lambda$ in Eq.\;(\ref{22}).
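As a minimal numerical sketch of Eq.\;(\ref{27}) (assuming NumPy), the contribution of a single $S$-matrix element to the recombination rate in one $J^{\Pi}$ block can be evaluated as follows; the full rate sums such terms over all incident and recombination channels and over all symmetries.
\begin{verbatim}
import numpy as np

def K3_term(S_fi, E, mu, J):
    # One (f <- i) term of Eq. (27); the prefactor 2! = 2 accounts for
    # the two identical Rb bosons.
    k = np.sqrt(2.0 * mu * E)
    return 2.0 * 32.0 * (2 * J + 1) * np.pi**2 / (mu * k**4) * abs(S_fi)**2
\end{verbatim}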
We first study the three-body recombination rates $K_{3}^{(f\leftarrow i)}$ for the $^{87}$Rb$^{87}$Rb$^{40}$K system with $J^{\Pi}=0^{+}$ symmetry on the positive Rb-K scattering length side. Figure\;\ref{fig1b} shows the hyperspherical potential curves for the $^{87}$Rb-$^{87}$Rb-$^{40}$K system, in which the Rb-K scattering length is $a_{\scriptscriptstyle\textsl{RbK}}=178\,a_{0}$ and Rb-Rb interacts via the s-wave scattering length $a_{\scriptscriptstyle\textsl{RbRb}}=100\,a_{0}$. It is shown that there are two recombination channels: one being the weakly bound $^{87}$Rb$^{40}$K + $^{87}$Rb channel and the other being the deeply bound channel $^{40}$K + $^{87}$Rb$^{87}$Rb. Figure\;\ref{fig4} shows the three-body recombination rate $K_{3}^{(f\leftarrow i)}$ as a function of the collision energy $E$ for the $^{87}$Rb$^{87}$Rb$^{40}$K system with $J^{\Pi}=0^{+}$ symmetry. At lower collision energies, the recombination rates $K_{3}^{(f\leftarrow i)}$ are constant and follow the Wigner threshold law prediction, $K_{3}^{(f \leftarrow i)}\propto E^{0}$.\\
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{4K.eps}
\caption{(Color online) The three-body recombination rate $K_{3}^{(f \leftarrow i)}$ as a function of the collision energy $E$ for the $^{87}$Rb$^{87}$Rb$^{40}$K system in the $J^{\Pi}=0^{+}$ symmetry with $a_{\scriptscriptstyle\textsl{RbRb}}=100\,a_{0}$ and $a_{\scriptscriptstyle\textsl{RbK}}=178\,a_{0}$. The black and red solid lines indicate three-body recombination rates for the sum of different incident channels $i$ to particular recombination channels $f=1,2$, respectively. The blue solid line indicates the sum of the above two three-body recombination rates. The recombination rates $K_{3}^{(f \leftarrow i)}$ follow the Wigner threshold law prediction, $K_{3}^{(f \leftarrow i)}\propto E^{0}$ in the low-energy region.}
\label{fig4}
\end{figure}
Figure\;\ref{fig5a} shows the $a_{\scriptscriptstyle\textsl{RbK}}$ dependence of the three-body recombination rates at positive Rb-K scattering lengths with the Rb-Rb interaction fixed at $a_{\scriptscriptstyle\textsl{RbRb}}=100\,a_{0}$. The data are obtained at a collision energy of $E=100$ nK. The filled black squares represent recombination into the deeply bound channel RbRb + K, red solid circles represent recombination into the weakly bound channel RbK + Rb, and blue solid triangles depict their sum. The recombination rates of the deep channel (RbRb + K) are close to those of the shallow channel (RbK + Rb) at small Rb-K scattering lengths. The recombination rate of the deep channel then begins to decrease when the Rb-K scattering length is tuned to larger values. A minimum appears at approximately $a_{\scriptscriptstyle\textsl{RbK}}=1641\,a_{0}$ in the deep recombination channel. However, this recombination minimum has no effect on the total recombination rate since the deep-channel rate is smaller than that into the weakly bound channel by at least 3 orders of magnitude. An Efimov recombination minimum in the total rate arises at approximately $a_{\scriptscriptstyle\textsl{RbK}} = 3638\,a_{0}$. For $a > 0$, the three-body recombination minimum is a well-known feature of Efimov physics\;\cite{Zaccanti2009,Pollack2009}, which is explained as the destructive interference between two different decay pathways.
Figure\;\ref{fig5b} shows the dependence of the Efimov recombination minimum on the collision energy. The recombination minimum is at $a_{\scriptscriptstyle+} = 3264\;a_{0}$ for a collision energy of $E = 50$ nK and at $ a_{\scriptscriptstyle+} = 3638\,a_{0} $ for $E = 100$ nK. However, this minimum is less evident at a collision energy of $E = 300$ nK.
\begin{figure}[htbp]
\centering
\subfigure{
\includegraphics[width=7.8cm]{5a.eps}
\label{fig5a}
}
\subfigure{
\includegraphics[width=7.8cm]{5b.eps}
\label{fig5b}
}
\caption{(Color online) (a) The $J^{\Pi} = 0^{+}$ three-body recombination rate $K_{3}^{(f \leftarrow i)}$ is shown as a function of Rb-K scattering length $a_{\scriptscriptstyle\textsl{RbK}}$ for $E=100$ nK. The Rb-Rb interaction is fixed at $a_{\scriptscriptstyle\textsl{RbRb}}=100\,a_{0}$. The filled squares (black) represent recombination of the deep channel Rb + Rb + K $\to$ RbRb + K, the filled circles (red) represent recombination of the shallow channel Rb + Rb + K $\to$ RbK + Rb, and the filled triangles (blue) depict the sum of the channels. The local minimum of the deep recombination channel is at $a_{\scriptscriptstyle\textsl{RbK}}=1641\,a_{0}$. An Efimov recombination minimum in the total rate appears at $a_{\scriptscriptstyle\textsl{RbK}} = 3638 \,a_{0}$. (b) A magnified view of the Efimov recombination minimum for different collision energies.}
\label{fig5}
\end{figure}
Aside from calculating the positions of the three-body recombination minima, it is desirable to compare the actual shapes of the $a$-dependence of the loss rate with the experimental measurements.
To date, the three-body recombination rate in $^{87}$Rb-$^{40}$K mixtures has been measured in Refs.\;\cite{Bloom2013,Hu2014}.
In Fig.\;\ref{fig6} we show our numerically calculated three-body recombination rates $K_{3}^{(f \leftarrow i)}$ for the $^{87}$Rb$^{87}$Rb$^{40}$K system compared with experimental data. The solid circles show experimental data obtained from Ref.\;\cite{Bloom2013}, and the solid squares show numerical results at a collision energy of $E=100$ nK. The solid line shows an $a^{4}$ dependence. At positive scattering lengths, our numerical results are multiplied by 8, and at negative scattering lengths, our numerical results are multiplied by 105. These scaled numerical results show reasonable agreement with the experimental results. The expected $a^{4}$ dependence of the recombination rates is observed for both positive and negative scattering lengths. The experimental and numerical data both show that there are no Efimov features at negative $a$ between $-200\,a_{0}$ and $-3000\,a_{0}$ or at positive $a$ between $200\,a_{0}$ and $3000\,a_{0}$. A possible Efimov recombination minimum at $a_{\scriptscriptstyle\textsl{RbK}}=3638\, a_{0}$ appears in our numerical calculations.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{6K.eps}
\caption{(Color online) The three-body recombination rates $K_{3}^{(f \leftarrow i)}$ for the $^{87}$Rb$^{87}$Rb$^{40}$K system as functions of $^{87}$Rb-$^{40}$K scattering lengths with $J^{\Pi}=0^{+}$ symmetry. The solid circles are experimental data obtained from Ref.\;\cite{Bloom2013}, and the solid squares are numerical results at a collision energy of $E=100$ nK. The solid line shows an $a^{4}$ dependence.}
\label{fig6}
\end{figure}
\subsection{Discussion of the universal relationship between the different Efimov features }
The universal relations between various Efimov features have been predicted in the zero-range limit by Helfrich
\textit{et al.}\;\cite{Helfrich2010} using an effective-field-theory framework. The predicted universal relationship for the
location of Efimov features due to the same trimer state is $|a_{\scriptscriptstyle-}|/a_{\scriptscriptstyle*}=1.96$. In $^{40}$K-$^{87}$Rb mixtures, an atom-dimer resonance was observed at $a_{\scriptscriptstyle*}= 230\,a_0$ \;\cite{Zirbel2008,Bloom2013,Kato2017}. If the atom-dimer resonances are associated with the lowest Efimov state, universal scaling predicts $a_{\scriptscriptstyle-}=-450\,a_{0}$, which is excluded by several measurements. The breakdown of the universal relation is expected to be associated with the fact that the universal relationship does not consider the effect of finite effective ranges and nonresonant RbRb interactions. Thus, a set of universal relations that connects the different Efimov features, which include the effects of the finite interspecies effective range and the finite intraspecies scattering length, was derived by Ref.\;\cite{Acharya2016} (Eq. (25) of the paper). This equation requires two Efimov features as inputs.
For the $^{87}$Rb-$^{87}$Rb-$^{40}$K system, $\lambda=122.7$, and the other parameters are provided in Ref.\;\cite{Acharya2016}.
Taking $a_{\scriptscriptstyle*}=230\,a_{0}$, $a_{\scriptscriptstyle+}=3638\,a_{0}$ ($E=100$ nK), $a_{22}=a_{\scriptscriptstyle\textsl{RbRb}}=100\,a_{0}$, and $r_{0}=r_{\scriptscriptstyle \textsl{vdW,RbK}}=72\,a_{0}$ as inputs, Eq. (25) of that paper predicts the first Efimov resonance at $a_{\scriptscriptstyle-}=-52679\,a_{0}$. For a collision energy of $E=50$ nK, the recombination minimum we predict resides at $a_{\scriptscriptstyle+}=3264\, a_{0}$, and the universal relation then places the first Efimov resonance at $a_{\scriptscriptstyle-}=-48315\,a_{0}$. These results are consistent with the predicted value of $a_{\scriptscriptstyle-}<-30000\,a_{0}$ from Wang \textit{et al.}\;\cite{WangYujun2012} and lie outside the reach of current experiments. Therefore, the universal relation including the finite-range effects and the nonresonant intraspecies scattering length gives reasonable predictions, and no breakdown of the universal relations occurs.
\section{Summary}
In this paper, we studied the atom-dimer elastic scattering cross sections, scattering lengths and three-body recombination rates in $^{87}$Rb-$^{40}$K mixtures with total angular momentum $J=0$. We use the R-matrix propagation method in the hyperspherical coordinate frame based on the Lennard-Jones model potential with a vdW tail for the interactions between atoms. The finite intraspecies scattering length is considered with the $^{87}$Rb-$^{87}$Rb s-wave scattering length fixed at $a_{\scriptscriptstyle\textsl{RbRb}} = 100\,a_{0}$.
To make clear whether there is an Efimov state disappearing at the $^{40}$K + $^{87}$Rb$^{87}$Rb threshold, we investigate the $^{40}$K + $^{87}$Rb$^{87}$Rb elastic scattering at a negative $^{87}$Rb-$^{40}$K scattering length. We calculate the scattering lengths and cross sections in the range from $-100\,a_{0} $ to $-1000\,a_{0}$. An atom-dimer elastic scattering resonance at $a_{\scriptscriptstyle\textsl{RbK}} = - 155\,a_0$ is found, which is due to the presence of the near-threshold trimer state at the atom-dimer threshold ($^{40}$K + $^{87}$Rb$^{87}$Rb). The three-body state is located at $|a_{\scriptscriptstyle\textsl{RbK}}|= 155\,a_{0}< 2 r_{\scriptscriptstyle \textsl{vdW,RbRb}} $, and its binding energy is larger than $E_{\scriptscriptstyle \textsl{vdW,RbRb}}$ by one order of magnitude. Thus, the three-body state is not an Efimov state, and it is not related to the previously observed atom-dimer Efimov resonance at $a_{\scriptscriptstyle*} = 230(30)\,a_{0}$.
We further investigate three-body recombination, which is widely used to reveal Efimov physics in cold atoms. Our calculations find an Efimov recombination minimum at a positive interspecies scattering length. Taking the position of the recombination minimum and the experimentally observed atom-dimer resonance $a_{\scriptscriptstyle*} = 230(30)\,a_{0}$ as inputs, the universal relations accounting for the finite-range effects and the finite intraspecies scattering length are tested. An Efimov resonance arising at the three-body threshold at $a_{\scriptscriptstyle-}=-52679\,a_{0}$ ($E=100$ nK) is predicted, in agreement with previous theoretical predictions.
This analysis indicates that the main reasons for the apparent breakdown of the zero-range universal relations in $^{40}$K-$^{87}$Rb are finite-range effects and the finite intraspecies scattering length. This study also demonstrates the importance of finite-range effects in $^{40}$K-$^{87}$Rb mixtures. Finally, the present results help elucidate the universality of three-body parameters for heteronuclear systems with finite intraspecies scattering lengths.
\section{Acknowledgments}
We thank C. H. Greene, P. G. Panos and Chen Ji for helpful discussions.
Hui-Li Han was supported by the National Natural Science Foundation of China under Grant No. 11874391.
Ting-Yun Shi was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDB21030300.
|
{
"arxiv_id": "2302.08687",
"language": "en",
"timestamp": "2023-02-24T02:17:48",
"url": "https://arxiv.org/abs/2302.08687",
"yymm": "2302"
} |
\section{Introduction}
Deep learning (DL) is used in various domains including computer vision, recommendation systems, and natural language processing~\cite{mlfb_hpca19, dlrm19, tpu-isca}.
However, training and inference for those DL models require a large number of \textbf{computations} and a large amount of \textbf{memory}.
To accelerate those, high-end GPUs (such as ones with NVIDIA Tensor cores~\cite{nvidia_volta}), domain-specific accelerators (such as Google's TPUs~\cite{tpu-isca}), and high-performance CPUs~\cite{intel2018vnni, intel2020isa} along with optimized libraries and frameworks~\cite{onednn, tvm, cudnn, tensorflow2015-whitepaper} have been introduced.
Although CPUs have received relatively less attention in the DL era compared with GPUs and domain-specific accelerators, CPUs are widely used at datacenters thanks to their flexibility~\cite{mlfb_hpca19}.
In fact, there are cases where CPUs are more suitable than GPUs/accelerators as the primary processor for processing deep neural networks (DNNs).
For example, edge devices with tight area/power budget~\cite{sparce_taco19} prefer CPUs since GPUs/accelerators cannot be easily deployed in addition to CPUs.
Moreover, a large number of server-class CPUs are (and will be) used in datacenters to process High Performance Computing (HPC) workloads, and adding/maintaining additional accelerators just for DL workloads would increase complexity and cost~\cite{save_micro20}.
Furthermore, offloading a modest-size task to an accelerator or a GPU might not be the best option if the offloading overhead
is relatively substantial~\cite{accelerometer}.
Finally, large DNNs are hard to run on GPUs or accelerators because their memory capacity is smaller than that of CPUs~\cite{intel-mlperf}.
Considering these reasons, it is not surprising that many CPU vendors including Arm~\cite{arm20ethos}, Intel~\cite{intel2020isa}, and IBM~\cite{ibm21micro} have started deploying \textit{dense} matrix engines along with conventional scalar and vector engines to accelerate GEMM (i.e., general matrix multiplication) which is at the core of DL models~\cite{gemm_petewarden}.
Since these engines focus only on dense matrix computations, they face a challenge as sparsity becomes pervasive in DL. Thus, it is time to consider how to utilize them efficiently for sparse DNNs on CPUs, similar to how scalar/vector engines have been evolved over many years.
Recent research has focused on adding hardware support in DNN accelerators to handle sparsity as it can be used to improve power and performance by skipping or gating ineffectual computations (since anything multiplied by zero is zero) and to reduce
memory capacity and bandwidth requirements by saving only non-zero values and their indices in a compressed manner~\cite{extensor, sigma, scnn, eyeriss_v2}.
CPUs, however, bring in the following additional challenges.
(i) Being general-purpose, CPUs are particularly sensitive to the amount of hardware devoted to improving a subset of workloads.
(ii) At the same time, new functional units in CPUs need to be able to support a wide variety of use cases.
(iii) CPUs need programmability support to handle sparsity unlike offload accelerators with custom DMA engines for sparse accesses.
In this paper, we introduce VEGETA, \underline{V}ertically-Integrated \underline{E}xtensions for Sparse/Dense \underline{GE}MM \underline{T}ile \underline{A}cceleration on CPUs.
To expose VEGETA to programmers, we extend the ISA with new instructions and registers, which enables accelerating sparse DNNs by removing redundant computations and reducing memory traffic.
We also introduce a new light-weight matrix engine that enhances a systolic array and register file to support flexible $N$:$M$ structured sparsity.
Further, we demonstrate how VEGETA can accelerate \textit{both} structured and unstructured sparse-dense GEMMs (SPMMs) via software transformations of the sparse matrix.
\begin{comment}
We take the following design decisions within VEGETA.
\begin{itemize}
\item \textbf{Static vs Dynamic Sparsity.}
There are two types of sparsity in DL models: static and dynamic.
Static sparsity refers to the sparsity in the weights which are usually generated by network pruning~\cite{deepcompression_iclr16, domino2021nips, zhou2021learning}.
Dynamic sparsity is the result of the activation functions such as rectified linear unit (ReLU) which outputs zero for negative inputs.
We limit our exploration to static sparsity which we program via software, leaving
software/hardware support for handling dynamic sparsity support (such as dynamic compaction across different tile registers to build non-zero tiles similar to the approach that SAVE~\cite{save_micro20} used for vector registers) as future work.
\item \textbf{Structured vs Unstructured Sparsity.}
Within static sparsity, a key design question is structured vs. unstructured.
While native support for unstructured sparsity could provide higher accuracy with extreme sparsity, the area overhead to support the sparsity in the form of a highly reconfigurable NoC~\cite{sigma} and a sizable sparse controller \cite{extensor, geng2019awb}
is only justifiable on standalone accelerators~\cite{sambanova, cerebras}, but not within CPUs.
We leverage insights from recent works~\cite{zhou2021learning, domino2021nips} that identify layer-wise flexible $N$:$M$ structured sparsity (where every M elements contain at most N non-zero values) on a per-layer basis (i.e., N can change for each layer) shows comparable accuracy with unstructured sparsity until certain degree.
Thus, we design an architecture with flexible $N$:$M$ structured sparse support and show how we can use the structured sparse support to leverage unstructured sparsity.
Further, to handle unstructured sparsity using flexible $N$:$M$ structured sparse support, we propose a method to express an unstructured matrix as a \textit{row-wise flexible structured sparse matrix} and show how our architecture exploit row-wise flexible structured sparse matrix to skip ineffectual computations.
\end{itemize}
\end{comment}
This is the first work, to the best of our knowledge, to demonstrate sparsity support in a CPU matrix engine.
We believe this work is extremely timely given growing interest in industry products for sparsity, such as Sparse Tensor Core (STC) in NVIDIA's Ampere GPU~\cite{mishra2021accelerating} and Sparsity-aware NPU in a Samsung Mobile AP~\cite{samsung}.
\textbf{Summary of Contributions:}
\begin{itemize}
\item We introduce VEGETA ISA extensions that include new instructions and registers for supporting sparse tile operations along with software optimizations.
\item We propose new architectural extensions to a systolic-array-based matrix engine to support flexible $N$:$M$ structured sparsity in terms of sparsity pattern and granularity.
\item We explore different VEGETA engine design choices to understand the performance and area trade-offs.
\item Across a suite of GEMM/SPMM kernels, we observe 1.09$\times$, 2.20$\times$, 3.74$\times$, and 3.28$\times$ speed-ups compared to a SOTA dense array when running 4:4 (dense), 2:4, 1:4, and unstructured (95\%) sparse DNN layers, respectively.
\end{itemize}
\section{Background}
\subsection{GEMM and DNN Layers}
A DNN comprises multiple layers of potentially different types.
Two of the most time-consuming layer types are fully connected layers, which are widely used in Multi-Layer Perceptrons/Transformers, and convolutional layers~\cite{eyeriss}.
The underlying building block of these popular and compute-intensive DNN layers is GEMM.
Fully connected layers are composed of GEMMs naturally, while modern high-performance convolutional layers are implemented with GEMMs as their innermost operation~\cite{libxsmm_ipdps20}.
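To illustrate this lowering concretely, the following is a minimal sketch (assuming NumPy; stride 1, no padding, and hypothetical tensor shapes chosen only for illustration) of the standard im2col transformation that turns a convolutional layer into a single GEMM:
\begin{verbatim}
import numpy as np

def conv2d_as_gemm(x, w):
    # x: (C, H, W) input, w: (K, C, R, S) filters
    # -> output (K, H-R+1, W-S+1), stride 1, no padding
    C, H, W = x.shape
    K, _, R, S = w.shape
    Ho, Wo = H - R + 1, W - S + 1
    cols = np.empty((C * R * S, Ho * Wo))
    for i in range(Ho):
        for j in range(Wo):
            cols[:, i * Wo + j] = x[:, i:i + R, j:j + S].ravel()
    out = w.reshape(K, -1) @ cols          # the GEMM
    return out.reshape(K, Ho, Wo)
\end{verbatim}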
\subsection{Systolic Array (SA)}
In the last few years, systolic arrays~\cite{systolic} have become a prime design choice for accelerating dense GEMM, especially in the context of DNNs, due to its simple and efficient architecture.
The most well-known commercial systolic array architecture is in Google's TPU~\cite{tpu-isca}.
A systolic array is a two-dimensional array composed of highly scalable, pipelined, and homogeneous processing elements (PEs).
Each PE is responsible for a Multiply-and-Accumulate (MAC) operation and forwarding operands to the neighboring PEs.
An element of one matrix is pre-loaded into the PE (often called the stationary element~\cite{eyeriss}).
The elements of the remaining matrices are streamed to the left and top edge of the systolic array. Then, they are propagated to the PEs using the horizontal and vertical links between neighboring PEs.
This maximizes data reuse between PEs, thus minimizing the number of reads and writes between the array and external buffers.
\subsection{Sparsity in DNNs}
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{figures/different-sparsity-patterns.pdf}
\vspace{-1em}
\caption{
Comparison of unstructured sparsity, tile-wise 2:4 sparsity, and row-wise $N$:4 sparsity (different rows could have different $N$ values between 0 and 4). The tile size is 8$\times$8 in this example, and the block size is 4.
}
\label{fig:sparsity}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{figures/compressed-matrix.pdf}
\caption{Compression of a matrix with $N$:$M$ structured sparsity. $M$ determines the bit width of the indices.
}
\label{fig:compressed-matrix}
\vspace{-1em}
\end{figure}
Even though the number of computations required for executing a DNN is huge, there is an opportunity to reduce that significantly by leveraging sparsity in DNNs. We describe different dimensions of sparsity in DNN inference.
\textbf{Sparsity source.} Weights and input activations are the main sources of sparsity in DNNs.
Weight (static) sparsity is derived by pruning some edges in a DNN with a small sacrifice in accuracy~\cite{pruning_nips15, deepcompression_iclr16}.
This leads to zeros in the weight matrix.
Input (dynamic) sparsity is usually induced by a popular activation function, Rectified Linear Unit (ReLU), which clips negative inputs to zero, leading to a significant fraction of zero-valued input activations for the next layer.
\textbf{Sparsity degree.} Sparsity degree is the fraction of zeros in a given matrix.
The degree of weight sparsity can often be tuned during the pruning process. However, the degree of input sparsity is usually non-deterministic and unpredictable since the actual values of input activations are determined at runtime.
\textbf{Sparsity pattern.}
The non-zeros for a DNN may have a pattern, such as certain channels of weights being all zero (through channel pruning) or each block with M elements having at most N non-zeros (often called $N$:$M$ sparsity~\cite{domino2021nips}).
``Unstructured sparsity'' indicates the lack of a pattern.
\textbf{Sparsity granularity.}
Sparsity patterns may exist at different granularities.
For example, ``network-wise $N$:$M$ sparsity'' indicates all layers in a network have the same $N$:$M$ ratio, while ``layer-wise $N$:$M$ sparsity'' means different layers may have different $N$:$M$ ratios.
During execution, a layer is usually decomposed into 2D tiles, and each tile is composed of rows (vectors).
Similar to the network granularity as explained before, sparsity patterns could exist at the tile or row granularity.
In \autoref{fig:sparsity}, we compare matrices with different sparsity patterns/granularities, with a tile size of 8$\times$8.
Previous HW support focused on either network-wise~\cite{mishra2021accelerating, sta_cal20, maohua_micro19} or layer-wise $N$:$M$ sparsity~\cite{s2ta_hpca22}, whereas we target row-wise $N$:$M$ sparsity to provide broader flexibility and to cover some unstructured sparsity, as shown in \autoref{table:nm-hw}.
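As a minimal software sketch (assuming NumPy, and assuming the tile already satisfies the chosen $N$:$M$ constraint), the block-wise compression illustrated in \autoref{fig:compressed-matrix} can be expressed as follows; each kept element stores its value plus a $\log_2 M$-bit intra-block index.
\begin{verbatim}
import numpy as np

def compress_nm(tile, M=4):
    # Keep only the non-zeros of every M-wide block in each row,
    # recording their intra-block positions as indices.
    values, indices = [], []
    for row in tile:
        for b in range(0, row.size, M):
            block = row[b:b + M]
            nz = np.nonzero(block)[0]
            values.append(block[nz])
            indices.append(nz)          # each index fits in log2(M) bits
    return values, indices
\end{verbatim}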
\begin{comment}
As mentioned, GEMMs often dominate both DNN inference and training.
An inference with a DNN uses a forward pass while training a DNN requires both forward and backward passes.
In a forward pass, inputs are propagated by multiplications of the weights of each layer and a set of input values (also known as activations) for that layer to generate outputs. During this stage, some networks exhibit \textbf{dynamic sparsity} where an unpredictable set of input values is zero.
This typically results from common activation functions such as Rectified Linear Unit (ReLU), which clips negative inputs to zero leading to a significant fraction of zero-valued activations for the next layer.
Since multiplication by zero is wasted work, a design can skip operations on zero-valued inputs to save energy and/or improve performance. However, exploiting dynamic sparsity is not straightforward due to its unpredictability.
Similar operations are required during the backward pass where the gradient is calculated to train the model by updating weights with gradients~\cite{zhou2021learning}.
\TODO{Make bulleted definitions for each type of sparsity}
On the other hand, AI researchers have also developed methods to ``prune'' some edges in a neural network with only a small sacrifice in accuracy~\cite{pruning_nips15, deepcompression_iclr16}. This leads to zeros in the weight matrix. For inference tasks, the weights are fixed; thus, this is known as \textbf{static sparsity}. In this work, we focus on exploiting static sparsity.
Static sparsity for DNNs is divided into \textbf{unstructured sparsity} and \textbf{structured sparsity}~\cite{zhou2021learning}, as shown in \autoref{fig:sparsity}.
The examples in the figure are all 8$\times$8 and 50\% sparse, which means 32 non-zero elements.
Unstructured sparsity is usually achieved by removing unimportant weight values during model training without any definite pattern~\cite{deepcompression_iclr16}.
Unstructured sparsity can significantly reduce model size, but generally leads to under-utilization of hardware without complex support.
In contrast, \textbf{coarse-grained structured sparsity} provides a hardware-friendly pattern that can be exploited without significant hardware overhead (with filter/channel/block pruning), but this constrained pruning leads to lower accuracy~\cite{maohua_micro19, columncombining_asplos19} due to the low representation power.
Recent work by Zhou \textit{et al.}~\cite{zhou2021learning} introduced $N$:$M$ \textbf{fine-grained structured sparsity}.
As illustrated in~\autoref{fig:sparsity}, there are a fixed number, N, of non-zero elements in every \textbf{block} of M contiguous elements.
~\autoref{fig:compressed-matrix} shows how the matrix can be compressed with block-wise indexes.
This has motivated previous sparse accelerators to target uniform $N$:$M$ fine-grained structured sparsity~\cite{mishra2021accelerating, sta_cal20}. However, they support fixed N and M (for e.g., the NVIDIA tensor core sets N=2 and M=4 at design-time).
Another recent work using $N$:$M$ structured sparsity~\cite{domino2021nips} shows that adopting layer-wise $N$:$M$ sparsity shows even better accuracy compared to enforcing uniform $N$:$M$ sparsity across all layers, thanks to the higher representation power, which can be accelerated by S2TA~\cite{s2ta_hpca22}.
To the best of our knowledge, there is no HW work which can leverage both flexible $N$:$M$ fine-grained structured sparsity and unstructured sparsity using structured sparse architecture support.
\end{comment}
\begin{comment}
\subsection{BFloat16}
Brain Floating Point Format or bfloat16 is a 16-bit floating-point format introduced by Google to accelerate DNN workloads on TPUs\cite{bf16}. The bfloat16 format has the advantages of reducing memory usage and chip area for the multiplier while maintaining the same dynamic range as FP32. Additionally, conversion from FP32 to bfloat16 only requires truncation of the least significant 16-bits of the mantissa, allowing for fast conversion between the two formats.
\subsection{Intel Pin}
Intel Pin is a dynamic binary instrumentation tool that performs instrumentation at runtime on precompiled binaries\cite{pin}. Pintools are tools created using Pin and can be used to perform performance analysis or to emulate instructions at runtime\cite{pintool}. The pintool runs in the same address space as the executable to be instrumented and hence can access all the data of the executable. We describe a subset of functions provided by Pin:
\begin{itemize}
\item \textbf{PIN\_SafeCopy()}: Copies the specified number of bytes from a source memory location to a destination memory location, which allows the pintool to read or write values used by the executable.
\item \textbf{INS\_Opcode()}: Reads instruction opcode of the form XED\_ICLASS\_name.
\item \textbf{INS\_OperandReg()}: Reads instruction operand.
\item \textbf{INS\_Next()}: Gets next instruction.
\item \textbf{INS\_InsertCall()}: Inserts a call to the function pointer relative to input instruction. This function is used to insert instrumentation code.
\end{itemize}.
\subsection{MacSim}
MacSim~\cite{hyesoon12macsim} is a trace-driven cycle-accurate architectural simulator which can be used to explore new architectural features. A binary instrumentation tool such as Intel Pin can be used to generate a trace of executed instructions. An instruction in the trace has both static information (such as the opcode and instruction address) and dynamic information (such as store address and load address). Each instruction is translated into micro-ops during the execution of MacSim, which finally outputs performance statistics including instruction count, total cycles, cache hit ratio, average memory latency, and etc.
\end{comment}
\section{VEGETA Instruction Set Architecture}
\label{sec:isa}
\begin{comment}
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{figures/matrices.pdf}
\caption{The visualization of tile matrices for \textsc{tile\_spmm\_u}\xspace.}
\label{fig:matrices}
\end{figure}
\HK{ I'm not sure what to conclude from this figure. is it trying to visualize sparity? }
\GJ{I was trying to show the granularity of TILE\_SPMM\_U and what it's computing .. maybe this is redundant}
\end{comment}
\begin{comment}
\subsection{ISA and Architectural support to accelerate GEMM in CPU}
To meet the high-performance demands of Deep Learning workloads, various CPU vendors have started adding GEMM operations in their ISA. In this project, we focus on Intel's Advanced Matrix Extensions (AMX)~\cite{intel2020isa} to describe our designs, but our work can be generalized to other CPU architectures.
Intel AMX introduces eight $1 KB$ $2D$ tile registers composed of 16 rows, where each row holds 64 bytes. AMX also includes new instructions which use the tile registers detailed in Table \ref{table:amx_instructions}.
The \textsc{tile\_load}\xspace instruction loads a tile from the memory system to the destination tile register while the \textsc{tile\_store}\xspace instruction stores elements in the source tile register to the target memory address. The other key instruction is \textsc{tile\_gemm/tile\_spmm}\xspace, which performs a mixed-precision $\boldsymbol{C} \mathrel{+}= \boldsymbol{A} \times \boldsymbol{B}$ on tile registers, where $\boldsymbol{A}$ and $\boldsymbol{B}$ are $BF16$ and $\boldsymbol{C}$ is in $FP32$ precision. To support the new sparse matrix engine, we extend the AMX instruction set as described in Section \ref{sec:isa} with additional registers.
\end{comment}
\subsection{Register File Support}
\label{sec:rf}
\begin{figure}[!t]
\centering
\includegraphics[width=0.47\textwidth]{figures/registers-final.pdf}
\vspace{-0.5em}
\caption{VEGETA tile registers and metadata registers.}
\label{fig:regs}
\vspace{-1em}
\end{figure}
Inspired by Intel AMX~\cite{intel2020isa}, we assume there are eight 1 KB tile registers (treg0-7), each comprising 16 rows of 64 Bytes.
We define a \textit{tile} as a 2D block comprising rows of data separated by a fixed stride. For a sparse matrix, we define the \textit{effective tile} as the larger sparse tile represented by the non-zeros in the compressed tile together with its metadata.
A tile register can hold 16$\times$32 BF16 elements or 16$\times$16 FP32 elements. To support 2:4 and 1:4 sparsity, we introduce aliased tile registers of size 2 KB (utile register or ureg) and 4 KB (vtile register or vreg), respectively.
One ureg is composed of two consecutive tregs, while one vreg is composed of two consecutive uregs as shown in \autoref{fig:regs}.
Next, we introduce metadata registers (mreg0-7) to store metadata information for sparse tiles. As shown in \autoref{fig:compressed-matrix}, a pair of bits in the metadata represents the position of one non-zero element in a block of the compressed sparse matrix. Since a single row of the tile register holds 32 non-zero BF16 elements, the corresponding metadata register row holds $32\times2$ bits of metadata.
Hence, an mreg has 16 rows, each with 64 bits of metadata for a total of 128 Bytes.
Note that while a treg can hold 16$\times$32 BF16 elements,
a treg and mreg for 2:4 sparsity can be used to store data for the effective tile whose dimension is 16$\times$64.
Similarly, when mreg is used for 1:4 sparsity, a treg and mreg can represent an effective tile whose dimension is 16$\times$128.
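To make this register layout concrete, the following minimal C++ sketch (illustrative only, not the compression routine used by our toolchain) packs one effective 16$\times$64 tile row with 2:4 sparsity into 32 non-zero values and 64 bits of metadata; the type and function names are hypothetical.
\begin{lstlisting}[language=C++]
#include <cstdint>
#include <vector>

// Illustrative only: pack one row of an effective 16x64 tile with 2:4 sparsity
// into non-zero values plus 2-bit-per-non-zero position metadata.
// BF16 values are carried as raw uint16_t bit patterns.
struct CompressedRow {
  std::vector<uint16_t> values;  // 32 non-zero BF16 values -> one treg row (64 B)
  uint64_t metadata = 0;         // 32 x 2-bit positions    -> one mreg row (8 B)
};

CompressedRow compress_2to4_row(const uint16_t row[64]) {
  CompressedRow out;
  int slot = 0;                                    // which 2-bit metadata field to fill
  for (int blk = 0; blk < 64; blk += 4) {          // 16 blocks of M = 4 elements
    int kept = 0;
    for (int i = 0; i < 4 && kept < 2; ++i) {      // keep the (at most) N = 2 non-zeros
      if (row[blk + i] != 0) {
        out.values.push_back(row[blk + i]);
        out.metadata |= (uint64_t)i << (2 * slot); // position of the non-zero in its block
        ++slot; ++kept;
      }
    }
    for (; kept < 2; ++kept, ++slot)               // pad blocks with fewer than 2 non-zeros
      out.values.push_back(0);                     // padded slots keep position 0 metadata
  }
  return out;                                      // 32 BF16 values + 64 bits of metadata
}
\end{lstlisting}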
\input{tables/amx_instructions}
\subsection{VEGETA Instructions}
\label{sec:isa_extensions}
\autoref{table:amx_instructions} summarizes the VEGETA instructions using the aforementioned registers.
\textsc{tile\_load\_t}\xspace, \textsc{tile\_load\_u}\xspace, and \textsc{tile\_load\_v}\xspace load a 1 KB, 2 KB, and 4 KB tile from the specified address to a treg, ureg, and vreg, respectively, while \textsc{tile\_store}\xspace stores a 1 KB tile from a treg to memory.
\textsc{tile\_load\_t}\xspace can be used to load either a dense tile or the non-zero values of a compressed sparse tile. In the latter case, the 1 KB tile has an effective tile size of 2 KB or 4 KB for 2:4 and 1:4 sparsity ratios, respectively, as mentioned above. Furthermore, the load of a sparse tile must be accompanied by a corresponding \textsc{tile\_load\_m}\xspace instruction, which loads 128 B of metadata from memory to an mreg.
The \textsc{tile\_gemm}\xspace, \textsc{tile\_spmm\_u}\xspace, and \textsc{tile\_spmm\_v}\xspace instructions perform a tile matrix multiply and accumulate, $\boldsymbol{C}\mathrel{{+}{=}}\boldsymbol{A} \times \boldsymbol{B}$, where $\boldsymbol{A}$ and $\boldsymbol{B}$ are BF16 tiles and $\boldsymbol{C}$ is the FP32 output.
$\boldsymbol{A}$ holds a tile of a sparse matrix. With data in compressed format, $\boldsymbol{A}$ holds a fixed number of non-zeros, but with higher sparsity (smaller $N$:$M$), its {\it effective} size is larger.
In contrast, the actual size of the dense $\boldsymbol{B}$ tile must grow as $\boldsymbol{A}$ gets sparser.
For example, assuming a dense $\boldsymbol{A}$ tile is $16\times32$, the effective size of $\boldsymbol{A}$ for a 2:4 sparsity ratio is $16\times64$ (2 KB), while the non-zero values could still fit into a 1 KB treg.
Thus, the corresponding $\boldsymbol{B}$ tile should be $64\times16$, which will fit into a 2 KB ureg.
Similarly, for a 1:4 ratio, the effective size of $\boldsymbol{A}$ is 4 KB and the corresponding $\boldsymbol{B}$ tile will fit into a 4 KB vreg.
Note that the corresponding output tile $\boldsymbol{C}$ is a constant size ($16\times16, FP32$) and fits in a 1 KB treg.
We call \textsc{tile\_gemm}\xspace a VEGETA tile GEMM instruction and \textsc{tile\_spmm\_u}\xspace and \textsc{tile\_spmm\_v}\xspace VEGETA tile SPMM instructions.
To summarize, \textsc{tile\_gemm}\xspace performs a dense (4:4) GEMM operation on 1 KB treg inputs, while \textsc{tile\_spmm\_u}\xspace performs an SPMM operation where $\boldsymbol{A}$ is a 2:4 compressed sparse 1 KB tile, $\boldsymbol{B}$ is a dense 2 KB tile, and the output $\boldsymbol{C}$ is a dense 1 KB tile.
Thus, \textsc{tile\_spmm\_u}\xspace calculates $\boldsymbol{C}$ $(16 \times 16, FP32)\mathrel{{+}{=}}\boldsymbol{A}$ $(16 \times 64, BF16) \times \boldsymbol{B}$ $(64 \times 16, BF16)$.
Similarly, \textsc{tile\_spmm\_v}\xspace calculates $\boldsymbol{C}$ $(16 \times 16, FP32)\mathrel{{+}{=}}\boldsymbol{A}$ $(16 \times 128, BF16) \times \boldsymbol{B}$ $(128 \times 16, BF16)$.
Note that $\boldsymbol{C}$ is used as both input and output.
\input{algorithms/gemm}
The number of useful MAC operations required to calculate $\boldsymbol{C}$ is the same for \textsc{tile\_gemm}\xspace, \textsc{tile\_spmm\_u}\xspace, and \textsc{tile\_spmm\_v}\xspace (8192).
For each output element, the number of effectual MAC computations is 32.
Finally, to support SPMM with a row-wise $N$:$M$ sparse matrix $A$, we introduce \textsc{tile\_spmm\_r}\xspace. \textsc{tile\_spmm\_r}\xspace calculates $\boldsymbol{C}$ $(R \times 16, FP32)\mathrel{{+}{=}}\boldsymbol{A}$ $(R \times 64, BF16) \times \boldsymbol{B}$ $(64 \times 16, BF16)$ where $R$ can vary from 8 to 32, depending on $N$:4 sparsity for each row (which will be stored as extra metadata, 32$\times$2 bits, or 8B, at most).
We show an example SPMM kernel assuming $\boldsymbol{A}$ with 2:4 sparsity using VEGETA instructions in \autoref{lst:spmm_base}.
$D_m$, $D_n$, and $D_k$ indicate the size of each dimension while $T_m$, $T_n$, and $T_k$ show the corresponding tile sizes.
In this case, $T_m$, $T_n$, and $T_k$ are 16, 16, and 64, respectively.
We store the values of matrix $B$ in a transposed manner in the tile registers.
We implemented an optimized version of this kernel to evaluate our architecture that we introduce in \autoref{sec:architecture}.
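For concreteness, a minimal C++ sketch of the basic kernel in \autoref{lst:spmm_base} is shown below. The \texttt{\_tile\_*} intrinsic names, their signatures, the register numbering, and the per-tile addressing are illustrative placeholders for the VEGETA C++ intrinsics rather than a definitive API; tiles are assumed to be pre-packed contiguously in memory.
\begin{lstlisting}[language=C++]
// Placeholder prototypes for the VEGETA intrinsics (illustrative, not the real API).
extern void _tile_load_t(int treg, const void* addr);  // load 1 KB tile
extern void _tile_load_m(int mreg, const void* addr);  // load 128 B metadata
extern void _tile_load_u(int ureg, const void* addr);  // load 2 KB tile
extern void _tile_spmm_u(int cT, int aT, int bU);      // C += A(2:4) x B
extern void _tile_store (int treg, void* addr);        // store 1 KB tile

// Sketch of the 2:4 SPMM kernel: Tm = 16, Tn = 16, Tk = 64 (effective A tile 16x64).
// Register numbering is illustrative; ureg2 aliases treg4-5, so it does not
// conflict with treg0 (compressed A values) or treg1 (C tile).
void spmm_2to4(const char* Av, const char* Am, const char* B, char* C,
               int Dm, int Dn, int Dk) {
  const int Tm = 16, Tn = 16, Tk = 64;
  for (int i = 0; i < Dm / Tm; ++i)
    for (int j = 0; j < Dn / Tn; ++j) {
      char* c_tile = C + ((long)i * (Dn / Tn) + j) * 1024;     // 1 KB FP32 C tile
      _tile_load_t(1, c_tile);
      for (int k = 0; k < Dk / Tk; ++k) {
        _tile_load_t(0, Av + ((long)i * (Dk / Tk) + k) * 1024); // non-zeros of A
        _tile_load_m(0, Am + ((long)i * (Dk / Tk) + k) * 128);  // 2-bit indexes of A
        _tile_load_u(2, B  + ((long)j * (Dk / Tk) + k) * 2048); // transposed B tile (2 KB)
        _tile_spmm_u(1, 0, 2);                                  // C += A x B
      }
      _tile_store(1, c_tile);                                   // write back C tile
    }
}
\end{lstlisting}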
\section{Motivation and VEGETA Overview}
\subsection{Vector vs. Matrix Engine}
\label{sec:vector_matrix}
Single Instruction Multiple Data (SIMD) and/or short vector execution have been supported in mainstream CPUs for several generations.
Because vectors operate at a much smaller granularity than matrices, vector engines struggle to deliver the compute throughput that GEMM-heavy DL workloads demand, and the industry has started integrating matrix engines inside CPUs~\cite{intel2020isa, arm20ethos, ibm20isa}.
A matrix engine can provide power-efficient high compute throughput via simple control logic and significant internal data reuse.
For example, in Intel's upcoming Sapphire Rapids processors, the peak matrix compute capability per cycle of each core is 8$\times$ the vector capability~\cite{intel2021}.
In \autoref{fig:vector-matrix-comparison}, we show effective compute throughputs of sparse/dense matrix/vector engines on a convolutional layer with different densities derived from a roofline model.
We assume 64 and 512 GFLOPS for the vector and matrix engines, respectively, with a memory bandwidth of 94 GB/s~\cite{intelBWcal_orig}.
For the 100\% dense case, the dense matrix (vector) and sparse matrix (vector) engines achieve the same compute throughput since no computation can be skipped.
We observe that sparse engines outperform dense engines significantly by skipping non-effectual computations, especially when density is low.
Also, there is a significant gap in compute throughput between vector and matrix engines.
Moreover, due to the smaller granularity of vector instructions, the same GEMM kernel requires many more instructions to be executed when using vector engines, contributing to the runtime gap shown in \autoref{fig:skx-spr-comparison} (estimated using the cycle-accurate simulator MacSim~\cite{hyesoon12macsim}).
When memory bound, i.e., at extremely low density and thus low arithmetic intensity, vector compute throughput is sufficient, so a sparse vector engine performs similarly to a sparse matrix engine.
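The trend behind this roofline comparison can be reproduced with a few lines of code. The sketch below is a simplification that assumes the stated peak throughputs and bandwidth, keeps byte traffic identical for dense and sparse engines, and uses a hypothetical 1024$^3$ GEMM as the workload; it is not the exact model used for \autoref{fig:vector-matrix-comparison}.
\begin{lstlisting}[language=C++]
#include <algorithm>
#include <cstdio>

// Simplified roofline: effective throughput = useful FLOPs / max(compute, memory) time.
// A dense engine also multiplies zeros, so only a 'density' fraction of what it
// executes is useful work; a sparse engine spends its whole peak on non-zeros.
double effective_gflops(double peak_gflops, double bw_gbs, double flops,
                        double bytes, double density, bool sparse) {
  double useful    = flops * density;                   // MACs on non-zero weights
  double executed  = sparse ? useful : flops;           // sparse engines skip zeros
  double t_compute = executed / (peak_gflops * 1e9);
  double t_memory  = bytes / (bw_gbs * 1e9);
  return useful / std::max(t_compute, t_memory) / 1e9;  // useful GFLOP/s achieved
}

int main() {
  // Hypothetical 1024x1024x1024 GEMM with BF16 operands and FP32 output, read once.
  double flops = 2.0 * 1024 * 1024 * 1024;
  double bytes = 1024.0 * 1024 * (2 + 2 + 4);
  const double densities[] = {1.0, 0.5, 0.25, 0.05};
  for (double d : densities)
    std::printf("density %.2f | dense matrix %.0f vs sparse matrix %.0f GFLOPS\n",
                d, effective_gflops(512, 94, flops, bytes, d, false),
                effective_gflops(512, 94, flops, bytes, d, true));
  return 0;
}
\end{lstlisting}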
\begin{figure}[!t]
\centering
\includegraphics[width=0.48\textwidth]{figures/vector-matrix-comparison.pdf}
\vspace{-1em}
\caption{Effective compute throughput for dense/sparse vector/matrix engines using a roofline model.}
\label{fig:vector-matrix-comparison}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.48\textwidth]{figures/skx-spr-comparison.pdf}
\vspace{-1em}
\caption{Executed instruction counts and runtime ratio comparison on a CPU with matrix engines on GEMM workloads with equal-sized dimensions.
}
\label{fig:skx-spr-comparison}
\end{figure}
\subsection{Structured vs. Unstructured Sparsity}
CPUs are general-purpose processors. Ideally, a CPU should be able to support {\it any} sparsity pattern, including unstructured sparsity. However, this introduces two practical challenges.
\textbf{Challenge 1: Programmability.}
GEMM implementations typically partition the matrices into tiles and iterate over the tiles to optimize data reuse in caches, facilitate parallelization, etc. Innermost GEMM kernels are often optimized for a predetermined tile size, using instructions operating on fixed size tiles~\cite{libxsmm_ipdps20}.
The irregular nature of unstructured sparsity means tile sizes cannot be known a priori and may all differ, which makes it tricky to define an easy-to-use ISA given how GEMM software is typically written.
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{figures/underutlization-systolic.pdf}
\vspace{-0.5em}
\caption{Comparison of utilization of PEs in a Weight Stationary (WS) systolic array and VEGETA engines with sparse weights.
}
\label{fig:underutlization-systolic}
\vspace{-1.5em}
\end{figure}
\textbf{Challenge 2: Implementation overheads.}
There are trade-offs between different options for the register file (RF) and systolic array (SA) when supporting sparsity.
The RF feeding the SA could hold a dense (i.e., conventional, with zeros) or sparse/compressed representation of each tile; if sparse, we need indexing logic and metadata to match each non-zero to the appropriate values from the other matrix.
Also, the SA could be comprised of conventional PEs (i.e., dense) or be enhanced to be sparsity-aware and skip zeros, at the cost of additional interconnects inside each PE~\cite{sigma}. Naturally, all these structures and interconnects add overhead.
To address these challenges, we make the following design decisions: (i) We limit our scope to sparsity support for DL workloads, where the typical sparsity degree is up to 95\%.
Further, we add HW support for \textit{flexible $N$:$M$ structured sparsity}, leveraging insights from a recent work~\cite{domino2021nips} which has shown that adopting \textit{layer-wise} $N$:$M$ sparsity achieves better accuracy than network-wise sparsity.
We also show how the HW can support unstructured sparsity by transforming the target matrix using row-wise $N$:$M$ sparsity.
(ii) We add sparsity support in \textit{both} the RF (\autoref{sec:isa}) and SA (\autoref{sec:architecture}) to achieve efficient utilization of the register storage and MACs.
\input{tables/type-of-sparsity}
\subsection{HW Support for Flexible $N$:$M$ Structured Sparsity}
\label{subsec:hw_support}
\autoref{fig:underutlization-systolic} shows under-utilization challenges for a conventional WS systolic array when used with sparse weights.
A WS systolic array keeps the weight values stationary over time while inputs and outputs are streamed in a skewed manner.
If sparse weights with 2:4/1:4 structured sparsity are used, only 50\%/25\% of the PEs are mapped with non-zero weights, so the remaining PEs perform useless computations.
This work proposes an enhanced systolic-array-based matrix engine that maps only non-zero weight values and distributes/reduces the correct input/output elements at the right time, preserving the strengths of a systolic array despite some irregularity in the inputs.
This also requires logic in a PE to pick the right input elements for MACs.
The indexing logic and input distribution and output reduction logic to support flexible $N$:$M$ structured sparsity (i.e., the purple boxes in \autoref{fig:underutlization-systolic}) are presented in \autoref{sec:architecture}.
In terms of sparsity granularity, we not only support network/layer/tile-wise, but also row-wise $N$:$M$ sparsity.
In \autoref{table:nm-hw}, we compare the supported sparsity granularity of our design and previous works that support $N$:$M$ sparsity.
\subsection{Transforming Unstructured to Row-Wise $N$:$M$ Sparsity}
\label{subsec:transform}
While native support for unstructured sparsity can provide higher accuracy at extreme sparsity, the area overhead of supporting it, in the form of a highly reconfigurable network-on-chip (NoC)~\cite{sigma} and a sizable sparse controller~\cite{extensor, geng2019awb},
is only justifiable in standalone accelerators~\cite{sambanova, cerebras}, not within CPUs.
However, given an unstructured sparse tile, one can derive a row-wise $N$:$M$ sparse tile that covers all non-zeros in the given sparse tile by selecting appropriate $N$:$M$ per each row.
For example, assuming 1:4, 2:4, and 4:4 are the available sparsity patterns, one can analyze each row of the target unstructured tile to find the sparsest $N$:$M$ pattern that covers all non-zeros in the row.
Then, each row can be compressed using the corresponding $N$:$M$ sparsity.
For example, the first and second rows of \autoref{fig:sparsity} (a) would be compressed with 2:4 while the third and fourth would be compressed with 1:4, guaranteeing that none of the non-zero values is lost.
This transformation does not cause any accuracy drop since it is lossless, meaning that all non-zeros in the original unstructured sparse matrix will still exist in the corresponding structured sparse matrix.
\autoref{fig:sparsity} (c) is derived using this transformation from \autoref{fig:sparsity} (a), covering all non-zeros (similarly, tile-wise 2:4 is used to derive \autoref{fig:sparsity} (b) from \autoref{fig:sparsity} (a)).
We use this transformation to leverage unstructured sparsity using VEGETA and show the estimated performance gain in \autoref{sec:row-wise-evalution}.
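A minimal sketch of this lossless transformation is shown below, assuming 1:4, 2:4, and 4:4 are the available patterns; it simply scans each row for its densest 4-element block and picks the smallest covering $N$ (the function names are illustrative).
\begin{lstlisting}[language=C++]
#include <algorithm>
#include <cstddef>
#include <vector>

// Lossless transform of an unstructured sparse row into the sparsest covering
// N:4 pattern (N in {1, 2, 4}). Values are kept as-is; only the pattern is chosen.
int covering_n_of_4(const float* row, std::size_t len) {
  int worst = 0;                                  // densest 4-element block seen so far
  for (std::size_t b = 0; b + 4 <= len; b += 4) {
    int nnz = 0;
    for (int i = 0; i < 4; ++i) nnz += (row[b + i] != 0.0f);
    worst = std::max(worst, nnz);
  }
  if (worst <= 1) return 1;                       // every block fits 1:4
  if (worst <= 2) return 2;                       // every block fits 2:4
  return 4;                                       // fall back to dense 4:4
}

// Per-row pattern selection for a tile stored row-major as rows x cols floats.
std::vector<int> row_wise_patterns(const float* tile, int rows, int cols) {
  std::vector<int> n_per_row(rows);
  for (int r = 0; r < rows; ++r)
    n_per_row[r] = covering_n_of_4(tile + (std::size_t)r * cols, cols);
  return n_per_row;                               // e.g., {2, 2, 1, 1, ...}
}
\end{lstlisting}
The resulting per-row $N$ values correspond to the small amount of extra per-row metadata mentioned for \textsc{tile\_spmm\_r}\xspace.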
\begin{comment}
\begin{figure}[!t]
\centering
\includegraphics[width=0.47\textwidth]{figures/sparse-dnn.pdf}
\caption{Expected effective compute throughput for dense and sparse DNNs with different matrix engines: RASA~\cite{rasa_dac21}, Sparse Tensor Core~\cite{sta_cal20, mishra2021accelerating} and VEGETA.}
\label{fig:sparse-dnn}
\end{figure}
\end{comment}
\subsection{VEGETA Design Overview}
This work presents VEGETA,
which includes ISA extensions and microarchitecture support for structured/unstructured sparsity using flexible $N$:$M$ fine-grained structured sparsity HW~\cite{domino2021nips, zhou2021learning} in CPUs. We present a detailed design for a few important design points (1:4, 2:4, 4:4) to explain the extensions to both the ISA and the microarchitecture, but both can naturally be extended to different block sizes (\textit{M}).
We use a 32$\times$16 conventional WS systolic array as the baseline, inspired by RASA~\cite{rasa_dac21} and Intel's TMUL~\cite{intel2020isa}.
Comparing against a dense matrix engine rather than a vector engine provides a strong baseline due to the huge gap in compute throughput between typical matrix engines and vector engines (\autoref{sec:vector_matrix}).
Our proposed VEGETA engine maintains the same number of MAC units as the baseline, adding the ability to skip zero-valued weight values via new control logic, multiplexers, and adders for reductions, along with some wider data paths.
We target mixed-precision with BF16/FP32 which is widely used for both inference and training on commercial devices~\cite{intel2020isa, tpu-bf16, nvidia-bf16}.
\input{sections/03_ISA}
\input{sections/04_SW}
\input{sections/05_Engine_Microarchitecture}
\subsection{Flexibility in the Block Size, M}
In this work, we assume $M=4$ to explain our extension in detail; this covers fine-grained structured sparsity patterns including 1:4/2:4/4:4, but the extension is not limited to this specific value of $M$.
Our approach can be extended to $M=2^m$ by modifying the tile/metadata registers and ISA to support larger input registers. A larger $M$ provides greater flexibility to the sparse model design and may result in
improved accuracy~\cite{domino2021nips}, but would cost more HW.
\begin{comment}
\begin{itemize}
\item Data prefetch: We propose the use of software prefetching techniques (\textit{\_\_builtin\_prefetch()}) to fetch tiles for the next iteration into the cache.
\item Loop tiling: We further propose the use of loop tiling to increase temporal locality, thereby reducing cache misses.
\end{itemize}
\end{comment}
\subsection{SW Optimizations for GEMM Kernel}
\label{sec:compiler}
\begin{comment}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{figures/tilespmm-example.pdf}
\caption{Active tiles for algorithm 1. \AB{Maybe you can add colour to tile register file also. Also sreg numbering renaming}}
\label{fig:tilespmm-example}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{figures/matrix_code-2.pdf}
\caption{Active tiles for algorithm 1. }
\label{fig:tilespmm-example}
\end{figure}
\end{comment}
We focus our software discussion on constructing a high-performance GEMM kernel on VEGETA.
We implemented the software optimizations manually on our kernel, but this could be employed by higher-level DNN libraries/frameworks~\cite{onednn} and/or tensor compilers~\cite{tvm}.
Moreover, these instructions may also be used by other kernels that rely on matrix multiplication.
We have added intrinsics for each of the VEGETA instructions. These VEGETA intrinsics give the programmer or library writer low-level control of inline assembly while retaining readability and ease of programming. Our software infrastructure additionally provides code generation support for these new instructions which a compiler or library developer can use to write and optimize their own kernels.
As an example, \autoref{lst:spmm_base} shows a basic implementation of the GEMM kernel. $T_m, T_n, T_k$ represent the register tile size for each dimension while $D_m, D_n, D_k$ represent the GEMM dimensions. In \autoref{lst:spmm_opt}, we optimize the kernel performance using the following loop transformation techniques to reduce the number of load instructions.
\begin{itemize}
\item \textbf{Register promotion:} The load from $C[i][j]$ is independent of the innermost loop index $k$, and therefore there are redundant loads from the same address within the $k$-loop. We store the value of $C[i][j]$ in a register for the duration of the $k$-loop, thereby reducing the total number of loads from $3\times D_m\times D_n\times D_k$ to $2\times D_m\times D_n\times D_k + D_m \times D_n$.
\item \textbf{Loop unroll and jam:} We unroll the outer $i$ loop by a factor of $3$ and fuse the inner loops. This allows for the reuse of $B[j][k]$ across iterations of the $i$-loop, further reducing the number of tile loads to $\frac{4}{3}\times D_m\times D_n\times D_k + D_m\times D_n$. Unrolling also provides the out-of-order scheduler with a larger window of independent instructions, thereby reducing pipeline stalls (see the sketch after this list).
\end{itemize}
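For illustration only, the following C-style sketch outlines the shape of the optimized loop nest in \autoref{lst:spmm_opt} after register promotion and 3-way unroll-and-jam. The \texttt{\_tile\_*} calls are placeholder prototypes for the VEGETA intrinsics (not their exact API), \texttt{tile\_of} is a hypothetical addressing helper, and the metadata loads for $\boldsymbol{A}$ are omitted for brevity.
\begin{lstlisting}[language=C++]
// Placeholder prototypes (illustrative, not the real API).
extern void _tile_load_t(int treg, const void* addr);
extern void _tile_load_u(int ureg, const void* addr);
extern void _tile_spmm_u(int cT, int aT, int bU);
extern void _tile_store (int treg, void* addr);
extern char* tile_of(char* base, int ti, int tj);   // hypothetical tile addressing helper

// Mt/Nt/Kt are tile counts per dimension. Register budget: 3 A tregs (0-2) +
// 3 C tregs (3-5) + 1 B ureg (aliasing tregs 6-7) = the eight tile registers.
void spmm_2to4_opt(char* A, char* B, char* C, int Mt, int Nt, int Kt) {
  for (int i = 0; i + 3 <= Mt; i += 3)
    for (int j = 0; j < Nt; ++j) {
      _tile_load_t(3, tile_of(C, i + 0, j));        // C tiles promoted out of the k-loop
      _tile_load_t(4, tile_of(C, i + 1, j));
      _tile_load_t(5, tile_of(C, i + 2, j));
      for (int k = 0; k < Kt; ++k) {
        _tile_load_u(3, tile_of(B, j, k));           // one B tile reused by 3 i-iterations
        _tile_load_t(0, tile_of(A, i + 0, k)); _tile_spmm_u(3, 0, 3);
        _tile_load_t(1, tile_of(A, i + 1, k)); _tile_spmm_u(4, 1, 3);
        _tile_load_t(2, tile_of(A, i + 2, k)); _tile_spmm_u(5, 2, 3);
      }
      _tile_store(3, tile_of(C, i + 0, j));          // a single store per C tile
      _tile_store(4, tile_of(C, i + 1, j));
      _tile_store(5, tile_of(C, i + 2, j));
    }
}
\end{lstlisting}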
Depending on the target CPU's specific memory organization and the performance bottlenecks of the kernel, a user could explore additional transformations to improve cache locality, including loop tiling, loop permutation, kernel fusion, and data prefetching~\cite{libxsmm_ipdps20, onednn}, but we leave these as future work for HPC/ML compiler/library developers.
\input{algorithms/gemm} \label{gemm-amx}
\section{VEGETA Engine Architecture}
\label{sec:architecture}
\begin{comment}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{figures/tilespmm-example-large.pdf}
\caption{Active tiles for 64, 64, 64 GEMM .}
\label{fig:tilespmm-example-large}
\end{figure}
\end{comment}
\subsection{Processing Units and Processing Elements} \label{subsec:pu-pe}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.95\textwidth]{figures/vegeta-designs.pdf}
\caption{Two VEGETA designs: VEGETA-D-1-1 and VEGETA-S-2-2.}
\label{fig:vegeta-designs}
\end{figure*}
\input{tables/vegeta_designs}
\label{subsec:pe}
\begin{comment}
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{figures/pe-design-final.pdf}
\caption{Detailed architecture for DPE-1L-1R and SPE-4L-2R}
\label{fig:pe-design}
\end{figure}
\end{comment}
\textbf{Processing Unit (PU).} A PU is composed of a number of MAC units that contribute to the same output element.
In a WS systolic array, the dot product to produce each output element is mapped to a column of PUs. Partially accumulated results trickle down a column, and the final result of the dot product exits the bottom of the array.
In a conventional design, a PU is composed of one MAC unit, the height of the array is the length of the dot product, and each PU produces a single partial sum each cycle.
We can, however, break the set of operations for each dot product into pieces, or ``lanes.'' If we pack multiple MAC units into a PU, each PU can work on all lanes in parallel, and we can scale down the height of the systolic array.
In this case, we must combine the partial sum for each lane at the bottom of the systolic array.
We call the number of lanes, i.e., the number of MAC units in a PU, the \textit{reduction factor}, $\beta$.
$\beta$ also indicates how many partial sums need to be reduced at the bottom of the systolic array to generate a single output element.
\textbf{Processing Element (PE).}
We group PUs that share the same eastbound inputs and output buffers into a PE.
That is, peer-to-peer forwarding happens between PEs, and the input fed from the west is broadcasted to all PUs in a PE.
The more PUs in a PE, the narrower the systolic array, and the more we amortize the overhead of the horizontal pipeline buffer.
This improvement in area and power comes at the cost of lower achievable frequency since the broadcast must reach more PUs.
We call the number of PUs in a PE the \textit{broadcast factor}, $\alpha$.
We label PE designs as PE-$\alpha$-$\beta$.
For example, PE-1-1 indicates each PE has one single-MAC PU, while PE-4-2 indicates each PE has four two-MAC PUs (PU-2s).
\textbf{SPU and SPE.}
We enhance PUs and PEs to support tile SPMM with flexible $N$:$M$ structured sparsity.
To distinguish them, we call a PU and PE without sparsity support Dense Processing Unit (DPU) and Dense Processing Element (DPE), respectively, while we call a PU and PE with sparsity support Sparse Processing Unit (SPU) and Sparse Processing Element (SPE), respectively.
DPEs and SPEs are the building blocks for VEGETA-D (for dense) and VEGETA-S (for dense and sparse) engines which we explain in the following sub-sections.
The main differences between a DPE and SPE are $M$ to 1 MUXes and a metadata buffer added to each weight buffer.
Each cycle, an SPE receives multiple input elements, and uses this extra hardware to select, for each weight, the one corresponding to its index.
To support flexible $N$:$M$ structured sparsity, we choose $\beta$ as $\frac{M}{2}$; this ensures that input elements need only be fed into a single row.
Since we use $M=4$, we use $\beta=2$ for our SPEs.
In summary, we use DPE-$\alpha$-$\beta$ and SPE-$\alpha$-$\beta$ to indicate the broadcast factor ($\alpha$) and the reduction factor ($\beta$) of each PE.
For example, the broadcast factor ($\alpha$) of SPE-2-2 (shown in ~\autoref{fig:vegeta-designs} (b)) is 2, which indicates that a single SPE-2-2 is composed of two ($=\alpha$) SPU-2s.
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\textwidth]{figures/input-selection.pdf}
\vspace{-1em}
\caption{Comparison of executions of \textsc{tile\_gemm}\xspace, \textsc{tile\_spmm\_u}\xspace, and \textsc{tile\_spmm\_v}\xspace on VEGETA-S-2-2.}
\label{fig:input-selection}
\vspace{-1em}
\end{figure*}
\textbf{Execution flow of DPE/SPE}:
DPE-1-1, a conventional single-MAC PE, operates as follows.
First, a weight element is loaded into its weight buffer.
Then, an input element and partial sum element are streamed from the left and top ports, respectively.
The input element is multiplied with the element in the weight buffer and accumulated with the partial sum element.
The generated partial sum flows to the PE below the current PE, and the input element flows to the PE on the right.
An SPE is composed of $\alpha$ SPUs, each of which comprises $\beta$ MAC units.
Different SPUs calculate partial sums in parallel for the \textit{different} output elements, while each MAC unit in an SPU calculates partial sums in parallel for the \textit{same} output element.
An SPE-2-2 receives two ($=\beta$) input \textbf{blocks} (instead of one input \textbf{element}) from the west and broadcasts them to $\alpha$ SPUs.
An input block is fed into each $M$ to 1 MUX in an SPU, and using the metadata for the corresponding weight value in the weight buffer, the corresponding input element from the block is selected.
Then, each selected input element is multiplied by the corresponding weight element and accumulated.
Since each SPU generates $\beta$ partial sums, and there are $\alpha$ SPUs, it forwards a total of $\alpha\times\beta$ partial sums (for example, SPE-2-2 will generate $2\times2=4$ partial sums) to the south port.
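Functionally, the per-lane input selection in an SPU amounts to a metadata-indexed multiplexer. The following behavioral C++ sketch (not the RTL, and using plain floats in place of BF16/FP32) illustrates one SPU-2 processing one cycle of \textsc{tile\_spmm\_u}\xspace.
\begin{lstlisting}[language=C++]
// Behavioral sketch (not RTL) of one SPU-2 (beta = 2 MAC lanes) during one cycle
// of a 2:4 tile SPMM. The 2-bit metadata of each stationary non-zero weight
// drives a 4-to-1 selection from the incoming 4-element input block.
struct Spu2 {
  float weight[2];  // the two stationary non-zeros of one 2:4 weight block
  int   meta[2];    // their 2-bit positions within the block (0..3)

  // Select the effectual input for each lane, MAC it onto the partial sums
  // arriving from the north, and forward the results toward the south.
  void step(const float input_block[4],
            const float psum_in[2], float psum_out[2]) const {
    for (int lane = 0; lane < 2; ++lane)
      psum_out[lane] = psum_in[lane] + weight[lane] * input_block[meta[lane]];
  }
};
// At the bottom of an SPU column, the two lane partial sums are reduced by an
// adder to form a single FP32 output element.
\end{lstlisting}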
\subsection{VEGETA Matrix Engine for Tile-Wise $N$:$M$ Sparsity}
\begin{comment}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.95\textwidth]{figures/SA-design.pdf}
\caption{Different DSA and SSA designs using DPE-1L-1R, DPE-2L-2R, SPE-4L-2R, and SPE-8L-2R.}
\label{fig:sa-design}
\end{figure*}
\end{comment}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.99\textwidth]{figures/spmmv-cycles-K-WS-final.pdf}
\caption{Cycle-level visualization for \textsc{tile\_spmm\_v}\xspace instruction on VEGETA-S-2-2 with 1:4 structured sparsity for matrix $\boldsymbol{A}$ with dimensions $\boldsymbol{A}$: 16$\times$128 (yellow), $\boldsymbol{B}$: 128$\times$16 (magenta), $\boldsymbol{C}$:16$\times$16 (green).
}
\vspace{-1em}
\label{fig:spmm-cycles}
\end{figure*}
Using DPEs and SPEs, we show how to build various VEGETA engines.
We use -\{S (for sparse) $|$ D (for dense)\}-$\alpha$-$\beta$ to indicate the type of the base PE of a VEGETA engine.
For example, VEGETA-D-1-1 has DPE-1-1s as its PEs while VEGETA-S-2-2 is composed of SPE-2-2s.
In~\autoref{fig:vegeta-designs}, we show VEGETA-D-1-1 and VEGETA-S-2-2 as examples.
Note that adders (or adder trees) are needed at the bottom of the VEGETA engine if the reduction factor $\beta > 1$ to generate the final output element by reducing partial sums.
A VEGETA engine is a 2D array of $N_{rows} \times N_{cols}$ PEs, where neighboring PEs are connected.
Since a column of SPUs cooperates to calculate a single output element, $N_{rows}$ can be derived as $
N_{rows} = \frac{\text{\# of effectual computations per output element}}{\beta}$.
The number of effectual computations per output element is 32 for the VEGETA Tile GEMM/SPMM instructions.
With $N_{rows}$, the number of PUs in a PE ($\alpha$), the number of MACs in a PU ($\beta$), and the total number of MAC units in a VEGETA engine, $N_{cols}$ can be derived as
$N_{cols} = \frac{\text{\# of total MAC units}}{N_{rows} \times \alpha \times \beta}$.
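As a concrete example, with the 32$\times$16 (512-MAC) baseline array and the 32 effectual computations per output element noted above, VEGETA-S-2-2 ($\alpha=\beta=2$) yields $N_{rows} = \frac{32}{2} = 16$ and $N_{cols} = \frac{512}{16 \times 2 \times 2} = 8$, i.e., a 16$\times$8 array of SPE-2-2s.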
In~\autoref{table:vegeta_designs}, we summarize different VEGETA engine design choices.
Each row of a VEGETA-S engine has a special unit called an Input Selector to select the right input blocks to support flexible structured sparsity.
In~\autoref{fig:input-selection}, we show how different VEGETA tile GEMM/SPMM instructions are executed on VEGETA-S-2-2 focusing on one SPE and the corresponding input selector.
For all three cases, non-zero values in each row of a weight tile are mapped onto a column of SPU-2s (i.e. 2 columns of MAC units). For example, weight elements 1-32 are mapped onto the first column of SPU-2 while weight elements 33-64 are mapped onto the second column of SPU-2.
\textbf{4:4 structured sparsity}: For \textsc{tile\_gemm}\xspace (4:4 sparsity), the weight tile is dense; thus, there are four effectual computations for an output element per input block (size of 4).
Thus, a half input block (two input elements) is fed into a row of PEs from the west port, and they are broadcast to each SPU in an SPE.
Since a weight tile is dense for \textsc{tile\_gemm}\xspace, the first element of an input block is multiplied with the first weight element in an SPU while the second element of the input block is multiplied with the second weight element in the SPU.
Similar to a classic systolic array, the input is fed in a skewed manner so that the reduction for partial sums happens in a spatio-temporal manner along a column of MAC units.
Once it reaches the bottom of the array, the partial sums from each MAC column in an SPU are accumulated in a reduction unit (an adder in this case) to get the final output element.
\textbf{2:4 structured sparsity}: For \textsc{tile\_spmm\_u}\xspace (2:4 sparsity), an input block (instead of a half block) is fed into a row of PEs from the left and broadcast to each SPU in an SPE.
4 to 1 MUXes in an SPU choose the corresponding input element from an input block to be used in a MAC unit.
For this case, the same block is used between MAC units in an SPU since there are two effectual computations for an output element per input block due to the 2:4 structured sparsity.
\textbf{1:4 structured sparsity}: Finally, for \textsc{tile\_spmm\_v}\xspace (1:4 sparsity), there is only one effectual computation for an output element per input block.
Thus, two input blocks will be fed into a row of PEs from the left, and they are broadcast to each SPU in an SPE.
Each MAC unit in an SPU gets an input block and chooses and computes with the corresponding input element in the block.
Unlike the \textsc{tile\_spmm\_u}\xspace, there is only one non-zero element in a weight block due to the 1:4 structured sparsity.
Thus, the two weight elements in an SPU belong to two different weight blocks, which implies that they need two different input blocks for their computations.
In~\autoref{fig:spmm-cycles}, we show a detailed cycle-level visualization of how a VEGETA-S-2-2 executes a \textsc{tile\_spmm\_v}\xspace, which computes a tile SPMM with a 1:4 sparse $\boldsymbol{A}$ (weight), dense $\boldsymbol{B}$ (input), and dense $\boldsymbol{C}$ (output).
\subsection{Optimizations}
\textbf{Pipelining.}
So far, we have shown how one VEGETA tile GEMM/SPMM instruction can be executed on a VEGETA engine.
For modest-sized tiles, filling and draining the systolic array for a tile GEMM/SPMM instruction can significantly lower PE utilization.
A recent work RASA~\cite{rasa_dac21} introduced pipelining the execution of tile GEMM instructions on a dense systolic array-based matrix engine; this overlaps different stages of execution for different instructions.
We extend the pipelining concept and show how different tile SPMM instructions can be executed concurrently.
Similar to the stages defined in the previous work, Weight Load (WL), Feed First (FF), Feed Second (FS), and Drain (DR) stages are used for pipelining both VEGETA-D designs and VEGETA-S designs.
WL is the first stage of the execution on the systolic array where weight elements (which will be stationary) are loaded to the corresponding PEs from north ports, which takes $N_{rows}$ cycles.
Next, during the FF stage, the corresponding input and output elements are fed from the west and north ports, which takes $T_n$ cycles, where $T_n$ is the number of columns in an input tile.
This stage ends when the top-left PE stops receiving input/output elements.
Since the inputs are streamed in a skewed manner, the remaining rows of the systolic array are still receiving the new input elements from the west ports.
This is the FS stage; it ends when no more input elements arrive from the west ports, which takes $N_{rows}-1$ cycles.
Finally, finishing the remaining calculations in the systolic array takes $N_{cols}$ cycles during the DR stage, followed by the cycles required for reduction at the bottom.
Note that an additional stage is needed after the DR stage to wait for the remaining values in the reduction units.
Also, unlike a conventional systolic array where each PE receives one input, weight, and output element at a time, $\beta$ input blocks, $\alpha$ weight elements, and $\alpha$ output elements must be fed to each SPE.
\autoref{fig:spmm-cycles} shows the streaming pattern of the input element blocks.
Since $N_{rows}$ and $N_{cols}$ can be reduced by increasing $\alpha$ and $\beta$, larger $\alpha$ and $\beta$ can reduce the latency of a single instruction.
However, a larger $\alpha$ lowers the achievable frequency while a larger $\beta$ lengthens the reduction latency and enlarges the reduction unit, so we cannot increase them arbitrarily.
Furthermore, to properly overlap different stages, it is crucial to balance the latencies of different stages.
Different VEGETA-S designs can use the same pipelining stages since they follow the same streaming pattern.
\autoref{fig:pipelining} (a) and (b) show the pipelining examples for VEGETA-D-1-2 and VEGETA-S-16-2, assuming no dependency between instructions.
With pipelining, multiple instructions may be executed in the systolic array concurrently, but no two of them may be in the same stage of execution, since that would oversubscribe at least one hardware resource. Since the latency of each stage is known, the scheduler can easily enforce this.
We observe that the next instruction can be executed after 16 cycles for VEGETA-S-16-2, which is the same as VEGETA-D-1-2 since both are limited by the total number of MAC units (maximum compute throughput).
Due to the smaller $N_{rows}$ and $N_{cols}$, the latency of each instruction for VEGETA-S-16-2 is shorter than that of VEGETA-D-1-2.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/pipeline-final.pdf}
\caption{Pipelining on VEGETA-D-1-2/VEGETA-S-16-2.}
\label{fig:pipelining}
\vspace{-1em}
\end{figure}
\textbf{Output forwarding.}
Pipelining allows concurrent execution of independent instructions.
However, it is generally not possible when the instructions have a dependence between them.
Since VEGETA tile GEMM/SPMM instructions perform accumulation, the destination register is a source as well;
thus, for two back-to-back tile multiplication instructions with the same destination register, the second cannot begin execution until the first finishes.
A traditional approach to resolve this kind of pipeline stall is data forwarding.
We extend this ``output forwarding (OF)'' concept to matrix engines.
In~\autoref{fig:pipelining} (c) and (d), we show the pipelining examples of dependent instructions for VEGETA-S-16-2 with and without output forwarding.
Since we feed a $\boldsymbol{C}$ tile into the systolic array over multiple cycles, we only need to ensure that particular $\boldsymbol{C}$ elements will be ready before we need to feed them.
A key insight is that the register reads and writes of a $\boldsymbol{C}$ tile follow exactly the same order in a systolic array.
This is because every output element is calculated $N_{rows}+\log_2{\beta}$ cycles after it is fed into the systolic array.
For example, when executing on VEGETA-S-16-2, a \textsc{tile\_spmm\_u}\xspace starts reading the $\boldsymbol{C}$ tile when the FF stage begins (cycle $N_{rows}+1$).
In the same order, the $\boldsymbol{C}$ tile is written back starting from cycle $2N_{rows}+\log_2{\beta}$, so a dependent instruction can start reading the same $\boldsymbol{C}$ tile.
This can significantly reduce the stalls that would otherwise occur between dependent instructions without OF, but a bypass buffer is required to hold and forward the data before it is written back to the tile register.
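As a concrete instance of this timing (a back-of-the-envelope check rather than a simulated result), for VEGETA-S-16-2 ($N_{rows}=16$, $\beta=2$) the first instruction begins reading its $\boldsymbol{C}$ tile at cycle $17$ and begins writing it back at cycle $2\times16+1=33$; a dependent instruction issued at the earliest slot, 16 cycles later, begins reading the same tile at cycle $16+17=33$, exactly when the forwarded elements become available.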
\begin{comment}
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{figures/packing.pdf}
\caption{HW-aware sparse packing. We use 2:4 structured sparsity for this example.}
\label{fig:packing}
\end{figure}
\end{comment}
\begin{comment}
To execute a large GEMM kernel, the programmer or compiler must break it down into smaller 16$\times$32$\times$16 GEMM kernels. Figure \ref{fig:sp_engine} shows the sparse matrix engine and how the tiles are mapped onto the matrix engine. The sparse matrix engine is composed of a set of 4-lane Dot-Product (DP) units, where each row consists of $16$ DP units and each column has $8$ DP units.
Using the \sparseload instruction, non-zero values of the $16\times32$ A tile are loaded to the tile register for A. Since we use 2:4 structured sparsity, only half of the tile register is being used. The tile register for B will hold the transpose of the B tile and the tile register for C will hold the corresponding C tile.
A $16\times32\times16$ GEMM kernel is further broken down into four micro-kernels, each of which computes $4\times32\times16$ GEMM. The sparse matrix engine computes the given $16\times32\times16$ GEMM kernel by computing each micro-kernel sequentially. To compute the micro-kernel, $4\times16$ sub-tile A, $16\times16$ sub-tile B, and $8\times16$ sub-tile C are fetched from the corresponding registers. Note that we only load the useful values for sub-tile B using the corresponding values from the metadata register for sparse tile A.
Next, we broadcast sub-tile A to the upper half and lower halves of the sparse matrix engine and then unicast each row of sub-tile A to a row of the sparse matrix engine. Sub-tile B is divided into halves horizontally and distributed to the upper half and the lower half of the matrix engine and each row is distributed onto different sets of columns of the sparse matrix engine. Note that four columns are assembled into a set of DP units in each row of the sparse matrix engine. Since we have four sets of DP units in each row of sparse matrix engine and 8 rows of the sub-tile B, this will be done in two cycles.
With this data flow, each set of DP units computes one element of the C matrix. Since there are 32 sets of DP units, we can compute 32 distinct elements of the sub-tile C in each cycle assuming no fetch delays, resulting in 2 cycles for each micro-kernel and 8 cycles for each kernel.
\end{comment}
\subsection{Flexibility in the Block Size, M}
Similar to the case for VEGETA ISA to support a broader range of sparsity,
VEGETA engines can be easily extended for larger $M$.
For example, to support the $M=16$ case (for 1:16, 2:16, 4:16, 8:16, and 16:16),
a 16-to-1 MUX (or five 4-to-1 MUXes) is needed per MAC unit, which would take an input block composed of 16 elements and select the corresponding input element.
$\alpha$ and $\beta$ should be configured considering the balance of pipeline stages, data reuse, critical path, etc.
\subsection{Row-Wise $N$:$M$ Sparsity Support}
\label{subsec:row-wise-support}
VEGETA engines support row-wise $N$:$M$ sparsity, which can be used for leveraging unstructured sparsity using the method described in \autoref{subsec:transform}.
In \autoref{fig:Row-wise-vegeta-mapping}, we show a row-wise sparse matrix using 4:4, 2:4, 1:4 at Row 1, Row 2/3, and Row $H_{A}$-3 to $H_{A}$, respectively.
A row with 4:4 maps to an SPE-1-4 column, and outputs of the four SPU columns reduce to generate one partial sum.
A row with 2:4 maps to an SPE-2-2 column, and outputs of each pair of SPU columns reduce, resulting in two partial sums per SPE column.
A row with 1:4 maps to an SPE-4-1 column, generating 4 partial sums per SPE column.
Using this mapping, we can ensure all columns are fully utilized while allowing different $N$:$M$ sparsity across different rows.
The number of columns of a weight tile, $W_A$, is equal to $M\times N_{rows}$ to keep all columns fully utilized.
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{figures/row_wise_uarch.pdf}
\caption{Mapping of row-wise $N$:$M$ sparse matrix on VEGETA-S.}
\label{fig:Row-wise-vegeta-mapping}
\vspace{-1em}
\end{figure}
Similar to \autoref{fig:input-selection} (b), an input block (4 elements) is fed into a row of SPEs from the left and broadcast to each MAC in an SPE.
In \autoref{fig:Row-wise-vegeta-mapping}, the number of rows of a weight tile can vary based on the $N$:$M$ sparsity combinations in a tile.
When a row-wise $N$:$M$ sparse tile has $N_{4:4}$ rows with 4:4 sparsity, $N_{2:4}$ rows with 2:4 sparsity and $N_{1:4}$ rows with 1:4 sparsity, the number of columns in VEGETA-S and the number of rows of the weight tile, $H_{A}$ can be derived as $N_{cols} = N_{4:4} + \frac{N_{2:4}}{2} + \frac{N_{1:4}}{4}$ and $H_{A} = N_{4:4} + N_{2:4} + N_{1:4}$.
$H_A$ can vary from 8 to 32 depending on the sparsity degree of the tile.
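For instance, a tile with 4 rows of 4:4, 4 rows of 2:4, and 8 rows of 1:4 sparsity maps to $N_{cols} = 4 + \frac{4}{2} + \frac{8}{4} = 8$ SPE columns while covering $H_A = 4 + 4 + 8 = 16$ weight rows, within the 8-to-32 range above.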
Our approach requires consecutive groups of rows of the weight tile to have the same sparsity degree (e.g., groups of 2 for 2:4 and 4 for 1:4); we call this pseudo row-wise $N$:$M$ sparsity.
We can employ a simple reordering in the input/output DMA engines to group input rows with the same sparsity and to reorder the output elements back to their original order. Since this only needs an extra row of adders, the hardware overhead is negligible compared to VEGETA-S with tile-wise flexible $N$:$M$ support.
\subsection{Integration of VEGETA Engines with CPU}
\begin{figure}[!t]
\centering
\includegraphics[width=0.48\textwidth]{figures/vegeta-system-overview.pdf}
\caption{Overview of VEGETA in a CPU. We highlight the parts including our contributions with red.}
\label{fig:vegeta-system-overview}
\vspace{-1em}
\end{figure}
In~\autoref{fig:vegeta-system-overview}, we show how we integrate VEGETA in a CPU.
We also marked the components that we modify or introduce to integrate VEGETA in red.
Our baseline core design maintains separate physical and architectural register files.
Also, we extend this to tile registers and metadata registers for VEGETA; this includes enhancements to the register file itself, allocator, renamer, scheduler, and ROB to add the ability to manage the new registers, track dependencies on them, and support precise exceptions.
We also enhance the scheduler to track static and dynamic information of instructions being executed in VEGETA engines to support pipelining and output forwarding at the right timing without interrupting scalar/vector instructions.
A \textsc{tile\_load\_t}\xspace (or \textsc{tile\_store\_t}\xspace) is converted into 16 memory requests, each loading (or storing) 64 Bytes (a cache line) through the load/store queue, imposing no extra implications on cache/memory coherence or consistency.
\section{Evaluation} \label{sec:eval}
\input{tables/evaluation_old}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\linewidth]{figures/runtime-comparison-same-freq-new.png}
\vspace{-1em}
\caption{
Normalized runtime with different matrix engines in~\autoref{table:vegeta_designs}.
We use reddish colors, black, greenish colors, and blue for the dense matrix engines~\cite{rasa_dac21, intel2020isa}, a design using an STC-like config~\cite{mishra2021accelerating}, the VEGETA-S designs, and VEGETA-S with OF, respectively.
}
\vspace{-1.5em}
\label{fig:dense_sparse_comparison}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=0.96\linewidth]{figures/area-frequency-v-designs.pdf}
\vspace{-1em}
\caption{
Area and power normalized to RASA-SM and frequency for different VEGETA engines. V indicates VEGETA.
}
\label{fig:area_freq}
\vspace{-1em}
\end{figure}
\subsection{VEGETA Implementation}
We modified LLVM~\cite{llvm} for the new VEGETA ISA and implemented VEGETA C++ intrinsics.
Next, we wrote GEMM/SPMM kernels that exploit layer-wise $N$:$M$ sparsity using VEGETA intrinsics.
Since there is no commercial CPU that can execute VEGETA instructions,
we developed a Pintool, an instrumentation tool using Pin APIs~\cite{pin_pldi05}, which registers instrumentation callback routines that emulate each of the VEGETA instructions described in~\autoref{table:amx_instructions}.
Then, we used the Pintool to generate traces of the kernels containing every executed instruction with its dynamic information, and extended MacSim~\cite{hyesoon12macsim}
to handle VEGETA instructions and registers along with the different matrix engine execution models.
Finally, we simulated the GEMM/SPMM kernels using the generated traces on MacSim.
We also developed RTL designs to explore different VEGETA engines with different $\alpha$ and $\beta$.
We model the baseline dense matrix engines RASA-SM, RASA-DM, and an Intel TMUL-like config using VEGETA-D-1-1, VEGETA-D-1-2, and VEGETA-D-16-1, respectively.
We estimate the performance of an engine with the NVIDIA STC-like config using VEGETA-S-1-2 restricted to 2:4 support only.
We synthesized each RTL design using the Synopsys Design Compiler with the Nangate 15nm technology library.
We used the post-layout designs for the area/power/timing numbers of the designs shown in~\autoref{table:vegeta_designs}.
\subsection{Experimental Setup}
Although VEGETA is not limited to a single use case, thanks to its generic SPMM instructions that can be used in various kernels, we use DL inference as a use case to show the performance of VEGETA.
Since DNN compression is done offline (usually once before deployment) for inference~\cite{zhou2021learning, domino2021nips, maohua_micro19, nvidia_ampere}, its cost is amortized over multiple inferences; thus, the reported inference performance does not include it.
For the workload, we choose representative DNN layers from ResNet50~\cite{resnet_cvpr16}, BERT~\cite{devlin-etal-2019-bert}, and GPT-3~\cite{GPT}.
The parameters for the layers are summarized in~\autoref{table:dnn-layers}.
To convert the convolutional layers of ResNet50 into GEMM kernels, we use the dimensions derived by applying the image-to-column (im2col) algorithm.
We run the DNN layers with 1:4/2:4/4:4 structured sparsity on different VEGETA engines listed in~\autoref{table:vegeta_designs} using MacSim.
We set the core frequency to 2 GHz and the fetch/issue/retire width to four, with 16 pipeline
stages, 97 ROB entries, and 96 load buffer entries.
To focus on the performance trade-off of different VEGETA designs, we assume that the data is prefetched to the L2 cache.
\subsection{Performance Analysis}
\autoref{fig:dense_sparse_comparison} shows the runtime with various DNN layers.
For this experiment, we assume that all the matrix engines run at 0.5 GHz.
We chose 0.5 GHz since it meets timing for all the matrix engine designs used in the evaluation, as derived from the corresponding RTL implementations.
We normalized the runtime to the longest runtime (GPT-L3 on RASA-SM).
We first observe that the RASA-SM suffers from the under-utilization of processing elements due to the mismatch of matrix engine pipeline stages (WL/FF/FS/DR), resulting in the highest runtime.
RASA-DM is a state-of-the-art matrix engine for CPUs and achieves good throughput by matching the latencies of its matrix engine pipeline stages.
Compared to RASA-DM, our sparse engine designs perform comparably on the dense workload, showing a performance gain of up to 7\%.
The performance gain mainly comes from the reduced latency of the drain stage due to the narrower SA, and from output forwarding.
Since VEGETA-D engines are not able to leverage sparsity, they cannot skip ineffectual computations and show the same performance with 2:4 and 1:4 structured sparsity, unlike VEGETA-S engines.
Also, the design with the STC-like config can only accelerate 2:4 sparsity while our VEGETA-S designs can accelerate various fine-grained structured sparsities.
\rev{
For layers with 2:4 structured sparsity, the matrix engine using the STC-like config shows a 16\% runtime reduction on average compared with RASA-DM.
Using VEGETA-S-16-2, an additional 18\% runtime reduction is achieved compared to the design with the STC-like config.
Finally, with output forwarding, another 32\% of the runtime is removed.}
For layers with 1:4 structured sparsity, the design with the STC-like config does not show better performance compared with 2:4 structured sparsity since it cannot exploit the extra zeros to skip extra ineffectual computations, unlike our VEGETA-S designs.
We observe that VEGETA-S-1-2 shows a 51\% runtime reduction on average compared with RASA-DM since it can skip all ineffectual computations.
Using our VEGETA-S-16-2 engine, an additional 8\% runtime reduction is achieved.
Finally, with output forwarding, another 37\% of the runtime is removed by resolving output dependencies early.
\subsection{Area and Power Analysis}
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{figures/row-wise-results.pdf}
\vspace{-2em}
\caption{
Average normalized speed-ups for HW support of different sparsity granularities across sparsity degrees, using the workloads in \autoref{table:dnn-layers}.
}
\label{fig:row-wise-eval}
\vspace{-1em}
\end{figure}
In~\autoref{fig:area_freq}, we show the normalized area and frequency for different VEGETA engines.
First, we observe that when we increase the number of PUs in a PE ($\alpha$), the area of the VEGETA engines decreases due to the lower number of horizontal pipeline buffers as described in~\autoref{subsec:pu-pe}.
Since we add only small modules to support sparsity, even the VEGETA-S design with the largest area overhead incurs just 6\% more area than RASA-SM.
Moreover, by increasing $\alpha$, VEGETA-S-8-2 and VEGETA-S-16-2 achieve a lower area than RASA-SM and the state-of-the-art dense matrix engine for CPUs (RASA-DM).
This is because the overhead gets amortized and compensated as more MACs share the data, reducing the total pipeline registers.
Power shows a similar trend. When we vary $\alpha$ for VEGETA-S-$\alpha$-2 as 1, 2, 4, 8, and 16, the power overhead (both static and dynamic) is 17\%, 8\%, 4\%, 3\%, and 1\%, respectively, compared with the baseline.
At the same time, a higher $\alpha$ limits the maximum frequency due to the increased wire length for broadcasting across PUs.
\subsection{Analysis for Unstructured Sparsity Support Using VEGETA} \label{sec:row-wise-evalution}
We convert unstructured sparse matrices into row-wise $N$:4 sparse matrices to accelerate SPMM with VEGETA, as discussed in \autoref{subsec:row-wise-support}.
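As a minimal, hypothetical illustration of this conversion (the names and details below are ours, not the exact procedure used in our toolchain), the per-row $N$ can be chosen as the maximum number of non-zeros found in any 4-element block of that row:
\begin{verbatim}
import numpy as np

def row_wise_n(matrix, m=4):
    # For each row, pick the smallest N such that every m-element block
    # of the row has at most N non-zeros (row-wise N:m sparsity).
    blocks = matrix.reshape(matrix.shape[0], -1, m)
    nnz_per_block = (blocks != 0).sum(axis=2)   # shape: rows x blocks
    return nnz_per_block.max(axis=1)            # N for each row

# Example: an unstructured 4 x 8 sparse matrix.
a = np.array([[0, 1, 0, 0, 2, 0, 0, 0],
              [3, 0, 4, 5, 0, 0, 6, 0],
              [0, 0, 0, 0, 0, 0, 0, 7],
              [1, 1, 1, 1, 0, 2, 0, 0]])
print(row_wise_n(a))   # -> [1 3 1 4]
\end{verbatim}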
Since there is no work on CPU sparse matrix engines, we use a few SOTA sparse accelerators~\cite{sigma,s2ta_hpca22} as baseline matrix engines for comparison.
As shown in \autoref{table:nm-hw}, S2TA~\cite{s2ta_hpca22} naturally supports layer-wise $N$:$M$ sparsity and could potentially be enhanced to support tile-wise $N$:$M$, while VEGETA supports pseudo-row-wise and row-wise $N$:$M$.
SIGMA~\cite{sigma} can leverage unstructured sparsity, but it comes with an area overhead (we normalize performance by area).
For a conservative comparison, we assume that these baselines are also enhanced to fully hide fill and drain overheads through perfect pipelining.
Unlike the layer-wise $N$:4 evaluation, it is not straightforward to implement optimized kernels using VEGETA instructions, so we use an analytical roofline model and leave the development of optimized SW kernels using VEGETA instructions as future work.
We use the same workloads, but induce random and unstructured sparsity of varying degrees and report average speed-up normalized to a dense engine in \autoref{fig:row-wise-eval}.
A smaller sparsity granularity increases the likelihood of finding an $N$:$M$ pattern that covers the non-zeros. For example,
it is unlikely that an entire unstructured sparse layer exhibits a particular $N$:$M$ pattern; thus, layer-wise support shows little performance improvement over dense execution.
In contrast, row-wise support achieves 2.36$\times$ and 3.28$\times$ speed-ups at 90\% and 95\% sparsity degrees, respectively.
SIGMA performs better than the others at extremely high sparsity degrees ($>$95\%), but it is inefficient at the modest sparsity degrees targeted by our work, indicating that its extra area overhead does not pay off in this regime.
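The analytical roofline estimate mentioned above can be sketched as follows. This is a simplified model with placeholder peak-compute and bandwidth numbers, not measured machine parameters:
\begin{verbatim}
def roofline_time(flops, bytes_moved, peak_flops, peak_bw):
    # A layer is limited either by compute or by memory traffic;
    # the roofline estimate takes the larger of the two times.
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Hypothetical GEMM: M = N = K = 1024, fp32, 50% of MACs skipped.
M = N = K = 1024
flops = 2 * M * N * K * 0.5                  # effectual MACs only
bytes_moved = 4 * (M * K + K * N + M * N)    # read A and B, write C once
t = roofline_time(flops, bytes_moved, peak_flops=1e12, peak_bw=100e9)
print(f"estimated time: {t * 1e3:.3f} ms")
\end{verbatim}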
\vspace{0em}
\section{Related Work}
\textbf{CPU support to run GEMMs efficiently.} SAVE~\cite{save_micro20} is a sparsity-aware CPU vector engine that skips redundant computations in sparse GEMM operations to accelerate sparse DNN and high-performance computing workloads. Similarly, SparCE~\cite{sparce_taco19} also increases the utilization of vector engines by tracking the general-purpose registers with zeros.
With extremely high sparsity, a program becomes memory-bound due to its low arithmetic intensity, making
vector compute throughput sufficient; in that regime, vector engines like SAVE/SparCE can match the performance of a matrix engine. Otherwise, we expect them to be significantly slower than VEGETA due to their lower compute throughput.
ZCOMP~\cite{zcomp_micro19} introduces a vector ISA extension to reduce cross-layer communication for DNNs.
Our work is orthogonal and complementary to those works since we are targeting a matrix engine that operates on tiles of the matrix instead of individual vectors.
RASA~\cite{rasa_dac21} proposes control and data optimizations for CPU matrix engines to improve utilization
via efficient pipelining and overlap.
They divide a matrix multiplication with different sub-stages on the systolic array and introduce optimizations with pipelining and overlapping different stages.
RASA strongly inspired the sub-stage pipelining in our work. However, it does not consider SPMM, and its design cannot be used directly as a sparse matrix engine.
\textbf{Handling dynamic sparsity.}
Since dynamic (input) sparsity is hard to predict, it has to be handled at runtime, unlike static sparsity which is usually pre-determined.
We could use compaction of tile registers to build non-zero tiles, similar to the approach that SAVE~\cite{save_micro20} used for merging vector registers to remove zeros. However, this is not practical for a matrix engine due to the high probability of conflicts across different tiles, since a vector register holds 32 operands while a tile register holds 512 (16$\times$32).
There could be an efficient way
to exploit dynamic sparsity on matrix engines in CPUs
without much overhead, but we leave it as future work.
\textbf{Sparse DNN accelerators.} There have been several papers focusing on SPMM/SPGEMM acceleration with standalone accelerators~\cite{scnn, sigma, extensor, srivastava2020matraptor, srivastava2020tensaurus, outerspace, zhang2020sparch}.
Sparse-TPU~\cite{sparsetpu_ics20} and the work from Kung et al.~\cite{columncombining_asplos19} introduce packing algorithms targeting unstructured sparsity for systolic array-like architectures.
Being standalone accelerators with large area-power budgets, these works have explored co-design of compression formats, dataflow, indexing logic, and control structures.
CPU matrix engines, on the other hand, have a tight coupling with the RF and operate in strict area-power budgets, motivating our solution of enhancing a dense systolic array with a pipeline-friendly dataflow.
Zhu et al.~\cite{maohua_micro19} introduce a sparse tensor core by extending NVIDIA Tensor Core using vector-wise pruning and encoding to enforce structured sparsity.
NVIDIA has also recently introduced the Sparse Tensor Core as a sparse matrix engine in their Ampere GPUs to accelerate sparse DNNs with 2:4 structured sparsity~\cite{mishra2021accelerating}.
STA~\cite{sta_cal20} proposes a systolic tensor array for inference on mobile devices with density-bound block sparsity, which is similar to the $N$:$M$ structured sparsity, and S2TA~\cite{s2ta_hpca22} extended STA for flexible $N$:$M$ sparsity.
None of the previous works supports row-wise $N$:$M$ sparsity, which this work shows to be a promising technique for covering unstructured sparsity. Also, prior works do not consider dividing the execution of tile computations into fine-grained stages for pipelining without resource conflicts, which is critical when integrating into CPUs.
\section{Future Work}
In this work, we focused on evaluating the performance of our engine on convolution and fully-connected layers, where most of the execution time of deep neural network models is spent.
Inter-layer optimizations such as layer fusion could be explored to further improve performance.
Other promising directions include mixing and matching $N$:$M$ sparsity ratios across layers, using the matrix engine for other workloads, and studying the interaction with the memory system, including HW prefetching.
\section{Conclusion}
This work adds flexible $N$:$M$ structured sparsity support to CPU matrix engines via the VEGETA ISA and engines supporting various sparsity granularities.
We propose several design choices for the VEGETA engine, studying kernel performance, clock frequency, and area trade-offs.
We also show how the row-wise $N$:$M$ sparsity feature of VEGETA can efficiently accelerate workloads with unstructured sparsity by transforming it into row-wise $N$:$M$ sparsity.
We believe this work opens up opportunities for future work in enhancing the design further for dynamic sparsity and studying the interaction of the sparse matrix engine with the rest of the CPU memory hierarchy.
\section{Acknowledgement}
This work was funded by an award from Intel’s Corporate Research Council.
We thank Raveesh Garg, Po-An Tsai, and Vivek Sarkar for profound discussions on this work.
|
{
"arxiv_id": "2302.08597",
"language": "en",
"timestamp": "2023-02-20T02:02:42",
"url": "https://arxiv.org/abs/2302.08597",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
With increasing access to information technologies, misinformation becomes more easily and widely disseminated through social media \cite{del2016spreading}. Research shows that misinformation spreads more rapidly than true information. Widespread misinformation can potentially disrupt public health, democratic processes, and public discourse \cite{xie2020global,hochschild2015facts}. Consequently, there is a growing public and academic interest in tackling the challenges associated with misinformation. Specifically, fact-checking as a method for curbing misinformation has gained attention in research and practice \cite{graves2019fact,Arnold2020}.
While we have seen a surge of fact-checking organizations in the past few years, addressing the gigantic scale of misinformation online may be intractable with expert fact-checkers alone. Consequently, we have seen interest in automating the fact-checking process, either towards building end-to-end automation, or as decision support systems for fact-checkers \cite{Nakov2021AutomatedFF,Guo2021ASO,graves2018understanding}.
Automated fact-checking research is often evaluated using automated metrics. However, the primary goal of automated fact-checking tools is to help fact-checkers with their decision making. Recent research has started looking into several human factors associated with fact-checking, such as usability, trust, and intelligibility \cite{DAS2023103219,nguyen2018believe,Shi2022effect,jiang2020factoring}. Additionally, some research has also analyzed the factors affecting human judgment during interaction with misinformation, including reading behaviors and beliefs in the misinformation \cite{ecker2022psychological, roozenbeek2020susceptibility}. However, users' cognitive load while using fact-checking systems has not been studied in detail.
Previous research on cognition and discerning misinformation typically displayed one news headline per trial. However, in realistic information search scenarios, people often encounter several pieces of information in one search. Moreover, the headlines shown in previous experiments were mostly related to political topics. Today, misinformation threatens not only democracy but also public health, as the COVID-19 pandemic has made clear. As stated in \cite{zarocostas2020fight}, ``we're not just fighting an epidemic; we're fighting an infodemic''. Therefore, we designed a lab-based experiment in which users checked COVID-19-related health claims in a fact-checking system and were shown multiple news headlines related to each claim simultaneously on one screen. During the experimental session, an eye-tracker recorded their pupillary response.
The aim of the study was to examine how the cognitive load is impacted when users read the news headlines in a fact-checking context, and how it is related to their belief change and misinformation judgment.
The contributions of this study include:
(1) examining the effectiveness of pupil dilation measurements as an indicator of cognitive load in reading news headlines;
(2) comparing the cognitive load imposed by reading news headlines under varied conditions of claim correctness, headline-belief stance, and evidence correctness;
and
(3) developing the understanding of the cognitive processes in discerning misinformation by investigating the cognitive load in a more realistic scenario.
\section{Background}
\label{sec:bg}
\subsection{Human Evaluation of Automated Fact-Checking}
Several studies have evaluated automated fact-checking from a human factor perspective. Such factors include understanding, usability, intelligibility, and trust in those systems \cite{nguyen2018believe,Shi2022effect,mohseni2021machine,das2019cobweb}. \citet{nguyen2018believe} studied the effect of intelligibility in automated fact-checking on user trust and task accuracy. \citet{mohseni2021machine} examined the effect of intelligibility in calibrating user trust on automated fact-checking systems. Complementary to that, \citet{das2019cobweb} investigated the role of confirmation bias in using automated fact-checking systems. The studies mentioned above focused on measures related to the fact-checking task and not on the user behavior while interacting with the system.
In contrast, \citet{Shi2022effect} examined user interaction with the fact-checking interface. They studied the effect of interactivity on several factors associated with user interaction, e.g., dwell time, attention, and mental resources, with the help of eye tracking. Our work extends such research and employs a similar eye-tracking methodology to investigate users' cognitive load while interacting with automated fact-checking systems.
\subsection{Cognitive load and discerning misinformation}
\label{sec:cog_misinfo}
\citeN{kahneman2011thinking}'s dual-process theory states that human cognition can be conceptualized as two systems, System 1 and System 2.
System 1 ``operates automatically and quickly, with little or no effort and no sense of voluntary control'', i.e., these are autonomous and intuitive processes.
System 2 ``allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration'', i.e., these are deliberative and analytic processes.
Previous research found that people who engage in analytical thinking perform better on rational thinking tests \cite{stanovich2011complexity}. Similarly, the heuristic-systematic model explains that System 1 uses heuristics while System 2 uses analysis, which makes people relying on System 1 more susceptible to decision-making biases \cite{chen1999motivated}. People who engage System 2 put conscious effort into thinking and think analytically, and thus are more likely to identify misinformation. To investigate this, \cite{pennycook2019lazy} used the Cognitive Reflection Test (CRT) as a measure of the willingness to activate System 2 and found that CRT scores are positively correlated with the ability to discern fake news. Another study investigated the impact of deliberation on correcting intuitive mistakes \cite{bago2020fake}. Researchers found that when people have more time to reconsider false claims, they are less likely to trust them \cite{moravec2020appealing}.
Based on the dual-process theory \cite{kahneman2011thinking}, cognitive load can be used as an indication of System 2 activation, and thereby to study the performance and cognitive processes engaged in identifying fake news articles.
\citeN{Mirhoseini_Early_Hassanein_2022} found that higher cognitive load was imposed when users performed better at discerning misinformation.
Users with higher cognitive load utilize more System 2 resources, deliberately and rationally examine the correctness of information, and ultimately discern misinformation. Additionally, pupillary response has been shown to be a reliable physiological measure of cognitive load \cite{hossain2014understanding}, since pupil dilation is associated with the amount of load on memory \cite{kahneman1966pupil}.
\subsection{Cognitive load and pupil dilation}
\label{sec:cog_pupil}
Cognitive load refers to the amount of working memory resources required to perform a cognitive task \cite{paas2016cognitive}.
Typically there are three types of cognitive load measurements: task performance, subjective, and physiological \cite{o1986workload,gwizdka2021overloading}.
Task performance measures capture how well the user is performing a given task, such as task completion time, and the number of errors.
Subjective measures use self-rating scales of cognitive load, such as NASA-TLX questionnaire \cite{hart2006nasa}. These measures are simple to collect but cannot reflect rapid and dynamic cognitive load changes \cite{palinko2010estimating}.
Physiological measures include heart-rate variability (HRV), galvanic skin response (GSR), Electroencephalography (EEG), and eye-tracking measurements \cite{urrestilla2020measuring, antonenko2010using, shi2007galvanic}. Multiple eye-tracking measures, such as blink frequency and pupil dilation, have been shown to correlate with cognitive load levels \cite{siegle2008blink}.
In the past decades, researchers have found that the behavior of the pupil is a direct reflection of neurological and mental activity \cite{hess1964pupil}. \cite{kahneman1966pupil} showed that changes in pupil diameter are related to task difficulty, and that pupil dilation is associated with the amount of load on memory. Therefore, the pupillary response can be utilized as a reliable physiological measurement of cognitive load \cite{hossain2014understanding} in misinformation studies.
Various metrics are used to process pupil diameter data, and therefore to estimate mental workload.
A common approach is to measure pupil dilation relative to a baseline.
The baseline could be the average pupil diameter measured during a baseline trial \cite{kruger2013measuring}, during a baseline measurement made at the beginning of each trial \cite{krejtz2018eye}, or during the whole experimental session of each participant \cite{gwizdka2014characterizing}.
The pupil-size difference calculated with respect to this baseline is the so-called Relative Pupil Dilation (RPD) \cite{gwizdka2017temporal,wang2021pupillary}.
An alternative class of metrics is based on the moment-to-moment change in pupil diameter.
This method estimates the frequency of pupil oscillation and fluctuation of pupil dilation while separating the effects of illumination.
It was first proposed by Marshall with the measurement called the Index of Cognitive Activity (ICA) \cite{marshall2002index}.
Since the ICA is closed source, \citeN{duchowski2018index} offered a similar, open-source, and fully detailed measurement called the Index of Pupillary Activity (IPA).
The researchers further proposed the Low/High Index of Pupillary Activity (LHIPA), which takes the tonic component (LF) into account in addition to the phasic pupil response (HF) and was suggested to be a more reliable indicator of cognitive load \cite{duchowski2020low}. In this paper, we attempted to use the LHIPA and RPD as indicators of cognitive load.
In previous research, eye-tracking was employed to investigate the effect of misinformation on cognitive activities.
It was found that people fixated more frequently, had longer fixation duration, and increased pupil diameter when reading fake news compared to real news \cite{sumer2021fakenewsperception, hansen2020factuality, ladeira2022visual}. This is because reading false news imposed higher cognitive load on account of the reduced heuristic availability \cite{ladeira2022visual}.
Furthermore, researchers measured pupil dilation in investigating the performance of judging the accuracy of the headlines and demonstrated that pupils dilate more when people perform better on the misinformation judgment task \cite{Mirhoseini_Early_Hassanein_2022}. This study showed that higher cognitive load was associated with identifying misinformation.
Therefore, in our research, we measured pupil dilation as an indication of cognitive load in information processing. We extended previous work to more realistic search scenarios, in which users encounter several relevant news headlines related to a single claim, identify misinformation, and determine the correctness of the claim. We aimed to explore how cognitive load is impacted in the fact-checking context (i.e., by the evidence correctness and users' prior beliefs), and whether it is related to users' belief change. We hypothesized that:
\textbf{H1:} Reading news headlines that provide incorrect evidence imposes higher cognitive load.
\textbf{H2:} Reading news headlines that are inconsistent with their prior beliefs imposes higher cognitive load.
\textbf{H3:} Changing one's beliefs, and especially correcting beliefs, imposes higher cognitive load.
\begin{figure*}[h]
\centering
\includegraphics[width=\linewidth]{figs/interface.pdf}
\caption{Interface of the mock fact-checking system used in the study.
Red frames represent the areas of interest (AOIs) around the news headlines, that are used in the analysis.}
\label{fig:interface}
\end{figure*}
\section{Methods}
\label{sec:methods}
\subsection{Experimental Design}
\label{sec:exp_design}
A controlled, within-subjects eye-tracking study was conducted
in a usability lab at a university, with
$N=40$ participants (22 females).
Participants interacted with a mock fact-checking system containing claims and news-article headlines in English language (Figure ~\ref{fig:interface}).
Participants were pre-screened for native-level English familiarity, 20/20 vision (uncorrected or corrected), and non-expert topic familiarity of the content being shown in the fact checking system.
Upon completion of the study, each participant was compensated with USD 25.
\subsection{Apparatus}
\label{sec:exp_appatratus}
A Tobii TX-300 eye-tracker was used to record participants' eye movements and pupil dilation.
Commercial usability and eye-tracking software was used to conduct the study, record raw gaze data, and perform partial data-cleaning and filtration for downstream analyses.
Data analysis was performed in Python and R languages.
\subsection{Mock Fact Checking System}
\label{sec:tasks_and_data_set}
Participants interacted with a mock fact checking system (Figure \ref{fig:interface}), and examined 24 COVID-19 related claims in the system.
Each claim was shown at the top of the interface.
Surrogates of five related news articles were presented below the claim, each with its corresponding news source, source reputation, news headline, and the article’s stance towards the claim.
Based on the article’s stance and news source reputation, the system provided a prediction of the claim’s correctness at the bottom.
The news headlines were clickable, and upon clicking, opened the news article in a new browser tab.
Each claim examination consisted of viewing the claim, the headlines of the news articles, and, optionally, clicking the news articles to read them in detail.
To mitigate the effect of background luminance on pupil dilation, the color and luminance of the fact-checking-system interface were kept constant during the experimental session.
The claims and corresponding news-articles were on the topic of the COVID-19 pandemic.
They were handpicked by the researchers to simulate a COVID-19 fact-checking system for usability analysis.
Each claim was selected so as to have a pre-assigned ground-truth correctness value of TRUE, FALSE, or UNSURE (for claims that are partially true, or not totally proven at the time of data collection).
The TRUE and UNSURE claims were handpicked from reputed websites in the medical domain, such as World Health Organization, WebMD, Mayoclinic, Johns Hopkins University, US State Government webpages, and others.
The FALSE claims were sourced by searching for ``coronavirus myths'' on popular search engines.
The relevant news articles for each claim were collected by manually searching the web.
The source reputations for news articles were collected from existing datasets \cite{gruppi_nela-gt-2019_2020,norregaard2019nela}, while the stance values of each news article towards each claim were labelled by the researchers.
Two example claims are ``wearing masks is not required during exercising'', and ``asymptomatic people can transmit COVID-19''.
In total there were 24 claims (8 TRUE, 8 FALSE, 8 UNSURE).
The order of presenting the claims during each study session was randomized.
\subsection{Procedure}
\label{sec:procedure}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/diagram.pdf}
\caption{Flowchart of the experimental procedure.}
\Description{}
\label{fig:experiment_flowchart}
\end{figure}
The overall procedure of the experimental session is illustrated in Figure ~\ref{fig:experiment_flowchart}.
Each session started with a training task for participants to become familiar with the interface of the fact-checking system and with the procedure.
Then the participants started the 24 trials.
Each trial consisted of three parts:
\textit{(i)} Pre-Claim Questions
\textit{(ii)} Examining the claim in the mock fact-checking interface, and
\textit{(iii)} Post-Claim Questions.
\textbf{Pre-Claim Questions} asked the following:
\begin{itemize}
\item Pre-perceived Correctness: \textit{Do you think the claim is: False / Probably False / Neutral / Probably True / True}
\end{itemize}
For \textbf{examining the claim}, participants interacted with the interface freely without a time limit.
Participants were also instructed to click on news headlines to open the underlying news articles in a new browser tab, and read it, if they considered it necessary for evaluating the claim.
\textbf{Post-Claim Questions} asked the following:
\begin{itemize}
\item Post-perceived Correctness: \textit{After seeing the output of the fact-checking system, do you think the claim is: False / Probably False / Neutral / Probably True / True}
\end{itemize}
\subsection{Measures}
\label{sec:measures}
Our aim was to study cognitive load involved in reading news headlines.
Previous research \cite{Shi2022effect} found that most fixations on this type of fact-checking interface fell into the headline AOIs.
This supports the plausibility of studying pupil dilation only on the news headline AOIs.
So we marked each news headline area (Figure ~\ref{fig:interface}) as an \textit{area of interest} (AOI) for eye-tracking analysis.
Thus there were five AOIs in the fact-checking interface (i.e., from the first news headline to the fifth headline).
Javascript function \texttt{Element.getBoundingClientRect()}\footnote{\url{https://developer.mozilla.org/en-US/docs/Web/API/Element/getBoundingClientRect}} was used to get the coordinates for the AOIs. These coordinates were appropriately adjusted to match the coordinates recorded by the eye-tracker.
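The sketch below shows how gaze samples can be assigned to these headline AOIs. It is illustrative only; the actual pipeline relied on commercial eye-tracking software, and the rectangle coordinates below are hypothetical:
\begin{verbatim}
def assign_aoi(x, y, aois):
    # Return the name of the first AOI rectangle containing the gaze
    # point, or None if the sample falls outside all headline AOIs.
    for name, (left, top, width, height) in aois.items():
        if left <= x <= left + width and top <= y <= top + height:
            return name
    return None

# Hypothetical screen coordinates of the five headline AOIs (pixels).
aois = {f"headline_{i + 1}": (300, 250 + i * 90, 1000, 80) for i in range(5)}
print(assign_aoi(640, 350, aois))   # -> "headline_2"
\end{verbatim}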
\subsubsection{Claim Correctness}
Each claim was selected so as to have a pre-assigned ground-truth correctness value of TRUE, FALSE, or UNSURE (denoted in UPPERCASE). This is defined as the claim correctness.
In this research, we wanted to understand the ``definitive'' behavior on TRUE and FALSE claims first, before trying to tease apart the more complex behavior that may be associated with UNSURE claims. Therefore,
the analyses in this study only include trials in which users examined TRUE or FALSE claims.
\subsubsection{Headline Stance}
For each claim, we collected relevant news articles, which could be supporting or not-supporting the claim.
Researchers labeled the news Headline Stance based on whether the news article supported the claim or denied the claim, on a 5-item scale:
-1 (strong deny),
-0.5 (partially deny),
0 (neither support nor deny),
0.5 (partially support),
1 (strong support).
\subsubsection{Pre- and Post-Perceived Correctness}
Participants' perceived correctness regarding each claim was collected before (Pre-) and after (Post-) they viewed each claim in the fact-checking interface (Section \ref{sec:procedure}).
Responses to these Pre- and Post-perceived Correctness were on a five-item scale ranging from false to true (denoted in lowercase).
\subsubsection{Evidence correctness}
Evidence correctness denotes the relationship between the headline stance and the claim correctness.
If the news supports a TRUE claim or denies a FALSE claim, it is categorized as correct evidence.
In contrast, if the news denies a TRUE claim or supports a FALSE claim, it is categorized as incorrect evidence.
In this paper, we consider only those news articles that fully supported or fully denied a claim.
\begin{itemize}
\item \textbf{correct evidence}: headline stance is -1 (strong deny) and claim correctness is FALSE, or, headline stance is 1 (strong support) and claim correctness is TRUE.
\item \textbf{incorrect evidence}: headline stance is 1 (strong support) and claim correctness is FALSE, or, headline stance is -1 (strong deny) and claim correctness is TRUE.
\end{itemize}
\subsubsection{Headline-Belief-Consistency}
\begin{itemize}
\item \textbf{headline-belief-consistent}: headline stance is -1 (strong deny) and Pre-Perceived Correctness is false or probably-false; or, headline stance is 1 (strong support) and Pre-Perceived Correctness is true or probably-true.
\item \textbf{headline-belief-inconsistent}: headline stance is 1 (strong support) and Pre-Perceived Correctness is false or probably false; or, headline stance is -1 (strong deny) and Pre-Perceived Correctness is true or probably true.
\end{itemize}
\subsubsection{Belief Change}
We measured participants' beliefs before and after they checked each claim in the fact-checking system. We grouped their belief change into five categories based on their Pre- and Post-Perceived Correctness, and the Claim Correctness:
\begin{itemize}
\item \textbf{stay-right}: claim correctness is TRUE and pre-trial and post-trial perceived correctness are both true or probably true; or, claim correctness is FALSE and pre-trial and post-trial perceived correctness are both false or probably false.
\item \textbf{to-right}: claim correctness is TRUE and post-trial perceived correctness is more towards true than pre-trial; or, claim correctness is FALSE and post-trial perceived correctness is more towards false than pre-trial.
\item \textbf{stay-neutral}: pre-trial and post-trial perceived correctness are both neutral.
\item \textbf{to-wrong}: claim correctness is TRUE and post-trial perceived correctness is more towards false than pre-trial; or, claim correctness is FALSE and post-trial perceived correctness is more towards true than pre-trial.
\item \textbf{stay-wrong}: claim correctness is TRUE and pre-trial and post-trial perceived correctness are both false or probably false; or, claim correctness is FALSE and pre-trial and post-trial perceived correctness are both true or probably true.
\end{itemize}
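The categorical variables defined above reduce to simple mappings. The sketch below is our own illustrative encoding, not the authors' analysis code; where the verbal definitions are ambiguous, it makes one defensible choice:
\begin{verbatim}
def evidence_correctness(claim, stance):
    # claim: "TRUE" or "FALSE"; stance: 1 (strong support) or -1 (strong deny)
    agrees = (stance == 1) == (claim == "TRUE")
    return "correct" if agrees else "incorrect"

def belief_change(claim, pre, post):
    # pre/post: perceived correctness on a 1-5 scale (1 = false ... 5 = true)
    right = (lambda r: r >= 4) if claim == "TRUE" else (lambda r: r <= 2)
    wrong = (lambda r: r <= 2) if claim == "TRUE" else (lambda r: r >= 4)
    if right(pre) and right(post):
        return "stay-right"
    if pre == 3 and post == 3:
        return "stay-neutral"
    if wrong(pre) and wrong(post):
        return "stay-wrong"
    towards_truth = (post > pre) if claim == "TRUE" else (post < pre)
    return "to-right" if towards_truth else "to-wrong"

print(evidence_correctness("FALSE", -1))        # -> "correct"
print(belief_change("TRUE", pre=2, post=4))     # -> "to-right"
\end{verbatim}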
\subsubsection{Cognitive Load: The Low/High Index of Pupillary Activity (\textit{LHIPA})}
We attempted to apply the LHIPA metric to process the pupil dilation data. Pupil dilation during blinks was replaced by 0 according to the eye-tracker's blink detection. After this pre-processing step, we computed the LHIPA on the raw pupil diameter signal for each headline AOI visit. We observed that the LHIPA value changed significantly when the visit duration was around 1.67s and 6.67s. Because of the high variability in AOI visit durations, the LHIPA metric was not an appropriate indicator of cognitive load in our setting and could not be used to compare pupil dilation across different news headlines.
\subsubsection{Cognitive Load: Relative Pupil Dilation (\textit{RPD})}
We calculated pupil dilation based on the raw, high-resolution pupil data recorded at 300Hz. To eliminate individual variability in pupil sizes, we calculated a relative change in pupil diameter from a baseline for each participant. We first excluded the low-quality data (ET-Validity = 4) and the blink data (Blink detected (binary)=1) based on the blink detection algorithm implemented in our eye-tracking software.
Then we took an average pupil size over all the experimental trials as the pupil diameter baseline (\(P^i_{baseline}\)) and calculated the relative change in pupil diameter (\(RPD^i_t\)) from each pupil measurement (Eq. ~\ref{eqn:rpd}) \cite{gwizdka2017temporal,wang2021pupillary}. We removed data records with diameters that exceeded
\textpm 3 SDs of the participants' total session average.
\begin{equation}
\label{eqn:rpd}
RPD^i_t = \frac{P_t-P^i_{baseline}}{P^i_{baseline}}
\end{equation}
To calculate the \textit{RPD} for each AOI, we first downsampled the \textit{RPD} to 50Hz using a median filter to minimize the influence of outliers. Then we excluded the \textit{RPD} within 0.5 seconds after the interface visit started, to reduce the influence of luminance variability across the web pages. We assumed that a visit with only two fixations on a headline did not represent reading; accordingly, we kept only the AOI visits with more than two fixations and calculated the median \textit{RPD} over all visits to a single AOI in each trial.
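A simplified sketch of this RPD pipeline is given below. It is illustrative pseudocode under our own assumptions; the column names, validity flags, and window sizes are placeholders rather than the exact values used in the commercial software:
\begin{verbatim}
import numpy as np
import pandas as pd

def relative_pupil_dilation(samples: pd.DataFrame) -> pd.Series:
    # samples: one participant's session, columns 'pupil' (mm),
    # 'valid' (bool), 'blink' (bool), recorded at 300 Hz.
    clean = samples[samples["valid"] & ~samples["blink"]].copy()
    baseline = clean["pupil"].mean()              # session-level baseline
    keep = (clean["pupil"] - baseline).abs() <= 3 * clean["pupil"].std()
    rpd = (clean.loc[keep, "pupil"] - baseline) / baseline   # Eq. (1)
    # Downsample 300 Hz -> 50 Hz with a median filter (6-sample windows).
    return rpd.groupby(np.arange(len(rpd)) // 6).median()
\end{verbatim}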
\section{Results}
\label{sec:results}
\subsection{Testing assumptions}
All assumptions were checked according to the type of statistical test used in this paper. A normality test was conducted before the t-test and ANOVA analyses, and Bartlett's test was conducted to check for sphericity.
The results of these tests indicated that no assumptions were violated.
\subsection{AOI position}
\label{sec:AOI_position}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/aoi_position.jpg}
\caption{Relative pupil dilation (\textit{RPD}) in the areas of interest (AOIs). From the left to the right it represents the AOIs of the news headlines that are from the first row to the fifth row of the fact-checking interface.}
\label{fig:aoi_position}
\end{figure}
Figure ~\ref{fig:aoi_position} shows that relative pupil dilation (\textit{RPD}) was largest when participants were reading the news headlines in the first row. Then the \textit{RPD} decreased as they read the headlines in the following rows. A one-way ANOVA showed that the effect of headline position (i.e. the rank of headline in the interface) was significant, $F(4,3004)=321.9, p<.05$. A post hoc Tukey's HSD test showed that all groups differed significantly at $p<.05$.
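This comparison can be reproduced with standard statistical libraries. The sketch below assumes a hypothetical long-format table with one median RPD per AOI visit; the file name and column names are ours:
\begin{verbatim}
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("rpd_per_aoi_visit.csv")   # columns: 'rpd', 'position' (1-5)

# One-way ANOVA of RPD across the five headline positions.
groups = [g["rpd"].values for _, g in df.groupby("position")]
F, p = f_oneway(*groups)
print(f"F = {F:.1f}, p = {p:.4f}")

# Post hoc pairwise comparisons (Tukey's HSD).
tukey = pairwise_tukeyhsd(endog=df["rpd"], groups=df["position"], alpha=0.05)
print(tukey.summary())
\end{verbatim}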
\subsection{Claim correctness and headline stance}
\label{sec:correct_stance}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/claim_headline_consist.pdf}
\caption{(a) Distribution of \textit{RPD} of the news headline AOIs as a function of headline stance (-1: headline denies the claim; 1: headline supports the claim) and claim correctness (TRUE or FALSE claim). (b) Distribution of \textit{RPD} of the news headline AOIs as a function of the evidence correctness (correct or incorrect) and claim correctness (TRUE or FALSE claim).}
\label{fig:claim_hdl_consist}
\end{figure}
Figure ~\ref{fig:claim_hdl_consist}(a) indicates that \textit{RPD} was larger on the news headline AOIs that denied the claims. \textit{RPD} was generally larger when checking TRUE claims compared to FALSE claims. A two-way ANOVA was conducted to examine the effects of headline stance and claim correctness on \textit{RPD}. Both claim correctness, $F(1,34)=15.54, p<.05$, and headline stance, $F(1,34)=31.98, p<.05$, had significant main effects on \textit{RPD}. However, the interaction effect was not significant.
A post hoc Tukey's HSD test showed that all groups differed significantly at p<.05.
The \textit{RPD} was larger when participants were reading the news headlines that denied the claim, in both TRUE and FALSE claim group.
The \textit{RPD} was larger when participants were checking TRUE claims, no matter if they were reading news headlines denying or supporting a claim.
Figure ~\ref{fig:claim_hdl_consist}(b) illustrates that \textit{RPD} was larger when participants were checking the incorrect evidence for TRUE claims, and when checking the correct evidence for FALSE claims.
\textit{RPD} was tested by a two-way ANOVA with two levels of claim correctness (TRUE, FALSE) and two levels of evidence correctness (correct, incorrect). The main effect of the claim correctness was significant, F(1,34)=15.54, p<.05. The main effect of the evidence correctness was not significant. However, the interaction of claim correctness and evidence correctness was significant, F(1,34)=31.98, p<.05.
A post hoc Tukey's HSD test showed that all the groups differed significantly at p<.05.
When participants were checking TRUE claims, the \textit{RPD} was larger in the incorrect evidence group than in the correct evidence group, whereas when checking FALSE claims, the \textit{RPD} was lower in the incorrect evidence group than in the correct evidence group.
When participants were reading correct evidence, the \textit{RPD} was larger in the FALSE claim group compared to the TRUE claim group, while when participants were reading incorrect evidence, the \textit{RPD} was larger in the TRUE claim group compared to the FALSE claim group.
\subsection{Prior belief and headline stance}
\label{sec:belief_stance}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/belief_hdl_consist.pdf}
\caption{(a) Distribution of \textit{RPD} on the news headline AOIs as a function of the perceived correctness before examining the claim in the system (1 to 5: False to True) and the headline stance (-1: headline denies the claim; 1: headline supports the claim). (b) Distribution of \textit{RPD} of the news headline AOIs as a function of the headline-belief-consistency (consistent or inconsistent) and claim correctness (TRUE or FALSE claim).}
\label{fig:belief_hdl_consist}
\end{figure}
Figure ~\ref{fig:belief_hdl_consist}(a) shows that the largest difference in \textit{RPD} between the news headlines that supported and those that denied the claim was when participants' prior belief was true or false. That difference was smaller when their prior belief was neutral. Therefore we looked further into the relationship between headline stance and the prior belief in two directions (i.e., the perceived correctness of the claim was either true or false). Figure ~\ref{fig:belief_hdl_consist}(b) shows that the \textit{RPD} was higher when headline-belief was inconsistent in TRUE claim groups and when headline-belief was consistent in FALSE claim groups.
A two-way ANOVA was conducted to examine the effects of the headline-belief consistency and the claim correctness on \textit{RPD}. The claim correctness had a significant main effect, F(1,35)=8.42, p<.05, while the headline-belief consistency had no significant effect. The interaction effects of claim correctness and headline-belief consistency were significant, F(1,35)=23.31, p<.05.
A post hoc Tukey's HSD test showed that all the groups differed significantly at p<.05.
When checking TRUE claims, the \textit{RPD} was larger in the headline-belief inconsistent group compared to the headline-belief consistent group, while when checking FALSE claims, the \textit{RPD} was lower in the headline-belief inconsistent group compared to the headline-belief consistent group.
When headline-belief was consistent, the \textit{RPD} was larger in the FALSE claim group compared to the TRUE claim group, while when headline-belief was inconsistent, the \textit{RPD} was larger in the TRUE claim group compared to the FALSE claim group.
\subsection{Belief change}
\label{sec:belief_change}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/beliefchange_consist.pdf}
\caption{(a) Distribution of \textit{RPD} of the news headline AOIs as a function of the evidence correctness (correct or incorrect) and the belief change in both claims (stay/to-right in TRUE/FALSE claims). (b) Distribution of \textit{RPD} of the news headline AOIs as a function of belief-headline-consistency (consistent or inconsistent) and the belief change in both claims (stay/to-right in TRUE/FALSE claims).}
\label{fig:change_trend}
\end{figure}
A one-way ANOVA $(F(4,2175)=0.61)$ indicated that the \textit{RPD} was not significantly different between belief change conditions. Therefore, the change of the belief did not significantly influence the \textit{RPD}. In our lab experiment, participants maintained their correct beliefs (stay-right) in 44.22\% of the trials, and corrected their beliefs (to-right) in 46.56\% of the trials. Only in 9.22\% of the trials, participants stayed neutral, or remained incorrect (stay-wrong), or changed their beliefs to incorrect (to-wrong).
Furthermore, we checked the impact of the evidence correctness and the headline-belief-consistency on the \textit{RPD} of the participants whose beliefs were corrected (to-right) or remained correct (stay-right).
Figure ~\ref{fig:change_trend}(a) shows that \textit{RPD} differed between reading the headlines that are correct evidence and incorrect evidence within each belief change and claim correctness combination group (i.e., to-right in TRUE claims, to-right in FALSE claims, stay-right in TRUE claims, stay-right in FALSE claims).
\textit{RPD} when reading correct evidence differed little between the belief change and claim correctness combination groups, whereas \textit{RPD} when reading incorrect evidence differed more between these groups.
A paired-sample t-test was conducted to compare \textit{RPD} between the correct evidence and incorrect evidence conditions within each belief change and claim correctness combination group. There were significant differences in \textit{RPD} between reading correct and incorrect evidence in all the belief change groups: stay-right for FALSE claims, $t(28)=4.44, p<.05$, to-right for FALSE claims, $t(8)=3.97, p<.05$, to-right for TRUE claims, $t(13)=-2.36, p<.05$, and stay-right for TRUE claims, $t(29)=-5.25, p<.05$.
\begin{table}[!thbp]
\centering
\caption{
Mann-Whitney U tests to determine whether there were significant differences between maintaining correct beliefs (stay-right) vs. correcting beliefs (to-right) (Section \ref{sec:belief_change}),
\textbf{(a)} for different combinations of Claim-Correctness and Evidence Correctness, and
\textbf{(b)} for different combinations of Claim-Correctness and Headline-belief Consistency.
(*$p<.05$, **$p<.01$, ***$p<.001$)
}
\label{mannwhitneyu}
\includegraphics[clip, trim=0 0 0 0, width=\linewidth]{figs/mannwhitneyu.pdf}
\end{table}
Figure ~\ref{fig:change_trend}(b) shows the differences between the headline-belief consistent and inconsistent groups.
\textit{RPD} in the headline-belief consistent group was larger than in the inconsistent group when participants maintained their correct beliefs (stay-right) for FALSE claims.
\textit{RPD} in the headline-belief inconsistent group was larger than in the consistent group when participants changed to the correct belief (to-right) for both TRUE and FALSE claims, and when participants maintained their correct beliefs (stay-right) for TRUE claims.
A paired-sample t-test was conducted to compare \textit{RPD} between the headline-belief consistent and inconsistent groups within each belief change and claim correctness combination group. There were significant differences in \textit{RPD} in two of the belief change groups: stay-right for FALSE claims, $t(28)=4.44, p<.05$, and to-right for TRUE claims, $t(13)=-5.25, p<.05$.
A Mann-Whitney U test was conducted to determine whether \textit{RPD} differed between trials in which participants' beliefs stayed right and trials in which they moved to the right, within each evidence correctness or headline-belief-consistency group and each claim group. Table ~\ref{mannwhitneyu} shows the results. The Mann-Whitney U tests and paired-sample t-tests indicated that \textit{RPD} was significantly larger when the participants corrected their beliefs (to-right), especially in the headline-belief consistent group for TRUE claims and in the headline-belief inconsistent group for FALSE claims.
\section{Discussion}
\label{sec:discussion}
In this study, we investigated how the cognitive load is impacted in the fact-checking context and if it is related to users' belief change. We conducted a within-subject, lab-based, quasi-experiment, in which we manipulated the evidence correctness (correct evidence, incorrect evidence), the headline-belief-consistency (consistent, inconsistent), and measured participants' belief change (stay-right, stay-neutral, stay-wrong, to-right, to-wrong). We evaluated the cognitive load when participants read the news headlines by measuring pupil dilation on the headline AOIs.
We found that \textit{RPD} is highest when users read the news headlines in the top row, and that \textit{RPD} decreases for lower positioned headline rows. This suggests that cognitive load is higher when people are processing the top headlines. This could be because of position bias \cite{azzopardi2021cognitive}, where highly ranked results tend to attract more attention, reading, and clicks. Since the news headlines with different stances were randomly assigned to the headline row positions, position bias does not influence our hypothesis testing of other factors potentially impacting cognitive load.
\textit{Evidence correctness.}
We found that higher cognitive load is imposed when reading news headlines that are denying the claim. That is, higher cognitive load is required when users read incorrect evidence for TRUE claims and correct evidence for FALSE claims. This finding supports H1 when users are fact-checking a TRUE claim but provides no support for H1 when users are fact-checking a FALSE claim.
Additionally, when users are checking TRUE claims, higher cognitive load is imposed regardless of the evidence correctness.
\textit{Headline-belief-consistency.}
When checking the relationship between headline stance and prior beliefs, the results indicate that cognitive load differs more between supportive and unsupportive headlines when users perceived the claim as true or false, rather than neutral. This observation could be explained by users wanting to check all the information: when they have no prior leaning about the claim's correctness, a similar amount of cognitive load is imposed by reading both supportive and unsupportive news.
Furthermore, we looked into the association between cognitive load and the news headlines when the users held non-neutral prior beliefs. We found evidence for the effect of headline-belief consistency on cognitive load. Reading headline-belief inconsistent news imposed higher cognitive load when checking TRUE claims, while reading headline-belief consistent news imposed higher cognitive load when checking FALSE claims. This finding supports H2 for TRUE claims but denies H2 for FALSE claims.
\textit{Belief change.}
We did not find a relationship between cognitive load and belief change, which denies H3. This result is not aligned with previous findings on discerning misinformation performance \cite{Mirhoseini_Early_Hassanein_2022}. It is possible that our findings are due to the participants using the fact-checking system in the experiment and being more aware of the tasks to discern misinformation, hence they invest similar mental effort to process all news headlines shown in the interface and to judge the claim correctness.
Additionally, the belief change results suggest that participants generally performed well on the fact-checking tasks. Across all experimental trials, users kept their correct beliefs (stay-right) or corrected them (to-right) in more than 90\% of trials. This indicates that the fact-checking system helped users to discern misinformation.
In the analysis within the stay-right and to-right belief change groups, the results cross-validated the findings on evidence correctness and headline-belief-consistency: H1 and H2 are supported for TRUE claims and rejected for FALSE claims. Moreover, cognitive load was higher when users corrected their beliefs (to-right) than when they maintained their beliefs (stay-right), in the cases where users were reading headline-belief-consistent news for TRUE claims and headline-belief-inconsistent news for FALSE claims. This implies that checking news headlines while correcting one's beliefs (to-right) imposed a higher cognitive load than checking news headlines while maintaining correct beliefs (stay-right).
In summary, when users were reading news headlines for TRUE claims, our proposed hypotheses H1 and H2 were supported, while when users were reading news headlines for FALSE claims, H1 and H2 were not supported. There was not enough evidence in our study to support H3. We found that incorrect evidence and headline-belief inconsistency may not always impose higher cognitive load. Instead, the cognitive load imposed by reading headlines appeared to be associated with the claim correctness. When checking TRUE claims, higher cognitive load was imposed when users read incorrect evidence or headline-belief-inconsistent news. When checking FALSE claims, higher cognitive load was imposed when users read correct evidence or headline-belief-consistent news. These findings plausibly indicate that people tended to engage more with the news they believed in when they were checking FALSE claims, while they engaged more with the news that countered their beliefs when they were checking TRUE claims. We also found that cognitive load did not significantly differ between belief change conditions, which suggests that the fact-checking tasks imposed a similar level of cognitive load regardless of how people's beliefs changed. However, the results indicate that a higher cognitive load was imposed when users corrected their beliefs while reading headline-belief-consistent news for TRUE claims or headline-belief-inconsistent news for FALSE claims.
This research develops an understanding of cognitive load in discerning misinformation in realistic scenarios. Based on previous research on the association between cognitive load and reading and identifying misinformation \cite{Mirhoseini_Early_Hassanein_2022}, we studied how people process misinformation when they encounter several headlines (or news search results) at the same time while examining a single claim. Our findings suggest that cognitive load is imposed differently when checking true claims versus false claims.
Previous research suggests that information systems should encourage people to engage more cognitive effort (System 2), which could help them identify misinformation more effectively \cite{rieger2021item}. Meanwhile, we need to prevent cognitive overload, which could drive users back to heuristic processing \cite{whelan2020applying}.
Our study implies that there are nuances in cognitive load when people are processing information with different claim correctness, evidence correctness, and headline-belief-consistency.
As suggested in \cite{littrell2022not}, different kinds of misinformation could invoke different information behaviors. In practical system design, we should not simply increase or decrease cognitive load, but instead seek to calibrate it with respect to the information context. We need to adopt a more nuanced approach to nudging people to discern misinformation, such as providing personalized labels or explanations that remind people to pay attention to misinformation under the appropriate conditions.
Our study has some limitations. We only observed the effect of evidence correctness, headline-belief-consistency, and belief change on the cognitive load when reading the news headlines. Cognitive load could also be impacted by users' familiarity with, and knowledge of, the claim topic. Even though we excluded participants with expert topic familiarity based on self-reported information, it is possible that people are not aware of their expertise in the topic. Since higher familiarity could impose a lower cognitive load \cite{jen2017examining}, this limitation could affect the internal validity of our research.
Additionally, the claims and news headlines were pre-selected to conduct the controlled within-subject experiment. Future work should include using sets of claims on different topics and investigating cognitive load in the context of naturally generated fact-checking tasks. Another limitation of this study is that we only measured pupil dilation when they were looking at the news headline AOIs. It would therefore be interesting to measure pupil dilation when they read the full news articles and compare the cognitive load variations between distinct news conditions. Lastly,
the eye-tracking sequences in the experiment are relatively short, which made the LHIPA technique inapplicable for processing and analyzing the pupil dilation data. Future work could improve the experimental design to allow other pupillary response measurements (e.g., LHIPA) to reflect cognitive load with higher accuracy \cite{duchowski2020low}, or even other physiological measures, such as electroencephalography (EEG) \cite{antonenko2010using}.
\section{Conclusion}
\label{sec:conclusion}
We presented results from a within-subject, lab-based, quasi-experiment with eye-tracking in which we examined how cognitive load is impacted by reading news headlines in a fact-checking context (i.e., by the evidence correctness and users' prior beliefs), and how it is related to people's belief change and their misinformation judgment.
We found that incorrect evidence and headline-belief inconsistency imposed higher cognitive load when people were checking true claims, while correct evidence and headline-belief consistency imposed higher cognitive load when people were checking false claims. Additionally, cognitive load was not significantly different when people's beliefs changed.
By developing the understanding of the cognition in discerning misinformation in a realistic scenario, the findings contribute to designing future information systems that support curbing of misinformation spread via appropriate technical and cognitive interventions.
\begin{acks}
This research was completed under UT Austin IRB study 2017070049 and supported in part by Wipro, the Micron Foundation, the Knight Foundation, and by Good Systems\footnote{http://goodsystems.utexas.edu/} , a UT Austin Grand Challenge to develop responsible AI technologies. The statements made herein are solely the opinions of the authors and do not reflect the views of the sponsoring agencies.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.08688",
"language": "en",
"timestamp": "2023-02-20T02:06:42",
"url": "https://arxiv.org/abs/2302.08688",
"yymm": "2302"
} |
\section{Introduction}
The coronavirus pandemic that began in $2019$ (the COVID-19 pandemic) has impacted the whole world, causing a global health crisis~\cite{majumder2021recent}. The COVID-19 disease is caused by a coronavirus, the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). People worldwide have already been affected by COVID-19, making it harder to detect the virus and to design defenses against it. Many studies indicate that SARS-CoV-2 was likely transmitted to humans from bats~\cite{zhou-2020-pneumonia}.
Along with the spread of the coronavirus, a huge amount of COVID-related data has been collected, specifically about its genome, for example in the GISAID~\cite{gisaid_website_url} database. This database comprises millions of publicly available viral genome sequences. It is well known that the spike region of SARS-CoV-2 is the part of the virus exposed to, and the entry point into, the human host cell~\cite{kuzmin2020machine} (see Figure~\ref{fig_spikeprot}). Therefore, most of the mutations related to this virus occur in the spike region. It is established in the literature that the spike sequence is a vital feature of this type of virus~\cite{li2016structure,walls2020structure}.
However, it is unclear whether full-length genome data is needed for coronavirus sequence classification or whether using the spike sequence alone can perform the same task effectively.
\begin{figure}[h!]
\centering
\includegraphics[scale = 0.4] {Figures/Spike_protein_new1.png}
\caption{The SARS-CoV-2 genome is roughly 29--30 kb in length,
encoding structural and non-structural proteins. Open reading
frame (ORF) 1ab encodes the non-structural proteins, and the four
structural proteins: S (spike), E (envelope), M (membrane), and N
(nucleocapsid) are encoded by their respective genes. The spike
region is composed of 3821 base pairs, hence coding for $1274$
amino acids.}
\label{fig_spikeprot}
\end{figure}
Recently, researchers have focused on classifying and clustering sequences
based on hosts~\cite{kuzmin2020machine,ali2022pwm2vec} and
variants~\cite{ali2021k,tayebi2021robust,ali2021spike2vec} using spike sequences only. To make the sequences usable by machine learning methods, it is important to convert them into numerical vectors. Current methods achieve this using one-hot encoding (OHE)~\cite{kuzmin2020machine} and $k$-mers~\cite{ali2021k,ali2021effective} in most cases.
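As a minimal illustration of the $k$-mers based representation (a generic sketch, not the exact embedding evaluated later in this paper), a spike sequence can be converted into a fixed-length vector of $k$-mer counts:
\begin{verbatim}
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY*"   # 20 amino acids plus a stop symbol

def kmer_vector(sequence, k=3):
    # Count every length-k substring and store the counts in a
    # fixed-length vector indexed by all possible k-mers.
    index = {"".join(p): i
             for i, p in enumerate(product(AMINO_ACIDS, repeat=k))}
    vec = [0] * len(index)
    for i in range(len(sequence) - k + 1):
        kmer = sequence[i:i + k]
        if kmer in index:               # skip k-mers with unknown characters
            vec[index[kmer]] += 1
    return vec

v = kmer_vector("MFVFLVLLPLVSSQCVNLT", k=3)
print(len(v), sum(v))                   # 9261 possible 3-mers, 17 counted
\end{verbatim}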
Our contributions in this project are as follows:
\begin{enumerate}
\item We propose a small prototype of federated learning models using the spike sequence dataset, which outperforms the baseline feature embedding method in terms of predictive accuracy.
\item We show that the federated learning based approach is scalable, can be applied to different types of sequences, and can be used in a distributed fashion with low complexity.
\item We show that using only a fraction of the information (spike sequences rather than full-length genome sequences), we can achieve better predictive performance from the underlying machine learning classifiers.
\end{enumerate}
The rest of the paper is organized as follows:
Section~\ref{sec_related_work} contains the related work for the given research problem. Our proposed federated learning model is explained in detail in Section~\ref{sec_proposed_approach}.
Section~\ref{sec_experimental_setup}
shows the dataset detail and experimental setup information.
Our results are given in Section~\ref{sec_results_discussion}. Finally, we conclude our paper in Section~\ref{sec_conclusion}.
\section{Related Work}\label{sec_related_work}
Several methods based on machine learning have been proposed previously that use $k$-mers for classification and clustering of biological sequences~\cite{wood-2014-kraken,ali2021k,ali2021effective,solis-2018-hiv}.
Although these methods are proven to be useful in their respective studies, it is not clear whether they can be extended to larger datasets without compromising the proposed models' predictive performance. Similarly, there have been some recent theoretical and
practical developments on
minimizers~\cite{universalHitting,marcais-2018-asymptotically}. A position weight matrix (PWM) based approach is proposed in~\cite{ali2022pwm2vec}, which generates a fixed-length representation of spike sequences based on weights of $k$-mers computed using PWM. However, their method only works with aligned sequence data.
In~\cite{zhang2021dynamic}, the authors used federated learning for COVID-19 detection using X-ray images.
Although alignment-free methods are popular in the biology field, especially in metagenomics, they are not studied as frequently in the data mining and big data domains as in biology and bioinformatics. The authors in~\cite{girotto2016metaprob} propose the use of minimizers for metagenomic data. Since metagenomic data contains short reads rather than whole sequences (such as the spike sequence), the minimizer based approach can easily be applied in that case by computing a single minimizer ($m$-mer) for each short read. Since, in our case, we have the entire (spike or nucleotide) sequence, their approach is not directly applicable in our scenario.
Feature embedding based approaches have also been proposed in other domains, such as graph analytics~\cite{ali2021predicting,AHMAD2020Combinatorial}, smart grid~\cite{ali2019short,ali2019short_AMI}, electromyography (EMG)~\cite{ullah2020effect}, clinical data analysis~\cite{ali2021efficient}, network security~\cite{ali2020detecting}, and text classification~\cite{Shakeel2020LanguageIndependent}. Several feature engineering and kernel based methods have been proposed recently for the classification of spike sequences.
Authors in~\cite{ali2021k,ali2021classifying} use $k$-mers along with a kernel based approach to classify SARS-CoV-2 spike sequences. Authors in~\cite{kuzmin2020machine} propose the use of one-hot encoding to classify the viral hosts of coronaviridae family.
\section{Proposed Approach}\label{sec_proposed_approach}
In this section, we give a detailed description of the proposed federated learning based approach for the classification of sequences.
\subsection{Architecture}
The architecture consists of two types of components: 1) client models (local) and 2) a Federated Learning model (global). The approach is based on a decentralized data setting, which involves dividing the dataset into smaller parts and processing each part separately. The client models use three parts of the dataset to train the models locally. These trained local models are pushed to a central (global) neural network (NN) based model. Only the weights, biases, and other parameters are provided to the NN, where they are pruned to get the final predictions. The NN model receives all the locally trained models and averages them out, effectively creating a new global model (the Federated Learning model). The Federated Learning model coordinates the federated learning process and uses a fourth part of the dataset to train the global model. Each step is explained in more detail below:
\subsection*{Step 1: Generating Feature Vector:}
A fixed-length numerical feature vector representation, called One Hot Encoding (OHE), is proposed in~\cite{ali2021k,kuzmin2020machine}. It generates a $0$-$1$ vector based on each character's position in the alphabet $\Sigma$, where $\Sigma$ is the set of unique characters "\textit{ACDEFGHIKLMNPQRSTVWXY}" appearing in the sequences. The $0$-$1$ vectors for all characters are concatenated to make a single vector for a given sequence.
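As a rough illustration only (not the authors' released code), the OHE step can be sketched in Python as follows; padding shorter sequences with trailing zeros up to the maximum sequence length is an assumption based on the padding described later in the paper.
\begin{verbatim}
# Minimal sketch of the one-hot encoding step (illustrative only; the exact
# padding and character ordering used by the authors may differ).
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWXY"              # 21 amino-acid symbols
CHAR_TO_IDX = {c: i for i, c in enumerate(ALPHABET)}

def one_hot_encode(sequence, max_len):
    """Return a flattened |Sigma| * max_len binary vector for one spike sequence."""
    vec = np.zeros((max_len, len(ALPHABET)), dtype=np.int8)
    for pos, ch in enumerate(sequence[:max_len]):
        if ch in CHAR_TO_IDX:                   # unknown symbols stay all-zero
            vec[pos, CHAR_TO_IDX[ch]] = 1
    return vec.flatten()                        # shorter sequences are zero-padded

phi = one_hot_encode("MFVFLVLLPLVSS", max_len=1277)
print(phi.shape)                                # (1277 * 21,) = (26817,)
\end{verbatim}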
\subsection*{Step 2: Federated Learning Approach}
After generating the numerical feature vectors $\phi$ for SARS-CoV-2 spike sequences, we use these feature vectors as input for our federated learning based model. We divide the dataset into training ($\phi_{tr}$) and testing dataset ($\phi_{ts}$). The training dataset $\phi_{tr}$ is further divided into four equal parts ($\phi_{tr1}, \phi_{tr2}, \phi_{tr3}, \phi_{tr4}$).
Our final Federated Learning based model is comprised of local models and a global model, which work together to classify the spike sequences.
\subsubsection*{Local models}
We initialize $3$ individual classification (local) models (using classifiers such as XGB, Logistic Regression (LR), and Random Forest (RF)) and train them using three parts of the data ($\phi_{tr1}, \phi_{tr2}, \phi_{tr3}$). After training the ``local models'', we get the outputs $\lambda_{tr1}$, $\lambda_{tr2}$, and $\lambda_{tr3}$ from the respective classification models for each training dataset.
These trained local models are then used to create a new aggregated (global) model.
\subsubsection*{Global model}
Our global model consists of a neural network architecture, which takes the outputs from the local models, $\lambda_{tr1}$, $\lambda_{tr2}$, and $\lambda_{tr3}$, along with $\phi_{tr4}$ as input.
It is important to point out that only the weights, biases, and other parameters are pruned into the new global model (from the local models). In the global model, none of the data from the three parts of the dataset ($\phi_{tr1}, \phi_{tr2}, \phi_{tr3}$) is used, which is the core concept of federated learning. Using the fourth part of the training data ($\phi_{tr4}$) along with the aggregated (local) models' outputs, we train the NN in the (global) model. We get our final trained ensemble model after this step.
The global NN model takes the $27$-dimensional output of the local models (probabilities for the $9$ class labels from the $3$ local models) as input, passes it through two hidden layers with $25$ and $15$ neurons (ReLU activation), and produces the class prediction through a softmax output layer with $9$ neurons; it is trained with the Adam optimizer.
\subsubsection*{Testing the ensemble model}
Finally, using the ensemble global model, we predict on the test dataset $\phi_{ts}$ to produce the final predictions and evaluate our proposed model using different evaluation metrics.
\subsection*{Workflow for proposed approach}
Figure~\ref{fig_federated_learning_flowchart} shows the complete workflow for our proposed approach. The left box shows the feature vector ($\phi$) generation process, where we use One Hot Encoding to generate numerical representations (feature vectors) from the spike sequences. Each amino acid in the spike sequence, as shown in Figure~\ref{fig_federated_learning_flowchart} (a), is encoded into a numerical representation by placing a $1$ at the position of the character. For example, for amino acid ``A'' we place a $1$ at the first position in the respective numerical representation, as shown in (b). Afterward, we divide the feature vector dataset into training $\phi_{tr}$ and testing $\phi_{ts}$ sets. Box 2 on the right side of Figure~\ref{fig_federated_learning_flowchart} shows our federated learning-based approach. We divide the training dataset into $4$ equal parts ($\phi_{tr1}, \phi_{tr2}, \phi_{tr3} \text{ and } \phi_{tr4}$) and use $3$ of these for training the ``local models'' as shown in Figure~\ref{fig_federated_learning_flowchart} (f-h). After training, these models are aggregated and ensembled to create a new global model, Figure~\ref{fig_federated_learning_flowchart} (j), which is then trained on the fourth part ($\phi_{tr4}$) of the training dataset. In the end, we use the testing dataset ($\phi_{ts}$) to predict and evaluate the model.
\begin{figure}[h!]
\centering
\includegraphics[scale = 0.19] {Figures/Federated_Learning_Architecture.png}
\caption{Flowchart of Federated Learning approach.}
\label{fig_federated_learning_flowchart}
\end{figure}
\begin{figure}[h!]
\centering
\centering
\includegraphics{Figures/Tikz_Figures/training_validation_accuracy.tikz}
\caption{Learning curve of different classification models with 5-fold cross validation (average score of 5 runs) and increasing training set size (x-axis). Figure is best seen in color.}
\label{}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{.50\textwidth}
\centering
\includegraphics[scale = 0.9] {Figures/Tikz_Figures/Loss.pdf}
\caption{Loss}
\end{subfigure}%
\begin{subfigure}{.50\textwidth}
\centering
\includegraphics[scale = 0.9] {Figures/Tikz_Figures/Accuracy.pdf}
\caption{Accuracy}
\end{subfigure}%
\caption{Final ensemble model Loss and Accuracy.}
\label{fig_model_loss_accuracy}
\end{figure}
\section{Experimental Setup}\label{sec_experimental_setup}
In this section, we give details about the spike sequence dataset used for experimentation. We also discuss the baseline model and the machine learning algorithms used for classification with the baselines. In the end, we talk about the evaluation metrics used to test the performance of the models.
All experiments are conducted using an Intel(R) Core i5 system @ $2.10$GHz having Windows 10 $64$ bit OS with 32 GB memory. Implementation of the model is done in Python and the code is available online for reproducibility\footnote{available in the published version}. Our pre-processed data is also available online\footnote{available in the published version}. For the classification algorithms, we use $70\%$ of the data for training and $30\%$ for testing.
\subsection{Dataset Statistics}
A well-known database named GISAID\footnote{\url{https://www.gisaid.org/}} is used to extract the spike sequences. The extracted data contains $9$ coronavirus variants within $9000$ total sequences ($1000$ sequences for each variant) that are selected randomly. Detailed statistics of the dataset can be seen in Table~\ref{tbl_variant_information}. The variant information is used as class labels for the purpose of classification.
\begin{remark}
Note that the spike sequences in our data are not of the same length. The average, minimum, and maximum lengths of the sequences are 1263.16, 9, and 1277, respectively. We use data padding in one-hot encoding to get a fixed-length representation.
\end{remark}
\begin{table}[ht!]
\centering
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{@{\extracolsep{4pt}}p{1.5cm}lp{1.1cm}p{1.5cm} p{1.6cm}}
\toprule
\multirow{2}{*}{Lineage} & \multirow{2}{*}{Region of First Time Detection} & \multirow{2}{1.1cm}{Variant Name} &
\multirow{2}{1.8cm}{No. Mut. S/Gen.} & No. of sequences \\
\midrule \midrule
B.1.351 & South Africa~\cite{galloway2021emergence} & Beta & 9/21 & \hskip.1in 1000 \\
B.1.427 & California~\cite{zhang2021emergence} & Epsilon & 3/5 & \hskip.1in 1000 \\
B.1.429 & California & Epsilon & 3/5 & \hskip.1in 1000 \\
B.1.525 & UK and Nigeria & Eta & 8/16 & \hskip.1in 1000 \\
B.1.526 & New York~\cite{west2021detection} & Iota & 6/16 & \hskip.1in 1000 \\
B.1.617.2 & India~\cite{yadav2021neutralization} & Delta & 8/17 & \hskip.1in 1000 \\
B.1.621 & Colombia~\cite{who_website} & Mu & 9/21 & \hskip.1in 1000 \\
C.37 & Peru~\cite{who_website} & Lambda & 8/21 & \hskip.1in 1000 \\
P.1 & Brazil~\cite{naveca2021phylogenetic} & Gamma & 10/21 & \hskip.1in 1000 \\
\midrule
Total & - & - & - & \hskip.1in 9000 \\
\bottomrule
\end{tabular}
}
\caption{Statistics for $9$ variations from the SARS-CoV-2 dataset. The coronavirus lineages are displayed in the "Lineage" column and their variant names in the "Variant Name" column. A variant's genesis region is indicated in the second column. The "S/Gen." column compares the number of mutations found in the Spike (S) region to those found throughout the whole genome.
}
\label{tbl_variant_information}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.6]{Figures/seq.png}
\caption{An illustration of the sequences for the Beta and Gamma coronavirus strains, along with the corresponding alterations (marked red). \textcolor{red}{SARWAN: maybe change the sequences. Maybe a table could work too rather than figure here.}
}
\label{fig_dummy}
\end{figure}
\subsection{Baseline Model}
As a baseline comparison, we use the following methods from the literature.
\subsubsection{Spike2Vec}
Spike2Vec is a recently proposed method for spike sequence classification~\cite{ali2021spike2vec}.
Given a sequence, Spike2Vec computes $N$ $k$-mers, where $N = L - k + 1$ ($L$ is the length of the spike sequence and $k=3$ as given in~\cite{ali2021spike2vec}).
After generating the $k$-mers for a spike sequence, the count of each $k$-mer is used to build the frequency vector. To deal with the curse of dimensionality (because of the large size of the feature vectors), Spike2Vec uses an approximate kernel method called random Fourier features
(RFF)~\cite{rahimi2007random}, which maps the input data to a randomized low-dimensional feature space (a Euclidean inner product space).
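A minimal Python sketch of this embedding is given below, assuming $k=3$ and using scikit-learn's \texttt{RBFSampler} as a stand-in for the RFF mapping; the number of RFF components is an assumption, not a value reported by the authors.
\begin{verbatim}
# Sketch of a Spike2Vec-style embedding: k-mer counts followed by random
# Fourier features (approximated with scikit-learn's RBFSampler).
from itertools import product
import numpy as np
from sklearn.kernel_approximation import RBFSampler

ALPHABET = "ACDEFGHIKLMNPQRSTVWXY"
K = 3
KMER_INDEX = {"".join(p): i for i, p in enumerate(product(ALPHABET, repeat=K))}

def kmer_frequency_vector(sequence, k=K):
    """Count all overlapping k-mers of a sequence (N = L - k + 1 of them)."""
    vec = np.zeros(len(KMER_INDEX), dtype=np.float32)
    for i in range(len(sequence) - k + 1):
        kmer = sequence[i:i + k]
        if kmer in KMER_INDEX:
            vec[KMER_INDEX[kmer]] += 1
    return vec

sequences = ["MFVFLVLLPLVSSQCVNL", "MFVFLVLLPLVSSQCVNF"]
X = np.stack([kmer_frequency_vector(s) for s in sequences])    # shape (n, 21^3)
rff = RBFSampler(n_components=500, random_state=0)             # assumed dimensionality
X_low = rff.fit_transform(X)                                   # randomized low-dim map
print(X_low.shape)                                             # (2, 500)
\end{verbatim}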
\subsubsection{Wasserstein Distance Guided Representation Learning (WDGRL)~\cite{shen2018wasserstein}}
WDGRL is a method for unsupervised domain adaptation. The Wasserstein distance (WD), used to help neural networks extract features from input data, is calculated between the source and target encoded distributions. By minimizing the estimated WD and optimizing the feature extractor network, it learns the final representation. It accepts as input the vector of a sequence, typically one-hot encoded (OHE). OHE~\cite{kuzmin2020machine} is an algorithm for producing a fixed-length numerical representation of sequences. Since WDGRL uses a neural network as its foundation, gathering training data can be costly.
\subsubsection{PWM2Vec~\cite{ali2022pwm2vec}}
Another method for numerically representing biological sequences is PWM2Vec. It produces the feature embedding after receiving the sequence as input. It also adheres to the fundamental principles of $k$-mers, but instead of utilizing constant frequency values, it gives each amino acid in the $k$-mers a weight. An amino acid's weight is calculated using its position in a $k$-mer position weight matrix (PWM). PWM2Vec considers the relative relevance of amino acids while preserving the ordering information.
First, $k$-mers extraction is carried out for a spike sequence. Then, using the $k$-mers, a position frequency matrix (PFM) is produced by counting the occurrences of each alphabet relative to its associated places. Additionally, by obtaining column-wise probabilities of PFM using the following equation, a position probability matrix (PPM) is created:
\begin{equation}
PPM = \frac{\text{count of character in column of matrix}}{\text{sum of column values}}.
\end{equation}
A Laplace value of $0.1$ is applied to each PPM element to prevent zero elements. The PWM is then created by calculating the log-likelihood of each character $c \in \Sigma$ at a location $i$ using the following formula:
\begin{equation}
PWM_{c, i} =\log_{2} \frac{p(c, i)}{p(c)}
\end{equation}
where $n(c)$ is the number of codons (as mentioned in~\cite{ali2022pwm2vec}) for each amino acid and $p(c) = \frac{n(c)}{61}$. Out of a total of $64$ codons, where $64-61=3$ are stop/termination codons, $61$ is the number of sense codons that code for an amino acid.
The appropriate spike sequence is transformed into its equivalent numerical representation using the PWM (PWM2Vec). In our studies, we chose the embedding $k=9$ using the conventional validation set technique~\cite{validationSetApproach}.
\begin{remark}
It should be noted that PWM2Vec is not an alignment-free approach. Data padding is one way to make it work for variable-length sequences, but since the dataset we are utilizing is already aligned, padding is not necessary in this case.
\end{remark}
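A minimal sketch of the PWM2Vec computation, following the description above, is shown below; the codon counts follow the standard genetic code, and the treatment of the ambiguous symbol ``X'' (one nominal codon) is an assumption made only for illustration.
\begin{verbatim}
# Sketch of PWM2Vec for a single spike sequence: PFM -> PPM -> PWM, then one
# weight per k-mer (sum of the PWM entries of its characters).
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWXY"
CODON_COUNTS = {"A": 4, "C": 2, "D": 2, "E": 2, "F": 2, "G": 4, "H": 2,
                "I": 3, "K": 2, "L": 6, "M": 1, "N": 2, "P": 4, "Q": 2,
                "R": 6, "S": 6, "T": 4, "V": 4, "W": 1, "X": 1, "Y": 2}

def pwm2vec(sequence, k=9, laplace=0.1):
    kmers = [sequence[i:i + k] for i in range(len(sequence) - k + 1)]
    pfm = np.zeros((len(ALPHABET), k))           # position frequency matrix
    for kmer in kmers:
        for pos, ch in enumerate(kmer):
            pfm[ALPHABET.index(ch), pos] += 1
    ppm = pfm / pfm.sum(axis=0, keepdims=True) + laplace   # position probability matrix
    background = np.array([CODON_COUNTS[c] / 61 for c in ALPHABET])
    pwm = np.log2(ppm / background[:, None])     # position weight matrix
    return np.array([sum(pwm[ALPHABET.index(ch), pos] for pos, ch in enumerate(kmer))
                     for kmer in kmers])

embedding = pwm2vec("MFVFLVLLPLVSSQCVNLTTRTQLPPA")
print(embedding.shape)                           # one weight per k-mer
\end{verbatim}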
\subsubsection{String Kernel~\cite{farhan2017efficient}}
This method generates a gram matrix consisting of approximate pairwise distances between sequences, computed from the number of matched and mismatched $k$-mers between each pair of sequences. Kernel PCA is then applied to the gram matrix, and the top principal components are used as input for the underlying classifiers.
\subsection{Machine Learning Classifiers For Baselines}
For the classification task, we use Support Vector Machine (SVM), Naive Bayes (NB), Multi Layer Perceptron (MLP), K Nearest Neighbors (KNN), Random Forest (RF), Logistic Regression (LR), and Decision Tree (DT).
\subsection{Evaluation Metrics}
To evaluate the goodness of classification algorithms, we use
average accuracy, precision, recall, weighted $F_1$, macro $F_1$, and ROC-AUC (one-vs-rest) metrics. We also report the training runtime for the classifiers.
\subsection{Data Visualization}
The t-distributed stochastic neighbor embedding (t-SNE)~\cite{van2008visualizing} is utilized to identify any hidden patterns in the data. This method works by mapping the high dimensional input data into 2D space but preserves the pairwise distance between data points in high dimensions. This visualization aims to highlight if different embedding methods introduce any changes to the overall distribution of data.
For the various (baseline) embedding methods, Figure~\ref{fig_all_tsne} illustrates the t-SNE-based visualization (with SARS-CoV-2 variants as labels, as shown in the legends). We can observe that in most cases, the variants are nicely grouped together.
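For completeness, a minimal sketch of this visualization step with scikit-learn is shown below; the placeholder input matrix and the default t-SNE hyperparameters are assumptions for illustration only.
\begin{verbatim}
# Sketch of the t-SNE visualization of an embedding matrix.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

X = np.random.rand(900, 500)            # placeholder embedding (e.g., Spike2Vec vectors)
labels = np.random.randint(0, 9, 900)   # placeholder variant labels

X_2d = TSNE(n_components=2, random_state=0).fit_transform(X)
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=labels, s=2, cmap="tab10")
plt.savefig("tsne_plot.png", dpi=300)
\end{verbatim}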
\begin{figure*}[ht!]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.22]{Figures/tsne/Spike2Vec_tnse_plot.png}
\caption{Spike2Vec}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.22]{Figures/tsne/PWM2Vec_tnse_plot.png}
\caption{PWM2Vec}
\end{subfigure}%
\\
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.22]{Figures/tsne/kernel_Approx_tnse_plot.png}
\caption{String Kernel}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.22]{Figures/tsne/WDGRL_tnse_plot.png}
\caption{WDGRL}
\end{subfigure}
\caption{t-SNE plots for $9000$ spike sequences using different embedding methods. The figure is best seen in color.}
\label{fig_all_tsne}
\end{figure*}
\section{Results and Discussion}\label{sec_results_discussion}
In this section, we report the classification results using different evaluation metrics. Note that we report the average scores of $5$ runs to avoid any randomness in the data.
Table~\ref{tble_classification_results_variants} shows the classification results using different methods. We can observe that for all evaluation metrics except the training runtime, the federated learning based model with LR outperforms all the baselines.
\begin{table}[h!]
\centering
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{cp{2cm}p{1.1cm}p{1.1cm}p{1.1cm}p{1.1cm}p{1.1cm}p{1.1cm}|p{1.2cm}}
\toprule
Method & Algo. & Acc. & Prec. & Recall & F1 (Weig.) & F1 (Macro) & ROC AUC & Train Time \\
\midrule \midrule
\multirow{7}{2cm}{Spike2Vec~\cite{ali2021spike2vec}}
& SVM & 0.925 & 0.926 & 0.925 & 0.924 & 0.924 & 0.958 & 242.499 \\
& NB & 0.919 & 0.925 & 0.919 & 0.918 & 0.918 & 0.955 & 6.452 \\
& MLP & 0.890 & 0.894 & 0.890 & 0.889 & 0.889 & 0.938 & 156.453 \\
& KNN & 0.866 & 0.871 & 0.866 & 0.867 & 0.866 & 0.925 & 16.039 \\
& RF & 0.926 & 0.927 & 0.926 & 0.925 & 0.925 & 0.958 & 11.032 \\
& LR & 0.927 & 0.929 & 0.927 & 0.927 & 0.927 & 0.959 & 23.966 \\
& DT & 0.922 & 0.924 & 0.922 & 0.922 & 0.922 & 0.956 & 4.414 \\
\cmidrule{2-9}
\multirow{7}{2cm}{PWM2Vec~\cite{ali2022pwm2vec}}
& SVM & 0.888 & 0.891 & 0.888 & 0.887 & 0.885 & 0.936 & 13.718 \\
& NB & 0.423 & 0.449 & 0.423 & 0.352 & 0.351 & 0.675 & 0.496 \\
& MLP & 0.866 & 0.869 & 0.866 & 0.864 & 0.862 & 0.923 & 12.656 \\
& KNN & 0.841 & 0.843 & 0.841 & 0.841 & 0.839 & 0.910 & 1.442 \\
& RF & 0.899 & 0.900 & 0.899 & 0.899 & 0.897 & 0.942 & 6.608 \\
& LR & 0.898 & 0.898 & 0.898 & 0.896 & 0.894 & 0.941 & 152.62 \\
& DT & 0.882 & 0.883 & 0.882 & 0.882 & 0.880 & 0.933 & 3.406 \\
\cmidrule{2-9}
\multirow{7}{2cm}{String Kernel~\cite{farhan2017efficient}}
& SVM & 0.926 & 0.931 & 0.926 & 0.924 & 0.924 & 0.959 & 24.46 \\
& NB & 0.600 & 0.705 & 0.600 & 0.611 & 0.611 & 0.775 & 0.218 \\
& MLP & 0.853 & 0.855 & 0.853 & 0.852 & 0.853 & 0.917 & 6.948 \\
& KNN & 0.866 & 0.872 & 0.866 & 0.868 & 0.868 & 0.925 & 0.827 \\
& RF & 0.918 & 0.919 & 0.918 & 0.917 & 0.917 & 0.954 & 5.120 \\
& LR & 0.927 & 0.930 & 0.927 & 0.926 & 0.926 & 0.959 & 9.258 \\
& DT & 0.897 & 0.899 & 0.897 & 0.897 & 0.897 & 0.942 & 1.426 \\
\cmidrule{2-9}
\multirow{7}{2cm}{WDGRL~\cite{shen2018wasserstein}}
& SVM & 0.902 & 0.905 & 0.902 & 0.901 & 0.902 & 0.946 & 0.403 \\
& NB & 0.825 & 0.789 & 0.825 & 0.792 & 0.795 & 0.904 & 0.016 \\
& MLP & 0.908 & 0.910 & 0.908 & 0.907 & 0.908 & 0.949 & 4.691 \\
& KNN & 0.910 & 0.913 & 0.910 & 0.909 & 0.910 & 0.950 & \textbf{0.116} \\
& RF & 0.909 & 0.911 & 0.909 & 0.907 & 0.909 & 0.949 & 0.446 \\
& LR & 0.877 & 0.880 & 0.877 & 0.877 & 0.878 & 0.931 & 0.168 \\
& DT & 0.898 & 0.900 & 0.898 & 0.897 & 0.899 & 0.943 & 0.020 \\
\cmidrule{2-9}
\multirow{3}{2cm}{Federated Learning}
& XGB & 0.930 & 0.932 & 0.930 & 0.930 & 0.928 & 0.960 & 1578.27 \\
& LR & \textbf{0.931} & \textbf{0.933} & \textbf{0.931} & \textbf{0.931} & \textbf{0.929} & \textbf{0.961} & 396.296 \\
& RF & 0.924 & 0.930 & 0.924 & 0.922 & 0.923 & 0.957 & 125.322 \\
\bottomrule
\end{tabular}
}
\caption{Variants Classification Results for Spike Sequences data. Best values are shown in bold.}
\label{tble_classification_results_variants}
\end{table}
The confusion matrix for federated learning based model with RF is shown in Table~\ref{tbl_confuse_mat_rf}.
\begin{table}[h!]
\centering
\begin{tabular}{cccccccccc}
\toprule
& B.1.351 & B.1.427 & B.1.429 & B.1.525 & B.1.526 & B.1.617.2 & B.1.621 & C.37 & P.1 \\
\midrule
B.1.351 & 283 & 0 & 0 & 1 & 4 & 3 & 0 & 0 & 0 \\
B.1.427 & 0 & 173 & 140 & 0 & 4 & 0 & 0 & 0 & 0 \\
B.1.429 & 1 & 48 & 267 & 0 & 1 & 0 & 0 & 1 & 1 \\
B.1.525 & 1 & 1 & 0 & 287 & 1 & 0 & 0 & 0 & 0 \\
B.1.526 & 0 & 0 & 0 & 1 & 297 & 0 & 0 & 0 & 0 \\
B.1.617.2 & 0 & 0 & 0 & 0 & 0 & 283 & 0 & 0 & 0 \\
B.1.621 & 0 & 0 & 0 & 0 & 2 & 0 & 296 & 0 & 0 \\
C.37 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 297 & 0 \\
P.1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 304 \\
\bottomrule
\end{tabular}
\caption{Confusion matrix for Federated learning based model using RF.}
\label{tbl_confuse_mat_rf}
\end{table}
\section{Conclusion}\label{sec_conclusion}
We propose a federated learning based model for SARS-CoV-2 variant classification using spike sequences. We compare the results using different evaluation metrics with several baseline models and show that the federated learning based approach outperforms those existing baselines. One possible extension of this approach is to apply deep learning based models to classify the sequences. Another interesting direction would be to propose an approximate approach to compute feature embeddings for biological sequences to further improve the computational complexity.
\bibliographystyle{splncs04}
\section{Introduction}
The COVID-19 pandemic, caused by the SARS-CoV-2,
has impacted the entire
globe~\cite{majumder2021recent}.
It is responsible for almost $6$ million deaths and $561$ million infections till July 2022, as reported by the World Health Organization
(WHO)~\cite{covid_stats_who}. This influence has drawn the attention of the research community to actively contribute their tools and techniques for the pandemic response, such as for design and assessment of containment
measures~\cite{kaimann2021containment,coccia2020impact}, image
processing for diagnosis~\cite{udugama2020diagnosing,panwar2020application}, optimal vaccine distribution~\cite{AHMAD2020Combinatorial,Tariq2017Scalable,ahmad2017spectral,lee2021performance}, computational tomography for genome
sequencing~\cite{udugama2020diagnosing}, etc.
Moreover, to comprehend the diversity and dynamics of the virus, its genome sequences are analyzed by using phylogenetic methods~\cite{hadfield2018a,minh_2020_iqtree2}. These methods can help in variant identification; however, they are not scalable~\cite{hadfield2018a,minh_2020_iqtree2}. Due to the presence of large publicly available biological sequence data in the GISAID database, it is desirable to design a scalable analytical model to get a deeper understanding of the virus.
Furthermore, since the spike region (see Figure~\ref{fig_spikeprot}) of the SARS-CoV-2 genome is used to attach the virus to the host cell membrane in humans~\cite{kuzmin2020machine}, and the major mutations also happen in this region, the spike region alone suffices for further virus analysis. Recently, classification and clustering approaches have been put forward to analyze the SARS-CoV-2 virus using its spike sequences, such as host classification~\cite{kuzmin2020machine,ali2022pwm2vec} and variant classification~\cite{ali2021k,tayebi2021robust,ali2021spike2vec}. These methods first generate numerical embeddings of the sequences and then employ either vector-space or kernel-based classifiers for classification.
\begin{figure}[h!]
\centering
{\includegraphics[width=1\linewidth] {Figures/Spike_protein_new1.png}}
{\caption{The SARS-CoV-2 genome is roughly 30kb in length, encoding
structural and non-structural proteins.
The spike region is composed of 3821 base pairs.}}
\label{fig_spikeprot}
\end{figure}
Traditionally, the training of a machine learning (ML) model happens in a centralized way with all of the data stored on or is
available to be sampled from a single
server~\cite{kairouz2021advances}.
However, privacy and data security
concerns discourage disparate entities (e.g., healthcare governing
bodies in different countries) from sharing the data. The under-reporting
of COVID-19 statistics and other related data has already been
observed in various regions~\cite{kisa2020under,xu2020covid}, due to
political or other reasons. Even in cases where there are no
ethical, regulatory, or legal issues in data sharing, healthcare
bodies are known to prefer models validated on their
data~\cite{buch2021development}. Moreover, the local context is
already lost in a model trained on global data. On the other hand,
models trained on ``limited local'' data tend to overfit and do not
generalize.
Federated learning (FL), an AI paradigm, offers a more pragmatic and
proven approach to dealing with the many facets of the data-sharing
challenge. FL~\cite{mcmahan2017communication} enables collaborative
model learning over data from several (decentralized) places without
any data relocation. In FL, as shown in
Figure~\ref{fig_federated_learning}, first, (many) local models are
trained using the private data at each location. A \emph{`global
model'} is then trained using \emph{`federated learning'}. The
global model is kept on a central server called a \emph{`federated
server'}. Model parameters from the local models are pushed onto the
federated server, aggregating them using an
\emph{`aggregation function'}. FL preserves data privacy, overcomes
data ownership barriers, and yields generalized models.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.22]{Figures/Federated_learning_Approach.png}
\caption{The federated learning approach for a learning task using
private data from three separate organizations.}
\label{fig_federated_learning}
\end{figure}
The concept of federated learning has been used in many different
areas~\cite{aledhari2020federated,shaheen2022applications}, including
mobile apps, IoT, transportation, and defense. Its broad
applicability and the numerous trials that have previously been done
make it quite dependable. Recently, FL has been suggested for
inter-institutional healthcare research considering its core principle
where only model weights or gradients are sent between client sites
and the federated server, easing privacy concerns about data
governance for FL~\cite{dayan2021federated}.
In this paper, we build a small prototype of federated learning (FL)
model using a set of spike sequences for coronavirus variant
classification. We compare the performance of our proposed
federated-based approach on spike sequence data versus expensive
baseline methods. In addition, we compare our proposed solution with
other traditional state-of-the-art (SOTA) approaches, which involve a
centralized model training approach using different embedding methods
to address the classification problem.
We envision an FL-based solution as a way
for authorities and governments to facilitate differential privacy and to simultaneously extract knowledge from large
public (global) datasets (repositories such as GISAID) along with private (local) datasets from different countries, yielding a customized model catered to solving public health issues and designing policies in a specific context (e.g.,
state, country, geographical region). Here, we propose a federated
learning based approach to efficiently utilize the publicly available
data sets, and a mechanism to extract helpful information from the
private data of others while facilitating the differential privacy of
contributors to the problem of classifying variants of the SARS-CoV-2
virus. Our scalable model provides a path for solving similar problems
in other domains. Moreover, we show that using the spike protein
instead of the whole genomic sequence can give the same or better
accuracy, thus reducing computational complexity significantly.
Our contributions are as follows:
\begin{enumerate}
\item For coronavirus spike sequence classification, we provide federated learning (FL) based models, which are scalable and can be applied in a distributed fashion with less computing complexity.
\item Using the proposed FL model in a distributed manner allows us to maintain data privacy by only sending outputs (differential privacy) from the local small models to the global model (secure multi-party computation).
\item We compare FL-based models with different state-of-the-art
(SOTA) embedding techniques and show that the proposed model
outperforms SOTA methods in terms of predictive accuracy.
\item We demonstrate that the underlying machine learning classifiers can achieve high predictive performance with a fraction of information (spike sequences rather than full-length genome sequences).
\end{enumerate}
The rest of the paper is organized as follows:
Section~\ref{sec_related_work} contains the related work for the given
research problem. Our proposed federated learning model is explained
in detail in Section~\ref{sec_proposed_approach}.
Section~\ref{sec_experimental_setup} provides the details on the
dataset and experimental setup. Results are given in
Section~\ref{sec_results_discussion}, and we conclude the paper in
Section~\ref{sec_conclusion}.
\section{Related Work}\label{sec_related_work}
There are several approaches to convert biological sequences into machine learning-compatible inputs for classification and clustering, like $k$-mers-based methods~\cite{wood-2014-kraken,ali2021k,ali2021effective,solis-2018-hiv}.
Although these methods are proven useful in respective studies, it is unclear if they can be extended to larger datasets without compromising the classification/clustering models' predictive performance.
Similarly, a position weight matrix (PWM) based classification approach is proposed in~\cite{ali2022pwm2vec}, which generates a fixed-length representation of spike sequences based on weights of $k$-mers computed using PWM. However, their method only works with aligned sequence data.
Alignment-free techniques are popular, particularly in the realm of metagenomics; for example, the authors in~\cite{girotto2016metaprob} propose utilizing minimizers for metagenomic data. Since metagenomic data consists of short reads rather than full sequences (like the spike sequence), the minimizer-based approach can simply be used in that situation by computing a single minimizer ($m$-mer) for each short read. Their method is not immediately relevant in our instance because we employ the whole (spike or nucleotide) sequence.
Numerous machine learning-based methods can be used to perform the classification of a biological dataset, and federated learning (FL) is one of them. FL is a novel technique with various advantages; therefore, it has quickly caught the attention of researchers. For example, in~\cite{nasser2022lightweight}, researchers use data gathered by individual user entities/equipment through ambient sensors and wearable devices to propose a lightweight FL model that may be used to privately and collectively learn medical symptoms (such as those of COVID-19).
Moreover, many FL-based methods for image classification have been put forward. For example, the authors in~\cite{li2021model} proposed the MOON framework to deal with the heterogeneity of data distribution among local parties in FL. A non-identical (non-IID) data distribution among local models can degrade the overall performance of the global model, so to solve this problem, MOON uses a model-based contrastive learning approach. It corrects the local model's update by maximizing the similarity between the representations learned by the global model and the considered local model. In another work~\cite{jimenez2021memory}, early breast cancer prediction is performed by a memory-aware curriculum federated learning-based model using mammography images. This method prioritizes the training samples, especially those forgotten after the deployment of the global model, and improves the domain alignment. The system given in~\cite{li2020multi} performs neuroimage analysis by following an FL-based strategy. In~\cite{zhang2021dynamic}, the authors used FL for COVID-19 detection using X-ray images. Using data from 20 institutions throughout the world, the authors in~\cite{dayan2021federated} proposed a model called EXAM (electronic medical record (EMR) chest X-ray AI model), which uses inputs of vital signs, laboratory data, and chest X-rays to forecast the future oxygen requirements of symptomatic COVID-19 patients; the data is heterogeneous, comprising both clinical and image data. Unlike these image-based approaches, our proposed method directly works on the sequence data.
Furthermore, the concept of FL has also been extended to deal with time-series data; for example, in~\cite{brophy2021estimation}, ischemic heart disease is detected by training distributed models using arterial blood pressure (ABP) readings from a single optical photoplethysmogram (PPG) sensor. FL is also gaining popularity in edge computing; for example,~\cite{mills2021multi} proposed a procedure to improve the performance of FL for local models (edge devices).
\section{Proposed Approach}\label{sec_proposed_approach}
In this section, we describe the proposed federated learning based approach for the classification of coronavirus variants from spike protein sequences. We start by explaining the overall architecture of the proposed model, followed by details of each step.
\subsection{Architecture}
The architecture consists of two types of components: 1) client models (local) and 2) a Federated Learning model (global). The approach is based on a decentralized data setting, which involves dividing the dataset into smaller parts and processing each part separately. The client models use three parts of the dataset to train the models locally. These trained local models are pushed to a central (global) neural network (NN) based model. Only the weights, biases, and other parameters are provided to the NN, where they are pruned to get the final predictions. The NN model receives all the locally trained models and averages them out, effectively creating a new global model (the Federated Learning model). The Federated Learning model coordinates the federated learning process and uses a fourth part of the dataset to train the global model. Each step is explained in more detail below:
\subsection*{Step 1: Generating Feature Vector:}
A fixed-length numerical feature vector representation, called One Hot Encoding (OHE), is proposed in~\cite{ali2021k,kuzmin2020machine}. It generates a $0$-$1$ vector based on each character's position in the alphabet $\Sigma$, where $\Sigma$ is the set of unique characters "\textit{ACDEFGHIKLMNPQRSTVWXY}" appearing in the sequences. The $0$-$1$ vectors for all characters are concatenated to make a single vector for a given sequence. For a given sequence $i$ of length $l$, the dimension of the OHE based vector $\phi_i$ is the following:
\begin{equation}
\phi_i = \vert \Sigma \vert \times l
\end{equation}
\subsection*{Step 2: Federated Learning Approach}
After generating the numerical vectors $\phi$ for SARS-CoV-2 spike sequences, we use these feature vectors as input for our federated learning-based model. We divide the dataset into training ($\phi_{tr}$) and testing dataset ($\phi_{ts}$). The training dataset $\phi_{tr}$ is further divided into four equal parts ($\phi_{tr1}, \phi_{tr2}, \phi_{tr3}, \phi_{tr4}$).
Our final Federated Learning-based model is comprised of a local and a global model, which work together to classify the spike sequences.
\subsubsection*{Local models}
We initialize $3$ individual classification (local) models (using classifiers such as XGB, Logistic Regression (LR), and Random Forest (RF)) and train them using three parts of the data ($\phi_{tr1}, \phi_{tr2}, \phi_{tr3}$). After training the ``local models'',
these models are used to create a new aggregated (global) model.
\subsubsection*{Global model}
Our global model consists of a neural network architecture, which takes $\lambda_{1}$, $\lambda_{2}$, and $\lambda_{3}$ as input where $\lambda_{1}$, $\lambda_{2}$, and $\lambda_{3}$ are the outputs from local trained models for the dataset $\phi_{tr4}$, thus training the neural network using $\phi_{tr4}$. It is important to point out that only the weights, biases, and other parameters are pruned into a new global model (from the local models). In the global model, none of the data from the three parts of the dataset ($\phi_{tr1}, \phi_{tr2}, \phi_{tr3}$) is used, which is the core concept of federated learning. Using the fourth part of training data ($\phi_{tr4}$) we get output $\lambda_{1}$, $\lambda_{2}$, and $\lambda_{3}$ from respective trained classification models (local) for each data sample. This output of dimension $9\times 3 = 27$ (probability for $9$ class labels from $3$ models) is supplied to the neural network as input to train the Neural Network in the (global) model. We get our final trained ensemble model after this step.
Figure~\ref{fig_fl_NN} shows the precise architecture of the deep learning (DL) model, which is employed as the global model.
The number of neurons in the input layer is $27$ (weights from $3$ local models for $9$ class labels). The output layer, which has $9$ neurons, represents the nine classes we predict. In the neural network, we have two hidden layers having $25$ and $15$ neurons, respectively, with the ReLU activation function on both hidden layers and the Softmax function for the output layer, since it computes the probabilities for the classes by outputting a number between $0$ and $1$ and since this is a multi-class classification problem (we have nine output class labels). We use the Adam optimizer with batch size $16$ and $100$ epochs. The number of parameters is listed in Table~\ref{tbl_nn_params}: the number of trainable parameters for hidden layer 1 is $700$, for hidden layer 2 it is $390$, and for the output layer it is $144$. In total, the global model uses $1254$ trainable parameters.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.22]{Figures/nn4.png}
\caption{Federated learning - Neural network.}
\label{fig_fl_NN}
\end{figure}
\begin{table}[h!]
\centering
\resizebox{0.47\textwidth}{!}{
\begin{tabular}{p{2.5cm}ccp{1.5cm}}
\toprule
Layer (type) & Input/Output Shape & Trainable Parameters \\
\midrule \midrule
Input Layer & \begin{tabular}{c}Input : (None, 27) \\ Output : (None, 27)\end{tabular} & - \\
\midrule
Hidden Layer 1 & \begin{tabular}{c}Input : (None, 27) \\ Output : (None, 25)\end{tabular} & 700 \\
\midrule
Hidden Layer 2 & \begin{tabular}{c}Input : (None, 25) \\ Output : (None, 15)\end{tabular} & 390 \\
\midrule
Output Layer & \begin{tabular}{c}Input : (None, 15) \\ Output : (None, 9)\end{tabular} & 144 \\
\midrule
\midrule
Total & \_ & 1254 \\
\bottomrule
\end{tabular}
}
\caption{Detail regarding the parameters in different layers of the Neural Network.}
\label{tbl_nn_params}
\end{table}
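A minimal Keras sketch of this global model is shown below; the layer sizes, activations, optimizer, batch size, and number of epochs follow the text, while anything else (such as the loss function) is an assumption.
\begin{verbatim}
# Sketch of the global neural network (27 -> 25 -> 15 -> 9).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_global_model(n_inputs=27, n_classes=9):
    model = keras.Sequential([
        layers.Input(shape=(n_inputs,)),        # 9 class probabilities x 3 local models
        layers.Dense(25, activation="relu"),    # hidden layer 1
        layers.Dense(15, activation="relu"),    # hidden layer 2
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",   # assumed loss
                  metrics=["accuracy"])
    return model

# lambda_stack: concatenated local-model probabilities for phi_tr4
lambda_stack = np.random.rand(100, 27)          # placeholder local-model outputs
y_tr4 = np.random.randint(0, 9, 100)            # placeholder variant labels
global_model = build_global_model()
global_model.fit(lambda_stack, y_tr4, batch_size=16, epochs=100, verbose=0)
\end{verbatim}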
\subsubsection*{Testing the ensemble model}
Finally, using the ensemble-trained global model, we predict for the test dataset $\phi_{ts}$ to produce the final predictions and evaluate our proposed model using different evaluation metrics.
\subsection*{Workflow for proposed approach}
Figure~\ref{fig_federated_learning_flowchart} shows the complete workflow for our proposed approach. The left box shows the feature vector ($\phi$) generation process, where we use One Hot Encoding to generate numerical representations (feature vectors) from the spike sequences. Each amino acid in the spike sequence, as shown in Figure~\ref{fig_federated_learning_flowchart} (a), is encoded into a numerical representation by placing a $1$ at the position of the character. For example, for amino acid ``A'' we place a $1$ at the first position in the respective numerical representation, as shown in (b). Afterward, we divide the feature vector dataset into training $\phi_{tr}$ and testing $\phi_{ts}$ sets. Box 2 on the right side of Figure~\ref{fig_federated_learning_flowchart} shows our federated learning-based approach. We divide the training dataset into $4$ equal parts ($\phi_{tr1}, \phi_{tr2}, \phi_{tr3} \text{ and } \phi_{tr4}$) and use $3$ of these for training the ``local models'' (e.g., random forest) as shown in Figure~\ref{fig_federated_learning_flowchart} (f-h).
After training, these models are aggregated and ensembled to create a new global model, Figure~\ref{fig_federated_learning_flowchart} (j). The weights uploaded by each node (local model) for the training dataset $\phi_{tr4}$ are received on the server side as input. They are used to train the global neural network model. In the end, we use the testing dataset ($\phi_{ts}$) to predict and evaluate the model.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.85\linewidth] {Figures/Federated_Learning_Architecture.png}
\caption{Flowchart of Federated Learning approach.
}
\label{fig_federated_learning_flowchart}
\end{figure*}
The pseudo-code of our proposed approach is shown in Algorithm~\ref{algo_fedrated}. The given spike sequence-based data is converted to numerical vectors by employing a one-hot encoding technique. The resultant vectors are aligned following the trailing zero padding method. Then we split the aligned vectors into train and test sets.
The train set is further divided into four exclusive train sets, among which the three sets are used individually to train three local models respectively. We feed the fourth train set to the local models to obtain their respective weights.
Furthermore, we combine all the extracted weights and pass them to the global model as input. After the training, we employ the test dataset to get the predictions from the global model. These predictions can provide insight into the global model's performance.
\begin{algorithm}[h!]
\caption{Ensemble Model Workflow
}
\label{algo_fedrated}
\begin{algorithmic}[1]
\State \textbf{Input:} Sequence data $S$
\State \textbf{Output:} Sequences Variant Predictions $V$
\State $\phi$ = OHE (S) \Comment{$ \text{get one-hot encodings of S }$}
\State $\phi_{tr}$, $\phi_{ts}$ = SplitDataTrainTest ($\phi$) \Comment{$ \text{70-30\% split}$}
\State $\phi_{tr1}$, $\phi_{tr2}$, $\phi_{tr3}$, $\phi_{tr4}$ = SplitTrainingData ($\phi_{tr}$ )
\newline\Comment{$ \text{split training data into 4 sets }$}
\State $model_1$ = Train ($\phi_{tr1}$) \newline\Comment{$ \text{train local $model_1$ with $\phi_{tr1}$ training set}$}
\State $\lambda_1$ = $model_1$($\phi_{tr4}$)
\State $model_2$= Train ($\phi_{tr2}$) \newline\Comment{$ \text{train local $model_2$ with $\phi_{tr2}$ training set}$}
\State $\lambda_2$ = $model_2$($\phi_{tr4}$)
\State $model_3$ = Train ($\phi_{tr3}$) \newline\Comment{$ \text{train local $model_3$ with $\phi_{tr3}$ training set}$}
\State $\lambda_3$ = $model_3$( $\phi_{tr4}$)
\State $model_{g}$ = Train ($\lambda_1$ + $\lambda_2$ + $\lambda_3$) \newline\Comment{$ \text{pass $\lambda_1$ + $\lambda_2$ + $\lambda_3$ as input to global $model_{g}$}$}
\State $V$ = $model_g$($\phi_{ts}$) \Comment{$ \text{$model_g$ output V for $\phi_{ts}$ }$}
\State return($V$ )
\end{algorithmic}
\end{algorithm}
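The algorithm can be summarized by the following Python sketch; scikit-learn's \texttt{MLPClassifier} (with hidden layers of $25$ and $15$ neurons) stands in for the Keras global network shown earlier, and the classifier hyperparameters are assumptions.
\begin{verbatim}
# End-to-end sketch of Algorithm 1 (70-30 split, 4 training parts, 3 local
# models, global model trained only on local-model outputs for phi_tr4).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

def federated_workflow(phi, y):
    phi_tr, phi_ts, y_tr, y_ts = train_test_split(phi, y, test_size=0.3, random_state=0)
    tr1, tr2, tr3, tr4 = np.array_split(np.arange(len(y_tr)), 4)

    # Train each local model on its own private split.
    local_models = [XGBClassifier(), LogisticRegression(max_iter=1000),
                    RandomForestClassifier()]
    for model, idx in zip(local_models, (tr1, tr2, tr3)):
        model.fit(phi_tr[idx], y_tr[idx])

    # Only the local models' probability outputs for phi_tr4 reach the global model.
    lam = np.hstack([m.predict_proba(phi_tr[tr4]) for m in local_models])
    global_model = MLPClassifier(hidden_layer_sizes=(25, 15), max_iter=500, random_state=0)
    global_model.fit(lam, y_tr[tr4])

    # Testing: pass the test set through the local models, then the global model.
    lam_ts = np.hstack([m.predict_proba(phi_ts) for m in local_models])
    return global_model.predict(lam_ts), y_ts
\end{verbatim}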
\section{Experimental Setup}\label{sec_experimental_setup}
In this section, we detail the spike sequence dataset used for experimentation. We also discuss the baseline models and the machine learning algorithms used for classification with the baselines. In the end, we talk about the evaluation metrics used to test the performance of the models.
All experiments are conducted using an Intel(R) Core i5 system @ $2.10$GHz having Windows 10 $64$ bit OS with 32 GB memory. Implementation of the model is done in Python, and the code is available online for reproducibility\footnote{available in the published version}. Our pre-processed data is also available online\footnote{available in the published version}. For the classification algorithms, we use $70\%$ of the data for training and $30\%$ for testing.
\subsection{Dataset Statistics}
We extract the spike data from GISAID\footnote{\url{https://www.gisaid.org/}}.
The extracted data contains $9$ coronavirus variants within $9000$ total sequences ($1000$ sequences for each variant) that are selected randomly. Detailed statistics of the dataset can be seen in Table~\ref{tbl_variant_information}. The variants information is used as class labels for classification.
Moreover, as shown in Table~\ref{tbl_gene_alteration}, an input spike sequence is a long string of characters, each character representing an amino acid. Every sequence is associated with a lineage or variant. The variant is generated by certain mutations in the spike protein region. For example, the epsilon variant is created when the mutations S13I, W152C, and L452R happen in the spike region, where S13I means the amino acid S at position 13 is replaced by amino acid I. We use these sequence-based datasets to predict the corresponding variant names.
\begin{remark}
Note that the spike sequences in our data are not of the same length. The average, minimum, and maximum length of sequences (in the whole data) is $1263.16$, $9$, and $1277$, respectively. We use data padding in one-hot encoding to get a fixed-length representation. The sequence length statistics for individual variants are given in Table~\ref{tbl_variant_information}.
\end{remark}
\begin{table}[h!]
\centering
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{@{\extracolsep{4pt}}p{1.5cm}lp{1.1cm}p{1.5cm} p{1.6cm}ccc}
\toprule
& & & & & \multicolumn{3}{c}{Sequence Length} \\
\cmidrule{6-8}
\multirow{2}{*}{Lineage} & \multirow{2}{*}{Region of First Time Detection} & \multirow{2}{1.1cm}{Variant Name} &
\multirow{2}{1.8cm}{No. Mut. S/Gen.} & No. of sequences & \multirow{2}{*}{Min.} & \multirow{2}{*}{Max.} & \multirow{2}{*}{Avg.} \\
\midrule \midrule
B.1.351 & South Africa~\cite{galloway2021emergence} & Beta & 9/21 & \hskip.1in 1000 & 9 & 1274 & 1260.46 \\
B.1.427 & California~\cite{zhang2021emergence} & Epsilon & 3/5 & \hskip.1in 1000 & 100 & 1274 & 1272.18 \\
B.1.429 & California~\cite{who_website} & Epsilon & 3/5 & \hskip.1in 1000 & 100 & 1277 & 1271.93 \\
B.1.525 & UK and Nigeria~\cite{who_website} & Eta & 8/16 & \hskip.1in 1000 & 32 & 1273 & 1257.19 \\
B.1.526 & New York~\cite{west2021detection} & Iota & 6/16 & \hskip.1in 1000 & 9 & 1273 & 1266.62 \\
B.1.617.2 & India~\cite{yadav2021neutralization} & Delta & 8/17 & \hskip.1in 1000 & 99 & 1273 & 1265.12 \\
B.1.621 & Colombia~\cite{who_website} & Mu & 9/21 & \hskip.1in 1000 & 9 & 1275 & 1255.93 \\
C.37 & Peru~\cite{who_website} & Lambda & 8/21 & \hskip.1in 1000 & 86 & 1273 & 1248.55 \\
P.1 & Brazil~\cite{naveca2021phylogenetic} & Gamma & 10/21 & \hskip.1in 1000 & 99 & 1274 & 1270.45 \\
\midrule
Total & - & - & - & \hskip.1in 9000 \\
\bottomrule
\end{tabular}
}
\caption{Statistics for $9$ lineages from the SARS-CoV-2 dataset.
}
\label{tbl_variant_information}
\end{table}
\begin{table}[h!]
\centering
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{p{4cm}ccp{4cm}}
\toprule
Sequence & Variant Name & Lineage & Mutation \\
\midrule \midrule
\multirow{1}{*}{MFVFL . \textcolor{red}{I} . \textcolor{red}{C} . NY\textcolor{red}{R}YR . . } & Epsilon & B.1.429, B.1.427 & S13I, W152C, L452R \\
\multirow{1}{*}{MFVFL . \textcolor{red}{R} . \textcolor{red}{K} . HR\textcolor{red}{R}AR . . } & Delta & B.1.617.2 & T478K, P681R, L452R \\
\multirow{1}{*}{MFVFL . . . \textcolor{red}{N} . GV\textcolor{red}{K}GF . .} & Iota & B.1.526 & E484K, S477N \\
\multirow{2}{*}{MFVFL . . . \textcolor{red}{V} . \textcolor{red}{I} . \textcolor{red}{Q} . . .} & \multirow{2}{*}{Lambda} & \multirow{2}{*}{C.37} & G75V,T76I,L452Q,\\
& & & F490S,D614G,T859N \\
\bottomrule
\end{tabular}
}
\caption{An illustration of sequences for Epsilon and Delta coronavirus strains, along with the corresponding alterations (marked red).
}
\label{tbl_gene_alteration}
\end{table}
\subsection{Baseline Model}
We use the following models from the literature as baselines for the comparison of results with the proposed federated learning model.
\subsubsection{Spike2Vec~\cite{ali2021spike2vec}}
It is a spike sequence classification method.
Given a sequence, it computes $y$ $k$-mers, where
\begin{equation}\label{eq_kmers_total}
y = n - k + 1
\end{equation}
where $n$ is the length of the spike sequence and $k=3$ (for $k$-mers) as given in~\cite{ali2021spike2vec}.
After generating the $k$-mers for a spike sequence, the count of each $k$-mer is used to get the frequency vector. Given the alphabet $\Sigma$, where $\Sigma$ is "\textit{ACDEFGHIKLMNPQRSTVWXY}", the length of the Spike2Vec based vector $\Phi_{Spike2Vec}$ will be the following:
\begin{equation}\label{eq_spike2vec_embedding}
\Phi_{Spike2Vec} = |\Sigma|^k
\end{equation}
\subsubsection{Wasserstein Distance Guided Representation Learning (WDGRL)~\cite{shen2018wasserstein}}
WDGRL is a method for unsupervised domain adaptation. The Wasserstein distance (WD), used to help neural networks extract features from input data, is calculated between the source and target encoded distributions. By minimizing the estimated WD and optimizing the feature extractor network, it learns the final representation. It uses the OHE (one-hot encoding~\cite{kuzmin2020machine}) of a sequence as input.
Since WDGRL uses a neural network as its foundation, gathering training data can be costly.
\subsubsection{PWM2Vec~\cite{ali2022pwm2vec}}
It is another method to produce numerical embeddings of biological sequences.
It also adheres to the fundamental principles of $k$-mers, but instead of utilizing constant frequency values, it gives each amino acid in the $k$-mers a weight. An amino acid's weight is calculated using its position in a $k$-mer position weight matrix (PWM). PWM2Vec considers the relative relevance of amino acids while preserving the ordering information.
First, $k$-mers extraction is carried out for a spike sequence. Then, using the $k$-mers, a position frequency matrix (PFM) is produced by counting the occurrences of each alphabet relative to its associated places. Additionally, by obtaining column-wise probabilities of PFM using the following equation, a position probability matrix (PPM) is created as $PPM = \frac{\text{count of character in the column of matrix}}{\text{sum of column values}}$.
A Laplace value of $0.1$ is applied to each PPM element to prevent zero elements. The PWM is then created by calculating the log-likelihood of each character $c \in \Sigma$ at a location $i$ using the following formula: $PWM_{c, i} =\log_{2} \frac{p(c, i)}{p(c)}$, where $n(c)$ is the number of codons (as mentioned in~\cite{ali2022pwm2vec}) for each amino acid and $p(c) = \frac{n(c)}{61}$. Out of a total of $64$ codons, where $64-61=3$ are stop/termination codons, $61$ is the number of sense codons that code for an amino acid.
The appropriate spike sequence is transformed into its equivalent numerical representation using the PWM (PWM2Vec). In our studies, we chose the embedding $k=9$ using the conventional validation set technique~\cite{validationSetApproach}.
\begin{remark}
PWM2Vec is not an alignment-free approach. Data padding can be used to align variable-length sequences,
but since the dataset we are utilizing is already aligned, padding is not required.
\end{remark}
\subsubsection{String Kernel~\cite{farhan2017efficient}}
This method works by generating a gram matrix, which consists of the approximate pairwise distances between sequences. The distance computation is based on the number of matched and mismatched $k$-mers (where $k=3$, which is decided using the standard validation set approach~\cite{validationSetApproach}) between two sequences.
To make the computation more efficient, this method uses locality-sensitive hashing theory to handle pairs of $k$-mers from two sequences that are at distance $m$ from each other. After generating the $n \times n$ gram (kernel) matrix, the matrix is given as input to kernel PCA~\cite{hoffmann2007kernel} to get the top principal components (PCs). These top PCs are used as input for underlying ML tasks such as classification.
\begin{remark}
We took the top $500$ principal components for kernel PCA, which contains $>90\%$ cumulative sum of explained variance.
\end{remark}
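As a simplified illustration of this pipeline, the sketch below builds a gram matrix from an exact $k$-mer spectrum dot product (a stand-in for the approximate matched/mismatched $k$-mer distance of the original method) and reduces it with kernel PCA.
\begin{verbatim}
# Sketch of the string-kernel pipeline: gram matrix of pairwise similarities,
# then kernel PCA on the precomputed matrix.
import numpy as np
from sklearn.decomposition import KernelPCA

def kmer_counts(seq, k=3):
    counts = {}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] = counts.get(seq[i:i + k], 0) + 1
    return counts

def spectrum_similarity(a, b):
    return float(sum(v * b.get(kmer, 0) for kmer, v in a.items()))

sequences = ["MFVFLVLLPLVSS", "MFVFLVLLPLVSG", "MFVFFVLLPLVSS"]
counts = [kmer_counts(s) for s in sequences]
gram = np.array([[spectrum_similarity(a, b) for b in counts] for a in counts])

# The paper keeps the top 500 components; 2 are used here only for the toy data.
kpca = KernelPCA(n_components=2, kernel="precomputed")
features = kpca.fit_transform(gram)
print(features.shape)
\end{verbatim}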
\subsubsection{Protein Bert~\cite{BrandesProteinBERT2022}}
It is a pre-trained language model specifically designed for protein sequences.
To handle long sequences efficiently, new architectural elements are added to this model. It captures both global and local representations within protein sequences. It is an end-to-end model based on the transformer, which takes protein sequences as input, fine-tunes the model's weights based on new data, learns patterns based on the updated weights, and outputs the predicted labels.
\subsection{Machine Learning Classifiers}
For the classification task on state-of-the-art methods, we use Support Vector Machine (SVM), Naive Bayes (NB), Multi-Layer Perceptron (MLP), K Nearest Neighbors (KNN) with $K=5$, Random Forest (RF), Logistic Regression (LR), and Decision Tree (DT).
For the FL, we use eXtreme Gradient Boosting (XGB), LR, and RF classifiers to train the local models. XGB is a boosting algorithm based on the gradient-boosted decision trees approach. It applies a better regularization technique to reduce over-fitting.
We select important features from the training dataset using a meta-transformer approach. This approach involves selecting features based on importance weights and is used for feature selection (dimensionality reduction). The goal of dimensionality reduction is to either improve the accuracy scores of the estimators or to boost the model's performance on high-dimensional datasets, hence avoiding the curse of dimensionality.
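The ``meta-transformer selecting features based on importance weights'' matches scikit-learn's \texttt{SelectFromModel}, which is assumed in the sketch below; the estimator and threshold are likewise assumptions.
\begin{verbatim}
# Sketch of importance-based feature selection with a meta-transformer.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X_train = np.random.rand(200, 2000)         # e.g., a slice of one-hot encoded sequences
y_train = np.random.randint(0, 9, 200)      # variant labels

selector = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0))
X_reduced = selector.fit_transform(X_train, y_train)   # keeps features above mean importance
print(X_reduced.shape)
\end{verbatim}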
\subsection{Evaluation Metrics}
We use
average accuracy, precision, recall, weighted $F_1$, macro $F_1$, and ROC-AUC (one-vs-rest) metrics to evaluate the performance of classification algorithms. We also report the training runtime for the classifiers.
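These metrics can be computed with scikit-learn as in the short sketch below; the weighted averaging for precision and recall is an assumption, since the text only specifies the averaging for the $F_1$ scores.
\begin{verbatim}
# Sketch of the reported evaluation metrics.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(y_true, y_pred, y_prob):
    return {
        "accuracy":    accuracy_score(y_true, y_pred),
        "precision":   precision_score(y_true, y_pred, average="weighted"),
        "recall":      recall_score(y_true, y_pred, average="weighted"),
        "f1_weighted": f1_score(y_true, y_pred, average="weighted"),
        "f1_macro":    f1_score(y_true, y_pred, average="macro"),
        "roc_auc_ovr": roc_auc_score(y_true, y_prob, multi_class="ovr",
                                     average="weighted"),
    }
\end{verbatim}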
\subsection{Data Visualization}
The t-distributed stochastic neighbor embedding (t-SNE)~\cite{van2008visualizing} is utilized to identify any hidden patterns in the data. This method works by mapping the high dimensional input data into $2D$ space but preserves the pairwise distance between data points in high dimensions. This visualization aims to highlight if different embedding methods introduce any changes to the overall distribution of data. For the various (baseline) embedding methods, Figure~\ref{fig_all_tsne} illustrates the t-SNE-based visualization (with SARS-CoV-2 variants as labels, as shown in the legends).
In the case of WDGRL, we can observe that the variants are not clearly grouped together. For Spike2Vec, PWM2Vec, and String Kernel, the majority of the variants, such as P.1 (Gamma), B.1.526 (Iota), and C.37 (Lambda), make a single group.
\begin{figure}[h!]
\centering
{
\subfigure[Spike2Vec]{
\includegraphics[width=0.4\linewidth]{Figures/tsne/Spike2Vec_tnse.png}%
}
\subfigure[PWM2Vec]{
\includegraphics[width=0.4\linewidth]{Figures/tsne/PWM2Vec_tnse.png}%
}
\subfigure[String Kernel]{
\includegraphics[width=0.4\linewidth]{Figures/tsne/kernel_Approx_tnse_re.png}%
}
\subfigure[WDGRL]{
\includegraphics[width=0.4\linewidth]{Figures/tsne/WDGRL_tnse.png}%
}
\qquad
\includegraphics[width=1\linewidth]{Figures/tsne/Legends.png}%
}
\caption{t-SNE plots using different embedding methods. The figure is best seen in color.
}
\label{fig_all_tsne}
\end{figure}
\section{Results and Discussion}\label{sec_results_discussion}
This section reports the classification results of the various methods using different evaluation metrics. We report the average and standard deviation over $5$ runs to mitigate the effect of randomness in the results.
Table~\ref{tble_classification_results_variants} summarizes the results for our proposed system and the SOTA models for different ML classifiers. We can observe that our proposed method with the LR classifier outperforms the baselines on all evaluation metrics except the training runtime.
Our method involves training multiple models, which increases the runtime, but it preserves the privacy of the data while maintaining the highest predictive performance,
which is the prime goal of this paper.
The federated learning-based model shows better performance than the feature-engineering-based baselines (Spike2Vec, PWM2Vec): with the LR classifier it achieves $3.3$\% and $0.4$\% higher accuracy than PWM2Vec and Spike2Vec, respectively. Similarly, it outperforms String Kernel by $0.4$\% accuracy using the LR classifier. Moreover, the proposed model outperforms WDGRL by $2.2\%$ and the pre-trained Protein Bert by $2.9\%$ in terms of predictive accuracy using logistic regression.
\begin{table*}[h!]
\centering
\resizebox{0.99\textwidth}{!}{
\begin{tabular}{cp{1cm}cccccc|c}
\toprule
Method & Algo. & Acc. $\uparrow$ & Prec. $\uparrow$ & Recall $\uparrow$ & F1 (Weig.) $\uparrow$ & F1 (Macro) $\uparrow$ & ROC AUC $\uparrow$ & Train Time (Sec.) $\downarrow$\\
\midrule \midrule
\multirow{7}{2cm}{Spike2Vec~\cite{ali2021spike2vec}}
& SVM & 0.925 $\pm$ 0.001 & 0.926 $\pm$ 0.001 & 0.925 $\pm$ 0.001 & 0.924 $\pm$ 0.001 & 0.924 $\pm$ 0.002 & 0.958 $\pm$ 0.001 & 242.499 $\pm$ 4.623 \\
& NB & 0.919 $\pm$ 0.001 & 0.925 $\pm$ 0.003 & 0.919 $\pm$ 0.001 & 0.918 $\pm$ 0.001 & 0.918 $\pm$ 0.002 & 0.955 $\pm$ 0.001 & 6.452 $\pm$ 0.334 \\
& MLP & 0.890 $\pm$ 0.015 & 0.894 $\pm$ 0.012 & 0.890 $\pm$ 0.015 & 0.889 $\pm$ 0.014 & 0.889 $\pm$ 0.013 & 0.938 $\pm$ 0.008 & 156.453 $\pm$ 14.703 \\
& KNN & 0.866 $\pm$ 0.002 & 0.871 $\pm$ 0.002 & 0.866 $\pm$ 0.002 & 0.867 $\pm$ 0.002 & 0.866 $\pm$ 0.004 & 0.925 $\pm$ 0.002 & 16.039 $\pm$ 1.079 \\
& RF & 0.926 $\pm$ 0.003 & 0.927 $\pm$ 0.004 & 0.926 $\pm$ 0.003 & 0.925 $\pm$ 0.003 & 0.925 $\pm$ 0.003 & 0.958 $\pm$ 0.002 & 11.032 $\pm$ 0.175 \\
& LR & 0.927 $\pm$ 0.001 & 0.929 $\pm$ 0.002 & 0.927 $\pm$ 0.001 & 0.927 $\pm$ 0.001 & 0.927 $\pm$ 0.002 & 0.959 $\pm$ 0.001 & 23.966 $\pm$ 0.866 \\
& DT & 0.922 $\pm$ 0.004 & 0.924 $\pm$ 0.004 & 0.922 $\pm$ 0.004 & 0.922 $\pm$ 0.003 & 0.922 $\pm$ 0.002 & 0.956 $\pm$ 0.001 & 4.414 $\pm$ 0.172 \\
\cmidrule{2-9}
\multirow{7}{2cm}{PWM2Vec~\cite{ali2022pwm2vec}}
& SVM & 0.888 $\pm$ 0.001 & 0.891 $\pm$ 0.001 & 0.888 $\pm$ 0.001 & 0.887 $\pm$ 0.002 & 0.885 $\pm$ 0.002 & 0.936 $\pm$ 0.001 & 13.718 $\pm$ 1.894 \\
& NB & 0.423 $\pm$ 0.014 & 0.449 $\pm$ 0.026 & 0.423 $\pm$ 0.014 & 0.352 $\pm$ 0.019 & 0.351 $\pm$ 0.017 & 0.675 $\pm$ 0.007 & 0.496 $\pm$ 0.047 \\
& MLP & 0.866 $\pm$ 0.006 & 0.869 $\pm$ 0.008 & 0.866 $\pm$ 0.006 & 0.864 $\pm$ 0.006 & 0.862 $\pm$ 0.006 & 0.923 $\pm$ 0.003 & 12.656 $\pm$ 3.516 \\
& KNN & 0.841 $\pm$ 0.010 & 0.843 $\pm$ 0.009 & 0.841 $\pm$ 0.010 & 0.841 $\pm$ 0.010 & 0.839 $\pm$ 0.009 & 0.910 $\pm$ 0.005 & 1.442 $\pm$ 0.181 \\
& RF & 0.899 $\pm$ 0.003 & 0.900 $\pm$ 0.003 & 0.899 $\pm$ 0.003 & 0.899 $\pm$ 0.003 & 0.897 $\pm$ 0.003 & 0.942 $\pm$ 0.002 & 6.608 $\pm$ 0.056 \\
& LR & 0.898 $\pm$ 0.004 & 0.898 $\pm$ 0.004 & 0.898 $\pm$ 0.004 & 0.896 $\pm$ 0.004 & 0.894 $\pm$ 0.004 & 0.941 $\pm$ 0.002 & 152.62 $\pm$ 7.102 \\
& DT & 0.882 $\pm$ 0.005 & 0.883 $\pm$ 0.005 & 0.882 $\pm$ 0.005 & 0.882 $\pm$ 0.005 & 0.880 $\pm$ 0.005 & 0.933 $\pm$ 0.003 & 3.406 $\pm$ 0.110 \\
\cmidrule{2-9}
\multirow{7}{2cm}{String Kernel~\cite{farhan2017efficient}}
& SVM & 0.926 $\pm$ 0.005 & 0.931 $\pm$ 0.005 & 0.926 $\pm$ 0.005 & 0.924 $\pm$ 0.005 & 0.924 $\pm$ 0.003 & 0.959 $\pm$ 0.002 & 12.46 $\pm$ 2.543 \\
& NB & 0.600 $\pm$ 0.008 & 0.705 $\pm$ 0.010 & 0.600 $\pm$ 0.008 & 0.611 $\pm$ 0.008 & 0.611 $\pm$ 0.008 & 0.775 $\pm$ 0.004 & 0.218 $\pm$ 0.013 \\
& MLP & 0.853 $\pm$ 0.013 & 0.855 $\pm$ 0.014 & 0.853 $\pm$ 0.013 & 0.852 $\pm$ 0.013 & 0.853 $\pm$ 0.013 & 0.917 $\pm$ 0.007 & 6.948 $\pm$ 0.622 \\
& KNN & 0.866 $\pm$ 0.007 & 0.872 $\pm$ 0.008 & 0.866 $\pm$ 0.007 & 0.868 $\pm$ 0.008 & 0.868 $\pm$ 0.005 & 0.925 $\pm$ 0.003 & 0.827 $\pm$ 0.068 \\
& RF & 0.918 $\pm$ 0.004 & 0.919 $\pm$ 0.003 & 0.918 $\pm$ 0.004 & 0.917 $\pm$ 0.004 & 0.917 $\pm$ 0.002 & 0.954 $\pm$ 0.001 & 5.120 $\pm$ 0.191 \\
& LR & 0.927 $\pm$ 0.004 & 0.930 $\pm$ 0.003 & 0.927 $\pm$ 0.004 & 0.926 $\pm$ 0.004 & 0.926 $\pm$ 0.002 & 0.959 $\pm$ 0.001 & 9.258 $\pm$ 0.702 \\
& DT & 0.897 $\pm$ 0.006 & 0.899 $\pm$ 0.005 & 0.897 $\pm$ 0.006 & 0.897 $\pm$ 0.006 & 0.897 $\pm$ 0.004 & 0.942 $\pm$ 0.002 & 1.426 $\pm$ 0.065 \\
\cmidrule{2-9}
\multirow{7}{2cm}{WDGRL~\cite{shen2018wasserstein}}
& SVM & 0.902 $\pm$ 0.003 & 0.905 $\pm$ 0.004 & 0.902 $\pm$ 0.003 & 0.901 $\pm$ 0.004 & 0.902 $\pm$ 0.003 & 0.946 $\pm$ 0.002 & 0.403 $\pm$ 0.038 \\
& NB & 0.825 $\pm$ 0.004 & 0.789 $\pm$ 0.007 & 0.825 $\pm$ 0.004 & 0.792 $\pm$ 0.004 & 0.795 $\pm$ 0.004 & 0.904 $\pm$ 0.002 & \textbf{0.016} $\pm$ 0.003 \\
& MLP & 0.908 $\pm$ 0.004 & 0.910 $\pm$ 0.004 & 0.908 $\pm$ 0.004 & 0.907 $\pm$ 0.005 & 0.908 $\pm$ 0.004 & 0.949 $\pm$ 0.002 & 4.691 $\pm$ 0.736 \\
& KNN & 0.910 $\pm$ 0.012 & 0.913 $\pm$ 0.011 & 0.910 $\pm$ 0.012 & 0.909 $\pm$ 0.012 & 0.910 $\pm$ 0.011 & 0.950 $\pm$ 0.006 & 0.116 $\pm$ 0.014 \\
& RF & 0.909 $\pm$ 0.002 & 0.911 $\pm$ 0.001 & 0.909 $\pm$ 0.002 & 0.907 $\pm$ 0.002 & 0.909 $\pm$ 0.002 & 0.949 $\pm$ 0.001 & 0.446 $\pm$ 0.057 \\
& LR & 0.877 $\pm$ 0.012 & 0.880 $\pm$ 0.005 & 0.877 $\pm$ 0.012 & 0.877 $\pm$ 0.015 & 0.878 $\pm$ 0.014 & 0.931 $\pm$ 0.006 & 0.168 $\pm$ 0.016 \\
& DT & 0.898 $\pm$ 0.005 & 0.900 $\pm$ 0.006 & 0.898 $\pm$ 0.005 & 0.897 $\pm$ 0.005 & 0.899 $\pm$ 0.004 & 0.943 $\pm$ 0.002 & 0.020 $\pm$ 0.005 \\
\cmidrule{2-9}
\multirow{1}{2cm}{Protein Bert~\cite{BrandesProteinBERT2022}}
& \multirow{4}{*}{-} & \multirow{4}{*}{0.902 $\pm$ 0.004} & \multirow{4}{*}{0.903 $\pm$ 0.003} & \multirow{4}{*}{0.902 $\pm$ 0.004} & \multirow{4}{*}{0.904 $\pm$ 0.005} & \multirow{4}{*}{0.903 $\pm$ 0.009} & \multirow{4}{*}{0.945 $\pm$ 0.007} & \multirow{4}{*}{16127.76 $\pm$ 0.019} \\
&&&&&&&&\\
&&&&&&&&\\
&&&&&&&&\\
\cmidrule{2-9}
\multirow{3}{2cm}{Federated Learning (ours)}
& XGB & 0.930 $\pm$ 0.004 & 0.932 $\pm$ 0.003 & 0.930 $\pm$ 0.004 & 0.930 $\pm$ 0.005 & 0.928 $\pm$ 0.004 & 0.960 $\pm$ 0.003 & 1578.27 $\pm$ 0.045 \\
& LR & \textbf{0.931} $\pm$ 0.011 & \textbf{0.933} $\pm$ 0.010 & \textbf{0.931} $\pm$ 0.012 & \textbf{0.931} $\pm$ 0.011 & \textbf{0.929} $\pm$ 0.011 & \textbf{0.961} $\pm$ 0.010 & 396.296 $\pm$ 0.024 \\
& RF & 0.929 $\pm$ 0.005 & 0.932 $\pm$ 0.004 & 0.928 $\pm$ 0.006 & 0.927 $\pm$ 0.005 & 0.925 $\pm$ 0.006 & 0.959 $\pm$ 0.004 & 125.322 $\pm$ 0.079 \\
\bottomrule
\end{tabular}
}
\caption{Variants classification results (average $\pm$ standard deviation of 5 runs) for spike sequences data. The best average values are shown in bold.}
\label{tble_classification_results_variants}
\end{table*}
Furthermore, the confusion matrix of the RF-based federated learning model is shown in Table~\ref{tbl_confuse_mat_rf}.
We can observe that in most cases, the model classifies the variants correctly. An interesting observation concerns the variants B.1.427 and B.1.429. Since both of these variants are classified as Epsilon, originating in California (see Table~\ref{tbl_variant_information}), the proposed model cannot distinguish between them because of their high similarity. Note that both variants share the same mutations in the spike region but have different mutations in other SARS-CoV-2 genes. Since we only deal with the spike region in this study, differentiating between them becomes very difficult, which is why the model confuses these two Epsilon variants. The mutations for the Epsilon variants are shown in Table~\ref{tbl_gene_alteration}.
\begin{table}[h!]
\centering
\resizebox{0.47\textwidth}{!}{
\begin{tabular}{cccccccccc}
\toprule
& B.1.351 & B.1.427 & B.1.429 & B.1.525 & B.1.526 & B.1.617.2 & B.1.621 & C.37 & P.1 \\
\midrule
B.1.351 & 283 & 0 & 0 & 1 & 4 & 3 & 0 & 0 & 0 \\
B.1.427 & 0 & 173 & 140 & 0 & 4 & 0 & 0 & 0 & 0 \\
B.1.429 & 1 & 48 & 267 & 0 & 1 & 0 & 0 & 1 & 1 \\
B.1.525 & 1 & 1 & 0 & 287 & 1 & 0 & 0 & 0 & 0 \\
B.1.526 & 0 & 0 & 0 & 1 & 297 & 0 & 0 & 0 & 0 \\
B.1.617.2 & 0 & 0 & 0 & 0 & 0 & 283 & 0 & 0 & 0 \\
B.1.621 & 0 & 0 & 0 & 0 & 2 & 0 & 296 & 0 & 0 \\
C.37 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 297 & 0 \\
P.1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 304 \\
\bottomrule
\end{tabular}
}
\caption{Confusion matrix for Federated learning based model using Random Forest classifier.}
\label{tbl_confuse_mat_rf}
\end{table}
\subsection{Local Model Analysis}
We present the training and validation accuracy for the individual ML models in Figure~\ref{fig_local_model_results} to assess their performance throughout the training phase. These charts show that accuracy improves as the training set size increases, indicating that the local models benefit from more data.
\begin{figure}[h!]
\centering
{
\subfigure[Local Model 1]{
\includegraphics[scale=0.45]{Figures/output-figure0.pdf}%
}
}
{
\subfigure[Local Model 2]{
\includegraphics[scale=0.45]{Figures/output-figure1.pdf}%
}
}
{
\subfigure[Local Model 3]{
\includegraphics[scale=0.45]{Figures/output-figure2.pdf}%
}
}
\caption{Training and Cross-Validation accuracy of different local models with increasing (fraction of) training set size (x-axis). The figure is best seen in color.}
\label{fig_local_model_results}
\end{figure}
\subsection{Global Model Analysis}
The loss and accuracy curves for the global model are shown in Figure~\ref{fig_model_loss_accuracy}. We can observe in Figure~\ref{fig_loss_NN} that the loss stabilizes after $20$ epochs, and the accuracy ranges around 94--96\% as shown in Figure~\ref{fig_acc_NN}.
\begin{figure}[h!]
\centering
{
\subfigure[Loss]{\label{fig_loss_NN}
\includegraphics[scale=0.52]{Figures/Tikz_Figures/Loss.pdf}%
}
\qquad
\subfigure[Accuracy]{\label{fig_acc_NN}
\includegraphics[scale=0.52]{Figures/Tikz_Figures/Accuracy.pdf}%
}
}
\caption{Learning curves for Loss and Accuracy of final ensemble (Global) model (NN).}
\label{fig_model_loss_accuracy}
\end{figure}
\section{Conclusion}\label{sec_conclusion}
We propose federated learning-based models for COVID-19 variant classification. We show that by using spike sequences only, we can achieve high predictive performance. We compare the results with several SOTA models using different evaluation metrics and show that the federated learning-based approach outperforms those existing models from the literature. An important property of the proposed model is that, since it only transfers the outputs of the local models to the global model, it preserves the privacy of the users' data, which is a major concern for many large organizations.
One possible extension of this approach is to apply deep learning-based local models to classify the sequences. Another interesting direction would be to propose an approximate approach to compute feature embeddings for biological sequences in order to further reduce the computational cost. Using different ML classifiers in combination within a single FL architecture could also be an interesting future extension for COVID-19 variant classification. We will also explore incorporating other attributes (e.g., regions, time) along with the spike sequences to build a vertical federated learning model.
{
"arxiv_id": "2302.08598",
"language": "en",
"timestamp": "2023-02-20T02:02:46",
"url": "https://arxiv.org/abs/2302.08598",
"yymm": "2302"
} | \section{Introduction}
The elasticity complex, also known as the Kr\"oner complex,
can be derived from simpler complexes by an algebraic technique called
the Bernstein-Gelfand-Gelfand (BGG) resolution \cite{arnold2021complexes, CapSlovaSouce01,eastwood2000complex}. The utility of the BGG construction in developing and understanding stress elements for elasticity is now well appreciated~\cite{FE2006}.
However, even with this machinery, the construction of conforming, inf-sup stable stress elements on simplicial
meshes is still a notoriously challenging task~\cite{boffi2008finite}.
It was not until 2002 that the first conforming elasticity elements
were successfully constructed on two-dimensional triangular meshes by
Arnold and Winther~\cite{arnold2002mixed}. There, they argued that
vertex degrees of freedom (DOFs) are necessary when using piecewise
polynomial approximations on such meshes. They in fact constructed an
entire discrete elasticity complex and showed how the last two spaces
there are relevant for discretizing the Hellinger-Reissner principle in
elasticity.
Following the creation of the first two-dimensional (2D) conforming
elasticity elements, the first three-dimensional (3D) elasticity
elements were constructed in \cite{adams2005mixed,arnold2008finite},
which paved the way for many other similar elements, as demonstrated
in \cite{hu2015family}. A natural question that arose was whether
these elements could be seen as part of an entire discrete elasticity
complex, similar to what was done in 2D. Although the work
in~\cite{arnold2008finite} laid the foundation, the task of extending
it to 3D was bogged down by complications.
This is despite the clearly understood
BGG procedure to arrive at an elasticity complex
of smooth function spaces,
\begin{equation}\label{eq:smooth-complex}
\begin{tikzcd}
0 \arrow{r}
&\mathsf{R} \arrow{r}{\subset}
&C^{\infty}\otimes \mathbb{V} \arrow{r}{\varepsilon}
& C^{\infty}\otimes \mathbb{S} \arrow{r}{\text{inc}}
& C^{\infty}\otimes \mathbb{S}\arrow{r}{\text{div}}
&C^{\infty}\otimes {\mathbb{V}} \arrow{r}&0.
\end{tikzcd}
\end{equation}
Here and throughout, $\mathbb{V} = \mathbb{R}^3$, $\mathbb{M} = \mathbb{R}^{3 \times 3}$,
$\mathsf{R}=\{ a+b \times x: a, b \in \mathbb{R}^3\}$ denotes rigid
displacements, $\text{inc} ={\rm curl} \circ \tau \circ {\rm curl}$ with
$\tau$ denoting the transpose, curl and divergence operators are
applied row by row on matrix fields, $\mathbb{S} = \text{sym} (\mathbb{M})$,
and $\varepsilon = \text{{\rm sym}} \circ \text{grad}$ denotes the
deformation operator. The complex~\eqref{eq:smooth-complex} is exact
on a 3D contractible domain. We assume throughout that our domain
$\Omega$ is contractible. To give an indication of the aforementioned
complications, first note that the techniques leading up to those
summarized in~\cite{arnold2021complexes} showed how the BGG
construction can be extended beyond smooth complexes
like~\eqref{eq:smooth-complex}. For example, applying the BGG
procedure to de Rham complexes of Sobolev spaces
$H^s \equiv H^s(\Omega)$, the authors of~\cite{arnold2021complexes} arrived at the
following elasticity complex of Sobolev spaces:
\begin{equation}
\label{eq:6}
\begin{tikzcd}[]
\mathsf{R} \arrow{r}{\subset}
&
H^s \otimes \mathbb{V} \arrow{r}{\varepsilon}
&
H^{s-1} \otimes \mathbb{S} \arrow{r}{\text{inc}}
&
H^{s-3} \otimes \mathbb{S} \arrow{r}{\text{div}}
&
H^{s-4} \otimes \mathbb{V} \arrow{r}
& 0.
\end{tikzcd}
\end{equation}
However, one of the problems in constructing finite element
subcomplexes of~\eqref{eq:6} is the increase of four orders of
smoothness from the last space ($H^{s-4}$) to the first space
($H^s$). A search for finite element subcomplexes of elasticity complexes with
different Sobolev spaces seemed to hold more
promise~\cite{arnold2008finite}.
It was not until 2020 that the first 3D discrete elasticity subcomplex
was established in~\cite{christiansen2020discrete}. To understand
that work, it is useful to look at it from the perspective of applying
the BGG procedure to a different sequence of Sobolev spaces. Starting
with a Stokes complex, lining up another de Rham complex with
different gradations of smoothness, and applying the BGG procedure,
one gets
\begin{equation}
\label{eq:the-complex}
\begin{tikzcd}[ampersand replacement=\&]
{\mathcal{R}} \arrow{r}{\subset}
\&
H^2\otimes {\mathbb{V}} \arrow{r}{{\varepsilon}}
\&
H({\operatorname{inc}}) \arrow{r}{{\operatorname{inc}}}
\&
H(\mathop{\operatorname{div}}, {\mathbb{S}}) \arrow{r}{\mathop{\operatorname{div}}}
\&
L^2\otimes {\mathbb{V}} \arrow{r}
\&
0.
\end{tikzcd}
\end{equation}
The derivation of~\eqref{eq:the-complex}
is described in more detail in \cite[p.~38--40]{MFO22}.
The key innovation in~\cite{christiansen2020discrete} was the
construction of two sequences of finite element spaces on
which this BGG argument can
be replicated at the discrete level, resulting in a fully discrete
subcomplex of \eqref{eq:the-complex}. These new finite element sequences were
inspired by the ``smoother'' discrete de Rham complexes (smoother than
the classical N\'ed\'elec spaces \cite{nedelec1980mixed}) recently
being produced in a variety of settings \cite{Ex2020, guzman2020exact,
FuGuzman, christiansen2022finite,
christiansen2018Generalized}. Specifically, the 3D discrete complex
in~\cite{christiansen2020discrete} was built on meshes of Alfeld
splits, a particular type of macro element. Soon after the results
of~\cite{christiansen2020discrete} were publicized, Chen and
Huang~\cite{chen2022finite} obtained another 3D discrete elasticity
sequence on general triangulations.
The BGG construction was also applied to obtain discrete tensor product
spaces in~\cite{bonizzoni2023discrete}.
In this paper, we apply the methodology presented
in~\cite{christiansen2020discrete} to construct a new discrete
elasticity sequence on Worsey-Farin splits
\cite{worsey1987n}. One of the expected benefits of using
triangulations of macroelements is the potential reduction of polynomial degree
and the potential escape from the unavoidability~\cite{arnold2008finite} of
vertex degrees of freedom in stress elements.
We will see that Worsey-Farin splits offer
structures where these benefits can be reaped easier than on Alfeld
splits.
Unlike Alfeld splits, which divide each
tetrahedron into four sub-tetrahedra, Worsey-Farin triangulations split each tetrahedron into twelve sub-tetrahedra. Using the
Worsey-Farin split,
we are able to reduce the polynomial degree and also
eliminate vertex and edge DOFs of stress elements.
Previous works have used either quadratics \cite{christiansen2020discrete} or quartics \cite{chen2022finite}
as the lowest polynomial order for the stress spaces. However, our approach results in stress spaces that are
piecewise linear, which is the lowest possible polynomial degree. Furthermore, this approach does not require
vertex or edge DOFs for the stress spaces, making it the first 3D conforming symmetric stress element without vertex DOFs.
This is comparable to the 2D elasticity element without vertex DOFs constructed in \cite{arnold1984family, guzman2014symmetric}.
Although we have the framework in \cite{christiansen2020discrete} to
guide the construction of the discrete complex on Worsey-Farin splits,
as we shall see, we face significant new difficulties peculiar to
Worsey-Farin splits.
The most troublesome of these arises in the construction of DOFs and
corresponding commuting projections. Unlike Alfeld splits,
Worsey-Farin triangulations induce a Clough-Tocher split on each face
of the original, unrefined triangulation. As a result, discrete 2D
elasticity complexes with respect to Clough-Tocher splits play an
essential role in our construction and proofs. These 2D complexes are
more complicated than their analogues on Alfeld splits (where the
faces are not split). The resulting difficulties are most evident
in the design of DOFs for the space before the stress space (named $U_r^1$ later) in the complex, as we shall see
in Lemma~\ref{lem:dofu1}.
The paper is organized as follows. In the next section, we present the
main framework to construct the elasticity sequence, define the construction of Worsey-Farin splits, and state
the definitions and notation used throughout the paper. Section \ref{sec:2dela} gives useful de Rham sequences and
elasticity sequences on Clough-Tocher splits. Section \ref{sec:localcomplex} gives the construction of the discrete elasticity sequence locally on Worsey-Farin splits, together with the dimensions of each space involved. This leads to our main contribution in Section \ref{sec:LocalDOFs}, where we present the degrees of freedom of the discrete spaces in the elasticity sequence along with commuting projections.
We finish the paper with the analogous global discrete elasticity sequence in Section \ref{sec:global} and state some conclusions and future directions in Section \ref{sec:Conclude}.
\section{Preliminaries} \label{sec:Pre}
\subsection{A derived complex from two complexes}
Our strategy to obtain an elasticity sequence uses the framework in \cite{arnold2021complexes}
and utilizes two auxiliary de Rham complexes. In particular, we will use a simplified version of their results found in \cite{christiansen2020discrete}.
Suppose $A_i, B_i$ are Banach spaces, $r_i:A_i \rightarrow A_{i+1}$, $t_i: B_i \rightarrow B_{i+1}$, and $s_i: B_i \rightarrow A_{i+1}$ are bounded linear operators such that the following diagram commutes:
\begin{equation}\label{eqn:seqpattern}
\begin{tikzcd}
A_0 \arrow{r}{r_0} & A_1 \arrow{r}{r_1} & A_2 \arrow{r}{r_2} & A_3 \\
B_0 \arrow{r}{t_0} \arrow[swap]{ur}{s_0} & B_1 \arrow{r}{t_1} \arrow[swap]{ur}{s_1} & B_2 \arrow{r}{t_2} \arrow[swap]{ur}{s_2}& B_3
\end{tikzcd}
\end{equation}
The following recipe for a derived complex, borrowed from \cite[Proposition 2.3]{christiansen2020discrete}, guides the gathering of ingredients for our
construction of the elasticity complex on Worsey-Farin splits.
\begin{prop}\label{prop:exactpattern}
Suppose $s_1:B_1\to A_2$ is a bijection.
\begin{enumerate}
\item If $A_i$ and $B_i$ are exact sequences and the diagram \eqref{eqn:seqpattern} commutes, then the following is an exact sequence:
\begin{equation}
\label{eq:1}
\begin{bmatrix}
A_0 \\
B_0
\end{bmatrix} \xrightarrow{[\begin{smallmatrix}
r_0 & s_0
\end{smallmatrix}]} A_1 \xrightarrow{t_1 \circ s_1^{-1} \circ r_1 } B_2 \xrightarrow{\left[\begin{smallmatrix}
s_2 \\ t_2
\end{smallmatrix}\right]}
\begin{bmatrix}
A_3 \\ B_3
\end{bmatrix}.
\end{equation}
Here the operators $[r_0\ s_0]:\begin{bmatrix}A_0\\B_0\end{bmatrix}\to A_1$
and $\begin{bmatrix}s_2\\ t_2\end{bmatrix}:B_2\to \begin{bmatrix}A_3\\B_3\end{bmatrix}$ are defined, respectively, as
\[
[r_0\ s_0]\begin{bmatrix}a\\b\end{bmatrix} = r_0 a + s_0 b,\qquad
\begin{bmatrix}s_2\\t_2\end{bmatrix}
b = \begin{bmatrix}
s_2 b\\
t_2 b
\end{bmatrix}.
\]
\item For the surjectivity of the last map in~\eqref{eq:1}, namely $\left[\begin{smallmatrix}
s_2 \\ t_2
\end{smallmatrix}\right]$,
it is sufficient that $r_2$ and $t_2$ are surjective, $t_2 \circ t_1=0$, and $s_2t_1 = r_2s_1$.
\end{enumerate}
\end{prop}
\subsection{Construction of Worsey-Farin Splits}\label{subsec:WFconstruct}
For a set of simplices $\mathcal{S}$, we use $\Delta_s(\mathcal{S})$ to denote the set of
$s$-dimensional simplices ($s$-simplices for short) in $\mathcal{S}$.
If $\mathcal{S}$ is a simplicial triangulation of a domain $D$ with boundary, then $\Delta_s^I(\mathcal{S})$ denotes the
subset of $\Delta_s(\mathcal{S})$ consisting of the $s$-simplices that do not belong to the boundary
of the domain. If $S$ is a simplex, then we use the convention $\Delta_s(S) = \Delta_s(\{S\})$. For a non-negative integer $r$, we use $\mathcal{P}_r(S)$ to denote the space of polynomials of degree $\leq r$ on $S$,
and we define
\begin{align*}
\mathcal{P}_r(\mathcal{S}) = \prod_{S\in \mathcal{S}} \mathcal{P}_r(S), \quad L^2_0(D):=\{q \in L^2(D): \int_D q~ dx = 0\}, \quad \text{with } D = \bigcup_{S\in \mathcal{S}} \bar S.
\end{align*}
Let $\Omega\subset \mathbb{R}^3$ be a contractible
polyhedral domain, and let $\{\mathcal{T}_h\}$ be a family of shape-regular
and simplicial triangulations of $\Omega$.
The Worsey-Farin refinement of $\mathcal{T}_h$, denoted by
$\mathcal{T}^{wf}_h$, is obtained by
splitting each $T \in \mathcal{T}_h$ by the following two steps (cf.~\cite[Section 2]{Ex2020}):
\begin{enumerate}
\item Connect the incenter $z_T$ of $T$
to its (four) vertices.
\item For each face $F$ of $T$ choose $m_{\scriptscriptstyle{F}} \in {\rm int}(F)$. We then connect $m_{\scriptscriptstyle{F}}$ to the three vertices of $F$ and to the incenter $z_T$.
\end{enumerate}
Note that the first step is an Alfeld-type refinement of $T$ with respect to the incenter \cite{christiansen2020discrete}.
We denote the local mesh of the Alfeld-type refinement by $T^a$, which consists of four tetrahedra.
The choice of the point $m_{\scriptscriptstyle{F}}$ in the second step needs to follow specific rules: for each interior face $F = \overline{T_1} \cap \overline{T_2}$ with $T_1$,
$T_2 \in \mathcal{T}_h$, let $m_{\scriptscriptstyle{F}} = L \cap F$ where $L = [{z_{T_1},z_{T_2}}]$, the line segment connecting the incenters of $T_1$ and $T_2$; for a boundary face $F$ with $F = \overline{T} \cap \partial \Omega$ with $T\in \mathcal{T}_h$, let $m_{\scriptscriptstyle{F}}$ be the barycenter of $F$. The fact that such a $m_{\scriptscriptstyle{F}}$ exists is established in \cite[Lemma 16.24]{lai2007spline}.
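
To make the construction concrete, the following minimal sketch (our own illustration in Python/NumPy, not taken from any cited implementation) computes the incenter $z_T$ of a tetrahedron and the split point $m_{\scriptscriptstyle{F}}$ of an interior face shared by two tetrahedra; for a boundary face one would instead take the barycenter of $F$.
\begin{verbatim}
import numpy as np

def face_area(p, q, r):
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

def incenter(verts):                       # verts: 4 x 3 array of vertices
    # weight each vertex by the area of the opposite face
    areas = np.array([face_area(*verts[[j for j in range(4) if j != i]])
                      for i in range(4)])
    return areas @ verts / areas.sum()

def split_point(z1, z2, face):             # face: 3 x 3 array (vertices of F)
    # intersection of the segment [z1, z2] with the plane containing F
    n = np.cross(face[1] - face[0], face[2] - face[0])
    t = np.dot(n, face[0] - z1) / np.dot(n, z2 - z1)
    return z1 + t * (z2 - z1)
\end{verbatim}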
For $T\in \mathcal{T}_h$, we denote by $T^{wf}$ the local
Worsey-Farin mesh induced by the global refinement $\mathcal{T}_h^{wf}$, i.e.,
\[
T^{wf} = \{K\in \mathcal{T}_h^{wf}:\ \bar K\subset \bar T\}.
\]
For any face $F \in \Delta_2(\mathcal{T}_h)$, the refinement $\mathcal{T}_h^{wf}$ induces a Clough-Tocher triangulation of $F$, i.e.,
a two-dimensional triangulation consisting of three triangles, each having
the common vertex $m_{\scriptscriptstyle{F}}$; we denote this set of three triangles by $F^{ct}$.
We then define
\[
\mathcal{E}(\mathcal{T}_h^{wf}) = \bigcup_{F \in \Delta_2^I(\mathcal{T}_h)} \Delta_1^I(F^{ct})
\]
to be the set of all interior edges of the Clough-Tocher refinements
in the global mesh.
For a tetrahedron $T \in \mathcal{T}_h$ and face $F \in \Delta_2(T)$, we denote by $n_{\scriptscriptstyle{F}} := n|_{\scriptscriptstyle{F}}$ the outward unit normal of $\partial T$ restricted to $F$. Consider the triangulation $F^{\ensuremath\mathop{\mathrm{ct}\,}}$ of $F$ with three triangles labeled as $Q_i$, $i =1,2,3$. Let $e = \partial Q_1 \cap \partial Q_2$ and $t_e$ be the unit vector tangent to $e$ pointing away from $m_{\scriptscriptstyle{F}}$. Then the jump of $p \in \mathcal{P}_r(T^{wf})$ across $e$ is defined as
\[
\jmp{p}_{e} = (p|_{Q_1}-p|_{Q_2})s_e,
\]
where $s_e = n_{\scriptscriptstyle{F}} \times t_e$ is a unit vector orthogonal to $t_e$ and $n_{\scriptscriptstyle{F}}$. In addition, let $f$ be the internal face of $T^{wf}$ that has $e$ as an edge. Now let $n_f$ be a unit-normal to $f$ and set $t_s = n_f \times t_e$ to be a tangential unit vector on the internal face $f$.
Let $T_1$ and $T_2$ be two adjacent tetrahedra in $\mathcal{T}_h$ that share a face $F$, and let $Q_i$, $i =1,2,3$ denote three triangles in the set $F^{\ensuremath\mathop{\mathrm{ct}\,}}$. Let $e = \partial Q_1 \cap \partial Q_2$, and for a piecewise smooth function defined on
$T_1 \cup T_2$, we define
\begin{equation}\label{eqn:thetaEDef}
\theta_e(p) = p|_{\partial T_1 \cap Q_1}-p|_{\partial T_1 \cap Q_2}+p|_{\partial T_2 \cap Q_2}-p|_{\partial T_2 \cap Q_1},~~~~~~ \text{on~} e.
\end{equation}
Note that $\theta_e(p)=0$ if and only if $\jmp{p|_{T_1}}_{e}=\jmp{p|_{T_2}}_{e}$.
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Fct.png}
\caption{A representation of $F^{ct}$ and $\Delta_1^I(F^{ct})$ (indicated in blue).}
\label{fig:Fct}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Macro.png}
\caption{The original triangulation}
\label{fig:Macro}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{WF.png}
\caption{Worsey-Farin refinement}
\label{fig:WF2}
\end{subfigure}
\caption{The Worsey-Farin Splits}
\label{fig:three graphs}
\end{figure}
\subsection{Differential identities involving matrix and vector fields}\label{subsec:Matrix}
We adopt the notation used in \cite{christiansen2020discrete}. Let $F \in \Delta_2(T)$, and recall $n_{\scriptscriptstyle{F}}$ is the unit normal vector
pointing out of $T$. Fix two tangent vectors $t_1, t_2$
such that the ordered set
$(b_1, b_2, b_3) = (t_1, t_2, n_{\scriptscriptstyle{F}})$ is an
orthonormal right-handed basis of $\mathbb{R}^3$. Any matrix field
$u: T \to \mathbb{R}^{3 \times 3}$ can be written as
$ \sum_{i, j=1}^3 u_{ij}b_i b_j'$ with scalar components
$u_{ij}: T \to \mathbb{R}$. Let $ u_{nn} = n_{\scriptscriptstyle{F}}' u n_{\scriptscriptstyle{F}}$ and
$ {\rm tr}_{\scriptscriptstyle{F}} u = \sum_{i=1}^2 t_i' u t_i.$
With $s\in \mathbb{R}^3$, let
\begin{equation}
\label{eq:11}
u_{{\scriptscriptstyle{F\!F}}} =\sum_{i, j = 1}^2 u_{ij}t_i t_j',
\qquad
u_{F s}= \sum_{i=1}^2 ( s'ut_i) t_i',
\qquad
u_{sF}= \sum_{i=1}^2 (t_i' u s) t_i.
\end{equation}
Equivalently,
$u_{{\scriptscriptstyle{F\!F}}} = Qu Q$, $u_{F s} = s' uQ$, and $u_{s F} = Q u s$, where
$P = n_{\scriptscriptstyle{F}} n_{\scriptscriptstyle{F}}'$ and $Q = I-P$.
Next, for scalar-valued (component) functions
$\phi, w_i, q_i$ and $u_{ij}$, we write the standard surface operators as
\begin{align*}
& {\rm grad}_{\scriptscriptstyle{F}} \phi = (\partial_{t_1} \phi)t_1 +(\partial_{t_2}\phi) t_2,
&
{\rm grad}_{\scriptscriptstyle{F}} (w_1t_1 + w_2 t_2) = t_1 ({\rm grad}_{\scriptscriptstyle{F}} w_1)' + t_2({\rm grad}_{\scriptscriptstyle{F}} w_2)',&
\\
&{\rm rot}_{\scriptscriptstyle{F}} \phi =(\partial_{t_2}\phi) t_1 - (\partial_{t_1} \phi) t_2,
&
{\rm rot}_{\scriptscriptstyle{F}} (q_1t_1' + q_2 t_2') = t_1 ({\rm rot}_{\scriptscriptstyle{F}} q_1)' + t_2({\rm rot}_{\scriptscriptstyle{F}} q_2)',&
\\
& {\rm curl}_{\scriptscriptstyle{F}} (w_1 t_1+w_2 t_2) = \partial_{t_1} w_2 -\partial_{t_2} w_1,
& {\rm curl}_{\scriptscriptstyle{F}} u_{{\scriptscriptstyle{F\!F}}}
= t_1' \,{\rm curl}_{\scriptscriptstyle{F}} (u_{F t_1})' + t_2'\,{\rm curl}_{\scriptscriptstyle{F}} (u_{F t_2})', & \\
& {\rm div}_{\scriptscriptstyle{F}} (w_1 t_1+w_2 t_2) = \partial_{t_1} w_1 + \partial_{t_2} w_2,
& {\rm div}_{\scriptscriptstyle{F}} u_{{\scriptscriptstyle{F\!F}}}
= t_1' \,{\rm div}_{\scriptscriptstyle{F}} (u_{F t_1})' + t_2'\,{\rm div}_{\scriptscriptstyle{F}} (u_{F t_2})'. &
\end{align*}
For a vector function $v$, denote
$v_{\scriptscriptstyle{F}} = Q v = n_{\scriptscriptstyle{F}} \times (v \times n_{\scriptscriptstyle{F}})$. It is easy to see that
\begin{equation}
\label{eq:10}
\begin{aligned}
n_{\scriptscriptstyle{F}} \cdot {\rm curl}\, v &= {\rm curl}\,_{\scriptscriptstyle{F}} v_{\scriptscriptstyle{F}}, \quad
&&({\rm grad} \, v)_{{\scriptscriptstyle{F\!F}}} = {\rm grad}_{\scriptscriptstyle{F}} v_{\scriptscriptstyle{F}},\\
\quad n_{\scriptscriptstyle{F}} \times {\rm rot}_{\scriptscriptstyle{F}} \phi &= {\rm grad}_{\scriptscriptstyle{F}} \phi, \quad &&{\rm div}\, v_{\scriptscriptstyle{F}} = {\rm div}_{\scriptscriptstyle{F}} v_{\scriptscriptstyle{F}}.
\end{aligned}
\end{equation}
\begin{definition}\label{def:Vperp}
For a tangential vector function $v$ on the face $F \in \Delta_2(T)$,
write $v = \sum\limits_{i=1}^2 v_i t_i'$ with $v_i = v \cdot t_i$.
We define the orthogonal complement of $v$ as
\[
v^\perp = v_2 t_1 - v_1 t_2.
\]
\end{definition}
Using this definition and the standard surface operators introduced above, it is easy to see the following identities:
\begin{equation}\label{iden:Vperp}
{\rm div}_{\scriptscriptstyle{F}} v^\perp = {\rm curl}_{\scriptscriptstyle{F}} v', \quad v^\perp \cdot t_e = v \cdot s_e,\quad v^{\perp} = v \times n_{\scriptscriptstyle{F}}.
\end{equation}
We also define the space of rigid body displacements within $\mathbb{R}^3$ and the face $F$:
\begin{alignat}{1}
\label{eqn:rigid1}
\mathsf{R} & =\{a+b \times x: a, b \in \mathbb{R}^3\} \\
\label{eqn:rigid2}
\mathsf{R}(F)& = \{a t_1 + b t_2 + c((x \cdot t_1)t_2 - (x \cdot t_2)t_1): a, b, c \in \mathbb{R}\}.
\end{alignat}
\begin{definition}\label{def:maps}\
Set $\mathbb{V} = \mathbb{R}^3$, $\mathbb{V}_2 = \mathbb{R}^2$ and $\mathbb{M}_{k \times k} = \mathbb{R}^{k\times k}$.
\begin{enumerate}
\item The skew-symmetric operator ${\rm skw}: \mathbb{M}_{k \times k} \rightarrow \mathbb{M}_{k \times k}$ and the symmetric operator ${\rm sym}: \mathbb{M}_{k \times k} \rightarrow \mathbb{M}_{k \times k}$ are defined as follows: for any $M \in \mathbb{M}_{k \times k}$,
\[
{\rm skw}(M) = \frac{1}{2}(M-M'); \quad {\rm sym}(M) = \frac{1}{2}(M+M').
\]
Denote the range of ${\rm skw}$ and ${\rm sym}$ as $\mathbb{K}_k = {\rm skw}(\mathbb{M}_{k \times k})$ and $\mathbb{S}_k = {\rm sym}(\mathbb{M}_{k \times k})$, respectively.
\item Define the operator $\Xi : \mathbb{M}_{3 \times 3} \rightarrow \mathbb{M}_{3 \times 3}$ by $\Xi M = M'-{\rm tr} (M) \mathbb{I}$, where $\mathbb{I}$ is the $3\times 3$
identity matrix.\medskip
\item The three-dimensional symmetric gradient
and incompatibility operators
are given, respectively, by:
\[
\varepsilon = {\rm sym} \, {\rm grad}, \quad {\rm inc}\, = {\rm curl} ({\rm curl})'.
\]
\item The operators ${\rm mskw}\,: \mathbb{V} \rightarrow \mathbb{K}_3$
and ${\rm vskw}:\mathbb{M}_{3\times 3}\to \mathbb{V}$ are given by
\[{\rm mskw}\, \begin{pmatrix}
v_1 \\
v_2 \\
v_3
\end{pmatrix} = \begin{pmatrix}
0 & -v_3 & v_2 \\
v_3 & 0 & -v_1 \\
-v_2 & v_1 & 0
\end{pmatrix},\qquad {\rm vskw} := {\rm mskw}\,^{-1} \circ {\rm skw}.
\]
\item The two-dimensional surface differential operators on a face $F$
are given by
\[
\varepsilon_{\scriptscriptstyle{F}} = {\rm sym} \, {\rm grad}_{\scriptscriptstyle{F}}, \quad
{\rm airy}_\F = {\rm rot}_{\scriptscriptstyle{F}}({\rm rot}_{\scriptscriptstyle{F}})', \quad
{\rm inc}_\F := {\rm curl}_F({\rm curl}_F)'.
\]
\item The two-dimensional skew operator
defined on either a scalar or matrix-valued function
is defined, respectively, as
\[
{\rm skew} \, u =
\begin{bmatrix}
0 & u \\
-u & 0
\end{bmatrix}; \quad
{\rm skew} \, \begin{bmatrix}
u_{11} & u_{12} \\
u_{21} & u_{22}
\end{bmatrix} = u_{21} - u_{12}.
\]
\item The transpose operator $\tau$ is defined as: $\tau \, u = u'$.
\end{enumerate}
\end{definition}
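For orientation, we note that a direct computation gives, for any $v, w \in \mathbb{V}$,
\[
{\rm mskw}\,(v)\, w = v \times w ,
\qquad
{\rm vskw}\,({\rm mskw}\, v) = v,
\]
so ${\rm vskw}$ extracts the axial vector of the skew-symmetric part of a matrix.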
It is simple to see that $\Xi$ is invertible
with $\Xi^{-1} M = M'-\frac12 {\rm tr}(M)\mathbb{I}$.
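Indeed, since ${\rm tr}(\Xi M)= {\rm tr}(M) - 3\,{\rm tr}(M) = -2\,{\rm tr}(M)$, we have
\[
\Xi^{-1}(\Xi M) = (\Xi M)' - \tfrac12\,{\rm tr}(\Xi M)\,\mathbb{I}
= \big(M - {\rm tr}(M)\,\mathbb{I}\big) + {\rm tr}(M)\,\mathbb{I} = M .
\]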
Furthermore, the following fundamental identities hold:
\begin{subequations}
\begin{alignat}{1}
\label{eqn:iden1}
& {\rm div}\, \Xi = 2{\rm vskw} \, {\rm curl}\,, \\
\label{eqn:iden2}
& \Xi {\rm grad} = - {\rm curl}\, \, {\rm mskw}\,, \\
\label{eqn:iden3}
& {\rm \bcurl \Xi^{-1} \bcurl} {\rm mskw}\, = - {\rm curl}\, \Xi^{-1} \Xi {\rm grad} = -{\rm curl}\, {\rm grad} = 0, \\
\label{eqn:iden4}
& 2\, {\rm vskw} \, {\rm \bcurl \Xi^{-1} \bcurl} = {\rm div}\, \Xi \Xi^{-1} {\rm curl}\, = {\rm div}\, {\rm curl}\, = 0, \\
\label{eqn:iden5}
& \text{tr}({\rm curl}\, {\rm sym})=0, \quad {\rm \bcurl \Xi^{-1} \bcurl} {\rm sym} = {\rm curl}({\rm curl} {\rm sym})' = {\rm inc}\, {\rm sym}.
\end{alignat}
\end{subequations}
On a two-dimensional face $F$, the following identities also hold:
\begin{subequations}
\begin{alignat}{1}
\label{2didenela1}
&{\rm div}_{\scriptscriptstyle{F}} \, {\rm airy}_\F = {\rm div}_{\scriptscriptstyle{F}} \, {\rm rot}_{\scriptscriptstyle{F}}\, \tau\,({\rm rot}_{\scriptscriptstyle{F}}) = 0, \\
\label{2didenela2}
&{\rm inc}_\F \, {\rm sym} = {\rm inc}_\F, \quad
{\rm inc}_\F \, \varepsilon_{\scriptscriptstyle{F}} = {\rm curl}_{\scriptscriptstyle{F}} \, \tau\, {\rm curl}_{\scriptscriptstyle{F}} {\rm grad}_{\scriptscriptstyle{F}} = 0, \\
\label{2didenela3}
&{\rm curl}_{\scriptscriptstyle{F}} \, {\rm skew} = \tau \, {\rm grad}_{\scriptscriptstyle{F}}.
\end{alignat}
\end{subequations}
The following lemma states
additional identities used throughout the paper.
Its proof is found in
\cite[Lemma 5.7]{christiansen2020discrete}.
\begin{lemma}\label{lem:iden}
For a sufficiently smooth matrix-valued function $u$,
\begin{subequations} \label{id}
\begin{alignat}{1}
\label{curlid}
s' \,({\rm curl}\, u) \,n_{\scriptscriptstyle{F}} \, & = {\rm curl}_{\scriptscriptstyle{F}} (u_{{\scriptscriptstyle{F}} s})', \text{ for any } s
\in \mathbb{R}^3,
\\
\label{id4}
\big[({\rm curl}\, u)'\big]_{{\F n}}& = {\rm curl}_{\scriptscriptstyle{F}} u_{{\scriptscriptstyle{F\!F}}}.
\intertext{If in addition $u$ is symmetric, then}
\label{id1}
({\rm inc}\, u )_{nn}
&= {\rm inc}_\F \, u_{\scriptscriptstyle{F\!F}},
\\
\label{id2}
({\rm inc}\, u)_{{\F n}} &= {\rm curl}_{\scriptscriptstyle{F}}\big[({\rm curl}\, u)'\big]_{{\scriptscriptstyle{F\!F}}},
\\
\label{id3}
{\rm tr}_{\scriptscriptstyle{F}} {\rm curl}\, u
& =-{\rm curl}_{\scriptscriptstyle{F}} (u_{{\F n}})'.
\intertext{For a sufficiently smooth vector-valued function $v$,}
2 ({\rm curl}\, \varepsilon (v))'
& = {\rm grad} \, {\rm curl}\, v \label{more1},
\\
2\left[({\rm curl}\, \varepsilon (v))'\right]_{{\scriptscriptstyle{F\!F}}}
& = {\rm grad}_{\scriptscriptstyle{F}} ({\rm curl}\, v)_{\scriptscriptstyle{F}} \label{more2},
\\
\label{more3}
{\rm curl}\, v
& = n_{\scriptscriptstyle{F}} ({\rm curl}_{\scriptscriptstyle{F}} v_{\scriptscriptstyle{F}}) + {\rm rot}_{\scriptscriptstyle{F}} (v \cdot n_{\scriptscriptstyle{F}}) + n_{\scriptscriptstyle{F}} \times \partial_n v,
\\
\label{more4}
2[\varepsilon(v)]_{n {\scriptscriptstyle{F}}} & = 2 [\varepsilon(v)_{{\F n}}]'
=
{\rm grad}_{\scriptscriptstyle{F}} (v\cdot n_{\scriptscriptstyle{F}})
+ \partial_n v_{\scriptscriptstyle{F}},
\\
\label{more5}
{\rm tr}_{\scriptscriptstyle{F}} ({\rm rot}_{\scriptscriptstyle{F}} v_{\scriptscriptstyle{F}}') & = {\rm curl}\,_{\scriptscriptstyle{F}} v_{\scriptscriptstyle{F}}.
\end{alignat}
\end{subequations}
\end{lemma}
\subsection{Hilbert spaces}\label{subsec:Hilbert spaces} We summarize the definitions of Hilbert spaces which we use to define the discrete spaces.
For any $T \in \mathcal{T}_h$, we commonly use $\mathring{(\cdot)}$ to denote the corresponding spaces with vanishing traces; see the following two examples:
\begin{align*}
\mathring{H}({\rm div}, T): = \{v \in H({\rm div}, T): v \cdot n|_{\partial T} = 0\}, \quad \mathring{H}({\rm curl}\,, T): = \{v \in H({\rm curl}\,, T): v \times n|_{\partial T} = 0\}.
\end{align*}
In addition, for any face $F \in \Delta_2(T)$ with $T \in \mathcal{T}_h$, we define the following spaces by using surface operators in Section \ref{subsec:Matrix}:
\begin{alignat*}{3}
&H({\rm div}_{\scriptscriptstyle{F}}, F) := \{v \in [L^2(F)]^2: {\rm div}_{\scriptscriptstyle{F}} v \in L^2(F)\},
\mathring{H}({\rm div}_{\scriptscriptstyle{F}}, F): = \{v \in H({\rm div}_{\scriptscriptstyle{F}}, F): v \cdot s|_{\partial F} = 0\}, \\
&H({\rm curl}_{\scriptscriptstyle{F}}, F) := \{v \in [L^2(F)]^2: {\rm curl}_F v \in L^2(F)\},
\mathring{H}({\rm curl}_{\scriptscriptstyle{F}}, F): = \{v \in H({\rm curl}_{\scriptscriptstyle{F}}, F): v \cdot t|_{\partial F} = 0\},\\
&H({\rm grad}_{\scriptscriptstyle{F}}, F) := \{v \in L^2(F): {\rm grad}_F v \in L^2(F)\},
\mathring{H}({\rm grad}_{\scriptscriptstyle{F}}, F): = \{v \in H({\rm grad}_{\scriptscriptstyle{F}}, F): v|_{\partial F} = 0\},
\end{alignat*}
where $s$ denotes the outward unit normal of $\partial F$ and $t$ denotes the unit tangential of $\partial F$.
\section{Discrete complexes on Clough-Tocher splits}\label{sec:2dela}
Recall a Worsey-Farin split
of a tetrahedron induces a Clough-Tocher
split on each of its faces.
As a result, to construct
degrees of freedom and commuting projections
for discrete three-dimensional elasticity complexes on Worsey-Farin splits,
we first derive two-dimensional discrete elasticity complexes
on Clough-Tocher splits. Throughout this section,
$F\in \Delta_2(\mathcal{T}_h)$ is a face
of the (unrefined) triangulation $\mathcal{T}_h$,
and $F^{ct}$ denotes its Clough-Tocher refinement
with respect to the split point $m_F$ (arising from the
Worsey-Farin refinement of $\mathcal{T}_h$).
\subsection{de Rham complexes}
As an intermediate step to derive
elasticity complexes on $F^{ct}$,
we first state several discrete de Rham complexes
with various levels of smoothness.
We first define the
N\'ed\'elec spaces (without and with boundary conditions)
on the Clough--Tocher split:
\begin{alignat*}{3}
&V_{{\rm div}, r}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}}):= \{ v \in H({\rm div}_{\scriptscriptstyle{F}}, F): v|_Q \in [\mathcal{P}_r(F)]^2, \forall Q \in F^{\ensuremath\mathop{\mathrm{ct}\,}} \}, \quad && \mathring{V}^1_{{\rm div},r}(F^{\ensuremath\mathop{\mathrm{ct}\,}}) := V_{{\rm div}, r}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})\cap
\mathring{H}({\rm div}_{\scriptscriptstyle{F}}, F)\\
&V_{{\rm curl}, r}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}}):= \{ v \in H({\rm curl}_{\scriptscriptstyle{F}},F): v|_Q \in {[\mathcal{P}_r(F)]^2}, \forall Q \in F^{\ensuremath\mathop{\mathrm{ct}\,}} \},\quad && \mathring{V}^1_{{{\rm curl}},r}(F^{\ensuremath\mathop{\mathrm{ct}\,}}) := V_{\rm curl, r}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})\cap
\mathring{H}({{\rm curl}}_{\scriptscriptstyle{F}}, F),\\
&{V}_r^2(F^{\ensuremath\mathop{\mathrm{ct}\,}}):=\{ v \in L^2(F): v|_Q \in \mathcal{P}_r(F), \forall Q \in F^{\ensuremath\mathop{\mathrm{ct}\,}}\},\quad &&\mathring{V}_r^2(F^{\ensuremath\mathop{\mathrm{ct}\,}}):=V_r^2(F^{\ensuremath\mathop{\mathrm{ct}\,}})\cap L^2_0(F),
\end{alignat*}
and the Lagrange spaces,
\begin{alignat*}{3}
&\mathsf{L}_r^0(F^{\ensuremath\mathop{\mathrm{ct}\,}}) := V_r^2(F^{\ensuremath\mathop{\mathrm{ct}\,}})\cap H({\rm grad}_{\scriptscriptstyle{F}}, F),\quad && \mathring{\mathsf{L}}_r^0(F^{\ensuremath\mathop{\mathrm{ct}\,}}) := \mathsf{L}_r^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})\cap \mathring{H}({\rm grad}_{\scriptscriptstyle{F}}, F),\\
&\mathsf{L}_r^1(F^{\ensuremath\mathop{\mathrm{ct}\,}}) := [\mathsf{L}_r^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})]^2,\quad && \mathring{\mathsf{L}}_r^1(F^{\ensuremath\mathop{\mathrm{ct}\,}}) := [\mathring{\mathsf{L}}_r^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})]^2,\\
&\mathsf{L}_r^2(F^{\ensuremath\mathop{\mathrm{ct}\,}}) := \mathsf{L}_r^0(F^{\ensuremath\mathop{\mathrm{ct}\,}}),\quad && \mathring{\mathsf{L}}_r^2(F^{\ensuremath\mathop{\mathrm{ct}\,}}) := \mathring{\mathsf{L}}_r^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})\cap L^2_0(F).
\end{alignat*}
Finally, we define the (smooth) piecewise polynomial
subspaces with $C^1$ continuity.
\begin{alignat*}{1}
&S_r^0(F^{\ensuremath\mathop{\mathrm{ct}\,}}) :=\{v\in \mathsf{L}_r^0(F^{\ensuremath\mathop{\mathrm{ct}\,}}):\ {\rm grad}_{\scriptscriptstyle{F}}\,v \in {\mathsf{L}_{r-1}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})} \},\\
&\mathring{S}_r^0(F^{\ensuremath\mathop{\mathrm{ct}\,}}) :=\{v\in \mathring{\mathsf{L}}_r^0(F^{\ensuremath\mathop{\mathrm{ct}\,}}):\ {\rm grad}_{\scriptscriptstyle{F}}\,v \in {\mathring{\mathsf{L}}_{r-1}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})} \},\\
& \mathcal{R}_r^0(F^{\ensuremath\mathop{\mathrm{ct}\,}}) := \{v \in S_r^0(F^{\ensuremath\mathop{\mathrm{ct}\,}}) : v|_{\partial F} = 0\}.
\end{alignat*}
Several combinations of these spaces form exact sequences,
as summarized in the following theorem.
\begin{thm}\label{2dseqs}
Let $r \geq 1$. The following sequences are exact \cite{arnold1992quadratic,FuGuzman}.
\begin{subequations}
\begin{alignat}{4}
&\mathbb{R}\
{\xrightarrow{\hspace*{0.5cm}}}\
{{\mathsf{L}}}_{r}^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm grad}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
{{V}}_{{\rm curl},r-1}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm curl}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
{{V}}_{r-2}^2(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\xrightarrow{\hspace*{0.5cm}}\
0,\label{alfseq1}\\
%
&\mathbb{R}\
{\xrightarrow{\hspace*{0.5cm}}}\
{S}_{r}^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm grad}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
{\mathsf{L}}_{r-1}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm curl}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
{V}_{r-2}^2(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\xrightarrow{\hspace*{0.5cm}}\
0,\label{alfseq2}\\
&0\
{\xrightarrow{\hspace*{0.5cm}}}\
{\mathring{\mathsf{L}}}_{r}^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm grad}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
{\mathring{V}}_{{\rm curl},r-1}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm curl}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
{\mathring{V}}_{r-2}^2(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\xrightarrow{\hspace*{0.5cm}}\
0,\label{2dbdryseq1}\\
%
&0\
{\xrightarrow{\hspace*{0.5cm}}}\
\mathring{S}_{r}^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm grad}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
\mathring{\mathsf{L}}_{r-1}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm curl}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
\mathring{V}_{r-2}^2(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\xrightarrow{\hspace*{0.5cm}}\
0.\label{2dbdryseq2}
%
\end{alignat}
\end{subequations}
\end{thm}
Theorem \ref{2dseqs} has an alternate form that follows from a rotation of the coordinate axes, where the operators ${\rm grad}_F$ and ${\rm curl}_F$ are replaced by ${\rm rot}_F$ and ${\rm div}_F$, respectively.
\begin{cor}\label{cor:rotdiv}
Let $r \geq 1$. The following sequences are exact \cite{arnold1992quadratic,FuGuzman}.
\begin{subequations}
\begin{alignat}{4}
&\mathbb{R}\
{\xrightarrow{\hspace*{0.5cm}}}\
{{\mathsf{L}}}_{r}^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm rot}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
{{V}}_{{\rm div},r-1}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm div}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
{{V}}_{r-2}^2(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\xrightarrow{\hspace*{0.5cm}}\
0,\label{altalfseq1}\\
%
&\mathbb{R}\
{\xrightarrow{\hspace*{0.5cm}}}\
{S}_{r}^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm rot}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
{\mathsf{L}}_{r-1}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm div}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
{V}_{r-2}^2(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\xrightarrow{\hspace*{0.5cm}}\
0,\label{altalfseq2}\\
&0\
{\xrightarrow{\hspace*{0.5cm}}}\
{\mathring{\mathsf{L}}}_{r}^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm rot}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
{\mathring{V}}_{{\rm div}\,,r-1}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm div}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
{\mathring{V}}_{r-2}^2(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\xrightarrow{\hspace*{0.5cm}}\
0,\label{alt2dbdryseq1}\\
%
&0\
{\xrightarrow{\hspace*{0.5cm}}}\
\mathring{S}_{r}^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm rot}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
\mathring{\mathsf{L}}_{r-1}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm div}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
\mathring{V}_{r-2}^2(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\xrightarrow{\hspace*{0.5cm}}\
0.\label{alt2dbdryseq2}
\end{alignat}
\end{subequations}
\end{cor}
\subsection{Elasticity complexes}
In order to construct elasticity sequences in three dimensions,
we need some elasticity complexes on the two-dimensional Clough-Tocher splits.
The main results of this section are very similar to the ones found in \cite{christiansen2022finite} (with slightly different spaces) and can be proved with the techniques there. However,
to be self-contained, we provide the main proofs.
The two-dimensional elasticity complexes
utilize the following spaces:
\begin{subequations}
\begin{alignat}{1}
& \mathring{Q}_{{\rm inc},r}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}}) :=\{v\in \mathring{\mathsf{L}}_r^1(F^{\ensuremath\mathop{\mathrm{ct}\,}}) \otimes \mathbb{V}_2:\ {\rm curl}_{\scriptscriptstyle{F}} v \in {\mathring{V}_{{\rm curl},r-1}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})} \}, \\
\label{eqn:Qr0b}
& \mathring{Q}_{{\rm inc},r}^{1,s}(F^{\ensuremath\mathop{\mathrm{ct}\,}}) := \{{\rm sym}(u): u \in \mathring{Q}_{{\rm inc}, r}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}}) \}, \\
\label{eqn:Qr1}
& Q^1_{r} (F^{\ensuremath\mathop{\mathrm{ct}\,}}) := \{u \in V_{{\rm div}, r}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}}) \otimes \mathbb{V}_2: {\rm skew}(u) = 0\}, \\
\label{eqn:Qr1tilde}
& \Tilde{Q}^1_{r} (F^{\ensuremath\mathop{\mathrm{ct}\,}}):= \{u \in \mathsf{L}_{r}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}}) \otimes \mathbb{V}_2: {\rm skew}(u) = 0\} \subset Q^1_{r}(F^{\ensuremath\mathop{\mathrm{ct}\,}}), \\
\label{eqn:Qr2b}
& \mathring{Q}^2_{r} (F^{\ensuremath\mathop{\mathrm{ct}\,}}) := \{u \in V^2_{r}(F^{\ensuremath\mathop{\mathrm{ct}\,}}): u \perp \mathcal{P}_{1}(F)\}.
\end{alignat}
\end{subequations}
We further let $Q_r^{\perp}(F^{\ensuremath\mathop{\mathrm{ct}\,}})$ be the subspace
of $Q_r^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})$ that is $L^2(F)$-orthogonal to $\Tilde{Q}^1_r(F^{\ensuremath\mathop{\mathrm{ct}\,}})$.
We then have $Q_r^1(F^{\ensuremath\mathop{\mathrm{ct}\,}}) = Q_r^\perp(F^{\ensuremath\mathop{\mathrm{ct}\,}})\oplus \tilde{Q}^1_r(F^{\ensuremath\mathop{\mathrm{ct}\,}})$,
and
\begin{equation} \label{eqn:}
\dim Q_r^{\perp}(F^{\ensuremath\mathop{\mathrm{ct}\,}}) = \dim Q^1_{r}(F^{\ensuremath\mathop{\mathrm{ct}\,}}) - \dim \Tilde{Q}^1_{r}(F^{\ensuremath\mathop{\mathrm{ct}\,}}).
\end{equation}
\begin{lemma}[Lemma 5.8 in \cite{christiansen2020discrete}]\label{lem:incFiden}
Let $u$ be a sufficiently smooth matrix-valued function,
and let $\phi$ be a smooth scalar-valued function.
Then there holds the following integration-by-parts identity:
\begin{equation}\label{eqn:incF}
\int_{F} ({\rm inc}_\F \, u) \, \phi = \int_{F} u \colon {\rm airy}_\F(\phi) + \int_{\partial F} ({\rm curl}_{\scriptscriptstyle{F}} u)t \, \phi ds + \int_{\partial F} u t \cdot ({\rm rot}_{\scriptscriptstyle{F}} \phi)'.
\end{equation}
Consequently, if $u \in \mathring{Q}_{{\rm inc},r-1}^{1}(F^{\ensuremath\mathop{\mathrm{ct}\,}})$ is symmetric and $\phi \in \mathcal{P}_1(F)$,
then $\int_{F} ({\rm inc}_\F \, u) \, \phi =0$.
\end{lemma}
The next theorem is the main result
of the section, where exact local discrete elasticity complexes
are presented on Clough-Tocher splits.
Its proof is given in the appendix.
\begin{thm} \label{thm:2delaseq}
Let $r \ge 3$. The following elasticity sequences are exact.
\begin{alignat}{4}
&0 \
{\xrightarrow{\hspace*{0.5cm}}}\
\mathring{{S}}_{r+1}^0(F^{\ensuremath\mathop{\mathrm{ct}\,}}) \otimes \mathbb{V}_2 \
&&\stackrel{{\varepsilon}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
\mathring{Q}_{{\rm inc},r}^{1,s}(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm inc}_\F}{\xrightarrow{\hspace*{0.5cm}}}\
\mathring{Q}^2_{r-2} (F^{\ensuremath\mathop{\mathrm{ct}\,}}) \
&&\xrightarrow{\hspace*{0.5cm}}\
0,\label{elaseqsvenb}
\end{alignat}
\begin{alignat}{4}
&\mathcal{P}_1(F)\
\stackrel{\subset}{\xrightarrow{\hspace*{0.5cm}}}\
{{S}}_{r}^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm airy}_\F}{\xrightarrow{\hspace*{0.5cm}}}\
Q^1_{r-2}(F^{\ensuremath\mathop{\mathrm{ct}\,}})\
&&\stackrel{{\rm div}_{\scriptscriptstyle{F}}}{\xrightarrow{\hspace*{0.5cm}}}\
V^2_{r-3}(F^{\ensuremath\mathop{\mathrm{ct}\,}}) \otimes \mathbb{V}_2 \
&&\xrightarrow{\hspace*{0.5cm}}\
0.\label{elaseqairy}
\end{alignat}
\end{thm}
\subsection{Dimension counts}
We summarize the dimension counts of the
discrete spaces on the Clough-Tocher split in Table
\ref{tab:2DDim}, which will be used in the construction of the
elasticity complex in three dimensions.
These dimensions are mostly found in \cite{Ex2020}
and follow from Theorem~\ref{2dseqs}
and the rank-nullity theorem. Likewise, the dimension
of $Q^1_r(F^{ct})$ follows from Theorem~\ref{thm:2delaseq}.
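As an illustration of how these counts arise, the exactness of \eqref{alfseq1} and the rank-nullity theorem give
\[
\dim V^2_{r-2}(F^{\ensuremath\mathop{\mathrm{ct}\,}})
= \dim V^1_{{\rm curl},r-1}(F^{\ensuremath\mathop{\mathrm{ct}\,}}) - \dim \mathsf{L}^0_{r}(F^{\ensuremath\mathop{\mathrm{ct}\,}}) + 1
= 3r^2 - \tfrac12(3r^2+3r+2) + 1 = \tfrac32\, r(r-1),
\]
which agrees with the entry for $\dim V_r^2(F^{\ensuremath\mathop{\mathrm{ct}\,}})$ in Table~\ref{tab:2DDim} after replacing $r$ by $r-2$.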
\begin{table}[ht]
\caption{\label{tab:2DDim}Dimension counts of the canonical (two--dimensional) N\'ed\'elec, Lagrange, and smooth spaces
with respect to the Clough--Tocher split. Here, $\dim V_{{\rm div},r}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}}) = \dim V_{{\rm curl},r}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}}) =:\dim V_r^1(F^{\ensuremath\mathop{\mathrm{ct}\,}})$}
{\scriptsize
\begin{tabular}{c|ccc}
& $k=0$ & $k=1$ & $k=2$\\
\hline
$\dim V_r^k(F^{\ensuremath\mathop{\mathrm{ct}\,}})$ & $\frac12(3r^2 + 3r + 2)$ & $3(r+1)^2$ & $\frac{3}{2}(r+1)(r+2)$\\
$\dim \mathring{V}_r^k(F^{\ensuremath\mathop{\mathrm{ct}\,}})$ & $ \frac12(3r^2 - 3r + 2)$ & $3r(r+1)$ & $\frac{3}{2}(r+1)(r+2) - 1$\\
$\dim \mathsf{L}_r^k(F^{\ensuremath\mathop{\mathrm{ct}\,}})$ & $ \frac{1}{2}(3r^2 + 3r + 2)$ & $3r^2 + 3r + 2$ & $\frac{1}{2}(3r^2 + 3r + 2)$\\
$\dim \mathring{\mathsf{L}}_r^k(F^{\ensuremath\mathop{\mathrm{ct}\,}})$ & $\frac{1}{2}(3r^2 - 3r + 2)$ & $3r^2 - 3r + 2$ & $\frac{3}{2}r(r-1)$\\
$\dim S_r^k(F^{\ensuremath\mathop{\mathrm{ct}\,}})$ & $ \frac{3}{2}(r^2 - r +2)$ & $3r^2 + 3$ & $\frac{1}{2}(3r^2 + 3r + 2)$\\
$\dim \mathcal{R}_{r}^k(F^{\ensuremath\mathop{\mathrm{ct}\,}})$ & $\frac{3}{2}(r-1)(r-2)$ \cite{ALphd} &
---
&--- \\
$\dim Q_{r}^k(F^{\ensuremath\mathop{\mathrm{ct}\,}})$ & --- &
$\frac{3}{2}(3r^2 +5r+2)$
&---
\end{tabular}
}
\end{table}
\section{Local discrete sequences on Worsey-Farin splits} \label{sec:localcomplex}
\subsection{de Rham complexes}\label{subsec:localDeRhamy}
Similar to the two-dimensional setting in Section \ref{sec:2dela},
the starting point for constructing discrete 3D elasticity complexes
is a collection of de Rham complexes consisting of piecewise polynomial spaces.
The N\'ed\'elec spaces with respect to the local Worsey-Farin split $T^{wf}$
are given as
\begin{alignat*}{2}
&V_r^1(T^{wf}):=[\mathcal{P}_r(T^{wf})]^3 \cap H({\rm curl},T),\qquad
&& \mathring{V}_r^1(T^{wf}):= V_r^1(T^{wf}) \cap \mathring{H}({\rm curl},T),\\
&V_r^2(T^{wf}):=[\mathcal{P}_r(T^{wf})]^3 \cap H({\rm div},T),\qquad
&&\mathring{V}_r^2(T^{wf}):= V_r^2(T^{wf}) \cap \mathring{H}({\rm div},T),\\
&V_r^3(T^{wf}):=\mathcal{P}_r(T^{wf}), &&\mathring{V}_r^3(T^{wf}):=V_r^3(T^{wf})\cap L^2_0(T).
\end{alignat*}
The Lagrange spaces on $T^{wf}$ are defined by
\begin{alignat*}{2}
& \mathsf{L}_{r}^0(T^{wf}) := \mathcal{P}_r(T^{wf}) \cap H^1(T),\qquad
&& \mathring{\mathsf{L}}_{r}^0(T^{wf}) := \mathsf{L}_r^0(T^{wf}) \cap \mathring{H}^1(T), \\
& \mathsf{L}_{r}^1(T^{wf}) := [\mathsf{L}_{r}^0(T^{wf})]^3, && \mathring{\mathsf{L}}_{r}^1(T^{wf}) := [\mathring{\mathsf{L}}_{r}^0(T^{wf})]^3,
\end{alignat*}
and the discrete spaces with additional smoothness
are
\begin{alignat*}{2}
& S_{r}^0(T^{wf}):=\{u \in \mathsf{L}_{r}^0(T^{wf}): {\rm grad} \, u \in \mathsf{L}_{r-1}^1(T^{wf})\},\\
& \mathring{S}_{r}^0(T^{wf}):=\{u \in \mathring{\mathsf{L}}_{r}^0(T^{wf}): {\rm grad} \, u \in \mathring{\mathsf{L}}_{r-1}^1(T^{wf})\}, &&\\
& S_{r}^1(T^{wf}):=\{u \in \mathsf{L}_{r}^1(T^{wf}): {\rm curl}\, u \in \mathsf{L}_{r-1}^1(T^{wf})\},\\
& \mathring{S}_{r}^1(T^{wf}):=\{u \in \mathring{\mathsf{L}}_{r}^1(T^{wf}): {\rm curl}\, u \in \mathring{\mathsf{L}}_{r-1}^1(T^{wf})\}.
\end{alignat*}
We also define the intermediate spaces
\begin{align*}
& {\mathcal{V}}_r^2(T^{wf}):=\{v \in V_r^2(T^{wf}): v \times n \text{ is continuous on each } F \in \Delta_2(T)\}, \\
& \mathring{\mathcal{V}}_r^2(T^{wf}) := \{v \in \mathcal{V}_r^2(T^{wf}): v \cdot n=0 \text{ on each } F \in \Delta_2(T)\}, \\
& {\mathcal{V}}_r^3(T^{wf}):=\{q \in V_r^3(T^{wf}): q \text{ is continuous on each } F \in \Delta_2(T)\}, \\
& \mathring{\mathcal{V}}_r^3(T^{wf}) := \mathcal{V}_r^3(T^{wf}) \cap L_0^2(T),
\end{align*}
and note that
\begin{alignat*}{2}
&S_r^0(T^{wf})\subset \mathsf{L}^0_r(T^{wf}),\qquad
&&S_r^1(T^{wf})\subset \mathsf{L}^1_r(T^{wf})\subset V_r^1(T^{wf}),\\
&\mathsf{L}^2_r(T^{wf})\subset \mathcal{V}_r^2(T^{wf})\subset V_r^2(T^{wf}),\quad
&&\mathcal{V}_r^3(T^{wf})\subset V_r^3(T^{wf}),
\end{alignat*}
with similar inclusions holding for the analogous spaces
with boundary conditions.
The next lemma summarizes
the exactness properties
of several (local) complexes using these spaces.
Its proof is found in
\cite[Theorem 3.1-3.2]{guzman2022exact}.
\begin{lemma}\label{lem:localseq}
The following sequences are exact for any $r \ge 3$.
\begin{subequations}
\begin{equation} \label{eqn:seq0}
\mathbb{R} \xrightarrow{\subset} \mathsf{L}_{r}^0(T^{wf}) \xrightarrow{{\rm grad}} V_{r-1}^1(T^{wf}) \xrightarrow{{\rm curl}\,} V_{r-2}^2(T^{wf}) \xrightarrow{{\rm div}\,} V_{r-3}^3(T^{wf}) \rightarrow 0,
\end{equation}
\begin{equation} \label{eqn:seq0b}
0 \rightarrow \mathring{\mathsf{L}}_{r}^0(T^{wf}) \xrightarrow{{\rm grad}} \mathring{V}_{r-1}^1(T^{wf}) \xrightarrow{{\rm curl}\,} \mathring{V}_{r-2}^2(T^{wf}) \xrightarrow{{\rm div}\,} \mathring{V}_{r-3}^3(T^{wf}) \rightarrow 0,
\end{equation}
\begin{equation} \label{eqn:seq1}
\mathbb{R} \xrightarrow{\subset} S_{r}^0(T^{wf}) \xrightarrow{{\rm grad}} \mathsf{L}_{r-1}^1(T^{wf}) \xrightarrow{{\rm curl}\,} V_{r-2}^2(T^{wf}) \xrightarrow{{\rm div}\,} V_{r-3}^3(T^{wf}) \rightarrow 0,
\end{equation}
\begin{equation} \label{eqn:seq1b}
0 \rightarrow \mathring{S}_{r}^0(T^{wf}) \xrightarrow{{\rm grad}} \mathring{\mathsf{L}}_{r-1}^1(T^{wf}) \xrightarrow{{\rm curl}\,} \mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \xrightarrow{{\rm div}\,} \mathring{V}_{r-3}^3(T^{wf}) \rightarrow 0.
\end{equation}
\begin{equation} \label{eqn:seq2}
\mathbb{R} \xrightarrow{\subset} S_{r}^0(T^{wf}) \xrightarrow{{\rm grad}} S_{r-1}^1(T^{wf}) \xrightarrow{{\rm curl}\,} \mathsf{L}_{r-2}^2(T^{wf}) \xrightarrow{{\rm div}\,} V_{r-3}^3(T^{wf}) \rightarrow 0.
\end{equation}
\begin{equation} \label{eqn:seq2b}
0 \rightarrow \mathring{S}_{r}^0(T^{wf}) \xrightarrow{{\rm grad}} \mathring{S}_{r-1}^1(T^{wf}) \xrightarrow{{\rm curl}\,} \mathring{\mathsf{L}}_{r-2}^2(T^{wf}) \xrightarrow{{\rm div}\,} \mathring{\mathcal{V}}_{r-3}^3(T^{wf}) \rightarrow 0.
\end{equation}
\end{subequations}
\end{lemma}
\subsection{Dimension counts}
The dimensions of the spaces in Section \ref{subsec:localDeRhamy}
are summarized in Table \ref{tab:VLDim}.
These counts follow essentially from Lemma \ref{lem:localseq} and the rank-nullity theorem;
see \cite{guzman2022exact} for details.
\begin{table}[ht]
\caption{\label{tab:VLDim}Dimension counts of the canonical N\'ed\'elec, Lagrange spaces and smoother spaces on a WF split. Here, the superscript $+$ indicates the positive part of the number.}
{\scriptsize
\begin{tabular}{c|cccc}
& $k=0$ & $k=1$ & $k=2$ & $k=3$\\
\hline
$V_r^k(T^{wf})$ & $(2r+1)(r^2+r+1)$ & $2 (r + 1) (3 r^2 + 6 r + 4)$ & $3 (r + 1) (r + 2) (2 r + 3)$ & $2 ( r+1) (r+2) (r+3)$\\
$\mathring{V}_r^k(T^{wf})$ & $(2r-1)(r^2-r+1)$ & $ 2 (r + 1) (3 r^2 + 1)$ & $3 (r+1) (r+2) (2r+1)$ & $2 r^3 + 12 r^2 + 22 r + 11$\\
$\mathsf{L}_r^k(T^{wf})$ & $(2r+1)(r^2+r+1)$ & $3(2r+1)(r^2+r+1)$ & $3(2r+1)(r^2+r+1)$ & $(2r+1)(r^2+r+1)$\\
$ \mathring{\mathsf{L}}_r^k(T^{wf})$ & $(2r-1)(r^2-r+1)$ & $3 (2 r - 1) (r^2 - r + 1)$ & $ 3 (2 r - 1) (r^2 - r + 1)$ & $(r - 1) (2 r^2 - r + 2)$ \\
$\mathring{\mathcal{V}}_r^k(T^{wf})$ & --- & --- & $6r^3 + 21r^2 + 9r + 2$ & {$2r^3 +12r^2+10r+3$}\\
$S_r^k(T^{wf})$ & $ 2r^3-6r^2+10r-2$ & $3r(2r^2-3r+5)$ & $6r^3+8r+2$ & $(2r+1)(r^2+r+1)$\\
$\mathring{S}_r^k(T^{wf})$ & $\big(2(r-2)(r-3)(r-4)\big)^+$ & $\big(3(2r-3)(r-2)(r-3)\big)^+$ & $\big(2(r-2)(3r^2-6r+4)\big)^+$ & $(r-1)(2r^2-r+2)$
\end{tabular}
}
\end{table}
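As a quick consistency check, the formulas in Table \ref{tab:VLDim} are compatible with the exactness of \eqref{eqn:seq0} and \eqref{eqn:seq2}: by the rank-nullity theorem, the alternating sum of the dimensions along each sequence must equal $\dim \mathbb{R}=1$. The short script below (a hypothetical verification sketch assuming the SymPy library; it is not part of the argument in \cite{guzman2022exact}) confirms this symbolically.
\begin{verbatim}
# Hypothetical SymPy check: the alternating dimension sums along (seq0) and
# (seq2) equal dim(R) = 1, as required by exactness and rank-nullity.
from sympy import symbols, expand

r = symbols('r')
L0 = lambda s: (2*s + 1)*(s**2 + s + 1)        # dim L_s^0(T^wf)
V1 = lambda s: 2*(s + 1)*(3*s**2 + 6*s + 4)    # dim V_s^1(T^wf)
V2 = lambda s: 3*(s + 1)*(s + 2)*(2*s + 3)     # dim V_s^2(T^wf)
V3 = lambda s: 2*(s + 1)*(s + 2)*(s + 3)       # dim V_s^3(T^wf)
S0 = lambda s: 2*s**3 - 6*s**2 + 10*s - 2      # dim S_s^0(T^wf)
S1 = lambda s: 3*s*(2*s**2 - 3*s + 5)          # dim S_s^1(T^wf)
L2 = lambda s: 3*(2*s + 1)*(s**2 + s + 1)      # dim L_s^2(T^wf)

print(expand(L0(r) - V1(r-1) + V2(r-2) - V3(r-3)))  # prints 1, sequence (seq0)
print(expand(S0(r) - S1(r-1) + L2(r-2) - V3(r-3)))  # prints 1, sequence (seq2)
\end{verbatim}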
\subsection{Elasticity complex for stresses with weakly imposed symmetry}
In this section we will apply Proposition \ref{prop:exactpattern} to
the de-Rham sequences on Worsey-Farin splits. This gives rise to a
derived complex useful for analyzing mixed methods for elasticity with
weakly imposed stress symmetry. From this intermediate step, an
elasticity sequence with strong symmetry will readily follow.
We start with the following definition and lemma.
\begin{definition} \label{def:mu}
Let $\mu \in \mathring{\mathsf{L}}_1^0(T^{wf})$
be the unique continuous, piecewise linear polynomial that vanishes on $\partial T$ and takes the value $1$ at the incenter of $T$.
\end{definition}
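For concreteness (this explicit description is only illustrative and is not needed in what follows), write $z$ for the incenter of $T$ and let $K = \mathrm{conv}(F,z) \in T^a$ for a face $F \in \Delta_2(T)$. Since $z$ is the only interior vertex of $T^{wf}$, on $K$ the function $\mu$ agrees with the single affine polynomial that vanishes on the plane containing $F$ and equals $1$ at $z$, that is,
\[
\mu|_K = \lambda_z^K,
\]
where $\lambda_z^K$ denotes the barycentric coordinate of $K$ associated with the vertex $z$. In particular, $0 \le \mu \le 1$ on $T$ and $\mu > 0$ in the interior of $T$.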
\begin{lemma} \label{lem:mapproperty}\
\begin{enumerate}
\item The map $\Xi : \mathsf{L}_{r}^1(T^{wf}) \otimes \mathbb{V} \rightarrow \mathsf{L}_{r}^2(T^{wf})\otimes \mathbb{V} $ is a bijection.
\item The following inclusions hold ${\rm vskw} \, (V_{r-2}^2(T^{wf}) \otimes \mathbb{V}) \subset V_{r-2}^3(T^{wf}) \otimes \mathbb{V}$ and \\
${\rm vskw} \, (\mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \otimes \mathbb{V}) \subset \mathcal{V}_{r-2}^3(T^{wf}) \otimes \mathbb{V}$.
\item The mappings ${\rm vskw} : V^2_{r-2}(T^{wf}) \otimes \mathbb{V} \rightarrow V^3_{r-2}(T^{wf}) \otimes \mathbb{V}$ and ${\rm vskw} : \mathring{\mathcal{V}}^2_{r-2}(T^{wf}) \otimes \mathbb{V} \rightarrow \mathcal{V}^3_{r-2}(T^{wf}) \otimes \mathbb{V}$ are both surjective.
\end{enumerate}
\end{lemma}
\begin{proof}
Both (1) and (2) are trivial to verify and hence we only prove (3).
Let $v \in V^3_{r-2}(T^{wf}) \otimes \mathbb{V}$. By the exactness of \eqref{eqn:seq2}, there exists a function $z \in \mathsf{L}_{r-2}^2(T^{wf})\otimes \mathbb{V}$ such that ${\rm div}\, z = v$. Since $\Xi$ is a bijection from $ \mathsf{L}_{r-2}^1(T^{wf}) \otimes \mathbb{V}$ to $\mathsf{L}_{r-2}^2(T^{wf})\otimes \mathbb{V}$, we have $q= \Xi^{-1} z \in \mathsf{L}_{r-2}^1(T^{wf}) \otimes \mathbb{V}$. Thus, by setting
$w = {\rm curl}\, q \in V_{r-2}^2(T^{wf}) \otimes \mathbb{V}$ we obtain
\[
2 {\rm vskw} (w) = 2 {\rm vskw} \, {\rm curl}\, (q)= 2 {\rm vskw} \, {\rm curl}\, (\Xi^{-1} z ) = {\rm div}\, \Xi (\Xi^{-1} z) = v,
\]
where we used \eqref{eqn:iden1}. We conclude
${\rm vskw} : V^2_{r-2}(T^{wf}) \otimes \mathbb{V} \rightarrow V^3_{r-2}(T^{wf}) \otimes \mathbb{V}$ is a surjection.
We now prove the analogous result with boundary condition. Let $v \in \mathcal{V}^3_{r-2}(T^{wf}) \otimes \mathbb{V}$, and let $M \in \mathbb{M}_{3\times 3}$ be a constant matrix
such that $2\, {\rm vskw}\, M = \frac{1}{\int_T \mu}\int_T v$.
Then, by taking $\tilde{w}= \mu M$, we have $\Tilde{w} \in \mathring{\mathcal{V}}_{1}^2(T^{wf}) \otimes \mathbb{V}$ with $\int_{T} 2 {\rm vskw} \Tilde{w} = \int_T v$. Therefore, we have $v-2\,{\rm vskw}(\Tilde{w}) \in \mathring{\mathcal{V}}^3_{r-2}(T^{wf}) \otimes \mathbb{V}$ and the exactness of \eqref{eqn:seq2b}
yields the existence of $z \in \mathring{\mathsf{L}}_{r-1}^2(T^{wf}) \otimes \mathbb{V}$, such that ${\rm div}\, z = v- 2{\rm vskw}(\Tilde{w})$. Let $q = \Xi^{-1} z \in \mathring{\mathsf{L}}_{r-1}^1(T^{wf}) \otimes \mathbb{V}$, and from \eqref{eqn:seq1b}, we have $w:={\rm curl}\,(q) + \Tilde{w} \in \mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \otimes \mathbb{V}$. Finally, using \eqref{eqn:iden1}
\begin{align*}
2{\rm vskw}(w)= 2{\rm vskw} \, {\rm curl}\,(\Xi^{-1}z)+2{\rm vskw}(\Tilde{w}) = {\rm div}\, z +2{\rm vskw}(\Tilde{w}) =v.
\end{align*}
This shows the surjectivity of ${\rm vskw} : \mathring{\mathcal{V}}^2_{r-2}(T^{wf}) \otimes \mathbb{V} \rightarrow \mathcal{V}^3_{r-2}(T^{wf}) \otimes \mathbb{V}$, thus completing the proof.
\end{proof}
Using the complexes \eqref{eqn:seq1}-\eqref{eqn:seq2b} and the two identities \eqref{eqn:iden1}-\eqref{eqn:iden2}, we construct the following commuting diagrams:
{
\begin{equation}\label{eqn:diagram1}
\begin{tikzcd}
S_{r+1}^0(T^{wf}) \otimes \mathbb{V} \arrow{r}{{\rm grad}} & S_{r}^1(T^{wf}) \otimes \mathbb{V} \arrow{r}{{\rm curl}\,} & \mathsf{L}_{r-1}^2(T^{wf}) \otimes \mathbb{V} \arrow{r}{{\rm div}\,} & V_{r-2}^3(T^{wf}) \otimes \mathbb{V} \arrow{r} & 0 \\
S_{r}^0(T^{wf}) \otimes \mathbb{V} \arrow{r}{{\rm grad}} \arrow[swap]{ur}{-{\rm mskw}\,} & \mathsf{L}_{r-1}^1(T^{wf}) \otimes \mathbb{V} \arrow{r}{{\rm curl}\,} \arrow[swap]{ur}{\Xi} & V_{r-2}^2(T^{wf}) \otimes \mathbb{V} \arrow{r}{{\rm div}\,} \arrow[swap]{ur}{2{\rm vskw}}& V_{r-3}^3(T^{wf}) \otimes \mathbb{V} \arrow{r} & 0
\end{tikzcd}
\end{equation}
}
\begin{equation}\label{eqn:diagram1b}
\begin{tikzcd}
\mathring{S}_{r+1}^0(T^{wf}) \otimes \mathbb{V} \arrow{r}{{\rm grad}} & \mathring{S}_{r}^1(T^{wf}) \otimes \mathbb{V} \arrow{r}{{\rm curl}\,} & \mathring{\mathsf{L}}_{r-1}^2(T^{wf}) \otimes \mathbb{V} \arrow{r}{{\rm div}\,} & \mathcal{V}_{r-2}^3(T^{wf}) \otimes \mathbb{V} \arrow{r}{\int} & 0 \\
\mathring{S}_{r}^0(T^{wf}) \otimes \mathbb{V} \arrow{r}{{\rm grad}} \arrow[swap]{ur}{-{\rm mskw}\,} & \mathring{\mathsf{L}}_{r-1}^1(T^{wf}) \otimes \mathbb{V} \arrow{r}{{\rm curl}\,} \arrow[swap]{ur}{\Xi} & \mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \otimes \mathbb{V} \arrow{r}{{\rm div}\,} \arrow[swap]{ur}{2{\rm vskw}}& \mathring{V}_{r-3}^3(T^{wf}) \otimes \mathbb{V} \arrow{r} & 0
\end{tikzcd}
\end{equation}
Note that the top sequence of \eqref{eqn:diagram1b} is slightly different from \eqref{eqn:seq2b}, as the mean-value constraint
is not imposed on $\mathcal{V}_{r-2}^3(T^{wf})\otimes \mathbb{V}$.
This is due to the surjective property of the mapping
${\rm vskw}: (\mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \otimes \mathbb{V}) \to \mathcal{V}_{r-2}^3(T^{wf}) \otimes \mathbb{V}$
established in Lemma \ref{lem:mapproperty}.
\begin{thm} \label{thm:preseq}
The following sequences are exact for any $r \ge 3$:
\begin{equation} \label{eqn:preseq}
\begin{bmatrix}
S_{r+1}^0(T^{wf}) \otimes \mathbb{V} \\
S_{r}^0(T^{wf}) \otimes \mathbb{V}
\end{bmatrix}
\xrightarrow{[{\rm grad}, -{\rm mskw}\,]} S_{r}^1(T^{wf}) \otimes \mathbb{V} \xrightarrow{{\rm \bcurl \Xi^{-1} \bcurl}} {V}_{r-2}^2(T^{wf}) \otimes \mathbb{V} \xrightarrow{ \begin{bmatrix}
2{\rm vskw} \\ {\rm div}\, \end{bmatrix}}
\begin{bmatrix}
V^3_{r-2}(T^{wf}) \otimes \mathbb{V} \\ {V}_{r-3}^3(T^{wf}) \otimes \mathbb{V}
\end{bmatrix}.
\end{equation}
\begin{equation} \label{eqn:preseqb}
\begin{bmatrix}
\mathring{S}_{r+1}^0(T^{wf}) \otimes \mathbb{V} \\
\mathring{S}_{r}^0(T^{wf}) \otimes \mathbb{V}
\end{bmatrix}
\xrightarrow{[{\rm grad}, -{\rm mskw}\,]} \mathring{S}_{r}^1(T^{wf}) \otimes \mathbb{V} \xrightarrow{{\rm \bcurl \Xi^{-1} \bcurl}} \mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \otimes \mathbb{V} \xrightarrow{ \begin{bmatrix}
2{\rm vskw} \\ {\rm div}\, \end{bmatrix}}
\begin{bmatrix}
\mathcal{V}^3_{r-2}(T^{wf}) \otimes \mathbb{V} \\ \mathring{V}_{r-3}^3(T^{wf}) \otimes \mathbb{V}
\end{bmatrix}.
\end{equation}
Moreover, the last operator in \eqref{eqn:preseq} is surjective.
\end{thm}
\begin{proof}
Lemma \ref{lem:mapproperty} tells us that $\Xi : \mathsf{L}_{r-1}^1(T^{wf}) \otimes \mathbb{V} \rightarrow \mathsf{L}_{r-1}^2(T^{wf})\otimes \mathbb{V} $ is a bijection. With the exactness of \eqref{eqn:seq1}-\eqref{eqn:seq2b} for $r\ge3$ and Proposition \ref{prop:exactpattern}, we see that these two sequences are exact. The surjectivity of the last map is guaranteed by Proposition \ref{prop:exactpattern} and Lemma \ref{lem:mapproperty}.
\end{proof}
\subsection{Elasticity sequence} \label{subsec:ela~seq}
Now we are ready to describe the local discrete elasticity
sequence on Worsey-Farin splits.
The discrete elasticity complexes with strong symmetry are formed by the following spaces:
\begin{align*}
& U_{r+1}^0(T^{wf}) = S_{r+1}^0(T^{wf}) \otimes \mathbb{V}, && \mathring{U}_{r+1}^0(T^{wf}) = \mathring{S}_{r+1}^0(T^{wf}) \otimes \mathbb{V}, \\
& U_{r}^1(T^{wf}) = \{{\rm sym}(u): u \in S_{r}^1(T^{wf}) \otimes \mathbb{V}\}, && \mathring{U}_{r}^1(T^{wf}) = \{{\rm sym} (u): u \in \mathring{S}_{r}^1(T^{wf}) \otimes \mathbb{V}\}, \\
& U_{r-2}^2(T^{wf}) = \{u \in V_{r-2}^2(T^{wf}) \otimes \mathbb{V}: {\rm skw} \, u =0\}, && \mathring{U}_{r-2}^2(T^{wf}) = \{u \in \mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \otimes \mathbb{V}: {\rm skw} \, u =0\}, \\
& U_{r-3}^3(T^{wf}) = V_{r-3}^3(T^{wf}) \otimes \mathbb{V}, && \mathring{U}_{r-3}^3(T^{wf}) = \{u \in V_{r-3}^3(T^{wf}) \otimes \mathbb{V}: u \perp \mathsf{R} \},
\end{align*}
where we recall $\mathsf{R}$, defined in \eqref{eqn:rigid1}, is the space of rigid body displacements.
\begin{thm} \label{thm:elseq}
The following two sequences are discrete elasticity complexes and are exact for $r \ge 3$:
\begin{equation}\label{eqn:elseq}
\mathsf{R} \rightarrow {U}_{r+1}^0(T^{wf}) \xrightarrow{\varepsilon} {U}_{r}^1(T^{wf}) \xrightarrow{{\rm inc}\,} {U}_{r-2}^2(T^{wf}) \xrightarrow{{\rm div}\,} {U}_{r-3}^3(T^{wf}) \rightarrow 0,
\end{equation}
and
\begin{equation}\label{eqn:elseqb}
0 \rightarrow \mathring{U}_{r+1}^0(T^{wf}) \xrightarrow{\varepsilon} \mathring{U}_{r}^1(T^{wf}) \xrightarrow{{\rm inc}\,} \mathring{U}_{r-2}^2(T^{wf}) \xrightarrow{{\rm div}\,} \mathring{U}_{r-3}^3(T^{wf}) \rightarrow 0.
\end{equation}
\end{thm}
\begin{proof}
We first show that \eqref{eqn:elseq} is a complex. In order to do this, it suffices
to show the operators map the space they are acting on into the subsequent space.
To this end, let $u \in U_{r+1}^0(T^{wf})$, then by \eqref{eqn:seq2} we have ${\rm grad} \,(u) \in S_r^1(T^{wf}) \otimes \mathbb{V}$. Hence, $\varepsilon(u) = {\rm sym} \, {\rm grad} \,(u) \in U_r^1(T^{wf})$. Now let $u \in U_r^1(T^{wf})$ which implies that $u = {\rm sym}(w)$ with $w \in S_r^1(T^{wf}) \otimes \mathbb{V}$. Thus by \eqref{eqn:iden3} we have ${\rm \bcurl \Xi^{-1} \bcurl} u = {\rm \bcurl \Xi^{-1} \bcurl} w \in V_{r-2}^2(T^{wf}) \otimes \mathbb{V}$ and ${\rm skw}(u) = 0$ due to \eqref{eqn:iden4}. Therefore, there holds ${\rm \bcurl \Xi^{-1} \bcurl} (u) \in {U}_{r-2}^2(T^{wf}) $. Finally, for any $u \in {U}_{r-2}^2(T^{wf}) \subset V_{r-2}^2(T^{wf}) \otimes \mathbb{V}$, ${\rm div}\, u \in V_{r-3}^3(T^{wf}) \otimes \mathbb{V}$.
Next, we prove exactness of the complex \eqref{eqn:elseq}. Let $w \in {U}_{r-3}^3(T^{wf})$ and consider $(0,w) \in [V_{r-2}^3(T^{wf}) \otimes \mathbb{V}] \times [V_{r-3}^3(T^{wf}) \otimes \mathbb{V}]$.
Due to the exactness of \eqref{eqn:preseq} in Theorem \ref{thm:preseq}, there exists $v \in V_{r-2}^2(T^{wf}) \otimes \mathbb{V}$ such that ${\rm div}\, v = w$
and $2 {\rm vskw}(v)=0$. Thus, $v \in {U}_{r-2}^2(T^{wf})$.
Now let $w \in {U}_{r-2}^2(T^{wf})$ with ${\rm div}\, w=0$. Then by the exactness of \eqref{eqn:preseq}, we have the existence of $v \in S_{r}^1(T^{wf}) \otimes \mathbb{V}$ such that ${\rm \bcurl \Xi^{-1} \bcurl} v= w$.
Setting $u={\rm sym}(v) \in {U}_{r}^1(T^{wf})$ yields ${\rm inc}\, u=w$ by \eqref{eqn:iden3}.
Finally, let $w \in {U}_{r}^1(T^{wf})$ with ${\rm inc}\, w=0$. Then $w={\rm sym}(v)$ for some $v \in S_{r}^1(T^{wf}) \otimes \mathbb{V}$ and with \eqref{eqn:iden3}, ${\rm \bcurl \Xi^{-1} \bcurl} v={\rm \bcurl \Xi^{-1} \bcurl} w = 0$. Due to the exactness of \eqref{eqn:preseq}, we could find $(u,z) \in [S_{r+1}^0(T^{wf}) \otimes \mathbb{V}] \times [S_{r}^0(T^{wf}) \otimes \mathbb{V}]$ such that $v={\rm grad} \, u-{\rm mskw}\,(z)$. Therefore, $\varepsilon(u)={\rm sym}(v)= w$.
The proof that \eqref{eqn:elseqb} is a complex and that it is exact is very similar to the above. The main difference is the surjectivity of the last map, which we prove now. Let $w \in \mathring{U}_{r-3}^3(T^{wf}) \subset \mathring{V}_{r-3}^3(T^{wf}) \otimes \mathbb{V}$. Then by the exactness of \eqref{eqn:seq1b}, there exists $v \in \mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \otimes \mathbb{V}$ such that ${\rm div}\, v = w$. For any $c \in \mathbb{R}^3$ we have ${\rm grad} \,(c \times x)= {\rm mskw}\, \, c$ and hence, using integration by parts,
\[
\int_T 2 {\rm vskw} \, v \cdot c = \int_T v \colon {\rm mskw}\, \, c = \int_T v \colon {\rm grad} \,( c \times x)=-\int_T {\rm div}\, v \cdot (c \times x)= - \int_T w \cdot (c \times x)=0,
\]
where the last equality uses the fact $w \perp \mathsf{R}$.
Therefore, ${\rm vskw} \, v \in \mathring{\mathcal{V}}_{r-2}^3(T^{wf}) \otimes \mathbb{V}$ and by the exactness of \eqref{eqn:seq2b}, there exists $m \in \mathring{\mathsf{L}}_{r-1}^2(T^{wf}) \otimes \mathbb{V}$ such that ${\rm div}\, m=2 {\rm vskw} \, v$. Let $u=v-{\rm curl}\, (\Xi^{-1}m) \in \mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \otimes \mathbb{V}$; then $2{\rm vskw} \, u =2{\rm vskw} \, v- 2{\rm vskw} \,{\rm curl}\, (\Xi^{-1}m)=0 $ by \eqref{eqn:iden1}. Hence, $u \in \mathring{U}_{r-2}^2(T^{wf})$ and ${\rm div}\, u=w$.
\end{proof}
\SG{When $r \ge 4$, there holds $\mathsf{R} \subset {U}_{r-3}^3(T^{wf})$, so it is clear that
\begin{equation}\label{Ur}
{U}_{r-3}^3(T^{wf}) = \mathsf{R} \oplus \mathring{U}_{r-3}^3(T^{wf}) \qquad \text{ for } r \ge 4.
\end{equation}
On the other hand, when $r = 3$, we need the following lemma for
the calculation of dimensions of $\mathring{U}_{r-3}^3(T^{wf})$.
Let $P_{\scriptscriptstyle{U}}$ be the $L^2$-orthogonal projection onto $U_0^3(T^{wf})$
and let $P_{\scriptscriptstyle{U}}\mathsf{R} := \{P_{\scriptscriptstyle{U}} u:\ u \in \mathsf{R}\}$. The proof of the
following lemma is provided in the appendix.
\begin{lemma}\label{lem:projRig}
It holds,
\begin{equation}\label{U03}
{U}_0^3(T^{wf}) = P_{\scriptscriptstyle{U}}\mathsf{R} \oplus \mathring{U}_{0}^3(T^{wf}),
\end{equation}
and $\dim P_{\scriptscriptstyle{U}}\mathsf{R} = \dim \mathsf{R}=6$.
\end{lemma}
}
Using the exactness of the complexes \eqref{eqn:elseq}--\eqref{eqn:elseqb}
along with Table \ref{tab:VLDim}, we calculate the dimensions of the spaces
in the next lemma.
\begin{lemma}
When $r \ge 3$, we have:
\begin{align}
& \dim {U}_{r+1}^0(T^{wf}) = 6r^3+12r+12, && \dim \mathring{U}_{r+1}^0(T^{wf}) = 6r^3-36r^2+66r-36, \\
& \dim {U}_{r}^1(T^{wf}) = 12r^3-9r^2+15r+6, && \dim \mathring{U}_{r}^1(T^{wf})= 12r^3-63r^2+87r-18, \\
& \dim {U}_{r-2}^2(T^{wf}) = 12r^3-27r^2+15r, && \dim \mathring{U}_{r-2}^2(T^{wf})=12r^3-45r^2+33r+12, \\
& \dim {U}_{r-3}^3(T^{wf})=6r^3-18r^2+12r, && \dim \mathring{U}_{r-3}^3(T^{wf})= 6r^3-18r^2+12r-6.
\end{align}
\end{lemma}
\begin{proof}
By Lemma \ref{lem:mapproperty} and the rank-nullity theorem, we have
\begin{align*}
\dim {U}_{r-2}^2(T^{wf}) & = \dim \ker (V_{r-2}^2(T^{wf}) \otimes \mathbb{V}, {\rm vskw}) = \dim V_{r-2}^2(T^{wf}) \otimes \mathbb{V} - \dim V_{r-2}^3(T^{wf}) \otimes \mathbb{V} \\
& = (6r^3-9r^2+3r) \times 3-2r(r+1)(r-1) \times 3= 12r^3-27r^2+15r,\\
\dim \mathring{U}_{r-2}^2(T^{wf}) & = \dim \ker (\mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \otimes \mathbb{V}, {\rm vskw}) = \dim \mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \otimes \mathbb{V} - \dim \mathcal{V}_{r-2}^3(T^{wf}) \otimes \mathbb{V} \\
& = (6(r-2)^3+21(r-2)^2+9(r-2)+2) \times 3 \\
&~~~~ -2((r-2)^3+6(r-2)^2+5(r-2)+2) \times 3 \\
& =18 r^3 - 45 r^2 - 9 r + 60 -(6 r^3 - 42 r + 48) = 12 r^3 - 45 r^2 + 33 r + 12.
\end{align*}
The dimensions of ${U}_{r+1}^0(T^{wf})$, $\mathring{U}_{r+1}^0(T^{wf})$ and ${U}_{r-3}^3(T^{wf})$ are computed similarly using the dimensions of $S_{r+1}^0(T^{wf})$,
$\mathring{S}_{r+1}^0(T^{wf})$ and $V_{r-3}^3(T^{wf})$.
Also, \SG{using Lemma \ref{lem:projRig} when $r=3$ or \eqref{Ur} when $r \ge 4$, we obtain}
\[\dim \mathring{U}_{r-3}^3(T^{wf})= \dim {U}_{r-3}^3(T^{wf})-6.\]
Using the exactness of the sequences \eqref{eqn:elseq} and \eqref{eqn:elseqb} in Theorem \ref{thm:elseq}, with the rank-nullity theorem, we have
\begin{align*}
\dim {U}_{r}^1(T^{wf})& = \dim {U}_{r+1}^0(T^{wf})+\dim {U}_{r-2}^2(T^{wf})- \dim {U}_{r-3}^3(T^{wf})-\dim \mathsf{R} \\
& = 12 r^3 - 9 r^2 + 15 r + 6,\\
\dim \mathring{U}_{r}^1(T^{wf})& = \dim \mathring{U}_{r+1}^0(T^{wf})+\dim \mathring{U}_{r-2}^2(T^{wf})-\dim \mathring{U}_{r-3}^3(T^{wf}) \\
& = 12 r^3 - 63 r^2 + 87 r - 18.
\end{align*}
\end{proof}
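Independently of the computation above, the stated dimensions can be checked against the Euler-characteristic identities implied by the exactness of \eqref{eqn:elseq} and \eqref{eqn:elseqb}. The snippet below (a hypothetical SymPy sketch, not part of the proof) verifies that the alternating sums reduce to $\dim \mathsf{R} = 6$ and $0$, respectively.
\begin{verbatim}
# Hypothetical SymPy check of the Euler-characteristic identities implied by
# the exactness of (elseq) and (elseqb).
from sympy import symbols, expand

r = symbols('r')
U0,  U1  = 6*r**3 + 12*r + 12,             12*r**3 - 9*r**2 + 15*r + 6
U2,  U3  = 12*r**3 - 27*r**2 + 15*r,       6*r**3 - 18*r**2 + 12*r
U0o, U1o = 6*r**3 - 36*r**2 + 66*r - 36,   12*r**3 - 63*r**2 + 87*r - 18
U2o, U3o = 12*r**3 - 45*r**2 + 33*r + 12,  6*r**3 - 18*r**2 + 12*r - 6

print(expand(U0 - U1 + U2 - U3))      # prints 6 = dim R, sequence (elseq)
print(expand(U0o - U1o + U2o - U3o))  # prints 0, sequence (elseqb)
\end{verbatim}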
\subsection{An equivalent characterization of \texorpdfstring{$U^1_r(T^{wf})$}{U1r} and \texorpdfstring{$\mathring{U}_r^1(T^{wf})$}{U1r0}}
In order to find the local degrees of freedom of $U_r^1(T^{wf})$ and $\mathring{U}_r^1(T^{wf})$, we first provide a characterization of these two spaces.
\begin{thm}\label{thm:charU1}
We have the following equivalent definitions of ${U}_{r}^1(T^{wf})$ and $\mathring{U}_{r}^1(T^{wf})$:
\begin{align}
\label{eqn:char1}
& {U}_{r}^1(T^{wf}) = \{u \in H^1(T;\mathbb{S}): u \in \mathcal{P}_r(T^{wf}; \mathbb{S}), ({\rm curl}\, u)' \in V_{r-1}^1(T^{wf}) \otimes \mathbb{V}\}, \\
\label{eqn:char1b}
& \mathring{U}_{r}^1(T^{wf}) = \{u \in \mathring{H}^1(T;\mathbb{S}): u \in \mathcal{P}_r(T^{wf}; \mathbb{S}), ({\rm curl}\, u)' \in \mathring{V}_{r-1}^1(T^{wf}) \otimes \mathbb{V},
\\ \nonumber
& \hspace{8cm}
{\rm inc}(u) \in \mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \otimes \mathbb{V} \}.
\end{align}
\end{thm}
\begin{proof}
Let the right-hand side of \eqref{eqn:char1} and \eqref{eqn:char1b} be denoted by $M_r$ and $\mathring{M}_r$, respectively. If $u \in {U}_{r}^1(T^{wf})$, then $u = {\rm sym}(z)$ for some $z \in S_{r}^1(T^{wf}) \otimes \mathbb{V}$, so \eqref{eqn:iden5}, \eqref{eqn:iden2} and Definition \ref{def:maps} give
\begin{equation}\label{aux617}
({\rm curl}\, u)'= \Xi^{-1} {\rm curl}\, u =\Xi^{-1} {\rm curl}\, z +{\rm grad} \, {\rm vskw}(z),
\end{equation}
from which we conclude $({\rm curl}\, u)' \in V_{r-1}^1(T^{wf}) \otimes \mathbb{V}$. This proves the inclusion
\begin{equation}
{U}_{r}^1(T^{wf}) \subset M_r.
\end{equation}
Similarly, if $u \in \mathring{U}_{r}^1(T^{wf})$, then \eqref{aux617} holds for $z \in \mathring{S}_{r}^1(T^{wf}) \otimes \mathbb{V}$, hence we have $({\rm curl}\, u)' \in \mathring{V}_{r-1}^1(T^{wf}) \otimes \mathbb{V}$. Moreover, using \eqref{eqn:iden3} and the exact sequence \eqref{eqn:seq1b}, we obtain
\[{\rm inc}(u)={\rm \bcurl \Xi^{-1} \bcurl}(u)={\rm \bcurl \Xi^{-1} \bcurl}(z) \in {\rm curl}\,(\mathring{\mathsf{L}}_{r-1}^1(T^{wf}) \otimes \mathbb{V}) \subset \mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \otimes \mathbb{V}. \] This proves
\begin{equation}
\mathring{U}_{r}^1(T^{wf}) \subset \mathring{M}_r.
\end{equation}
We continue to prove the reverse inclusion of \eqref{eqn:char1}. For any $m \in M_r$, let $\sigma={\rm curl}\, ({\rm curl}\, m)'$ which immediately implies that ${\rm div}\, \sigma=0$. Moreover, by \eqref{eqn:iden5} $\sigma= {\rm \bcurl \Xi^{-1} \bcurl}(m)$ and by \eqref{eqn:iden4} ${\rm vskw}(\sigma)=0$. Hence, we have $\sigma \in V_{r-2}^2(T^{wf}) \otimes \mathbb{V}$, and by the exact sequence \eqref{eqn:preseq} there exists $w \in S_{r}^1(T^{wf})\otimes \mathbb{V} $ such that ${\rm \bcurl \Xi^{-1} \bcurl}(w)=\sigma$. Therefore, $w-m \in V_{r}^1(T^{wf}) \otimes \mathbb{V}$ with ${\rm \bcurl \Xi^{-1} \bcurl}(w-m)=0$ and hence, by the exact sequence \eqref{eqn:seq0}, there exists $v \in {\mathsf{L}}_{r+1}^0(T^{wf})\otimes \mathbb{V}$ such that ${\rm grad} \, v =\Xi^{-1} {\rm curl}\,(m-w)$. Setting $z= m+ {\rm mskw}\,(v)$ gives ${\rm sym}(z)=m$ and by \eqref{eqn:iden2},
\begin{equation*}
{\rm curl}\, z={\rm curl}\, m+ {\rm curl}\, {\rm mskw}\, v={\rm curl}\, m- \Xi {\rm grad} \, v={\rm curl}\, w \in {\mathsf{L}}_{r}^1(T^{wf})\otimes \mathbb{V}.
\end{equation*}
We conclude
\begin{equation}
M_r \subset {U}_{r}^1(T^{wf}).
\end{equation}
The reverse inclusion to prove \eqref{eqn:char1b} follows similar arguments, using the exact sequence \eqref{eqn:preseqb} and \eqref{eqn:seq0b} in place of \eqref{eqn:preseq} and \eqref{eqn:seq0}, respectively.
\end{proof}
\section{Local degrees of freedom for the elasticity complex on Worsey-Farin splits} \label{sec:LocalDOFs}
In this section we present degrees of freedom
for the discrete spaces arising in the elasticity complex. We first need to introduce some notation as follows.
Recall that $T^a$ is the set of four tetrahedra obtained by connecting the vertices of $T$ with its incenter.
For each $K \in T^a$, we denote the local Worsey-Farin splits of $K$ as $K^{wf}$, i.e.,
\[
K^{wf} = \{S \in T^{wf}: \Bar{S} \subset \Bar{K} \}.
\]
Then, similar to the discrete functions spaces on $T^{wf}$ defined in Section \ref{subsec:localDeRhamy},
we define
spaces on $K^{wf}$ by taking their restriction:
\[
\mathsf{L}_{r}^0(K^{wf}):=\{u|_K: u \in \mathsf{L}_{r}^0(T^{wf})\}; \qquad
S_{r}^0(K^{wf}) : = \{u|_K: u \in S_{r}^0(T^{wf})\}.
\]
\begin{lemma} \label{lem:mu}
Let $T \in \mathcal{T}_h$, and let $F \in \Delta_2(T)$.
If $p \in \mathsf{L}_r^0(T^{wf})$ with $p=0$ on $F$, then ${\rm grad} \, p$ is continuous on $F$. In particular,
the normal derivative $\partial_n p$ is continuous on $F$.
In addition, if $p \in S_r^0(T^{wf})$ with $p=0$ on $F$, then ${\rm grad} \, p|_{\scriptscriptstyle{F}} \in S_{r-1}^0(F^{ct}) \otimes \mathbb{V}$ and in particular, $\partial_n p|_{F} \in S_{r-1}^0(F^{\ensuremath\mathop{\mathrm{ct}\,}})$.
\end{lemma}
\begin{proof}
Let $K \in T^a$ such that $F \in \Delta_2(K)$. Then, since $p$ vanishes on $F$, we have that $p= \mu q$ on $K$ where $q \in \mathsf{L}_{r-1}^0(K^{wf})$ and $\mu$ is the piecewise linear polynomial in Definition \ref{def:mu}. We write ${\rm grad} \, p= \mu {\rm grad} \, q+ q {\rm grad} \, \mu$, and since $\mu$ vanishes on $F$ and ${\rm grad} \, \mu$ is constant on $F$, we have ${\rm grad} \, p$ is continuous on $F$.
Furthermore, if $p \in S_r^0(T^{wf})$, then $p= \mu q$ on $K$ where $q \in S_{r-1}^0(K^{wf})$
because $\mu$ is a strictly positive polynomial on $K$. Hence by the same reasoning as the previous case,
${\rm grad} \, p|_{\scriptscriptstyle{F}} \in S_{r-1}^0(F^{ct}) \otimes \mathbb{V}$.
\end{proof}
\subsection{Dofs of \texorpdfstring{$U^0$}{U0} space}
\begin{lemma}\label{lem:dofu0}
A function $u \in {U}_{r+1}^0(T^{wf})$, with $r \ge 3$, is fully determined by the following DOFs:
\begin{subequations} \label{dof:U0}
\begin{alignat}{3}
\label{U0:dofa}
& u(a), && \qquad a \in \Delta_0(T), && \qquad 12 \\
\label{U0:dofb}
& {\rm grad}\, u(a), && \qquad a \in \Delta_0(T), && \qquad 36 \\
\label{U0:dofc}
& \int_{e} u \cdot \kappa, && \qquad \kappa \in [\mathcal{P}_{r-3}(e)]^3,~\forall e \in \Delta_1(T), && \qquad 18(r-2) \\
\label{U0:dofd}
& \int_{e} \frac{\partial u}{\partial n_e^{\pm}} \cdot \kappa, && \qquad \kappa \in [\mathcal{P}_{r-2}(e)]^3,~\forall e \in \Delta_1(T), && \qquad 36(r-1) \\
\label{U0:dofe}
& \int_{\scriptscriptstyle{F}} \varepsilon_{\scriptscriptstyle{F}}(u_{F}) \colon \varepsilon_{\scriptscriptstyle{F}}(\kappa), && \qquad \kappa \in [\mathring{S}_{r+1}^0(F^{ct})]^2 , \forall F \in \Delta_2(T) && \qquad 12r^2-36r+24 \\
\label{U0:doff}
& \int_{\scriptscriptstyle{F}} [\varepsilon(u)]_{{\F n}} \cdot \kappa, && \qquad \kappa \in {\rm grad}_{\scriptscriptstyle{F}} \mathring{S}_{r+1}^0(F^{ct}) , \forall F \in \Delta_2(T) && \qquad 6r^2-18r+12 \\
\label{U0:dofg}
& \int_{\scriptscriptstyle{F}} \partial_{n}(u \cdot n_{\scriptscriptstyle{F}}) \kappa, && \qquad \kappa \in {\mathcal{R}}_{r}^0(F^{ct}), \forall F \in \Delta_2(T) && \qquad 6r^2-18r+12 \\
\label{U0:dofh}
& \int_{\scriptscriptstyle{F}} \partial_{n} u_{F} \cdot \kappa, && \qquad \kappa \in [{\mathcal{R}}_{r}^0(F^{ct})]^2, \forall F \in \Delta_2(T) && \qquad 12r^2-36r+24 \\
\label{U0:dofi}
& \int_T \varepsilon(u) \colon \varepsilon(\kappa), && \qquad \kappa \in \mathring{U}_{r+1}^0(T^{wf}), && \qquad 6(r-1)(r-2)(r-3),
\end{alignat}
\end{subequations}
where $\frac{\partial}{\partial n_e^{\pm}}$ denotes the two derivatives in the directions normal to the edge $e$, and $\{ n_e^{+}, n_e^{-}, t_e \}$ forms an edge-based orthonormal basis of $\mathbb{R}^3$.
\end{lemma}
\begin{proof}
The dimension of ${U}_{r+1}^0(T^{wf})$ is $6r^3+12r+12$, which is equal to the sum of the given DOFs.
Let $u \in {U}_{r+1}^0(T^{wf})$ such that it vanishes on the DOFs \eqref{dof:U0}. On each edge $e \in \Delta_1(T)$, $u|_{e}=0$ by \eqref{U0:dofa}-\eqref{U0:dofc}. Furthermore, ${\rm grad} \, u|_e=0$ by \eqref{U0:dofb} and \eqref{U0:dofd}. Hence on any face $F \in \Delta_2(T)$, we have $u_{\scriptscriptstyle{F}} \in [\mathring{S}_{r+1}^0(F^{ct})]^2$. Then with DOFs \eqref{U0:dofe}, $u_{\scriptscriptstyle{F}}= 0$ on $F$.
Now with Lemma \ref{lem:mu} applied to $u_{\scriptscriptstyle{F}} \in S_{r+1}^0(T^{wf}) \otimes \mathbb{V}_2$, we have $\partial_{n} u_{\scriptscriptstyle{F}} \in S_{r}^0(F^{ct}) \otimes \mathbb{V}_2$. In addition, since ${\rm grad} \, u_{\scriptscriptstyle{F}}|_{\partial F} = 0$, it follows that $\partial_{n} u_{\scriptscriptstyle{F}} \in [{\mathcal{R}}_{r}^0(F^{ct})]^2$ and with \eqref{U0:dofh}, we have $\partial_{n} u_{\scriptscriptstyle{F}}=0$.
Using the identity \eqref{more4}, we have $2 [\varepsilon(u)]_{{\F n}} = \partial_{n} u_{\scriptscriptstyle{F}} + {\rm grad}_{\scriptscriptstyle{F}}(u \cdot n_{\scriptscriptstyle{F}}) = {\rm grad}_{\scriptscriptstyle{F}}(u \cdot n_{\scriptscriptstyle{F}})$. Since $u \cdot n_{\scriptscriptstyle{F}} \in S_{r+1}^0(F^{ct})$ vanishes on $\partial F$, the DOFs \eqref{U0:doff} give $[\varepsilon(u)]_{{\F n}} = 0$, and thus $u \cdot n_{\scriptscriptstyle{F}} = 0$ on $F$.
Now similar to $u_{\scriptscriptstyle{F}}$, with Lemma \ref{lem:mu} applied to $u \cdot n_{\scriptscriptstyle{F}}$, we have $\partial_{n} (u \cdot n_{\scriptscriptstyle{F}}) \in {\mathcal{R}}_{r}^0(F^{ct})$ and with \eqref{U0:dofg}, we have $\partial_{n} (u \cdot n_{\scriptscriptstyle{F}})=0$.
Since $u|_{\partial T} =0$, all the tangential derivatives of $u$ vanish. With $\partial_{n} (u \cdot n_{\scriptscriptstyle{F}})=0$ and $\partial_{n} u_{\scriptscriptstyle{F}}=0$, we conclude that ${\rm grad} \, u|_{\partial T} =0$. Thus $u \in \mathring{U}_{r+1}^0(T^{wf})$, and \eqref{U0:dofi} shows that $u$ vanishes.
\end{proof}
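The dimension count in the first line of the preceding proof can also be reproduced symbolically; the following hypothetical SymPy sketch sums the DOF counts listed in \eqref{dof:U0} and recovers $\dim U_{r+1}^0(T^{wf}) = 6r^3+12r+12$.
\begin{verbatim}
# Hypothetical SymPy check: the DOF counts in (dof:U0) sum to dim U_{r+1}^0.
from sympy import symbols, expand

r = symbols('r')
dofs_U0 = [12, 36, 18*(r - 2), 36*(r - 1),
           12*r**2 - 36*r + 24, 6*r**2 - 18*r + 12,
           6*r**2 - 18*r + 12, 12*r**2 - 36*r + 24,
           6*(r - 1)*(r - 2)*(r - 3)]
print(expand(sum(dofs_U0)))  # prints 6*r**3 + 12*r + 12
\end{verbatim}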
\subsection{Dofs of \texorpdfstring{$U^1$}{U1} space}
Before giving the DOFs of the space $U^1$ we need preliminary results to see the continuity of the functions involved. In the following lemmas, we use the jump
operator $\jmp{\cdot}$ and the set of internal edges of a split face $\Delta_1^I(F^{ct})$
given in Section \ref{subsec:WFconstruct}.
The proofs of the next four results are found in the appendix.
\begin{lemma} \label{lem:symcts1}
Let $\sigma \in V_{r}^2(T^{wf}) \otimes \mathbb{V}$ with ${\rm skw} (\sigma) = 0$. If $n_{\scriptscriptstyle{F}} '\sigma \ell=0$ on $\partial T$ for some $\ell \in \mathbb{R}^3$,
then $\sigma_{{\scriptscriptstyle{F}} \ell} \in V_{{\rm div}, r}^1(F^{ct})$ on each $F \in \Delta_2(T)$.
\end{lemma}
\begin{lemma} \label{lem:curlcts}
Let $w \in V^1_{r-1}(T^{wf}) \otimes \mathbb{V}$ such that $w' \in V^2_{r-1}(T^{wf}) \otimes \mathbb{V}$. If $w_{{\F n}} =0$
on some $F\in \Delta_2(T)$, then we have
\begin{equation}
\jmp{t_e'w n_f}_e = 0; \quad
\jmp{s_e' w s_e}_e = 0, \qquad \forall e \in \Delta_1^I(F^{ct}).
\end{equation}
On the other hand, if $w_{\scriptscriptstyle{F\!F}} = 0$ on $F$, then we have
\begin{equation}
\label{eqn:curlcts2}
\jmp{t_e'w n_f}_e = 0; \quad
\jmp{t_e' w n_F}_e = 0, \qquad \forall e \in \Delta_1^I(F^{ct}).
\end{equation}
\end{lemma}
\begin{lemma}\label{cor:curliden}
Let $T$ be a tetrahedron, and let $\ell, m$ be two tangent vectors to a face $F\in \Delta_2(T)$
such that $\ell \cdot m = 0$ and $\ell \times m = n_{\scriptscriptstyle{F}}$.
Let $u \in \mathsf{L}_{r}^1(T^{wf}) \otimes \mathbb{V}$ for some $r\ge 0$. If $u_{\scriptscriptstyle{F\!F}} = 0$ on some $F\in \Delta_2(T)$, then
\begin{alignat}{2}
\label{eqn:curliden1}
\jmp{\ell'({\rm curl}\, u )m}_e & = -\jmp{{\rm grad}_{\scriptscriptstyle{F}}(u_{{\F n}} \cdot \ell) \cdot \ell}_e, & \qquad \forall e \in \Delta_1^I(F^{ct}), \\
\label{eqn:curliden2}
\jmp{\ell' ({\rm curl}\, u) \ell}_e & = -\jmp{{\rm grad}_{\scriptscriptstyle{F}}(u_{{\F n}} \cdot \ell) \cdot m}_e, & \qquad \forall e \in \Delta_1^I(F^{ct}).
\end{alignat}
On the other hand, if $u_{n\F} = 0$ on $F$, then
\begin{equation}
\label{eqn:curliden3}
\jmp{n_F' ({\rm curl}\, u) \ell}_e = \jmp{({\rm grad}_{\scriptscriptstyle{F}} u_{nn}) \cdot m}_e, \qquad \forall e \in \Delta_1^I(F^{ct}).
\end{equation}
\end{lemma}
\begin{lemma} \label{lem:curlFF}
Suppose $u \in {U}_{r}^1(T^{wf})$ and $w = ({\rm curl} u)'$ are such that
$u_{{\scriptscriptstyle{F\!F}}}$ and $w_{{\F n}}$ vanish on a face $F\in \Delta_2(T)$. Then
$w_{{\scriptscriptstyle{F\!F}}} - {\rm grad}_{\scriptscriptstyle{F}} u_{{\F n}}^\perp$ is continuous on $F$. Furthermore, if $u = \varepsilon(v)$ for some $v \in {U}_{r+1}^0(T^{wf})$, then the following identity holds:
\begin{equation}\label{eqn:curlFFiden}
w_{{\scriptscriptstyle{F\!F}}} = [({\rm curl} \varepsilon (v))']_{{\scriptscriptstyle{F\!F}}} = {\rm grad}_{\scriptscriptstyle{F}} u_{{\F n}}^\perp + {\rm grad}_{\scriptscriptstyle{F}}(\partial_n v_{\scriptscriptstyle{F}} \times n_{\scriptscriptstyle{F}}).
\end{equation}
\end{lemma}
In addition to \eqref{eqn:incF} in Lemma \ref{lem:incFiden}, we need another identity to proceed with our construction. The following result is shown in \cite[Lemma 5.8]{christiansen2020discrete}.
\begin{lemma}\label{lem:inciden}
Let $u$ be a symmetric matrix-valued function with $[({\rm curl}\, u)']_{{\scriptscriptstyle{F\!F}}} t |_{\partial F} = 0$ and $u|_{\partial F} =0$, and let $q \in \mathsf{R}(F)$, where $\mathsf{R}(F)$ is defined in \eqref{eqn:rigid2}. Then
there holds
\begin{equation}\label{eqn:incfn}
\int_{F} ({\rm inc}\, u)_{{\F n}} \cdot q = 0.
\end{equation}
\end{lemma}
\begin{lemma}\label{lem:dofu1}
A function $u \in {U}_{r}^1(T^{wf})$, with $r \ge 3$, is fully determined by the following DOFs:
\begin{subequations} \label{dof:U1}
\begin{alignat}{3}
\label{U1:dofa}
& u(a), && \quad a \in \Delta_0(T), && \quad 24 \\
\label{U1:dofb}
& \int_{e} u \colon \kappa, && \quad \kappa \in {\rm sym}[\mathcal{P}_{r-2}(e)]^{3 \times 3}, ~\forall e \in \Delta_1(T), && \quad 36(r-1) \\
\label{U1:dofc}
& \int_{e} ({\rm curl}\, u)' t_e \cdot \kappa, && \quad \kappa \in [\mathcal{P}_{r-1}(e)]^3, ~\forall e \in \Delta_1(T), && \quad 18r \\
\label{U1:dofd}
& \int_{F} ({\rm inc~}u)_{{\scriptscriptstyle{F\!F}}} \colon \kappa, && \quad \kappa \in Q_{r-2}^{\perp}, ~\forall F \in \Delta_2(T), && \quad 12(r-2)\\
\label{U1:dofe}
& \int_{\scriptscriptstyle{F}} ({\rm inc~}u)_{nn} \kappa, && \quad \kappa \in
\mathring{Q}^2_{r-2}(F^{\ensuremath\mathop{\mathrm{ct}\,}}),
\forall F \in \Delta_2(T) && \quad 6r^2-6r-12 \\
\label{U1:doff}
& \int_{\scriptscriptstyle{F}} ({\rm inc~}u)_{{\F n}} \cdot \kappa, && \quad \kappa \in V_{{\rm div}, r-2}^1(F^{ct})/\mathsf{R}(F) ,\forall F \in \Delta_2(T) && \quad 12r^2-24r \\
\label{U1:dofg}
& \int_{F} u_{{\scriptscriptstyle{F\!F}}} \colon \kappa, && \quad \kappa \in \varepsilon_{\scriptscriptstyle{F}}([\mathring{S}_{r+1}^0(F^{ct})]^2) ,\forall F \in \Delta_2(T), && \quad 12(r^2-3r+2) \\
\label{U1:dofh}
& \int_{\scriptscriptstyle{F}} ([({\rm curl}\, u)']_{{\scriptscriptstyle{F\!F}}} - {\rm grad}_{\scriptscriptstyle{F}} (u_{{\F n}}^\perp)) \colon \kappa, && \quad \kappa \in {\rm grad}_{\scriptscriptstyle{F}} [\mathcal{R}_{r}^0(F^{ct})]^2, \forall F \in \Delta_2(T), && \quad 12(r^2-3r+2) \\
\label{U1:dofi}
& \int_{F} u_{{\F n}} \cdot \kappa, && \quad \kappa \in {\rm grad}_{\scriptscriptstyle{F}} \mathring{S}_{r+1}^0(F^{ct})
,\forall F \in \Delta_2(T), && \quad 6(r^2-3r+2) \\
\label{U1:dofj}
& \int_{F} u_{nn} \kappa, && \quad \kappa \in \mathcal{R}_{r}^0(F^{ct}),\forall F \in \Delta_2(T), && \quad 6(r^2-3r+2) \\
\label{U1:dofk}
& \int_T {\rm ~inc}(u) \colon {\rm ~inc}(\kappa), && \quad \kappa \in \mathring{U}_{r}^1(T^{wf}), && \quad 6r^3-27r^2+21r+18\\
\label{U1:dofl}
& \int_T \varepsilon(u) \colon \varepsilon(\kappa), && \quad \kappa \in \mathring{U}_{r+1}^0(T^{wf}), && \quad 6(r-1)(r-2)(r-3).
\end{alignat}
\end{subequations}
\end{lemma}
\begin{proof}
The dimension of ${U}_{r}^1(T^{wf})$ is $12r^3-9r^2+15r+6$, which is equal to the sum of the given DOFs. Suppose that all DOFs \eqref{dof:U1} vanish for a $u \in U_r^1(T^{wf})$.
{\bf Step 0:}
Using the DOFs (\ref{U1:dofa}, \ref{U1:dofb}) and \eqref{U1:dofc},
we conclude
\begin{equation}\label{eqn:U1Step0}
u|_e = 0, \quad ({\rm curl}\, u)'t|_e = 0, \qquad {\rm for~} e \in \Delta_1(T).
\end{equation}
{\bf Step 1:} We show ${\rm inc}\, u \in \mathring{\mathcal{V}}_{r-2}^2(T^{wf})\otimes \mathbb{V}$.\smallskip\\
By \eqref{id4} and \eqref{eqn:U1Step0},
we have \[
0=n_{\scriptscriptstyle{F}}'({\rm curl}\, u )' t = ({\rm curl}_{\scriptscriptstyle{F}} u_{{\scriptscriptstyle{F\!F}}})t\quad \text{on $\partial F$ for each $F\in \Delta_2(T)$.}
\]
Since $u$ is symmetric and continuous,
by \eqref{id1}, we see that $({\rm inc}\, u)_{nn} = {\rm inc}_\F \, u_{\scriptscriptstyle{F\!F}}$
with $u_{\scriptscriptstyle{F\!F}} \in \mathring{Q}_{{\rm inc},r}^{1,s}(F^{\ensuremath\mathop{\mathrm{ct}\,}}) \subset \mathring{Q}_{{\rm inc},r}^{1}(F^{\ensuremath\mathop{\mathrm{ct}\,}})$. Thus, the complex \eqref{elaseqsvenb} in Theorem \ref{thm:2delaseq} and
the DOFs \eqref{U1:dofe} yield
\begin{equation}\label{eqn:Step1Intermediate}
({\rm inc}\, u)_{nn} =0\qquad \text{on each $F\in \Delta_2(T)$.}
\end{equation}
Next, Lemma \ref{lem:symcts1} (with $\ell = n_{\scriptscriptstyle{F}}$ and $\sigma = {\rm inc}\, u$) shows $({\rm inc}\, u)_{F n}\in V^1_{{\rm div},r-2}(F^{ct})$. Therefore using the DOFs \eqref{U1:doff}
and \eqref{eqn:incfn} in Lemma \ref{lem:inciden},
we conclude $({\rm inc}\, u)_{{\F n}} =0$.
The identities $({\rm inc}\, u)_{nn} =0$ and $({\rm inc}\, u)_{{\F n}}=0$
yield $({\rm inc}\, u)n_{\scriptscriptstyle{F}}=0$. So, by Lemma \ref{lem:symcts1} (with $\ell = t_1, t_2$), we see that $({\rm inc}\, u)_{\scriptscriptstyle{F\!F}} \in V_{{\rm div},r-2}^1(F^{ct})\otimes \mathbb{V}_2$. In particular, since $({\rm inc}\, u)_{\scriptscriptstyle{F\!F}}$ is symmetric,
there holds $({\rm inc}\, u)_{\scriptscriptstyle{F\!F}}\in Q_{r-2}^1(F^{ct})$ (cf.~\eqref{eqn:Qr1}).
Thus by the DOFs \eqref{U1:dofd} and the definition of $Q^{\perp}_{r-2}(F^{ct})$ in Section \ref{sec:2dela}, we have $({\rm inc}\, u)_{{\scriptscriptstyle{F\!F}}} \in L_{r}^1(F^{\ensuremath\mathop{\mathrm{ct}\,}}) \otimes \mathbb{V}_2$. Therefore, we conclude ${\rm inc}\, u \in \mathring{\mathcal{V}}_{r-2}^2(T^{wf})\otimes \mathbb{V}$.
{\bf Step 2:}
We show $({\rm curl}\, u)' \in \mathring{V}_{r-1}^1(T^{wf})\otimes \mathbb{V}$.\smallskip\\
Using \eqref{eqn:Step1Intermediate} and \eqref{id1}, we have
$0 = ({\rm inc}\, u)_{nn}={\rm inc}_\F u_{{\scriptscriptstyle{F\!F}}}$.
Thus by the exact sequence \eqref{elaseqsvenb} in Theorem \ref{thm:2delaseq},
there holds $u_{{\scriptscriptstyle{F\!F}}} = \varepsilon_{\scriptscriptstyle{F}} (\kappa)$ for some $\kappa\in \mathring{S}_{r+1}^0(F^{ct})\otimes \mathbb{V}_2$. We then conclude
from the DOFs \eqref{U1:dofg} that $u_{{\scriptscriptstyle{F\!F}}}=0$ on each $F\in \Delta_2(T)$.
Furthermore by \eqref{id4}, $[({\rm curl}\, u)']_{Fn} = {\rm curl}_F u_{{\scriptscriptstyle{F\!F}}} = 0$.
Since $({\rm curl}\, u)' \in V_{r-1}^1(T^{wf})\otimes \mathbb{V}$ by Theorem \ref{thm:charU1}
and from \eqref{eqn:U1Step0}
\[
[({\rm curl}\, u)']_{{\scriptscriptstyle{F\!F}}} t_e|_e = ({\rm curl}\, u)'t_e|_e = 0, \qquad \text{for all } e \in \Delta_1(T),
\]
we have $[({\rm curl}\, u)']_{{\scriptscriptstyle{F\!F}}} \in \mathring{V}^1_{{\rm curl},r-1}(F^{ct})\otimes \mathbb{V}_2$ on $F \in \Delta_2(T)$. In addition, by the identity $({\rm inc}\, u)_{Fn} = {\rm curl}_F[({\rm curl}\,u)']_{{\scriptscriptstyle{F\!F}}}$
(cf.~\eqref{id2})
and $({\rm inc}\, u)_{{\F n}} =0$ derived in {\bf Step 1}, there exists $\phi \in \mathring{\mathsf{L}}^0_{r}(F^{\ensuremath\mathop{\mathrm{ct}\,}}) \otimes \mathbb{V}_2$ such that ${\rm grad}_{\scriptscriptstyle{F}} \phi = [({\rm curl}\, u)']_{{\scriptscriptstyle{F\!F}}}$. With Lemma \ref{lem:curlFF}, we further have $\phi - u_{{\F n}}^\perp \in [\mathcal{R}^0_{r}(F^{\ensuremath\mathop{\mathrm{ct}\,}})]^2$.
Therefore, using the DOFs \eqref{U1:dofh} we conclude
\begin{equation}\label{eqn:uFn}
[({\rm curl}\, u)']_{{\scriptscriptstyle{F\!F}}} = {\rm grad}_{\scriptscriptstyle{F}} u_{{\F n}}^\perp.
\end{equation}
By \eqref{id3}, we have
\[-{\rm curl}_{\scriptscriptstyle{F}} (u_{{\F n}})' = {\rm tr}_{\scriptscriptstyle{F}} {\rm curl}\, u = {\rm tr}_{\scriptscriptstyle{F}} ({\rm curl}\, u)' = {\rm tr}_{\scriptscriptstyle{F}} ({\rm curl}\, u)'_{{\scriptscriptstyle{F\!F}}}.\]
With \eqref{eqn:uFn} and \eqref{iden:Vperp}, we have
\[
-{\rm curl}_{\scriptscriptstyle{F}} (u_{{\F n}})' = {\rm tr}_{\scriptscriptstyle{F}} ({\rm curl}\, u)'_{{\scriptscriptstyle{F\!F}}} = {\rm div}_{\scriptscriptstyle{F}} u_{{\F n}}^\perp = {\rm curl}_{\scriptscriptstyle{F}} (u_{{\F n}})',
\]
and this implies that ${\rm curl}_{\scriptscriptstyle{F}} (u_{{\F n}})' =0$. Since $u_{{\F n}} \in \mathring{\mathsf{L}}^1_{r}(F^{\ensuremath\mathop{\mathrm{ct}\,}})$,
the exact sequence \eqref{2dbdryseq2} yields $u_{{\F n}} \in {\rm grad}_{\scriptscriptstyle{F}} \mathring{S}_{r+1}^0(F^{ct})$.
Therefore by \eqref{U1:dofi}, we have $u_{{\F n}} = 0$. Now with \eqref{eqn:uFn} and $u_{{\F n}} = 0$,
we have $[({\rm curl}\, u)']_{{\scriptscriptstyle{F\!F}}} = 0$ and so $({\rm curl} u)'\in \mathring{V}_{r-1}^1(T^{wf})\otimes \mathbb{V}$.
{\bf Step 3:} We show $u\in \mathring{H}^1(T;\mathbb{S})$.\\
From {\bf Step 2}, we already see that $u_{{\scriptscriptstyle{F\!F}}}=0$ and $ u_{{\F n}} =0$, so we
only need to show $u_{nn} =0$. Since $({\rm curl}\, u)' \in \mathring{V}_{r-1}^1(T^{wf})\otimes \mathbb{V}$ with ${\rm curl}\, u \in V_{r-1}^2(T^{wf})\otimes \mathbb{V}$ and $[({\rm curl}\, u)']_{\scriptscriptstyle{F\!F}} = 0$ on $F$, then by \eqref{eqn:curlcts2}, we have
\begin{equation}
\jmp{t_e' ({\rm curl}\, u)' n_{\scriptscriptstyle{F}} }_{e} = 0, \qquad \text{for all } e \in \Delta_1^I(F^{\ensuremath\mathop{\mathrm{ct}\,}}).
\end{equation}
We know that $u \in \mathsf{L}_{r}^1(T^{wf})$, $u_{{\F n}} =0$ and by \eqref{eqn:curliden3} in Lemma \ref{cor:curliden} with $\ell = t_e$, $m = s_e$,
\[
0 = \jmp{t_e' ({\rm curl}\, u)' n_{\scriptscriptstyle{F}} }_{e} = \jmp{n_{\scriptscriptstyle{F}}' ({\rm curl}\, u) t_e }_{e} = \jmp{({\rm grad}_{\scriptscriptstyle{F}} u_{nn}) \cdot s_e}_{e}.
\]
Therefore, we have $u_{nn} \in \mathcal{R}_{r}^0(F^{ct})$ and \eqref{U1:dofj} implies $u_{nn} =0$ on $F$. Thus $u|_{\partial T} =0$.
{\bf Step 4:}\\
Using the second characterization of Theorem \ref{thm:charU1}, $u \in \mathring{U}_r^1(T^{wf})$. Hence \eqref{U1:dofk} implies ${\rm inc}\, u =0$ on $T$ and using the exactness of the sequence \eqref{eqn:elseqb} and the DOFs of \eqref{U1:dofl}, we see that $u=0$ on $T$.
\end{proof}
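As in the previous subsection, the DOF count used at the beginning of the proof can be verified symbolically; the hypothetical SymPy sketch below sums the counts listed in \eqref{dof:U1} and recovers $\dim U_{r}^1(T^{wf}) = 12r^3-9r^2+15r+6$.
\begin{verbatim}
# Hypothetical SymPy check: the DOF counts in (dof:U1) sum to dim U_r^1.
from sympy import symbols, expand

r = symbols('r')
dofs_U1 = [24, 36*(r - 1), 18*r, 12*(r - 2),
           6*r**2 - 6*r - 12, 12*r**2 - 24*r,
           12*(r**2 - 3*r + 2), 12*(r**2 - 3*r + 2),
           6*(r**2 - 3*r + 2), 6*(r**2 - 3*r + 2),
           6*r**3 - 27*r**2 + 21*r + 18,
           6*(r - 1)*(r - 2)*(r - 3)]
print(expand(sum(dofs_U1)))  # prints 12*r**3 - 9*r**2 + 15*r + 6
\end{verbatim}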
\subsection{Dofs of the \texorpdfstring{$U^2$}{U2} and \texorpdfstring{$U^3$}{U3} spaces}
\begin{lemma}\label{lem:dofu2}
A function $u \in {U}_{r-2}^2(T^{wf})$, with $r \ge 3$, is fully determined by the following DOFs:
\begin{subequations} \label{dof:U2}
\begin{alignat}{3}
\label{U2:dofa}
& \int_{F} u_{{\scriptscriptstyle{F\!F}}} \colon \kappa, && \qquad \kappa \in Q_{r-2}^{\perp}, \forall F \in \Delta_2(T), && \qquad 12(r-2)\\
\label{U2:dofb}
& \int_{\scriptscriptstyle{F}} u_{nn} \kappa, && \qquad \kappa \in V_{r-2}^2(F^{ct}), \forall F \in \Delta_2(T), && \qquad 6r^2-6r \\
\label{U2:dofc}
& \int_{\scriptscriptstyle{F}} u_{{n\F}} \cdot \kappa, && \qquad \kappa \in V_{{\rm div}, r-2}^1(F^{ct}), \forall F \in \Delta_2(T), && \qquad 12r^2-24r+12 \\
\label{U2:dofd}
& \int_T {\rm div}\, u \cdot \kappa, && \qquad \kappa \in \mathring{U}_{r-3}^3(T^{wf}), && \qquad 6r^3-18r^2+12r-6 \\
\label{U2:dofe}
& \int_T u \colon \kappa, && \qquad \kappa \in {\rm inc~} \mathring{U}_{r}^1(T^{wf}), && \qquad 6r^3-27r^2+21r+18.
\end{alignat}
\end{subequations}
\end{lemma}
\begin{proof}
The dimension of ${U}_{r-2}^2(T^{wf})$ is $12r^3-27r^2+15r$, which is equal to the sum of the given DOFs.
Let $u \in {U}_{r-2}^2(T^{wf})$ be such that $u$ vanishes on the DOFs \eqref{dof:U2}. By DOFs \eqref{U2:dofb}, we have $u_{nn}=0$ on each $F \in \Delta_2(T)$. By Lemma \ref{lem:symcts1} and DOFs \eqref{U2:dofc}, we have $u_{{n\F}}=0$ on each $F \in \Delta_2(T)$. Then, $u \in \mathring{V}_{r-2}^2(T^{wf}) \otimes \mathbb{V}$. With the definition of $Q^{\perp}_{r-2}$ in Section \ref{sec:2dela} and \eqref{U2:dofa}, we have $u \in \mathring{\mathcal{V}}_{r-2}^2(T^{wf}) \otimes \mathbb{V}$ and thus $u \in \mathring{U}_{r-2}^2(T^{wf})$. In addition, since ${\rm div}\, u \in {\rm div}\,(\mathring{U}_{r-2}^2(T^{wf})) \subset \mathring{U}_{r-3}^3(T^{wf})$, we have ${\rm div}\, u=0$ by DOFs \eqref{U2:dofd}. Using the exactness of \eqref{eqn:elseqb}, there exists $\kappa \in \mathring{U}_{r}^1(T^{wf})$ such that ${\rm inc}\, \kappa = u$. With DOFs \eqref{U2:dofe}, we have $u=0$, which is the desired result.
\end{proof}
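The analogous check for \eqref{dof:U2} (again a hypothetical SymPy sketch, not part of the proof) recovers $\dim U_{r-2}^2(T^{wf}) = 12r^3-27r^2+15r$.
\begin{verbatim}
# Hypothetical SymPy check: the DOF counts in (dof:U2) sum to dim U_{r-2}^2.
from sympy import symbols, expand

r = symbols('r')
dofs_U2 = [12*(r - 2), 6*r**2 - 6*r, 12*r**2 - 24*r + 12,
           6*r**3 - 18*r**2 + 12*r - 6, 6*r**3 - 27*r**2 + 21*r + 18]
print(expand(sum(dofs_U2)))  # prints 12*r**3 - 27*r**2 + 15*r
\end{verbatim}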
\begin{lemma}\label{lem:dofu3}
A function $u \in {U}_{r-3}^3(T^{wf})$, with $r \ge 3$, is fully determined by the following DOFs:
\begin{subequations} \label{dof:U3}
\begin{alignat}{3}
\label{U3:dofa}
& \int_T u \cdot \kappa, && \qquad \qquad \kappa \in \mathsf{R}, && \qquad \qquad 6 \\
\label{U3:dofb}
& \int_T u \cdot \kappa, && \qquad \qquad \kappa \in \mathring{U}_{r-3}^3(T^{wf}), && \qquad \qquad 6r^3-18r^2+12r-6.
\end{alignat}
\end{subequations}
\end{lemma}
\SG{
\begin{remark}
Note that \eqref{U3:dofa} is equivalent to
\[
\int_T u \cdot \kappa, \quad \kappa \in P_{\scriptscriptstyle{U}} \mathsf{R},
\]
since by the definition of $L^2$-projection, for any $\kappa \in \mathsf{R}$,
\[
\int_T u \cdot \kappa = \int_T u \cdot P_{\scriptscriptstyle{U}} \kappa, \quad \forall u \in {U}_{r-3}^3(T^{wf}).
\]
\end{remark}
}
\section{Commuting projections}
In this section, we show that the degrees of freedom constructed in the previous sections induce projections which satisfy commuting properties.
\begin{thm}\label{thm:Commuting}
Let $r \ge 3$. Let $\Pi_{r+1}^0: C^{\infty}(\Bar{T}) \otimes \mathbb{V} \rightarrow U_{r+1}^0(T^{wf})$
be the projection defined in Lemma \ref{lem:dofu0},
let $\Pi_{r}^1: C^{\infty}(\Bar{T}) \otimes \mathbb{V} \rightarrow U_{r}^1(T^{wf})$ be the projection defined in Lemma \ref{lem:dofu1},
let $\Pi_{r-2}^2: C^{\infty}(\Bar{T}) \otimes \mathbb{V} \rightarrow U_{r-2}^2(T^{wf})$ be the projection defined in Lemma \ref{lem:dofu2},
and let $\Pi_{r-3}^3: C^{\infty}(\Bar{T}) \otimes \mathbb{V} \rightarrow U_{r-3}^3(T^{wf})$ be the projection defined in Lemma \ref{lem:dofu3}.
Then the following commuting properties are satisfied.
\begin{subequations}
\begin{alignat}{2}
\label{com:epsilon}
\varepsilon\big( \Pi_{r+1}^0 u\big) & = \Pi_{r}^1 \varepsilon(u) , && \quad \forall u \in C^{\infty}(\Bar{T})\otimes \mathbb{V} \\
\label{com:inc}
{\rm inc}\, \Pi_{r}^1 v & = \Pi_{r-2}^2 {\rm inc}\, v, && \quad \forall v \in C^{\infty}(\Bar{T})\otimes \mathbb{S} \\
\label{com:div}
{\rm div}\, \Pi_{r-2}^2 w & = \Pi_{r-3}^3 {\rm div}\, w, && \quad \forall w \in C^{\infty}(\Bar{T})\otimes \mathbb{S}
\end{alignat}
\end{subequations}
\end{thm}
\begin{proof}
(i) Proof of \eqref{com:epsilon}: Given $u \in C^{\infty}(\Bar{T})\otimes \mathbb{V}$, let $\rho = \varepsilon\big(\Pi_{r+1}^0 u\big) - \Pi_{r}^1 \varepsilon(u) \in U^1_r(T^{wf})$. To prove that \eqref{com:epsilon} holds, it suffices to show that $\rho$ vanishes on the DOFs \eqref{dof:U1} in Lemma \ref{lem:dofu1}. Since ${\rm inc}\, \circ \varepsilon = 0$, the DOFs \eqref{U1:dofd}, \eqref{U1:dofe}, \eqref{U1:doff} and \eqref{U1:dofk} applied to $\rho$ vanish. Next, \eqref{U0:dofb}, \eqref{U0:dofe}, \eqref{U0:doff}, \eqref{U0:dofg},
\eqref{U0:dofi} applied to $u$, together with \eqref{U1:dofa}, \eqref{U1:dofg}, \eqref{U1:dofi}, \eqref{U1:dofj}, \eqref{U1:dofl} applied to $\varepsilon(u)$, imply, respectively, that the DOFs \eqref{U1:dofa}, \eqref{U1:dofg}, \eqref{U1:dofi}, \eqref{U1:dofj}, \eqref{U1:dofl} applied to $\rho$ vanish. By the identity \eqref{eqn:curlFFiden} in Lemma \ref{lem:curlFF}, for any $\kappa \in {\rm grad}_{\scriptscriptstyle{F}} [\mathcal{R}_{r}^0(F^{ct})]^2$ and any $F \in \Delta_2(T)$, we have:
\[
\int_{\scriptscriptstyle{F}} ([({\rm curl}\, \rho)']_{{\scriptscriptstyle{F\!F}}} - {\rm grad}_{\scriptscriptstyle{F}} (\rho_{{\F n}}^\perp)) \colon \kappa = \int_{\scriptscriptstyle{F}} {\rm grad}_{\scriptscriptstyle{F}}(\partial_n (\Pi_{r+1}^0 u)_{\scriptscriptstyle{F}} - \partial_n u_{\scriptscriptstyle{F}}) \colon \kappa = 0,
\]
where the last equality holds with \eqref{U0:dofh} applied to $u$. Thus, the DOFs \eqref{U1:dofh} applied to $\rho$ vanish.
It only remains to prove that the DOFs of \eqref{U1:dofb}, \eqref{U1:dofc} applied to $\rho$ vanish.
To show this, we need to employ the edge-based orthonormal basis
$\{ n_e^+, n_e^-, t_e\}$ and write
$\kappa \in {\rm sym} [\mathcal{P}_{r-2}(e)]^{3 \times 3}$ as
$ \kappa = \kappa_{11} n_e^{+} (n_e^{+})' + \kappa_{12} \big(n_e^+
(n_e^-)' + n_e^- (n_e^+)'\big) + \kappa_{13} \big(n_e^+ t_e' + t_e
(n_e^{+})'\big) + \kappa_{22} n_e^-(n_e^-)' +
\kappa_{23}\big(n_e^- t_e' + t_e (n_e^-)'\big) + \kappa_{33}
t_e t_e'$ where $\kappa_{ij} \in \mathcal{P}_{r-2}(e). $ Then,
\begin{alignat*}{2}
\int_e \rho: \kappa
& = \int_e [\varepsilon(\Pi_{r+1}^0 u)
-\Pi_{r}^1\varepsilon(u)] : \kappa
= \int_e \varepsilon(\Pi_{r+1}^0 u -u) : \kappa
& \text{by~\eqref{U1:dofb}}
\\
& = \int_e {\rm grad} \,(\Pi_{r+1}^0 u - u) : \kappa \\
& = \int_e {\rm grad} \,(\Pi_{r+1}^0 u - u) t_e \cdot ( \kappa_{13} n_e^+ + \kappa_{23} n_e^- + \kappa_{33} t_e) & \text{by~\eqref{U0:dofd}}\\
& = \int_e (\Pi_{r+1}^0 u - u) \cdot \frac{\partial}{\partial t_e} ( \kappa_{13} n_e^+ + \kappa_{23} n_e^- + \kappa_{33} t_e) & \text{integration by parts}\\
&= 0 & \text{by~\eqref{U0:dofa} and \eqref{U0:dofc}}.
\end{alignat*}
Thus
the DOFs of \eqref{U1:dofb} applied to $\rho$ vanish. Next, letting
$\kappa \in [\mathcal{P}_{r-1}(e)]^3$, we note that
\begin{alignat*}{2}
\int_e ({\rm curl} \rho)'t_e \cdot \kappa=
& \int_e \big[{\rm curl} \varepsilon(\Pi_{r+1}^0 u-u)\big]'t_e \cdot \kappa \quad && \text{by~\eqref{U1:dofc}} \\
=& \frac{1}{2} \int_e \big[{\rm grad} \, {\rm curl} (\Pi_{r+1}^0 u-u)\big] t_e \cdot \kappa \quad && \text{by~\eqref{more1}} \\
=& -\frac{1}{2} \int_e {\rm curl} (\Pi_{r+1}^0 u-u)
\cdot \partial_t\kappa \quad && \text{by~\eqref{U0:dofa} and \eqref{U0:dofb}}
\end{alignat*}
where in the last step, we have integrated by parts, and put
$\partial_t \kappa = ({\rm grad} \, \kappa) t_e$. The curl in the
integrand above can be decomposed into terms involving
$\partial_t (\Pi_{r+1}^0 u-u)$ and those involving
$\partial_{n_e^\pm}(\Pi_{r+1}^0 u-u)$. The former terms can
be integrated by parts yet again, which after
using~\eqref{U0:dofa}, \eqref{U0:dofb} and \eqref{U0:dofc}, vanish. The
latter terms also vanish by~\eqref{U0:dofd}, noting that
$\partial_t \kappa$ is of degree at most $r-2$.
(ii) Proof of \eqref{com:inc}: Given $v \in C^{\infty}(\Bar{T})\otimes \mathbb{S}$, let
$\rho = {\rm inc}\, \Pi_{r}^1 v - \Pi_{r-2}^2 {\rm inc}\, v \in {U}_{r-2}^2(T^{wf})$. To prove that
\eqref{com:inc} holds, we need to show that $\rho$ vanishes on the DOFs \eqref{dof:U2} in Lemma \ref{lem:dofu2}.
By using \eqref{U2:dofb} on ${\rm inc}\, v$, we have
\begin{align}\label{eqn:ABC321A}
\int_{\scriptscriptstyle{F}} \rho_{nn} \kappa = \int_{\scriptscriptstyle{F}} [{\rm inc}\, (\Pi_{r}^1 v - v)]_{nn} \kappa,\qquad \forall \kappa\in V_{r-2}^2(F^{ct}).
\end{align}
From \eqref{U1:dofe}, we have that the right-hand side of \eqref{eqn:ABC321A}
vanishes for $\kappa \in V_{r-2}^2(F^{ct})/\mathcal{P}_{1}(F)$.
With \eqref{eqn:incF} of Lemma~\ref{lem:incFiden}, we have for any $\kappa_1 \in \mathcal{P}_{1}(F)$,
\begin{equation} \label{eqn:comm~incnn~2}
\int_{\scriptscriptstyle{F}} \rho_{nn} \kappa_1 = \int_{\partial F} ({\rm curl}_{\scriptscriptstyle{F}} (\Pi_{r}^1 v - v)_{{\scriptscriptstyle{F\!F}}})t \kappa_1 + \int_{\partial F} (\Pi_{r}^1 v - v)_{{\scriptscriptstyle{F\!F}}} t \cdot ({\rm rot}_{\scriptscriptstyle{F}} \kappa_1)'.
\end{equation}
By~\eqref{id4},
${\rm curl}_{\scriptscriptstyle{F}} ( \Pi_{r}^1 v - v)_{{\scriptscriptstyle{F\!F}}} t\, \kappa_1 = [{\rm curl} ( \Pi_{r}^1 v -
v)']_{{\F n}} t \,\kappa_1 = {\rm curl} ( \Pi_{r}^1 v - v)': \kappa_1 n t',$ so the
first term on the right-hand side of~\eqref{eqn:comm~incnn~2} vanishes by
\eqref{U1:dofc}. The last term in~\eqref{eqn:comm~incnn~2} also
vanishes because
\begin{align*}
\int_{\partial F} ( \Pi_{r}^1 v - v)_{\scriptscriptstyle{F\!F}} t \cdot ({\rm rot}_{\scriptscriptstyle{F}} \kappa_1)' & = \int_{\partial F} Q( \Pi_{r}^1 v - v) Q t
\cdot ({\rm rot}_{\scriptscriptstyle{F}} \kappa_1)' = \int_{\partial F} ( \Pi_{r}^1 v - v) Q t \cdot Q({\rm rot}_{\scriptscriptstyle{F}} \kappa_1)' \\
&=\int_{\partial F} (\Pi_{r}^1 v - v) : {\rm sym}(Q({\rm rot}_{\scriptscriptstyle{F}} \kappa_1)' t)=0,
\end{align*}
where we used \eqref{U1:dofb} in the last equality.
Thus, the right-hand side of \eqref{eqn:comm~incnn~2} vanishes,
and therefore the right-hand side of \eqref{eqn:ABC321A} vanishes, i.e., the DOFs \eqref{U2:dofb} vanish for $\rho$.
Next using \eqref{U2:dofc} we have
\begin{align}
\label{eqn:ABC321B}
\int_{\scriptscriptstyle{F}} \rho_{{n\F}} \cdot \kappa = \int_{\scriptscriptstyle{F}} [{\rm inc}\, (\Pi_{r}^1 v - v)]_{{n\F}} \cdot \kappa,\qquad \forall \kappa \in V_{{\rm div},r-2}^1(F^{ct}).
\end{align}
The DOFs \eqref{U1:doff} imply the right-hand side of \eqref{eqn:ABC321B} vanishes for all $\kappa \in V_{{\rm div}, r-2}^1(F^{ct})/\mathsf{R}(F)$.
Considering $\kappa\in \mathsf{R}(F)$ in \eqref{eqn:ABC321B}, we may conduct a similar argument as above,
but now using \eqref{eqn:incfn} of Lemma~\ref{lem:inciden}, to conclude the right-hand side of \eqref{eqn:ABC321B} vanishes.
Thus, we conclude that \eqref{U2:dofc} vanishes for $\rho$.
In addition, note that \eqref{U1:dofd} and \eqref{U2:dofa} imply that the DOFs \eqref{U2:dofa} vanish for $\rho$. Finally, the remaining DOFs of \eqref{U2:dofd} and \eqref{U2:dofe} applied to $\rho$ also vanish, thus leading to \eqref{com:inc}.
(iii) Proof of \eqref{com:div}: Given $w \in C^{\infty}(\Bar{T})\otimes \mathbb{S}$, let $\rho = {\rm div}\, \Pi_{r-2}^2 w - \Pi_{r-3}^3 {\rm div}\, w \in {U}_{r-3}^3(T^{wf})$. To prove that \eqref{com:div} holds, we will show that $\rho$ vanishes on the DOFs \eqref{dof:U3} in Lemma \ref{lem:dofu3}. Using \eqref{U2:dofd} and \eqref{U3:dofb}, we have for any $\kappa \in \mathring{U}_{r-3}^3(T^{wf})$,
\begin{align*}
\int_{T} \rho \cdot \kappa = \int_T ({\rm div}\, \Pi_{r-2}^2 w - {\rm div}\, w) \cdot \kappa = \int_T ({\rm div}\, w -{\rm div}\, w) \cdot \kappa = 0.
\end{align*}
For $\kappa \in \mathsf{R}$, we find
\begin{align*}
\int_{T} \rho \cdot \kappa & = \int_T ({\rm div}\, \Pi_{r-2}^2 w - {\rm div}\, w) \cdot \kappa && \text{by~} \eqref{U3:dofa}\\
&= \int_{\partial T} (\Pi_{r-2}^2 w-w)n_{\scriptscriptstyle{F}} \cdot \kappa\\
& = \sum\limits_{F \in \Delta_2(T)} \int_{F} (\Pi_{r-2}^2 w - w)_{nn} (\kappa\cdot n_F) -\int_{F} (\Pi_{r-2}^2 w - w)_{{n\F}} \cdot \kappa \\
& = 0 && \text{by~} \eqref{U2:dofb} \text{~and~} \eqref{U2:dofc}.
\end{align*}
Thus, $\rho =0$, and so
the commuting property \eqref{com:div} is satisfied.
\end{proof}
\section{Global complexes} \label{sec:global}
In this section, we construct the discrete elasticity complex globally by putting the local spaces together. Recall that $\Omega\subset \mathbb{R}^3$ is a contractible
polyhedral domain, and $\calT^{wf}_h$ is the Worsey-Farin refinement of the mesh $\mathcal{T}_h$ on $\Omega$.
We first present below the global exact de~Rham complexes on Worsey-Farin splits which are needed to construct elasticity complexes; for more details, see \cite[Section 6]{guzman2022exact}:
\begin{subequations}
\begin{equation} \label{eqn:global~seq1}
0 \rightarrow {\mathcal{S}}_{r}^0(\calT^{wf}_h) \xrightarrow{{\rm grad}} \mathcal{L}_{r-1}^1(\calT^{wf}_h) \xrightarrow{{\rm curl}\,} {\mathscr{V}}_{r-2}^2(\calT^{wf}_h) \xrightarrow{{\rm div}\,} {V}_{r-3}^3(\calT^{wf}_h) \rightarrow 0,
\end{equation}
\begin{equation} \label{eqn:global~seq2}
0 \rightarrow {\mathcal{S}}_{r}^0(\calT^{wf}_h) \xrightarrow{{\rm grad}} {\mathcal{S}}_{r-1}^1(\calT^{wf}_h) \xrightarrow{{\rm curl}\,} \mathcal{L}_{r-2}^2(\calT^{wf}_h) \xrightarrow{{\rm div}\,} {\mathscr{V}}_{r-3}^3(\calT^{wf}_h) \rightarrow 0,
\end{equation}
\end{subequations}
where the spaces involved are defined as follows:
\begin{align*}
\mathcal{S}_r^0(\calT^{wf}_h) &= \{ q \in C^1(\Omega) : q|_T \in S_r^0(T^{wf}), \, \forall T \in \mathcal{T}_h\}, \\
\mathcal{S}_{r-1}^1(\calT^{wf}_h) &= \{ v \in [C(\Omega)]^3 : {\rm curl} v \in [C(\Omega)]^3, \, v|_T \in S_{r-1}^1(T^{wf}) \, \forall T \in \mathcal{T}_h\}, \\
\mathcal{L}_{r-1}^1(\calT^{wf}_h) &= \{ v \in [C(\Omega)]^3 : v|_T \in \mathsf{L}_{r-1}^1(T^{wf}), \, \forall T \in \mathcal{T}_h\}, \\
\mathcal{L}_{r-2}^2(\calT^{wf}_h) &= \{ w \in [C(\Omega)]^3 : w|_T \in \mathsf{L}_{r-2}^2(T^{wf}), \, \forall T \in \mathcal{T}_h\}, \\
\mathscr{V}_{r-2}^2(\calT^{wf}_h) &= \{w \in H({\rm div} ; \Omega) : w|_T \in V_{r-2}^2(T^{wf}), \, \forall T \in \mathcal{T}_h, \\
& \qquad {\theta_e(w \cdot t) =0}, \, \forall e \in \mathcal{E}(\calT^{wf}_h)\}, \\
\mathscr{V}_{r-3}^3(\calT^{wf}_h) &= \{p \in L^2(\Omega) : p|_T \in V_{r-3}^3(T^{wf}), \, \forall T \in \mathcal{T}_h, \, \theta_e(p) = 0, \, \forall e \in \mathcal{E}(\calT^{wf}_h)\}, \\
V_{r-3}^3(\calT^{wf}_h) &= \mathcal{P}_{r-3}(\calT^{wf}_h),
\end{align*}
and we recall $\theta_e(\cdot)$ is defined in \eqref{eqn:thetaEDef}.
The global analogue of Theorem \ref{thm:preseq} is now given.
\begin{thm} \label{thm:global~preseq}
The following sequence is exact for any $r \ge 3$:
\begin{equation} \label{eqn:global~preseq}
\begin{bmatrix}
\mathcal{S}_{r+1}^0(\calT^{wf}_h) \otimes \mathbb{V} \\
\mathcal{S}_{r}^0(\calT^{wf}_h) \otimes \mathbb{V}
\end{bmatrix}
\xrightarrow{[{\rm grad}, -{\rm mskw}\,]} \mathcal{S}_{r}^1(\calT^{wf}_h) \otimes \mathbb{V} \xrightarrow{{\rm \bcurl \Xi^{-1} \bcurl}} \mathscr{V}_{r-2}^2(\calT^{wf}_h) \otimes \mathbb{V} \xrightarrow{ \begin{bmatrix}
2{\rm vskw} \\ {\rm div}\, \end{bmatrix}}
\begin{bmatrix}
\mathscr{V}^3_{r-2}(\calT^{wf}_h) \otimes \mathbb{V} \\ V_{r-3}^3(\calT^{wf}_h) \otimes \mathbb{V}
\end{bmatrix}.
\end{equation}
Moreover, the kernel of the first operator is isomorphic to $\mathsf{R}$ and the last operator is surjective.
\end{thm}
\begin{proof}
The result follows from the exactness of the
complexes \eqref{eqn:global~seq1}-- \eqref{eqn:global~seq2},
Proposition \ref{prop:exactpattern},
and the exact same arguments in the proof of Theorem \ref{thm:preseq}.
\end{proof}
The global spaces involved in the elasticity complex,
induced by the local spaces in Section \ref{subsec:ela~seq}
are defined as follows:
\begin{align*}
& U_{r+1}^0(\calT^{wf}_h) = \mathcal{S}_{r+1}^0(\calT^{wf}_h) \otimes \mathbb{V}, && U_{r}^1(\calT^{wf}_h) = \{{\rm sym}(u): u \in \mathcal{S}_{r}^1(\calT^{wf}_h) \otimes \mathbb{V}\},\\
& U_{r-2}^2(\calT^{wf}_h) = \{u \in \mathscr{V}_{r-2}^2(\calT^{wf}_h) \otimes \mathbb{V}: {\rm skw} \, u =0\}, && U_{r-3}^3(\calT^{wf}_h) = V_{r-3}^3(\calT^{wf}_h) \otimes \mathbb{V}.
\end{align*}
\begin{thm}\label{thm:global~charU1}
We have the following equivalent definition of ${U}_{r}^1(\calT^{wf}_h)$:
\begin{align*}
U_{r}^1(\calT^{wf}_h) = \{ u \in H^1(\Omega;\mathbb{S}): & u|_T \in U_{r}^1(T^{wf}), \, \forall T \in \mathcal{T}_h, \\
& ({\rm curl}\, u)' \in {V}_{r-1}^1(\calT^{wf}_h) \otimes \mathbb{V}, {\rm inc}(u) \in \mathscr{V}_{r-2}^2(\calT^{wf}_h) \otimes \mathbb{V} \}.
\end{align*}
\end{thm}
\begin{proof}
This is proved similarly to Theorem \ref{thm:charU1}, using Theorem \ref{thm:global~preseq} in place of Theorem \ref{thm:preseq}.
\end{proof}
The global complex is summarized in the following theorem. Its proof is the
same as that of Theorem \ref{thm:elseq}, with Theorem \ref{thm:global~preseq} in place of Theorem \ref{thm:preseq}.
\begin{thm} \label{thm:global~elseq}
The following sequence of global finite element spaces
\begin{equation}\label{eqn:global~elseq}
0 \rightarrow \mathsf{R} \xrightarrow{\subset} {U}_{r+1}^0(\calT^{wf}_h) \xrightarrow{\varepsilon} {U}_{r}^1(\calT^{wf}_h) \xrightarrow{{\rm inc}\,} {U}_{r-2}^2(\calT^{wf}_h) \xrightarrow{{\rm div}\,} {U}_{r-3}^3(\calT^{wf}_h) \rightarrow 0
\end{equation}
is a discrete elasticity complex and is exact for $r \ge 3$.
\end{thm}
\section{Conclusions}\label{sec:Conclude}
This paper constructed
both local and global finite element elasticity complexes with respect
to three-dimensional Worsey-Farin splits.
A notable
feature of the discrete spaces is the lack of
extrinsic supersmoothness and
accompanying DOFs at vertices of the triangulation.
For example,
the $H({\rm div})$-conforming space does not involve
vertex or edge DOFs and is therefore conducive to hybridization.
These results suggest
that the last two pairs in the sequence \eqref{eqn:global~elseq} may be suitable
to construct mixed finite element methods for three-dimensional elasticity.
However, due to the assumed regularity in Theorem \ref{thm:Commuting},
the result does not automatically yield an inf-sup stable pair.
In the future, we plan to study
the pair $U_{r-2}^2(\calT^{wf}_h)\times U_{r-3}^3(\calT^{wf}_h)$
and to prove inf-sup stability by modifying the commuting projections.
|
{
"arxiv_id": "2302.08711",
"language": "en",
"timestamp": "2023-02-20T02:07:35",
"url": "https://arxiv.org/abs/2302.08711",
"yymm": "2302"
} |
\section{Introduction}
\IEEEPARstart{O}{rthogonal} frequency-division multiplexing (OFDM) modulation is a widely used technique for transmission over mobile wireless channels since it
offers high spectral efficiency whilst providing resilience to frequency-selective fading \cite{farhang2016ofdm}.
One of the key requirements for optimum demodulation of OFDM signals is accurate time synchronization
because a small synchronization error can dramatically degrade the system performance. Hence, a variety of time synchronization methods
have been developed for OFDM systems \cite{zhang2019fine,abdzadeh2019timing, zhang2015maximum, mohebbi2014novel,morelli2007synchronization, park2004blind}. These methods typically consist of two tasks: 1) offset estimation, and 2) offset correction.
The former task relies on statistical signal processing algorithms to obtain an estimate of the \ac{to} incurred due to the lack of a common time reference between the transmitter and receiver \cite{lin2018analysis,ziamaxli2018, abdzadeh2016improved}.
The latter task is a simple compensation of \ac{to} by shifting \cite{morelli2007synchronization}.
Time synchronization for OFDM can be performed using either synchronization-assisting signals, such as pilot signals and synchronization symbols \cite{gul2014timing}, or exploiting some redundant information in the transmitted signal, such as the guard interval redundancy employed to combat the \ac{isi} in frequency-selective
fading channels. The former approach is \ac{da}, and the latter is \ac{nda} time synchronization \cite{nasir2016timing}. \ac{da} time synchronization comes at the cost of reduced spectral efficiency, especially for short burst transmission, which is widely
employed in the \ac{iot} use case of fifth-generation (5G) wireless systems \cite{de20195g}.
Guard intervals are useful for time synchronization in OFDM systems \cite{ven5G}. The guard interval can be in the form of \ac{cp} \cite{wang2015maximum,chin2011blind,van1997ml}, \ac{zp} \cite{wang2011frequency, su2008new, wang2006frames}, and known symbol padding (KSP) \cite{van2012iterative}. The choice of \ac{zp} versus \ac{cp} and KSP depends on several parameters, such as the operating \ac{snr}, delay spread of the fading channel, and coherent versus differential demodulation.
\ac{zp}-OFDM provides great benefits over CP-OFDM and KSP-OFDM in the sense that \cite{giannazpcp} 1) it guarantees symbol recovery regardless of the channel zero locations; hence, it can improve the BER, 2) it enables finite impulse response equalization of channels regardless of the channel nulls, 3) it makes channel estimation and channel tracking easier compared to that of CP-OFDM, and 4) it offers higher power efficiency.
While \ac{da} time synchronization for \ac{zp}-OFDM has been well explored in the literature \cite{nasir2016timing}, \ac{nda} approach has not been extensively investigated. Hence, the focus of this work is on \ac{nda} time synchronization for \ac{zp}-OFDM.
\subsection{Related Work}
For \ac{zp}-OFDM, most existing \ac{da} approaches rely on periodic autocorrelation properties of the received signal induced by the employed training sequences with good autocorrelation properties \cite{li2008synchronization}.
Moreover, most of the \ac{da} approaches developed for \ac{cp}-OFDM can be applied to \ac{zp}-OFDM \cite{chung2017preamble,zhang2011autocorrelation, abdzadeh2012novel,sheng2010novel}.
On the other hand,
to the best of the authors’ knowledge, the few existing \ac{nda} synchronization approaches for \ac{zp}-OFDM have been developed based on change point detection methods \cite{LeNir2010} or cyclostationarity properties in OFDM signal \cite{bolcskei2001blind}. Synchronization algorithms based on change point detection usually employ a transition metric, tracing the ratio of power in two slicing windows corresponding to each \ac{to} hypothesis in the OFDM packet.
These \ac{nda} solutions do not always offer good performance in terms of lock-in probability, i.e., the probability of correct synchronization. Moreover, we show that their performance further drops in doubly-selective (time- and frequency-selective) fading channels.
\vspace{-0.5em}
\subsection{Motivation}
In the presence of perfect time synchronization, \ac{zp}-OFDM offers higher reliability and power efficiency compared to \ac{cp}-OFDM \cite{giannazpcp}.
Hence, \ac{zp}-OFDM can be considered an appealing solution for low-power \ac{iot} networks.
One of the main reasons that \ac{zp}-OFDM has not been extensively used
in practice is the lack of an efficient time synchronization method.
While sub-optimal \ac{cp}-based synchronization algorithms in \ac{cp}-OFDM offer high lock-in probabilities,
there is no synchronization algorithm with comparable performance for \ac{zp}-OFDM \cite{nasir2016timing}.
In addition, the derivation of the optimal \ac{ml} time synchronization for \ac{zp}-OFDM, which would result in a high
lock-in probability,
has remained an open problem. This is mainly because no compact expression currently exists for the joint \ac{pdf} of the received samples.
Moreover, most existing synchronization methods ignore time-selectivity of the fading channel, i.e., the devastating effect of mobility and Doppler spread on time synchronization.
Motivated by the advantages of \ac{zp}-OFDM for the emerging low-power wireless networks, we study the problem of \ac{nda}-\ac{ml} time synchronization for \ac{zp}-OFDM.
In the first step, and for the first time, we derive a closed-form expression for the \ac{pdf} of the received \ac{zp}-OFDM samples in frequency-selective fading channel. We then use the PDF of the samples to approximate their joint PDF. The joint \ac{pdf} is given to a hypothesis testing algorithm to find the \ac{to}. Simulation results show that the proposed NDA-ML time synchronization algorithm significantly outperforms other existing \ac{nda} time synchronization methods. For example, at 5 dB $E_{\rm{b}}/N_0$ for WiMAX SUI-4 channels \cite{LeNir2010}, the
proposed \ac{nda}-ML time synchronization algorithm achieves a lock-in probability of 0.85 while the state of the art \cite{LeNir2010} achieves 0.55.
\vspace{-0.5em}
\subsection{Contributions}
The main contributions of this paper are as follows:
\begin{itemize}
\item A closed-form approximate expression for the joint \ac{pdf} of the received \ac{zp}-OFDM samples in frequency-selective fading channels is derived.
\item The \ac{nda}-\ac{ml} time synchronization for \ac{zp}-OFDM in frequency-selective fading channels is analytically derived.
The proposed method exhibits the following advantages: (i) unlike existing sub-optimal \ac{nda} time synchronization methods, it is applicable to highly selective fading channels, such as the ones in underwater communications and \ac{uwb} communication, (ii) it is valid for doubly-selective fading channels, and (iii) it can be used for both frame and symbol synchronization.
\item A low-complexity implementation of the developed theoretical \ac{nda}-\ac{ml} time synchronization algorithm by using Monte Carlo sampling (MCS) technique and golden-section search is proposed.
\item Complexity analysis of the proposed time synchronization methods is provided.
\end{itemize}
The remainder of the paper is organized as follows: Section~\ref{sec: sys model} introduces the system model. Section~\ref{sec: ml estimator} describes the derivation of the \ac{nda}-\ac{ml} time synchronization. In Section~\ref{sec: importance samp}, a practical implementation of the proposed time synchronization algorithm by employing MCS technique and golden-section search is presented. Simulation results are provided in Section~\ref{simmp}, and conclusions are drawn in Section~\ref{sec: conclu}.
\textit{Notations}: Throughout this paper, we use bold lowercase and
bold uppercase letters to show
column vectors and matrices, respectively. The symbols $(\cdot)^{*}$, $(\cdot)^{\rm{T}}$, $|\cdot|$, and $\lfloor{\cdot}\rfloor$ denote conjugate, transpose, absolute value, and
floor function, respectively. $\mathbb{E}\{\cdot\}$ denotes
the statistical expectation, and $\Re\{\cdot\}$ and $\Im\{\cdot\}$ represent the real and
imaginary parts, respectively. The subscripts
${\rm{I}}$ and ${\rm{Q}}$ denote the in-phase and quadrature components of a variable. The symbols $\bigcap$ and $\bigcup$ denote the set intersection and union operands, respectively.
\section{System Model} \label{sec: sys model}
We consider a ZP-OFDM system in frequency-selective fading channel. Let $\{x_{n,k}\}_{k=0}^{n_{\rm{x}}-1}$, $\mathbb{E}\{|x_{n,k}|^2\}= \sigma^2_{\rm{x}}$, be the $n_{\rm{x}}$ complex data to be transmitted in the $n$-th OFDM symbol. The OFDM modulated baseband signal is given by \cite{li2006orthogonal,hwang2008ofdm}
\begin{align}\label{477www131314113}
x_n(t)=\sum_{k=0}^{n_{\rm{x}}-1}x_{n,k} e^{\frac{j2\pi k t}{T_{\rm{x}}}},\,\,\,\,\,\,\,\,\,\,\ 0 \le t \le T_{\rm{x}},
\end{align}
where $T_{\rm{x}}$ and $W\triangleq n_{\rm{x}}/T_{\rm{x}}$
are the OFDM symbol duration and the channel bandwidth, respectively.
To avoid \ac{isi}, a zero-padding guard interval of length $T_{\rm{z}}$ is added to each OFDM symbol. Hence, $x_n(t)$ is extended into $s_n(t)$ as
\begin{align}\label{cp_add}
s_n(t)=
\begin{cases}
x_n(t) \,\,\,\,\,\,\,\,\,\,\,\ 0 \le t < T_{\rm{x}} \\
0 \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, T_{\rm{x}} \le t < T_{\rm{x}}+T_{\rm{z}}.
\end{cases}
\end{align}
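For concreteness, the following minimal sketch (not part of the original manuscript) generates the $T_{\rm{sa}}$-spaced samples of one zero-padded OFDM symbol according to \eqref{477www131314113} and \eqref{cp_add}; the parameter values ($n_{\rm{x}}=128$, $n_{\rm{z}}=15$) and the Gaussian constellation are assumptions chosen only for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_x, n_z = 128, 15              # assumed number of subcarriers and padded zeros
sigma_x2 = 1.0                  # E{|x_{n,k}|^2}

# complex data symbols on the n_x subcarriers (circularly-symmetric Gaussian here)
x_nk = np.sqrt(sigma_x2 / 2) * (rng.standard_normal(n_x)
                                + 1j * rng.standard_normal(n_x))

# T_sa-spaced samples of x_n(t): x_n[m] = sum_k x_{n,k} exp(j 2 pi k m / n_x),
# which equals n_x times the inverse DFT of the subcarrier symbols
x_n = n_x * np.fft.ifft(x_nk)

# zero-padding guard interval of n_z samples: s_n has n_s = n_x + n_z samples
s_n = np.concatenate([x_n, np.zeros(n_z, dtype=complex)])
\end{verbatim}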
The OFDM signal in \eqref{cp_add} propagates through a multi-path fading channel with the equivalent baseband impulse response as follows
\begin{align}
h(\tau)=\sum\limits_i {}\alpha_i \delta (\tau-\tau_i),
\end{align}
where $\alpha_i \in \mathbb{C}$. The delay spread of the channel in the ensemble sense is $\tau_{\rm{d}}$ where
$\mathbb{E}\{|\alpha_i|^2\}=0$ for $\tau_i > \tau_{\rm{d}}$.
When the transmitter and receiver are synchronized and there is no \ac{isi}, i.e. $T_{\rm{z}} \ge \tau_{\rm{d}}$,
the complex baseband received signal sampled at multiples of $T_{\rm{sa}} \triangleq 1/W$ is given by
{
\begin{align}\label{uiomoer21}
y_{n}[m]=\sum_{l=0}^{n_{\rm{h}}-1} h[nn_{\rm{s}}+m;l] s_{n}[m-l]+w_n[m],
\end{align}}
$m=0,1,\dots,n_{\rm{s}}-1$, where $n_{\rm{s}}=\lfloor(T_{\rm{x}}+T_{\rm{z}})/T_{\rm{sa}}\rfloor$, $s_n[m]\triangleq s_n(mT_{\rm{sa}})$,
\begin{align}
h[l]=\sum\limits_i {} \alpha_i
g[l-\tau_iW],
\end{align}
$l=0,1,\dots,n_{\rm{h}}-1$,
$n_{\rm{h}} \triangleq \lfloor\tau_{\rm{d}}/T_{\rm{sa}}\rfloor$, $g[l] \triangleq g(lT_{\rm{sa}})$,
$g(t) \triangleq g_{\rm{Tx}}(t)\circledast g_{\rm{Rx}}(t)$ with
$g_{\rm{Tx}}(t)$ and $g_{\rm{Rx}}(t)$ as transmit and receive filters, respectively. Also, $w_n[m] \sim \mathcal{CN}(0,\sigma^2_{\rm{w}})$ is the \ac{awgn}.
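As a sanity check of the model in \eqref{uiomoer21}, the sketch below (an illustration, not the authors' code) passes one zero-padded symbol through a single time-invariant realization of an $n_{\rm{h}}$-tap Rayleigh channel and adds \ac{awgn}; the exponential delay profile and the noise variance are assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_x, n_z, n_h = 128, 15, 10
n_s = n_x + n_z

# time-domain OFDM samples modeled as i.i.d. complex Gaussians, then zero-padded
x_n = (rng.standard_normal(n_x) + 1j * rng.standard_normal(n_x)) / np.sqrt(2)
s_n = np.concatenate([x_n, np.zeros(n_z, dtype=complex)])

# assumed exponential delay profile, normalized to unit total power
sigma_h2 = np.exp(-0.5 * np.arange(n_h))
sigma_h2 /= sigma_h2.sum()
h = np.sqrt(sigma_h2 / 2) * (rng.standard_normal(n_h)
                             + 1j * rng.standard_normal(n_h))

# y_0[m] = sum_l h[l] s_0[m-l] + w_0[m]; linear convolution truncated to n_s samples
sigma_w2 = 0.1
w = np.sqrt(sigma_w2 / 2) * (rng.standard_normal(n_s)
                             + 1j * rng.standard_normal(n_s))
y_0 = np.convolve(h, s_n)[:n_s] + w
\end{verbatim}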
We consider the wide-sense stationary uncorrelated scattering (WSSUS) assumption so that the channel coefficients from different delay taps are independent. The channel taps $h[l]$, $l=0,1,\dots,n_{\rm{h}}-1$, are modeled as statistically independent zero-mean complex Gaussian random variables (Rayleigh fading) with the delay profile
\begin{align}\label{7u8i0000}
\mathbb{E}{\{}h[l]h^*[l-m]{\}}=\sigma_{{{\rm{h}}_l}}^2\delta[m],
\end{align}
$l=0,1,\dots, n_{\rm{h}}-1$, where
\begin{align}
\sigma_{{{\rm{h}}_l}}^2=\mathbb{E}\{|h[l]|^2\}=\sum_i \mathbb{E}{\{}|\alpha_i|^2\}{|}g[l-\tau_iW]{|}^2.
\end{align}
It is assumed that the delay profile of the fading channel is known to the receiver.
{\it Remark 1:} The power delay profile (PDP) of an environment is obtained through field measurements by transmitting a short (wide-band) pulse and measuring the received power as a function of delay at various locations in a small area during channel sounding.
These measurements are then averaged over spatial locations to generate a profile of the average received signal power as a function of delay \cite{Delay}.
Theoretically, the PDP is defined as the expectation of the squared impulse response of the channel as
\begin{equation}
\breve{p}(\tau) = \mathbb{E}\{|h(\tau)|^2\}.
\end{equation}
Assuming WSSUS scattering, the PDP is given as
\begin{equation}
\breve{p}(\tau) = \sum_{k=0}^{N}\alpha_k \delta(\tau-\tau_k).
\end{equation}
To determine the number of paths $N$, different criteria for model order selection are available in the existing literature. Estimating the path delays using frequency domain pilots is equivalent to estimating the arrival angle using an antenna array \cite{swindlehurst1998time}. Hence, well-known signal processing techniques, e.g., estimation of signal parameters via rotational invariance techniques (ESPRIT) \cite{roy1989esprit}, can be adopted for this purpose. With the estimates of path delays, the path gains $\alpha_k$, $k=0,1,\dots,N$, can be obtained using typical linear estimators \cite{liu2014channel}.
We define
$n_{\rm{z}} \triangleq \lfloor{T_{\rm{z}}}/{T_{\rm{sa}}}\rfloor$
as the number of padded zeros. Hence, the number of samples per \ac{zp}-OFDM symbol is
$n_{\rm{s}} \triangleq n_{\rm{x}}+n_{\rm{z}}$.
Equation \eqref{uiomoer21} can be written in a vector form as follows
\begin{align}\label{signal}
{\bf y}_{n}=
\begin{cases}
{\bf H} {\bf s}_n +{\bf w}_n \triangleq {\bf v}_n +{\bf w}_n, \,\,\,\,\,\,\,\,\,\,\,\ n \ge 0 \\
{\bf w}_n, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, n<0,
\end{cases}
\end{align}
where
\begin{equation}\label{eq:11}
{\bf{s}}_n \triangleq
\left[ \begin{array}{l}
\,\,\,\,\,\ s_n[0]\\
\,\,\,\,\,\ s_n[1]\\
\,\,\,\,\,\,\,\,\,\ \vdots \\
s_n[{n_{\rm{s}}}-1]
\end{array} \right]=
\left(
\begin{array}{c}
x_{n}(0)\\
\vdots\\
x_n((n_{\rm x}-1)T_{\rm{sa}}) \\
0\\
\vdots\\
0\\
\end{array}
\right)
\setstackgap{L}{1.2\normalbaselineskip}
\vcenter{\hbox{\stackunder[1pt]{%
\left.{\Centerstack{\\ \\}}\right\}n_{\rm{x}}%
}{
\left.{\Centerstack{\\ \\}}\right\}n_{\rm{z}}%
}}},
\end{equation}
\begin{subequations} \label{uuoio}
\begin{align}
{\bf y}_{n} &\triangleq \Big{[}y_n[0]~y_n[1]~\dots~ y_n[n_{\rm{s}}-1] \Big{]}^{\rm{T}},\\ \label{eq:11a}
{\bf w}_n &\triangleq \Big{[}w_n[0]~w_n[1]~\dots~ w_n[n_{\rm{s}}-1] \Big{]}^{\rm{T}},\\
{\bf v}_n &\triangleq \Big{[}v_n[0]~v_n[1]~\dots~ v_n[n_{\rm{s}}-1] \Big{]}^{\rm{T}},
\end{align}
\end{subequations}
${\bf H}$ is an $n_{\rm{s}}\times n_{\rm{s}}$ matrix whose $i$-th ($1\le i \le n_{\rm{s}}$) column is $[ {\bf 0}_{i-1} \ h[nn_{\rm{s}}+i-1;0] \ h[nn_{\rm{s}}+i-1;1] \ \dots \ h[nn_{\rm{s}}+i-1;n_{\rm h}-1] ~ {\bf{0}}_{n_{\rm{s}}-n_{\rm h}-i+1}]^\text{T}$, ${\bf v}_n \triangleq{\bf H} {\bf s}_n$, and ${\bf w}_n$ is the \ac{awgn} vector.
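The structure of ${\bf H}$ can be illustrated with the short sketch below (a time-invariant tap vector is assumed for simplicity; this is our own illustration, not part of the original text): column $i$ carries the channel taps starting at row $i$, so that ${\bf H}{\bf s}_n$ reproduces the truncated linear convolution in \eqref{uiomoer21}.
\begin{verbatim}
import numpy as np

def build_channel_matrix(h, n_s):
    """n_s x n_s channel matrix whose i-th column holds the taps h[0..n_h-1]
    starting at row i, so that (H @ s)[m] = sum_l h[l] s[m-l] for 0 <= m < n_s."""
    n_h = len(h)
    H = np.zeros((n_s, n_s), dtype=complex)
    for i in range(n_s):
        rows = np.arange(i, min(i + n_h, n_s))
        H[rows, i] = h[: rows.size]
    return H
\end{verbatim}
As a quick check, \verb|build_channel_matrix(h, n_s) @ s_n| matches \verb|np.convolve(h, s_n)[:n_s]| for any tap vector \verb|h| in this sketch.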
Based on the \ac{clt}, the $T_{\rm{sa}}$-spaced baseband OFDM samples can be accurately modeled by \ac{iid} zero-mean complex Gaussian random variables as follows \cite{banelli2003theoretical}
\begin{align} \label{909m0p}
x_n(mT_{\rm{sa}}) \sim \mathcal{CN}(0,\sigma^2_{\rm{x}}),
\end{align}
where
\begin{align}
\mathbb{E}\Big{\{}x_n(mT_{\rm{sa}})x_n^*(kT_{\rm{sa}})\Big{\}}=\sigma^2_{\rm{x}} \delta[m-k].
\end{align}
We consider that the transmitter and receiver are not synchronized in time domain, and there is a \ac{to} between them defined as $\tau \triangleq dT_{\rm{sa}}+\epsilon$, where $d$ and $\epsilon$ represent the integer and fractional part of the \ac{to}. The fractional part of the delay appears as phase offset at each sub-carrier. Hence, its effect is compensated when carrier frequency offset is estimated \cite{morelli2007synchronization}. However, estimation of the integer part $d$ is required in order to detect the starting point of the \ac{fft} at the receiver. Estimating the integer part of the \ac{to} is the subject of this paper. We consider that the transmitter does not use pilot or preamble for \ac{to} estimation; thus, the receiver relies on the received samples, noise samples in the zero-guard interval, and the second-order statistics of the fading channel to estimate the \ac{to}.
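To make the role of the integer offset $d$ concrete, the following sketch (a hypothetical helper with names chosen here, not taken from the paper) emulates the receiver's observation window: a negative $d$ places noise-only samples at the head of the window, while a positive $d$ starts the window $d$ samples into the transmitted stream.
\begin{verbatim}
import numpy as np

def observe_with_offset(tx_stream, d, n_obs, sigma_w2, rng):
    """Return n_obs received samples under integer timing offset d.
    d >= 0: the window starts d samples into the transmitted stream.
    d <  0: the first |d| samples of the window contain noise only."""
    y = np.sqrt(sigma_w2 / 2) * (rng.standard_normal(n_obs)
                                 + 1j * rng.standard_normal(n_obs))
    if d >= 0:
        seg = tx_stream[d : d + n_obs]
        y[: seg.size] += seg
    else:
        seg = tx_stream[: max(n_obs + d, 0)]
        y[-d : -d + seg.size] += seg
    return y
\end{verbatim}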
\section{Maximum Likelihood Estimation}
\label{sec: ml estimator}
In this section, we analytically derive the \ac{nda}-ML \ac{to} estimator for ZP-OFDM.
For ease of discussion and presentation, we consider $d\in \{-n_{\rm{s}}+1,\dots,-1,0,1,\dots,n_{\rm{s}}-1\}$. However,
the range of $d$ can be made arbitrarily large.
We first formulate \ac{to} estimation as a
multiple hypothesis testing problem as
${\rm{H}}_{p}: d=p$ where $-n_{\rm{s}}+1 \le p \le n_{\rm{s}}-1$. Since both positive and negative values of \ac{to} are considered, the ML estimator can address frame and OFDM symbol synchronization.
Considering negative \ac{to} enables us to find the onset of the packet and incorporating positive \ac{to} enables us to find the starting point of the ZP-OFDM symbols to efficiently apply \ac{fft} for channel equalization and data detection.
We consider that the OFDM receiver gathers
$N$ observation vectors of length $n_{\rm{s}}$, ${\bf y}_{0}, {\bf y}_{1}, \dots, {\bf y}_{N-1}$, to estimate
the \ac{to}, $d$. The initial step for \ac{ml} derivation is to obtain the joint \ac{pdf} of the observation vectors under the $2n_{\rm{s}}-1$ \ac{to} hypotheses. We denote this joint \ac{pdf} given ${\rm{H}}_d$ by $f_{\rm{Y}}({\bf y}| {\rm{H}}_{d})$, where
\begin{align}\label{eq:14}
{\bf{y}} &= \big{[}y[0] \ y[1] \ \dots \ y[Nn_{\rm{s}}-1]\big{]}^{\rm{T}} \\ \nonumber
&\triangleq \big{[}{\bf{y}}_0^{\rm{T}} \ {\bf{y}}_1^{\rm{T}} \ \dots \ {\bf{y}}_{N-1}^{\rm{T}} \big{]}^{\rm{T}}
\end{align}
with
\begin{align}\label{eq:61}
y[nn_{\rm{s}}+m] \triangleq y_{n}[m]
\end{align}
denoting the $m$-th sample in the $n$-th block. By using the chain rule in probability theory \cite{leon1994probability}, we can write
\begin{align}\label{bnbnm}
f_{\bf Y}({\bf y}| &{\rm{H}}_{d}) \\ \nonumber
&= \prod_{n=0}^{N-1} \prod_{m=0}^{n_{\rm{s}}-1} f\Bigg{(}y_n[m] \,\Big|\, \bigcap_{u=0}^{m-1} y_n[u], \bigcap_{k=0}^{n-1}{\bf{y}}_k, {\rm{H}}_d\Bigg{)}.
\end{align}
To obtain the joint \ac{pdf} in \eqref{bnbnm},
we rely on Theorem \ref{uuonkml}.
\begin{theorem}\label{uuonkml}
The elements of the observation vector $\bf{y}$ in \eqref{eq:14}, irrespective of the value of the \ac{to}, are uncorrelated random variables, i.e., $\mathbb{E}\{y_n[u]y_{\tilde{n}}^*[v]\}=0$ for $(n,u) \neq (\tilde{n},v)$.
\end{theorem}
\begin{proof} See Appendix \ref{proof: theo uncorrelated}.
\end{proof}
Fig. \ref{fig:example} illustrates the scatter plot of the in-phase components of ${y_{10}[100]}$ (i.e., ${y_{{10}_{\rm{I}}}[100]}=\Re\{{y_{10}[100]}\}$) and ${y_{10}[101]}$ (i.e., ${y_{{10}_{\rm{I}}}[101]}=\Re\{{y_{10}[101]}\}$) given hypothesis ${\rm{H}}_0$. As seen, there is no correlation between the two successive samples.
\begin{figure}
\vspace{-1em}
\centering
\includegraphics[height=2.835in]{Fig_correlation_25_dB.pdf}
\caption{Scatter plot of $y_{10_{\rm{I}}}[100]$ and $y_{10_{\rm{I}}}[101]$ given hypothesis ${\rm{H}}_0$ at $25$ dB $E_{\rm{b}}/N_0$ ($n_{\rm{x}}=128, n_{\rm{z}}=15, n_{\rm{h}}=10, n=10$). }
\label{fig:example}%
\label{fig: correlation}
\end{figure}
\begin{figure}
\vspace{-2em}
\centering
\includegraphics[height=2.835in]{Fig_pdf_afterreview.pdf}
\caption{The empirical and analytical \ac{pdf}s of ${y_{n_{\rm{I}}}[1]}$ given hypothesis ${\rm{H}}_0$ at $15$ dB $E_{\rm{b}}/N_0$ ($n_{\rm{x}}=128, n_{\rm{z}}=15, n_{\rm{h}}=10, n=10$). The empirical \ac{pdf} was obtained for $10^6$ samples.}\label{fig: Empirical vs Analytical}
\vspace{-1em}
\end{figure}
According to Theorem \ref{uuonkml}, the observation samples in \eqref{eq:14} are uncorrelated random variables. Also, we can show that the in-phase $y_{n_{{\rm{I}}}}[m]$ and quadrature $y_{n_{{\rm{Q}}}}[m]$ components of the $m$-th received sample from the $n$-th observation vector, i.e.,
$y_n[m]=y_{n_{{\rm{I}}}}[m]+
iy_{n_{{\rm{Q}}}}[m]$,
are uncorrelated random variables.
Although uncorrelated random variables are not necessarily independent, the independence assumption becomes more accurate for the received samples in the case of very fast-varying channels, where
the maximum Doppler spread of the channel approaches infinity. As the maximum Doppler spread grows, the channel taps contributing to one received sample become independent of those contributing to the other samples.
Moreover, most practical algorithms are more sensitive to the distribution of the samples than to their correlation. Thus, we can consider that the observation samples are independent random variables to simplify the signal model \cite{kay2013fundamentals}. Accordingly, we approximate
the joint \ac{pdf} in \eqref{bnbnm} by the multiplication of the first-order \ac{pdf}s as
\begin{align}\label{eq: indepency 1}
\hspace{-0.2em} f_{\bf{Y}}({\bf y}| {\rm{H}}_{d}) &\approx
\prod_{n=0}^{N-1}
\prod_{m=0}^{n_{\rm{s}}-1} f_{Y_n[m]}(y_n[m]| {\rm{H}}_{d})\\ \nonumber
& \approx\prod_{n=0}^{N-1}
\prod_{m=0}^{n_{\rm{s}}-1} f_{Y_{n_{\rm{I}}[m]}}(y_{n_{{\rm{I}}}}[m] | {\rm{H}}_{d} ) f_{Y_{n_{\rm{Q}}}[m]}(y_{n_{{\rm{Q}}}}[m] | {\rm{H}}_{d}),
\end{align}
where $f_{Y_{n_{\rm{I}}}[m]}(\cdot | {\rm{H}}_{d}) $ and $f_{Y_{n_{\rm{Q}}}[m]}(\cdot | {\rm{H}}_{d})$ are the \ac{pdf} of the in-phase and quadrature components of the $m$-th received sample from the $n$-th observation vector.
The accuracy of this assumption is verified in the simulation results.
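Under the independence approximation \eqref{eq: indepency 1}, the log-likelihood is simply a sum of per-sample terms; a minimal sketch follows, where \verb|pdf_i| and \verb|pdf_q| are hypothetical callables returning the marginal \ac{pdf}s of the in-phase and quadrature parts of the $m$-th sample under a given hypothesis.
\begin{verbatim}
import numpy as np

def log_joint_pdf(y, pdf_i, pdf_q):
    """Log of the factorized joint PDF: sum over samples of the log marginal
    PDFs of the in-phase and quadrature components (cf. the approximation above).
    y is a 1-D array of complex received samples."""
    total = 0.0
    for m, sample in enumerate(y):
        total += np.log(pdf_i(m, sample.real)) + np.log(pdf_q(m, sample.imag))
    return float(total)
\end{verbatim}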
\subsection{\ac{pdf} Derivation for Delay Hypothesis ${\rm H}_0$ }
Theorem \ref{theo: pdf y} provides closed-form expressions for the \ac{pdf} of the in-phase and quadrature components of the $m$-th received sample from the $n$-th observation vector given the hypothesis $\mathrm{H}_0$ ($d=0$).
We later show that the conditional \ac{pdf}s given hypothesis $\mathrm{H}_d$, $d \ne 0$, can be easily extracted from these \ac{pdf}s due to
the periodicity incurred by the zero-padded guard interval.
\begin{theorem} \label{theo: pdf y}
The PDF of the in-phase (quadrature) component of the received samples $y_{n_{{\rm{I}}}}[m]$ ($y_{n_{{\rm{Q}}}}[m]$), $m \in \{ m ~|~ 0 \le m \le n_{\rm{s}}-1 ~{\rm{when}}~ n<0 \} \cup \{ m ~|~ n_{\rm{x}}+n_{\rm{h}}-1 \le m \le n_{\rm{s}}-1 ~{\rm{when}}~ n \ge 0\}$ for $d=0$
is given by \footnote {In Theorem \ref{theo: pdf y}, by $n<0$, we mean as if the receiver starts to receive samples before any data is transmitted from the transmitter.
}
\begin{equation} \label{uioooppioio}
\begin{split}
f_{Y_{n_{\rm{I}}}[m]}(y|{\rm{H}}_0) &=
\frac{1}{\sqrt{\pi \sigma^2_{\rm w} } } \exp\Big{(}-\frac{y^2}{\sigma^{2}_{\rm w}} \Big{)}.
\end{split}
\end{equation}
Also, for $d=0$ and $m \in \{ m ~|~ 0 \le m \le n_{{\rm{x}}}+n_{{\rm{h}}}-2~{\rm{when}}~ n \ge 0\}$, we have
\begin{align} \label{eq: pdf}
f_{Y_{n_{\rm{I}}}[m]}& (y|{\rm{H}}_0)=f_{Y_{n_{\rm{I}}}[m]} (-y|{\rm{H}}_0) \\ \nonumber
&=
\Big{(}\prod_{i=a}^{b} \lambda_i\Big{)}^2 \sum_{j=a}^{b} \sum_{n=a}^{b} \frac{ e^{\big{(}\frac{\lambda_j \sigma_{\rm{w}}}{2}\big{)}^2}}{ \prod_{k=a, k \neq j}^{b} (\lambda_k - \lambda_j)} \\ \nonumber
&~ \times \frac{ 1 }{ \prod_{u=a, u \neq j}^{b} (\lambda_u - \lambda_n)} \frac{1}{(\lambda_j + \lambda_n) } \\ \nonumber
& \times 1/2 \Bigg[ e^{-\lambda_j y} \Bigg( 1-\Phi \Big(\frac{\lambda_j \sigma_{\rm{w}}}{2}-\frac{y}{\sigma_{\rm{w}}}\Big) \Bigg) + \\
&~~~~~~~~~~~~~~~~~~~~~~ e^{\lambda_j y} \Bigg(1-\Phi \Big(\frac{\lambda_j \sigma_{\rm{w}}}{2}+\frac{y}{\sigma_{\rm{w}}} \Big) \Bigg) \Bigg],
\end{align}
\noindent where $\lambda_k \triangleq 2/( \sigma_{h_k} \sigma_{{\rm x}})$, $\Phi(x)= {\rm{erf}}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x} e^{-t^2} dt$ denotes the Gaussian error function,
and $a$ and $b$ depend on $m$, and are given as follows
\begin{equation} \label{eq: a b no isi}
\hspace{-1em}
(a,b) =
\begin{cases}
(0,m) &~ 0 \le m \le n_{\rm{h}}-2\\
(0,n_{\rm{h}}-1) &~ n_{\rm{h}}-1 \le m \le n_{\rm{x}}-1\\
(m-n_{\rm{x}}+1,n_{\rm{h}}-1) &~ n_{\rm{x}} \le m \le n_{\rm{x}}+n_{\rm{h}}-2.\\
\end{cases}
\end{equation}
Similar expressions hold for $f_{Y_{n_{\rm{Q}}}[m]}(y|{\rm{H}}_0)$.
\end{theorem}
\begin{proof}
See Appendix \ref{proof: theo}.
\end{proof}
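The piecewise definition of $(a,b)$ in \eqref{eq: a b no isi} simply records which channel taps contribute to sample $m$; a direct transcription (illustrative only, not part of the original text) reads:
\begin{verbatim}
def ab_indices(m, n_x, n_h):
    """Tap-index range (a, b) contributing to sample m, per the piecewise rule
    in the theorem above."""
    if 0 <= m <= n_h - 2:
        return 0, m
    if n_h - 1 <= m <= n_x - 1:
        return 0, n_h - 1
    if n_x <= m <= n_x + n_h - 2:
        return m - n_x + 1, n_h - 1
    raise ValueError("sample index m lies outside the data-plus-ISI region")
\end{verbatim}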
Fig. \ref{fig: Empirical vs Analytical} illustrates the derived \ac{pdf} in \eqref{eq: pdf} for ${y_{n_{\rm{I}}}[1]}$ given hypothesis ${\rm{H}}_0$ at 15 dB $E_{\rm{b}}/N_0$.
For comparison, we also show the empirical \ac{pdf} obtained by the histogram density estimator and the theoretical Gaussian \ac{pdf}. As seen, the derived \ac{pdf} accurately matches the empirical \ac{pdf}. However, it exhibits a larger tail compared to the Gaussian distribution.
In Table \ref{table: pdfs metrics}, we compare the variance, kurtosis, and skewness of the derived \ac{pdf} in \eqref{eq: pdf}, the empirical histogram density estimation of the \ac{pdf}, and the Gaussian distribution. The kurtosis measures the fourth-order central moment of the random variable $Y_{n_{\rm{I}}}[1]$ with mean $\mu\triangleq\mathbb{E}\{Y_{n_{\rm{I}}}[1]|{\rm{H}}_{0}\}$, and the skewness is a measure of the symmetry in the distribution.
Large deviation from the mean yields large values of kurtosis.
For fair comparison, we consider the normalized kurtosis $\kappa$ and skewness $\xi$ defined as
\begin{align} \label{hjiuoiokl}
\kappa \triangleq \frac{\mathbb{E}\{(Y_{n_{\rm{I}}}[1]-\mu)^4\}}
{\mathbb{E}^2\{(Y_{n_{\rm{I}}}[1]-\mu)^2\}},
\end{align}
and
\begin{align}
\xi \triangleq \frac{\mathbb{E}\{(Y_{n_{\rm{I}}}[1]-\mu)^3\}}
{\mathbb{E}^{\frac{3}{2}}\{(Y_{n_{\rm{I}}}[1]-\mu)^2\}}.
\end{align}
To estimate the normalized kurtosis and skewness for the empirical \ac{pdf}, we use
\begin{align} \label{gfdgdgdkk}
\hat{\kappa} \triangleq \frac{\frac{1}{M}\sum_{n=0}^{M-1}({y_{n_{\rm{I}}}[1]}-\hat{\mu})^4}{\big{(}\frac{1}{M}\sum_{n=0}^{M-1}({y_{n_{\rm{I}}}[1]}-\hat{\mu})^2\big{)}^2},
\end{align}
and
\begin{align} \label{gfdgdgdkk1}
{\hat{\xi}} \triangleq \frac{\frac{1}{M}\sum_{n=0}^{M-1}({y_{n_{\rm{I}}}[1]}-\hat{\mu})^3}{\big{(}\frac{1}{M}\sum_{n=0}^{M-1}({y_{n_{\rm{I}}}[1]}-\hat{\mu})^2\big{)}^{\frac{3}
{2}}},
\end{align}
where
\begin{align}
\hat{\mu}=\frac{1}{M}\sum_{n=0}^{M-1}{y_{n_{\rm{I}}}[1]}.
\end{align}
To estimate the kurtosis and skewness in \eqref{gfdgdgdkk} and \eqref{gfdgdgdkk1}, we set $M=10^6$.
As seen in Table \ref{table: pdfs metrics}, the theoretical kurtosis obtained by \eqref{hjiuoiokl} equals the empirical kurtosis in \eqref{gfdgdgdkk} to a precision of $0.01$.
Further, these values are larger than 3; hence,
they indicate a non-Gaussian \ac{pdf}, which, in particular, has a heavier tail.
In Table \ref{table: pdfs metrics}, we also observe that the skewness of both the theoretical and empirical \ac{pdf}s is zero, which implies that the \ac{pdf} is symmetric around its mean.
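The empirical estimators \eqref{gfdgdgdkk} and \eqref{gfdgdgdkk1} translate directly into code; a short sketch (not part of the original manuscript) follows.
\begin{verbatim}
import numpy as np

def normalized_kurtosis_skewness(samples):
    """Empirical normalized kurtosis and skewness of a 1-D sample array,
    following the sample-moment estimators above."""
    mu_hat = samples.mean()
    c = samples - mu_hat
    m2 = np.mean(c ** 2)
    kappa_hat = np.mean(c ** 4) / m2 ** 2
    xi_hat = np.mean(c ** 3) / m2 ** 1.5
    return kappa_hat, xi_hat
\end{verbatim}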
\begin{figure*}
\begin{align}\label{dpos}
\hspace{-2em}{\bf f}_{\bf{Y}}({\bf{y}} | {\rm{H}}_d)& =
\prod_{k=0}^{n_{\rm{s}}-d-1}\tilde{f}_{Y[k+d]}
\big{(}y[k]|{\rm{H}}_0\big{)}
\prod_{m=1}^{N-1} \Bigg{(} \prod_{u=0}^{n_{\rm{s}}-1}
\tilde{f}_{Y[u]}
\big{(}y[mn_{\rm{s}}+u-d]|{\rm{H}}_0
\big{)}\Bigg{)}\prod_{v=Nn_{\rm{s}}-d}^{Nn_{\rm{s}}-1}\tilde{f}_{Y[v-Nn_{\rm{s}}+d]}\big{(}y[v]|{\rm{H}}_0\big{)}
,\,\,\,\,\,\,\, \,\,\ d \ge 0
\end{align}
\begin{align}\label{dpos1}
&{\bf f}_{\bf{Y}}({\bf{y}} | {\rm{H}}_d) =
\prod_{k=0}^{|d|-1}\tilde{f}_{Y[-]}\big{(}y[k]|{\rm{H}}_0\big{)}
\prod_{m=0}^{N-2} \Bigg{(}
\prod_{u=0}^{n_{\rm{s}}-1}\tilde{f}_{Y[u]}
\big{(}y[mn_{\rm{s}}+u-d]|{\rm{H}}_0\big{)}
\Bigg{)}
\prod_{u=(N-1)n_{\rm{s}}-d}^{Nn_{\rm{s}}-1}
\tilde{f}_{Y[u-(N-1)n_{\rm{s}}+d]}\big{(}y[u]|{\rm{H}}_0\big{)},\,\,\,\,\,\,\,\,\,\,\ d<0
\end{align}
\end{figure*}
\subsection{\ac{pdf} Derivation for Delay Hypothesis ${\rm H}_d$, $d \neq 0$ }
In order to obtain the \ac{ml} estimator, we need to derive the joint \ac{pdf} of the received samples given all delay hypotheses ${\rm{H}}_{d}$, $d\in \{-n_{\rm{s}}+1,\dots,-1,0,1,\dots,$ $n_{\rm{s}}-1\}$.
In Theorem \ref{theo: pdf y}, we derived the \ac{pdf} of the received samples given hypothesis ${\rm{H}}_{0}$, i.e., $f_{{\bf Y}}({\bf y}|{\rm{H}}_0)$. In Appendix \ref{proof: theo final}, we prove that
$f_{{\bf Y}}({\bf y}|{\rm{H}}_d)$ can be expressed based on the joint \ac{pdf} of the received samples given ${\rm{H}}_{0}$ as it is shown in \eqref{dpos} and \eqref{dpos1} at the top of this page, where
\begin{align}\label{eq:29}
\tilde{f}_{Y[m]}( y| {\rm{H}}_0 ) &\triangleq
{f}_{Y[nn_{\rm{s}}+m]}( y| {\rm{H}}_0 )=f_{Y_n[m]}( y| {\rm{H}}_0)
\\ \nonumber
& \approx f_{Y_{n_{\rm{I}}}[m]} \big( y_{\rm{I}} | {\rm{H}}_0 \big) f_{Y_{n_{\rm{Q}} }[m]} \big( y_{\rm{Q}} | {\rm{H}}_0 \big) \\ \nonumber
& \triangleq f_{Y_{{\rm{I}}}[m]} \big( y_{\rm{I}} | {\rm{H}}_0 \big)f_{Y_{{\rm{Q}}}[m]} \big( y_{\rm{Q}} | {\rm{H}}_0 \big),
\,\,\,\,\,\,\,\,\,\,\,\ n \ge 0
\end{align}
for $0 \le m \le n_{\rm{s}}-1$, and
\begin{align}\label{eq:30}
\tilde{f}_{Y[-]} (y|{\rm{H}}_0) & \triangleq {f}_{Y[nn_{\rm{s}}+m]}( y| {\rm{H}}_0 )=f_{Y_n[m]}( y| {\rm{H}}_0),
\\ \nonumber
& \approx f_{Y_{n_{\rm{I}}}[m]}(y_{\rm{I}}|{\rm{H}}_0) f_{Y_{n_{\rm{Q}}}[m]}(y_{\rm{Q}}|{\rm{H}}_0) \\ \nonumber
& \triangleq f_{Y_{{\rm{I}}}[-]} \big( y_{\rm{I}} | {\rm{H}}_0 \big)f_{Y_{{\rm{Q}}}[-]} \big( y_{\rm{Q}} | {\rm{H}}_0 \big),
\,\,\,\,\,\,\,\,\,\,\ n<0,
\end{align}
where $y \triangleq y_{\rm{I}}+iy_{\rm{Q}}$, $f_{Y[nn_{\rm{s}}+m]}(\cdot | {\rm{H}_0})$ is the \ac{pdf} of the received sample $y[nn_{\rm{s}}+m] \triangleq y_{n}[m]$ given ${\rm{H}_0}$, and the \ac{pdf} of $\Re\{y[nn_{\rm{s}}+m]\}={y_{n_{\rm{I}}}}[m]$, i.e., $f_{Y_{n_{\rm{I}}}[m]}(\cdot|{\rm{H}}_0)$ and the \ac{pdf} of $\Im\{y[nn_{\rm{s}}+m]\}={y_{n_{\rm{Q}}}}[m]$, i.e., $f_{Y_{n_{\rm{Q}}}[m]}(\cdot|{\rm{H}}_0)$ are given in Theorem \ref{theo: pdf y}.
The relation between the \ac{pdf} of the received samples given ${\rm{H}}_d$ and ${\rm{H}}_0$ is attributed to the
periodicity of the zero-padded guard interval.
To visualize \eqref{dpos} and \eqref{dpos1},
let us consider the PDF vector in \eqref{eq: pdf matrix H0}.
We take the first dashed line from the top of the PDF vector as the reference line. The elements below this reference line in the PDF vector are periodic with period $n_{\rm{s}}$ (see the pattern in Fig. \ref{fig: conv}). The elements above this reference line represent the PDF of the noise samples. For $d\ge0$, the elements of the observation vector $\bf{y}$ are respectively
substituted into the PDF vector starting from the $(d+1)$-th element below the reference line, which results in \eqref{dpos}. Similarly, for $d<0$, the elements of the observation vector $\bf{y}$ are respectively substituted into the PDF vector starting $|d|$ elements above the reference line, which results in \eqref{dpos1}.
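In code, the shifted products \eqref{dpos} and \eqref{dpos1} amount to a simple index map: under ${\rm{H}}_d$, the received sample $y[j]$ is evaluated with the first-order \ac{pdf} of index $(j+d) \bmod n_{\rm{s}}$, except for the first $|d|$ samples when $d<0$, which are noise-only. The helper below is our own compact reading of these expressions, not a verbatim part of the paper.
\begin{verbatim}
def pdf_index(j, d, n_s):
    """Index m of the first-order PDF f_{Y[m]}(. | H_0) applied to sample y[j]
    under hypothesis H_d; None marks a noise-only sample (window head, d < 0)."""
    if d < 0 and j < -d:
        return None
    return (j + d) % n_s
\end{verbatim}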
\subsection{\ac{ml} TO Estimator}
The ML estimation for TO is defined to be the value of $d$ that maximizes $f_{{\bf Y}}({\bf y}|{\rm{H}}_d)$ for ${\bf y}$ fixed, i.e., the value that maximizes the likelihood function. The maximization is performed over the allowable range of $d$. Corollary \ref{theo: final} summarizes the proposed NDA-ML TO estimation for ZP-OFDM.
\begin{corollary} \label{theo: final}
For a \ac{zp}-OFDM system in a doubly-selective fading channel with the received vector $\bf{y}$ in \eqref{eq:14}, the NDA-\ac{ml} \ac{to} estimator is given by
\begin{equation}\label{8989ioio}
\hat{d}^{\text{opt}}= \operatorname*{argmax}_{d \in \{-n_{\rm{s}}+1,..., n_{\rm{s}}-1\}} {\bf f}_{\bf{Y}}({\bf{y}} | {\rm{H}}_d),
\end{equation}
\noindent where ${\bf f}_{\bf{Y}}({\bf{y}} | {\rm{H}}_d)$ is given in \eqref{dpos} and \eqref{dpos1}.
\end{corollary}
The proposed time synchronization method can be extended to ZP-OFDM with non-rectangular pulse shaping, but it requires the modification of the PDF in equation (20). In this case,
the $T_{\rm sa}$-spaced baseband OFDM samples are modeled as independent random variables with different variances, which makes the derivation of the PDF challenging.
Since the maximization in \eqref{8989ioio} does not admit a closed-form solution, a numerical approach is used. Numerical methods employ either an exhaustive
search over $d$ or an iterative maximization of the likelihood function.
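An exhaustive search is the most direct numerical route; a minimal sketch follows, where \verb|loglik_given_d| is a hypothetical hook returning $\log f_{\bf{Y}}({\bf{y}}|{\rm{H}}_d)$ (for instance, assembled from the per-sample \ac{pdf}s via the index map above).
\begin{verbatim}
def ml_timing_offset(y, n_s, loglik_given_d):
    """Exhaustive-search ML estimate of the integer TO over
    d in {-(n_s-1), ..., n_s-1}, as in the corollary above."""
    candidates = range(-(n_s - 1), n_s)
    return max(candidates, key=lambda d: loglik_given_d(y, d))
\end{verbatim}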
\begin{table}[t!]
\vspace{-1em}
\centering
\caption{Statistical Analysis}
\label{table: pdfs metrics}
\resizebox{0.45\textwidth}{!}{
\begin{tabularx}{0.36\textwidth}{lccc}
\toprule
Metric & Empirical & Analytical & Gaussian \\
\midrule
Mean & 4.6250 $\times 10^{-4}$ & 0 & 0 \\
Variance & 0.3206 & 0.3205 & 0.3206 \\
Skewness & 0.0031 & 0 & 0 \\
Kurtosis & 4.5315 & 4.5653 & 3 \\
\bottomrule
\end{tabularx} }
\vspace{-1em}
\end{table}
\section{Low-Complexity Implementation}
\label{sec: importance samp}
The derived \ac{pdf} in \eqref{eq: pdf} is complicated due to the terms involving the Gaussian error function $\Phi(\cdot)$. Hence, practical implementation of the proposed \ac{ml} estimator can be challenging.
An alternative approach with feasible implementation
and lower complexity is to employ MCS techniques to approximate the joint \ac{pdf} of the received samples.
MCS methods benefit from the availability of computer-generated random variables to approximate univariate and multidimensional integrals in Bayesian estimation, inference, and optimization problems.
The key idea behind the MCS method is to generate independent random samples from a \ac{pdf} that is usually known up to a normalizing constant. In the following discussion, we use the MCS integration method to enable an
efficient implementation of the proposed theoretical
NDA-\ac{ml} estimator in \eqref{8989ioio}.
\subsection{MCS Method}
In Appendix \ref{proof: theo}, we prove that $f_{Y_{n_{\rm{I}}}[m]}(y_{\rm{I}} | {\rm{H}}_0)$ in \eqref{eq: pdf} can be expressed in integral form as follows
\begin{equation} \label{eq: importance sampling}
\begin{split}
&f_{Y_{n_{\rm{I}}}[m]}(y_{\rm{I}} | {\rm{H}}_0) = \int_{-\infty}^{\infty} f_{W_{n_{\rm{I}}}[m]}(y_{\rm{I}}-v) f_{V_{n_{\rm{I}}}[m]}(v | {\rm{H}}_0) dv \\
&= \int^{\infty}_{-\infty} \frac{1}{\sqrt{\pi \sigma^2_{\rm{w}}}} \exp\bigg\{-\frac{1}{\sigma^2_{\rm{w}}} \big(y_{\rm{I}} - v\big)^2 \bigg\} f_{V_{n_{\rm{I}}}[m]} (v) dv,
\end{split}
\end{equation}
where
\begin{align} \nonumber
f_{V_{n_{\rm{I}}}[m]}(v | \rm{H}_0)& = \Bigg( \prod_{i=a}^{b} \lambda_i \Bigg)^2 \sum_{j=a}^{b} \sum_{n=a}^{b} \frac{ 1}{ \prod_{k=a, k \neq j}^{b} (\lambda_k - \lambda_j)} \\ \label{pdf_v0}
&~ \times\frac{ 1 }{ \prod_{p=a, p \neq j}^{b} (\lambda_p - \lambda_n)} \frac{e^{-\lambda_j |v|}}{\lambda_j + \lambda_n},
\end{align}
and $f_{W_{n_{\rm{I}}}[m]} (w)$ is the \ac{pdf} of the white Gaussian noise with variance $\sigma_{\rm{w}}^2/2$. Generating samples from random variable $V_{n_{\rm{I}}}[m]$ with \ac{pdf} in
\eqref{pdf_v0} is straightforward since it is expressed as a linear function of independent exponentially distributed random variables with rate parameter $\lambda_k=(\sigma_{{\rm{h}}_k} \sigma_{\rm{x}}/2)^{-1}$ as shown in \eqref{eq: decomposee} of Appendix \ref{proof: theo}.
By using Monte Carlo integration method, we can write
\begin{align}\label{eq: monte c}
\hspace{-1em}
f_{Y_{n_{\rm{I}}}[m]}(y_{\rm{I}} | {\rm{H}}_0) \simeq \frac{1}{L}\sum^{L-1}_{\ell=0} \frac{1}{\sqrt{\pi \sigma^2_{\rm{w}}}} \exp\bigg\{-\frac{1}{\sigma^2_{\rm{w}}} \big(y_{\rm{I}} - v_\ell\big)^2 \bigg\}, \end{align}
where $L$ is the number of Monte Carlo samples, and $\{v_0,v_1, \dots,v_{L-1}\}$ are \ac{iid} samples drawn from $V_{n_{\rm{I}}}[m]$.
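The Monte Carlo approximation in \eqref{eq: monte c} is straightforward to realize; the sketch below is illustrative only, and it draws the samples of $V_{n_{\rm{I}}}[m]$ by direct simulation of the Rayleigh taps and Gaussian data symbols rather than from the exponential decomposition \eqref{pdf_v0} used in the appendix.
\begin{verbatim}
import numpy as np

def draw_v_samples(a, b, sigma_h2, sigma_x2, L, rng):
    """L samples of V_I[m] = Re{ sum_{l=a}^{b} h[l] x_l }, obtained by directly
    simulating the Rayleigh taps and the Gaussian data symbols."""
    taps = np.arange(a, b + 1)
    h = np.sqrt(sigma_h2[taps] / 2) * (rng.standard_normal((L, taps.size))
                                       + 1j * rng.standard_normal((L, taps.size)))
    x = np.sqrt(sigma_x2 / 2) * (rng.standard_normal((L, taps.size))
                                 + 1j * rng.standard_normal((L, taps.size)))
    return np.real(np.sum(h * x, axis=1))

def mcs_pdf_value(y_i, v_samples, sigma_w2):
    """Monte Carlo estimate of f_{Y_I[m]}(y_I | H_0): average of Gaussian kernels
    centred at the drawn samples, as in the approximation above."""
    return np.mean(np.exp(-(y_i - v_samples) ** 2 / sigma_w2)) / np.sqrt(np.pi * sigma_w2)
\end{verbatim}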
By applying Monte Carlo integration to the marginal \ac{pdf}s of the in-phase and quadrature components of
${\bf{y}}$, the joint \ac{pdf} is given by
\begin{align}\nonumber
& f_{\bf{Y}}( {\bf y}| {\rm{H}}_{0})
\hspace{-0.11em} \approx \hspace{-0.21em}\prod_{n=0}^{N-1}
\prod_{m=0}^{n_{{\rm{s}}}-1} \hspace{-0.11em} f_{Y_{n_{\rm{I}}}[m]}(y_{n_{{\rm{I}}}}[m] | {\rm{H}}_{0}) f_{Y_{n_{\rm{Q}}}[m]}(y_{n_{{\rm{Q}}}}[m] | {\rm{H}}_{0}) \\ \nonumber
& \simeq \frac{1}{({\pi \sigma^2_{\rm{w}}})^{Nn_{\rm{s}}}} \prod_{n=0}^{N-1}
\prod_{m=0}^{n_{{\rm{s}}}-1}
\Bigg{(} \sum^{L}_{\ell=1} \exp\Bigg\{\frac{-1}{\sigma^2_{\rm{w}}} \big(y_{n_{{\rm{I}}}}[m] - v_{{n_{{\rm{I}}}}}^m[\ell]\big)^2 \bigg\} \\ \label{eq: monte c}
& \hspace{0.5cm}\times
\sum^{L}_{\ell=1} \exp\Bigg\{\frac{-1}{\sigma^2_{\rm{w}}} \big(y_{n_{{\rm{Q}}}}[m] - v_{{n_{{\rm{Q}}}}}^m[\ell]\big)^2 \bigg\}\Bigg{)},
\end{align}
where $v_{{n_{{\rm{I}}}}}^m[\ell]$ and $v_{{n_{{\rm{Q}}}}}^m[\ell]$
are \ac{iid} values drawn from random variables with \ac{pdf}s $f_{V_{n_{\rm{I}}}[m]} (v)$ and $f_{V_{n_{\rm{Q}}}[m]} (v)$, respectively.
By using \eqref{eq: monte c}, we can design the MCS implementation of the theoretical \ac{nda}-ML \ac{to} estimator in \eqref{8989ioio} as shown in
Fig. \ref{fig: gamma block}.
\begin{figure*}[t]
\centering
\tikzset{every picture/.style={line width=0.75pt}}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw [line width=0.75] (106.48,333.63) -- (106.25,315.32) -- (238.62,315.18) -- (238.86,333.49) -- cycle ;
\draw [line width=0.75] (160.33,333.74) -- (160.1,316.48) ;
\draw [line width=0.75] (192.49,333.64) -- (192.41,327.22) -- (192.27,316.38) ;
\draw [line width=0.75] (377.41,260) -- (418.12,260) -- (418.12,283.27) -- (377.41,283.27) -- cycle ;
\draw [line width=0.75] (312.33,259.04) -- (345.34,259.04) -- (345.34,282.94) -- (312.33,282.94) -- cycle ;
\draw [line width=0.75] (312.33,315.72) -- (345.34,315.72) -- (345.34,339.62) -- (312.33,339.62) -- cycle ;
\draw [line width=0.75] (312.33,365.63) -- (345.34,365.63) -- (345.34,389.53) -- (312.33,389.53) -- cycle ;
\draw [line width=0.75] (348.86,266.41) -- (378.33,266.83) ;
\draw [shift={(346.86,266.38)}, rotate = 0.81] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (377.41,315.06) -- (418.12,315.06) -- (418.12,338.33) -- (377.41,338.33) -- cycle ;
\draw [line width=0.75] (348.86,319.5) -- (378.33,319.92) ;
\draw [shift={(346.86,319.47)}, rotate = 0.81] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (377.41,361.56) -- (418.12,361.56) -- (418.12,384.83) -- (377.41,384.83) -- cycle ;
\draw [line width=0.75] (348.86,367.28) -- (378.33,367.7) ;
\draw [shift={(346.86,367.25)}, rotate = 0.81] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (262.84,323.29) .. controls (262.84,318.41) and (267.13,314.45) .. (272.42,314.45) .. controls (277.71,314.45) and (282,318.41) .. (282,323.29) .. controls (282,328.18) and (277.71,332.14) .. (272.42,332.14) .. controls (267.13,332.14) and (262.84,328.18) .. (262.84,323.29) -- cycle ; \draw [line width=0.75] (265.65,317.04) -- (279.2,329.55) ; \draw [line width=0.75] (279.2,317.04) -- (265.65,329.55) ;
\draw [line width=0.75] (268.5,313.01) -- (290.02,265.85) ;
\draw [shift={(267.67,314.83)}, rotate = 294.53] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (311.71,266.09) -- (290.02,265.85) ;
\draw [line width=0.75] (311.54,368) -- (290.51,367.9) ;
\draw [line width=0.75] (276.84,332.04) -- (290.51,367.9) ;
\draw [shift={(276.12,330.17)}, rotate = 69.14] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (280.77,318.84) -- (312.33,318.88) ;
\draw [shift={(278.77,318.83)}, rotate = 0.09] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (458.26,262.24) -- (458.23,243.89) ;
\draw [shift={(458.27,264.24)}, rotate = 269.89] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (421.06,267.41) -- (449,267.83) ;
\draw [shift={(419.06,267.38)}, rotate = 0.85] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (421.06,320.5) -- (449,320.92) ;
\draw [shift={(419.06,320.47)}, rotate = 0.85] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (421.06,368.28) -- (449,368.7) ;
\draw [shift={(419.06,368.25)}, rotate = 0.85] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (456.84,314.15) -- (456.81,298.17) ;
\draw [shift={(456.84,316.15)}, rotate = 269.89] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (456.84,361.36) -- (456.81,345.38) ;
\draw [shift={(456.84,363.36)}, rotate = 269.89] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (242.37,323.97) -- (262.84,323.29) ;
\draw [shift={(240.37,324.03)}, rotate = 358.12] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (53.6,288.2) -- (87.31,288.2) -- (87.31,360.9) -- (53.6,360.9) -- cycle ;
\draw [line width=0.75] (84.79,199.13) -- (84.54,178.58) -- (231.88,178.43) -- (232.13,198.98) -- cycle ;
\draw [line width=0.75] (114.72,198.51) -- (114.49,179.14) ;
\draw [line width=0.75] (144.7,198.48) -- (144.46,179.11) ;
\draw [line width=0.75] (171.49,198.41) -- (171.4,191.21) -- (171.25,179.04) ;
\draw [line width=0.75] (12.1,153.49) -- (12.1,188.64) ;
\draw [line width=0.75] (45.15,188.89) -- (80.73,189.17) ;
\draw [shift={(82.73,189.19)}, rotate = 180.45] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (12.1,162.85) -- (23.42,153.2) ;
\draw [line width=0.75] (12.1,162.85) -- (3.5,153.2) ;
\draw [line width=0.75] (12.1,188.64) -- (33.38,188.6) ;
\draw [line width=0.75] (33.38,188.6) -- (45.8,179.54) ;
\draw [shift={(47.42,178.36)}, rotate = 503.89] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (204.63,213.75) -- (204.5,199.72) ;
\draw [line width=0.75] (204.63,213.75) -- (460.46,209.13) ;
\draw [shift={(462.46,209.09)}, rotate = 538.97] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (96,177.75) -- (96.69,150.24) ;
\draw [line width=0.75] (96.69,150.24) -- (388.7,147.86) ;
\draw [shift={(390.7,147.84)}, rotate = 539.53] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (125.5,179.24) -- (125.5,166.26) ;
\draw [line width=0.75] (125.5,166.26) -- (422.63,165.75) ;
\draw [shift={(424.63,165.75)}, rotate = 539.9] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (331.24,76.73) -- (330.85,53.07) -- (529.92,52.93) -- (530.3,76.59) -- cycle ;
\draw [line width=0.75] (400.33,76.67) -- (401.71,141.44) ;
\draw [shift={(401.76,143.44)}, rotate = 268.78] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (435,77.33) -- (435.67,160.71) ;
\draw [shift={(435.69,162.71)}, rotate = 269.54] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (472.33,78) -- (473.5,195.97) ;
\draw [shift={(473.52,197.97)}, rotate = 269.43] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (601.75,370.5) -- (602,155.74) ;
\draw [line width=0.75] (602,155.74) -- (411,154.83) ;
\draw [line width=0.75] (585.98,173.75) -- (442.33,173.83) ;
\draw [line width=0.75] (587.05,320.75) -- (585.98,173.75) ;
\draw [line width=0.75] (568.35,209.58) -- (483,209.5) ;
\draw [line width=0.75] (568.35,264) -- (568.35,209.58) ;
\draw [line width=0.75] (541.96,264.24) -- (574.42,264) ;
\draw [shift={(539.96,264.25)}, rotate = 359.59] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (542.82,320.35) -- (590.53,320.75) ;
\draw [shift={(540.82,320.34)}, rotate = 0.48] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (542.28,369.68) -- (603.19,370.5) ;
\draw [shift={(540.28,369.65)}, rotate = 0.78] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (503.42,260.83) -- (539.51,260.83) -- (539.51,282.5) -- (503.42,282.5) -- cycle ;
\draw [line width=0.75] (470.33,267.69) -- (502,267.83) ;
\draw [shift={(468.33,267.68)}, rotate = 0.25] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (470.33,320.78) -- (502,320.92) ;
\draw [shift={(468.33,320.77)}, rotate = 0.25] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (470.33,368.56) -- (502,368.7) ;
\draw [shift={(468.33,368.55)}, rotate = 0.25] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (504.99,315.32) -- (541.08,315.32) -- (541.08,336.99) -- (504.99,336.99) -- cycle ;
\draw [line width=0.75] (503.42,362.89) -- (539.51,362.89) -- (539.51,384.56) -- (503.42,384.56) -- cycle ;
\draw [line width=0.75] (89,353.25) -- (133.67,353.5) ;
\draw [shift={(87,353.24)}, rotate = 0.32] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (89,299.36) -- (212.42,298.83) ;
\draw [shift={(87,299.37)}, rotate = 359.76] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (213,314.74) -- (212.42,298.83) ;
\draw [line width=0.75] (133.67,353.5) -- (133.67,334.8) ;
\draw [line width=0.75] (18.97,324.01) -- (52.76,323.58) ;
\draw [shift={(16.97,324.03)}, rotate = 359.27] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (390.7,151.54) .. controls (390.7,146.73) and (395.09,142.83) .. (400.51,142.83) .. controls (405.94,142.83) and (410.33,146.73) .. (410.33,151.54) .. controls (410.33,156.35) and (405.94,160.25) .. (400.51,160.25) .. controls (395.09,160.25) and (390.7,156.35) .. (390.7,151.54) -- cycle ; \draw [line width=0.75] (390.7,151.54) -- (410.33,151.54) ; \draw [line width=0.75] (400.51,142.83) -- (400.51,160.25) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (393.33,76.67) -- (394.71,141.44) ;
\draw [shift={(394.76,143.44)}, rotate = 268.78] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (428,77.33) -- (428.67,160.71) ;
\draw [shift={(428.69,162.71)}, rotate = 269.54] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (465.33,78) -- (466.5,195.97) ;
\draw [shift={(466.52,197.97)}, rotate = 269.43] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (103,177.75) -- (103.68,157.51) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (103.68,157.51) -- (391,155.76) ;
\draw [shift={(393,155.75)}, rotate = 539.65] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (134,177.25) -- (134,172.44) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (134,172.44) -- (421.63,172.25) ;
\draw [shift={(423.63,172.25)}, rotate = 539.96] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (212.63,204.75) -- (212.5,200.22) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (212.63,204.75) -- (459.5,201.77) ;
\draw [shift={(461.5,201.75)}, rotate = 539.31] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (578,202.65) -- (479.67,203.5) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (578,277.91) -- (578,202.65) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (541.67,277.76) -- (582.73,277.91) ;
\draw [shift={(539.67,277.75)}, rotate = 0.21] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (593.88,167.25) -- (441.25,167.65) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (595,330.75) -- (593.88,167.25) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (543.39,330.27) -- (597.37,330.75) ;
\draw [shift={(541.39,330.25)}, rotate = 0.51] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (612.08,381.5) -- (612.33,147.82) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (612.33,147.82) -- (407.82,146.83) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (541.95,380.85) -- (612.08,381.5) ;
\draw [shift={(539.95,380.83)}, rotate = 0.53] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (470.33,275.69) -- (504.33,275.83) ;
\draw [shift={(468.33,275.68)}, rotate = 0.23] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (470.33,328.78) -- (504.33,328.92) ;
\draw [shift={(468.33,328.77)}, rotate = 0.23] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (470.33,376.56) -- (504.33,376.7) ;
\draw [shift={(468.33,376.55)}, rotate = 0.23] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (420.06,276.41) -- (448,276.83) ;
\draw [shift={(418.06,276.38)}, rotate = 0.85] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (420.06,329.5) -- (448,329.92) ;
\draw [shift={(418.06,329.47)}, rotate = 0.85] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (420.06,377.28) -- (448,377.7) ;
\draw [shift={(418.06,377.25)}, rotate = 0.85] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (348.03,277.41) -- (377.5,277.83) ;
\draw [shift={(346.03,277.38)}, rotate = 0.81] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (348.03,330.5) -- (377.5,330.92) ;
\draw [shift={(346.03,330.47)}, rotate = 0.81] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (348.03,378.28) -- (377.5,378.7) ;
\draw [shift={(346.03,378.25)}, rotate = 0.81] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (275.97,313.72) -- (293.04,280.17) ;
\draw [shift={(275.07,315.5)}, rotate = 296.96] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (310.92,279.09) -- (293.04,280.17) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (310.75,381) -- (284.58,380.17) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (270.92,337.41) -- (284.58,380.17) ;
\draw [shift={(270.31,335.5)}, rotate = 72.28] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] [dash pattern={on 4.5pt off 4.5pt}] (279.97,330.84) -- (311.54,330.88) ;
\draw [shift={(277.97,330.83)}, rotate = 0.09] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=0.75] (423.7,171.54) .. controls (423.7,166.73) and (428.09,162.83) .. (433.51,162.83) .. controls (438.94,162.83) and (443.33,166.73) .. (443.33,171.54) .. controls (443.33,176.35) and (438.94,180.25) .. (433.51,180.25) .. controls (428.09,180.25) and (423.7,176.35) .. (423.7,171.54) -- cycle ; \draw [line width=0.75] (423.7,171.54) -- (443.33,171.54) ; \draw [line width=0.75] (433.51,162.83) -- (433.51,180.25) ;
\draw [line width=0.75] (461.7,206.54) .. controls (461.7,201.73) and (466.09,197.83) .. (471.51,197.83) .. controls (476.94,197.83) and (481.33,201.73) .. (481.33,206.54) .. controls (481.33,211.35) and (476.94,215.25) .. (471.51,215.25) .. controls (466.09,215.25) and (461.7,211.35) .. (461.7,206.54) -- cycle ; \draw [line width=0.75] (461.7,206.54) -- (481.33,206.54) ; \draw [line width=0.75] (471.51,197.83) -- (471.51,215.25) ;
\draw [line width=0.75] (448.45,272.95) .. controls (448.45,268.14) and (452.85,264.24) .. (458.27,264.24) .. controls (463.69,264.24) and (468.09,268.14) .. (468.09,272.95) .. controls (468.09,277.76) and (463.69,281.66) .. (458.27,281.66) .. controls (452.85,281.66) and (448.45,277.76) .. (448.45,272.95) -- cycle ; \draw [line width=0.75] (448.45,272.95) -- (468.09,272.95) ; \draw [line width=0.75] (458.27,264.24) -- (458.27,281.66) ;
\draw [line width=0.75] (449.45,324.95) .. controls (449.45,320.14) and (453.85,316.24) .. (459.27,316.24) .. controls (464.69,316.24) and (469.09,320.14) .. (469.09,324.95) .. controls (469.09,329.76) and (464.69,333.66) .. (459.27,333.66) .. controls (453.85,333.66) and (449.45,329.76) .. (449.45,324.95) -- cycle ; \draw [line width=0.75] (449.45,324.95) -- (469.09,324.95) ; \draw [line width=0.75] (459.27,316.24) -- (459.27,333.66) ;
\draw [line width=0.75] (449,372.7) .. controls (449,367.89) and (453.4,363.99) .. (458.82,363.99) .. controls (464.24,363.99) and (468.64,367.89) .. (468.64,372.7) .. controls (468.64,377.51) and (464.24,381.41) .. (458.82,381.41) .. controls (453.4,381.41) and (449,377.51) .. (449,372.7) -- cycle ; \draw [line width=0.75] (449,372.7) -- (468.64,372.7) ; \draw [line width=0.75] (458.82,363.99) -- (458.82,381.41) ;
\draw (398.79,179.84) node [font=\huge] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (398.79,187.32) node [font=\huge] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (398.79,195.18) node [font=\huge] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (561.54,308.44) node [font=\huge] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (561.54,300.58) node [font=\huge] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (561.54,293.1) node [font=\huge] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (521.47,270.32) node [font=\footnotesize] {$( \cdot )^{2}$};
\draw (523.03,324.81) node [font=\footnotesize] {$( \cdot )^{2}$};
\draw (521.47,372.38) node [font=\footnotesize] {$( \cdot )^{2}$};
\draw (185.32,324.26) node [font=\huge,rotate=-269.79] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (176.72,324.26) node [font=\huge,rotate=-269.79] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (167.26,324.26) node [font=\huge,rotate=-269.79] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (328.75,327.11) node {$\sum $};
\draw (328.75,377.02) node {$\sum $};
\draw (328.75,270.44) node {$\sum $};
\draw (215.12,323.56) node [font=\scriptsize] {$\ z[ n_{s} -1]$};
\draw (132.44,323.69) node [font=\scriptsize,rotate=-0.07] {$\ z[ -n_{s} +1]$};
\draw (96.73,330.98) node [font=\huge] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (96.73,323.12) node [font=\huge] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (96.73,315.64) node [font=\huge] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (95.68,188.42) node [font=\scriptsize,rotate=-0.07] {$\ y[ 0]$};
\draw (127.95,187.39) node [font=\scriptsize,rotate=-358.27] {$\ y[ 1]$};
\draw (200.43,187.87) node [font=\scriptsize] {$\ y[ Nn_{s} -1]$};
\draw (429.82,65.36) node [align=left] {{\fontfamily{ptm}\selectfont Offline Exponential Generator}};
\draw (293.89,127.78) node [font=\scriptsize] {$\overbrace{( y[ 0] ,\ y[ 0] ,\ ...\ ,\ y[ 0]}^{L})$};
\draw (342.99,224.65) node [font=\scriptsize] {$( y[ Nn_{s} -1] ,\ y[ Nn_{s} -1] ,\ ...\ ,\ y[ Nn_{s} -1])$};
\draw (479.55,305.01) node [font=\scriptsize] {$-1/\sigma ^{2}_{\rm{w}}$};
\draw (479.55,352.79) node [font=\scriptsize] {$-1/\sigma ^{2}_{\rm{w}}$};
\draw (479.55,251.92) node [font=\scriptsize] {$-1/\sigma ^{2}_{\rm{w}}$};
\draw (70.45,324.55) node [rotate=-270] [align=left] {{\fontfamily{ptm}\selectfont Argmax ( . )}};
\draw (7.73,317.4) node {$\hat{d}$};
\draw (440.54,109.69) node [font=\huge,rotate=-90] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (448.74,109.69) node [font=\huge,rotate=-90] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (456.55,109.69) node [font=\huge,rotate=-90] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (149.54,188.69) node [font=\huge,rotate=-90] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (157.74,188.69) node [font=\huge,rotate=-90] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (165.55,188.69) node [font=\huge,rotate=-90] [align=left] {{\fontfamily{pcr}\selectfont {\large .}}};
\draw (410,135) node {$-$};
\draw (448.67,159.67) node {$-$};
\draw (485.33,193.33) node {$-$};
\draw (397.76,271.64) node [font=\footnotesize] {${\rm{exp}}( \cdot )$};
\draw (397.76,326.7) node [font=\footnotesize] {${\rm{exp}}( \cdot )$};
\draw (397.76,373.2) node [font=\footnotesize] {${\rm{exp}}( \cdot )$};
\draw (319.82,100.06) node [font=\scriptsize] {$( v_{0}[ 0] ,\ v_{0}[ 1] ,\ ...\ ,\ v_{0}[ L-1])$};
\draw (579.82,100.06) node [font=\scriptsize] {$( v_{Nn_{s} -1}[ 0] ,\ v_{Nn_{s} -1}[ 1] ,\ ...\ ,\ v_{Nn_{s} -1}[ L-1])$};
\draw (310.32,184.98) node [font=\scriptsize] {$( y[ 1] ,\ y[ 1] ,\ ...\ ,\ y[ 1])$};
\end{tikzpicture}
\caption{MCS implementation of the proposed theoretical NDA-ML TO estimator in \eqref{8989ioio} using \eqref{eq: monte c}. The solid and dashed lines represent the in-phase and quadrature components of the received samples, respectively. }\label{fig: gamma block}
\end{figure*}
\subsection{Iterative Likelihood Maximization}
Efficient one-dimensional
iterative search algorithms can be used to avoid exhaustive search and thus reduce the computational complexity of \eqref{8989ioio}.
In an iterative search method, an interval $[d_{\rm{L}}, d_{\rm{U}}]$ containing the true \ac{to} $d^*$ is
established and is then repeatedly reduced on the basis of function evaluations
until the bracket $[d_{\rm{L}}, d_{\rm{U}}]$ becomes sufficiently small.
The minimizer/maximizer can then be taken to be the center of the interval $[d_{\rm{L}}, d_{\rm{U}}]$.
Such methods can be applied to any function, and differentiability of the function is not
required.
The golden-section search method \cite{press2007numerical} is an iterative search method in which iterations are performed
until the desired accuracy in either the maximizer or the maximum value of the
objective function is achieved.
For a strictly unimodal function with an extremum inside the interval, the golden-section search method finds that extremum, while for an interval containing multiple extrema (possibly including the interval boundaries), it converges to one of them. The implementation of the proposed NDA-ML TO estimation with golden-section search is summarized in Algorithm
\ref{Table1xrrrr}. For Algorithm \ref{Table1xrrrr}, we define
\begin{equation}
\mathcal{L}(d) \triangleq
{\rm{Prod}}\Big(f^{(d:d+Nn_{\rm{s}}-1)}_{{\bf Y}}({\bf y}; {\rm{H}}_0)\Big), \end{equation}
where ${\rm{Prod}}\big{(}[z_0,z_1,\dots,z_{u-1}]\big{)} \triangleq \prod_{i=0}^{u-1}z_i$,
and
\begin{align} \label{6yo09}
& {{\bf f}^{(d:q)}_{\bf{Y}}} (\cdot ; {\rm{H}_0} )\triangleq
\\ \nonumber
&\big{[}f_{Y[d]}(\cdot | {\rm{H}_0} )~f_{Y[d+1]}(\cdot | {\rm{H}_0} )~ \dots~ f_{Y[q]}(\cdot | {\rm{H}_0} )\big{]}^T
\end{align}
with $q \ge d$ and $f_{Y[nn_{\rm{s}}+d]}(\cdot | {\rm{H}_0})$ as the \ac{pdf} of the received sample $y[nn_{\rm{s}}+d] \triangleq y_{n}[d]$ given ${\rm{H}_0}$.
\begin{algorithm}[t]
\caption{Golden-section search}\label{Table1xrrrr}
\begin{algorithmic}[1]
\Statex \textbf{Initialization:} $\mathcal{D} \gets \{ -n_{\rm{s}},\dots, n_{\rm{s}}-2\}$, $init \gets -n_{\rm{s}}+1$, $last \gets n_{\rm{s}}-1$, $ratio \gets 0.381966$
\State $c \gets init + \lfloor ratio * (last-init)\rfloor$
\While{$\mathcal{L}(c)<\mathcal{L}(init)$ or $\mathcal{L}(c)<\mathcal{L}(last)$}
\If{$|\mathcal{D}|$=2}
\If{$\mathcal{L}(init)>\mathcal{L}(last)$}
\State \Return $\hat{d}^{\textrm{opt}} \gets init$
\Else
\State \Return $\hat{d}^{\textrm{opt}} \gets last$
\EndIf
\EndIf
\State $\mathcal{D} \gets \mathcal{D}\setminus \{c\}$
\State $c \gets$ Choose a random index from the set
$\mathcal{D}$
\EndWhile
\While{$last-init \ge 4$}
\If{$last-c \ge c-init$}
\State $d \gets c+\lfloor ratio * (last-init)\rfloor$
\If{$\mathcal{L}(d)<\mathcal{L}(c)$}
\State $last \gets d$
\Else
\State $init \gets c$
\State $c \gets d$
\EndIf
\Else
\State $d \gets c-\lfloor ratio*(last-init) \rfloor$
\If{$\mathcal{L}(d)<\mathcal{L}(c)$}
\State $init \gets d$
\Else
\State $last \gets c$
\State $c \gets d$
\EndIf
\EndIf
\EndWhile
\State $\hat{d}^{\textrm{opt}} \gets \operatorname*{argmax}_{i \in \{init,\dots,last\}} \mathcal{L}(i)$
\State \Return $\hat{d}^{\textrm{opt}}$
\end{algorithmic}
\end{algorithm}
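As a complement to Algorithm \ref{Table1xrrrr}, the following is a minimal Python sketch (an illustration, not the reference implementation) of the golden-section bracket-reduction stage. It assumes a user-supplied function \texttt{loglik(d)} that evaluates $\log\mathcal{L}(d)$ for an integer TO hypothesis $d$, e.g., by summing the per-sample log-densities $\log f_{Y[nn_{\rm{s}}+d]}(\cdot|{\rm{H}_0})$ over the observation window; the randomized re-initialization loop at the start of Algorithm \ref{Table1xrrrr} is omitted, so the sketch assumes the likelihood is unimodal over the initial bracket.
\begin{verbatim}
import math

def golden_section_to(loglik, init, last, ratio=0.381966):
    # Bracket-reduction stage of the golden-section search: shrink
    # [init, last] based on likelihood evaluations at interior points.
    c = init + math.floor(ratio * (last - init))
    while last - init >= 4:
        if last - c >= c - init:
            d = c + math.floor(ratio * (last - init))
            if loglik(d) < loglik(c):
                last = d
            else:
                init, c = c, d
        else:
            d = c - math.floor(ratio * (last - init))
            if loglik(d) < loglik(c):
                init = d
            else:
                last, c = c, d
    # Final step: exhaustive search over the remaining small bracket.
    return max(range(init, last + 1), key=loglik)
\end{verbatim}
For clarity the sketch re-evaluates $\mathcal{L}(c)$ at each comparison; a practical implementation would cache these evaluations.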
\subsection{Complexity Analysis}
Here, we compare the complexity of the proposed theoretical and the MCS time synchronization algorithms. In our complexity analysis, a real addition, multiplication, or division is counted as one floating point operation (FLOP). Considering the fact that the number of FLOPs for $H_{\rm{d}}$, $d \ge 0$, is higher than that for $d <0$, the number of FLOPs per TO hypothesis for the theoretical and the MCS time synchronization algorithms are upper bounded by $2N(n_{\rm{x}}+n_{\rm{z}})+(2N-1)n_{\rm{x}}n_{\rm{h}}(8+2u_1+2u_2)$ and $((6+2u_1)(L-1)+1)(n_{\rm{x}}+n_{\rm{z}}-1)(N-1)$, respectively, where $u_1$ and $u_2$ denote the number of FLOPs for the computation of $\exp(\cdot)$ and error function $\Phi(\cdot)$, respectively. Here, $L$ is a trade-off parameter between the accuracy and complexity for the MCS algorithm. That is, increasing $L$ increases both accuracy and complexity of the MCS algorithm.
The ratio of the average estimation time (RAET) versus $n_{\rm{x}}$ for the proposed theoretical NDA-ML and MCS algorithms is shown in Table \ref{table: time analysis}.
Here, RAET is defined as:
\begin{equation}
\text{RAET}= \frac{\text{Average estimation time for MCS}}{\text{Average estimation time for the theoretical algorithm}} .
\end{equation}
The number of Monte Carlo samples is set to $L=10^4$ for the MCS implementation, and the exhaustive search method is used for both algorithms.
As seen, the MCS implementation offers a lower computational complexity than the theoretical \ac{ml} estimator based on Theorem \ref{theo: pdf y}. This complexity reduction is obtained at the expense of an insignificant performance degradation in terms of lock-in probability, as will be shown in the next section.
\begin{table}[t!]
\vspace{-1em}
\centering
\caption{Complexity Analysis}
\label{table: time analysis}
\resizebox{0.4\textwidth}{!}{
\begin{tabularx}{0.3\textwidth}{@{ }l*{4}{C}c@{}}
\toprule
$n_{\rm{x}}$ & 64 & 128 & 256 & 512 & 1024 \\
\midrule
RAET & 0.841 & 0.779 & 0.704 & 0.668 & 0.629 \\
\bottomrule
\end{tabularx} }
\end{table}
\section{Simulations}\label{simmp}
In this section, we evaluate the performance of the proposed \ac{nda}-\ac{ml} time synchronization algorithm through several simulation experiments.
\subsection{Simulation Setup}
We consider a \ac{zp}-OFDM system with 128-QAM modulation in a frequency-selective Rayleigh fading channel.
Unless otherwise mentioned, the number of sub-carriers is $n_{\rm{x}}=128$, the number of zero-padded samples is $n_{\rm{z}}=15$, and
the number of observed OFDM symbols at the receiver is $N=10$.
The sampling time of the ZP-OFDM system at the receiver is $T_{\rm sa}=10^{-6}$s.
An uncorrelated multipath fading channel with $n_{\rm{h}}=10$ taps and maximum delay spread of {$\tau_{\rm{max}}=10 \mu$s} is considered.
The delay profile of the Rayleigh
fading channel in \eqref{7u8i0000} is modeled as an exponential-decay function, i.e., $\sigma^2_{h_l}=\alpha\exp(-\beta l)$, $l=0,1,\dots,n_{\rm{h}}-1$, where $p_{\rm{h}}=\sum_{l=0}^{n_{\rm{h}}-1}\sigma^2_{h_l}=1$, $\alpha=1/2.5244$, and $\beta=0.5$. The maximum Doppler spread of the fading channel is set to $f_{\rm{D}}=5$ Hz.
Without loss of generality, the transmit power is assumed to be {$\sigma_{\rm{x}}^2=1$}, and
the \ac{awgn} is modeled as a zero-mean complex Gaussian random variable with variance $\sigma^2_{\rm{w}}$, which varies according to the value of the \ac{snr} $\gamma \triangleq \sigma_{\rm x}^2 p_{\rm h} / \sigma_{\rm w}^2$. The \ac{to} introduced to the system is modeled as a uniformly distributed integer random variable in the range of $d \in [-30 , \ 30]$. Simulations are evaluated under $10^4$ Monte Carlo realizations, and the number of samples for the MCS implementation of the proposed theoretical NDA-ML algorithm is $L=10^4$. The performance of the proposed algorithms is evaluated in terms of mean squared error (MSE) and lock-in probability.
Here, the lock-in probability is defined as the probability that the estimated TO (given in sampling time) equals the actual TO; that is, any non-zero error is counted as a missed estimation.
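For concreteness, the exponential-decay delay profile can be generated with a few lines of Python (an illustrative sketch of the stated setup only); it also confirms that $\alpha=1/2.5244$ normalizes the total channel power to $p_{\rm{h}}=1$ for $n_{\rm{h}}=10$ and $\beta=0.5$.
\begin{verbatim}
import numpy as np

n_h, beta = 10, 0.5
decay = np.exp(-beta * np.arange(n_h))  # exp(-beta * l), l = 0, ..., n_h - 1
alpha = 1.0 / decay.sum()               # approximately 1/2.5244
pdp = alpha * decay                     # sigma^2_{h_l}, summing to p_h = 1
print(1.0 / alpha, pdp.sum())           # approx. 2.5244 and 1.0
\end{verbatim}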
\subsection{Simulation Results}
The performance of the proposed theoretical NDA-ML algorithm, its MCS implementation, and the current state-of-the-art NDA \ac{to} estimator for \ac{zp}-OFDM, i.e., the transition metric (TM) \cite{LeNir2010}, for different values of $E_{\rm{b}}/N_0$ is shown in Fig.~\ref{fig: snr}. As can be seen, the proposed theoretical algorithm and its MCS implementation outperform the TM algorithm since they maximize the likelihood function, whereas TM is a heuristic algorithm. Moreover, there is a negligible performance gap between the proposed theoretical NDA-ML algorithm and its MCS implementation.
This performance gap can be further reduced by increasing the number of Monte Carlo samples $L$ used for averaging in \eqref{eq: monte c}, at the expense of higher complexity. In Fig.~\ref{fig: snr}, we also illustrate the performance of the sub-optimal time synchronization algorithm in \cite{K2020}, which relies on a Gaussian PDF approximation of the received samples. As can be seen, there is a large gap between the proposed algorithms and the sub-optimal algorithm in \cite{K2020} at low SNR values.
Fig. \ref{fig: dop} illustrates the effect of the maximum Doppler spread (mobility) on the performance of the proposed theoretical NDA-ML algorithm and its MCS implementation. As seen, the lock-in probability increases as the maximum Doppler spread increases. The reason is that the time dynamics of
the channel taps contributing (through convolution) to the received samples become less correlated as the maximum Doppler spread increases.
However, for zero maximum Doppler spread, identical channel taps contribute to the received samples.
Thus, our independence assumption on the received samples becomes more valid for higher values of the maximum Doppler spread. These results reveal that the proposed NDA-ML algorithm can be considered a promising candidate for vehicle-to-vehicle (V2V) communications.
The effect of the number of OFDM symbols $N$, used for time synchronization, on the performance of the
proposed theoretical NDA-ML algorithm, its MCS implementation, and the TM algorithm \cite{LeNir2010} is shown in Fig. \ref{fig: obs}. As expected, the higher $N$, the higher the lock-in probability.
Major improvements in performance occur when the number of OFDM symbols increases from 1 to 10, after which the rate of performance improvement decreases. This is due to the fact that the innovation introduced by each new sample to an ML estimator decreases as the total number of samples (used for estimation) increases.
In Fig. \ref{fig: tap}, the performance of the proposed theoretical NDA-ML TO estimator, its MCS implementation, and the TM estimator \cite{LeNir2010} versus the number of channel taps $n_{\rm{h}}$ for $n_{\rm{z}}=20$ at $15$ dB $E_{\rm{b}}/N_0$ is shown.
As seen, the lock-in probability of the theoretical NDA-ML TO estimator and its MCS implementation degrades as $n_{\rm{h}}$ increases. This is because the sharpness of the likelihood function decreases; the sharpness of the likelihood function determines how accurately we can estimate an unknown parameter.
\begin{figure}
\vspace{-1em}
\centering
\includegraphics[height=2.835in]{fig3_snr.pdf}
\caption{Lock-in probability versus $E_{\rm{b}}/N_0$.}\label{fig: snr}
\end{figure}
\begin{figure}
\vspace{-2.1em}
\centering
\includegraphics[height=2.835in]{fig_doppler.pdf}
\caption{Lock-in probability versus maximum Doppler spread of the fading channel at $15$ dB $E_{\rm{b}}/N_0$.} \label{fig: dop}
\vspace{-1em}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=2.835in]{fig4_obser.pdf}
\caption{Lock-in probability versus the number of observation vectors $N$ at $10$ dB $E_{\rm{b}}/N_0$.}\label{fig: obs}
\end{figure}
\begin{figure}
\vspace{-2em}
\centering
\includegraphics[height=2.835in]{filterlength.pdf}
\caption{Lock-in probability versus the number of channel taps $n_{\rm{h}}$ at $10$ dB $E_{\rm{b}}/N_0$.}\label{fig: tap}
\end{figure}
In Fig.~\ref{fig: hist}, we illustrate the empirical probability mass function (PMF) of the synchronization error for the proposed theoretical and MCS algorithms at 10 dB $E_{\rm{b}}/N_0$. As can be seen, the empirical PMF of the error is not symmetric around zero and is slightly biased towards positive \ac{to}s. Based on the asymptotic properties of MLEs, the bias term approaches zero as $N\rightarrow \infty$. Moreover, we observe that the synchronization error falls in a small interval, i.e., $\{-2,-1,1,2\}$. This means that the proposed algorithms offer low
MSE as shown in Fig. \ref{fig: mse}.
Because of this low MSE, the proposed time-synchronization algorithms can take advantage of low-complexity channel coding to further improve synchronization performance.
The effect of PDP estimation error on the performance of the proposed theoretical and the MCS time synchronization algorithms is shown in Fig.~\ref{fig: sens}. We model the estimated PDP as
\begin{align}
\hat{\sigma}^2_{{\rm{h}}_k} \in {\cal{U}}\Big{[}\sigma^2_{{\rm{h}}_k}-\alpha\sigma^2_{{\rm{h}}_k},\sigma^2_{{\rm{h}}_k}+\alpha\sigma^2_{{\rm{h}}_k}\Big{]},
\end{align}
where $\sigma^2_{{\rm{h}}_k}$, $k=0,1,\dots, n_{\rm{h}}-1$, is the true PDP, and
${\cal{U}}[a,b]$ denotes the uniform distribution in the interval $[a,b]$.
In Fig.~\ref{fig: sens}, we show the lock-in probability versus $\alpha \in [0,1]$ at 10 dB $E_{\rm{b}}/N_0$. As can be seen, the theoretical and the MCS algorithms are robust to
the delay profile estimation error for $\alpha \in [0,1]$ and $\alpha \in [0,0.5]$, respectively.
While the performance of the theoretical algorithm slightly degrades for
$\alpha \in [0.5,1]$, the lock-in probability is still larger than $0.75$.
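The uniform perturbation model above is straightforward to reproduce; the following is a small sketch in which the random seed and the error level are illustrative only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha_err = 0.5                           # relative PDP error level in [0, 1]
pdp_true = np.exp(-0.5 * np.arange(10))
pdp_true /= pdp_true.sum()                # true PDP with unit total power
pdp_hat = rng.uniform(pdp_true * (1 - alpha_err),
                      pdp_true * (1 + alpha_err))
\end{verbatim}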
\begin{figure}
\centering
\includegraphics[height=2.835in]{hist.pdf}
\caption{PMF of the synchronization error for the theoretical and the MCS algorithms at 10 dB $E_{\rm{b}}/N_0$.}\label{fig: hist}
\end{figure}
\begin{figure}
\centering
\vspace{-2em}
\includegraphics[height=2.835in]{MSE_comparison.pdf}
\caption{Mean squared error (MSE) versus SNR for the theoretical and MCS methods.}\label{fig: mse}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=2.835in]{sensitivity.pdf}
\caption{Sensitivity of the proposed time synchronization algorithms to PDP estimation error for $N=10$ at 10 dB $E_{\rm{b}}/N_0$.}\label{fig: sens}
\end{figure}
\section{Conclusions}\label{sec: conclu}
In this paper, for the first time in the literature, the \ac{nda}-\ac{ml} time synchronization for \ac{zp}-OFDM was analytically derived and a feasible solution for its implementation based on MCS technique was proposed.
The obtained time synchronization method can be employed for both frame and symbol synchronization. Moreover,
to achieve the optimal time synchronization method, we obtained a closed-form approximate expression for the distribution of the convolution, i.e., the received convolved signal.
Simulation results verify that the proposed theoretical \ac{nda}-\ac{ml} time synchronization and its MCS implementation can offer high lock-in probabilities even at low SNR values. Also, they are effective in highly time-selective channels with large maximum Doppler spread.
These properties make \ac{zp}-OFDM a promising candidate for low-power IoT networks and V2V communications.
\IEEEpeerreviewmaketitle
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.08614",
"language": "en",
"timestamp": "2023-02-20T02:03:30",
"url": "https://arxiv.org/abs/2302.08614",
"yymm": "2302"
} | \section{Introduction}\label{sec:introduction}
Fast, scalable approximations to posterior distributions have been a staple of Bayesian inference when dealing with big data \citep{graves2011practical,zhou2020variational}, especially when Markov Chain Monte Carlo (MCMC) is too costly.
Approximate Bayesian inference (ABI) is a class of alternative methods that includes the Laplace approximation,
the fully exponential (or improved) Laplace approximation \citep{tierney_kadane_1986}, variational Bayes and related methods,
and expectation propagation \citep{minka_2001}.
These alternative methods are often deterministic, based on approximating the posterior distribution via a standard distribution,
and often have a significant advantage in speed over standard MCMC techniques when working with large datasets.
ABI methods typically perform optimization, usually by minimizing the distance between the approximation and the posterior distribution, in place of using sampling as in standard MCMC methods, and, in this way, they cut down on the majority of computational time.
This major benefit is reflected by the extensive application of ABI methods for inference in a wide range of fields \citep{Blei2017}.
However, this increase in speed clearly comes at the cost of accuracy, as ABI-based approximations can only be as accurate as the distribution used to approximate the posterior.
If the standard distribution used is not a good fit for the posterior, then the approximation can be poor.
In particular, ABI methods mostly base their posterior approximations on the parametric form of a multivariate Gaussian density, in order to take advantage of the asymptotic normality of the posterior distribution guaranteed by the Bernstein-von Mises theorem \citep{Gelman1995}.
While the approximations stemming from these Gaussian-based methods are fast and reasonable, the quality of these approximations will inevitably be questionable when the posterior is not similar in shape to the multivariate Gaussian density.
For example, \cite{tierney_kadane_1986} note that the principal regularity condition required for reasonable Laplace approximations is that the posterior is dominated by a single mode.
Even if the posterior is unimodal, skewness in the posterior density can lead to sub-par fits, as the Gaussian density is not skewed.
This can potentially become a problem when the sample size is small compared to the number of parameters in the model.
For instance, \cite{fong_rue_wakefield_2009} indicate that integrated nested Laplace approximations for generalized linear mixed models tend to be less accurate for binomial data with small sample sizes, which result in skewed posterior distributions.
A recent advancement by \cite{durante2023skewed} addresses this problem by way of a skewed Bernstein-von Mises theorem.
This asymptotic approximation takes the form of a
generalized skew-normal (GSN) distribution \citep{genton2005generalized}.
This results in plug-in-based (using the maximum a posteriori estimate) skew-model (SM) approximations which converge to the posterior distribution faster than those of the classical Bernstein-von Mises theorem, by an order of $1/\sqrt{n}$.
While this method is a very important theoretical contribution and is promising from an accuracy perspective, it is not without its practical drawbacks.
The GSN distribution does not have closed-form expressions for its marginal distributions, nor for any of its moments, necessitating sampling schemes to
approximate these quantities (when needed) and adding to its computational cost.
Under these considerations, this paper aims to improve on conventional Gaussian-based methods for approximating posterior distributions, by taking skewness into account in such a way that posterior inference is kept simple and practical.
Although many skew-normal distributions now exist \citep{genton_2004}, we propose that the posterior density is approximated by the original multivariate skew-normal (MSN) distribution described in-depth by \cite{azzalini_capitanio_2014}, to take advantage of the extensive theory that has already been developed.
In this new method, which we will call skew-normal matching, key statistics of the posterior are initially estimated using existing methods, and the MSN approximation is then constructed by matching these statistics with those of the MSN density.
A number of variants are explored, with each approach offering an intuitive way to take skewness into account when the posterior is not well-behaved.
The performance of each variant is then compared with existing Gaussian approaches and the SM approximation of \cite{durante2023skewed} across a range of simulation and benchmark settings that focus on small and moderate dimensional settings.
The outline of the paper is as follows.
In Section \ref{sec:background}, the multivariate skew-normal distribution is introduced and its existing applications in approximating probability distributions are discussed.
In Section \ref{sec:matching}, a detailed description is given of the skew-normal matching method; four separate matching schemes are covered, with each scheme based on a combination of posterior statistics.
In Section \ref{sec:settings}, the simulation and benchmark settings that the following two sections use are described.
In Section \ref{sec:standalone}, the performance of skew-normal matching as a standalone approximation is compared to that of existing methods, across both simulated data and selected benchmark datasets.
In Section \ref{sec:post-hoc}, post-hoc alternatives of the skew-normal matching method are considered, with performance being evaluated across both simulated data and selected benchmark datasets.
In Section \ref{sec:summary}, the results and future directions are discussed.
Finally, the supplementary material contains derivations and additional plots.
This paragraph summarizes the definitions, conventions, and notation used in this paper.
Let ${\mathcal D}$ denote the set of observed variables, i.e., the data.
Let $\mathcal{S}_+^{p}$ denote the set of all $p\times p$ positive definite matrices.
Let $\phi_p({\vectorfonttwo \theta};{\vectorfonttwo \mu},{\boldsymbol \Sigma})$ denote the density of the ${\mathcal N}_p({\vectorfonttwo \mu},{\boldsymbol \Sigma})$ distribution evaluated at ${\vectorfonttwo \theta}$ with mean ${\vectorfonttwo \mu}\in{\mathbb R}^p$ and covariance ${\boldsymbol \Sigma}\in\mathcal{S}_+^{p}$, and $\Phi$ denote the cumulative distribution function of the standard Gaussian distribution.
The $\odot$ symbol in the exponent denotes the Hadamard power, the operation where all elements in a matrix are raised to the same power.
If ${\matrixfontone A}$ is a square matrix then $\mbox{$\textrm{\textup{dg}}$}({\matrixfontone A})$ is the vector consisting of the diagonal
elements of ${\matrixfontone A}$.
If ${\vectorfontone b}$ is a vector then $\mbox{$\textrm{\textup{diag}}$}({\vectorfontone b})$ is a diagonal matrix with diagonal elements ${\vectorfontone b}$.
Functions applied to vectors are interpreted as being applied element-wise.
\section{The multivariate skew-normal distribution}\label{sec:background}
While many results based on the MSN distribution are presented using the original parameterization by \cite{azzalini_1996}, we choose a common alternate parameterization that simplifies the exposition.
For a $p$-dimensional random vector ${\vectorfonttwo \Theta}$ we will work with the skew-normal density of the form
\begin{align*}
p({\vectorfonttwo \theta})=2\cdot\phi_p({\vectorfonttwo \theta};{\vectorfonttwo \mu},{\boldsymbol \Sigma})\cdot\Phi({\vectorfontone d}^\top({\vectorfonttwo \theta} - {\vectorfonttwo \mu})),
\end{align*}
where ${\vectorfonttwo \mu}\in\mathbb{R}^p$, ${\boldsymbol \Sigma}\in\mathcal{S}_+^{p}$, and ${\vectorfontone d}\in\mathbb{R}^p$ are the location, scale, and skewness parameters respectively.
In this case, we write ${\vectorfonttwo \Theta}\sim\text{SN}_p({\vectorfonttwo \mu},{\boldsymbol \Sigma},{\vectorfontone d})$.
An account of the properties of this distribution is given in detail by \cite{azzalini_capitanio_2014}.
Of importance in this paper are the expectation and variance, which are
\begin{align*}
\mbox{$\textrm{\textup{E}}$}({\vectorfonttwo \Theta}) = {\vectorfonttwo \mu}+\sqrt{\frac{2}{\pi}}{\vectorfonttwo \delta}
\quad\text{and}\quad
\mbox{$\textrm{\textup{var}}$}({\vectorfonttwo \Theta})={\boldsymbol \Sigma} - \frac{2}{\pi} {\vectorfonttwo \delta}\vdelta^\top
\end{align*}
respectively where ${\vectorfonttwo \delta} = {\boldsymbol \Sigma}{\vectorfontone d}/\sqrt{1+{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d}}$.
Given ${\vectorfonttwo \delta}$ and ${\boldsymbol \Sigma}$, the vector ${\vectorfontone d}$ can be recovered via the identity
${\vectorfontone d} = {\boldsymbol \Sigma}^{-1}{\vectorfonttwo \delta}/\sqrt{1-{\vectorfonttwo \delta}^\top{\boldsymbol \Sigma}^{-1}{\vectorfonttwo \delta}}$ leading
to the constraint
\begin{align}\label{eq:constraint}
{\vectorfonttwo \delta}^\top{\boldsymbol \Sigma}^{-1}{\vectorfonttwo \delta} < 1.
\end{align}
The cumulant generating function can be shown to be
$K({\vectorfontone t})=\log2+{\vectorfonttwo \mu}^\top{\vectorfontone t}+{\vectorfontone t}^\top{\boldsymbol \Sigma}{\vectorfontone t}/2 +\log\Phi({\vectorfonttwo \delta}^\top{\vectorfontone t})$,
from which it can be shown that the vector of third-order unmixed central moments (TUM) is given by
\begin{align}\label{eq:msntum}
\text{TUM}({\vectorfonttwo \Theta})=\frac{\sqrt{2}(4-\pi)}{\pi^{3/2}}{\vectorfonttwo \delta}^{\odot3}.
\end{align}
Together, (\ref{eq:constraint}) and (\ref{eq:msntum}) imply a constraint on the allowable `size' of the TUM vector that the MSN distribution can admit.
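The expressions above are easy to evaluate numerically; the following Python sketch (with illustrative parameter values only) computes the mean, covariance, and TUM vector of an $\text{SN}_p({\vectorfonttwo \mu},{\boldsymbol \Sigma},{\vectorfontone d})$ distribution, with ${\vectorfonttwo \delta}$ obtained from $({\boldsymbol \Sigma},{\vectorfontone d})$ as described earlier.
\begin{verbatim}
import numpy as np

def msn_moments(mu, Sigma, d):
    delta = Sigma @ d / np.sqrt(1.0 + d @ Sigma @ d)
    mean = mu + np.sqrt(2.0 / np.pi) * delta
    cov = Sigma - (2.0 / np.pi) * np.outer(delta, delta)
    tum = np.sqrt(2.0) * (4.0 - np.pi) / np.pi ** 1.5 * delta ** 3
    return mean, cov, tum

# Illustrative parameter values only.
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
d = np.array([2.0, -1.0])
print(msn_moments(mu, Sigma, d))
\end{verbatim}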
As a natural extension to the multivariate Gaussian distribution, the MSN distribution lends itself particularly well to approximating probability distributions which are potentially skewed; \cite{azzalini_capitanio_2014} give a brief outline of existing frequentist work using skew-normal approximations in their applications chapter.
In particular, skew-normal approximations to the discrete binomial, negative binomial, and hypergeometric distributions via moment matching have been investigated by \cite{chang2008note}.
Furthermore, unobserved continuous covariates were modelled using the skew-normal distribution by \cite{guolo2008flexible} in the analysis of case-control data.
Additionally, \cite{gupta2003density} have used the skew-normal functional form as the main component of their Edgeworth-type density expansions, in order to account for skewness in the density.
\section{The skew-normal matching method}\label{sec:matching}
In the skew-normal matching method, key statistics of the posterior distribution (for which the density is known) are estimated and then matched with the MSN density to construct a suitable skew-normal posterior approximation.
When the parameter of interest ${\vectorfonttwo \theta}$ has $p$ components, the MSN approximation has a total of $2p+p(p+1)/2$ parameters to vary.
Four matching schemes were devised, each of which matches $2p+p(p+1)/2$ posterior statistics with those of the MSN density, so that there is at most one solution in general.
These are summarized in the following subsections.
\subsection{Moment matching}
In the moment matching (MM) scheme, the mean $\widetilde{{\vectorfonttwo \mu}}\in\mathbb{R}^p$ and covariance matrix ${\matrixfontone C}\in\mathcal{S}_+^{p}$, along with the TUM vector ${\vectorfontone s}\in\mathbb{R}^p$ of the posterior distribution $p({\vectorfonttwo \theta}|{\mathcal D})$ are estimated.
These are then matched to the MSN density to form an approximation of the posterior.
This is similar in nature to the work of \cite{chang2008note}, where common distributions were moment matched to a skew-normal distribution in the univariate case.
The practicality of this approach is limited as posterior moments are often not readily available.
The final system of matching equations for the moment matching scheme is given by
\begin{subequations}\label{eq:mmatch}
\begin{align}
\widetilde{{\vectorfonttwo \mu}}
& = {\vectorfonttwo \mu}+\sqrt{\frac{2}{\pi}}{\vectorfonttwo \delta}, \label{eq:mma}
\\
{\matrixfontone C}
& = {\boldsymbol \Sigma}-\frac{2}{\pi}{\vectorfonttwo \delta}\vdelta^\top, \label{eq:mmb}
\\
{\vectorfontone s}
& = \frac{\sqrt{2}(4-\pi)}{\pi^{3/2}}{\vectorfonttwo \delta}^{\odot3},\ \mbox{and} \label{eq:mmc} \\
{\vectorfonttwo \delta}
& = \frac{{\boldsymbol \Sigma}{\vectorfontone d}}{(1+{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d})^{1/2}}. \label{eq:mmd}
\end{align}
\end{subequations}
The derivations are provided in the supplementary material.
It is straightforward to obtain the values ${\vectorfonttwo \delta}$, ${\vectorfonttwo \mu}$, and ${\boldsymbol \Sigma}$ of the approximating MSN distribution.
Once ${\vectorfonttwo \delta}$ and ${\boldsymbol \Sigma}$ are known, ${\vectorfontone d}$ can also be recovered easily.
The steps of the moment matching scheme are outlined in Algorithm \ref{alg:mm}.
\begin{algorithm}
\caption{Skew-normal matching method: moment matching}\label{alg:mm}
\begin{algorithmic}[1]
\Require $\widetilde{{\vectorfonttwo \mu}}\in\mathbb{R}^p,\ {\matrixfontone C}\in\mathcal{S}_+^{p},\ {\vectorfontone s}\in\mathbb{R}^p$
\State $\displaystyle {\vectorfonttwo \delta}^* \gets \left[\frac{\pi^{3/2}}{\sqrt{2}(4-\pi)}\right]^{1/3}\cdot {\vectorfontone s}^{\odot1/3}$
\State $\displaystyle {\vectorfonttwo \mu}^* \gets \widetilde{{\vectorfonttwo \mu}}-\sqrt{\frac{2}{\pi}}{\vectorfonttwo \delta}^*$
\State $\displaystyle {\boldsymbol \Sigma}^* \gets {\matrixfontone C}+\frac{2}{\pi}{\vectorfonttwo \delta}^*({\vectorfonttwo \delta}^*)^\top$
\State $\displaystyle {\vectorfontone d}^* \gets ({\boldsymbol \Sigma}^*)^{-1}{\vectorfonttwo \delta}^*/\sqrt{1-({\vectorfonttwo \delta}^*)^\top({\boldsymbol \Sigma}^*)^{-1}{\vectorfonttwo \delta}^*}$ \\
\Return $({\vectorfonttwo \mu}^*,\,{\boldsymbol \Sigma}^*,\,{\vectorfontone d}^*)$
\Comment{The `$*$' superscript indicates final value}
\end{algorithmic}
\end{algorithm}
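A minimal Python sketch of Algorithm \ref{alg:mm} is given below; it assumes estimates of $\widetilde{{\vectorfonttwo \mu}}$, ${\matrixfontone C}$, and ${\vectorfontone s}$ are already available (for example, from importance sampling) and signals an error when constraint (\ref{eq:constraint}) rules out a solution, in which case the adjustment described below applies.
\begin{verbatim}
import numpy as np

def mm_match(mu_tilde, C, s):
    # Step 1: delta from the TUM vector s (element-wise cube root).
    delta = (np.pi ** 1.5 / (np.sqrt(2.0) * (4.0 - np.pi))) ** (1.0 / 3.0) \
            * np.cbrt(s)
    # Steps 2-3: location and scale parameters.
    mu = mu_tilde - np.sqrt(2.0 / np.pi) * delta
    Sigma = C + (2.0 / np.pi) * np.outer(delta, delta)
    # Step 4: recover d, checking that delta' Sigma^{-1} delta < 1.
    Sig_inv_delta = np.linalg.solve(Sigma, delta)
    quad = delta @ Sig_inv_delta
    if quad >= 1.0:
        raise ValueError("no MSN solution: shrink the TUM vector first")
    d = Sig_inv_delta / np.sqrt(1.0 - quad)
    return mu, Sigma, d
\end{verbatim}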
The MM equations (\ref{eq:mmatch}) do not always admit a solution due to constraint (\ref{eq:constraint}).
Using constraint (\ref{eq:constraint}) and the Woodbury identity, it can be shown that when
$$
({\vectorfontone s}^{\odot1/3})^\top{\matrixfontone C}^{-1}({\vectorfontone s}^{\odot1/3}) \geq\frac{\sqrt[3]{2}(4-\pi)^{2/3}}{\pi-2},
$$
there is no solution to (\ref{eq:mmatch}).
This intuitively corresponds to cases where the skewness is too large compared to the covariance.
In such cases, one may make an adjustment to the observed value of ${\vectorfontone v}={\vectorfontone s}^{\odot1/3}$, of the form ${\vectorfontone v}_a=a{\vectorfontone v}$, such that a solution does exist.
We can choose $a\in\left(0,\sqrt{\sqrt[3]{2}(4-\pi)^{2/3}/\left[(\pi-2){\vectorfontone v}^\top{\matrixfontone C}^{-1}{\vectorfontone v}\right]}\right)$ to minimize the loss function
\begin{align*}
L(a)=w\norm{{\vectorfontone v}_a-{\vectorfontone v}}+\norm{{\vectorfontone d}_a},
\end{align*}
where $w>0$ is some weight and ${\vectorfontone d}_a$ is the matched value of ${\vectorfontone d}$ using ${\vectorfontone v}_a$ instead of ${\vectorfontone v}$.
Intuitively, the left term ensures that the adjustment is not too large (thus compromising the approximation), while the right term ensures that the skewness parameter does not get too large (thus affecting the interpretability of the approximation).
We set $w=2000$ in the simulations; in practice, some manual tuning is required for a given model, where $w$ is chosen so that the matched values of ${\vectorfontone d}$ never get too large.
\subsection{Derivative matching}
In the derivative matching (DM) scheme, we denote the mode by ${\vectorfontone m}\in\mathbb{R}^p$, the negative Hessian at the mode by ${\matrixfontone J}\in\mathbb{R}^{p\times p}$, and the third-order unmixed derivatives (TUD) at the mode of the observed log-posterior by ${\vectorfontone t}\in\mathbb{R}^p$. These are matched to the corresponding values of the MSN density to form an approximation of the posterior distribution $p({\vectorfonttwo \theta}|{\mathcal D})$.
This can be viewed as an extension of the first order Laplace approximation, though \cite{durante2023skewed} formalizes this approach using the GSN distribution.
The final system of matching equations for the derivative matching scheme is given by
\begin{subequations}\label{eq:derivmatch}
\begin{align}
{\vectorfontone 0}&=-{\boldsymbol \Sigma}^{-1}({\vectorfontone m}-{\vectorfonttwo \mu})+\zeta_1(\kappa){\vectorfontone d}, \label{eq:dma} \\
{\matrixfontone J}&={\boldsymbol \Sigma}^{-1}-\zeta_2(\kappa){\vectorfontone d}\vd^\top, \label{eq:dmb} \\
{\vectorfontone t}&=\zeta_3(\kappa){\vectorfontone d}^{\odot3},\ \mbox{and} \label{eq:dmc} \\
\kappa&={\vectorfontone d}^\top({\vectorfontone m} -{\vectorfonttwo \mu}), \label{eq:dmd}
\end{align}
\end{subequations}
where $\zeta_k(x)=d^k \log\Phi(x)/dx^k$.
The derivations are provided in the supplementary material.
Here, we have extracted the common term $\kappa$ appearing in (\ref{eq:dma})-(\ref{eq:dmc}) as an additional equation, which facilitates a (unique) solution to (\ref{eq:derivmatch}).
The key idea is to reduce the original system of matching equations to a line search in one dimension for $\kappa$. Once $\kappa$ is solved for, the parameters of the MSN density are then easily recovered.
The steps of the derivative matching scheme are outlined in Algorithm \ref{alg:dm}.
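For reference, the first three derivatives of $\log\Phi$ admit simple recursive forms, $\zeta_2(x)=-\zeta_1(x)\{x+\zeta_1(x)\}$ and $\zeta_3(x)=-\zeta_2(x)\{x+\zeta_1(x)\}-\zeta_1(x)\{1+\zeta_2(x)\}$, and can be evaluated in a numerically stable way. The helper functions below are a sketch (not a reference implementation) and are reused in the sketch of Algorithm \ref{alg:dm} that follows.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def zeta1(x):
    # zeta_1(x) = phi(x) / Phi(x), evaluated on the log scale for stability.
    return np.exp(norm.logpdf(x) - norm.logcdf(x))

def zeta2(x):
    z1 = zeta1(x)
    return -z1 * (x + z1)

def zeta3(x):
    z1, z2 = zeta1(x), zeta2(x)
    return -z2 * (x + z1) - z1 * (1.0 + z2)
\end{verbatim}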
\begin{algorithm}
\caption{Skew-normal matching method: derivative matching}\label{alg:dm}
\begin{algorithmic}[1]
\Require ${\vectorfontone m}\in\mathbb{R}^p,\ {\matrixfontone J}\in\mathbb{R}^{p\times p},\ {\vectorfontone t}\in\mathbb{R}^p$
\State $\displaystyle R \gets ({\vectorfontone t}^{\odot1/3})^\top{\matrixfontone J}^{-1}({\vectorfontone t}^{\odot1/3})$
\State $\displaystyle \kappa^* \gets \text{Solution to }
[\kappa\cdot\zeta_3(\kappa)^{2/3} - R\cdot\zeta_1(\kappa)]\cdot[\zeta_3(\kappa)^{2/3} +R\cdot\zeta_2(\kappa)] + [R^2\cdot\zeta_1(\kappa)\cdot\zeta_2(\kappa) ] = 0$
\State $\displaystyle {\vectorfontone d}^* \gets \left[ {\vectorfontone t}/\zeta_3(\kappa^*) \right]^{\odot 1/3}$
\State $\displaystyle {\boldsymbol \Sigma}^* \gets \left({\matrixfontone J}+\zeta_2(\kappa^*){\vectorfontone d}^*({\vectorfontone d}^*)^\top\right)^{-1}$
\State $\displaystyle {\vectorfonttwo \mu}^* \gets {\vectorfontone m}-\zeta_1(\kappa^*){\boldsymbol \Sigma}^*{\vectorfontone d}^*$ \\
\Return $({\vectorfonttwo \mu}^*,\,{\boldsymbol \Sigma}^*,\,{\vectorfontone d}^*)$
\Comment{The `$*$' superscript indicates final value}
\end{algorithmic}
\end{algorithm}
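The following Python sketch of Algorithm \ref{alg:dm} uses the $\zeta_k$ helpers above. The root in line 2 is located by scanning a range of positive $\kappa$ values for a sign change and refining with Brent's method; restricting the search to $\kappa>0$ is justified because (\ref{eq:dma}) and (\ref{eq:dmd}) give $\kappa=\zeta_1(\kappa)\,{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d}\geq0$, while the width of the search grid is an assumption that may need to be adapted in practice.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def dm_match(m, J, t, kappa_max=20.0, n_grid=2000):
    t13 = np.cbrt(t)                   # element-wise cube root of the TUD vector
    R = t13 @ np.linalg.solve(J, t13)  # line 1 of the algorithm

    def residual(kappa):
        z1, z2, z3 = zeta1(kappa), zeta2(kappa), zeta3(kappa)
        z3_23 = np.cbrt(z3 ** 2)       # zeta_3(kappa)^(2/3), kept real-valued
        return (kappa * z3_23 - R * z1) * (z3_23 + R * z2) + R ** 2 * z1 * z2

    # Line 2: one-dimensional root search for kappa over an assumed grid.
    grid = np.linspace(1e-6, kappa_max, n_grid)
    vals = np.array([residual(k) for k in grid])
    i = np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    kappa = brentq(residual, grid[i], grid[i + 1])

    # Lines 3-5: recover the MSN parameters.
    d = np.cbrt(t / zeta3(kappa))
    Sigma = np.linalg.inv(J + zeta2(kappa) * np.outer(d, d))
    mu = m - zeta1(kappa) * Sigma @ d
    return mu, Sigma, d
\end{verbatim}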
\subsection{Mean-mode-Hessian matching}
In the mean-mode-Hessian (MMH) matching scheme, we use the mode ${\vectorfontone m}\in\mathbb{R}^p$, negative Hessian of the observed log joint likelihood at the mode ${\matrixfontone J}\in\mathbb{R}^{p\times p}$ (which we will assume to be strictly positive definite), and the posterior mean $\widetilde{{\vectorfonttwo \mu}}\in\mathbb{R}^p$.
We match these with the corresponding quantities of the MSN density.
This leads to an approximation of the posterior distribution $p({\vectorfonttwo \theta}|{\mathcal D})$.
Note that the difference between the mode and mean provides information about the posterior skewness.
In addition, the mean may be taken from an existing Gaussian approximation, resulting in a post-hoc skewness adjustment.
The system of matching equations for the MMH matching scheme is given by
\begin{subequations}\label{eq:mHmatch}
\begin{align}
{\vectorfontone 0}&=-{\boldsymbol \Sigma}^{-1}({\vectorfontone m}-{\vectorfonttwo \mu})+\zeta_1(\kappa){\vectorfontone d}, \label{eq:mhma} \\
{\matrixfontone J}&={\boldsymbol \Sigma}^{-1}-\zeta_2(\kappa){\vectorfontone d}\vd^\top, \label{eq:mhmb} \\
\widetilde{{\vectorfonttwo \mu}}&={\vectorfonttwo \mu}+\sqrt{\frac{2}{\pi}}\cdot\frac{{\boldsymbol \Sigma}{\vectorfontone d}}{\sqrt{1+{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d}}},\ \mbox{and} \label{eq:mhmc} \\
\kappa&={\vectorfontone d}^\top({\vectorfontone m} -{\vectorfonttwo \mu}). \label{eq:mhmd}
\end{align}
\end{subequations}
The derivations are provided in the supplementary material.
Similar to the DM scheme, an auxiliary parameter/equation for $\kappa$ is introduced to facilitate solving (\ref{eq:mHmatch}). The steps of the MMH matching scheme are outlined in Algorithm \ref{alg:mhm}.
Again, the solution reduces the original system of matching equations to a line search in one dimension for $\kappa$ (see line 5 in Algorithm \ref{alg:mhm}).
Once $\kappa$ is obtained, the parameters of the MSN density are then easily recovered.
\begin{algorithm}
\caption{Skew-normal matching method: mean-mode-Hessian matching}\label{alg:mhm}
\begin{algorithmic}[1]
\Require ${\vectorfontone m}\in\mathbb{R}^p,\ {\matrixfontone J}\in\mathbb{R}^{p\times p},\ \widetilde{{\vectorfonttwo \mu}}\in\mathbb{R}^p$
\State ${\vectorfonttwo \Delta} \gets \widetilde{{\vectorfonttwo \mu}}-{\vectorfontone m}$
\State $\displaystyle Q \gets {\vectorfonttwo \Delta}^\top{\matrixfontone J}{\vectorfonttwo \Delta}$
\State $\displaystyle \lambda(\kappa)\coloneqq\sqrt{\frac{2}{\pi}}\cdot\left(1+\frac{\kappa}{\zeta_1(\kappa)}\right)^{-1/2} - \zeta_1(\kappa)$
\State $\displaystyle \alpha(\kappa)\coloneqq\left(\lambda(\kappa)^2/ \zeta_2(\kappa) - Q - \zeta_2(\kappa) \lambda(\kappa)^2 \left( \frac{\kappa}{\zeta_1(\kappa)} \right)^2\right)^{-1}$
\State $\displaystyle \kappa^* \gets \text{Solution to } \ \kappa\lambda(\kappa)^2 - Q\kappa\lambda(\kappa)^2 \alpha(\kappa) - Q\zeta_1(\kappa)=0$
\State $\displaystyle {\boldsymbol \Sigma}^* \gets {\matrixfontone J}^{-1}-\alpha(\kappa^*){\vectorfonttwo \Delta}\vDelta^\top$
\State $\displaystyle {\vectorfontone d}^* \gets \frac{1}{\lambda(\kappa^*)}{({\boldsymbol \Sigma}^*)^{-1}{\vectorfonttwo \Delta}}$
\State $\displaystyle {\vectorfonttwo \mu}^* \gets \widetilde{{\vectorfonttwo \mu}} - \sqrt{\frac{2}{\pi}} \cdot \frac{{\boldsymbol \Sigma}^*{\vectorfontone d}^*}{\sqrt{1+({\vectorfontone d}^*)^\top{\boldsymbol \Sigma}^*{\vectorfontone d}^*}}$ \\
\Return $({\vectorfonttwo \mu}^*,\,{\boldsymbol \Sigma}^*,\,{\vectorfontone d}^*)$
\Comment{The `$*$' superscript indicates final value}
\end{algorithmic}
\end{algorithm}
Note that ${\vectorfonttwo \Delta}$ in Algorithm \ref{alg:mhm} is the difference between the mean and the mode of the target distribution, and indicates the strength and direction of skewness.
It is assumed to be non-zero in the algorithm; the case for ${\vectorfonttwo \Delta}=0$ can be reduced to moment matching a multivariate Gaussian and is trivial.
\subsection{Mean-mode-covariance matching}
In the mean-mode-covariance (MMC) matching scheme, the mode ${\vectorfontone m}\in\mathbb{R}^p$ of the observed log-posterior, along with the mean $\widetilde{{\vectorfonttwo \mu}}\in\mathbb{R}^p$ and covariance matrix ${\matrixfontone C}\in\mathcal{S}_+^{p}$ of the posterior $p({\vectorfonttwo \theta}|{\mathcal D})$ are estimated and then matched to the MSN density to form an approximation of the posterior distribution $p({\vectorfonttwo \theta}|{\mathcal D})$.
Care should be taken to avoid confusion with the MM scheme, which also uses the mean and covariance.
Note that the mean and covariance may be taken from an existing Gaussian approximation in practice, resulting in a post-hoc skewness adjustment.
The final system of matching equations for the mean-mode-covariance matching scheme is given by
\begin{subequations}\label{eq:mcmatch}
\begin{align}
{\vectorfontone 0}&=-{\boldsymbol \Sigma}^{-1}({\vectorfontone m}-{\vectorfonttwo \mu})+\zeta_1(\kappa){\vectorfontone d}, \label{eq:mcma} \\
\widetilde{{\vectorfonttwo \mu}}&={\vectorfonttwo \mu}+\sqrt{\frac{2}{\pi}}\cdot\frac{{\boldsymbol \Sigma}{\vectorfontone d}}{\sqrt{1+{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d}}}, \label{eq:mcmb} \\
{\matrixfontone C}&={\boldsymbol \Sigma} - \frac{2}{\pi(1+{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d})}{\boldsymbol \Sigma}{\vectorfontone d}\vd^\top{\boldsymbol \Sigma},\ \mbox{and} \label{eq:mcmc} \\
\kappa&={\vectorfontone d}^\top({\vectorfontone m} -{\vectorfonttwo \mu}). \label{eq:mcmd}
\end{align}
\end{subequations}
The derivations are provided in the supplementary material. Similarly to the DM and MMH schemes, an auxiliary variable $\kappa$ is introduced. The solution again hinges on solving a line search for $\kappa$ after which the parameters of the MSN density are easily recovered. The steps of the mean-mode-covariance matching scheme are outlined in Algorithm \ref{alg:mcm}. Again, note that ${\vectorfonttwo \Delta}$ in Algorithm \ref{alg:mcm} indicates the strength and direction of posterior skewness, and is assumed to be non-zero.
\begin{algorithm}
\caption{Skew-normal matching method: Mean-mode-covariance matching}\label{alg:mcm}
\begin{algorithmic}[1]
\Require ${\vectorfontone m}\in\mathbb{R}^p,\ \widetilde{{\vectorfonttwo \mu}}\in\mathbb{R}^p,\ {\matrixfontone C}\in\mathcal{S}_+^{p}$
\State $\displaystyle {\vectorfonttwo \Delta} \gets \widetilde{{\vectorfonttwo \mu}}-{\vectorfontone m}$
\State $\displaystyle G \gets {\vectorfonttwo \Delta}^\top{\matrixfontone C}^{-1}{\vectorfonttwo \Delta}$
\medskip
\State $\displaystyle \lambda(\kappa)\coloneqq\sqrt{\frac{2}{\pi}}\cdot\left(1+\frac{\kappa}{\zeta_1(\kappa)}\right)^{-1/2} - \zeta_1(\kappa)$
\State $\displaystyle \beta(\kappa)\coloneqq\frac{2}{\pi}\left(1+\frac{\kappa}{\zeta_1(\kappa)}\right)^{-1}\cdot\frac{1}{\lambda(\kappa)^2}$
\State $\displaystyle \kappa^* \gets \text{Solution to }\frac{1}{G}+\beta(\kappa)-\frac{\zeta_1(\kappa)}{\kappa\lambda(\kappa)^2}=0$
\State $\displaystyle {\boldsymbol \Sigma}^* \gets {\matrixfontone C}+\beta(\kappa^*){\vectorfonttwo \Delta}\vDelta^\top$
\State $\displaystyle {\vectorfontone d}^* \gets {({\boldsymbol \Sigma}^*)^{-1}{\vectorfonttwo \Delta}}/\lambda(\kappa^*)$
\State $\displaystyle {\vectorfonttwo \mu}^* \gets \widetilde{{\vectorfonttwo \mu}} - \sqrt{\frac{2}{\pi}}\cdot\frac{{\boldsymbol \Sigma}^*{\vectorfontone d}^*}{\sqrt{1+{({\vectorfontone d}^*)^\top{\boldsymbol \Sigma}^*{\vectorfontone d}^*}}}$ \\
\Return $({\vectorfonttwo \mu}^*,\,{\boldsymbol \Sigma}^*,\,{\vectorfontone d}^*)$
\Comment{The `$*$' superscript indicates final value}
\end{algorithmic}
\end{algorithm}
The MMC equations (\ref{eq:mcmatch}) do not always admit a solution.
When $G\geq2/(\pi-2)$, step 5 in Algorithm \ref{alg:mcm} has no solution (see the supplementary material for more details).
This intuitively corresponds to cases where the skewness is too large compared to the covariance.
In such cases, we make an adjustment to the observed value of ${\vectorfonttwo \Delta}$, of the form ${\vectorfonttwo \Delta}_a=a{\vectorfonttwo \Delta}$, such that a solution does exist.
We can choose $a\in(0,\sqrt{2/[(\pi-2)G]})$ to minimize the loss function
$$
L(a)=w\norm{{\vectorfonttwo \Delta}_a-{\vectorfonttwo \Delta}}+\norm{{\vectorfontone d}_a},
$$
where $w>0$ is some weight and ${\vectorfontone d}_a$ is the matched value of ${\vectorfontone d}$ using ${\vectorfonttwo \Delta}_a$ instead of ${\vectorfonttwo \Delta}$.
The left term ensures that the adjustment is not too large, while the right term ensures that the skewness parameter is not too large.
We set $w=50$ in the simulations but in practice, as with the MM scheme, some manual tuning is required for a given model.
\section{Simulation and benchmark settings}\label{sec:settings}
The following sections describe practical applications of the skew-normal matching method for simple Bayesian problems, where its performance was measured on both simulated and benchmark datasets.
In particular, the case of Bayesian probit regression was considered, with logistic regression results provided in the supplementary material.
The purpose of this section is to outline the settings that were used throughout these simulations and benchmarks.
\subsection{Additional methods compared}
A selection of common ABI methods were chosen to be compared to the skew-normal matching method.
The first of these is Laplace's method, which has seen widespread use in the approximation of posteriors and posterior integrals \citep{tierney_kadane_1986, raudenbush_yang_yosef_2000, rue_martino_chopin_2009}.
Next, we considered Gaussian variational Bayes (GVB), another well-established method, with direct optimization of the evidence lower bound performed via the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm.
As a related method with relative ease of implementation, the delta method variational Bayes (DMVB) variant of \cite{braun2010variational} and \cite{wang2013variational} was also chosen.
The expectation propagation (EP) framework from \cite{minka_2001}, in its classical Gaussian product-over-sites implementation, was also investigated.
Finally, the SM approximation of \cite{durante2023skewed} (with 50,000 samples used for posterior inference) was considered as a potential theoretically justified alternative to the skew-normal matching method.
\subsection{Simulation settings}
All methods considered were subject to the same simulation settings for the purpose of comparison.
For each combination of method, $p\in\left\{2,4,8,16,24,40\right\}$, and $n\in\left\{2p,4p\right\}$, either random Gaussian ($X_{ij}\sim\text{i.i.d.}\ \mathcal{N}_1(0,1)$ for $i=1,\ldots,n$ and $j=2,\ldots,p$) or AR1 covariate data ($\left\{X_{i2},\ldots,X_{ip}\right\}\sim\text{AR}(1)$ with coefficient $\rho=0.9$, for $i=1,\ldots,n$) was generated.
In both cases, $X_{i1}=1$, for $i=1,\ldots,n$.
Call $n=2p$ the low data case and $n=4p$ the moderate data case; we chose $n$ to be small relative to $p$ so as to induce more difficult situations where the posterior distribution can become quite skewed.
The corresponding response data was then randomly generated using ${\vectorfonttwo \theta}_p=(2,-2,\cdots,2,-2)^\top/p$, with ${\vectorfonttwo \theta}_p$ having $p$ entries.
Data was simulated 50 times for each combination of settings; in all cases, the Gaussian prior was centered at zero, with the variance $\sigma_{\vectorfonttwo \theta}^2$ set to $10,000$.
Finally, in order to avoid pathological cases, we discarded simulations where separation was detected in the data (see \cite{mansournia2018separation} for an overview).
\subsection{Benchmark datasets}
In addition to the simulated examples above, a selection of eight commonly used benchmark datasets for binary classification from the UCI machine learning repository were also used to evaluate the performance of the skew-normal matching method.
Each dataset contained a certain number of numeric and categorical predictors, in addition to a column of binary responses to be predicted on.
These datasets were coded as O-rings ($n=23$, $p=3$), Liver ($n=345$, $p=6$), Diabetes ($n=392$, $p=9$), Glass ($n=214$, $p=10$), Breast cancer ($n=699$, $p=10$), Heart ($n=270$, $p=14$), German credit ($n=1000$, $p=25$), and Ionosphere ($n=351$, $p=33$), where the value $p$ includes the intercept.
Breast cancer, German credit, and Ionosphere were shortened to Breast, German, and Ion respectively.
\subsection{Performance evaluation}
In each simulation or instance of benchmark data, the corresponding posterior distribution was initially computed using the No-U-Turn sampler, as implemented via the R package \texttt{rstan} \citep{rlang,rstan}.
Each posterior was approximated using a total of 50,000 MCMC iterations, 5,000 of which were warm-up iterations.
Convergence was verified by checking that the $\hat{R}$ value was less than $1.1$ \citep{Gelman1995}.
This MCMC approximation, if the chain converged, acted as the gold standard to which the rest of the methods were compared.
If the chain did not converge, the result was discarded.
For each method, comparisons were made across marginal components -- the difference between the marginal components of the approximation and the corresponding marginal components of the MCMC estimate were computed using the $L^1$ norm.
An appropriate transformation was then performed on this norm (ranging between 0 and 2) in order to obtain the $L^1$ accuracy.
The $L^1$ accuracy for the $j$-th marginal component is given by
\begin{align}\label{eq:l1acc}
\text{$L^1$ accuracy}&=1-\frac{1}{2}\int_{-\infty}^\infty\left|p_\text{MCMC}(\theta_j)-p_\text{Approx.}(\theta_j)\right|\,d\theta_j
\end{align}
where $p_\text{MCMC}(\theta_j)$ and $p_\text{Approx.}(\theta_j)$ are the $j$-th marginal components of the MCMC and approximation, respectively.
The former was estimated via \texttt{density()}, the default kernel density estimation function in R.
The $L^1$ accuracy ranges in value from 0 to 1, with values closer to one indicating better performance.
The integral in (\ref{eq:l1acc}) was computed numerically, with the bounds of integration set to $m_j\pm5\sqrt{{\boldsymbol \Sigma}_{j,j}}$, with ${\vectorfontone m}$ and ${\boldsymbol \Sigma}$ being the mean and covariance respectively of the MCMC samples.
This range was evenly split into 1,000 intervals for use with the composite trapezoidal rule.
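As an illustration of (\ref{eq:l1acc}), the $L^1$ accuracy for a single marginal component can be computed along the following lines; this is a sketch only, with SciPy's Gaussian kernel density estimator standing in for R's \texttt{density()} and the approximation's marginal density supplied as a callable.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def l1_accuracy(mcmc_draws, approx_pdf, n_grid=1000):
    m, sd = mcmc_draws.mean(), mcmc_draws.std()
    grid = np.linspace(m - 5.0 * sd, m + 5.0 * sd, n_grid + 1)
    p_mcmc = gaussian_kde(mcmc_draws)(grid)  # kernel density of the MCMC draws
    p_apx = approx_pdf(grid)                 # marginal density of the approximation
    return 1.0 - 0.5 * np.trapz(np.abs(p_mcmc - p_apx), grid)
\end{verbatim}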
\section{Applications as a standalone approximation}\label{sec:standalone}
In this section, the skew-normal matching method was applied as a standalone approximation to Bayesian probit regression and compared to other methods.
We use the word standalone in the sense that the posterior statistics required in the skew-normal matching algorithm are estimated using an accurate, non-Gaussian-based approach.
For moment matching, $\widetilde{{\vectorfonttwo \mu}}$, ${\matrixfontone C}$, and ${\vectorfontone s}$ were all estimated using Pareto smoothed importance sampling (PSIS) \citep{vehtari2021}.
For derivative matching, ${\vectorfontone m}$, ${\matrixfontone J}$, and ${\vectorfontone t}$ were all calculated using the Newton-Raphson method.
For MMH matching, ${\vectorfontone m}$ and ${\matrixfontone J}$ were estimated using Newton-Raphson, while $\widetilde{{\vectorfonttwo \mu}}$ was estimated using one of three methods: a Jensen-based approach, an improved Laplace approach, and PSIS (denoted MMH-Jensen, MMH-IL, and MMH-IS, respectively).
Finally, for MMC matching, ${\vectorfontone m}$ was estimated via Newton-Raphson, while $\widetilde{{\vectorfonttwo \mu}}$ and ${\matrixfontone C}$ were estimated using PSIS.
Note that all matching methods cost $O(p^2)$ once these posterior statistics are calculated; they are not iterative, so no convergence checks are required.
The main cost often lies in calculating the posterior statistics themselves (this can be avoided by considering only post-hoc adjustments, which is the focus of the next section).
Derivations for the estimation of posterior statistics and the Jensen variant of MMH matching, in addition to plots, are given in the supplementary material.
\subsection{Simulation results}
\begin{table}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ccccccccccccc}
\toprule
\multicolumn{1}{c}{ } & \multicolumn{2}{c}{$p=2$} & \multicolumn{2}{c}{$p=4$} & \multicolumn{2}{c}{$p=8$} & \multicolumn{2}{c}{$p=16$} & \multicolumn{2}{c}{$p=24$} & \multicolumn{2}{c}{$p=40$} \\
\cmidrule(l{3pt}r{3pt}){2-3} \cmidrule(l{3pt}r{3pt}){4-5} \cmidrule(l{3pt}r{3pt}){6-7} \cmidrule(l{3pt}r{3pt}){8-9} \cmidrule(l{3pt}r{3pt}){10-11} \cmidrule(l{3pt}r{3pt}){12-13}
Method & Acc. & Time & Acc. & Time & Acc. & Time & Acc. & Time & Acc. & Time & Acc. & Time\\
\midrule
\addlinespace[0.3em]
\multicolumn{13}{l}{\textbf{$\mathbf{n=2p}$}}\\
\hspace{1em}Laplace & 76.7 & 0.01 & 75.0 & 0.00 & 72.1 & 0.00 & 68.8 & 0.00 & 68.3 & 0.00 & 62.2 & 0.01\\
\hspace{1em}DMVB & 75.9 & 0.01 & 71.9 & 0.01 & 64.5 & 0.04 & 64.2 & 0.19 & 63.1 & 0.52 & 58.7 & 3.01\\
\hspace{1em}GVB & 86.2 & 0.04 & 86.3 & 0.07 & 88.6 & 0.21 & 89.0 & 0.83 & 90.6 & 1.50 & 88.7 & 2.91\\
\hspace{1em}EP & 86.0 & 0.01 & 89.5 & 0.00 & 92.8 & 0.01 & \textbf{94.3} & 0.01 & \textbf{95.8} & 0.04 & \textbf{96.0} & 0.10\\
\hspace{1em}SM & 78.7 & 0.53 & 80.7 & 0.88 & 78.3 & 1.56 & 74.8 & 2.94 & 74.2 & 4.37 & 67.3 & 7.68\\
\hspace{1em}MM & 96.2 & 2.12 & 94.9 & 2.78 & 93.7 & 3.00 & 92.1 & 3.42 & 90.4 & 3.83 & 83.0 & 4.98\\
\hspace{1em}DM & 83.7 & 0.04 & 79.7 & 0.06 & 78.1 & 0.12 & 73.5 & 0.26 & 76.5 & 0.37 & 69.1 & 0.62\\
\hspace{1em}MMH-Jensen & 90.9 & 0.04 & 89.7 & 0.08 & 88.2 & 0.16 & 85.5 & 0.38 & 85.9 & 0.60 & 80.7 & 1.52\\
\hspace{1em}MMH-IL & 92.4 & 0.07 & 91.7 & 0.19 & 91.4 & 0.50 & 90.5 & 1.36 & 90.5 & 2.76 & 86.5 & 11.09\\
\hspace{1em}MMH-IS & 94.3 & 2.12 & 93.6 & 2.78 & 93.3 & 2.99 & 91.7 & 3.43 & 90.1 & 3.85 & 85.6 & 4.99\\
\hspace{1em}MMC & \textbf{96.5} & 2.11 & \textbf{95.0} & 2.77 & \textbf{94.8} & 3.00 & 92.6 & 3.44 & 90.5 & 3.86 & 83.0 & 5.03\\
\addlinespace[0.3em]
\multicolumn{13}{l}{\textbf{$\mathbf{n=4p}$}}\\
\hspace{1em}Laplace & 78.6 & 0.00 & 83.3 & 0.00 & 86.2 & 0.00 & 90.3 & 0.00 & 91.4 & 0.00 & 92.7 & 0.01\\
\hspace{1em}DMVB & 78.4 & 0.01 & 85.9 & 0.01 & 92.1 & 0.03 & 97.1 & 0.19 & 98.2 & 0.67 & 98.7 & 3.92\\
\hspace{1em}GVB & 85.8 & 0.05 & 91.7 & 0.10 & 95.5 & 0.31 & 98.1 & 0.84 & 98.7 & 1.24 & 98.9 & 3.35\\
\hspace{1em}EP & 85.6 & 0.01 & 93.2 & 0.01 & 96.4 & 0.01 & 98.5 & 0.02 & \textbf{98.9} & 0.05 & \textbf{99.2} & 0.13\\
\hspace{1em}SM & 80.9 & 0.53 & 90.5 & 0.86 & 92.6 & 1.54 & 94.9 & 2.88 & 95.6 & 4.32 & 96.0 & 7.68\\
\hspace{1em}MM & \textbf{96.9} & 2.11 & \textbf{97.5} & 2.72 & \textbf{98.0} & 2.97 & 98.4 & 3.43 & 98.5 & 3.93 & 98.3 & 5.23\\
\hspace{1em}DM & 86.9 & 0.04 & 87.9 & 0.06 & 88.7 & 0.14 & 93.3 & 0.26 & 93.9 & 0.38 & 94.8 & 0.63\\
\hspace{1em}MMH-Jensen & 93.9 & 0.05 & 95.6 & 0.08 & 96.6 & 0.18 & 97.8 & 0.43 & 98.2 & 0.82 & 98.3 & 2.55\\
\hspace{1em}MMH-IL & 94.8 & 0.08 & 95.9 & 0.21 & 96.8 & 0.51 & 97.6 & 1.48 & 97.9 & 3.39 & 98.1 & 14.33\\
\hspace{1em}MMH-IS & 95.9 & 2.11 & 96.2 & 2.72 & 96.9 & 2.97 & 97.8 & 3.44 & 98.0 & 3.94 & 98.0 & 5.25\\
\hspace{1em}MMC & 96.7 & 2.11 & 97.3 & 2.72 & \textbf{98.0} & 2.97 & \textbf{98.6} & 3.44 & 98.6 & 3.94 & 98.4 & 5.26\\
\bottomrule
\end{tabular}}
\caption{Mean marginal $L^1$ accuracies (Acc.) (expressed as a percentage) and times (in seconds) across probit regression simulations with independent covariate data. The highest accuracies for each value of $p$ appear in bold.}
\label{tab:probitsims}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[scale=0.65]{img/STA-sim-PR-2A-contour.png}
\end{center}
\caption{Contour plots for the posterior and across various approximation methods, for a small probit regression example with $p=2$ and $n=4$.}
\label{pr-contours}
\end{figure}
Simulation results for probit regression with independent covariates are shown in Table \ref{tab:probitsims}. The other cases are included in the supplementary material.
Moment matching and mean-mode-covariance matching tended to have the highest $L^1$ accuracies, followed by mean-mode-Hessian matching and, finally, by derivative matching.
The SM approximation had around the same $L^1$ accuracies as derivative matching, with this accuracy improving when both $p$ and $n$ were high.
In both the low and moderate data cases, skew-normal matching approximations generally performed better compared to standard ABI approaches.
However, as $p$ increased, this improvement in accuracy diminished.
Furthermore, as $p$ increased, exact solutions to the moment matching and mean-mode-covariance matching schemes became less common, and matching became less effective compared to variational approaches; this also led to decreases in accuracy.
When $p$ was greater than or equal to 24, GVB and EP started to outperform the skew-normal matching method.
In most cases, the skew-normal matching variants tended to be slower than the Gaussian approximations by a factor of around 2 to 100, depending on the setting, with the SM approximation taking roughly the same amount of time as the skew-normal matching variants.
It should be noted that some parts of GVB and SM were implemented in C\nolinebreak\hspace{-.05em}\raisebox{.4ex}{\tiny\bf+}\nolinebreak\hspace{-.10em}\raisebox{.4ex}{\tiny\bf +}\ for speed, and so a direct comparison with the skew-normal matching method is not available.
GVB is expected to be slower than some of the skew-normal matching variants under a fair comparison.
In general, the variants of the skew-normal matching method are seen to provide an increase in posterior fit compared to Gaussian approaches when $p$ is not too high, at the cost of decreased speed.
Better overall performance for the MM and MMC schemes may be achieved by only performing such an approximation when an exact solution exists, and using a Gaussian approximation otherwise.
Figure \ref{pr-contours} illustrates the benefits of the skew-normal matching method in a particular low-dimensional case with $p=2$ and $n=4$.
Note that, for this specific example, approximate solutions were not needed for the MM and MMC schemes.
\subsection{Benchmark results}
\begin{table}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ccccccccccccccccc}
\toprule
\multicolumn{1}{c}{ } & \multicolumn{2}{c}{O-rings} & \multicolumn{2}{c}{Liver} & \multicolumn{2}{c}{Diabetes} & \multicolumn{2}{c}{Glass} & \multicolumn{2}{c}{Breast} & \multicolumn{2}{c}{Heart} & \multicolumn{2}{c}{German} & \multicolumn{2}{c}{Ion.} \\
\cmidrule(l{3pt}r{3pt}){2-3} \cmidrule(l{3pt}r{3pt}){4-5} \cmidrule(l{3pt}r{3pt}){6-7} \cmidrule(l{3pt}r{3pt}){8-9} \cmidrule(l{3pt}r{3pt}){10-11} \cmidrule(l{3pt}r{3pt}){12-13} \cmidrule(l{3pt}r{3pt}){14-15} \cmidrule(l{3pt}r{3pt}){16-17}
Method & Acc. & Time & Acc. & Time & Acc. & Time & Acc. & Time & Acc. & Time & Acc. & Time & Acc. & Time & Acc. & Time\\
\midrule
Laplace & 92.5 & 0.17 & 98.0 & 0.00 & 98.3 & 0.00 & 92.4 & 0.00 & 95.3 & 0.05 & 95.7 & 0.00 & 97.9 & 0.11 & 86.0 & 0.03\\
DMVB & 95.5 & 0.14 & 99.1 & 0.19 & \textbf{99.2} & 0.61 & 96.9 & 0.33 & 97.1 & 1.97 & \textbf{99.1} & 0.74 & \textbf{99.2} & 31.00 & 93.2 & 10.85\\
GVB & 96.5 & 0.33 & 99.1 & 0.09 & \textbf{99.2} & 0.06 & 97.3 & 2.16 & 98.2 & 2.12 & 99.0 & 0.09 & \textbf{99.2} & 0.46 & 97.3 & 11.54\\
EP & 96.6 & 0.32 & 99.1 & 0.14 & \textbf{99.2} & 0.14 & 97.8 & 0.08 & 98.4 & 0.30 & \textbf{99.1} & 0.11 & \textbf{99.2} & 0.67 & \textbf{98.6} & 0.35\\
SM & 96.9 & 0.91 & 99.2 & 1.14 & 99.0 & 1.62 & 96.2 & 2.03 & 97.9 & 1.81 & 98.5 & 2.61 & 99.0 & 4.47 & 90.5 & 6.05\\
\addlinespace
MM & \textbf{99.1} & 2.45 & 99.1 & 3.44 & 99.1 & 4.07 & \textbf{98.2} & 3.39 & \textbf{98.6} & 5.39 & 98.9 & 3.63 & 99.0 & 7.70 & 96.5 & 6.05\\
DM & 95.7 & 0.32 & 98.3 & 0.12 & 98.4 & 0.17 & 52.9 & 0.18 & 96.2 & 0.20 & 97.1 & 0.25 & 95.3 & 0.47 & 86.4 & 0.51\\
MMH-Jensen & 98.5 & 0.36 & \textbf{99.3} & 0.37 & \textbf{99.2} & 0.93 & 97.3 & 0.42 & 98.4 & 2.20 & \textbf{99.1} & 1.06 & 99.1 & 21.46 & 96.0 & 5.75\\
MMH-IL & 98.6 & 0.53 & \textbf{99.3} & 2.19 & \textbf{99.2} & 6.01 & 97.5 & 3.64 & 98.4 & 21.26 & 99.0 & 4.90 & 99.1 & 125.89 & 97.1 & 39.86\\
MMH-IS & 98.5 & 2.45 & 99.2 & 3.45 & \textbf{99.2} & 4.06 & 97.6 & 3.31 & 98.4 & 5.42 & 98.9 & 3.70 & 99.0 & 7.69 & 96.7 & 6.04\\
\addlinespace
MMC & 98.8 & 2.42 & 99.2 & 3.46 & \textbf{99.2} & 4.03 & 98.1 & 3.33 & \textbf{98.6} & 5.38 & 98.9 & 3.67 & 99.0 & 7.81 & 96.5 & 6.05\\
\bottomrule
\end{tabular}}
\caption{Mean marginal $L^1$ accuracies (Acc.) (expressed as a percentage) across probit regression benchmarks. The highest accuracies for each benchmark dataset appear in bold.}
\label{tab:probitbench}
\end{table}
Results for the benchmark datasets are presented in Table \ref{tab:probitbench}.
A similar performance hierarchy to the simulations was observed among the skew-normal matching variants, with moment matching, mean-mode-covariance matching, and mean-mode-Hessian matching generally performing better, and derivative matching performing worse.
Here, the SM approximation had higher $L^1$ accuracies compared to derivative matching, but lower accuracies compared to the other skew-normal matching variants.
The skew-normal matching method performed as well as or better than the Gaussian approximation methods when $p$ was not too high; this was no longer the case for the German credit and Ionosphere datasets, where $p$ is largest.
For Ionosphere in particular, approximate solutions were required for the moment matching and mean-mode-covariance matching schemes.
As before, the skew-normal matching method comes at the cost of decreased speed.
\section{Applications as a post-hoc adjustment}\label{sec:post-hoc}
The use of importance sampling to estimate key statistics in the moment matching and mean-mode-covariance matching schemes in Section \ref{sec:standalone} can be a deterrent, due to its non-negligible cost, especially for larger problems.
We believe that the practicality of the skew-normal matching method lies mainly in post-hoc skewness adjustments to standard Gaussian approximations.
That is, the required posterior statistics are estimated by the standard Gaussian approximations themselves.
The original Gaussian approximation is then adjusted post-hoc using the skew-normal matching method.
The costs of the post-hoc adjustments are minimal.
In this section we investigate the performance of two types of post-hoc adjustments.
The first is a mean-mode-Hessian adjustment.
Here, an approximate posterior mean ${\vectorfonttwo \mu}$ is first taken from a given Gaussian base approximation (we use one of DMVB, GVB or EP in this paper, as described in Section \ref{sec:settings}).
The mode ${\vectorfontone m}$ and negative Hessian at the mode ${\matrixfontone J}$ of the posterior are then calculated via Newton-Raphson.
Finally, these three quantities are used as inputs to Algorithm \ref{alg:mhm} to produce a skew-normal approximation.
This approximation can be considered as a post-hoc skewness adjustment to the original Gaussian base approximation.
The second is a mean-mode-covariance adjustment.
Here, an approximate posterior mean ${\vectorfonttwo \mu}$ and covariance ${\boldsymbol \Sigma}$ are first taken from a given Gaussian base approximation (again, one of DMVB, GVB or EP).
The mode ${\vectorfontone m}$ of the posterior is then calculated via Newton-Raphson.
Finally, these three quantities are used as inputs to Algorithm \ref{alg:mcm} to produce a skew-normal approximation.
This approximation can again be considered as a post-hoc skewness adjustment to the original Gaussian base approximation.
Approximate solutions were not used for the mean-mode-covariance adjustment when no solution was obtained from Algorithm \ref{alg:mhm} or Algorithm \ref{alg:mcm}, the justification being that there is a Gaussian approximation to fall back on.
In such cases, the result was discarded.
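For concreteness, a minimal R sketch of the two adjustments is given below; \texttt{newton\_raphson()}, \texttt{mmh\_match()}, and \texttt{mmc\_match()} are hypothetical stand-ins for the damped Newton--Raphson routine and for Algorithms \ref{alg:mhm} and \ref{alg:mcm}, sketches of which accompany the supplementary material.
\begin{verbatim}
# Hypothetical sketch of a post-hoc skewness adjustment. base_mean and base_cov
# come from the Gaussian base approximation (DMVB, GVB, or EP); log_joint, grad,
# and hess are the log joint likelihood and its derivatives.
post_hoc_adjust <- function(base_mean, base_cov, log_joint, grad, hess,
                            type = c("MMH", "MMC")) {
  type <- match.arg(type)
  nr <- newton_raphson(f = log_joint, g = grad, H = hess, start = base_mean)
  m <- nr$mode                                  # posterior mode
  if (type == "MMH") {
    J <- -hess(m)                               # negative Hessian at the mode
    mmh_match(mu_tilde = base_mean, m = m, J = J)
  } else {
    mmc_match(mu_tilde = base_mean, m = m, C = base_cov)
  }
}
\end{verbatim}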
As before, both simulations and benchmark datasets (under the same settings as in Section \ref{sec:settings}) were used to measure performance.
Plots can be found in the supplementary material.
\subsection{Simulation results}
\begin{table}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ccccccccccccc}
\toprule
\multicolumn{1}{c}{ } & \multicolumn{2}{c}{$p=2$} & \multicolumn{2}{c}{$p=4$} & \multicolumn{2}{c}{$p=8$} & \multicolumn{2}{c}{$p=16$} & \multicolumn{2}{c}{$p=24$} & \multicolumn{2}{c}{$p=40$} \\
\cmidrule(l{3pt}r{3pt}){2-3} \cmidrule(l{3pt}r{3pt}){4-5} \cmidrule(l{3pt}r{3pt}){6-7} \cmidrule(l{3pt}r{3pt}){8-9} \cmidrule(l{3pt}r{3pt}){10-11} \cmidrule(l{3pt}r{3pt}){12-13}
Adjust. & $+$ & $-$ & $+$ & $-$ & $+$ & $-$ & $+$ & $-$ & $+$ & $-$ & $+$ & $-$\\
\midrule
\addlinespace[0.3em]
\multicolumn{13}{l}{\textbf{$\mathbf{n=2p}$}}\\
\addlinespace[0.3em]
\multicolumn{13}{l}{\textbf{DMVB}}\\
\hspace{1em}\hspace{1em}MMH & \textbf{22.1} & \textbf{1.5} & \textbf{57.7} & \textbf{1.2} & \textbf{153.4} & \textbf{0.9} & \textbf{271.3} & \textbf{3.7} & \textbf{395.9} & \textbf{5.9} & \textbf{644.1} & \textbf{16.1}\\
\hspace{1em}\hspace{1em}MMC & \textbf{3.2} & \textbf{0.4} & \textbf{5.4} & \textbf{0.1} & \textbf{4.5} & \textbf{0.0} & \textbf{7.0} & \textbf{0.0} & \textbf{4.9} & \textbf{0.0} & \text{---} & \text{---}\\
\addlinespace[0.3em]
\multicolumn{13}{l}{\textbf{GVB}}\\
\hspace{1em}\hspace{1em}MMH & \textbf{16.6} & \textbf{0.7} & \textbf{33.4} & \textbf{3.1} & \textbf{48.6} & \textbf{7.3} & \textbf{76.3} & \textbf{14.6} & \textbf{74.6} & \textbf{26.4} & \textbf{104.4} & \textbf{66.1}\\
\hspace{1em}\hspace{1em}MMC & \textbf{3.8} & \textbf{0.1} & \textbf{3.2} & \textbf{0.3} & \textbf{2.8} & \textbf{0.2} & \textbf{1.5} & \textbf{0.2} & \text{---} & \text{---} & \text{---} & \text{---}\\
\addlinespace[0.3em]
\multicolumn{13}{l}{\textbf{EP}}\\
\hspace{1em}\hspace{1em}MMH & \textbf{7.8} & \textbf{0.5} & \textbf{12.5} & \textbf{5.6} & \textbf{21.7} & \textbf{17.4} & 27.1 & 40.4 & 15.6 & 82.1 & 11.5 & 209.6\\
\hspace{1em}\hspace{1em}MMC & \textbf{4.2} & \textbf{0.4} & \textbf{7.0} & \textbf{0.3} & \textbf{8.4} & \textbf{0.0} & \textbf{7.2} & \textbf{0.0} & \text{---} & \text{---} & \text{---} & \text{---}\\
\addlinespace[0.3em]
\multicolumn{13}{l}{\textbf{$\mathbf{n=4p}$}}\\
\addlinespace[0.3em]
\multicolumn{13}{l}{\textbf{DMVB}}\\
\hspace{1em}\hspace{1em}MMH & \textbf{25.9} & \textbf{0.5} & \textbf{31.2} & \textbf{0.3} & \textbf{30.1} & \textbf{1.5} & \textbf{14.1} & \textbf{7.0} & 7.1 & 13.2 & 4.2 & 26.3\\
\hspace{1em}\hspace{1em}MMC & \textbf{4.7} & \textbf{0.2} & \textbf{7.3} & \textbf{0.0} & \textbf{8.0} & \textbf{0.0} & \textbf{7.0} & \textbf{0.0} & \textbf{6.1} & \textbf{0.0} & \textbf{4.3} & \textbf{0.0}\\
\addlinespace[0.3em]
\multicolumn{13}{l}{\textbf{GVB}}\\
\hspace{1em}\hspace{1em}MMH & \textbf{20.2} & \textbf{0.1} & \textbf{19.6} & \textbf{1.4} & \textbf{14.6} & \textbf{2.7} & 5.6 & 9.6 & 3.8 & 15.6 & 2.5 & 27.9\\
\hspace{1em}\hspace{1em}MMC & \textbf{3.6} & \textbf{0.2} & \textbf{4.2} & \textbf{0.0} & \textbf{3.8} & \textbf{0.2} & \textbf{2.8} & \textbf{0.2} & \textbf{2.1} & \textbf{0.3} & \textbf{1.1} & \textbf{0.3}\\
\addlinespace[0.3em]
\multicolumn{13}{l}{\textbf{EP}}\\
\hspace{1em}\hspace{1em}MMH & \textbf{11.7} & \textbf{0.2} & \textbf{12.3} & \textbf{2.1} & \textbf{9.2} & \textbf{4.0} & 3.7 & 13.4 & 2.4 & 20.6 & 1.2 & 35.5\\
\hspace{1em}\hspace{1em}MMC & \textbf{2.7} & \textbf{1.1} & \textbf{5.4} & \textbf{0.4} & \textbf{5.7} & \textbf{0.1} & \textbf{4.6} & \textbf{0.1} & \textbf{3.6} & \textbf{0.1} & \textbf{2.0} & \textbf{0.1}\\
\bottomrule
\end{tabular}}
\caption{Mean total marginal $L^1$ improvement/deterioration of accuracies (expressed as a percentage) across probit regression post-hoc simulations with independent covariate data. The column `$+$' indicates a mean total improvement in marginal $L^1$ accuracy, while the column `$-$' indicates the mean total decrease in marginal $L^1$ accuracy across simulations. For example, if the marginal improvements in $L^1$ accuracy for a $p=4$ example were $(5, -2, 1, -3)$, then the total marginal $L^1$ improvement and deterioration are 6 and 5 respectively. Pairs where there is an overall improvement are highlighted in bold. Long dashes indicate no data.}
\label{tab:probitphsims}
\end{table}
\begin{table}
\centering
\begin{tabular}{ccccccc}
\toprule
Multiplier ($n/p$) & \text{$p=2$} & \text{$p=4$} & \text{$p=8$} & \text{$p=16$} & \text{$p=24$} & \text{$p=40$}\\
\midrule
2 & 92.0 & 79.3 & 48.7 & 17.3 & 0.7 & 0.0\\
4 & 93.3 & 94.0 & 93.3 & 97.3 & 96.7 & 96.7\\
\bottomrule
\end{tabular}
\caption{Success rates (expressed as a percentage), i.e., the rates at which a solution to Algorithm \ref{alg:mcm} exists, averaged over base approximations for the mean-mode-covariance based skewness adjustment. The mean-mode-Hessian based adjustments had a 100\% success rate.}
\label{tab:probitph2success}
\end{table}
Simulation results for probit regression with independent covariates are shown in Tables \ref{tab:probitphsims} and \ref{tab:probitph2success}.
We have included the probit case because it showed the most promising results. The other cases are included in the supplementary material.
In the low data case, a mean-mode-Hessian adjustment provided a noticeable increase in the quality of the posterior fit for the DMVB and GVB approximations, while a decrease in the quality of fit was seen for EP.
On the other hand, mean-mode-covariance adjustments tended to increase or maintain $L^1$ accuracy for low values of $p$, with diminishing returns as $p$ increased and exact solutions became rarer (as indicated by Table \ref{tab:probitph2success}).
In the moderate data case, the mean-mode-Hessian adjustment was seen to increase $L^1$ accuracies across all methods for low values of $p$, but slightly decreased these accuracies for GVB and EP as $p$ approached $40$.
In contrast, the mean-mode-covariance adjustment always increased or maintained the quality of the posterior fit.
In general, simulations show that in most cases, some form of post-hoc skewness adjustment works reasonably well if the number of dimensions is not too high.
The mean-mode-Hessian adjustment gave very favorable results in the low data case (excluding EP, where it appeared to break down), while the mean-mode-covariance adjustment gave favorable results in all situations.
\subsection{Benchmark results}
\begin{table}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ccccccccccccccccc}
\toprule
\multicolumn{1}{c}{ } & \multicolumn{2}{c}{O-rings} & \multicolumn{2}{c}{Liver} & \multicolumn{2}{c}{Diabetes} & \multicolumn{2}{c}{Glass} & \multicolumn{2}{c}{Breast} & \multicolumn{2}{c}{Heart} & \multicolumn{2}{c}{German} & \multicolumn{2}{c}{Ion.} \\
\cmidrule(l{3pt}r{3pt}){2-3} \cmidrule(l{3pt}r{3pt}){4-5} \cmidrule(l{3pt}r{3pt}){6-7} \cmidrule(l{3pt}r{3pt}){8-9} \cmidrule(l{3pt}r{3pt}){10-11} \cmidrule(l{3pt}r{3pt}){12-13} \cmidrule(l{3pt}r{3pt}){14-15} \cmidrule(l{3pt}r{3pt}){16-17}
\text{Adjust.} & \text{+} & \text{--} & \text{+} & \text{--} & \text{+} & \text{--} & \text{+} & \text{--} & \text{+} & \text{--} & \text{+} & \text{--} & \text{+} & \text{--} & \text{+} & \text{--}\\
\midrule
\addlinespace[0.3em]
\multicolumn{17}{l}{\textbf{DMVB}}\\
\hspace{1em}MMH & {\bf 7.4} & {\bf 0.4} & {\bf 1.1} & {\bf 0.0} & {\bf 0.6} & {\bf 0.2} & {\bf 6.8} & {\bf 0.5} & {\hspace{-0.25cm}\bf 11.4} & {\bf 0.2} & 1.1 & 1.6 & 1.0 & 2.8 & {\bf 89.2} & {\bf \ 3.4} \\
\hspace{1em}MMC & {\bf 7.1} & {\bf 0.0} & {\bf 1.2} & {\bf 0.0} & {\bf 0.5} & {\bf 0.0} & 0.4 & 0.6 & {\bf 0.9} & {\bf 0.0} & {\bf 1.6} & {\bf 0.0} & 0.8 & 1.8 & \text{---} & \text{---} \\
\addlinespace[0.3em]
\multicolumn{17}{l}{\textbf{GVB}}\\
\hspace{1em}MMH & {\bf 6.3} & {\bf 0.4} & {\bf 1.3} & {\bf 0.0} & {\bf 0.7} & {\bf 0.2} & {\bf 4.6} & {\bf 2.0} & {\bf 1.9} & {\bf 0.3} & {\bf 1.2} & {\bf 0.8} & 1.0 & 3.4 & {\bf 24.1} & {\bf 14.1} \\
\hspace{1em}MMC & {\bf 3.4} & {\bf 0.0} & {\bf 1.1} & {\bf 0.0} & {\bf 0.4} & {\bf 0.0} & {\bf 0.5} & {\bf 0.0} & {\bf 0.9} & {\bf 0.0} & {\bf 1.0} & {\bf 0.0} & 0.8 & 1.8 & \text{---} & \text{---}\\
\addlinespace[0.3em]
\multicolumn{17}{l}{\textbf{EP}}\\
\hspace{1em}MMH & {\bf 6.4} & {\bf 0.4} & {\bf 1.3} & {\bf 0.0} & {\bf 0.7} & {\bf 0.3} & 1.8 & 3.4 & {\bf 1.4} & {\bf 1.2} & 1.0 & 1.0 & 1.0 & 3.0 & 1.7 & 34.0 \\
\hspace{1em}MMC & {\bf 4.1} & {\bf 0.0} & {\bf 1.2} & {\bf 0.0} & {\bf 0.4} & {\bf 0.0} & {\bf 0.5} & {\bf 0.4} & {\bf 0.9} & {\bf 0.0} & {\bf 1.0} & {\bf 0.0} & 0.8 & 1.9 & \text{---} & \text{---}\\
\bottomrule
\end{tabular}}
\caption{Mean total marginal improvement/deterioration in $L^1$ accuracies (expressed as a percentage) across probit regression post-hoc benchmarks. The `$+$' columns indicates the total increases in marginal $L^1$ accuracy, while the `$-$' column indicates the total decrease in marginal $L^1$ accuracy. For example, if the marginal improvements in $L^1$ accuracy for a $p=4$ example were $(5, -2, 1, -3)$, then the total marginal $L^1$ improvement and deterioration are 6 and 5 respectively. Pairs where there is an overall improvement are highlighted in bold. Long dashes indicate no data.}
\label{tab:probitphbench}
\end{table}
Results for the benchmarks are shown in Table \ref{tab:probitphbench}.
In the majority of cases, a post-hoc skewness adjustment was seen to give a slight improvement to the $L^1$ accuracy of the original Gaussian approximation.
As with the simulations, mean-mode-Hessian adjustments gave larger increases in $L^1$ accuracy but also had the risk of decreasing accuracy when $p$ was large.
This was seen with the Heart, German credit, and Ionosphere datasets.
On the other hand, mean-covariance adjustments tended to give more modest increases in accuracy but had a much lower chance of decreasing posterior fit.
The German credit dataset appears to be an anomaly in this regard, and requires further investigation.
In general, the benchmarks show that it is beneficial to apply some form of post-hoc skewness adjustment to an existing Gaussian approximation when the number of dimensions is not too high.
Some approximations benefit from a skewness adjustment more than others.
\section{Summary and future work}\label{sec:summary}
This paper has introduced the skew-normal matching method as an alternative to standard Gaussian-based approximations of the posterior distribution, providing a potentially more practical alternative to the SM approximation of \cite{durante2023skewed}.
Four matching schemes are suggested, namely moment matching, derivative matching, mean-mode-Hessian matching, and mean-mode-covariance matching.
Each scheme was based on matching a combination of derivatives at the mode and moments of the posterior with those of the MSN density.
The performance of these matching schemes was evaluated on a Bayesian probit regression model, where it was seen that for both simulated and benchmark data, the skew-normal matching method tended to outperform standard Gaussian-based approximations in terms of accuracy for low and moderate dimensions, at the cost of run time.
All matching schemes except DM were also seen to mostly outperform the SM approximation in the settings investigated.
Each matching scheme is seen to offer a trade-off between performance and speed:
\begin{itemize}
\item \textbf{Moment matching} generally outperforms standard Gaussian approximations if an exact solution exists (more so than mean-mode-Hessian matching) and the dimensionality is not too high, but incurs an increase in run time because the covariance and third-order moments must be estimated via importance sampling.
\item \textbf{Derivative matching} offers an improvement in accuracy over the regular Laplace approximation in the low data case when the posterior is very skewed, at very little additional run time cost.
\item \textbf{Mean-mode-Hessian matching} provides performance as good as or better than most standard Gaussian approximations when the dimensionality is not too high, at the cost of increased run time.
\item \textbf{Mean-mode-covariance matching} generally outperforms standard Gaussian approximations if an exact solution exists (more so than mean-mode-Hessian matching) and the dimensionality is not too high, at the cost of increased run time, as the covariance must be estimated via importance sampling. Exact solutions are more frequent than for moment matching.
\end{itemize}
The mean-Hessian and mean-covariance matching schemes were additionally shown to be successfully used for post-hoc skewness adjustments for standard Gaussian approximations.
For the case of probit regression, these adjustments were shown to generally result in increased marginal accuracies compared to the base approximation:
\begin{itemize}
\item \textbf{Mean-mode-Hessian adjustments} tend to give larger increases in marginal accuracies, but also risk decreasing these accuracies when the dimensionality of the data is too large. Better performance was seen in the low data case.
\item \textbf{Mean-mode-covariance adjustments} tend to give more modest increases in marginal accuracies but generally do not decrease them.
\end{itemize}
Although the skew-normal matching method has its benefits when the posterior is known to be skewed, this is not without its drawbacks.
Using a matching approach, rather than an optimal variational one, can impact performance for some problems.
This may explain why performance breaks down when the number of dimensions is large.
We are also heavily dependent on good estimates of the moments; without these, the skew-normal approximation can fail very easily.
For high-dimensional data where $p\gg n$, skew-normal matching is still possible, provided that the appropriate inverses of quantities such as the Hessian exist, although further work needs to be done to verify the quality of these approximations.
There is clearly a variety of ways the work here could be extended.
Other families of skewed densities could be entertained, the matching methods could be used as warm starts for skew-normal variational approximations \citep{Ormerod2011,SmithEtal2020}, and matching with other posterior statistics is possible.
We hope that the results presented in this paper act as a useful benchmark for the performance of matching-based skewed approximations to posterior distributions.
We believe that the skew-normal matching method fills a certain niche in posterior approximations and would like to see similar, if not better, methodologies being developed as approximate Bayesian inference grows and the need for such approximations increases.
\bibliographystyle{plainnat}
\section{Derivations for the skew-normal matching method}
\label{sec:matching-derivations}
Preliminary results and notation are provided, before each matching scheme is derived.
If the random variable ${\vectorfonttwo \Theta}$ has an $\text{SN}_p({\vectorfonttwo \mu},{\boldsymbol \Sigma},{\vectorfontone d})$ distribution (as defined in the main text), then it can be shown that the gradient, Hessian, and third-order unmixed derivatives (TUD) of the log density are given by
\begin{align}
\nabla\log p({\vectorfonttwo \theta})&=-{\boldsymbol \Sigma}^{-1}({\vectorfonttwo \theta}-{\vectorfonttwo \mu})+\zeta_1({\vectorfontone d}^\top({\vectorfonttwo \theta} - {\vectorfonttwo \mu})){\vectorfontone d},\label{eq:logmsngrad}\\
\nabla^2\log p({\vectorfonttwo \theta})&=-{\boldsymbol \Sigma}^{-1}+\zeta_2({\vectorfontone d}^\top({\vectorfonttwo \theta}-{\vectorfonttwo \mu})){\vectorfontone d}\vd^\top,\ \mbox{and}\label{eq:logmsnhess}\\
\left(D_{\theta_i}^3\log p({\vectorfonttwo \theta})\right)_{i=1,\ldots,p}^\top&=\zeta_3({\vectorfontone d}^\top({\vectorfonttwo \theta}-{\vectorfonttwo \mu})){\vectorfontone d}^{\odot3} \label{eq:logmsntud}
\end{align}
respectively.
Furthermore, recall from the main text that the expectation, variance, and third-order unmixed moments of such a distribution are given by
\begin{align}
\mbox{$\textrm{\textup{E}}$}({\vectorfonttwo \Theta})&={\vectorfonttwo \mu}+\sqrt{\frac{2}{\pi}}{\vectorfonttwo \delta},\label{eq:msnexp}\\
\mbox{$\textrm{\textup{var}}$}({\vectorfonttwo \Theta})&={\boldsymbol \Sigma} - \frac{2}{\pi} {\vectorfonttwo \delta}\vdelta^\top,\ \mbox{and}\label{eq:msnvar}\\
\text{TUM}({\vectorfonttwo \Theta})&=\frac{\sqrt{2}(4-\pi)}{\pi^{3/2}}{\vectorfonttwo \delta}^{\odot3}\label{eq:msntum}
\end{align}
respectively, where ${\vectorfonttwo \delta}={\boldsymbol \Sigma}{\vectorfontone d}/\sqrt{1+{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d}}$ and $\text{TUM}({\vectorfonttwo \Theta})$ is the vector of third-order unmixed central moments.
\subsection{Moment matching}\label{sec:mmatch}
For $\widetilde{{\vectorfonttwo \mu}}$ to be the mean, we use (\ref{eq:msnexp}) and see that we need
\begin{equation}\label{eq:matchmean}
\widetilde{{\vectorfonttwo \mu}}={\vectorfonttwo \mu}+\sqrt{\frac{2}{\pi}}\cdot\frac{{\boldsymbol \Sigma}{\vectorfontone d}}{\sqrt{1+{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d}}}.
\end{equation}
For ${\matrixfontone C}$ to be the covariance matrix, we use (\ref{eq:msnvar}) and see that we need
\begin{equation}\label{eq:matchcov}
{\matrixfontone C}={\boldsymbol \Sigma} - \frac{2}{\pi(1+{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d})}{\boldsymbol \Sigma}{\vectorfontone d}\vd^\top{\boldsymbol \Sigma}.
\end{equation}
Finally, for ${\vectorfontone s}$ to be the TUM, we use (\ref{eq:msntum}) and see that we need
\begin{equation}\label{eq:matchtum}
{\vectorfontone s}=\frac{\sqrt{2}(4-\pi)}{\pi^{3/2}}{\vectorfonttwo \delta}^{\odot3}.
\end{equation}
Combining (\ref{eq:matchmean}), (\ref{eq:matchcov}), and (\ref{eq:matchtum}), we arrive at the moment matching equations in the main text, which have a straightforward solution.
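As an illustration, a minimal R sketch of this solution is given below. It follows directly from (\ref{eq:matchmean})--(\ref{eq:matchtum}) and the definition of ${\vectorfonttwo \delta}$; the no-solution check reflects the fact that recovering ${\vectorfontone d}$ from ${\vectorfonttwo \delta}$ requires ${\vectorfonttwo \delta}^\top{\boldsymbol \Sigma}^{-1}{\vectorfonttwo \delta}<1$.
\begin{verbatim}
# Minimal sketch of the moment matching solution given estimates of the
# posterior mean mu_tilde, covariance C, and third-order unmixed moments s.
moment_match <- function(mu_tilde, C, s) {
  delta <- sign(s) * (pi^1.5 * abs(s) / (sqrt(2) * (4 - pi)))^(1/3)  # TUM equation
  Sigma <- C + (2 / pi) * tcrossprod(delta)                          # covariance equation
  mu    <- mu_tilde - sqrt(2 / pi) * delta                           # mean equation
  a <- drop(t(delta) %*% solve(Sigma, delta))
  if (a >= 1) stop("no exact solution exists")
  d <- solve(Sigma, delta) / sqrt(1 - a)   # inverts delta = Sigma d / sqrt(1 + d'Sigma d)
  list(mu = mu, Sigma = Sigma, d = d)
}
\end{verbatim}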
\subsection{Derivative matching}\label{sec:derivmatch}
For ${\vectorfontone m}$ to be the mode, we require that $\nabla\log p({\vectorfontone m})={\vectorfontone 0}$; by (\ref{eq:logmsngrad}), this is equivalent to
\begin{equation}\label{eq:matchmode}
{\vectorfontone 0}=-{\boldsymbol \Sigma}^{-1}({\vectorfontone m}-{\vectorfonttwo \mu})+\zeta_1({\vectorfontone d}^\top({\vectorfontone m}-{\vectorfonttwo \mu})){\vectorfontone d}.
\end{equation}
Next, for ${\matrixfontone J}$ to be the negative Hessian at the mode, we require that $-\nabla^2\log p({\vectorfontone m})={\matrixfontone J}$, which by (\ref{eq:logmsnhess}) is equivalent to
\begin{equation}\label{eq:matchhess}
{\matrixfontone J}={\boldsymbol \Sigma}^{-1}-\zeta_2({\vectorfontone d}^\top({\vectorfontone m}-{\vectorfonttwo \mu})){\vectorfontone d}\vd^\top.
\end{equation}
Finally, for ${\vectorfontone t}$ to be the TUD at the mode, we substitute ${\vectorfontone m}$ for ${\vectorfonttwo \theta}$ in (\ref{eq:logmsntud}) and set everything to be equal to ${\vectorfontone t}$, which gives
\begin{equation}\label{eq:matchtud}
{\vectorfontone t}=\zeta_3({\vectorfontone d}^\top({\vectorfontone m}-{\vectorfonttwo \mu})){\vectorfontone d}^{\odot3}.
\end{equation}
Note that ${\vectorfontone d}^\top({\vectorfontone m}-{\vectorfonttwo \mu})$ is a recurring quantity in the previous equations.
If we set its value to $\kappa$ and combine (\ref{eq:matchmode}), (\ref{eq:matchhess}), and (\ref{eq:matchtud}), we arrive at the derivative matching equations in the main text, which are
\begin{subequations}\label{eq:derivmatch}
\begin{align}
{\vectorfontone 0}&=-{\boldsymbol \Sigma}^{-1}({\vectorfontone m}-{\vectorfonttwo \mu})+\zeta_1(\kappa){\vectorfontone d} \label{eq:dma}, \\
{\matrixfontone J}&={\boldsymbol \Sigma}^{-1}-\zeta_2(\kappa){\vectorfontone d}\vd^\top \label{eq:dmb}, \\
{\vectorfontone t}&=\zeta_3(\kappa){\vectorfontone d}^{\odot3} \label{eq:dmc},\ \mbox{and}\\
\kappa&={\vectorfontone d}^\top({\vectorfontone m} -{\vectorfonttwo \mu}). \label{eq:dmd}
\end{align}
\end{subequations}
The idea is to initially solve for $\kappa$, before recovering the other parameters.
To start, rearrange (\ref{eq:dma}) to get
\begin{equation}\label{eq:vm}
{\vectorfontone m}={\vectorfonttwo \mu}+\zeta_1(\kappa){\boldsymbol \Sigma}{\vectorfontone d}.
\end{equation}
After subtracting ${\vectorfonttwo \mu}$ from both sides, left multiplying by ${\vectorfontone d}^\top$, and using (\ref{eq:dmd}), we have
\begin{equation}\label{eq:dsd}
\frac{\kappa}{\zeta_1(\kappa)} = {\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d}.
\end{equation}
Now manipulate (\ref{eq:dmc}) to get
${\vectorfontone d}={\vectorfontone t}^{\odot1/3}/\zeta_3(\kappa)^{1/3} ={\vectorfontone u}/\zeta_3(\kappa)^{1/3}$,
where we let ${\vectorfontone u}={\vectorfontone t}^{\odot1/3}$.
Substituting this into (\ref{eq:dsd}), we get
\begin{equation}\label{eq:usu}
\frac{\kappa}{\zeta_1(\kappa)}= \frac{1}{\zeta_3(\kappa)^{2/3}}{\vectorfontone u}^\top{\boldsymbol \Sigma}{\vectorfontone u}
\implies
\frac{\kappa\cdot\zeta_3(\kappa)^{2/3}}{\zeta_1(\kappa)}=
{\vectorfontone u}^\top{\boldsymbol \Sigma}{\vectorfontone u}.
\end{equation}
If we now rearrange (\ref{eq:dmb}), we have that ${\boldsymbol \Sigma}^{-1}={\matrixfontone J}+\zeta_2(\kappa){\vectorfontone d}\vd^\top$
after which we can use the Sherman-Morrison identity to arrive at
$$
{\boldsymbol \Sigma}={\matrixfontone J}^{-1}-\frac{\zeta_2(\kappa){\matrixfontone J}^{-1}{\vectorfontone d}\vd^\top{\matrixfontone J}^{-1}}{1+\zeta_2(\kappa){\vectorfontone d}^\top{\matrixfontone J}^{-1}{\vectorfontone d}}.
$$
\noindent
Left and right multiplying this relation by ${\vectorfontone u}$ and substituting in (\ref{eq:usu}), we see that
\begin{align*}
\frac{\kappa\cdot\zeta_3(\kappa)^{2/3}}{\zeta_1(\kappa)}
&={\vectorfontone u}^\top{\matrixfontone J}^{-1}{\vectorfontone u}-\frac{\zeta_2(\kappa){\vectorfontone u}^\top{\matrixfontone J}^{-1}{\vectorfontone d}\vd^\top{\matrixfontone J}^{-1}{\vectorfontone u}}{1+\zeta_2(\kappa){\vectorfontone d}^\top{\matrixfontone J}^{-1}{\vectorfontone d}} \\
\implies\frac{\kappa\cdot\zeta_3(\kappa)^{2/3}}{\zeta_1(\kappa)}
&={\vectorfontone u}^\top{\matrixfontone J}^{-1}{\vectorfontone u}-\frac{\zeta_2(\kappa)}{\zeta_3(\kappa)^{2/3}}\cdot\frac{{\vectorfontone u}^\top{\matrixfontone J}^{-1}{\vectorfontone u}\vu^\top{\matrixfontone J}^{-1}{\vectorfontone u}}{1+\frac{\zeta_2(\kappa)}{\zeta_3(\kappa)^{2/3}}{\vectorfontone u}^\top{\matrixfontone J}^{-1}{\vectorfontone u}}.
\end{align*}
Further substituting in $R={\vectorfontone u}^\top{\matrixfontone J}^{-1}{\vectorfontone u}$, the above equation becomes
\begin{align*}
\frac{\kappa\cdot\zeta_3(\kappa)^{2/3}}{\zeta_1(\kappa)}&=R-\frac{\zeta_2(\kappa)}{\zeta_3(\kappa)^{2/3}}\cdot\frac{R^2}{1+\frac{\zeta_2(\kappa)}{\zeta_3(\kappa)^{2/3}}R} \\
\implies\frac{\kappa\cdot\zeta_3(\kappa)^{2/3}}{\zeta_1(\kappa)}&=R-\frac{\zeta_2(\kappa)R^2}{\zeta_3(\kappa)^{2/3}+\zeta_2(\kappa)R}.
\end{align*}
This final equation is univariate in $\kappa$ and so can be solved numerically.
Once $\kappa$ is known, (\ref{eq:dmc}) can be used to find ${\vectorfontone d}$, after which (\ref{eq:dmb}) can be used to find ${\boldsymbol \Sigma}$.
Finally, (\ref{eq:vm}) can be used to find ${\vectorfonttwo \mu}$.
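For illustration, a minimal R sketch of this procedure is given below; the functions $\zeta_1,\zeta_2,\zeta_3$ are the derivatives of $\log\Phi$, and the root-search interval passed to \texttt{uniroot} is illustrative only, since a root need not exist (see the later discussion on solving the $\kappa$ equations).
\begin{verbatim}
# Minimal sketch of derivative matching given the mode m, negative Hessian J,
# and third-order unmixed derivatives tud at the mode.
zeta1 <- function(x) dnorm(x) / pnorm(x)
zeta2 <- function(x) -zeta1(x) * (x + zeta1(x))
zeta3 <- function(x) -zeta2(x) * (x + 2 * zeta1(x)) - zeta1(x)

derivative_match <- function(m, J, tud) {
  u <- sign(tud) * abs(tud)^(1/3)              # u = t^(1/3), elementwise
  R <- drop(t(u) %*% solve(J, u))
  kappa_eq <- function(k) {
    z3 <- zeta3(k)^(2/3)
    k * z3 / zeta1(k) - (R - zeta2(k) * R^2 / (z3 + zeta2(k) * R))
  }
  kappa <- uniroot(kappa_eq, c(1e-6, 40))$root # positive root, if one exists
  d     <- u / zeta3(kappa)^(1/3)
  Sigma <- solve(J + zeta2(kappa) * tcrossprod(d))
  mu    <- m - zeta1(kappa) * drop(Sigma %*% d)
  list(mu = mu, Sigma = Sigma, d = d, kappa = kappa)
}
\end{verbatim}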
\subsection{Mean-mode-Hessian matching}\label{sec:mhmatch}
For ${\vectorfontone m}$ to be the mode, we use our working from Section \ref{sec:derivmatch} and see that we need (\ref{eq:matchmode}) to be satisfied.
For ${\matrixfontone J}$ to be the negative Hessian at the mode, we use our working from Section \ref{sec:derivmatch} and see that we need (\ref{eq:matchhess}) to be satisfied.
Finally, for $\widetilde{{\vectorfonttwo \mu}}$ to be the mean, we use our working from Section \ref{sec:mmatch} and see that we need (\ref{eq:matchmean}) to be satisfied.
Combining (\ref{eq:matchmode}), (\ref{eq:matchhess}), and (\ref{eq:matchmean}), we arrive at the mean-mode-Hessian matching equations in the main text, which are
\begin{subequations}\label{eq:mHmatch}
\begin{align}
{\vectorfontone 0}&=-{\boldsymbol \Sigma}^{-1}({\vectorfontone m}-{\vectorfonttwo \mu})+\zeta_1(\kappa){\vectorfontone d}, \label{eq:mhma} \\
{\matrixfontone J}&={\boldsymbol \Sigma}^{-1}-\zeta_2(\kappa){\vectorfontone d}\vd^\top, \label{eq:mhmb} \\
\widetilde{{\vectorfonttwo \mu}}&={\vectorfonttwo \mu}+\sqrt{\frac{2}{\pi}}\cdot\frac{{\boldsymbol \Sigma}{\vectorfontone d}}{\sqrt{1+{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d}}},\ \mbox{and} \label{eq:mhmc} \\
\kappa&={\vectorfontone d}^\top({\vectorfontone m} -{\vectorfonttwo \mu}). \label{eq:mhmd}
\end{align}
\end{subequations}
Similar to before, the idea is to first solve for $\kappa$, before recovering the other parameters.
As with Section \ref{sec:derivmatch}, (\ref{eq:mhma}), and (\ref{eq:mhmd}) can be used to deduce (\ref{eq:vm}) and (\ref{eq:dsd}).
If we subtract (\ref{eq:mhmc}) by (\ref{eq:vm}) and substitute (\ref{eq:dsd}), we get that
\begin{equation}\label{eq:vDelta}
{\vectorfonttwo \Delta}=\widetilde{{\vectorfonttwo \mu}}-{\vectorfontone m}=\lambda(\kappa){\boldsymbol \Sigma}{\vectorfontone d},\quad\mbox{with}\quad\lambda(\kappa) = \sqrt{\frac{2}{\pi}}\cdot\frac{1}{\sqrt{1 + \frac{\kappa}{\zeta_1(\kappa)}}} - \zeta_1(\kappa).
\end{equation}
Substituting into (\ref{eq:mhmb}), we have
${\matrixfontone J}={\boldsymbol \Sigma}^{-1} - (\zeta_2(\kappa)/\lambda(\kappa)^2) {\boldsymbol \Sigma}^{-1}{\vectorfonttwo \Delta}\vDelta^\top {\boldsymbol \Sigma}^{-1}$.
Applying the Sherman-Morrison identity to this relation, we get that
\begin{equation}\label{eq:smw1}
{\matrixfontone J}^{-1} = {\boldsymbol \Sigma} + \frac{{\vectorfonttwo \Delta}\vDelta^\top}{\lambda(\kappa)^2/ \zeta_2(\kappa) - {\vectorfonttwo \Delta}^\top{\boldsymbol \Sigma}^{-1}{\vectorfonttwo \Delta} }.
\end{equation}
Now, if we rearrange (\ref{eq:mhmb}), we see that ${\boldsymbol \Sigma}^{-1} = {\matrixfontone J} + \zeta_2(\kappa) {\vectorfontone d}\vd^\top$.
Left and right multiplying this by ${\vectorfonttwo \Delta}$, we have
\begin{equation*}
\begin{array}{rl}
{\vectorfonttwo \Delta}^\top{\boldsymbol \Sigma}^{-1}{\vectorfonttwo \Delta}&={\vectorfonttwo \Delta}^\top{\matrixfontone J}{\vectorfonttwo \Delta} + \zeta_2(\kappa) ({\vectorfontone d}^\top{\vectorfonttwo \Delta})^2 \\
&={\vectorfonttwo \Delta}^\top{\matrixfontone J}{\vectorfonttwo \Delta} + \zeta_2(\kappa) \lambda(\kappa)^2 ({\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d})^2 \\
&={\vectorfonttwo \Delta}^\top{\matrixfontone J}{\vectorfonttwo \Delta} + \zeta_2(\kappa) \lambda(\kappa)^2 \left( \frac{\kappa}{\zeta_1(\kappa)} \right)^2.
\end{array}
\end{equation*}
Substituting this into (\ref{eq:smw1}) gives us
\begin{equation*}
{\matrixfontone J}^{-1} = {\boldsymbol \Sigma} + \frac{{\vectorfonttwo \Delta}\vDelta^\top}{\lambda(\kappa)^2/ \zeta_2(\kappa) - {\vectorfonttwo \Delta}^\top{\matrixfontone J}{\vectorfonttwo \Delta} - \zeta_2(\kappa) \lambda(\kappa)^2 \left( \frac{\kappa}{\zeta_1(\kappa)} \right)^2 } = {\boldsymbol \Sigma} + \alpha(\kappa) {\vectorfonttwo \Delta}\vDelta^\top,
\end{equation*}
where
\begin{equation*}
\alpha(\kappa) = \left(\lambda(\kappa)^2/ \zeta_2(\kappa) - {\vectorfonttwo \Delta}^\top{\matrixfontone J}{\vectorfonttwo \Delta} - \zeta_2(\kappa) \lambda(\kappa)^2 \left( \frac{\kappa}{\zeta_1(\kappa)} \right)^2\right)^{-1}.
\end{equation*}
\noindent
Hence, we get that
\begin{equation}\label{eq:Sigma}
{\boldsymbol \Sigma} = {\matrixfontone J}^{-1} - \alpha(\kappa){\vectorfonttwo \Delta}\vDelta^\top\implies{\boldsymbol \Sigma}^{-1} = {\matrixfontone J} + \frac{{\matrixfontone J}{\vectorfonttwo \Delta}\vDelta^\top{\matrixfontone J}}{1/\alpha(\kappa) - {\vectorfonttwo \Delta}^\top{\matrixfontone J}{\vectorfonttwo \Delta}},
\end{equation}
where the Sherman-Morrison identity was used for the implication.
Substituting this into (\ref{eq:vDelta}), we have
$$
\displaystyle {\vectorfontone d}=\frac{{\boldsymbol \Sigma}^{-1}{\vectorfonttwo \Delta}}{\lambda(\kappa)}
=\frac{1}{\lambda(\kappa)}\left[{\matrixfontone J}+\frac{{\matrixfontone J}{\vectorfonttwo \Delta}\vDelta^\top{\matrixfontone J}}{1/\alpha(\kappa)-{\vectorfonttwo \Delta}^\top{\matrixfontone J}{\vectorfonttwo \Delta}}\right]{\vectorfonttwo \Delta}
=\frac{1}{\lambda(\kappa)}\left[1+\frac{{\vectorfonttwo \Delta}^\top{\matrixfontone J}{\vectorfonttwo \Delta}}{1/\alpha(\kappa)-{\vectorfonttwo \Delta}^\top{\matrixfontone J}{\vectorfonttwo \Delta}}\right]{\matrixfontone J}{\vectorfonttwo \Delta}.
$$
\noindent
Substituting again into (\ref{eq:Sigma}), we are left with
\begin{equation*}
{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d} = \frac{1}{\lambda(\kappa)^2}\left[1 + \frac{ {\vectorfonttwo \Delta}^\top{\matrixfontone J}{\vectorfonttwo \Delta}}{1/\alpha(\kappa) - {\vectorfonttwo \Delta}^\top{\matrixfontone J}{\vectorfonttwo \Delta}} \right]^2 {\vectorfonttwo \Delta}^\top{\matrixfontone J} \left[ {\matrixfontone J}^{-1} - \alpha(\kappa){\vectorfonttwo \Delta}\vDelta^\top \right] {\matrixfontone J}{\vectorfonttwo \Delta}=\frac{\kappa}{\zeta_1(\kappa)},
\end{equation*}
where (\ref{eq:dsd}) was used for the second equality.
By letting $Q={\vectorfonttwo \Delta}^\top{\matrixfontone J}{\vectorfonttwo \Delta}$ and rearranging the latter part of the previous equation, we see that
\begin{align*}
\left[1+\frac{Q}{1/\alpha(\kappa)-Q}\right]^2{\vectorfonttwo \Delta}^\top{\matrixfontone J} \left[ {\matrixfontone J}^{-1} - \alpha(\kappa){\vectorfonttwo \Delta}\vDelta^\top \right] {\matrixfontone J}{\vectorfonttwo \Delta}&=\frac{\kappa\lambda(\kappa)^2}{\zeta_1(\kappa)} \\
\implies\left[\frac{1}{1-\alpha(\kappa)Q}\right]^2{\vectorfonttwo \Delta}^\top\left[ {\matrixfontone J} - \alpha(\kappa){\matrixfontone J}{\vectorfonttwo \Delta}\vDelta^\top{\matrixfontone J} \right]{\vectorfonttwo \Delta}&=\frac{\kappa\lambda(\kappa)^2}{\zeta_1(\kappa)} \\
\implies\left[\frac{1}{1-\alpha(\kappa)Q}\right]^2\left[Q-\alpha(\kappa)Q^2\right]&=\frac{\kappa\lambda(\kappa)^2}{\zeta_1(\kappa)} \\
\implies\frac{Q}{1-\alpha(\kappa)Q}&=\frac{\kappa\lambda(\kappa)^2}{\zeta_1(\kappa)} \\
\implies\frac{1}{Q}-\alpha(\kappa)-\frac{\zeta_1(\kappa)}{\kappa\lambda(\kappa)^2}&=0.
\end{align*}
As with Section \ref{sec:derivmatch}, this is a univariate equation in $\kappa$ and so can be solved numerically.
Once $\kappa$ is known, we can use (\ref{eq:Sigma}) to solve for ${\boldsymbol \Sigma}$, after which we can use (\ref{eq:vDelta}) to solve for ${\vectorfontone d}$.
Finally, we can substitute everything into (\ref{eq:mhmc}) to solve for ${\vectorfonttwo \mu}$.
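A minimal R sketch of this procedure is given below; as before, the \texttt{uniroot} interval is illustrative only and a root need not exist.
\begin{verbatim}
# Minimal sketch of mean-mode-Hessian matching given the estimated posterior
# mean mu_tilde, mode m, and negative Hessian at the mode J.
zeta1 <- function(x) dnorm(x) / pnorm(x)
zeta2 <- function(x) -zeta1(x) * (x + zeta1(x))
lam   <- function(k) sqrt(2 / pi) / sqrt(1 + k / zeta1(k)) - zeta1(k)

mmh_match <- function(mu_tilde, m, J) {
  Delta <- mu_tilde - m
  Q <- drop(t(Delta) %*% J %*% Delta)
  alpha <- function(k)
    1 / (lam(k)^2 / zeta2(k) - Q - zeta2(k) * lam(k)^2 * (k / zeta1(k))^2)
  kappa_eq <- function(k) 1 / Q - alpha(k) - zeta1(k) / (k * lam(k)^2)
  kappa <- uniroot(kappa_eq, c(1e-6, 40))$root   # positive root, if one exists
  Sigma <- solve(J) - alpha(kappa) * tcrossprod(Delta)
  d     <- solve(Sigma, Delta) / lam(kappa)
  mu    <- mu_tilde - sqrt(2 / pi) * drop(Sigma %*% d) /
           sqrt(1 + drop(t(d) %*% Sigma %*% d))
  list(mu = mu, Sigma = Sigma, d = d, kappa = kappa)
}
\end{verbatim}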
\subsection{Mean-mode-covariance matching}\label{sec:mcmatch}
For ${\vectorfontone m}$ to be the mode, we use our working from Section \ref{sec:derivmatch} and see that we need (\ref{eq:matchmode}) to be satisfied.
For $\widetilde{{\vectorfonttwo \mu}}$ to be the mean, we use our working from Section \ref{sec:mmatch} and see that we need (\ref{eq:matchmean}) to be satisfied.
Finally, for ${\matrixfontone C}$ to be the covariance matrix, we use our working from Section \ref{sec:mmatch} and see that we need (\ref{eq:matchcov}) to be satisfied.
Combining (\ref{eq:matchmode}), (\ref{eq:matchmean}), and (\ref{eq:matchcov}), we arrive at the mean-mode-covariance matching equations in the main text, which are
\begin{subequations}\label{eq:mcmatch}
\begin{align}
{\vectorfontone 0}&=-{\boldsymbol \Sigma}^{-1}({\vectorfontone m}-{\vectorfonttwo \mu})+\zeta_1(\kappa){\vectorfontone d}, \label{eq:mcma} \\
\widetilde{{\vectorfonttwo \mu}}&={\vectorfonttwo \mu}+\sqrt{\frac{2}{\pi}}\cdot\frac{{\boldsymbol \Sigma}{\vectorfontone d}}{\sqrt{1+{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d}}}, \label{eq:mcmb} \\
{\matrixfontone C}&={\boldsymbol \Sigma} - \frac{2}{\pi(1+{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d})}{\boldsymbol \Sigma}{\vectorfontone d}\vd^\top{\boldsymbol \Sigma},\ \mbox{and} \label{eq:mcmc} \\
\kappa&={\vectorfontone d}^\top({\vectorfontone m} -{\vectorfonttwo \mu}). \label{eq:mcmd}
\end{align}
\end{subequations}
Similar to before, the idea is to first solve for $\kappa$, before recovering the other parameters.
As with Section \ref{sec:mhmatch}, (\ref{eq:mcma}), (\ref{eq:mcmb}), and (\ref{eq:mcmd}) can be used to deduce (\ref{eq:dsd}) and (\ref{eq:vDelta}).
Substituting both relations into (\ref{eq:mcmc}), we are left with
\begin{equation}
{\matrixfontone C}={\boldsymbol \Sigma}-\frac{2}{\pi\left(1+\frac{\kappa}{\zeta_1(\kappa)}\right)}\cdot\frac{1}{\lambda(\kappa)^2}{\vectorfonttwo \Delta}\vDelta^\top
\quad \implies \quad {\boldsymbol \Sigma}={\matrixfontone C}+\beta(\kappa){\vectorfonttwo \Delta}\vDelta^\top, \label{eq:mC}
\end{equation}
where
\begin{equation}\label{eq:betafun}
\beta(\kappa)=\frac{2}{\pi\left(1+\frac{\kappa}{\zeta_1(\kappa)}\right)}\cdot\frac{1}{\lambda(\kappa)^2}.
\end{equation}
Note that $\lambda(\kappa)$ here is the same function as in (\ref{eq:vDelta}).
By applying the Sherman-Morrison identity, this becomes
${\boldsymbol \Sigma}^{-1}={\matrixfontone C}^{-1}-{\matrixfontone C}^{-1}{\vectorfonttwo \Delta}\vDelta^\top{\matrixfontone C}^{-1}/[\beta(\kappa)^{-1}+{\vectorfonttwo \Delta}^\top{\matrixfontone C}^{-1}{\vectorfonttwo \Delta}]$.
Substituting this result back into (\ref{eq:vDelta}), we see that
$$
\displaystyle {\vectorfontone d}
=\frac{1}{\lambda(\kappa)}{\boldsymbol \Sigma}^{-1}{\vectorfonttwo \Delta}
=\frac{1}{\lambda(\kappa)}\left[1-\frac{{\vectorfonttwo \Delta}^\top{\matrixfontone C}^{-1}{\vectorfonttwo \Delta}}{\frac{1}{\beta(\kappa)}+{\vectorfonttwo \Delta}^\top{\matrixfontone C}^{-1}{\vectorfonttwo \Delta}}\right]{\matrixfontone C}^{-1}{\vectorfonttwo \Delta}.
$$
\noindent
Combining this with (\ref{eq:mC}), we now get
\begin{equation*}
{\vectorfontone d}^\top{\boldsymbol \Sigma}{\vectorfontone d}=\frac{1}{\lambda(\kappa)^2}\left[1-\frac{{\vectorfonttwo \Delta}^\top{\matrixfontone C}^{-1}{\vectorfonttwo \Delta}}{\frac{1}{\beta(\kappa)}+{\vectorfonttwo \Delta}^\top{\matrixfontone C}^{-1}{\vectorfonttwo \Delta}}\right]^2{\vectorfonttwo \Delta}^\top{\matrixfontone C}^{-1}\left[{\matrixfontone C}+\beta(\kappa){\vectorfonttwo \Delta}\vDelta^\top\right]{\matrixfontone C}^{-1}{\vectorfonttwo \Delta}=\frac{\kappa}{\zeta_1(\kappa)},
\end{equation*}
where (\ref{eq:dsd}) was used for the second equality.
By letting $G={\vectorfonttwo \Delta}^\top{\matrixfontone C}^{-1}{\vectorfonttwo \Delta}$ and rearranging the latter part of the previous equation, we see that
\begin{align*}
\left[1-\frac{G}{\frac{1}{\beta(\kappa)}+G}\right]^2\left[G+\beta(\kappa)G^2\right]&=\frac{\kappa\lambda(\kappa)^2}{\zeta_1(\kappa)} \\
\implies\left[\frac{1}{1+\beta(\kappa)G}\right]^2G\left[1+\beta(\kappa)G\right]&=\frac{\kappa\lambda(\kappa)^2}{\zeta_1(\kappa)} \\
\implies\frac{G}{1+\beta(\kappa)G}&=\frac{\kappa\lambda(\kappa)^2}{\zeta_1(\kappa)} \\
\implies\frac{1}{G}+\beta(\kappa)-\frac{\zeta_1(\kappa)}{\kappa\lambda(\kappa)^2}&=0.
\end{align*}
As with the previous sections, this is a univariate equation in $\kappa$ and so can be solved numerically.
Once $\kappa$ is known, we can use (\ref{eq:mC}) to solve for ${\boldsymbol \Sigma}$, after which (\ref{eq:vDelta}) can be used to solve for ${\vectorfontone d}$.
Finally, we can substitute everything into (\ref{eq:mcmb}) to solve for ${\vectorfonttwo \mu}$.
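Again, for illustration, a minimal R sketch of this procedure is given below; the check on $G$ anticipates the no-solution condition derived in the next subsection, and the \texttt{uniroot} interval is illustrative only.
\begin{verbatim}
# Minimal sketch of mean-mode-covariance matching given the estimated posterior
# mean mu_tilde, mode m, and covariance C.
zeta1  <- function(x) dnorm(x) / pnorm(x)
lam    <- function(k) sqrt(2 / pi) / sqrt(1 + k / zeta1(k)) - zeta1(k)
beta_f <- function(k) 2 / (pi * (1 + k / zeta1(k)) * lam(k)^2)

mmc_match <- function(mu_tilde, m, C) {
  Delta <- mu_tilde - m
  G <- drop(t(Delta) %*% solve(C, Delta))
  if (G >= 2 / (pi - 2)) stop("no solution exists")   # see next subsection
  kappa_eq <- function(k) 1 / G + beta_f(k) - zeta1(k) / (k * lam(k)^2)
  kappa <- uniroot(kappa_eq, c(1e-6, 40))$root        # positive root only
  Sigma <- C + beta_f(kappa) * tcrossprod(Delta)
  d     <- solve(Sigma, Delta) / lam(kappa)
  mu    <- mu_tilde - sqrt(2 / pi) * drop(Sigma %*% d) /
           sqrt(1 + drop(t(d) %*% Sigma %*% d))
  list(mu = mu, Sigma = Sigma, d = d, kappa = kappa)
}
\end{verbatim}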
\subsection{On solving the kappa equations}
In the derivative matching, mean-mode-Hessian and mean-mode-covariance matching schemes, it was seen empirically that the graph of the final equation in $\kappa$ had a hyperbolic structure, and when a root of the graph did exist, it was always present on the positive half of the real line.
This requires further investigation, but for now we only considered strictly positive $\kappa$ values when searching for a solution in our implementation.
Additionally, for the mean-mode-covariance matching scheme in particular, it can be shown that the horizontal asymptote for the positive half of the graph in $\kappa$, minus the $1/G$ term depending on the data, is at $1-\pi/2$.
This implies that when $G\geq2/(\pi-2)$, there is no solution to (\ref{eq:mcmatch}).
To see this fact about the horizontal asymptote, we consider the limit of $\beta(\kappa)-\zeta_1(\kappa)/(\kappa\lambda(\kappa)^2)$ as $\kappa\to\infty$.
The second term can be expanded out using the definition of $\lambda(\kappa)$, and has the form
\begin{equation*}
\frac{\zeta_1(\kappa)}{\kappa\lambda(\kappa)^2}=\frac{\zeta_1(\kappa)/\kappa + 1}{\left(\sqrt{\frac{2}{\pi}}-\sqrt{\zeta_1(\kappa)^2+\kappa\zeta_1(\kappa)}\right)^2}.
\end{equation*}
As $\kappa\to\infty$, the numerator is seen to have a limit of 1, while the denominator is seen to have a limit of $2/\pi$, and so the second term has a limit of $\pi/2$ as $\kappa\to\infty$.
Applying this result to the definition of $\beta(\kappa)$, it also becomes clear that $\beta(\kappa)\to1$ as $\kappa\to\infty$.
Combining both parts gives us our original result.
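This limiting behaviour is easy to check numerically, for example in R as below; evaluation at very large $\kappa$ is avoided only because $\zeta_1(\kappa)$ underflows in double precision.
\begin{verbatim}
# Numerical illustration of the horizontal asymptote 1 - pi/2 (about -0.571).
zeta1  <- function(x) dnorm(x) / pnorm(x)
lam    <- function(k) sqrt(2 / pi) / sqrt(1 + k / zeta1(k)) - zeta1(k)
beta_f <- function(k) 2 / (pi * (1 + k / zeta1(k)) * lam(k)^2)
k <- c(3, 5, 10, 20)
beta_f(k) - zeta1(k) / (k * lam(k)^2)   # approaches 1 - pi/2 as k grows
\end{verbatim}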
\section{Estimation of key statistics}
\label{sec:statistic-estimation}
In the skew-normal matching method, key statistics of the observed posterior $p({\vectorfonttwo \theta}|{\mathcal D})$ need to be estimated before they can be matched to the multivariate skew-normal density.
The following subsections describe how these statistics were estimated.
\subsection{Estimation of derivatives at the mode}
Denote in this and the following sections the observed log joint likelihood by $f({\vectorfonttwo \theta})$, with the corresponding gradient and Hessian functions given by ${\vectorfontone g}({\vectorfonttwo \theta})$ and ${\matrixfontone H}({\vectorfonttwo \theta})$ respectively.
Perhaps the simplest approach in obtaining the posterior mode is the Newton-Raphson method, described in detail by \cite{ypma1995historical}.
Newton-Raphson updates of the form
\begin{equation*}
{\vectorfontone m}_{n+1} \leftarrow {\vectorfontone m}_n - {\matrixfontone H}({\vectorfontone m}_n)^{-1}{\vectorfontone g}({\vectorfontone m}_n)
\end{equation*}
allow for the mode of the observed posterior to be determined relatively quickly, after which other higher-order derivatives at the mode can be evaluated analytically.
Convergence is typically defined to be the point when $\norm{{\vectorfontone g}({\vectorfontone m}_{n})}<t$, for some positive threshold $t$.
As the Newton-Raphson method only finds regions with zero gradient, care needs to be taken to ensure that the global maximum is reached, rather than a local maximum.
This will always be the case if the posterior is unimodal.
Additionally, note that in practice, Newton-Raphson updates which are too large can lead to non-convergence in the algorithm, where updates do not necessarily increase the objective function.
To remedy this, the updates were modified to have the damped form:
${\vectorfontone m}_{n+1} \leftarrow {\vectorfontone m}_n - \lambda\cdot{\matrixfontone H}({\vectorfontone m}_n)^{-1}{\vectorfontone g}({\vectorfontone m}_n)$.
In each iteration, $\lambda\in(0,1]$ is chosen via a simplified one-dimensional line search so as to ensure an increase in the objective function.
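A minimal R sketch of this damped iteration is given below, assuming the log joint likelihood \texttt{f}, gradient \texttt{g}, and Hessian \texttt{H} are available as functions; the backtracking rule is one simple choice of line search.
\begin{verbatim}
# Minimal sketch of the damped Newton-Raphson iteration for the posterior mode.
newton_raphson <- function(f, g, H, start, tol = 1e-8, max_iter = 100) {
  m <- start
  for (iter in seq_len(max_iter)) {
    step <- solve(H(m), g(m))                 # full Newton step H^{-1} g
    lambda <- 1
    # simplified line search: halve lambda until the objective increases
    while (f(m - lambda * step) < f(m) && lambda > 1e-8)
      lambda <- lambda / 2
    m <- m - lambda * step
    if (sqrt(sum(g(m)^2)) < tol) break        # convergence: ||g(m)|| < t
  }
  list(mode = m, J = -H(m))                   # mode and negative Hessian there
}
\end{verbatim}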
\subsection{Estimation of moments}
\subsubsection{Importance sampling}
All moments can be estimated via importance sampling if no other alternative is available.
In keeping with the spirit of the skew-normal matching method, we chose the proposal density $g$ to be an appropriate $t$-distribution-based approximation of the posterior, so as to ensure proper coverage of the posterior tails.
This approximation has the form of a location-shifted multivariate $t$-distribution, as described by \cite{genz_bretz_2009}.
The shift and scale parameters are set to be ${\vectorfontone m}$ and ${\matrixfontone J}$ respectively, which will have been estimated from the previous section.
The degrees of freedom for this $t$-distribution were chosen so as to encourage finite variances during the importance sampling stage.
Finally, stabilizing adjustments to the importance ratios \citep{vehtari2021} were made to alleviate noisy estimates.
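A rough R sketch of this estimation step is given below; it uses a shifted multivariate $t$ proposal from the \texttt{mvtnorm} package, takes the scale matrix (for illustration) to be the inverse of ${\matrixfontone J}$, and uses plain self-normalized weights. The degrees of freedom are illustrative, and the Pareto smoothing step is omitted.
\begin{verbatim}
# Rough sketch of importance sampling estimates of the posterior mean,
# covariance, and third-order unmixed central moments. log_post is the
# unnormalized log posterior; m and J are the mode and negative Hessian.
library(mvtnorm)
is_moments <- function(log_post, m, J, df = 5, n_draws = 10000) {
  S <- solve(J)
  theta <- rmvt(n_draws, sigma = S, df = df, delta = m, type = "shifted")
  lw <- apply(theta, 1, log_post) -
        dmvt(theta, delta = m, sigma = S, df = df, log = TRUE, type = "shifted")
  w  <- exp(lw - max(lw)); w <- w / sum(w)    # self-normalized weights
  mu_hat <- colSums(w * theta)
  cent   <- sweep(theta, 2, mu_hat)
  list(mean = mu_hat,
       cov  = crossprod(cent, w * cent),
       tum  = colSums(w * cent^3))
}
\end{verbatim}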
\subsubsection{Alternate estimates for the mean}
If the statistic of interest is only the first moment, then there readily exist alternate estimation approaches which may be faster than importance sampling.
For example, one may approximate the marginals of the observed posterior using Jensen's inequality via
\begin{equation}\label{eq:mest}
p(\theta_j|{\mathcal D})\mathrel{\vcenter{\offinterlineskip\halign{\hfil$##$\cr\propto\cr\noalign{\kern2pt}\sim\cr\noalign{\kern-2pt}}}}\exp\left[\mathbb{E}_{{\vectorfonttwo \theta}_{-j}\sim q}\left[f({\vectorfonttwo \theta})-\log q({\vectorfonttwo \theta}_{-j})\right]\right]\eqqcolon\hat{p}_\text{Jens.}(\theta_j|q),
\end{equation}
where $q({\vectorfonttwo \theta}_{-j})$ is some suitable approximation of the conditional posterior $p({\vectorfonttwo \theta}_{-j}|\theta_j,{\mathcal D})$.
In our simulations, we chose $q$ to be the corresponding conditional of the Laplace approximation of the observed posterior.
These marginal estimates can then be normalized via univariate quadrature, after which the marginal means of the posterior can be estimated using the same method.
Finally, these can be combined to form an overall estimate of the mean.
Call this estimate the Jensen-based mean estimate.
A similar approach is to instead use the improved Laplace method to obtain approximations of the posterior marginals.
The original derivations are outlined by \cite{tierney_kadane_1986}; these approximations will be of the form
\begin{equation}
p(\theta_j|{\mathcal D})\mathrel{\vcenter{\offinterlineskip\halign{\hfil$##$\cr\propto\cr\noalign{\kern2pt}\sim\cr\noalign{\kern-2pt}}}}|{\boldsymbol \Sigma}_{\theta_j}|^{1/2}\cdot p({\vectorfonttwo \mu}_{\theta_j},\theta_j,{\mathcal D})\eqqcolon\hat{p}_\text{IL}(\theta_j|q),
\end{equation}
where ${\vectorfonttwo \mu}_{\theta_j}$ and ${\boldsymbol \Sigma}_{\theta_j}$ are the mean and covariance respectively of the Laplace approximation of the joint density $p({\vectorfonttwo \theta}_{-j},\theta_j,{\mathcal D})$, with $\theta_j$ held fixed.
As before, these marginal estimates can then be normalized via univariate quadrature, after which the marginal means of the posterior can be estimated using the same method.
Finally, these can be combined to form an overall estimate of the mean.
Call this the improved Laplace mean estimate.
\section{Probit regression supplement}
\label{sec:probit-supplement}
\subsection{Derivations for the Jensen variant}
We have that $Y_i\sim\text{Bernoulli}(p_i)$ are independent for $i=1,\ldots,n$, where
$p_i=\Phi({\vectorfontone x}_i^T{\vectorfonttwo \theta})$,
and ${\vectorfontone x}_i$ and ${\vectorfonttwo \theta}$ are in $\mathbb{R}^p$.
The likelihood function for this probit regression model with observed values $y_1,\ldots,y_n$ and corresponding predictors ${\vectorfontone x}_1,\ldots,{\vectorfontone x}_n$ is given by
$$
p({\mathcal D}|{\vectorfonttwo \theta})=\prod_{i=1}^n\left[\Phi\left({\vectorfontone x}_i^T{\vectorfonttwo \theta}\right)^{y_i}\left(1-\Phi\left({\vectorfontone x}_i^T{\vectorfonttwo \theta}\right)\right)^{1-y_i}\right].
$$
An alternate parameterization which simplifies calculations is given by
$$
p({\mathcal D}|{\vectorfonttwo \theta})=\prod_{i=1}^n\left[\Phi\left((2y_i-1){\vectorfontone x}_i^T{\vectorfonttwo \theta}\right)\right] =\prod_{i=1}^n\left[\Phi\left({\vectorfontone z}_i^T{\vectorfonttwo \theta}\right)\right],
$$
where ${\vectorfontone z}_i=(2y_i-1){\vectorfontone x}_i$ is known; this will be used throughout this section.
The corresponding log-likelihood is $\log p({\mathcal D}|{\vectorfonttwo \theta}) = {\vectorfontone 1}_n^T\zeta_0\left({\matrixfontone Z}{\vectorfonttwo \theta}\right)$
where ${\matrixfontone Z}=[{\vectorfontone z}_1^T,\ldots,{\vectorfontone z}_n^T]^T$.
The log joint likelihood in this case can therefore be written as
\begin{equation}\label{eq:probitf}
f({\vectorfonttwo \theta})={\vectorfontone 1}_n^T\zeta_0\left({\matrixfontone Z}{\vectorfonttwo \theta}\right)-\frac{\norm{{\vectorfonttwo \theta}}^2}{2\sigma_{\vectorfonttwo \theta}^2}+\text{constants in ${\vectorfonttwo \theta}$},
\end{equation}
with corresponding gradient and Hessian functions given by
\begin{align}
{\vectorfontone g}({\vectorfonttwo \theta})&={\matrixfontone Z}^T\zeta_1({\matrixfontone Z}{\vectorfonttwo \theta})-\frac{{\vectorfonttwo \theta}}{\sigma_{\vectorfonttwo \theta}^2}\quad\mbox{and}\label{eq:probitg} \\
{\matrixfontone H}({\vectorfonttwo \theta})&={\matrixfontone Z}^T\mbox{$\textrm{\textup{diag}}$}(\zeta_2({\matrixfontone Z}{\vectorfonttwo \theta})){\matrixfontone Z}-\frac{1}{\sigma_{\vectorfonttwo \theta}^2}{\matrixfontone I}_p.\label{eq:probitH}
\end{align}
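For reference, a minimal R sketch of (\ref{eq:probitf})--(\ref{eq:probitH}) is given below, with \texttt{Z} the $n\times p$ matrix with rows ${\vectorfontone z}_i^T$ and \texttt{sigma2} the prior variance $\sigma_{\vectorfonttwo \theta}^2$; the constant terms in (\ref{eq:probitf}) are dropped.
\begin{verbatim}
# Minimal sketch of the probit log joint likelihood, gradient, and Hessian.
zeta0 <- function(x) pnorm(x, log.p = TRUE)
zeta1 <- function(x) exp(dnorm(x, log = TRUE) - pnorm(x, log.p = TRUE))
zeta2 <- function(x) -zeta1(x) * (x + zeta1(x))

f_probit <- function(theta, Z, sigma2)
  sum(zeta0(drop(Z %*% theta))) - sum(theta^2) / (2 * sigma2)
g_probit <- function(theta, Z, sigma2)
  drop(crossprod(Z, zeta1(drop(Z %*% theta)))) - theta / sigma2
H_probit <- function(theta, Z, sigma2)
  crossprod(Z, zeta2(drop(Z %*% theta)) * Z) - diag(length(theta)) / sigma2
\end{verbatim}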
In order to implement the Jensen variant of the mean-mode-Hessian matching scheme, we need to evaluate $\hat{p}_\text{Jens.}(\theta_j|q)$ as given in (\ref{eq:mest}), with $q$ being set to the ${\vectorfonttwo \theta}_{-j}$ conditional distribution (keeping $\theta_j$ fixed) of the Laplace approximation of the posterior.
Call this conditional distribution ${\mathcal N}_{p-1}({\vectorfonttwo \mu}_{\theta_j},{\boldsymbol \Sigma}_{\theta_j})$ (noting that the conditional is also Gaussian).
We have that
\begin{equation*}
\begin{array}{rl}
\hat{p}_\text{Jens.}(\theta_j|q)&=\exp\left\{\mathbb{E}_{{\vectorfonttwo \theta}_{-j}\sim q}\left[f({\vectorfonttwo \theta})-\log q({\vectorfonttwo \theta}_{-j})\right]\right\} \\
&\propto\exp\left\{\mathbb{E}_{{\vectorfonttwo \theta}_{-j}\sim q}\left[{\vectorfontone 1}_n^T\zeta_0\left({\matrixfontone Z}{\vectorfonttwo \theta}\right)-\frac{\norm{{\vectorfonttwo \theta}}^2}{2\sigma_{\vectorfonttwo \theta}^2}-\log q({\vectorfonttwo \theta}_{-j})\right]\right\} \\
&\propto\exp\left\{\mathbb{E}_{{\vectorfonttwo \theta}_{-j}\sim q}\left[{\vectorfontone 1}_n^T\zeta_0({\matrixfontone Z}{\vectorfonttwo \theta})\right]-\mathbb{E}_{{\vectorfonttwo \theta}_{-j}\sim q}\left[\frac{\norm{{\vectorfonttwo \theta}}^2}{2\sigma_{\vectorfonttwo \theta}^2}\right]+\frac{1}{2}\log\left|{\boldsymbol \Sigma}_{\theta_j}\right|\right\}.
\end{array}
\end{equation*}
We will compute each of these expectations individually.
To start, it is clear that
$$
\displaystyle \mathbb{E}_{{\vectorfonttwo \theta}_{-j}\sim q}\left[\frac{\norm{{\vectorfonttwo \theta}}^2}{2\sigma_{\vectorfonttwo \theta}^2}\right]
=\frac{1}{2\sigma_{\vectorfonttwo \theta}^2}\cdot\mathbb{E}_{{\vectorfonttwo \theta}_{-j}\sim q}\left[\theta_1^2+\ldots+\theta_p^2\right]
=\frac{1}{2\sigma_{\vectorfonttwo \theta}^2}\left[\theta_j^2+{\vectorfontone 1}_{p-1}^T\left({\vectorfonttwo \mu}_{\theta_j}^{\odot2}+\mbox{$\textrm{\textup{dg}}$}({\boldsymbol \Sigma}_{\theta_j})\right)\right].
$$
For the first term, the expectation can be approximated via a second-order Taylor expansion of $\zeta_0$ about the mean of each component of ${\matrixfontone Z}{\vectorfonttwo \theta}$, using the closure of the Gaussian distribution under affine transformations; we have
\begin{align*}
\mathbb{E}_{{\vectorfonttwo \theta}_{-j}\sim q}\left[{\vectorfontone 1}_n^T\zeta_0({\matrixfontone Z}{\vectorfonttwo \theta})\right]\approx{\vectorfontone 1}_n^T\left[\zeta_0(\theta_j{\matrixfontone Z}_{,j}+\widetilde{{\vectorfonttwo \mu}}_{j,\theta_j})+\frac{1}{2}\zeta_2(\theta_j{\matrixfontone Z}_{,j}+\widetilde{{\vectorfonttwo \mu}}_{j,\theta_j})\odot\widetilde{{\vectorfonttwo \sigma}}_{j,\theta_j}^2\right],
\end{align*}
where
$\widetilde{{\vectorfonttwo \mu}}_{j,\theta_j}={\matrixfontone Z}_{,-j}{\vectorfonttwo \mu}_{\theta_j}$
and
$\widetilde{{\vectorfonttwo \sigma}}_{j,\theta_j}^2 =\mbox{$\textrm{\textup{dg}}$}\left({\matrixfontone Z}_{,-j}{\boldsymbol \Sigma}_{\theta_j}{\matrixfontone Z}_{,-j}^T\right)$.
The final form for the Jensen-based approximation is therefore given by
\begin{align*}
\hat{p}_\text{Jens.}(\theta_j|q)\propto\exp\bigg\{&{\vectorfontone 1}_n^T\left[\zeta_0(\theta_j{\matrixfontone Z}_{,j}+\widetilde{{\vectorfonttwo \mu}}_{j,\theta_j})+\frac{1}{2}\zeta_2(\theta_j{\matrixfontone Z}_{,j}+\widetilde{{\vectorfonttwo \mu}}_{j,\theta_j})\odot\widetilde{{\vectorfonttwo \sigma}}_{j,\theta_j}^2\right] \\
&-\frac{1}{2\sigma_{\vectorfonttwo \theta}^2}\left[\theta_j^2+{\vectorfontone 1}_{p-1}^T\left({\vectorfonttwo \mu}_{\theta_j}^{\odot2}+\mbox{$\textrm{\textup{dg}}$}({\boldsymbol \Sigma}_{\theta_j})\right)\right]+\frac{1}{2}\log\left|{\boldsymbol \Sigma}_{\theta_j}\right|\bigg\}.
\end{align*}
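Putting the pieces together, the following hedged sketch evaluates the unnormalized log of $\hat{p}_\text{Jens.}(\theta_j|q)$ on a grid, reusing the $\zeta_k$ functions from the sketch above; the helpers \texttt{mu\_fun} and \texttt{Sigma\_fun}, which return ${\vectorfonttwo \mu}_{\theta_j}$ and ${\boldsymbol \Sigma}_{\theta_j}$ for a given value of $\theta_j$, are assumed to be available and their names are illustrative only.
\begin{verbatim}
import numpy as np

def log_p_jensen(j, grid, Z, mu_fun, Sigma_fun, s2):
    """Unnormalized log hat{p}_Jens(theta_j | q) at each grid point."""
    Zj, Zmj = Z[:, j], np.delete(Z, j, axis=1)
    out = []
    for tj in grid:
        mu, Sigma = mu_fun(tj), Sigma_fun(tj)
        m = tj * Zj + Zmj @ mu                                # tilde mu
        s2_row = np.einsum("ij,jk,ik->i", Zmj, Sigma, Zmj)    # tilde sigma^2
        t1 = (zeta0(m) + 0.5 * zeta2(m) * s2_row).sum()
        t2 = (tj**2 + (mu**2 + np.diag(Sigma)).sum()) / (2.0 * s2)
        _, logdet = np.linalg.slogdet(Sigma)
        out.append(t1 - t2 + 0.5 * logdet)
    return np.array(out)   # normalize afterwards via univariate quadrature
\end{verbatim}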
\subsection{Supplementary results}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-PR-2A-box.png}
\end{center}
\caption{Performance of approximation methods on simulated independent data across different dimensions for probit regression, with $n=2p$ (box plot).}
\label{STA-sim-PR-2A-box}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-PR-4A-box.png}
\end{center}
\caption{Performance of approximation methods on simulated independent data across different dimensions for probit regression, with $n=4p$ (box plot).}
\label{STA-sim-PR-4A-box}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-PR-2A-line.png}
\end{center}
\caption{Performance of approximation methods on simulated independent data across different dimensions for probit regression, with $n=2p$ (line graph).}
\label{STA-sim-PR-2A-line}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-PR-4A-line.png}
\end{center}
\caption{Performance of approximation methods on simulated independent data across different dimensions for probit regression, with $n=4p$ (line graph).}
\label{STA-sim-PR-4A-line}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.62]{img/STA-bench-PR-box.png}
\end{center}
\caption{Performance of approximation methods on benchmark datasets under probit regression.}
\label{STA-bench-PR-box}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.62]{img/STA-bench-PR-time.png}
\end{center}
\caption{Time usage of approximation methods on benchmark datasets under probit regression.}
\label{STA-bench-PR-time}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-PR-2B-box.png}
\end{center}
\caption{Performance of approximation methods on simulated AR1 data across different dimensions for probit regression, with $n=2p$ (box plot).}
\label{STA-sim-PR-2B-box}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-PR-4B-box.png}
\end{center}
\caption{Performance of approximation methods on simulated AR1 data across different dimensions for probit regression, with $n=4p$ (box plot).}
\label{STA-sim-PR-4B-box}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-PR-2B-line.png}
\end{center}
\caption{Performance of approximation methods on simulated AR1 data across different dimensions for probit regression, with $n=2p$ (line graph).}
\label{STA-sim-PR-2B-line}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-PR-4B-line.png}
\end{center}
\caption{Performance of approximation methods on simulated AR1 data across different dimensions for probit regression, with $n=4p$ (line graph).}
\label{STA-sim-PR-4B-line}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.62]{img/PHA-bench-PR-MMH-PH-dbar.png}
\end{center}
\caption{Total change in marginal $L^1$ accuracy of a mean-mode-Hessian adjustment on benchmark datasets under probit regression.}
\label{PHA-bench-PR-MMH-PH-dbar}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.62]{img/PHA-bench-PR-MMC-PH-dbar.png}
\end{center}
\caption{Total change in marginal $L^1$ accuracy of a mean-mode-covariance adjustment on benchmark datasets under probit regression.}
\label{PHA-bench-PR-MMC-PH-dbar}
\end{figure}
Additional plots corresponding to the probit regression results of the main text are provided.
These include box plots of marginal $L^1$ accuracies and line plots of mean marginal $L^1$ accuracies for each method on the simulated independent data, together with the benchmark accuracy and time-usage plots, and are presented in Figures \ref{STA-sim-PR-2A-box} to \ref{STA-bench-PR-time}.
Similar plots corresponding to the dependent AR1 simulations are also presented in Figures \ref{STA-sim-PR-2B-box} to \ref{STA-sim-PR-4B-line}.
The relative performance of skew-normal matching here was similar to the independent covariate case.
Finally, post-hoc benchmark plots are given in Figures \ref{PHA-bench-PR-MMH-PH-dbar}
and \ref{PHA-bench-PR-MMC-PH-dbar}.
\section{Logistic regression supplement}
\label{sec:logistic-supplement}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-LR-2A-box.png}
\end{center}
\caption{Performance of approximation methods on simulated independent data across different dimensions for logistic regression, with $n=2p$ (box plot).}
\label{STA-sim-LR-2A-box}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-LR-4A-box.png}
\end{center}
\caption{Performance of approximation methods on simulated independent data across different dimensions for logistic regression, with $n=4p$ (box plot).}
\label{STA-sim-LR-4A-box}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-LR-2A-line.png}
\end{center}
\caption{Performance of approximation methods on simulated independent data across different dimensions for logistic regression, with $n=2p$ (line graph).}
\label{STA-sim-LR-2A-line}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-LR-4A-line.png}
\end{center}
\caption{Performance of approximation methods on simulated independent data across different dimensions for logistic regression, with $n=4p$ (line graph).}
\label{STA-sim-LR-4A-line}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-LR-2B-box.png}
\end{center}
\caption{Performance of approximation methods on simulated AR1 data across different dimensions for logistic regression, with $n=2p$ (box plot).}
\label{STA-sim-LR-2B-box}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-LR-4B-box.png}
\end{center}
\caption{Performance of approximation methods on simulated AR1 data across different dimensions for logistic regression, with $n=4p$ (box plot).}
\label{STA-sim-LR-4B-box}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-LR-2B-line.png}
\end{center}
\caption{Performance of approximation methods on simulated AR1 data across different dimensions for logistic regression, with $n=2p$ (line graph).}
\label{STA-sim-LR-2B-line}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{img/STA-sim-LR-4B-line.png}
\end{center}
\caption{Performance of approximation methods on simulated AR1 data across different dimensions for logistic regression, with $n=4p$ (line graph).}
\label{STA-sim-LR-4B-line}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.65]{img/PHA-bench-LR-MMH-PH-dbar.png}
\end{center}
\caption{Total change in marginal $L^1$ accuracy of a mean-mode-Hessian adjustment on benchmark datasets under logistic regression.}
\label{PHA-bench-LR-MMH-PH-dbar}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.65]{img/PHA-bench-LR-MMC-PH-dbar.png}
\end{center}
\caption{Total change in marginal $L^1$ accuracy of a mean-mode-covariance adjustment on benchmark datasets under logistic regression.}
\label{PHA-bench-LR-MMC-PH-dbar}
\end{figure}
Plots corresponding to the logistic regression case, where the skew-normal matching method was used as a standalone approximation, are provided.
These include box plots of marginal $L^1$ accuracies and line plots of mean marginal $L^1$ accuracies for each method, and are presented in Figures \ref{STA-sim-LR-2A-box} to \ref{STA-sim-LR-4A-line}.
The results are comparable to the probit regression case.
Similar plots corresponding to the dependent AR1 simulations are also presented in Figures \ref{STA-sim-LR-2B-box} to \ref{STA-sim-LR-4B-line}.
The relative performance of skew-normal matching here was similar to the independent covariate case.
Finally, post-hoc benchmark plots are given in Figures \ref{PHA-bench-LR-MMH-PH-dbar}
and \ref{PHA-bench-LR-MMC-PH-dbar}.
\bibliographystyle{plainnat}
\section{Introduction} \label{sec:intro}
In the era of all-sky astrophysical surveys, mapping the youngest stellar populations of the Milky Way
in the optical bands is one of the
core science goals.
Up to now, a full understanding of the Galactic components has been hampered by observational limits and, paradoxically, we have a better view of the morphological structure of external galaxies than of our own Milky Way.
In particular, it is crucial to trace the poorly known Galactic thin disk,
and its spiral arms, where most of the youngest stellar populations
are expected to be found.
The youngest galactic stellar component is mainly found in star forming regions (SFRs), namely stellar clusters and over-dense structures originating from the collapse of the molecular clouds, the coldest and densest part of the interstellar medium \citep{macl04}.
It is by now relatively easy to identify, at near-, mid-, and far-infrared (IR) and radio wavelengths, the young stellar objects (YSOs) within SFRs during the first phases of the star formation process. In fact, at these wavelengths, the optically thick infalling envelope or circumstellar disk around the central star produces an excess with respect to the typical photospheric emission. During the subsequent phases, YSOs start to emit also in the optical bands,
while the circumstellar disk is still optically thick \citep{bouv07}. They are no longer easily identifiable in the IR or at radio wavelengths once the final dispersal of the disk material occurs and
non-accreting transition disks form \citep{erco21}. At these latter stages,
a complete census of the YSOs can be achieved only using deep optical
observations. Such a census
is therefore crucial for a thorough comprehension of the large-scale three-dimensional structure of the young Galactic stellar component.
The identification of SFRs and the comparison with the environment in which they form are also fundamental to put them in the context of the surrounding giant molecular clouds and to answer
still open questions about the star formation (SF) process,
such as: i) did the SFRs we observe originate from monolithic single SF bursts or on multiple spatial and temporal scales,
i.e. in a hierarchical mode \citep[e.g.][]{kamp08}? ii) when can the feedback from massive stars affect the proto-planetary disk evolution \citep{tana18} or
trigger subsequent SF events,
as thought, for example, for the Supernova explosion in the $\lambda$\,Ori region \citep{koun20}?
iii) is the overall spatial
distribution of SFRs correlated with a large-scale structure, such as the Gould Belt or the damped wave very recently suggested by \citet{alve20}?
The Vera C. Rubin Observatory \citep{ivez19}
will conduct a Legacy Survey of Space and Time (LSST)
with a very impressive combination of flux sensitivity, area, and temporal sampling rate \citep{bian22}.
A flexible scheduling system is designed to maximize the scientific return under a set of observing constraints. Over its 10 years, Vera C. Rubin will perform a set of surveys. These include the Wide-Fast-Deep (WFD) survey, the “main” LSST survey, primarily focused on low-extinction regions of the sky for extra-galactic science; the Deep Drilling Fields (DDFs), a small set of individual fields that will receive high-cadence coverage;
and additional “minisurveys” that cover specific sky regions, such as the ecliptic plane, the Galactic plane (GP), and the Large and Small Magellanic Clouds,
with specific observing parameters \citep{bian22}.
The implementation of the Rubin LSST observing strategy must meet
the basic LSST Science Requirements\footnote{see \url{https://docushare.lsst.org/docushare/dsweb/Get/LPM-17}}
but a significant flexibility in the detailed cadence of observations
is left to ensure the best optimisation of the survey.
The scientific community has been involved in giving
feedback on specific science cases regarding
the distribution of visits within a year,
the distribution of images among filters, and the definition of a “visit” as single or multiple exposures.
To this aim, simulated realisations of the 10-yr sequence of observations acquired by LSST, for different sets of basis functions that define the
different survey strategies, indicated as \texttt{OpSim}, are made available to the community. Such simulated observations can be analysed through
an open-access software package, the metrics analysis framework \citep[MAF,][]{jone14},
where specific metrics can be calculated to quantify the suitability of a given strategy for reaching a particular science objective.
In the original baseline, i.e. the reference benchmark survey strategy, the mini survey of the GP was covered by a number of visits a factor $\sim$5 smaller than those planned for WFD.
The main reason for this choice is to avoid highly extincted regions with E(B-V)$>$0.2, since this amount of dust extinction is problematic for extragalactic science, as motivated in \citet{jone21}.
Nevertheless, mapping the Milky Way is one of the
four broad science pillars that LSST will address \citep{bian22}.
We aim to show here that alternative implementations, in which the number of visits planned for the WFD is extended also to the Galactic thin disk ($|b|\lesssim $ 5-10${}^\circ$),
will significantly increase the impact of Rubin LSST on the understanding
of the Galactic large-scale structures and, in particular, of the large molecular clouds in which stars form.
The stellar components of the SFRs that will be discovered with Rubin LSST observations,
and, in particular, their most populated
low mass components down to the M-dwarf regime with ages t$\lesssim$10 Myr,
are one of the most relevant outputs of the star formation
process. Assessing their properties (spatial distribution in the Galaxy, masses, ages and age spreads) is crucial to trace the global structure
of the youngest populations, as well as to address
several debated issues such as the universality of the Initial Mass Function (IMF), the duration of the evolutionary phases involved in
the star formation process and the dynamical evolution
\citep{chab03,tass04,ball07,park14}.
Even though the thin disk of the Galaxy is
strongly affected by high dust extinction and crowding,
it is the only Galactic component where most of these structures, and essentially all young stars, are
found.
A big step forward has been made in the comprehension of the Galactic components thanks to the Gaia data
that allowed us to achieve a clearer and homogeneous overview of the
Galactic structures, including the youngest ones, but at distances from the Sun
limited to 2-3\,kpc
\citep{zari18,koun19,koun20,kerr21,pris22}.
The exquisite science return from the very deep Rubin LSST static photometry, derived from co-added images, is the only opportunity to push such large-scale studies of young stars to otherwise unreachable and unexplored regions.
With respect to the old open clusters, the study of the SFRs is favored by the intrinsic astrophysical properties of the low mass YSOs, representing the bulk
of these populations.
In fact, during the pre-main sequence (PMS) phase, M-type young stars are more luminous in bolometric light than the stars of the same mass in the main sequence (MS) phase. For example, for a given extinction, a 1\,Myr (10 Myrs) old 0.3\,M$_\odot$ star is 3.4\,mag (1.6\,mag) brighter than
a 100\,Myr star of the same mass \citep{togn18,pris18a}.
Therefore, if on one hand the extinction limits our capability to detect such low mass stars, on the other hand, the higher intrinsic luminosity of
young stars (t$<$10\,Myr) compensates for this effect, allowing a significant increase in
the volume of PMS stars detectable with respect to old MS stars.
This property has been exploited in \citet{dami18,pris18,venu19},
where a purely photometric approach using {\it gri} magnitudes, combined with the deepest near-IR magnitudes available in the literature,
has been adopted to statistically identify YSOs at distances where M-type MS stars are no longer
detectable, because they are intrinsically less luminous than the
analogous M-type PMS stars.
We note, however, that studies of other relevant structures, such as
stellar clusters as tracers of the Galactic chemical evolution \citep{pris18}, as well as transient phenomena (see Street et al. 2021 Cadence Note), would also benefit from the
observing strategy discussed in this paper.
The goal of this paper is to evaluate the impact on the YSO science of different Rubin LSST observing strategies and, in particular, of the different WFD-like cadences extended to the GP, as an improvement with respect to the baseline observing strategy.
\section{Metrics definition}
The metrics we present in this work is defined as the
number of detectable YSOs with masses down to 0.3 M$_\odot$ and ages t$\leq$10\,Myr, distributed in the GP and, in particular, in the thin disk of the Milky Way.
The number of YSOs estimated in this work is
not based on empirical results but on
the theoretical description
of the star number density $\rho$ (number of stars per unit volume)
in the Galactic thin disk, integrated within the volume accessible with Rubin LSST observations. In particular,
the total number of young stars observable within a given solid angle $\Omega$ around a direction (e.g. a Healpix) defined by the Galactic coordinates $(l,b)$ will be obtained by integrating such density along cones, using volume elements
\begin{equation}
dV=\Omega r^2 dr
\label{eqvolume}
\end{equation}
where $r$ is the distance from the Sun.
Therefore we define this number of stars as
\begin{equation}
N(<r_{max}) = \int_0^{r_{max}} \rho(r,l,b) \; \; dV
\label{eqnrmax}
\end{equation}
where $r_{max}$ is the maximum distance from the Sun, which is set by the observational depth.
\subsection{Density law of YSOs in the Galactic Thin Disk}
In the Galactic thin disk, the star density distribution or number density,
usually called the density profile or density law,
can be described by a double exponential \citep[e.g.][]{cabr05}, as follows:
\begin{equation}
\rho(r,l,b)=A \exp \left(-\frac{r |\sin b|}{h} - \frac{R}{r_1}\right)
\label{eqrho}
\end{equation}
where $A$ is the total density normalization (to be computed below), $h$ is the thin disk scale height that we assume equal to 300\,pc \citep{blan16},
$R$ is the Galactocentric distance, and
$r_1$ is the thin-disk radial scale length that we assume equal to 2.6\,kpc
\citep{freu98}. The Galactocentric distance $R$ can be expressed in terms of $r$, $l$, and Galactic Center distance from the Sun $D$, as:
\begin{equation}
R = \sqrt{(D - r \cos l)^2 + (r \sin l)^2}.
\end{equation}
To derive the density normalization constant $A$, we
integrate the thin-disk density $\rho$ over the whole Galaxy, to obtain the total number of stars as:
\begin{equation}
\label{equn}
N = 4 \pi A h r_1^2
\end{equation}
Recent results by \citet[][see their Table 2]{giam21} show that, within a factor of two, we can assume in the Galaxy a constant star formation rate. Therefore,
in the whole Galaxy the total number of stars with ages t$<$10\,Myr, N$_{\rm yng}$, can
be estimated as
\begin{equation}
{\rm N}_{\rm yng}=\frac{t_{\rm yng}}{t_{\rm MW}} \times {\rm N}_{\rm tot}
\end{equation}
where
$t_{\rm yng}$=10\,Myr is the upper age limit for the youngest stellar component considered here,
$t_{\rm MW}$ is the Galaxy age that we assume to be $\sim$10\,Gyr and
N$_{\rm tot}$ is the total number of the stars in the Galaxy, roughly equal to 10$^{11}$. With these assumptions N$_{\rm yng}$ amounts to 10$^8$ stars. Moreover, to take into account that we miss about 20\% of the IMF below 0.3 $M_{\odot}$ \citep{weid10},
we apply a corrective factor of 0.8 to obtain the number of actually observable stars.
Therefore, we may reasonably approximate the total thin-disk content in Eq.\,\ref{equn} as $N \sim 0.8 \times N_{yng}$, from which $A \sim \frac{0.8 \times 10^8}{4 \pi h r_1^2}$, to be used in Eq.\,\ref{eqrho}.
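For illustration, the hedged sketch below evaluates Eqs.\,\ref{eqnrmax} and \ref{eqrho} numerically for a single line of sight; the adopted Sun--Galactic Center distance and all function names are illustrative assumptions and are not part of the metrics code discussed later.
\begin{verbatim}
import numpy as np

H, R1, D = 300.0, 2600.0, 8000.0       # h, r_1 (text) and an assumed Sun-GC distance, in pc
A = 0.8e8 / (4.0 * np.pi * H * R1**2)  # density normalization (young stars per pc^3)

def rho(r, l, b):
    """Young-star density law above, at distance r (pc) towards (l, b) in radians."""
    R = np.sqrt((D - r * np.cos(l))**2 + (r * np.sin(l))**2)
    return A * np.exp(-r * np.abs(np.sin(b)) / H - R / R1)

def n_young(l, b, r_max, omega, n_steps=2000):
    """N(<r_max) above: integrate rho over the cone of solid angle omega."""
    r = np.linspace(0.0, r_max, n_steps)
    return np.trapz(rho(r, l, b) * omega * r**2, r)

omega_pix = 4.0 * np.pi / (12 * 64**2)                 # HEALPix pixel area, Nside=64
print(n_young(np.radians(30.0), 0.0, 1.0e4, omega_pix))
\end{verbatim}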
\subsection{Rubin LSST accessible volume}
The element of volume within a given solid angle $\Omega$,
as well as the density law $\rho$, defined by
the equations\,\ref{eqvolume} and \ref{eqrho}, respectively,
depend on the distance from the Sun.
At a given mass and age and for a given extinction,
the maximum distance that can be achieved with Rubin LSST data is a function of the photometric observational depth and thus of the adopted
observing strategy.
To evaluate such distance, and thus to define such volume, our metrics has been defined with the following
criteria:
\begin{itemize}
\item use of the Rubin LSST gri filters, assuming that
the $r-i$ vs. $g-r$ diagram is used to photometrically select the YSOs,
as in \citet{dami18,pris18,venu19}.
This is a restrictive assumption, since more than 3 Rubin LSST filters
will be used for the selection;
\item use of a dust map to take into account the non-uniform
extinction pattern that characterizes the GP, in particular towards the
Galactic Center;
\item estimate the maximum distance that will be achieved
assuming different \texttt{OpSims}.
\end{itemize}
The definition of the metrics is therefore based on the following rationale:
specify the desired accuracy magnitude/$\sigma$ = 5 for gri filters
and compute the corresponding limiting magnitudes from coadded images; correct the apparent magnitudes for extinction using a dust map; compute the maximum distance at which a 10 Myr old
star of 0.3 M$_\odot$ can be detected, assuming the absolute magnitudes in the gri filters, M$_g$=10.32, M$_r$=9.28 and M$_i$=7.97,
predicted for such a star by a 10 Myr solar metallicity isochrone \citep{togn18};
calculate the corresponding volume element within a Nside=64 Healpix;
integrate the young star density within the volume element, through
equation\,\ref{eqnrmax}.
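As an illustration of this rationale, the hedged sketch below converts a set of extinction-corrected limiting magnitudes into the maximum detectable distance for the reference 10\,Myr, 0.3\,M$_\odot$ star; the extinction coefficients and the input depths are placeholder values, and, unlike the actual metrics, a single line-of-sight E(B-V) is used here instead of the distance-dependent 3D map described below.
\begin{verbatim}
import numpy as np

M_ABS = {"g": 10.32, "r": 9.28, "i": 7.97}   # absolute magnitudes quoted in the text
EXT = {"g": 3.2, "r": 2.3, "i": 1.7}         # assumed A_band / E(B-V) coefficients

def r_max_pc(m5, ebv):
    """Largest distance (pc) at which the star stays brighter than the limiting
    magnitude m5[band] in all of g, r and i, for a given E(B-V)."""
    d = []
    for band in ("g", "r", "i"):
        mu = m5[band] - EXT[band] * ebv - M_ABS[band]   # distance modulus
        d.append(10.0 ** (mu / 5.0 + 1.0))
    return min(d)

# placeholder coadded (or crowding-limited) 5-sigma depths
print(r_max_pc({"g": 26.5, "r": 26.0, "i": 25.5}, ebv=0.5))
\end{verbatim}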
\subsubsection{Dust map}
The dust extinction is an increasing function of the
distance, so that nearby stars are affected by E(B-V) values smaller than those of more distant stars.
The \citet{schl98} map is widely used in Astronomy, but it provides the extinction integrated along each line of sight out to the maximum distance
probed by the adopted 100\,$\mu$m observations.
As a consequence, it is more useful for extragalactic studies \citep{amor21}.
Assuming a 2D dust extinction,
i.e. a map that depends only on the position (e.g. the Galactic coordinates $l, b$), leads to a significant overestimation of the real
extinction at different distances, especially in directions
with A$_{\rm V}>$0.5 \citep[e.g][]{arce99,amor05}, such as in the GP
and, in particular, in the Galactic spiral arms.
In recent years, many efforts have been made to derive 3D extinction
maps that take into account the strong dependence on distance, as summarised in \citet{amor21}.
A 3D dust map, appropriate
for the Rubin LSST metrics,
has been developed (Mazzi et al. in preparation) by using the \citet{lall19} map, the
\texttt{mwdust.Combined19} map, based on the \citet{drim03,gree19,mars06} maps combined with the method described in \citet{bovy16},
and the Planck map \citep{plan16}.
The 3D map has been integrated
in the MAF as \texttt{DustMap3D} to distinguish it from the default Rubin 2D \texttt{Dust\_values} based on \citet{schl98} map.
The two maps can be imported into MAF with the following commands:
\begin{verbatim}
from rubin_sim.photUtils import Dust_values
from rubin_sim.maf.maps import DustMap3D.
\end{verbatim}
To test the effect on our metrics of using the 2D or the 3D dust map and select the one predicting the more realistic results, we computed our metrics by considering the default 2D dust map implemented within the MAF, as well as the new 3D dust map.
Fig.\,\ref{fig:dustmapcomp} (left panel) shows the results of our metrics on the OpSim simulation named \texttt{baseline\_v2.0\_10yrs},
obtained by assuming the 2D dust map, while Fig.\,\ref{fig:dustmapcomp} (right panel) shows the corresponding result, assuming the 3D dust map.
The results show that, when the 3D distance-dependent extinction map is assumed,
the number of detectable YSOs
decreases only smoothly towards the inner GP. On the contrary, the sharp decrease in the detected YSOs in the inner GP obtained with the 2D dust map is evidence of an unrealistic star distribution, due to the adoption of an unsuitably overestimated extinction. Therefore,
the final version of the code defining our metrics includes the
3D dust map.
\begin{figure*}[htp]
\includegraphics[scale=0.3]{lsst_dust_90b_fbs_2_0_baseline_baseline_v2_0_10yrs_young_stars_HEAL_SkyMap_p.pdf}
\includegraphics[scale=0.3]{lsst_211116_yso_90b_fbs_2_0_baseline_baseline_v2_0_10yrs_young_stars_HEAL_SkyMap_p.pdf}
\caption{Map of the number of detectable YSOs per HEALPix computed with our metrics, using the
2D dust map (left panel) and the 3D dust map (right panel). In both
cases the \texttt{baseline\_v2.0\_10yrs} has been adopted.
The sky is shown in the equal-area Mollweide projection in equatorial coordinates and the HEALPIX grid resolution is $N_{side}=64$.}
\label{fig:dustmapcomp}
\end{figure*}
\subsubsection{Crowding effects}
Our metrics has been defined by assuming the formal limiting magnitude at 5$\sigma$ photometric precision, but in several wide
areas of the Galactic Plane the limiting magnitude is instead set by the photometric errors due to crowding.
To take this effect into account, a specific crowding metrics has been developed within MAF, based on
the TRILEGAL stellar density maps \citep{dalt22} to compute the errors
that will result from stellar
crowding\footnote{see \url{https://github.com/LSST-nonproject/sims_maf_contrib/blob/master/science/static/Crowding_tri.ipynb}}.
Such crowding metrics has been included in our code since we expect that the observations become severely incomplete when the photometric errors due to the crowding exceed 0.25 mag. These
photometric errors are derived using the formalism by \citet{olse03} and the 0.25 mag criterion has been empirically checked using deep observations of the Bulge (Clarkson et al., in preparation for this ApJS special issue).
Therefore, to detect the faintest stars in
very crowded regions we used the minimum (brightest) magnitude between those obtained with \texttt{maf.Coaddm5Metric()} and \texttt{maf.CrowdingM5Metric(crowding\_error=0.25)}.
The limiting magnitudes achieved after including the crowding effects can be $\sim$2-3 magnitudes brighter than those obtained without
considering confusion effects. As a consequence, the number of YSOs detected by including the confusion metrics, shown in Fig.\,\ref{fig:dustmapcompcrow}, can be significantly smaller
than that predicted by the metrics that does not include such effect, shown in
Fig.\,\ref{fig:dustmapcomp}, right panel.
We note that the two-pronged fork feature in the map shown in Fig.\,\ref{fig:dustmapcompcrow}
is due to the fact that in the layers immediately above and below the GP, the crowding is the dominant effect
and the number of possible detections goes down here from more than 10000 to surprisingly low values of 10-100 (sources/HEALPix).
On the contrary,
in the central layer of the GP, the
extinction is the dominant effect, while the crowding
effect is negligible since a smaller number of stars is visible. As a consequence, after applying the crowding metrics, the
number of detections in this central layer remains almost constant
(around 1000 sources)\footnote{note the different color scales
in the maps}, while the missed detection fraction is significantly larger in the layers immediately above and below the GP.
This result is consistent with what was recently pointed out by \citet{cant23} for the Gaia data.
\begin{figure}[htp]
\includegraphics[scale=0.3]{dbFiles_baseline_v2_0_10yrs_young_stars_HEAL_SkyMap_p.pdf}
\caption{
Map of the number of detectable YSOs per HEALPix computed with our metrics, using the \texttt{baseline\_v2.0\_10yrs}, the 3D dust map and the crowding metrics. The sky is shown in the equal-area Mollweide projection in equatorial coordinates and the HEALPIX grid resolution is $N_{side}=64$.}
\label{fig:dustmapcompcrow}
\end{figure}
\section{Results}
\begin{figure*}[htp]
\gridline{\fig{baseline_v2_0_10yrs_NVisits_All_sky_all_bands_HEAL_SkyMap_p.pdf}{0.45\textwidth}{(a)}
\fig{baseline_v2_1_10yrs_NVisits_All_sky_all_bands_HEAL_SkyMap_p.pdf}{0.45\textwidth}{(b)}
}
\gridline{\fig{vary_gp_gpfrac1_00_v2_0_10yrs_NVisits_All_sky_all_bands_HEAL_SkyMap_p.pdf}{0.45\textwidth}{(c)}
\fig{plane_priority_priority0_3_pbt_v2_1_10yrs_NVisits_All_sky_all_bands_HEAL_SkyMap_p.pdf}{0.45\textwidth}{(d)}
}
\caption{Map of the number of visits per HEALPix planned for the four representative \texttt{OpSim} runs
relevant for our science case. The small yellow patches are the Deep Drilling Fields (DDFs), to which a higher number of observations is assigned. Note that the Rubin-LSST footprint of the baselines (panels a and b) is significantly different from that of the \texttt{vary\_gp\_gpfrac1.00\_v2.0\_10yrs}
and the \texttt{plane\_priority\_priority0.3\_pbt\_v2.1\_10yrs} \texttt{OpSim} (panels c and d), where the number of visits at the WFD level is assigned
also to the high dust extinction GP area.
The sky is shown in the equal-area Mollweide projection in equatorial coordinates and the HEALPIX grid resolution is $N_{side}=64$.
}
\label{fig:nvisits}
\end{figure*}
The Python code that computes the metrics described in the previous sections is publicly available in the central
\texttt{rubin\_sim} MAF metrics
repository\footnote{\url{https://github.com/lsst/rubin_sim/blob/main/rubin_sim/maf/maf_contrib/young_stellar_objects_metric.py}}
with the name
\texttt{YoungStellarObjectsMetric.py}.
To evaluate the impact of the different family sets of \texttt{OpSim} on our science case, we considered the current state of LSST
v2.0 and v2.1 \texttt{OpSim} databases and metrics results\footnote{available at
\url{http://astro-lsst-01.astro.washington.edu:8080}}. In particular, we considered the
following runs:
\begin{itemize}
\item \texttt{{\bf baseline\_v2.0\_10yrs}}, the reference benchmark survey strategy in which the WFD survey footprint has been extended to include the central GP and bulge, and is defined by low extinction regions. Such configuration includes also five Deep Drilling Fields and additional mini-survey areas of the North Ecliptic Plane, of the GP, and of the South Celestial Pole.
\item \texttt{{\bf baseline\_v2.1\_10yrs}}, in which the Virgo cluster and the requirement on the seeing fwhmEff $<$ 0.8${}^{\prime\prime}$\, for the images in r and i bands have been added with respect to the \texttt{baseline\_v2.0\_10yrs}.
\item {\bf Vary GP} family of simulations in which the fraction of the amount of survey time spent on covering the background (non-WFD-level) GP area ranges
from 0.01 to 1.0 (labeled as frac0.01 - frac1.00), where 1.0 corresponds to extending the WFD cadence to the entire GP. The baseline characteristics, including the ratio of visits over the remainder of the footprint, are kept the same. In our case,
we considered the \texttt{vary\_gp\_gpfrac1.00\_v2.0\_10yrs} survey, planning the maximum amount of time on the GP.
\item {\bf Plane Priority}, the family of \texttt{OpSim} that uses the GP priority map, contributed by LSST SMWLV \& TVS science collaborations, as the basis for further variations on GP coverage.
For this family of simulations, different levels of priority of the GP map are covered at WFD level,
unlike the baseline and Vary GP family where only
the "bulge area" is covered at WFD level.
In our case, we considered the
\texttt{plane\_priority\_priority0.3\_pbt\_v2.1\_10yrs} survey,
since it maximises the results of our metrics.
\end{itemize}
To evaluate the best observing strategy to be adopted for the YSO science,
we define as Figure of Merit (FoM) the ratio of
the number of young stars detected with a
given \texttt{OpSim} to the number of young stars detected
with the \texttt{baseline\_v2.0\_10yrs} survey, the latter taken
as the reference.
The maps of the number of visits planned for these four representative \texttt{OpSim} surveys are shown in Figure\,\ref{fig:nvisits}, while
the results of the metrics, i.e. the number of YSOs with ages $t<10$\,Myr
and masses $>0.3$\,M$_\odot$, obtained with these four representative \texttt{OpSim} surveys are
given in Table\,\ref{tab:resultsmaf} and shown in Figure\,\ref{fig:resultsmafold}.
The FoM values obtained with our metrics using the four \texttt{OpSim} described above are also given in
Table\,\ref{tab:resultsmaf}. Both the number of YSOs and the FoM predicted by the metrics, without and with the crowding effects, are
given in order to quantify the differences due to the confusion effects.
\begin{table*}[]
\centering
\begin{tabular}{c c c c c}
\hline
\hline
\texttt{OpSim} ID & N & FoM & N$_{\rm Crow}$ & FoM$_{\rm Crow}$ \\
\hline
\texttt{baseline\_v2.0\_10yrs} &
8.08$\times 10^6$ & 1.00 & 4.84$\times 10^6$ & 1.00 \\
\texttt{baseline\_v2.1\_10yrs} &
8.10$\times 10^6$ & 1.00 & 4.87$\times 10^6$ & 1.01\\
\texttt{vary\_gp\_gpfrac1.00\_v2.0\_10yrs} &
8.92$\times 10^6$ & 1.10 & 5.58$\times 10^6$ & 1.15\\
\texttt{plane\_priority\_priority0.3\_pbt\_v2.1\_10yrs} &
9.51$\times 10^6$ & 1.18 & 6.02$\times 10^6$ & 1.24 \\
\hline
\end{tabular}
\caption{Number of YSOs with ages t$<$10\,Myr and
masses $>$0.3\,M$_\odot$ that can be detected at distances $<r_{max}$
set by different \texttt{OpSim} surveys,
and relative FoM values, estimated by neglecting the crowding
metrics (N and FoM columns) and by including it
(N$_{\rm Crow}$ and FoM$_{\rm Crow}$ columns).}
\label{tab:resultsmaf}
\end{table*}
\begin{figure*}[!htp]
\gridline{\fig{dbFiles_baseline_v2_0_10yrs_young_stars_HEAL_SkyMap_p.pdf}{0.45\textwidth}{(a)}
\fig{dbFiles_baseline_v2_1_10yrs_young_stars_HEAL_SkyMap_p.pdf}{0.45\textwidth}{(b)}
}
\gridline{\fig{dbFiles_vary_gp_gpfrac1_00_v2_0_10yrs_young_stars_HEAL_SkyMap_p.pdf}{0.45\textwidth}{(c)}
\fig{dbFiles_plane_priority_priority0_3_pbt_v2_1_10yrs_young_stars_HEAL_SkyMap_p.pdf}{0.45\textwidth}{(d)}
}
\caption{Map of the number of YSOs per HEALPix as computed
by our metrics, adopting the \texttt{OpSim} surveys listed
in Table\,\ref{tab:resultsmaf} and indicated by the names superimposed on top of each panel. The sky is shown in the equal-area Mollweide projection in equatorial coordinates and the HEALPIX grid resolution is $N_{side}=64$. Note that the map in panel (a) is identical to the map shown in Fig.\,\ref{fig:dustmapcompcrow}.}
\label{fig:resultsmafold}
\end{figure*}
\section{Discussion and conclusions}
In this paper we presented a metrics aimed at estimating the number of
YSOs we can discover with Rubin LSST static science data.
To take into account the effects due to extinction, we evaluated the
metrics by using both the default LSST MAF 2D dust map and the 3D dust map,
in which a more realistic dependence on the distance is included.
The results of this comparison show that the 2D dust map overestimates the E(B-V) values for nearby (within a few kpc) accessible stars, even in the direction of the GP.
For this reason, the final metrics has been implemented by considering the 3D dust map integrated into MAF.
To evaluate the faintest magnitudes that can be attained in crowded regions, a specific crowding metrics has been
imported into the code.
In order to quantify the crowding effects, we also evaluated our metrics neglecting them.
The results
show that, if the crowding metrics is included, the number of
detected YSOs (N$_{\rm Crow}$ in Table\,\ref{tab:resultsmaf}) is only a factor
0.60-0.63 of the number of YSOs predicted if the crowding is
not considered (column N in Table\,\ref{tab:resultsmaf}). Since the most realistic predictions are those obtained by including the
crowding effects, the final version of the code
that defines the YSOs metrics includes
the crowding metrics.
The impact of changing from the baseline v2.0 to v2.1 is
negligible. On the other hand, adopting the \texttt{vary\_gp\_gpfrac1.00\_v2.0\_10yrs} survey,
the only one available covering 100\% of the GP with WFD exposure times,
the predicted number of YSOs (t$<$10\,Myr) down to 0.3\,M$_\odot$
is a factor 1.15 larger than that predicted adopting the
\texttt{baseline\_v2.0\_10yrs}, leading to a gain of 0.74$\times 10^6$ newly discovered YSOs at very large distances.
An even higher impact can be obtained by adopting the most recently
contributed
\texttt{plane\_priority\_priority0.3\_pbt\_v2.1\_10yrs} run, which covers
different levels of priority of the GP map at WFD level.
With this survey, the predicted number of YSOs is a factor 1.24 higher
with respect to the baseline, corresponding to a gain of
$\sim 1.17 \times 10^6$ additional YSOs.
To determine at which distances from the Sun this gain is
most significant, we also included in our metrics the computation
of the maximum distance that can be reached for a 10\,Myr old star of 0.3\,M$_\odot$
within each HEALPix.
We derived the distribution of such distances, weighted by the
number of YSOs that can be detected in each line of sight.
The distributions obtained for the four \texttt{OpSim} considered in this work
are shown in Fig.\,\ref{fig:histdist}.
\begin{figure}[htp]
\includegraphics[scale=0.6]{histogram_opsims.pdf}
\caption{Number of detected YSOs as a function of the distance for the four \texttt{OpSim} indicated in Table\,\ref{tab:resultsmaf}.}
\label{fig:histdist}
\end{figure}
We note that
the number of YSOs detected at distances from the Sun larger than $\sim$10\,kpc
is significantly larger for the \texttt{plane\_priority\_priority0.3\_pbt\_v2.1\_10yrs}
and \texttt{vary\_gp\_gpfrac1.00\_v2.0\_10yrs} \texttt{OpSims}
than for the two baselines. The curve of the \texttt{vary\_gp\_gpfrac1.00\_v2.0\_10yrs}
\texttt{OpSim} sits below the baseline curves for d=7-10\,kpc. This could be due
to the fact that in the two baselines the number of visits in the regions outside the
GP is higher (800-900) than that planned for the \texttt{vary\_gp\_gpfrac1.00\_v2.0\_10yrs}
(700-800), where the coverage is uniform.
This comparison clearly shows that the science enabled by adopting an
observing strategy consistent with the
\texttt{plane\_priority\_priority0.3\_pbt\_v2.1\_10yrs}
or \texttt{vary\_gp\_gpfrac1.00\_v2.0\_10yrs} \texttt{OpSims}
would be precluded if we followed the prescriptions of other simulations.
These results highlight that the greatest impact of extending the Rubin LSST WFD cadence
to the GP arises mainly from the farthest, as yet unknown, young stars that will be discovered.
We note that the two baselines and the \texttt{vary\_gp\_gpfrac1.00\_v2.0\_10yrs} \texttt{OpSims}
show asymmetric distributions with peaks around 9\,kpc and 14\,kpc, respectively.
The \texttt{plane\_priority\_priority0.3\_pbt\_v2.1\_10yrs} \texttt{OpSim}
shows a smoother and more symmetric shape, which is likely due to the
different levels of priority given to the GP for this survey strategy.
We maintain that a gain of 24\% in the number of discovered YSOs, mainly
arising at distances larger than 10\,kpc,
represents a strong justification to extend the WFD strategy to the GP
and to ensure the uniformity of the observing strategy across the Galaxy.
Finally, we note that our metrics scales more strongly with area
than with the number of visits per pointing; therefore, in case the
\texttt{plane\_priority\_priority0.3\_pbt\_v2.1\_10yrs} survey
cannot be adopted, we advocate the adoption of the
\texttt{vary\_gp\_gpfrac1.00\_v2.0\_10yrs}, to preserve the uniform coverage of the GP.
We stress that our request is to extend the number of visits planned for the
WFD to the GP,
but without constraints on the temporal observing cadence, since our analysis is based on the
use of the co-added gri Rubin LSST images.
However, different visits per night should be in different filters from the
gri set. The same applies if 2 or 3 visits/night are made, as this would enhance both the number
and the accuracy of the available color measurements, and hence increase the fraction of YSOs
that can be detected and characterized thanks to their color properties.
The color accuracy is improved if the observations in different filters are taken close together in time and are therefore
least affected by stellar variability.
The metrics described in this work has been developed assuming the use of a photometric technique to statistically identify YSOs,
mainly expected within SFRs, as already done in \citet{dami18,pris18,venu19}. However, YSOs can also form dispersed populations, mingled with the general thin disk population
\citep[e.g.][]{bric19,luhm22}, and their identification can be hampered by the strong field star contamination. Such an issue can be
overcome by also exploiting the Rubin LSST accurate proper motion
and parallax measurements \citep{lsst17,pris18a}, as well as the deepest near-IR photometric surveys, such as, for example, the
VISTA Variables in the Via Lactea \citep{minn10}. Nevertheless,
a spectroscopic follow-up will be needed for
a full characterization of those YSOs detected with Rubin LSST for
which spectroscopic data can be obtained.
Forthcoming spectroscopic facilities like WEAVE \citep{dalt20} and
4MOST \citep{dejo22} will be crucial to this aim, at least within the
limiting magnitudes they can achieve.
The exquisite depth of the Rubin LSST coadded images represents a unique opportunity to study large-scale young structures and
to investigate the low mass stellar populations in/around the thin disk GP, where star formation mainly occurs. Our results suggest that
in order to maximise the volume of detected YSOs and the uniformity of
coverage of the large-scale young structures,
the amount of survey time dedicated to the GP should be equal to or comparable to the WFD level. These observations will allow us
to better characterize the Galactic structure and in particular the Norma, Scutum, Sagittarius, Local, Perseus and the Outer spiral arms, reported in \citet{reid14}.
\begin{acknowledgments}
This work was supported by the Preparing for Astrophysics with LSST Program, funded by the Heising Simons Foundation through grant 2021-2975, and administered by Las Cumbres Observatory.
AM acknowledges financial support from Padova University, Department of Physics and Astronomy Research Project 2021
(PRD 2021). L.V. is supported by the National Aeronautics and Space Administration (NASA) under grant No. 80NSSC21K0633 issued through the NNH20ZDA001N Astrophysics Data Analysis Program (ADAP).
\end{acknowledgments}
\section{Introduction and main results}
In this manuscript we are interested in continuous state branching processes in random environments, in particular when the environment is driven by a L\'evy process. This family of processes is known as \emph{continuous state branching processes in L\'evy environment} (or CBLEs for short) and they have been constructed independently by He et al. \cite{he2018continuous} and Palau and Pardo \cite{palau2018branching}, as the unique non-negative strong solution of a stochastic differential equation whose linear term is driven by a L\'evy process.
The classification of the asymptotic behaviour of rare events of CBLEs, such as the survival probability, depends on the long-term behaviour of the environment. In other words, an auxiliary L\'evy process, which is associated with the environment, leads to the usual classification for the long-term behaviour of branching processes. To be more precise, the CBLE is called \textit{supercritical}, \textit{critical} or \textit{subcritical} accordingly as the auxiliary L\'evy process drifts to $\infty$, oscillates or drifts to $-\infty$. Furthermore, in the subcritical regime another phase transition arises, which depends on whether, under a suitable exponential change of measure, the L\'evy process drifts to $-\infty$, oscillates or drifts to $\infty$. These regimes are known in the literature as \textit{strongly}, \textit{intermediate} and \textit{weakly subcritical} regimes, respectively.
The study of the long-term behaviour of CBLEs has attracted considerable attention in the last decade, see for instance Bansaye et al. \cite{bansaye2013extinction}, B\"oinghoff and Hutzenthaler \cite{boeinghoff2011branching}, He et al. \cite{he2018continuous}, Li and Xu \cite{li2018asymptotic}, Palau and Pardo \cite{palau2017continuous, palau2018branching}, Palau et al. \cite{palau2016asymptotic} and Xu \cite{xu2021}. All the aforementioned studies deal with the case when the branching mechanism is associated to a stable jump structure or a Brownian component on the branching term. For simplicity on exposition we will call such branching mechanisms as {\it stable}. Bansaye et al. \cite{bansaye2013extinction} determined the long-term behaviour for stable CBLEs when the random environment is driven by a L\'evy process with bounded variation paths. Palau and Pardo \cite{palau2017continuous} studied the case when the random environment is driven by a Brownian motion with drift. Afterwards, Li and Xu \cite{li2018asymptotic} and Palau et al. \cite{palau2018branching}, independently, extended this result to the case when the environment is driven by a general L\'evy process. More recently, Xu \cite{xu2021} provided an exact description for the speed of the extinction probability for CBLEs with stable branching mechanism and where the L\'evy environment is heavy-tailed. It is important to note that all these manuscripts exploited the explicit knowledge of the survival probability which is given in terms of exponential functionals of L\'evy processes.
Much less is known about the long-term behaviour of CBLEs when the associated branching mechanism is more general. Up to our knowledge, the only studies in this direction are Bansaye et al. \cite{bansaye2021extinction} and Cardona-Tob\'on and Pardo \cite{CTP21}, where the speed of extinction for more general branching mechanisms is studied. More precisely, Bansaye et al. \cite{bansaye2021extinction} focus on the critical case (oscillating L\'evy environments satisfying the so-called Spitzer's condition at $\infty$) and relax the assumption that the branching mechanism is stable. Shortly afterwards, Cardona-Tob\'on and Pardo \cite{CTP21} studied the speed of extinction of CBLEs in the strongly and intermediate subcritical regimes. Their methodology combines a path analysis of the branching process together with its L\'evy environment, fluctuation theory for L\'evy processes and the asymptotic behaviour of exponential functionals of L\'evy processes.
In this manuscript we continue this systematic study of the asymptotic behaviour of the survival probability of CBLEs under more general branching mechanisms, but now in the weakly subcritical regime. It is important to note that extending such asymptotic behaviour to more general branching mechanisms is not as easy as one might think, since it requires controlling a rather involved functional of the L\'evy process associated with the environment. Moreover, contrary to the discrete case, the state 0 can be polar
and the process might become very close to 0 but never reach this point. To focus on the absorption event, we use Grey's condition which guarantees that 0 is accessible.
Our main contribution is to provide its precise asymptotic behaviour under some assumptions on the auxiliary L\'evy process and the branching mechanism.
In particular, we obtain that the survival probability decays exponentially with a polynomial factor of order ${3/2}$ (up to a multiplicative constant which is computed explicitly and depends on the limiting behaviour of the survival probability given favourable environments). Moreover, for the stable case we recover the results of \cite{li2018asymptotic}, where the limiting constant is given in terms of the exponential functional of the L\'evy process. In order to deduce such asymptotic behaviour, we combine the approach developed in \cite{afanasyev2012} for the discrete time setting with fluctuation theory of L\'evy processes and a strategy similar to that developed by Bansaye et al. in \cite{bansaye2021extinction}. A key point in our arguments is to rewrite the probability of survival under a suitable change of measure which is associated with an exponential martingale of the L\'evy environment. In order to do so, the existence of some exponential moments for the L\'evy environment is required.
Under this exponential change of measure the L\'evy environment now oscillates
and we can apply a strategy similar to that developed by Bansaye et al. in \cite{bansaye2021extinction} to study the extinction rate for CBLEs in the critical regime. More precisely, under this new measure, we split the event of survival into two parts, namely when the running infimum is either negative or positive, and then we show that only paths of the L\'evy process with a positive running infimum give a substantial contribution to the speed of survival. In this regime, we assume that the branching mechanism is regularly varying and satisfies a lower bound, which allow us to control the event of survival under favourable and unfavourable environments, respectively. Our results complement those in \cite{bansaye2021extinction, CTP21}.
\subsection{Main results}\label{sec_defandprop}
Let $(\Omega^{(b)}, \mathcal{F}^{(b)}, (\mathcal{F}^{(b)}_t)_{t\geq 0}, \mathbb{P}^{(b)})$ be a filtered probability space satisfying the usual hypothesis on which we may construct the demographic (branching) term of the model that we are interested in. We suppose that $(B_t^{(b)}, t\geq 0)$ is a $(\mathcal{F}_t^{(b)})_{t\geq 0}$-adapted standard Brownian motion and $N^{(b)}(\mathrm{d} s , \mathrm{d} z, \mathrm{d} u)$ is a $(\mathcal{F}_t^{(b)})_{t\geq 0}$-adapted Poisson random measure on $\mathbb{R}^3_+$ with intensity $\mathrm{d} s \mu(\mathrm{d} z)\mathrm{d} u$ where $\mu$ satisfies \begin{equation}\label{eq_finitemean}
\int_{(0,\infty)}(z\wedge z^2)\mu(\mathrm{d}z)<\infty.
\end{equation}
We denote by $\widetilde{N}^{(b)}(\mathrm{d} s , \mathrm{d} z, \mathrm{d} u)$ for the compensated version of $N^{(b)}(\mathrm{d} s , \mathrm{d} z, \mathrm{d} u)$. Further, we also introduce the so-called branching mechanism $\psi$, a convex function with the following L\'evy-Khintchine representation
\begin{equation}
\psi(\lambda) =\psi'(0+) \lambda + \varrho^2 \lambda^2 + \int_{(0,\infty)} \big(e^{-\lambda x} - 1 + \lambda x \big) \mu(\mathrm{d} x), \qquad \lambda \geq 0,
\end{equation}
where $\varrho\ge 0$. Observe that the term $\psi'(0+)$ is well defined (finite) since condition \eqref{eq_finitemean} holds. Moreover, the function $\psi$ describes the stochastic dynamics of the population.
On the other hand, for the environmental term, we consider another filtered probability space $(\Omega^{(e)}, \mathcal{F}^{(e)},(\mathcal{F}^{(e)}_t)_{t\geq 0}, \mathbb{P}^{(e)})$ satisfying the usual hypotheses. Let us consider $\sigma \geq 0$ and $\alpha$ real constants; and $\pi$ a measure concentrated on $\mathbb{R}\setminus\{0\}$ such that $$\int_{\mathbb{R}} (1\land z^2)\pi(\mathrm{d} z)<\infty.$$ Suppose that \ $(B_t^{(e)}, t\geq 0)$ \ is a $(\mathcal{F}_t^{(e)})_{t\geq0}$ - adapted standard Brownian motion, $N^{(e)}(\mathrm{d} s, \mathrm{d} z)$ is a $(\mathcal{F}_t^{(e)})_{t\geq 0}$ - Poisson random measure on $\mathbb{R}_+ \times \mathbb{R}$ with intensity $\mathrm{d} s \pi(\mathrm{d} z)$, and $\widetilde{N}^{(e)}(\mathrm{d} s, \mathrm{d} z)$ its compensated version. We denote by $S=(S_t, t\geq 0)$ a L\'evy process, that is a process with stationary and independent increments and c\`adl\`ag paths, with the following L\'evy-It\^o decomposition
\begin{equation*}\label{eq_ambLevy}
S_t = \alpha t + \sigma B_t^{(e)} + \int_{0}^{t} \int_{(-1,1)} (e^z - 1) \widetilde{N}^{(e)}(\mathrm{d} s, \mathrm{d} z) + \int_{0}^{t} \int_{(-1,1)^c} (e^z - 1) N^{(e)}(\mathrm{d} s, \mathrm{d} z).
\end{equation*}
Note that $S$ is a L\'evy process with no jumps smaller than -1.
In our setting, we are considering independent processes for the demographic and environmental terms. More precisely, we work now on the space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\geq 0}, \mathbb{P})$ the direct product of the two probability spaces defined above, that is to say, $\Omega := \Omega^{(e)} \times \Omega^{(b)}, \mathcal{F}:= \mathcal{F}^{(e)}\otimes \mathcal{F}^{(b)}, \mathcal{F}_t:= \mathcal{F}^{(e)}_t \otimes \mathcal{F}^{(b)}_t$ for $t\geq0$, $ \mathbb{P}:=\mathbb{P}^{(e)} \otimes \mathbb{P}^{(b)} $. Therefore $(Z_t, t\geq 0)$,
the \textit{continuous-state branching process in the L\'evy environment $(S_t, t\geq 0)$} is defined on $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\geq 0}, \mathbb{P})$ as the unique non-negative strong solution of the following stochastic differential equation
\begin{equation}\label{CBILRE}
\begin{split}
Z_t = &Z_0 - \psi'(0+) \int_{0}^{t} Z_s \mathrm{d} s + \int_{0}^{t} \sqrt{2\varrho^2 Z_s} \mathrm{d} B_s^{(b)} \\ &\hspace{4cm} + \int_{0}^{t} \int_{(0,\infty)} \int_{0}^{Z_{s-}}z \widetilde{N}^{(b)} (\mathrm{d} s, \mathrm{d} z, \mathrm{d} u)+ \int_{0}^{t} Z_{s-} \mathrm{d} S_s.
\end{split}
\end{equation}
According to Theorem 3.1 in He et al. \cite{he2018continuous} or Theorem 1 in Palau and Pardo \cite{palau2018branching}, equation \eqref{CBILRE} has a unique non-negative strong solution which does not explode.
An important property satisfied by $Z$ is that, given the environment, it inherits the branching property of the underlying continuous state branching process. We denote by $\mathbb{P}_z$, for its law starting from $z\ge 0$.
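To fix ideas, the following minimal sketch simulates a naive Euler discretization of \eqref{CBILRE} in the purely diffusive special case (no reproduction jumps and a Brownian environment with drift, i.e. $\mu\equiv 0$ and $\pi\equiv 0$); it is only an illustration of the dynamics under these simplifying assumptions and is not part of our arguments.
\begin{verbatim}
import numpy as np

def euler_cble_diffusive(z0, b, rho2, alpha, sigma, T=10.0, n=10_000, seed=1):
    """Euler scheme for dZ = -b Z dt + sqrt(2 rho2 Z) dB^(b) + Z (alpha dt + sigma dB^(e)),
    where b stands for psi'(0+); the state is kept non-negative by truncation."""
    rng = np.random.default_rng(seed)
    dt = T / n
    z = np.empty(n + 1)
    z[0] = z0
    for k in range(n):
        db, de = rng.normal(scale=np.sqrt(dt), size=2)
        incr = (-b * z[k] * dt
                + np.sqrt(2.0 * rho2 * z[k]) * db
                + z[k] * (alpha * dt + sigma * de))
        z[k + 1] = max(z[k] + incr, 0.0)
    return z

path = euler_cble_diffusive(z0=1.0, b=0.5, rho2=1.0, alpha=0.1, sigma=0.3)
\end{verbatim}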
The analysis of the process $Z$ is deeply related to the behaviour and fluctuations of the L\'evy process $\xi=(\xi_t, t\ge 0)$, defined as follows
\begin{equation}\label{eq_envir2}
\xi_t = \overline{\alpha} t + \sigma B_t^{(e)} + \int_{0}^{t} \int_{(-1,1)} z \widetilde{N}^{(e)}(\mathrm{d} s, \mathrm{d} z) + \int_{0}^{t} \int_{(-1,1)^c}z N^{(e)}(\mathrm{d} s, \mathrm{d} z),
\end{equation}
where
\begin{equation*}
\overline{\alpha} := \alpha -\psi'(0+)-\frac{\sigma^2}{2} - \int_{(-1,1)} (e^z -1 -z) \pi(\mathrm{d} z).
\end{equation*}
Note that both processes $S$ and $\xi$ generate the same filtration.
In addition, we see that the drift term $\overline{\alpha}$ captures the interaction between the demographic and environmental parameters. We denote by $\mathbb{P}^{(e)}_x$ the law of the process $\xi$ starting from $x\in \mathbb{R}$ and, when $x=0$, we use the notation $\mathbb{P}^{(e)}$ for $\mathbb{P}^{(e)}_0$.
Further, under condition \eqref{eq_finitemean}, the process $\left(Z_t e^{-\xi_t}, t\geq 0\right)$ is a quenched martingale implying that for any $t\geq 0$ and $z\geq 0$,
\begin{equation}\label{martingquenched}
\mathbb{E}_z[Z_t \ \vert \ S]=ze^{\xi_t}, \ \qquad \mathbb{P}_z \ \textrm{-a.s},
\end{equation}
see Bansaye et al. \cite{bansaye2021extinction}. In other words, the process $\xi$ plays a role analogous to that of the random walk associated with the logarithm of the offspring means in the discrete-time framework, and it leads to the usual classification of the long-term behaviour of branching processes. More precisely, we say that the process $Z$ is subcritical, critical or supercritical according to whether $\xi$ drifts to $-\infty$, oscillates or drifts to $+\infty$.
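As a simple consequence, let us record the annealed first moment: taking expectations in \eqref{martingquenched} yields
\begin{equation*}
\mathbb{E}_z[Z_t]=z\,\mathbb{E}^{(e)}\big[e^{\xi_t}\big], \qquad t\geq 0,
\end{equation*}
whenever the expectation on the right-hand side is finite; under the exponential moment condition \eqref{eq_moments} introduced below, this equals $z\,e^{t\Phi_\xi(1)}$, where $\Phi_\xi$ denotes the Laplace exponent of $\xi$ defined there.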
In addition, under condition \eqref{eq_finitemean}, there is another quenched martingale associated to $(Z_t e^{-\xi_t}, t\geq 0)$ which allows us to compute its Laplace transform, see for instance Proposition 2 in \cite{palau2018branching} or Theorem 3.4 in \cite{he2018continuous}. In order to compute the Laplace transform of $Z_t e^{-\xi_t}$, we first introduce the unique positive solution $(v_t(s,\lambda, \xi), s\in [0,t])$ of the following backward differential equation
\begin{equation}\label{eq_BDE}
\frac{\partial}{\partial s} v_t(s,\lambda, \xi) = e^{\xi_s} \psi_0(v_t(s, \lambda, \xi)e^{-\xi_s}), \qquad v_t(t,\lambda, \xi) = \lambda,
\end{equation}
where
\begin{equation}\label{eq_phi0}
\psi_0(\lambda)= \psi(\lambda)- \lambda \psi'(0+)=\varrho^2 \lambda^2 + \int_{(0,\infty)} \big(e^{-\lambda x} - 1 + \lambda x\big) \mu(\mathrm{d} x) .
\end{equation}
Then the process $\left(\exp\{-v_t(s,\lambda, \xi) Z_s e^{-\xi_s}\}, 0\le s\le t\right)$ is a quenched martingale implying that for any $\lambda\geq 0$ and $t\geq s\geq 0$,
\begin{equation}\label{eq_Laplace}
\mathbb{E}_{(z,x)}\Big[\exp\{-\lambda Z_t e^{-\xi_t}\}\ \Big|\, S, \mathcal{F}^{(b)}_s\Big] = \exp\{-Z_se^{-\xi_s}v_t(s, \lambda, \xi)\}.
\end{equation}
We may think of $v_t(\cdot, \cdot, \xi)$ as an inhomogeneous cumulant semigroup determined by the time-dependent branching mechanism $(s,\theta)\mapsto e^{\xi_s} \psi_0(\theta e^{-\xi_s})$. The functional $v_t(\cdot, \cdot, \xi)$ is quite involved, except for a few cases (the stable and Neveu cases), due to the stochasticity coming from the time-dependent branching mechanism, which makes it difficult to control.
In what follows, we assume that $\xi$ is not a compound Poisson process, in order to avoid the possibility that the process visits the same maxima or minima at distinct times, which would make our analysis more involved. Moreover, we also require the following exponential moment condition,
\begin{equation}\label{eq_moments}\tag{\bf{H1}}
\textrm{there exists }\quad \vartheta > 1\ \text{ such that } \ \int_{\{|x|>1\}}e^{\lambda x}\pi(\mathrm{d} x)<\infty, \quad \textrm{for all} \quad \lambda \in [0, \vartheta],
\end{equation}
which is equivalent to the existence of the Laplace transform on $[0, \vartheta]$, i.e. $\mathbb{E}^{(e)}[e^{\lambda \xi_1}]$ is well defined for $\lambda\in [0, \vartheta]$ (see for instance Lemma 26.4 in Sato \cite{ken1999levy}). The latter implies that we can introduce the Laplace exponent of $\xi$ as follows
\[
\Phi_\xi(\lambda):=\log \mathbb{E}^{(e)}[e^{\lambda \xi_1}], \qquad \textrm{ for }\quad \lambda\in [0, \vartheta].
\]
Again from Lemma 26.4 in \cite{ken1999levy}, the mapping $\lambda\mapsto\Phi_\xi(\lambda)$ is of class $C^\infty$ on $(0,\vartheta)$ and $\Phi_\xi^{\prime\prime}(\lambda)>0$, for $\lambda\in (0, \vartheta)$.
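As a simple illustration of the notation, if $\pi\equiv 0$, so that the environment has no jumps, then $\xi_t=\overline{\alpha} t+\sigma B^{(e)}_t$ and
\begin{equation*}
\Phi_\xi(\lambda)=\overline{\alpha}\lambda+\frac{\sigma^2}{2}\lambda^2, \qquad \lambda\geq 0,
\end{equation*}
and the moment condition \eqref{eq_moments} is automatically satisfied for every $\vartheta>1$. We refer to this setting below as the Brownian environment case.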
Another object which will be relevant in our analysis is the so-called exponential martingale associated to the L\'evy process $\xi$, i.e.
\[
M^{(\lambda)}_t=\exp\Big\{\lambda\xi_{t}-t\Phi_\xi(\lambda)\Big\}, \qquad t\ge 0,
\]
which is well-defined for $\lambda\in [0,\vartheta]$ under assumption \eqref{eq_moments}. It is well-known that $(M^{(\lambda)}_t, t\ge 0)$ is a $(\mathcal{F}^{(e)}_t)_{t\ge 0}$-martingale and that it induces a change of measure which is known as the Esscher transform, that is to say
\begin{equation}\label{eq_medida_Essher}
\mathbb{P}^{(e,\lambda)}(\Lambda):= \mathbb{E}^{(e)}\Big[M_t^{(\lambda)} \mathbf{1}_{\Lambda}\Big], \qquad \textrm{for}\quad \Lambda\in \mathcal{F}^{(e)}_t.
\end{equation}
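Let us recall a standard consequence of this change of measure, which will be used implicitly throughout: under $\mathbb{P}^{(e,\lambda)}$ the process $\xi$ is again a L\'evy process, and its Laplace exponent is obtained by tilting, namely
\begin{equation*}
\log\mathbb{E}^{(e,\lambda)}\big[e^{\theta \xi_1}\big]=\Phi_\xi(\lambda+\theta)-\Phi_\xi(\lambda), \qquad \theta\in[0,\vartheta-\lambda],
\end{equation*}
see for instance \cite{kyprianou2014fluctuations}. In particular, $\mathbb{E}^{(e,\lambda)}[\xi_1]=\Phi_\xi'(\lambda)$, so the sign of $\Phi_\xi'(\lambda)$ determines whether $\xi$ drifts to $+\infty$, oscillates or drifts to $-\infty$ under $\mathbb{P}^{(e,\lambda)}$.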
Let us introduce the dual process $\widehat{\xi}=-\xi$ which is also a L\'evy process satisfying that for any fixed time $t>0$, the processes
\begin{equation}\label{eq_lemmaduality}
(\xi_{(t-s)^-}-\xi_{t}, 0\le s\le t)\qquad \textrm{and}\qquad (\widehat{\xi}_s, 0\le s\le t),
\end{equation}
have the same law, with the convention that $\xi_{0^-}=\xi_0$ (see for instance Lemma 3.4 in Kyprianou \cite{kyprianou2014fluctuations}). For every $x\in \mathbb{R}$, let $\widehat{\mathbb{P}}_x^{(e)}$ be the law of $x+\widehat{\xi}$ under $\mathbb{P}^{(e)}$, that is, the law of $\widehat{\xi}$ under $\mathbb{P}_{-x}^{(e)}$. We also introduce the running infimum and supremum of $\xi$, given by
\begin{equation*}
\underline{\xi}_t = \inf_{0\leq s\leq t} \xi_s \qquad \textrm{ and } \qquad \overline{\xi}_t = \sup_{0 \leq s \leq t} \xi_s, \qquad \textrm{for} \qquad t \geq 0.
\end{equation*}
Similarly to the critical case, which was studied by Bansaye et al. \cite{bansaye2021extinction}, the asymptotic analysis of the weakly subcritical regime requires the notion of the renewal functions $U^{(\lambda)}$ and $\widehat{U}^{(\lambda)}$ under $\mathbb{P}^{(e,\lambda)}$, which are associated to the supremum and infimum of $\xi$, respectively. See Section \ref{preLevy} for a proper definition (or the references therein).
For our purposes, we also require the notion of conditioned L\'evy processes and of continuous-state branching processes in a conditioned L\'evy environment. Let us define the probability $\mathbb{Q}_{x}$ associated with the L\'evy process $\xi$ started at $x>0$ and killed at the time $\zeta$ when it first enters $(-\infty, 0)$, that is to say
$$ \mathbb{Q}_{x}\Big[f(\xi_t)\mathbf{1}_{\{\zeta>t\}}\Big]:= \mathbb{E}^{(e)}_{x}\Big[f(\xi_t)\mathbf{1}_{\left\{\underline{\xi}_t> 0\right\}}\Big], $$
where $f:\mathbb{R}_+\to \mathbb{R}$ is a measurable function.
According to Chaumont and Doney \cite[Lemma 1]{chaumont2005levy}, under the assumption that $\xi$ does not drift towards $-\infty$, we have that the renewal function $\widehat{U}:=\widehat{U}^{(0)}$ is invariant for the killed process. In other words, for all $x> 0$ and $t\ge 0$,
\begin{equation}
\label{eq_fctharm}
\mathbb{Q}_x\left[\widehat{U}(\xi_t)\mathbf{1}_{\{\zeta>t\}}\right]=\mathbb{E}^{(e)}_x\left[\widehat{U}(\xi_t)\mathbf{1}_{\left\{\underline{\xi}_t> 0\right\}}\right]=\widehat{U}(x).
\end{equation}
Hence, from the Markov property, we deduce that $(\widehat{U}(\xi_t)\mathbf{1}_{\{\underline{\xi}_t>0\}}, t\geq 0)$ is a martingale with respect to $(\mathcal{F}_t^{(e)})_{t\geq 0}$. We may now use this martingale to define a change of measure corresponding to the law of $\xi$ \textit{conditioned to stay positive} as a Doob-$h$ transform.
Under the assumption that $\xi$ does not drift towards $-\infty$, the law of the process $\xi$ conditioned to stay positive is defined as follows, for $\Lambda \in \mathcal{F}^{(e)}_t$ and $x>0$,
\begin{equation} \label{defPuparrowx}
\mathbb{P}^{(e),\uparrow}_{x} (\Lambda)
:=\frac{1}{\widehat{U}(x)}\mathbb{E}^{(e)}_{x}\left[\widehat{U}(\xi_t) \mathbf{1}_{\left\{\underline{\xi}_t> 0\right\}} \mathbf{1}_{\Lambda}\right].
\end{equation}
On the other hand, by duality, under the assumption that $\xi$ does not drift towards $\infty$, the law of the process $\xi$ \textit{conditioned to stay negative} is defined for $x<0$, as follows
\begin{equation} \label{defdownarrowx}
\mathbb{P}^{(e),\downarrow}_{x} (\Lambda)
:=\frac{1}{U(-x)}\mathbb{E}^{(e)}_{x}\left[U(-\xi_t) \mathbf{1}_{\left\{\overline{\xi}_t<0\right\}} \mathbf{1}_{\Lambda}\right].
\end{equation}
L\'evy processes conditioned to stay positive (and negative) are well-studied objects. For a complete overview of this theory the reader is referred to \cite{bertoin1996levy, chaumont1996conditionings, chaumont2005levy} and references therein.
Similarly to the definition of L\'evy processes conditioned to stay positive (and negative) given above, we may introduce a continuous-state branching process in a L\'evy environment conditioned to stay positive as a Doob-$h$ transform. This process was first investigated by Bansaye et al. \cite{bansaye2021extinction} with the aim of studying the survival event in a critical L\'evy environment. In particular, they proved the following result.
\begin{lemma}[Bansaye et al. \cite{bansaye2021extinction}]\label{teo_bansayemtg}
Let us assume that $z,x >0$. Under the law $\mathbb{P}_{(z,x)}$, the process $(\widehat{U}(\xi_t)\mathbf{1}_{\{\underline{\xi}_t> 0\}}, t\geq 0)$ is a martingale with respect to $(\mathcal{F}_t)_{t\geq 0}$. Moreover the following Doob-$h$ transform holds, for $\Lambda \in \mathcal{F}_t$,
\begin{equation*}
\mathbb{P}^{\uparrow}_{(z,x)}(\Lambda):=\frac{1}{\widehat{U}(x)}\mathbb{E}_{(z,x)}\big[\widehat{U}(\xi_t)\mathbf{1}_{\{\underline{\xi}_t> 0\}}\mathbf{1}_\Lambda\big],
\end{equation*}
defines a continuous state branching process in a L\'evy environment $\xi$ conditioned to stay positive.
\end{lemma}
Furthermore, appealing to duality and Lemma \ref{teo_bansayemtg}, we may deduce that, under $\mathbb{P}_{(z,x)}$ with $z>0$ and $x<0$, the process $(U(-\xi_t)\mathbf{1}_{\{\overline{\xi}_t< 0\}}, t\geq 0)$ is a martingale with respect to $(\mathcal{F}_t)_{t\geq 0}$. Hence, the law of the \textit{continuous-state branching process in a L\'evy environment $\xi$ conditioned to stay negative} is defined as follows: for $z>0$, $x<0$ and $\Lambda \in \mathcal{F}_t$,
\begin{equation}\label{eq_CDBPnegativo}
\mathbb{P}^{\downarrow}_{(z,x)}(\Lambda): =\frac{1}{U(-x)}\mathbb{E}_{(z,x)}\big[U(-\xi_t)\mathbf{1}_{\{\overline{\xi}_t< 0\}}\mathbf{1}_\Lambda\big].
\end{equation}
Recall that we are interested in the probability of survival under the weakly subcritical regime, that is \eqref{eq_moments} is satisfied and the Laplace exponent of $\xi$ is such that
\[
\Phi_{\xi}'(0)<0< \Phi_{\xi}'(1) \textrm{ and there exists } \gamma \in (0,1) \textrm{ which solves } \Phi_{\xi}'(\gamma)=0.
\]
In other words, the L\'evy process $\xi$ drifts to $-\infty$ a.s., under $\mathbb{P}^{(e)}$, and to $+\infty$ a.s., under $\mathbb{P}^{(e,1)}$. In the remainder of this manuscript, we will always assume that the process $Z$ is in the weakly subcritical regime.
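To fix ideas, in the Brownian environment case $\pi\equiv 0$ with $\sigma>0$ described above, we have $\Phi_\xi(\lambda)=\overline{\alpha}\lambda+\sigma^2\lambda^2/2$, so the weakly subcritical regime corresponds to $-\sigma^2<\overline{\alpha}<0$, in which case
\begin{equation*}
\gamma=-\frac{\overline{\alpha}}{\sigma^2}\in(0,1) \qquad \textrm{and}\qquad \Phi_\xi(\gamma)=-\frac{\overline{\alpha}^2}{2\sigma^2}<0.
\end{equation*}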
Our first main result requires that the branching mechanism $\psi_0$ is regularly varying at $0$, that is, there exists $\beta\in (0,1]$ such that
\begin{equation}\label{betacond}\tag{\bf{H2}}
\psi_0(\lambda)=\lambda^{1+\beta}\ell(\lambda),
\end{equation}
where $\ell$ is a slowly varying function at $0$. See Bingham et al. \cite{bingham1989regular} for a proper definition.
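For instance, the stable branching mechanism $\psi_0(\lambda)=C\lambda^{1+\beta}$ with $C>0$ and $\beta\in(0,1)$ satisfies \eqref{betacond} with $\ell\equiv C$, while the choice $\beta=1$ and $\ell\equiv\varrho^2$ corresponds to the purely Gaussian mechanism $\psi_0(\lambda)=\varrho^2\lambda^2$ (i.e. $\mu\equiv 0$). In both cases $\ell$ is trivially bounded away from $0$, which is the additional assumption appearing in Theorem \ref{teo_debil} below.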
For simplicity of exposition, we introduce the function $\kappa^{(\gamma)}(0,\theta)$ via the relation
\[
\int_0^\infty e^{-\theta y} U^{(\gamma)}(y) \mathrm{d} y=\frac{1}{\theta\kappa^{(\gamma)}(0,\theta)}, \qquad \theta> 0.
\]
\begin{theorem}\label{prop_weakly_pareja}
Let $x,z>0$. Assume that $Z$ is weakly subcritical and that condition \eqref{betacond} holds. Then the random variable $\mathcal{U}_t:=Z_te^{-\xi_t}$ converges in distribution, under $\mathbb{P}_{(z,x)}\big(\cdot \ | \ \underline{\xi}_t>0 \big)$, to some random variable $Q$ with values in $[0,\infty)$ as $t\to \infty$. Moreover,
\begin{equation}\label{eq_constantb1}
\mathfrak{b}(z,x):= \lim\limits_{t\to \infty}\mathbb{P}_{(z,x)}\Big(Z_t>0 \ \big|\big. \ \underline{\xi}_t>0\Big)>0,
\end{equation}
where
\begin{equation*}
\mathfrak{b}(z,x) = 1- \lim\limits_{\lambda \to \infty} \lim\limits_{s\to \infty} \int_{0}^\infty \int_{0}^{1} \int_{0}^{\infty} w^{\textbf{u}} \mathbb{P}^{(\gamma),\uparrow}_{(z,x)}\big(\mathcal{U}_s \in \mathrm{d} \textbf{u}\big) \mathbb{P}^{(e,\gamma),\downarrow}_{-y}\Big(\widehat{W}_s(\lambda) \in \mathrm{d} w\Big)\mu_\gamma(\mathrm{d} y), \end{equation*}
with
\begin{equation}\label{eq:mugamma}
\widehat{W}_{s}(\lambda):= \exp\left\{-v_s(0,\lambda,\widehat{\xi})\right\}\quad\textrm{and}\quad \mu_\gamma(\mathrm{d} y) := \gamma\kappa^{(\gamma)}(0,\gamma)e^{-\gamma y} U^{(\gamma)}(y) \mathbf{1}_{\{y>0\}} \mathrm{d} y.
\end{equation}
\end{theorem}
It is important to note that, in general, it seems difficult to compute the constant $\mathfrak{b}(z,x)$ explicitly, except in the stable case, where it is given in terms of two independent exponential functionals of conditioned L\'evy processes. Denote by $\texttt{I}_{s,t}(\beta \xi)$ the exponential functional of the L\'evy process $\beta \xi$, i.e.,
\begin{equation}\label{eq_expfuncLevy}
\texttt{I}_{s,t}(\beta \xi):=\int_{s}^{t}e^{-\beta \xi_u} \mathrm{d} u, \quad \quad 0\leq s\leq t.
\end{equation}
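Let us briefly indicate why the stable case is tractable. When $\psi_0(\lambda)=C\lambda^{1+\beta}$, the backward equation \eqref{eq_BDE} can be solved explicitly: substituting $w_t(s)=v_t(s,\lambda,\xi)^{-\beta}$ turns \eqref{eq_BDE} into a linear equation, and a direct computation gives
\begin{equation*}
v_t(s,\lambda,\xi)=\Big(\lambda^{-\beta}+\beta C\,\texttt{I}_{s,t}(\beta\xi)\Big)^{-1/\beta}, \qquad 0\leq s\leq t,
\end{equation*}
so that, letting $\lambda\to\infty$, $v_t(s,\infty,\xi)=\big(\beta C\,\texttt{I}_{s,t}(\beta\xi)\big)^{-1/\beta}$. This explains why exponential functionals of the (conditioned) environment appear in the expression below.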
Hence, when $\psi_0(\lambda)=C\lambda^{1+\beta}$ with $C>0$ and $\beta\in(0,1)$, we have
\begin{equation*}
\mathfrak{b} (z,x) = \gamma\kappa^{(\gamma)}(0,\gamma) \int_{0}^{\infty} e^{-\gamma y} U^{(\gamma)} (y) G_{z,x}(y){\rm d} y,
\end{equation*}
where
\begin{equation}\label{functG}
\begin{split}
G_{z,x}(y)&:= \int_{0}^{\infty} \int_{0}^{\infty} \left(1-e^{-ze^{-x}(\beta C w +\beta C u)^{-1/\beta}}\right)\\
&\hspace{4cm}\mathbb{P}^{(\gamma),\uparrow}_{(z,x)}\big(\texttt{I}_{0,\infty}(\beta \xi) \in \mathrm{d} w\big) \mathbb{P}^{(e,\gamma),\downarrow}_{-y}\Big(\texttt{I}_{0,\infty}(\beta \widehat{\xi}) \in \mathrm{d} u\Big).
\end{split}
\end{equation}
We refer to subsection \ref{sec:stable} for further details about the computation of this constant.
Under the assumption that $Z$ is weakly subcritical, the running infimum of the auxiliary process $\xi$ satisfies the following asymptotic behaviour: for $x>0$,
\begin{equation}\label{eq_weakly_cota_hirano}
\mathbb{P}^{(e)}_x\left(\underline{\xi}_t> 0\right) \sim \frac{A_\gamma}{\gamma\kappa^{(\gamma)}(0,\gamma)} e^{\gamma x} \widehat{U}^{(\gamma)}(x) t^{-3/2}e^{\Phi_{\xi}(\gamma)t}, \quad \text{as} \qquad t \to \infty,
\end{equation}
where
\begin{equation}\label{eq_weakly_constante}
A_\gamma:= \frac{1}{\sqrt{2 \pi \Phi_{\xi}''(\gamma)}} \exp\left\{\int_{0}^{\infty}(e^{-t}-1)t^{-1}e^{-t\Phi_{\xi}(\gamma)}\mathbb{P}^{(e)}(\xi_t = 0) \mathrm{d} t\right\},
\end{equation}
see for instance Lemma A in \cite{hirano2001levy} (see also Proposition 4.1 in \cite{li2018asymptotic}). This asymptotic behaviour turns out to provide the leading order term for the survival probability, as stated below.
\begin{theorem}[Weakly subcritical regime]\label{teo_debil}
Let $z>0$. Assume that $Z$ is weakly subcritical and that the slowly varying function $\ell$ in \eqref{betacond} is bounded from below, that is, there exists a constant $C>0$ such that $\ell(\lambda)>C$.
Then there exists $0<\mathfrak{B}(z)<\infty$ such that
\[
\begin{split}
\lim\limits_{t\to \infty}t^{3/2}e^{-\Phi_{\xi}(\gamma)t} \mathbb{P}_{z}(Z_t>0) = \mathfrak{B}(z),
\end{split}
\]
with
$$ \mathfrak{B}(z):=\frac{A_\gamma }{ \gamma\kappa^{(\gamma)}(0,\gamma)}\lim_{x\to \infty} \mathfrak{b}(z,x)e^{\gamma x}\widehat{U}^{(\gamma)}(x),
$$
where $\mathfrak{b}(z,x)$ and $A_{\gamma}$ are the constants defined in \eqref{eq_constantb1} and \eqref{eq_weakly_constante}, respectively.
\end{theorem}
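As an illustration, in the Brownian environment case described above, and for a branching mechanism satisfying the assumptions of the theorem, the survival probability decays like
\begin{equation*}
\mathbb{P}_{z}(Z_t>0)\sim \mathfrak{B}(z)\, t^{-3/2}\exp\Big\{-\frac{\overline{\alpha}^2}{2\sigma^2}\,t\Big\}, \qquad \textrm{as}\quad t\to\infty,
\end{equation*}
that is, exponential decay at rate $|\Phi_\xi(\gamma)|$ together with a polynomial correction of order $t^{-3/2}$.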
It is important to note that in the stable case, the constant $\mathfrak{B}(z)$ coincides with the constant that appears in Theorem 5.1 in Li and Xu \cite{li2018asymptotic}, that is
\begin{equation*}
\begin{split}
\mathfrak{B}(z)= A_\gamma \lim\limits_{x\to \infty} e^{\gamma x} \widehat{U}^{(\gamma)}(x)\int_{0}^{\infty} e^{-\gamma y} U^{(\gamma)}(y)G_{z,x}(y){\rm d} y,
\end{split}
\end{equation*}
where $G_{z,x}$ is defined in \eqref{functG}.
{\bf Some comments about our results:}
We first remark that our assumption \eqref{betacond} clearly implies
\begin{equation}\label{eq_xlog2x}
\int^\infty x \log^2 x \mu(\mathrm{d} x) <\infty.
\end{equation}
The latter condition was used before in Proposition 3.4 in \cite{bansaye2021extinction} to control the effect of a favourable environment on the event of survival. Unlike the critical case, in the weakly subcritical regime the slightly stronger condition \eqref{betacond} is required to guarantee the convergence in Theorem \ref{prop_weakly_pareja}, which allows us to keep a good control of the event of survival given favourable environments. A crucial ingredient in Theorem \ref{prop_weakly_pareja} is an extension of a functional limit theorem for conditioned L\'evy and CBLE processes (see Proposition \ref{prop_weakly_primera} below). More precisely, we require the asymptotic independence of the processes $((Z_u, \xi_u), 0\leq u \leq r \ | \ \underline{\xi}_t>0)$ and $(\xi_{(t-u)^{-}}, 0 \leq u \leq \delta t \ | \ \underline{\xi}_t>0)$ as $t$ goes to $\infty$, for every $r\geq 0$ and $\delta\in (0,1)$. We believe that this result holds in full generality (in particular, that Theorem \ref{prop_weakly_pareja} holds under \eqref{eq_xlog2x}), since the analogous statement is true for random walks (see Theorem 2.7 in \cite{afanasyev2012}), but it seems not so easy to deduce. While in the discrete setting the result follows directly from duality, in the L\'evy case the convergence relies on a much deeper analysis of the asymptotic behaviour of bridges of L\'evy processes and of their conditioned versions. It seems that a better understanding of conditioned L\'evy bridges is required.
On the other hand, the condition that the slowly varying function $\ell$ is bounded from below is required to control the absorption event under unfavourable environments (see Lemma \ref{lem_cero_weakly}) and to guarantee a.s. absorption. Indeed, under Grey's condition
\begin{equation}\label{GreysCond}
\int^{\infty}\frac{1}{\psi_0(\lambda)} \mathrm{d} \lambda < \infty,
\end{equation}
and equation \eqref{eq_Laplace}, we deduce that for $z,x>0$
\begin{equation}\label{cotasup}
\mathbb{P}_{(z,x)}\Big(Z_t>0,\ \underline{\xi}_t\le -y\Big)=\mathbb{E}^{(e)}\left[\left(1-e^{-z v_t(0,\infty, \xi)}\right)\mathbf{1}_{\{\underline{\xi}_t\le -y-x\}}\right], \quad \textrm{ for } \quad y\ge 0,
\end{equation}
where $v_t(0,\infty, \xi)$ is $\mathbb{P}^{(e)}$-a.s. finite for all $t\ge 0$ (see Theorem 4.1 and Corollary 4.4 in \cite{he2018continuous}), but may be equal to $0$. We note that \eqref{eq_xlog2x} (and implicitly \eqref{betacond}) guarantees that $v_t(0,\infty, \xi)>0$, $\mathbb{P}^{(e)}$-a.s. for all $t>0$ (see for instance Proposition 3 in \cite{palau2018branching}). Since the functional $v_t(0,\infty, \xi)$ depends strongly on the environment, it seems difficult to obtain a sharp control of the right-hand side of \eqref{cotasup}. Condition \eqref{betacond} implies that Grey's condition
is fulfilled, and the assumption that $\ell$ is bounded from below allows us to upper bound \eqref{cotasup} in terms of the exponential functional of $\xi$.
Finally, we point out that in the discrete setting such a probability can be estimated directly in terms of the infimum of the environment, since the event of survival coincides with the event that the current population is greater than or equal to one, an argument that is not available in our setting.
The remainder of this paper is devoted to the proof of the main results.
\section{Proofs}\label{sec_absweakly}
This section is devoted to the proofs of our main results and the computation of the constant $\mathfrak{b} (z,x) $ in the stable case. We start with some preliminaries on L\'evy processes.
\subsection{Preliminaries on L\'evy processes}\label{preLevy}
Recall that $\mathbb{P}^{(e)}_x$ denotes the law of the L\'evy process $\xi$
starting from $x\in \mathbb{R}$ and that, when $x=0$, we use the notation $\mathbb{P}^{(e)}$ for $\mathbb{P}^{(e)}_0$. We also recall that $\widehat{\xi}=-\xi$ denotes the dual process, and we denote by $\widehat{\mathbb{P}}_x^{(e)}$ its law starting at $x\in \mathbb{R}$.
In what follows, we require the notion of the
reflected processes $\xi-\underline{\xi}$ and $\overline{\xi}-\xi$ which are Markov processes with respect to the filtration $(\mathcal{F}^{(e)}_t)_{t\geq 0}$ and whose semigroups
satisfy the Feller property (see for instance Proposition VI.1 in the monograph of Bertoin \cite{bertoin1996levy}). We
denote by $L=(L_t, t \geq 0 )$ and $\widehat{L}=(\widehat{L}_t, t \geq 0 )$ the local times of $\overline{\xi}-\xi$ and $\xi-\underline{\xi}$ at $0$, respectively, in the sense of Chapter IV in \cite{bertoin1996levy}. If $0$ is regular for $(-\infty,0)$, or regular downwards, i.e.
\[
\mathbb{P}^{(e)}(\tau^-_0=0)=1,
\]
where $\tau^{-}_0=\inf\{s> 0: \xi_s\le 0\}$, then $0$ is regular for the reflected process $\xi-\underline{\xi}$ and, up to a multiplicative constant, $\widehat{L}$ is the unique additive functional of the reflected process whose set of increasing points is $\{t:\xi_t=\underline{\xi}_t\}$. If $0$ is not regular downwards, then the set $\{t:\xi_t=\underline{\xi}_t\}$ is discrete and we define the local time $\widehat{L}$ as the counting process of this set. The same properties hold for $L$ by duality.
Let us denote by $L^{-1}$ and $\widehat{L}^{-1}$ the right-continuous inverses of $L$ and $\widehat{L}$, respectively. The ranges of the inverse local times $L^{-1}$ and $\widehat{L}^{-1}$ correspond to the sets of real times at which new maxima and new minima occur, respectively. Next, we introduce the so-called increasing ladder height process by
\begin{equation}\label{defwidehatH}
H_t=\overline{\xi}_{L_t^{-1}}, \qquad t\ge 0.
\end{equation}
The pair $(L^{-1}, H)$ is a bivariate subordinator, as is the case of the pair $(\widehat{L}^{-1}, \widehat{H})$ with
\[
\widehat{H}_t=-\underline{\xi}_{\widehat{L}_t^{-1}}, \qquad t\ge 0.
\]
The range of the process $H$ (resp. $\widehat{H}$) corresponds to the set of new maxima (resp. new minima). The pairs $(L^{-1}, H)$ and $(\widehat{L}^{-1}, \widehat{H})$ are known as the ascending and descending ladder processes, respectively.
We also recall that $U^{(\lambda)}$ and $\widehat{U}^{(\lambda)}$ denote the renewal functions under $\mathbb{P}^{(e,\lambda)}$. Such functions are defined as follows: for all $x>0$,
\begin{equation}\label{eq_Utheta}
U^{(\lambda)}(x) := \mathbb{E}^{(e,\lambda)}\left[\int_{[0,\infty)} \mathbf{1}_{\left\{\overline{\xi}_t\leq x\right\}} \mathrm{d} L_t\right] \quad
\textrm{and}\quad
\widehat{U}^{(\lambda)}(x) := \mathbb{E}^{(e,\lambda)}\left[\int_{[0,\infty)} \mathbf{1}_{\left\{\underline{\xi}_t\geq -x\right\}} \mathrm{d} \widehat{L}_t\right].
\end{equation}
The renewal functions $U^{(\lambda)}$ and $\widehat{U}^{(\lambda)}$ are finite, subadditive, continuous and increasing. Moreover, they are identically 0 on $(-\infty, 0]$, strictly positive on $(0,\infty)$ and satisfy
\begin{equation}
\label{grandO}
U^{(\lambda)}(x)\leq C_1 x \qquad\textrm{and}\qquad \widehat{U}^{(\lambda)}(x)\leq C_2 x \quad \text{ for any } \quad x\geq 0,
\end{equation}
where $C_1, C_2$ are finite constants (see for instance Lemma 6.4 and Section 8.2 in the monograph of Doney
\cite{doney2007fluctuation}). Moreover, $U^{(\lambda)}(0)=0$ if $0$ is regular upwards and $U^{(\lambda)}(0)=1$ otherwise; similarly, $\widehat{U}^{(\lambda)}(0)=0$ if $0$ is regular downwards and $\widehat{U}^{(\lambda)}(0)=1$ otherwise.
Furthermore, it is important to note that by a simple change of variables, we can rewrite the renewal functions $U^{(\lambda)}$ and $\widehat{U}^{(\lambda)}$ in terms of the ascending and descending ladder height processes. Indeed, the measures induced by $U^{(\lambda)}$ and $\widehat{U}^{(\lambda)}$ can be rewritten as follows,
\begin{equation*}
U^{(\lambda)}( x)=\mathbb{E}^{(e,\lambda)}\left[\int_{0}^\infty\mathbf{1}_{\{{H}_t \le x\}}\mathrm{d} t\right]\qquad \textrm{and}\qquad \widehat{U}^{(\lambda)}(x)=\mathbb{E}^{(e,\lambda)}\left[\int_{0}^\infty\mathbf{1}_{\{\widehat{H}_t \le x\}}\mathrm{d} t\right].
\end{equation*}
Roughly speaking, the renewal function $U^{(\lambda)}(x)$ (resp. $\widehat{U}^{(\lambda)}(x)$) ``measures'' the amount of time that the ascending (resp. descending) ladder height process spends on the interval $[0,x]$ and in particular induces a measure on $[0,\infty)$ which is known as the renewal measure. The latter implies
\begin{equation}\label{bivLap}
\int_{[0,\infty)} e^{-\theta x} U^{(\lambda)}( x)\mathrm{d}x = \frac{1}{\theta\kappa^{(\lambda)}(0,\theta)}, \qquad \theta>0,
\end{equation}
where $\kappa^{(\lambda)}(\cdot,\cdot)$ is the bivariate Laplace exponent of the ascending ladder process $(L^{-1}, H)$, under $\mathbb{P}^{(e,\lambda)}$ (see for instance \cite{bertoin1996levy, doney2007fluctuation, {kyprianou2014fluctuations}}).
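In the Brownian environment case these objects are completely explicit. Indeed, under $\mathbb{P}^{(e,\gamma)}$ the process $\xi$ is then a centred Brownian motion, its ladder height processes are pure drifts, and, with a suitable normalisation of the local times,
\begin{equation*}
U^{(\gamma)}(x)=\widehat{U}^{(\gamma)}(x)= c\,x, \qquad x\geq 0,
\end{equation*}
for some constant $c>0$; in this case \eqref{bivLap} gives $\kappa^{(\gamma)}(0,\theta)=\theta/c$ for $\theta>0$.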
\subsection{Proof of Theorem \ref{prop_weakly_pareja}}
Our arguments follow a strategy similar to that of Afanasyev et al. \cite{afanasyev2012}, where the discrete setting is considered, although working in continuous time leads to significant changes, such as the fact that $0$ might be polar. Our first proposition is the continuous analogue of Proposition 2.5 in \cite{afanasyev2012} and, in some sense, it is a generalisation of Theorem 2 part (a) in Hirano \cite{hirano2001levy} (see also Proposition 4.2 in \cite{li2018asymptotic}). In particular, the result tells us that, for every $r, s\geq 0$, the conditional processes $((Z_u, \xi_u), 0\leq u \leq r \ | \ \underline{\xi}_t>0)$ and $(\xi_{(t-u)^{-}}, 0 \leq u \leq s \ | \ \underline{\xi}_t>0)$ are asymptotically independent as $t\to \infty$.
Before we state our first result in this subsection, we recall that $\mathbb{D}([0,t])$ denotes the space of c\`adl\`ag real-valued functions on $[0,t]$.
\begin{proposition}\label{prop_weakly_primera}
Let $f$ and $g$ be continuous and bounded functionals on $\mathbb{D}([0, t])$. We also set \ $\mathcal{U}_r := g((Z_u, \xi_u), 0\leq u \leq r)$, and for $s\le t$
\[
\widehat{W}_s := f(-\xi_u, 0\leq u \leq s), \qquad \textrm{and}\qquad\widetilde{W}_{t-s,t} := f(\xi_{(t-u)^-}, 0\leq u \leq s).
\]
Then for any bounded continuous function $\varphi:\mathbb{R}^3\to \mathbb{R}$, we have
\[
\begin{split}
\lim_{t\to\infty}&\frac{\mathbb{E}^{(\gamma)}_{(z,x)}\Big[\varphi(\mathcal{U}_{r},\widetilde{W}_{t-s, t},\xi_t)e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\Big]}{\mathbb{E}^{(e,\gamma)}_x\Big[e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\Big]}\\
&\hspace{4cm} =\iiint \varphi(u,v,y) \mathbb{P}^{(\gamma),\uparrow}_{(z,x)}\big(\mathcal{U}_r \in \mathrm{d} u\big)\mathbb{P}^{(e,\gamma),\downarrow}_{-y}\Big(\widehat{W}_s \in \mathrm{d} v\Big)\mu_\gamma(\mathrm{d} y),
\end{split}
\]
with $$\mu_\gamma(\mathrm{d} y) := \gamma\kappa^{(\gamma)}(0,\gamma)e^{-\gamma y} U^{(\gamma)}(y) \mathbf{1}_{\{y>0\}} \mathrm{d} y.$$
\end{proposition}
\begin{proof} By a monotone class argument, it is enough to show the result for continuous bounded functions of the form $\varphi(u,v,y)=\varphi_1(u)\varphi_2(v)\varphi_3(y)$, where $\varphi_i: \mathbb{R} \to \mathbb{R}$ are bounded and continuous functions, for $i=1,2,3$. That is, we will show that for $z,x>0$,
\[
\lim\limits_{t \to \infty} \frac{\mathbb{E}^{(\gamma)}_{(z,x)}\Big[\varphi_1(\mathcal{U}_r)\varphi_2(\widetilde{W}_{t-s,t}) \varphi_3(\xi_t)e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_t > 0\}}\Big]}{\mathbb{E}^{(e,\gamma)}_x\Big[e^{-\gamma \xi_t} \mathbf{1}_{\{\underline{\xi}_t > 0\}}\Big]} = \mathbb{E}_{(z,x)}^{(\gamma), \uparrow}[\varphi_1(\mathcal{U}_r)] \mathbb{E}_{\mu_\gamma}^{(e,\gamma), \downarrow}\Big[\varphi_2(\widehat{W}_s)\varphi_3(\xi_0)\Big],
\]
where \begin{equation}\label{eq_mugamma}
\mathbb{E}_{\mu_\gamma}^{(e,\gamma), \downarrow}\Big[\varphi_2(\widehat{W}_s)\varphi_3(\xi_0)\Big] = \int_{(0, \infty)} \mathbb{E}_{-y}^{(e,\gamma), \downarrow}\Big[\varphi_2(\widehat{W}_s)\varphi_3(\xi_0)\Big] \mu_\gamma (\mathrm{d}y).
\end{equation}
For simplicity of exposition, we assume $0\leq \varphi_i \leq 1$, for $i=1, 2, 3$. We first observe from the Markov property that, for $t\geq r+s$, we have
\begin{equation}\label{eq_markov_prop1}
\mathbb{E}^{(\gamma)}_{(z,x)}\Big[\varphi_1(\mathcal{U}_r)\varphi_2(\widetilde{W}_{t-s,t}) \varphi_3(\xi_t)e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_t > 0\}}\Big]= \mathbb{E}_{(z,x)}^{(\gamma)}\left[\varphi_1(\mathcal{U}_r) \Phi_{t-r}(\xi_r)\mathbf{1}_{\{\underline{\xi}_{r} > 0\}}\right],
\end{equation}
where
\begin{equation}\label{eq_phiu}
\Phi_{u}(y):= \mathbb{E}_{y}^{(e,\gamma)} \left[\varphi_2(\widetilde{W}_{u-s,u})\varphi_3(\xi_{u})e^{-\gamma \xi_{u}}\mathbf{1}_{\{\underline{\xi}_{u} > 0\}}\right], \qquad u \geq s,\,\, y>0.
\end{equation}
Using the last definition and once again the Markov property, we deduce the following identity
\begin{equation}\label{eq_expec_weakly}
\Phi_{t-r}(y)= \mathbb{E}_{y}^{(e,\gamma)} \left[\Phi_{s}(\xi_{t-r-s})\mathbf{1}_{\{\underline{\xi}_{t-r-s} > 0\}} \right],\qquad y>0.
\end{equation}
On the other hand, by Lemma 1 in \cite{hirano2001levy}, we know that for $\delta >0$ and $t\geq v$,
\begin{equation*}
\lim\limits_{t \to \infty} \frac{\mathbb{E}^{(e,\gamma)}_y\left[e^{-(\delta + \gamma)\xi_{t-v}}\mathbf{1}_{\{\underline{\xi}_{t-v} > 0\}}\right]}{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]} = \frac{\widehat{U}^{(\gamma)}(y)}{\widehat{U}^{(\gamma)}(x) }\frac{\displaystyle\int_{0}^{\infty} e^{-( \delta + \gamma) z} U^{(\gamma)}(z) \mathrm{d} z}{\displaystyle\int_{0}^{\infty} e^{- \gamma z} U^{(\gamma)}(z) \mathrm{d}z}.
\end{equation*}
Then, by the continuity theorem for Laplace transforms and using identity \eqref{bivLap}, for $h$ bounded and $\mu_\gamma$-a.s. continuous, it follows that
\begin{equation}\label{eq_lim_prop1}
\lim\limits_{t \to \infty} \frac{\mathbb{E}^{(e, \gamma)}_y\left[h(\xi_{t-v}) e^{- \gamma \xi_{t-v}}\mathbf{1}_{\{\underline{\xi}_{t-v} > 0\}}\right]}{\mathbb{E}^{(e, \gamma)}_x\left[e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]} = \frac{\widehat{U}^{(\gamma)}(y)}{\widehat{U}^{(\gamma)}(x)} \int_{0}^{\infty} h(z) \mu_\gamma (\mathrm{d} z).
\end{equation}
If $h$ is positive and continuous but not bounded, we can truncate the function $h$, i.e., fix $n\in \mathbb{N}$ and define $h_n(x):= h(x)\mathbf{1}_{\{h(x)\leq n \}}$. Then by \eqref{eq_lim_prop1}, we have
\[
\begin{split}
\liminf_{t\to \infty}\frac{\mathbb{E}^{(e,\gamma)}_y\left[h(\xi_{t-v}) e^{- \gamma \xi_{t-v}}\mathbf{1}_{\{\underline{\xi}_{t-v} > 0\}}\right]}{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]} &\geq \liminf_{t\to\infty}\frac{\mathbb{E}^{(e,\gamma)}_y\left[h_n(\xi_{t-v}) e^{- \gamma \xi_{t-v}}\mathbf{1}_{\{\underline{\xi}_{t-v} > 0\}}\right]}{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]} \\ &= \frac{\widehat{U}^{(\gamma)}(y)}{\widehat{U}^{(\gamma)}(x)} \int_{0}^{\infty} h_n(z) \mu_\gamma (\mathrm{d} z).
\end{split}
\]
On the other hand, since $h_n(x) \to h(x)$ as $n \to \infty$, by Fatou's Lemma
\[
\liminf_{n\to \infty} \int_{0}^{\infty} h_n(z) \mu_\gamma (\mathrm{d} z) \geq \int_{0}^{\infty} h(z) \mu_\gamma (\mathrm{d} z).
\]
Thus putting both pieces together, we get
\begin{equation}\label{eq_liminf_pro1}
\liminf_{t \to \infty} \frac{\mathbb{E}^{(e,\gamma)}_y\left[h(\xi_{t-v}) e^{- \gamma \xi_{t-v}}\mathbf{1}_{\{\underline{\xi}_{t-v} > 0\}}\right]}{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]} \geq \frac{\widehat{U}^{(\gamma)}(y)}{\widehat{U}^{(\gamma)}(x)} \int_{0}^{\infty} h(z) \mu_\gamma (\mathrm{d} z).
\end{equation}
We want to apply the previous inequality to the function $h(x)=\Phi_s(x)e^{\gamma x}$. To do so, we need to verify that $\Phi_{s}(\cdot)$ is a positive and $\mu_\gamma$-a.s.-continuous function. First, we observe that discontinuities of $\Phi_{s}(\cdot)$ correspond to discontinuities of the map
\[
{\tt e}: y \mapsto \mathbb{P}^{(e,\gamma)}\big(\underline{\xi}_t > -y\big).
\] Since ${\tt e}(\cdot)$ is bounded and monotone, it has a countable number of discontinuities. Thus $\Phi_{s}(\cdot)$ is continuous almost everywhere with respect to the Lebesgue measure and therefore $\mu_\gamma$-a.s.
Now, from \eqref{eq_expec_weakly} and \eqref{eq_liminf_pro1} with $v=r+s$ and $h(x)=\Phi_s(x)e^{\gamma x}$, we have
\begin{equation}\label{eq2_liminf_pro1}
\begin{split}
\liminf_{t \to \infty} \frac{\Phi_{t-r}(y)}{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]}&= \liminf_{t \to \infty} \frac{\mathbb{E}^{(e,\gamma)}_y\left[\Phi_{s}(\xi_{t-v}) e^{\gamma \xi_{t-v}}e^{- \gamma \xi_{t-v}}\mathbf{1}_{\{\underline{\xi}_{t-v} > 0\}}\right]}{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]} \\
&= \liminf_{t \to \infty} \frac{\mathbb{E}^{(e,\gamma)}_y\left[h(\xi_{t-v}) e^{- \gamma \xi_{t-v}}\mathbf{1}_{\{\underline{\xi}_{t-v} > 0\}}\right]}{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]} \\ &\geq \frac{\widehat{U}^{(\gamma)}(y)}{\widehat{U}^{(\gamma)}(x)} \int_{0}^{\infty} \Phi_s(z)e^{\gamma z} \mu_\gamma (\mathrm{d}z).
\end{split}
\end{equation}
In view of identity \eqref{eq_markov_prop1} and the above inequality, replacing $y$ by $\xi_r$, we get from Fatou's Lemma
\begin{equation}\label{eq3_liminf_prop1}
\begin{split}
\liminf_{t \to \infty} \frac{\mathbb{E}^{(\gamma)}_{(z,x)}\left[\varphi_1(\mathcal{U}_r)\varphi_2(\widetilde{W}_{t-s,t})\varphi_3(\xi_t)e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right] }{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t} \mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]}&= \liminf_{t \to \infty} \frac{ \mathbb{E}_{(z,x)}^{(\gamma)}\left[\varphi_{1}(\mathcal{U}_r) \Phi_{t-r}(\xi_r)\mathbf{1}_{\{\underline{\xi}_{r} > 0\}}\right] }{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]} \\
&\hspace{-2.5cm}\geq \frac{\mathbb{E}^{(\gamma)}_{(z,x)}\Big[\varphi_1(\mathcal{U}_r)\widehat{U}^{(\gamma)}(\xi_r)\mathbf{1}_{\{\underline{\xi}_{r} > 0\}}\Big]}{\widehat{U}^{(\gamma)}(x)} \int_{0}^{\infty} \Phi_s(z)e^{\gamma z} \mu_\gamma (\mathrm{d} z)\\
&\hspace{-2.5cm}= \mathbb{E}_{(z,x)}^{(\gamma), \uparrow}[\varphi_1(\mathcal{U}_r)] \int_{0}^{\infty} \Phi_s(z)e^{\gamma z} \mu_\gamma (\mathrm{d} z).
\end{split}
\end{equation}
Now, we use the duality relationship, with respect to the Lebesgue measure, between $\xi$ and $\widehat{\xi}$ (see for instance Lemma 3 in \cite{hirano2001levy}) to get
\[
\begin{split}
\int_{0}^{\infty} \Phi_s(z)e^{\gamma z} e^{-\gamma z} U^{(\gamma)}(z) \mathrm{d} z &= \int_{0}^{\infty} \mathbb{E}_{z}^{(e,\gamma)} \left[\varphi_2(\widetilde{W}_{0,s})\varphi_3(\xi_{s})e^{-\gamma \xi_{s}}\mathbf{1}_{\{\underline{\xi}_{s} > 0\}} \right]U^{(\gamma)}(z)\mathrm{d} z\\
&= \int_{0}^\infty \mathbb{E}_{-z}^{(e,\gamma)} \left[\varphi_2(\widehat{W}_s)U^{(\gamma)}(-\xi_s)\mathbf{1}_{\{\overline{\xi}_{s} < 0\}} \right] \varphi_3(z) e^{-\gamma z} \mathrm{d} z \\
&= \int_{0}^\infty \mathbb{E}_{-z}^{(e,\gamma), \downarrow} \left[\varphi_2(\widehat{W}_s)\varphi_3(\xi_0) \right]e^{-\gamma z} U^{(\gamma)}(z)\mathrm{d} z.
\end{split}
\]
Using this equality in \eqref{eq3_liminf_prop1}, we obtain
\begin{equation}\label{eq4_liminf_prop1}
\liminf_{t \to \infty} \frac{\mathbb{E}^{(\gamma)}_{(z,x)}\left[\varphi_1(\mathcal{U}_r)\varphi_2(\widetilde{W}_{t-s,t})\varphi_3(\xi_t)e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right] }{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t} \mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]} \geq \mathbb{E}_{(z,x)}^{(\gamma), \uparrow}[\varphi_1(\mathcal{U}_r)] \mathbb{E}^{(e,\gamma), \downarrow}_{\mu_\gamma}[\varphi_2(\widehat{W}_s) \varphi_3(\xi_0)].
\end{equation}
On the other hand, by taking $y=x$, $v=0$ and $h(z)=\varphi_3(z)$ in \eqref{eq_lim_prop1}, we deduce
\[
\lim\limits_{t \to \infty} \frac{\mathbb{E}^{(e,\gamma)}_x\left[\varphi_3(\xi_{t}) e^{- \gamma \xi_{t}}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]}{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]} = \int_{0}^{\infty} \varphi_3(z) \mu_\gamma (\mathrm{d} z)=\mathbb{E}^{(e,\gamma), \downarrow}_{\mu_\gamma}[\varphi_3(\xi_0)].
\]
Using this last identity and replacing $\varphi_1(\mathcal{U}_r)$ by $1-\varphi_1(\mathcal{U}_r)$ and $\varphi_2\equiv 1$ in \eqref{eq4_liminf_prop1}, we get
\[
\begin{split}
\mathbb{E}^{(\gamma), \uparrow}_{(z,x)}\Big[1-\varphi_1(\mathcal{U}_r)\Big] \mathbb{E}^{(e,\gamma), \downarrow}_{\mu_\gamma}&[\varphi_3(\xi_0)] \leq
\liminf_{t \to \infty} \frac{\mathbb{E}^{(\gamma)}_{(z,x)}\Big[\Big(1-\varphi_1(\mathcal{U}_r)\Big)\varphi_3(\xi_t)e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\Big] }{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t} \mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]} \\ &= \mathbb{E}^{(e,\gamma), \downarrow}_{\mu_\gamma}[\varphi_3(\xi_0)]- \limsup_{t \to \infty} \frac{\mathbb{E}^{(\gamma)}_{(z,x)}\Big[\varphi_1(\mathcal{U}_r)\varphi_3(\xi_t)e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\Big] }{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t} \mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]}.
\end{split}
\]
Therefore,
\begin{eqnarray*}
\limsup_{t \to \infty} \frac{\mathbb{E}^{(\gamma)}_{(z,x)}\Big[\varphi_1(\mathcal{U}_r)\varphi_3(\xi_t)e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\Big] }{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t} \mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]} \leq
\mathbb{E}^{(\gamma), \uparrow}_{(z,x)}[\varphi_1(\mathcal{U}_r)] \mathbb{E}^{(e,\gamma), \downarrow}_{\mu_\gamma}[\varphi_3(\xi_0)].
\end{eqnarray*}
In other words, by taking $\varphi_2\equiv 1$ in \eqref{eq4_liminf_prop1} and the above inequality, we obtain the identity
\begin{eqnarray*}
\lim_{t \to \infty} \frac{\mathbb{E}^{(\gamma)}_{(z,x)}\Big[\varphi_1(\mathcal{U}_r)\varphi_3(\xi_t)e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\Big] }{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t} \mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]} =
\mathbb{E}^{(\gamma), \uparrow}_{(z,x)}[\varphi_1(\mathcal{U}_r)] \mathbb{E}^{(e,\gamma), \downarrow}_{\mu_\gamma}[\varphi_3(\xi_0)].
\end{eqnarray*}
Finally we pursue the same strategy as before, that is to say we replace $\varphi_2(\widetilde{W}_{t-s,t})$ by $1-\varphi_2(\widetilde{W}_{t-s,t})$ in \eqref{eq4_liminf_prop1} to obtain
\[
\begin{split}
\liminf_{t \to \infty}\frac{\mathbb{E}^{(\gamma)}_{(z,x)}\left[\varphi_1(\mathcal{U}_r)\Big(1-\varphi_2(\widetilde{W}_{t-s,t})\Big)\varphi_3(\xi_t)e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right] }{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t} \mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]}&\\
& \hspace{-3cm} \geq \mathbb{E}_{(z,x)}^{(\gamma), \uparrow}[\varphi_1(\mathcal{U}_r)] \mathbb{E}^{(e,\gamma), \downarrow}_{\mu_\gamma}\Big[\Big(1-\varphi_2(\widehat{W}_{s})\Big)\varphi_3(\xi_0)\Big].
\end{split}
\]
Then, it follows
\[\begin{split}
\limsup_{t\to\infty}&\frac{\mathbb{E}^{(\gamma)}_{(z,x)}\left[\varphi_1(\mathcal{U}_r)\varphi_2(\widetilde{W}_{t-s,t})\varphi_3(\xi_t)e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right] }{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t} \mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]}\\ & \hspace{2cm}\le \mathbb{E}_{(z,x)}^{(\gamma), \uparrow}[\varphi_1(\mathcal{U}_r)] \mathbb{E}^{(e,\gamma), \downarrow}_{\mu_\gamma}\Big[\varphi_2(\widehat{W}_s) \varphi_3(\xi_0)\Big].
\end{split}\]
Finally, putting all pieces together, we conclude that
\[
\lim_{t\to\infty}\frac{\mathbb{E}^{(\gamma)}_{(z,x)}\left[\varphi_1(\mathcal{U}_r)\varphi_2(\widetilde{W}_{t-s,t})\varphi_3(\xi_t)e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right] }{\mathbb{E}^{(e,\gamma)}_x\left[e^{-\gamma \xi_t} \mathbf{1}_{\{\underline{\xi}_{t} > 0\}}\right]}= \mathbb{E}_{(z,x)}^{(\gamma), \uparrow}[\varphi_1(\mathcal{U}_r)] \mathbb{E}^{(e,\gamma), \downarrow}_{\mu_\gamma}\Big[\varphi_2(\widehat{W}_s) \varphi_3(\xi_0)\Big],
\]
as expected.
\end{proof}
The following lemmas are preparatory results for the proof of Theorem \ref{prop_weakly_pareja}. We first observe from the Wiener-Hopf factorisation that there exists a non-decreasing function $\Psi_0$ satisfying,
$$ \psi_0(\lambda) = \lambda \Psi_0(\lambda), \qquad \text{for} \quad \lambda\geq 0,$$ where $\Psi_0$ is the Laplace exponent of a subordinator and takes the form
\begin{equation}\label{eq_phi0sub}
\Psi_0(\lambda)=\varrho^2\lambda+\int_{(0,\infty)} (1-e^{-\lambda x})\mu(x,\infty){\rm d}x.
\end{equation}
From \eqref{betacond}, it follows that $\Psi_0(\lambda)$ is regularly varying at 0 with index $\beta$.
\begin{lemma}\label{lem_cotapsi0}
Let $x, \lambda>0$ and assume that \eqref{betacond} holds, then
$$\lim\limits_{s \to \infty}\lim\limits_{t\to \infty}e^{-t \Phi_{\xi}(\gamma)} t^{3/2}\int_s^{t-s}\mathbb{E}_{x}^{(e)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_t>0\}}\right] {\rm d} u =0.$$
\end{lemma}
\begin{proof}
Let $x>0$ and $\lambda>0$. From the Markov property, we observe
\[\begin{split}
\mathbb{E}_{x}^{(e)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_t>0\}}\right]&=\mathbb{E}_{x}^{(e)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_u>0\}}\mathbb{P}_{\xi_u}^{(e)}\left(\underline{\xi}_{t-u}>0\right)\right].
\end{split}\]
Next we take $x_0 > x$ and from the monotonicity of $z\mapsto\mathbb{P}_{z}^{(e)}(\underline{\xi}_{t-u}>0)$, we obtain
\[
\begin{split}
\mathbb{E}_{x}^{(e)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_t>0\}}\right]
&\le \mathbb{E}_{x}^{(e)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_u>0\}}\mathbb{P}_{\xi_u}^{(e)}\left(\underline{\xi}_{t-u}>0\right)\mathbf{1}_{\{\xi_u>x_0\}}\right]\\
&\hspace{1cm}+\mathbb{E}_{x}^{(e)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_u>0\}}\mathbf{1}_{\{\xi_u\le x_0\}}\right]\mathbb{P}_{x_0+x}^{(e)}\left(\underline{\xi}_{t-u}>0\right).
\end{split}
\]
Now using the asymptotic behaviour given in \eqref{eq_weakly_cota_hirano} and the Esscher transform \eqref{eq_medida_Essher}, for $t$ large enough, we have
\begin{equation}\label{eq:cota}
\begin{split}
\mathbb{E}_{x}^{(e)}\Big[\Psi_0&(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_t>0\}}\big]\\ & \hspace{1cm} \le C_{\gamma} \mathbb{E}_{x}^{(e)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_u>0\}}\mathbf{1}_{\{\xi_u>x_0\}}e^{\gamma \xi_u} \widehat{U}^{(\gamma)} (\xi_u)\right](t-u)^{-3/2} e^{\Phi_\xi(\gamma)(t-u)}\\
&\hspace{1.5cm}+C_{\gamma, x+x_0}\mathbb{E}_{x}^{(e)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_u>0\}}\mathbf{1}_{\{\xi_u\le x_0\}}\right]\left(t-u\right)^{-3/2}e^{\Phi_\xi(\gamma)(t-u)}\\
&\hspace{1cm} \le C_{\gamma} \mathbb{E}_{x}^{(e, \gamma)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_u>0\}}\mathbf{1}_{\{\xi_u>x_0\}}\widehat{U}^{(\gamma)} (\xi_u) \right]\left(t-u\right)^{-3/2} e^{\Phi_\xi(\gamma)t}\\
&\hspace{1.5cm}+C_{\gamma, x+x_0}\mathbb{E}_{x}^{(e)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_u>0\}}\mathbf{1}_{\{\xi_u\le x_0\}}\right]\left(t-u\right)^{-3/2} e^{\Phi_\xi(\gamma)(t-u)},
\end{split}
\end{equation}
where $C_{\gamma}$ and $C_{\gamma, x+x_0}$ are strictly positive constants.
First, we deal with the first expectation on the right-hand side of the previous inequality. Recalling that $\Phi_\xi^{\prime\prime}(\gamma)<\infty$, we get from Corollary 5.3 in \cite{kyprianou2014fluctuations} that
\[
y^{-1}\widehat{U}^{(\gamma)}(y)\to \frac{1}{\widehat{\mathbb{E}}^{(e,\gamma)}[H_1]}, \qquad \textrm{as} \quad y\to \infty.
\]
Furthermore, since $\widehat{U}^{(\gamma)}$ is increasing and grows at most linearly, the map $y\mapsto e^{-\frac{\varsigma}{2} y}\widehat{U}^{(\gamma)}(y)$ is bounded for any $\varsigma \in (0,\beta)$, and from \eqref{betacond} we also deduce that the map $y\mapsto e^{-\frac{\varsigma}{2} y}\ell(\lambda e^{-y})$ is bounded.
With these observations in mind, it follows, for $u$ large enough, that
\[\begin{split}
\mathbb{E}_{x}^{(e, \gamma)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_u>0\}}\mathbf{1}_{\{\xi_u>x_0\}}\widehat{U}^{(\gamma)} (\xi_u) \right]
\leq C_\lambda \mathbb{E}_{x}^{(e, \gamma)}\left[e^{-(\beta-\frac{\varsigma}{2})\xi_u}\mathbf{1}_{\{\underline{\xi}_u>0\}} \right],
\end{split}\]
where $C_\lambda$ is a strictly positive constant. According to Lemma 1 in \cite{hirano2001levy}, we have that for $u$ sufficiently large there exists $C_{\lambda, \beta, x}$ such that
\[\begin{split}
\mathbb{E}_{x}^{(e, \gamma)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_u>0\}}\mathbf{1}_{\{\xi_u>x_0\}}\widehat{U}^{(\gamma)} (\xi_u) \right]& \leq C_{\lambda, \beta, x} u^{-3/2}.
\end{split}\]
For the second expectation in \eqref{eq:cota}, we use the monotonicity of $\Psi_0$ to get
\[
\begin{split}
\mathbb{E}_{x}^{(e)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_u>0\}}\mathbf{1}_{\{\xi_u\le x_0\}}\right] & \leq
\Psi_0(\lambda)\mathbb{P}_{x}^{(e)}\left(\underline{\xi}_u>0\right) \leq \widehat{C}_{\gamma, x, \lambda} u^{-3/2}e^{\Phi_\xi(\gamma)u},
\end{split}
\]
where $\widehat{C}_{\gamma, x, \lambda}$ is a positive constant. Putting all pieces together in \eqref{eq:cota}, we deduce, for $t$ large enough, that
\[ \mathbb{E}_{x}^{(e)}\Big[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_t>0\}}\big] \leq C_{\lambda, \beta, x, \gamma} u^{-3/2} (t-u)^{-3/2} e^{\Phi_{\xi}(\gamma) t},\]
where $C_{\lambda, \beta, x, \gamma}>0$. Finally, observe that for $t$ large enough
\[\begin{split}
e^{-t \Phi_{\xi}(\gamma)} t^{3/2} \int_s^{t-s}\mathbb{E}_{x}^{(e)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_t>0\}}\right] {\rm d} u & \leq C_{\lambda, \beta, x, \gamma} t^{3/2} \int_{s}^{t-s} (t-u)^{-3/2}u^{-3/2} {\rm d} u \\ & \leq 2 C_{\lambda, \beta, x, \gamma} t^{3/2} \left(\frac{t}{2}\right)^{-3/2} \int_{s}^{\infty} u^{-3/2} \mathrm{d} u \\ & \leq 2^{7/2} C_{\lambda, \beta, x, \gamma} s^{-1/2}.
\end{split}\]
The result now follows by taking $t\to \infty$ and then $s\to \infty$.
\end{proof}
\begin{lemma}\label{lem_cotasv}
Let $z, x >0$ and assume that \eqref{betacond} holds, then
\[\begin{split}
\lim\limits_{s \to \infty} \lim\limits_{t\to \infty} & t^{3/2}e^{-t \Phi_{\xi}(\gamma)}\mathbb{E}_{(z,x)}\Bigg[\bigg|\exp\left\{-Z_s e^{-\xi_s}v_t(s,\lambda, \xi)\right\}\bigg. \Bigg. \\ &\Bigg.\bigg. \hspace{4cm}- \exp\left\{-Z_s e^{-\xi_s}v_t(t-s,\lambda, \xi)\right\}\bigg| \mathbf{1}_{\{\underline{\xi}_t>0\}} \Bigg] =0.
\end{split}\]
\end{lemma}
\begin{proof}
Fix $z,x>0$ and take $t\geq 2s$. We begin by observing that, since the map $y\mapsto e^{-y}$, $y\geq 0$, is Lipschitz continuous, there exists a positive constant $C_1$ such that
\[\begin{split}
\mathbb{E}_{(z,x)}&\Bigg[\bigg|\exp\left\{-Z_s e^{-\xi_s}v_t(s,\lambda, \xi)\right\}- \exp\left\{-Z_s e^{-\xi_s}v_t(t-s,\lambda, \xi)\right\}\bigg| \mathbf{1}_{\{\underline{\xi}_t>0\}}\Bigg]\\ & \hspace{1.5cm } \leq C_1 \mathbb{E}_{(z,x)}\left[Z_s e^{-\xi_s}\big|v_t(s,\lambda, \xi)-v_t(t-s,\lambda, \xi)\big| \mathbf{1}_{\{\underline{\xi}_t>0\}} \right] \\ & \hspace{1.5cm} = C_1 z e^{-x} \mathbb{E}_{x}^{(e)}\left[\big|v_t(s,\lambda, \xi)-v_t(t-s,\lambda, \xi)\big| \mathbf{1}_{\{\underline{\xi}_t>0\}}\right],
\end{split}\]
where in the last identity we have conditioned on the environment and used \eqref{martingquenched}. Since $\psi_0$ is positive, from \eqref{eq_BDE} we have that $s\mapsto v_t(s, \lambda, \xi)$ is an increasing function; in particular $v_t(u,\lambda,\xi)\le v_t(t,\lambda,\xi)=\lambda$ for $u\le t$. Together with the fact that $\psi_0$ is non-decreasing, this yields $$ \psi_0\big(v_t(u, \lambda, \xi) e^{-\xi_u}\big)\leq \psi_0(\lambda e^{-\xi_u}), \qquad \text{for} \quad u\leq t.$$
Hence, we obtain
\[\begin{split}
v_t(t-s,\lambda, \xi)-v_t(s,\lambda, \xi) &= \int_{s}^{t-s} e^{\xi_u} \psi_0\big(v_t(u, \lambda, \xi) e^{-\xi_u}\big)\mathrm{d} u\\ & \leq \int_{s}^{t-s} e^{\xi_u} \psi_0(\lambda e^{-\xi_u} ) \mathrm{d} u= \int_{s}^{t-s} \lambda \Psi_0(\lambda e^{-\xi_u} )\mathrm{d} u.
\end{split}\]
In other words, we have deduced
\[
\mathbb{E}_{x}^{(e)} \left[\big|v_t(s,\lambda, \xi)-v_t(t-s,\lambda, \xi)\big| \mathbf{1}_{\{\underline{\xi}_t>0\}} \right] \leq \lambda \int_{s}^{t-s} \mathbb{E}_{x}^{(e)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_t>0\}}\right] \mathrm{d} u.
\]
Appealing to Lemma \ref{lem_cotapsi0}, we conclude that
\[\begin{split}
\lim\limits_{s \to \infty} \lim\limits_{t\to \infty} & t^{3/2}e^{-t \Phi_{\xi}(\gamma)}\mathbb{E}_{(z,x)}\Bigg[\bigg|\exp\left\{-Z_s e^{-\xi_s}v_t(s,\lambda, \xi)\right\}\bigg. \Bigg. \\ &\Bigg.\bigg. \hspace{4cm}- \exp\left\{-Z_s e^{-\xi_s}v_t(t-s,\lambda, \xi)\right\}\bigg|\mathbf{1}_{\{\underline{\xi}_t>0\}} \Bigg] \\ & \leq C_1 z e^{-x} \lambda \lim\limits_{s \to \infty} \lim\limits_{t\to \infty}t^{3/2}e^{-t \Phi_{\xi}(\gamma)} \int_s^{t-s}\mathbb{E}_{x}^{(e)}\left[\Psi_0(\lambda e^{-\xi_u})\mathbf{1}_{\{\underline{\xi}_t>0\}}\right] {\rm d} u = 0,
\end{split}\]
as required.
\end{proof}
The following lemma states that, under the measure $\mathbb{P}_{(z,x)}^{(\gamma),\uparrow}$ with $z,x>0$, the reweighted process $(Z_te^{-\xi_t}, t\geq 0)$ is a martingale that converges a.s. towards a finite random variable which is strictly positive with positive probability. This is another preparatory lemma for the proof of Theorem \ref{prop_weakly_pareja} below.
\begin{lemma}\label{lem_mtg}
Let $z,x >0$ and assume that \eqref{betacond} holds. Then the process $(Z_te^{-\xi_t},t\geq 0)$ is a martingale with respect to $(\mathcal{F}_t)_{t\geq 0}$ under $\mathbb{P}_{(z,x)}^{(\gamma),\uparrow}$. Moreover, as $t\to\infty$
$$ Z_{t}e^{-\xi_{t}} \longrightarrow \mathcal{U}_\infty, \quad \quad \mathbb{P}_{(z,x)}^{(\gamma),\uparrow}-\text{a.s.},$$
where the random variable $\mathcal{U}_\infty$ is finite and satisfies
$$ \mathbb{P}_{(z,x)}^{(\gamma),\uparrow}\big(\mathcal{U}_\infty>0\big)>0.$$
\end{lemma}
In order to prove the above result, we require the following lemma, which is Proposition 3.4 in Bansaye et al. \cite{bansaye2021extinction}.
\begin{lemma}[Proposition 3.4 in \cite{bansaye2021extinction}] \label{lem_JCProba} Let $z,x>0$ and assume that the environment $\xi$ is critical under $ \mathbb{P}_{(z,x)}$ and that \eqref{eq_xlog2x} is fulfilled. Then
$$ \lim\limits_{t\to \infty} \mathbb{P}^{\uparrow}_{(z,x)}\big(Z_t>0\big) >0.$$
\end{lemma}
We recall that \eqref{betacond} implies the $x\log^2(x)$-moment condition \eqref{eq_xlog2x}.
\begin{proof} [Proof of Lemma \ref{lem_mtg}]
From Proposition 1.1 in \cite{bansaye2021extinction}, which we may apply here with respect to the measure $\mathbb{P}_{(z,x)}^{(\gamma)}$, we have that the process $(Z_te^{-\xi_t},t\geq 0)$ is a quenched martingale with respect to the environment.
We assume that $s\leq t$ and take $A\in \mathcal{F}_s$. In order to deduce the first claim of this lemma, we first show
\[\begin{split}
\mathbb{E}_{(z,x)}^{(\gamma)}\Big[Z_te^{-\xi_t}\mathbf{1}_A\widehat{U}^{(\gamma)}(\xi_t)\mathbf{1}_{\{\underline{\xi}_t>0\}}\Big] &= \mathbb{E}_{(z,x)}^{(\gamma)}\Big[Z_se^{-\xi_s}\mathbf{1}_A\widehat{U}^{(\gamma)}(\xi_t)\mathbf{1}_{\{\underline{\xi}_t>0\}}\Big].
\end{split}\]
First, conditioning on the environment, we deduce that
\[\begin{split}
\mathbb{E}_{(z,x)}^{(\gamma)}\Big[Z_te^{-\xi_t}\mathbf{1}_A\widehat{U}^{(\gamma)}(\xi_t)\mathbf{1}_{\{\underline{\xi}_t>0\}}\Big] &= \mathbb{E}_{(z,x)}^{(\gamma)}\Big[\mathbb{E}_{(z,x)}^{(\gamma)}[Z_te^{-\xi_t}\mathbf{1}_A| \xi]\widehat{U}^{(\gamma)}(\xi_t)\mathbf{1}_{\{\underline{\xi}_t>0\}}\Big]\\ &= \mathbb{E}_{(z,x)}^{(\gamma)}\Big[\mathbb{E}_{(z,x)}^{(\gamma)}[Z_se^{-\xi_s}\mathbf{1}_A| \xi]\widehat{U}^{(\gamma)}(\xi_t)\mathbf{1}_{\{\underline{\xi}_t>0\}}\Big].
\end{split}\]
We can see that the random variable $\mathbb{E}_{(z,x)}^{(\gamma)}[Z_se^{-\xi_s}\mathbf{1}_A| \xi]$ is $\mathcal{F}_s$-measurable. Thus conditioning on $\mathcal{F}_s$, we have
\[\begin{split}
\mathbb{E}_{(z,x)}^{(\gamma)}\Big[Z_te^{-\xi_t}\mathbf{1}_A\widehat{U}^{(\gamma)}(\xi_t)\mathbf{1}_{\{\underline{\xi}_t>0\}}\Big] &= \mathbb{E}_{(z,x)}^{(\gamma)}\Big[\mathbb{E}_{(z,x)}^{(\gamma)}[Z_se^{-\xi_s}\mathbf{1}_A | \xi] \mathbb{E}_{(z,x)}^{(\gamma)}[\widehat{U}^{(\gamma)}(\xi_t)\mathbf{1}_{\{\underline{\xi}_t>0\}}| \mathcal{F}_s]\Big].
\end{split}\]
Further, by Lemma 3.1 in \cite{bansaye2021extinction}, which we can apply here under the measure $\mathbb{P}_{(z,x)}^{(\gamma)}$,
the process $(\widehat{U}^{(\gamma)}(\xi_t)\mathbf{1}_{\{\underline{\xi}_t>0\}}, t\geq 0)$ is a martingale with respect to $(\mathcal{F}_t)_{t\geq 0}$ under $\mathbb{P}_{(z,x)}^{(\gamma)}$. Hence
\[\begin{split}
\mathbb{E}_{(z,x)}^{(\gamma)}\Big[Z_te^{-\xi_t}\mathbf{1}_A\widehat{U}^{(\gamma)}(\xi_t)\mathbf{1}_{\{\underline{\xi}_t>0\}}\Big] &= \mathbb{E}_{(z,x)}^{(\gamma)}\Big[\mathbb{E}_{(z,x)}^{(\gamma)}[Z_se^{-\xi_s}\mathbf{1}_A | \xi] \widehat{U}^{(\gamma)}(\xi_s)\mathbf{1}_{\{\underline{\xi}_s>0\}}\Big] \\ &= \mathbb{E}_{(z,x)}^{(\gamma)}\Big[Z_se^{-\xi_s}\mathbf{1}_A \widehat{U}^{(\gamma)}(\xi_s)\mathbf{1}_{\{\underline{\xi}_s>0\}}\Big].
\end{split}\]
Therefore, by definition of the measure $\mathbb{P}_{(z,x)}^{(\gamma),\uparrow}$ we see
\[\begin{split}
\mathbb{E}_{(z,x)}^{(\gamma),\uparrow}\big[Z_te^{-\xi_t}\mathbf{1}_A\big] &= \frac{1}{\widehat{U}(x)} \mathbb{E}_{(z,x)}^{(\gamma)}\Big[Z_te^{-\xi_t}\mathbf{1}_A\widehat{U}^{(\gamma)}(\xi_t)\mathbf{1}_{\{\underline{\xi}_t>0\}}\Big] \\& = \frac{1}{\widehat{U}(x)} \mathbb{E}_{(z,x)}^{(\gamma)}\Big[Z_se^{-\xi_s}\mathbf{1}_A \widehat{U}^{(\gamma)}(\xi_s)\mathbf{1}_{\{\underline{\xi}_s>0\}}\Big] = \mathbb{E}_{(z,x)}^{(\gamma),\uparrow}\big[Z_se^{-\xi_s}\mathbf{1}_A\big],
\end{split}\]
which allows us to conclude that the process $(Z_te^{-\xi_t},t\geq 0)$ is a martingale with respect to $(\mathcal{F}_t)_{t\geq 0}$ under $\mathbb{P}_{(z,x)}^{(\gamma),\uparrow}$. Moreover, by Doob's convergence theorem, there is a non-negative finite r.v. $\mathcal{U}_\infty$ such that as $t\to \infty$
$$ Z_{t}e^{-\xi_{t}} \longrightarrow \mathcal{U}_\infty, \quad \quad \mathbb{P}_{(z,x)}^{(\gamma),\uparrow}-\text{a.s.}$$
Next, by the dominated convergence theorem, we have
\[\mathbb{P}_{(z,x)}^{(\gamma),\uparrow}\big(\mathcal{U}_\infty>0\big) = \lim\limits_{t\to\infty}\mathbb{P}_{(z,x)}^{(\gamma),\uparrow}\big(Z_te^{-\xi_t}>0\big).\]
The proof is thus completed as soon as we can show
\begin{equation}\label{eq_pgamma}
\lim\limits_{t\to\infty}\mathbb{P}_{(z,x)}^{(\gamma),\uparrow}\big(Z_te^{-\xi_t}>0\big)>0.
\end{equation}
In order to do so, we first observe that the following identity holds
\[\mathbb{P}_{(z,x)}^{(\gamma),\uparrow}\big(Z_te^{-\xi_t}=0\big)
=\mathbb{P}_{(z,x)}^{(\gamma),\uparrow}\big(Z_t=0\big).\]
Then, noting that under $\mathbb{P}^{(\gamma)}_{(z,x)}$ the L\'evy process $\xi$ oscillates (since $\Phi_{\xi}'(\gamma)=0$), we can apply Lemma \ref{lem_JCProba} to deduce \eqref{eq_pgamma}.
\end{proof}
With Proposition \ref{prop_weakly_primera} and Lemmas \ref{lem_cotasv} and \ref{lem_mtg} in hand, we may now proceed to prove Theorem \ref{prop_weakly_pareja}, following ideas similar to those used in Lemma 3.4 in \cite{afanasyev2012}, although the continuous setting leads to significant changes, since an extension of Proposition \ref{prop_weakly_primera} seems difficult to deduce, unlike in the discrete case (see Theorem 2.7 in \cite{afanasyev2012}). Indeed, it seems that such an extension would depend on a much deeper analysis of the asymptotic behaviour of bridges of L\'evy processes and of their conditioned versions.
\begin{proof}[Proof of Theorem \ref{prop_weakly_pareja}]
Fix $x,z>0$ and recall that the process $(\mathcal{U}_s, s\geq 0)$ is defined as $ \mathcal{U}_{s} := Z_{s}e^{-\xi_{s}}$. For any $\lambda\geq 0$, we shall prove the convergence of the following Laplace transform as $t\to\infty$, $$\mathbb{E}_{(z,x)}\left[\exp\{-\lambda Z_te^{-\xi_t}\}\ \big|\big. \ \underline{\xi}_t >0\right].$$
First, we rewrite the latter expression in a form which allows us to use Proposition \ref{prop_weakly_primera} and Lemma \ref{lem_cotasv}. We begin by recalling from \eqref{eq_Laplace} that, for any $\lambda\geq 0$ and $t\geq s\geq 0$, we have
\begin{equation*}\label{eq_quenchedlaw}
\mathbb{E}_{(z,x)}\left[\exp\{-\lambda Z_te^{-\xi_t}\}\ \big|\big.\ \xi, \mathcal{F}^{(b)}_s\right] = \exp\big\{- Z_se^{-\xi_s}v_t(s,\lambda,\xi)\big\}.
\end{equation*}
Thus
\[\begin{split}
\mathbb{E}_{(z,x)}&\left[\exp\{-\lambda Z_te^{-\xi_t}\}\mathbf{1}_{\{\underline{\xi}_t>0\}}\right]= \mathbb{E}_{(z,x)}\Big[\mathbb{E}_{(z,x)}\left[\exp\{-\lambda Z_te^{-\xi_t}\}\ \big|\big.\ \xi, \mathcal{F}^{(b)}_s\right] \mathbf{1}_{\{\underline{\xi}_t>0\}}\Big]\\ &\hspace{0.3cm} = \mathbb{E}_{(z,x)} \Big[ \exp\big\{- Z_se^{-\xi_s}v_t(s,\lambda,\xi)\big\}\mathbf{1}_{\{\underline{\xi}_t>0\}}\Big]\\ &\hspace{0.3cm} = \mathbb{E}_{(z,x)} \Big[ \exp\big\{- Z_se^{-\xi_s}v_t(t-s,\lambda,\xi)\big\}\mathbf{1}_{\{\underline{\xi}_t>0\}}\Big] \\ &\hspace{1cm}+ \mathbb{E}_{(z,x)} \Big[\Big( \exp\big\{- Z_se^{-\xi_s}v_t(s,\lambda,\xi)\big\}- \exp\big\{- Z_se^{-\xi_s}v_t(t-s,\lambda,\xi)\big\}\Big)\mathbf{1}_{\{\underline{\xi}_t>0\}}\Big].
\end{split}\]
Now, using the same notation as in Proposition \ref{prop_weakly_primera}, we note that for any $s\leq t$,
\begin{equation}\label{eq_vaphi}
\exp\left\{-{Z_{s}e^{-\xi_{s}}} v_{t}(t-s,\lambda,\xi)\right\} = \varphi\big(\mathcal{U}_{s} ,\widetilde{W}_{t-s,t}, \xi_t\big),
\end{equation}
where $(\widehat W_s(\lambda), s\geq 0)$ and $(\widetilde{W}_{t-s,t}, s\leq t)$ are defined by
\[\begin{split}
\widehat{W}_{s}(\lambda):= \exp\left\{-v_s(0,\lambda,\widehat{\xi})\right\}, \quad \quad \quad \widetilde{W}_{t-s, t}:=\exp\left\{-v_t(t-s,\lambda,\xi)\right\},
\end{split}\]
and $\varphi$ is the following bounded and continuous function
$$\varphi(\textbf{u},w,y) := w^{\textbf{u}}, \qquad 0\leq w\leq 1, \quad \textbf{u}\geq 0, \quad y\in \mathbb{R}.$$
Hence, appealing to Proposition \ref{prop_weakly_primera}, Lemma \ref{lem_cotasv} and
\eqref{eq_weakly_cota_hirano}, for $z,x>0$, we see
\[\begin{split}
\lim\limits_{t\to \infty}& \mathbb{E}_{(z,x)}\left[\exp\{-\lambda Z_t e^{-\xi_t}\}\ \Big|\Big. \ \underline{\xi}_t >0\right] = \lim\limits_{s\to \infty} \lim\limits_{t\to \infty} \mathbb{E}_{(z,x)}\left[\varphi(\mathcal{U}_{s},\widetilde{W}_{t-s,t}, \xi_t)\ \Big|\Big.\ \underline{\xi}_t > 0\right] \\ & \hspace{0.1cm} + \lim\limits_{s\to \infty}\lim\limits_{t\to \infty}\mathbb{E}_{(z,x)}\left[\Big|\exp\left\{-Z_s e^{-\xi_s}v_t(s,\lambda, \xi)\right\} - \exp\left\{-Z_s e^{-\xi_s}v_t(t-s,\lambda, \xi)\right\}\Big|\, \bigg|\Big. \ \underline{\xi}_t>0 \right] \\&\hspace{0.3cm}= \lim\limits_{s\to \infty}\lim\limits_{t\to \infty}\frac{ \mathbb{E}_{(z,x)}^{(\gamma)}\left[\varphi(\mathcal{U}_{s},\widetilde{W}_{t-s,t}, \xi_t)e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_t>0\}}\right] }{\mathbb{E}^{(e, \gamma)}_x\left[e^{-\gamma \xi_t}\mathbf{1}_{\{\underline{\xi}_t>0\}}\right]} = \lim\limits_{s\to \infty}\Upsilon_{z,x}(\lambda,s),
\end{split}\]
where
\begin{equation*}
\Upsilon_{z,x}(\lambda,s):= \iiint \varphi(\textbf{u},w,y) \mathbb{P}^{(\gamma),\uparrow}_{(z,x)}\big(\mathcal{U}_s \in \mathrm{d} \textbf{u}\big) \mathbb{P}^{(e,\gamma),\downarrow}_{-y}\big(\widehat{W}_s(\lambda) \in \mathrm{d} w\big)\mu_\gamma(\mathrm{d} y).
\end{equation*}
On the other hand, from Lemma \ref{lem_mtg}, we recall that, under $\mathbb{P}_{(z,x)}^{(\gamma),\uparrow}$, the process $(\mathcal{U}_s, s\geq 0)$ is a non-negative martingale with respect to $(\mathcal{F}_t)_{t\geq 0}$ that converges towards the non-negative and finite r.v. $\mathcal{U}_\infty$. Next, we observe from Proposition 2.3 in \cite{he2018continuous} that the mapping $s\mapsto v_s(0,\lambda,\widehat \xi)$ is decreasing implying that $s\mapsto \widehat W_s(\lambda)$ is increasing $\mathbb{P}_{-y}^{(e,\gamma),\downarrow}$-a.s., for $y>0$. Further, since $v_{s}(0,\lambda, \widehat \xi)\leq \lambda$, the process $(\widehat W_{s}(\lambda), s\geq 0)$ is bounded below, i.e. for any $\lambda\geq 0$,
$$ 0 < e^{-\lambda}\leq \widehat W_{s}(\lambda) \leq 1.$$
Therefore it follows that, for any $\lambda\geq 0$ and $y>0$,
\begin{equation}\label{eq_convW}
\widehat W_{s}(\lambda) \xrightarrow[s\to\infty]{} \widehat W_\infty(\lambda), \quad \quad \mathbb{P}_{-y}^{(e,\gamma),\downarrow}-\text{a.s.},
\end{equation}
where $\widehat W_\infty(\lambda)$ is a strictly positive r.v.
The above observations together with the dominated convergence theorem imply that
$$ \lim\limits_{s\to \infty}\Upsilon_{z,x}(\lambda,s) = \iiint \varphi(\textbf{u},w,y) \mathbb{P}^{(\gamma),\uparrow}_{(z,x)}\big(\mathcal{U}_\infty \in \mathrm{d} \textbf{u}\big) \mathbb{P}^{(e,\gamma),\downarrow}_{-y}\big(\widehat W_\infty(\lambda) \in \mathrm{d} w\big)\mu_\gamma(\mathrm{d} y):= \Upsilon_{z,x}(\lambda).$$
In other words, $\mathcal{U}_t=Z_t e^{-\xi_t}$ converges weakly, under $\mathbb{P}_{(z,x)}\big(\cdot \ | \ \underline{\xi}_t>0 \big)$, towards some non-negative and finite r.v. that we denote by $Q$ and whose Laplace transform is given by $\Upsilon_{z,x}$.
Next, we observe that the probability of the event $\{Q>0\}$ is strictly positive. The latter is equivalent to showing that $\Upsilon_{z,x}(\lambda)<1$ for all $\lambda>0$. In other words, from the definition of $\varphi(\textbf{u},w,y)$, it is enough to show
$$ \mathbb{P}_{(z,x)}^{(\gamma),\uparrow}\big(\mathcal{U}_\infty>0\big)>0 \quad \quad \text{and}\quad \quad \mathbb{P}_{-y}^{(e,\gamma),\downarrow}\big(\widehat W_\infty(\lambda)<1\big)=1, \quad \text{for all}\quad \lambda>0.$$
The first claim has been proved in Lemma \ref{lem_mtg}. For the second claim, we observe that for any $\lambda>0$,
\[ \mathbb{P}_{-y}^{(e,\gamma),\downarrow}\big(\widehat W_\infty(\lambda)<1\big) = \mathbb{P}_{-y}^{(e,\gamma), \downarrow}\big(v_\infty(0, \lambda, \widehat{\xi})>0\big). \]
By the proof of Proposition 3.4 in \cite{bansaye2021extinction}, we have
\[v_\infty(0, \lambda, \xi) \geq \lambda \exp\left\{-\int_{0}^{\infty} \Psi_0(\lambda e^{-\xi_u}){\rm d} u\right\},\]
and moreover, from the same reference and under assumption \eqref{betacond}, it follows
\[\mathbb{E}^{(e, \gamma), \uparrow}_y \left[\int_{0}^{\infty} \Psi_0(\lambda e^{-\xi_u}){\rm d} u\right] < \infty,\]
which implies that
$$\mathbb{P}_{-y}^{(e,\gamma), \downarrow }\big(v_\infty(0, \lambda, \widehat \xi)>0\big)=1, \quad \text{for all} \quad \lambda \geq 0.$$
In other words, the probability of the event $\{Q>0\}$ is strictly positive, which implies
$$\lim\limits_{t\to \infty}\mathbb{P}_{(z,x)}\Big(Z_t e^{-\xi_t}>0 \ \Big|\Big. \ \underline{\xi}_t>0\Big)>0.$$
This completes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{teo_debil}}
The proof of this theorem follows a strategy similar to that of the proof of Theorem 1.2 in Bansaye et al. \cite{bansaye2021extinction} for the critical regime, where the assumption that $\ell(\lambda)>C$, for some $C>0$, and the asymptotic behaviour of exponential functionals of L\'evy processes are crucial. We also recall that $Z$ is in the weakly subcritical regime.
For simplicity of exposition, we split the proof of Theorem \ref{teo_debil} into two lemmas. The first lemma is a direct consequence of Theorem \ref{prop_weakly_pareja}.
\begin{lemma}\label{lem_weakly_2}
Suppose that \eqref{betacond} holds. Then for any $z, x >0$ we have, as $t \to \infty$
\begin{eqnarray*}
\mathbb{P}_{(z,x)}\Big(Z_t >0,\ \underline{\xi}_t>0\Big) &\sim& \mathfrak{b}(z,x) \mathbb{P}^{(e)}_x\left(\underline{\xi}_t > 0\right) \\ &\sim& \mathfrak{b}(z,x)\frac{A_\gamma}{ \gamma\kappa^{(\gamma)}(0,\gamma)} e^{\gamma x} \widehat{U}^{(\gamma)}(x) t^{-3/2}e^{\Phi_{\xi}(\gamma)t},
\end{eqnarray*}
where the constant $A_\gamma$ is defined in \eqref{eq_weakly_constante}.
\end{lemma}
\begin{proof}
We begin by recalling from Theorem \ref{prop_weakly_pareja} that
\begin{equation*}
\lim\limits_{t \to \infty} \mathbb{P}_{(z,x)}\left(Z_t>0\ \Big| \ \underline{\xi}_t> 0\right) = \mathfrak{b}(z,x)>0.
\end{equation*}
Thus, appealing to \eqref{eq_weakly_cota_hirano} we obtain that,
\begin{eqnarray*}
\mathbb{P}_{(z,x)}\Big(Z_t>0,\ \underline{\xi}_t>0\Big) &=& \mathbb{P}_{(z,x)}\left(Z_t>0 \ \big|\big. \ \underline{\xi}_t>0\right) \mathbb{P}_x^{(e)}\left(\underline{\xi}_t>0\right) \\ &\sim& \mathfrak{b}(z,x)\frac{A_\gamma}{ \gamma\kappa^{(\gamma)}(0,\gamma) }e^{\gamma x} \widehat{U}^{(\gamma)}(x) t^{-3/2}e^{\Phi_{\xi}(\gamma)t},
\end{eqnarray*}
as $t\to \infty$, which yields the desired result.
\end{proof}
The following lemma tells us that, under the condition that $\ell(\lambda)>C$, for some $C>0$, only L\'evy random environments with a high infimum contribute substantially to the non-extinction probability.
\begin{lemma}\label{lem_cero_weakly}
Suppose that $\ell(\lambda)>C$, for $C>0$. Then for $\delta\in (0,1) $ and $z, x>0$, we have
\begin{equation}\label{eq_wekly_cota_cero}
\lim\limits_{y \to \infty} \limsup_{t\to \infty} t^{3/2} e^{-t\Phi_{\xi}(\gamma)}\mathbb{P}_{(z,x)}\left(Z_t>0,\ \underline{\xi}_{t-\delta}\leq -y\right) = 0.
\end{equation}
\end{lemma}
\begin{proof}
The proof of this lemma follows similar arguments as those used in the proofs of Lemma 6 in Bansaye et al. \cite{bansaye2021extinction} and Lemma 4.4 in Li et al. \cite{li2018asymptotic}.
From \eqref{eq_Laplace}, we deduce the following identity which holds for all $t> 0$,
\begin{equation}\label{eq_weakly_cota_noext}
\mathbb{P}_{(z,x)}\big(Z_t > 0 \ \big|\big. \ \xi \big) = 1-\exp\big\{-z v_t(0,\infty, \xi-\xi_0)\big\}.
\end{equation}
Similarly as in Lemma 6 in \cite{bansaye2021extinction}, since $\ell(\lambda)>C$ we can bound the functional $v_t(0,\infty,\xi-\xi_0)$ in terms of the exponential functional of the L\'evy process $\xi$, i.e.
\begin{equation}\label{eq_cota_v}
v_t(0,\infty,\xi-\xi_0) \leq \Big(\beta C {\tt I}_{0,t}(\beta (\xi-\xi_0))\Big)^{-1/\beta},
\end{equation}
where we recall that
\begin{equation}\label{eq_func_expon}
{\tt I}_{s,t}(\beta (\xi-\xi_0)):= \int_{s}^{t} e^{-\beta (\xi_u-\xi_0)} \mathrm{d} u,\qquad \textrm{for} \quad t\ge s\ge 0.
\end{equation}
In other words, for $0< \delta < t$, we deduce
\begin{equation}
\begin{split}\label{eq_cota_debil1}
\mathbb{P}_{(z,x)}\Big(Z_t>0,\ \underline{\xi}_{t-\delta}\leq -y\Big) &\leq C(z) \mathbb{E}^{(e)}_x\Big[F({\tt I}_{0,t}(\beta (\xi-\xi_0)));\ \underline{\xi}_{t-\delta}\leq -y\Big] \\
& = C(z) \mathbb{E}^{(e)}\Big[F({\tt I}_{0,t}(\beta \xi));\ \tau^{-}_{-\tilde{y}}\leq t-\delta\Big],
\end{split}
\end{equation}
where $\tilde{y}=y+x$, $\tau^-_{-\tilde{y}} = \inf\{t\geq 0:\xi_t\leq -\tilde{y}\},$ $C(z)=z(\beta C)^{-1/\beta}\lor 1$ and
\[
F(w)=1 - \exp\{-z(\beta C w)^{-1/\beta}\}.
\]
To upper bound the right-hand side of \eqref{eq_cota_debil1}, we recall from Lemma 4.4 in \cite{li2018asymptotic} that there exists a positive constant $\tilde C$ such that
\begin{equation}
\limsup_{t\to \infty} t^{3/2} e^{-t\Phi_{\xi}(\gamma)} \mathbb{E}^{(e)}\Big[F({\tt I}_{0,t}(\beta \xi));\ \tau^{-}_{-\tilde{y}}\leq t-\delta\Big] \leq \tilde C e^{-\tilde{y}} + \tilde C e^{-(1-\gamma)\tilde{y}}\widehat{U}^{(\gamma)}(\tilde{y}),
\end{equation}
which clearly goes to $0$ as $y$ increases, since $\gamma \in (0,1)$ and $ \widehat{U}^{(\gamma)}(y) = \mathcal{O}(y)$ as $y$ goes to $ \infty$. Hence putting all pieces together allows us to deduce our result.
\end{proof}
We are now ready to deduce our second main result. The next result follows the same arguments as those used in the proof of Theorem 1.2 in \cite{bansaye2021extinction}; we provide its proof for the sake of completeness.
\begin{proof} [Proof of Theorem \ref{teo_debil}]
Let $z,x,\epsilon >0$. From Lemma \ref{lem_cero_weakly}, we deduce that we may choose $y>0$ such that for $t$ sufficiently large
\begin{equation}
\mathbb{P}_{(z,x)}\Big(Z_t>0,\ \underline{\xi}_{t-\delta} \leq -y\Big) \leq \epsilon \mathbb{P}_{(z,x)}\Big(Z_t>0,\ \underline{\xi}_{t-\delta} > -y\Big).
\end{equation}
Further, since $\{Z_{t} >0\}\subset \{Z_{t-\delta} >0\}$ for $t$ large, we deduce that
\begin{eqnarray*}
\mathbb{P}_{z}(Z_t>0) &=& \mathbb{P}_{(z,x)}\Big(Z_t>0,\ \underline{\xi}_{t-\delta} >-y\Big) + \mathbb{P}_{(z,x)}\Big(Z_t>0,\ \underline{\xi}_{t-\delta} \leq -y\Big) \\ &\leq & (1+\epsilon) \mathbb{P}_{(z,x+y)}\Big(Z_{t-\delta}>0,\ \underline{\xi}_{t-\delta} > 0\Big).
\end{eqnarray*}
In other words, for every $\epsilon >0$ there exists $y'>0$ such that
\[\begin{split}
(1-\epsilon) t^{3/2}e^{-\Phi_{\xi}(\gamma)t}\mathbb{P}_{(z,y')}\Big(Z_t>0,\ & \underline{\xi}_{t} >0\Big) \leq t^{3/2}e^{-\Phi_{\xi}(\gamma)t}\mathbb{P}_{z}(Z_t>0) \\ &\leq (1+\epsilon) t^{3/2}e^{-\Phi_{\xi}(\gamma)t}\mathbb{P}_{(z,y')}\Big(Z_{t-\delta}>0,\ \underline{\xi}_{t-\delta} > 0\Big).
\end{split}\]
Now, appealing to Lemma \ref{lem_weakly_2}, we have
\begin{equation*}
\lim\limits_{t \to \infty}t^{3/2}e^{-\Phi_{\xi}(\gamma)t}\mathbb{P}_{(z,y')}\Big(Z_t>0,\ \underline{\xi}_t >0\Big) = \mathfrak{b}(z,y') \frac{A_\gamma}{\gamma\kappa^{(\gamma)}(0,\gamma)}e^{\gamma y'} \widehat{U}^{(\gamma)}(y').
\end{equation*}
Hence, we obtain
\[\begin{split}
(1-\epsilon) \frac{A_\gamma}{\gamma\kappa^{(\gamma)}(0,\gamma)} \mathfrak{b}(z,y') e^{\gamma y'} \widehat{U}^{(\gamma)}(y') &\leq \lim\limits_{t \to \infty} t^{3/2}e^{-t\Phi_{\xi}(1)} \mathbb{P}_{z}(Z_t>0) \\ &\leq (1+\epsilon) \frac{A_\gamma}{\gamma\kappa^{(\gamma)}(0,\gamma)} \mathfrak{b}(z,y') e^{\gamma y'} \widehat{U}^{(\gamma)}(y') e^{-\Phi_{\xi}(1)\delta },
\end{split}\]
where $y'$ may depend on $\epsilon$ and $z$. Next, we choose $y'$ in such a way that it goes to infinity as $\epsilon$ goes to 0. In other words, for any $y'=y_{\epsilon}(z)$ which goes to $\infty$ as $\epsilon$ goes to 0, we have
\[\begin{split}
0< (1-\epsilon) \frac{A_\gamma}{\gamma\kappa^{(\gamma)}(0,\gamma)} &\mathfrak{b}(z,y_{\epsilon}(z)) e^{\gamma y_{ \epsilon}(z)}\widehat{U}^{(\gamma)}(y_{\epsilon}(z)) \leq \lim\limits_{t \to \infty} t^{3/2}e^{-\Phi_{\xi}(\gamma)t}\mathbb{P}_{z}(Z_t>0)\\
&\leq (1+\epsilon) \frac{A_\gamma}{\gamma\kappa^{(\gamma)}(0,\gamma)} \mathfrak{b}(z,y_{\epsilon}(z)) e^{\gamma y_{\epsilon}(z)}\widehat{U}^{(\gamma)}(y_{\epsilon}(z)) e^{-\Phi_{\xi}(\gamma)\delta } < \infty.
\end{split}
\]
Therefore, letting $\epsilon\to 0$, we get
\[\begin{split}
0< \liminf_{\epsilon \to 0} (1-\epsilon) & \frac{A_\gamma}{\gamma\kappa^{(\gamma)}(0,\gamma)} \mathfrak{b}(z,y_{\epsilon}(z)) e^{\gamma y_{ \epsilon}(z)}\widehat{U}^{(\gamma)}(y_{\epsilon}(z)) \leq\lim\limits_{t \to \infty} t^{3/2}e^{-\Phi_{\xi}(\gamma)t}\mathbb{P}_{z}(Z_t>0)\\ &\leq \limsup_{\epsilon \to 0}(1+\epsilon) \frac{A_\gamma}{\gamma\kappa^{(\gamma)}(0,\gamma)} \mathfrak{b}(z,y_{\epsilon}(z)) e^{\gamma y_{\epsilon}(z)}\widehat{U}^{(\gamma)}(y_{\epsilon}(z)) e^{-\Phi_{\xi}(\gamma)\delta }< \infty.
\end{split}\]
Since $\delta$ can be taken arbitrarily close to 0, we deduce
\begin{equation*}
\lim\limits_{t \to \infty} t^{3/2}e^{-\Phi_{\xi}(\gamma)t} \mathbb{P}_{z}(Z_t>0)=\mathfrak{B}(z),
\end{equation*}
where
$$ \mathfrak{B}(z):= \frac{A_\gamma}{\gamma\kappa^{(\gamma)}(0,\gamma)}\lim_{\epsilon \to 0} \mathfrak{b}(z,y_{\epsilon}(z))e^{\gamma y_{\epsilon}(z)}\widehat{U}^{(\gamma)}(y_{\epsilon}(z)) \in (0,\infty). $$
Thus the proof is completed.
\end{proof}
\subsection{The stable case}\label{sec:stable}
Here, we compute the constant $\mathfrak{B}(z)$ in the stable case and verify that it coincides with the constant that appears in Theorem 5.1 in Li and Xu \cite{li2018asymptotic}. To this end, we recall that in the stable case we have $\psi_0(\lambda)= C\lambda^{1+\beta}$ with $\beta\in (0,1)$ and $C>0$. Moreover, the backward differential equation \eqref{eq_BDE} can be solved explicitly (see e.g. Section 5 in \cite{he2018continuous}), that is, for any $\lambda\geq 0$ and $s\in [0,t]$,
\begin{equation}\label{eq:vstable}
v_t(s,\lambda, \xi) = \Big(\lambda^{-\beta} + \beta C\texttt{I}_{s,t}(\beta \xi)\Big)^{-1/\beta},
\end{equation}
where $\texttt{I}_{s,t}(\beta \xi)$ denotes the exponential functional of the L\'evy process $\beta \xi$ defined in \eqref{eq_expfuncLevy}.
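As a sanity check, let us verify that \eqref{eq:vstable} is consistent with a backward equation of the quenched form used in \cite{he2018continuous, bansaye2021extinction}; we assume here that \eqref{eq_BDE} reads $\partial_s v_t(s,\lambda,\xi) = e^{\xi_s}\psi_0\big(e^{-\xi_s}v_t(s,\lambda,\xi)\big)$ with terminal condition $v_t(t,\lambda,\xi)=\lambda$. Writing $v = v_t(s,\lambda,\xi)$ for the right-hand side of \eqref{eq:vstable} and using $\partial_s \texttt{I}_{s,t}(\beta\xi) = -e^{-\beta\xi_s}$, we obtain
\[
\frac{\partial}{\partial s} v_t(s,\lambda,\xi) = -\frac{1}{\beta}\Big(\lambda^{-\beta}+\beta C\,\texttt{I}_{s,t}(\beta \xi)\Big)^{-1/\beta-1}\big(-\beta C e^{-\beta \xi_s}\big) = C e^{-\beta\xi_s}\, v^{1+\beta} = e^{\xi_s}\,\psi_0\big(e^{-\xi_s}v\big),
\]
while clearly $v_t(t,\lambda,\xi)=\lambda$, so \eqref{eq:vstable} indeed solves the assumed equation.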
Next, we observe that, for any $z,x >0$, the constant $\mathfrak{b} (z,x)$ defined in Theorem \ref{prop_weakly_pareja} can be rewritten as follows
\begin{equation*}
\mathfrak{b}(z,x) = 1- \lim\limits_{\lambda \to \infty} \lim\limits_{s\to \infty} \gamma\kappa^{(\gamma)}(0,\gamma) \int_{0}^{\infty} e^{-\gamma y} U^{(\gamma)} (y) R_{s,\lambda}(z,x,y){\rm d} y,
\end{equation*}
where
\[ R_{s,\lambda}(z,x,y) := \int_{0}^{1} \int_{0}^{\infty} w^{\textbf{u}} \mathbb{P}^{(\gamma),\uparrow}_{(z,x)}\big(\mathcal{U}_s \in \mathrm{d} \textbf{u}\big) \mathbb{P}^{(e,\gamma),\downarrow}_{-y}\big(\widehat{W}_s(\lambda) \in \mathrm{d} w\big).\]
In order to find an explicit expression of the previous double integral we use Proposition 3.3 in \cite{bansaye2021extinction} which claims that for any $z,x>0$ and $\theta \geq 0$, we have
$$\mathbb{E}_{(z,x)}^{(\gamma), \uparrow} \Big[\exp\left\{-\theta Z_s e^{-\xi_s}\right\} \Big] = \mathbb{E}^{(e, \gamma), \uparrow}_x\Big[\exp\left\{-z v_s(0, \theta e^{-x}, \xi-x)\right\}\Big]. $$
It follows that
\begin{equation*}
\begin{split}
R_{s,\lambda}(z,x,y)&= \int_{0}^{1} \mathbb{E}_{(z,x)}^{(\gamma), \uparrow}\left[w^{\mathcal{U}_s}\right] \mathbb{P}^{(e,\gamma),\downarrow}_{-y}\big(\widehat{W}_s(\lambda) \in \mathrm{d} w\big) \\
& = \int_{0}^{1} \mathbb{E}_{(z,x)}^{(\gamma), \uparrow}\Big[\exp\left\{\log(w)Z_se^{-\xi_s}\right\}\Big] \mathbb{P}^{(e,\gamma),\downarrow}_{-y}\big(\widehat{W}_s(\lambda) \in \mathrm{d} w\big) \\
& = \int_{0}^{1} \mathbb{E}_{x}^{(e, \gamma), \uparrow}\Big[\exp\left\{-z v_s(0,- \log(w)e^{-x}, \xi-x)\right\}\Big] \mathbb{P}^{(e,\gamma),\downarrow}_{-y}\big(\widehat{W}_s(\lambda) \in \mathrm{d} w\big)\\
& = \int_{0}^{\infty} \int_{0}^{\infty} e^{-ze^{-x}(\beta C w +\beta C u)^{-1/\beta}} \mathbb{P}^{(\gamma),\uparrow}_{(z,x)}\big(\texttt{I}_{0,\infty}(\beta \xi) \in \mathrm{d} w\big) \mathbb{P}^{(e,\gamma),\downarrow}_{-y}\Big(\texttt{I}_{0,\infty}(\beta \widehat{\xi}) \in \mathrm{d} u\Big),
\end{split}
\end{equation*}
where in the last equality we have used \eqref{eq:vstable}. Thus putting all pieces together and appealing to the Dominated Convergence Theorem, we deduce
\begin{equation*}
\begin{split}
\mathfrak{b} (z,x)& = 1- \gamma\kappa^{(\gamma)}(0,\gamma) \int_{0}^{\infty} e^{-\gamma y} U^{(\gamma)} (y) \lim\limits_{\lambda \to \infty} \lim\limits_{s\to \infty} R_{s,\lambda}(z,x,y){\rm d} y \\ & = \gamma\kappa^{(\gamma)}(0,\gamma) \int_{0}^{\infty} e^{-\gamma y} U^{(\gamma)} (y) G_{z,x}(y){\rm d} y,
\end{split}
\end{equation*}
where $G_{z,x}(\cdot)$ is as in \eqref{functG}. Therefore, we have that the limiting constant in the stable case is given by
\begin{equation*}
\begin{split}
\mathfrak{B}(z)&:=\frac{A_\gamma}{\gamma\kappa^{(\gamma)}(0,\gamma)} \lim_{x\to \infty} \mathfrak{b}(z,x)e^{\gamma x}\widehat{U}^{(\gamma)}(x)\\ & = A_\gamma \lim\limits_{x\to \infty} e^{\gamma x} \widehat{U}^{(\gamma)}(x)\int_{0}^{\infty} e^{-\gamma y} U^{(\gamma)}(y)G_{z,x}(y){\rm d} y,
\end{split}
\end{equation*}
as expected.
~\\
{\bf Acknowledgements:} N.C.-T. acknowledges support from CONACyT-MEXICO grant no. 636133 and financial support from the University of G\"ottingen. This work was concluded whilst N.C.-T. was visiting CIMAT whom she also acknowledges for their hospitality.
\bibliographystyle{abbrv}
|
{
"arxiv_id": "2302.08631",
"language": "en",
"timestamp": "2023-02-20T02:04:13",
"url": "https://arxiv.org/abs/2302.08631",
"yymm": "2302"
} | \section*{Acknowledgements}
We thank Aldo Pacchiano and Dylan Foster for helpful discussions.
\bibliographystyle{plainnat}
\section{Proof of \cref{thm:discnonconvex}}
\label{app:discnonconvex}
\discnonconvex*
\begin{proof}
We elide $x$ dependence for notational convenience.
\begin{align*}
&\inf_{p \in \Delta(\mathcal{A})} \sup_{\substack{a^\star \in \mathcal{A} \\ f^\star \in \mathcal{F}}}
\E_{\substack{a \sim p \\ a' \sim G|a}} \sq*{ f^\star(a) - f^\star(a^\star) - \frac{\gamma}{4} \cdot \paren{\widehat{f}(a') - f^\star(a')}^2 } \\
&= \inf_{p \in \Delta(\mathcal{A})} \sup_{\substack{q \in \Delta(\mathcal{A}) \\ f^\star \in \mathcal{F}}}
\E_{\substack{a \sim p \\ a' \sim G|a}} \sq*{ f^\star(a) - \frac{\gamma}{4} \cdot \paren{\widehat{f}(a') - f^\star(a')}^2 } - \E_{a \sim q}\sq*{f^\star(a)} \\
&= \inf_{p \in \Delta(\mathcal{A})} \sup_{\substack{q \in \Delta(\mathcal{A})}}
\E_{a \sim p}\sq*{\widehat{f}(a)} - \E_{a \sim q}\sq*{\widehat{f}(a)} \\
&\qquad + \frac{1}{2} \sum_a \sup_{h}\paren*{ 2 \paren*{q(a) - p(a)} h - \paren*{\gamma \sum_{a'} G(a|a') p(a')} \frac{1}{2} h^2 } & \left(\star\right) \\
&\leq \inf_{p \in \Delta(\mathcal{A})} \sup_{\substack{q \in \Delta(\mathcal{A})}}
\E_{a \sim p}\sq*{\widehat{f}(a)} - \E_{a \sim q}\sq*{\widehat{f}(a)} \\
&\qquad + \sum_a \paren*{\gamma {\sum_{a'} G(a|a') p(a')}}^{-1} \paren*{q(a) - p(a)}^2 & \left(\dagger\right)\\
&=\lastline,
\end{align*}
where $(\star)$ substitutes $h \doteq f^\star(a) - \widehat{f}(a)$ at each $a$.
$(\dagger)$ is only valid if $\forall a: 0 < \sum_{a'} G(a|a') p(a')$.
For indifference, we upper bound the expectand wrt $q$,
\begin{align*}
\forall a: -z &\geq -\widehat{f}(a) + \left(\gamma \sum_{a'} G(a|a') p(a')\right)^{-1} \paren*{1 - 2 p(a)}
\end{align*}
The result follows directly.
\end{proof}
\section{Regret Bounds}
\subsection{Proof of \cref{thm:strongobs}}
\label{app:strongobs}
\strongobs*
\begin{proof}
We upper bound the original problem \cref{eqn:discact}.
Let $a^*$ be any minimizer of $\widehat{f}$, choose
$z = -\alpha(G) \gamma^{-1} + \widehat{f}(a^*)$, and
relax the normalization constraint to a sub-normalization
constraint by shifting any residual mass to $p_{a^*}$.
Further wlog substitute $\widehat{f} \leftarrow \widehat{f} - \widehat{f}(a^*)$ for
ease of exposition. This results in the following upper bounding
program
\begin{alignat}{2}
&\! \min_{p \succeq 0} & \qquad & \alpha(G) \gamma^{-1} + p^\top \paren*{\widehat{f} + \alpha(G) \gamma^{-1}} \\
& \ensuremath{\mathrm{subject\ to}}\xspace & & \gamma G p \succeq \frac{1}{\widehat{f} + \alpha(G) \gamma^{-1}} \label{sobs:gpcons} \\
& & & -1^\top p \geq -1, \nonumber
\end{alignat}
where the additional $\alpha(G) \gamma^{-1}$ term is an upper bound
on the contribution to the objective function from the residual mass.
Next let $S$ be a maximum independent set and consider the
uniform distribution $p = \alpha(G)^{-1} 1_{a \in S}$.
This distribution satisfies \cref{sobs:gpcons}
because of (i) self-loops for each $a \in S$;
and (ii) for $a \not \in S$, because $S$ is a dominating set.
Having established feasibility, we apply strong duality.
\begin{alignat*}{2}
&\! \max_{u \succeq 0, v \geq 0} & \qquad & \alpha(G) \gamma^{-1} + u^\top \frac{1}{\widehat{f} + \alpha(G) \gamma^{-1}} - v \\
& \ensuremath{\mathrm{subject\ to}}\xspace & & \gamma u^\top G - v \preceq \paren*{\widehat{f} + \alpha(G) \gamma^{-1}}.
\end{alignat*}
Suppose $(u, v)$ is feasible. Unless at least one constraint is satisfied
with equality, the objective can be improved by increasing $u$.
Therefore at least one component of $u$ is positive; call this component $i$.
Then $(u + k e_i, v + k \gamma \| e_i^\top G \|_{\infty})$ is also feasible with change
in objective value
\begin{align*}
k \paren*{ \frac{1}{\widehat{f}_i + \alpha(G) \gamma^{-1}} - \gamma \left\| e_i^\top G \right\|_{\infty} }.
\end{align*}
If the coefficient on $k$ is positive then the dual is unbounded; but
the primal is feasible, therefore the coefficient on $k$
must be non-positive. Thus we can decrease $v$ without
degrading the objective. We conclude that a dual optimum exists with $v = 0$.
\begin{alignat*}{2}
&\! \max_{u \succeq 0} & \qquad & \alpha(G) \gamma^{-1} + u^\top \frac{1}{\widehat{f} + \alpha(G) \gamma^{-1}} \\
& \ensuremath{\mathrm{subject\ to}}\xspace & & \gamma u^\top G \preceq \paren*{\widehat{f} + \alpha(G) \gamma^{-1}}.
\end{alignat*}
Because of self-loops and $\widehat{f} \leq 1$, we must have $u \preceq \gamma^{-1} + \gamma^{-2} \alpha(G) \doteq u_{\max}$.
If $u$ is feasible for the above program, then $u \leftarrow u + u_{\min} \vec{1}$ is feasible for the following program,
\begin{alignat*}{2}
&\! \max_{u_{\max} + u_{\min} \succeq u \succeq u_{\min}} & \qquad & \alpha(G) \gamma^{-1} + u^\top \frac{1}{\widehat{f} + \alpha(G) \gamma^{-1}} \\
& \ensuremath{\mathrm{subject\ to}}\xspace & & \gamma u^\top G \preceq \paren*{\widehat{f} + \alpha(G) \gamma^{-1}} + \gamma 1^\top G u_{\min},
\end{alignat*}
which has strictly larger objective value.
If we choose $u_{\min} = |\mathcal{A}|^{-1} \alpha(G) \gamma^{-2}$, we can relax the constraint to
\begin{alignat*}{2}
&\! \max_{u_{\max} + u_{\min} \succeq u \succeq u_{\min}} & \qquad & \alpha(G) \gamma^{-1} + u^\top \frac{1}{\widehat{f} + \alpha(G) \gamma^{-1}} \\
& \ensuremath{\mathrm{subject\ to}}\xspace & & \gamma u^\top G \preceq 2 \paren*{\widehat{f} + \alpha(G) \gamma^{-1}}.
\end{alignat*}
Now we can upper bound the objective by
$$
\max_{u_{\max} + u_{\min} \succeq u \succeq u_{\min}} \gamma^{-1} \paren*{\alpha(G) + 2 u^\top \frac{1}{u^\top G}}.
$$
Applying \cref{lemma:boundthingy} finishes the proof.
\end{proof}
\begin{megalemma}
\label{lemma:boundthingy}
Let $G$ be the adjacency matrix of an undirected graph with self-loops,
let $\alpha(G)$ be the independence number of the graph, and let $|V|$
be the number of vertices. Then
\begin{align*}
\max_{u_{\max} \succeq u \succeq u_{\min} \succ 0} u^\top \frac{1}{u^\top G} \leq 4 \alpha(G) \log\paren*{\frac{4 |V|^2 u_{\max}}{\alpha(G) u_{\min}}}.
\end{align*}
\end{megalemma}
\begin{proof}
Because the objective is invariant to scaling all coordinates,
substitute $u \leftarrow u / (u_{\max} |V|)$.
This ensures $1^\top u \leq 1$. Next apply
\citet[Lemma 5]{alon2015online} with $\epsilon = u_{\min} / (u_{\max} |V|)$
to upper bound the objective.
\end{proof}
\subsection{Cops and Robbers}
\label{app:copsandrobbers}
The following Mathematica verifies the solution to the 2-action nonconvex program,
and also indicates it is feasible for the many-action nonconvex program
when $\widehat{f}_3 \geq \widehat{f}_2 + \nicefrac{1}{\gamma}$.
\begin{lstlisting}
problem = { -z + p1 f1 + p2 f2 + p1 p1 / (\[Gamma] G1) + p2 p2 / (\[Gamma] G2),
(1 - 2 p1) / (\[Gamma] G1) <= f1 - z,
(1 - 2 p2) / (\[Gamma] G2) <= f2 - z,
p1 >= 0, p2 >= 0, p1 + p2 == 1,
\[Gamma] > 0, f2 > f1
} /. G1 -> p2 /. G2 -> p1 /. f1 -> 0 /. p1 -> 1 - p2
megasimp = FullSimplify@Reduce[Drop[problem, 1], Reals]
ultrasolve = Assuming[megasimp,
FullSimplify@Minimize[{ problem[[1]], megasimp }, { p2, z }]]
(* When is this solution optimal for many actions? *)
Assuming[f3 >= f2 && megasimp,
FullSimplify@MaxValue[{
f3 - f2,
f3 >= f2
&& (f3 - z <= 1/\[Gamma] /. ultrasolve[[2]])
&& (megasimp /. ultrasolve[[2]])
},
{ f3, f2 }]]
\end{lstlisting}
\subsection{Apple Tasting}
\label{app:appletasting}
The following Mathematica verifies the solution to the 2-action nonconvex program
when the revealing action has the higher $\widehat{f}$.
\begin{lstlisting}
problem = { -z + p1 f1 + p2 f2 + p1 p1 / (\[Gamma] G1) + p2 p2 / (\[Gamma] G2),
(1 - 2 p1) / (\[Gamma] G1) <= f1 -z,
(1 - 2 p2) / (\[Gamma] G2) <= f2 -z,
p1 >= 0, p2 >= 0, p1 + p2 == 1,
\[Gamma] > 0, f1 > f2
} /. f2 -> 0 /. G1 -> p1 /. G2 -> p1 /. p2 -> 1 - p1
megasimp = FullSimplify@Reduce[ Drop[problem, 1], Reals ]
ultrasolve = Assuming[megasimp,
FullSimplify@Minimize[{ problem[[1]], megasimp }, { p1, z }]]
\end{lstlisting}
The following Mathematica verifies the solution to the 2-action nonconvex program
when the revealing action has the lower $\widehat{f}$.
\begin{lstlisting}
problem = { -z + p1 f1 + p2 f2 + p1 p1 / (\[Gamma] G1) + p2 p2 / (\[Gamma] G2),
(1 - 2 p1) / (\[Gamma] G1) <= f1 -z,
(1 - 2 p2) / (\[Gamma] G2) <= f2 -z,
p1 >= 0, p2 >= 0, p1 + p2 == 1,
\[Gamma] > 0, f2 > f1
} /. f1 -> 0 /. G1 -> p1 /. G2 -> p1 /. p2 -> 1 - p1
megasimp = FullSimplify@Reduce[Drop[problem, 1], Reals ]
ultrasolve = Assuming[megasimp,
FullSimplify@Minimize[{ problem[[1]], megasimp }, { p1, z }]]
\end{lstlisting}
\subsection{Proof of \cref{thm:weakobs}}
\label{app:weakobs}
\weakobs*
\begin{proof}
Let $a^*$ be a minimizer of $\widehat{f}$. Starting from \cref{eqn:discact},
substitute $z \leftarrow \widehat{f}_{a^*} - z$ and
$\widehat{f} \leftarrow \widehat{f} - \widehat{f}_{a^*}$. Let $S$ be the smallest
dominating set, and play a mixture
$p = \frac{\epsilon}{|S|} 1_{a \in S} + (1 - \epsilon) 1_{a = a^*}$.
Substituting into the objective and strengthening the constraint
yields the upper bounding program
\begin{alignat*}{2}
&\! \min_{\epsilon \in [0, 1], z > 0} & \qquad & \frac{\epsilon}{|S|} 1_S^\top \widehat{f} + z \\
& \ensuremath{\mathrm{subject\ to}}\xspace & & \gamma \frac{\epsilon}{|S|} \succeq \frac{1}{z}.
\end{alignat*}
Choose $z = \frac{|S|}{\epsilon \gamma}$; furthermore, upper bound
$1_S^\top \widehat{f}$ by $|S|$. This upper bounds the objective by
$\epsilon + \frac{|S|}{\epsilon \gamma}$.
Choosing $\epsilon = \sqrt{\frac{|S|}{\gamma}}$ yields objective bound
$2 \sqrt{\frac{|S|}{\gamma}}$.
\end{proof}
\section{Examples}
\subsection{Posted-Price Auction Bidding}
\label{app:postedprice}
The following Mathematica verifies the solution to the convex program.
\begin{lstlisting}
problemspec = {
f1 p1 + f2 (1 - p1) - z,
f1 - z > 0,
f2 - z > 0,
\[Gamma] p1 >= 1 / (f1 - z) - 1 / (f2 - z),
\[Gamma] (1 - p1) >= 1 / (f2 - z),
1 >= p1 >= 0,
\[Gamma] > 0
} /. f2 -> deltaf /. f1 -> 0;
Refine[FullSimplify@Minimize[{
problemspec[[1]],
Drop[problemspec, 1],
deltaf > 0
}, { z, p1 }],
\[Gamma] > 0 && deltaf > 0]
Refine[FullSimplify@Minimize[{
problemspec[[1]],
Drop[problemspec, 1],
deltaf <= 0
}, { z, p1 }],
\[Gamma] > 0 && deltaf <= 0]
\end{lstlisting}
\section{Introduction}
Autonomous agents execute experimental designs to reliably acquire
knowledge. These designs require exploration of potentially sub-optimal
actions to ensure convergence, but this exploration activity incurs
real costs and is ideally minimized. Intuitively, there is no need to
explore a potentially suboptimal action if a presumed better action
yields the same information. Online learning with feedback graphs,
first introduced by \citet{mannor2011bandits}, confirms and extends
this intuition. For the adversarial multi-armed (non-contextual)
bandit setting, the theory is mature and asymptotic minimax rates are
well understood; but associated algorithms involve enumeration over
arms. \citep{alon2015online,chen2021understanding,ito2022nearly}
Extending these approaches to the contextual bandit setting is
challenging unless a suitable policy cover is available, e.g., assuming
linearity.~\citep{wang2021adversarial}
The historical development of vanilla (non-graph) contextual
bandit algorithms followed a similar path, with early approaches
finding enumeration productive theoretically but challenging
computationally.~\citep{beygelzimer2011contextual,dudik2011efficient,agarwal2014taming}
More recently, reductions of contextual bandits to online
regression have proven both theoretically and computationally
performant.~\citep{foster2020beyond,zhu2022contextual} These reductions
are derived from a minimax characterization of the statistical complexity
of interactive learning.~\citep{foster2021statistical} In this work we
specialize the minimax framework to contextual bandits with graph feedback,
with the goal of encouraging utilization of graph feedback
in practical applications.
\subsection*{Contributions}
\begin{itemize}
\item In \cref{subsec:convexprogram} we derive a generic reduction of
contextual bandits with graph feedback to online regression.
The reduction is computationally felicitous: it requires
the per-round solution of a convex program
whose dimensionality is one more than the number of actions.
\item In \cref{subsec:regretbounds} we specialize the reduction to
compare with several known lower bounds.
\item In \cref{subsec:postedprices} we apply the approach to no-regret learning in partially informed auctions,
improving over previous results both
statistically and computationally.
\end{itemize}
\section{Problem Setting}
\label{sec:setting}
\paragraph{Notation} Let $\Delta(S)$ denote the set of all
Radon probability measures over a set $S$. For $n \in \mathbb{N}$
let $[n]$ denote the set $\{ 1, 2, \ldots, n \}$. Let
$\aleph$ denote uninformative feedback. Let $\conv(S)$
denote the convex hull of a set $S$.
\paragraph{Protocol} We consider the following graph-feedback contextual
bandit problems. At any time step $t \in [T]$, nature selects a
context $x_t \in \mathcal{X}$; a distribution $\P_t \in \Delta(\ell_t)$
over loss functions $\ell_t: \mathcal{A} \rightarrow [0,1]$ mapping from the
action set $\mathcal{A}$ to a loss value in $[0, 1]$; and a set of conditional
probabilities $G_t: \mathcal{A} \rightarrow [0, 1]^\mathcal{A}$. Nature
reveals $x_t$ and $G_t$ to the algorithm, which then selects
an action $a_t$. Nature then samples $\ell_t \sim \P_t$ and reveals
(censored) loss realization
$l_t \in \paren*{[0, 1] \cup \{ \aleph \}}^{\mathcal{A}}$
\begin{align*}
l_t(a) &=
\begin{cases}
\ell_t(a) & \text{with probability } G_t\paren*{\left.a\right| a_t} \\
\aleph & \text{otherwise}
\end{cases}
\end{align*}
by independently sampling each action with probability
according to $G_t$, i.e., a stochastic feedback
graph.~\citep{esposito2022learning} This protocol requires the description
of $G_t$ to be revealed to the algorithm prior to making a
decision, i.e., the informed setting.~\citep{alon2017nonstochastic}
This implies $G_t \perp \ell_t, a_t | x_t$, i.e., the graph must be
conditionally independent of the loss realization and algorithm
action given the context. See \cref{subsec:postedprices},
\cref{subsec:inventory} and \cref{sec:discussion} for commentary.
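To make the protocol concrete, the following sketch simulates the censoring step for a single round; the graph, loss vector, and played action below are arbitrary illustrative values rather than part of the protocol, and $\aleph$ is encoded as \texttt{nan}.
\begin{python}
import numpy as np

def censor_losses(loss, G, a_t, rng):
    # loss: realized losses, shape (A,); G[a, a_prime] is the probability that
    # playing a_prime reveals the loss of action a; aleph is encoded as np.nan.
    revealed = rng.random(len(loss)) < G[:, a_t]
    return np.where(revealed, loss, np.nan)

rng = np.random.default_rng(0)
G = np.array([[1.0, 1.0],    # toy graph: action 1 reveals both losses,
              [0.0, 1.0]])   # action 0 reveals only its own loss
l_t = censor_losses(loss=np.array([0.3, 0.7]), G=G, a_t=0, rng=rng)
\end{python}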
\paragraph{Realizability} We make the following realizability assumption
studied in the contextual bandit literature \citep{agarwal2012contextual,
foster2018practical, foster2020beyond, simchi2021bypassing}.
\begin{assumption}[Realizability]
\label{asm:realizability}
There exists a regression function $f^\star \in \mathcal{F}$ such that
$\E \sq{\ell_t(a) \mid x_t} = f^\star(x_t, a)$ for any $a \in \mathcal{A}$ and
across all $t \in [T]$.
\end{assumption}
Three comments are in order. First, realizability is leveraged
to derive practical algorithms: the agnostic variant of the minimax
formulation we employ operates in policy space and is not computationally
convenient.~\citep{foster2022complexity} Second, misspecification
can be incorporated while maintaining computational efficiency, but
we do not complicate the exposition here.~\citep{foster2020adapting}
Third, \cref{asm:realizability} induces a ``semi-adversarial'' setting,
wherein nature is completely free to determine the context and graph
sequences; and has considerable latitude in determining the loss
distribution subject to a moment constraint. This robustness is a
practical benefit, but creates ambiguity in \cref{subsec:regretbounds}
about whether to compare to stochastic or adversarial bounds.
\paragraph{Regret}
For each regression function $f\in\mathcal{F}$, let $\pi_f(x_t) \vcentcolon= \argmin_{a
\in \mathcal{A}} f(x_t,a)$ denote the induced policy, and define $\pi^\star \vcentcolon=
\pi_{f^\star}$ as the optimal policy. We measure the performance of the
learner via regret to $\pi^\star$,
\begin{align*}
\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{CB}}}\xspace(T) \vcentcolon= \sum_{t=1}^T \ell_t(a_t) - \ell_t(\pi^\star(x_t)),
\end{align*}
where (as is standard) the contextual bandit regret does not depend upon $G_t$.
\paragraph{Regression Oracle}
We assume access to an online regression oracle \ensuremath{\mathrm{\mathbf{Alg}}_{\mathsf{Sq}}}\xspace for function
class $\mathcal{F}$, which is an algorithm for online learning with (censored)
squared loss. We consider the following protocol. At each round $t \in
[T]$, the oracle produces an estimator $\widehat{f}_t \in \conv(\mathcal{F})$, then
receives a context-loss tuple $(x_t, l_t)$. The goal of the oracle is
to accurately predict the loss as a function of the context and action,
and we evaluate its performance via the square loss
$\sum_{a | l_t(a) \neq \aleph} (\widehat{f}_t(x_t,a)-l_t(a))^2$.
We measure the oracle's cumulative performance via square-loss regret
to $\mathcal{F}$.
\begin{assumption}[Bounded square-loss regret]
\label{asm:regression_oracle}
The regression oracle \ensuremath{\mathrm{\mathbf{Alg}}_{\mathsf{Sq}}}\xspace guarantees that for any (potentially
adaptively chosen) sequence $\curly*{(x_t, l_t)}_{t=1}^T$,
\begin{equation*}
\sum_{t=1}^T \sum_{a | l_t(a) \neq \aleph} \paren*{\widehat{f}_t(x_t, a) - l_t(a)}^2
-\inf_{f \in \mathcal{F}}\sum_{t=1}^T \sum_{a | l_t(a) \neq \aleph} \paren*{f(x_t, a) - l_t(a)}^2 \\
\leq \ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T),\notag
\end{equation*}
for some (non-data-dependent) function $\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T)$.
\end{assumption}
Note $\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace$ does depend upon $G_t$, in contrast to $\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{CB}}}\xspace$. We will
use the Online Newton Step (ONS) over a $d$-dimensional linear predictor
as our running analytical prototype for $\ensuremath{\mathrm{\mathbf{Alg}}_{\mathsf{Sq}}}\xspace$: it achieves $\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T) =
O(d \log T)$.~\citep{hazan2007logarithmic} ONS roughly corresponds to how
the analysis is leveraged in practice: reduction to a nonlinear function
class and gradient-based optimization that lacks formal guarantees,
but with the assurance that good predictions induce good decisions.
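For intuition only, here is a minimal sketch of an ONS-style update for the censored squared loss; the class name, step size, and the output clipping (used in place of the projection step of the full algorithm) are illustrative choices, not the tuned construction of \citet{hazan2007logarithmic}.
\begin{python}
import numpy as np

class OnlineNewtonStepSketch:
    def __init__(self, dim, eta=0.5, eps=1.0):
        self.theta = np.zeros(dim)
        self.A = eps * np.eye(dim)   # running second-order statistics
        self.eta = eta

    def predict(self, phi):
        # phi: feature vector for a (context, action) pair
        return float(np.clip(phi @ self.theta, 0.0, 1.0))

    def update(self, phi, observed_loss):
        # called only for actions whose loss was revealed, i.e. l_t(a) != aleph
        g = 2.0 * (self.predict(phi) - observed_loss) * phi
        self.A += np.outer(g, g)
        self.theta -= self.eta * np.linalg.solve(self.A, g)
\end{python}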
\section{Algorithms}
\subsection{Background: Minimax Reduction Design}
Our approach is based upon the minimax formulation of
\citet{foster2021statistical}, which we review here. Define
the decision estimation coefficient as
$\dec_\gamma(\mathcal{F}) \vcentcolon= \sup_{\widehat{f}, x, G} \dec_{\gamma}(\mathcal{F};\widehat{f},x, G)$, where
\begin{equation}
\begin{split}
&\dec_{\gamma}(\mathcal{F}; \widehat{f}, x, G) \\
&\qquad \vcentcolon= \inf_{p \in \Delta(\mathcal{A})} \sup_{\substack{a^\star \in \mathcal{A} \\ f^\star \in \mathcal{F}}}
\E_{\substack{a \sim p \\ a' \sim G|a}} \sq*{ f^\star(x, a) - f^\star(x, a^\star) - \frac{\gamma}{4} \cdot \paren{\widehat{f}(x,a') - f^\star(x,a')}^2 }.
\end{split}
\label{eqn:minimax}
\end{equation}
Because the $\dec$ is the expected difference between per-round $\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{CB}}}\xspace$
and $\gamma$-scaled $\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace$, an upper bound on $\dec$ implies an
upper bound on $\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{CB}}}\xspace(T)$ in terms of $\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T)$. This is stated
precisely in \cref{thm:decbound}, and conveniently summarized
in \cref{tab:decoderring} with respect to the minimax taxonomy
of \citet[Theorem 1]{alon2015online}.
\begin{table}[H]
\caption{$\dec$ bounds, regret bounds, and corresponding minimax rates. $\log(\delta)$ factors are elided.}
\label{tab:decoderring}
\vskip 0.15in
\begin{center}
\begin{tabular}{lcr}
\toprule
$\dec$ bound &
$\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{CB}}}\xspace(T)$ bound &
Corresponds to \\
\midrule
$O\paren*{\frac{1}{\gamma}}$ & $O\paren*{\sqrt{T \ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T)}}$ & Strongly observable \\
$O\paren*{\frac{1}{\sqrt{\gamma}}}$ & $O\paren*{T^{2/3} \ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T)^{1/3}}$ & Weakly observable \\
$O\paren*{1}$ & $O\paren*{T}$ & Unobservable \\
\bottomrule
\end{tabular}
\end{center}
\vskip -0.1in
\end{table}
\begin{restatable}{theorem}{decbound}
\label{thm:decbound}
Suppose $\dec_\gamma(\mathcal{F}) \leq \paren*{\nicefrac{C}{4}} \gamma^{-\beta}$.
Then with probability at least $(1 - \delta)$,
\begin{equation*}
\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{CB}}}\xspace(T) \leq \frac{1}{4} \paren{1 + \beta} \paren*{C \beta^{-\beta} T \ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T)^\beta}^{\frac{1}{1+\beta}} + \sqrt{2 T \log\paren*{\delta^{-1}}}.
\end{equation*}
\end{restatable}
\begin{proof} The argument closely follows \citet{foster2020adapting}.
First note that the squared loss term in \cref{eqn:minimax}
is the expected per-round $\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace$,
\begin{align*}
&\E_{\substack{a' \sim G|a}} \sq*{ \paren{\widehat{f}(x,a') - f^\star(x,a')}^2 } \\
&= \E_{\ell \sim P}\sq*{ \sum_{a'} G(a'|a) \paren*{\paren*{\widehat{f}(x,a') - \ell(a')}^2 - \paren*{f^\star(x,a') - \ell(a')}^2 } } & \paren*{\dagger} \\
&= \E_{\substack{\ell \sim P \\ l \sim \ell|G}}\sq*{ \sum_{a'} 1_{l(a') \neq \aleph} \paren*{\paren*{\widehat{f}(x,a') - l(a')}^2 - \paren*{f^\star(x,a') - l(a')}^2 } },
\end{align*}
where $\paren*{\dagger}$ follows from realizability. Next
sum \cref{eqn:minimax} over the horizon and rearrange terms,
\begin{equation*}
\begin{split}
\E\sq*{\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{CB}}}\xspace(T)} &\leq \frac{\gamma}{4} \E\sq*{\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T)} + T \frac{C}{4} \gamma^{-\beta} \\
&\leq \frac{\gamma}{4} \ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T) + T \frac{C}{4} \gamma^{-\beta}, \\
\end{split}
\end{equation*}
where the last line follows from \cref{asm:regression_oracle} holding for
any sequence. Finally concentrate $\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{CB}}}\xspace(T)$ around its expectation
via Hoeffding's inequality
\begin{equation*}
\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{CB}}}\xspace(T) \leq \E\sq*{\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{CB}}}\xspace(T)} + \sqrt{2 T \log\paren*{\delta^{-1}}}.
\end{equation*}
Choosing $\gamma^{1+\beta} = \nicefrac{C \beta T}{\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T)}$ yields
the result.
\end{proof}
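For instance, in the strongly observable row of \cref{tab:decoderring} we have $\beta = 1$; the tuning becomes $\gamma = \sqrt{\nicefrac{C T}{\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T)}}$ and the bound reads
\begin{equation*}
\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{CB}}}\xspace(T) \leq \frac{1}{2} \sqrt{C\, T\, \ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T)} + \sqrt{2 T \log\paren*{\delta^{-1}}},
\end{equation*}
which with the ONS rate $\ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T) = O(d \log T)$ is $O\paren*{\sqrt{C d T \log T}}$.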
To derive an algorithm (rather than merely assert the existence of
a low-regret reduction), a constructive bound is necessary, i.e.,
a specific procedure for computing the minimax $p$ in \cref{eqn:minimax}.
\subsection{Nonconvex Program}
Our first result bounds \cref{eqn:minimax}
by the value of a non-convex optimization program.
\begin{restatable}{theorem}{discnonconvex}
\label{thm:discnonconvex}
For finite $\mathcal{A}$, \cref{eqn:minimax} is upper bounded by the value of
\begin{alignat}{2}
&\! \min_{p \in \Delta(\mathcal{A}), z} & \qquad & p^\top \paren*{\widehat{f} - z} + p^\top \frac{p}{\gamma G p} \label{eqn:discnonconvex} \\
& \ensuremath{\mathrm{subject\ to}}\xspace & & \frac{1 - 2 p}{\gamma G p} \preceq \widehat{f} - z \nonumber \\
& & & \gamma G p \succ 0, \nonumber
\end{alignat}
where $p(a)$ and $\widehat{f}(x, a)$ are interpreted as vectors in $\mathbb{R}^{|\mathcal{A}|}$,
and $G(a'|a)$ is interpreted as a weighted adjacency matrix in $\mathbb{R}^{|\mathcal{A}|\times|\mathcal{A}|}$.
\end{restatable}
\newcommand{\lastline}{\inf_{p \in \Delta(\mathcal{A})} \sup_{\substack{q \in \Delta(\mathcal{A})}} \E_{a \sim q}\sq*{-\widehat{f}(a) + \paren*{\gamma {\sum_{a'} G(a|a') p(a')}}^{-1} \paren*{q(a) - 2 p(a)} } \\
&\qquad + \E_{a \sim p}\sq*{ \widehat{f}(a) + \paren*{\gamma {\sum_{a'} G(a|a') p(a')}}^{-1} p(a)}}
\begin{proof}
We provide a sketch here: for a complete proof see \cref{app:discnonconvex}.
We elide $x$ dependence for ease of exposition. We first empower the
adversary to play any distribution over actions $q$ rather than $1_{a^*}$.
Using the convex conjugate, we establish
\begin{equation*}
\begin{split}
\dec_{\gamma}(\mathcal{F}; \widehat{f}, G) &\leq \lastline
\end{split}
\end{equation*}
For indifference, we upper bound the expectand wrt $q$,
\begin{align*}
\forall a: -z &\geq -\widehat{f}(a) + \left(\gamma \sum_{a'} G(a|a') p(a')\right)^{-1} \paren*{1 - 2 p(a)}.
\end{align*}
The result follows directly.
\end{proof}
The proof of \cref{thm:discnonconvex} only has one inequality step, which is tight
if $\mathcal{F}$ contains a worst-case function. Because the $\dec$ characterizes
minimax rates, we anticipate \cref{thm:discnonconvex} will achieve minimax
optimal rates. Unfortunately the optimization problem is not typically tractable
as written, but can be relaxed to a convex program.
\subsection{Convex Program}
\label{subsec:convexprogram}
\begin{figure}[t]
\begin{python}
def makeProblem(nactions):
import cvxpy as cp
gammaG = cp.Parameter((nactions, nactions), nonneg=True)
fhat = cp.Parameter(nactions)
p = cp.Variable(nactions, nonneg=True)
z = cp.Variable()
objective = cp.Minimize(fhat @ p - z)
constraints = [
cp.sum(p) == 1,
gammaG @ p >= cp.inv_pos(fhat - z)
]
problem = cp.Problem(objective, constraints)
assert problem.is_dcp(dpp=True) # proof of convexity
return problem, gammaG, fhat, p, z
\end{python}
\caption{An implementation of \cref{eqn:discact} via cvxpy~\citep{diamond2016cvxpy,agrawal2018rewriting}.}
\label{fig:convexprogram}
\end{figure}
Our next result is a tractable constructive solution to \cref{eqn:minimax}.
\begin{restatable}{corollary}{discact}
\label{thm:discact}
For finite $\mathcal{A}$, \cref{eqn:minimax} is upper bounded by twice the value of the (convex)
optimization problem
\begin{alignat}{2}
&\! \min_{p \in \Delta(\mathcal{A}), z \prec \widehat{f}} & \qquad & p^\top \widehat{f} - z \label{eqn:discact} \\
& \ensuremath{\mathrm{subject\ to}}\xspace & & \gamma G p \succeq \frac{1}{\widehat{f} - z}, \nonumber
\end{alignat}
where $p(a)$ and $\widehat{f}(x, a)$ are interpreted as vectors in $\mathbb{R}^{|\mathcal{A}|}$,
and $G(a'|a)$ as a matrix in $\mathbb{R}^{|\mathcal{A}|\times|\mathcal{A}|}$.
\end{restatable}
\begin{proof}
This follows from strengthening the constraint in \cref{eqn:discnonconvex}
to $\frac{1}{\gamma G p} \succeq \widehat{f} - z$.
\end{proof}
\Cref{eqn:discact} is a generalization of the inverse gap weighting (IGW)
technique to the informed graph feedback setting: when $G = I$, we
recover the log-barrier solution of \citet{foster2020adapting}.
When $G \succeq I$, the solution to \cref{eqn:discact} is at most
$\nicefrac{3}{\gamma}$ larger than \cref{eqn:discnonconvex}, and
therefore we anticipate minimax optimal rates for strongly observable
graphs with self-loops. However this relaxation is too loose
to achieve minimax rates for strongly observable graphs without
self-loops, as discussed in the next section.
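As a usage sketch of the implementation in \cref{fig:convexprogram} (assuming \texttt{makeProblem} from the figure is in scope and cvxpy is installed), the snippet below solves \cref{eqn:discact} for a toy instance; the values of $\gamma$, $\widehat{f}$, and the graph are arbitrary illustrative choices. With $G = I$ the resulting $p$ behaves like the inverse-gap-weighted distribution discussed above.
\begin{python}
import numpy as np

problem, gammaG, fhat, p, z = makeProblem(nactions=3)
gammaG.value = 100.0 * np.eye(3)          # gamma = 100 and G = I
fhat.value = np.array([0.0, 0.2, 0.5])
problem.solve()
print(p.value, z.value)                   # exploration distribution and the scalar z
\end{python}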
\subsection{Regret Bounds}
\label{subsec:regretbounds}
In this section we show correspondence between \cref{eqn:discnonconvex},
\cref{eqn:discact}, and known results in the graph feedback literature.
\subsubsection{Strongly Observable Case, Self-Loops}
If $G \succeq I$, the log-barrier solution of
\citet{foster2020adapting} certifies an
$O(\sqrt{|\mathcal{A}| T \ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T)})$ regret bound.
The following theorem improves the dependence from the number of
actions $|\mathcal{A}|$ to the independence number $\alpha(G)$, matching
strongly observable minimax bandit rates qua \cref{tab:decoderring}
and \citet{alon2015online}.
\begin{restatable}[Strongly Observable]{theorem}{strongobs}
\label{thm:strongobs}
Suppose $G \succeq I$ is binary. Let $\alpha(G)$ be the independence
number of $G$. Then \cref{eqn:discact} has objective value at most
\begin{align*}
\gamma^{-1} \alpha(G) \paren*{1 + 8 \log\paren*{\frac{4 |\mathcal{A}|^2}{\alpha(G)} \paren*{ 1 + |\mathcal{A}| + \frac{\gamma |\mathcal{A}|}{\alpha(G)} } } }.
\end{align*}
\end{restatable}
\begin{proof} We provide a sketch here: for a complete proof
see \cref{app:strongobs}. We use an Abe-Long~\citep{abe1999associative}
style construction to upper bound \cref{eqn:discact}, switch
to the dual, and ultimately arrive at
$\max_{u_{\max} \succeq u \succeq u_{\min} \succ 0} u^\top \frac{1}{u^\top G}$;
we then apply \citet[Lemma 5]{alon2015online}.
\end{proof}
Note computing a maximum independent set is both NP-hard and
difficult to approximate~\citep{bar1984approximation}, whereas
\cref{eqn:discact} is a convex program, i.e.,
easy to compute. For example, the explicit solution
in \cref{subsec:inventory} has support over a non-minimal dominating
set, i.e., the solution does not reveal the maximum independent set;
nonetheless the regret bound is independent of the number of actions.
\subsubsection{Other Strongly Observable Graphs}
\Cref{thm:strongobs} only applies to graphs where every vertex has a
self-loop, i.e., every action reveals at least itself. This is a subset
of all strongly observable graphs, as a vertex $a$ is strongly observable
if it has a self-loop or if every other action reveals $a$, i.e.,
$\forall a' \in \mathcal{A} \setminus \{ a \}: G(a|a')=1$;
and a graph is strongly observable if all vertices are strongly
observable. As we now show, the additional slack from
convexification is problematic in strongly observable graphs
without self-loops, but the nonconvex program yields good solutions.
For ease of exposition, in this section we will assume the minimum value of $\widehat{f}$
is zero: substituting
$(z, \widehat{f}) \leftarrow \paren*{z + \min \widehat{f}, \widehat{f} - \min \widehat{f} }$ into \cref{eqn:discnonconvex}
and \cref{eqn:discact} demonstrates this is wlog.
\paragraph{Cops and Robbers}
The ``cops and robbers'' graph $G = 1 1^\top - I$
is strongly observable and has no self-loops.
For \cref{eqn:discact}, a distribution supported on only the two actions
with the two smallest values of $\widehat{f}$ suffices to satisfy all
constraints and has a lower objective than solutions with larger support.
Let $\widehat{f}_2 \geq 0$ be the second smallest value.
If $\gamma$ is sufficiently large such that $\gamma \widehat{f}_2 > 2$,
\cref{eqn:discact}
has optimal value $2 \sqrt{\nicefrac{\widehat{f}_2}{\gamma}}$, which corresponds
to a weakly observable rate. Fortunately \cref{eqn:discnonconvex} is
analytically tractable (see \cref{app:copsandrobbers}): considering
a distribution supported on the two actions with the smallest values of $\widehat{f}$ yields
optimum
\begin{align*}
p^*_2 &= \frac{2}{2 + \gamma \widehat{f}_2 + \sqrt{4 + \gamma^2 \widehat{f}_2^2}} \in \left( 0, \frac{1}{2} \right], \\
z^* &= -\frac{1}{\gamma} + \frac{\gamma \widehat{f}_2 + \sqrt{4 + \gamma^2 \widehat{f}_2^2}}{2 \gamma},
\end{align*}
with value $\nicefrac{1}{\gamma}$, which corresponds to the same rate
as full information feedback. This solution is feasible for the many
action problem if the third smallest value $\widehat{f}_3$ obeys
$\widehat{f}_3 \geq \widehat{f}_2 + \nicefrac{1}{\gamma}$;
otherwise, we can decrease $z$ to ensure feasibility, adding at most
$\nicefrac{1}{\gamma}$ to the objective.
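The closed form above can be sanity-checked numerically; the snippet below plugs $p_2^*$ and $z^*$ into the objective and constraints of \cref{eqn:discnonconvex} for the two-action cops-and-robbers instance (the values of $\gamma$ and $\widehat{f}_2$ are arbitrary).
\begin{python}
import numpy as np

gamma, f2 = 10.0, 0.3
fhat = np.array([0.0, f2])
G = np.array([[0.0, 1.0], [1.0, 0.0]])      # cops and robbers: each action reveals the other

root = np.sqrt(4.0 + (gamma * f2) ** 2)
p2 = 2.0 / (2.0 + gamma * f2 + root)
z = -1.0 / gamma + (gamma * f2 + root) / (2.0 * gamma)
p = np.array([1.0 - p2, p2])

Gp = G @ p
objective = p @ (fhat - z) + p @ (p / (gamma * Gp))
assert np.isclose(objective, 1.0 / gamma)                          # value 1/gamma
assert np.all((1.0 - 2.0 * p) / (gamma * Gp) <= fhat - z + 1e-9)   # constraints hold
\end{python}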
\paragraph{Apple Tasting} In the two-action case, the only remaining
strongly observable graph is the one where one action reveals everything and the
other reveals nothing. This scenario was originally termed apple
tasting by \citet{helmbold2000apple}, but has more recently been framed as spam
filtering.~\citep{van2021beyond} The graph is
$G = \paren*{\begin{array}{cc} 0 & 1 \\ 0 & 1 \\ \end{array}}$.
If $\gamma$ is sufficiently large such that $\widehat{f}_2 \gamma > 1$,
\cref{eqn:discact} has optimal value $2 \sqrt{\frac{\widehat{f}_2}{\gamma}}$,
which corresponds to a weakly observable rate. Fortunately
\cref{eqn:discnonconvex} is analytically tractable (see
\cref{app:appletasting}): let $\widehat{f}_1$ be the estimate for the
uninformative action and let $\widehat{f}_2$ be the estimate for the revealing
action. The solution is
\begin{align*}
p_2^* &= \begin{cases} \frac{2}{4 + \gamma \widehat{f}_2} & \widehat{f}_2 > 0 \\
\min\paren*{ 1, \frac{2}{4 - \gamma \widehat{f}_1} } & \widehat{f}_1 > 0
\end{cases}, \\
\text{opt} &= \begin{cases} \frac{1}{\gamma} + \frac{\widehat{f}_2}{4 + \gamma \widehat{f}_2} & \widehat{f}_2 > 0 \\
\max\paren*{0, \frac{1}{\gamma} - \frac{\widehat{f}_1}{4 - \gamma \widehat{f}_1} } & \widehat{f}_1 > 0
\end{cases},
\end{align*}
and in particular $\text{opt} \leq \frac{2}{\gamma}$, which corresponds to a strongly observable rate.
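A brute-force check of the two-action program is again straightforward; the snippet below grids over $(p_2, z)$ for illustrative values of $\gamma$ and $\widehat{f}_2$ (with $\widehat{f}_1 = 0$), confirms that the minimum stays below $\nicefrac{2}{\gamma}$, and prints the closed-form value for comparison.
\begin{python}
import numpy as np

gamma, f2 = 8.0, 0.5                       # illustrative values; action 2 reveals both losses
p2 = np.linspace(1e-3, 1.0, 500)[None, :]
z = np.linspace(-1.0, 1.0, 1500)[:, None]
p1 = 1.0 - p2

obj = p1 * (0.0 - z) + p2 * (f2 - z) + (p1 ** 2 + p2 ** 2) / (gamma * p2)
feasible = ((1.0 - 2.0 * p1) / (gamma * p2) <= 0.0 - z) & \
           ((1.0 - 2.0 * p2) / (gamma * p2) <= f2 - z)
grid_min = np.where(feasible, obj, np.inf).min()

assert grid_min <= 2.0 / gamma + 1e-2                     # strongly observable rate
print(grid_min, 1.0 / gamma + f2 / (4.0 + gamma * f2))    # grid value vs closed form
\end{python}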
\subsubsection{Weakly Observable}
The following theorem, when combined with \cref{thm:decbound},
certifies an $O\paren*{\delta(G)^{1/3} T^{2/3} \ensuremath{\mathrm{\mathbf{Reg}}_{\mathsf{Sq}}}\xspace(T)^{1/3}}$ regret
bound, matching weakly observable
minimax bandit rates qua \cref{tab:decoderring} and \citet{alon2015online}.
\begin{restatable}[Weakly Observable]{theorem}{weakobs}
\label{thm:weakobs}
Suppose $G$ is binary and weakly observable. Let $\delta(G)$ be the
size of the smallest dominating set. For $\gamma \geq \delta(G) $,
\cref{eqn:discact} has objective value at most
$2 \sqrt{\frac{\delta(G)}{\gamma}}$.
\end{restatable}
\begin{proof} We provide a sketch here: for a complete proof
see \cref{app:weakobs}. We play a tuned epsilon-greedy strategy
over the dominating set and evaluate the objective.
\end{proof}
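The construction in the proof is easy to write down explicitly; below is a sketch for a toy weakly observable instance (the graph, $\gamma$, and $\widehat{f}$ are placeholders), where \texttt{S} is a dominating set and $p$ and $z$ follow \cref{app:weakobs}.
\begin{python}
import numpy as np

gamma = 400.0
fhat = np.array([0.9, 0.4, 0.0])
G = np.zeros((3, 3)); G[:, 0] = 1.0      # only action 0 reveals losses (of every action)
S = [0]                                  # a dominating set, so delta(G) = 1
astar = int(np.argmin(fhat))

eps = np.sqrt(len(S) / gamma)
p = np.zeros(3); p[S] = eps / len(S); p[astar] += 1.0 - eps
z = fhat[astar] - len(S) / (eps * gamma)                    # = fhat[astar] - sqrt(|S|/gamma)

assert np.all(gamma * (G @ p) >= 1.0 / (fhat - z) - 1e-9)   # feasible for the convex program
print(p @ fhat - z)                                          # at most 2 * sqrt(|S| / gamma)
\end{python}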
\section{Examples}
\subsection{Posted-Price Auction Bidding}
\label{subsec:postedprices}
This example is inspired by the no-regret algorithm for partially
informed auctions of \citet{guo2022no}; but is more broadly applicable
to situations where there are two actions, one of which is ``optional'' and the other is ``do nothing.''
There are 2 actions: action 1 is do nothing and action 2 is the optional
action, and the feedback graph is
$G_t = \paren*{\begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array}}$,
i.e., the cost of the ``do nothing'' action is always revealed.
\Cref{eqn:discact} simplifies to
\begin{alignat*}{2}
&\! \min_{p_1 \in [0, 1], z \prec \widehat{f}} & \qquad & \widehat{f}^\top \paren*{\begin{array}{c} p_1 \\ 1 - p_1 \end{array}} - z \\
& \text{subject to} & & \gamma p_1 \geq \frac{1}{\widehat{f}_1 - z} - \frac{1}{\widehat{f}_2 - z} \\
& & & \gamma (1 - p_1) \geq \frac{1}{\widehat{f}_2 - z},
\end{alignat*}
which has solution (see \cref{app:postedprice}) $$
\begin{aligned}
\text{opt} &=
\frac{1}{\gamma} + \frac{\paren*{\Delta \widehat{f}}_+}{1 + \gamma \paren*{\Delta \widehat{f}}_+} &
\text{ with } & p_1^* &= \frac{\gamma\paren*{\Delta \widehat{f}}_+}{1 + \gamma \paren*{\Delta \widehat{f}}_+},
\end{aligned}
$$ where $\paren*{\Delta \widehat{f}}_+ \doteq \max\paren*{0, \widehat{f}_2 - \widehat{f}_1}$.
When the ``optional'' action has lower loss $(\widehat{f}_2 \leq \widehat{f}_1)$ it
is selected with probability 1, otherwise it is selected with decreasing
probability.
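A quick numerical sanity check of this closed form, by brute force over a grid of $(p_1, z)$ for illustrative values of $\gamma$ and $\Delta\widehat{f}$ (with $\widehat{f}_1 = 0$), is sketched below.
\begin{python}
import numpy as np

gamma, delta_f = 10.0, 0.5
p1_star = gamma * delta_f / (1.0 + gamma * delta_f)
opt = 1.0 / gamma + delta_f / (1.0 + gamma * delta_f)

p1 = np.linspace(0.0, 1.0, 1001)[None, :]
z = np.linspace(-5.0, -1e-3, 1001)[:, None]
feasible = (gamma * p1 >= 1.0 / (0.0 - z) - 1.0 / (delta_f - z)) & \
           (gamma * (1.0 - p1) >= 1.0 / (delta_f - z))
objective = np.where(feasible, (1.0 - p1) * delta_f - z, np.inf)

i, j = np.unravel_index(np.argmin(objective), objective.shape)
assert abs(objective.min() - opt) < 1e-2 and abs(p1.ravel()[j] - p1_star) < 1e-2
\end{python}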
In the posted price auction setting of \citet{guo2022no}, the bidder
experiences a sequence of auctioned items. A posted price $\rho_t \in
[0, 1]$ and public item features $h_t$ are revealed to the bidder,
which can choose not to accept the price (and pay nothing), or to
accept the price. If the price is accepted, private item features $x_t$
are revealed which completely determine the private value $v_t \in [0,
1]$ of the item and the bidder experiences loss $l_t(1) = (1 + \rho_t
- v_t)/2$. Note $l_t(0) = 0$ is always observed. Thus the regression
oracle must predict $\E\sq*{\left.v_t\right|h_t,\rho_t}$ as the other
loss components are determined. In their ``unknown'' item distribution
setting, the hidden feature vector $x_t$ is sampled from a fixed unknown
distribution and composed with a fixed unknown function $h: \mathcal{X} \to [n]$
to generate $h_t \doteq h(x_t)$. Under these conditions, $h_t$ is an
$n$-dimensional basis vector and $\E\sq*{\left.v_t\right|h_t,\rho_t}$
is a linear function of $h_t$. Therefore we instantiate our algorithm
with ONS as the online regressor and compare to their results in
\cref{tab:compguo}.
\begin{table}[ht]
\caption{Comparison with ``unknown item distribution'' from \citet[Table
1]{guo2022no}. Regret bounds hold with probability at least $(1 -
\delta)$ and dependence upon $\log(\delta)$ is suppressed. Our result is
statistically superior in two ways: first we do not distinguish between
stochastic and adversarial prices since they are part of the context;
second, we can handle arbitrary masking functions which induce
linearly realizable conditional means.
See text for details about our computational superiority.}
\label{tab:compguo}
\vskip 0.15in
\begin{center}
\begin{tabular}{lccc}
\toprule
Prices &
Masking function $h$ &
\citet{guo2022no} &
Our result \\
\midrule
Stochastic & $h: \mathcal{X} \to [n]$ & $O\paren*{\sqrt{T n \log T}}$ & \multirow{3}{*}{$O\paren*{\sqrt{T n \log T}}$} \\
\cmidrule{1-3}
\multirow{2}{*}{Adversarial} & $h: \mathcal{X} \to [n]$ & $\tilde{O}\paren*{T^{2/3} n^{1/3}}$ & \\
& Linearly realizable & N/A & \\
\bottomrule
\end{tabular}
\end{center}
\vskip -0.1in
\end{table}
In addition to the statistical improvements exhibited in
\cref{tab:compguo}, reduction to regression is more computationally
felicitous: \citet{guo2022no} derive their result by first exploring
randomly, then constructing a policy cover, and finally running
exponentiated gradient. This frequently applied theoretical construction
is a brittle algorithmic template which is not viable in practice. Thus
despite their impressive algorithmic improvements over naive exponentiated
gradient, their approach is less preferable to running ONS (which, given
the discrete mapping assumption induces a diagonal covariance matrix,
can be further simplified to independent per-coordinate operation).
From the position of a bidder, a posted-price auction is an informed graph
feedback problem. However from the position of a seller the problem is
an uninformed graph feedback problem, as the information revealed depends
upon whether the price chosen was too high (in which case, higher prices
are informed) or too low (in which case, lower prices are informed).
Therefore for posted-price selling, our technique does not apply.
\subsection{Inventory}
\label{subsec:inventory}
In this scenario, the algorithm must decide what quantity of an
item to supply given an externally imposed fixed price. There are
$K$ actions that correspond to ordered amounts, $a_1 < a_2 < \ldots <
a_K$; and $G = \paren*{\begin{array}{cc} 1 & \mathbf{1} \\ \mathbf{0} &
1 \end{array}}$ is upper triangular, i.e., playing action $a_j$
informs about all actions $a_{i \leq j}$ because items are consumed until
either the demand or the inventory is exhausted. This $G$ is invertible
with
\begin{align*}
G^{-1} &= \paren*{\begin{array}{ccccc}
1 & -1 & 0 & 0 & \mathbf{0} \\
0 & 1 & -1 & 0 & \mathbf{0} \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
\mathbf{0} & 0 & 0 & 1 & -1 \\
\mathbf{0} & 0 & 0 & 0 & 1
\end{array}},
\end{align*}
and \cref{eqn:discact} simplifies to
\begin{alignat*}{2}
&\! \min_{p \in \Delta(K), z \prec \widehat{f}} & \qquad & p^\top \widehat{f} - z \\
& \text{subject to} & & \gamma p_{K} \geq \frac{1}{\widehat{f}_{K} - z}, \\
& & & \gamma p_j \geq \frac{1}{\widehat{f}_j - z} - \frac{1}{\widehat{f}_{j+1} - z} \qquad \paren*{j < K}.
\end{alignat*}
In particular, the last constraint can be discarded for any $j$ with $\widehat{f}_j \geq
\widehat{f}_{j+1}$, in which case we can assume $p^*_j = 0$, because for any
feasible point with $(p_j > 0, p_{j+1})$, the point with $(0, p_{j+1} +
p_j)$ is also feasible and has no higher objective value (strictly lower
if the inequality is strict).
Thus for any sequence $a_i, a_{i+1}, \ldots, a_j$ where the loss estimates
$\widehat{f}_i \geq \widehat{f}_{i+1} \ldots \geq \widehat{f}_{j}$ are non-increasing,
the solution need only place probability at the rightmost action in the
sequence $a_j$.
The independence number of $G$ is 1, as any two
vertices will share an edge; thus the multi-armed
(non-contextual) bandit regret bound is $O(\sqrt{T})$.
If the loss estimates are
strictly increasing with larger actions,
we can play the distribution that satisfies
all the constraints with equality and choose $z$ to normalize, yielding objective value
\begin{align*}
p^\top \widehat{f} - z &= \gamma^{-1} \left(1 + \sum_{j < K} \frac{\widehat{f}_{j+1} - \widehat{f}_{j}}{\widehat{f}_{j+1} - z}\right)
\stackrel{(\dagger)}{\leq} \gamma^{-1} \left(1 + 1 \sum_{j < K} \frac{1}{\widehat{f}_{j+1} - z}\right) \stackrel{(\star)}{\leq} 2 \gamma^{-1},
\end{align*}
where $(\dagger)$ follows from $\widehat{f} \in [0, 1]$ and $(\star)$ follows from normalization.
We can apply this argument generally by
summing over only the rightmost actions in
each non-increasing $\widehat{f}$ sequence as per
the previous paragraph, establishing a
regret bound independent of the number
of actions.
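As a purely illustrative aside (ours, not part of the analysis), the following Python sketch constructs the equal-constraint distribution just described for strictly increasing loss estimates: setting every constraint to equality and imposing $\sum_j p_j = 1$ telescopes to $1/(\widehat{f}_1 - z) = \gamma$, i.e., $z = \widehat{f}_1 - 1/\gamma$, after which every $p_j$ is available in closed form. The loss values and variable names are hypothetical; the snippet also checks the bidiagonal form of $G^{-1}$ displayed above.
\begin{verbatim}
import numpy as np

def equal_constraint_distribution(f_hat, gamma):
    # All constraints tight; normalization (sum_j p_j = 1) telescopes to
    # 1/(f_1 - z) = gamma, i.e., z = f_1 - 1/gamma.
    f = np.asarray(f_hat, dtype=float)
    z = f[0] - 1.0 / gamma
    p = np.empty_like(f)
    p[:-1] = (1.0 / (f[:-1] - z) - 1.0 / (f[1:] - z)) / gamma
    p[-1] = 1.0 / (gamma * (f[-1] - z))
    return p, z

f_hat = np.array([0.1, 0.3, 0.45, 0.8])   # hypothetical, strictly increasing
gamma = 20.0
p, z = equal_constraint_distribution(f_hat, gamma)
assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)
print("p =", p, " z =", z, " objective =", p @ f_hat - z)

# The all-ones upper-triangular feedback matrix G and its inverse.
K = len(f_hat)
G = np.triu(np.ones((K, K)))
G_inv = np.eye(K) - np.diag(np.ones(K - 1), 1)   # 1 on diagonal, -1 above it
assert np.allclose(np.linalg.inv(G), G_inv)
\end{verbatim}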
Although the inventory problem has a non-trivial informed graph, there
is also an uninformed graph component. Specifically, if the inventory is
only partially consumed, then the losses associated with larger inventory
levels are also revealed. Our technique does not exploit this additional
feedback structure.
\section{Related Work}
Multi-armed bandits with graph-structured feedback have been extensively
studied. An early example is the apple tasting problem of
\citet{helmbold2000apple}. The general formulation of graph-structured
feedback was introduced by \citet{mannor2011bandits}. \citet{alon2015online}
characterized the minimax rates in terms of graph-theoretic structural
quantities. Follow-on work includes relaxing the
assumption that the graph is observed prior to
decision~\citep{cohen2016online}; analyzing the distinction between
the stochastic and adversarial settings~\citep{alon2017nonstochastic};
considering stochastic feedback graphs~\citep{li2020stochastic,esposito2022learning}; and instance-adaptivity~\citep{ito2022nearly}.
In the stochastic multi-armed
bandit setting, the solution value of
a particular linear program determines
the asymptotically optimal minimax rate.~\citep{buccapatnam2014stochastic,li2020stochastic} It involves a divergence
to an unknown distribution and
the true mean values, but is
superficially similar to \cref{eqn:discact}.
Possibly, with specific generative
assumptions and applying an online-to-batch
conversion to our framework, the two
formulations could be brought into
correspondence.
The contextual bandit problem with graph-structured feedback has received
relatively less attention. \citet{wang2021adversarial} provide algorithms
for the uninformed setting with stochastic contexts, adversarial losses, and linear structure,
which is not comparable to our setting (which is informed, with
adversarial contexts and realizable mean losses). \citet{singh2020contextual}
study a particular informed stochastic linear setting with Gaussian noise,
and were able to improve over the (bandit feedback) minimax rates of
\citet{lattimore2017end} for this setting, substituting dependence
upon the number of actions with the domination number of the graph.
While not directly comparable, the correspondence with our results
is intriguing. \citet{van2021beyond} study the problem from the
viewpoint of online multiclass classification and describe a
(computationally low overhead) reduction to online convex optimization;
their analysis is in terms of surrogate regret (i.e., in terms
of a surrogate classification loss) and is not directly comparable.
\section{Discussion}
\label{sec:discussion}
In this paper we study the informed graph feedback setting, where the
entire feedback graph is exposed to the algorithm prior to each decision.
However practical problems of interest are often uninformed graph feedback
problems, where the graph is unknown at decision time. It is unclear how
to formulate an analogous minimax problem to \cref{eqn:minimax} under
these conditions. One idea is to consume the additional feedback in
the online regressor and adjust the estimated function and prediction
loss to reflect this additional structure, e.g., using the more
general version of the E2D framework which incorporates arbitrary side
observations.~\citep{foster2021statistical} This corresponds to local
uninformed feedback qua \citet{cohen2016online}, which proves a
sharp distinction between the adversarial and stochastic settings: even if
the graphs are all strongly observable with bounded independence number,
in the adversarial setting the minimax rate is unaffected, whereas in the
stochastic setting the minimax rate is improved. Although our setting
is semi-adversarial due to realizability of the mean loss, this result
suggests uninformed graph feedback will require additional assumptions
to attack in this minimax fashion. On a positive note, the
encouraging stochastic result suggests the viability of adapting
\citet{simchi2021bypassing} for graph feedback.
Graph feedback problems often present with associated policy constraints,
e.g., for the apple tasting problem it is
natural to rate limit the informative action. Fortunately, our approach
is compatible with constraint enforcement via reduction to
Lagrange regression.~\citep{slivkins2022efficient} Specifically,
the approaches herein can function as the primal algorithm
in their two-player game.
|
{
"arxiv_id": "2302.08686",
"language": "en",
"timestamp": "2023-02-20T02:06:37",
"url": "https://arxiv.org/abs/2302.08686",
"yymm": "2302"
} | \section{Introduction}
The Wiener index~\cite{Wie47}, also known as the total distance, is a well-studied parameter in graph theory.
For a given graph $G$, the Wiener index of the graph is denoted by $W(G)$ where
\[
W(G)=\sum_{\{u,v\}\subseteq V(G)} d_{G}(u,v).
\]
The Wiener index of a graph with a fixed number of vertices is linearly related to the average distance of the graph, which is itself another important metric parameter in random graph theory.
Additionally, it is related to parameters of random walks such as cover cost and Kemeny's constant~\cite{GW17,JSS22}.
Its generalization, the Steiner-Wiener index~\cite{LMG16}, is related to the average traveling salesman problem distance~\cite{C22}.
It is natural to investigate the extremal properties of the Wiener index.
Among the most basic results on the total distance of a graph, there are folklore results that for every connected graph $G$ and every tree $T$ on $n$ vertices, we have
\[
W(K_n) \le W(G) \le W(P_n) \mbox{~and~} W(S_n) \le W(T) \le W(P_n).
\]
Here $K_n$ is the $n$-vertex complete graph, $P_n$ is a path with $n$ vertices and $S_n$ is a star with $n$ vertices.
Proofs can be found in~\cite{cambie2022thesis}.
While these statements and proofs might be straightforward and easy, the relationship between the Wiener index and other graph parameters or specific classes of graphs is still being explored.
For example, among all graphs with a given diameter and order, the minimum Wiener index is determined in~\cite{plesnik1984sum}.
In~\cite{cambie2020asymptotic} the maximum Wiener index is determined up to the asymptotics of the second-order term.
The authors in~\cite{che2019upper,szekely2021wiener,ghosh2020maximum} have studied planar graphs and determined sharp bounds for the maximum of the Wiener index in Apollonian planar graphs and maximal planar graphs.
In this work, we study the Wiener index of $k$-uniform hypergraphs.
Generalizing fundamental results in graph theory to hypergraphs gives a broader insight into the problem.
A few examples of such extensions of classical results towards the hypergraph setting are given in~\cite{Mubayi06,GL12}.
A $k$-uniform hypergraph $\mathcal{H}$ of order $n$ can be represented as a pair $(V,E)$ where $V$ is a vertex set with $n$ elements and $E \subseteq \binom{V}{k}$ is a family of $k$-sets, called hyperedges or just edges.
Note that in the case of $k=2$, a $2$-uniform hypergraph is a graph.
In order to introduce a distance between the vertices of a hypergraph, at first we need to introduce a concept of paths for hypergraphs.
In this work we follow the celebrated definition of Berge~\cite{berge1973graphs}:
a Berge path of a hypergraph $\mathcal{H}$ is a sequence of distinct vertices and hyperedges $v_0 h_1 v_1 h_2 v_2 \ldots v_{d-1} h_d v_d$ of $\mathcal{H}$ where $v_{i-1},v_{i} \in h_i$ for every $1 \le i \le d$ for some $d$.
A hypergraph is connected if there is a Berge path for every pair of vertices.
The length of a Berge path is the number of hyperedges in the Berge path.
Naturally the distance between two vertices $u,v$ is the length of the shortest Berge path containing $u$ and $v$ in the hypergraph $\mathcal{H}$, and it is denoted by $d_{\mathcal{H}}(u,v)$.
Note that when the ground hypergraph is known from the context we write $d(u,v)$ instead of $d_{\mathcal{H}}(u,v)$.
The Wiener index of a connected hypergraph $\mathcal{H}$ is defined as $\sum_{\{u,v\} \subseteq V(\mathcal{H})} d_{\mathcal{H}}(u,v).$
A connected hypergraph $\mathcal{H}$ is called a hyper-tree when it can be represented by a tree $T$ in the graph sense, on the same vertex set, where every edge of $\mathcal{H}$ induces a connected subgraph of $T$; equivalent formulations are collected in~\cite[Chap.~5, Sec.~4]{Berge89}.
When restricting to linear $k$-uniform hyper-trees (connected hypergraphs with $n=m(k-1)+1$ vertices and $m$ edges), it was shown in~\cite{GZL17} that for every such hyper-tree $\mathcal{H}$, $W(S_n^k) \le W(\mathcal{H})\le W(P_n^k)$, where $S_n^k$ and $P_n^k$ are the linear star and path, respectively; see Figure~\ref{fig:loosepath&star}.
Furthermore, the corresponding extremal hyper-trees are unique.
\begin{figure}[h]
\centering
\begin{tikzpicture}[thick]
\foreach \y in {0,1,2,3}{
\foreach \x in {0, 1,2,3}{
\draw[fill] (60*\y:\x) circle (0.1151);
}
\draw[rotate=60*\y] (0:1.5) ellipse (2cm and 0.5cm);
}
\foreach \y in {0,1,2,3}{
\foreach \x in {0, 1,2,3}{
\draw[fill] (3*\x+\y-6,-2) circle (0.1151);
}
\draw (-4.5+3*\y,-2) ellipse (2cm and 0.5cm);
}
\end{tikzpicture}
\caption{The loose star $S_{13}^4$ and path $P_{13}^4$}
\label{fig:loosepath&star}
\end{figure}
Considering arbitrary connected $k$-uniform hypergraphs, or hyper-trees, the extremal hypergraphs for maximizing/minimizing the Wiener index are not always unique. Determining the lower bound for the Wiener index of $k$-uniform hypergraphs is trivial, as all distances between the distinct vertices are at least $1$.
\begin{proposition}\label{prop:lowerbound}
Any connected $k$-uniform hypergraph $\mathcal{H}$ satisfies
$\binom{n}{2}=W(K_n^k) \le W(\mathcal{H}).$
\end{proposition}
We remark that, in huge contrast with the graph case, equality can also be attained by very sparse hypergraphs.
\begin{remark}
Equality in Proposition~\ref{prop:lowerbound} occurs if and only if for every pair of vertices there is a hyperedge incident to both. This is the case when the hypergraph corresponds with a projective plane.
This is an example of a sparse hypergraph minimizing the Wiener index. Also hyper-trees can attain equality when $k \ge 3$, e.g. the hyper-tree with underlying tree $S_n$ and every subtree of order $k$ being a hyperedge.
\end{remark}
On the other hand, maximizing the Wiener index for $k$-uniform hypergraphs is a relatively complex problem.
In this note, we determine the maximum possible Wiener index of a connected $n$-vertex $k$-uniform hypergraph and characterize all hypergraphs attaining the maximum.
First, we define the class of extremal hypergraphs.
For integers $a$ and $b$ with $a\leq b$, let $[a, b]$ be the set $\{a,a+1,\ldots,b\}$ and $[a]$ be the set $[1, a]$.
For given integers $n$ and $k$ with $0\le k<n$, let $s$ and $r$ be integers such that $n=ks+r$ and $0\leq r<k$.
For every $r$ such that $r\neq 0$, let $P_{n}^{k}=(V,E)$ be the following tight-path.
Let the vertex set of $P_n^{k}$ be $[n]$ and the edge set be
$ E=\left \{[(i-1)k+1, ik] \colon 1 \le i \le s \right\} \cup \left\{ [r+(i-1)k+1, ik+r] \colon 1 \le i \le s\right \}.$
For $r=0$, and every integer $x$, such that $0<x<k$, we define a $k$-uniform hypergraph $P_{n,x}^{k}=(V,E)$ where $V=[n]$ and
$E=\{ [(i-1)k+1, ik] \colon 1 \le i \le s\} \cup \{ [(i-1)k+1+x, ik+x] \colon 1 \le i \le s-1\}.$ See Figure~\ref{fig:densepath} for examples of both $P_n^{k}$ and $P_{n,x}^{k}$.
Our main result can now be stated as follows.
\begin{figure}[h]
\centering
\begin{tikzpicture}[thick]
\foreach \y in {0,1,2}{
\foreach \x in {0, 1,2,3}{
\draw[fill] (\x+4*\y,-2) circle (0.1151);
}
\draw (1.5+4*\y,-2) ellipse (1.9cm and 0.5cm);
\draw[red] (2.5+4*\y,-2) ellipse (1.9cm and 0.5cm);
}
\draw[fill] (12,-2) circle (0.1151);
\foreach \y in {0,1,2}{
\foreach \x in {0, 1,2,3}{
\draw[fill] (\x+4*\y+0.5,-4) circle (0.1151);
\draw (2+4*\y,-4) ellipse (1.9cm and 0.5cm);
}
}
\foreach \y in {0,1}{
\draw[red] (4+4*\y,-4) ellipse (1.9cm and 0.5cm);
}
\end{tikzpicture}
\caption{The tight paths $P_{13}^4$ and $P_{12,2}^4$ for $k=4$}
\label{fig:densepath}
\end{figure}
\begin{theorem}\label{thr:main}
Let $\mathcal{H}$ be a connected $k$-uniform hypergraph of order $n\geq k$.
If $k \nmid n,$ then $W(\mathcal{H}) \le W(P_n^{k})$ with equality if and only if $\mathcal{H}=P_n^{k}.$
If $k \mid n$, then $W(\mathcal{H}) \le W(P_{n,1}^{k})$ with equality if and only if $\mathcal{H}=P_{n,x}^{k}$ for some $0<x<k.$
\end{theorem}
\section{Proof of Theorem~\ref{thr:main}}\label{sec:proofmain}
Let $\mathcal{H}=(V,E)$ be an $n$-vertex $k$-uniform connected hypergraph with maximum Wiener index.
Then we may assume that for every edge $h$ of $\mathcal{H}$, the hypergraph $\mathcal{H}'=(V,E \backslash h)$\footnote{We abuse notation and write $E \backslash h$ instead of $E \backslash \{h\}$.} is not connected, otherwise we would remove the edge and the Wiener index would not decrease.
Proving Theorem~\ref{thr:main} for edge-minimal hypergraphs is sufficient since adding an edge to one of the extremal hypergraphs from Theorem~\ref{thr:main} decreases the Wiener~index.
The following lemma shows that there is a hyperedge $h$ in $E$ such that the hypergraph $\mathcal{H}'=(V,E \backslash h)$ contains at most one connected component of size greater than one.
\begin{lemma}\label{lem:findgoodedge}
Let $\mathcal{H}=(V,E)$ be a $k$-uniform hypergraph such that the deletion of any edge disconnects the hypergraph.
Then there is an edge $h$ of $\mathcal{H}$ such that the hypergraph $\mathcal{H}'=(V,E \backslash h)$ contains at most one connected component of size greater than one.
\end{lemma}
\begin{proof}
Let $h$ be a hyperedge of $\mathcal{H}$ such that the largest connected component of $\mathcal{H}'=(V,E \backslash h)$ has maximum size.
Then either we are done, or $\mathcal{H}'=(V,E \backslash h)$ contains two connected components $\mathcal{H}_1$ and $\mathcal{H}_2$ each containing a hyperedge, where $\mathcal{H}_1$ is a component of maximum size.
Let $h_2$ be a hyperedge of $\mathcal{H}_2$; then the hypergraph $(V,E \backslash h_2)$ contains a component of size larger than $\mathcal{H}_1$, since it contains a component containing $\mathcal{H}_1$ together with all vertices of $h$, a contradiction.
\end{proof}
By Lemma~\ref{lem:findgoodedge} and the edge-minimality of $\mathcal{H}$, there is an edge $h_1$ of $\mathcal{H}$ such that $\mathcal{H}'=(V,E \backslash h_1)$ consists of a connected component of order $n-\ell$ on a vertex set $V'\subset V$ and $\ell$ isolated vertices for some $1\leq \ell <k$.
Let $\mathcal{H}'_{V'}$ be the largest connected component of $\mathcal{H}'$.
Now we proceed by induction, noting that the theorem trivially holds for $n<2k$.
Assume Theorem~\ref{thr:main} holds for every order strictly smaller than $n$.
Let $v$ be an isolated vertex of $\mathcal{H}'$ such that $\sum_{u \in V'} d_{\mathcal{H}}(u,v)$ is the maximum possible.
Then we have
\begin{equation}\label{eq:inductionstep}
W(\mathcal{H}) \le W(\mathcal{H}'_{V'}) + \binom{\ell}{2} + \ell \sum_{u \in V'} d(u,v). \end{equation}
By induction, we have an upper bound on $W(\mathcal{H}'_{V'}).$
For this, we compute the Wiener index of the paths $P_n^k$ and $P_{n,x}^k$.
When $n=ks+r,$ for some integers $s$ and $r$ such that $0\leq r<k$, the Wiener index of the path equals
\begin{align*}
f(s,k,r)&=\frac{k^2s^3}3 + rks^2 +r^2s+ \frac{(k^2 - 3k)s}6 + \binom{r}{2},
\end{align*}
Note that the Wiener index of $P_{n,x}^k$ is independent of the choice of $x$.
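As an independent sanity check (ours, not part of the proof), the Python sketch below builds the tight paths defined above and computes their Wiener index by breadth-first search. Here we use the elementary fact that the Berge distance between two vertices equals their graph distance in the graph on $V$ in which two vertices are adjacent whenever some hyperedge contains both (a shortest Berge path never needs to repeat a hyperedge). The chosen values of $n$ and $k$ are arbitrary.
\begin{verbatim}
from itertools import combinations
from collections import deque

def tight_path_edges(n, k, x=1):
    # Edges of P_n^k (k does not divide n) and P_{n,x}^k (k divides n),
    # following the definitions above; vertices are 1, ..., n.
    s, r = divmod(n, k)
    E = [set(range((i - 1) * k + 1, i * k + 1)) for i in range(1, s + 1)]
    if r != 0:
        E += [set(range(r + (i - 1) * k + 1, i * k + r + 1)) for i in range(1, s + 1)]
    else:
        E += [set(range((i - 1) * k + 1 + x, i * k + x + 1)) for i in range(1, s)]
    return E

def wiener_index(n, edges):
    # Berge distance = graph distance when u ~ v iff some hyperedge contains both.
    adj = {v: set() for v in range(1, n + 1)}
    for e in edges:
        for u, v in combinations(e, 2):
            adj[u].add(v)
            adj[v].add(u)
    total = 0
    for src in range(1, n + 1):
        dist, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total // 2      # each unordered pair was counted twice

def f(s, k, r):
    return k**2 * s**3 / 3 + r * k * s**2 + r**2 * s \
        + (k**2 - 3 * k) * s / 6 + r * (r - 1) / 2

for n, k in [(13, 4), (12, 4)]:
    s, r = divmod(n, k)
    print(wiener_index(n, tight_path_edges(n, k)), f(s, k, r))  # pairs should agree
\end{verbatim}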
We also need to find an upper bound for $\sum_{u \in V'} d_{\mathcal{H}}(u,v)$. To this end, we define the following two functions:
\begin{align*}
g_1(\ell,k,s,r)&=k s^2 + \ell s+ r(2s+1),\\
g_2(\ell,k,s,r)&=k s^2 + \ell s + 2rs.\\
\end{align*}
\begin{claim}
Let $n-\ell=ks'+r'$ for integers $s'$ and $r'$ for $0\leq r'<k$, then we have
\[
\sum_{u \in V'} d(u,v)\le \begin{cases}
g_1(\ell,k,s',r') \mbox{ if } k-\ell \le r'\\
g_2(\ell,k,s',r') \mbox{ if } k-\ell > r'\\
\end{cases}
\]
\end{claim}
\begin{claimproof}
Let $d$ be the eccentricity of $v$, i.e. $d=\max_{u \in V'} d_{\mathcal{H}}(u,v).$
For $d\ge i \ge 1$, let $n_i$ be the number of vertices in $\mathcal{H}'$ which are at distance $i$ from $v$.
Let $n_i=0$ for $i>d.$
Now by definition, we have $\sum_{u \in V'} d(u,v)= \sum_{i=1}^d i n_i$ and $\sum_{i=1}^d n_i=(n-\ell)=ks'+r'$.
Note that $n_1\ge k-\ell$ and $n_i+n_{i+1}\ge k$ for every $1\le i\le d-1,$ since for every such $i$ there is an edge whose vertices are all at distance $i$ or $i+1$ from $v.$
First, we bound the eccentricity.
Let $u$ be a vertex for which $d(u,v)=d$ and $h$ an edge containing the vertex $u$.
When $0 \le r' < k-\ell,$ either $d\le 2s'$ already holds, or we have $n_1+\dots +n_{2s'-1} \ge (k-\ell)+(s'-1)k>k(s'-1)+r'$, in which case $h$ cannot be disjoint from the vertices at distance at most $2s'-1$ from $v$, and thus again $d\le 2s'.$ Now we conclude
\begin{align*}
\sum_{i=1}^d i n_i
&= 2s' \left( \sum_{i=1}^{2s'} n_i \right) - \sum_{j=1}^{2s'-1} \left( \sum_{i=1}^{j} n_i \right)\\
&\le 2s'(n-\ell) - \left( s'(k-\ell) + \sum_{j=1}^{2s'-1} \floorfrac j2 k \right)\\
&= 2s'(ks'+r') -\left( s'(k-\ell) + s'(s'-1)k \right)\\
&= s'^2k+2s'r'+\ell s'.
\end{align*}
In the case that $k-\ell \le r' <k,$ we similarly have $n_1+\dots +n_{2s'} \ge ks' $ and again $h$ cannot be disjoint from at least $ks'$ vertices in $\mathcal{H}'$, thus $d\le 2s'+1.$
The computation now analogously results in $\sum_{i=1}^d i n_i\le s'^2k+(2s'+1)r'+\ell s'.$
\end{claimproof}
Finally, it suffices to prove that the right-hand side of Inequality~\eqref{eq:inductionstep} is always at most the claimed upper bound.
For $n-\ell=ks+r$, we have
\begin{align*}
f(s+1,k,r+\ell-k)&-\left( f(s,k,r)+\ell g_1(\ell,k,s,r)+\binom \ell 2\right)=(k-\ell-r)^2\geq 0 \: \: \mbox{ if } k-\ell \le r,\\
f(s,k,r+\ell)&-\left( f(s,k,r)+\ell g_2(\ell,k,s,r)+\binom \ell 2\right)=\ell r \geq 0\: \: \:\quad \quad \quad \quad \mbox{ if } k-\ell > r,
\end{align*}
which also has been verified by Maple\footnote{See \url{https://github.com/StijnCambie/WienerHypergraph}}.
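For readers without access to Maple, an equivalent symbolic check (ours, purely illustrative) of the two identities above can be carried out with Python's sympy:
\begin{verbatim}
from sympy import symbols, simplify, expand

s, k, r, ell = symbols('s k r ell')

def binom2(x):                     # binom(x, 2) = x(x - 1)/2
    return x * (x - 1) / 2

def f(s, k, r):                    # Wiener index of the tight path
    return k**2*s**3/3 + r*k*s**2 + r**2*s + (k**2 - 3*k)*s/6 + binom2(r)

g1 = k*s**2 + ell*s + r*(2*s + 1)
g2 = k*s**2 + ell*s + 2*r*s

id1 = f(s + 1, k, r + ell - k) - (f(s, k, r) + ell*g1 + binom2(ell)) - (k - ell - r)**2
id2 = f(s, k, r + ell) - (f(s, k, r) + ell*g2 + binom2(ell)) - ell*r

print(simplify(expand(id1)), simplify(expand(id2)))   # both should be 0
\end{verbatim}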
Equality holds if $k=\ell+r$ or $r=0$.
If $k \mid n,$ equality occurs if and only if $k=\ell+r$ and for every fixed $0<\ell<k$ the only hypergraph attaining the maximum Wiener index is $P_{n,\ell}^{k}$.
If $k\nmid n,$ equality occurs if and only if $r=0$, since $0<\ell<k$ and by the induction hypothesis the unique connected $n$-vertex $k$-uniform hypergraph maximizing the Wiener index is $P_n^{k}$.
\bibliographystyle{abbrv}
|
{
"arxiv_id": "2302.08705",
"language": "en",
"timestamp": "2023-02-20T02:07:22",
"url": "https://arxiv.org/abs/2302.08705",
"yymm": "2302"
} | \section{Introduction}
6G presents an opportunity to reflect on the fundamentals of wireless communication, as it becomes more and more difficult to estimate channels, and we encounter Doppler spreads measured in KHz (e.g., 1.3 KHz Doppler at 28 GHz carrier and 50 km/hr speed, and 2.3 KHz Doppler at 5 GHz carrier and 500 km/hr speed) \cite{6Gpaper}. It is even an opportunity to question the standard model-dependent approach to wireless communication that requires channel estimation.
It is common knowledge that a time-domain (TD) pulse is an ideal waveform for pure delay channels
as it is possible to separate reflections according to their range, and, similarly, a frequency domain (FD) pulse is an ideal waveform for pure Doppler channels as it is possible to separate reflections according to their velocity. In part II of this tutorial paper, we explore the proposition that
a pulse in the delay-Doppler (DD) domain is an ideal waveform for doubly spread channels comprising reflections of various ranges and velocities. In Part I \cite{PartIpaper}, we explained that a pulse
in the DD domain is a quasi-periodic localized function and that, when viewed in the time domain,
it is realized as a pulse train modulated by a tone (hence the name \emph{pulsone}).
In Part I, we described a modulation scheme referred to as Zak-OTFS, which uses the
inverse Zak transform \cite{Zak67, Janssen88} to convert information symbols mounted on DD pulses to the time domain for transmission. We emphasized that the Zak-OTFS input-output (I/O) relation is predictable and non-fading
when the delay and Doppler periods are greater than the effective channel delay and Doppler spreads,
a condition we call the \emph{crystallization condition}. We argued that to achieve robust performance, a communication system should operate within this \emph{crystalline regime}.
In Part I, we described the predictability and non-fading attributes of Zak-OTFS in the context of continuous time and infinite bandwidth. Here in Part II, we study predictability in the context of a sampled communication system with finite duration and bandwidth constraints, and we present a discrete DD domain system model that enables bit error rate (BER) performance evaluation through simulation. We measure predictability through an explicit formula for reconstructing the I/O relation from a finite number of received pilot samples in the DD domain. The reconstruction accuracy depends on the choice of the delay-Doppler periods and the pulse shaping filters, and accuracy is high when the crystallization condition is satisfied. In the crystalline regime, it is possible to learn the I/O relation without needing to estimate the underlying channel. This opens up the possibility of model-free operation, which can enable communication when traditional model-dependent modes requiring channel estimation are out of reach (for example when the channel comprises non-resolvable
paths, or admits a continuous delay-Doppler profile, as in the presence of acceleration). We now highlight our main contributions.
\textbf{Origins of non-predictability}:
We show that non-predictability and fading result from aliasing in the DD domain, and we describe how aliasing occurs when the channel delay spread is greater than the delay period, or the channel Doppler spread is greater than the Doppler period. Fundamental understanding of aliasing leads to the \emph{crystalline decomposition}, which is a canonical decomposition of the effective channel response filter into a predictable component and a non-predictable component. The crystallization condition
holds if and only if the non-predictable component vanishes.
\textbf{Benefits of predictability}:
Given the I/O response at one DD domain point in a frame, it is possible to predict the I/O response at all other points in the frame. Predictability implies that the received power profile is flat (no fading), i.e., that we have engineered a two-dimensional Gaussian channel. We illustrate the practical benefits of predictability by evaluating BER performance as a function of the received SNR.
We first suppose that channel estimation is perfect in order to understand the
impact of fading. We show that performance is superior in the crystalline regime, that performance approaches that of TDM as the Doppler period shrinks, and that performance approaches that of FDM as the delay period shrinks. We then evaluate performance when we do not have the fine delay-Doppler resolution necessary for accurate channel estimation, and as a consequence, model-dependent approaches fail.
We show that in the crystalline regime, model-free operation is successful,
and that performance is only slightly worse than performance with perfect knowledge of the effective channel. We also describe how better transmit and receive filters serve to extend the range of reliable operation.
\textbf{Optimality of Zak-OTFS}:
Over the past few years
several variants of OTFS have been reported in the literature \cite{TThaj2022}. A multicarrier approximation to Zak-OTFS, which we refer to as MC-OTFS, has been the focus of most research attention so far \cite{RH1, RH3, br_otfs}. We show that the I/O relation of MC-OTFS is less predictable than that of Zak-OTFS. As the Doppler spread increases, the BER performance of MC-OTFS is inferior to that of Zak-OTFS.
Some recent works on OTFS have started focusing on Zak transform based approach \cite{SKM2021, SKM20212, Lampel2022, VSBhat2023}. However, none of these
works investigate the subtle aspect of predictability of the I/O relation in the DD domain. In particular, none emphasizes the important fact that only Zak-OTFS, in which the information bits are encoded as a discrete quasi-periodic function and filtering is applied through twisted convolution, admits a predictable I/O relation when the crystallization condition holds. We feel that this assertion is an important theoretical contribution of this paper.
\textbf{Radar applications}:
We derive the radar ambiguity function for the Zak pulsone and demonstrate that unambiguous delay-Doppler estimation is possible in the crystalline regime, when the delay period is greater than the delay spread of the radar scene, and the Doppler period is greater than the Doppler spread. We highlight the similarity between the structure of the carrier waveform proposed by Woodward in his influential text \cite{PMWoodward} (a train of narrow TD Gaussian pulses modulated with a broad Gaussian envelope), and the Zak pulsone carrier waveform (a train of narrow impulses modulated by a sinusoid). Note that in radar applications, a signal is modulated onto a carrier waveform (in our case, the Zak pulsone), and that the ambiguity function of the concatenated system depends on both signal and carrier (see \cite{LM2004} and \cite{S2008}).
The paper is organized as follows. Section \ref{secOTFSPredictability} explains how non-predictability in communications results from aliasing in the DD domain, which occurs when one of the channel spreads is greater than the corresponding pulsone period. This geometric perspective leads to a canonical decomposition of the effective channel response filter into a predictable component and a nonpredictable component (the crystalline decomposition). Section \ref{seciorelation} addresses finite bandwidth and duration constraints, expressing the I/O relation for TDM, FDM, and Zak-OTFS in matrix-vector form. Section \ref{secperfcsi} uses BER simulations when the channel is perfectly known, to illustrate that Zak-OTFS is non-fading in the crystalline regime. Section \ref{section5paper2} uses BER simulations when the channel is not known, to illustrate that Zak-OTFS is predictable, and that model-free operation is possible in the crystalline regime. Section \ref{secotfsvariants} uses BER simulations to illustrate that Zak-OTFS is more predictable than MC-OTFS (a widely studied multicarrier approximation) making the case that model-free operation is more possible. Section \ref{radar} derives the radar ambiguity function of the Zak-OTFS carrier waveform, illustrating that unambiguous delay-Doppler estimation is possible in the crystalline regime. Section \ref{conc} presents conclusions.
\begin{figure*}
\vspace{-4mm}
\centering
\includegraphics[width=18cm, height=6.0cm]{paper2_zakotfs.jpg}
\caption{Signal processing in Zak-OTFS.}
\label{figzakotfspaper2}
\end{figure*}
\section{The Zak-OTFS Input-Output (I/O) Relation and Predictability in the Crystalline Regime}
\label{secOTFSPredictability}
\subsection{The Zak-OTFS I/O Relation for a Sampled System}
\label{seczakotfsio}
The Zak-OTFS I/O relation (in the absence of AWGN) is presented in Section VI-C of Part I \cite{PartIpaper} (see also Table III in Part I).
In Zak-OTFS, the frame duration is roughly an integer multiple
$N$ of the delay period $\tau_p$, and the bandwidth is roughly an integer multiple $M$ of the Doppler period $\nu_p= 1/\tau_p$.
The information symbols are arranged as a 2-D array $x[k,l]$, $k=0,1,\cdots,M-1$, $l=0,1,\cdots, N-1$ and are encoded as a discrete DD domain information signal which is given by
\begin{eqnarray}
\label{paper2_eqn8}
x_{_{\mbox{\footnotesize{dd}}}}[k + n M ,l+ m N] & \hspace{-2mm} \Define
x[k,l] \, e^{j 2 \pi n \frac{l}{N}}, \,\,\, m,n \in {\mathbb Z}.
\end{eqnarray}
Being a discrete DD domain signal means that $x_{_{\mbox{\footnotesize{dd}}}}[k,l]$ is a quasi-periodic function on the information grid with period $M$ along the delay axis and period $N$ along the Doppler axis, i.e.,
\begin{eqnarray}
\label{paper2_eqn8a}
x_{_{\mbox{\footnotesize{dd}}}}[k+nM, l + mN] & \hspace{-3mm} = & \hspace{-3mm} x_{_{\mbox{\footnotesize{dd}}}}[k,l] \, e^{j 2 \pi n \frac{l}{N}},
\end{eqnarray}
for any $n,m \in {\mathbb Z}$. The discrete DD domain signal is then lifted to a continuous DD domain signal by means of
\begin{eqnarray}
x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu) & \hspace{-3mm} = & \hspace{-3mm} \sum\limits_{k,l \in {\mathbb Z}} x_{_{\mbox{\footnotesize{dd}}}}[k,l] \, \delta\left(\tau - k \frac{\tau_p}{M} \right) \delta\left(\nu - l \frac{\nu_p}{N} \right).
\end{eqnarray}
The complete transceiver signal processing is depicted in Fig.~\ref{figzakotfspaper2} (see also Table-III in Part I).
As shown in Fig.~\ref{figzakotfspaper2},
the DD domain transmit signal $x_{_{\mbox{\footnotesize{dd}}}}^{w_{tx}}(\tau, \nu)$ is the twisted convolution
of $x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)$ with a transmit pulse $w_{tx}(\tau,\nu)$, i.e., $x_{_{\mbox{\footnotesize{dd}}}}^{w_{tx}}(\tau, \nu) = w_{tx}(\tau, \nu) *_{\sigma} x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)$, where $*_{\sigma}$ denotes the twisted convolution operation.
We then apply the inverse Zak transform
to obtain the TD realization $s_{_{\mbox{\footnotesize{td}}}}(t)$.
At the receiver, the received TD signal $r_{_{\mbox{\footnotesize{td}}}}(t)$ is converted to its DD domain representation $y_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu)$ via the Zak transform.
The channel acts on $x_{_{\mbox{\footnotesize{dd}}}}^{w_{tx}}(\tau, \nu)$ by twisted convolution, so that
$y_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu) = h(\tau,\nu) *_{\sigma} x_{_{\mbox{\footnotesize{dd}}}}^{w_{tx}}(\tau, \nu)$.
After twisted convolution of $y_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu)$ with a receive DD pulse $w_{rx}(\tau, \nu)$ we obtain
\begin{eqnarray}
\label{paper2_eqn229}
y_{_{\mbox{\footnotesize{dd}}}}^{w_{rx}}(\tau, \nu) & = &
w_{rx}(\tau,\nu) \, *_{\sigma} \, y_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu) \nonumber \\
\hspace{-30mm} & \hspace{-27mm} = & \hspace{-16mm}
w_{rx}(\tau,\nu) \, *_{\sigma} \, {\Big (} h(\tau,\nu) \, *_{\sigma} \, x_{_{\mbox{\footnotesize{dd}}}}^{w_{tx}}(\tau, \nu) {\Big )} \nonumber \\
\hspace{-30mm} & \hspace{-27mm} = & \hspace{-16mm} w_{rx}(\tau,\nu) \, *_{\sigma} \, {\Big (} h(\tau,\nu) \, *_{\sigma} \, {\Big [}
w_{tx}(\tau, \nu) *_{\sigma} x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu) {\Big ]} {\Big )} \nonumber \\
\hspace{-30mm} & \hspace{-27mm} = & \hspace{-16mm} \underbrace{{\Big (} w_{rx}(\tau,\nu) \, *_{\sigma} \, h(\tau,\nu) \, *_{\sigma} \, w_{tx}(\tau, \nu) {\Big )}}_{h_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)} \, *_{\sigma} \, x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu), \nonumber \\
\end{eqnarray}
where the last step follows from the associativity of the twisted convolution operation.
Equation (\ref{paper2_eqn229}) gives the Zak-OTFS I/O relation in the continuous DD domain. Simply put, the I/O relation states that the output $y_{_{\mbox{\footnotesize{dd}}}}^{w_{rx}}(\tau, \nu)$ is the twisted convolution of the input $ x_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)$ with the effective continuous DD channel filter $h_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)$.
Finally, we sample this continuous output signal along the information grid consisting of integer multiples of
$\tau_p/M$ along the delay axis and integer multiples of $\nu_p/N$ along the Doppler axis, to obtain a discrete DD domain output signal
\begin{eqnarray}
\label{paper2_eqn5paper12}
y_{_{\mbox{\footnotesize{dd}}}}[k,l] & \hspace{-3mm} = & \hspace{-3mm} y_{_{\mbox{\footnotesize{dd}}}}^{w_{rx}}\left(\tau = k \frac{\tau_p}{M}, \nu = l \frac{\nu_p}{N} \right) \nonumber \\
& \hspace{-10mm} = & \hspace{-6mm} \sum\limits_{k', l' \in {\mathbb Z}} h_{_{\mbox{\footnotesize{dd}}}}[k - k', l - l'] \, x_{_{\mbox{\footnotesize{dd}}}}[k' , l']
\, e^{j 2 \pi \frac{(l - l')}{N} \frac{k'}{M} } ,
\end{eqnarray}
where $h_{_{\mbox{\footnotesize{dd}}}}[k,l]$ is the discrete effective channel filter, given by sampling the continuous effective channel filter, i.e.,
\begin{eqnarray}
\label{eqn53_b12}
h_{_{\mbox{\footnotesize{dd}}}}[k, l] & \Define & h_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu){\Big \vert}_{\left( \tau = \frac{k \tau_p}{M} \,,\, \nu = \frac{l \nu_p}{N} \right)}.
\end{eqnarray}
Equation (\ref{paper2_eqn5paper12}) constitutes the canonical form of the Zak-OTFS I/O relation. It expresses the output signal as a discrete twisted convolution\footnote{\footnotesize{$u[k,l] \, *_{\sigma} \, v[k,l] = \sum\limits_{k', l' \in {\mathbb Z}} u[k', l'] v[k-k',l-l'] e^{j 2 \pi \frac{k-k'}{M} \frac{l'}{N}}$}} of the discrete effective channel filter and the input signal, i.e.,
\begin{eqnarray}
\hspace{-5mm} y_{_{\mbox{\footnotesize{dd}}}}[k,l] & \hspace{-5mm} = & \hspace{-6mm} \sum\limits_{k', l' \in {\mathbb Z}} h_{_{\mbox{\footnotesize{dd}}}}[k', l'] \, x_{_{\mbox{\footnotesize{dd}}}}[k - k' ,l - l']
\, e^{j 2 \pi \frac{(k - k')}{M} \frac{l'}{N} } \nonumber \\ & = & h_{_{\mbox{\footnotesize{dd}}}}[k, l] \, *_{\sigma} \, x_{_{\mbox{\footnotesize{dd}}}}[k ,l].
\end{eqnarray}
We remark that the R.H.S of (\ref{paper2_eqn5paper12}) is a weighted double sum of terms, where the $(k',l')$-th term is given by
\begin{eqnarray}
\label{paper2_eqn301}
h_{_{\mbox{\footnotesize{dd}}}}[k,l] \, *_{\sigma} \, \left[ \delta[k- k'] \, \delta[l - l'] \right], \,\, k,l \in {\mathbb Z},
\end{eqnarray}
multiplied by the weight $x_{_{\mbox{\footnotesize{dd}}}}[k' ,l']$.
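As a concrete illustration (ours, not taken from the paper), the short Python sketch below implements the quasi-periodic extension in (\ref{paper2_eqn8a}) and the discrete twisted convolution in (\ref{paper2_eqn5paper12}) on the fundamental period; the grid sizes and the two channel taps are arbitrary toy values.
\begin{verbatim}
import numpy as np

M, N = 8, 6                          # toy number of delay/Doppler bins

def x_ext(x, K, L):
    # quasi-periodic extension: x[k + n*M, l + m*N] = x[k, l] * exp(j*2*pi*n*l/N)
    k0, l0 = K % M, L % N
    n = (K - k0) // M
    return x[k0, l0] * np.exp(2j * np.pi * n * l0 / N)

def twisted_conv(h_taps, x):
    # y[k,l] = sum_{k',l'} h[k',l'] * x[k-k', l-l'] * exp(j*2*pi*((k-k')/M)*(l'/N))
    y = np.zeros((M, N), dtype=complex)
    for k in range(M):
        for l in range(N):
            acc = 0j
            for (kp, lp), hval in h_taps.items():
                acc += hval * x_ext(x, k - kp, l - lp) \
                       * np.exp(2j * np.pi * ((k - kp) / M) * (lp / N))
            y[k, l] = acc
    return y

# hypothetical effective channel filter with two DD taps
h_taps = {(0, 0): 0.8, (2, -1): 0.5j}
x = (np.random.randn(M, N) + 1j * np.random.randn(M, N)) / np.sqrt(2)
y = twisted_conv(h_taps, x)
print(y.shape)
\end{verbatim}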
\subsection{Predictability of the Zak-OTFS I/O relation in the Crystalline Regime}
\label{secCrystalline}
In this section, we show that the Zak-OTFS I/O relation is predictable in the crystalline regime. Specifically, we show that when the crystallization condition holds,
the channel response to a \emph{green} pilot located at $(k^{(g)}, l^{(g)})$ can be accurately estimated from the channel response to a \emph{blue} pilot located at $(k^{(b)}, l^{(b)})$.
By definition, the blue pilot is a discrete DD domain impulse signal, given by
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{paper2_eqn8a1}
\hspace{-2mm} x^{(b)}[k ,l] & \hspace{-3mm} = & \hspace{-5mm} \sum\limits_{m,n \in {\mathbb Z}} \hspace{-2mm} e^{j 2 \pi \frac{n l^{(b)}}{N}} \delta[k - (k^{(b)} + nM)] \delta[l - (l^{(b)} + mN)].
\end{eqnarray}\normalsize}
Similarly, the green pilot is a discrete DD domain impulse signal, given by
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{paper2_eqn8a1green}
\hspace{-3mm} x^{(g)}[k ,l] & \hspace{-3mm} = & \hspace{-5mm} \sum\limits_{m,n \in {\mathbb Z}} \hspace{-2mm} e^{j 2 \pi \frac{n l^{(g)}}{N}} \delta[k - (k^{(g)} + nM)] \delta[l - (l^{(g)} + mN)].
\end{eqnarray}\normalsize}
The channel response to the blue pilot is given by
{\small
\vspace{-4mm}
\begin{eqnarray}
\label{paper2_eqn8b}
y^{(b)}[k,l] & \hspace{-3mm} = & \hspace{-3mm}
h_{_{\mbox{\footnotesize{dd}}}}[k,l] \, *_{\sigma}
\, x^{(b)}[k ,l] \nonumber \\
& \hspace{-3mm} = & \hspace{-3mm} \sum\limits_{n,m \in {\mathbb Z}} {\Big [} h_{_{\mbox{\footnotesize{dd}}}}[k - (k^{(b)} + nM), l - (l^{(b)} + mN)] \, \nonumber \\
& & \,\,\, \hspace{6mm} e^{j 2 \pi \frac{n l^{(b)}}{N}} \, e^{j 2 \pi \frac{(l - l^{(b)} - mN)}{N} \frac{(k^{(b)} + n M)}{M} } {\Big ]}.
\end{eqnarray}
\normalsize}
We see from (\ref{paper2_eqn8b}) that the total response to the blue pilot is a sum of local responses to its constituent impulses where the response to the $(n,m)$-th impulse is given by
\begin{eqnarray}
\label{paper2_eqn8d}
y^{(b)}_{_{\mbox{\footnotesize{n,m}}}}[k,l] & \hspace{-3mm} = & \hspace{-3mm} h_{_{\mbox{\footnotesize{dd}}}}[k - (k^{(b)} + nM), l - (l^{(b)} + mN)] \, \nonumber \\
& & \,\,\, \hspace{3mm} e^{j 2 \pi \frac{n l^{(b)}}{N}} \, e^{j 2 \pi \frac{(l - l^{(b)} - mN)}{N} \frac{(k^{(b)} + n M)}{M} }.
\end{eqnarray}
Observe that the $(n,m)$-th response coincides up to multiplicative phases with the effective channel filter shifted by $(k^{(b)} + nM, l^{(b)} + m N)$.
\begin{figure}
\hspace{-2mm}
\includegraphics[width=9cm, height=6.0cm]{Fig1_paper2.jpg}
\caption{Prediction in the crystalline regime. In the crystalline regime, the channel response
to the green DD pilot signal can be predicted from the channel response to the blue DD pilot signal.}
\label{fig1paper2}
\end{figure}
Similarly, the response to the green pilot is a sum of local responses, where the $(n,m)$-th response is given by
\begin{eqnarray}
\label{paper2_eqn8f}
y^{(g)}_{_{\mbox{\footnotesize{n,m}}}}[k,l] & = & {\Big [} h_{_{\mbox{\footnotesize{dd}}}}[k - (k^{(g)} + nM), l - (l^{(g)} + mN)] \, \nonumber \\
& & \,\,\, e^{j 2 \pi \frac{n l^{(g)}}{N}} \, e^{j 2 \pi \frac{(l - l^{(g)} - mN)}{N} \frac{(k^{(g)} + n M)}{M} } {\Big ]}.
\end{eqnarray}
In Fig.~\ref{fig1paper2}, the response components to the blue (green) pilot are depicted as blue (green) ellipses. Observe that the $(n,m)$-th green response can always be calculated from the $(n,m)$-th blue response according to the following rule
\begin{eqnarray}
\label{paper2_eqn8g}
y^{(g)}_{_{\mbox{\footnotesize{n,m}}}}[k,l] & = & y^{(b)}_{_{\mbox{\footnotesize{n,m}}}}[k - (k^{(g)} - k^{(b)}),l -(l^{(g)} - l^{(b)})] \nonumber \\
& & \hspace{-4mm} e^{j 2 \pi n \frac{(l^{(g)} - l^{(b)})}{N}} \, e^{j2 \pi \frac{(l - l^{(g)} - mN)}{N} \frac{(k^{(g)} - k^{(b)})}{M}}.
\end{eqnarray}
where we use (\ref{paper2_eqn8d}) and (\ref{paper2_eqn8f}).\footnote{\footnotesize{To obtain the $(n,m)$-th response component to the green pilot from that of the blue pilot, we first align supports by translating the blue ellipses by $(k^{(g)} - k^{(b)})$ in delay, and by $(l^{(g)} - l^{(b)})$ in Doppler. We then multiply by a deterministic factor that is independent of the channel.}} The problem is that the individual local responses are super-imposed at the receiver and, in general, cannot be separated from one another. However, when the crystallization conditions hold, the blue (green) ellipses do not overlap with each other, which means that the local responses do not interact. As a consequence, in this situation, the total green response can be predicted from the total blue response using the local formula (\ref{paper2_eqn8g}).
An important consequence of predictability is that the energy profile of the local responses is flat, that is, independent of the position of the pilot, due to the fact that
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{paper2_nofading1}
\left\vert y^{(g)}_{_{\mbox{\footnotesize{n,m}}}}[k,l] \right\vert = \left\vert y^{(b)}_{_{\mbox{\footnotesize{n,m}}}}[k - (k^{(g)} - k^{(b)}),l -(l^{(g)} - l^{(b)})] \right\vert,
\end{eqnarray}\normalsize}
In other words, in the crystalline regime, the Zak-OTFS I/O
relation is \emph{non-fading}.
\begin{figure}
\hspace{-7mm}
\includegraphics[width=10cm, height=5.0cm]{Fig2_paper2.jpg}
\caption{Non-crystalline regime: Partially predictable Zak-OTFS I/O relation. The
response to the green DD pilot signal can be predicted from the response to the blue DD pilot signal only for the part of the response which is free from any overlap/aliasing/self-interaction.}
\label{fig2paper2}
\end{figure}
\subsection{Non-predictability of the Zak-OTFS I/O relation in the Non-crystalline Regime}
\label{secNoncrystalline}
When the crystallization conditions fail to hold, the local channel responses interact (the ellipses overlap with one another), a phenomenon we refer to as DD domain aliasing. We now explain how non-predictability arises from DD domain aliasing.
We illustrate the aliasing phenomenon through an example. We again look to predict the response to a green pilot from the response to a blue pilot. However, we now assume that the Doppler spread of the effective channel filter is greater than the Doppler period. Under this assumption, local responses interact. Specifically, in our example the $(n,m)$-th response interacts with the $(n,m-1)$-th response. The interaction is depicted in Fig.~\ref{fig2paper2} as an overlap between ellipses.
We consider a generic point $(k,l)$ residing in the overlap of the $(0,0)$ and $(0,-1)$ blue ellipses, depicted in Fig.~\ref{fig2paper2} as a square with a blue border. The response received at this point is the super-position of two local responses $y^{(b)}_{_{\mbox{\footnotesize{0,0}}}}[k,l] + y^{(b)}_{_{\mbox{\footnotesize{0,-1}}}}[k,l]$, which is equal to
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{paper2_eqn8l}
h_{_{\mbox{\footnotesize{dd}}}}[\Delta k, \Delta l] e^{j 2 \pi \frac{\Delta l}{N} \frac{k^{(b)}}{M}} + h_{_{\mbox{\footnotesize{dd}}}}[\Delta k, \Delta l + N] e^{j 2 \pi \frac{\Delta l + N}{N} \frac{k^{(b)}}{M}},
\end{eqnarray}
\normalsize}where $\Delta k = k - k^{(b)}$ and $\Delta l = l - l^{(b)}$. We consider the parallel point $(k',l')$ residing in the overlap of the $(0,0)$ and $(0,-1)$ green ellipses, depicted in Fig.~\ref{fig2paper2} as a square with a green border. Here, $k'=k+(k^{(g)} - k^{(b)})$ and $l'=l+(l^{(g)} - l^{(b)})$. The response at this point is a super-position of two local responses $y^{(g)}_{_{\mbox{\footnotesize{0,0}}}}[k',l'] + y^{(g)}_{_{\mbox{\footnotesize{0,-1}}}}[k',l']$, which is explicitly equal to
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{paper2_eqn8m}
h_{_{\mbox{\footnotesize{dd}}}}[\Delta k, \Delta l] e^{j 2 \pi \frac{\Delta l}{N} \frac{k^{(g)}}{M}} + h_{_{\mbox{\footnotesize{dd}}}}[\Delta k, \Delta l + N] e^{j 2 \pi \frac{\Delta l + N}{N} \frac{k^{(g)}}{M}}.
\end{eqnarray}
\normalsize}
We see that unless both the individual local terms in (\ref{paper2_eqn8l}) are separately known, the green response in (\ref{paper2_eqn8m}) cannot be predicted from the total blue response.
A consequence of non-predictability is that the energy profile of the received response depends on the DD location of the pilot signal. In our example, the energy profile of the received response to the blue pilot will, in general, be different than that for the green pilot. This is because the linear combination (\ref{paper2_eqn8l}) can be different from (\ref{paper2_eqn8m}), depending on the values of the phase coefficients in each case.
\subsection{The Crystalline Decomposition}
\label{secCrysdecomp}
The previous discussion reveals that the channel response is unpredictable at points inside the overlap region between interacting ellipses. Outside this region, the response is in fact predictable. This observation suggests a decomposition of the effective channel filter into a predictable and a non-predictable component.
To see this, we consider a pilot at the origin. The channel response to this pilot is given by
\begin{eqnarray}
\label{paper2_ddpilotorigin}
h_{_{\mbox{\footnotesize{dd}}}}[k, l] + \hspace{-5mm}\sum\limits_{n,m \in {\mathbb Z}, (n,m) \ne (0,0)} h_{_{\mbox{\footnotesize{dd}}}}[k - nM, l - mN] \, e^{j 2 \pi \frac{n l}{N}}
\end{eqnarray}
where the first term is the local response to the impulse at the origin and the remaining terms are local responses to other impulses. The main observation is that if $(k,l)$ satisfies $h_{_{\mbox{\footnotesize{dd}}}}[k, l] \neq 0$ and $h_{_{\mbox{\footnotesize{dd}}}}[k - nM, l - mN] \neq 0$ for some $(n,m) \ne (0,0)$, then the response at $(k,l)$ is a superposition of at least two local responses - the local response to the $(0,0)$-th impulse and the local response to the $(n,m)$-th impulse, and, consequently, the effective channel filter tap at $(k,l)$ cannot be estimated from the received response. We refer to such taps as \emph{non-predictable} taps. Alternatively, it is easy to see that all other taps in the support can be estimated from the channel response. We refer to such taps as \emph{predictable} taps. We denote the support set of all predictable taps by ${\mathcal P}$ and the complement support set of all non-predictable taps by ${\mathcal P}^c$.
Recall that given a set ${\mathcal S}$, the indicator function ${\chi}_{_{\mathcal S}}[k,l]$ takes the value $1$ for $(k,l) \in {\mathcal S}$, and the value $0$ otherwise. We define the \emph{crystalline decomposition} of the effective channel filter to be
\begin{eqnarray}
h_{_{\mbox{\footnotesize{dd}}}}[k,l] & = & {\chi_{_{{\mathcal P}}}}[k,l]
\, h_{_{\mbox{\footnotesize{dd}}}}[k,l] \,\, + \,\, {\chi_{_{{\mathcal P}^c}}}[k,l]
\, h_{_{\mbox{\footnotesize{dd}}}}[k,l],
\end{eqnarray}
where the first term is the predictable component and the second term is the non-predictable component. The crystallization condition corresponds to the special case where the non-predictable component vanishes, that is, ${{\mathcal P}}^c = \emptyset$.
\subsection{Error in prediction of the Zak-OTFS I/O relation}
In this section we analyze the prediction accuracy for different
choices of the delay-Doppler period and shaping filter. The analysis is carried through simulations of a simple two-path channel, where, the first path has a delay of $\tau_1 = 0 \, \mu s$ and a Doppler shift of $\nu_1 = 815$ Hz and the second path
has a delay of $\tau_2 = 5 \mu s$ and a Doppler shift of $\nu_2 = -815$ Hz. The normalized channel gains for the two paths are $h_1 = h_2 = 1/\sqrt{2}$. The system bandwidth is $B = 0.96$ MHz and the frame duration is $T=1.6$ ms.
We consider two choices for the transmit/receive shaping filters. The first choice is a sinc filter
\begin{eqnarray}
w_{rx}(\tau,\nu) = w_{tx}(\tau,\nu) = \sqrt{B T} \, sinc(B \tau) \, sinc(\nu T).
\end{eqnarray}
Observe that the slow decay of the sinc function amplifies DD domain aliasing, thereby reducing predictability.
The second choice is a root raised cosine (RRC) filter
\begin{eqnarray}
\label{rrcpulse_eqn1}
w_{rx}(\tau,\nu) = w_{tx}(\tau,\nu) = \sqrt{BT} \, rrc_{_{\beta_{\tau}}}( B \tau ) \, rrc_{_{\beta_{\nu}}}( T \nu ),
\end{eqnarray}
where the function $rrc_{\beta}(\cdot)$ for $0 \leq \beta \leq 1$ is given by \cite{SHDigcomm}
\begin{eqnarray}
\label{rrceqn1}
rrc_{_{\beta}}(x) & \hspace{-3mm} = & \hspace{-3mm} \frac{\sin(\pi x (1 - \beta)) + 4 \beta x \cos(\pi x (1 + \beta))}{\pi x \left( 1 - \left(4 \beta x\right)^2 \right)}.
\end{eqnarray}
Observe that the RRC function has faster decay than the sinc function, causing less DD domain aliasing, thereby increasing predictability. However, reduced aliasing comes at the cost of an increase in frame duration and bandwidth. In the simulation we use $\beta_{\tau} = 0.1$ and $\beta_{\nu} = 0.2$, corresponding to a $10 \, \%$ increase in bandwidth, and a
$20 \, \%$ increase in duration.
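As an illustrative aside (ours), the sketch below evaluates the two shaping factors on the delay axis and prints their tails, making the faster decay of the RRC factor visible. The removable singularity of (\ref{rrceqn1}) at $x=0$ is filled with its standard limit, the points $x = \pm 1/(4\beta)$ are simply avoided by the chosen (arbitrary) sample locations, and $\beta = 0.2$ mirrors the value of $\beta_{\nu}$ used in the simulations.
\begin{verbatim}
import numpy as np

def rrc(x, beta):
    # Root raised cosine factor of the shaping filter; the removable
    # singularity at x = 0 is replaced by its limit, and the sample points
    # below avoid x = +/- 1/(4*beta).
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    at0 = np.isclose(x, 0.0)
    out[at0] = 1.0 - beta + 4.0 * beta / np.pi
    xs = x[~at0]
    num = np.sin(np.pi * xs * (1 - beta)) + 4 * beta * xs * np.cos(np.pi * xs * (1 + beta))
    den = np.pi * xs * (1 - (4 * beta * xs) ** 2)
    out[~at0] = num / den
    return out

pts = np.arange(7) + 0.5                                   # arbitrary half-integer lags
print("sinc tail:", np.round(np.abs(np.sinc(pts)), 4))      # sin(pi x)/(pi x)
print("rrc  tail:", np.round(np.abs(rrc(pts, beta=0.2)), 4))
\end{verbatim}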
We normalize the energy of the transmit/receive filters by setting
\begin{eqnarray}
\iint \vert w_{tx}(\tau, \nu) \vert^2 \, d\tau d\nu = \iint \vert w_{rx}(\tau, \nu) \vert^2 \, d\tau d\nu = 1.
\end{eqnarray}
We now use a pilot signal at $(k^{(b)},l^{(b)}) = \left( \frac{M}{2}, \frac{N}{2} \right)$
to estimate the effective channel filter $h_{_{\mbox{\footnotesize{dd}}}}[k,l]$. Recall that
in the crystalline regime local responses do not interact; therefore, the total response in the fundamental period coincides with the $(0,0)$-th local response $y^{(b)}_{0,0}[k,l]$, which is given by
\begin{eqnarray}
y^{(b)}[k,l] = h_{_{\mbox{\footnotesize{dd}}}}[k - M/2,l - N/2] \, e^{j \pi \frac{(l - N/2)}{N} },
\end{eqnarray}
for $0 \leq k < M$ and $0 \leq l < N$. From this we conclude
\begin{eqnarray}
\label{paper2_eqn8k}
h_{_{\mbox{\footnotesize{dd}}}}[k,l] = y^{(b)}[k+ M/2,l + N/2 ] \, e^{- j \pi \frac{l}{N} },
\end{eqnarray}
for $-\frac{M}{2} \leq k < \frac{M}{2}$ and $-\frac{N}{2} \leq l < \frac{N}{2}$. We set the channel filter estimate to be
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{paper2_predicteqn1}
{\widehat h}_{_{\mbox{\footnotesize{dd}}}}[k,l] & =
\begin{cases}
{y}^{(b)} \left[ k+\frac{M}{2},l+\frac{N}{2} \right] \, e^{-j \pi \frac{l}{N}}&, -\frac{M}{2} \leq k < \frac{M}{2}\\
& \,\, -\frac{N}{2} \leq l < \frac{N}{2}\\
0 &, \mbox{\small{otherwise.}} \\
\end{cases}.
\end{eqnarray}\normalsize}
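In code, the estimate in (\ref{paper2_predicteqn1}) is a simple re-indexing and de-rotation of the received pilot response. The sketch below (ours, with arbitrary toy dimensions and a random stand-in for $y^{(b)}$) assumes $M$ and $N$ are even and that the samples $y^{(b)}[k,l]$, $0 \leq k < M$, $0 \leq l < N$, are stored in an $M \times N$ array.
\begin{verbatim}
import numpy as np

def estimate_h_dd(Y, M, N):
    # Y[k, l] holds y^(b)[k, l] on the fundamental period (pilot at (M/2, N/2)).
    # Returns h_hat with h_hat[k + M//2, l + N//2] = Y[k + M//2, l + N//2] * e^{-j pi l/N}
    # for -M/2 <= k < M/2 and -N/2 <= l < N/2, as in the estimate above.
    h_hat = np.zeros((M, N), dtype=complex)
    for k in range(-M // 2, M // 2):
        for l in range(-N // 2, N // 2):
            h_hat[k + M // 2, l + N // 2] = \
                Y[k + M // 2, l + N // 2] * np.exp(-1j * np.pi * l / N)
    return h_hat

M, N = 8, 6
Y = np.random.randn(M, N) + 1j * np.random.randn(M, N)   # stand-in pilot response
print(estimate_h_dd(Y, M, N).shape)
\end{verbatim}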
Using (\ref{paper2_predicteqn1}), we can predict the effective channel response to any
pilot. The predicted channel response to a pilot located at $(k^{(g)}, l^{(g)})$ is given by
{\small
\vspace{-4mm}
\begin{eqnarray}
\label{paper2_predicteqn24}
\widehat{y}^{(g)}[k,l] & \hspace{-3mm} = & \hspace{-3mm}
{\widehat h}_{_{\mbox{\footnotesize{dd}}}}[k,l] \, *_{\sigma} x^{(g)}[k,l].
\end{eqnarray}
\normalsize}
The predicted response should be compared to the true channel response
{\small
\vspace{-4mm}
\begin{eqnarray}
\label{paper2_predicteqn2}
y^{(g)}[k,l] & \hspace{-3mm} = & \hspace{-3mm}
{h}_{_{\mbox{\footnotesize{dd}}}}[k,l] \, *_{\sigma} x^{(g)}[k,l].
\end{eqnarray}
\normalsize}
\begin{figure*}
\vspace{-4mm}
\centering
\includegraphics[width=12cm, height=8.0cm]{Zakotfs_sinc_colormap.jpg}
\caption{Heatmap showing Relative Prediction Error (RPE), in dB, as a function of delay (horizontal axis), and Doppler (vertical axis) with sinc pulse shaping filters. RPE
is significantly smaller in the crystalline regime when compared to that in the non-crystalline regime.}
\label{fig3paper2}
\end{figure*}
The relative prediction error is given by
\begin{eqnarray}
\label{paper2_ek1l1eqn}
E(k^{(g)}, l^{(g)}) & \Define & \frac{\sum\limits_{k=0}^{M-1}\sum\limits_{l=0}^{N-1} {\Big \vert} \widehat{y}^{(g)}[k,l] - y^{(g)}[k,l]{\Big \vert}^2}{\sum\limits_{k=0}^{M-1}\sum\limits_{l=0}^{N-1} {\Big \vert} y^{(g)}[k,l]{\Big \vert}^2}.
\end{eqnarray}
Figs.~\ref{fig3paper2} and \ref{fig3paper2_rc} depict the relative prediction error as a two-dimensional heat-map for three different points on the period curve: the points $\nu_p = 1.25$ KHz, representing a TDM approximation, $\nu_p = 30$ KHz representing a point in the crystalline regime and $\nu_p=240$ KHz, representing an FDM approximation. Fig.~\ref{fig3paper2} assumes sinc transmit/receive shaping filters, whereas Fig.~\ref{fig3paper2_rc} assumes root raised cosine transmit/receive filters.
When $\nu_p = 1.25$ KHz ($\tau_p = 800 \, \mu s$), the Doppler period is smaller than the channel Doppler spread ($1.63$ KHz), causing aliasing along the Doppler dimension which, in turns, creates non-predictability and fading along delay. In this situation the relative prediction error is minimized at the pilot location and it increases rapidly towards $0$ dB as we move away along the delay axis.
When $\nu_p = 240$ KHz ($\tau_p = 4.16 \, \mu s$), the delay period is smaller than the channel delay spread ($5 \, \mu s$), causing aliasing along the delay dimension which, in turn, creates non-predictability and fading along Doppler. In this situation, the relative prediction error is minimized at the pilot location, and it now increases rapidly towards $0$ dB as we move away from the pilot location along the Doppler axis.
Finally, when $\nu_p = 30$ KHz, ($\tau_p = 33.3 \, \mu s$), the crystallization condition holds and predictability is maintained. In this situation, the relative prediction error is small.
When we compare Fig.~\ref{fig3paper2} to Fig.~\ref{fig3paper2_rc}, we see that the choice of transmit/receive shaping filter makes a significant difference. Using root raised cosine filters leads to a relative prediction error of roughly $-50$ dB, whereas using sinc filters leads to a relative prediction error of roughly $-20$ dB.
\begin{figure*}
\vspace{-4mm}
\centering
\includegraphics[width=12cm, height=8.0cm]{Zakotfs_rc_colormap.jpg}
\caption{Heatmap showing Relative Prediction Error (RPE), in dB, as a function of delay (horizontal axis), and Doppler (vertical axis) with root raised cosine (RRC) pulse shaping filters.
In the crystalline regime, RRC pulse results in significantly smaller RPE when compared to sinc pulses, although at the cost of a higher OTFS frame duration and bandwidth.}
\label{fig3paper2_rc}
\end{figure*}
\begin{figure}
\includegraphics[width=8cm, height=6.0cm]{Fig4_paper2.jpg}
\caption{Exploiting Quasi-periodicity: Generating complete response from response received in the fundamental DD period.}
\label{fig4paper2}
\end{figure}
\section{Matrix-vector description of the I/O relation}
\label{seciorelation}
Recall that the channel equation relating the received and the transmit TD signals is given by \cite{Bello63}
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{paper2_eqn1}
r_{\mbox{\footnotesize{td}}}(t) = \iint h(\tau,\nu) s_{\mbox{\footnotesize{td}}}(t-\tau) \, e^{j 2 \pi \nu (t - \tau)} \, d\tau \, d\nu \, + \, n_{\mbox{\footnotesize{td}}}(t),
\end{eqnarray}\normalsize}where $h(\tau,\nu)$ is the delay-Doppler channel representation/spreading function and $n_{\mbox{\footnotesize{td}}}(t)$ is the AWGN noise term. In this section we study the I/O relation induced by (\ref{paper2_eqn1}) for Zak-OTFS, TDM and FDM and establish a matrix formulation in each case.
\subsection{Zak-OTFS}
\label{subseczakotfsmatrixIO}
Direct calculation reveals that the DD domain I/O relation induced by (\ref{paper2_eqn1}) is given by
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{paper2_eqn7b}
y_{_{\mbox{\footnotesize{dd}}}}[k,l]
& \hspace{-3mm} = & \hspace{-3mm} \sum\limits_{k', l' \in {\mathbb Z}} h_{_{\mbox{\footnotesize{dd}}}}[k - k', l - l'] \, x_{_{\mbox{\footnotesize{dd}}}}[k',l']
\, e^{j 2 \pi \frac{(l - l')}{N} \frac{k'}{M}} \nonumber \\
& & \hspace{10mm} \, + \, n_{_{\mbox{\footnotesize{dd}}}}[k,l],
\end{eqnarray}\normalsize}where $h_{_{\mbox{\footnotesize{dd}}}}[k, l]$ is the effective DD domain channel filter and $n_{_{\mbox{\footnotesize{dd}}}}[k,l]$ is the discrete noise term obtained by sampling
\begin{eqnarray*}
n_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu) = w_{rx}(\tau, \nu) \, *_{\sigma} \, {\mathcal Z}_t(n_{_{\mbox{\footnotesize{td}}}}(t)).
\end{eqnarray*}
Here ${\mathcal Z}_t(\cdot)$ denotes the time-Zak transform (see Part I). Both sequences $x_{_{\mbox{\footnotesize{dd}}}}[k ,l]$ and $y_{_{\mbox{\footnotesize{dd}}}}[k ,l]$ are quasi-periodic.
Hence, they can be reconstructed from samples within the fundamental period ($0 \leq k < M, 0 \leq l < N$). Fig.~\ref{fig4paper2} illustrates how to reconstruct the local response to a DD domain pilot supported over the union of regions I, II, III and IV, from the received samples supported over the union of regions I', II', III and IV' within the fundamental period.
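As a concrete illustration of this reconstruction, the short Python sketch below extends fundamental-period samples quasi-periodically; the phase convention $x_{_{\mbox{\footnotesize{dd}}}}[k+nM, l+mN] = e^{j 2 \pi n l / N} \, x_{_{\mbox{\footnotesize{dd}}}}[k,l]$ assumed here is the one consistent with the factor $e^{j 2 \pi n l/N}$ appearing in (\ref{paper2_eqn12}).
\begin{verbatim}
import numpy as np

M, N = 16, 12
rng = np.random.default_rng(0)
x_fund = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

def x_dd(k, l):
    """Quasi-periodic extension of the fundamental-period samples,
    assuming x[k+nM, l+mN] = exp(j 2 pi n l / N) x[k, l]
    (see the lead-in text for this convention)."""
    n, k0 = divmod(k, M)
    _, l0 = divmod(l, N)
    return np.exp(2j * np.pi * n * l0 / N) * x_fund[k0, l0]

# Every sample outside the fundamental period is a pure phase rotation
# of a sample inside it:
print(x_dd(3 + 2 * M, 5 - N),
      np.exp(2j * np.pi * 2 * 5 / N) * x_fund[3, 5])   # identical
\end{verbatim}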
The $MN$ received samples in the fundamental period constitute a sufficient statistic for the transmitted information symbols $x[k,l]$. Hence, the I/O relation (\ref{paper2_eqn7b}) can be reduced to a finite matrix form relating the $MN \times 1$ vector of received samples $y_{_{\mbox{\footnotesize{dd}}}}[k,l]$ to the $MN \times 1$ vector of transmitted symbols $x[k,l]$, where on both sides $k=0,1,\cdots,M-1$ and $l=0,1,\cdots, N-1$. In more detail, define the $MN \times 1$ vectors
\begin{eqnarray}
\label{paper2eqn192}
\left({{\bf y}}_{_{\mbox{\footnotesize{dd}}}}\right)_{kN+l+1} & = & {{ y}}_{_{\mbox{\footnotesize{dd}}}}[k,l] \nonumber \\
\left({{\bf x}}_{_{\mbox{\footnotesize{dd}}}}\right)_{kN+l+1} & = & {{x}}_{_{\mbox{\footnotesize{dd}}}}[k,l] \nonumber \\
\,\,\, \left({{\bf n}}_{_{\mbox{\footnotesize{dd}}}}\right)_{kN+l+1} & = & {{n}}_{_{\mbox{\footnotesize{dd}}}}[k,l].
\end{eqnarray}
In addition, define the $MN \times MN$ matrix
\begin{eqnarray}
\label{paper2eqn193}
\left( {\bf H}_{_{\mbox{\footnotesize{dd}}}} \right)_{k'N + l'+1, kN+l+1} & \hspace{-3mm} = & \hspace{-3mm} {H}_{_{\mbox{\footnotesize{dd}}}}[k'N + l', kN+l]
\end{eqnarray}
where the right hand side is given by (\ref{paper2_eqn12}) (see top of page $9$).
\begin{figure*}
\vspace{-8mm}
\begin{eqnarray}
\label{paper2_eqn12}
{ H}_{_{\mbox{\footnotesize{dd}}}}[k'N + l' , k N + l ] & = & \sum\limits_{n=-\infty}^{\infty} \sum\limits_{m = -\infty}^{\infty} h_{_{\mbox{\footnotesize{dd}}}}[k' - k -nM, l' - l -mN] \, e^{j 2 \pi n l/N} e^{j2 \pi \frac{l' - l - mN}{N} \frac{k + nM}{M}}, \nonumber \\
& & k',k=0,1,\cdots, M-1, \,\, l',l=0,1,\cdots, N-1.
\end{eqnarray}
\end{figure*}
The matrix formulation of the DD domain I/O relation (\ref{paper2_eqn7b}) is given by
\begin{eqnarray}
\label{paper2_eqnyddhdd}
{{\bf y}}_{_{\mbox{\footnotesize{dd}}}} & = & {{\bf H}}_{_{\mbox{\footnotesize{dd}}}}
{{\bf x}}_{_{\mbox{\footnotesize{dd}}}} \, + \, {{\bf n}}_{_{\mbox{\footnotesize{dd}}}}.
\end{eqnarray}
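As a sanity check on (\ref{paper2_eqn12}) and (\ref{paper2_eqnyddhdd}), the sketch below assembles the $MN \times MN$ matrix from a toy set of filter taps (an illustrative assumption, not a physical channel) and applies it to a random $4$-QAM frame; the row and column indexing follows $k'N + l'$ and $kN + l$.
\begin{verbatim}
import numpy as np

M, N = 8, 6
MN = M * N

# Toy effective DD filter taps h_dd[k,l] (illustrative, small support).
h_dd = {(0, 0): 1.0, (1, 0): 0.5, (0, 1): 0.3 - 0.1j, (1, -1): 0.2j}

def H_entry(kp, lp, k, l):
    """Entry in row k'N+l', column kN+l; the (n, m) sums are finite
    because the filter has finite support."""
    s = 0.0 + 0.0j
    for (a, b), tap in h_dd.items():
        # need k' - k - nM = a and l' - l - mN = b for integers n, m
        if (kp - k - a) % M == 0 and (lp - l - b) % N == 0:
            n = (kp - k - a) // M
            s += (tap * np.exp(2j * np.pi * n * l / N)
                      * np.exp(2j * np.pi * b * (k + n * M) / (M * N)))
    return s

H = np.array([[H_entry(kp, lp, k, l)
               for k in range(M) for l in range(N)]
              for kp in range(M) for lp in range(N)])

rng = np.random.default_rng(1)
qam = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = rng.choice(qam, MN)      # transmitted 4-QAM frame, index kN+l
y = H @ x                    # noise-free matrix-vector I/O relation
\end{verbatim}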
\subsection{TDM}
\label{subsectdmmatrixIO}
Direct calculation reveals that the time domain I/O relation induced from (\ref{paper2_eqn1}) is given by (see Equation ($24$) in Part I)
{\vspace{-4mm}
\begin{eqnarray}
\label{paper2_eqn2}
y_{\mbox{\footnotesize{td}}}[k] & = & \sum\limits_{k' \in {\mathbb Z}} x_{\mbox{\footnotesize{td}}}[k'] \, h_{_{\mbox{\footnotesize{td}}}}[k - k' \, ; \, k'] \, + \, n_{\mbox{\footnotesize{td}}}[k]
\end{eqnarray}}where $h_{_{\mbox{\footnotesize{td}}}}[n \, ; \, k]$ is the effective TD channel filter and the noise term
$n_{\mbox{\footnotesize{td}}}[k]$ is given by sampling at $t= k/B$ the continuous function
\begin{eqnarray*}
w_{rx}(t) \, \star \, n_{\mbox{\footnotesize{td}}}(t),
\end{eqnarray*}where $w_{rx}(t)$ is the matched filter at the receiver and $\star$ is linear convolution. Equation (\ref{paper2_eqn2}) can be expressed in a matrix form as follows. Arrange the transmitted information symbols as a $BT$ column vector
\begin{eqnarray}
\label{paper2_eqn3}
\left( {\bf x}_{\mbox{\footnotesize{td}}} \right)_{k+1} & \hspace{-3mm} = & \hspace{-3mm} x_{\mbox{\footnotesize{td}}}[k] = x[k],
\end{eqnarray}
for $k=0,1,\cdots, BT-1$. Arrange the received TD samples as a $BT+K_1+K_2$ column vector
\begin{eqnarray}
\label{paper2_eqn201}
\left( {\bf y}_{\mbox{\footnotesize{td}}} \right)_{k+1+K_1} & \hspace{-3mm} = & \hspace{-3mm} y_{\mbox{\footnotesize{td}}}[k],
\end{eqnarray}
for $k=-K_1,\cdots, BT-1+K_2$, where the constants $K_1, K_2 \in {\mathbb Z}$ are determined by the channel delay spread. Similarly, arrange the received sampled noise as a $BT+K_1+K_2$ vector
\begin{eqnarray}
\label{paper2_eqn197}
\left( {\bf n}_{\mbox{\footnotesize{td}}} \right)_{k+1+K_1} & \hspace{-3mm} = & \hspace{-3mm} n_{\mbox{\footnotesize{td}}}[k],
\end{eqnarray}
for $k=-K_1,\cdots, BT-1+K_2$. Finally, arrange the effective channel filter as a $(BT + K_1 + K_2) \times BT$ matrix
\begin{eqnarray}
\label{paper2_eqn199}
\left( {\bf H}_{\mbox{\footnotesize{td}}} \right)_{k'+1,k+1}
& = & h_{_{\mbox{\footnotesize{td}}}}[k' - k - K_1 \, ; \, k].
\end{eqnarray}
Putting everything together, we obtain the matrix relation
\begin{eqnarray}
\label{paper2_eqn4}
{\bf y}_{\mbox{\footnotesize{td}}} & = & {\bf H}_{\mbox{\footnotesize{td}}} \, {\bf x}_{\mbox{\footnotesize{td}}} \, + \, {\bf n}_{\mbox{\footnotesize{td}}}.
\end{eqnarray}
\subsection{FDM}
\label{subsecfdmmatrixIO}
Direct calculation reveals that the frequency domain I/O relation induced from (\ref{paper2_eqn1}) is given by (see Equation ($25$) in Part I)
{\vspace{-4mm}
\begin{eqnarray}
\label{paper2_eqn5}
y_{\mbox{\footnotesize{fd}}}[k] & = & \sum\limits_{k' \in {\mathbb Z}} x_{\mbox{\footnotesize{fd}}}[k'] \, h_{_{\mbox{\footnotesize{fd}}}}[k - k' \, ; \, k'] \, + \, n_{\mbox{\footnotesize{fd}}}[k]
\end{eqnarray}}
where $h_{_{\mbox{\footnotesize{fd}}}}[n \, ; \, k]$ is the effective FD channel filter and the noise term $n_{\mbox{\footnotesize{fd}}}[k]$ is obtained by sampling at $f=k/T$ the continuous function $w_{rx}(f) \star n_{\mbox{\footnotesize{fd}}}(f)$ where
\begin{eqnarray*}
n_{\mbox{\footnotesize{fd}}}(f) & = & \int n_{\mbox{\footnotesize{td}}}(t) \, e^{-j 2 \pi f t} \, dt.
\end{eqnarray*}
Arrange the transmitted information symbols as a $BT$ column vector
\begin{eqnarray}
\label{paper2_eqn6}
\left( {\bf x}_{\mbox{\footnotesize{fd}}} \right)_{k+1} &
\hspace{-3mm} = & \hspace{-3mm}
{x}_{\mbox{\footnotesize{fd}}}[k] = x[k],
\end{eqnarray}
for $k=0,1,\cdots,BT-1$. Arrange the received samples as a $BT+L_1+L_2$ column vector
\begin{eqnarray}
\label{paper2_eqn210}
\left( {\bf y}_{\mbox{\footnotesize{fd}}} \right)_{k+1+L_1} & \hspace{-3mm} = & \hspace{-3mm} {y}_{\mbox{\footnotesize{fd}}}[k],
\end{eqnarray}
for $k=-L_1, \cdots, BT-1 + L_2$, where the constants $L_1, L_2 \in {\mathbb Z}$ are determined by the channel Doppler spread. Similarly, arrange the received noise samples as a $BT+L_1+L_2$ column vector
\begin{eqnarray}
\label{paper2_eqn205}
\left( {\bf n}_{\mbox{\footnotesize{fd}}} \right)_{k+1+L_1} & \hspace{-3mm} = & \hspace{-3mm} n_{\mbox{\footnotesize{fd}}}[k],
\end{eqnarray}
for $k=-L_1,\cdots, BT-1+L_2$. Finally, arrange the FD channel filter as a $(BT + L_1 + L_2) \times BT$ matrix
\begin{eqnarray}
\label{paper2_eqn206}
\left( {\bf H}_{\mbox{\footnotesize{fd}}} \right)_{k'+1,k+1} & \hspace{-3mm} = & \hspace{-3mm} h_{_{\mbox{\footnotesize{fd}}}}[k' - k - L_1\, ; \, k],
\end{eqnarray}
for $k'=0,1,\cdots,BT+L_1+L_2-1$ and $k=0,1,\cdots, BT-1$. Putting everything together, we obtain the matrix relation
\begin{eqnarray}
\label{paper2_eqn207}
{\bf y}_{\mbox{\footnotesize{fd}}} & = & {\bf H}_{\mbox{\footnotesize{fd}}} \, {\bf x}_{\mbox{\footnotesize{fd}}} \, + \, {\bf n}_{\mbox{\footnotesize{fd}}}.
\end{eqnarray}
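The TDM matrix (\ref{paper2_eqn199}) and the FDM matrix (\ref{paper2_eqn206}) share the same banded structure, so a single sketch covers both; the frame length, filter support, and randomly drawn taps below are illustrative assumptions.
\begin{verbatim}
import numpy as np

BT = 48                 # number of information symbols in the frame
K1, K2 = 2, 6           # filter support (K1, K2 for TDM; L1, L2 for FDM)

def conv_matrix(h_taps):
    """Build the (BT+K1+K2) x BT matrix from taps
    h_taps[n + K1, k] = h[n ; k], with -K1 <= n <= K2, 0 <= k < BT."""
    H = np.zeros((BT + K1 + K2, BT), dtype=complex)
    for k in range(BT):                  # transmit index (column)
        for n in range(-K1, K2 + 1):     # filter tap index
            H[k + n + K1, k] = h_taps[n + K1, k]
    return H

rng = np.random.default_rng(2)
h_taps = (rng.standard_normal((K1 + K2 + 1, BT))
          + 1j * rng.standard_normal((K1 + K2 + 1, BT))) / np.sqrt(2)
H = conv_matrix(h_taps)
qam = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = rng.choice(qam, BT)
y = H @ x     # add AWGN to obtain the full matrix relation
\end{verbatim}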
\section{Impact of fading in the crystalline regime}
\label{secperfcsi}
In this Section, we compare uncoded BER performance of Zak-OTFS, TDM and FDM for the Veh-A channel model \cite{EVAITU}. Since we are focusing on the impact of fading, we suppose that the input-output (I/O) relation is perfectly known. We study the performance of Zak-OTFS as we move along the hyperbola $\tau_p \cdot \nu_p = 1$ by choosing different Doppler periods $\nu_p$. We shall demonstrate that the performance of Zak-OTFS is superior in the crystalline regime, that performance approaches TDM as the Doppler period shrinks, and that performance approaches FDM as the delay period shrinks.
Our Veh-A channel consists of six channel paths,
and the delay-Doppler spreading function is given by
\begin{eqnarray}
\label{paper2_eqn13}
h(\tau,\nu) & = & \sum\limits_{i=1}^6 h_i \, \delta(\tau - \tau_i) \, \delta(\nu - \nu_i),
\end{eqnarray}
where $h_i, \tau_i,$ and $\nu_i$ respectively
denote the gain, delay, and Doppler shift of the $i$-th channel path.
Table-\ref{tab1_paper2} lists the power-delay profile for the six channel paths. The maximum Doppler shift is $\nu_{max} = 815$ Hz, the Doppler spread is $1.63$ KHz, and the delay spread is $\tau_{max} \Define \max_i \tau_i \, - \, \min_i \tau_i = 2.5 \, \mu s$.
The Doppler shift of the $i$-th path is modeled as $\nu_i = \nu_{max} \cos(\theta_i)$, where the variables $\theta_i, i=1,2,\cdots, 6$ are independent and distributed uniformly in the interval $[0 \,,\, 2\pi)$.
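For reference, the following short sketch draws one random realization of this spreading function from the tabulated power-delay profile; the Rayleigh distribution of the path gains is an assumption made here for illustration, the paper itself only specifies the normalization $\sum_i {\mathbb E}[\vert h_i \vert^2] = 1$.
\begin{verbatim}
import numpy as np

# Veh-A power-delay profile: relative delays (microseconds) and
# relative powers (dB); maximum Doppler shift nu_max = 815 Hz.
delays_us = np.array([0.0, 0.31, 0.71, 1.09, 1.73, 2.51])
powers_db = np.array([0.0, -1.0, -9.0, -10.0, -15.0, -20.0])
nu_max = 815.0

rng = np.random.default_rng(3)
p = 10.0 ** (powers_db / 10.0)
p /= p.sum()                       # so that sum_i E[|h_i|^2] = 1
# Rayleigh-fading path gains (an illustrative assumption).
h = np.sqrt(p / 2) * (rng.standard_normal(6)
                      + 1j * rng.standard_normal(6))
theta = rng.uniform(0.0, 2.0 * np.pi, 6)
nu = nu_max * np.cos(theta)        # Doppler shift of each path (Hz)
tau = delays_us * 1e-6             # delay of each path (s)
# (h[i], tau[i], nu[i]) specify one realization of the spreading function.
\end{verbatim}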
We fix the time duration $T$ of a data frame to be $T = 1.6$ ms, and we
fix the bandwidth $B$ to be $0.96$ MHz.
We now specify transmit and receive filters. For TDM,
the pulse shaping filter at the transmitter, and the matched
filter at the receiver, are both narrow TD sinc pulses
of bandwidth $B$ and TD pulse width proportional to $1/B$:
\begin{eqnarray}
w_{tx}(t) = w_{rx}(t) = \sqrt{B} \,sinc(B t).
\end{eqnarray}
For FDM, the transmit and receive filters are narrow FD sinc pulses with width proportional to $1/T$, so that the time-realization has duration $T$:
\begin{eqnarray}
w_{tx}(f) = w_{rx}(f) =\sqrt{T} \, sinc(f T).
\end{eqnarray}
For Zak-OTFS, the DD domain transmit and receive filters are
the product of a narrow pulse in the delay domain with width proportional to $1/B$ and a narrow pulse in the Doppler domain with width proportional to $1/T$:
\begin{eqnarray}
w_{tx}(\tau,\nu) & \hspace{-3mm} = & \hspace{-3mm} w_{rx}(\tau, \nu) = \sqrt{B T} \, sinc(B \tau) \, sinc(T \nu).
\end{eqnarray}
We measure BER performance as a function of the received SNR,
which is the ratio of the power of the information carrying signal to the power of the AWGN in the received TD signal.
We normalize the complex channel gains by setting $\sum\limits_{i=1}^{6}{\mathbb E}[\vert h_i \vert^2] = 1$. We define the transmitted signal power $P_T$ to be the ratio of the average energy of the transmitted signal to the frame duration $T$:
\begin{eqnarray}
P_T & \Define & \frac{{\mathbb E}\left[ \int \vert {s}_{_{\mbox{\footnotesize{td}}}}(t) \vert^2 \, dt \right]}{T}.
\end{eqnarray}
If $N_0$ is the noise power spectral density of the AWGN ${n}_{_{\mbox{\footnotesize{td}}}}(t)$, then the noise power at the receiver is $N_0 B$, and
the signal-to-noise ratio (SNR) is given by
\begin{eqnarray}
\label{paper2_gamma_defeqn}
\gamma & \Define & \frac{P_T}{N_0 B}.
\end{eqnarray}
We assume that the Zak-OTFS receiver knows the effective
channel matrix ${\bf H}_{_{\mbox{\footnotesize{dd}}}}$
perfectly, that the TDM receiver knows ${\bf H}_{_{\mbox{\footnotesize{td}}}}$ perfectly, and that the
FDM receiver knows ${\bf H}_{_{\mbox{\footnotesize{fd}}}}$
perfectly. \emph{Note that perfect knowledge of the I/O relation does not imply perfect knowledge of the channel delay-Doppler spreading function.}
The matrix-vector I/O relations given by
(\ref{paper2_eqnyddhdd}), (\ref{paper2_eqn4}) and (\ref{paper2_eqn207}) have the same structure as that of a MIMO system. Hence, we use a
Linear Minimum Mean Squared Error (LMMSE) equalizer to detect the transmitted $4$-QAM information symbols.
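A minimal sketch of the detection step is given below, in the standard LMMSE form ${\widehat {\bf x}} = ({\bf H}^H{\bf H} + \sigma^2 {\bf I})^{-1}{\bf H}^H{\bf y}$ for unit-energy symbols (the paper does not spell out the equalizer expression, so this particular form is an assumption); the same routine applies unchanged to the Zak-OTFS, TDM and FDM channel matrices.
\begin{verbatim}
import numpy as np

def lmmse_detect(H, y, sigma2):
    """LMMSE equalizer followed by hard 4-QAM decisions.
    H: channel matrix, y: received vector, sigma2: noise variance
    per received sample (unit-energy symbols assumed)."""
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
    x_hat = np.linalg.solve(A, H.conj().T @ y)
    # map each soft estimate to the nearest 4-QAM point
    return (np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)) / np.sqrt(2)
\end{verbatim}
The uncoded BER is then obtained by counting bit errors between the detected and transmitted $4$-QAM symbols over many channel and noise realizations.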
\begin{table}
\caption{Power Delay Profile of Doubly-spread Veh-A Channel.}
\centering
\begin{tabular}{ | c || c | c | c | c | c | c |}
\hline
Path no. $i$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ \\
\hline
Rel. Delay $\tau_i$ ($\mu s$) & $0$ & $0.31$ & $0.71$ & $1.09$ & $1.73$ & $2.51$ \\
\hline
Rel. Power $\frac{{\mathbb E}[\vert h_i \vert^2]}{{\mathbb E}[\vert h_1 \vert^2]}$ (dB) & $0$ & $-1$ & $-9$ & $-10$ & $-15$ & $-20$ \\
\hline
\end{tabular}
\label{tab1_paper2}
\end{table}
\begin{figure}
\includegraphics[width=9cm, height=6.7cm]{Fig1.jpg}
\caption{BER performance of Zak-OTFS, TDM and FDM,
on a doubly-spread Veh-A channel as we traverse the hyperbola $\tau_p \cdot \nu_p = 1$. Performance of
Zak-OTFS is superior in the crystalline regime
($\nu_p = 15$ KHz), performance approaches TDM as the delay period grows ($\nu_p = 1.25$ and $0.625$ KHz) and performance approaches FDM as the Doppler period grows ($\nu_p = 480$ and $960$ KHz).}
\label{fig1}
\end{figure}
Fig.~\ref{fig1} illustrates how BER performance of Zak-OTFS
changes as we move along the hyperbola $\tau_p \cdot \nu_p = 1$, choosing Doppler periods $\nu_p = 0.625, 1.25, 15, 480$, and $960$ KHz. When $\nu_p = 15$ KHz, the crystallization
conditions hold, the I/O relation
is non-fading (see (\ref{paper2_nofading1})),
and BER performance is superior to both TDM and FDM. This is because the I/O relations for TDM and FDM exhibit fading on doubly spread channels.
As the Doppler period increases, first to $480$ KHz, then to $960$ KHz, the delay spread is no longer less than the delay period, and Zak-OTFS operates outside the crystalline regime.
Aliasing occurs along the delay axis, and the received symbol
energy varies (fades) along the Doppler axis.
When $\nu_p = 960$ KHz, the BER performance of Zak-OTFS
coincides with FDM, which is expected, given that FDM coincides
with Zak-OTFS in the limit of vanishing delay period.
As the Doppler period decreases, first to $1.25$ KHz, then to $0.625$ KHz, the Doppler spread is no longer less than the Doppler period, and Zak-OTFS operates outside the crystalline regime. Aliasing now occurs along the Doppler axis, and the received symbol energy varies (fades) along the delay axis.
When $\nu_p = 0.625$ KHz, the BER performance of Zak-OTFS almost coincides with TDM, which is expected, given that TDM
coincides with Zak-OTFS in the limit of vanishing Doppler period.
A \emph{delay-only} Veh-A channel is matched to TDM and mismatched
to FDM. The channel is frequency-selective, not time-selective,
hence FDM exhibits fading, and TDM does not.
Fig.~\ref{fig2} compares BER performance of TDM and FDM with that of Zak-OTFS in a delay-only Veh-A channel (i.e., $\nu_i = 0, i=1,2,\cdots, 6$) for Doppler periods $\nu_p = 0.625, 15$, and $960$ KHz. When
$\nu_p = 0.625$ KHz, the delay period $\tau_p = 1.6$ ms is much larger
than the delay spread ($2.5 \, \mu s$), and BER performance is essentially the same as that of TDM. As long as we operate in the crystalline regime (for example, $\nu_p = 15$ KHz and $\tau_p = 66.66 \, \mu s$), there is no fading, and BER performance changes very little with change in $\nu_p$.
Outside the crystalline regime, when the Doppler period is large and the delay period is smaller than the delay spread (for example, $\nu_p = 960$ KHz and $\tau_p = 1.04 \, \mu s$), the BER performance degrades considerably.
\begin{figure}[h]
\includegraphics[width=9.5cm, height=6.7cm]{Fig2.jpg}
\caption{BER performance of Zak-OTFS, TDM, and FDM on a delay-only Veh-A channel. The I/O relation for Zak-OTFS in the crystalline regime
($\nu_ p = 15$ KHz) is non-fading, hence BER performance matches that of TDM. Frequency selective fading degrades the BER performance of FDM.
}
\label{fig2}
\end{figure}
In contrast, a \emph{Doppler-only} Veh-A channel is matched
to FDM and mismatched to TDM. The channel is time-selective,
not frequency-selective, hence TDM exhibits fading and FDM does not.
Fig.~\ref{fig3} compares BER performance of TDM and FDM with that
of Zak-OTFS in a Doppler-only Veh-A channel (i.e., $\tau_{max} = 0$), for Doppler periods $\nu_p = 0.625, 15$ and $960$ KHz.
When $\nu_p = 960$ KHz, the Doppler period is much larger than the Doppler spread ($1.63$ KHz), and BER performance is essentially the same as that of FDM. Again, when we operate in the crystalline regime (for example
$\nu_p = 15$ KHz and $\tau_p = 66.66 \, \mu s$) there is no fading, and the BER performance changes very little with change in $\nu_p$. Outside the crystalline regime
(for example $\nu_p = 0.625$ KHz), the BER performance degrades considerably.
\begin{figure}[h]
\includegraphics[width=9.5cm, height=6.7cm]{Fig3.jpg}
\caption{BER performance of Zak-OTFS, TDM, and FDM,
on a Doppler-only Veh-A channel.
The I/O relation for Zak-OTFS in the crystalline regime
($\nu_p = 15$ KHz) is non-fading,
hence BER performance matches that of FDM. Time
selective fading degrades BER performance of TDM.
}
\label{fig3}
\end{figure}
\section{Model-free operation in the Crystalline regime}
\label{section5paper2}
In this section, we compare uncoded BER performance of Zak-OTFS, TDM and
FDM, in the typical scenario where the effective channel matrix is not
known perfectly at the receiver. The effective channel matrix
can be acquired from the I/O relation (see Section \ref{seciorelation}),
which in turn depends on the delay-Doppler spreading function
$h(\tau,\nu)$. The \emph{model-dependent} and \emph{model-free}
modes of operation correspond to different approaches to estimating
the effective channel matrix.
\textbf{Model-dependent operation:}
Here we impose a model on the delay-Doppler spreading function $h(\tau, \nu)$,
typically by prescribing a finite number of paths, and constraining their delay and Doppler shifts. Given this model, the receiver estimates
$h(\tau, \nu)$, then forms an estimate of the effective channel matrix. The accuracy of this estimate is limited
by time and bandwidth constraints on the pilot signal, and by any
mismatch between the channel model and the physical channel.
\textbf{Model-free operation:}
Here the receiver estimates the effective channel filter
taps directly, without reference to any model for the delay-Doppler spreading function $h(\tau, \nu)$. Model-free operation is possible in the crystalline regime with Zak-OTFS because the effective channel filter taps $h_{_{\mbox{\footnotesize{dd}}}}[k,l]$
can be estimated from the response to a single pilot symbol (see
Sections \ref{secCrystalline} and \ref{secCrysdecomp}).
The model-dependent mode of operation is a challenge for the Veh-A channel introduced in Section \ref{secperfcsi} (with sinc pulse shaping filters).
Since the channel bandwidth is $B = 0.96$ MHz, the delay domain resolution is $1/B \approx 1.04 \, \mu s$, and the first three paths introduce delay shifts in the interval $[ 0 \,,\, 0.71] \, \mu s$. These paths are not separable, and so cannot be estimated accurately.
However, model-free operation is still possible for TDM, FDM, and for Zak-OTFS in the crystalline regime ($\nu_p = 15$ KHz). We estimate the effective channel filter taps from the response to a pilot frame with a single high-energy pilot and no information symbols. We want to start
from ground truth, so we suppose that the received pilot is not subject to AWGN. For Zak-OTFS we locate the single pilot at the center $(M/2 \,,\, N/2)$ of the fundamental period, and we use (\ref{paper2_predicteqn1}) to estimate the effective filter taps. For TDM and FDM we locate the single pilot
at $k = BT/2$, estimate the effective channel filter taps
$h_{_{\mbox{\footnotesize{td/fd}}}}[n \, ; \, k=BT/2], n \in {\mathbb Z}$,
and simply reuse this estimate for all other taps $h_{_{\mbox{\footnotesize{td/fd}}}}[n \, ; \, k]$, $n \in {\mathbb Z}$, $k=0,1,\cdots, BT-1$, $k \ne BT/2$.
We attempt model-dependent operation, using the pilots described above to
estimate the complex channel gain, delay and Doppler shift of each channel
path. We interpret DD points with significant energy as the locations
of channel path delays and Doppler shifts. Given the locations of delay and
Doppler shifts, the received pilot signal depends linearly on the channel path gains, so we use least squares to estimate the vector of complex channel gains, then reconstruct ${\bf H}_{_{\mbox{\footnotesize{dd}}}}$
using (\ref{paper2_eqn12}).
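A sketch of this least-squares step is given below; the construction of the dictionary matrix from the hypothesized delay and Doppler locations follows the I/O relations above and is only summarized in a comment, and the names and shapes are illustrative.
\begin{verbatim}
import numpy as np

def ls_path_gains(A, y_pilot):
    """Least-squares estimate of the complex path gains.
    A: matrix whose p-th column is the (vectorized) noise-free pilot
       response of a unit-gain path at the p-th hypothesized
       (delay, Doppler) location, computed from the I/O relation.
    y_pilot: observed pilot response.
    Since y_pilot = A g + noise is linear in g, least squares applies."""
    g_hat, *_ = np.linalg.lstsq(A, y_pilot, rcond=None)
    return g_hat
\end{verbatim}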
Fig.~\ref{fig4} compares BER performance of the model-dependent and model-free modes of Zak-OTFS in the crystalline regime. It also includes BER performance of TDM and FDM, which is poor, and which does not improve with increasing SNR. Since the channel is doubly spread, the I/O relation for TDM/FDM is non-stationary in the TD/FD, and it cannot be accurately predicted from the response to a single pilot at $k = BT/2$. Hence, the estimate of the effective channel matrix is inaccurate, and the probability of mis-detection is high. BER performance of the model-dependent mode of Zak-OTFS is slightly better, but it also exhibits a high error floor because insufficient frame bandwidth and duration precludes accurate estimation of channel path gains, delays and Doppler shifts.\footnote{\footnotesize{For example, for the Veh-A channel, the first three paths
introduce delay shifts which lie in the interval $[0 \,,\, 0.71 ] \, \mu s$.
With a channel bandwidth of $B = 0.96$ MHz, the delay domain resolution is $1/B \approx 1.04 \, \mu s$ which is more than $0.71 \, \mu s$ and therefore
the first three dominant paths are not separable/resolvable.
These non-separable paths cannot be estimated accurately.}} BER performance of the model-free mode of Zak-OTFS is considerably better, only slightly worse than performance with perfect knowledge of the I/O relation.
\begin{figure}[h]
\includegraphics[width=9cm, height=6.7cm]{Fig_ZakOTFS_ModelFreeDependent.jpg}
\caption{BER performance of Zak-OTFS, TDM, and FDM on the Veh-A channel (Section IV), where signal bandwidth and duration is insufficient to estimate channel path delays and Doppler shifts. In the crystalline regime ($\nu_p = 15$ KHz), BER performance of the model-free mode of Zak-OTFS is only slightly worse than performance with perfect knowledge of the I/O relation.}
\label{fig4}
\end{figure}
Next, through Fig.~\ref{fig5} we explore what we lose when it is possible to learn the channel, but we choose to operate model-free. Again, we fix the time duration $T$ of a data frame to $1.6$ ms, and we fix the bandwidth $B$ to $0.96$ MHz. There are $M=64$ delay bins and $N=24$ Doppler bins. Again, the delay spread is roughly $2.5 \, \mu s$, and the Doppler spread is $1.63$ KHz. We consider a $5$-path resolvable channel, where the delay shifts are integer multiples of the delay resolution $1/B$, and the Doppler shifts are integer multiples of the Doppler resolution $1/T$. To be precise, the path delays are $[0, 1, 2, 4, 7] \times 1/B$, the Doppler shifts are $[1, -2, -3, 3, 4] \times 1/T$, and the relative average power of the paths is $[0, -1, -9, -10, -13]$ dB. Again, we consider Zak-OTFS operating in the crystalline regime ($\nu_p = 15$ KHz).
Fig.~\ref{fig5} illustrates that for this resolvable channel, model-dependent performance coincides with that of performance with perfect knowledge of the I/O relation. This is expected since estimation of the channel spreading function is accurate when paths are resolvable. Further, model-free performance is only slightly inferior, despite the high Doppler spread of $4.375$ KHz. Why the small degradation? Pulse shaping filters cause self-interaction/aliasing between the received pilot in the fundamental period and its quasi-periodic replicas. Though this aliasing is small in the crystalline regime, it is non-zero.\footnote{\footnotesize{In the Zak-OTFS I/O relation, the output DD signal is a twisted convolution of the effective discrete DD channel filter $h_{_{\mbox{\footnotesize{dd}}}}[k,l]$ with the input information signal $x_{_{\mbox{\footnotesize{dd}}}}[k,l]$, and therefore each information symbol is spread in the DD domain by an amount equal to the DD spread/width of $h_{_{\mbox{\footnotesize{dd}}}}[k,l]$. Since $h_{_{\mbox{\footnotesize{dd}}}}[k,l]$ is sampled from the effective continuous DD channel filter $h_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu)$ at integer multiples of $1/B$ and $1/T$ along the delay and Doppler domains respectively (see equation ($36$) in Part I), the DD spread/width of $h_{_{\mbox{\footnotesize{dd}}}}[k,l]$ is directly related to the DD spread/width of $h_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu)$. Further, $h_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu) = w_{rx}(\tau,\nu) *_{\sigma} h(\tau,\nu) *_{\sigma} w_{tx}(\tau,\nu)$ is a twisted convolution of the channel DD spreading function and the transmit and receive pulse shaping filters (see equation ($36$) in Part I). The DD domain spread of $h_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu)$ is therefore the sum of the DD spread/width of $h(\tau,\nu)$ and the DD spread/width of the transmit and receive DD pulse shaping filters $w_{tx}(\tau,\nu)$ and $w_{rx}(\tau,\nu)$. Hence the maximum effective DD spread of $h_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu)$ is more than $\tau_{max}$ along the delay domain and more than $2 \nu_{max}$ along the Doppler domain.}}
\begin{figure}[h]
\hspace{-2mm}
\includegraphics[width=9cm, height=6.7cm]{ZakOTFS_resolvable_fivepath.jpg}
\caption{
BER performance of model-dependent and model-free Zak-OTFS on a $5$-path resolvable channel, where the delay shifts are integer multiples of the delay resolution $1/B$, and the Doppler shifts are integer multiples of the Doppler resolution $1/T$. Model-dependent performance coincides with performance under perfect knowledge of the I/O relation. Model-free performance is only slightly inferior.
}
\label{fig5}
\end{figure}
Next, through Fig.~\ref{fig6} we explore model-free performance in the crystalline regime as we traverse the hyperbola $\tau_p \cdot \nu_p = 1$, moving towards FDM. We consider the Veh-A channel introduced in Section \ref{secperfcsi}, fixing the SNR at $16$ dB. We measure performance for Doppler periods $\nu_p = 3.75, 7.5, 15$, and $30$ KHz, as $\nu_{max}$ varies between $500$ Hz and $4$ KHz. For $\nu_p = 15, 30$ KHz, we are deep in the crystalline regime, since the Doppler period is significantly larger than the Doppler spread (which is at most $8$ KHz), and the delay period ($66.66, 33.33 \, \mu s$) is significantly larger than the delay spread ($2.5 \, \mu s$). The spacing between quasi-periodic replicas limits self-interaction, even with sinc pulse shaping filters. The Zak-OTFS I/O relation is predictable, and we are able to accurately estimate the taps of the effective channel filter. Fig.~\ref{fig6} illustrates that increasing the spacing from $15$ KHz to $30$ KHz improves performance slightly. For both Doppler periods, the BER performance is excellent, and almost invariant to increasing Doppler spread.
BER performance changes as we reduce the Doppler period to $7.5$ KHz. When the Doppler spread $2 \nu_{max}$ is less than $4$ KHz, the interaction between the received DD pilot in the fundamental period and its quasi-periodic replicas is not significant. As the Doppler spread increases beyond $4$ KHz, BER increases steadily. When the Doppler spread is the same as the Doppler period ($\nu_{max} = 3.75$ KHz), BER performance degrades completely. This is because the sinc pulse shaping filters leak energy outside their ideal delay width ($1/B = 1.04 \, \mu s$) and Doppler width ($1/T = 625$ Hz). When we reduce the Doppler period to $3.75$ KHz, the BER performance starts to degrade earlier at $\nu_{max} = 500$ Hz.
\begin{figure}[h]
\hspace{-7mm}
\includegraphics[width=9.5cm, height=7.0cm]{ZakOTFS-BERvsnumaxfixedSNRSinc.jpg}
\caption{Model-free performance as a function of $\nu_{max}$ for the Veh-A channel introduced in Section \ref{secperfcsi}. When the Doppler spread $2 \nu_{max}$ is significantly less than $\nu_p$, performance does not degrade as $\nu_{max}$ increases. When the Doppler spread $2 \nu_{max}$ is close to $\nu_p$, performance degrades because of Doppler domain aliasing. When operating deep in the crystalline regime, BER performance is consistently excellent over a wide range of Dopplers.}
\label{fig6}
\end{figure}
\begin{figure}[h]
\hspace{-3mm}
\includegraphics[width=10cm, height=7.0cm]{Fig7.jpg}
\caption{Better pulse shaping filters extend the region of reliable model-free operation. BER performance, for the sinc pulse, and for RRC pulses with $\beta_{\tau} = 0.1$, and $\beta_{\nu} = 0.3, 0.6$, and $0.9$, as $\nu_{max}$ varies between $500$ Hz and $4$ KHz.
}
\label{fig7}
\end{figure}
Next, through Fig.~\ref{fig7} we show that better pulse shaping filters extend the region of reliable model-free operation by reducing aliasing. We consider DD domain Root Raised Cosine (RRC) pulses (specified in (\ref{rrcpulse_eqn1})), parameterized by a roll-off parameter $\beta_{\tau}$ that controls localization in delay, and a parameter $\beta_{\nu}$ that controls localization in Doppler. The sinc pulse corresponds to $\beta_{\tau} = \beta_{\nu} = 0$, and localization improves as the parameters $\beta_{\tau}, \beta_{\nu}$ increase from $0$ to $1$. Better localization implies less DD domain aliasing,
and therefore more accurate prediction/estimation of the I/O relation for a higher Doppler spread. We fix $\beta_{\tau} = 0.1$, and for $\beta_{\nu} = 0.3, 0.6$, and $0.9$, we measure BER performance over the Veh-A channel as $\nu_{max}$ varies between $500$ Hz and $4$ KHz (fixed SNR of $16$ dB, $M = 128$, $N = 12$, $\nu_p = 7.5$ KHz). Fig.~\ref{fig7} illustrates that increasing $\beta_{\nu}$ from $0$ (a sinc pulse) to $0.3$ extends the range of Doppler spreads for which BER performance is flat (from $4$ KHz to $5$ KHz). Increasing $\beta_{\nu}$ further, to $0.6, 0.9$, extends the range still further. The cost of introducing better filters is a reduction in spectral efficiency. When we replace a sinc filter by an RRC filter, we increase the frame duration by a factor $(1 + \beta_{\nu})$, and we increase the frame bandwidth by a factor $(1 + \beta_{\tau})$.
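For concreteness, the sketch below evaluates a standard textbook root raised cosine pulse and an assumed product-form DD filter $w(\tau,\nu) = \sqrt{BT}\, \mbox{rrc}_{\beta_{\tau}}(B\tau)\, \mbox{rrc}_{\beta_{\nu}}(T\nu)$; the exact functional form and normalization used in the paper are those of (\ref{rrcpulse_eqn1}), which may differ in detail.
\begin{verbatim}
import numpy as np

def rrc(t, beta):
    """Standard unit-rate root raised cosine pulse, 0 <= beta <= 1
    (beta = 0 reduces to the sinc pulse)."""
    if beta == 0:
        return np.sinc(t)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.empty_like(t)
    eps = 1e-9
    at0 = np.abs(t) < eps
    at1 = np.abs(np.abs(t) - 1.0 / (4 * beta)) < eps
    reg = ~(at0 | at1)
    tr = t[reg]
    out[reg] = (np.sin(np.pi * tr * (1 - beta))
                + 4 * beta * tr * np.cos(np.pi * tr * (1 + beta))) / \
               (np.pi * tr * (1 - (4 * beta * tr) ** 2))
    out[at0] = 1 - beta + 4 * beta / np.pi
    out[at1] = (beta / np.sqrt(2)) * (
        (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
        + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
    return out

def w_dd(tau, nu, B, T, beta_tau, beta_nu):
    """Assumed product-form DD pulse shaping filter (see lead-in text)."""
    return np.sqrt(B * T) * rrc(B * tau, beta_tau) * rrc(T * nu, beta_nu)
\end{verbatim}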
\begin{figure*}
\centering
\includegraphics[width=18cm, height=6.0cm]{paper2_mcotfs_twostep.jpg}
\caption{Signal processing for MC-OTFS (two step).}
\label{figmcotfstwosteppaper2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18cm, height=6.0cm]{paper2_mcotfs.jpg}
\caption{Zak transform based equivalent signal processing
for the two-step MC-OTFS.}
\label{figmcotfspaper2}
\end{figure*}
\section{Multicarrier Approximations to Zak-OTFS}
\label{secotfsvariants}
So far, all existing work on OTFS presumes a
two-step modulation where the DD domain information symbols are first transformed to the time-frequency (TF) domain (using the Inverse Symplectic Finite Fourier transform or Inverse SFFT). The resulting TF symbols are then converted to a TD transmit signal using the Heisenberg transform (which is essentially an OFDM modulator) \cite{RH1} (see Fig.~\ref{figmcotfstwosteppaper2}). The compatibility of this two-step modulation with existing 4G/5G modems is one of the primary reasons why it was proposed.
This two-step modulation is fundamentally different from Zak-OTFS. One can think of it as a multicarrier approximation to Zak-OTFS, which we refer to here as MC-OTFS.
Interestingly, the two-step implementation of MC-OTFS can be cast in the framework of the Zak transform, as shown in Fig.~\ref{figmcotfspaper2}.
The simplicity of Zak-OTFS signal processing (shown in Fig.~\ref{figzakotfspaper2}) results from the fact that channels and filters both act by twisted convolution and all signals in the signal processing chain are quasi-periodic. Since twisted convolution is associative, the I/O relation admits a simple structure: the output signal is the twisted convolution of the input signal with an effective DD channel filter (see (\ref{paper2_eqn229})). This particular structure becomes predictable in the crystalline regime, implying that, in the crystalline regime, the complete I/O relation can be accurately estimated from the channel response to a single DD pilot. In contrast, as we will see now, the I/O relation of MC OTFS cannot be expressed as a simple twisted convolution and, consequently, it does not appear to satisfy any reasonable form of predictability. That is, it is not clear how to estimate the I/O relation from the observed channel response to a single DD pilot.
A careful study of the Zak-transform implementation of MC-OTFS reveals that its underlying signal processing is fundamentally different from, and more complicated than, that of Zak-OTFS. While the signal processing of Zak-OTFS comprises a pure cascade of twisted convolutions, the signal processing of MC-OTFS comprises a mixture of linear convolution, multiplication by a quasi-periodic function, and twisted convolution. As shown in Fig.~\ref{figmcotfspaper2}, in MC-OTFS, the information is encoded as a discrete periodic function (instead of a discrete quasi-periodic function as in Zak-OTFS) on the DD domain, given by
\begin{eqnarray}
x(\tau,\nu) & \hspace{-3mm} = & \hspace{-3mm} \sum\limits_{k, l \in {\mathbb Z}} x[k,l] \, \delta\left( \tau - \frac{k \tau_p}{M} \right) \, \delta\left( \nu - \frac{l \nu_p}{N} \right).
\end{eqnarray}
The next step is linear convolution (instead of twisted convolution as in Zak-OTFS) of the information with a transmit filter $w_{tx}(\tau, \nu)$, resulting in a periodic continuous DD domain function. The transmit pulse is usually taken to be the SFFT of a TF window (whose time and frequency support are the time duration and bandwidth of the MC-OTFS frame). The next step is to convert the periodic DD domain function to a quasi-periodic one. This is achieved by multiplication with a fixed quasi-periodic signal $G_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)$ called the multi-carrier generator, which is nothing but the Zak transform of the TD transmit pulse $g_{tx}(t)$ in the two-step OTFS modulator. The transmitted signal is the inverse Zak transform of the resulting quasi-periodic signal. At the receiver, the Zak transform of the received signal, which is quasi-periodic, is converted into a periodic continuous function by means of multiplication with the complex conjugate of the multi-carrier signal generator. This is then followed by linear convolution with a receive filter $w_{rx}(\tau, \nu)$, resulting in $y^{w_{rx}}(\tau, \nu)$, which is then sampled. Ignoring the AWGN term, the relation between the information function $x(\tau, \nu)$ and the received filtered function $y^{w_{rx}}(\tau, \nu)$ is given in Table-\ref{tabeqn} (here $\star$ denotes linear convolution and $\cdot$ denotes multiplication). The counterpart relation for Zak-OTFS is also given in this table. The main observation is that in Zak-OTFS, both the channel and the filters act through twisted convolution; hence, by associativity, the end-to-end signal processing is equivalent to a single twisted convolution with the effective channel filter $h_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu)$. In contrast, since the MC-OTFS I/O relation is a mix of linear
convolution, multiplication and twisted convolution, it cannot be expressed as a simple action with some effective filter.
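The collapse of the Zak-OTFS chain into a single effective filter is easy to verify numerically. The sketch below implements the discrete twisted convolution with the phase convention of (\ref{paper2_eqn7b}) on sparse DD functions (the taps are random and purely illustrative) and checks that filtering, channel action, and receive filtering applied in sequence agree with a single twisted convolution by the composed effective filter.
\begin{verbatim}
import numpy as np

MN = 96        # product M*N fixing the phase 2*pi/(M*N)
rng = np.random.default_rng(4)

def tconv(a, b):
    """Discrete twisted convolution of sparse DD functions {(k,l): value}:
    (a *s b)[k,l] = sum a[k',l'] b[k-k',l-l'] exp(j 2 pi l'(k-k')/(MN))."""
    out = {}
    for (k1, l1), v1 in a.items():
        for (k2, l2), v2 in b.items():
            key = (k1 + k2, l1 + l2)
            out[key] = out.get(key, 0.0) + \
                v1 * v2 * np.exp(2j * np.pi * l1 * k2 / MN)
    return out

def rand_dd(n):
    return {(int(k), int(l)): complex(g, gi)
            for k, l, g, gi in zip(rng.integers(-3, 4, n),
                                   rng.integers(-3, 4, n),
                                   rng.standard_normal(n),
                                   rng.standard_normal(n))}

w_rx, h, w_tx, x = rand_dd(4), rand_dd(5), rand_dd(4), rand_dd(6)

# Filtering -> channel -> filtering, step by step ...
lhs = tconv(w_rx, tconv(h, tconv(w_tx, x)))
# ... equals a single twisted convolution with the effective filter.
h_eff = tconv(tconv(w_rx, h), w_tx)
rhs = tconv(h_eff, x)
err = max(abs(lhs.get(key, 0) - rhs.get(key, 0))
          for key in set(lhs) | set(rhs))
print("max deviation:", err)     # ~1e-15 (associativity)
\end{verbatim}
No such collapse is available for the MC-OTFS chain of Table-\ref{tabeqn}, which interleaves linear convolution and multiplication with the twisted convolution by the channel.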
\begin{table*}
\vspace{-4mm}
\caption{I/O relation for Zak-OTFS and MC-OTFS}
\centering
\begin{tabular}{ | c | }
\hline
MC-OTFS I/O relation \\
\hline
\\
$y^{w_{rx}}(\tau, \nu) \, = \, w_{rx}(\tau, \nu) \, \star \, \left[G_{_{\mbox{\footnotesize{dd}}}}^*(\tau, \nu) \cdot \left( h(\tau,\nu) *_{\sigma} \left\{ G_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu) \cdot \left[ w_{tx}(\tau, \nu) \star x(\tau,\nu) \right] \, \right\} \right) \right]$ \\
\\
\hline
Zak-OTFS I/O relation \\
\hline
\\
$y^{w_{rx}}_{_{\mbox{\footnotesize{dd}}}}(\tau, \nu) \, = \, w_{rx}(\tau, \nu) \, *_{\sigma} \, h(\tau, \nu) \, *_{\sigma} \, w_{tx}(\tau, \nu) \, *_{\sigma} \, x_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu) \, = \, h_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu) \, *_{\sigma} \, x_{_{\mbox{\footnotesize{dd}}}}(\tau,\nu)$ \\
\hline
\end{tabular}
\label{tabeqn}
\end{table*}
It is illuminating to observe how MC-OTFS and Zak-OTFS evolve as we move towards TDM by shrinking the Doppler period $\nu_p$. As we traverse the hyperbola $\tau_p \cdot \nu_p = 1$ in the limit $\nu_p \rightarrow 0$, the Zak transform converges to the identity, and Zak-OTFS converges to TDM, whose I/O relation for delay-only channels is given by linear convolution with an effective channel filter; hence it is predictable. In contrast, as $\nu_p \rightarrow 0$, MC-OTFS does not converge to TDM. The limit in this case is a non-intuitive modulation whose I/O relation cannot be expressed as a simple linear convolution of an input signal with an effective TD channel filter.
\begin{figure*}
\centering
\includegraphics[width=12cm, height=8.0cm]{mcotfs_colormap.jpg}
\caption{Heatmap showing Relative Prediction Error (RPE), in dB, for MC-OTFS (with sinc pulses), as a function of delay (horizontal axis), and Doppler (vertical axis). RPE for MC-OTFS is larger than that for Zak-OTFS.}
\label{fig3paper2_mcotfs}
\end{figure*}
Recall that Fig.~\ref{fig3paper2} depicts the relative prediction error for Zak-OTFS as a two-dimensional heatmap, for three different points on the period hyperbola $\tau_p \cdot \nu_p = 1$. Fig.~\ref{fig3paper2_mcotfs} depicts the relative prediction error for MC-OTFS, with respect to the same two-path channel, and the same transmit and receive filters (given by sinc pulses). The multi-carrier generator is taken to be the Zak transform of a rectangular TD pulse $g(t)$ of duration $\tau_p$. The MC-OTFS I/O relation is derived from the continuous I/O relation given in Table-\ref{tabeqn}.
In this relation the input is a discrete periodic function obtained by extending $x[k,l]$ periodically with periods $M = B \tau_p$ along delay and $N = T \nu_p$ along Doppler respectively. The output is obtained by sampling $y^{w_{rx}}(\tau, \nu)$
at integer multiples of $1/B$ along the delay and $1/T$ along Doppler. As for Zak-OTFS, we estimate the MC-OTFS I/O relation from the received response to a pilot impulse located at $(M/2, N/2)$.
The point $\nu_p = 1.25$ KHz, $\tau_p = 800 \, \mu s$ represents the delay asymptote outside the crystalline regime, and the point $\nu_p = 240$ KHz, $\tau_p = 4.16 \, \mu s$ represents the Doppler asymptote outside the crystalline regime. For these points, the relative prediction error is high for both Zak-OTFS and MC-OTFS. The point $\nu_p = 30$ KHz, $\tau_p = 33.3 \, \mu s$ represents the crystalline regime where relative prediction error for MC-OTFS is larger than for Zak-OTFS (the heatmap in Fig.~\ref{fig3paper2_mcotfs} is a mixture of yellow and green, whereas the heatmap in Fig.~\ref{fig3paper2} is mostly green).
\begin{figure}[h]
\hspace{-7mm}
\includegraphics[width=10cm, height=7cm]{Fig14.jpg}
\caption{Uncoded BER performance for Zak-OTFS and MC-OTFS on a Veh-A channel as a function of increasing SNR. The performance of model-dependent Zak-OTFS and MC-OTFS is poor since channel bandwidth and duration is not sufficient to estimate channel path gains, delays and Doppler shifts. Performance of model-free Zak-OTFS is superior to that of model-free MC-OTFS, because the Zak-OTFS I/O relation is more predictable.
}
\label{fig82}
\end{figure}
Fig.~\ref{fig82} compares BER performance of Zak-OTFS and MC-OTFS on the Veh-A channel introduced in Section \ref{secperfcsi}, and used in Fig.~\ref{fig4}. We have changed the channel parameters slightly by increasing the Doppler shift $\nu_{max}$ from $815$ Hz to $2$ KHz. For MC-OTFS, the TF window and the multicarrier signal generator are the same as those used in Fig.~\ref{fig3paper2_mcotfs} (see the above discussion). Given perfect knowledge of the I/O relation the performance of Zak-OTFS and MC-OTFS is quite similar. However, with imperfect knowledge of the I/O relation, model-free performance of Zak-OTFS is greatly superior to that of MC-OTFS because the Zak-OTFS I/O relation is more predictable than that of MC-OTFS.
\underline{Remark:} Fig.~\ref{fig82} reveals that with perfect knowledge of the I/O relation, both Zak-OTFS and MC-OTFS exhibit similar BER performance. Acquiring the I/O relation in the model-free approach amounts to estimating the channel response to a pilot at any arbitrary location. In Zak-OTFS, operating within the crystalline regime allows the receiver to accurately predict the channel response to an arbitrary pilot from the response to a single pilot, facilitating efficient acquisition of the complete I/O relation. In contrast,
in MC-OTFS, due to the complex nature of its I/O relation, such a prediction scheme does not appear to be possible; hence, accurate acquisition of the complete I/O relation is less efficient than for Zak-OTFS.
\begin{figure}[h]
\hspace{-7mm}
\includegraphics[width=10cm, height=7.0cm]{Fig15.jpg}
\caption{
Uncoded BER performance of model-free Zak-OTFS and model-free MC-OTFS as a function of $\nu_{max}$ for the Veh-A channel introduced in Section \ref{secperfcsi} ($\nu_p = 15$ KHz, $M =64, N= 24$). Zak-OTFS is more robust to high Doppler spreads.
}
\label{fig15}
\end{figure}
Fig.~\ref{fig15} compares the model-free performance of Zak-OTFS and MC-OTFS as a function of increasing Doppler spread while operating at a fixed SNR of $16$ dB. The simulations use the Veh-A channel introduced in Section \ref{secperfcsi}. In particular, the figure illustrates that when $\nu_{max}$ is less than $1$ KHz (Doppler spread less than $2$ KHz), the BER performance of Zak-OTFS and MC-OTFS is essentially the same. However, as the Doppler spread increases, the BER performance of MC-OTFS degrades steadily, while that of Zak-OTFS remains almost constant. The reason for this gap is that, when the Doppler spread is high, the Zak-OTFS I/O relation is more predictable than that of MC-OTFS.
\begin{comment}
{
\underline{Remark:}
Over the past few years numerous variants of OTFS, including MC-OTFS, have been reported in literature \cite{TThaj2022}. Several of these variants are based on the Zak transform \cite{SKM2021, SKM20212, Lampel2022, VSBhat2023}. However, none of these prior works investigate the subtle aspect of predictability of the I/O relation in the DD domain. In particular, none emphasizes the important fact that only Zak-OTFS where the information bits are encoded as a discrete quasi-periodic function and filtering is applied through twisted convolution admits a predictable I/O relation if the delay and Doppler spread of the effective channel is less than the delay and Doppler period (a condition which we have named the \emph{crystallization condition}). We feel that this assertion is an important theoretical contribution of this paper. Another theoretical contribution of this paper is the assertion that fading is a consequence of DD domain aliasing. The consequence is that when predictability is satisfied the I/O relation becomes non-fading, explaining why performance of Zak-OTFS is almost invariant to Doppler spread when operating in the crystalline regime.}
\end{comment}
\begin{comment}
\begin{table}
\caption{Zak-OTFS vs. MC-OTFS}
\begin{tabular}{ | l || l | l | }
\hline
& Zak-OTFS & MC-OTFS \\
\hline
Model-free &
& Predictability & Predictability \\
& Twisted convolution & Cyclic convolution \\
\hline
Model-dependent & \textbf{MD-ZAK-OTFS} & \textbf{MD-MC-OTFS} \\
& Channel acquisition & Channel acquisition \\
& Twisted convolution & Cyclic convolution \\
& Aliasing compensation & Higher order correction \\
\hline
\end{tabular}
\label{tabzakmc}
\end{table}
\end{comment}
\begin{comment}
\section{MC-OTFS: A multi-carrier variant of ZAK-OTFS}
\begin{enumerate}
\item Model-dependent: Figure: ZAK-OTFS and MC-OTFS comparison for unresolvable as well as
resolvable (similar performance)
\item Model-free: In low Doppler, ZAK-OTFS is slightly better than MC-OTFS. In high Doppler,
MC-OTFS has very poor performance compared to ZAK-OTFS. Figure: BER versus Doppler spread for fixed SNR.
\end{enumerate}
MC-OTFS is a variant of ZAK-OTFS which facilitates a cyclic input-output
relation which
Introduce MC-OTFS, block diagram, input-output relation.
Multi-carrier OTFS (MC-OTFS) in terms of Zak theory.
Include remarks on the consequences of the approximation
which will be evaluated in the next section.
\section{Performance of MC-OTFS under imperfect knowledge of input-output relation}
\subsection{Model-free MC-OTFS:}
\subsection{Model-dependent MC-OTFS:}
\end{comment}
\begin{comment}
\begin{table*}
\caption{$\tau_p = 32 \, \mu s$, $\nu_p = 31.25$ KHz}
\centering
\begin{tabular}{ | l || l | l | }
\hline
& Delay & Doppler \\
& Spread ($\mu s$) & Spread (KHz) \\
\hline
Use-case (A) & & \\
(Leo-satellite channel \cite{BShen2022}) & ${\bf 0.8}$ & ${\bf 82}$ \\
\hline
Use-case (B) & ${\bf 7.0}$ (Take-off)& \\
(UAV/Aeronautical channel \cite{EHaas2002}) & ${\bf 33}-{\bf 66}$ (En-route) & ${\bf 7.68}$ (En-route) \\
& & $f_c = $ GHz, $v = $ Km/hr \\
\hline
Use-case (C) & ${\bf 1.0}$ & ${\bf 3.0}$\\
(mmWave Mobile Channel) & & $f_c = 28$ GHz, $v = 100$ Km/hr \\
\hline
Use-case (D) & ${\bf 5.0}$ & ${\bf 0.3}$\\
(Terrestrial Mobile Channel & & $f_c = $ GHz, $v = $ Km/hr\\
- ETU \cite{3gppts36p116}) & & \\
\hline
Use-case (E) & & \\
(Terrestrial Pedestrian Channel & ${\bf 0.41}$ & {\bf 0.005} \\
- EPA \cite{3gppts36p116}) & & $f_c = $ GHz, $v = $ Km/hr\\
\hline
\end{tabular}
\label{tab1}
\end{table*}
\end{comment}
\begin{comment}
\section{Multi-carrier OTFS (MC-OTFS) vs. Zak OTFS (Z-OTFS)}
\subsection{MC-OTFS}
\underline{Multi-carrier Modulation}:
TF plane (every point parametrizes an operator on signals, each operator results in time shift and frequency shift which is the Heisenberg transform), TF lattices (undersampled lattice), Heisenberg operator/transform,
Multi-carrier generator (signals in TD which are bi-orthogonal on the lattice), Symplectic Fourier transform, Bi-orthogonality.
(References: Gabor paper)
Populate QAM on the TF grid to get a discrete TF signal,
convert to analog signal by delta function, apply Heisenberg transform
to get the operators for each QAM, apply the superposition of the operators
to the generator. Fig.$1$: Block diagram of transceiver
$\sum\limits_{m} \sum\limits_{n} X[n,m] g_{tx}(t - nT) e^{j 2 \pi m \Delta f (t - nT)}$, $T \, \Delta f = 1$.\\
\underline{From Multi-carrier Modulation to MC-OTFS}:
SFFT: period DD signal $\rightarrow$ period TF signal (DD and TF grids are reciprocal),
TF signal has infinite bandwidth and duration, need to filter with TF window,
followed by Heisenberg transform.
\underline{Interpretation of MC-OTFS in terms of Z-OTFS}:
Interpretation of MC-OTFS in terms of Z-OTFS in the critically sampled scenario (Fig. 1: Block diagram).
This explains where MC-OTFS breaks down.
\section{Implementation Architectures for Zak OTFS, Two-step OTFS and CP-OFDM}
Block diagrams (repeat the block diagram in Part-I for Zak OTFS)
Mapping design parameters to attributes of the modulation
\section{Performance Comparison between Zak OTFS, Two-step OTFS and CP-OFDM}
\end{comment}
\section{The Radar Ambiguity Function}
\label{radar}
When we use a waveform to illuminate a radar scene, and we correlate the return with the transmitted waveform, the radar ambiguity function expresses the blurriness of the scene \cite{AT1985}. We begin by considering a single target and no reflectors, so that the received signal is given by
\begin{eqnarray}
\label{paper2_eqn1876}
r_{\mbox{\footnotesize{td}}}(t) & = & h \, s_{\mbox{\footnotesize{td}}}(t - \tau) \, e^{j 2 \pi \nu ( t - \tau)} \, + \, n_{\mbox{\footnotesize{td}}}(t),
\end{eqnarray}
where $s_{\mbox{\footnotesize{td}}}(t)$ is the transmitted radar waveform and $n_{\mbox{\footnotesize{td}}}(t)$
is the AWGN at the radar receiver.
We obtain the maximum likelihood (ML) estimate for the delay $\tau$ and Doppler
$\nu$ using matched filtering at the
radar receiver. Thus $({\widehat \tau}, {\widehat \nu}) = \arg \max_{\tau, \nu} \left\vert A_{r,s}(\tau, \nu) \right\vert$, where
\begin{eqnarray}
A_{r,s}(\tau, \nu) & \Define &
\int r_{\mbox{\footnotesize{td}}}(t) \, s_{\mbox{\footnotesize{td}}}^*(t - \tau) \, e^{-j 2 \pi \nu ( t - \tau)} dt
\end{eqnarray}
is the cross-ambiguity function between the received and the transmitted waveform. The cross-ambiguity function is also relevant for the general radar scene where there are multiple targets and reflectors. Using (\ref{paper2_eqn1}) in the expression for $A_{r,s}(\tau, \nu)$ we get
\begin{eqnarray}
\label{paper2_eqn2001}
A_{r,s}(\tau,\nu) & = & h(\tau, \nu) *_{\sigma} A_{s,s}(\tau,\nu) \nonumber \\
& & \hspace{-3mm} + \, \int n_{\mbox{\footnotesize{td}}}(t) \, s_{\mbox{\footnotesize{td}}}^*(t - \tau) \, e^{-j 2 \pi \nu ( t - \tau)} dt,
\end{eqnarray}
where $*_{\sigma}$ denotes twisted convolution and $h(\tau, \nu)$ is the DD spreading function of the channel between the radar transmitter and receiver.
In (\ref{paper2_eqn2001}),
$A_{s,s}(\tau,\nu)$ is the
(auto-) ambiguity function of the transmitted radar waveform $s_{\mbox{\footnotesize{td}}}(t)$ and is given by
\begin{eqnarray}
\label{paper2_selfambiguityfunction}
A_{s,s}(\tau,\nu) & \Define & \int s_{\mbox{\footnotesize{td}}}(t) \, s_{\mbox{\footnotesize{td}}}^*(t - \tau) \, e^{-j 2 \pi \nu ( t - \tau)} dt.
\end{eqnarray}
The ambiguity function $A_{s,s}(\tau, \nu)$ places fundamental limits on the \emph{blur} which constrains our ability to estimate target distance (delay) and velocity (Doppler).
Moyal’s identity \cite{WMoran2001} captures the fundamental limits on blur by fixing the volume under the squared ambiguity surface in terms of the signal energy:
\begin{eqnarray}
\label{paper2_Moyalidentity}
\iint \left\vert A_{s,s}(\tau, \nu) \right\vert^2 \, d\tau d\nu & = & \left( \int \vert s_{\mbox{\footnotesize{td}}}(t) \vert^2 \, dt \right)^2.
\end{eqnarray}
Intuitively, the radar engineer aims to manipulate the ambiguity surface so that
blur is concentrated in those regions that matter least for the operational task of the radar.
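A short numerical check of (\ref{paper2_selfambiguityfunction}) and (\ref{paper2_Moyalidentity}) is given below for a unit-energy Gaussian pulse; the pulse and the integration grids are illustrative choices, and the computed volume under the squared ambiguity surface matches the squared signal energy.
\begin{verbatim}
import numpy as np

sigma, dt = 1.0, 0.01
t = np.arange(-8.0, 8.0, dt)

def s(t):
    """Unit-energy Gaussian test waveform (an illustrative choice)."""
    return (np.pi * sigma ** 2) ** -0.25 * np.exp(-t ** 2 / (2 * sigma ** 2))

energy = np.sum(np.abs(s(t)) ** 2) * dt      # ~1

def ambiguity(tau, nu):
    """Auto-ambiguity A_{s,s}(tau, nu) by numerical integration."""
    return np.sum(s(t) * np.conj(s(t - tau))
                  * np.exp(-2j * np.pi * nu * (t - tau))) * dt

dtau, dnu = 0.05, 0.02
taus = np.arange(-6.0, 6.0, dtau)
nus = np.arange(-2.0, 2.0, dnu)
A = np.array([[ambiguity(tau, nu) for nu in nus] for tau in taus])

volume = np.sum(np.abs(A) ** 2) * dtau * dnu
print(volume, energy ** 2)   # both close to 1, as Moyal's identity requires
\end{verbatim}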
Next, we illustrate how the spread
of the ambiguity function limits the
resolution of the radar by considering a radar scene with two targets. Thus
\begin{eqnarray}
\label{paper2_htnexample}
h(\tau, \nu) & = & \sum\limits_{i=1}^2 h_i \delta(\tau - \tau_i) \delta(\nu - \nu_i),
\end{eqnarray}
where $(\tau_1, \nu_1)$, $(\tau_2, \nu_2)$ are the
delay-Doppler domain locations of the two targets. The noise-free cross-ambiguity is given by
{\vspace{-4mm}
\small
\begin{eqnarray*}
h(\tau,\nu) *_{\sigma} A_{s,s}(\tau,\nu) & \hspace{-3mm} = & \hspace{-3mm}
h_1 A_{s,s}(\tau - \tau_1, \nu - \nu_1) e^{j 2 \pi \nu_1(\tau - \tau_1)} \nonumber \\
& & + \, \, h_2 A_{s,s}(\tau - \tau_2, \nu - \nu_2) e^{j 2 \pi \nu_2(\tau - \tau_2)}
\end{eqnarray*}\normalsize}
and we resolve the two targets by limiting the overlap between the two terms appearing on the R.H.S. In other words, we require the delay domain spread of $A_{s,s}(\tau, \nu)$ to be less than $\vert \tau_1 - \tau_2 \vert$ and the Doppler domain spread of $A_{s,s}(\tau, \nu)$ to be less than $\vert \nu_1 - \nu_2 \vert$.
\subsection{Ambiguity functions for TDM and FDM waveforms}
We consider a TDM pulse $s(t) = s_{\mbox{\footnotesize{td}}}(t) = \sqrt{B} \, sinc(B t)$ with bandwidth $B$, that is localized
around $t=0$. It follows from
(\ref{paper2_selfambiguityfunction}) that the ambiguity function in this case is given by
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{paper2_eqn171}
A_{s,s}^{\mbox{\tiny{tdm}}}(\tau, \nu) & \hspace{-3mm} = \begin{cases}
\left( 1 - \frac{\vert \nu \vert}{B} \right) \, e^{j \pi \nu \tau} \, sinc((B - \vert \nu \vert) \tau) &, \vert \nu \vert < B \\
0 &, \vert \nu \vert \geq B \\
\end{cases}. \nonumber \\
\end{eqnarray}\normalsize}
For a fixed $\nu$, we consider the term $sinc((B - \vert \nu \vert) \tau)$ as a function of $\tau$ and conclude that the delay spread of the TDM ambiguity function is about $1/B$. Hence, it is possible to separate two targets if their delays differ by more than $1/B$. On the other hand, the Doppler spread of the TDM ambiguity function is the bandwidth $B$. Hence, it is not possible to separate two targets with delays differing by less than $1/B$ unless the two Doppler shifts differ by more than $B$, which is unlikely in most scenarios. The TDM waveform is localized in the TD, but not in the FD, and this is the reason it is unable to separate targets in the Doppler domain.
Next, we consider an FDM pulse $s(f) = s_{\mbox{\footnotesize{fd}}}(f) = \sqrt{T} \, sinc(f T)$ with duration $T$, that is localized around $f=0$. It follows from (\ref{paper2_selfambiguityfunction}) that the ambiguity function in this case is given by
{\vspace{-4mm}
\small
\begin{eqnarray}
\label{paper2_eqnfdmradar}
A_{s,s}^{\mbox{\tiny{fdm}}}(\tau, \nu) & \hspace{-3mm} = \begin{cases}
\left( 1 - \frac{\vert \tau \vert}{T} \right) \, e^{j \pi \nu \tau} \, sinc((T - \vert \tau \vert) \nu) &, \vert \tau \vert < T \\
0 &, \vert \tau \vert \geq T \\
\end{cases}. \nonumber \\
\end{eqnarray}\normalsize}
Now, the Doppler spread is small (about $1/T$) and the delay spread is large (the duration $T$), hence it is not possible to separate two targets with Dopplers differing by less than $1/T$ unless the two delay shifts differ by more than $T$. This is unlikely in most scenarios. The FDM waveform is localized in the FD, but not in the TD, and this is the reason it is unable to separate targets in the delay domain.
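For illustration, the following Python sketch (assuming \texttt{numpy}, whose \texttt{sinc} is the normalized $\sin(\pi x)/(\pi x)$; the values of $B$ and $T$ are illustrative) evaluates the closed-form TDM and FDM ambiguity functions (\ref{paper2_eqn171}) and (\ref{paper2_eqnfdmradar}) on a delay-Doppler grid, which is one way to reproduce surfaces like those shown in Fig.~\ref{fig20}.
\begin{verbatim}
import numpy as np

B, T = 1.0, 16.0                   # bandwidth (TDM) and duration (FDM), illustrative
tau = np.linspace(-8.0, 8.0, 401)
nu = np.linspace(-2.0, 2.0, 401)
TAU, NU = np.meshgrid(tau, nu, indexing="ij")

def A_tdm(TAU, NU):
    mag = np.where(np.abs(NU) < B, 1 - np.abs(NU) / B, 0.0)
    return mag * np.exp(1j * np.pi * NU * TAU) * np.sinc((B - np.abs(NU)) * TAU)

def A_fdm(TAU, NU):
    mag = np.where(np.abs(TAU) < T, 1 - np.abs(TAU) / T, 0.0)
    return mag * np.exp(1j * np.pi * NU * TAU) * np.sinc((T - np.abs(TAU)) * NU)

# |A|^2 surfaces: the TDM surface is narrow in delay (~1/B) but spread over Doppler
# (~B), while the FDM surface is narrow in Doppler (~1/T) but spread over delay (~T).
P_tdm = np.abs(A_tdm(TAU, NU)) ** 2
P_fdm = np.abs(A_fdm(TAU, NU)) ** 2
\end{verbatim}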
\begin{figure}[h]
\hspace{-4mm}
\includegraphics[width=10cm, height=7.0cm]{TDM_and_FDM_Ambiguity.jpg}
\caption{Squared magnitude
$\left\vert A_{s,s}(\tau, \nu) \right\vert^2$ of the ambiguity functions for TDM and FDM. The TDM carrier waveform is not able to separate targets in Doppler, and the FDM carrier waveform is not able to separate targets in delay.
}
\label{fig20}
\end{figure}
The volume under the (squared) magnitude ambiguity surface is fixed by Moyal’s identity (\ref{paper2_Moyalidentity}), but it can be redistributed to enable resolution of radar targets. This possibility was known to P. M. Woodward more than $70$ years ago. In his $1953$ book \cite{PMWoodward}, he described how the ambiguity function of a narrow Gaussian pulse (the red shaded ellipse in Fig.~\ref{fig21}) can be redistributed into several DD domain functions/pulses (shown as black ellipses in Fig.~\ref{fig21}). The trick is to modulate a train of narrow TD Gaussian pulses with a broad Gaussian envelope. There is a striking resemblance between Woodward’s waveform and the Zak-OTFS carrier waveform (pulsone), which is a train of narrow pulses modulated by a sinusoid.
\begin{figure}[h]
\hspace{-1mm}
\includegraphics[width=9cm, height=8.0cm]{paper2_fig24.jpg}
\caption{Redistributing the squared magnitude $\vert A_{s,s}(\tau, \nu) \vert^2$ of the ambiguity function of a narrow Gaussian pulse. Modulating a train of narrow TD Gaussian pulses with a broad Gaussian envelope produces an ambiguity function that is better able to separate targets in delay and Doppler.
}
\label{fig21}
\end{figure}
\subsection{Ambiguity function of Zak-OTFS pulsone}
The Zak-OTFS carrier waveform is a pulsone in time which realizes a quasi-periodic pulse in the DD domain at some location $(\tau_0, \nu_0)$ in the fundamental period. Recall from Part I, Table III that after filtering at the transmitter, the DD domain signal is given by
{\vspace{-4mm}
\small
\begin{eqnarray}
x_{_{\mbox{\footnotesize{dd}}}}^{w_{tx}}(\tau, \nu) & \hspace{-3mm} = & \hspace{-3mm} w_{tx}(\tau, \nu) \, *_{\sigma} \, \nonumber \\
& & \hspace{-6mm} \sum\limits_{n,m \in {\mathbb Z}} \hspace{-1.5mm} e^{j 2 \pi n \nu \tau_p } \delta(\tau - n \tau_p - \tau_0) \delta (\nu - m \nu_p - \nu_0).
\end{eqnarray}
\normalsize}
Converting from the DD domain to the time domain by applying the inverse Zak transform yields
\begin{eqnarray}
s_{\mbox{\footnotesize{td}}}(t) & = & \sqrt{\tau_p} \, \int_{0}^{\nu_p} x_{_{\mbox{\footnotesize{dd}}}}^{w_{tx}}(t, \nu) \, d\nu.
\end{eqnarray}
We assume the number of Doppler bins ($N$) is even, and we apply a sinc shaping filter $w_{tx}(\tau, \nu) = \sqrt{B T } sinc(B \tau) sinc(T \nu)$ to an impulse located at the origin i.e., $(\tau_0, \nu_0) = (0,0)$. The corresponding TD pulsone is given by
\begin{eqnarray}
\label{paper2_stdpulsone}
s_{\mbox{\footnotesize{td}}}(t) & \hspace{-3mm} = & \hspace{-3mm} \frac{\sqrt{B}}{\sqrt{N}} \sum\limits_{n = -\frac{N}{2}}^{\frac{N}{2} - 1} \hspace{-1.5mm} sinc\left( B ( t - n \tau_p - \tau_0) \right) \, e^{j 2 \pi n \nu_0 \tau_p}.
\end{eqnarray}
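The following Python sketch (assuming \texttt{numpy}, whose \texttt{sinc} is the normalized convention) synthesizes the TD pulsone in (\ref{paper2_stdpulsone}); the parameter values are illustrative, and $(\tau_0,\nu_0)$ is deliberately taken nonzero here, unlike the example above, so that the modulating tone is visible.
\begin{verbatim}
import numpy as np

B, tau_p, N = 16.0, 1.0, 16          # bandwidth, delay period, number of Doppler bins
tau_0, nu_0 = 0.0, 0.25 / tau_p      # DD location of the quasi-periodic pulse

t = np.linspace(-N * tau_p / 2, N * tau_p / 2, 4096)
s = np.zeros_like(t, dtype=complex)
for n in range(-N // 2, N // 2):     # n = -N/2, ..., N/2 - 1 as in the sum above
    s += np.sinc(B * (t - n * tau_p - tau_0)) * np.exp(2j * np.pi * n * nu_0 * tau_p)
s *= np.sqrt(B) / np.sqrt(N)         # train of sinc pulses modulated by a tone
\end{verbatim}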
\begin{figure*}
\vspace{-4mm}
\begin{eqnarray}
\label{paper2_eqnaotfs}
A_{s,s}^{\mbox{\tiny{otfs}}}(\tau, \nu) & \hspace{-3mm} = & \hspace{-3mm} \begin{cases}
\,\, \frac{\left( 1 - \frac{\vert \nu \vert}{B}\right)}{N} \hspace{-2mm} \sum\limits_{n_1 = -\frac{N}{2}}^{\frac{N}{2} -1} \sum\limits_{n_2 = -\frac{N}{2}}^{\frac{N}{2} -1} {\Big [} e^{j \pi \nu (\tau - (n_1 + n_2)\tau_p)} \, sinc\left( (B - \vert \nu \vert) (\tau + (n_1 - n_2) \tau_p) \right) {\Big ]} \, &, \,\, \vert \nu \vert < B \\
\,\, 0 \, &, \,\, \vert \nu \vert \geq B
\end{cases}.
\end{eqnarray}
\end{figure*}
The corresponding ambiguity function is given in (\ref{paper2_eqnaotfs}) (see top of next page), and is illustrated in Fig.~\ref{fig22} for the same time and bandwidth constraints as the TDM and FDM waveforms illustrated in Fig.~\ref{fig20}.
\begin{figure}[h]
\hspace{-3mm}
\includegraphics[width=9.6cm, height=7cm]{ZakOTFSAmbiguity.jpg}
\caption{Plot of the ambiguity function (squared magnitude)
for the Zak-OTFS carrier waveform. Simultaneous delay and Doppler resolvability can be achieved. Unambiguous target estimation is achievable in the crystalline regime.
}
\label{fig22}
\end{figure}
The ambiguity function of the TD pulsone consists of narrow DD domain impulses separated by $\tau_p$ along the delay axis and $\nu_p$ along the Doppler axis. Each impulse has a spread of $1/B$ along the delay axis, and a spread of $1/T$ along the Doppler axis. Recall from (\ref{paper2_eqn2001}), that in the absence of noise, the cross-ambiguity function is the twisted convolution of the delay-Doppler spreading function $h(\tau, \nu)$ and the ambiguity function $A_{s,s}(\tau, \nu)$. We can therefore guarantee unambiguous delay–Doppler estimation by choosing $\tau_p$ to be greater than the delay spread, and $\nu_p$ to be greater than the Doppler spread. These are exactly the crystallization conditions guaranteeing predictability of the I/O relation in Zak-OTFS communication. Note that we can improve resolvability by increasing bandwidth $B$ and duration $T$.
Given an arbitrary pulse shaping filter, it follows from (\ref{paper2_selfambiguityfunction}) that the ambiguity function of the corresponding TD pulsone consists of pulses, where the pulse within the fundamental period (depicted in Fig.~\ref{fig22} as a light blue rectangle) is simply the twisted convolution of $w_{tx}(\tau, \nu)$ with its complex conjugate. In the above example, the pulse shaping filter is a sinc pulse, the Zak-OTFS pulsone is a train of sinc pulses, and the ambiguity function in Fig.~\ref{fig22} also consists of sinc pulses. In general, this structure enables precise design of ambiguity functions.
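To visualize this structure numerically, the sketch below (again assuming \texttt{numpy}, and reusing the samples \texttt{s}, the grid \texttt{t}, and the period \texttt{tau\_p} from the pulsone sketch above) evaluates the defining integral (\ref{paper2_selfambiguityfunction}) by direct discretization; edge effects from the finite time grid are ignored.
\begin{verbatim}
import numpy as np

dt = t[1] - t[0]                     # s, t, tau_p come from the pulsone sketch above

def ambiguity(s, t, tau, nu):
    # A(tau, nu) = sum_t s(t) s*(t - tau) exp(-j 2 pi nu (t - tau)) dt, with the
    # shifted waveform obtained by interpolation (zero outside the time grid)
    s_shift = np.interp(t - tau, t, s.real, left=0.0, right=0.0) \
              + 1j * np.interp(t - tau, t, s.imag, left=0.0, right=0.0)
    return np.sum(s * np.conj(s_shift) * np.exp(-2j * np.pi * nu * (t - tau))) * dt

taus = np.linspace(-2 * tau_p, 2 * tau_p, 201)
nus = np.linspace(-2 / tau_p, 2 / tau_p, 201)
A = np.array([[ambiguity(s, t, ta, f) for f in nus] for ta in taus])
# |A| exhibits narrow spikes separated by tau_p in delay and 1/tau_p in Doppler
\end{verbatim}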
\section{Conclusions}
\label{conc}
\begin{comment}
This paper presented Part II of a two-part tutorial paper on OTFS. OTFS is a generalized waveform (a DD domain pulse called a pulsone) parameterized by delay and Doppler periods. It has strikingly good attributes for communication as well as radar sensing.
The first key message conveyed is that OTFS subsumes TDM and FDM waveforms as special cases. Both TDM and FDM waveforms can be viewed as pulsones which differ from OTFS in the choice of the defining delay-Doppler periods and filters. TDM waveform is a pulsone with infinite delay period and zero Doppler period, which is ideal for communication in delay-only channels. FDM waveform is a pulsone with infinite Doppler period and zero delay period, which is ideal for communication in Doppler-only channels. Both TDM and FDM waveforms fail to be good communication waveforms in doubly-selective channels because of the channel spreads along the Doppler and delay domains, respectively,
For the same reason, TDM and FDM waveforms fail to provide good localization attributes in the delay-Doppler domain which are crucial to achieve good radar sensing performance. That is, TDM waveform is well localized along the delay domain but spread along the Doppler domain. On the other hand, FDM is well localized along the Doppler domain but spread along the delay domain. Therefore, in doubly spread channels, both TDM and FDM waveforms are poorly localized, leading to poor sensing and commuication performance. This is the second key message.
The third key message is that OTFS waveform, when operated in the crystalline regime (a regime where the delay and Doppler spreads of the channel are much smaller than the delay and Doppler periods of the waveform, respectively), is ideal for both communication and radar sensing. This is explained as follows. Operating OTFS in the crystalline regime provides us with two performance knobs, one to control aliasing (through proper choice of DD periods and filter) and another to control resolvability (through suitable choice of DD filter). For communication, controlling aliasing translates to non-fading and predictability, and controlling resolvability translates to diversity opportunities. Likewise, for radar sensing,
controlling aliasing translates to reducing ambiguity in sensing and controlling resolvability translates to increasing sensing resolution.
The fourth message is that emerging cellular standards and systems (6G and beyond) demand a versatile waveform that not only provides robust communication performance in a flexible manner but also serve as an integrated radar waveform that can provide high-resolution sensing performance for accurately imaging the operating environment to ensure better road safety, and that OTFS meets these emerging demands in a natural way.
The fifth and final message is that we are at the threshold of beginning a new generation of OTFS research, and Zak approach to OTFS is the way to go forward.
\end{comment}
{
In this paper we have designed a parametric family of pulsone waveforms that can be matched to the delay and Doppler spreads of different propagation environments. We explained that a pulsone is a time domain signal which realizes a quasi-periodic localized function on the DD domain. The prototypical structure of a pulsone is a train of pulses modulated by a tone.
We have put special emphasis on analyzing the system when the pulsone parameters match the channel parameters of the environment, in the sense that the delay period of the pulsone is greater than the delay spread of the channel, and the Doppler period of the pulsone is greater than the Doppler spread of the channel. We refer to this condition as the crystallization condition.
\begin{comment}
We have demonstrated that when the delay period of the pulsone is greater than the delay spread, and the Doppler period is greater than the Doppler spread, then a pulsone behaves as if it were localized in delay and Doppler.
\end{comment}
We have explained that pulsones constitute a universal parametric family of waveforms which includes conventional TDM and FDM waveforms as special cases. Specifically, a TDM waveform is a pulsone with infinite delay period and zero Doppler period; such a waveform is perfectly matched to delay-only propagation environments. Conversely, an FDM waveform is a pulsone with infinite Doppler period and zero delay period; such a waveform is perfectly matched to Doppler-only propagation environments.
On the negative side, we have explained, in the context of communication, how and why the I/O relation of the sampled system becomes non-predictable and fading when the parameters of the pulsone do not match the propagation environment, i.e., when they do not satisfy the crystallization condition. Specifically, we have shown that non-predictability results from aliasing in the DD domain, which, in turn, occurs when one of the channel spreads is greater than the corresponding pulsone period. In the context of radar sensing, we have shown that ambiguity, that is, the inability to separate resolvable channel reflections, likewise occurs when one of the propagation spreads is greater than the corresponding pulsone period, again a violation of the crystallization condition.
On the positive side, we have explained how and why a sampled communication system yields superior performance when operated in the crystalline regime, i.e., when the delay and Doppler spreads of the channel are considerably smaller than the corresponding periods of the pulsone. Specifically, we have explained that operation in the crystalline regime provides two mechanisms for improved performance: one mechanism minimizes DD domain aliasing through proper choice of the periods and shaping filters, and a second mechanism maximizes diversity exploitation by resolving reflections through suitable choice of shaping filters. In the context of radar sensing, minimizing DD domain aliasing translates to reducing ambiguity among resolvable reflections, and maximizing resolvability translates to increased resolution.
\begin{comment}
When we communicate with pulsones in the crystalline regime, the input-output relations are governed by twisted convolution, and we have shown that we can learn them directly without knowing the channel.
\end{comment}
Another important implication of operating with pulsones in the crystalline regime, in the context of communication, is that, due to its predictability, the I/O relation of a sampled communication system can be learned directly without the need to know the parameters of the underlying channel. This opens up the possibility of a model-free mode of operation, which is especially useful when channel estimation is out of reach.
Another contribution of this paper is a detailed comparison between Zak-OTFS and its multicarrier approximation, which we refer to as MC-OTFS and which has been the focus of almost all research attention so far. We compared the two modulation schemes both on theoretical grounds and on performance grounds. On the theoretical side, we have shown that the I/O relation of MC-OTFS is less predictable than that of Zak-OTFS, which implies that, as the Doppler spread increases, the BER performance of MC-OTFS becomes inferior to that of Zak-OTFS. This suggests that MC-OTFS is less suited to a model-free mode of operation than Zak-OTFS.
In conclusion, convergence of communications and sensing in 6G and beyond has focused research attention on the design of carrier waveforms that support both applications. An environment, be it a radar scene or a communication medium, is characterized by its delay and Doppler spreads. The underlying message of this paper is that in both contexts, radar and communication, it is beneficial to choose the periods of the pulsone to be greater than the spreads of the environment, i.e., one should operate in the crystalline regime.
\begin{comment}
Convergence of communications and sensing in 6G and beyond has focused research attention on the design of waveforms that support both applications. We have demonstrated that the family of pulsone waveforms supports reliable communication as well as high-resolution sensing. We have argued that they are well suited to emerging applications like smart cars and highway infrastructure.
Propagation environments are characterized by delay and Doppler spreads, and we have demonstrated the value of designing waveforms in the DD domain to match those characteristics. We have also demonstrated the advantages of the pure Zak approach to pulsone design over multicarrier approximations.
\end{comment}
}
|
{
"arxiv_id": "2302.08632",
"language": "en",
"timestamp": "2023-02-20T02:04:14",
"url": "https://arxiv.org/abs/2302.08632",
"yymm": "2302"
} |
\section{Introduction}
One of the most important and basic needs of machine learning (ML) research in music is the availability of free datasets. Unfortunately, the field of music has lagged behind other fields (like image and speech recognition) in the availability of high-quality datasets. This paper contributes to the growing body of available music audio datasets by presenting the \textit{jazznet dataset}, an extensible dataset of fundamental piano patterns.
There are notable efforts toward creating datasets for MIR, and jazznet is complementary to these efforts. Some existing music datasets include GTZAN \cite{tzanetakis02_gtzan}, MSD \cite{bertin11_MSD}, AudioSet \cite{gemmeke17_audioSet}, FMA dataset \cite{defferrard16_fmaDataset}, MusicNet \cite{thickstun16_musicnet}, and RWC \cite{goto03_rwcMusicDatabase}. The main difference between the jazznet dataset and the majority of existing datasets lies in the approach taken to creating the dataset: an emphasis on the fundamental patterns, rather than complete music pieces. This approach is inspired by how humans effectively learn piano music. Suppose you wanted to learn to play jazz piano; you would learn much faster and more effectively if you first understood the fundamental patterns---\textit{chords, arpeggios, scales,} and \textit{chord progressions}---on which full music pieces are based \cite{levine11_jazzPianoBook}. We focus on jazz piano music patterns to make the dataset creation tractable and because of the versatility of jazz music---the patterns in jazz music encompass several other musical genres (e.g., blues, country, pop, etc.). Given that an important hallmark of jazz music is the variety of expression, the dataset also contains the different forms---or \textit{inversions}---in which the patterns can be played.
The multi-class characteristic of the dataset also aims to model the hierarchy of difficulty humans experience in learning jazz music. For instance, it is easy to differentiate between the \textit{types} of patterns (e.g., chords vs. arpeggios vs. scales). Within each of the classes, it is substantially more difficult to recognize different \textit{modes} of patterns (e.g., major vs. minor or augmented vs. diminished chords). Finally, only a uniquely talented musician with \textit{absolute pitch} \cite{deutsch06_absolutePitchEnigma} can recognize specific modes (e.g., A major vs. C major).
In summary, we make two major contributions. First, we present the jazznet dataset\footnote{\url{https://doi.org/10.5281/zenodo.7192653}}, which we have curated based on a wide-ranging survey of jazz education resources and jazz standards. The dataset contains 162520 automatically generated and labeled piano music patterns, which results in 95GB and over 26k hours of audio. The primary objective of the jazznet dataset is to facilitate the development of ML models for challenging MIR tasks. Second, we have developed a \textit{Pattern Generator} that uses \textit{Distance-Based Pattern Structures (DBPS)} to facilitate the generation of piano patterns based on the distance between pitches within the patterns. This approach enables the easy generation of new piano patterns for ML research, and we have open-sourced the Python scripts\footnote{\url{https://github.com/tosiron/jazznet/tree/main/PatternGenerator}} that implement it for the convenience of users.
\section{The Jazznet Dataset}
Although complementary to existing music datasets, jazznet is unique in several ways. Structurally, the most related datasets to jazznet are the NSynth \cite{engel17_nsynth} and MAPS \cite{emiya10_maps_dataset} datasets, with some notable differences. NSynth is created for exploring neural audio synthesis of musical notes with different pitches, timbres, and envelopes. However, it does not contain any musical patterns, only single notes, and although it represents more musical instruments and dynamics than jazznet, it is not comparable in terms of the variety of musical patterns contained in jazznet. On the other hand, the MAPS dataset is frequently used for automatic music transcription \cite{benetos18_musicTranscription} and contains isolated notes, chords, and music pieces. However, it contains fewer chords than jazznet and does not include scales, arpeggios, or chord progressions. Additionally, the MAPS dataset is significantly smaller than jazznet, with 31GB of data compared to 95GB. Most importantly, unlike previous datasets, jazznet is accompanied by piano pattern generators that enable researchers to extend the dataset to include new patterns.
While the dataset is created to be easily extensible, we curated the patterns for the current version by extensively surveying several resources for jazz music performance and education, such as The Jazz Piano Book \cite{levine11_jazzPianoBook}, The Jazz Theory Book \cite{levine11_jazzTheoryBook}, and more than 50 jazz standards \cite{gioia21_jazzStandards,hal-leonard17_jazzStandards}. We also interacted with several jazz musicians to develop a consensus on some of the common fundamental patterns, particularly chord progressions. However, we acknowledge that, given the complexity and diversity of jazz music, the dataset does not cover all jazz piano patterns. Nevertheless, the provided pattern generator (Section \ref{sec:patternGen}) enables users to generate new patterns not in the dataset. In the following sections, we provide a background on music theory necessary for understanding the dataset's creation, describe the data generation approach, and then present the jazznet dataset, its features, the pattern generator, and potential applications.
\subsection{Brief background basic music theory}
The dataset is based on the notes of a standard 88-key piano (spanning just over seven octaves). The main notes of a piano (and most other stringed instruments) are typically represented using 7 letters: $A, B, C, D, E, F, G$ (which correspond to the white keys on a piano). The notes in between these notes (i.e., the black keys) are described as \textit{sharp ($\sharp$)} to the immediately preceding note, or \textit{flat ($\flat$)} to the immediately succeeding note. For example, the note between C and D is referred to as C$\sharp$ or D$\flat$ depending on the context. An \textit{octave} is the interval between one musical pitch and another with half or double its frequency. For example, the middle C on the piano (also called C4) has a frequency of 261.63 Hz, the next C (C5) has a frequency of 523.25 Hz, and the previous C (C3) has a frequency of 130.81 Hz. There are 12 `half steps' (also called \textit{semitones}) or 6 `full steps' (also called \textit{tones}) between each note and its octave.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\linewidth]{figures/inversions.pdf}
\vspace{-15pt}
\caption{Illustration of the C major chord and its three forms: root, first, and second inversions}
\label{fig:inversions}
\vspace{-10pt}
\end{figure}
The dataset consists of four types of piano patterns: \textit{chords, arpeggios, scales} and \textit{chord progressions}. A chord is a harmonic set of pitches or frequencies consisting of multiple notes that are played or heard simultaneously. Chords can be categorized by the number of notes they contain: \textit{dyads} (2-note), \textit{triad} (3-note), \textit{tetrad} (4-note), etc. An arpeggio is a chord in which the notes are played one after another. A scale is a set of notes ordered by the frequency of the notes. The notes in these patterns can be rearranged to create different "colors" of music, called \textit{inversions} (illustrated in Figure \ref{fig:inversions} using the C major triad). Additionally, one of the first and most important things a new jazz pianist must learn is common chord progressions. Chord progressions are successions of chords that form the foundation of music \cite{levine11_jazzTheoryBook}. They are often represented using Roman numerals, with lowercase letters for minor chords and uppercase letters for major chords. This representation allows progressions to be independent of the specific key that a piece of music is played in.
\subsection{Dataset creation via \textit{distance-based pattern structures (DBPS)}} \label{sec:creation}
\input{chordGeneration}
We use a method called \textit{distance-based pattern structures (DBPS)} to automatically generate piano patterns in a flexible and intuitive way. Thanks to the formulaic structure of most piano music patterns, the patterns can be generated based on the distance between the pitches within the patterns. We represent the pitches using MIDI pitch numbers, with a distance of 1 representing the shortest distance between two notes (i.e., a semitone). To illustrate this approach, we provide a simplified pseudocode in Algorithm \ref{alg:chordGeneration}. The algorithm takes the pitch distances from preceding notes as input to generate any pattern. For example, for any major triad with three notes $note_0, note_1, note_2$, the input would be $distance = [4,3]$. That is, given the base note as $note_0$, $note_1$ is 4 pitches (or 4 semitones) away, and $note_2$ is 3 pitches away. Similarly, for a maj7 tetrad, $distance = [4,3,4]$; and so on.
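As an illustration of the DBPS idea (a hypothetical sketch, not the released generator), the following Python fragment builds a pattern from a base MIDI pitch and a list of semitone distances; the small distance dictionary covers only a few example patterns.
\begin{verbatim}
DISTANCES = {
    "maj":  [4, 3],        # major triad
    "min":  [3, 4],        # minor triad
    "maj7": [4, 3, 4],     # major-seventh tetrad
}

def generate_pattern(base_pitch, distances):
    """Return the MIDI pitches of the pattern rooted at base_pitch."""
    pitches = [base_pitch]
    for d in distances:
        pitches.append(pitches[-1] + d)   # each note is d semitones above the previous
    return pitches

print(generate_pattern(60, DISTANCES["maj"]))   # C4 major triad -> [60, 64, 67]
\end{verbatim}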
To generate chord progressions, we use a similar approach based on the distances between notes within each chord of the progression. The base note (\textit{note0}) for each chord is defined by its distance from the reference note (the key in which the progression exists) based on the Roman numeral. For example, the base note of the $ii$ chord is two pitches from the reference note, while the base note of the $IV$ chord is five pitches away. Once the base note is determined, the chords in the progression can be generated as described in Algorithm \ref{alg:chordGeneration}. For example, the popular jazz chord progression, \textit{ii-V-I}, can be represented by the following pattern structure:
chord1: note0=ref+2; note1=note0+3; note2=note1+4
chord2: note0=ref+7; note1=note0+4; note2=note1+3
chord3: note0=ref; note1=note0+4; note2=note1+3
The DBPS approach can also be used to generate \textit{altered} or \textit{extended} chords commonly found in jazz music. The approach works by imposing the distance required for the alteration or extension to generate the new chord. For example, a dominant $7th$ extension on a chord with notes $note0, note1$, and $note2$, would add a new note $note3$, three pitches above $note2$. Adding a major $7th$ would add a note that is four pitches above $note2$. Similarly, alterations can be generated by modifying the pitch of a note within the chord. For instance, a $\flat5$ would be created by reducing the pitch of $note2$ by one.
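The following hypothetical Python sketch mirrors the progression structure described above: Roman numerals are mapped to semitone offsets from the reference key, each chord carries its own distance list, and an extension is obtained by appending one more distance.
\begin{verbatim}
ROMAN_OFFSET = {"I": 0, "i": 0, "ii": 2, "iii": 4, "IV": 5, "V": 7, "vi": 9, "VI": 9}

def generate_progression(ref_pitch, chords):
    """chords is a list of (roman_numeral, distance_list) pairs."""
    progression = []
    for numeral, distances in chords:
        base = ref_pitch + ROMAN_OFFSET[numeral]   # offset of the chord's base note
        notes = [base]
        for d in distances:
            notes.append(notes[-1] + d)
        progression.append(notes)
    return progression

# ii-V-I in C major with triads, matching the pattern structure listed above;
# a dominant-7th extension would simply append one more distance, e.g. ("V", [4, 3, 3])
print(generate_progression(60, [("ii", [3, 4]), ("V", [4, 3]), ("I", [4, 3])]))
\end{verbatim}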
We created pattern structures for all the piano patterns selected in our survey, and then used Python to develop a pattern generator (Section \ref{sec:patternGen}) that utilized these structures to generate MIDI format files\footnote{\url{https://github.com/MarkCWirt/MIDIUtil}} for all keys and inversions. These MIDI files were then manually reviewed for accuracy before being converted to WAV format using the Timidity synthesizer\footnote{\url{http://timidity.sourceforge.net/}}, with a sampling rate of 16KHz and 24-bit depth. To ensure the quality of the generated patterns, we randomly selected WAV files for verification by two jazz pianists, who played the patterns on physical pianos to confirm that the sounds and labels were correct.
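As a sketch of the MIDI-rendering step (assuming the MIDIUtil package referenced in the footnote; file names and durations are illustrative), one chord can be written to a \texttt{.mid} file, which a synthesizer such as Timidity can then convert to WAV.
\begin{verbatim}
from midiutil import MIDIFile

def write_chord(pitches, path, duration_beats=1, tempo=60):
    midi = MIDIFile(1)                            # a single track
    midi.addTempo(track=0, time=0, tempo=tempo)   # 60 bpm, as in the dataset
    for p in pitches:
        midi.addNote(track=0, channel=0, pitch=p, time=0,
                     duration=duration_beats, volume=100)
    with open(path, "wb") as f:
        midi.writeFile(f)

write_chord([60, 64, 67], "C4-maj-root.mid")      # hypothetical output file name
\end{verbatim}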
\subsection{Dataset description and statistics}
\input{taxonomy.tex}
Figure \ref{fig:taxonomy} depicts a high-level taxonomy of the dataset. The dataset contains three kinds of chords and arpeggios: \textit{dyads}, \textit{triads}, and \textit{tetrads}. The scales are dominated by \textit{diatonic} (7-note) scales and include one \textit{pentatonic} (5-note) scale. The dataset is dominated by progressions, of which there are two kinds: 3-chord and 4-chord (tetrad) progressions. All the patterns are present in all their inversions and in all keys of the 88-key piano.
Table \ref{tab:datasetStats} summarizes the dataset statistics. The dataset is published in WAV format (MIDI files are also provided) as separate data directories for \textit{types:} chords, arpeggios, scales, and progressions, and subdirectories for the different \textit{modes}. There are 65 labeled modes detailed as:
\begin{itemize}
\item \textbf{Chords/Arpeggios} (24 each, appended with "-chord" or "-arpeggio"): \textit{12 dyads} (1 inversion each): minor 2nd (min2), major 2nd (maj2), minor 3rd (min3), major 3rd (maj3), perfect 4th (perf4), tritone, perfect 5th (perf5), minor 6th (min6), major 6th (maj6), minor 7th or augmented 6th (aug6), major 7th (maj7\_2), octave; \textit{6 triads} (2 inversions each): major (maj), minor (min), augmented (aug), diminished (dim), suspended 2nd (sus2), suspended 4th (sus4); and \textit{6 tetrads} (3 inversions each): dim7, maj7, min7, min7$\flat$5, seventh, sixth. There are 5,525 chords and 5,525 arpeggios in total.
\vspace{-5pt}
\item \textbf{Scales} (8): \textit{7 diatonics} (6 inversions each): aeolian, dorian, ionian, locrian, lydian, mixolydian, phrygian; \textit{1 pentatonic} (4 inversions). There are 4590 scales in total.
\vspace{-5pt}
\item \textbf{Progressions} (9): \textit{3-chord} (64 combinations): ii-V-I, ii-V-i, and ii-triV-I (triV means tritone substitution of V); \textit{4-chord} (256 combinations): I-VI-ii-V, i-vi-ii-V, iii-VI-ii-V, I-i$\sharp$-ii-V, I-IV7-iii-VI7, and ii$\sharp$-V$\sharp$-ii-V. There are 146,880 progressions in total.
\end{itemize}
\vspace{-5pt}
Each mode subdirectory contains the patterns in each mode labeled according to the specific note/pitch, octave, mode, and inversion. The inversions in the progressions are labeled according to the chord combinations.
The patterns are all recorded at a speed of 60 beats per minute (bpm), with two beats of decay at the end. The chord recordings are all 3 seconds long. The arpeggios and scales are recorded with one note played per beat. The arpeggios range from 4 to 6 seconds long, while the scales range from 7 to 9 seconds long, depending on the number of notes. Progressions are recorded with two chords played per measure and range from 7 to 10 seconds long.
\input{dataTable}
\subsection{Suggested subsets}
Given the dataset's imbalance resulting from the size of the chord progressions, we suggest some subsets of the dataset where a more balanced dataset may be required or downloading/using the entire dataset may be impractical (e.g., for initially testing a model). All subsets contain all the chords, scales, and arpeggios (15640 clips). The small, medium, and large subsets add 5876, 14688, and 36720 progressions, for a total of 21516, 30328, and 52360 clips, respectively. All the subsets are pseudo-randomly generated to ensure that all the modes are represented. The metadata files are in CSV format with a suggested split into train, validation, and test sets using an 80/10/10\% split. The validation and test sets are randomly selected from the different modes without overlapping with the training set.
\subsection{Pattern generator} \label{sec:patternGen}
We have developed piano \textit{pattern generators} using Python, which allow users to easily create new patterns. Due to space limitations, we omit the full details of the generator functions (the details are on the Github page\footnote{\url{https://github.com/tosiron/jazznet/tree/main/PatternGenerator}}), but summarize that chords, arpeggios, and scales can be generated by specifying the pitch distances as outlined in Section \ref{sec:creation}. In the same way, chord progressions can be generated by specifying the Roman numerals of the chords and the supported extensions/alterations (e.g., maj7, $\flat$5, $\sharp$, etc.). Crucially, the patterns are generated in all keys of an 88-key piano and in all inversions. The generated MIDI files can then be converted into the user's format of choice, such as WAV, using a tool of the user's choice. The provided tool allows for the creation of numerous new patterns, and the scripts are open-source, allowing for the possibility of expanding the tool to support additional patterns.
\vspace{-7pt}
\subsection{Applications}
Jazznet allows the evaluation of models developed for a variety of machine learning MIR tasks. The most straightforward and basic tasks involve \textbf{automatic music understanding} based on the dataset's class hierarchy: type recognition (whether an input is a chord, arpeggio, scale, or chord progression) or mode recognition, e.g., whether an input is augmented (aug) or major (maj) or Ionian or a ii-V-I progression. Specific details of the patterns can also be predicted, like the pitch contents, specific key (e.g., C-maj vs. E$\flat$-maj chord), octaves, and inversions.
Jazznet can be used for more complex MIR tasks. For example, \textbf{automatic music transcription} \cite{benetos18_musicTranscription,benetos13_automaticMusicTranscription} is considered a canonical task in MIR, where the musical components (e.g., chords) in an audio recording are extracted. Most current music transcription models focus on isolated notes or chords present in the music. But additional musical patterns like scales and chord progressions can be predicted using models trained on jazznet. The dataset can also be used for the development of \textbf{music recommendation systems} \cite{schedl15_musicRecommender,van13_musicRecommendation}. Recommendation systems can be built based on musical patterns: a listener who likes certain kinds of chords or scales in one song (e.g., a song dominated by the minor scale or minor chords) may similarly like other songs that contain similar chord modes. Related to music understanding, jazznet can be used for \textbf{automatic music generation} \cite{herremans17_music_generation,carnovalini19_multilayeredMusicGeneration}. A lot of jazz music involves the repetition of jazz chord progressions with scales or arpeggios played over them. For example, the \textit{ii-V-I} progression, which appears in nearly all jazz standards \cite{levine11_jazzPianoBook}, can be soloed over with each chord's arpeggios or with matching scales (e.g., the Dorian mode over the \textit{ii} chord). Jazznet can be used to learn the fundamental patterns in musical pieces and how they occur (e.g., frequency of progressions, scales used with the progressions, etc.).
\section{Experiments} \label{sec:experiments}
\input{dataResults}
\begin{comment}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{figures/results.png}
\vspace{-15pt}
\caption{mAP of the CRNN and M5 models on the large subset}
\label{fig:results}
\vspace{-5pt}
\end{figure}
\end{comment}
\noindent\textbf{Experimental setup:} We used two simple models to demonstrate the potential of the jazznet dataset for music audio recognition in two tasks, type and mode recognition. The first model is a convolutional recurrent neural network (CRNN), as described in \cite{choi17_CRNN_music}. This model utilizes Mel-spectrograms as input and models long-term dependencies in musical structures using a recurrent neural network (RNN). The second model is the M5 model described in \cite{dai17_dnn_audio}: a deep convolutional neural network that takes time series waveforms as input and requires no preprocessing. Since data preprocessing is often a bottleneck for audio ML, this model provides a sense of the performance achieved using the dataset without any preprocessing. Both models contain 4 convolutional layers with batch normalization and the ReLU activation function. We trained the models with the Adam optimizer and categorical crossentropy loss function, and evaluated their performance using the mean Average Precision (mAP) and AUC (Area Under the Curve) score, following prior work \cite{dai17_dnn_audio}, on the medium subset (30328 clips) of the dataset.
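For concreteness, the following PyTorch sketch shows an M5-style raw-waveform classifier with 4 convolutional layers, batch normalization, and ReLU in the spirit of the baseline described above; the layer sizes are illustrative and are not claimed to reproduce the exact configuration used in the experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class M5Like(nn.Module):
    def __init__(self, n_classes=65, n_channels=128):
        super().__init__()
        def block(c_in, c_out, k, stride=1):
            return nn.Sequential(nn.Conv1d(c_in, c_out, k, stride=stride),
                                 nn.BatchNorm1d(c_out), nn.ReLU(), nn.MaxPool1d(4))
        self.features = nn.Sequential(
            block(1, n_channels, 80, stride=4),   # wide first kernel on raw waveform
            block(n_channels, n_channels, 3),
            block(n_channels, 2 * n_channels, 3),
            block(2 * n_channels, 4 * n_channels, 3))
        self.classifier = nn.Linear(4 * n_channels, n_classes)

    def forward(self, x):                         # x: (batch, 1, samples)
        h = self.features(x)
        h = h.mean(dim=-1)                        # global average pooling over time
        return self.classifier(h)

logits = M5Like()(torch.randn(2, 1, 48000))       # e.g. 3 s of audio at 16 kHz
\end{verbatim}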
\vspace{3pt}
\noindent\textbf{Results:} Identifying chords, arpeggios, scales, and progressions was a simple task for the models. The CRNN and M5 models had high scores with an average AUC score of 0.984. They consistently recognized all four types, with mAP scores of 0.99 and 0.97. In comparison, two jazz pianists informally attempted to recognize a random sample of the types and both achieved 100\% accuracy.
On the other hand, recognizing modes across the 65 modes was more difficult for the models, as summarized in Table \ref{tab:results}. The CRNN and M5 models achieved mAP scores of 0.63 and 0.30, respectively, with an average AUC score of 0.04. Both models found the arpeggios and scales to be the most challenging with AP of 0.16 and 0.10 for CRNN and 0.09 and 0.06 for M5. However, the models performed best on the progressions with an AP of 0.67 and 0.48 for CRNN and M5, respectively. The humans also found mode prediction more difficult than type recognition, achieving an accuracy range of 57\% to 73\%. In summary, the dataset was designed to reflect the hierarchy of difficulty in recognizing musical patterns and provides an opportunity to benchmark models for challenging MIR tasks.
\section{Conclusion}
In this paper, we presented \textit{jazznet}, a dataset of essential jazz piano music patterns together with an open-source pattern generator, enabling researchers to benchmark machine learning models for complex music information retrieval (MIR) tasks. Our aim is for this dataset to contribute to advancing machine learning research in MIR. In future work, we plan to expand the dataset by including more musical auditory attributes such as dynamics and rhythmic variations, as well as more complex patterns like 5-note chords and longer chord progressions. Additionally, we intend to explore more sophisticated models to improve performance on tasks using the dataset and also further investigate the potential of the DBPS approach for automatically generating different kinds of data.
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.08588",
"language": "en",
"timestamp": "2023-02-20T02:02:23",
"url": "https://arxiv.org/abs/2302.08588",
"yymm": "2302"
} | \section{Introduction} \label{sec:intro}
A continuous-time Markov chain (CTMC) is a model of a dynamical system that, upon entering some state, remains in that state for a random real-valued amount of time ---called the dwell time or sojourn time--- and then transitions probabilistically to another state.
CTMCs are popular models in performance and dependability analysis. They have wide application and constitute the underlying semantics for real-time probabilistic systems such as queuing networks~\cite{LazowskaZGS84}, stochastic process algebras~\cite{Hillston94}, and calculi for systems biology~\cite{CiocchettaH09,KwiatkowskaNP08}.
Model checking tools such as \textsc{Prism}~\cite{KwiatkowskaNP11} and \textsc{Storm}~\cite{DehnertJK017}
provide access to a number of powerful analysis techniques for CTMCs.
Both tools accept models written in the \textsc{Prism}\ language, a state-based language based on~\cite{AlurH99b} that represents synchronous and asynchronous components in a uniform framework that supports compositional design.
\begin{figure}[t!]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\input{prismmodels/SIRuno.tex}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\input{prismmodels/SIRagents.tex}
\end{subfigure}
\caption{(Left) SIR model with lockdown from~\cite{Milazzo21}, (Right) Semantically equivalent formulation of the model to the left where susceptible, infected, and recovered individuals are modeled as distinct modules interacting with each other via synchronization.} \label{fig:bigSIR}
\end{figure}
For example, consider the two semantically equivalent \textsc{Prism}\ models depicted in Fig.~\ref{fig:bigSIR} implementing a variant of the Susceptible-Infected-Recovered (SIR) model proposed in~\cite{Milazzo21} to describe the spread of disease in presence of lockdown restrictions. The model depicted to the left consists of a single module, whereas the one to the right implements a compositional design where modules interact by synchronizing on two actions: \texttt{infection} and \texttt{recovery}.
Both models distinguish between three types of individuals: susceptible, infected, and recovered. Susceptible individuals become infected through contact with another infected person and can recover without outside interference. The SIR model is parametric in $\mathtt{beta}$, $\mathtt{gamma}$, and $\mathtt{plock}$. $\mathtt{beta}$ is the \emph{infection coefficient}, describing the probability of infection after the contact of a susceptible individual with an infected one; $\mathtt{gamma}$ is the \emph{recovery coefficient}, describing the rate of recovery of an infected individual (in other words, $1/\mathtt{gamma}$ is the time one individual requires to recover); and $\mathtt{plock} \in [0,1]$ is used to scale down the infection coefficient modeling restrictions to reduce the spread of disease.
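To make the CTMC semantics of such a model concrete, the following Python sketch simulates an SIR-style chain by Gillespie sampling. The rate forms used here (infection at $\mathtt{plock} \cdot \mathtt{beta} \cdot S \cdot I$, recovery at $\mathtt{gamma} \cdot I$) are the standard stochastic SIR rates and are only an approximation of the \textsc{Prism}\ model in Fig.~\ref{fig:bigSIR}; all numerical values are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_sir(s, i, r, beta, gamma, plock, t_max):
    t, path = 0.0, [(0.0, s, i, r)]
    while t < t_max and i > 0:                       # no infected -> absorbing
        rates = {"infection": plock * beta * s * i, "recovery": gamma * i}
        exit_rate = sum(rates.values())
        t += rng.exponential(1.0 / exit_rate)        # dwell time ~ Exp(exit rate)
        event = rng.choice(list(rates), p=np.array(list(rates.values())) / exit_rate)
        if event == "infection":
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
        path.append((t, s, i, r))
    return path

trace = simulate_sir(s=95, i=5, r=0, beta=0.002, gamma=0.1, plock=0.5, t_max=100.0)
\end{verbatim}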
Clearly, the outcome of the analysis of the above SIR model is strongly dependent on the parameter values used in each module, as they govern the timing and probability of events of the CTMC describing its semantics. However, in some application domains, parameter values have to be empirically evaluated from a number of partially-observable executions of the model. To the best of our knowledge, neither \textsc{Prism}\ nor \textsc{Storm} provide integrated support for this task, leaving the burden of estimating parameter values to the modeler. A paradigmatic example is the modeling pipeline described in~\cite{Milazzo21}, where the parameters of the SIR model in Fig.~\ref{fig:bigSIR} are estimated based on a definition of the model as ODEs, and later used in an approximation of the original SIR model designed to reduce the state space of the SIR model in Fig.~\ref{fig:bigSIR} (left).
Such modeling pipelines require high technical skills, are error-prone, and are time-consuming, thus limiting the applicability and the user base of model checking tools.
In this work, we address the problem of estimating parameter values of CTMCs expressed as \textsc{Prism}\ models from a number of partially-observable executions.
The expressive power of the \textsc{Prism}\ language brings two technical challenges:
\begin{inlineenum}
\item the classic state-space explosion problem due to modular specification, and
\item the fact that the transition rates of the CTMCs result from the algebraic composition of the rates of different (parallel) modules which are themselves defined as arithmetic expressions over the parameters (\emph{cf}.\ Fig.~\ref{fig:bigSIR}).
\end{inlineenum}
We address the second aspect of the problem by considering a class of \emph{parametric} CTMCs, which are CTMCs where transition rates are polynomial functions over a fixed set of parameters. In this respect, parametric CTMCs have the advantage to cover a rich subclass of \textsc{Prism}\ models and to be closed under the operation of parallel composition implemented by the \textsc{Prism}\ language.
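As a small illustration of what ``transition rates as polynomial functions of the parameters'' means in practice (a sketch assuming \texttt{sympy}; the rate expressions are the hypothetical SIR-style rates used above), rate expressions can be kept symbolic, composed, and only later instantiated with concrete parameter values.
\begin{verbatim}
import sympy as sp

beta, gamma, plock = sp.symbols("beta gamma plock", positive=True)
s, i = 95, 5                                   # a concrete (susceptible, infected) state

rate_infection = plock * beta * s * i          # a polynomial in the parameters
rate_recovery = gamma * i

values = {beta: 0.002, gamma: 0.1, plock: 0.5}
instantiated = [float(rate_infection.subs(values)), float(rate_recovery.subs(values))]
\end{verbatim}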
Following the standard approach, we pursue the maximum likelihood estimate (MLE), i.e., we look for the parameter values that achieve the maximum joint likelihood of the observed execution sequences. However, given the non-convex nature of the likelihood surface, computing the global maximum that defines the MLE is computationally intractable~\cite{Terwijn02}.
To deal with this issue we employ a theoretical iterative optimization principle known as MM algorithm~\cite{Lange16,Lange13}.
The well-known EM algorithm~\cite{Dempster77} is an instance of MM optimization framework and is a versatile tool for constructing optimization algorithms.
MM algorithms are typically easy to design, numerically stable, and in some cases amenable to accelerations~\cite{JamshidianJ97,ZhouAL11}.
The versatility of the MM principle consists in the fact that it is built upon a simple theory of inequalities, allowing one to derive optimization procedures. In fact, these procedures are often easier to derive than a corresponding EM algorithm, which relies on choosing appropriate missing data structures, i.e., latent variables.
Like the EM algorithm, the MM principle can be used to derive iterative procedures for maximum likelihood estimation which increase the likelihood at each iteration and converge to some local optimum.
The main technical contribution of the paper consists in laying out MM techniques for devising novel iterative maximum likelihood estimation algorithms for parametric CTMCs covering two learning scenarios.
In the first scenario, we assume that state labels and dwell times are observable variables while state variables are hidden. The learning procedure devised for this case is a generalization of the Baum-Welch
algorithm~\cite{Rabiner89} ---an EM algorithm that estimates transition probabilities in hidden Markov models--- to parametric CTMCs.
In the second scenario, state labels are observable while state variables and dwell time variables are hidden. In contrast with the first case, the objective function that defines the MLE achieves the same value on all the CTMCs sharing the same embedded Markov chain. Thus, a standard adaptation of the Baum-Welch algorithm to this case would not lead to a procedure able to learn the continuous-time aspects of the observed system. Nevertheless, by making an analogy between the way transitions ``compete'' with each other in race conditions and the Bradley--Terry model of ranking~\cite{BradleyT52}, we successfully extend the solution devised for the first scenario with techniques used by Lange, Hunter, and Yang in \cite{LangeHY00} for finding rank estimates in the Bradley--Terry model.
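For background on the borrowed technique, the following sketch (assuming \texttt{numpy}) implements the classical MM update for Bradley--Terry scores in the spirit of \cite{LangeHY00}: each score is set to the number of wins of an item divided by $\sum_{j \neq i} n_{ij}/(\gamma_i + \gamma_j)$. This is the standard ranking update, not the estimation algorithm developed in this paper.
\begin{verbatim}
import numpy as np

def bradley_terry_mm(wins, n_iter=200):
    """wins[i, j] = number of times item i beat item j."""
    n = wins + wins.T                               # comparisons per pair
    gamma = np.ones(wins.shape[0])
    for _ in range(n_iter):
        denom = n / (gamma[:, None] + gamma[None, :])
        np.fill_diagonal(denom, 0.0)
        gamma = wins.sum(axis=1) / denom.sum(axis=1)
        gamma /= gamma.sum()                        # fix the scale; scores are relative
    return gamma

scores = bradley_terry_mm(np.array([[0, 3, 5], [1, 0, 2], [2, 4, 0]], dtype=float))
\end{verbatim}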
We provide experimental evidence that, when the model has sufficiently many constant transition rates, our solution effectively converges to the true parameter values of the model by hinging on the rate values that are known in the model. Note that this condition is easily fulfilled when one of the components is fully observable. A typical example is the model of a microcontroller component running within a partially observable physical environment. Other examples may arise from website analysis for reviewing a website's performance w.r.t.\ user experience.
We demonstrate the effectiveness of our estimation procedure on a case study taken from~\cite{Milazzo21}: the analysis of the spread of COVID-19 in presence of lockdown countermeasures. In particular, we showcase how our technique can be used to simplify modeling pipelines that involve a number of modifications of the model ---possibly introducing approximations--- and the re-estimation of its parameters.
\paragraph*{Related Work}
In~\cite{GeorgoulasHS18,GeorgoulasHMS14} Georgoulas et al.\ employ probabilistic programming to implement a variant of Bio-PEPA~\cite{CiocchettaH09} called ProPPA. ProPPA is a stochastic process algebra with inference capabilities that allows some rates to be assigned a prior distribution, capturing the modeler’s belief about the likely values of the rates. Using probabilistic inference, the ProPPA model may be combined with the observations to derive updated probability distributions over rates.
Before ProPPA, Geisweiller proposed EMPEPA~\cite{Geisweiller2006}, an EM algorithm that estimates the rate values inside a PEPA model.
A closely related work is~\cite{WeiWT02} where they learn continuous-time hidden Markov models to do performance evaluation. There, observations are regarded as (discrete-time) periodic observations with fixed period $\Delta$. The learning method works in two steps: first, they employ the Baum-Welch algorithm~\cite{Rabiner89} to estimate the transition probability matrix of a hidden Markov model, then they obtain the infinitesimal generator of the CTMC from the learned transition probability matrix. In contrast with~\cite{WeiWT02}, we are able to derive a simpler procedure that directly extends the Baum-Welch algorithm to parametric CTMCs.
In~\cite{SenVA04}, Sen et al.\ present an algorithm based on the state merging paradigm of \textsc{Alergia}~\cite{CarrascoO94} to learn a CTMC from timed observations. In contrast with our work, \cite{SenVA04} does not perform parameter estimation over structured models, but learns an unstructured (transition-labeled) CTMC.
Another related line of research is parameter synthesis of Markov models~\cite{JansenJK22}.
In contrast with our work, parameter synthesis revolves around the problem of finding (some or all) parameter instantiations of the model that satisfy a given logical specification.
\section{Preliminaries and Notation} \label{sec:prelim}
We denote by $\mathbb{R}$, $\mathbb{Q}$, and $\mathbb{N}$ respectively the sets of real, rational, and natural numbers,
and by $\Sigma^n$, $\Sigma^*$, and $\Sigma^\omega$ respectively the sets of words of length $n \in \mathbb{N}$, of finite length, and of infinite length, built over the finite alphabet $\Sigma$.
We use $\distr{\Omega}$ to denote the set of discrete probability distributions on $\Omega$, i.e., functions $\mu \colon \Omega \to [0,1]$, such that $\mu(\Omega) = 1$, where $\mu(E) = \sum_{x \in E} \mu(x)$ for $E \subseteq \Omega$.
For a proposition $p$, we write $\bool{p}$ for the Iverson bracket of $p$, i.e., $\bool{p} = 1$ if $p$ is true, otherwise $0$.
A labelled continuous-time Markov chain (CTMC) is defined as follows.
\begin{definition} A labelled CTMC is a tuple $\M = (S, R, \pi, \ell)$ where $S$ is a finite set of states, $R \colon S \times S \to \mathbb{R}_{\geq 0}$ is the transition rate function, $\pi \in \distr{S}$ the initial distribution of states, and $\ell \colon S \to L$ is a labelling function which assigns to each state an observable label $\ell(s)$.
\end{definition}
The transition rate function assigns rates $r = R(s, s')$ to each pair of states $s,s' \in S$, which are to be seen as transitions of the form $s \trans{r} s'$. A transition $s \trans{r} s'$ can only occur if $r > 0$. In this case, the probability that this transition is triggered within $\tau \in \mathbb{R}_{>0}$ time units is $1 - e^{- r \, \tau}$. When, from a state $s$, there is more than one outgoing transition with positive rate, we are in the presence of a \emph{race condition}. In this case, the first transition to be triggered determines which label is observed as well as the next state of the CTMC. According to these dynamics, the time spent in state $s$ before any transition occurs, called \emph{dwell time}, is exponentially distributed with parameter $E(s) = \sum_{s' \in S} R(s,s')$, called the \emph{exit-rate} of $s$. A state $s$ is called \emph{absorbing} if $E(s) = 0$, that is, if $s$ has no outgoing transition. Accordingly, when the CTMC reaches an absorbing state, it remains in that state indefinitely.
The probability that the transition $s \trans{r} s'$ is triggered from $s$ is $r / E(s)$ and is independent of the time at which it occurs.
Accordingly, from the CTMC $\M$, we construct a (labelled) discrete-time Markov chain $\mathit{emb}(\M) = (S, P,\pi, \ell)$ with transition probability function $P \colon S \times S \to [0,1]$ defined as
\begin{equation*}
P(s, s') = \begin{cases}
R(s,s') / E(s) & \text{if $E(s) \neq 0$} \\
1 & \text{if $E(s) = 0$ and $s = s'$} \\
0 & \text{otherwise}
\end{cases}
\end{equation*}
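For concreteness, the following Python sketch (an illustration under the definitions above, not part of any tool mentioned in this paper) computes the exit rates $E(s)$ and the embedded transition probability matrix $P$ from a non-negative rate matrix.
\begin{verbatim}
import numpy as np

def embedded_dtmc(R):
    """R: (n, n) non-negative rate matrix; returns exit rates E and matrix P."""
    R = np.asarray(R, dtype=float)
    E = R.sum(axis=1)                    # exit rate E(s) = sum_{s'} R(s, s')
    P = np.zeros_like(R)
    for s in range(len(R)):
        if E[s] > 0:
            P[s] = R[s] / E[s]           # P(s, s') = R(s, s') / E(s)
        else:
            P[s, s] = 1.0                # absorbing state: stay with probability 1
    return E, P

# Example: a 3-state CTMC whose third state is absorbing
E, P = embedded_dtmc([[0.0, 2.0, 1.0],
                      [0.5, 0.0, 0.5],
                      [0.0, 0.0, 0.0]])
\end{verbatim}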
\begin{remark}
A CTMC can be equivalently described as a tuple $(S,{\to},s_0,\ell)$ where ${\to} \subseteq S \times \mathbb{R}_{\geq 0} \times S$ is a transition \emph{relation}. The transition rate function $R$ induced by $\to$ is obtained as $R(s,s') = \sum \{ r \mid s \trans{r} s' \}$ for arbitrary $s,s' \in S$.
\end{remark}
An \emph{infinite path} of a CTMC $\M$ is a sequence $s_0 \tau_0 s_1 \tau_1 s_2 \tau_2 \cdots \in (S \times \mathbb{R}_{>0})^\omega$ where $R(s_i, s_{i+1}) > 0$ for all $i \in \mathbb{N}$. A \emph{finite path} is a sequence $s_0 \tau_0 \cdots s_{k-1} \tau_{k-1} s_k$ where $R(s_i, s_{i+1}) > 0$ and $\tau_i \in \mathbb{R}_{>0}$ for all $i \in \{0, \dots, k-1\}$ and $s_k$ is absorbing. The meaning of a path is that the system started in state $s_0$, where it stayed for time $\tau_0$, then transitioned to state $s_1$ where it stayed for time $\tau_1$, and so on. For a finite path the system eventually reaches an absorbing state $s_k$, where it remains.
We denote by $\mathbf{Path}_\M$ the set of all (infinite and finite) paths of $\M$. The formal definition of the probability space over $\mathbf{Path}_\M$ induced by $\M$ can be given by following the classical cylinder set construction (see e.g., ~\cite{BaierHHK03,KwiatkowskaNP07}).
Finally, we define the random variables $S_i$, $L_i$, and $T_i$ ($i \in \mathbb{N}$) that respectively indicate the $i$-th state, its label, and the $i$-th dwell time of a path.
\paragraph*{The MM Algorithm} \label{sec:MMintro}
The MM algorithm is an iterative optimization method. The acronym MM has a double interpretation: in minimization problems, the first M stands for majorize and the second for minorize; dually, in maximization problems, the first M stands for minorize and the second for maximize.
In this paper we only focus on maximizing an objective function $f(\vec{x})$, hence we tailor
the presentation of the general principles of the MM framework to maximization problems.
The MM algorithm is based on the concept of \emph{surrogate function}. A surrogate function $g(\vec{x} \mid \vec{x}_m)$ is said to \emph{minorize} a function $f(\vec{x})$ at $\vec{x}_m$ if
\begin{align}
&f(\vec{x}_m) = g(\vec{x}_m \mid \vec{x}_m) \,, \label{eq:tangent} \\
&f(\vec{x}) \geq g(\vec{x} \mid \vec{x}_m)\, \quad \text{for all } \vec{x} \neq \vec{x}_m \,. \label{eq:minorization}
\end{align}
In the maximization variant of the MM algorithm, we maximize the surrogate minorizing function $g(\vec{x} \mid \vec{x}_m)$ rather than the actual function $f(\vec{x})$. If $\vec{x}_{m+1}$ denotes the maximum of the surrogate $g(\vec{x} \mid \vec{x}_m)$, then we can show that the next iterate $\vec{x}_{m+1}$ forces $f(\vec{x})$ uphill. Indeed, the inequalities
\begin{equation*}
f(\vec{x}_m) = g(\vec{x}_m \mid \vec{x}_m) \leq g(\vec{x}_{m+1} \mid \vec{x}_m) \leq f(\vec{x}_{m+1})
\end{equation*}
follow directly from the definition of $\vec{x}_{m+1}$ and the axioms \eqref{eq:tangent} and \eqref{eq:minorization}.
The art in devising an MM algorithm revolves around intelligent choices of minorizing functions. This work relies on three inequalities.
The first basic minorization builds upon Jensen's inequality. For $x_i > 0$, $y_i > 0$ ($i =1 \dots n$),
\begin{equation}
\ln \left( \sum_{i = 1}^{n} x_i \right) \geq \sum_{i = 1}^{n} \frac{y_i}{\sum_{j = 1}^n y_j} \ln \left( \frac{\sum_{j = 1}^n y_j}{y_i} x_i\right) \label{eq:basicmin1}
\end{equation}
Note that the above inequality becomes an equality whenever $x_i = y_i$ for all $i =1 \dots n$. Remarkably, the EM algorithm~\cite{Dempster77} is a special case of the MM algorithm which revolves around the above basic minorization when additionally the values $x_i$ and $y_i$ describe a probability distribution, i.e., $\sum_{i = 1}^{n} x_i = 1$ and $\sum_{i = 1}^{n} y_i = 1$.
Our second basic minorization derives from the strict concavity of the logarithm function, which implies for $x,y > 0$ that
\begin{equation}
- \ln x \geq 1 - \ln y - x/y\label{eq:basicmin2}
\end{equation}
with equality if and only if $x = y$. Note that the above inequality restates the supporting hyperplane property of the convex function $- \ln x$.
The third basic minorization~\cite[\S 8.3]{Lange13} derives from the generalized arithmetic-geometric mean inequality
which implies, for positive $x_i$, $y_i$, and $\alpha_i$, with $\alpha = \sum_{i = 1}^{n} \alpha_i$, that
\begin{equation}
- \prod_{i = 1}^{n} x_i^{\alpha_i}
\geq - \left( \prod_{i = 1}^{n} y_i^{\alpha_i} \right) \sum_{i = 1}^n \frac{\alpha_i}{\alpha} \left( \frac{x_i}{y_i}\right)^{\alpha}\,.
\label{eq:basicmin3}
\end{equation}
Note again that equality holds when all $x_i = y_i$.
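As a quick numerical illustration (not part of the development above), the following Python snippet checks the three basic minorizations on randomly chosen positive values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 2.0, 4)
y = rng.uniform(0.1, 2.0, 4)
a = rng.uniform(0.5, 2.0, 4)

# Jensen-based minorization of ln(sum_i x_i)
lhs = np.log(x.sum())
rhs = np.sum(y / y.sum() * np.log(y.sum() / y * x))
assert lhs >= rhs

# supporting-hyperplane minorization of -ln(x)
assert -np.log(x[0]) >= 1 - np.log(y[0]) - x[0] / y[0]

# arithmetic-geometric-mean minorization of -prod_i x_i^{a_i}
alpha = a.sum()
assert -np.prod(x ** a) >= -np.prod(y ** a) * np.sum(a / alpha * (x / y) ** alpha)
\end{verbatim}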
Because piecemeal composition of minorization works well, our derivations apply the above basic minorizations to strategic parts of the objective function, leaving other parts untouched.
Finally, another aspect that can simplify the derivation of MM algorithms comes from the fact that the iterative maximization procedure hinges on finding $\vec{x}_{m+1} = \argmax_{\vec{x}} g(\vec{x} \mid \vec{x}_m)$. Therefore, we can equivalently use any other surrogate function $g'(\vec{x} \mid \vec{x}_m)$ satisfying $\argmax_{\vec{x}} g(\vec{x} \mid \vec{x}_m) = \argmax_{\vec{x}} g'(\vec{x} \mid \vec{x}_m)$.
This is for instance the case when $g(\vec{x} \mid \vec{x}_m)$ and $g'(\vec{x} \mid \vec{x}_m)$ are equal up to some (irrelevant) constant $c$, that is $g(\vec{x} \mid \vec{x}_m) = g'(\vec{x} \mid \vec{x}_m) + c$.
\section{Parametric Continuous-time Markov chains}
As mentioned in the introduction, the \textsc{Prism}\ language offers constructs for the modular design of CTMCs within a uniform framework that represents synchronous and asynchronous module interaction.
For example, consider the \textsc{Prism}\ models depicted in Fig.~\ref{fig:bigSIR}. The behavior of each module is described by a set of commands which take the form
\mbox{$[\prismident{action}] \; \prismident{guard} \; \rightarrow \prismident{rate} \colon \prismident{update}$} representing a set of transitions of the module. The guard is a predicate over the state variables in the model. The update and the rate describe a transition that the module can make if the guard is true.
The command optionally includes an action used to force two or more modules to make transitions simultaneously (i.e., to synchronize). For example, in the left model in Fig.~\ref{fig:bigSIR}, in state $(50, 20, 5)$ (i.e., $s = 50$, $i = 20$, and $r = 5$), the composed model can move to state $(49,21,5)$ by synchronizing over the action $\mathtt{infection}$. The rate of this transition is equal to the product of the individual rates of each module participating in an $\mathtt{infection}$ transition, which in this case amounts to $0.01 \cdot \mathtt{beta} \cdot \mathtt{plock}$.
Commands that do not have an action represent asynchronous transitions that can be taken independently (i.e., asynchronously) from other modules.
By default, all modules are combined following standard parallel composition in the sense of the parallel operator from the Communicating Sequential Processes algebra (CSP), that is, modules synchronize over all their common actions. The \textsc{Prism}\ language also offers other CSP-based operators to specify the way in which modules are composed in parallel.
Therefore, a parametric representation of a CTMC described by a \textsc{Prism}\ model shall consider \emph{transition rate expressions} which are closed under finite sums and finite products: sums deal with commands with overlapping guards and updates, while products take into account synchronization.
Let $\vec{x} = (x_1,\dots, x_n)$ be a vector of parameters. We write $\mathcal{E}$ for the set of polynomial maps $f \colon \mathbb{R}_{\geq 0}^n \to \mathbb{R}_{\geq 0}$ of the form $f(\vec{x}) = \sum_{i = 1}^{m} b_i \prod_{j=1}^{n} x_j^{a_{ij}}$,
where $b_i \in \mathbb{R}_{\geq 0}$ and $a_{ij} \in \mathbb{N}$ for $i \in \{ 1, \dots, m \}$ and $j \in \{ 1, \dots, n \}$.
Note that $\mathcal{E}$ is a commutative semiring satisfying the requirements established above for transition rate expressions.
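As a small illustration of the class $\mathcal{E}$ (a sketch only; the data structures actually used by the tools are not specified here), a transition rate expression can be stored as a list of monomials $(b_i, (a_{i1},\dots,a_{in}))$ and evaluated at a parameter valuation as follows.
\begin{verbatim}
import numpy as np

def evaluate(expr, v):
    """expr: list of (b, a) monomials with b >= 0 and exponent vector a;
    v: non-negative parameter valuation. Returns sum_i b_i * prod_j v_j**a_ij."""
    v = np.asarray(v, dtype=float)
    return sum(b * np.prod(v ** np.asarray(a)) for b, a in expr)

# Example: the rate 0.01 * beta * plock over the parameters (beta, gamma, plock)
rate = [(0.01, (1, 0, 1))]
print(evaluate(rate, (0.05, 0.04, 0.5)))   # 0.01 * 0.05 * 0.5 = 0.00025
\end{verbatim}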
We are now ready to introduce the notion of \emph{parametric} continuous-time Markov chain.
\begin{definition}
A parametric CTMC is a tuple $\mathcal{P} = (S,R,s_0, \ell)$ where $S$, $s_0$, and $\ell$ are defined as for CTMCs, and $R \colon S \times S \to \mathcal{E}$ is a parametric transition rate function.
\end{definition}
Intuitively, a parametric CTMC $\mathcal{P} = (S,R,s_0, \ell)$ defines a family of CTMCs arising by plugging in concrete values for the parameters $\vec{x}$. Given a parameter evaluation $\vec{v} \in \mathbb{R}_{\geq 0}^n$, we denote by $\mathcal{P}(\vec{v})$ the CTMC associated with $\vec{v}$, and by $R(\vec{v})$ its transition rate function. Note that by construction $R(\vec{v})(s,s') \geq 0$ for all $s,s' \in S$; therefore $\mathcal{P}(\vec{v})$ is a proper CTMC.
As for CTMCs, parametric transition rate functions can be equivalently described by means of a transition relation ${\to} \subseteq S \times \mathcal{E} \times S$, where the parametric transition rate from $s$ to $s'$ is $R(s,s')(\vec{x}) = \sum \{f(\vec{x}) \mid s \trans{f} s' \}$.
\begin{example}
Consider the SIR model in Fig.~\ref{fig:bigSIR} with parameters $\mathtt{beta}$, $\mathtt{gamma}$, and $\mathtt{plock}$. The semantics of this model is a parametric CTMC with states $S=\{(s,i,r) \mid s,i,r \in \{0, \dots, 10^5 \} \}$ and initial state $(99936, 48, 16)$.
For example, the initial state has two outgoing transitions: one that goes to $(99935, 49, 16)$ with rate $48.96815 \cdot \mathtt{beta} \cdot \mathtt{plock}$, and the other that goes to $(99935,48,17)$ with rate $49 \cdot \mathtt{gamma} \cdot \mathtt{plock}$. \qed
\end{example}
One relevant aspect of the class of parametric CTMCs is the fact that it is closed under parallel composition in the sense described above. As a consequence, the study of parameter estimation of \textsc{Prism}\ models from observed data can be conveniently addressed as maximum likelihood estimation for parametric CTMCs.
\section{Learning Parameters from Observed Sample Data} \label{sec:learnCTMC}
In this section we present two algorithms to estimate the parameters of a parametric CTMC $\mathcal{P}$ from a collection of i.i.d.\ observation sequences $\O = \vec{o}_1, \dots, \vec{o}_J$.
The two algorithms consider two different types of observations: timed and non-timed. A \emph{timed observation} $\ell_{0:k},\tau_{0:k-1}$ is a finite sequence $\ell_0 \tau_0 \cdots \tau_{k-1} \ell_k$ representing consecutive dwell times and state labels observed during a random execution of $\M$. Similarly, a \emph{non-timed observation} $\ell_{0:k}$ represents a sequence of consecutive state labels observed during a random execution of $\M$.
Both algorithms follow a maximum likelihood approach: the parameters $\vec{x}$ are estimated to maximize the joint likelihood $\mathcal{L}(\mathcal{P}(\vec{x}) | \O)$ of the observed data.
When $\mathcal{P}$ and $\O$ are clear from the context, we simply write $\mathcal{L}(\vec{x})$ for the joint likelihood.
Hereafter we present a solution to the maximum likelihood estimation problem building on an optimization framework known by the name MM algorithm~\cite{Lange13,Lange16}.
Along these lines, our algorithms start with an initial hypothesis $\hypX{0}$ and iteratively improve the current hypothesis $\hypX{m}$, in the sense that the likelihood associated with the next hypothesis $\hypX{m+1}$ enjoys the inequality $\mathcal{L}(\hypX{m}) \leq \mathcal{L}(\hypX{m+1})$. The procedure terminates when the improvement does not exceed a fixed threshold $\epsilon$, namely when $\mathcal{L}(\hypX{m}) - \mathcal{L}(\hypX{m-1}) \leq \epsilon$.
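The overall iteration scheme can be summarized by the following Python skeleton (a sketch, not taken from the implementation discussed later; \texttt{likelihood} and \texttt{maximize\_surrogate} are placeholders for the model-specific computations developed in the next subsections).
\begin{verbatim}
def mm_estimate(x0, likelihood, maximize_surrogate, eps=1e-4, max_iter=100):
    """Generic MM ascent: likelihood(x) evaluates L(x) (or its logarithm),
    maximize_surrogate(x_m) returns argmax_x of the surrogate g(x | x_m)."""
    x, value = x0, likelihood(x0)
    for _ in range(max_iter):
        x_next = maximize_surrogate(x)
        value_next = likelihood(x_next)
        if value_next - value <= eps:     # improvement below the threshold
            return x_next
        x, value = x_next, value_next
    return x
\end{verbatim}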
\subsection{Learning from Timed Observations} \label{sec:learntimed}
Assume we have $J$ i.i.d.\ timed observation sequences $\O = \vec{o}_1, \dots, \vec{o}_J$ where $\vec{o}_j = \ell^j_{0:k_j}, \tau^j_{0:k_j-1}$ ($j=1\dots J$). We want to estimate a valuation of the parameters $\vec{x}$ of $\mathcal{P}$ that maximizes the joint likelihood function $\mathcal{L}(\vec{x}) = \prod_{j=1}^{J} l(\vec{o}_j | \mathcal{P}(\vec{x}))$ where the likelihood of an observation $\vec{o} = \ell_{0:k}, \tau_{0:k-1}$ for a generic CTMC $\M$ is
\begin{align}
&l (\vec{o} | \M) = \textstyle \sum_{s_{0:k}} l(S_{0:k} = s_{0:k}, L_{0:k} = \ell_{0:k}, T_{0:k-1} = \tau_{0:k-1} | \M) \notag \\
&\quad = \textstyle \sum_{s_{0:k}} P[S_{0:k} = s_{0:k}, L_{0:k} = \ell_{0:k} | \M] \cdot l(T_{0:k-1} = \tau_{0:k-1} | S_{0:k} = s_{0:k}, \M) \notag\\
&\quad = \textstyle \sum_{s_{0:k}} \bool{\ell(s_{0:k}) {=} \ell_{0:k}} \big( \prod_{i = 0}^{k-1} R(s_{i}, s_{i+1}) / E(s_{i}) \big) \, \big(\prod_{i = 0}^{k-1}E(s_{i}) \, e^{- E(s_{i}) \tau_{i}} \big) \notag\\
& \quad = \textstyle \sum_{s_{0:k}} \bool{\ell(s_{0:k}) = \ell_{0:k}} \prod_{i = 0}^{k-1} R(s_{i}, s_{i+1}) \cdot e^{- E(s_{i}) \tau_{i}} \,.\label{eq:LTObs2}
\end{align}
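To fix intuition, the likelihood above can be evaluated by brute force on very small models by enumerating all state sequences compatible with the observed labels. The following sketch assumes a single initial state $s_0$, as in the forward recursion given below; it is exponential in $k$ and serves only as an executable reading of the formula, not as the procedure used in practice.
\begin{verbatim}
import itertools
import numpy as np

def timed_likelihood_bruteforce(R, labels, s0, obs_labels, obs_times):
    """Sum over state sequences of prod_i R(s_i, s_{i+1}) * exp(-E(s_i) * tau_i),
    restricted to sequences starting in s0 and matching the observed labels."""
    R = np.asarray(R, dtype=float)
    n, k = len(R), len(obs_labels) - 1
    E = R.sum(axis=1)
    total = 0.0
    for seq in itertools.product(range(n), repeat=k + 1):
        if seq[0] != s0 or any(labels[s] != l for s, l in zip(seq, obs_labels)):
            continue
        p = 1.0
        for i in range(k):
            p *= R[seq[i], seq[i + 1]] * np.exp(-E[seq[i]] * obs_times[i])
        total += p
    return total
\end{verbatim}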
Before presenting an MM algorithm to solve the MLE problem above, we find it convenient to introduce some notation. Let $\mathcal{P} = (S,\to, s_0, \ell)$; we write $f_\rho$ for the rate map of the transition $\rho \in {\to}$, and $s \to \cdot$ for the set of transitions departing from $s \in S$.
Without loss of generality, we assume that the rate function $f_\rho$ of a transition is either a constant map, i.e., $f_\rho(\vec{x}) = c_\rho$ for some $c_\rho \geq 0$, or a map of the form $f_\rho(\vec{x}) = c_\rho \prod_{i = 1}^{n} x_i^{a_{\rho i}}$ with $c_\rho > 0$ and $a_{\rho i} > 0$ for at least one $i \in \{1,\dots, n\}$; we write $a_\rho$ for $\sum_{i = 1}^n a_{\rho i}$. We denote by $\trans{c}$ the subset of transitions with constant rate function and by $\trans{\vec{x}}$ the set of remaining transitions.
To maximize $\mathcal{L}(\vec{x})$ we propose to employ an MM algorithm based on the following surrogate function $g(\vec{x} | \hypX{m}) = \sum_{i = 1}^n g(x_i | \hypX{m})$ where
\begin{equation}
g(x_i | \hypX{m}) =
\sum_{\rho \in {\trans{\vec{x}}}} \xi_{\rho} a_{\rho i} \ln x_i
- \sum_{s} \sum_{\rho \in s \trans{\vec{x}} \cdot} \frac{f_\rho(\hypX{m}) a_{\rho i} \gamma_s}{a_\rho (x_{m i})^{a_\rho}} x_i^{a_\rho} \label{eq:surrogate-timed}
\end{equation}
Here $\gamma_s = \sum_{j = 1}^{J} \sum_{t = 0}^{k_j-1} \gamma^j_s(t) \tau^j_{t}$ and $\xi_\rho = \sum_{j = 1}^{J} \sum_{t = 0}^{k_j-1} \xi^j_{\rho}(t)$, where $\gamma^j_{s}(t)$ denotes the likelihood that, having observed $\vec{o}_j$ on a random execution of $\mathcal{P}(\hypX{m})$, the state $S_t$ is $s$;
and $\xi^j_{\rho}(t)$ is the likelihood that, for such a random execution, the transition performed from state $S_t$ is $\rho$.
The following theorem states that the surrogate function $g(\vec{x} | \hypX{m})$ is a minorizer of the log-likelihood relative to the observed dataset $\O$.
\begin{theorem} \label{thm:gMinorizes}
The surrogate function $g(\vec{x} | \hypX{m})$ minorizes $\ln \mathcal{L}(\vec{x})$ at $\hypX{m}$ up to an irrelevant constant.
\end{theorem}
By Theorem~\ref{thm:gMinorizes} and the fact that the logarithm is an increasing function, we obtain that the parameter valuation that achieves the maximum of $g(\vec{x} | \hypX{m})$ improves the current hypothesis $\hypX{m}$ relative to the likelihood function $\mathcal{L}(\vec{x})$.
\begin{corollary}
Let $\hypX{m+1} =\argmax_{\vec{x}} g(\vec{x} | \hypX{m})$, then $\mathcal{L}(\hypX{m}) \leq \mathcal{L}(\hypX{m+1})$.
\end{corollary}
The surrogate function $g(\vec{x} | \hypX{m})$ is easier to maximize than $\mathcal{L}(\vec{x})$ because its parameters are separated. Indeed, maximization of $g(\vec{x} | \hypX{m})$ is done by point-wise maximization of each univariate function $g(x_i | \hypX{m})$.
This has two main advantages: first, it is easier to handle high-dimensional problems~\cite{Lange13,Lange16}; second, if one chooses to fix the value of some parameters, say those with indices in $I \subset \{1,\dots, n\}$, the maximization of $g(\vec{x} | \hypX{m})$ can be performed by maximizing $g(x_i | \hypX{m})$ for each $i \notin I$.
The maxima of $g(x_i | \hypX{m})$ are found among the \emph{non-negative} roots\footnote{Note that $P_i$ always admits non-negative roots.
Indeed, $P_i(0) \leq 0$ and $P_i(M) > 0$ for $M > 0$ sufficiently large. Therefore, by the intermediate value theorem, there exists $y_0 \in [0,M)$ such that $P_i(y_0) = 0$.} of the polynomial function $P_i \colon \mathbb{R} \to \mathbb{R}$
\begin{equation}
P_i(y) = \sum_{s} \sum_{\rho \in s \trans{\vec{x}}} \frac{f_\rho(\hypX{m}) a_{\rho i} \gamma_s}{(x_{m i})^{a_\rho} } y^{a_\rho} - \sum_{\rho \in {\trans{\vec{x}}}} \xi_{\rho} a_{\rho i}
\label{updateTimed}
\end{equation}
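In practice, once the aggregated coefficients of $P_i$ are available, the update of a single parameter can be computed numerically. The following sketch (a hypothetical helper, not the paper's implementation) collects the coefficient of $y^{a_\rho}$ for each exponent, finds the non-negative real roots of $P_i$, and returns the candidate maximizing the univariate surrogate.
\begin{verbatim}
import numpy as np

def update_parameter(terms, xi_term):
    """terms: dict mapping each exponent a_rho to the aggregated coefficient
       sum_s sum_rho f_rho(x_m) * a_{rho i} * gamma_s / (x_{m i})**a_rho;
       xi_term: sum_rho xi_rho * a_{rho i}.  Returns argmax of g(x_i | x_m)."""
    if xi_term == 0:
        return 0.0
    deg = max(terms)
    coeffs = np.zeros(deg + 1)
    for a, c in terms.items():
        coeffs[a] = c                       # coefficient of y**a in P_i
    coeffs[0] = -xi_term                    # constant term of P_i
    roots = np.roots(coeffs[::-1])          # np.roots expects high-to-low order
    candidates = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]

    def g(x):                               # univariate surrogate g(x_i | x_m)
        return xi_term * np.log(x) - sum(c / a * x ** a for a, c in terms.items())

    return max(candidates, key=g)
\end{verbatim}
When all exponents $a_\rho$ coincide, the positive root can be written in closed form, which corresponds to the updates discussed in the remark below.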
\begin{remark}
\label{rm:category1}
There are some cases in which \eqref{updateTimed} admits a closed-form solution. For instance, when the parameter index $i$ satisfies the property $\forall \rho \in {\trans{\vec{x}}} .\,a_{\rho i} > 0 \implies a_{\rho} = C$ for some constant $C \in \mathbb{N}$, maximization of $g(x_i | \hypX{m})$ leads to the following update
\begin{equation*}
x_{(m+1) i} = \left[ \frac{ (x_{m i})^C \sum_{\rho \in {\trans{\vec{x}}}} \xi_{\rho} a_{\rho i} }{ \sum_{s} \sum_{\rho \in s \trans{\vec{x}}} f_\rho(\hypX{m}) a_{\rho i} \gamma_s } \right]^{1/C}
\end{equation*}
A classic situation in which the above condition is fulfilled occurs when, for all transitions $\rho$ in which $x_i$ appears (i.e., $a_{\rho i} > 0$), the transition rate is $f_\rho(\vec{x}) = c_\rho x_i$ (i.e., $a_{\rho i} = a_\rho = 1$). In that case, the above equation simplifies to
\begin{equation*}
x_{(m+1) i} = \frac{ \sum_{\rho \in {\trans{\vec{x}}}} \xi_{\rho} }{ \sum_{s} \sum_{\rho \in s \trans{\vec{x}}} c_\rho \gamma_s }
\end{equation*}
For example, the parametric CTMC associated with the SIR models in Fig.~\ref{fig:bigSIR} satisfies the former property for all parameters, because all transition rates are expressions either of the form $c \cdot \mathtt{plock} \cdot \mathtt{beta}$ or of the form $c \cdot \mathtt{plock} \cdot \mathtt{gamma}$ for some constant $c > 0$. Furthermore, if we fix the value of the parameter $\mathtt{plock}$, the remaining parameters satisfy the latter property.
In Section~\ref{sec:case-study}, we will take advantage of this fact for our calculations. \qed
\end{remark}
To complete the picture, we show how to compute the coefficients $\gamma^j_s(t)$ and $\xi^j_{\rho}(t)$. To this end, we employ standard forward and backward procedures.
We define the forward function $\alpha_s^j(t)$ and the backward function $\beta_s^j(t)$ respectively as
\begin{align*}
\alpha_s^j(t) &= l(L_{0: t} = \ell^{j}_{0: t},T_{0: t} = \tau^{j}_{0: t}, S_{t} = s | \mathcal{P}(\hypX{m})) \, \text{, and} \\
\beta_s^j(t) &= l(L_{t+1:k_j} = \ell^{j}_{t+1 :k_j}, T_{t+1:k_j-1} = \tau^{j}_{t+1:k_j-1} | S_{t} = s , \mathcal{P}(\hypX{m}) ) \, .
\end{align*}
These can be computed using dynamic programming according to the following recurrences: let $\mathcal{P}(\hypX{m}) = (S,R,s_0, \ell)$, then
\begin{align}
\alpha_s^j(t) &= \begin{cases}
\bool{s = s_0} \, \omega^j_s(t) &\text{if $ t=0$} \\
\omega^j_{s}(t) \sum_{s' \in S} \frac{R(s', s)}{E(s')} \, \alpha_{s'}^j(t-1) &\text{if $0 < t \leq k_j$}
\end{cases}
\\
\beta_s^j(t) &= \begin{cases}
1 &\text{if $ t= k_j$} \\
\sum_{s' \in S} \frac{R(s, s')}{E(s)} \, \beta_{s'}^j(t+1) \, \omega^j_{s'}(t+1) &\text{if $0 \leq t < k_j$}
\end{cases}
\end{align}
where
\begin{equation}
\omega^j_s(t) =
\begin{cases}
\bool{\ell(s) = \ell^j_t} E(s) e^{- E(s) \tau^j_{t}}
&\text{if $0 \leq t < k_j$,} \\
\bool{\ell(s) = \ell^j_t} &\text{if $t = k_j$.}
\end{cases}
\label{eq:omegaobs}
\end{equation}
Finally, for $s \in S$ and $\rho = (s \trans{f_\rho} s')$, $\gamma^j_{s}(t)$ and $\xi^j_{\rho}(t)$ are related to the forward and backward functions as follows
\begin{align}
\gamma^j_{s}(t) = \frac{\alpha_s^j(t) \, \beta_s^j(t) }{ \sum_{s' \in S} \alpha_{s'}^j(t) \, \beta_{s'}^j(t)} \, ,
&&
\xi^j_{\rho}(t) = \frac{ \alpha_s^j(t) f_\rho(\hypX{m}) \, \omega^j_{s'}(t+1) \,\beta_{s'}^j(t+1) }{ E(s) \sum_{s'' \in S} \alpha_{s''}^j(t) \, \beta_{s''}^j(t)} \,. \label{eq:defGammaXi}
\end{align}
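The recurrences above translate directly into a dynamic-programming routine. The following sketch (illustrative only, not the \texttt{Jajapy} implementation) computes $\alpha$, $\beta$, and the coefficients $\gamma^j_s(t)$ for one timed observation; the coefficients $\xi^j_\rho(t)$ are obtained analogously from the last displayed equation.
\begin{verbatim}
import numpy as np

def forward_backward_timed(R, labels, s0, obs_labels, obs_times):
    """R: rate matrix of P(x_m); labels: state -> label; s0: initial state."""
    R = np.asarray(R, dtype=float)
    n, k = len(R), len(obs_labels) - 1
    E = R.sum(axis=1)

    def omega(s, t):
        if labels[s] != obs_labels[t]:
            return 0.0
        return E[s] * np.exp(-E[s] * obs_times[t]) if t < k else 1.0

    alpha = np.zeros((k + 1, n))
    beta = np.ones((k + 1, n))              # beta(s, k) = 1 by definition
    for s in range(n):
        alpha[0, s] = (s == s0) * omega(s, 0)
    for t in range(1, k + 1):
        for s in range(n):
            alpha[t, s] = omega(s, t) * sum(
                R[sp, s] / E[sp] * alpha[t - 1, sp]
                for sp in range(n) if E[sp] > 0)
    for t in range(k - 1, -1, -1):
        for s in range(n):
            if E[s] > 0:
                beta[t, s] = sum(
                    R[s, sp] / E[s] * omega(sp, t + 1) * beta[t + 1, sp]
                    for sp in range(n))
    # gamma(t, s): assumes the observation has positive likelihood
    gamma = alpha * beta / (alpha * beta).sum(axis=1, keepdims=True)
    return alpha, beta, gamma
\end{verbatim}
For non-timed observations the same routine applies with $\omega$ replaced by the plain label indicator $\hat{\omega}^j_s(t)$, as explained in the next subsection.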
\subsection{Learning from Non-timed Observations} \label{sec:learnnontimed}
Let us now assume we have collected $J$ i.i.d.\ non-timed observation sequences $\O = \vec{o}_1, \dots, \vec{o}_J$ where $\vec{o}_j = \ell^j_{0:k_j}$ ($j=1 \dots J$). As before, we want to maximize the joint likelihood function $\mathcal{L}(\vec{x}) = \prod_{j=1}^{J} l(\vec{o}_j | \mathcal{P}(\vec{x}))$ where the likelihood of an arbitrary non-timed observation $\vec{o} = \ell_{0:k}$ relative to the CTMC $\M$ is
\begin{align}
l(\ell_{0:k} | \M ) &= \textstyle \sum_{s_{0:k}} P[S_{0:k} = s_{0:k}, L_{0:k} = \ell_{0:k} | \M] \label{eq:LObs1} \\
&= \textstyle \sum_{s_{0:k}} \bool{\ell(s_{0:k}) {=} \ell_{0:k}} \prod_{i = 0}^{k-1} R(s_{i}, s_{i+1}) / E(s_{i}) \,. \label{eq:LObs2}
\end{align}
Looking at the formula above, it is clear that whenever two CTMCs $\M_1$ and $\M_2$ have the same embedded Markov chain they will also have the same likelihood value, i.e., $\mathcal{L}(\M_1 | \O) = \mathcal{L}(\M_2 | \O)$. The fact that dwell time variables are not observable leaves us with an MLE objective that does not fully capture the continuous-time aspects of the model under estimation.
A similar problem also shows up in the Bradley--Terry model of ranking~\cite{BradleyT52}. This model is intuitively understood via a sports analogy. Given a set of teams where each team $i$ is assigned a rank parameter $r_i > 0$, assuming that ties are not possible, team $i$ beats team $j$ with probability $r_i / (r_i + r_j)$. If this outcome occurs $c_{ij}$ times during a tournament, then the probability of the whole tournament is $L(\vec{r}) = \prod_{i,j} (r_i / (r_i + r_j))^{c_{ij}}$, assuming that games are independent of one another. Clearly, $L(\vec{r}) = L(c \,\vec{r})$ for any $c > 0$. Under mild assumptions, the function $L(\vec{r})$ admits a unique maximum when the value of one rank, say $r_1$, is fixed a priori.
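The scale invariance $L(\vec{r}) = L(c\,\vec{r})$ noted above is easy to check numerically; the following toy snippet (illustrative data only, not taken from the paper) does so.
\begin{verbatim}
import numpy as np

def bt_likelihood(r, wins):
    """r: positive rank parameters; wins[i][j]: times team i beat team j."""
    L = 1.0
    for i in range(len(r)):
        for j in range(len(r)):
            if i != j:
                L *= (r[i] / (r[i] + r[j])) ** wins[i][j]
    return L

r = np.array([1.0, 2.0, 0.5])
wins = [[0, 3, 1], [2, 0, 4], [1, 1, 0]]
assert np.isclose(bt_likelihood(r, wins), bt_likelihood(10 * r, wins))
\end{verbatim}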
Back to our problem, we claim that the race conditions among transitions can be interpreted under the Bradley--Terry model of ranking. As a consequence, when the number of parametric transitions is sufficiently small relative to that of constant transitions, the estimation of the unknown transition rates can hinge on the value of the transition rates that are fixed, leading the algorithm to converge to the real parameter values.
For the non-timed maximum likelihood estimation problem we devise an MM algorithm based on the surrogate function $h(\vec{x} | \hypX{m}) = \sum_{i = 1}^{n} h(x_i | \hypX{m})$ where
\begin{equation}
h(x_i | \hypX{m}) =
\sum_{\rho \in {\trans{\vec{x}}}} \hat\xi_{\rho} a_{\rho i} \ln x_i
- \sum_{s} \sum_{\rho \in s \trans{\vec{x}} \cdot} \frac{f_\rho(\hypX{m}) \, a_{\rho i} \, \hat\gamma_s}{E_{m}(s) \, a_\rho \, x_{m i}^{a_\rho}} x_i^{a_\rho} \label{eq:surrogate-nontimed}
\end{equation}
where $E_{m}(s)$ denotes the exit rate of the state $s$ in $\mathcal{P}(\hypX{m})$, $\hat\gamma_s = \sum_{j = 1}^{J}\sum_{t = 0}^{k_j - 1} \hat\gamma_s^j(t)$, and $\hat\xi_\rho = \sum_{j = 1}^{J}\sum_{t = 0}^{k_j - 1} \hat\xi_\rho^j(t)$.
This time, the coefficients $\hat{\gamma}^j_{s}(t)$ and $\hat{\xi}^j_{\rho}(t)$ denote respectively the probability that, having observed $\vec{o}_j$ in a random execution of $\mathcal{P}(\hypX{m})$, the state $S_t$ is $s$, and the probability that the transition performed in state $S_t$ is $\rho$. $\hat{\gamma}^j_{s}(t)$ and $\hat{\xi}^j_{\rho}(t)$ can be computed using the same dynamic programming procedure described in Section~\ref{sec:learntimed} by replacing each occurrence of $\omega^j_s(t)$ with $\hat{\omega}^j_s(t) = \bool{\ell(s) = \ell^j_t}$.
The following theorem states that the surrogate function $h(\vec{x} | \hypX{m})$ is a minorizer of the log-likelihood relative to the observed (non-timed) dataset $\O$.
\begin{theorem} \label{thm:hMinorizes}
The surrogate function $h(\vec{x} | \hypX{m})$ minorizes $\ln \mathcal{L}(\vec{x})$ at $\hypX{m}$ up to an irrelevant constant.
\end{theorem}
\begin{proof}(sketch) To ease the presentation, we assume that the parametric CTMC $\mathcal{P}$ under study has at most one transition between each pair of states.
Starting from the log-likelihood function $\ln \mathcal{L}(\vec{x})$, we proceed with the following minorization steps\footnote{We denote by $f(\vec{x}) \cong f'(\vec{x})$ the fact that $f(\vec{x}) = f'(\vec{x}) + C$ for some (irrelevant) constant $C$.}
\begin{align*}
&\ln \mathcal{L}(\vec{x}) = \sum_{j = 1}^{J} \ln l(\vec{o}_j | \mathcal{P}(\vec{x}))
= \sum_{j = 1}^{J} \ln \sum_{s_{0:k_j}} P[ s_{0:k_j}, \vec{o}_j | \mathcal{P}(\vec{x})] \tag{by \eqref{eq:LObs1}} \\
&\geq \sum_{j = 1}^{J} \sum_{s_{0:k_j}} {P[s_{0:k_j} | \vec{o}_j , \mathcal{P}(\hypX{m})]} \ln \left( \frac{P[ s_{0:k_j}, \vec{o}_j | \mathcal{P}(\vec{x})]}{P[s_{0:k_j} | \vec{o}_j , \mathcal{P}(\hypX{m})]} \right) \tag{by \eqref{eq:basicmin1}}\\
&\cong
\sum_{j = 1}^{J} \sum_{t = 0}^{k_j - 1} \sum_{s_{0:k_j}} P[s_{0:k_j} | \vec{o}_j , \mathcal{P}(\hypX{m})] \big( \ln R(s_{t}, s_{t+1}) - \ln E(s_{t}) \big) \tag{by \eqref{eq:LObs2}} \\
&\cong \sum_{\rho \in {\trans{\vec{x}}}} \hat\xi_{\rho} \ln f_\rho(\vec{x}) + \sum_{s} \hat\gamma_s (- \ln E(s) )\tag{up-to const} \\
&\geq
\sum_{i = 1}^{n} \sum_{\rho \in {\trans{\vec{x}}}} \hat\xi_{\rho} a_{\rho i} \ln x_i + \sum_{s} \hat\gamma_s \left( - \frac{E(s)}{E_{m}(s)} \right) \tag{by \eqref{eq:basicmin2}, up-to const} \\
& \geq \sum_{i = 1}^n \left[
\sum_{\rho \in {\trans{\vec{x}}}} \hat{\xi}_{\rho} a_{\rho i} \ln x_i
- \sum_{s} \sum_{\rho \in s \trans{\vec{x}} \cdot} \frac{\hat{\gamma}_s f_\rho(\hypX{m}) a_{\rho i}}{E_{m}(s) a_\rho x_{m i}^{a_\rho}} x_i^{a_\rho} \right] \tag{$\triangle\triangle$} \label{eq:hatminEsbis} \\
& = h(\vec{x} | \hypX{m}) \tag{by \eqref{eq:surrogate-nontimed}}
\end{align*}
where the step \eqref{eq:hatminEsbis} is justified by the minorization of $- E(s)$ obtained via \eqref{eq:basicmin3} as follows:
\begin{equation*}
- E(s)
\cong \sum_{\rho \in s \trans{\vec{x}} \cdot} c_{\rho} \left( - \prod_{i = 1}^{n} x_i^{a_{\rho i}} \right)
\geq \sum_{\rho \in s \trans{\vec{x}}} - f_{\rho}(\hypX{m}) \sum_{i = 1}^{n} \frac{a_{\rho i}}{a_\rho} \left(\frac{x_i}{x_{m i}} \right)^{a_\rho} \,.
\end{equation*}
Hence, there exists a constant $C > 0$ such that $h(\vec{x} | \hypX{m}) + C$ minorizes $\ln \mathcal{L}(\vec{x})$ at $\hypX{m}$.
\end{proof}
Notably, in the proof of Theorem~\ref{thm:hMinorizes} we employ the minorization~\eqref{eq:basicmin2} used in~\cite{LangeHY00} for finding rankings in the Bradley--Terry model.
As an immediate corollary of Theorem~\ref{thm:hMinorizes}, we have that the parameter valuations that maximize $h(\vec{x} | \hypX{m})$ improve the current hypothesis $\hypX{m}$ with respect to the ML objective.
\begin{corollary}
Let $\hypX{m+1} =\argmax_{\vec{x}} h(\vec{x} | \hypX{m})$, then $\mathcal{L}(\hypX{m}) \leq \mathcal{L}(\hypX{m+1})$.
\end{corollary}
As before, maximization of $h(\vec{x} | \hypX{m})$ is achieved by point-wise maximization of $h(x_i | \hypX{m})$. The maxima of $h(x_i | \hypX{m})$ are found among the \emph{non-negative} roots of the polynomial function
\begin{equation}
Q_i(y) =
\sum_{s} \sum_{\rho \in s \trans{\vec{x}} \cdot} \frac{f_\rho(\hypX{m}) a_{\rho i} \hat\gamma_s}{E_{m}(s) (x_{m i})^{a_\rho}} y^{a_\rho}
- \sum_{\rho \in {\trans{\vec{x}}}} \hat\xi_{\rho} a_{\rho i}
\label{updateUntimed}
\end{equation}
By arguments similar to those explained in Remark~\ref{rm:category1}, Equation~\eqref{updateUntimed} may admit a closed-form solution.
\section{Experimental evaluation} \label{sec:experiments}
\begin{figure}[t]
\input{prismmodels/tandem.tex}
\caption{\textsc{Prism}\ model for the tandem queueing network from \cite{Hermanns99}.}
\label{model:tandem}
\end{figure}
We implemented the algorithms from Section~\ref{sec:learnCTMC} as an extension of the \texttt{Jajapy} Python library~\cite{jajapy}, which has the advantage of being compatible with \textsc{Prism}\ models.
In this section, we present an empirical evaluation of the efficiency of our algorithms as well as the quality of their outcome.
To this end, we use the tandem queueing network model from \cite{Hermanns99} (\emph{cf}.\ Fig.~\ref{model:tandem}) as a benchmark for our evaluation.
The experiments have been designed according to the following setup. We assume that the state of $\mathtt{serverC}$ is fully observable ---i.e., its state variables $\mathtt{sc}$ and $\mathtt{ph}$ are observable--- as well as the size $\mathtt{c}$ of the queue and the value of $\mathtt{lambda}$. In contrast, we assume that the state of $\mathtt{serverM}$ is not observable.
Each experiment consists in estimating the value of the parameters $\mathtt{mu1a}$, $\mathtt{mu1b}$, $\mathtt{mu2}$, and $\mathtt{kappa}$ from a training set consisting of $100$ observation sequences of length $30$, generated by simulating the \textsc{Prism}\ model depicted in Fig.~\ref{model:tandem}.
We perform this experiment using both timed and non-timed observations, increasing the size $\mathtt{c}$ of the queue until the running time of the estimation exceeds a time-out set to $1$ hour.
We repeat each experiment 10
times by randomly re-sampling the initial values of each unknown parameter $x_i$ in the interval $[0.1,\, 5.0]$. We annotate the running time as well as the relative error
$\delta_i$ for each parameter $x_i$, calculated according to the formula $\delta_i = |e_i - r_i| / |r_i|$, where $e_i$ and $r_i$ are respectively the estimated value and the real value of $x_i$.
\begin{table}[!ht]
\begin{center}
\rowcolors{4}{gray!10}{gray!40}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{$\mathtt{c}$} & \multirow{2}{*}{$|S|$} & \multirow{2}{*}{$|{\to}|$} & \multicolumn{2}{c|}{Running time (s)} & \multicolumn{2}{c|}{$\norm{1}{\delta}$} & \multicolumn{2}{c|}{$\norm{\infty}{\delta}$}\\ \cline{4-9}
& & & Timed & Non-timed & Timed & Non-timed & Timed & Non-timed \\ \hline \hline
4 & 45 & 123 & 4.336 & 15.346 & 0.226 & 0.251 & 0.13 & 0.13\\ \hline
6 & 91 & 269 & 13.219 & 38.661 & 0.399 & 0.509 & 0.173 & 0.329\\ \hline
8 & 153 & 471 & 37.42 & 90.952 & 0.322 & 0.387 & 0.183 & 0.187\\ \hline
10 & 231 & 729 & 76.078 & 170.044 & 0.359 & 0.346 & 0.17 & 0.191\\ \hline
12 & 325 & 1043 & 160.694 & 276.383 & 0.343 & 0.616 & 0.165 & 0.289\\ \hline
14 & 435 & 1413 & 264.978 & 623.057 & 0.373 & 0.263 & 0.195 & 0.117\\ \hline
16 & 561 & 1839 & 458.766 & 774.642 & 0.406 & 0.427 & 0.245 & 0.192\\ \hline
18 & 703 & 2321 & 871.39 & 1134.037 & 0.249 & 0.783 & 0.14 & 0.49\\ \hline
20 & 861 & 2859 & 1425.65 & 1225.539 & 0.416 & 0.987 & 0.281 & 0.519\\ \hline
22 & 1035 & 3453 & 2031.587 & 1297.383 & 0.546 & 1.013 & 0.278 & 0.602\\ \hline
24 & 1225 & 4103 & 2675.794 & 1924.074 & 0.441 & 1.892 & 0.281 & 1.599\\ \hline
\end{tabular}
\end{center}
\caption{Comparison of the performance of the estimation for timed and non-timed observations on the tandem queueing network with different size of the queue.}
\label{tab:tandem-res}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{pictures/tandem_experiment10.png}
\caption{Comparison of the performance of the estimation for timed and non-timed observations on the tandem queueing network with different size of the queue.}
\label{fig:tandem-res}
\end{figure}
Table \ref{tab:tandem-res} reports the results for some selected experiments. The second and third columns provide respectively the number of states and transitions of the parametric CTMC resulting from the choice of $\mathtt{c}$; the fourth column reports the average running time; while the fifth (resp.\ sixth) column details the average $L_1$-norm (resp. $L_\infty$-norm) of the vector $\delta = (\delta_i)$, calculated as $\norm{1}{\delta} = \sum_{i} |\delta_i|$ (resp. $\norm{\infty}{\delta} = \max_i |\delta_i|$).
Fig.~\ref{fig:tandem-res} reports the results of all the experiments in a graphical format where measurements are presented together with their respective error bars.
We observe that the running time is quadratic in the number of states (equivalently, linear in the size $|S|+|{\to}|$ of the model) both for timed and non-timed observations. However, for non-timed observations, the variance of the measured running times tends to grow with the size of the model. In this respect, we observed that large models required more iterations than small models to converge. Nevertheless, all experiments required at most 20 iterations.
As one may expect, the variance of the measured relative errors is larger on the experiments performed with non-timed observations, and the quality of the estimation is better when employing timed observations.
Notably, for timed observations, the quality of the estimation remained stable even though the size of the model increased relative to the size of the training set. This may be explained by the fact that the parameters occur in many transitions.
\section{Case Study: SIR modeling of pandemic} \label{sec:case-study}
In this section, we take as a case study the modeling pipeline proposed by Milazzo~\cite{Milazzo21} for the analysis and simulation in \textsc{Prism}\ of the spread of COVID-19 in presence of lockdown countermeasures.
The modeling pipeline includes:
\begin{inlineenum}
\item parameter estimation from real data based on a modified SIR model described by means of a system of Ordinary Differential Equations;
\item \label{item:translation} translation of the modified SIR model into a CTMC expressed as a \textsc{Prism}\ model; and
\item \label{item:analaysis} stochastic simulation and model checking with \textsc{Prism}.
\end{inlineenum}
In particular, the \textsc{Prism}\ model devised in step~(\ref{item:translation}) is exactly the model depicted in Fig.~\ref{fig:bigSIR} (left). However, to perform the analysis, Milazzo had to apply ``a couple of modeling tricks (variable pruning and upper bounds) that allowed state space of the model constructed by \textsc{Prism}\ to be reduced by several orders of magnitude. The introduction of upper bounds to the values of variables actually introduces a small approximation in the model, that is negligible in practically relevant cases''~\cite{Milazzo21}.
We argue that these kinds of modeling tricks are not uncommon in formal verification, but they require the modeler to ensure that the parameter values estimated for the original model are still valid in the approximated one.
In this section, we showcase the use of our algorithm to semi-automate this task. Specifically, we generate two training sets by simulating the SIR model in Fig.~\ref{fig:bigSIR} using \textsc{Prism}\ and, based on that, we re-estimate $\mathtt{beta}$, $\mathtt{gamma}$, and $\mathtt{plock}$ on an approximated version of the model (\emph{cf}.\ Fig.~\ref{model:SIR-initial}) which is amenable to analysis in \textsc{Prism}.
\begin{figure*}[ht]
\input{prismmodels/sir_init_clean.tex}
\caption{Approximated SIR model.}
\label{model:SIR-initial}
\end{figure*}
The first training set represents the spread of the disease without lockdown (i.e., $\mathtt{plock} = 1$), while the second one is obtained by fixing the value of $\mathtt{plock}$ estimated in~\cite{Milazzo21} (i.e., $\mathtt{plock} = 0.472081$). In line with the data set used in~\cite{Milazzo21}, both training sets consist of one timed observation reporting the number of infected individuals for a period of 30 days.
The estimation of the parameters $\mathtt{beta}$, $\mathtt{gamma}$ and $\mathtt{plock}$ is performed on the model depicted in Fig.~\ref{model:SIR-initial}.
As in~\cite{Milazzo21}, we use an approximated version of the original SIR model (\emph{cf}.\ Fig.~\ref{fig:bigSIR}) obtained by employing a few modeling tricks: variable pruning, setting upper bounds on the state variable $\mathtt{i}$, and re-scaling the variable $\mathtt{r}$ to the interval $[0,\mathtt{nb\_r} -1]$.
These modeling tricks have the effect of reducing the state space of the underlying CTMC, thereby speeding up both parameter estimation and the subsequent model analysis.
We perform the estimation in two steps. First, we estimate the values of $\mathtt{beta}$ and $\mathtt{gamma}$ on the first training set with $\mathtt{plock}$ set to $1$.
Then, we estimate the value of $\mathtt{plock}$ on the second training set with $\mathtt{beta}$ and $\mathtt{gamma}$ set to the values estimated in the first step.
Each step was repeated 10 times by randomly re-sampling the initial values of each unknown parameter in the interval $[0,1]$. Table \ref{tab:sir-res} reports the average estimated values and absolute errors relative to each parameter.
The average running time\footnote{Experiments were performed on a Linux machine with an AMD-Ryzen 9 3900X 12-Core processor and 32 GB of RAM.} of each execution of the algorithm was $89.94$ seconds.
\begin{table}[!ht]
\begin{center}
\rowcolors{2}{gray!10}{gray!40}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Parameter} & \textbf{Expected Value} & \textbf{Estimated Value} & \textbf{Absolute Error}\\ \hline \hline
$\mathtt{beta}$ & $0.122128$ & $0.135541$ & $0.013413$\\ \hline
$\mathtt{gamma}$ & $0.127283$ & $0.128495$ & $0.001212$\\ \hline
$\mathtt{plock}$ & $0.472081$ & $0.437500$ & $0.034581$\\ \hline
\end{tabular}
\end{center}
\caption{Parameter estimation on the approximated SIR model.}
\label{tab:sir-res}
\end{table}
Our results confirm Milazzo's claim that the introduction of upper bounds to the values of state variables produces a small approximation in the model.
Notably, we were able to achieve an accurate estimation of all the parameters from training sets consisting of a single partially-observable execution of the original SIR model. As observed in Section~\ref{sec:experiments}, this may be due to the fact that each parameter occurs in many transitions.
This case study demonstrates that our estimation procedure can be effectively used to simplify modeling pipelines that involve successive modifications of the model and the re-estimation of its parameter values.
As with the model checking problem, our technique also requires the modeler to take the size of the model into account.
\section{Conclusion and Future Work} \label{sec:conclusion}
We presented novel methods to estimate parameter values of CTMCs expressed as \textsc{Prism}\ models from timed and non-timed partially-observable executions.
We demonstrated, through a case study, that our solution is a concrete aid in applications involving modeling and analysis, especially when the model under study requires successive adaptations which may lead to approximations that require re-estimation of the parameters of the model.
Notably, all the algorithms presented in this paper were devised following simple optimization principles borrowed from the MM optimization framework.
We suggest that similar techniques can be employed for other modeling languages (e.g., Markov automata~\cite{EisentrautHZ10,EisentrautHZ10a}) and for metric-based approximate minimization~\cite{BacciBLM17,BalleLPPR21}.
An interesting future direction of research consists in extending our techniques to non-deterministic stochastic models by integrating the active learning strategies presented in~\cite{BacciILR21}.
In the second scenario, state labels are observable while state variables and dwell time variables are hidden. In contrast with the first case, the objective function that defines the MLE achieves the same value on all the CTMCs sharing the same embedded Markov chain. Thus, a standard adaptation of the Baum-Welch algorithm to this case would not lead to a procedure able to learn the continuous-time aspects of the observed system. Nevertheless, by making an analogy between the way transitions ``compete'' with each other in race conditions and the Bradley--Terry model of ranking~\cite{BradleyT52}, we successfully extend the solution devised for the first scenario with techniques used by Lange, Hunter, and Yang in \cite{LangeHY00} for finding rank estimates in the Bradley--Terry model.
We provide experimental evidence that, when the model has sufficiently many constant transition rates, our solution effectively converge to the true parameter values of the model by hinging on the rate values that are known in the model. Note that this condition is easily fulfilled when one of the components is fully observable. A typical example is the model a microcontroller component running within a partially observable physical environment. Other examples may arise from website analysis for reviewing a website's performance w.r.t.\ user experience.
We demonstrate the effectiveness of our estimation procedure on a case study taken from~\cite{Milazzo21}: the analysis of the spread of COVID-19 in presence of lockdown countermeasures. In particular, we showcase how our technique can be used to simplify modeling pipelines that involve a number of modifications of the model ---possibly introducing approximations--- and the re-estimation of its parameters.
\paragraph*{Related Work}
In~\cite{GeorgoulasHS18,GeorgoulasHMS14} Georgoulas et al.\ employ probabilistic programming to implement a variant of Bio-PEPA~\cite{CiocchettaH09} called ProPPA. ProPPA is a stochastic process algebra with inference capabilities that allows some rates to be assigned a prior distribution, capturing the modeler’s belief about the likely values of the rates. Using probabilistic inference, the ProPPA model may be combined with the observations to derive updated probability distributions over rates.
Before ProPPA, Geisweiller proposed EMPEPA~\cite{Geisweiller2006}, an EM algorithm that estimates the rate values inside a PEPA model.
A closely related work is~\cite{WeiWT02} where they learn continuous-time hidden Markov models to do performance evaluation. There, observations are regarded as (discrete-time) periodic observations with fixed period $\Delta$. The learning method works in two steps: first, they employ the Baum-Welch algorithm~\cite{Rabiner89} to estimate the transition probability matrix of a hidden Markov model, then they obtain the infinitesimal generator of the CTMC from the learned transition probability matrix. In contrast with~\cite{WeiWT02}, we are able to derive a simpler procedure that directly extends the Baum-Welch algorithm to parametric CTMCs.
In~\cite{SenVA04}, Sen et al.\ present an algorithm based on the state merging paradigm of \textsc{Alergia}~\cite{CarrascoO94} to learn a CTMC from timed observations. In contrast with our work, \cite{SenVA04} does not perform parameter estimation over structured models, but learns an unstructured (transition-labeled) CTMC.
Another related line of research is parameter synthesis of Markov models~\cite{JansenJK22}.
In contrast with our work, parameter synthesis revolves around the problem of finding (some or all) parameter instantiations of the model that satisfy a given logical specification.
\section{Preliminaries and Notation} \label{sec:prelim}
We denote by $\mathbb{R}$, $\mathbb{Q}$, and $\mathbb{N}$ respectively the sets of real, rational, and natural numbers,
and by $\Sigma^n$, $\Sigma^*$ and, $\Sigma^\omega$ respectively the set of words of length $n \in \mathbb{N}$, finite length, and infinite length, built over the finite alphabet $\Sigma$.
We use $\distr{\Omega}$ to denote the set of discrete probability distributions on $\Omega$, i.e., functions $\mu \colon \Omega \to [0,1]$, such that $\mu(X) = 1$, where $\mu(E) = \sum_{x \in E} \mu(x)$ for $E \subseteq X$.
For a proposition $p$, we write $\bool{p}$ for the Iverson bracket of $p$, i.e., $\bool{p} = 1$ if $p$ is true, otherwise $0$.
A labelled continuous-time Markov chain (CTMC) is defined as follows.
\begin{definition} A labelled CTMC is a tuple $\M = (S, R, \pi, \ell)$ where $S$ is a finite set of states, $R \colon S \times S \to \mathbb{R}_{\geq 0}$ is the transition rate function, $\pi \in \distr{S}$ the initial distribution of states, and $\ell \colon S \to L$ is a labelling function which assigns to each state an observable label $\ell(s)$.
\end{definition}
The transition rate function assigns rates $r = R(s, s')$ to each pair of states $s,s' \in S$ which are to be seen as transitions of the form $s \trans{r} s'$. A transition $s \trans{r} s'$ can only occur if $r > 0$. In this case, the probability of this transition to be triggered within $\tau \in \mathbb{R}_{>0}$ time-units is $1 - e^{- r \, \tau}$. When, from a state $s$, there are more than one outgoing transition with positive rate, we are in presence of a \emph{race condition}. In this case, the first transition to be triggered determines which label is observed as well as the next state of the CTMC. According to these dynamics, the time spent in state $s$ before any transition occurs, called \emph{dwell time}, is exponentially distributed with parameter $E(s) = \sum_{s' \in S} R(s,s')$, called \emph{exit-rate} of $s$. A state $s$ is called \emph{absorbing} if $E(s) = 0$, that is, $s$ has no outgoing transition. Accordingly, when the CTMC ends in an absorbing state it will remain in the same state indefinitely.
The probability that the transition $s \trans{r} s'$ is triggered from $s$ is $r / E(s)$ and is independent from the time at which it occurs.
Accordingly, from the CTMC $\M$, we construct a (labelled) discrete-time Markov chain $\mathit{emb}(\M) = (S, P,\pi, \ell)$ with transition probability function $P \colon S \times S \to [0,1]$ defined as
\begin{equation*}
P(s, s') = \begin{cases}
R(s,s') / E(s) & \text{if $E(s) \neq 0$} \\
1 & \text{if $E(s) = 0$ and $s = s'$} \\
0 & \text{otherwise}
\end{cases}
\end{equation*}
\begin{remark}
A CTMC can be equivalently described as a tuple $(S,{\to},s_0,\ell)$ where ${\to} \subseteq S \times \mathbb{R}_{\geq 0} \times S$ is a transition \emph{relation}. The transition rate function $R$ induced by $\to$ is obtained as, $R(s,s') = \sum \{ r \mid s \trans{r} s' \}$ for arbitrary $s,s' \in S$.
\end{remark}
An \emph{infinite path} of a CTMC $\M$ is a sequence $s_0 \tau_0 s_1 \tau_1 s_2 \tau_2 \cdots \in (S \times \mathbb{R}_{>0})^\omega$ where $R(s_i, s_{i+1}) > 0$ for all $i \in \mathbb{N}$. A \emph{finite path} is a sequence $s_0 \tau_0 \cdots s_{k-1} \tau_{k-1} s_k$ where $R(s_i, s_{i+1}) > 0$ and $\tau_i \in \mathbb{R}_{>0}$ for all $i \in \{1, \dots, k-1\}$ and $s_k$ is absorbing. The meaning of a path is that the system started in state $s_0$, where it stayed for time $\tau_0$, then transitioned to state $s_1$ where it stayed for time $\tau_1$, and so on. For a finite path the system eventually reaches an absorbing state $s_k$, where it remains.
We denote by $\mathbf{Path}_\M$ the set of all (infinite and finite) paths of $\M$. The formal definition of the probability space over $\mathbf{Path}_\M$ induced by $\M$ can be given by following the classical cylinder set construction (see e.g., ~\cite{BaierHHK03,KwiatkowskaNP07}).
Finally, we define the random variables $S_i$, $L_i$, and $T_i$ ($i \in \mathbb{N}$) that respectively indicate the $i$-th state, its label, and $i$-th dwell time of a path.
\paragraph*{The MM Algorithm} \label{sec:MMintro}
The MM algorithm is an iterative optimisation method. The acronym MM has a double interpretation: in minimization problems, the first M stands for majorize and the second for minorize; dually, in maximization problems, the first M stands for minorize and the second for maximize.
In this paper we only focus on maximizing an objective function $f(\vec{x})$, hence we tailor
the presentation of the general principles of the MM framework to maximization problems.
The MM algorithm is based on the concept of \emph{surrogate function}. A surrogate function $g(\vec{x} \mid \vec{x}_m)$ is said to \emph{minorize} a function $f(\vec{x})$ at $\vec{x}_m$ if
\begin{align}
&f(\vec{x}_m) = g(\vec{x}_m \mid \vec{x}_m) \,, \label{eq:tangent} \\
&f(\vec{x}) \geq g(\vec{x} \mid \vec{x}_m)\, \quad \text{for all } \vec{x} \neq \vec{x}_m \,. \label{eq:minorization}
\end{align}
In the maximization variant of the MM algorithm, we maximize the surrogate minorizing function $g(\vec{x} \mid \vec{x}_m)$ rather than the actual function $f(\vec{x})$. If $\vec{x}_{m+1}$ denotes the maximizer of the surrogate $g(\vec{x} \mid \vec{x}_m)$, then the next iterate $\vec{x}_{m+1}$ forces $f(\vec{x})$ uphill. Indeed, the inequalities
\begin{equation*}
f(\vec{x}_m) = g(\vec{x}_m \mid \vec{x}_m) \leq g(\vec{x}_{m+1} \mid \vec{x}_m) \leq f(\vec{x}_{m+1})
\end{equation*}
follow directly from the definition of $\vec{x}_{m+1}$ and the axioms \eqref{eq:tangent} and \eqref{eq:minorization}.
The art in devising an MM algorithm revolves around intelligent choices of minorizing functions. This work relies on three inequalities.
The first basic minorization builds upon Jensen's inequality. For $x_i > 0$, $y_i > 0$ ($i =1 \dots n$),
\begin{equation}
\ln \left( \sum_{i = 1}^{n} x_i \right) \geq \sum_{i = 1}^{n} \frac{y_i}{\sum_{j = 1}^n y_j} \ln \left( \frac{\sum_{j = 1}^n y_j}{y_i} x_i\right) \label{eq:basicmin1}
\end{equation}
Note that the above inequality becomes an equality whenever $x_i = y_i$ for all $i =1 \dots n$. Remarkably, the EM algorithm~\cite{Dempster77} is a special case of the MM algorithm which revolves around the above basic minorization when additionally the values $x_i$ and $y_i$ describe a probability distribution, i.e., $\sum_{i = 1}^{n} x_i = 1$ and $\sum_{i = 1}^{n} y_i = 1$.
Our second basic minorization derives from the strict concavity of the logarithm function, which implies for $x,y > 0$ that
\begin{equation}
- \ln x \geq 1 - \ln y - x/y\label{eq:basicmin2}
\end{equation}
with equality if and only if $x = y$. Note that the above inequality restates the supporting hyperplane property of the convex function $- \ln x$.
The third basic minorization~\cite[\S 8.3]{Lange13} derives from the generalized arithmetic-geometric mean inequality
which implies, for positive $x_i$, $y_i$, and $\alpha_i$ and $\alpha = \sum_{i = 1}^{n} \alpha_i$, that
\begin{equation}
- \prod_{i = 1}^{n} x_i^{\alpha_i}
\geq - \left( \prod_{i = 1}^{n} y_i^{\alpha_i} \right) \sum_{i = 1}^n \frac{\alpha_i}{\alpha} \left( \frac{x_i}{y_i}\right)^{\alpha}\,.
\label{eq:basicmin3}
\end{equation}
Note again that equality holds when all $x_i = y_i$.
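As a quick numerical sanity check of the three basic minorizations (illustrative only, on randomly drawn positive vectors):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4
x, y = rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, n)
a = rng.uniform(0.5, 2.0, n)
tol = 1e-12

# Jensen-based minorization of ln(sum_i x_i)
assert np.log(x.sum()) >= np.sum(y / y.sum() * np.log(y.sum() / y * x)) - tol

# supporting-hyperplane minorization of -ln x
assert -np.log(x[0]) >= 1 - np.log(y[0]) - x[0] / y[0] - tol

# arithmetic-geometric mean minorization of -prod_i x_i^{a_i}
alpha = a.sum()
assert -np.prod(x ** a) >= -np.prod(y ** a) * np.sum(a / alpha * (x / y) ** alpha) - tol
\end{verbatim}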
Because piecemeal composition of minorization works well, our derivations apply the above basic minorizations to strategic parts of the objective function, leaving other parts untouched.
Finally, another aspect that can simplify the derivation of MM algorithms comes from the fact that the iterative maximization procedure hinges on finding $\vec{x}_{m+1} = \argmax_{\vec{x}} g(\vec{x} \mid \vec{x}_m)$. Therefore, we can equivalently use any other surrogate function $g'(\vec{x} \mid \vec{x}_m)$ satisfying $\argmax_{\vec{x}} g(\vec{x} \mid \vec{x}_m) = \argmax_{\vec{x}} g'(\vec{x} \mid \vec{x}_m)$.
This is for instance the case when $g(\vec{x} \mid \vec{x}_m)$ and $g'(\vec{x} \mid \vec{x}_m)$ are equal up to some (irrelevant) constant $c$, that is $g(\vec{x} \mid \vec{x}_m) = g'(\vec{x} \mid \vec{x}_m) + c$.
\section{Parametric Continuous-time Markov chains}
As mentioned in the introduction, the \textsc{Prism}\ language offers constructs for the modular design of CTMCs within a uniform framework that represents synchronous and asynchronous module interaction.
For example, consider the \textsc{Prism}\ models depicted in Fig.~\ref{fig:bigSIR}. The behavior of each module is described by a set of commands which take the form
\mbox{$[\prismident{action}] \; \prismident{guard} \; \rightarrow \prismident{rate} \colon \prismident{update}$} representing a set of transitions of the module. The guard is a predicate over the state variables in the model. The update and the rate describe a transition that the module can make if the guard is true.
The command optionally includes an action used to force two or more modules to make transitions simultaneously (i.e., to synchronize). For example, in the left model in Fig.~\ref{fig:bigSIR}, in state $(50, 20, 5)$ (i.e., $s = 50$, $i = 20$, and $r = 5$), the composed model can move to state $(49,21,5)$ by synchronizing over the action $\mathtt{infection}$. The rate of this transition is equal to the product of the individual rates of each module participating in an $\mathtt{infection}$ transition, which in this case amounts to $0.01 \cdot \mathtt{beta} \cdot \mathtt{plock}$.
Commands that do not have an action represent asynchronous transitions that can be taken independently (i.e., asynchronously) from other modules.
By default, all modules are combined following standard parallel composition in the sense of the parallel operator from the Communicating Sequential Processes (CSP) algebra, that is, modules synchronize over all their common actions. The \textsc{Prism}\ language also offers other CSP-based operators to specify the way in which modules are composed in parallel.
Therefore, a parametric representation of a CTMC described by a \textsc{Prism}\ model must employ \emph{transition rate expressions} which are closed under finite sums and finite products: sums deal with commands with overlapping guards and updates, while products account for synchronization.
Let $\vec{x} = (x_1,\dots, x_n)$ be a vector of parameters. We write $\mathcal{E}$ for the set of polynomial maps $f \colon \mathbb{R}_{\geq 0}^n \to \mathbb{R}_{\geq 0}$ of the form $f(\vec{x}) = \sum_{i = 1}^{m} b_i \prod_{j=1}^{n} x_j^{a_{ij}}$,
where $b_i \in \mathbb{R}_{\geq 0}$ and $a_{ij} \in \mathbb{N}$ for $i \in \{ 1, \dots, m \}$ and $j \in \{ 1, \dots, n \}$.
Note that $\mathcal{E}$ is a commutative semiring satisfying the requirements established above for transition rate expressions.
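A rate expression in $\mathcal{E}$ can be represented by its coefficients $b_i$ and exponent matrix $(a_{ij})$ and evaluated as in the following sketch; the array-based encoding is an illustrative assumption.
\begin{verbatim}
import numpy as np

def eval_rate(b, A, x):
    """Evaluate f(x) = sum_i b_i * prod_j x_j**A[i, j], with b_i >= 0 and A[i, j] in N."""
    b, A, x = np.asarray(b, float), np.asarray(A, int), np.asarray(x, float)
    return float(np.sum(b * np.prod(x ** A, axis=1)))

# Example: f(beta, plock) = 0.01 * beta * plock, as in the infection transition
# (the parameter values below are purely illustrative)
print(eval_rate(b=[0.01], A=[[1, 1]], x=[0.05, 0.47]))
\end{verbatim}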
We are now ready to introduce the notion of \emph{parametric} continuous-time Markov chain.
\begin{definition}
A parametric CTMC is a tuple $\mathcal{P} = (S,R,s_0, \ell)$ where $S$, $s_0$, and $\ell$ are defined as for CTMCs, and $R \colon S \times S \to \mathcal{E}$ is a parametric transition rate function.
\end{definition}
Intuitively, a parametric CTMC $\mathcal{P} = (S,R,s_0, \ell)$ defines a family of CTMCs arising by plugging in concrete values for the parameters $\vec{x}$. Given a parameter evaluation $\vec{v} \in \mathbb{R}_{\geq 0}^n$, we denote by $\mathcal{P}(\vec{v})$ the CTMC associated with $\vec{v}$, and by $R(\vec{v})$ its transition rate function. Note that by construction $R(\vec{v})(s,s') \geq 0$ for all $s,s' \in S$, therefore $\mathcal{P}(\vec{v})$ is a proper CTMC.
As for CTMCs, parametric transition rate functions can be equivalently described by means of a transition relation ${\to} \subseteq S \times \mathcal{E} \times S$, where the parametric transition rate from $s$ to $s'$ is $R(s,s')(\vec{x}) = \sum \{f(\vec{x}) \mid s \trans{f} s' \}$.
\begin{example}
Consider the SIR model in Fig.~\ref{fig:bigSIR} with parameters $\mathtt{beta}$, $\mathtt{gamma}$, and $\mathtt{plock}$. The semantics of this model is a parametric CTMC with states $S=\{(s,i,r) \mid s,i,r \in \{0, \dots, 10^5 \} \}$ and initial state $(99936, 48, 16)$.
For example, the initial state has two outgoing transitions: one that goes to $(99935, 49, 16)$ with rate $48.96815 \cdot \mathtt{beta} \cdot \mathtt{plock}$, and the other that goes to $(99935,48,17)$ with rate $49 \cdot \mathtt{gamma} \cdot \mathtt{plock}$. \qed
\end{example}
One relevant aspect of the class of parametric CTMCs is the fact that it is closed under parallel composition in the sense described above. As a consequence, the study of parameter estimation of \textsc{Prism}\ models from observed data can be conveniently addressed as maximum likelihood estimation for parametric CTMCs.
\section{Learning Parameters from Observed Sample Data} \label{sec:learnCTMC}
In this section we present two algorithms to estimate the parameters of a parametric CTMC $\mathcal{P}$ from a collection of i.i.d.\ observation sequences $\O = \vec{o}_1, \dots, \vec{o}_J$.
The two algorithms consider two different types of observations: timed and non-timed. A \emph{timed observation} $\ell_{0:k},\tau_{0:k-1}$ is a finite sequence $\ell_0 \tau_0 \cdots \tau_{k-1} \ell_k$ representing consecutive dwell times and state labels observed during a random execution of $\M$. Similarly, a \emph{non-timed observation} $\ell_{0:k}$ represents a sequence of consecutive state labels observed during a random execution of $\M$.
Both algorithms follow a maximum likelihood approach: the parameters $\vec{x}$ are estimated to maximize the joint likelihood $\mathcal{L}(\mathcal{P}(\vec{x}) | \O)$ of the observed data.
When $\mathcal{P}$ and $\O$ are clear from the context, we simply write $\mathcal{L}(\vec{x})$ for the joint likelihood.
Hereafter we present a solution to the maximum likelihood estimation problem building on an optimization framework known as the MM algorithm~\cite{Lange13,Lange16}.
Along these lines, our algorithms start with an initial hypothesis $\hypX{0}$ and iteratively improve the current hypothesis $\hypX{m}$, in the sense that the likelihood associated with the next hypothesis $\hypX{m+1}$ enjoys the inequality $\mathcal{L}(\hypX{m}) \leq \mathcal{L}(\hypX{m+1})$. The procedure terminates when the improvement does not exceed a fixed threshold $\epsilon$, namely when $\mathcal{L}(\hypX{m}) - \mathcal{L}(\hypX{m-1}) \leq \epsilon$.
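Schematically, the iteration can be sketched as follows, where \texttt{likelihood} and \texttt{maximize\_surrogate} are placeholders for the problem-specific computations described in the next subsections.
\begin{verbatim}
def mm_estimate(likelihood, maximize_surrogate, x0, eps=1e-4, max_iter=500):
    """Generic MM loop: improve the hypothesis until the likelihood
    improvement does not exceed eps (helper signatures are placeholders)."""
    x, lik = x0, likelihood(x0)
    for _ in range(max_iter):
        x_next = maximize_surrogate(x)   # argmax of the surrogate g(. | x) or h(. | x)
        lik_next = likelihood(x_next)
        if lik_next - lik <= eps:        # stopping criterion from the text
            return x_next
        x, lik = x_next, lik_next
    return x
\end{verbatim}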
\subsection{Learning from Timed Observations} \label{sec:learntimed}
Assume we have $J$ i.i.d.\ timed observation sequences $\O = \vec{o}_1, \dots, \vec{o}_J$ where $\vec{o}_j = \ell^j_{0:k_j}, \tau^j_{0:k_j-1}$ ($j=1\dots J$). We want to estimate a valuation of the parameters $\vec{x}$ of $\mathcal{P}$ that maximizes the joint likelihood function $\mathcal{L}(\vec{x}) = \prod_{j=1}^{J} l(\vec{o}_j | \mathcal{P}(\vec{x}))$ where the likelihood of an observation $\vec{o} = \ell_{0:k}, \tau_{0:k-1}$ for a generic CTMC $\M$ is
\begin{align}
&l (\vec{o} | \M) = \textstyle \sum_{s_{0:k}} l(S_{0:k} = s_{0:k}, L_{0:k} = \ell_{0:k}, T_{0:k-1} = \tau_{0:k-1} | \M) \notag \\
&\quad = \textstyle \sum_{s_{0:k}} P[S_{0:k} = s_{0:k}, L_{0:k} = \ell_{0:k} | \M] \cdot l(S_{0:k} = s_{0:k}, T_{0:k-1} = \tau_{0:k-1} | \M) \notag\\
&\quad = \textstyle \sum_{s_{0:k}} \bool{\ell(s_{0:k}) {=} \ell_{0:k}} \big( \prod_{i = 0}^{k-1} R(s_{i}, s_{i+1}) / E(s_{i}) \big) \, \big(\prod_{i = 0}^{k-1}E(s_{i}) \, e^{- E(s_{i}) \tau_{i}} \big) \notag\\
& \quad = \textstyle \sum_{s_{0:k}} \bool{\ell(s_{0:k}) = \ell_{0:k}} \prod_{i = 0}^{k-1} R(s_{i}, s_{i+1}) \cdot e^{- E(s_{i}) \tau_{i}} \,.\label{eq:LTObs2}
\end{align}
Before presenting an MM algorithm to solve the MLE problem above, we find it convenient to introduce some notation. Let $\mathcal{P} = (S,\to, s_0, \ell)$, we write $f_\rho$ for the rate map of the transition $\rho \in {\to}$, and write $s \to \cdot$ for the set of transitions departing from $s \in S$.
Without loss of generality, we assume that the rate function $f_\rho$ of a transition is either a constant map, i.e., $f_\rho(\vec{x}) = c_\rho$ for some $c_\rho \geq 0$, or a map of the form $f_\rho(\vec{x}) = c_\rho \prod_{i = 1}^{n} x_i^{a_{\rho i}}$ for some $c_\rho > 0$ and $a_{\rho i} > 0$ for some $i \in \{1,\dots, n\}$; we write $a_\rho$ for $\sum_{i = 1}^n a_{\rho i}$. We denote by $\trans{c}$ the subset of transitions with constant rate function and by $\trans{\vec{x}}$ the remaining transitions.
To maximize $\mathcal{L}(\vec{x})$ we propose to employ an MM algorithm based on the following surrogate function $g(\vec{x} | \hypX{m}) = \sum_{i = 1}^n g(x_i | \hypX{m})$ where
\begin{equation}
g(x_i | \hypX{m}) =
\sum_{\rho \in {\trans{\vec{x}}}} \xi_{\rho} a_{\rho i} \ln x_i
- \sum_{s} \sum_{\rho \in s \trans{\vec{x}} \cdot} \frac{f_\rho(\hypX{m}) a_{\rho i} \gamma_s}{a_\rho (x_{m i})^{a_\rho}} x_i^{a_\rho} \label{eq:surrogate-timed}
\end{equation}
Here $\gamma_s = \sum_{j = 1}^{J} \sum_{t = 0}^{k_j-1} \gamma^j_s(t) \tau^j_{t}$ and $\xi_\rho = \sum_{j = 1}^{J} \sum_{t = 0}^{k_j-1} \xi^j_{\rho}(t)$, where $\gamma^j_{s}(t)$ denotes the likelihood that, having observed $\vec{o}_j$ on a random execution of $\mathcal{P}(\hypX{m})$, the state $S_t$ is $s$,
and $\xi^j_{\rho}(t)$ is the likelihood that, on such a random execution, the transition performed from state $S_t$ is $\rho$.
The following theorem states that the surrogate function $g(\vec{x} | \hypX{m})$ is a minorizer of the log-likelihood relative to the observed dataset $\O$.
\begin{theorem} \label{thm:gMinorizes}
The surrogate function $g(\vec{x} | \hypX{m})$ minorizes $\ln \mathcal{L}(\vec{x})$ at $\hypX{m}$ up to an irrelevant constant.
\end{theorem}
By Theorem~\ref{thm:gMinorizes} and the fact that the logarithm is an increasing function, we obtain that the parameter valuation that achieves the maximum of $g(\vec{x} | \hypX{m})$ improves the current hypothesis $\hypX{m}$ relative to the likelihood function $\mathcal{L}(\vec{x})$.
\begin{corollary}
Let $\hypX{m+1} =\argmax_{\vec{x}} g(\vec{x} | \hypX{m})$, then $\mathcal{L}(\hypX{m}) \leq \mathcal{L}(\hypX{m+1})$.
\end{corollary}
The surrogate function $g(\vec{x} | \hypX{m})$ is easier to maximize than $\mathcal{L}(\vec{x})$ because its parameters are separated. Indeed, maximization of $g(\vec{x} | \hypX{m})$ is done by point-wise maximization of each univariate function $g(x_i | \hypX{m})$.
This has two main advantages: first, it is easier to handle high-dimensional problems~\cite{Lange13,Lange16}; second, if one chooses to fix the value of some of the parameters, say those with indices in $I \subset \{1,\dots, n\}$, then the maximization of $g(\vec{x} | \hypX{m})$ can be performed by maximizing $g(x_i | \hypX{m})$ for each $i \notin I$.
The maxima of $g(x_i | \hypX{m})$ are found among the \emph{non-negative} roots\footnote{Note that $P_i$ always admits non-negative roots.
Indeed, $P_i(0) \leq 0$ and $P_i(M) > 0$ for $M > 0$ sufficiently large. Therefore, by the intermediate value theorem, there exists $y_0 \in [0,M)$ such that $P_i(y_0) = 0$.} of the polynomial function $P_i \colon \mathbb{R} \to \mathbb{R}$
\begin{equation}
P_i(y) = \sum_{s} \sum_{\rho \in s \trans{\vec{x}}} \frac{f_\rho(\hypX{m}) a_{\rho i} \gamma_s}{(x_{m i})^{a_\rho} } y^{a_\rho} - \sum_{\rho \in {\trans{\vec{x}}}} \xi_{\rho} a_{\rho i}
\label{updateTimed}
\end{equation}
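Since $P_i$ is a univariate polynomial in $y$, its non-negative roots can be found with standard numerical routines; the sketch below assumes the coefficients have been collected per exponent $a_\rho$ and is illustrative only.
\begin{verbatim}
import numpy as np

def nonneg_roots(coeffs_by_degree):
    """Real non-negative roots of P(y) = sum_d coeffs_by_degree[d] * y**d.

    `coeffs_by_degree` maps degree -> coefficient; np.roots expects the
    coefficients from the highest degree down to the constant term.
    """
    deg = max(coeffs_by_degree)
    c = [coeffs_by_degree.get(d, 0.0) for d in range(deg, -1, -1)]
    roots = np.roots(c)
    return [r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= -1e-12]

# Example: P(y) = 2*y**2 - 8 has the single non-negative root y = 2
print(nonneg_roots({2: 2.0, 0: -8.0}))
\end{verbatim}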
\begin{remark}
\label{rm:category1}
There are some cases when \eqref{updateTimed} admits a closed-form solution. For instance, when the parameter index $i$ satisfies the property $\forall \rho \in {\trans{\vec{x}}} .\,a_{\rho i} > 0 \implies a_{\rho} = C$ for some constant $C \in \mathbb{N}$, then maximization of $g(x_i | \hypX{m})$ leads to the following update
\begin{equation*}
x_{(m+1) i} = \left[ \frac{ (x_{m i})^C \sum_{\rho \in {\trans{\vec{x}}}} \xi_{\rho} a_{\rho i} }{ \sum_{s} \sum_{\rho \in s \trans{\vec{x}}} f_\rho(\hypX{m}) a_{\rho i} \gamma_s } \right]^{1/C}
\end{equation*}
A classic situation in which the above condition is fulfilled occurs when, for all transitions $\rho$ in which $x_i$ appears (i.e., $a_{\rho i} > 0$), the transition rate is $f_\rho(\vec{x}) = c_\rho x_i$ (i.e., $a_{\rho i} = a_\rho = 1$). In that case, the above equation simplifies to
\begin{equation*}
x_{(m+1) i} = \frac{ \sum_{\rho \in {\trans{\vec{x}}}} \xi_{\rho} }{ \sum_{s} \sum_{\rho \in s \trans{\vec{x}}} c_\rho \gamma_s }
\end{equation*}
For example, the parametric CTMC associated with the SIR models in Fig.~\ref{fig:bigSIR} satisfies the former property for all parameters, because all transition rates are expressions either of the form $c \cdot \mathtt{plock} \cdot \mathtt{beta}$ or the form $c \cdot \mathtt{plock} \cdot \mathtt{gamma}$ for some constant $c > 0$. Furthermore, if we fix the value of the parameter $\mathtt{plock}$ the remaining parameters satisfy the latter property.
In Section~\ref{sec:case-study}, we will take advantage of this fact for our calculations. \qed
\end{remark}
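For concreteness, the closed-form update above can be sketched as follows; the argument names are illustrative placeholders for the sums appearing in the formula.
\begin{verbatim}
def update_param(C, x_cur, xi_terms, denom_terms):
    """Closed-form MM update for a parameter x_i when every transition
    containing x_i has total degree a_rho = C (argument names illustrative).

    xi_terms    : values xi_rho * a_rho_i over transitions containing x_i
    denom_terms : values f_rho(x_m) * a_rho_i * gamma_s over states and transitions
    """
    return ((x_cur ** C) * sum(xi_terms) / sum(denom_terms)) ** (1.0 / C)

# With C = 1 and rates of the form c_rho * x_i, this reduces to
#   sum(xi_rho) / sum(c_rho * gamma_s), as in the second equation of the remark.
\end{verbatim}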
To complete the picture, we show how to compute the coefficients $\gamma^j_s(t)$ and $\xi^j_{\rho}(t)$. To this end, we employ standard forward and backward procedures.
We define the forward function $\alpha_s^j(t)$ and the backward function $\beta_s^j(t)$ respectively as
\begin{align*}
\alpha_s^j(t) &= l(L_{0: t} = \ell^{j}_{0: t},T_{0: t} = \tau^{j}_{0: t}, S_{t} = s | \mathcal{P}(\hypX{m})) \, \text{, and} \\
\beta_s^j(t) &= l(L_{t+1:k_j} = \ell^{j}_{t+1 :k_j}, T_{t+1:k_j-1} = \tau^{j}_{t+1:k_j-1} | S_{t} = s , \mathcal{P}(\hypX{m}) ) \, .
\end{align*}
These can be computed using dynamic programming according to the following recurrences: let $\mathcal{P}(\hypX{m}) = (S,R,s_0, \ell)$, then
\begin{align}
\alpha_s^j(t) &= \begin{cases}
\bool{s = s_0} \, \omega^j_s(t) &\text{if $ t=0$} \\
\omega^j_{s}(t) \sum_{s' \in S} \frac{R(s', s)}{E(s')} \, \alpha_{s'}^j(t-1) &\text{if $0 < t \leq k_j$}
\end{cases}
\\
\beta_s^j(t) &= \begin{cases}
1 &\text{if $ t= k_j$} \\
\sum_{s' \in S} \frac{R(s, s')}{E(s)} \, \beta_{s'}^j(t+1) \, \omega^j_{s'}(t+1) &\text{if $0 \leq t < k_j$}
\end{cases}
\end{align}
where
\begin{equation}
\omega^j_s(t) =
\begin{cases}
\bool{\ell(s) = \ell^j_t} E(s) e^{- E(s) \tau^j_{t}}
&\text{if $0 \leq t < k_j$,} \\
\bool{\ell(s) = \ell^j_t} &\text{if $t = k_j$.}
\end{cases}
\label{eq:omegaobs}
\end{equation}
Finally, for $s \in S$ and $\rho = (s \trans{f_\rho} s')$, $\gamma^j_{s}(t)$ and $\xi^j_{\rho}(t)$ are related to the forward and backward functions as follows
\begin{align}
\gamma^j_{s}(t) = \frac{\alpha_s^j(t) \, \beta_s^j(t) }{ \sum_{s' \in S} \alpha_{s'}^j(t) \, \beta_{s'}^j(t)} \, ,
&&
\xi^j_{\rho}(t) = \frac{ \alpha_s^j(t) f_\rho(\hypX{m}) \, \omega^j_{s'}(t+1) \,\beta_{s'}^j(t+1) }{ E(s) \sum_{s'' \in S} \alpha_{s''}^j(t) \, \beta_{s''}^j(t)} \,. \label{eq:defGammaXi}
\end{align}
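The forward recurrence for a single timed observation amounts to a few matrix-vector products; the sketch below assumes the instantiated CTMC $\mathcal{P}(\hypX{m})$ is given as a dense rate matrix (the backward recurrence is analogous and omitted).
\begin{verbatim}
import numpy as np

def forward_timed(R, state_labels, init, obs_labels, obs_dwell):
    """Forward function alpha_s(t) for one timed observation.

    R           : (n, n) rate matrix of the instantiated CTMC P(x_m)
    state_labels: length-n array of state labels
    init        : index of the initial state
    obs_labels  : observed labels  l_0 ... l_k
    obs_dwell   : observed dwell times  tau_0 ... tau_{k-1}
    """
    R = np.asarray(R, dtype=float)
    state_labels = np.asarray(state_labels)
    E = R.sum(axis=1)
    n, k = R.shape[0], len(obs_labels) - 1
    # embedded transition probabilities R(s', s) / E(s')
    P = np.divide(R, E[:, None], out=np.zeros_like(R), where=E[:, None] > 0)

    def omega(t):
        match = (state_labels == obs_labels[t]).astype(float)
        return match * E * np.exp(-E * obs_dwell[t]) if t < k else match

    alpha = np.zeros((k + 1, n))
    alpha[0, init] = omega(0)[init]
    for t in range(1, k + 1):
        alpha[t] = omega(t) * (alpha[t - 1] @ P)
    return alpha
\end{verbatim}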
\subsection{Learning from Non-timed Observations} \label{sec:learnnontimed}
Let us now assume we have collected $J$ i.i.d.\ non-timed observation sequences $\O = \vec{o}_1, \dots, \vec{o}_J$ where $\vec{o}_j = \ell^j_{0:k_j}$ ($j=1 \dots J$). As before, we want to maximize the joint likelihood function $\mathcal{L}(\vec{x}) = \prod_{j=1}^{J} l(\vec{o}_j | \mathcal{P}(\vec{x}))$ where the likelihood of an arbitrary non-timed observation $\vec{o} = \ell_{0:k}$ relative to a CTMC $\M$ is
\begin{align}
l(\ell_{0:k} | \M ) &= \textstyle \sum_{s_{0:k}} P[S_{0:k} = s_{0:k}, L_{0:k} = \ell_{0:k} | \M] \label{eq:LObs1} \\
&= \textstyle \sum_{s_{0:k}} \bool{\ell(s_{0:k}) {=} \ell_{0:k}} \prod_{i = 0}^{k-1} R(s_{i}, s_{i+1}) / E(s_{i}) \,. \label{eq:LObs2}
\end{align}
Looking at the formula above, it is clear that whenever two CTMCs $\M_1$ and $\M_2$ have the same embedded Markov chain they will also have the same likelihood value, i.e., $\mathcal{L}(\M_1 | \O) = \mathcal{L}(\M_2 | \O)$. The fact that dwell time variables are not observable leaves us with an MLE objective that does not fully capture the continuous-time aspects of the model under estimation.
A similar problem shows up also in the Bradley--Terry model of ranking~\cite{BradleyT52}. This model is intuitively understood via a sports analogy. Given a set of teams where each team $i$ is assigned a rank parameter $r_i > 0$, and assuming that ties are not possible, team $i$ beats team $j$ with probability $r_i / (r_i + r_j)$. If this outcome occurs $c_{ij}$ times during a tournament, then the probability of the whole tournament is $L(\vec{r}) = \prod_{i,j} (r_i / (r_i + r_j))^{c_{ij}}$, assuming that games are independent of one another. Clearly, $L(\vec{r}) = L(c \,\vec{r})$ for any $c > 0$. Under mild assumptions, the function $L(\vec{r})$ admits a unique maximum when the value of one rank, say $r_1$, is fixed a priori.
Back to our problem, we claim that the race conditions among transitions can be interpreted under the Bradley--Terry model of ranking. As a consequence, when the number of parametric transitions is sufficiently small relative to that of constant transitions, the estimation of the unknown transition rates can hinge on the value of the transition rates that are fixed, leading the algorithm to converge to the real parameter values.
For the non-timed maximum likelihood estimation problem we devise an MM algorithm based on the surrogate function $h(\vec{x} | \hypX{m}) = \sum_{i = 1}^{n} h(x_i | \hypX{m})$ for
\begin{equation}
h(x_i | \hypX{m}) =
\sum_{\rho \in {\trans{\vec{x}}}} \hat\xi_{\rho} a_{\rho i} \ln x_i
- \sum_{s} \sum_{\rho \in s \trans{\vec{x}} \cdot} \frac{f_\rho(\hypX{m}) \, a_{\rho i} \, \hat\gamma_s}{E_{m}(s) \, a_\rho \, x_{m i}^{a_\rho}} x_i^{a_\rho} \label{eq:surrogate-nontimed}
\end{equation}
where $E_{m}(s)$ denotes the exit rate of the state $s$ in $\mathcal{P}(\hypX{m})$, $\hat\gamma_s = \sum_{j = 1}^{J}\sum_{t = 0}^{k_j - 1} \hat\gamma_s^j(t)$, and $\hat\xi_\rho = \sum_{j = 1}^{J}\sum_{t = 0}^{k_j - 1} \hat\xi_\rho^j(t)$.
This time, the coefficients $\hat{\gamma}^j_{s}(t)$ and $\hat{\xi}^j_{\rho}(t)$ denote, respectively, the probability that, having observed $\vec{o}_j$ in a random execution of $\mathcal{P}(\hypX{m})$, the state $S_t$ is $s$, and the probability that the transition performed in state $S_t$ is $\rho$. $\hat{\gamma}^j_{s}(t)$ and $\hat{\xi}^j_{\rho}(t)$ can be computed using the same dynamic programming procedure described in Section~\ref{sec:learntimed} by replacing each occurrence of $\omega^j_s(t)$ with $\hat{\omega}^j_s(t) = \bool{\ell(s) = \ell^j_t}$.
The following theorem states that the surrogate function $h(\vec{x} | \hypX{m})$ is a minorizer of the log-likelihood relative to the observed (non-timed) dataset $\O$.
\begin{theorem} \label{thm:hMinorizes}
The surrogate function $h(\vec{x} | \hypX{m})$ minorizes $\ln \mathcal{L}(\vec{x})$ at $\hypX{m}$ up to an irrelevant constant.
\end{theorem}
\begin{proof}(sketch) To ease the presentation, we assume that the parametric CTMC $\mathcal{P}$ under study has at most one transition between each pair of states.
Starting from the log-likelihood function $\ln \mathcal{L}(\vec{x})$, we proceed with the following minorization steps\footnote{We denote by $f(\vec{x}) \cong f'(\vec{x})$ the fact that $f(\vec{x}) = f'(\vec{x}) + C$ for some (irrelevant) constant $C$.}
\begin{align*}
&\ln \mathcal{L}(\vec{x}) = \sum_{j = 1}^{J} \ln l(\vec{o}_j | \mathcal{P}(\vec{x}))
= \sum_{j = 1}^{J} \ln \sum_{s_{0:k_j}} P[ s_{0:k_j}, \vec{o}_j | \mathcal{P}(\vec{x})] \tag{by \eqref{eq:LObs1}} \\
&\geq \sum_{j = 1}^{J} \sum_{s_{0:k_j}} {P[s_{0:k_j} | \vec{o}_j , \mathcal{P}(\hypX{m})]} \ln \left( \frac{P[ s_{0:k_j}, \vec{o}_j | \mathcal{P}(\vec{x})]}{P[s_{0:k_j} | \vec{o}_j , \mathcal{P}(\hypX{m})]} \right) \tag{by \eqref{eq:basicmin1}}\\
&\cong
\sum_{j = 1}^{J} \sum_{t = 0}^{k_j - 1} \sum_{s_{0:k_j}} P[s_{0:k_j} | \vec{o}_j , \mathcal{P}(\hypX{m})] \big( \ln R(s_{t}, s_{t+1}) - \ln E(s_{t}) \big) \tag{by \eqref{eq:LObs2}} \\
&\cong \sum_{\rho \in {\trans{\vec{x}}}} \hat\xi_{\rho} \ln f_\rho(\vec{x}) + \sum_{s} \hat\gamma_s (- \ln E(s) )\tag{up-to const} \\
&\geq
\sum_{i = 1}^{n} \sum_{\rho \in {\trans{\vec{x}}}} \hat\xi_{\rho} a_{\rho i} \ln x_i + \sum_{s} \hat\gamma_s \left( - \frac{E(s)}{E_{m}(s)} \right) \tag{by \eqref{eq:basicmin2}, up-to const} \\
& \geq \sum_{i = 1}^n \left[
\sum_{\rho \in {\trans{\vec{x}}}} \hat{\xi}_{\rho} a_{\rho i} \ln x_i
- \sum_{s} \sum_{\rho \in s \trans{\vec{x}} \cdot} \frac{\hat{\gamma}_s f_\rho(\hypX{m}) a_{\rho i}}{E_{m}(s) a_\rho x_{m i}^{a_\rho}} x_i^{a_\rho} \right] \tag{$\triangle\triangle$} \label{eq:hatminEsbis} \\
& = h(\vec{x} | \hypX{m}) \tag{by \eqref{eq:surrogate-nontimed}}
\end{align*}
The step \eqref{eq:hatminEsbis} is justified by the minorization of $- E(s)$ obtained via \eqref{eq:basicmin3} as follows:
\begin{equation*}
- E(s)
\cong \sum_{\rho \in s \trans{\vec{x}} \cdot} c_{\rho} \left( - \prod_{i = 1}^{n} x_i^{a_{\rho i}} \right)
\geq \sum_{\rho \in s \trans{\vec{x}}} - f_{\rho}(\hypX{m}) \sum_{i = 1}^{n} \frac{a_{\rho i}}{a_\rho} \left(\frac{x_i}{x_{m i}} \right)^{a_\rho} \,.
\end{equation*}
Hence, there exists a constant $C > 0$ such that $h(\vec{x} | \hypX{m}) + C$ minorizes $\ln \mathcal{L}(\vec{x})$ at $\hypX{m}$.
\end{proof}
Notably, in the proof of Theorem~\ref{thm:hMinorizes} we employ the minorization~\eqref{eq:basicmin2} used in~\cite{LangeHY00} for finding rankings in the Bradley--Terry model.
As an immediate corollary of Theorem~\ref{thm:hMinorizes}, we have that the parameter valuations that maximize $h(\vec{x} | \hypX{m})$ improve the current hypothesis $\hypX{m}$ with respect to the ML objective.
\begin{corollary}
Let $\hypX{m+1} =\argmax_{\vec{x}} h(\vec{x} | \hypX{m})$, then $\mathcal{L}(\hypX{m}) \leq \mathcal{L}(\hypX{m+1})$.
\end{corollary}
As before, maximization of $h(\vec{x} | \hypX{m})$ is achieved by point-wise maximization of $h(x_i | \hypX{m})$. The maxima of $h(x_i | \hypX{m})$ are found among the \emph{non-negative} roots of the polynomial function
\begin{equation}
Q_i(y) =
\sum_{s} \sum_{\rho \in s \trans{\vec{x}} \cdot} \frac{f_\rho(\hypX{m}) a_{\rho i} \hat\gamma_s}{E_{m}(s) (x_{m i})^{a_\rho}} y^{a_\rho}
- \sum_{\rho \in {\trans{\vec{x}}}} \hat\xi_{\rho} a_{\rho i}
\label{updateUntimed}
\end{equation}
By arguments similar to those explained in Remark~\ref{rm:category1}, Equation~\eqref{updateUntimed} may admit a closed-form solution.
\section{Experimental evaluation} \label{sec:experiments}
\begin{figure}[t]
\input{prismmodels/tandem.tex}
\caption{Prism model for the tandem queueing network from \cite{Hermanns99}.}
\label{model:tandem}
\end{figure}
We implemented the algorithms from Section~\ref{sec:learnCTMC} as an extension of the \texttt{Jajapy} Python library~\cite{jajapy}, which has the advantage of being compatible with \textsc{Prism}\ models.
In this section, we present an empirical evaluation of the efficiency of our algorithms as well as the quality of their outcome.
To this end, we use the tandem queueing network model from \cite{Hermanns99} (\emph{cf}.\ Fig.~\ref{model:tandem}) as a benchmark for our evaluation.
The experiments have been designed according to the following setup. We assume that the state of $\mathtt{serverC}$ is fully observable (i.e., its state variables $\mathtt{sc}$ and $\mathtt{ph}$ are observable), as are the size $\mathtt{c}$ of the queue and the value of $\mathtt{lambda}$. In contrast, we assume that the state of $\mathtt{serverM}$ is not observable.
Each experiment consists in estimating the value of the parameters $\mathtt{mu1a}$, $\mathtt{mu1b}$, $\mathtt{mu2}$, and $\mathtt{kappa}$ from a training set consisting of $100$ observation sequences of length $30$, generated by simulating the \textsc{Prism}\ model depicted in Fig.~\ref{model:tandem}.
We perform this experiment both using timed and non-timed observations, by increasing the size $\mathtt{c}$ of the queue until the running time of the estimation exceeds a time-out set to $1$ hour.
We repeat each experiment 10
times by randomly re-sampling the initial values of each unknown parameter $x_i$ in the interval $[0.1,\, 5.0]$. We annotate the running time as well as the relative error
$\delta_i$ for each parameter $x_i$, calculated according to the formula $\delta_i = |e_i - r_i| / |r_i|$, where $e_i$ and $r_i$ are respectively the estimated value and the real value of $x_i$.
\begin{table}[!ht]
\begin{center}
\rowcolors{4}{gray!10}{gray!40}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{$\mathtt{c}$} & \multirow{2}{*}{$|S|$} & \multirow{2}{*}{$|{\to}|$} & \multicolumn{2}{c|}{Running time (s)} & \multicolumn{2}{c|}{$\norm{1}{\delta}$} & \multicolumn{2}{c|}{$\norm{\infty}{\delta}$}\\ \cline{4-9}
& & & Timed & Non-timed & Timed & Non-timed & Timed & Non-timed \\ \hline \hline
4 & 45 & 123 & 4.336 & 15.346 & 0.226 & 0.251 & 0.13 & 0.13\\ \hline
6 & 91 & 269 & 13.219 & 38.661 & 0.399 & 0.509 & 0.173 & 0.329\\ \hline
8 & 153 & 471 & 37.42 & 90.952 & 0.322 & 0.387 & 0.183 & 0.187\\ \hline
10 & 231 & 729 & 76.078 & 170.044 & 0.359 & 0.346 & 0.17 & 0.191\\ \hline
12 & 325 & 1043 & 160.694 & 276.383 & 0.343 & 0.616 & 0.165 & 0.289\\ \hline
14 & 435 & 1413 & 264.978 & 623.057 & 0.373 & 0.263 & 0.195 & 0.117\\ \hline
16 & 561 & 1839 & 458.766 & 774.642 & 0.406 & 0.427 & 0.245 & 0.192\\ \hline
18 & 703 & 2321 & 871.39 & 1134.037 & 0.249 & 0.783 & 0.14 & 0.49\\ \hline
20 & 861 & 2859 & 1425.65 & 1225.539 & 0.416 & 0.987 & 0.281 & 0.519\\ \hline
22 & 1035 & 3453 & 2031.587 & 1297.383 & 0.546 & 1.013 & 0.278 & 0.602\\ \hline
24 & 1225 & 4103 & 2675.794 & 1924.074 & 0.441 & 1.892 & 0.281 & 1.599\\ \hline
\end{tabular}
\end{center}
\caption{Comparison of the performance of the estimation for timed and non-timed observations on the tandem queueing network with different size of the queue.}
\label{tab:tandem-res}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{pictures/tandem_experiment10.png}
\caption{Comparison of the performance of the estimation for timed and non-timed observations on the tandem queueing network with different size of the queue.}
\label{fig:tandem-res}
\end{figure}
Table \ref{tab:tandem-res} reports the results for some selected experiments. The second and third columns provide respectively the number of states and transitions of the parametric CTMC resulting from the choice of $\mathtt{c}$; the remaining columns report, for timed and non-timed observations, the average running time and the average $L_1$-norm and $L_\infty$-norm of the vector $\delta = (\delta_i)$, calculated as $\norm{1}{\delta} = \sum_{i} |\delta_i|$ and $\norm{\infty}{\delta} = \max_i |\delta_i|$.
Fig.~\ref{fig:tandem-res} reports the results of all the experiments in a graphical format where measurements are presented together with their respective error bars.
We observe that the running time is quadratic in the number of states (equivalently, linear in the size $|S|+|{\to}|$ of the model) both for timed and non-timed observations. However, for non-timed observations, the variance of the measured running times tends to grow with the size of the model. In this respect, we observed that large models required more iterations than small models to converge. Nevertheless, all experiments required at most 20 iterations.
As one may expect, the variance of the measured relative errors is larger on the experiments performed with non-timed observations, and the quality of the estimation is better when employing timed observations.
Notably, for timed observations, the quality of the estimation remained stable even though the size of the model increased relative to the size of the training set. This may be explained by the fact that the parameters occur in many transitions.
\section{Case Study: SIR modeling of pandemic} \label{sec:case-study}
In this section, we take as a case study the modeling pipeline proposed by Milazzo~\cite{Milazzo21} for the analysis and simulation in \textsc{Prism}\ of the spread of COVID-19 in presence of lockdown countermeasures.
The modeling pipeline includes:
\begin{inlineenum}
\item parameter estimation from real data based on a modified SIR model described by means of a system of Ordinary Differential Equations;
\item \label{item:translation} translation of the modified SIR model into a CTMC expressed as a \textsc{Prism}\ model; and
\item \label{item:analaysis} stochastic simulation and model checking with \textsc{Prism}.
\end{inlineenum}
In particular, the \textsc{Prism}\ model devised in step~(\ref{item:translation}) is exactly the model depicted in Fig.~\ref{fig:bigSIR} (left). However, to perform the analysis, Milazzo had to apply ``a couple of modeling tricks (variable pruning and upper bounds) that allowed state space of the model constructed by \textsc{Prism}\ to be reduced by several orders of magnitude. The introduction of upper bounds to the values of variables actually introduces a small approximation in the model, that is negligible in practically relevant cases''~\cite{Milazzo21}.
We argue that these kinds of modeling tricks are not uncommon in formal verification, but they require the modeler to ensure that the parameter values estimated for the original model are still valid in the approximated one.
In this section, we showcase the use of our algorithm to semi-automate this task. Specifically, we generate two training sets by simulating the SIR model in Fig.~\ref{fig:bigSIR} using \textsc{Prism}\ and, based on these, we re-estimate $\mathtt{beta}$, $\mathtt{gamma}$, and $\mathtt{plock}$ on an approximated version of the model (\emph{cf}.\ Fig.~\ref{model:SIR-initial}) which is amenable to analysis in \textsc{Prism}.
\begin{figure*}[ht]
\input{prismmodels/sir_init_clean.tex}
\caption{Approximated SIR model.}
\label{model:SIR-initial}
\end{figure*}
The first training set represents the spread of the disease without lockdown (i.e., $\mathtt{plock} = 1$), while the second one is obtained by fixing the value of $\mathtt{plock}$ estimated in~\cite{Milazzo21} (i.e., $\mathtt{plock} = 0.472081$). In line with the data set used in~\cite{Milazzo21}, both training sets consist of one timed observation reporting the number of infected individuals for a period of 30 days.
The estimation of the parameters $\mathtt{beta}$, $\mathtt{gamma}$ and $\mathtt{plock}$ is performed on the model depicted in Fig.~\ref{model:SIR-initial}.
As in~\cite{Milazzo21}, we use an approximated version of the original SIR model (\emph{cf}.\ Fig.~\ref{fig:bigSIR}) obtained by employing a few modeling tricks: variable pruning, setting upper bounds on the state variable $\mathtt{i}$, and re-scaling the variable $\mathtt{r}$ to the interval $[0,\mathtt{nb\_r} -1]$.
These modeling tricks have the effect of reducing the state space of the underlying CTMC, thereby speeding up parameter estimation and the subsequent model analysis.
We perform the estimation in two steps. First, we estimate the values of $\mathtt{beta}$ and $\mathtt{gamma}$ on the first training set with $\mathtt{plock}$ set to $1$.
Then, we estimate the value of $\mathtt{plock}$ on the second training set with $\mathtt{beta}$ and $\mathtt{gamma}$ set to the values estimated in the first step.
Each step was repeated 10 times by randomly re-sampling the initial values of each unknown parameter in the interval $[0,1]$. Table \ref{tab:sir-res} reports the average estimated values and absolute errors relative to each parameter.
The average running time\footnote{Experiments were performed on a Linux machine with an AMD-Ryzen 9 3900X 12-Core processor and 32 GB of RAM.} of each execution of the algorithm was $89.94$ seconds.
\begin{table}[!ht]
\begin{center}
\rowcolors{2}{gray!10}{gray!40}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Parameter} & \textbf{Expected Value} & \textbf{Estimated Value} & \textbf{Absolute Error}\\ \hline \hline
$\mathtt{beta}$ & $0.122128$ & $0.135541$ & $0.013413$\\ \hline
$\mathtt{gamma}$ & $0.127283$ & $0.128495$ & $0.001212$\\ \hline
$\mathtt{plock}$ & $0.472081$ & $0.437500$ & $0.034581$\\ \hline
\end{tabular}
\end{center}
\caption{Parameter estimation on the approximated SIR model.}
\label{tab:sir-res}
\end{table}
Our results confirm Milazzo's claim that the introduction of upper bounds to the values of state variables produces a small approximation in the model.
Notably, we were able to achieve an accurate estimation of all the parameters from training sets consisting of a single partially-observable execution of the original SIR model. As observed in Section~\ref{sec:experiments}, this may be due to the fact that each parameter occurs in many transitions.
This case study demonstrates that our estimation procedure can be effectively used to simplify modeling pipelines that involve successive modifications of the model and the re-estimation of its parameter values.
As with the model checking problem, our technique also requires the modeler to take the size of the model into account.
\section{Conclusion and Future Work} \label{sec:conclusion}
We presented novel methods to estimate the parameter values of CTMCs expressed as \textsc{Prism}\ models from timed and non-timed partially-observable executions.
We demonstrated, through a case study, that our solution is a concrete aid in applications involving modeling and analysis, especially when the model under study requires successive adaptations which may lead to approximations that require re-estimation of the parameters of the model.
Notably, all the algorithms presented in this paper were devised following simple optimization principles borrowed from the MM optimization framework.
We suggest that similar techniques can be applied to other modeling languages (e.g., Markov automata~\cite{EisentrautHZ10,EisentrautHZ10a}) and to metric-based approximate minimization~\cite{BacciBLM17,BalleLPPR21}.
An interesting future direction of research consists in extending our techniques to non-deterministic stochastic models by integrating the active learning strategies presented in~\cite{BacciILR21}.
|
{
"arxiv_id": "2302.08639",
"language": "en",
"timestamp": "2023-03-02T02:04:48",
"url": "https://arxiv.org/abs/2302.08639",
"yymm": "2302"
} | \section{Introduction}
In the past few years, we have seen the fast development of deep neural network (DNN) based speaker verification (SV) systems. Various models for speaker verification have been proposed with different DNN architectures~\cite{snyder2018x, zeinali2019but, desplanques2020ecapa}, novel loss functions~\cite{wan2018generalized, chung2020defence, wang2018cosface}, and frameworks for overcoming domain mismatch~\cite{sang2020open, bhattacharya2019generative, sang2021deaan}. In~\cite{inoue2020semi, sang2022self, chen22g_interspeech}, researchers further studied semi-supervised and self-supervised SV systems using partially labeled or unlabeled data.
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[1]{Work performed while Mufan Sang was an intern at Microsoft.}
For speaker embedding extraction, the x-vector~\cite{snyder2018x} has proven successful by utilizing the 1D convolutional neural network (CNN) based time delay neural network (TDNN). Moreover, 2D CNNs (i.e., ResNet-based architectures) have also been successfully adopted for the SV task and obtain remarkable performance~\cite{cai2018exploring, chung2020defence}. ECAPA-TDNN~\cite{desplanques2020ecapa} was proposed to further enhance the TDNN-based architecture and achieved performance competitive with ResNet. These predominant CNN-based models benefit from a strong ability to
capture local speaker patterns from speech features. To further improve the performance of CNN-based speaker embedding networks, attention mechanisms have been integrated into the speaker embedding extractor~\cite{zhou2019deep, sang22_interspeech} or the pooling layer~\cite{cai2018exploring,okabe2018attentive}. Convolution layers allow CNNs to model local dependencies well, but they lack a mechanism to capture speaker information globally. There have been attempts to use Transformers to replace CNNs for speaker embedding extraction~\cite{mary2021s, safari2020self, wang2022multi}. However, without large-scale pre-training, Transformer-based speaker embedding networks can hardly achieve performance competitive with CNNs for speaker verification. This is primarily due to the lack of certain desirable properties inherently built into the CNN architecture, such as locality.
In this study, we aim to enhance Transformers in capturing global and local context collectively. We introduce locality mechanisms to the Transformer in two directions. First, we propose the Locality-Enhanced Conformer (LE-Conformer) by incorporating depth-wise convolution and channel-wise attention into the feed-forward network (FFN) of the Conformer block~\cite{gulati2020conformer}. We investigate an effective way to aggregate the output features from all the LE-Conformer blocks to improve the frame-level speaker embedding. Second, we present the Speaker Swin Transformer (SST) by employing a hierarchical Transformer architecture with shifted local window self-attention, inspired by the Swin Transformer~\cite{liu2021swin}.
Experimental results on the VoxCeleb datasets demonstrate that the proposed LE-Conformer and SST significantly outperform previously proposed Transformer-based models and the ResNet and ECAPA-TDNN baseline systems. Moreover, when trained on a larger-scale MS-internal multilingual dataset, the proposed systems outperform Res2Net50 by a large margin, producing more robust and competitive speaker embeddings. The primary contributions of this paper can be summarized as follows: (1) We propose an effective locality mechanism for the Conformer to enhance its ability of local information aggregation. (2) We propose the Speaker Swin Transformer, which generates multi-scale output feature maps with shifted local window self-attention. (3) We conduct comprehensive experiments to demonstrate the effectiveness of the proposed Transformer-based networks with locality mechanisms.
\begin{figure*}[th]
\centering
\scalebox{0.95}
{
\includegraphics[width=18.0cm,height=6.6cm]{./image/Conformer_finalV2_2.pdf}
}
\vspace{-2.0mm}
\caption{The architecture of Locality-Enhanced Conformer for speaker embedding.}
\label{fig:system}
\end{figure*}
\vspace{-0.5ex}
\section{Locality-Enhanced Conformer}
Self-attention is the key component of the Transformer. It enables the Transformer to have a strong ability to model the global interaction between speech frames. However, global self-attention does not have sufficient ability to capture local information, which is essential for speaker embedding. Therefore, we introduce locality mechanisms to enhance the Transformer in modeling local dependencies. In this section, we present the Locality-Enhanced Conformer (LE-Conformer), which is built upon the Conformer~\cite{gulati2020conformer} architecture. We incorporate additional convolution, channel-wise attention, and intermediate output feature map aggregation into the Conformer network. The system architecture is illustrated in Fig. \ref{fig:system}(a). It comprises (i) a VGG front-end to subsample the input feature, (ii) a number of locality-enhanced Conformer blocks to extract frame-level speaker embeddings, (iii) a pooling layer to generate the utterance-level embedding, and (iv) a linear layer to extract the final speaker embedding.
\subsection{Conformer Encoder Block}
As a state-of-the-art model in ASR, the Conformer~\cite{gulati2020conformer} combines convolution and self-attention in the Transformer block to enhance its capability of capturing local information. The architecture of the Conformer block is shown on the left side of Fig. \ref{fig:system}(b). It consists of a multi-head self-attention (MSA) module and a convolution module sandwiched between two Macaron-style feed-forward networks (FFNs). Taking $\boldsymbol{z_{i}}$ as the input to the $i$-th Conformer block, the output of this block $\boldsymbol{z_{i+1}}$ is computed as
\begin{equation}
\begin{aligned}
\tilde{\boldsymbol{z}_i} &=\boldsymbol{z_i}+\frac{1}{2} \mathrm{FFN}\left(\boldsymbol{z_i}\right) \\
\boldsymbol{z_i}^{\prime} &=\tilde{\boldsymbol{z_i}}+\operatorname{MSA}\left(\tilde{\boldsymbol{z_i}}\right) \\
\boldsymbol{z_i}^{\prime \prime} &=\boldsymbol{z_i}^{\prime}+\operatorname{Conv}\left(\boldsymbol{z_i}^{\prime}\right) \\
\boldsymbol{z_{i+1}} &=\operatorname{LayerNorm}\left(\boldsymbol{z_i}^{\prime \prime}+\frac{1}{2} \mathrm{FFN}\left(\boldsymbol{z_i}^{\prime \prime}\right)\right)
\end{aligned}
\end{equation}
where FFN denotes the feed-forward network, MSA denotes the multi-head self-attention, and Conv denotes the convolution module.
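A minimal sketch of this computation follows; the sub-modules are passed in as black boxes, and this is an illustration rather than the exact implementation used in our experiments.
\begin{verbatim}
import torch.nn as nn

class ConformerBlock(nn.Module):
    """Macaron-style Conformer block: 1/2 FFN -> MSA -> Conv -> 1/2 FFN -> LN."""

    def __init__(self, ffn1, mhsa, conv, ffn2, dim):
        super().__init__()
        self.ffn1, self.mhsa, self.conv, self.ffn2 = ffn1, mhsa, conv, ffn2
        self.norm = nn.LayerNorm(dim)

    def forward(self, z):
        z = z + 0.5 * self.ffn1(z)        # first half-step feed-forward
        z = z + self.mhsa(z)              # multi-head self-attention
        z = z + self.conv(z)              # convolution module
        return self.norm(z + 0.5 * self.ffn2(z))
\end{verbatim}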
\subsection{Introducing Locality and Channel-wise Attention}
In each block, adding a convolution module after the MSA helps the Conformer capture local dependencies. In the FFN module, the hidden dimension of the latent feature is expanded between the two linear layers. We consider that a depth-wise convolution can act as a filter for this latent representation and introduce additional locality to the Transformer. To achieve this goal, we propose an effective strategy to integrate depth-wise convolution and a channel-wise attention mechanism into the feed-forward network.
As shown on the right side of Fig. \ref{fig:system}(b), a LayerNorm and a 1D depth-wise convolution layer are added after the first linear layer of the FFN. The depth-wise convolution provides information interaction among adjacent frames and captures the local continuity of the input feature maps. Integrating a Squeeze-and-Excitation (SE) block~\cite{hu2018squeeze} after the convolution further improves the representation ability by modeling the inter-dependencies between the channels of the feature maps and re-calibrating each channel.
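A hedged sketch of the resulting feed-forward module is given below, assuming inputs of shape (batch, time, dim); the activation function and the SE reduction ratio are illustrative choices rather than the exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class LocalityEnhancedFFN(nn.Module):
    def __init__(self, dim, hidden, kernel_size=15, se_reduction=8):
        super().__init__()
        self.linear1 = nn.Linear(dim, hidden)
        self.norm = nn.LayerNorm(hidden)
        self.dwconv = nn.Conv1d(hidden, hidden, kernel_size,
                                padding=kernel_size // 2, groups=hidden)
        self.se = nn.Sequential(                       # squeeze-and-excitation
            nn.Linear(hidden, hidden // se_reduction), nn.ReLU(),
            nn.Linear(hidden // se_reduction, hidden), nn.Sigmoid())
        self.act = nn.SiLU()
        self.linear2 = nn.Linear(hidden, dim)

    def forward(self, x):                              # x: (batch, time, dim)
        h = self.norm(self.linear1(x))
        h = self.dwconv(h.transpose(1, 2)).transpose(1, 2)
        w = self.se(h.mean(dim=1))                     # channel weights from global pooling
        h = self.act(h * w.unsqueeze(1))               # re-calibrate each channel
        return self.linear2(h)
\end{verbatim}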
\subsection{Aggregating Output Feature Maps of LE-Conformer Blocks}
For the frame-level speaker embedding extractor, the output of the last layer is often used as the input for the pooling layer. However, previous studies~\cite{gao2019improving, tang2019deep} indicated that feature maps from lower layers can also contribute to robust speaker embeddings. Similarly to~\cite{zhang2022mfa}, we apply two strategies to utilize low-layer feature maps. As illustrated in Fig. \ref{fig:system}(c), the first strategy is to concatenate the output feature maps from all LE-Conformer blocks along the channel dimension. The second is to compute a weighted average of the output feature maps from all LE-Conformer blocks with learnable weights.
Finally, the enhanced frame-level speaker embedding is processed by the pooling layer to generate utterance-level speaker embedding.
\begin{figure}[t]
\centering
\scalebox{1.0}
{
\includegraphics[width=8.6cm,height=8.0cm]{./image/Swin_Final_S__2.pdf}
}
\vspace{-6.0mm}
\caption{The architecture of Speaker Swin Transformer.}
\label{fig:Swin}
\end{figure}
\vspace{-2.0ex}
\section{Speaker Swin Transformer}
\vspace{-1.0ex}
Generally, the Transformer computes self-attention globally, which gives it a strong capability for modeling long-range dependencies. However, speech features can be much longer than text sentences, so Transformers usually incur high computational and memory costs for speech tasks. Moreover, through the convolution operation, ResNet-based networks bring locality, inductive bias, and multi-scale feature maps, which contribute to their success on SV tasks. In contrast, the Transformer's output feature maps are single-scale across all blocks. With a hierarchical structure and shifted window self-attention, the Swin Transformer~\cite{liu2021swin} has achieved state-of-the-art performance on multiple vision tasks. Therefore, we bring the advantages of the Swin Transformer to speaker verification, proposing the Speaker Swin Transformer (SST), which generates multi-scale output features and computes self-attention within shifted local windows. The architecture has computational complexity linear in the length of the utterance. An overview of the Speaker Swin Transformer encoder is presented in Fig. \ref{fig:Swin}.
\subsection{Encoding Input Features by Overlapping Patch Embedding}
Different from the Conformer, the Speaker Swin Transformer processes input speech features patch-by-patch instead of frame-by-frame. Considering the characteristics of speech features, using patch embeddings as input tokens allows the model to learn both the temporal and frequency structure. As shown in Fig. \ref{fig:Swin}(a), to better capture the local continuity of the input feature, the input feature is split into a sequence of $P\times P$ ($P=7$) patches that overlap by half of their area. Each patch is flattened and projected into a $C$-dimensional ($C=96$) 1D patch embedding with a linear patch embedding layer. We utilize a convolution layer with a $7\times 7$ kernel size, a stride of 4 ($S=4$), and zero padding of 3 for patch embedding. Accordingly, the output size of the patch embedding is $\left(\frac{T}{S} \times \frac{F}{S}, C\right)$, where $T$ and $F$ denote the time and frequency dimensions of the input feature. Although Fbanks and images have a similar 2D shape, the height and width of Fbanks carry different information, representing the frequency and temporal dimensions, respectively. Inspired by~\cite{chen2022hts}, to better model the dependencies among frequencies for nearby frames, we first split the whole Fbank into chunks along the temporal dimension. Patches are then split within each chunk following the order shown in Fig. \ref{fig:Swin}(a), and all patches compose the sequence chunk-by-chunk. Finally, the patch token sequence is processed by the linear patch embedding layer and sent to the following Transformer blocks.
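The overlapping patch embedding can be sketched as follows; the chunking along the temporal dimension is omitted for brevity, and the shapes follow the description above.
\begin{verbatim}
import torch
import torch.nn as nn

class OverlappingPatchEmbedding(nn.Module):
    """Project overlapping 7x7 patches of a (1, T, F) Fbank into C-dim tokens."""

    def __init__(self, embed_dim=96, patch=7, stride=4):
        super().__init__()
        self.proj = nn.Conv2d(1, embed_dim, kernel_size=patch,
                              stride=stride, padding=patch // 2)

    def forward(self, x):                    # x: (batch, 1, T, F)
        x = self.proj(x)                     # (batch, C, T/S, F/S)
        return x.flatten(2).transpose(1, 2)  # (batch, T/S * F/S, C) token sequence

# e.g. a 3.2 s utterance: (batch, 1, 320, 80) -> (batch, 80 * 20, 96)
tokens = OverlappingPatchEmbedding()(torch.randn(2, 1, 320, 80))
\end{verbatim}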
\subsection{Swin Transformer Block}
In order to construct the hierarchical structure with local window self-attention, we adapt the Swin Transformer block to speaker verification. Inside each Swin Transformer block, local window self-attention replaces the conventional global self-attention for efficient modeling. For the self-attention computation, all input patch tokens are evenly partitioned into non-overlapping local windows, each containing $M\times M$ ($M=5$) patches. Accordingly, self-attention is computed within each local window instead of among all patches globally. For an input speech feature with $f\times t$ patch tokens, the computational complexity of global MSA is $\mathcal{O}\left(f t C^2+(f t)^2 C\right)$ and that of a local window MSA (LW-MSA) is $\mathcal{O}\left(f t C^2+M^2 f t C\right)$, where $C$ is the latent feature dimension. This shows that LW-MSA is much more efficient than global MSA when $M^2 \ll ft $ ($M=5$), and its complexity grows linearly with $ft$ instead of quadratically as in global MSA. Therefore, it substantially decreases the computation cost.
To enlarge the receptive field and model the connections across windows, shifted local window multi-head self-attention (SLW-MSA) is introduced in addition to LW-MSA. This module adopts a windowing configuration that is shifted from that of the preceding layer, by shifting the windows toward the lower-right by $\left(\left\lfloor\frac{M}{2}\right\rfloor,\left\lfloor\frac{M}{2}\right\rfloor\right)$ patches in consecutive Swin Transformer blocks. As shown in Fig. \ref{fig:Swin}(b), the Transformer block with SLW-MSA introduces connections between neighboring non-overlapping windows from the previous block.
To form a hierarchical structure, a patch merging layer~\cite{liu2021swin} is added at the end of each of stages 1 to 3. The first patch merging layer concatenates neighboring patches in groups of $2 \times 2$ and applies a linear layer to reduce the output dimension from $\left(\frac{T}{2S} \times \frac{F}{2S}, 4C\right)$ to $\left(\frac{T}{2S} \times \frac{F}{2S}, 2 C\right)$. As illustrated in Fig. \ref{fig:Swin}(a), the shape of the patch tokens is reduced to $\left(\frac{T}{4S} \times \frac{F}{4S}, 4C\right)$ and $\left(\frac{T}{8S} \times \frac{F}{8S}, 8C\right)$ through stages 2 and 3, respectively. These stages allow the network to generate hierarchical representations as ResNet does, so the memory cost is decreased exponentially through each stage. More importantly, the receptive field of the local window self-attention grows larger as the layers go deeper. In summary, these designs contribute to efficient modeling of local and global relations.
\section{Experiments}
\vspace{-1.5ex}
\subsection{Datasets}
\textbf{\textit{VoxCeleb}} The SV systems are trained on the development sets of VoxCeleb1\&2~\cite{nagrani2017voxceleb, chung2018voxceleb2} and evaluated on the VoxCeleb1 test set. The total duration of the training data is around 2k hrs. We augment the training data by distorting the raw speech with additive noise~\cite{snyder2015musan}, RIR reverberation~\cite{ko2017study}, and speed perturbation (with speed factors 0.9 and 1.1). Specifically, the utterances generated by speed perturbation are regarded as coming from new speakers.
\noindent{\textbf{\textit{MS-internal}} This is a large-scale Microsoft internal multilingual dataset collected in controlled acoustic environments. It consists of around 26 million utterances from over 29k speakers in 48 languages and locales, about 4.6 seconds in length per utterance. The training set contains around 24 million utterances from over 27k speakers with a total duration of 33k hrs, and the test set contains over 0.1 million utterances from around 1.7k speakers. There is no speaker overlap between the training and test sets.}
\subsection{Implementation Details}
The input features are 80-dimensional log Mel-filterbanks with a frame-length of 25 ms and 10 ms shift, applied with mean normalization at the utterance level. The proposed Transformers serve as the backbone network. Attentive statistic pooling (ASP)~\cite{okabe2018attentive} is used to generate utterance-level embeddings. The models are trained with additive margin softmax (AM-softmax) loss~\cite{wang2018cosface} with a margin of 0.2 and a scaling factor of 30. We also trained ResNet34, ECAPA-TDNN, and Res2Net50 for comparison.
\textbf{\textit{Locality-Enhanced Conformer}} A segment of 2.0 seconds is randomly selected for each input utterance. The model consists of 6 Locality-Enhanced Conformer encoder blocks with 4 attention heads. For each block, we set the encoder dimension as 512, the kernel size of convolution module as 15, and the hidden unit size as 2048 for the FFN. We use the AdamW~\cite{loshchilov2018decoupled} optimizer with an initial learning rate of 3e-4 and set the weight decay as 5e-2. A linear warm-up is applied at the first 45k steps and the learning rate is adjusted based on cyclical annealing schedule in the range of 1e-8 and 3e-4. The batch size is 128 for each of 8 GPU cards.
\textbf{\textit{Speaker Swin Transformer}} A segment of 3.2 seconds is randomly selected for each input utterance. The input feature of size $320\times 80$ is equally split into 2 chunks of size $160\times 80$ along the time dimension. We set the patch size to $7\times 7$ and the attention window size to $5\times 5$. The four network stages contain 2, 2, 6, and 2 Swin Transformer blocks, respectively. We set the channel number of the hidden layer in the first stage to $C=96$. We utilize the AdamW optimizer with an initial learning rate of 3e-2 and a weight decay of 5e-2. A linear warm-up is applied for the first 130k steps, and the learning rate is adjusted based on a cyclical annealing schedule in the range of 1e-5 to 3e-2. The batch size is 200 for each of 8 GPU cards.
We report the system performance using two evaluation metrics: Equal Error Rate (EER) and minimum Detection Cost Function (minDCF) with $p_{target}=0.05$.
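For reference, both metrics can be computed from trial scores and labels by sweeping a decision threshold; in the sketch below, the detection costs $C_{miss}=C_{fa}=1$ and the normalization by the default cost are assumptions following common practice.
\begin{verbatim}
import numpy as np

def eer_mindcf(scores, labels, p_target=0.05, c_miss=1.0, c_fa=1.0):
    """Compute EER and minDCF by sweeping a threshold over the scores.

    scores : similarity scores, higher = more likely same speaker
    labels : 1 for target (same-speaker) trials, 0 for non-target trials
    """
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    thresholds = np.sort(np.unique(scores))
    p_miss = np.array([(scores[labels == 1] < t).mean() for t in thresholds])
    p_fa = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])
    eer_idx = np.argmin(np.abs(p_miss - p_fa))
    eer = (p_miss[eer_idx] + p_fa[eer_idx]) / 2
    dcf = c_miss * p_miss * p_target + c_fa * p_fa * (1 - p_target)
    dcf = dcf / min(c_miss * p_target, c_fa * (1 - p_target))  # common normalization
    return eer, dcf.min()
\end{verbatim}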
\vspace{-1.0ex}
\subsection{Evaluation on VoxCeleb Dataset}
\vspace{-1.0ex}
\begin{table}[t]
\caption{Performance of all SV systems on VoxCeleb1. \textit{Upper block: CNN-based models; Middle block: Transformer-based models; Lower block: Our proposed models}}
\setlength{\tabcolsep}{3.2mm}{
\renewcommand\arraystretch{1.2}
\scalebox{0.97}{
\begin{tabular}{lccc}
\hline \textbf{Systems} & \textbf{Corpus} & \textbf{EER(\%)} & \textbf{minDCF} \\
\hline
ResNet34~\cite{zeinali2019but} & Vox1\&2 & $1.06$ & $0.084$ \\
ECAPA-TDNN~\cite{desplanques2020ecapa} & Vox1\&2 & $0.85$ & $0.078$ \\
Res2Net50 {~\cite{zhou2021resnext}} & Vox1\&2 & $0.55$ & $0.041$ \\
\hline
SAEP {~\cite{safari2020self}} & Vox2 & $5.44$ & $-$ \\
Wang et al.{~\cite{wang2022multi}} & Vox2 & $2.56$ & $-$ \\
DT-SV {~\cite{zhang2022dt}} & Vox2 & $1.92$ & $0.130$ \\
S-vector+PLDA {~\cite{mary2021s}} & Vox1\&2 & $2.67$ & $0.300$ \\
\hline
LE-Conformer (ours) & Vox1\&2 & $\mathbf{0.75}$ & $\mathbf{0.055}$ \\
SST (ours) & Vox1\&2 & $1.34$ & $0.104$ \\
\hline
\end{tabular}}
}
\label{table:Vox}
\end{table}
First, we evaluate the proposed LE-Conformer and SST on the VoxCeleb dataset. Table \ref{table:Vox} compares the performance of our systems with the CNN-based baseline systems ResNet34~\cite{zeinali2019but}, ECAPA-TDNN~\cite{desplanques2020ecapa}, and Res2Net50~\cite{zhou2021resnext}, and with other Transformer models~\cite{mary2021s, safari2020self, wang2022multi, zhang2022dt} for speaker verification. The LE-Conformer and SST achieve 0.75\% and 1.34\% EER, respectively, significantly outperforming the other Transformer-based systems. Compared to S-vector+PLDA, LE-Conformer and SST improve the performance with relative EER reductions of 71.9\% and 49.8\%. Moreover, LE-Conformer and SST also outperform Wang et al.~\cite{wang2022multi} with relative improvements of 70.7\% and 47.7\% in EER, and DT-SV with improvements of 60.9\% and 30.2\%.
Compared to CNN-based networks, LE-Conformer outperforms ResNet34 and ECAPA-TDNN by 29.3\% and 11.8\% relative improvement in EER, respectively. This demonstrates that introducing locality mechanisms into the Transformer is beneficial for more accurate extraction of speaker embeddings.
\begin{table}[h]
\caption{Ablation study of the Locality-Enhanced Conformer and Speaker Swin Transformer. \textit{Non-OPE: non-overlapping patch embedding.}}
\vspace{-4.0mm}
\setlength{\tabcolsep}{3.5mm}{
\begin{center}
\renewcommand\arraystretch{1.05}
\scalebox{0.99}{
\begin{tabular}{lcc}
\hline \textbf{Systems} & \textbf{EER(\%)} & \textbf{minDCF} \\
\hline LE-Conformer & $0.75$ & $0.055$ \\
\hspace{0.5em} No SE Block & ${0.87}$ & ${0.060}$ \\
\hspace{0.5em} No DW Conv & ${0.94}$ & ${0.068}$ \\
\hspace{0.5em} No Concat & ${1.00}$ & ${0.070}$ \\
\hspace{0.5em} Weighted avg & ${1.21}$ & ${0.091}$ \\
\hline SST & $1.34$ & $0.104$ \\
\hspace{0.5em} Non-OPE & $1.47$ & $0.120$ \\
\hline
\end{tabular}}
\end{center}}
\vspace{-3mm}
\label{table:Ablation}
\end{table}
\begin{table}[h]
\caption{Performance of the proposed systems on MS-internal dataset.}
\vspace{-4.0mm}
\setlength{\tabcolsep}{4mm}{
\begin{center}
\renewcommand\arraystretch{1.0}
\begin{tabular}{l|cc}
\hline \textbf{Systems} & \textbf{EER} (\%) & \textbf{minDCF} \\
\hline Res2Net50 & $3.09$ & $0.180$ \\
LE-Conformer & $3.57$ & $0.229$ \\
SST & $\mathbf{2.64}$ & $\mathbf{0.168}$ \\
\hline
\end{tabular}
\end{center}}
\label{table:KingASR}
\end{table}
We further conduct experiments to illustrate the effectiveness of the proposed speaker embedding networks and how local information helps improve the performance of Transformer-based models. Table \ref{table:Ablation} shows the impact of changes to the LE-Conformer block and SST. For LE-Conformer, without adding the SE block and the depth-wise convolution into the FFN, the performance degrades by 25.3\% and 23.6\% relative in EER and minDCF, respectively. We also study how to effectively aggregate the outputs from the Conformer blocks. Without concatenating the output feature maps from all blocks, the performance further degrades by 6.4\% relative in EER; this configuration is equivalent to the original Conformer, i.e., with the SE block, DW conv, and concatenation all removed. The results in Table \ref{table:Ablation} demonstrate that the performance of the Transformer can be significantly improved by integrating locality mechanisms. Moreover, performance becomes even worse if the output features are averaged with learnable weights before the pooling layer.
For SST, it is beneficial to use overlapping patch embedding (OPE), which yields an 8.8\% relative improvement in EER compared to non-overlapping patch embedding (non-OPE). Overlapping patch embedding effectively enhances the model's ability to capture the local continuity of input features via overlapped sliding windows.
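A minimal sketch of the two patch-embedding variants is shown below; the single input channel, the stride and the zero-padding of the overlapping variant are illustrative assumptions, since only the $7\times 7$ patch size and $C=96$ are fixed above.
\begin{verbatim}
import torch.nn as nn

# Non-overlapping patch embedding: stride equals the patch size,
# so neighbouring patches share no frames or frequency bins.
non_ope = nn.Conv2d(1, 96, kernel_size=7, stride=7)

# Overlapping patch embedding (OPE): stride smaller than the patch
# size, so adjacent patches overlap and local continuity is kept.
# Stride 4 and padding 3 are illustrative choices, not paper settings.
ope = nn.Conv2d(1, 96, kernel_size=7, stride=4, padding=3)
\end{verbatim}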
\vspace{-1.0ex}
\subsection{Evaluation on MS-internal Dataset}
\vspace{-1.0ex}
In this section, we investigate the performance of the proposed models when trained on a large-scale Microsoft internal multilingual (MS-internal) dataset. We compare LE-Conformer and SST to Res2Net50, the best CNN-based system in Table \ref{table:Vox}. As shown in Table \ref{table:KingASR}, the Speaker Swin Transformer outperforms Res2Net50 with a 14.6\% relative improvement in EER. This demonstrates that, compared with CNN-based models, the hierarchical Transformer structure with shifted local window self-attention is capable of making full use of the massive training data and learning global and local information collectively.
\vspace{-2.0ex}
\section{Conclusions}
\vspace{-1.5ex}
In this paper, we propose two speaker embedding networks that incorporate locality mechanisms into the Transformer. First, we integrate depth-wise convolution and channel-wise attention into the Conformer blocks to enhance the ability to model local dependencies. Second, we introduce the Speaker Swin Transformer, which processes input features at multiple scales with shifted local window self-attention. Experimental results demonstrate that our models significantly outperform previous Transformer-based models as well as CNN-based models such as ResNet34 and ECAPA-TDNN. The proposed architectures enable the effective extraction of speaker embeddings, especially when trained on large amounts of data. We hope this work can provide inspiration for the future design of Transformer-based speaker embedding networks.
\vfill\pagebreak
\footnotesize
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.08590",
"language": "en",
"timestamp": "2023-02-20T02:02:28",
"url": "https://arxiv.org/abs/2302.08590",
"yymm": "2302"
} | \section{Introduction}
As \citet{Bisk2020} have noted,
NLP as a field is slowly working its way towards ever wider ``world scopes'', %
going from modelling corpora to larger collections of text, to collections of text paired with other modalities, to modelling in environments over which the learning agent has some control, currently reaching out to scenarios where other agents need to be modelled as well.
It is interesting to note how curiously backwards this would be as a description of the development of a human language user:
Humans need to experience other minds before they can ever begin to experience structured textual information.
As a development strategy, the bottom-up methodology reflected in this ``widening of scopes'' also bears some risks: As \citet{koller:topdown} recently argued with respect to distributional semantics, a bottom-up strategy by design moves from one (relative) success to the next, as the next thing is always the one that is just about possible to do. Without some further guidance, however, this %
limits the perspective and comes with the risk of getting stuck in local optima. This paper is an attempt to provide such guidance for the field of ``embodied social AI'', by pulling together some of what is known in the various scientific areas that deal with human verbal interaction, into an abstract description of modelling desiderata. In that sense, the proposal here may serve as a \emph{conceptual benchmark} against which settings, tasks, datasets, and models can be measured in terms of their coverage, and in terms of the costs of the abstractions they make relative to this general model.\footnote{%
Note that this way of proceeding is fully compatible with an ``empirical approach'', insofar as that is used to select the best model, and does not aim to determine the goals as well.
}
\begin{figure}[t]
\centering
\hspace*{-3ex}
\includegraphics[width=1.1\linewidth]{Figures/analysis.pdf}
\caption{Representational Domains (bottom) and Anchoring Processes (top) Structuring the Situated Agent}
\label{fig:analysis}
\vspace*{-2.5ex}
\end{figure}
Figure~\ref{fig:analysis} shows an outline of the proposal, which the remaining sections will unpack: There are representational demands that the situation puts on the agent---that is, the agent needs to bring some knowledge, and track some information (discussed further in Section~\ref{sec:repr}). The processes with which it handles the interaction (and comes to update this knowledge) are also subject to some demands, stemming from the fact that the interaction partner is free and independent, but similar (Sections~\ref{sec:sitlau}, \ref{sec:anchor}).
It is useful to clarify one thing from the outset: What this paper is \emph{not} trying to do is to make any recommendations as to \emph{how} aspects of this model are to be realised (e.g., using symbolic or distributed representation methods; using particular learning algorithms; building in a certain modularisation or modelling monolithically; or using particular decision making algorithms);
the intended contribution is an analysis of how the %
\emph{phenomenon} of situated language use is conceptually structured on a high level, which can then eventually guide the definition of challenges and selection of methods to meet them.
\section{Situated Interaction} %
\label{sec:sitlau}
Here is a (very) high-level, general characterisation of the
face-to-face interaction situation: It is a \emph{direct, purposeful encounter of free and independent, but similar agents}.
Let us unpack this:
\\ $\bullet$ as \emph{agents}, the participants meet their purposes---and here, specifically, \emph{communicative purposes}---through acting;
\\ $\bullet$ as \emph{free} agents, they cannot be forced, and cannot force the respective other, to do anything, and specifically not to \emph{understand} as intended;
\\ $\bullet$ as \emph{independent} agents, they are individually subject to the same passing of time (while one acts, the other can as well and need not wait); they will also have different histories, including their histories of previous interactions and language use, and will bring different knowledge to the interaction;
\\ $\bullet$ this being a \emph{direct} encounter, the agents must rely on what they can do (produce for the other, receive from the other) with their bodies to create meaning here and now;
\\ $\bullet$ finally, as fundamentally \emph{similar} agents, they can rely on a certain body of shared knowledge and experience, for example in how each parses the shared environment, understands the world, and forms desires, beliefs, and intentions, and, if they are to use language for communication, in how they use language, but where the exact degree of similarity will need to be determined during and through the interaction.
\\ This has consequences: To reach \emph{joint} purposes, the agents need to coordinate, in a process that unfolds continuously in time and which can yield new knowledge, including about how to coordinate, but that also can rest on assumed commonalities.\footnote{%
This short description places a different focus, but in the broad strokes follows the analysis by \citet{clark:ul}.
}
The next sections will go into the details of what the situation, thus characterised, demands of the agent.
\section{Representational Demands}
\label{sec:repr}
The central means through which agents in situated interaction meet their
purposes is \emph{language} (and a particular one, at that), and hence the agent must come with knowledge of this language, or possess (or represent to itself) what I will call here a \textbf{language model}.
It is not enough for the agent to be able to produce well-formed strings; rather, the systematic connection to the communicative intentions they express \cite{grice:meaning} must be modelled as well. As these intentions can concern objects in the world, and to the degree that the model of the language can be presumed to be shared, it is via those that the language can count as \emph{grounded} in the sense that has most currency in the NLP community \cite{Chandu2021}.
Examples like those in \ref{ex:worldk} below indicate that \textbf{world knowledge} also factors into the purpose-directed use of language:
\ex. \label{ex:worldk}
\a. \label{ex:wino}
I couldn't put the coat into the suitcase because it was too small.
\b. \label{ex:instruction}
Put the poster up on the wall. %
In \ref{ex:wino}, a ``Winograd schema'' \citep{Levesque2012} type sentence, information about expectable relative sizes, and in \ref{ex:instruction}, knowledge about expected outcomes, is needed to interpret the utterance.\footnote{%
But see for example \citet{Pust:GL,murphy2010lexical} for the notorious difficulties separating linguistic, and in particular lexical knowledge from such more general knowledge.
}
Again, underlying the communicative use of this knowledge is an assumption that it is shared.
While subject to possible updates, as we will see, these types of knowledge can be seen as something that the agent brings into the situation. But the situation itself must be understood by the agents, in order to interact in it.
The proposed schema %
splits the \textbf{situation model} into three sub-types: A model of the \emph{actual situation} in which the interaction is happening, which would provide not only referents for ``poster'' and ``wall'' in a situation in which \ref{ex:instruction} is used, but also likely referents for the implicit instrument of the requested action (e.g., perhaps there is a roll of duct tape visible, or a collection of pushpins). For this to work, there is an underlying assumption, which is that the situation will be mostly parsed similarly by the agents, so that it can form the shared basis for assumed mutual knowledge \cite{clark:ul}; repair processes accounting for violations of this assumption will be discussed below.
The discourse of the agents does not always have to be about the actual situation, however. The building up of a model of the \emph{reported situation} \cite{van1983strategies}, together with world knowledge about the consequences of entering a room, can explain the licensing of the contrast (indicating surprisal) in the following example:
\ex. \label{ex:continuity}
I saw two people enter the room, but when I followed, the room was empty.
Lastly, the \emph{social situation} also bears on linguistic material: Relative social status, for example, is grammaticalised in many languages \citep{bender2013linguistic}; and even more generally, the simple fact of who is and who is not party to an interaction determines which linguistic and other behaviour is appropriate \citep{goffman1981forms,bohus-horvitz-09:multiparty}.
Next, there is the \textbf{discourse model}, which is required to keep track of antecedents of anaphoric acts and, more generally, for the determination of \emph{coherence}. In \ref{ex:dm}, for example, the anaphoric elements \emph{no} (as negating the proposition contained in A's question) and \emph{he} can only be resolved under the assumption that they realise an \emph{answer} (to A's question) and an \emph{explanation} (for the answer), respectively. (See, \emph{inter alia}, \citet{KnR:DtL,ashlasc:sdrtbook,ginz:is}.)
\ex. \label{ex:dm}
A: Is John coming to the party?\\
B: No, he's busy.
Finally, there is a large body of work elucidating the role of the \textbf{agent model} (representing their \emph{beliefs, desires, and intentions}) in interpreting discourse \cite{cmp:intcom}. To again give just one illustrating example, in \ref{ex:coffee}, A must know something about B's likely desires and intentions (to stay awake, or not stay awake) to make sense of their reply.
\ex. \label{ex:coffee}
A: Do you want some coffee?\\
B: It's late.
\section{Anchoring Processes}
\label{sec:anchor}
\begin{figure}[ht]
\centering
\hspace*{-3ex}
\includegraphics[width=1\linewidth]{Figures/with_relations.pdf}
\caption{Representations and Processes. Arrows Denote the \emph{enables} Relation, Dotted Lines the \emph{updates} Relation.}
\label{fig:rels}
\vspace*{-3ex}
\end{figure}
Even if not often taken in its full breadth, \emph{that} the knowledge described above plays a role in situated interaction is presumably not very controversial. The focus of this section is on something that is less often dealt with and brought together, namely, the processes through which the knowledge is applied and updated.
The fact that the agents are \emph{independent} and hence not extrinsically temporally coordinated argues for \textbf{incremental processing}, that is, an updating of situation, discourse and agent models that is continuous with the observation of the other agent's actions as well as with the agent's own production---this in turn makes possible the \emph{achievement} of coordination, for example in successful turn-taking \cite{schlaska:agmo}.
Only this processing regime then makes available certain devices used in \textbf{conversational grounding} \citep{clarkbrennan:grounding,clark:ul}---the process of coordination on what can count as shared knowledge (with respect to either of the models discussed above, most immediately the discourse model)---namely the use of overlapping signals such as ``back-channel'' behaviours like ``uh-hu'' or nodding \cite{yngve:edge}. When understanding is not reached immediately, \emph{repair} can be initiated through clarifications and corrections \cite{hayashi2013conversational}. In the framework of section~\ref{sec:sitlau}, this can be understood as the mending and making true of initially overoptimistic assumptions on what was shared.
In some cases, the outcome of the repair process can lead to ``localised'' learning through the establishment of precedents \cite{brenclark:conpact}, such as for example particular idiosyncratic ways of referring to some object; however, it can also, just like \emph{direct teaching}, lead to a longer term update of language model (e.g., learning new terms) or world model (e.g., learning new facts), in a process of \textbf{incremental learning} \cite{hoppitt:sociallearning}.
Lastly, the multimodal nature of situated interaction \cite{Holler2019} is accounted for by processes of \textbf{multimodal grounding}, which integrate meaning-making devices such as deictic and iconic gestures \cite{Sowa2003,kenningtonetal:sigdial13} and facial expressions \cite{poggipela:facperf}. I will also subsume under this header the process of resolving references into the situational context \cite{royreit:langworld,sieschla:vispento} by performing the categorisations denoted by the expressions.
To summarise the preceding two sections, Figure~\ref{fig:rels} again shows the elements of the analysis discussed here, this time with interrelations added.
\section{Where We Are, And Where To Go} %
\label{sec:where}
\vspace*{-.2cm}
It should be clear that from the perspective of the analysis detailed above, even the NLP tasks that most seem like they are related to situated interaction are severely limited. This is not the place for a detailed cataloguing, but we can look at a few examples. ``Visual dialog'' \cite{visdial}, the task of generating a reply to a question about an image, within the context of preceding questions and answers, requires a limited discourse model (the previously asked questions and answers may introduce discourse referents), and a limited form of situation model and multimodal grounding (of the target image), but the strict role asymmetry (questioner and answerer) precludes any need for agent modelling and conversational grounding; its strict turn-by-turn nature and the use of written language abstract away from the questioning agent as an independent agent and put control unilaterally in the hands of the model. ``Embodied QA'' \citep{dasetal:eqa} and ``language \& vision navigation'' \citep{Anderson2018} are tasks where, in response to a language stimulus, actions (in a simulator environment) need to be performed. Hence, these tasks require a more dynamic situation model, but other than that, are not fundamentally different from the visual dialog task (and in fact take away again what little that task requires in terms of discourse model).
Now, tackling a problem by focussing on its parts is a valid strategy, but only if in doing so the whole is kept in mind. In the cases cited above, it seems fair to say that the formulation of the task was driven more by the available modelling methods than by the phenomenon itself: they are basically tasks that lend themselves to a formulation as a sequence-to-sequence problem, and as such are more about transducing the semantics of the stimulus language than they are about situated \emph{interaction} (or interaction at all).
More recently, tasks have been proposed that put more stress on the conversational grounding aspects mentioned above \cite{ilieatal:meetup19,Udagawa2019,bara-etal-2021-mindcraft}. This is a good start, but in order to systematise these efforts, what is missing is a clearer picture of how the task setting (environment, interaction mode, etc.) determines what a task can even test, and how close it will come to the fuller picture sketched above. Should it turn out that for the richest settings, real interaction with capable language users is required, then ways will have to be found to enable that, and to overcome the batch learning mode that current models are bound to.
\section{Related Work}
\label{sec:relwo}
\vspace*{-.2cm}
That various kinds of knowledge and update processes are required to model conversational agents is not a new insight.
The grandparent of any of those attempts, Winograd's \citeyear{Winograd:shdrlu} SHRDLU, already made a distinction between the language model (in the form of parsing procedures) and the situation model, as did the later textbook presentation by \citet{allen:nlu}. A distinction between conversation situation and reported situation was made by \citet{barwiseperry:sitatt}; \citet{bratman:intplan} and \citet{wooldridge:bda} stressed the importance of modelling agents in terms of their beliefs, desires, and intentions. \citet{allenetal:archmorereal} were among the first to point out the need for incremental processing. And to conclude this---almost absurdly selective---tour through what is a massive body of work, \citet{traumlarsson:ISbook} and \citet{staffan:thesis} described a representational system that elegantly interfaced discourse modelling, conversational grounding, and agent modelling. It is not, I want to claim here, that the \emph{analyses} from these papers were wrong; in the light of more recently available methods, what is likely the case is that the realisation of representational demands through manually constructed representation formats and formalisms restricted these models, and that this is what our more recent methods can help us overcome.\footnote{%
See \cite{beyond-single} for a very recent, even more wide-ranging argument for the value of looking at situated interaction.
}
\vspace*{-.2cm}
\section{Conclusions}
\label{sec:conc}
\vspace*{-.2cm}
I have argued for a particular analysis of the task of participating in situated interaction, drawing on various literatures. If NLP wants to advance on this phenomenon, I contend, it needs to start to take its complexity seriously, and devise methods and testbeds for tackling it, rather than only invent tasks that fit the available methods.
|
{
"arxiv_id": "2302.08700",
"language": "en",
"timestamp": "2023-02-20T02:07:13",
"url": "https://arxiv.org/abs/2302.08700",
"yymm": "2302"
} | \section{Introduction}
Let $\mathfrak{g}$ be a finite-dimensional simple Lie algebra, $\triangle$ its Dynkin diagram, $\mathsf{C} =(\mathsf{c}_{i,j})_{i,j \in I}$ its Cartan matrix.
In \cite{FR98}, Frenkel-Reshetikhin introduced the two-parameters deformation $\mathsf{C}(q,t)$ of the Cartan matrix $\mathsf{C}$
to define a deformation $\mathcal{W}_{q,t}(\mathfrak{g})$ of the $\mathcal{W}$-algebra $\mathcal{W}$ associated with $\mathfrak{g}$. Interestingly, it comes to light that $\mathcal{W}_{q,t}(\mathfrak{g})$ \emph{interpolates}
the representation theories of the quantum affine algebra $U'_q(\widehat{\mathfrak{g}})$ of untwisted type and its Langlands dual $U'_q(\widehat{\mathfrak{g}}^L)$ in the following sense. Let
$\mathscr{C}_{\widehat{\g}}$ be the category of finite dimensional $U_q(\widehat{\mathfrak{g}})$-modules and $K(\mathscr{C}_{\widehat{\g}})$ its Grothendieck ring. Then the specialization of
$\mathcal{W}_{q,t}(\mathfrak{g})$ at $t=1$ recovers $K(\mathscr{C}_{\widehat{\g}})$ while the one at $q=\exp(\pi i/r)$ is expected to recover $K(\mathscr{C}_{{\widehat{\g}}^L})$, where $r$ is the lacing number of $\mathfrak{g}^L$ \cite{FR99,FM01,FH11}.
In the representation theory of $U'_q(\widehat{\mathfrak{g}})$ over the
base field $\mathbf{k} =\overline{\mathbb{Q}(q)}$, the \emph{$\mathrm{R}$-matrices} play a central role as every simple module in $\mathscr{C}_{\widehat{\g}}$ appears as the image of the $\mathrm{R}$-matrix defined on a certain tensor product of a special family of simple modules, called the \emph{fundamental modules} \cite{AK97, Kas02, VV02}. Let us explain this in a more precise way.
The $\mathrm{R}$-matrices are constructed as intertwining operators between modules in $\mathscr{C}_{\widehat{\g}}$ satisfying the Yang-Baxter equation. For a simple module $M \in \mathscr{C}_{\widehat{\g}}$, there exists a unique finite family
$ \st{(i_k,a_k) \in I \times \mathbf{k}^\times }_{1 \le k \le r}$ and the corresponding fundamental modules $\{ V_{i_k}(a_k) \}_{1 \le k \le r}$ such that $M$ is an image of the composition of $\mathrm{R}$-matrices
$$
\mathbf{r}
: V_{i_r}(a_r) \otimes \cdots \otimes V_{i_2}(a_2) \otimes V_{i_1}(a_1) \to V_{i_1}(a_1) \otimes V_{i_2}(a_2) \otimes \cdots \otimes V_{i_r}(a_r),
$$
where
$$
\mathbf{r}_{t,s} : V_{i_t}(a_t) \otimes V_{i_s}(a_s) \to V_{i_s}(a_s) \otimes V_{i_t}(a_t)
$$
denotes the $\mathrm{R}$-matrix between $V_{i_t}(a_t)$ and $V_{i_s}(a_s)$,
\begin{align*}
\mathbf{r} = (\mathbf{r}_{2, 1}) \circ \cdots \circ ( \mathbf{r}_{r-1, 1} \circ \cdots \circ \mathbf{r}_{r-1, r-2} ) \circ ( \mathbf{r}_{r, 1} \circ \cdots \circ \mathbf{r}_{r, r-1} ),
\end{align*}
and
$a_{s}/a_t$ is \emph{not} a root of the \emph{denominator} of the \emph{normalized $\mathrm{R}$-matrix} $R^{{\rm norm}}_{V_{i_t}(1),V_{i_s}(1)_z}$, which is a polynomial in $\mathbf{k}[z]$ and denoted by $d_{i_t,i_s}(z)$.
Furthermore, the denominator of $R^{{\rm norm}}_{V_{i_t}(1),V_{i_s}(1)_z}$ indicates whether $ V_{i_t}(a_t) \otimes V_{i_s}(a_s)$ is simple or not (we refer to~\cite{KKOP19C} for more details about the normalized $\mathrm{R}$-matrix and its denominator).
Thus the study of the $\mathrm{R}$-matrices and the denominators between fundamental modules is one of the first steps in investigating the representation theory of quantum affine algebras.
Surprisingly,
it is proved that the matrix $(d_{i,j}(z) )_{i,j \in I}$ can be read from the inverse $\widetilde{\mathsf{C}}(q) \mathbin{:=} \mathsf{C}(q)^{-1}$ of $\mathsf{C}(q) \mathbin{:=} \mathsf{C}(q,t)|_{t=1}$
by using the geometry of the graded quiver varieties in type $ADE$ \cite{Fuj19} and using the root system for a simply-laced type $\mathsf{g}$
(see~\eqref{eq: Gg} below) in any finite type $\mathfrak{g}$ \cite{HL15,OS19A,FO21}.
We remark here that $( d_{i,j}(z) )_{i,j \in I}$ was explicitly calculated in \cite{KKK13b,DO94,KMN2,Oh15,OS19,Fuj19}.
We call $\mathsf{C}(q)$ the \emph{quantum Cartan matrix}. Note that $\widetilde{\mathsf{C}}(q)$ is ubiquitously utilized
in the representation theory of $U_q'(\widehat{\mathfrak{g}})$. For instance, it plays crucial roles in the $q$-character theory invented in \cite{FR99,FM01} and in
the construction of the quantum Grothendieck ring $K_{t}(\mathscr{C}_{\widehat{\g}})$ invented in \cite{Nak04,VV02A,H04}. As we mentioned, to compute $\widetilde{\mathsf{C}}(q)$ of type $\mathfrak{g}$, we need to consider its corresponding simply-laced Lie algebra $\mathsf{g}$ given below:
\begin{align} \label{eq: Gg}
(\mathfrak{g},\mathsf{g}) = (ADE_n,ADE_{n}), \ (B_n,A_{2n-1}),
\ (C_n,D_{n+1}),\ (F_4,E_6),\ (G_2,D_4).
\end{align}
Then it is revealed that the nature of the representation theory of $U_q'(\widehat{\mathfrak{g}})$ is simply-laced even though $\mathfrak{g}$ is not simply-laced. For instance,
the quantum Grothendieck rings $K_{t}(\mathscr{C}^\mathcal{Q}_{\widehat{\g}})$ of the \emph{heart subcategories} $\mathscr{C}_{\widehat{\g}}^\mathcal{Q}$ of $\mathscr{C}_{\widehat{\g}}$ are isomorphic to
the integral form of the \emph{unipotent quantum coordinate ring} $A_q(\mathsf{n})$ of
$U_q(\mathsf{g})$ \cite{HL15,KO19,OS19}, and the blocks of $\mathscr{C}_{\widehat{\g}}$ are parameterized by the root lattice $\mathsf{Q}_\mathsf{g}$ of $\mathsf{g}$ \cite{CM05, KKOP22,FO21}.
Here, (A) the heart subcategory $\mathscr{C}_{\widehat{\g}}^\mathcal{Q}$ depends on the choice of a $\mathrm{Q}$-datum $\mathcal{Q}=(\triangle_\mathsf{g},\sigma,\xi)$ of $\mathfrak{g}$, which consists of (i)
the Dynkin diagram $\triangle_\mathsf{g}$ of type $\mathsf{g}$, (ii) the Dynkin diagram automorphism $\sigma$ which yields $\triangle_\mathfrak{g}$ as the $\sigma$-orbits of
$\triangle_\mathsf{g}$ and (iii) a height function $\xi:I \to \mathbb{Z}$ satisfying certain conditions (\cite[Definition 3.5]{FO21}), (B) $\mathscr{C}_{\widehat{\g}}^\mathcal{Q}$ is characterized by the smallest subcategory of $\mathscr{C}_{\widehat{\g}}$
(i) containing $\mathsf{V}_\mathcal{Q} \mathbin{:=} \{ V_{i}((-q)^{p}) \}$, where $(i,p)$
ranges over a certain subset of $I \times \mathbb{Z}$ which
is the set of vertices $(\Gamma_\mathcal{Q})_0$ of the \emph{combinatorial AR-quiver} $\Gamma_\mathcal{Q}$ of $\mathcal{Q}$, and (ii) stable under taking subquotients, extensions and tensor products \cite{HL15,KO19,OS19}.
\medskip
The \emph{quiver Hecke algebra} $R$, introduced by Khovanov-Lauda \cite{KL09} and Rouquier \cite{R08} independently, is a $\mathbb{Z}$-graded $\mathbf{k}$-algebra, whose finite-dimensional graded module category $R\text{-}\mathrm{gmod}$ also categorifies the integral form of $A_q(\mathbf{n})$ of the quantum group $U_q(\mathbf{g})$ associated with a symmetrizable Kac-Moody algebra $\mathbf{g}$. When $\mathbf{g}$ is symmetric and the base field
$\mathbf{k}$ is of characteristic 0, the self-dual simple modules in $R\text{-}\mathrm{gmod}$ categorify the distinguished basis $\mathbf{B}^{{\rm up}}$ \cite{VV09,R11}, which is called the \emph{upper global basis}
(equivalent to Lusztig's \emph{dual canonical basis}). For a finite simple Lie algebra $\mathfrak{g}$,
every simple module can be obtained as the image of $\mathrm{R}$-matrices of a certain ordered \emph{convolution product $\mathop{\mathbin{\mbox{\large $\circ$}}}$} of \emph{cuspidal modules} (\S~\ref{subsec: cuspidal}), where the cuspidal modules categorify the dual PBW-vectors \cite{Kato12,McNa15} (see also \cite{KR09,HMD12}). Let us explain this
more precisely.
For a \emph{commutation class} $\cc$ of reduced expressions ${\underline{w}}_0$ of the longest element $w_0$ of the Weyl group $\sfW_\mathfrak{g}$ (\S~\ref{subsec: convex}), one can define the associated set of cuspidal modules $\mathsf{S}_{\cc} = \{ S_{\cc}(\beta) \}_{\beta \in \Phi_\mathfrak{g}^+}$, which consists of \emph{affreal} simple modules (Definition~\ref{def: affreal}) and is parameterized by the set of positive roots $\Phi_\mathfrak{g}^+$.
Then every simple $R$-module $M$ can be obtained as the image of a distinguished homomorphism, also called the $\mathrm{R}$-matrix, which satisfies the Yang-Baxter equation (see \cite{KKK18A} for more details) and is defined on an ordered convolution product of modules in $\mathsf{S}_{\cc}$.
Namely, there exists a unique ${\underline{m}} =(m_\ell,\ldots,m_1) \in \mathbb{Z}_{\ge 0}^{\Phi_\mathfrak{g}^+}$ such that
$M$ is the image of the composition of $\mathrm{R}$-matrices (\S~\ref{subsec: cuspidal})
$$
\mathbf{r}
: S_{\cc}(\beta_\ell)^{ \circ m_\ell} \mathop{\mathbin{\mbox{\large $\circ$}}} \cdots \mathop{\mathbin{\mbox{\large $\circ$}}} S_{\cc}(\beta_2)^{\circ m_2} \mathop{\mathbin{\mbox{\large $\circ$}}} S_{\cc}(\beta_1)^{\circ m_1}
\to S_{\cc}(\beta_1)^{\circ m_1} \mathop{\mathbin{\mbox{\large $\circ$}}} S_{\cc}(\beta_2)^{\circ m_2} \mathop{\mathbin{\mbox{\large $\circ$}}} \cdots \mathop{\mathbin{\mbox{\large $\circ$}}} S_{\cc}(\beta_\ell)^{\circ m_\ell},
$$
where
$$
\mathbf{r}_{t,s} : S_{\cc}(\beta_t) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{\cc}(\beta_s) \to S_{\cc}(\beta_s) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{\cc}(\beta_t)
$$
denotes the $\mathrm{R}$-matrix between $S_{\cc}(\beta_t)$ and $S_{\cc}(\beta_s)$, and
\begin{align*}
\mathbf{r} = (\mathbf{r}_{2, 1}^{\circ m_{2} m_{1}}) \circ \cdots \circ ( \mathbf{r}_{\ell-1, 1}^{\circ m_{\ell-1} m_{1}} \circ \cdots \circ \mathbf{r}_{\ell-1, \ell-2}^{\circ m_{\ell-1} m_{\ell-2}} ) \circ ( \mathbf{r}_{\ell, 1}^{\circ m_\ell m_{1}} \circ \cdots \circ \mathbf{r}_{\ell, \ell-1}^{\circ m_\ell m_{\ell-1}} ).
\end{align*}
Here the homogeneous degree $\Lambda(S_{\cc}(\beta_t) ,S_{\cc}(\beta_s))$ $(t>s)$ of the $\mathrm{R}$-matrix $\mathbf{r}_{t,s}$ coincides with $-(\beta_t,\beta_s)$. As in quantum affine algebra
cases, for each $\mathbf{r}_{t,s}$, we have the $\mathbb{Z}_{\ge 0}$-invariant of the $\mathrm{R}$-matrix
$$\mathfrak{d}(S_{\cc}(\beta_t) ,S_{\cc}(\beta_s)) \mathbin{:=} \frac{1}{2}\bl\Lambda(S_{\cc}(\beta_t) ,S_{\cc}(\beta_s)) + \Lambda(S_{\cc}(\beta_s) ,S_{\cc}(\beta_t)) \br$$
indicating whether $S_{\cc}(\beta_t) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{\cc}(\beta_s)$ is simple or not. Note that, in general, computing $\mathfrak{d}(M,N)$ is quite difficult, when
$M$ and $N$ are simple $R$-modules such that one of them is affreal.
\medskip
In \cite{KKK18A}, Kang-Kashiwara-Kim constructed the \emph{generalized Schur-Weyl functor} $\mathscr{F}_J$
from the category $R^J\text{-}\mathrm{gmod}$ of the quiver Hecke algebra $R^J$
to the category $\mathscr{C}_{\widehat{\g}}$.
Here the quiver Hecke algebra $R^J$ and the monoidal functor $\mathscr{F}_J$
are determined by the \emph{duality datum} (\cite[Section 4]{KKOP21C}), a set of real simple modules $\{ V^j \}_{j \in J}$ in $\mathscr{C}_{\widehat{\g}}$, and the denominators of $\mathrm{R}$-matrices $(\mathbf{r}_{V^i,V^j})_{i,j \in J}$.
When the duality datum \emph{arises} from a $\mathrm{Q}$-datum $\mathcal{Q}$ for $\mathfrak{g}$, the quiver Hecke algebra $R^\mathcal{Q}$ is associated with $\mathsf{g}$ and the functor $\mathscr{F}_\mathcal{Q}$ gives an equivalence of categories $\mathscr{C}^\mathcal{Q}_{\widehat{\g}}$ and $R^\mathsf{g}\text{-}\mathrm{mod}$
for each pair $(\mathfrak{g},\mathsf{g})$ in~\eqref{eq: Gg} and any $\mathrm{Q}$-datum $\mathcal{Q}$ of $\mathfrak{g}$ \cite{KKK13b,KKKO16,KO19,OS19,Fuj17,Nao21,KKOP21C}. Note that $\mathscr{F}_\mathcal{Q}$ (i)
preserves $\mathrm{R}$-matrices in both categories and their invariants, and (ii) sends
the set of fundamental modules $\mathsf{V}_\mathcal{Q}$ in $\mathscr{C}_{\widehat{\g}}$ to
$\mathsf{S}_\mathcal{Q} \mathbin{:=} \{ S_\mathcal{Q}(\beta)\mathbin{:=} S_{[\mathcal{Q}]}(\beta) \}_{\beta \in \Phi^+_\mathfrak{g}}$,
where $[\mathcal{Q}]$ is a commutation class of $w_0$ \emph{adapted to $\mathcal{Q}$}. Hence we can obtain information about $\mathfrak{d}( S_{\mathcal{Q}}(\alpha), S_{\mathcal{Q}}(\beta) )$ in $R^\mathsf{g}\text{-}\mathrm{gmod}$ from $\widetilde{\mathsf{C}}(q)$ of $\mathfrak{g}$. Thus, roughly speaking, we can understand $\widetilde{\mathsf{C}}(q)$ as encoding the denominator formulae among the cuspidal modules in $\mathsf{S}_\mathcal{Q}$.
To sum up, the quantum Cartan matrices $\mathsf{C}(q)$, which are the specializations of $\mathsf{C}(q,t)$ at $t=1$, or rather their inverses $\widetilde{\mathsf{C}}(q)$, \emph{control} the representation theories of the quantum affine algebras $U'_q({\widehat{\g}})$ for all $\mathfrak{g}$ and those of the quiver Hecke algebras $R^\mathsf{g}$ for simply-laced types $\mathsf{g}$:
\begin{align} \label{eq: right wing}
\raisebox{2.3em}{ \xymatrix@!C=9.3ex@R=1ex{ &&&& \mathscr{C}_{\widehat{\g}} & \text{ for any $\mathfrak{g}$}\\
&&\mathsf{C}(q,t) \ar@{.>}[r]^{t=1} & \mathsf{C}(q) \ar@{~>}[dr]_{\text{controls}}\ar@{~>}[ur]^{\text{controls}}\\
&&&&{\rm Rep}(R^\mathsf{g}) & \text{ only for $\mathsf{g}$}.
}}
\end{align}
In this paper, \emph{regardless of type,}
we prove that (a) the matrix $\mathsf{C}(q,t)|_{q=1}$, denoted by $\underline{\mathsf{C}}(t)$ and called the \emph{$t$-quantized Cartan matrix},
and (b) a (combinatorial) AR-quiver $\Gamma_Q$ associated with any Dynkin quiver $Q=(\triangle_\mathfrak{g},\xi)$ inform us of the $\mathbb{Z}$-invariants of $\mathrm{R}$-matrices and related properties for pairs of cuspidal modules $(S_{Q}(\alpha), S_{Q}(\beta))$ over the quiver Hecke algebra $R^\mathfrak{g}$, where $[Q]$ is the commutation class of $w_0 \in \sfW_\mathfrak{g}$ \emph{adapted to $Q$}
(Definition~\ref{def: adapted}).
Since $\underline{\mathsf{C}}(t)$ can be obtained from $\mathsf{C}(q)$ by just replacing $q$ with $t$
when $\mathfrak{g}$ is simply-laced, we can say that $\underline{\mathsf{C}}(t)$ controls $\mathscr{C}_{\widehat{\mathsf{g}}}$ and hence we can complete the left wing of~\eqref{eq: right wing} as follows:
\begin{align} \label{eq: 2 sided wing}
\raisebox{2.3em}{ \xymatrix@!C=9.3ex@R=1ex%
{{\rm Rep}(R^\mathfrak{g}) &&&& \mathscr{C}_{\widehat{\g}} & \text{ for any $\mathfrak{g}$}\\
&\ar@{~>}[dl]^-(.4){\;\text{controls}}\ar@{~>}[ul]_(.4){\text{controls}} \underline{\mathsf{C}}(t) &\mathsf{C}(q,t) \ar@{.>}[l]_{q=1}\ar@{.>}[r]^{t=1} & \mathsf{C}(q) \ar@{~>}[dr]_(.4){\text{controls}}\ar@{~>}[ur]^{\text{controls\;}}\\
\mathscr{C}_{\widehat{\mathsf{g}}} &&&&{\rm Rep}(R^\mathsf{g}) & \text{ only for $\mathsf{g}$}.
}}
\end{align}
As we explained, when $\mathfrak{g}$ is of simply-laced type and $Q$ is a Dynkin quiver of the same type, $\underline{\mathsf{C}}(t) = \mathsf{C}(q)|_{q=t}$ indicates whether
$S_{Q}(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(\beta)$ is simple or not. More precisely, via the bijection $ I \times \mathbb{Z} \supset \widehat{\Dynkin}_0 \overset{\phi_Q}{\longrightarrow} \Phi_\mathfrak{g}^+ \times \mathbb{Z}$, we have
$$ \mathfrak{d}( S_{Q}(\beta), S_{Q}(\alpha))
= \text{ the coefficient of $t^{|p-s|-1}$ in $\widetilde{\mathsf{C}}(t)_{i,j}$},
$$
where $\phi_Q^{-1}(\alpha,0)=(i,p)$ and $\phi_Q^{-1}(\beta,0)=(j,s)$. That is, by only reading (I) relative positions of $\alpha$ and $\beta$ in $\Gamma_Q$ and (II)
the coefficients of $\widetilde{\mathsf{C}}(t)_{i,j}$, we can obtain the $\mathbb{Z}$-invariant $\mathfrak{d}(S_{Q}(\beta), S_{Q}(\alpha))$. This kind of strategy was achieved in
\cite{Oh16,Oh17,Oh18,OS19A} by developing statistics on $\Gamma_\mathcal{Q}$, which also depends only on the relative positions of $\alpha$ and $\beta$ in $\Gamma_\mathcal{Q}$.
\smallskip
We also apply the aforementioned strategy to non simply-laced types $\mathfrak{g}$.
For this purpose, in the previous paper \cite{KO22} of the authors, we extended the notion of Dynkin quivers $Q$ to the non simply-laced types.
We then constructed the combinatorial AR-quiver $\Gamma_Q$ for $Q$ of any type $\mathfrak{g}$,
which generalizes the classical AR-quiver $\Gamma_Q$ of the path algebra $\mathbb{C} Q$ for a simply-laced type $Q$ in an aspect of combinatorics. More precisely,
we (i) proved that there are reduced expressions ${\underline{w}}_0$ of $w_0$ adapted to $Q$ and that such reduced expressions form a single commutation class $[Q]$, and (ii) introduced
the combinatorial AR-quiver $\Gamma_Q$, which realizes the \emph{convex partial order} $\prec_Q$ on the set of positive roots $\Phi^+_\mathfrak{g}$ (see~\eqref{eq: convex partial order}
and Theorem~\ref{thm: OS17}).
We also understand $\Gamma_Q$ as a \emph{heart full-subquiver} by constructing the \emph{repetition quiver} $\widehat{\Dynkin}=(\widehat{\Dynkin}_0,\widehat{\Dynkin}_1)$, which depends \emph{only} on $\mathfrak{g}$ and whose set of vertices $\widehat{\Dynkin}_0$ is in bijection with $\Phi_\mathfrak{g}^+ \times \mathbb{Z}$. We remark here that, when $\mathfrak{g}$ is of simply-laced type, $\widehat{\Dynkin}$
is isomorphic to the AR-quiver of the bounded derived category of the path algebra $\mathbb{C} Q$ \cite{Ha87} and $\widehat{\Dynkin}_0$ is a subset of $I \times \mathbb{Z}$.
One of the main theorems in \cite{KO22} is that the inverse $\widetilde{\underline{\mathsf{B}}}(t)$ of $\underline{\mathsf{B}}(t) \mathbin{:=} \underline{\mathsf{C}}(t) \times \mathsf{D}^{-1}$ can be read from any AR-quiver $\Gamma_Q$ of the same type $\mathfrak{g}$. Here $\mathsf{D}={\rm diag}(\ms{1mu}\mathsf{d}_i \ | \ i \in I)$
is a diagonal matrix symmetrizing the Cartan matrix $\mathsf{C}$.
\smallskip
In this paper, we first show that
$\widetilde{\underline{\mathsf{B}}}(t)$ of type $\mathfrak{g}$ can be obtained from $\widetilde{\underline{\mathsf{B}}}(t)$ of type $\mathsf{g}$ via the Dynkin diagram folding $\sigma$ on $\triangle_\mathsf{g}$ yielding $\triangle_\mathfrak{g}$ (Theorem~\ref{thm: tbij folding}). Since $\widetilde{\underline{\mathsf{B}}}(t)$ of type $\mathsf{g}$ can be read from any $\Gamma_Q$ of the same type $\mathsf{g}$, Theorem~\ref{thm: tbij folding} tells us that
combinatorial properties of $\Gamma_Q$ of type $\mathfrak{g}$ are related to the ones of AR-quivers of simply-laced type. In Section~\ref{sec: Labeling BCFG}, we show that
the labeling of $\Gamma_Q$ of type $\mathfrak{g}=B_n$, $C_n$, $F_4$ and $G_2$ via $\Phi^+_\mathfrak{g}$ can be obtained from the one of $\Gamma_Q$ of type
$\mathsf{g}' = D_{n+1}$, $A_{2n-1}$, $E_6$ and $D_4$ via $\Phi^+_{\mathsf{g}'}$ by simple surgery, respectively:
\begin{align} \label{eq: Gg'}
(\mathfrak{g},\mathsf{g}') = \ (C_n,A_{2n-1}), \ (B_n,D_{n+1}),\ (F_4,E_6),\ (G_2,D_4).
\end{align}
Note that $\mathfrak{g}$ is realized as a non-trivial Lie subalgebra of $\mathsf{g}'$ via the Dynkin diagram folding $\sigma$ (\cite[Proposition 7.9]{Kac}). Based on the combinatorial results in Section~\ref{sec: Labeling BCFG}, we show that the statistics, defined in \cite{Oh16,Oh17,Oh18} for simply-laced types,
are also well-defined and do not depend on the choice of Dynkin quiver in any type. Hence we can obtain the set of polynomials, called the \emph{degree polynomials}, $(\mathfrak{d}_{i,j}(t))_{i,j \in I} \subset \mathbb{Z}_{\ge 0}[t]$, of any type $\mathfrak{g}$ (Proposition~\ref{eq: d well-defined}). Then we show in Theorem~\ref{thm:Main} that
\begin{align}\label{eq: relation}
\mathfrak{d}_{i,j}(t) + \delta_{i,j^*}\, \ms{1mu}\mathsf{d}_i\, t^{{\ms{1mu}\mathsf{h}}-1} = \widetilde{\underline{\mathsf{B}}}_{i,j}(t) + t^{\ms{1mu}\mathsf{h}} \widetilde{\underline{\mathsf{B}}}_{i,j^*}(t),
\end{align}
when $\mathfrak{g}$ is of classical type or type $E_6$ (see Remark~\ref{rmk: remained type} for the remaining types). Here ${\ms{1mu}\mathsf{h}}$ is the Coxeter number of $\mathfrak{g}$ and $*:I \to I$
is an involution on $I$ induced by the longest element $w_0$.
\smallskip
\noindent \textbf{Main Theorem}\;(Theorem~\ref{thm: determimant}).\ Let $Q$ be a Dynkin quiver of any type $\mathfrak{g}$. Then, for any $\alpha,\beta \in \Phi^+_\mathfrak{g}$, we have
\begin{align} \label{eq: de = tde}
\mathfrak{d}(S_{Q}(\alpha),S_{Q}(\beta)) = \text{ the coefficient of $t^{|p-s|-1}$ in $\widetilde{\underline{\mathsf{B}}}_{i,j}(t)$},
\end{align}
where $\phi_Q(i,p)=(\alpha,0)$ and $\phi_Q(j,s)=(\beta,0)$. Furthermore, we have the following:
\begin{enumerate}[{\rm (a)}]
\item The head of $S_{Q}(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(\beta)$ is simple and isomorphic to
a convolution product of mutually commuting cuspidal modules in $\mathsf{S}_{Q}$, under the assumption that
$\beta \not\prec_Q \alpha$.
\item If the coefficient of $t^{|p-s|-1}$ in $\mathfrak{d}_{i,j}(t)$ is strictly larger than $\max(\ms{1mu}\mathsf{d}_i,\ms{1mu}\mathsf{d}_j)$, then the composition length $\ell(S_{Q}(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(\beta))$ of $S_{Q}(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(\beta)$ is strictly larger than $2$.
\item If the coefficient of $t^{|p-s|-1}$ in $\mathfrak{d}_{i,j}(t)$ is equal to $\max(\ms{1mu}\mathsf{d}_i,\ms{1mu}\mathsf{d}_j)$, $\ell(S_{Q}(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(\beta))$ is $2$.
\item If the coefficient of $t^{|p-s|-1}$ in $\mathfrak{d}_{i,j}(t)$ is equal to $0$, $S_{Q}(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(\beta)$ is simple.
\end{enumerate}
\smallskip
We prove \textbf{Main Theorem} by using (i) the combinatorial properties of AR-quivers investigated in Sections~\ref{Sec: classical AR} and~\ref{sec: Labeling BCFG}, (ii) the study
on dual PBW-vectors and their categorifications in \cite{BKM12} and (iii) the theories related to $\mathrm{R}$-matrices developed in \cite{KKKO18,KKOP18} by the authors and their collaborators.
\smallskip
Note that \textbf{Main Theorem} tells us that, for a Dynkin quiver $Q=(\triangle,\xi)$ with a \emph{source} $i$, we have
\begin{align} \label{eq: reflection}
\mathfrak{d}(S_{Q}(\alpha),S_{Q}(\beta)) = \mathfrak{d}(S_{s_iQ}(s_i\alpha),S_{s_iQ}(s_i\beta)) \quad \text{
if $\alpha\not=\alpha_i$ and $\beta \not= \alpha_i$,}
\end{align}
where $\alpha_i$ is the $i$-th simple root and $s_iQ=(\triangle,s_i\xi)$ is \emph{another} Dynkin quiver obtained from $Q$ (see~\eqref{eq: si height}).
Since $\mathsf{S}_{Q}$ and $\mathsf{S}_{s_iQ}$ categorify two different families of dual PBW-vectors, which are connected via the braid group action of Lusztig and Saito, we can expect the existence of
a functor proving~\eqref{eq: reflection}. When $\mathfrak{g}$ is of simply-laced type, Kato constructed such a functor, called the \emph{Saito reflection functor}, in \cite{Kato12,Kato17} by using a geometric approach.
We remark here that, compared with the simply-laced types,
there is no fully-developed geometry for non simply-laced types $\mathfrak{g}$, and hence we cannot apply the geometric approach of \cite{VV09,Kato12,Kato17,Fuj19} at this moment.
Thus we adopt an algebro-combinatorial approach in this paper.
\medskip
This paper is organized as follows. In Section~\ref{Sec: backgroud}, we review the necessary background of this paper. In particular, we recall the notion of Dynkin quivers and the statistics defined on pairs of positive roots with respect to the convex partial orders on $\Phi^+$. In Section~\ref{Sec: t-Cartan}, we first review the results in \cite{KO22} about the computation of $\widetilde{\underline{\mathsf{B}}}(t)=(\widetilde{\underline{\mathsf{B}}}_{i,j}(t))$ as a matrix with entries in $\mathbb{Z}( \hspace{-.3ex} (t) \hspace{-.3ex} )$.
Then we prove that $\widetilde{\underline{\mathsf{B}}}(t)$ of non simply-laced type $\mathfrak{g}$ can be obtained from
the one of simply-laced type $\mathsf{g}$. In Section~\ref{Sec: classical AR}, we review the results in~\cite{Oh16,Oh17,Oh18} about the statistics and the degree polynomials
of AR-quivers $\Gamma_Q$'s when $Q$ is of simply-laced type. In Section~\ref{sec: Labeling BCFG}, we develop the BCFG-analogues of Section~\ref{Sec: classical AR}
and observe the similarities among $\Gamma_Q$'s of different types. In particular, we show that the labeling of AR-quivers of non simply-laced type via $\Phi^+$
can be obtained from the one of simply-laced type by simple surgery. In Section~\ref{sec: Degree poly}, we define the notion of degree polynomials for each Dynkin quiver $Q$
and show that it does not depend on the choice of $Q$ and hence is well-defined for any finite type $\mathfrak{g}$. Then we show that~\eqref{eq: relation} holds for the classical types and type $E_6$.
We devote a large part of this paper to Sections~\ref{Sec: classical AR},~\ref{sec: Labeling BCFG} and~\ref{sec: Degree poly} in order to exhibit many computations and examples.
In Section~\ref{Sec: quiver Hecke}, we recall the quiver Hecke algebra, the $\mathrm{R}$-matrices and the $\mathbb{Z}$-invariants related to them. We also review the categorification of PBW-vectors via cuspidal modules. In Section~\ref{sec: Cuspidal}, we prove \textbf{Main Theorem} by a case-by-case method
based on the results in the previous sections. More precisely, we classify the pairs $(\alpha,\beta)$ by their relative positions in $\Gamma_Q$, and hence by their statistics, and prove~\eqref{eq: de = tde}
according to the relative position and the type of $\mathfrak{g}$.
\begin{convention} Throughout this paper, we follow the following conventions.
\begin{enumerate}[{\rm (1)}]
\item For a statement $\mathtt{P}$, $\delta(\mathtt{P})$ is $1$ or $0$
according to whether $\mathtt{P}$ is true or false. In particular, we set $\delta_{i,j}
\mathbin{:=} \delta(i = j)$ $($Kronecker's delta$)$.
\item For a finite set $A$, we denote by $|A|$ the number of elements in $A$.
\item We sometimes use $\blacktriangle$
for a Dynkin diagram of non-simply-laced type
to distinguish it from a Dynkin diagram of simply-laced type.
\end{enumerate}
\end{convention}
\section{Backgrounds} \label{Sec: backgroud}
In this section, we review the necessary background of this paper.
\subsection{Finite Cartan data} Let $I$ be an index set. A \emph{Cartan datum} is a quintuple
$$(\mathsf{C},\mathsf{P},\Pi,\mathsf{P}^\vee,\Pi^\vee)$$ consisting of
\begin{enumerate}[{\rm (1)}]
\item a symmetrizable generalized Cartan matrix $\mathsf{C}=(\mathsf{c}_{i,j})_{i,j\in I}$,
\item a free abelian group $\mathsf{P}$, called the \emph{weight lattice},
\item $\Pi=\{ \alpha_i \ | \ i \in I \}$, called the set of \emph{simple roots},
\item $\mathsf{P}^\vee \mathbin{:=} \operatorname{Hom}_{\mathbb{Z}}(\sfP,\mathbb{Z})$, called the \emph{coweight lattice} and
\item $\Pi^\vee=\{ h_i \ | i \in I \}$, called the set of \emph{simple coroots}
\end{enumerate}
satisfying
\begin{enumerate}[{\rm (i)}]
\item $\langle h_i,\alpha_j\rangle = \mathsf{c}_{i,j}$ for $i,j\in I$,
\item $\Pi$ is linearly independent over $\mathbb{Q}$,
\item for each $i \in I$, there exists $\varpi_i \in \sfP$, called the \emph{fundamental weight}, such that $\langle h_i,\varpi_j \rangle=\delta_{i,j}$ for all $j \in I$ and
\item there exists a $\mathbb{Q}$-valued symmetric bilinear form $( \cdot,\cdot)$ on $\sfP$ such that $\langle h_i,\lambda\rangle = \dfrac{2 (\alpha_i , \lambda) }{ (\alpha_i , \alpha_i ) }$ and
$(\alpha_i,\alpha_i) \in \mathbb{Q}_{>0}$ for any $i$ and $ \lambda \in \sfP$.
\end{enumerate}
We set $\mathfrak{h} \mathbin{:=} \mathbb{Q} \otimes_\mathbb{Z} \wl^\vee$, $\sfQ \mathbin{:=} \bigoplus_{ i \in I} \mathbb{Z}\alpha_i$ and $\sfQ^+ \mathbin{:=} \sum_{i \in I} \mathbb{Z}_{\ge 0} \alpha_i$,
$\sfP^+ \mathbin{:=} \{ \lambda \in \sfP \ | \ \langle h_i,\lambda \rangle \ge 0 \text{ for any } i \in I\}$ and ${\rm ht}(\beta)=\sum_{i \in I} k_i$ for $\beta = \sum_{i \in I} k_i\alpha_i \in \sfQ^+$.
We denote by $\Phi$ the set of \emph{roots} and by $\Phi^+$ (resp.\ $\Phi^-$) the set of \emph{positive} (resp.\ \emph{negative}) \emph{roots}.
\begin{definition} \hfill
\begin{enumerate}[{\rm (a)}]
\item
The \emph{multiplicity} ${\rm mul}(\beta)$ of $\beta =\sum_{i \in I}n_i\alpha_i \in \sfQ^+$ is defined by
$${\rm mul}(\beta) \mathbin{:=} \max\st{n_i \mid i \in I}.$$
\item The \emph{support} ${\rm supp}(\beta)$ of $\beta =\sum_{i \in I}n_i\alpha_i \in \sfQ^+$ is defined by
$${\rm supp}(\beta) \mathbin{:=}\{ i \in I \ | \ n_i \ne 0 \}.$$
\item For $i \in I$ and $\beta =\sum_{i \in I}n_i\alpha_i \in \sfQ^+$, we set ${\rm supp}_i(\beta) \mathbin{:=} n_i$.
\item For $\alpha,\beta \in \Phi$, we set
$$
p_{\beta,\alpha} \mathbin{:=} \max\st{p\in \mathbb{Z} \mid \beta -p \alpha \in \Phi }.
$$
\end{enumerate}
\end{definition}
\emph{Throughout this paper,
we only consider a finite Cartan datum} and realize the \emph{root lattice} $\sfQ \subset \bigoplus_{s \in J} \mathbb{R} \epsilon_s$ with an orthonormal basis $\{ \epsilon_s \}_{s\in J}$. For instance, if $\sfQ$ is of type $A_{n}$,
$\sfQ \subset \mathop{\mbox{\normalsize$\bigoplus$}}\limits_{i=1}^{n+1} \mathbb{Z} \epsilon_i$ and $\alpha_i = \epsilon_i-\epsilon_{i+1}$ for $1 \le i \le n$.
For each finite Cartan datum, we take the non-degenerate symmetric bilinear form $(\cdot, \cdot)$ on $\mathfrak{h}^*$
such that $(\alpha,\alpha)=2$ for short roots $\alpha$ in $\Phi^+$.
In this paper, a {\em Dynkin diagram} $\triangle$ associated with a Cartan datum is
a graph whose set of vertices is $I$ and
whose edges join the vertices $i$ and $j$
such that $(\alpha_i,\alpha_j)<0$. We assign $\ms{1mu}\mathsf{d}_i \mathbin{:=} (\alpha_i , \alpha_i) /{2} = (\alpha_i, \varpi_i) \in\mathbb{Z}_{>0}$ to each vertex $i\in I$.
We denote by $\triangle_0$ the set of vertices and $\triangle_1$ the set of edges.
Here is the list of Dynkin diagrams:
\eq
&&\label{fig:Dynkin}\\
&&\ba{ccc}
&A_n \ \xymatrix@R=0.5ex@C=4ex{ *{\circ}<3pt> \ar@{-}[r]_<{ 1 \ \ } & *{\circ}<3pt> \ar@{-}[r]_<{ 2 \ \ } & *{\circ}<3pt> \ar@{.}[r]
&*{\circ}<3pt> \ar@{-}[r]_>{ \ \ n} &*{\circ}<3pt> \ar@{}[l]^>{ n-1} }, \quad
B_n \ \xymatrix@R=0.5ex@C=4ex{ *{\circled{$\circ$}}<3pt> \ar@{-}[r]_<{ 1 \ \ } & *{\circled{$\circ$}}<3pt> \ar@{-}[r]_<{ 2 \ \ } & *{\circled{$\circ$}}<3pt> \ar@{.}[r]
&*{\circled{$\circ$}}<3pt> \ar@{-}[r]_>{ \ \ n} &*{\circ}<3pt> \ar@{}[l]^>{ n-1} }, \quad
C_n \ \xymatrix@R=0.5ex@C=4ex{ *{\circ}<3pt> \ar@{-}[r]_<{ 1 \ \ } & *{\circ}<3pt> \ar@{-}[r]_<{ 2 \ \ } & *{\circ}<3pt> \ar@{.}[r]
&*{\circ}<3pt> \ar@{-}[r]_>{ \ \ n} &*{\circled{$\circ$}}<3pt> \ar@{}[l]^>{ n-1} }, \ \allowdisplaybreaks\\[2ex]
&D_n \ \ \raisebox{1em}{ \xymatrix@R=2ex@C=4ex{ &&& *{\circ}<3pt> \ar@{-}[d]_<{ n-1 } \\
*{\circ}<3pt> \ar@{-}[r]_<{ 1 \ \ } & *{\circ}<3pt> \ar@{-}[r]_<{ 2 \ \ } & *{\circ}<3pt> \ar@{.}[r]
&*{\circ}<3pt> \ar@{-}[r]_>{ \ \ n} &*{\circ}<3pt> \ar@{}[l]^>{ n-2} }}, \
E_{6} \ \ \raisebox{1em}{ \xymatrix@R=2ex@C=4ex{ && *{\circ}<3pt> \ar@{-}[d]^<{ 2\ \ } \\
*{\circ}<3pt> \ar@{-}[r]_<{ 1 \ \ } & *{\circ}<3pt> \ar@{-}[r]_<{ 3 \ \ } & *{\circ}<3pt> \ar@{-}[r]_<{ 4 \ \ }
&*{\circ}<3pt> \ar@{-}[r]_>{ \ \ 6} &*{\circ}<3pt> \ar@{}[l]^>{ 5} } }, \
E_{7} \ \ \raisebox{1em}{ \xymatrix@R=2ex@C=3ex{ && *{\circ}<3pt> \ar@{-}[d]^<{ 2\ \ } \\
*{\circ}<3pt> \ar@{-}[r]_<{ 1 \ \ } & *{\circ}<3pt> \ar@{-}[r]_<{ 3 \ \ } & *{\circ}<3pt> \ar@{-}[r]_<{ 4 \ \ } & *{\circ}<3pt> \ar@{-}[r]_<{ 5 \ \ }
&*{\circ}<3pt> \ar@{-}[r]_>{ \ \ 7} &*{\circ}<3pt> \ar@{}[l]^>{ 6} } }, \allowdisplaybreaks \\[4ex]
& E_{8} \ \ \raisebox{1em}{ \xymatrix@R=2ex@C=3ex{ && *{\circ}<3pt> \ar@{-}[d]^<{ 2\ \ } \\
*{\circ}<3pt> \ar@{-}[r]_<{ 1 \ \ } & *{\circ}<3pt> \ar@{-}[r]_<{ 3 \ \ } & *{\circ}<3pt> \ar@{-}[r]_<{ 4 \ \ } & *{\circ}<3pt> \ar@{-}[r]_<{ 5 \ \ } & *{\circ}<3pt> \ar@{-}[r]_<{ 6 \ \ }
&*{\circ}<3pt> \ar@{-}[r]_>{ \ \ 8} &*{\circ}<3pt> \ar@{}[l]^>{ 7} } }, \quad
F_{4} \ \ \xymatrix@R=0.5ex@C=4ex{ *{\circled{$\circ$}}<3pt> \ar@{-}[r]_<{ 1 \ \ } & *{\circled{$\circ$}}<3pt> \ar@{-}[r]_<{ 2 \ \ }
&*{\circ}<3pt> \ar@{-}[r]_>{ \ \ 4} &*{\circ}<3pt> \ar@{}[l]^>{ 3} }, \quad
G_2 \ \ \xymatrix@R=0.5ex@C=4ex{ *{\circ}<3pt> \ar@{-}[r]_<{ 1 \ \ }
& *{\circled{$\odot$}}<3pt> \ar@{-}[l]^<{ \ \ 2 } }.
\ea\nonumber
\eneq
Here $ \circ_{j}$ implies $(\alpha_j,\alpha_j)=2$, $\circled{$\circ$}_j$ implies $(\alpha_j,\alpha_j)=4$, and
$\circled{$\odot$}_j$ implies $(\alpha_j,\alpha_j)=6$.
For $i,j \in I$, $d(i,j)$ denotes the number of edges between $i$ and $j$ in $\triangle$ as a graph.
By the diagonal matrix $\mathsf{D} = {\rm diag}(\ms{1mu}\mathsf{d}_i \ | \ i \in I)$, $\mathsf{C}$ is \emph{symmetrizable}; i.e.,
\begin{align*}
\text{the matrices $\overline{\mathsf{B}}\mathbin{:=}\mathsf{D}\sfC=\bl(\alpha_i,\alpha_j)\br_{i,j\in I}$ and $\underline{\mathsf{B}}\mathbin{:=}\sfC\mathsf{D}^{-1}=\bl(\alpha^\vee_i,\alpha^\vee_j)\br_{i,j\in I}$ are symmetric, }
\end{align*}
where $\alpha^\vee_i=(\ms{1mu}\mathsf{d}_i)^{-1}\alpha_i$.
For $\beta \in \Phi$, we set $\ms{1mu}\mathsf{d}_\beta \mathbin{:=} (\beta , \beta) /{2}$.
For $\alpha,\beta\in\Phi$ such that $\gamma=\alpha+\beta \in \Phi$, we have
\begin{align} \label{eq: palbe}
p_{\beta,\alpha} = \begin{cases}
2 & \text{ if } \ms{1mu}\mathsf{d}_\gamma=3 \text{ and } \ms{1mu}\mathsf{d}_{\alpha}= \ms{1mu}\mathsf{d}_{\beta} =1,\\
1 & \text{ if } \ms{1mu}\mathsf{d}_\gamma=2 \text{ and } \ms{1mu}\mathsf{d}_{\alpha}= \ms{1mu}\mathsf{d}_{\beta} =1,\\
1 & \text{ if $\mathfrak{g}$ is of type $G_2$ and } \ms{1mu}\mathsf{d}_\alpha=\ms{1mu}\mathsf{d}_{\beta}= \ms{1mu}\mathsf{d}_{\gamma} =1,\\
0 & \text{otherwise.}
\end{cases}
\end{align}
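For example, in type $B_2$ (so that $\ms{1mu}\mathsf{d}_{\alpha_1}=2$ and $\ms{1mu}\mathsf{d}_{\alpha_2}=1$), taking $\alpha=\alpha_2$ and $\beta=\alpha_1+\alpha_2$ gives $\gamma=\alpha_1+2\alpha_2$ with $\ms{1mu}\mathsf{d}_\gamma=2$ and $\ms{1mu}\mathsf{d}_{\alpha}=\ms{1mu}\mathsf{d}_{\beta}=1$, so that~\eqref{eq: palbe} yields $p_{\beta,\alpha}=1$; indeed $\beta-\alpha=\alpha_1\in\Phi$ while $\beta-2\alpha=\alpha_1-\alpha_2\notin\Phi$. On the other hand, for $\alpha=\alpha_1$ and $\beta=\alpha_2$ we have $\gamma=\alpha_1+\alpha_2$ with $\ms{1mu}\mathsf{d}_\gamma=1$, so that $p_{\beta,\alpha}=0$, in agreement with $\alpha_2-\alpha_1\notin\Phi$.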
\subsection{Convex orders} \label{subsec: convex}
We denote by $\sfW$ the Weyl group associated to the finite Cartan datum. It is generated by the simple reflections
$\{ s_i \mid i \in I \}$: $s_i \lambda = \lambda - \langle h_i,\lambda \rangle \alpha_i$ $(\lambda \in \sfP)$.
Note that there exists a unique element $w_0 \in \sfW$ whose length $\ell(w_0)$ is the largest,
and $w_0$ induces the Dynkin diagram automorphism
$^*\cl I \to I$ sending $i \mapsto i^*$ , where $w_0(\alpha_i)=-\alpha_{i^*}$.
Let ${\underline{w}}_0 \mathbin{:=} s_{i_1} \cdots s_{i_\ell}$ be a reduced expression of $w_0 \in \sfW$ and define
\begin{align}\label{eq: beta_k uw_0}
\beta^{{\underline{w}}_0}_k \mathbin{:=} s_{i_1} \cdots s_{i_{k-1}}(\alpha_{i_k}) \quad \text{ for } k=1,\ldots,\ell.
\end{align}
Then we have $\Phi^+ = \{ \beta^{{\underline{w}}_0}_k \ | \ 1 \le k \le \ell\}$ and $|\Phi^+|=\ell(w_0)$. It is well-known that the total order $<_{{\underline{w}}_0}$ on $\Phi^+$, defined by
$\beta^{{\underline{w}}_0}_a <_{{\underline{w}}_0} \beta^{{\underline{w}}_0}_b $ for $a <b$,
is \emph{convex} in the following sense: if $\alpha,\beta\in\Phi^+$
satisfy $\alpha<_{{\underline{w}}_0}\beta$ and $\alpha+\beta \in \Phi^+ $, then we have
$ \alpha <_{{\underline{w}}_0} \alpha+\beta <_{{\underline{w}}_0} \beta $.
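For instance, in type $A_2$ with ${\underline{w}}_0=s_1s_2s_1$, the formula \eqref{eq: beta_k uw_0} gives
$\beta^{{\underline{w}}_0}_1=\alpha_1$, $\beta^{{\underline{w}}_0}_2=\alpha_1+\alpha_2$ and $\beta^{{\underline{w}}_0}_3=\alpha_2$;
the resulting total order $\alpha_1<_{{\underline{w}}_0}\alpha_1+\alpha_2<_{{\underline{w}}_0}\alpha_2$ is indeed convex,
since the only sum of two positive roots which is again a positive root, namely $\alpha_1+\alpha_2$, lies between its two summands.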
Two reduced expressions ${\underline{w}}_0$ and ${\underline{w}}'_0$ of $w_0$ are said to be \emph{commutation equivalent}, denoted by ${\underline{w}}_0 \sim {\underline{w}}'_0$, if
${\underline{w}}'_0$ can be obtained from ${\underline{w}}_0$ by applying the \emph{commutation relations} $s_is_j=s_js_i$ $(d(i,j)>1)$. Note that this relation $\sim$ is an equivalence relation,
and an equivalence class under $\sim$ is called a \emph{commutation class}. We denote by $[{\underline{w}}_0]$ the commutation class of ${\underline{w}}_0$.
For a commutation class $\cc$ of $w_0$, we define the \emph{convex partial order} $\prec_{\cc}$ on $\Phi^+$ by:
\begin{align} \label{eq: convex partial order}
\alpha \prec_{\cc} \beta \quad \text{ if and only if } \quad \alpha <_{{\underline{w}}_0} \beta \quad \text{for any $ {\underline{w}}_0 \in\cc$.}
\end{align}
For a commutation class $[{\underline{w}}_0]$ of $w_0$ and $\alpha \in \Phi^+$, we define the \emph{$[{\underline{w}}_0]$-residue} of $\alpha$, denoted by $\mathrm{res}^{[{\underline{w}}_0]}(\alpha)$, to be $i_k \in I$
if $\beta^{{\underline{w}}_0}_k=\alpha$ with ${\underline{w}}_0=s_{i_1} \cdots s_{i_\ell}$. Note that this notion is well-defined; i.e., for any ${\underline{w}}_0' = s_{j_1} \cdots s_{j_\ell} \in [{\underline{w}}_0]$
with $\beta^{{\underline{w}}'_0}_t=\alpha$, we have $j_t=i_k$. Note that $$ (\alpha , \alpha) = (\alpha_i ,\alpha_i) \quad \text{ if } i= \mathrm{res}^{[{\underline{w}}_0]}(\alpha).$$
For a reduced expression ${\underline{w}}_0=s_{i_1}s_{i_2} \cdots s_{i_\ell}$, it is known that the expression ${\underline{w}}_0'\mathbin{:=} s_{i_2} \cdots s_{i_\ell}s_{i_1^*}$ is also a reduced expression
of $w_0$. This operation is sometimes referred to as a \emph{combinatorial reflection functor}, and we write $r_{i_1}{\underline{w}}_0 = {\underline{w}}'_0$. It also induces an operation on commutation classes of $w_0$ (i.e., $r_{i_1}[{\underline{w}}_0] = [r_{i_1}{\underline{w}}_0]$
is well-defined if there exists a reduced expression ${\underline{w}}_0' =s_{j_1}s_{j_2}\cdots s_{j_\ell} \in [{\underline{w}}_0]$ such that $j_1=i_1$).
The relations $[{\underline{w}}_0] \overset{r}{\sim} [r_i{\underline{w}}_0]$ for $i \in I$ generate an equivalence relation, called the \emph{reflection equivalence relation} $\overset{r}{\sim}$, on the
set of commutation classes of $w_0$. For a reduced expression ${\underline{w}}_0$ of $w_0$, the family of commutation classes $[\hspace{-0.3ex}[ {\underline{w}}_0 ]\hspace{-0.3ex}] \mathbin{:=} \{ [{\underline{w}}_0']\ \mid \ [{\underline{w}}_0'] \overset{r}{\sim} [{\underline{w}}_0] \}$
is called an \emph{$r$-cluster point}.
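For instance, in type $A_2$ the longest element $w_0$ has exactly two reduced expressions, $s_1s_2s_1$ and $s_2s_1s_2$; since no commutation relation applies, they form two distinct commutation classes $[s_1s_2s_1]$ and $[s_2s_1s_2]$. As $1^*=2$, we have $r_1(s_1s_2s_1)=s_2s_1s_2$, so the two classes are reflection equivalent and the $r$-cluster point $[\hspace{-0.3ex}[ s_1s_2s_1 ]\hspace{-0.3ex}]$ consists of both of them.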
\subsection{Statistics} \label{subsec: stat}
An element ${\underline{m}}=\seq{{\underline{m}}_\beta}$ of $\Ex$ is called an \emph{\ex}.
In this subsection, we give several notions concerning \exs,
which we need in the PBW description of simple $R$-modules
(e.g., see Theorem~\ref{thm: cuspidal} below).
For an \ex ${\underline{m}}$, we set ${\rm wt}({\underline{m}}) \mathbin{:=} \sum_{\beta\in\Phi^+} {\underline{m}}_\beta \beta \in
\sfQ^+$.
\begin{definition}[{cf.\ \cite{McNa15,Oh18}}] \label{def: bi-orders}
For a reduced expression ${\underline{w}}_0=s_{i_1}\cdots s_{i_\ell}$ and a commutation class $\cc$,
we define the partial orders $<^\mathtt{b}_{{\underline{w}}_0}$ and $\prec^\mathtt{b}_{\cc}$ on $\Ex$ as follows:
\begin{enumerate}[{\rm (i)}]
\item $<^\mathtt{b}_{{\underline{w}}_0}$ is the bi-lexicographical partial order induced by $<_{{\underline{w}}_0}$. Namely, ${\underline{m}}<^\mathtt{b}_{{\underline{w}}_0}{\underline{m}}'$ if
\begin{itemize}
\item ${\rm wt}({\underline{m}})={\rm wt}({\underline{m}}')$,
\item there exists $\alpha\in\Phi^+$ such that
${\underline{m}}_\alpha<{\underline{m}}'_\alpha$ and ${\underline{m}}_\beta={\underline{m}}'_\beta$ for any $\beta$ such that $\beta<_{{\underline{w}}_0}\alpha$, and
\item there exists $\eta\in\Phi^+$ such that
${\underline{m}}_\eta<{\underline{m}}'_\eta$ and ${\underline{m}}_\zeta={\underline{m}}'_\zeta$ for any $\zeta$ such that $\eta<_{{\underline{w}}_0}\zeta$.
\end{itemize}
\item \label{eq: crazy order} For \exs ${\underline{m}}$ and ${\underline{m}}'$, we define ${\underline{m}} \prec^\mathtt{b}_{\cc} {\underline{m}}'$ if the following condition is satisfied:
$${\underline{m}}<^\mathtt{b}_{{\underline{w}}_0} {\underline{m}}'\qt{for all ${\underline{w}}_0 \in \cc$.}$$
\end{enumerate}
\end{definition}
\smallskip
We say that an \ex ${\underline{m}}=\seq{{\underline{m}}_\beta} \in \mathbb{Z}_{\ge0}^{\Phi^+}$ is \emph{$\cc$-simple} if it is minimal with respect to the partial order $\prec^\mathtt{b}_{\cc}$. For a given $\cc$-simple \ex
$\underline{s}=\seq{\underline{s}_\beta} \in \mathbb{Z}_{\ge0}^{\Phi^+}$, we call a cover\footnote{Recall that a \emph{cover} of $x$ in a poset $P$ with partial order $\prec$ is an element $y \in P$ such that $x \prec y$ and there
does not exist $y' \in P$ such that $x \prec y' \prec y$.}
of $\underline{s}$ under $\prec^{\mathtt{b}}_{\cc}$ a \emph{$\cc$-minimal \ex for $\underline{s}$}. The \emph{$\cc$-degree} of an \ex ${\underline{m}}$, denoted by ${\operatorname{deg}}_{\cc}({\underline{m}})$, is the largest integer $k \ge 0$ such that
\begin{align} \label{eq: dist}
{\underline{m}}^{(0)} \prec^\mathtt{b}_{\cc} {\underline{m}}^{(1)} \prec^\mathtt{b}_{\cc} \cdots \prec^\mathtt{b}_{\cc} {\underline{m}}^{(k-1)} \prec^\mathtt{b}_{\cc} {\underline{m}}^{(k)} = {\underline{m}}.
\end{align}
A pair $\pair{\alpha,\beta}$ of $\alpha,\beta\in\Phi^+$ is called a {\em \pr} if
$\beta\not\preceq_\cc\alpha$
(i.e., $\alpha<_{{\underline{w}}_0}\beta$ for some ${\underline{w}}_0\in\cc$).
We regard a \pr $\underline{p}\mathbin{:=}\pair{\alpha,\beta}$ as an \ex
by $\underline{p}_\alpha=\underline{p}_\beta=1$ and $\underline{p}_\gamma=0$ for
$\gamma\not=\alpha,\beta$.
When there exists a unique $\cc$-simple \ex $\underline{s}$ satisfying
$\underline{s} \preceq^\mathtt{b}_{\cc} \underline{p}$, we call $\underline{s}$ the \emph{$\cc$-head} of $\underline{p}$ and
denote it by ${\operatorname{hd}}_{\cc}(\underline{p})$\footnote{In \cite{Oh18}, the term \emph{socle} was used instead of \emph{head}, because a different convention is used
in Theorem~\ref{thm: cuspidal} below.}.
We regard a positive root $\alpha\in\Phi^+$ as an \ex ${\underline{m}}$ by
${\underline{m}}_{\beta}=\delta_{\beta,\alpha}$.
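To illustrate these notions, consider type $A_2$ and $\cc=[s_1s_2s_1]$, so that $\alpha_1\prec_{\cc}\alpha_1+\alpha_2\prec_{\cc}\alpha_2$. The pair $\underline{p}\mathbin{:=}\pair{\alpha_1,\alpha_2}$ and the positive root $\alpha_1+\alpha_2$, regarded as \exs, have the same weight, and one checks directly from Definition~\ref{def: bi-orders} that $\alpha_1+\alpha_2\prec^\mathtt{b}_{\cc}\underline{p}$. Since these are the only two \exs of weight $\alpha_1+\alpha_2$, the \ex $\alpha_1+\alpha_2$ is $\cc$-simple, $\underline{p}$ is a $\cc$-minimal \ex for it, and we have ${\operatorname{hd}}_{\cc}(\underline{p})=\alpha_1+\alpha_2$ and ${\operatorname{deg}}_{\cc}(\underline{p})=1$.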
\begin{remark}
The statistic ${\rm dist}_{[{\underline{w}}_0]}(\underline{p})$ for a \pr $\underline{p}$ introduced in~\cite{Oh18} is different
from ${\operatorname{deg}}_{[{\underline{w}}_0]}(\underline{p})$ in~\eqref{eq: dist}: it is defined as the largest integer $k \ge 0$ admitting a chain
\begin{align}
{\underline{m}}^{(0)} \prec^\mathtt{b}_{\cc} \underline{p}^{(1)} \prec^\mathtt{b}_{\cc} \cdots \prec^\mathtt{b}_{\cc} \underline{p}^{(k-1)} \prec^\mathtt{b}_{\cc} \underline{p}^{(k)} = \underline{p},
\end{align}
in which the $\underline{p}^{(i)}$ are \prs and ${\underline{m}}^{(0)}$ is $\cc$-simple.
However, for a Dynkin quiver $Q$ of simply-laced type which is not of type $E_7$ or $E_8$, the statistic
${\operatorname{deg}}_{[Q]}(\underline{p})$ coincides with the
statistic ${\rm dist}_{[Q]}(\underline{p})$ of \cite{Oh18} for every \pr $\underline{p}$ (see the next subsection for Dynkin quivers).
\end{remark}
\subsection{Dynkin quivers}
A \emph{Dynkin quiver} $Q$ of $\triangle$ is an oriented graph whose
underlying graph is $\triangle$ (see Example~\ref{ex: sigma-fixed} below). To each Dynkin quiver $Q$, we can
associate a function $\xi\cl \triangle_0 \to \mathbb{Z}$, called a
\emph{height function} of $Q$, which satisfies:
\begin{align*}
\xi_i = \xi_j +1 \qquad \text{ if }
i \to j \text{ in } Q.
\end{align*}
Conversely, for a Dynkin diagram $\triangle$ and a function $\xi\cl
\triangle_0 \to \mathbb{Z}$ satisfying $|\xi_i-\xi_j|=1$ for $i,j\in I$ with
$d(i,j)=1$, we can associate a Dynkin quiver $Q$ in a canonical way.
Thus we can identify a Dynkin quiver $Q$ with a pair
$(\triangle,\xi)$. Note that, since $\triangle$ is connected, any two height
functions of $Q$ differ by a common integer.
For a Dynkin quiver $Q=(\triangle,\xi)$, the function $\xi^*: \triangle_0 \to \mathbb{Z}$ given by
$ (\xi^*)_i = \xi_{i^*}$ is also a height function. Thus
we set $Q^*=(\triangle,\xi^*)$.
For a Dynkin quiver $Q$, we call $i \in \triangle_0$ a \emph{source} of $Q$ if $\xi_i > \xi_j$ for all $j \in \triangle_0$ with $d(i,j)=1$.
For a Dynkin quiver $Q=(\triangle,\xi)$ and its source $i$, we denote by $s_iQ$ the Dynkin quiver $(\triangle,s_i\xi)$ where
$s_i\xi$ is the height function defined by
\begin{align} \label{eq: si height}
(s_i\xi)_j = \xi_j-2\delta_{i,j}.
\end{align}
\begin{definition} \label{def: adapted}
Let $Q=(\triangle,\xi)$ be a Dynkin quiver.
\begin{enumerate}[{\rm (i)}]
\item A reduced expression ${\underline{w}}=s_{i_1}\cdots s_{i_l}$ of an element of $\sfW_{\triangle}$
is said to be \emph{adapted to} $Q$ (or \emph{$Q$-adapted}) if
$$ \text{ $i_k$ is a source of } s_{i_{k-1}}s_{i_{k-2}}\ldots s_{i_1}Q \text{ for all } 1 \le k \le l.$$
\item A \emph{Coxeter element} $\tau$ of $\sfW_{\triangle}$ is a product of all simple reflections; i.e., there exists a reduced expression $s_{i_1} \cdots s_{i_{|I|}}$ of $\tau$ such that $\{ i_1,\ldots,i_{|I|} \}=I$.
\item We denote by ${\ms{1mu}\mathsf{h}}$ the Coxeter number of $\triangle$
which is the order of a Coxeter element.
\end{enumerate}
\end{definition}
\begin{theorem}[\cite{OS19B,KO22}]
For each Dynkin quiver $Q=(\triangle,\xi)$, there exists a $Q$-adapted reduced expression ${\underline{w}}_0$ of $w_0 \in \sfW_\triangle$, and the set of all $Q$-adapted reduced expressions is a commutation class of $w_0$.
\end{theorem}
We denote by $[Q]$ the commutation class of $w_0$ consisting of all $Q$-adapted reduced expressions.
It is known that, for a fixed Dynkin diagram $\triangle$, the set of commutation classes $\{ [Q] \ | \ Q=(\triangle,\xi) \}$ forms an $r$-cluster point, denoted by $[\hspace{-0.3ex}[ \triangle ]\hspace{-0.3ex}]$.
It is also well-known that all reduced expressions of a Coxeter element $\tau$ form a single commutation class and that they are adapted to some Dynkin quiver $Q$. Conversely, for each Dynkin quiver $Q$,
there exists a unique Coxeter element $\tau_Q$ all of whose reduced expressions are adapted to $Q$.
Recall that all Coxeter elements are conjugate in $\sfW$, so that they share the common order ${\ms{1mu}\mathsf{h}}$, the Coxeter number of Definition~\ref{def: adapted}.
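For instance, for the Dynkin quiver $Q\cl 1\to2$ of type $A_2$ with $\xi_1=1$ and $\xi_2=0$, the vertex $1$ is a source of $Q$; the reduced expression $s_1s_2s_1$ of $w_0$ is $Q$-adapted while $s_2s_1s_2$ is not, so that $[Q]=[s_1s_2s_1]$, and $\tau_Q=s_1s_2$ with ${\ms{1mu}\mathsf{h}}=3$.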
\smallskip
A bijection $\sigma$ from $\triangle_0$ to itself is said to be a \emph{Dynkin diagram automorphism}
if $\langle h_i,\alpha_j \rangle= \langle h_{\sigma(i)},\alpha_{\sigma(j)} \rangle$ for all $i,j\in \triangle_0$. Throughout this paper, we
assume that Dynkin diagram automorphisms $\sigma$ satisfy the following condition:
\begin{align} \label{eq: auto cond}
\text{ there is no $i \in \triangle_0$ such that $d(i,\sigma(i))=1$}.
\end{align}
For a Dynkin diagram of type $A_{2n-1}$, $D_n$ or $E_6$, there exists a unique non-identity Dynkin diagram automorphism $\vee$ of order $2$ (except for type $D_4$,
in which case there are three automorphisms of order $2$ and
two automorphisms $\widetilde{\vee}$ and $\widetilde{\vee}^{\,2}$ of order $3$) satisfying the condition in~\eqref{eq: auto cond}.
The automorphisms $\vee$ and $\widetilde{\vee}$ can be depicted as follows:
\begin{subequations}
\label{eq: diagram foldings}
\begin{gather}
\label{eq: B_n}
\begin{tikzpicture}[xscale=1.75,yscale=.8,baseline=0]
\node (A2n1) at (0,1) {$\triangle_{A_{2n-1}}$};
\node[dynkdot,label={above:$n+1$}] (A6) at (4,1.5) {};
\node[dynkdot,label={above:$n+2$}] (A7) at (3,1.5) {};
\node[dynkdot,label={above:$2n-2$}] (A8) at (2,1.5) {};
\node[dynkdot,label={above:$2n-1$}] (A9) at (1,1.5) {};
\node[dynkdot,label={above left:$n-1$}] (A4) at (4,0.5) {};
\node[dynkdot,label={above left:$n-2$}] (A3) at (3,0.5) {};
\node[dynkdot,label={above left:$2$}] (A2) at (2,0.5) {};
\node[dynkdot,label={above left:$1$}] (A1) at (1,0.5) {};
\node[dynkdot,label={right:$n$}] (A5) at (5,1) {};
\path[-]
(A1) edge (A2)
(A3) edge (A4)
(A4) edge (A5)
(A5) edge (A6)
(A6) edge (A7)
(A8) edge (A9);
\path[-,dotted] (A2) edge (A3) (A7) edge (A8);
\path[<->,thick,red] (A1) edge (A9) (A2) edge (A8) (A3) edge (A7) (A4) edge (A6);
\node (Bn) at (0,-1) {$\Dynkin_{ \vee} = {\mathop{\mathbin{\mbox{\large $\blacktriangle$}}}}_{B_n}$};
\foreach \x in {1,2}
{\node[dynkdot, fill= black,label={below:$\x$}] (B\x) at (\x,-1) {};}
\node[dynkdot, fill= black,label={below:$n-2$}] (B3) at (3,-1) {};
\node[dynkdot, fill= black,label={below:$n-1$}] (B4) at (4,-1) {};
\node[dynkdot,label={below:$n$}] (B5) at (5,-1) {};
\path[-] (B1) edge (B2) (B3) edge (B4);
\draw[-,dotted] (B2) -- (B3);
\draw[-] (B4.30) -- (B5.150);
\draw[-] (B4.330) -- (B5.210);
\draw[-] (4.55,-1) -- (4.45,-1+.2);
\draw[-] (4.55,-1) -- (4.45,-1-.2);
\draw[-latex,dashed,color=blue,thick]
(A1) .. controls (0.75,-1+1) and (0.75,-1+.5) .. (B1);
\draw[-latex,dashed,color=blue,thick]
(A9) .. controls (1.75,-1+1) and (1.25,-1+.5) .. (B1);
\draw[-latex,dashed,color=blue,thick]
(A2) .. controls (1.75,-1+1) and (1.75,-1+.5) .. (B2);
\draw[-latex,dashed,color=blue,thick]
(A8) .. controls (2.75,-1+1) and (2.25,-1+.5) .. (B2);
\draw[-latex,dashed,color=blue,thick]
(A3) .. controls (2.75,-1+1) and (2.75,-1+.5) .. (B3);
\draw[-latex,dashed,color=blue,thick]
(A7) .. controls (3.75,-1+1) and (3.25,-1+.5) .. (B3);
\draw[-latex,dashed,color=blue,thick]
(A4) .. controls (3.75,-1+1) and (3.75,-1+.5) .. (B4);
\draw[-latex,dashed,color=blue,thick]
(A6) .. controls (4.75,-1+1) and (4.25,-1+.5) .. (B4);
\draw[-latex,dashed,color=blue,thick] (A5) -- (B5);
\draw[->] (A2n1) -- (Bn);
\node (A2n1) at (-0.1,-0.1) {$^{\vee}$};
\end{tikzpicture}
\allowdisplaybreaks \\
\label{eq: C_n}
\begin{tikzpicture}[xscale=1.65,yscale=1.25,baseline=-25]
\node (Dn1) at (0,0) {$\triangle_{D_{n+1}}$};
\node[dynkdot,label={above:$1$}] (D1) at (1,0){};
\node[dynkdot,label={above:$2$}] (D2) at (2,0) {};
\node[dynkdot,label={above:$n-2$}] (D3) at (3,0) {};
\node[dynkdot,label={above:$n-1$}] (D4) at (4,0) {};
\node[dynkdot,label={right:$n$}] (D6) at (5,.4) {};
\node[dynkdot,label={right:$n+1$}] (D5) at (5,-.4) {};
\path[-] (D1) edge (D2)
(D3) edge (D4)
(D4) edge (D5)
(D4) edge (D6);
\draw[-,dotted] (D2) -- (D3);
\path[<->,thick,red] (D6) edge (D5);
\node (Cn) at (0,-1.1) {$\Dynkin_{ \vee}={\mathop{\mathbin{\mbox{\large $\blacktriangle$}}}}_{C_n}$};
\foreach \x in {1,2}
{\node[dynkdot,label={below:$\x$}] (C\x) at (\x,-1.1) {};}
\node[dynkdot,label={below:$n-2$}] (C3) at (3,-1.1) {};
\node[dynkdot,label={below:$n-1$}] (C4) at (4,-1.1) {};
\node[dynkdot, fill= black,label={below:$n$}] (C5) at (5,-1.1) {};
\draw[-] (C1) -- (C2);
\draw[-,dotted] (C2) -- (C3);
\draw[-] (C3) -- (C4);
\draw[-] (C4.30) -- (C5.150);
\draw[-] (C4.330) -- (C5.210);
\draw[-] (4.55,-1.1+.1) -- (4.45,-1.1) -- (4.55,-1.1-.1);
\path[-latex,dashed,color=blue,thick]
(D1) edge (C1)
(D2) edge (C2)
(D3) edge (C3)
(D4) edge (C4);
\draw[-latex,dashed,color=blue,thick]
(D6) .. controls (4.55,-.25) and (4.55,-0.8) .. (C5);
\draw[-latex,dashed,color=blue,thick]
(D5) .. controls (5.25,-1.1+.5) and (5.25,-1.1+.3) .. (C5);
\draw[->] (Dn1) -- (Cn);
\node (Dn1) at (-0.1,-0.65) {$^{\vee}$};
\end{tikzpicture}
\allowdisplaybreaks \\
%
\label{eq: F_4}
\begin{tikzpicture}[xscale=1.75,yscale=.8,baseline=0]
\node (E6desc) at (0,1) {$\triangle_{E_6}$};
\node[dynkdot,label={above:$2$}] (E2) at (4,1) {};
\node[dynkdot,label={above:$4$}] (E4) at (3,1) {};
\node[dynkdot,label={above:$5$}] (E5) at (2,1.5) {};
\node[dynkdot,label={above:$6$}] (E6) at (1,1.5) {};
\node[dynkdot,label={above left:$3$}] (E3) at (2,0.5) {};
\node[dynkdot,label={above right:$1$}] (E1) at (1,0.5) {};
\path[-]
(E2) edge (E4)
(E4) edge (E5)
(E4) edge (E3)
(E5) edge (E6)
(E3) edge (E1);
\path[<->,thick,red] (E3) edge (E5) (E1) edge (E6);
\node (F4desc) at (0,-1) {$\Dynkin_{ \vee}={\mathop{\mathbin{\mbox{\large $\blacktriangle$}}}}_{F_4}$};
\foreach \x in {1,2}
{\node[dynkdot,fill= black,label={below:$\x$}] (F\x) at (\x,-1) {};}
\node[dynkdot,label={below:$3$}] (F3) at (3,-1) {};
\node[dynkdot, label={below:$4$}] (F4) at (4,-1) {};
\draw[-] (F1.east) -- (F2.west);
\draw[-] (F3) -- (F4);
\draw[-] (F2.30) -- (F3.150);
\draw[-] (F2.330) -- (F3.210);
\draw[-] (2.55,-1) -- (2.45,-1+.2);
\draw[-] (2.55,-1) -- (2.45,-1-.2);
\path[-latex,dashed,color=blue,thick]
(E2) edge (F4)
(E4) edge (F3);
\draw[-latex,dashed,color=blue,thick]
(E1) .. controls (1.25,-1+1) and (1.25,-1+.5) .. (F1);
\draw[-latex,dashed,color=blue,thick]
(E3) .. controls (1.75,-1+1) and (1.75,-1+.5) .. (F2);
\draw[-latex,dashed,color=blue,thick]
(E5) .. controls (2.75,-1+1) and (2.25,-1+.5) .. (F2);
\draw[-latex,dashed,color=blue,thick]
(E6) .. controls (0.25,-1+1) and (0.75,-1+.5) .. (F1);
\draw[->] (E6desc) -- (F4desc);
\node (E6desc) at (-0.1,-0.1) {$^{\vee}$};
\end{tikzpicture}
\allowdisplaybreaks \\
%
\label{eq: G_2}
\begin{tikzpicture}[xscale=1.9,yscale=1.5,baseline=-25]
\node (D4desc) at (0,0) {$\triangle_{D_{4}}$};
\node[dynkdot,label={right:$1$}] (D1) at (1.75,.4){};
\node[dynkdot,label={above:$2$}] (D2) at (1,0) {};
\node[dynkdot,label={right:$3$}] (D3) at (2,0) {};
\node[dynkdot,label={right:$4$}] (D4) at (1.75,-.4) {};
\draw[-] (D1) -- (D2);
\draw[-] (D3) -- (D2);
\draw[-] (D4) -- (D2);
\path[->,red,thick]
(D1) edge[bend left=20] (D3)
(D3) edge[bend left=20] (D4)
(D4) edge[bend left=20] (D1);
\node (G2desc) at (0,-1.1) {$\Dynkin_{ \widetilde{\vee}}={\mathop{\mathbin{\mbox{\large $\blacktriangle$}}}}_{G_2}$};
\node[dynkdot,label={below:$1$}] (G1) at (1,-1.1){};
\node[dynkdot, fill= gray,label={below:$2$}] (G2) at (2,-1.1) {};
\draw[-] (G1) -- (G2);
\draw[-] (G1.40) -- (G2.140);
\draw[-] (G1.320) -- (G2.220);
\draw[-] (1.55,-1.1+.1) -- (1.45,-1.1) -- (1.55,-1.1-.1);
\path[-latex,dashed,color=blue,thick]
(D2) edge (G1);
\draw[-latex,dashed,color=blue,thick]
(D1) .. controls (2.7,0.1) and (2.5,-0.8) .. (G2);
\draw[-latex,dashed,color=blue,thick]
(D3) .. controls (2.1,-0.3) and (2.2,-0.7) .. (G2);
\draw[-latex,dashed,color=blue,thick] (D4) -- (G2);
\draw[->] (D4desc) -- (G2desc);
\node (D4desc) at (-0.1,-0.55) {$^{\widetilde{\vee}}$};
\end{tikzpicture}
\end{gather}
\end{subequations}
\begin{definition}\label{def:folding}
Let $\triangle$ be a simply-laced Dynkin diagram and $\sigma$ a Dynkin diagram automorphism satisfying \eqref{eq: auto cond}.
\begin{enumerate}[{\rm (i)}]
\item
Let ${\Dynkin_{ \sigma}}$ be the folding of $\triangle$ by $\sigma$.
Namely, the set $({\Dynkin_{ \sigma}})_0$ of vertices of ${\Dynkin_{ \sigma}}$
is the set of $\sigma$-orbits. Let
$\pi\cl \triangle_0\to({\Dynkin_{ \sigma}})_0$ be the projection.
The edges of ${\Dynkin_{ \sigma}}$ are given by the Cartan matrix
$$\mathsf{c}^{{\Dynkin_{ \sigma}}}_{\pi(i),\pi(j)}=\sum_{j'\in\pi^{-1}\pi(j)}\mathsf{c}^\triangle_{i,j'}.$$
Hence $\ms{1mu}\mathsf{d}_{\pi(i)}=|\pi^{-1}\pi(i)|$.
\item
For a simple Lie algebra $\mathsf{g}$ of simply-laced type associated to $\triangle$,
we denote by $\mathsf{g}_\sigma$ the simple Lie algebra whose
Dynkin diagram is ${\Dynkin_{ \sigma}}$.
\end{enumerate}
\end{definition}
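For instance, let $\triangle=\triangle_{A_3}$ and let $\sigma$ be the automorphism exchanging $1$ and $3$; the condition \eqref{eq: auto cond} holds since $d(1,3)=2$. Then $({\Dynkin_{ \sigma}})_0$ consists of the two orbits $\pi(1)=\{1,3\}$ and $\pi(2)=\{2\}$, and
$$\mathsf{c}^{{\Dynkin_{ \sigma}}}_{\pi(1),\pi(2)}=\mathsf{c}^{\triangle}_{1,2}=-1, \qquad
\mathsf{c}^{{\Dynkin_{ \sigma}}}_{\pi(2),\pi(1)}=\mathsf{c}^{\triangle}_{2,1}+\mathsf{c}^{\triangle}_{2,3}=-2, \qquad
\ms{1mu}\mathsf{d}_{\pi(1)}=2, \quad \ms{1mu}\mathsf{d}_{\pi(2)}=1,$$
so that ${\Dynkin_{ \sigma}}$ is of type $B_2$, as depicted in \eqref{eq: B_n} with $n=2$.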
Note that we have
the following relation between Coxeter numbers:
\eq
&&{\ms{1mu}\mathsf{h}}^{\triangle_\sigma}={\ms{1mu}\mathsf{h}}^{\triangle}.\label{eq:Coxeter}
\eneq
\begin{definition}
Let $Q=(\triangle,\xi)$ be a Dynkin quiver and let $\sigma$
be a Dynkin diagram automorphism of $\triangle$.
We say that $Q$ is \emph{$\sigma$-fixed}
if
$$ \xi_{\sigma(i)} = \xi_{i} \qquad \text{ for all $i \in \triangle_0$}. $$
For a $\sigma$-fixed Dynkin quiver $Q$, we sometimes denote its height function by ${}^\sigma\xi$ instead of $\xi$, in order to emphasize that it is $\sigma$-fixed.
\end{definition}
\begin{example} \label{ex: sigma-fixed}
Here are several examples of $\sigma$-fixed Dynkin quivers $Q$.
\begin{subequations}
\begin{gather}
\label{it: A5 fixed}
\xymatrix@R=0.5ex@C=6ex{ *{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{3}} \ \ }_<{1 \ \ } & *{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{2}} \ \ }_<{2 \ \ }
&*{\circ}<3pt> \ar@{<-}[r]^<{ _{\underline{1}} \ \ }_<{3 \ \ } & *{\circ}<3pt> \ar@{<-}[r]^<{ _{\underline{2}} \ \ }_<{4 \ \ } & *{\circ}<3pt> \ar@{}[l]_<{ \quad \ _{\underline{3}} \ \ }^<{\quad 5 }
} \qt{for $Q=(\triangle_{A_5}, {}^\vee\hspace{-.4ex}\xi)$.}
\\
\label{it: D4 fixed vee}
\raisebox{1.2em}{ \xymatrix@R=0.5ex@C=6ex{ && *{\circ}<3pt> \ar@{}[dl]_<{ \quad \ \ _{\underline{1}} \ }^<{ \ 3 } \\
*{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{3}} \ \ }_<{1 \ \ } & *{\circ}<3pt> \ar@{->}[ur]^<{ _{\underline{2}} \ }_<{2 \ \quad } \\
&& *{\circ}<3pt> \ar@{<-}[ul]_<{ \ \ _{\underline{1}} \ }^<{ \quad 4 }
}}\qt{for $Q=(\triangle_{D_4},{}^\vee\hspace{-.4ex}\xi)$.}
\\
\label{it: E6 fixed}
\raisebox{1.2em}{ \xymatrix@R=2ex@C=6ex{ && *{\circ}<3pt> \ar@{<-}[d]^<{ _{\underline{0}} \ \ }_<{2} \\
*{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{3}} \ \ }_<{1 \ \ } & *{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{2}} \ \ }_<{3 \ \ }
&*{\circ}<3pt> \ar@{<-}[r]^<{ _{\underline{1}} }_<{4 \ \ } & *{\circ}<3pt> \ar@{<-}[r]^<{ _{\underline{2}} \ \ }_<{5\ \ } & *{\circ}<3pt> \ar@{}[l]_<{ \quad \ _{\underline{3}} \ \ }^<{\quad 6 }
}}\qt{for $Q=(\triangle_{E_6},{}^\vee\hspace{-.4ex}\xi)$.}
\\
\label{it: D4 fixed wvee}
\raisebox{1.2em}{ \xymatrix@R=0.5ex@C=6ex{ && *{\circ}<3pt> \ar@{}[dl]_<{ \quad \ \ _{\underline{1}} \ }^<{ \ 3 } \\
*{\circ}<3pt> \ar@{<-}[r]^<{ _{\underline{1}} \ \ }_<{1 \ \ } & *{\circ}<3pt> \ar@{->}[ur]^<{ _{\underline{2}} \ }_<{2 \ \quad } \\
&& *{\circ}<3pt> \ar@{<-}[ul]_<{ \ \ _{\underline{1}} \ }^<{ \quad 4 }
}}\qt{for $Q=(\triangle_{D_4}, {}^{\widetilde{\vee}}\hspace{-.4ex}\xi)$.}
\end{gather}
\end{subequations}
Here
\begin{enumerate}[{\rm (i)}]
\item an underlined integer $\underline{*}$ indicates the value $\xi_i$ at the vertex $i \in \triangle_0$,
\item an arrow $\xymatrix@R=0.5ex@C=4ex{ *{\circ }<3pt> \ar@{->}[r]_<{ i \ \ } & *{\circ}<3pt> \ar@{-}[l]^<{ \ \ j }}$ means that
$\xi_i >\xi_j$ and $d(i,j)=1$.
\end{enumerate}
\end{example}
The following lemma is obvious.
\begin{lemma}
Let $\triangle$ be a Dynkin diagram, and $\sigma$
a Dynkin diagram automorphism of $\triangle.$
Then, the number of $\sigma$-fixed Dynkin quivers
\ro as oriented graphs\rfm \ is
$2^r$, where $r$ is the number of edges of $\triangle_\sigma$.
In particular, the number of $\vee$-fixed
Dynkin quivers is
$$
\begin{cases}
2^{n-1} & \text{ if $\triangle$ is of type $A_{2n-1}$ or $D_{n+1}$,} \\
8 & \text{ if $\triangle$ is of type $E_6$.}
\end{cases}
$$
The number of $\widetilde{\vee}$-fixed Dynkin quivers of $D_4$ is $2$.
\end{lemma}
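For instance, for $\triangle=\triangle_{A_3}$ and the automorphism $\vee$ exchanging $1$ and $3$, a Dynkin quiver $Q=(\triangle,\xi)$ is $\vee$-fixed if and only if $\xi_1=\xi_3$. Hence there are exactly $2^{2-1}=2$ such quivers, namely $1\to 2 \leftarrow 3$ and $1\leftarrow 2 \to 3$, in accordance with the lemma, since $\triangle_\vee$ is of type $B_2$ and has a single edge.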
\section{$t$-quantized Cartan matrix} \label{Sec: t-Cartan}
In this section, we briefly recall the $(q,t)$-Cartan matrix $\mathsf{C}(q,t)$ introduced by Frenkel-Reshetikhin in~\cite{FR98}, and the results about the
relationship between $\mathsf{C}(1,t)^{-1}$ and Dynkin quivers
in~\cite{HL15,KO22}.
Then we shall see that the inverse of $\mathsf{C}(1,t)\mathsf{D}^{-1}$ of type $\triangle_\sigma$ can be obtained from that of type $\triangle$ via the \emph{folding} by
$\sigma \ne {\rm id}$.
\medskip
For an indeterminate $x$ and $k \in \mathbb{Z}$, we set
$$ [k]_x \mathbin{:=} \dfrac{x^k-x^{-k}}{x-x^{-1}}.$$
For an indeterminate $q$ and $i \in I$, we set $q_i \mathbin{:=} q^{\ms{1mu}\mathsf{d}_i}$. For instance, when $\mathfrak{g}$ is of type $G_2$, we have
$q_2 = q^{3}$.
For a given finite Cartan datum, we define
the \emph{adjacency matrix} $\mathcal{I}=(\mathcal{I}_{i,j})_{ i,j\in I }$ of $\sfC$ by
\begin{align}\label{eq: adjacent}
\mathcal{I}_{i,j} = 2\delta_{i,j} - \mathsf{c}_{i,j} \qquad \text{ for } i,j \in I.
\end{align}
In~\cite{FR98}, the $(q,t)$-deformation $\sfC(q,t) =(\mathsf{c}_{i,j}(q,t))_{ i,j\in I }$ of the Cartan matrix was introduced:
$$
\mathsf{c}_{i,j}(q,t) \mathbin{:=} (q_it^{-1}+q_i^{-1}t)\delta_{i,j}-[\mathcal{I}_{i,j}]_{q}.
$$
The specialization of $\sfC(q,t)$ at $t=1$, denoted by $\mathsf{C{\scriptstyle q}}\mathbin{:=} \mathsf{C}(q,1)$, is usually called the \emph{quantum Cartan matrix}.
\begin{definition}
For each finite Cartan datum, we set
$$ {\mathsf{C\ms{1mu}{\scriptstyle t}}} \mathbin{:=} \sfC(1,t)$$
and call it the \emph{$t$-quantized Cartan matrix}. We also set
$$ \underline{\mathsf{B}}(t) \mathbin{:=} {\mathsf{C\ms{1mu}{\scriptstyle t}}} \mathsf{D}^{-1} = (\underline{\mathsf{B}}_{i,j}(t))_{i,j\in I} \quad \text{ and } \quad \overline{\mathsf{B}}(t) \mathbin{:=} \mathsf{D} {\mathsf{C\ms{1mu}{\scriptstyle t}}} = (\overline{\mathsf{B}}_{i,j}(t))_{i,j\in I}.$$
\end{definition}
Hence we have
\eqn
&&\underline{\mathsf{B}}_{i,j}(t)=\begin{cases}
\ms{1mu}\mathsf{d}_i^{-1}(t+t^{-1})&\text{if $i=j$,}\\
(\alpha_i^\vee,\alpha_j^\vee)&\text{if $i\not=j$,}
\end{cases}
\eneqn
where $\alpha_i^\vee\mathbin{:=}\ms{1mu}\mathsf{d}_i^{-1}\alpha_i$ is the coroot.
\begin{example} Note that, for simply-laced types, we have $\underline{\mathsf{B}}(t)={\mathsf{C\ms{1mu}{\scriptstyle t}}}$. The following are the matrices $\underline{\mathsf{B}}(t)$ for the non-simply-laced types:
\begin{align*}
& \underline{\mathsf{B}}(t)_{B_n}=\scriptsize{ \left(\begin{matrix}
\frac{t+t^{-1}}{2} & -\frac{1}{2} & 0& 0 &\cdots & 0 \\
-\frac{1}{2}&\frac{t+t^{-1}}{2} & -\frac{1}{2} & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \cdots & 0 \\
0 & \cdots & \cdots & -\frac{1}{2} & \frac{t+t^{-1}}{2} & -1 \\
0 & \cdots & \cdots & 0 & -1 & t+t^{-1}
\end{matrix}\right)}, \allowdisplaybreaks \\
& \underline{\mathsf{B}}(t)_{C_n}=\scriptsize{\left(\begin{matrix}
t+t^{-1} & -1 & 0& 0 &\cdots & 0 \\
-1 & t+t^{-1} & -1 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \cdots & 0 \\
0 & \cdots & \cdots & -1 & t+t^{-1} & -1 \\
0 & \cdots & \cdots & 0 & -1 & \frac{t+t^{-1}}{2}
\end{matrix}\right) }, \allowdisplaybreaks \\
& \underline{\mathsf{B}}(t)_{F_4} =
\scriptsize{\left(\begin{array}{rrrr}
\frac{t+t^{-1}}{2}& -\frac{1}{2} & 0 & 0 \\
-\frac{1}{2} & \frac{t+t^{-1}}{2} & -1 & 0 \\
0 & -1 &t+t^{-1} & -1 \\
0 & 0 & -1 & t+t^{-1}
\end{array}\right)}, \allowdisplaybreaks\\
& \underline{\mathsf{B}}(t)_{G_2}=\scriptsize{\left(\begin{array}{rr}
t+t^{-1} & -1 \\
-1 & \frac{t+t^{-1}}{3}
\end{array}\right)}.
\end{align*}
\end{example}
Note that $\underline{\mathsf{B}}(t)\vert_{t=1}=\underline{\mathsf{B}} \in {\rm GL}_{I}(\mathbb{Q})$. We regard $\underline{\mathsf{B}}(t)$ as an element of ${\rm GL}_{I}(\mathbb{Q}(t))$ and denote its inverse
by $\widetilde{\underline{\mathsf{B}}}(t)\mathbin{:=}\bl\underline{\mathsf{B}}(t)\br^{-1}=(\widetilde{\underline{\mathsf{B}}}_{i,j}(t))_{i,j\in I}$. Let
$$ \widetilde{\underline{\mathsf{B}}}_{i,j}(t) =\sum_{u\in\mathbb{Z}} \widetilde{\mathsf{b}}_{i,j}(u)t^u$$
be the Laurent expansion of $ \widetilde{\underline{\mathsf{B}}}_{i,j}(t)$ at $t=0$. Note that $ \widetilde{\underline{\mathsf{B}}}_{i,j}(t) = \widetilde{\underline{\mathsf{B}}}_{j,i}(t)$ for all $i,j \in I$.
\smallskip
Recall that, for each Dynkin quiver $Q=(\triangle, \xi)$ of an arbitrary finite type,
there exists a unique Coxeter element $\tau_Q \in W_{\triangle}$ such that
all of its reduced expressions are adapted to $Q$.
\begin{definition}[\cite{HL15,FO21,KO22}]
For a Dynkin quiver $Q$ and $i,j \in I$, we define a function $\eta_{i,j}^Q\cl\mathbb{Z} \to \mathbb{Z}$ by
$$
\eta_{i,j}^Q(u) \mathbin{:=} \begin{cases}
(\varpi_i , \tau_Q^{(u+\xi_j-\xi_i-1)/2}(\gamma_j^Q) ) & \text{ if } u+\xi_j-\xi_i-1\in 2\mathbb{Z}, \\
0& \text{ otherwise}.
\end{cases}
$$
Here $\gamma_j^Q\mathbin{:=}(1-\tau_Q)\varpi_{j} \in \Phi^+$.
\end{definition}
\begin{lemma} [\cite{HL15,KO22}] Let $Q'$ be another Dynkin quiver of the same type of $Q$. Then we have
$$ \eta_{i,j}^Q = \eta_{i,j}^{Q'} \quad \text{and hence $\eta_{i,j}^\triangle$ is well-defined.}$$
\end{lemma}
\begin{theorem}[\cite{HL15,KO22}] \label{thm: inv}
For each $i,j \in I$ and $u \in \mathbb{Z}_{\ge 0}$, we have $$\widetilde{\mathsf{b}}_{i,j}(u)=\eta^\triangle_{i,j}(u).$$ In other words, we can compute
$\widetilde{\underline{\mathsf{B}}}(t)$ by using $\Gamma_Q$ of any Dynkin quiver $Q$ as follows:
$$\widetilde{\mathsf{b}}_{i,j}(u) = \begin{cases}
(\varpi_i , \tau_Q^{(u+\xi_j-\xi_i-1)/2}(\gamma_j^Q)) & \text{if $u+\xi_j-\xi_i-1\in2\mathbb{Z}$ and $u\ge0$, } \\
0 & \text{ otherwise.}
\end{cases}
$$
\end{theorem}
\begin{corollary}[\cite{HL15,FO21,FM21,KO22}]
\label{cor: bij property}
The coefficients $\{ \widetilde{\mathsf{b}}_{i,j}(u) \ | \ i,j \in I, \ u \in \mathbb{Z}_{\ge 0} \}$ enjoy the following properties:
\begin{enumerate}[{\rm (1)}]
\item \label{it: vanish} $\widetilde{\mathsf{b}}_{i,j}(u)=0$ for $u\le 0$, and $\widetilde{\mathsf{b}}_{i,j}(1)=\ms{1mu}\mathsf{d}_i\, \delta(i=j)$.
\item \label{it: additive} $\widetilde{\mathsf{b}}_{i,j}(u-1)+ \widetilde{\mathsf{b}}_{i,j}(u+1) = \displaystyle\sum_{k; \ d(k,j)=1} -\langle h_k,\alpha_j \rangle \widetilde{\mathsf{b}}_{i,k}(u)$ for $u \ge 1$.
\item \label{it: h nega} $\widetilde{\mathsf{b}}_{i,j}(u+{\ms{1mu}\mathsf{h}})=-\widetilde{\mathsf{b}}_{i,j^*}(u)$ and $\widetilde{\mathsf{b}}_{i,j}(u+2{\ms{1mu}\mathsf{h}})=\widetilde{\mathsf{b}}_{i,j}(u)$ for $u \ge 0$.
\item $\widetilde{\mathsf{b}}_{i,j}({\ms{1mu}\mathsf{h}}-u)=\widetilde{\mathsf{b}}_{i,j^*}(u)$ for $0 \le u \le {\ms{1mu}\mathsf{h}}$ and $\widetilde{\mathsf{b}}_{i,j}(2{\ms{1mu}\mathsf{h}}-u)=-\widetilde{\mathsf{b}}_{i,j}(u)$ for $0 \le u \le 2{\ms{1mu}\mathsf{h}}$.
\item\label{it: tfb positive} $\widetilde{\mathsf{b}}_{i,j}(u) \ge 0$ for $0 \le u \le {\ms{1mu}\mathsf{h}}$ and $\widetilde{\mathsf{b}}_{i,j}(u)\le 0$ for ${\ms{1mu}\mathsf{h}} \le u \le 2{\ms{1mu}\mathsf{h}}$.
\end{enumerate}
\end{corollary}
By Corollary~\ref{cor: bij property}~\eqref{it: h nega},
it suffices to compute $\widetilde{\mathsf{b}}_{i,j}(u)$ for $1 \le u \le {\ms{1mu}\mathsf{h}}$ in order to determine $\widetilde{\underline{\mathsf{B}}}_{i,j}(t)$ for all $i,j \in \triangle_0$.
Thus we set
$$\widetilde{\de}_{i,j}(t)\mathbin{:=} \widetilde{\underline{\mathsf{B}}}_{i,j}(t)+t^{\ms{1mu}\mathsf{h}} \widetilde{\underline{\mathsf{B}}}_{i,j^*}(t)= \sum_{u=1}^{{\ms{1mu}\mathsf{h}}}\widetilde{\mathsf{b}}_{i,j}(u)t^u $$
and
$$\widetilde{\de}_{i,j}[k]\mathbin{:=}\widetilde{\mathsf{b}}_{i,j}(k-1) \quad \text{for $0 \le k \le {\ms{1mu}\mathsf{h}}$.}
$$
Then, we have
$$\widetilde{\underline{\mathsf{B}}}_{i,j}(t)=\dfrac{\widetilde{\de}_{i,j}(t)-t^{2{\ms{1mu}\mathsf{h}}}\,\widetilde{\de}_{i,j}(t^{-1})}{1-t^{2{\ms{1mu}\mathsf{h}}}}.$$
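For instance, in type $A_2$ we have
$\underline{\mathsf{B}}(t)=\left(\begin{smallmatrix} t+t^{-1} & -1 \\ -1 & t+t^{-1}\end{smallmatrix}\right)$, and hence
$$\widetilde{\underline{\mathsf{B}}}_{1,1}(t)=\dfrac{t+t^{-1}}{(t+t^{-1})^2-1}=\dfrac{t-t^{5}}{1-t^{6}}=t-t^{5}+t^{7}-t^{11}+\cdots, \qquad
\widetilde{\underline{\mathsf{B}}}_{1,2}(t)=\dfrac{1}{(t+t^{-1})^2-1}=\dfrac{t^{2}-t^{4}}{1-t^{6}}=t^{2}-t^{4}+t^{8}-t^{10}+\cdots.$$
Here ${\ms{1mu}\mathsf{h}}=3$ and $1^*=2$, so that, for example, $\widetilde{\mathsf{b}}_{1,1}(1+{\ms{1mu}\mathsf{h}})=0=-\widetilde{\mathsf{b}}_{1,2}(1)$ and $\widetilde{\mathsf{b}}_{1,1}(2+{\ms{1mu}\mathsf{h}})=-1=-\widetilde{\mathsf{b}}_{1,2}(2)$, illustrating Corollary~\ref{cor: bij property}. Moreover $\widetilde{\de}_{1,1}(t)=t$ and $\widetilde{\de}_{1,2}(t)=t^{2}$, in agreement with \eqref{eq: A formula} below.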
\begin{theorem} \label{thm: tbij folding}
Let $\sigma$ be a Dynkin diagram automorphism,
${\Dynkin_{ \sigma}}$ the folding of $\triangle$ by $\sigma$,
and $\pi\cl(\triangle)_0\to({\Dynkin_{ \sigma}})_0$ the projection
\ro see Definition~\ref{def:folding}\/\rfm.
Then we have the following relation:
$$ \widetilde{\de}^{{\Dynkin_{ \sigma}}}_{i,j}(t) = \sum_{\substack{ \oi\in \pi^{-1}(i), \\ \oj\in \pi^{-1}(j)}} \widetilde{\de}^{\triangle}_{\oi,\oj}(t)\qt{for any $i,j\in({\Dynkin_{ \sigma}})_0$.}$$
\end{theorem}
\Proof
First let us record the following relation between $\underline{\mathsf{B}}$ for ${\Dynkin_{ \sigma}}$ and $\underline{\mathsf{B}}$ for $\triangle$:
\eqn
&&(\underline{\mathsf{B}}^{\Dynkin_{ \sigma}})_{i,\pi(\oj)}=\ms{1mu}\mathsf{d}_{i}^{-1}
\sum_{\oi\in\pi^{-1}(i)}(\underline{\mathsf{B}}^\triangle)_{\oi,\oj}
\qt{for any $i\in({\Dynkin_{ \sigma}})_0$ and $\oj\in\triangle_0$.}
\eneqn
Now set
\eqn
f_{\oi,\oj}^\triangle&&=\Bigl((\underline{\mathsf{B}}^\triangle)^{-1}\Bigr)_{\oi,\oj}
=\widetilde{\underline{\mathsf{B}}}^{\triangle}_{\oi,\oj}(t),\\
f_{i,j}^{{\Dynkin_{ \sigma}}}&&=\sum_{\oi\in\pi^{-1}(i),\;\oj\in\pi^{-1}(j)}
f_{\oi,\oj}^\triangle
\eneqn
for $\oi,\oj\in\triangle_0$ and $i,j\in({\Dynkin_{ \sigma}})_0$.
Since ${\ms{1mu}\mathsf{h}}^{{\Dynkin_{ \sigma}}}={\ms{1mu}\mathsf{h}}^{\triangle}$ by \eqref{eq:Coxeter} and $\widetilde{\de}_{i,j}(t)$ collects exactly the terms of degree $1\le u\le {\ms{1mu}\mathsf{h}}$ in the Laurent expansion of $\widetilde{\underline{\mathsf{B}}}_{i,j}(t)$, in order to see the theorem it is enough to show that
\eq
&&\sum_{j\in({\Dynkin_{ \sigma}})_0}(\underline{\mathsf{B}}^{\Dynkin_{ \sigma}})_{i,j}f^{\Dynkin_{ \sigma}}_{j,k}=\delta_{i,k}
\qt{for any $i,k\in({\Dynkin_{ \sigma}})_0$.}\label{eq:req}
\eneq
We have
$$
\delta_{\oi,\ok}
=\sum_{\oj\in (\triangle)_0}(\underline{\mathsf{B}}^\triangle)_{\oi,\oj}f^\triangle_{\oj,\ok}$$
for any $\oi,\ok\in\triangle_0$.
Hence for $i\in ({\Dynkin_{ \sigma}})_0$ and $\ok\in\triangle_0$, we have
\eqn
\delta_{i,\pi(\ok)}&&=\sum_{\oi\in\pi^{-1}(i)}\delta_{\oi,\ok}
=\sum_{\oi\in\pi^{-1}(i),\;\oj\in (\triangle)_0}(\underline{\mathsf{B}}^\triangle)_{\oi,\oj}f^\triangle_{\oj,\ok}\\
&&=\sum_{\oj\in (\triangle)_0}\ms{1mu}\mathsf{d}_{i}(\underline{\mathsf{B}}^{\Dynkin_{ \sigma}})_{i,\pi(\oj)}f^\triangle_{\oj,\ok}.
\eneqn
Hence, for any $i,k\in ({\Dynkin_{ \sigma}})_0$, we have
\eqn
\ms{1mu}\mathsf{d}_k\delta_{i,k}
&&=\sum_{\ok\in\pi^{-1}(k)}\delta_{i,\pi(\ok)}
=\sum_{\ok\in\pi^{-1}(k),\;\oj\in (\triangle)_0}
\ms{1mu}\mathsf{d}_{i}(\underline{\mathsf{B}}^{\Dynkin_{ \sigma}})_{i,\pi(\oj)}f^\triangle_{\oj,\ok}\\
&&=\sum_{\ok\in\pi^{-1}(k),\;j\in ({\Dynkin_{ \sigma}})_0,\;\oj\in\pi^{-1}(j)}
\ms{1mu}\mathsf{d}_{i}(\underline{\mathsf{B}}^{\Dynkin_{ \sigma}})_{i,j}f^\triangle_{\oj,\ok}\\
&&=\sum_{j\in ({\Dynkin_{ \sigma}})_0}
\ms{1mu}\mathsf{d}_{i}(\underline{\mathsf{B}}^{\Dynkin_{ \sigma}})_{i,j}f^{\Dynkin_{ \sigma}}_{j,k}.
\eneqn
Since $\ms{1mu}\mathsf{d}_k\,\delta_{i,k}=\ms{1mu}\mathsf{d}_i\,\delta_{i,k}$, we obtain \eqref{eq:req}; that is, $f^{{\Dynkin_{ \sigma}}}_{i,k}=\widetilde{\underline{\mathsf{B}}}^{{\Dynkin_{ \sigma}}}_{i,k}(t)$, and the assertion follows by comparing the coefficients of $t^u$ for $1 \le u \le {\ms{1mu}\mathsf{h}}$.
\QED
The polynomials $\widetilde{\de}_{i,j}(t)$ are calculated explicitly as
follows:
\begin{theorem} [\cite{HL15,Fuj19,OS19,KO22}] \label{thm: BC denom} Note that $\widetilde{\de}_{i,j}(t) = \widetilde{\de}_{j,i}(t)$ for all $i,j \in I$.
\begin{enumerate}[{\rm (1)}]
\item For $\triangle$ of type $A_{n}$, and $i,j\in I =\{1,\ldots,n\}$, $\widetilde{\de}_{i,j}(t)$ is given as follows:
\begin{align} \label{eq: A formula}
\widetilde{\de}_{i,j}(t)& = \sum_{s=1}^{\min(i,j,n+1-i,n+1-j)}t^{|i-j|+2s-1}.
\end{align}
\item For $\triangle$ of type $D_{n+1}$, and $i,j\in I =\{1,\ldots,n,n+1\}$, $\widetilde{\de}_{i,j}(t)$ is given as follows:
\begin{align}\label{eq: D formula}
\widetilde{\de}_{i,j}(t) & = \begin{cases}
\displaystyle\sum_{s=1}^{\min(i,j)} \bl t^{|i-j|+2s-1}+\delta(\max(i,j)<n)\,t^{2n-i-j+2s-1}\br
& \text{ if } \min(i,j)<n,\\[3ex]
\ \displaystyle \sum_{s=1}^{\lfloor (n+ \delta_{i,j}) /2 \rfloor} t^{4s-1 -2\ms{1mu}\delta_{i,j}} & \text{ otherwise.}
\end{cases}
\end{align}
\item
For $\triangle$ of type $B_n$ or $C_n$, and $i,j\in I =\{1,\ldots,n\}$, $\widetilde{\de}_{i,j}(t)$ is given as follows:
\begin{equation}\label{eq: BC formula}
\begin{aligned}
\widetilde{\de}_{i,j}(t)& =
\max(\ms{1mu}\mathsf{d}_i,\ms{1mu}\mathsf{d}_j) \displaystyle\sum_{s=1}^{\min(i,j)}\Bigl( t^{|i-j|+2s-1} + \delta(\max(i,j)<n)\, t^{2n -i-j+2s-1} \Bigr).
\end{aligned}
\end{equation}
\item For $\triangle$ of type $E_6$ and $i \le j\in I =\{1,\ldots,6\}$, $\widetilde{\de}_{i,j}(t)$ is given as follows:
\begin{align*}
&\widetilde{\de}_{1,1}(t) = t + t^{7}, && \widetilde{\de}_{1,2}(t) = t^{4} + t^{8}, \allowdisplaybreaks\\
& \widetilde{\de}_{1,3}(t) = t^{2} +t^{6}+t^{8}, && \widetilde{\de}_{1,4}(t) = t^{3}+t^{5}+t^{7}+t^{9}, \allowdisplaybreaks\\
& \widetilde{\de}_{1,5}(t) = t^{4}+t^{6}+t^{10}, && \widetilde{\de}_{1,6}(t) = t^{5}+t^{11}, \allowdisplaybreaks\\
& \widetilde{\de}_{2,2}(t) = t^{1}+t^{5}+t^{7}+t^{11}, && \widetilde{\de}_{2,3}(t) = t^{3}+t^{5}+t^{7}+t^{9}, \allowdisplaybreaks\\
& \widetilde{\de}_{2,4}(t) = t^{2}+t^{4}+2t^{6}+t^{8}+t^{10}, && \widetilde{\de}_{3,3}(t) = t^{1}+t^{3}+t^{5}+2t^{7}+t^{9}, \allowdisplaybreaks\\
&\widetilde{\de}_{3,4}(t) = t^{2}+2t^{4}+2t^{6}+2t^{8}+t^{10}, && \widetilde{\de}_{3,5}(t) = t^{3}+2 t^{5}+ t^{7}+ t^{9}+ t^{11},\allowdisplaybreaks \\
& \widetilde{\de}_{4,4}(t) = t^{1}+ 2t^{3}+ 3t^{5}+ 3t^{7}+ 2t^{9}+ t^{11}, && \widetilde{\de}_{i,j}(t) =
t^{\ms{1mu}\mathsf{h}}\widetilde{\de}_{i,j^*}(t^{-1}) = \widetilde{\de}_{j,i}(t) =t^{\ms{1mu}\mathsf{h}} \widetilde{\de}_{j,i^*}(t^{-1}).
\end{align*}
\item For $E_7$ and $E_8$, see {\rm Appendix~\ref{appeA: tde}.}
\item For $\triangle$ of type $F_4$ and $i \le j\in I =\{1,2,3,4\}$, $\widetilde{\de}_{i,j}(t)$ is given as follows:
\begin{align*}
&\widetilde{\de}_{1,1}(t) = 2(t+t^{5}+t^{7}+t^{11} ), && \widetilde{\de}_{1,2}(t) = 2(t^{2}+t^{4}+2t^{6}+t^{8}+t^{10} ), \allowdisplaybreaks\\
&\widetilde{\de}_{1,3}(t) = 2(t^{3}+t^{5}+t^{7}+t^{9} ), && \widetilde{\de}_{1,4}(t) = 2(t^{4}+t^{8} ), \allowdisplaybreaks\\
&\widetilde{\de}_{2,2}(t) = 2(t+2t^{3}+3t^{5}+3t^{7}+2t^{9}+t^{11} ), && \widetilde{\de}_{2,3}(t) = 2(t^{2}+2t^{4}+2t^{6}+2t^{8}+t^{10} ), \allowdisplaybreaks\\
&\widetilde{\de}_{2,4}(t) = 2(t^{3}+t^{5}+t^{7}+t^{9} ), && \widetilde{\de}_{3,3}(t) = t+2t^{3}+3t^{5}+3t^{7}+2t^{9}+t^{11}, \allowdisplaybreaks\\
&\widetilde{\de}_{3,4}(t) = t^{2}+t^{4}+2t^{6}+t^{8}+t^{10}, && \widetilde{\de}_{4,4}(t) = t+t^{5}+t^{7}+t^{11}.
\end{align*}
\item For $\triangle$ of type $G_2$ and $i \le j\in I =\{1,2\}$, $\widetilde{\de}_{i,j}(t)$ is given as follows:
$$
\widetilde{\de}_{1,1}(t) = t+2t^{3}+t^5, \qquad \widetilde{\de}_{1,2}(t) =3(t^{2}+t^{4}), \qquad \widetilde{\de}_{2,2}(t) =3 \widetilde{\de}_{1,1}(t).
$$
\end{enumerate}
\end{theorem}
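\begin{example}
As an illustration of Theorem~\ref{thm: tbij folding}, consider the folding of $\triangle_{A_3}$ depicted in \eqref{eq: B_n} with $n=2$, so that $\pi^{-1}(1)=\{1,3\}$, $\pi^{-1}(2)=\{2\}$ and ${\Dynkin_{ \sigma}}$ is of type $B_2$. By \eqref{eq: A formula} we have
$\widetilde{\de}^{A_3}_{1,1}(t)=\widetilde{\de}^{A_3}_{3,3}(t)=t$, $\widetilde{\de}^{A_3}_{1,3}(t)=t^{3}$, $\widetilde{\de}^{A_3}_{1,2}(t)=\widetilde{\de}^{A_3}_{3,2}(t)=t^{2}$ and $\widetilde{\de}^{A_3}_{2,2}(t)=t+t^{3}$. Hence Theorem~\ref{thm: tbij folding} yields
$$\widetilde{\de}^{B_2}_{1,1}(t)=2(t+t^{3}), \qquad \widetilde{\de}^{B_2}_{1,2}(t)=2t^{2}, \qquad \widetilde{\de}^{B_2}_{2,2}(t)=t+t^{3},$$
which agrees with \eqref{eq: BC formula} for $n=2$ (where $\ms{1mu}\mathsf{d}_1=2$ and $\ms{1mu}\mathsf{d}_2=1$).
\end{example}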
\section{Auslander-Reiten (AR) quivers} \label{Sec: classical AR}
In this section, we first recall (combinatorial) Auslander-Reiten quiver $\Gamma_Q$ associated to each Dynkin quiver $Q$. Then we
recall their combinatorial properties, including the simple algorithm for labeling them with the set of positive roots, and statistics in classical $A_{n}$ and $D_{n}$-cases. In the next section, we will investigate the relationship among AR-quivers related to the Dynkin diagram folding $\sigma$.
\subsection{Quivers} For each reduced expression ${\underline{w}}_0 =s_{i_1} \cdots s_{i_\ell}$ of $w_0 \in \sfW$, we associate a quiver $\Upsilon_{{\underline{w}}_0}$ as follows \cite{OS19A}:
\begin{enumerate}[{\rm (i)}]
\item The set of vertices is $\Phi^+ = \{ \beta^{{\underline{w}}_0}_k \ | \ 1 \le k \le \ell \}$.
\item We assign $(-\langle h_{i_k},\alpha_{i_l} \rangle)$-many arrows from
$\beta^{{\underline{w}}_0}_k$ to $\beta^{{\underline{w}}_0}_l$ if and only if $1 \le l < k \le \ell$ and there is no $j$ such that
$l< j< k$ and $i_j \in \{i_k,i_l\}$.
\end{enumerate}
We say that a total order $(\beta_1< \cdots < \beta_\ell)$ of $\Phi^+$ is a \emph{compatible reading} of $\Upsilon_{{\underline{w}}_0}$ if we have $k < l$ whenever there is
an arrow from $\beta_l$ to $\beta_k$ in $\Upsilon_{{\underline{w}}_0}$.
\begin{theorem}[{\cite{OS19A,KO22}}] \label{thm: OS17}
Let $\cc$ be a commutation class.
\begin{enumerate}[{\rm (i)}]
\item Two reduced expressions ${\underline{w}}_0$ and ${\underline{w}}'_0$ of $w_0$
are commutation equivalent if and only if
$\Upsilon_{{\underline{w}}_0}=\Upsilon_{{\underline{w}}'_0}$ as quivers.
Hence $\Upsilon_{\cc}$ is well-defined.
\item For $\alpha,\beta \in \Phi^+$, $\alpha \preceq_{\cc} \beta$ if and only if there exists a path from $\beta$ to $\alpha$ in $\Upsilon_{\cc}$. In other words, the quiver
$\Upsilon_{\cc}$ \ro forgetting the number of arrows\rfm is the Hasse quiver of the partial order $\preceq_{\cc}$.
\item \label{it: noncom}
If $\alpha,\beta \in \Phi^+$ are not comparable with respect to $\preceq_{\cc}$, then we have $(\alpha,\beta)=0$.
\item \label{it: comp reading}
If
$\Phi^+=\{\beta_1< \cdots < \beta_\ell\}$ is a compatible reading of $\Upsilon_{\cc}$, then
there is a unique reduced expression ${\underline{w}}_0=s_{i_1} \cdots s_{i_\ell}$
in $\cc$ such that
$\beta_k=\beta^{{\underline{w}}_0}_k$ for any $k$.
\end{enumerate}
\end{theorem}
\begin{definition} [\cite{KO22}] For a Dynkin quiver $Q=(\triangle,\xi)$, the
{\em repetition quiver} $\widehat{\Dynkin}=(\widehat{\Dynkin}_0,\widehat{\Dynkin}_1)$ associated to
$Q$ is defined as follows:
\begin{equation}\label{eq: rep quiver}
\begin{aligned}
& \widehat{\Dynkin}_0 \mathbin{:=} \{ (i,p) \in \triangle_0 \times \mathbb{Z} \bigm| p -\xi_i \in 2 \mathbb{Z}\}, \\
& \widehat{\Dynkin}_1 \mathbin{:=} \{ (i,p) \To[-\langle h_{i},\alpha_j \rangle] (j,p+1) \ | \ (i,p),(j, p+1) \in \widehat{\Dynkin}_0, \ d(i,j)=1 \}.
\end{aligned}
\end{equation}
Here $(i,p) \To[-\langle h_{i},\alpha_j \rangle] (j,p+1)$ denotes that we assign $(-\langle h_{i},\alpha_j \rangle)$-many arrows from $(i,p)$ to $(j,p+1)$.
Note that the definition of $\widehat{\Dynkin}$ depends only on the parity of the height function of $Q$.
\end{definition}
For any Dynkin quiver $Q = (\triangle,\xi)$, we have a bijection $\phi_Q\cl \widehat{\Dynkin}_0 \isoto \widehat{\Phi}^+_\triangle \mathbin{:=} \Phi^+_\triangle \times \mathbb{Z}$ as follows (see \cite{HL15,KO22}): set
$\gamma_i^Q \mathbin{:=} (1-\tau_Q)\varpi_i \in \Phi^+_\triangle$ and
\begin{eqnarray} &&
\parbox{75ex}{
\begin{enumerate}[{\rm (i)}]
\item $\phi_Q(i,\xi_i)=(\gamma_i^Q,0)$,
\item if $\phi_Q(i,p)=(\beta,u)$, then we define
\begin{enumerate}[{\rm (a)}]
\item $\phi_Q(i,p\mp 2)=(\tau_Q^{\pm 1}(\beta),u)$ if $\tau_Q^{\pm 1}(\beta) \in \Phi^+_\triangle$,
\item $\phi_Q(i,p\mp 2)=(-\tau_Q^{\pm 1}(\beta),u\pm1)$ if $\tau_Q^{\pm 1}(\beta) \in \Phi^-_\triangle$.
\end{enumerate}
\end{enumerate}
}\label{eq: bijection}
\end{eqnarray}
We say that $(i,p)$ with $\phi_Q(i,p)=(\beta,u)$ is the \emph{$Q$-coordinate of $(\beta,u)$ in $\widehat{\Dynkin}$}.
The combinatorial description of $\gamma^Q_i$ is given as follows \cite[\S 2.2]{HL15}:
\begin{align} \label{eq: gaQ}
\gamma^Q_i = \sum_{j \in B^Q(i)} \alpha_j,
\end{align}
where $B^Q(i)$ denotes the set of vertices $j$ such that there exists a path in $Q$ from $j$ to $i$.
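For instance, for the Dynkin quiver $Q\cl 1\to2\to3$ of type $A_3$, we have $B^Q(1)=\{1\}$, $B^Q(2)=\{1,2\}$ and $B^Q(3)=\{1,2,3\}$, so that \eqref{eq: gaQ} gives $\gamma^Q_1=\alpha_1$, $\gamma^Q_2=\alpha_1+\alpha_2$ and $\gamma^Q_3=\alpha_1+\alpha_2+\alpha_3$.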
For each Dynkin quiver $Q=(\triangle,\xi)$, we also denote by
$\Gamma_Q=\bl(\Gamma_Q)_0, (\Gamma_Q)_1 \br$ the full subquiver of
$\widehat{\Dynkin}$ whose set of vertices is $\phi_Q^{-1}(\Phi_{\triangle}^+
\times \{0\})$. We call $\Gamma_Q$ the \emph{AR-quiver associated to
$Q$}.
\begin{theorem} [\cite{HL15, OS19A,KO22}] Let $Q=(\triangle,\xi)$ be a Dynkin quiver.
\begin{enumerate}[{\rm (i)}]
\item \label{it:range}
We have $ \xi_i \equiv \xi_{i^*} - {\ms{1mu}\mathsf{h}} \bmod2$ for any $i\in I$ and
\begin{align}\label{eq: range}
(\Gamma_Q)_0=\{ (i,p) \in \widehat{\Dynkin}_0 \mid \xi_i \ge p > \xi_{i^*} - {\ms{1mu}\mathsf{h}} \}.
\end{align}
In particular, we have
\begin{align*}
\phi_Q(i,p)=\bl\tau_Q^{(\xi_i-p)/2}(\gamma_i^Q),0\br \quad \text{for any $(i,p) \in (\Gamma_Q)_0$.}
\end{align*}
\item \label{it:si}
If $i$ is a source of $Q$, then we have
$$\phi_{s_iQ}(i^*,\xi_i-{\ms{1mu}\mathsf{h}})=\phi_Q(i,\xi_i)=(\alpha_i,0)
\text{ and } \
\Gamma_{s_iQ}=\left(\Gamma_Q\setminus\{(i,\xi_i) \} \right) \cup \{(i^*,\xi_i-{\ms{1mu}\mathsf{h}})\}.$$
\item The map $\phi_Q$ induces a quiver isomorphism
$\Gamma_Q\isoto\Upsilon_{[Q]}$.
Recall that $[Q]$ is the commutation class of $Q$-adapted reduced expressions.
\end{enumerate}
\end{theorem}
\begin{example} \label{ex: AR Q ADE}
Here are some examples of $\Gamma_Q$ for $Q$ of simply-laced types.
\begin{enumerate}[{\rm (1)}]
\item For $ Q= \xymatrix@R=0.5ex@C=6ex{ *{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{4}} \ \ }_<{1 \ \ } & *{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{3}} \ \ }_<{2 \ \ }
&*{\circ}<3pt> \ar@{<-}[r]^<{ _{\underline{2}} \ \ }_<{3 \ \ } & *{\circ}<3pt> \ar@{<-}[r]^<{ _{\underline{3}} \ \ }_<{4 \ \ }
& *{\circ}<3pt> \ar@{-}[l]_<{ \ \ _{\underline{4}} }^<{ \ \ 5 } } $ of type $A_5$, $\Gamma_Q$ can be depicted as
$$
\raisebox{3.2em}{ \scalebox{0.7}{\xymatrix@!C=5ex@R=0.5ex{
(i\setminus p) & -2 & -1 & 0 & 1 & 2 & 3 & 4\\
1&&& [3,5]\ar@{->}[dr]&& [2] \ar@{->}[dr] && [1] \\
2&& [3,4] \ar@{->}[dr]\ar@{->}[ur] && [2,5] \ar@{->}[dr] \ar@{->}[ur] && [1,2] \ar@{->}[ur] \\
3& [3] \ar@{->}[dr]\ar@{->}[ur] && [2,4] \ar@{->}[dr]\ar@{->}[ur] && [1,5] \ar@{->}[dr]\ar@{->}[ur]\\
4&& [2,3] \ar@{->}[dr]\ar@{->}[ur] && [1,4] \ar@{->}[dr] \ar@{->}[ur] && [4,5] \ar@{->}[dr] \\
5&&& [1,3]\ar@{->}[ur]&& [4] \ar@{->}[ur] && [5]. \\
}}}
$$
Here $[a,b] \mathbin{:=} \sum_{k=a}^b \alpha_k$ for $1 \le a <b \le 5$ and $[a] \mathbin{:=} \alpha_a$ for $1 \le a \le 5$.
\item \label{it: E6}
For
$Q=\raisebox{1.2em}{ \xymatrix@R=2ex@C=6ex{ && *{\circ}<3pt> \ar@{<-}[d]^<{ _{\underline{0}} \ \ }_<{2} \\
*{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{3}} \ \ }_<{1 \ \ } & *{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{2}} \ \ }_<{3 \ \ }
&*{\circ}<3pt> \ar@{<-}[r]^<{ _{\underline{1}} }_<{4 \ \ } & *{\circ}<3pt> \ar@{<-}[r]^<{ _{\underline{2}} \ \ }_<{5\ \ } & *{\circ}<3pt> \ar@{}[l]_<{ \quad \ _{\underline{3}} \ \ }^<{\quad 6 }
}}$ of type $E_6$, $\Gamma_Q$ can be depicted as
$$ \hspace{-6ex}
\raisebox{6em}{\scalebox{0.58}{\xymatrix@R=0.5ex@C=2ex{
(i/p) & -10 & -9 & -8 & -7 & -6 & -5 & -4 & -3 & -2 & -1 & 0 & 1 & 2 & 3 \\
1 &&&& \sprt{010111}\ar[dr] && \sprt{001100}\ar[dr]&& \sprt{111110}\ar[dr]&& \sprt{000111}\ar[dr]&& \sprt{001000}\ar[dr]&& \sprt{100000} \\
3 &&&\sprt{010110}\ar[dr]\ar[ur]&&\sprt{011211}\ar[dr]\ar[ur]&& \sprt{112210}\ar[dr]\ar[ur]&& \sprt{111221}\ar[dr]\ar[ur]&& \sprt{001111}\ar[dr]\ar[ur]&& \sprt{101000}\ar[ur] \\
4 && \sprt{010100}\ar[dr]\ar[ddr]\ar[ur]&& \sprt{011210}\ar[dr]\ar[ddr]\ar[ur]&& \sprt{122321}\ar[dr]\ar[ddr]\ar[ur]&& \sprt{112321}\ar[dr]\ar[ddr]\ar[ur]&&
\sprt{112221}\ar[dr]\ar[ddr]\ar[ur]&& \sprt{101111} \ar[ddr]\ar[ur]\\
2 & \sprt{010000}\ar[ur]&& \sprt{000100}\ar[ur]&& \sprt{011110}\ar[ur]&& \sprt{111211}\ar[ur]&& \sprt{001110}\ar[ur]&& \sprt{111111}\ar[ur] \\
5 &&& \sprt{011100}\ar[uur]\ar[dr]&& \sprt{111210}\ar[uur]\ar[dr]&& \sprt{011221}\ar[uur]\ar[dr]&& \sprt{112211}\ar[uur]\ar[dr]&& \sprt{101110}\ar[uur]\ar[dr]
&& \sprt{000011} \ar[dr] \\
6 &&&& \sprt{111100}\ar[ur] && \sprt{000110} \ar[ur]&& \sprt{011111}\ar[ur]&& \sprt{101100}\ar[ur]&& \sprt{000010}\ar[ur]&& \sprt{000001}
}}}
$$
Here $\sprt{a_1a_2a_3a_4a_5a_6} \mathbin{:=} \ \sum_{i=1}^6 a_i\alpha_i$.
\end{enumerate}
\end{example}
\begin{example} \label{ex: AR Q BCFG}
For $Q = \xymatrix@R=0.5ex@C=6ex{ *{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{4}} \ \ }_<{1} & *{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{3}} \ \ }_<{2 \ \ } &*{\circled{$\circ$}}<3pt>
\ar@{-}[l]_>{ \qquad \qquad\qquad _{\underline{2}}}^<{\ \ 3 } }$ of type $C_3$, $\Gamma_Q$ can be depicted as
$$ \raisebox{3.2em}{ \scalebox{0.7}{\xymatrix@!C=6ex@R=0.5ex{
(i\setminus p) & -2 & -1 & 0 & 1 & 2 & 3 & 4\\
1&&& \srt{1,3}\ar@{->}[dr]&& \srt{2,-3} \ar@{->}[dr] && \srt{1,-2} \\
2&& \srt{2,3} \ar@{=>}[dr]\ar@{->}[ur] && \srt{1,2} \ar@{=>}[dr] \ar@{->}[ur] && \srt{1,-3} \ar@{->}[ur] \\
3& \srt{3,3} \ar@{->}[ur] && \srt{2,2} \ar@{->}[ur] && \srt{1,1} \ar@{->}[ur] }}}
$$
Here $\langle a, \pm b\rangle \mathbin{:=} \epsilon_a \pm \epsilon_b$ for $1 \le a < b \le 3$ and $\langle a,a\rangle \mathbin{:=} 2\epsilon_a$ for $1 \le a \le 3$.
\end{example}
\noindent
Note the following:
\begin{eqnarray} &&
\parbox{80ex}{
\begin{enumerate}[{\rm (a)}]
\item The vertices in $\Gamma_{Q}$ are labeled by $\Phi^+_{\triangle}$ by the bijection $\phi_{Q}|_{(\Gamma_{Q})_0}$.
\item \label{it: reflection}
For a source $i$ of $Q$, $\Gamma_{s_iQ}$ can be obtained from $\Gamma_Q$ in the following way:
\begin{enumerate}[{\rm (i)}]
\item Each $\beta \in \Phi^+ \setminus \{ \alpha_i \}$ is located at $(j,p)$ in $\Gamma_{s_iQ}$, if $s_i\beta$ is at $(j,p)$ in $\Gamma_Q$.
\item $\alpha_i$ is located at the coordinate $(i^*,\xi_i-{\ms{1mu}\mathsf{h}})$ in $\Gamma_{s_iQ}$, while $\alpha_i$ was at $(i,\xi_i)$ in $\Gamma_Q$.
\end{enumerate}
\end{enumerate}
}\label{eq: Qd properties}
\end{eqnarray}
We refer to the operation in~\eqref{eq: Qd properties}~\eqref{it: reflection} as the \emph{reflection} from $\Gamma_Q$ to $\Gamma_{s_iQ}$.
Let $Q$ be a Dynkin quiver whose underlying diagram $\triangle$ is not of type $E$. We say an arrow $(i,p) \to (j,p+1)$ in $\Gamma_Q$ is a \emph{downward} (resp.\ \emph{upward}) arrow if $j > i$ (resp.\ $j<i$), identifying the vertices in $\triangle_0$ with integers as in \eqref{fig:Dynkin}.
For $\triangle$ of $E$-types, we give a total order $<$ on $\{ 1,2, \ldots, 8 \}$ as follows:
$$ 1 < 3 < 4 < 2 < 5 < 6 < 7 < 8 .$$
Then we can define a downward (resp.\ upward) arrow in $\Gamma_Q$ as in other cases (see Example~\ref{ex: AR Q ADE}~\eqref{it: E6} for $E_6$-case).
\begin{definition} \hfill
\begin{enumerate}[{\rm (i)}]
\item A \prq $\pair{\alpha,\beta} \in (\Phi^+)^2$ with $\phi_Q(i,p)=(\alpha,0)$ and $\phi_Q(j,s)=(\beta,0)$ is \emph{sectional in $\Gamma_Q$} if
$d(i,j)=p-s$.
\item A full subquiver $\rho$ of $\Gamma_Q$ is \emph{sectional} if every \prq $\pair{\alpha,\beta}$ in $\rho$ is sectional.
\item A connected full subquiver $\varrho$ in $\Gamma_Q$ is said to be \emph{$S$-sectional} (resp.\ \emph{$N$-sectional}) path if it is concatenation of downward (resp.\ upward) arrows.
\item An $S$-sectional (resp.\ $N$-sectional) path $\varrho$ is said to be an \emph{$S$-path} (resp.\ an \emph{$N$-path}) if there is no longer connected sectional path in $\Gamma_Q$ consisting of downward (resp.\ upward) arrows and containing $\varrho$.
\end{enumerate}
\end{definition}
\subsection{Classical AR-quiver $\Gamma_Q$} In this subsection, we review the labeling and the combinatorial properties of $\Gamma_Q$ for Dynkin quivers $Q$ of type $ADE$.
Throughout this subsection, $Q=(\triangle, \xi)$ denotes a Dynkin quiver of type $ADE$.
\begin{proposition}[\cite{B99,KKK13b}] \label{prop: range descrption}
\begin{enumerate}[{\rm (1)}]
\item Let $Q$ be a Dynkin quiver of $ADE$-type, and ${\ms{1mu}\mathsf{h}}$ the Coxeter number.
For $i \in I$, let
$$ r^Q_i \mathbin{:=} \dfrac{{\ms{1mu}\mathsf{h}} +{\rm a}^Q_i -{\rm b}^Q_i}{2}, $$
where ${\rm a}^Q_i$ is the number of arrows in $Q$ between $i$ and $i^*$ directed toward $i^*$, and
${\rm b}^Q_i$ is the number of arrows in $Q$ between $i$ and $i^*$ directed toward $i$ \ro so that ${\rm a}^Q_i -{\rm b}^Q_i=\xi_{i}-\xi_{i^*}$\rfm.
Then we have $ r^Q_i\in\mathbb{Z}_{\ge0}$ and
$$(\Gamma_Q)_0 = \{ (i,\xi_i-2k ) \ | \ 0 \le k < r_i^Q \}.$$
\item For $i , j \in \triangle_0$ and any $Q$, we have
\begin{align*}
(i,\xi_j-d(i,j)),\ (i,\xi_j-2r_j^Q+d(i,j))\in \Gamma_Q.\
\end{align*}
\end{enumerate}
\end{proposition}
Note that, if $Q$ is $\sigma$-fixed $(\sigma\not=\id)$, we have
\begin{align}\label{eq: riq fixed}
r_i^Q = {\ms{1mu}\mathsf{h}}/2 \quad \text{ for all $i$}.
\end{align}
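For instance, for the $\vee$-fixed Dynkin quiver $Q$ of type $D_5$ in Example~\ref{ex: label D} below, we have ${\ms{1mu}\mathsf{h}}=8$, so that \eqref{eq: riq fixed} gives $r^Q_i=4$ for every $i$; accordingly, each of the five rows of $\Gamma_Q$ contains exactly four vertices, $20=|\Phi^+_{D_5}|$ in total.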
\begin{proposition}[{\cite[Proposition 4.1]{Oh18}}] \label{prop: dir Q cnt}
Let $Q$ be a Dynkin quiver of type $ADE$
and let $\pair{\alpha,\beta}$ be a \prq sectional in $\Gamma_Q$. Then, we have
\begin{enumerate}
\item[{\rm (a)}] the \prq $\pair{\alpha,\beta}$ is $[Q]$-simple,
\item[{\rm (b)}] $ (\alpha , \beta)=1$,
\item[{\rm (c)}] $\alpha-\beta\in\Phi$,
\item[{\rm (d)}] either $\pair{\alpha,\beta-\alpha}$ is a $[Q]$-minimal \prq for $\beta$ or $\pair{\alpha-\beta,\beta}$ is a $[Q]$-minimal \prq for $\alpha$.
\end{enumerate}
\end{proposition}
\begin{definition} \label{def: Phi ijk}
For $i,j \in I$ and integer $k \in
\mathbb{Z}_{\ge 0}$, let $\Phi_Q(i,j)[k]$
be the set of \prs[{[Q]}] $\pair{\alpha,\beta}$
such that
$$ |p-s| = k \qt{ when $\{ \phi_{Q}(i,p), \phi_{Q}(j,s)\} = \{ (\alpha,0),(\beta,0) \}$. }$$
\end{definition}
\begin{lemma} [{\cite[Lemma 6.15]{Oh18}}] \label{lem: integer o}
For any $\pair{\alpha,\beta}$, $\pair{\alpha',\beta'} \in \Phi_Q(i,j)[k]$, we have
$$ {\operatorname{deg}}_{[Q]}(\pair{\alpha,\beta}) ={\operatorname{deg}}_{[Q]}(\pair{\alpha',\beta'}).$$
\end{lemma}
For $i,j\in I$ and $k\in\mathbb{Z}_{>0}$, we define $\mathtt{o}_k^{Q}(i,j)$
by
\eq\mathtt{o}_k^{Q}(i,j)\mathbin{:=}
\begin{cases}{\operatorname{deg}}_{[Q]}(\pair{\alpha,\beta})&\text{
if $\pair{\alpha,\beta} \in \Phi_Q(i,j)[k]$,}\\
0&\text{if $\Phi_Q(i,j)[k]=\emptyset$.}
\end{cases}\label{def:tto}\eneq
It is well-defined by the above lemma.
\begin{definition} \label{def: Distance 1}
Let $t$ be an indeterminate.
For $i,j \in I$ and a Dynkin quiver $Q$ of type $ADE$,
we define a polynomial $\mathfrak{d}_{i,j}^Q(t) \in \mathbb{Z}_{\ge0}[t]$ by
$$ \mathfrak{d}_{i,j}^Q(t) \mathbin{:=} \sum_{k \in \mathbb{Z}_{\ge 0} } {\mathtt{O}_k^{Q}(i,j)} t^{k-1},$$
where
\begin{align} \label{eq: ttO}
\mathtt{O}_k^{Q}(i,j) \mathbin{:=} \max(\mathtt{o}_k^{Q}(i,j), \mathtt{o}_k^{Q^*}(i,j)).
\end{align}
\end{definition}
\begin{proposition} [{\cite[Proposition 6.18]{Oh18}}] \label{prop: well-defined integer}
For any Dynkin quivers $Q$, $Q'$ of $\triangle$,
we have $$\mathtt{O}_k^{Q}(i,j) = \mathtt{O}_k^{Q'}(i,j).$$
Thus $\mathtt{O}_k^{\triangle}(i,j)$ is well-defined.
\end{proposition}
\begin{remark}
In \cite{Oh18}, $\mathtt{O}_k^{Q}(i,j)$ was defined as $\max(\mathtt{o}_k^{Q}(i,j), \mathtt{o}_k^{Q^{\rm rev}}(i,j))$, where $Q^{{\rm rev}}$
is a Dynkin quiver obtained from $Q$ by reversing all arrows. However, by applying the argument in \cite{Oh18}, one can check
$$\max(\mathtt{o}_k^{Q}(i,j), \mathtt{o}_k^{Q^{\rm rev}}(i,j)) = \max(\mathtt{o}_k^{Q}(i,j), \mathtt{o}_k^{Q^*}(i,j)).$$
\end{remark}
By Proposition~\ref{prop: well-defined integer}, we can replace the superscript $^Q$ in $\mathfrak{d}_{i,j}^Q(t)$ with $^\triangle$. We call
$\mathfrak{d}_{i,j}^\triangle(t)$ the \emph{degree polynomial of $\triangle$ at $(i,j)$}. For $k \in \mathbb{Z}$, we set
$$ \mathfrak{d}_{i,j}[k] \mathbin{:=} \mathtt{O}_k^\triangle(i,j).$$
In \S\,\ref{subsub:D} and \S\,\ref{subsub:A} below,
we recall the results in~\cite{Oh16,Oh17,Oh18}
on the combinatorial properties and statistics of AR-quivers of types $A_{n}$ and $D_n$, and prove that
$\widetilde{\de}_{i,j}^\triangle(t)=\mathfrak{d}_{i,j}^\triangle(t)+\delta_{i,j^*}t^{{\ms{1mu}\mathsf{h}}-1}$ in those types.
\subsubsection{$D_{n+1}$-case}\label{subsub:D} Let $Q$ be a Dynkin quiver of type $D_{n+1}$.
The simple roots $\{\alpha_i \ | \ 1 \le i \le n+1 \}$ and the positive roots $\Phi^+_{D_{n+1}}$ can be realized inside $\sfQ^+ \subset \mathop{\mbox{\normalsize$\bigoplus$}}\limits_{i=1}^{n+1} \mathbb{R}\epsilon_i$ as follows:
\begin{equation}\label{eq: PR D}
\begin{aligned}
& \alpha_i =\epsilon_i- \epsilon_{i+1} \quad \text{ for $i<n+1$} \quad\text{ and } \quad \alpha_{n+1} =\epsilon_n+\epsilon_{n+1}, \\
&\Phi_{D_{n+1}}^+ = \left\{ \epsilon_i -\epsilon_{j} = \sum_{i \le k <j} \alpha_k \ | \ 1 \le i < j \le n \right\} \\
&\quad \mathop{\mbox{\normalsize$\bigsqcup$}}\limits \left\{ \epsilon_i-\epsilon_{n+1} = \sum_{k=i}^{n} \alpha_k \ | \ 1 \le i \le n \right\} \mathop{\mbox{\normalsize$\bigsqcup$}}\limits \left\{ \epsilon_i+\epsilon_{n+1} = \sum_{k=i}^{n-1} \alpha_k + \alpha_{n+1} \ | \ 1 \le i \le n \right\} \\
& \quad\mathop{\mbox{\normalsize$\bigsqcup$}}\limits \left\{ \epsilon_i+\epsilon_{j} = \sum_{ i \le k <j} \alpha_k + 2\sum_{j \le k <n} \alpha_k +\big(\alpha_n +\alpha_{n+1} \big) \ | \ 1 \le i <j < n+1 \right\}.
\end{aligned}
\end{equation}
\begin{convention}
For $2 \le a \le n+1$, we write $\epsilon_{-a}$ to denote $-\epsilon_a$. For $\beta=\epsilon_a+\epsilon_b \in \Phi$ $(1 \le |a| < |b| \le n+1)$, we call $\epsilon_a$ and $\epsilon_b$ \emph{components} of $\beta$.
\end{convention}
\begin{remark} \label{rmk: analy roots D}
Note the followings:
\begin{enumerate}[{\rm (i)}]
\item For each $1 \le a \le n$, there are exactly $(2n+1-a)$-many positive roots having $\epsilon_a$ as a component.
\item For each $2 \le a \le n$, there are exactly $(a-1)$-many positive roots having $\epsilon_{-a}$ as a component.
\item For each choice of sign, there are exactly $n$-many positive roots having $\epsilon_{\pm (n+1)}$ as a component.
\end{enumerate}
\end{remark}
\begin{definition} \label{def: swing} A connected subquiver $\uprho$ in $\Gamma_Q$ is called a {\it swing} if it consists of vertices and arrows in the following way: There exist positive roots
$\alpha,\beta \in \Phi^+$ and $r,s \le n$ such that
$$\raisebox{2em}{\xymatrix@C=2.5ex@R=0.1ex{ &&&&\beta\ar@{->}[dr] \\S_r \ar@{->}[r] & S_{r+1} \ar@{->}[r] & \cdots \ar@{->}[r]& S_{n-1}
\ar@{->}[ur]\ar@{->}[dr] && N_{n-1} \ar@{->}[r]& N_{n-2} \ar@{->}[r] & \cdots \ar@{->}[r] & N_s,\\
&&&&\alpha\ar@{->}[ur] }} \text{ where }$$
\begin{enumerate}[{\rm (a)}]
\item $ \raisebox{1.8em}{\xymatrix@C=2.5ex@R=0.1ex{ &&&&\beta
\\ S_{r} \ar@{->}[r] & S_{r+1} \ar@{->}[r] & \cdots \ar@{->}[r]& S_{n-1} \ar@{->}[ur]\ar@{->}[dr] \\
&&&&\alpha}}$ is an $S$-path ($\phi^{-1}_{Q}(S_l,0)=(l,k)$ for some $k \in \mathbb{Z}$),
\item $\raisebox{1.8em}{\xymatrix@C=2.5ex@R=0.1ex{ \beta \ar@{->}[dr] \\
& N_{n-1} \ar@{->}[r]& N_{n-2} \ar@{->}[r] & \cdots \ar@{->}[r] & N_{s}\\
\alpha\ar@{->}[ur]}}$ is an $N$-path ($\phi^{-1}_{Q}(N_l,0)=(l,k')$ for some $k' \in \mathbb{Z}$),
\item $\phi^{-1}_{Q}(\beta,0)=(n,u)$ and $\phi^{-1}_{Q}(\alpha,0)=(n+1,u)$ for some $u \in \mathbb{Z}$
\end{enumerate}
(see Example~\ref{ex: label D} below for examples of swings).
\end{definition}
\begin{proposition} [{\cite[Theorem 2.21]{Oh16}}]\label{prop: label1}
For each swing $\uprho$ of $\Gamma_Q$, there exists a unique $1 \le a \le n$ satisfying the following properties$\colon$
\begin{enumerate}[{\rm (i)}]
\item Every vertex shares a component $\epsilon_a$.
\item It consists of exactly $(2n+1-a)$-many vertices.
\item Every positive root having $\epsilon_a$ as a component appears in the swing.
\item $\min(r,s)=1$, where $r$, $s$ are given in Definition~\ref{def: swing}.
\end{enumerate}
Thus we call the swing an $a$-swing and denote it by $\uprho_a$.
\end{proposition}
By the above proposition and Remark~\ref{rmk: analy roots D}, there are $n$ distinct swings in $\Gamma_Q$.
\begin{definition}
We say that an $S$-path (resp.\ $N$-path) is {\it shallow} if it ends (resp.\ starts) at residue less than $n$.
\end{definition}
\begin{proposition}[{\cite[Theorem 2.25]{Oh16}}]
\label{prop: label2} Let $\rho$ be a shallow $S$-path $($resp.\ $N$-path$)$. Then there exists a unique $2 \le a \le n$ such that
\begin{enumerate}[{\rm (i)}]
\item $\rho$ consists of $(a-1)$-many vertices,
\item all vertices in $\rho$ share $\epsilon_{-a}$ as their components.
\end{enumerate} Furthermore, there are exactly $(n-1)$-many shallow $X$-paths
$(X=S$ or $N)$ in $\Gamma_Q$. Thus we call the shallow $X$-path the
$(-a)$-shallow path and denote it by $\rho_{-a}$.
\end{proposition}
In this paper, we sometimes understand elements in $\mathsf{W}_{D_{n+1}}$ as automorphisms of the set $$ \{ \pm 1, \pm 2,\ldots,\pm (n+1) \},$$
i.e., $s_i\epsilon_{t}=\epsilon_{s_i(t)}$ for $t$ with $1 \le |t| \le n+1$.
Note that $s_ns_{n+1}= s_{n+1}s_n$, $s_ns_{n+1}(n)=-n$ and $s_ns_{n+1}(n+1)=-(n+1)$.
\begin{lemma} \label{lem: tauQ fixed D}
Let $Q=(\triangle,{}^\vee\hspace{-.4ex}\xi)$ be a $\vee$-fixed Dynkin quiver of type $D_{n+1}$.
\begin{enumerate}[{\rm (a)}]
\item If $\phi^{-1}_{Q}(\alpha,0)=(n,k)$ $($resp.\ $(n+1,k))$ for some $\alpha \in \Phi^+$, there exists $\beta \in \Phi^+$ such that $\phi^{-1}_{Q}(\beta,0)=(n+1,k)$ $($resp.\ $(n,k))$.
\item For $\alpha,\beta \in \Phi^+$ with $\phi^{-1}_{Q}(\alpha,0)=(n,k)$ and $\phi^{-1}_{Q}(\beta,0)=(n+1,k)$, there exists a unique $1 \le a\le n$ such that
$$ \{ \alpha,\beta\} = \{ \epsilon_a+\epsilon_{n+1}, \ \epsilon_a-\epsilon_{n+1} \}.$$
\item \label{it: fixed D} $\tau_Q( \pm (n+1)) = \mp (n+1)$.
\end{enumerate}
\end{lemma}
\begin{proof} The first assertion is \cite[Lemma 2.14]{Oh16}. Recall that any reduced expression of $\tau_Q$ is adapted to $Q$. Since $\xi_{n}=\xi_{n+1}$, there exists a reduced expression ${\underline{w}} = s_{i_1}\cdots s_{i_{n+1}}$ of $\tau_Q$ and $u \in [1,n]$ such that
$\{ i_{u}, i_{u+1} \} = \{ n,n+1\}$. Then we have
$$ \tau_Q(n+1) = s_{i_1}\cdots s_{i_{u-1}}s_{n}s_{n+1}(n+1) = s_{i_1}\cdots s_{i_{u-1}}(-(n+1)) =-(n+1),$$
as desired.
\end{proof}
Using the results above,
we can label $\Gamma_Q$ in terms of $\Phi^+$ without computing the
$\beta^{{\underline{w}}_0}_k$'s or the bijection $\phi_{Q}|_{\Gamma_Q}$.
\begin{example} \label{ex: label D}
Let us consider the following $\vee$-fixed Dynkin quiver $Q$ of type $D_5$:
$$Q \ =\raisebox{1.2em}{ \xymatrix@R=0.5ex@C=6ex{ &&&*{\circ}<3pt> \ar@{<-}[dl]^<{5 }_<{ \ \ \ _{\underline{0}}} \\
*{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{3}} \ \ }_<{1} & *{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{2}} \ \ }_<{2} & *{\circ}<3pt> \ar@{->}[ur] \ar@{->}[dr]^<{ _{\underline{1}} \ \ }_<{3} \\
&&&*{\circ}<3pt> \ar@{}[l]_>{ \qquad\qquad\qquad _{\underline{0}}}^<{ \ \ 4} }}$$
By~\eqref{eq: bijection}, the quiver $\Gamma_Q$ without labeling is given as follows:
\begin{align*}
\raisebox{3.2em}{ \scalebox{0.7}{\xymatrix@!C=2ex@R=2ex{
(i\setminus p) &-6 &-5 &-4 &-3 & -2 & -1 & 0 & 1 & 2 & 3 \\
1&&&& \circ \ar[dr] && \circ \ar[dr]&& \circ \ar[dr]&& \circ \\
2&&& \circ \ar[dr]\ar[ur] && \circ \ar[dr]\ar[ur] && \circ\ar[dr]\ar[ur] && \circ \ar[ur] \\
3&& \circ \ar[ddr] \ar[dr]\ar[ur] && \circ \ar[ddr]\ar[dr]\ar[ur] && \circ \ar[ddr]\ar[dr]\ar[ur]&& \circ \ar[ur] \\
4& \circ \ar[ur] && \circ \ar[ur]&& \circ \ar[ur]&& \circ \ar[ur] \\
5& \circ \ar[uur] && \circ \ar[uur]&& \circ \ar[uur] && \circ \ar[uur]
}}}
\end{align*}
Using (i) Proposition~\ref{prop: label1} and (ii) Lemma~\ref{lem: tauQ fixed D} and Proposition~\ref{prop: label2} in order, we can complete the labeling of $\Gamma_Q$
as follows:
\begin{align*}
{\rm (i)} \raisebox{4.1em}{ \scalebox{0.6}{\xymatrix@!C=1ex@R=2ex{
(i\setminus p) &-6 &-5 &-4 &-3 & -2 & -1 & 0 & 1 & 2 & 3 \\
1&&&& \epsilon_1^* +\epsilon_4^\dagger\ \ar[dr] &&\epsilon_3^\ddagger\ar[dr]&& \epsilon_2 ^\star\ar[dr]&&\epsilon_1^* \\
2&&& \epsilon_2 ^\star+\epsilon_4^\dagger\ar[dr]\ar[ur] &&\epsilon_1^*+\epsilon_3^\ddagger \ar[dr]\ar[ur] &&\epsilon_2 ^\star\ar[dr]\ar[ur] &&\epsilon_1^*\ar[ur] \\
3&&\epsilon_3^\ddagger+\epsilon_4^\dagger\ar[ddr] \ar[dr]\ar[ur] && \epsilon_2 ^\star+\epsilon_3^\ddagger\ar[ddr]\ar[dr]\ar[ur] &&\epsilon_1^*+\epsilon_2 ^\star\ar[ddr]\ar[dr]\ar[ur]&&\epsilon_1^*\ar[ur] \\
4& \epsilon_4^\dagger\ar[ur] &&\epsilon_3^\ddagger\ar[ur]&& \epsilon_2 ^\star\ar[ur]&&\epsilon_1^*\ar[ur] \\
5& \epsilon_4^\dagger \ar[uur] &&\epsilon_3^\ddagger \ar[uur]&& \epsilon_2 ^\star \ar[uur] &&\epsilon_1^* \ar[uur]
}}} \hspace{-5ex}\To \
{\rm (ii)} \hspace{-1ex} \raisebox{4.1em}{ \scalebox{0.6}{\xymatrix@!C=2ex@R=1.5ex{
(i\setminus p) &-6 &-5 &-4 &-3 & -2 & -1 & 0 & 1 & 2 & 3 \\
1&&&& \srt{1,4} \ar[dr] && \srt{3,{\color{magenta}-4}} \ar[dr]&& \srt{2,{\color{red}-3}} \ar[dr]&& \srt{1, {\color{blue}-2}} \\
2&&& \srt{2,4} \ar[dr]\ar[ur] && \srt{1,3} \ar[dr]\ar[ur] && \srt{2,{\color{magenta}-4}}\ar[dr]\ar[ur] && \srt{1,{\color{red}-3}} \ar[ur] \\
3&& \srt{3,4} \ar[ddr] \ar[dr]\ar[ur] && \srt{2,3} \ar[ddr]\ar[dr]\ar[ur] && \srt{1,2} \ar[ddr]\ar[dr]\ar[ur]&& \srt{1,{\color{magenta}-4}} \ar[ur] \\
4& \srt{4,5} \ar[ur] && \srt{3,-5} \ar[ur]&& \srt{2,5} \ar[ur]&& \srt{1,5} \ar[ur] \\
5& \srt{4,-5} \ar[uur] && \srt{3,5} \ar[uur]&& \srt{2,-5} \ar[uur] && \srt{1,-5} \ar[uur]
}}}
\end{align*}
Here, each symbol $^{\divideontimes}$ in (i) represents an $a$-swing in Proposition~\ref{prop: label1} and each color in (ii) represents a $(-a)$-shallow path in Proposition~\ref{prop: label2}.
\end{example}
Furthermore, in~\cite{Oh18}, the second named author proved
the following proposition by using
the labeling algorithm for $(\Gamma_Q)_0$ and considering all relative positions of $\pair{\alpha,\beta}$'s.
\begin{proposition}[{\cite[Proposition 4.5]{Oh18} $($see also \cite[Lemma 2.6]{BKM12}$)$}]
For any \pr $\underline{p}=\pair{\alpha,\beta}$ and any Dynkin quiver $Q$ of type $D$, we have the following:
\begin{enumerate}[{\rm (1)}]
\item ${\operatorname{hd}}_{[Q]}(\pair{\alpha,\beta})$ is well-defined. In particular, if $\alpha+\beta = \gamma \in \Phi^+$, then ${\operatorname{hd}}_{[Q]}(\pair{\alpha,\beta})=\gamma.$
\item ${\operatorname{deg}}_{[Q]}(\pair{\alpha,\beta}) \le 2$.
\end{enumerate}
\end{proposition}
The relative positions of all \prs $\pair{\alpha,\beta}$ with ${\operatorname{deg}}_Q(\pair{\alpha,\beta}) \ne 0$ were classified in \cite{Oh18}; they do not depend on the choice of $Q$ and are described as follows:
\begin{equation} \label{eq: dist al, be 1}
\begin{aligned}
& \scalebox{0.79}{{\xy
(-20,0)*{}="DL";(-10,-10)*{}="DD";(0,20)*{}="DT";(10,10)*{}="DR";
"DT"+(-30,-4); "DT"+(140,-4)**\dir{.};
"DD"+(-20,-6); "DD"+(150,-6) **\dir{.};
"DD"+(-20,-10); "DD"+(150,-10) **\dir{.};
"DT"+(-32,-4)*{\scriptstyle 1};
"DT"+(-32,-8)*{\scriptstyle 2};
"DT"+(-32,-12)*{\scriptstyle \vdots};
"DT"+(-32,-16)*{\scriptstyle \vdots};
"DT"+(-34,-36)*{\scriptstyle n};
"DT"+(-33,-40)*{\scriptstyle n+1};
"DL"+(-10,0); "DD"+(-10,0) **\dir{-};"DL"+(-10,0); "DT"+(-14,-4) **\dir{.};
"DT"+(-6,-4); "DR"+(-10,0) **\dir{.};"DR"+(-10,0); "DD"+(-10,0) **\dir{-};
"DL"+(-6,0)*{\scriptstyle \beta};
"DL"+(0,0)*{{\rm(i)}};
"DR"+(-14,0)*{\scriptstyle \alpha};
"DL"+(-10,0)*{\bullet};
"DR"+(-10,0)*{\bullet};
"DT"+(-6,-4); "DT"+(-14,-4) **\crv{"DT"+(-10,-2)};
"DT"+(-5.5,-6)*{{\scriptstyle(1,p) }};
"DT"+(-15,-6)*{{\scriptstyle(1,p-2) }};
"DT"+(-10,0)*{\scriptstyle 2};
"DD"+(-10,0)*{\bullet};
"DD"+(-10,-2)*{\scriptstyle \underline{s}=\alpha+\beta};
"DL"+(15,-3); "DD"+(15,-3) **\dir{-};
"DR"+(11,-7); "DD"+(15,-3) **\dir{-};
"DT"+(11,-7); "DR"+(11,-7) **\dir{-};
"DT"+(11,-7); "DL"+(15,-3) **\dir{-};
"DL"+(19,-3)*{\scriptstyle \beta};
"DR"+(7,-7)*{\scriptstyle \alpha};
"DL"+(30,0)*{{\rm(ii)}};
"DL"+(15,-3)*{\bullet};
"DL"+(31,12.5)*{\bullet};
"DL"+(31,14.5)*{\scriptstyle s_1};
"DL"+(25,-13)*{\bullet};
"DL"+(25,-15)*{\scriptstyle s_2};
"DR"+(11,-7)*{\bullet};
"DD"+(28,4); "DD"+(38,-6) **\dir{-};
"DD"+(28,4); "DD"+(50,26) **\dir{.};
"DD"+(48,4); "DD"+(38,-6) **\dir{-};
"DD"+(48,4); "DD"+(70,26) **\dir{.};
"DD"+(82,18); "DD"+(74,26) **\dir{.};
"DD"+(82,18); "DD"+(58,-6) **\dir{-};
"DD"+(48,4); "DD"+(58,-6) **\dir{-};
"DD"+(82,18)*{\bullet};
"DD"+(78,18)*{\scriptstyle \alpha};
"DD"+(28,4)*{\bullet};
"DD"+(32,4)*{\scriptstyle \beta};
"DD"+(74,26); "DD"+(70,26) **\crv{"DD"+(72,28)};
"DD"+(76,24)*{{\scriptstyle(1,p) }};
"DD"+(67,24)*{{\scriptstyle(1,p-2) }};
"DD"+(72,30)*{\scriptstyle 2};
"DD"+(38,14)*{\bullet};
"DD"+(45,14)*{\scriptstyle \underline{s}=\alpha+\beta};
"DL"+(59,0)*{{\rm(iii)}};
"DD"+(48,4); "DD"+(38,14) **\dir{.};
"DD"+(80,4); "DD"+(90,-6) **\dir{-};
"DD"+(96,0); "DD"+(90,-6) **\dir{-};
"DD"+(96,0); "DD"+(102,-6) **\dir{-};
"DD"+(122,16); "DD"+(102,-6) **\dir{-};
"DD"+(80,4); "DD"+(102,26) **\dir{.};
"DD"+(122,16); "DD"+(112,26) **\dir{.};
"DD"+(96,0); "DD"+(117,21) **\dir{.};
"DD"+(96,0); "DD"+(86,10) **\dir{.};
"DD"+(117,21)*{\bullet};
"DD"+(114,21)*{\scriptstyle s_1};
"DD"+(86,10)*{\bullet};
"DD"+(89,10)*{\scriptstyle s_2};
"DD"+(102,26); "DD"+(112,26) **\crv{"DD"+(107,28)};
"DD"+(113,24)*{{\scriptstyle(1,p) }};
"DD"+(102,24)*{{\scriptstyle(1,p-l) }};
"DD"+(107,29)*{\scriptstyle l >2};
"DD"+(80,4)*{\bullet};
"DL"+(107,0)*{{\rm (iv)}};
"DD"+(84,4)*{\scriptstyle \beta};
"DD"+(122,16)*{\bullet};
"DD"+(117,16)*{\scriptstyle \alpha};
"DD"+(119,4); "DD"+(129,-6) **\dir{-};
"DD"+(135,0); "DD"+(129,-6) **\dir{-};
"DD"+(119,4); "DD"+(141,26) **\dir{-};
"DD"+(154,19); "DD"+(147,26) **\dir{-};
"DD"+(135,0); "DD"+(154,19) **\dir{-};
"DD"+(119,4)*{\bullet};
"DL"+(146,0)*{{\rm (v)}};
"DD"+(123,4)*{\scriptstyle \beta};
"DD"+(141,26); "DD"+(147,26) **\crv{"DD"+(144,28)};
"DD"+(148,24)*{{\scriptstyle(1,p) }};
"DD"+(139,24)*{{\scriptstyle(1,p-2) }};
"DD"+(154,19)*{\bullet};
"DD"+(154,17)*{\scriptstyle \alpha};
"DD"+(129,-10)*{\bullet};
"DD"+(129,-8)*{\scriptstyle s_1};
"DD"+(129,-6)*{\bullet};
"DD"+(129,-4)*{\scriptstyle s_2};
"DD"+(144,29)*{\scriptstyle 2};
\endxy}}
\allowdisplaybreaks \\
& \scalebox{0.79}{{\xy
(-20,0)*{}="DL";(-10,-10)*{}="DD";(0,20)*{}="DT";(10,10)*{}="DR";
"DT"+(-30,-4); "DT"+(120,-4)**\dir{.};
"DD"+(-20,-6); "DD"+(130,-6) **\dir{.};
"DD"+(-20,-10); "DD"+(130,-10) **\dir{.};
"DT"+(-34,-4)*{\scriptstyle 1};
"DT"+(-34,-8)*{\scriptstyle 2};
"DT"+(-34,-12)*{\scriptstyle \vdots};
"DT"+(-34,-16)*{\scriptstyle \vdots};
"DT"+(-36,-36)*{\scriptstyle n};
"DT"+(-34,-40)*{\scriptstyle n+1};
"DD"+(-17,4); "DD"+(-7,-6) **\dir{-};
"DD"+(-1,0); "DD"+(-7,-6) **\dir{-};
"DD"+(-17,4); "DD"+(2,23) **\dir{-};
"DD"+(12,13); "DD"+(2,23) **\dir{-};
"DD"+(-1,0); "DD"+(12,13) **\dir{-};
"DD"+(-17,4)*{\bullet};
"DD"+(2,23)*{\bullet};
"DL"+(10,0)*{{\rm (vi)}};
"DD"+(-13,4)*{\scriptstyle \beta};
"DD"+(2,25)*{\scriptstyle s_3};
"DD"+(12,13)*{\bullet};
"DD"+(12,11)*{\scriptstyle \alpha};
"DD"+(-7,-10)*{\bullet};
"DD"+(-7,-8)*{\scriptstyle s_1};
"DD"+(-7,-6)*{\bullet};
"DD"+(-7,-4)*{\scriptstyle s_2};
"DD"+(15,4); "DD"+(25,-6) **\dir{-};
"DD"+(15,4); "DD"+(5,-6) **\dir{-};
"DD"+(15,4); "DD"+(30,19) **\dir{.};
"DD"+(40,9);"DD"+(25,-6) **\dir{-};
"DD"+(40,9);"DD"+(30,19) **\dir{.};
"DD"+(40,9)*{\bullet};
"DD"+(30,19)*{\bullet};
"DD"+(30,21)*{\scriptstyle s_1};
"DD"+(5,-6)*{\circ};
"DD"+(5,-10)*{\circ};
"DD"+(5,-8)*{\scriptstyle \beta};
"DD"+(25,-6.2)*{\circ};
"DD"+(25,-8.2)*{\scriptstyle s_2};
"DD"+(25,-10.2)*{\circ};
"DD"+(37,9)*{\scriptstyle \alpha};
"DD"+(25,4)*{{\rm (vii)}};
"DD"+(45,4); "DD"+(55,-6) **\dir{-};
"DD"+(45,4); "DD"+(35,-6) **\dir{-};
"DD"+(45,4); "DD"+(67,26) **\dir{.};
"DD"+(80,19);"DD"+(55,-6) **\dir{-};
"DD"+(80,19);"DD"+(73,26) **\dir{.};
"DD"+(80,19)*{\bullet};
"DD"+(55,4)*{{\rm (viii)}};
"DD"+(84,19)*{\scriptstyle \alpha};
"DD"+(35,-6)*{\circ};
"DD"+(35,-10)*{\circ};
"DD"+(55,-6.2)*{\circ};
"DD"+(55,-8.2)*{\scriptstyle \underline{s}=\alpha+\beta};
"DD"+(55,-10.2)*{\circ};
"DD"+(35,-8)*{\scriptstyle \beta};
"DD"+(67,26); "DD"+(73,26) **\crv{"DD"+(70,28)};
"DD"+(74,24)*{{\scriptstyle(1,p) }};
"DD"+(66,24)*{{\scriptstyle(1,p-2) }};
"DD"+(70,30)*{\scriptstyle 2};
"DD"+(92,16); "DD"+(114,-6) **\dir{-};
"DD"+(92,16); "DD"+(70,-6) **\dir{-};
"DD"+(92,4)*{ {\rm (ix)}};
"DD"+(114,-6)*{\circ};
"DD"+(114,-10)*{\circ};
"DD"+(114,-8)*{\scriptstyle \alpha};
"DD"+(70,-6)*{\circ};
"DD"+(92,16)*{\bullet};
"DD"+(92,18)*{\scriptstyle \underline{s}=\alpha+\beta};
"DD"+(70,-10)*{\circ};
"DD"+(70,-8)*{\scriptstyle \beta};
\endxy}} \hspace{-3em} {\operatorname{deg}}_Q(\pair{\alpha,\beta})=1,
\end{aligned}
\end{equation}
and
\begin{align} \label{eq: dist al be 2}
\scalebox{0.79}{{\xy
(-20,0)*{}="DL";(-10,-10)*{}="DD";(0,20)*{}="DT";(10,10)*{}="DR";
"DT"+(-30,-4); "DT"+(70,-4)**\dir{.};
"DD"+(-20,-6); "DD"+(80,-6) **\dir{.};
"DD"+(-20,-10); "DD"+(80,-10) **\dir{.};
"DT"+(-32,-4)*{\scriptstyle 1};
"DT"+(-32,-8)*{\scriptstyle 2};
"DT"+(-32,-12)*{\scriptstyle \vdots};
"DT"+(-32,-16)*{\scriptstyle \vdots};
"DT"+(-33,-36)*{\scriptstyle n};
"DT"+(-34,-40)*{\scriptstyle n+1};
"DD"+(-10,4); "DD"+(0,-6) **\dir{-};
"DD"+(6,0); "DD"+(0,-6) **\dir{-};
"DD"+(6,0); "DD"+(12,-6) **\dir{-};
"DD"+(32,16); "DD"+(12,-6) **\dir{-};
"DD"+(-10,4); "DD"+(12,26) **\dir{.};
"DD"+(32,16); "DD"+(22,26) **\dir{.};
"DD"+(6,0); "DD"+(27,21) **\dir{.};
"DD"+(6,0); "DD"+(-4,10) **\dir{.};
"DD"+(12,26); "DD"+(22,26) **\crv{"DD"+(17,28)};
"DD"+(23,24)*{{\scriptstyle(1,p) }};
"DD"+(11,24)*{{\scriptstyle(1,p-2) }};
"DD"+(17,30)*{\scriptstyle 2};
"DD"+(-10,4)*{\bullet};
"DL"+(17,0)*{{\rm(x)}};
"DD"+(-6,4)*{\scriptstyle \beta};
"DD"+(-4,12)*{\scriptstyle \eta_2};
"DD"+(32,16)*{\bullet};
"DD"+(27,16)*{\scriptstyle \alpha};
"DD"+(28,23)*{\scriptstyle \eta_1};
"DD"+(27,21)*{\scriptstyle \bullet};
"DT"+(2,-36)*{\scriptstyle \bullet};
"DT"+(2,-38)*{\scriptstyle \tau_1};
"DT"+(2,-40)*{\scriptstyle \bullet};
"DT"+(2,-42)*{\scriptstyle \tau_2};
"DT"+(-10,-36)*{\scriptstyle \bullet};
"DT"+(-10,-38)*{\scriptstyle \zeta_1};
"DT"+(-10,-40)*{\scriptstyle \bullet};
"DT"+(-10,-42)*{\scriptstyle \zeta_2};
"DT"+(-4,-30)*{\scriptstyle \bullet};
"DT"+(-4,-28)*{\scriptstyle \underline{s}=\alpha+\beta};
"DD"+(35,4); "DD"+(45,-6) **\dir{-};
"DD"+(51,0); "DD"+(45,-6) **\dir{-};
"DD"+(51,0); "DD"+(57,-6) **\dir{-};
"DD"+(69,8); "DD"+(57,-6) **\dir{-};
"DD"+(35,4); "DD"+(54,23) **\dir{.};
"DD"+(69,8); "DD"+(54,23) **\dir{.};
"DD"+(51,0); "DD"+(64,13) **\dir{.};
"DD"+(51,0); "DD"+(41,10) **\dir{.};
"DD"+(35,4)*{\bullet};
"DL"+(62,0)*{{\rm(xi)}};
"DD"+(39,4)*{\scriptstyle \beta};
"DD"+(41,10)*{\scriptstyle \bullet};
"DD"+(45,10)*{\scriptstyle \eta_2};
"DD"+(69,8)*{\bullet};
"DD"+(64,12.5)*{\bullet};
"DD"+(61,12.5)*{\scriptstyle \eta_1};
"DD"+(65,8)*{\scriptstyle \alpha};
"DT"+(47,-36)*{\scriptstyle \bullet};
"DT"+(47,-38)*{\scriptstyle \tau_1};
"DT"+(35,-36)*{\scriptstyle \bullet};
"DT"+(35,-38)*{\scriptstyle \zeta_1};
"DT"+(47,-40)*{\scriptstyle \bullet};
"DT"+(47,-42)*{\scriptstyle \tau_2};
"DT"+(35,-40)*{\scriptstyle \bullet};
"DT"+(35,-42)*{\scriptstyle \zeta_2};
"DT"+(41,-32)*{\scriptstyle s_2};
"DT"+(44,-5)*{\scriptstyle s_1};
"DT"+(44,-7)*{\scriptstyle \bullet};
"DT"+(41,-30)*{\scriptstyle \bullet};
\endxy}} \ {\operatorname{deg}}_Q(\pair{\alpha,\beta})=2.
\end{align}
Here we only exhibit the cases when $\mathrm{res}^{[Q]}(\alpha) \le \mathrm{res}^{[Q]}(\beta)$ and
\begin{enumerate}[{\rm (1)}]
\item we set $\underline{s}= \seq{s_1,\ldots,s_r}= {\operatorname{hd}}_Q(\pair{\alpha,\beta})$,
\item
$\beta$ in (vii), (viii) and (ix) is one of two vertices $\circ$ and then $s_2$ in (vii) (resp.\ $\alpha+\beta$ in (viii), $\alpha$ in (ix)) is determined by
the choice,
\item every \ex ${\underline{m}} \prec_Q^\mathtt{b} \pair{\alpha,\beta}$ is also described in the above pictures ($\prec_Q \mathbin{:=} \prec_{[Q]}$),
\item in~\eqref{eq: dist al, be 1}, the positive roots with residue $n$ or $n+1$ appear in (v), (vi), (vii), (viii), (ix), (x) and (xi),
\item see Example~\ref{ex: Uding D} below for $ {\operatorname{deg}}_Q(\pair{\alpha,\beta})=2$ cases.
\end{enumerate}
In each case, the value $(\alpha,\beta)$ does not depend on the choice of $Q$ and is given as follows:
\begin{table}[h]
\centering
{ \arraycolsep=1.6pt\def1.5{1.5}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|}
\hline
& (i) & (ii) & (iii) & (iv) & (v) & (vi) & (vii) & (viii) & (ix) & (x) & (xi) \\ \hline
$(\alpha,\beta)$ & $-1$ & $0$ & $-1$ & $0$ & $0$ & $1$ & $0$ & $-1$ & $-1$ & $-1$ & $0$ \\ \hline
\end{tabular}
}\\[1.5ex]
\caption{$(\alpha,\beta)$ for non $[Q]$-simple \prs of type $D_{n+1}$}
\protect\label{table: al,be D}
\end{table}
\begin{example} \label{ex: Uding D}
Using $\Gamma_Q$ in Example~\ref{ex: label D},
\begin{enumerate}[{\rm (1)}]
\item $ \srt{1,-4} \prec_Q^\mathtt{b} \pair{\srt{1,-2},\srt{2,-4}}$ corresponds to (i),
\item $ \pair{\srt{2,-3},\srt{1,-4}} \prec_Q^\mathtt{b} \pair{\srt{1,-3},\srt{2,-4}}$ corresponds to (ii),
\item $ \srt{1,4} \prec_Q^\mathtt{b} \pair{\srt{1,-2},\srt{2,4}}$ corresponds to (iii),
\item $ \pair{\srt{1,4},\srt{2,-3}} \prec_Q^\mathtt{b} \pair{\srt{1,-3},\srt{2,4}}$ corresponds to (iv),
\item $ \pair{\srt{1,5},\srt{1,-5}} \prec_Q^\mathtt{b} \pair{\srt{1,-2},\srt{1,2}}$ corresponds to (v),
\item $ \seq{\srt{2,-3} ,\srt{1,5},\srt{1,-5}} \prec_Q^\mathtt{b} \pair{\srt{1,-3},\srt{1,2}}$ corresponds to (vi),
\item $ \pair{\srt{3,-4}, \srt{1,5}} \prec_Q^\mathtt{b} \pair{\srt{1,-4},\srt{3,5}}$ and $ \pair{\srt{3,-4} , \srt{1,-5}} \prec_Q^\mathtt{b} \pair{\srt{1,-4},\srt{3,-5}}$ correspond to (vii),
\item $ \srt{1,5} \prec_Q^\mathtt{b} \pair{\srt{1,-3},\srt{3,5}}$ and $ \srt{1,-5} \prec_Q^\mathtt{b} \pair{\srt{1,-3},\srt{3,-5}}$ correspond to (viii),
\item $ \srt{1,3} \prec_Q^\mathtt{b} \pair{\srt{1,5},\srt{3,-5}}$ and $ \srt{1,3} \prec_Q^\mathtt{b} \pair{\srt{1,-5},\srt{3,5}}$ correspond to (ix),
\item $ \srt{1,2} \prec_Q^\mathtt{b} \pair{\srt{1,-5},\srt{2,5}}, \pair{\srt{1,5},\srt{2,-5}} ,\pair{\srt{1,3},\srt{2,-3}} \prec_Q^\mathtt{b} \pair{\srt{1,-3},\srt{2,3}}$ corresponds to (x),
\item $ \pair{\srt{1,2},\srt{3,-4}} \prec_Q^\mathtt{b} \begin{matrix} \seq{\srt{1,-5},\srt{2,5},\srt{3,-4}} \\ \seq{\srt{1,5},\srt{2,-5},\srt{3,-4}} \\ \pair{\srt{1,3},\srt{2,-4}} \end{matrix} \prec_Q^\mathtt{b} \pair{\srt{1,-4},\srt{2,3}}$ corresponds to (xi).
\end{enumerate}
\end{example}
\smallskip
For $\triangle$ of type $D_{n+1}$, the degree polynomials $\mathfrak{d}_{i,j}^{D_{n+1}}(t)$ for $i,j \in \triangle_0$ are given in the following explicit form \cite{Oh16,Oh18,Fuj19}:
\begin{align}\label{eq: d poly D}
\mathfrak{d}_{i,j}^{D_{n+1}}(t) + \delta_{i,j^*}t^{{\ms{1mu}\mathsf{h}}-1} = \begin{cases}
\dfrac{\displaystyle \sum_{s=1}^{\min(i,j)} (t^{|i-j|+2s-1}+t^{2n-i-j+2s-1})}{1+ \delta(\max(i,j) \ge n)} & \text{ if } \min(i,j)<n,\\[3ex]
\displaystyle \sum_{s=1}^{\lfloor (n+ \delta_{i,j}) /2 \rfloor} t^{4s-1 -2\delta_{i,j}} & \text{ otherwise.}
\end{cases}
\end{align}
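For the reader's convenience, here is a sample evaluation of~\eqref{eq: d poly D} (added purely for illustration). Take $\triangle$ of type $D_5$, so that $n=4$, ${\ms{1mu}\mathsf{h}}=8$, and the involution $*$ exchanges the two spin nodes $4$ and $5$. Then
\begin{align*}
\mathfrak{d}_{1,2}^{D_5}(t)= t^{2}+t^{6}, \qquad
\mathfrak{d}_{1,4}^{D_5}(t)= \frac{t^{4}+t^{4}}{2}=t^{4}, \qquad
\mathfrak{d}_{4,5}^{D_5}(t)= (t^{3}+t^{7})-t^{{\ms{1mu}\mathsf{h}}-1}=t^{3},
\end{align*}
where the denominator $2$ appears in the second case since $\max(1,4)\ge n$, and the term $t^{{\ms{1mu}\mathsf{h}}-1}$ is subtracted in the last case since $5^{*}=4$.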
\begin{remark} \label{remark: dist 2}
The coefficient of $t^{s-1}$ in $\mathfrak{d}_{i,j}^\triangle(t)$ is $2$ if and only if
\begin{align} \label{eq: double pole}
2 \le i,j \le n-1, \ \ i+j \ge n+1, \ \ {\ms{1mu}\mathsf{h}}+2 -i-j \le s \le i+j \ \text{ and } \ s\equiv_2 i+j
\end{align}
(see \cite[Lemma 3.2.4]{KKK13b} also).
Equivalently, if ${\operatorname{deg}}_Q(\pair{\alpha,\beta})=2$, the relative positions of the \prs $\pair{\alpha,\beta}$ in $\Gamma_Q$ are described as in~\eqref{eq: dist al be 2},
where
\begin{enumerate}[{\rm (i)}]
\item[{\rm (x)}] happens when $\alpha+\beta=\gamma=\underline{s}$ with ${\rm mul}(\gamma)=2$ and we have
$$ \gamma \prec^\mathtt{b}_Q \pair{\eta_1,\eta_2}, \pair{\tilde{\tau}_1,\tilde{\zeta}_1}, \pair{\tilde{\tau}_2,\tilde{\zeta}_2} \prec^\mathtt{b}_{Q} \pair{\alpha,\beta} $$
where
$\{ \tilde{\tau}_1,\tilde{\tau}_2\} =\{\tau_1,\tau_2 \}$, $\{ \tilde{\zeta}_1,\tilde{\zeta}_2\} =\{\zeta_1,\zeta_2 \}$ and $\tilde{\tau}_i+\tilde{\zeta}_i =\gamma$ $(i=1,2)$,
\item[{\rm (xi)}] happens when ${\operatorname{hd}}_Q(\pair{\alpha,\beta}) = \pair{\gamma_1,\gamma_2}$ and we have
$$ \pair{ \gamma_1,\gamma_2} \prec^\mathtt{b}_Q \pair{\eta_1,\eta_2}, \seq{\tilde{\tau}_1,\tilde{\zeta}_1,\gamma_1}, \seq{\tilde{\tau}_2,\tilde{\zeta}_2,\gamma_1} \prec^\mathtt{b}_{Q} \pair{\alpha,\beta} $$
where
$\{ \tilde{\tau}_1,\tilde{\tau}_2\} =\{\tau_1,\tau_2 \}$, $\{ \tilde{\zeta}_1,\tilde{\zeta}_2\} =\{\zeta_1,\zeta_2 \}$ and $\tilde{\tau}_i+\tilde{\zeta}_i =\gamma_1$ $(i=1,2)$.
\end{enumerate}
\end{remark}
\subsubsection{$A_{2n-1}$-case} \label{subsub:A} Let $Q$ be a Dynkin quiver of type $A_{2n-1}$.
The simple roots and $\Phi^+_{A_{2n-1}}$ can be identified in $\sfQ^+ \subset \mathop{\mbox{\normalsize$\bigoplus$}}\limits_{i=1}^{2n} \mathbb{Z}\epsilon_i$ as follows:
\begin{equation}\label{eq: PR A}
\begin{aligned}
& \alpha_i =\epsilon_i- \epsilon_{i+1} \quad \text{ for $1 \le i \le 2n-1$}, \\
&\Phi_{A_{2n-1}}^+ = \left\{ \epsilon_i -\epsilon_{j} = \sum_{ i \le k <j} \alpha_k \ \left| \ 1 \le i < j \le 2n \right. \right\}.
\end{aligned}
\end{equation}
\begin{remark} \label{rmk: vee fixed A}
Note that for $\vee$-fixed Dynkin quiver $Q=(\triangle, {}^\vee\hspace{-.4ex}\xi)$ of type $A_{2n-1}$,
\begin{enumerate}[{\rm (1)}]
\item there is no path $\mathtt{p}$ in $Q$ passing through $n$,
\item $n$ is always a sink or a source of $Q$,
\end{enumerate}
since ${}^\vee\hspace{-.4ex}\xi_{n-1}={}^\vee\hspace{-.4ex}\xi_{n+1}$ and $|{}^\vee\hspace{-.4ex}\xi_n - {}^\vee\hspace{-.4ex}\xi_{n \pm 1}|=1$.
\end{remark}
We identify $\beta \in \Phi^+_{A_{2n-1}}$ with a \emph{segment} $[a,b]$ for $1 \le a \le b \le 2n-1$, where $\beta = \sum_{k=a}^b \alpha_k$.
For $[a,b]$, we call $a$ the \emph{first component} and $b$ the \emph{second component} of $\beta$. If $\beta$ is simple,
we sometimes write $\beta$ as $[a]$ instead of $[a,a]$.
\begin{proposition}\cite[Theorem 1.11]{Oh17} \label{prop :labeling A} \hfill
\begin{enumerate}[{\rm (1)}]
\item Every positive root in an $N$-path shares the same first component
and every positive root in an $S$-path shares the same second component.
\item For each $1\le i \le 2n-1$, there is exactly one $N$-path $\varrho^N_i$ containing $(2n-i)$-many vertices, and each vertex in
$\varrho^N_i$ has $i$ as its first component. Conversely, every positive root with first component $i$ appears in $\varrho^N_i$.
\item For each $1\le i \le 2n-1$, there is exactly one $S$-path $\varrho^S_i$ containing $i$-many vertices, and each vertex in
$\varrho^S_i$ has $i$ as its second component. Conversely, every positive root with second component $i$ appears in $\varrho^S_i$.
\end{enumerate}
\end{proposition}
As in the $D_{n+1}$-case, we can label $\Gamma_Q$ in
terms of $\Phi^+$ without computing the $\beta^{{\underline{w}}_0}_k$'s or the
bijection $\phi_{Q}|_{\Gamma_Q}$ by using the result above, and we obtain the following proposition:
\begin{proposition} \cite[Proposition 4.5]{Oh18} $($see also~\cite[Lemma 2.6]{BKM12}$)$
For any \pr $\underline{p}=\pair{\alpha,\beta}$ and any Dynkin quiver $Q$ of type $A_n$, we have the following:
\begin{enumerate}[{\rm (1)}]
\item ${\operatorname{hd}}_{[Q]}(\pair{\alpha,\beta})$ is well-defined. In particular, if $\alpha+\beta = \gamma \in \Phi^+$, then ${\operatorname{hd}}_{[Q]}(\pair{\alpha,\beta}) =\gamma.$
\item ${\operatorname{deg}}_{[Q]}(\pair{\alpha,\beta}) \le 1$.
\end{enumerate}
\end{proposition}
Understanding the non-trivial Dynkin diagram automorphism $\vee$ on
$A_{2n-1}$ as an involution on $\{ 1,\ldots,2n\}$, $\vee$ normalizes
$\mathsf{W}_{A_{2n-1}}$ via $\vee s_i \vee = s_{\vee(i)}$.
\begin{lemma} \label{lem: vee comuute A}
For $\vee$-fixed Dynkin quiver $Q=(\triangle,{}^\vee\hspace{-.4ex}\xi)$ of type $A_{2n-1}$ and $i \in \triangle_0$, we have
$$ \gamma_{\vee(i)}^Q = \vee(\gamma^Q_i).$$
Here we understand $\vee$ as an involution of $\Phi^+_{A_{2n-1}}$.
\end{lemma}
\begin{proof}
Recall $B^Q(i)$ in~\eqref{eq: gaQ}.
By the fact that ${}^\vee\hspace{-.4ex}\xi_i = {}^\vee\hspace{-.4ex}\xi_{\vee(i)}$ and Remark~\ref{rmk: vee fixed A},
if there exists a path from $j$ to $i$ in $Q$ for $i,j\le n$, then there exists a path from $2n-j$ to $2n-i$ in $Q$.
Hence $j \in B^Q(i) \iff 2n-j \in B^Q(2n-i)$ for $i,j \le n$ and our assertion follows from~\eqref{eq: gaQ}.
\end{proof}
For $\triangle$ of type $A_{2n-1}$ with ${\ms{1mu}\mathsf{h}}=2n$, the degree polynomials $\mathfrak{d}_{i,j}^{A_{2n-1}}(t)$ for $i,j \in \triangle_0$ are given in the following explicit form \cite{Oh16,Oh18,Fuj19}:
\begin{align*}
\mathfrak{d}_{i,j}^{A_{2n-1}}(t) + \delta_{i,j^*}t^{{\ms{1mu}\mathsf{h}}-1} = \sum_{s=1}^{\min(i,j,2n-i,2n-j)}t^{|i-j|+2s-1}.
\end{align*}
Note that every non-zero coefficient of $\mathfrak{d}_{i,j}^{A_{2n-1}}(t)$ is $1$.
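For instance (an illustrative evaluation; recall that $i^{*}=2n-i$ in type $A_{2n-1}$), for $A_5$ one has ${\ms{1mu}\mathsf{h}}=6$ and
\begin{align*}
\mathfrak{d}_{2,3}^{A_{5}}(t) = t^{2}+t^{4}, \qquad
\mathfrak{d}_{1,5}^{A_{5}}(t) = t^{5}-\delta_{1,5^{*}}\,t^{5} = 0,
\end{align*}
since $\min(2,3,4,3)=2$ and $3^{*}=3$ in the first case, while $\min(1,5,5,1)=1$ and $5^{*}=1$ in the second.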
\subsection{Relation between $\mathfrak{d}_{i,j}(t)$ and $\widetilde{\de}_{i,j}(t)$} For a Dynkin diagram $\triangle$ of type $A_{n}$, $D_{n}$ and $E_6$, we have the following theorem:
\begin{theorem}[\cite{Oh18}] \label{thm: ADE6 de=tde}
For $\triangle$ of type $A_{n}$, $D_{n}$ and $E_6$, and $i,j \in I$, we have
\begin{align} \label{eq: de=tde}
\widetilde{\de}_{i,j}(t) = \mathfrak{d}_{i,j}(t) +\delta_{i,j^*} t^{{\ms{1mu}\mathsf{h}}-1}.
\end{align}
\end{theorem}
\begin{remark} \label{rmk: remained type}
In Section~\ref{sec: Degree poly}, we will see that~\eqref{eq: de=tde} also holds for $B_n$ and $C_n$ (see Theorem~\ref{thm:Main}). However, in types $E_7,E_8$, $F_4$ and $G_2$,
\eqref{eq: de=tde} does not hold (see Remark~\ref{rmk: F4 do not hold} and Remark~\ref{rmk: G2 do not hold} below for $F_4$ and $G_2$).
\end{remark}
\section{Labeling of AR-quivers in type BCFG} \label{sec: Labeling BCFG}
For a Lie algebra $\mathsf{g}$ of simply-laced type associated to
$\triangle$ and a Dynkin diagram automorphism $\upsigma\; (\ne {\rm id})$
on $\triangle_\mathsf{g}$, we denote by ${}^\upsigma \hspace{-.4ex}\sfg$ the Lie subalgebra of $\mathsf{g}$
of non-simply-laced type \cite[Proposition 7.9]{Kac}:
\begin{table}[h]
\centering
{ \arraycolsep=1.6pt\def1.5{1.5}
\begin{tabular}{|c||c|c|c|c|}
\hline
$(\triangle, \upsigma)$ & $(\triangle_{A_{2n-1}}, \vee)$& $(\triangle_{D_{n+1}}, \vee)$ & $(\triangle_{E_{6}}, \vee)$ & $(\triangle_{D_{4}}, \widetilde{\vee})$ \\ \hline
Type of ${}^\upsigma \hspace{-.4ex}\sfg$ & $C_n $ & $B_n$ & $F_4$ & $G_2$ \\ \hline
\end{tabular}
}\\[1.5ex]
\caption{${}^\upsigma \hspace{-.4ex}\sfg$ for each $(\triangle,\upsigma)$ with $\upsigma \ne {\rm id}$}
\protect\label{g sigma}
\end{table}
Note that
the index set ${}^\upsigma \hspace{-.4ex}I$ for ${}^\upsigma \hspace{-.4ex}\sfg$ can be identified with the image of
$I_\mathsf{g}$ under the following surjection $\overline{\upsigma}$: For
${}^\upsigma \hspace{-.4ex}\sfg$ of type $B_n$ or $C_n$,
$$
\overline{\upsigma}(i) =
\begin{cases}
n & \text{ if ${}^\upsigma \hspace{-.4ex}\sfg$ is of type $B_n$ and $i =n+1$}, \\
2n-i & \text{ if ${}^\upsigma \hspace{-.4ex}\sfg$ is of type $C_n$ and $i \ge n+1$}, \\
i & \text{ otherwise},
\end{cases}
$$
and
\begin{align*}
&\overline{\upsigma}(1) = \overline{\upsigma}(6) =4, \quad \overline{\upsigma}(3) =\overline{\upsigma}(5) = 3, \quad \overline{\upsigma}(4) = 2, \quad \overline{\upsigma}(2) =1, &&\text{ if ${}^\upsigma \hspace{-.4ex}\sfg$ is of type $F_4$,} \\
&\overline{\upsigma}(1) = \overline{\upsigma}(3) =\overline{\upsigma}(4) =2, \qquad \overline{\upsigma}(2) =1, &&\text{ if ${}^\upsigma \hspace{-.4ex}\sfg$ is of type $G_2$.}
\end{align*}
Then, for a $\upsigma$-fixed Dynkin quiver $Q=(\triangle_{\mathsf{g}}, {}^\upsigma\hspace{-.4ex}\xi)$,
we can obtain a Dynkin quiver $\overline{Q}=(\triangle_{{}^\upsigma \hspace{-.4ex}\sfg}, \overline{\xi})$ such that
\begin{align}\label{eq: oQ}
\overline{\xi}_{\overline{\upsigma}(i)} = {}^\upsigma\hspace{-.4ex}\xi_{i}
\end{align}
for any $i \in I$. We write ${}^\upsigma \hspace{-.4ex}I = \{ \ov{\imath} \mathbin{:=} \overline{\upsigma}(\imath) \ | \ \imath \in I_\mathsf{g}\}$.
\begin{example}
Here are $\overline{Q}$'s corresponding to $Q$'s in Example~\ref{ex: sigma-fixed}.
\begin{enumerate}[{\rm (1)}]
\item \label{it: C3 induced}
$\overline{Q} = \xymatrix@R=0.5ex@C=6ex{ *{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{3}} \ \ }_<{1 \ \ } & *{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{2}} \ \ }_<{2 \ \ }
&*{\circled{$\circ$}}<3pt> \ar@{}[l]_<{ \ \ _{\underline{1}} }^<{ \ \ 3 }
}$ of type $C_3$ for $Q$ of type $A_5$ in Example~\ref{ex: sigma-fixed}~\eqref{it: A5 fixed}.
\item \label{it: B3 induced}
$\overline{Q} = \xymatrix@R=0.5ex@C=6ex{ *{\circled{$\circ$}}<3pt> \ar@{->}[r]^<{ _{\underline{3}} \ \ }_<{1 \ \ } & *{\circled{$\circ$}}<3pt> \ar@{->}[r]^<{ _{\underline{2}} \ \ }_<{2 \ \ }
&*{\circ}<3pt> \ar@{}[l]_<{ \ \ _{\underline{1}} }^<{ \ \ 3 }
}$ of type $B_3$ for $Q$ of type $D_4$ in Example~\ref{ex: sigma-fixed}~\eqref{it: D4 fixed vee}.
\item \label{it: F4 induced}
$\overline{Q} = \xymatrix@R=0.5ex@C=6ex{ *{\circled{$\circ$}}<3pt> \ar@{<-}[r]^<{ _{\underline{0}} \ \ }_<{1 \ \ } & *{\circled{$\circ$}}<3pt> \ar@{<-}[r]^<{ _{\underline{1}} \ \ }_<{2 \ \ }
&*{\circ}<3pt> \ar@{<-}[r]^<{ _{\underline{2}} \ \ }_<{3 \ \ } &*{\circ}<3pt> \ar@{}[l]_<{ \ \ _{\underline{3}} }^<{ \ \ 4 }
}$ of type $F_4$
for $Q$ of type $E_6$ in Example~\ref{ex: sigma-fixed}~\eqref{it: E6 fixed}.
\item \label{it: G2 induced}
$\overline{Q} = \xymatrix@R=0.5ex@C=6ex{ *{\circ}<3pt> \ar@{->}[r]^<{ _{\underline{2}} \ \ }_<{1 \ \ }
&*{\circled{$\odot$}}<3pt> \ar@{}[l]_<{ \ \ _{\underline{1}} }^<{ \ \ 2 }
}$ of type $G_2$ for $Q$ of type $D_4$ in Example~\ref{ex: sigma-fixed}~\eqref{it: D4 fixed wvee}.
\end{enumerate}
\end{example}
\medskip
In
this section, we shall show that we can obtain $\Gamma_{\ov{Q}}$
from $\Gamma_Q$, and hence the labeling of $\Gamma_{\ov{Q}}$ from
that of $\Gamma_Q$.
The following lemma holds for Dynkin quivers $Q$ of any finite type, which generalizes~\eqref{eq: gaQ}:
\begin{lemma} \label{lem: general gaQ}
For Dynkin quiver $Q=(\triangle, \xi)$ of any finite type and $i \in I$, we have
$$ \gamma_i^Q = \sum_{j \in B^Q(i)} \left( \prod_{k=1}^{l^j-1} - \left\langle h_{p^j_k},\alpha_{p^j_{k+1}} \right\rangle \right)\alpha_j $$
where $B^Q(i)$ denotes the set of indices $j$ such that there exists a
path $\mathtt{p}^j = (j=p^j_1 \to p^j_2 \to \cdots \to p^j_{l^j-1} \to
p^j_{l^j}=i)$ in $Q$. Here we understand $\left( \prod_{k=1}^{l^j-1} - \left\langle h_{p^j_k},\alpha_{p^j_{k+1}} \right\rangle \right)=1$ when $i=j$ and hence $l^j=1$.
\end{lemma}
\begin{proof} Set $i_m=i$. Note that we have
\begin{align}\label{eq: gaQ 2}
\gamma_{i_m}^Q = (1-\tau_Q)\varpi_{i_m} = (s_{i_1} \cdots s_{i_{m-1}}- s_{i_1} \cdots s_{i_{m-1}}s_{i_m} )\varpi_{i_m} = s_{i_1} \cdots s_{i_{m-1}}(\alpha_{i_m}),
\end{align}
where $\tau_Q = s_{i_1} \cdots s_{i_{|\triangle_0|}}$ and $1 \le m \le |\triangle_0|$.
Let $j \not\in B^Q(i)$. Then there is no \ex $1 \le t_1< \ldots < t_l = m$ with $i_{t_1}=j$ such that $d(i_{t_x},i_{t_{x+1}})=1$. Hence $j$ cannot be contained in ${\rm supp}(\gamma_{i}^Q)$ by~\eqref{eq: gaQ 2}.
Let $j \in B^Q(i)$. Since $\triangle$ is of finite type, there exists a
unique path $\mathtt{p}^j = (j=p^j_1 \to p^j_2 \to \cdots \to p^j_{l^j-1}
\to p^j_{l^j}=i)$ in $Q$. If the length of $\mathtt{p}^j$ is $2$, we have
$\xi_j=\xi_i+1$ and the assertion follows from~\eqref{eq: gaQ 2}. By
induction on $l^j$, we have $p^j_2 \in {\rm supp}(\gamma_{i}^Q)$ and
${\rm supp}_{p^j_2}(\gamma_{i}^Q) = \left( \displaystyle\prod_{k=2}^{l^j-1}
- \left\langle h_{p^j_k},\alpha_{p^j_{k+1}} \right\rangle \right)$. Since
$\xi_{j} = \xi_{p^j_2}+1$ and $\tau_Q$ is adapted to $Q$, we can
guarantee that ${\rm supp}_{j}(\gamma_{i}^Q) \ge \left(
\displaystyle\prod_{k=1}^{l^j-1} - \left\langle
h_{p^j_k},\alpha_{p^j_{k+1}} \right\rangle \right)$.
\noindent
(i) Assume that there exists $j' \in \triangle_0$ such that $d(j,j')=1$ and $\xi_{j'}=\xi_j+1$. Then $s_{j'}$ appears earlier than $s_j$ in every $Q$-adapted reduced expression of $\tau_Q$.
\noindent
(ii) Assume that there exists $j' \in \triangle_0$ such that $d(j,j')=1$ and $\xi_{j'}=\xi_j-1$. Then there is no path from $j'$ to $i$. Thus, even though $s_{j'}$ appears later than $s_j$ in every $Q$-adapted reduced expression of $\tau_Q$, we have $j' \not \in {\rm supp}(\gamma_i^Q)$.
By (i) and (ii), we have ${\rm supp}_{j}(\gamma_{i}^Q) = \left( \displaystyle\prod_{k=1}^{l^j-1} - \left\langle h_{p^j_k},\alpha_{p^j_{k+1}} \right\rangle \right)$, as desired.
\end{proof}
\begin{corollary}
For a Dynkin quiver $Q=(\triangle,\xi)$ of an arbitrary type and $i \in \triangle_0$, we have
$$\alpha_i - \sum_{\substack{d(i,j)=1, \\ \xi_j-\xi_i=1}}\mathsf{c}_{j,i} \gamma_j^Q = \gamma_i^Q.$$
\end{corollary}
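The following small numerical illustration of Lemma~\ref{lem: general gaQ} (added only for the reader's convenience; the specific quiver and the encoding below are ours and play no role in the proofs) compares the path formula with $(1-\tau_Q)\varpi_i$ in type $B_3$, for the orientation $3 \to 2 \to 1$ and a height function that drops by $1$ along every arrow, with the reflections of $\tau_Q$ taken in the $Q$-adapted order (larger heights first), as in the proof above.
\begin{verbatim}
import numpy as np

# Type B_3: C[i][j] = <h_i, alpha_j>, alpha_3 short; orientation 3 -> 2 -> 1.
C = np.array([[ 2, -1,  0],
              [-1,  2, -1],
              [ 0, -2,  2]])
arrows = [(2, 1), (1, 0)]            # 0-based arrows: 3 -> 2 and 2 -> 1
xi = [1, 2, 3]                       # heights of the vertices 1, 2, 3
n = 3

def gamma_path(i):
    """Right-hand side of the path formula, in the alpha-basis."""
    coeff = np.zeros(n, dtype=int)
    coeff[i] = 1                                   # the trivial path j = i
    frontier = [(i, 1)]
    while frontier:
        tgt, c = frontier.pop()
        for a, b in arrows:
            if b == tgt:                           # extend the path by a -> tgt
                c2 = c * (-C[a, tgt])              # factor -<h_a, alpha_tgt>
                coeff[a] += c2
                frontier.append((a, c2))
    return coeff

def gamma_weyl(i):
    """(1 - tau_Q) varpi_i, computed in the varpi-basis, then rewritten in roots."""
    v = np.zeros(n); v[i] = 1.0                    # varpi_i
    w = v.copy()
    for j in sorted(range(n), key=lambda j: xi[j]):    # rightmost reflection acts first
        w = w - w[j] * C[:, j]                     # simple reflection s_j
    return np.rint(np.linalg.solve(C.astype(float), v - w)).astype(int)

for i in range(n):
    assert (gamma_path(i) == gamma_weyl(i)).all()
print([list(gamma_path(i)) for i in range(n)])     # [[1, 1, 2], [0, 1, 2], [0, 0, 1]]
\end{verbatim}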
\begin{convention}
In the rest of this section, we use
\begin{enumerate}[{\rm (a)}]
\item $\imath,\jmath$ for elements in the index set $I$ of type $ADE$ and $i,j$ for elements in the index set $I$ of type $BCFG$,
\item $\alpha,\beta,\gamma$ for positive roots of type $ADE$ and $\upalpha,\upbeta,\upgamma$ for positive roots of type $BCFG$,
\item $\epsilon,\varepsilon$ for elements in $\mathbb{R} \otimes_\mathbb{Z} \sfQ^+$ of type $ADE$ and $\upepsilon,\upvarepsilon$ for elements in $\mathbb{R} \otimes_\mathbb{Z} \sfQ^+$ of type $BCFG$,
\end{enumerate}
to avoid confusion.
\end{convention}
In Subsections~\ref{subsec: Bn combinatorics} and~\ref{subsec: Cn combinatorics} below, we investigate the combinatorial properties and the labeling algorithm for AR-quivers of type $B_n$ and $C_n$, and observe their relationships with those for $D_{n+1}$ and $A_{2n-1}$, respectively. The results in those subsections will be used crucially for obtaining the degree polynomials in the next section.
\subsection{$B_{n}$-case} \label{subsec: Bn combinatorics}
Note that for $\mathsf{g}$ of type $D_{n+1}$, its corresponding ${}^\vee \hspace{-.4ex}\sfg$ is of type $B_n$. For $Q=({\mathop{\mathbin{\mbox{\large $\blacktriangle$}}}}} %{\blacktriangle_{B_n}, \xi)$ of type $B_n$,
we denote by $\underline{Q} = (\triangle_{D_{n+1}}, {}^\vee \hspace{-.4ex} \uxi )$ the $\vee$-fixed Dynkin quiver of type $D_{n+1}$ such that
\begin{align} \label{eq: uB}
{}^\vee \hspace{-.4ex} \uxi _{\imath} = \xi_{\ov{\imath}} \qquad \text{ for $\imath \in \triangle_0$}.
\end{align}
Recall the notation ${\overline{Q}}=({\mathop{\mathbin{\mbox{\large $\blacktriangle$}}}}} %{\blacktriangle_{B_n}, \overline{\xi})$ of type $B_n$ corresponding to $\vee$-fixed Dynkin quiver $Q=(\triangle_{D_{n+1}}, {}^\vee\hspace{-.4ex}\xi)$ of type $D_{n+1}$ in~\eqref{eq: oQ}.
The simple roots $\{ \upalpha_i \}$ and $\Phi^+$ of type $B_n$ can be identified in $\sfQ^+ \subset \mathop{\mbox{\normalsize$\bigoplus$}}\limits_{i=1}^n \mathbb{R} \upepsilon_i$ as follows:
\begin{equation}\label{eq: PR B}
\begin{aligned}
\upalpha_i &=\upvarepsilon_i- \upvarepsilon_{i+1} \quad \text{ for $i<n$,} \quad \upalpha_n =\upvarepsilon_n \quad\text{ and } \quad \upvarepsilon_i =\sqrt{2}\upepsilon_i,\\
\Phi^+_{B_n} & = \left\{ \upvarepsilon_i-\upvarepsilon_{j} = \sum_{k=i}^{j-1} \upalpha_k \ | \ 1 \le i <j \le n \right\} \mathop{\mbox{\normalsize$\bigsqcup$}}\limits \left\{ \upvarepsilon_i = \sum_{k=i}^n \upalpha_k \ | \ 1 \le i \le n \right\}\\
& \quad \mathop{\mbox{\normalsize$\bigsqcup$}}\limits \left\{ \upvarepsilon_i+\upvarepsilon_{j} = \sum_{k=i}^{j-1} \upalpha_k + 2\sum_{s=j}^n \upalpha_s \ | \ 1 \le i <j \le n \right\}.
\end{aligned}
\end{equation}
For $\upbeta \in \Phi^{+}_{B_n}$, we write
\begin{align}\label{eq: conv B root} \upbeta =
\begin{cases}
\langle i,\pm j \rangle & \text{ if $\upbeta=\upvarepsilon_{i} \pm \upvarepsilon_j$ for some $1 \le i<j\le n$}, \\
\langle i \rangle & \text{ if $\upbeta=\upvarepsilon_{i}$ for some $1 \le i \le n$}.
\end{cases}
\end{align}
Understanding $\sfW_{B_n}$ as bijections on $\{ \pm1,\ldots,\pm n\}$, we have the following descriptions for simple reflections:
$$ s_i (\pm i)=\pm(i+1) \quad \text{ for }i <n \quad \text{ and } \quad s_n(n)=-n.$$
Note that there is a surjection $\psi\cl \Phi^+_{D_{n+1}} \to \Phi^+_{B_{n}}$ which can be described in two ways, using $\epsilon_\imath$'s, $\upvarepsilon_i$'s, $\alpha_\imath$'s and $\upalpha_i$'s:
\begin{align} \label{eq: D to B}
\psi(\epsilon_\imath) = \begin{cases}
\ \upvarepsilon_{{\overline{\im}}} & \text{ if } \imath \le n, \\
\ 0 & \text{ if } \imath = n+1,
\end{cases}
\qquad \left(\text{equivalently, } \
\psi(\alpha_\imath) = \upalpha_{{\overline{\im}}}
\right)
\end{align}
and we extend it linearly.
For each $i \in I_{B_n}$, let $\imath$ be the index of $I_{D_{n+1}}$ such that $i = \imath$ as an integer.
By~\eqref{eq: PR D} and~\eqref{eq: PR B}, we have
$$
\psi^{-1}(\upvarepsilon_i)=\{ \epsilon_\imath + \epsilon_{n+1}, \epsilon_\imath - \epsilon_{n+1} \} \quad \text{ and } \quad \psi^{-1}(\upvarepsilon_i \pm \upvarepsilon_j)= \epsilon_\imath \pm \epsilon_\jmath.
$$
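The shape of the fibres of $\psi$ is immediate from the above; as a mechanical illustration (added for convenience only), the following Python sketch realizes $\psi$ on the $\epsilon$-coordinates of~\eqref{eq: PR D} and~\eqref{eq: PR B} by deleting the coordinate of $\epsilon_{n+1}$ and checks that the fibres have size $2$ exactly over the short roots $\upvarepsilon_i$.
\begin{verbatim}
from itertools import combinations
from collections import defaultdict

def positive_roots_D(m):
    """Positive roots of D_m, as coefficient vectors in the epsilon-basis."""
    roots = []
    for i, j in combinations(range(m), 2):
        for sign in (-1, +1):                      # epsilon_i -+ epsilon_j
            v = [0] * m
            v[i], v[j] = 1, sign
            roots.append(tuple(v))
    return roots

def psi(root):                                     # psi kills epsilon_{n+1}
    return root[:-1]

n = 4
fibres = defaultdict(list)
for beta in positive_roots_D(n + 1):
    fibres[psi(beta)].append(beta)

assert len(fibres) == n * n                        # |Phi^+_{B_n}| = n^2
for image, pre in fibres.items():
    is_short = sum(abs(c) for c in image) == 1     # image is some epsilon_i
    assert len(pre) == (2 if is_short else 1)
print("each short root of B_4 has two preimages; every other root has one")
\end{verbatim}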
\begin{proposition} \label{prop: surgery D to B}
For a $\vee$-fixed Dynkin quiver $Q$ of type $D_{n+1}$, we can obtain $\Gamma_{\overline{Q}}$ from $\Gamma_Q$ by the following simple surgery:
\begin{enumerate}[{\rm (1)}]
\item Remove all vertices in $\Gamma_Q$ located at residue $(n+1)$.
\item Replace all labels at residue $n$ from $ \langle \imath, \pm (n+1) \rangle$ to $\langle i \rangle$.
\end{enumerate}
\end{proposition}
To prove Proposition~\ref{prop: surgery D to B}, we need preparations:
\begin{lemma} \label{lem: Weyl D and B}
For any $\vee$-fixed Dynkin quiver $Q$ of type $D_{n+1}$, we have
$$\tau_{Q}(\imath)=\tau_{{\overline{Q}}}(i) \qquad \text{ for $1 \le \imath \le n$.}$$
\end{lemma}
\begin{proof}
By Lemma~\ref{lem: tauQ fixed D}~\eqref{it: fixed D}, $\tau_Q(n+1) = -(n+1)$.
Since $Q$ is $\vee$-fixed, there exist $1 \le k < n+1$ and a reduced expression $s_{\imath_1} \ldots s_{\imath_{n+1}}$ of $\tau_Q$ such that $\{ \imath_k,\imath_{k+1} \}=\{n,n+1\}$.
Since $\overline{\xi}_{i} ={}^\vee\hspace{-.4ex}\xi_\imath$ for $1 \le \imath \le n$, $\tau_{{\overline{Q}}}$ has a reduced expression $s_{\ov{\imath_1}} \ldots s_{\ov{\imath_{k-1}}} s_{i^\dagger_k} s_{\ov{\imath_{k+2}}} \ldots s_{\ov{\imath_{n+1}}}$ with $i^\dagger_k =n$.
Thus our assertion follows.
\end{proof}
The following lemma is obvious:
\begin{lemma} \label{eq: reflection commuting}
For $Q=({\mathop{\mathbin{\mbox{\large $\blacktriangle$}}}}} %{\blacktriangle_{B_n}, \xi)$, we have
$$ \underline{s_iQ} = \iota(s_i) \underline{Q} \quad \text{ for $1 \le i \le n$}, $$
where
$$ \iota(s_i) \mathbin{:=} \begin{cases}
s _\imath & \text{ if } i < n, \\
s _n s_{n+1} & \text{ if } i= n.
\end{cases} $$
\end{lemma}
\begin{lemma} \label{lem: ga D and B}
For any $\vee$-fixed Dynkin quiver $Q$ of type $D_{n+1}$, we have
$$ \psi( \gamma_\imath^Q ) = \upgamma_i^{{\overline{Q}}} \quad \text{ for $1\le \imath \le n$} \quad \text{ and } \quad \psi( \gamma_{n+1}^Q ) = \upgamma_n^{{\overline{Q}}}.$$
\end{lemma}
\begin{proof}
By Lemma~\ref{lem: general gaQ} and~\eqref{eq: D to B},
\begin{equation}
\begin{aligned}
\psi( \gamma_\imath^Q ) & = \sum_{\jmath \in B^Q(\imath) \setminus \{ n+1 \} } \left( \prod_{k=1}^{l^\jmath-1} - \left\langle h_{p^\jmath_k},\alpha_{p^\jmath_{k+1}} \right\rangle \right) \upalpha_j \\
& \hspace{25ex} + \delta(n + 1 \in B^Q(\imath)) \left( \prod_{k=1}^{l^{n+1}-1} - \left\langle h_{p^{n+1}_k},\alpha_{p^{n+1}_{k+1}} \right\rangle \right) \upalpha_n .
\end{aligned}
\end{equation}
For any $j \in B^{\overline{Q}}(i) \setminus \{ n \}$, we have $|\mathsf{c}_{ji}^{B_n}| \le 1$.
Then we have
${\rm supp}_j( \upgamma_i^{\overline{Q}} ) = {\rm supp}_\jmath( \upgamma_\imath^Q )$ for any $j \in B^Q(i) \setminus \{ n \}$ by Lemma~\ref{lem: general gaQ}.
Assume that $n \in B^{\overline{Q}}(i)$. Then we have $n,n+1 \in B^Q(\imath)$ if $i<n$ and $n+1 \not \in B^Q(n)$ if $i=n$. Since $-\mathsf{c}_{n,n-1}=2$, we have
\begin{align*}
& \left( \prod_{k=1}^{l^n-1} - \left\langle h_{p^n_k},\alpha_{p^n_{k+1}} \right\rangle \right)\upalpha_n +\delta(n + 1 \in B^Q(\imath)) \left( \prod_{k=1}^{l^{n+1}-1} - \left\langle h_{p^{n+1}_k},\alpha_{p^{n+1}_{k+1}} \right\rangle \right) \upalpha_n \\
& \hspace{55ex}= \left( \prod_{k=1}^{l^n-1} - \left\langle h_{p^n_k},\upalpha_{p^n_{k+1}} \right\rangle \right)\upalpha_n,
\end{align*}
which completes our assertion.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop: surgery D to B}] By Lemma~\ref{lem: ga D and B},
we can obtain $\{\upgamma^{\overline{Q}}_i \}$ by using $\{\gamma^Q_\imath \}$ and the surjection $\psi$ in~\eqref{eq: D to B}.
Then our assertion follows from Lemma~\ref{lem: Weyl D and B} and~\eqref{eq: bijection}.
\end{proof}
\begin{definition}
A connected full subquiver $\ov{\uprho}$ in $\Gamma_{\overline{Q}}$ is said to be
a \emph{swing} if it is a concatenation of one $N$-path and one
$S$-path whose intersection is located at $(n,p) \in (\Gamma_{\overline{Q}})_0$
for some $p \in \mathbb{Z}$: There exist a positive root
$\beta \in \Phi^+$ and $r,s \le n$ such that
$$\xymatrix@C=2.5ex@R=0.5ex{ S_r \ar@{->}[r] & S_{r+1} \ar@{->}[r] & \cdots \ar@{->}[r]& S_{n-1}
\ar@{->}[r] & \beta \ar@{=>}[r] & N_{n-1} \ar@{->}[r]& N_{n-2} \ar@{->}[r] & \cdots \ar@{->}[r] & N_s} \text{ where }$$
\begin{itemize}
\item $\xymatrix@C=2.5ex@R=0.5ex{ S_{r} \ar@{->}[r] & S_{r+1} \ar@{->}[r] & \cdots \ar@{->}[r]& S_{n-1} \ar@{->}[r] &\beta}$ is an $S$-path ($\phi^{-1}_{Q}(S_l,0)=(l,k)$ for $k \in \mathbb{Z}$),
\item $\xymatrix@C=2.5ex@R=0.5ex{ \beta \ar@{=>}[r] & N_{n-1} \ar@{->}[r]& N_{n-2} \ar@{->}[r] & \cdots \ar@{->}[r] & N_{s} }$ is an $N$-path ($\phi^{-1}_{Q}(N_l,0)=(l,k')$ for $k' \in \mathbb{Z}$),
\item $\beta$ is located at $(n,p)$ for some $p \in \mathbb{Z}$.
\end{itemize}
\end{definition}
By Proposition~\ref{prop: surgery D to B}, we have the $B_n$-analogue of Proposition~\ref{prop: label1} and Proposition~\ref{prop: label2} as follows:
\begin{proposition} \label{prop: label12 B} \hfill
\begin{enumerate}[{\rm (1)}]
\item \label{it: swing B} For each swing in $\Gamma_{\overline{Q}}$, there exists a unique $1 \le a \le n$ satisfying the following properties:
\begin{enumerate}[{\rm (i)}]
\item Every vertex shares a component $\upvarepsilon_a$.
\item It consists of exactly $(2n-a)$-many vertices.
\item Every positive root having $\upvarepsilon_a$ as a component appears in the swing.
\item It starts at residue $1$ or ends at residue $1$.
\end{enumerate}
Moreover, there are exactly $n$-many swings in $\Gamma_{\overline{Q}}$.
Thus we call the swing an $a$-swing and denote it by $\ov{\varrho}_a$.
\item \label{it: shallow B} Let $\ov{\rho}$ be a shallow $S$ $($resp.\ $N)$-path in $\Gamma_{\overline{Q}}$. Then there exists a unique $2 \le a \le n$ such that
\begin{enumerate}[{\rm (i)}]
\item $\ov{\rho}$ consists of $(a-1)$-many vertices,
\item all vertices in $\ov{\rho}$ share $\upvarepsilon_{-a}$ as a component.
\end{enumerate} \end{enumerate} Furthermore, there are exactly $(n-1)$-many shallow
$X$-paths $(X=S$ or $N)$ in $\Gamma_{\overline{Q}}$. Thus we call the shallow
$X$-path the $(-a)$-shallow path and denote it by $\ov{\rho}_{-a}$.
\end{proposition}
From~Proposition~\ref{prop: surgery D to B} and Proposition~\ref{prop: label12 B}, we have the following main result of this subsection:
\begin{theorem} \label{thm: D to B}
For each $\vee$-fixed Dynkin quiver $Q$ of type $D_{n+1}$ and $k \in \mathbb{Z}$, we have
$$ \psi\left( \tau_Q^k (\gamma^Q_\imath) \right) = \tau_{\overline{Q}}^k (\gamma^{\overline{Q}}_i) \quad \text{ for $\imath<n$} $$
and
$$ \psi\left( \tau_Q^k (\gamma^Q_\imath) \right) = \tau_{\overline{Q}}^k (\gamma^{\overline{Q}}_n) \quad \text{ for $\imath=n$ or $n+1$.} $$
\end{theorem}
\begin{example} \label{ex: Label B4}
Let us consider the following Dynkin quiver $Q$ of type $B_4$:
$$Q \ =
\xymatrix@R=0.5ex@C=4ex{ *{\circled{$\circ$}}<3pt> \ar@{<-}[r]_<{ 1 \ \ }^{_{\underline{2}} \qquad } & *{\circled{$\circ$}}<3pt> \ar@{->}[r]_<{ 2 \ \ }^{_{\underline{3}} \qquad }
&*{\circled{$\circ$}}<3pt> \ar@{<-}[r]_>{ \ \ 4}^{_{\underline{2}} \qquad \ } &*{\circ}<3pt> \ar@{}[l]^>{ 3 \ \ }_{ \ \qquad _{\underline{3}} } }
$$
By~\eqref{eq: range}, the quiver $\Gamma_Q$ without labeling is given as follows:
\begin{align*}
\raisebox{3.2em}{ \scalebox{0.7}{\xymatrix@!C=2ex@R=2ex{
(i\setminus p) &-4 &-3 & -2 & -1 & 0 & 1 & 2 & 3 \\
1& \circ \ar[dr] && \circ \ar[dr] && \circ \ar[dr]&& \circ \ar[dr] \\
2&& \circ \ar[dr]\ar[ur] && \circ \ar[dr]\ar[ur] && \circ\ar[dr]\ar[ur] && \circ \\
3& \circ \ar[dr]\ar[ur] && \circ \ar[dr]\ar[ur] && \circ \ar[dr]\ar[ur]&& \circ \ar[ur] \ar[dr] \\
4&& \circ \ar@{=>}[ur]&& \circ \ar@{=>}[ur]&& \circ \ar@{=>}[ur] && \circ \\
}}}
\end{align*}
By applying Proposition~\ref{prop: label12 B}~\eqref{it: swing B} and~\eqref{it: shallow B} in order, we can complete the labeling of $\Gamma_Q$
as follows:
\begin{align*}
{\rm (i)} \raisebox{4.3em}{ \scalebox{0.7}{\xymatrix@!C=1ex@R=2ex{
(i\setminus p) &-4 &-3 & -2 & -1 & 0 & 1 & 2 & 3 \\
1& \upvarepsilon_1^* \ar[dr] && \upvarepsilon_2^\star\ar[dr] && \upvarepsilon_3^\ddagger + \upvarepsilon_4^\dagger \ar[dr]&& \upvarepsilon_1^* \ar[dr] \\
2&& \upvarepsilon_1^* \ar[dr]\ar[ur] && \upvarepsilon_2^\star+ \upvarepsilon_3^\ddagger \ar[dr]\ar[ur] && \upvarepsilon_1^* + \upvarepsilon_4^\dagger\ar[dr]\ar[ur] && \upvarepsilon_2^\star \\
3& \upvarepsilon_3^\ddagger\ar[dr]\ar[ur] && \upvarepsilon_1^*+ \upvarepsilon_3^\ddagger \ar[dr]\ar[ur] && \upvarepsilon_1^* + \upvarepsilon_2^\star \ar[dr] \ar[ur]&& \upvarepsilon_2^\star+\upvarepsilon_4^\dagger\ar[ur] \ar[dr] \\
4&& \upvarepsilon_3^\ddagger\ \ar@{=>}[ur]&& \upvarepsilon_1^* \ar@{=>}[ur]&& \upvarepsilon_2^\star\ar@{=>}[ur] && \upvarepsilon_4^\dagger \\
}}}
\hspace{-1ex}
\To
{\rm (ii)} \hspace{-2ex} \raisebox{4.3em}{ \scalebox{0.7}{\xymatrix@!C=1ex@R=2ex{
(i\setminus p) &-4 &-3 & -2 & -1 & 0 & 1 & 2 & 3 \\
1& \srt{1,-2}\ar[dr] && \srt{2,-4} \ar[dr] && \srt{3,4} \ar[dr]&& \srt{1,-3} \ar[dr] \\
2&& \srt{1,-4} \ar[dr]\ar[ur] && \srt{2,3} \ar[dr]\ar[ur] && \srt{1,4}\ar[dr]\ar[ur] && \srt{2,-3} \\
3& \srt{3,-4} \ar[dr]\ar[ur] && \srt{1,3} \ar[dr]\ar[ur] && \srt{1,2} \ar[dr]\ar[ur]&& \srt{2,4} \ar[ur] \ar[dr] \\
4&& \srt{3} \ar@{=>}[ur]&& \srt{1} \ar@{=>}[ur]&& \srt{2} \ar@{=>}[ur] && \srt{4} \\
}}}
\end{align*}
\end{example}
\subsection{$C_{n}$-case} \label{subsec: Cn combinatorics}
Note that for $\mathsf{g}$ of type $A_{2n-1}$, its corresponding ${}^\vee \hspace{-.4ex}\sfg$ is of type $C_n$.
The simple roots $\{ \upalpha_i \}$ and $\Phi^+_{C_{n}}$ can be identified in $\sfQ^+ \subset \mathop{\mbox{\normalsize$\bigoplus$}}\limits_{i=1}^n \mathbb{Z}\upepsilon_i$ as follows:
\begin{equation} \label{eq: PR C}
\begin{aligned}
\upalpha_i &=\upepsilon_i- \upepsilon_{i+1} \quad \text{ for $i<n$} \quad\text{ and } \quad \upalpha_n =2\upepsilon_n, \allowdisplaybreaks \\
\Phi^+_{C_n} & = \left\{ \upepsilon_i-\upepsilon_{j} = \sum_{k=i}^{j-1} \upalpha_k \ | \ 1 \le i <j \le n \right\} \mathop{\mbox{\normalsize$\bigsqcup$}}\limits
\left\{ 2\upepsilon_i = 2\sum_{k=i}^{n-1} \upalpha_k +\upalpha_n \ | \ 1 \le i \le n \right\} \allowdisplaybreaks\\
& \quad \mathop{\mbox{\normalsize$\bigsqcup$}}\limits \left\{ \upepsilon_i+\upepsilon_{j} = \sum_{k=i}^{j-1} \upalpha_k + 2\sum_{s=j}^{n-1} \upalpha_s + \upalpha_n \ | \ 1 \le i <j \le n \right\}.
\end{aligned}
\end{equation}
For $\upbeta \in \Phi^{+}_{C_n}$, we write (cf. ~\eqref{eq: conv B root})
\begin{align}\label{eq: conv C root} \upbeta =
\begin{cases}
\langle i,\pm j \rangle & \text{ if $\upbeta=\upepsilon_{i} \pm \upepsilon_j$ for some $1 \le i<j\le n$}, \\
\langle i, i \rangle & \text{ if $\upbeta=2\upepsilon_{i}$ for some $1 \le i \le n$}.
\end{cases}
\end{align}
Note that $\mathsf{W}_{B_n} \simeq \mathsf{W}_{C_n}$. For a Dynkin quiver $Q=({\mathop{\mathbin{\mbox{\large $\blacktriangle$}}}}} %{\blacktriangle_{B_n}, \xi)$ of type $B_n$, we denote by $Q^{\rm tr}=({\mathop{\mathbin{\mbox{\large $\blacktriangle$}}}}} %{\blacktriangle_{C_n} ,\xi^{{\rm tr}})$
the Dynkin quiver of type $C_n$ such that
$$ \xi^{{\rm tr}}_i =\xi_i \qquad \text{ for } 1 \le i \le n.$$
Then we have a natural bijection $\psi^{\rm tr}\cl \Phi^+_{B_n} \to \Phi^+_{C_n}$ given by
$$ \psi^{{\rm tr}}(\upvarepsilon_i\pm\upvarepsilon_j) = \upepsilon_i\pm\upepsilon_j \ \ (1 \le i < j \le n) \ \quad \text{ and } \ \quad \psi^{{\rm tr}}(\upvarepsilon_i)=2\upepsilon_i.$$
\begin{proposition}
For all $1\le i\le n$, we have
$$ \psi^{{\rm tr}}(\upgamma_i^{Q} ) =\upgamma_i^{Q^{{\rm tr}}}.$$
\end{proposition}
\begin{proof}
Note that
$B^Q(i) = B^{Q^{\rm tr}}(i)$ for all $1 \le i \le n$.
Set
\begin{enumerate}[{\rm (a)}]
\item $a = \min( s \ | \ s \in B^Q(i))$,
\item $b = \max( s \ | \ s \in B^Q(i))$ if $i \ne n$ and $n \not\in B^Q(i)$.
\end{enumerate}
By Lemma~\ref{lem: general gaQ}, we have
\begin{align}
\upgamma_i^Q
& = \begin{cases}
\displaystyle \sum_{j \in B^Q(n)} \upalpha_j & \text{ if } i=n, \\
\displaystyle \sum_{j \in B^Q(i) \setminus\{n\}} \upalpha_j +\delta(n \in B^Q(i)) 2\upalpha_n & \text{ if } i \ne n,
\end{cases} \label{eq: ga B}\\
& =
\begin{cases}
\upvarepsilon_a & \text{ if } i=n, \\
\upvarepsilon_a -\upvarepsilon_{b+1} & \text{ if } i \ne n \text{ and } n \not\in B^Q(i), \\
\upvarepsilon_a+\upvarepsilon_n,& \text{ if } i \ne n \text{ and } n \in B^Q(i),
\end{cases} \nonumber
\end{align}
On the other hand, we have
\begin{align}
\upgamma_i^{Q^{\rm tr}}
& = \begin{cases}
\displaystyle \left( \sum_{j \in B^{Q}(n) \setminus\{ n \}} 2\upalpha_j \right) + \upalpha_n & \text{ if } i=n, \qquad\qquad \qquad \\[4ex]
\displaystyle \sum_{j \in B^{Q}(i)} \upalpha_j & \text{ if } i \ne n,
\end{cases} \label{eq: ga C} \\
& =
\begin{cases}
2\upepsilon_a & \text{ if } i=n, \\
\upepsilon_a -\upepsilon_{b+1} & \text{ if } i \ne n \text{ and } n \not\in B^{Q}(i), \\
\upepsilon_a+\upepsilon_n,& \text{ if } i \ne n \text{ and } n \in B^{Q}(i),
\end{cases} \nonumber
\end{align}
which implies our assertion.
\end{proof}
Note that every Dynkin quiver of type $C_n$ is of the form $Q^{\rm tr}$ for some Dynkin quiver $Q$ of type $B_n$. Hence we have the following proposition:
\begin{proposition} \label{prop: transpose}
The labeling of $\Gamma_{Q^{\rm tr}}$ in terms of notations $\langle i,\pm j \rangle$ and $\langle i,i \rangle$ can be obtained from the one of $\Gamma_{Q}$
by replacing only $\langle i \rangle$ with $\langle i,i \rangle$ for $1 \le i \le n$.
\end{proposition}
\begin{proof}
By the previous proposition, the replacement of notations works for $\{ \upgamma_i^Q\}$ and $\{ \upgamma_i^{Q^{\rm tr}}\}$. Then our assertion follows from the fact that $\sfW_{B_n}=\sfW_{C_n}$
and the bijection $\phi$ in~\eqref{eq: bijection}.
\end{proof}
\begin{example} \label{ex: Label C4}
Consider the Dynkin quiver $Q$ of type $B_4$ in Example~\ref{ex: Label B4}. Then its corresponding $Q^{\rm tr}$ of $C_4$ can be depicted as follows:
$$Q^{\rm tr} \ =
\xymatrix@R=0.5ex@C=4ex{ *{\circ}<3pt> \ar@{<-}[r]_<{ 1 \ \ }^{_{\underline{2}} \qquad } & *{\circ}<3pt> \ar@{->}[r]_<{ 2 \ \ }^{_{\underline{3}} \qquad }
&*{\circ}<3pt> \ar@{<-}[r]_>{ \ \ 4}^{_{\underline{2}} \qquad \ } &*{\circled{$\circ$}}<3pt> \ar@{}[l]^>{ 3 \ \ }_{ \ \qquad _{\underline{3}} } }
$$
Using the labeling of $\Gamma_Q$ in Example~\ref{ex: Label B4} and Proposition~\ref{prop: transpose}, we have
$$\raisebox{3.7em}{ \scalebox{0.7}{\xymatrix@!C=1ex@R=2ex{
(i\setminus p) &-4 &-3 & -2 & -1 & 0 & 1 & 2 & 3 \\
1& \srt{1,-2}\ar[dr] && \srt{2,-4} \ar[dr] && \srt{3,4} \ar[dr]&& \srt{1,-3} \ar[dr] \\
2&& \srt{1,-4} \ar[dr]\ar[ur] && \srt{2,3} \ar[dr]\ar[ur] && \srt{1,4}\ar[dr]\ar[ur] && \srt{2,-3} \\
3& \srt{3,-4} \ar@{=>}[dr]\ar[ur] && \srt{1,3} \ar@{=>}[dr]\ar[ur] && \srt{1,2} \ar@{=>}[dr]\ar[ur]&& \srt{2,4} \ar[ur] \ar@{=>}[dr] \\
4&& \srt{3,3} \ar[ur]&& \srt{1,1} \ar[ur]&& \srt{2,2} \ar[ur] && \srt{4,4} \\
}}}
$$
\end{example}
For each $i \in I_{C_n}$, let $\imath$ be the index of $I_{A_{2n-1}}$ such that $i = \imath$ as an integer.
Note that there is a surjection $\psi\cl \Phi^+_{A_{2n-1}} \to \Phi^+_{C_{n}}$ which is described as follows:
\begin{align}\label{eq: psi A to C}
\psi(\alpha_\imath) =
\upalpha_{{\overline{\im}}} \qquad \left(\text{ equivalently, } \
\psi(\epsilon_\imath) = \begin{cases}
\ \upepsilon_{{\overline{\im}}} & \text{ if } \imath \le n, \\
\ -\upepsilon_{2n+1 -\imath} & \text{ if } \imath > n,
\end{cases} \right)
\end{align}
and we extend it linearly.
Then the inverse image of $\upbeta \in \Phi^+_{C_{n}}$ can be described as follows:
\begin{equation} \label{eq: inverse psi C to A}
\begin{aligned}
\psi^{-1}(2\upepsilon_i) &= \epsilon_\imath - \epsilon_{2n+1-\imath} = \sum_{k=\imath}^{2n-\imath} \alpha_k \quad \text{ for } 1 \le i \le n,\\
\psi^{-1}(\upepsilon_i -\upepsilon_j) &= \left\{ \epsilon_\imath - \epsilon_{\jmath}=\sum_{k=\imath}^{\jmath-1} \alpha_k, \vee(\epsilon_\imath - \epsilon_{\jmath}) = \epsilon_{2n+1-\jmath} - \epsilon_{2n+1-\imath} =\sum_{k=2n+1-\jmath}^{2n-\imath}\alpha_k \right\},\\
\psi^{-1}(\upepsilon_i +\upepsilon_j)
&=\left\{ \epsilon_\imath - \epsilon_{2n+1-\jmath}=\sum_{k=\imath}^{2n-\jmath} \alpha_k, \vee(\epsilon_\imath - \epsilon_{2n+1-\jmath}) = \epsilon_{\jmath} - \epsilon_{2n+1-\imath} =\sum_{k=\jmath}^{2n-\imath} \alpha_k \right\}
\end{aligned}
\end{equation}
for $i<j \le n$.
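For illustration (this computation is added only for the reader's convenience), let $n=3$, so that $A_5$ folds to $C_3$. Then, for instance,
\begin{align*}
\psi^{-1}(2\upepsilon_2)=\{\epsilon_2-\epsilon_5\}=\{\alpha_2+\alpha_3+\alpha_4\},\qquad
\psi^{-1}(\upepsilon_1+\upepsilon_2)=\{\epsilon_1-\epsilon_5,\ \epsilon_2-\epsilon_6\},
\end{align*}
and indeed $\psi(\alpha_2+\alpha_3+\alpha_4)=\upalpha_2+\upalpha_3+\upalpha_2=2\upalpha_2+\upalpha_3=2\upepsilon_2$, in agreement with~\eqref{eq: PR C}.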
Note that, for a $\vee$-fixed Dynkin quiver $Q=(\triangle, {}^\vee\hspace{-.4ex}\xi)$ of type $A_{2n-1}$,
there exists a reduced expression $s_{\imath_1}\cdots s_{\imath_{2n-1}}$ of $\tau_Q$ such that
\begin{align}\label{eq: red ex tauQ A}
\begin{cases}
\imath_1=n, \hspace{3ex} (\imath_{2s}, \imath_{2s+1}) =(k_s,\vee(k_s)) &\text{ if $n$ is a source of $Q$}, \\
\imath_{2n-1}=n, (\imath_{2s-1},\imath_{2s}) =(k_s,\vee(k_s)) &\text{ if $n$ is a sink of $Q$},
\end{cases}
\end{align}
for $1 \le s \le n-1$ and unique $k_s \in \{1,\ldots,n-1\}$.
Thus, for a $\vee$-fixed Dynkin quiver $Q$ of type $A_{2n-1}$, the Coxeter element $\tau_Q$ is also $\vee$-fixed in the sense that
\begin{align}\label{eq: sigma fix tau A}
\vee \tau_Q \vee =\tau_Q.
\end{align}
\begin{lemma} \label{lem: psi C}
For each $\vee$-fixed Dynkin quiver $Q$ of type $A_{2n-1}$, we have
\begin{align}\label{eq: ga: A to C}
\psi(\gamma^Q_\imath) = \psi(\gamma^Q_{2n-\imath}) =\upgamma_i^{\overline{Q}} \quad \text{ for }\imath \le n.
\end{align}
\end{lemma}
\begin{proof}
The first equality in~\eqref{eq: ga: A to C} follows from Lemma~\ref{lem: vee comuute A} and the surjection $\psi$ in~\eqref{eq: psi A to C}.
Since $n$ is a sink or source of $Q$, ${\rm supp}(\gamma_\imath^Q) \subset \{
1,\ldots, n\}$ for $\imath < n$ and ${\rm supp}(\gamma_\imath^Q) \subset \{
n,\ldots, 2n-1\}$ for $\imath > n$. Thus $n$ is a sink or source of
${\overline{Q}}$ and ${\rm supp}(\upgamma_i^{\overline{Q}}) \subset \{ 1,\ldots, n\}$ for $i < n$
and
$$ \psi(\gamma_\imath^Q) = \displaystyle \sum_{\jmath \in B^Q(\imath)} \upalpha_j .$$
Since $B^Q(\imath)=B^{\overline{Q}}(i)$ for $i <n$ as subsets of integers, the assertion for these cases follows by~\eqref{eq: ga C}.
For $i=n$:
if $j \in B^{\overline{Q}}(n)$ with $j<n$, then $\jmath,2n-\jmath \in B^Q(n)$. Thus the assertion for $i=n$ also follows by~\eqref{eq: ga C}.
\end{proof}
For a $\vee$-fixed Dynkin quiver $Q$ of type $A_{2n-1}$,
the set of positive roots of residue $n$ in $\Gamma_Q$ forms the set $\upeta \mathbin{:=} \{ [i,2n-i] \ | \ 1 \le i \le n \}$, since $\gamma_n^Q$ and $\tau_Q$ are $\vee$-fixed and $r_n^Q =n$ (see~\eqref{eq: riq fixed}).
We label the set $\upeta$
by using $\phi_Q^{-1}$ as follows:
$$\eta_k =[a_k,2n-a_k] =\epsilon_{a_k} -\epsilon_{2n+1-a_k} = \epsilon_{a_k} -\epsilon_{b_k} \in \Phi^+_{A_{2n-1}} \ \ (1\le k \le n \text{ and } 1 \le a_{k} \le n) $$
where
$$(\eta_1,0) = \phi_{Q}(n,\xi_n), \ (\eta_2,0) = \phi_{Q}(n,\xi_n-2), \ldots, (\eta_n,0) = \phi_{Q}(n,\xi_n-2(n-1)).$$
Then we have $\{ a_1,\ldots, a_n \} = \{1,\ldots,n\}$ and $\{ b_1,\ldots, b_n \} = \{n+1,\ldots,2n\}$. It also implies
$$\tau_Q(\epsilon_{a_k}) = \epsilon_{a_{k+1}} \text{ and }
\tau_Q(\epsilon_{b_k}) = \tau_Q(\epsilon_{2n+1-a_k}) = \epsilon_{b_{k+1}} \text{ for $1 \le k <n$}.$$
Recall that $n$ is a source or a sink of $Q$.
\begin{enumerate}[{\rm (1)}]
\item If $n$ is a source of $Q$, then $\epsilon_{a_1}-\epsilon_{b_1}=[n]=\epsilon_n-\epsilon_{n+1}$ and thus $\tau_Q(\epsilon_{a_n})=\epsilon_{n+1}$.
\item If $n$ is a sink of $Q$, then $\epsilon_{a_n}-\epsilon_{b_n}=[n]=\epsilon_n-\epsilon_{n+1}$ and thus $\tau_Q(\epsilon_{n})=\epsilon_{2n+1-a_1}$ and $\tau_Q(\epsilon_{n+1})=\epsilon_{a_1}$.
\end{enumerate}
\begin{proposition} \label{prop: tauQ A to C}
We have
$$
\tau_{\overline{Q}}(\upepsilon_{a_k}) = \upepsilon_{a_{k+1}} \quad \text{ for $1 \le k < n$} \quad \text{ and } \quad \tau_{\overline{Q}}(\upepsilon_{a_n}) = -\upepsilon_{a_1}
$$
\end{proposition}
\begin{proof}
Recall the reduced expression $\tau_Q=s_{\imath_1} \cdots s_{\imath_{2n-1}}$ in~\eqref{eq: red ex tauQ A}.
\noindent
(1) Assume $n$ is a source of $Q$.
Then $\tau_{\overline{Q}}$ has a reduced expression $s_ns_{\ov{\imath_2}}s_{\ov{\imath_4}}\cdots s_{\ov{\imath_{2n-2}}}$.
Since $a_k<n$ for $2\le k \le n$, we have $ \tau_Q^{k-1}(\epsilon_n)= \tau_Q^{k-1}(\epsilon_{a_1}) = \epsilon_{a_k}=s_n \epsilon_{a_k}$. Hence $\tau_{\overline{Q}}(\upepsilon_{a_k}) = \upepsilon_{a_{k+1}}$ for $1 \le k <n$.
For $k=n$, we have $\tau_Q(\epsilon_{a_n}) = \epsilon_{n+1} = s_n (\epsilon_n)$, which implies $s_{\ov{\imath_2}}s_{\ov{\imath_4}}\cdots s_{\ov{\imath_{2n-2}}}(\upepsilon_{a_n}) = \upepsilon_n$. Then we have $\tau_{\overline{Q}}(\upepsilon_{a_n}) =s_n(\upepsilon_n) =-\upepsilon_n = - \upepsilon_{a_1}$.
\noindent
(2) Assume $n$ is a sink of $Q$.
Then $\tau_{\overline{Q}}$ has a reduced expression $s_{i_1}s_{i_3}\cdots s_{i_{2n-3}}s_n$.
Note that $ a_i < n$ for $1\le i <n$ and $a_n=n$. Then, for $1 \le k \le n$, we have
$ \tau_{\overline{Q}}^{k-1}(\upepsilon_{a_1}) = \upepsilon_{a_k}$.
Now let us compute $\tau_{\overline{Q}}(\upepsilon_{a_n})$.
By Lemma~\ref{lem: psi C}, we have
$$w_0(2\upepsilon_{a_1}) = \tau_{\overline{Q}}^n(2\upepsilon_{a_1}) = \tau_{\overline{Q}}^n(\upgamma_n^{\overline{Q}}) = \tau_{\overline{Q}}(2\upepsilon_{a_n})= -\upgamma_n^{\overline{Q}} = -2\upepsilon_{a_1},$$
which implies $\tau_{\overline{Q}}(\upepsilon_{a_n}) = -\upepsilon_{a_1}$.
\end{proof}
\begin{theorem}
For each $\vee$-fixed $Q$ of type $A_{2n-1}$ and $k \in \mathbb{Z}$, we have
$$ \psi\left( \tau_Q^k(\gamma_{\imath}^Q) \right) = \psi\left( \tau_Q^k(\gamma_{2n-\imath}^Q) \right) = \tau_{\overline{Q}}^k( \upgamma_i^{\overline{Q}}).$$
\end{theorem}
\begin{proof}
By Proposition~\ref{eq: psi A to C}, ~\eqref{eq: psi A to C} and~\eqref{eq: sigma fix tau A}, we have
$$ \psi(\tau_Q(\alpha_\imath)) = \tau_{\overline{Q}}(\psi(\alpha_\imath)) \qquad \text{ for all $1 \le \imath \le 2n-1$}.$$
Since $\psi(\gamma_{\imath}^Q)=\psi(\gamma_{2n-\imath}^Q)=\upgamma_i^{\overline{Q}}$ by Lemma~\ref{lem: psi C}, our assertion follows.
\end{proof}
\subsection{$F_4$ and $G_2$ cases}
(1) The set of positive roots $\Phi^+_{E_6}$ can be described as
\begin{align*}
&(000001), (000010), (000100),(011111),(101110),(010100), \\
&(001000),(100000), (000011),(000110),(011211),(112221),\\
&(111210),(011100),(101000),(000111), (011221),(112321),\\
&(122321),(112210),(111100),(010111),(001110),(111211),\\
&(011110),(101100),(010000),(001111),(111211),(011210),\\
&(112211),(111110),(101111),(010110),(001100),(111111),
\end{align*}
where $(a_1a_2a_3a_4a_5a_6) \mathbin{:=} \sum_{i=1}^6 a_i\alpha_i$.
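For instance, $(112321)$ stands for $\alpha_1+\alpha_2+2\alpha_3+3\alpha_4+2\alpha_5+\alpha_6$.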
On the other hand, the set of positive roots $\Phi^+_{F_4}$ can be described as
\begin{align*}
&(0001), (0010), (0100),(1121),(0121),(1100),\\
& (0011),(0110),(1221),(1242),(1110), (0111), \\
& (1231),(1342),(2342), (1111),(0120),(1222),\\
&(1120),(1000),(1220),(1232),(0122),(1122),
\end{align*}
where $(b_1b_2b_3b_4) \mathbin{:=} \sum_{i=1}^4 b_i\upalpha_i$.
Then one can easily check that there exists a surjection $\psi\cl \Phi^+_{E_{6}} \to \Phi^+_{F_{4}}$ given by
\begin{align*}
\alpha_\imath \mapsto \upalpha_{{\overline{\im}}}, \quad \text{ extended linearly}.
\end{align*}
Note that, for each positive root $\upbeta \in \Phi^+_{F_4}$, we have $|\psi^{-1}(\upbeta)| =1$ or $2$, and
\begin{align*}
& \{ \upbeta \in \Phi^+_{F_4} \ | \ |\psi^{-1}(\upbeta)| =2 \} = \left\{ \begin{matrix}(0001),(0010),(1221),(0011),(1231),(1111)\\ (0111),(0121),(1222),(1121),(1110),(0110)\end{matrix} \right\}.
\end{align*}
\noindent
(2) The set of positive roots $\Phi^+_{G_2}$ is given by
\begin{align*}
\Phi^+_{G_2} = \{ \upalpha_1,\upalpha_2,\upalpha_1+\upalpha_2,2\upalpha_1+\upalpha_2,3\upalpha_1+\upalpha_2,3\upalpha_1+2\upalpha_2\}.
\end{align*}
Then one can easily check that there exists a surjection $\widetilde{\psi}\cl \Phi^+_{D_{4}} \to \Phi^+_{G_{2}}$ given by
\begin{align*}
\alpha_\imath \mapsto \upalpha_{{\overline{\im}}}, \quad \text{ extended linearly}.
\end{align*}
Now one can check that the following theorem holds:
\begin{theorem} \label{thm: ED to FG}
For each $\vee$-fixed $($resp.\ $\widetilde{\vee}$-fixed$)$ $Q=(\triangle,{}^\vee\hspace{-.4ex}\xi)$ $($resp.\ $Q=(\triangle,{}^{\widetilde{\vee}}\hspace{-.4ex}\xi))$ of type $E_{6}$ $($resp.\ $D_4)$ and $k \in \mathbb{Z}$, we have
$$ \psi\left( \tau_Q^k (\gamma^Q_\imath) \right) = \tau_{\overline{Q}}^k (\upgamma_{\ov{i}}^{\overline{Q}} ) \quad \text{ $($resp.\ $ \widetilde{\psi}\left( \tau_Q^k (\gamma^Q_i) \right) = \tau_{\overline{Q}}^k (\upgamma_{\ov{i}}^{\overline{Q}} ) )$} \quad \text{ for } i \in \triangle_0.$$
\end{theorem}
\section{Degree polynomials} \label{sec: Degree poly}
In this section, we generalize the definition of degree polynomial to include
types $BCFG$ and compute the degree polynomials in those types by
using the results in the previous sections. Recall the statistics
reviewed in subsection~\ref{subsec: stat}. We mostly omit the proofs,
since their arguments are essentially the same as
the ones in \cite{Oh18}, which use the labeling algorithms described
in Proposition~\ref{prop: label12 B} and Proposition~\ref{prop:
transpose} as crucial ingredients.
\smallskip
By~\eqref{eq: bijection} and the reflection operation in~\eqref{eq: Qd properties}~\eqref{it: reflection}, Lemma~\ref{lem: integer o} and Proposition~\ref{prop: well-defined integer} can be generalized to Dynkin diagrams $\triangle$ of any finite type:
\begin{proposition} \label{eq: d well-defined} \hfill
\begin{enumerate}[{\rm (a)}]
\item Let $Q$ be a Dynkin quiver of any finite type. For any $\pair{\alpha,\beta}$, $\pair{\alpha',\beta'} \in \Phi_Q(i,j)[k]$, we have $ {\operatorname{deg}}_{[Q]}(\pair{\alpha,\beta}) ={\operatorname{deg}}_{[Q]}(\pair{\alpha',\beta'})$. Thus the integer $\mathtt{o}_k^{Q}(i,j)$
is well-defined by \eqref{def:tto}.
We then define $\mathtt{O}_k^Q(i,j)$ by \eqref{eq: ttO}.
\item Let $\triangle$ be a Dynkin diagram of any finite type. For any $[Q],[Q'] \in [\hspace{-0.3ex}[ \triangle ]\hspace{-0.3ex}]$, we have $\mathtt{O}_k^{Q}(i,j) = \mathtt{O}_k^{Q'}(i,j)$. Thus $\mathtt{O}_k^{\triangle}(i,j)$ is well-defined.
\end{enumerate}
\end{proposition}
\begin{proof}
(a) By~\eqref{eq: bijection},~\eqref{eq: range} and the assumption, there exist $0 \le a,a',b,b' < {\ms{1mu}\mathsf{h}}/2$ such that $\tau_Q^a(\gamma^Q_i)=\alpha$, $\tau_Q^b(\gamma^Q_j)=\beta$, $\tau_Q^{a'}(\gamma^Q_i)=\alpha'$, $\tau_Q^{b'}(\gamma^Q_j)=\beta'$ and $s\mathbin{:=} a-a'=b-b'$.
Thus ${\underline{m}} \prec_Q^\mathtt{b} \pair{\alpha,\beta}$ if and only if $\tau_Q^{-s}({\underline{m}}) \prec_Q^\mathtt{b} \pair{\alpha',\beta'}$. Here $\tau_Q^{-s}({\underline{m}})$ denotes
the canonical exponents of ${\underline{m}}$ obtained by applying $\tau_Q^{-s}$ to ${\underline{m}}$.
\noindent
(b) It is enough to show that
$\mathtt{O}_k^{Q}(i,j) = \mathtt{O}_k^{s_tQ}(i,j)$ for a Dynkin quiver $Q=(\triangle,\xi)$ with a source $t \in \triangle_0$. Since ${\ms{1mu}\mathsf{h}}/2 > 2$, the second assertion follows from (a) and the reflection operation in~\eqref{eq: Qd properties}~\eqref{it: reflection}.
\end{proof}
\begin{definition} \label{def: distane poly 2}
Let $t$ be an indeterminate.
\begin{enumerate}[{\rm (1)}]
\item
For a Dynkin diagram $\triangle$ of any finite type and $i,j \in \triangle_0$, we define a polynomial $\mathfrak{d}^\triangle_{i,j}(t)$ as follows:
\begin{align} \label{eq: d poly BCFG}
\mathfrak{d}_{i,j}^\triangle(t) \mathbin{:=} \sum_{k \in \mathbb{Z}_{\ge 0} } \max(\ms{1mu}\mathsf{d}_i,\ms{1mu}\mathsf{d}_j) {\mathtt{O}_k^{\triangle}(i,j)} t^{k-1}.
\end{align}
We call $\mathfrak{d}_{i,j}^\triangle(t)$ the \emph{degree polynomial of $\triangle$ at $(i,j)$}.
\item
For $k \in \mathbb{Z}$, we set
$$ \mathfrak{d}_{i,j}[k] \mathbin{:=} \max(\ms{1mu}\mathsf{d}_i,\ms{1mu}\mathsf{d}_j) \mathtt{O}_k^\triangle(i,j).$$
\end{enumerate}
\end{definition}
\begin{remark} Definition~\ref{def: distane poly 2} coincides with Definition~\ref{def: Distance 1} when $Q$ is a Dynkin quiver of type $ADE$, since
$\ms{1mu}\mathsf{d}_i=1$ for all $i \in \triangle_0$.
\end{remark}
\begin{lemma} \label{lem: nonzero to nonzero}
Let $\underline{p}=\pair{\alpha,\beta}$ be a \prq in $(\Phi^+_{\triangle})^2$. If ${\operatorname{deg}}_Q(\pair{\alpha,\beta}) \ne 0$, we have
$${\operatorname{deg}}_{\overline{Q}}(\pair{\psi(\alpha),\psi(\beta)}) \ne 0.$$
\end{lemma}
\begin{proof}
Since ${\operatorname{deg}}_Q(\alpha,\beta) \ne 0$, there exists ${\underline{m}}$ such that ${\underline{m}}
\prec^\mathtt{b}_Q \underline{p}$. Let $\psi({\underline{m}})$ be the \ex in
$\Phi^+_\blacktriangle$, which can be understood as a canonical image of
${\underline{m}}$ via $\psi$. Then we have
$$ \psi({\underline{m}}) \prec_{{\overline{Q}}}^\mathtt{b} \pair{\psi(\alpha),\psi(\beta)} $$
since $\eta_1 \prec_Q \eta_2$ implies $ \psi(\eta_1) \prec_{\overline{Q}} \psi(\eta_2)$. Thus the assertion follows.
\end{proof}
\begin{remark}
Note that even though $\alpha,\beta \in \Phi^+_\triangle$ are not comparable with respect to $\prec_Q$, the roots
$\psi(\alpha),\psi(\beta) \in \Phi^+_\blacktriangle$ may be comparable with respect to $\prec_{\overline{Q}}$. Moreover, there exists a pair $\pair{\alpha,\beta} \in (\Phi^+_{\triangle})^2$
such that ${\operatorname{deg}}_Q(\alpha,\beta) =0$ but ${\operatorname{deg}}_{\overline{Q}}(\psi(\alpha),\psi(\beta)) \ne 0$. For instance, consider $\pair{[2,5],[2,3] }$ of type $A_5$ in Example~\ref{ex: AR Q ADE}. Then we have
$$ {\operatorname{deg}}_Q( [2,5],[2,3])=0, \text{ while } {\operatorname{deg}}_{\overline{Q}}( \psi([2,5]),\psi([2,3]))={\operatorname{deg}}_{\overline{Q}}( \langle 2,3 \rangle,\langle 1,2 \rangle)=1,$$
since ${\operatorname{hd}}_{\overline{Q}} ( \langle 2,3 \rangle,\langle 1,2 \rangle) = (\langle 1,3\rangle, \langle 2, 2 \rangle)$ (see $\Gamma_Q$ of type $A_5$ in Example~\ref{ex: AR Q ADE} and $\Gamma_{\overline{Q}}$ of type $C_3$ in Example~\ref{ex: AR Q BCFG}).
\end{remark}
In the following subsections, we will give the proof of the following theorem by a case-by-case analysis (see Remark~\ref{rmk: remained type}).
\begin{theorem} \label{thm:Main}
Let $\triangle$ be a Dynkin diagram of type $A_n, B_n, C_n, D_n$ or $E_6$. Then we have
\eqn
&&\widetilde{\de}_{i,j}^\triangle(t)=
\mathfrak{d}_{i,j}^\triangle(t) + \delta_{i,j^*}\, \ms{1mu}\mathsf{d}_i\, t^{{\ms{1mu}\mathsf{h}}-1}.
\eneqn
\end{theorem}
Note that for types $A_n$, $D_n$ and $E_6$, the above theorem was proved in \cite{Oh16,Oh17,Oh18}.
\subsection{$B_n$-case}
Throughout this subsection, $\triangle$ denotes the Dynkin diagram of type $D_{n+1}$ and $\blacktriangle$ denotes the one of type $B_n$.
\begin{theorem} \label{thm: B_n distance polynomial}
Let $\blacktriangle$ be the Dynkin diagram of type $B_n$. Then we have
\begin{align*}
&\mathfrak{d}_{i,j}^\blacktriangle(t) + \delta_{i,j}\, \ms{1mu}\mathsf{d}_i\, t^{{\ms{1mu}\mathsf{h}}-1} = \widetilde{\de}_{i,j}^\blacktriangle(t).
\end{align*}
\end{theorem}
By applying $\psi$ to~\eqref{eq: dist al, be 1}, we obtain the following pictures:
\begin{equation} \label{eq: dist al, be 1 p}
\begin{aligned}
& \scalebox{0.79}{{\xy
(-20,0)*{}="DL";(-10,-10)*{}="DD";(0,20)*{}="DT";(10,10)*{}="DR";
"DT"+(-30,-4); "DT"+(140,-4)**\dir{.};
"DD"+(-20,-6); "DD"+(150,-6) **\dir{.};
"DT"+(-32,-4)*{\scriptstyle 1};
"DT"+(-32,-8)*{\scriptstyle 2};
"DT"+(-32,-12)*{\scriptstyle \vdots};
"DT"+(-32,-16)*{\scriptstyle \vdots};
"DT"+(-34,-36)*{\scriptstyle n};
"DL"+(-10,0); "DD"+(-10,0) **\dir{-};"DL"+(-10,0); "DT"+(-14,-4) **\dir{.};
"DT"+(-6,-4); "DR"+(-10,0) **\dir{.};"DR"+(-10,0); "DD"+(-10,0) **\dir{-};
"DL"+(-6,0)*{\scriptstyle \psi(\beta)};
"DL"+(0,0)*{{\rm(i)'}};
"DR"+(-14,0)*{\scriptstyle \psi(\alpha)};
"DL"+(-10,0)*{\bullet};
"DR"+(-10,0)*{\bullet};
"DT"+(-6,-4); "DT"+(-14,-4) **\crv{"DT"+(-10,-2)};
"DT"+(-10,0)*{\scriptstyle 2};
"DD"+(-10,0)*{\bullet};
"DD"+(-10,-2)*{\scriptstyle \psi(\underline{s}=\alpha+\beta)};
"DL"+(15,-3); "DD"+(15,-3) **\dir{-};
"DR"+(11,-7); "DD"+(15,-3) **\dir{-};
"DT"+(11,-7); "DR"+(11,-7) **\dir{-};
"DT"+(11,-7); "DL"+(15,-3) **\dir{-};
"DL"+(19,-3)*{\scriptstyle \psi(\beta)};
"DR"+(7,-7)*{\scriptstyle \psi(\alpha)};
"DL"+(30,0)*{{\rm(ii)'}};
"DL"+(15,-3)*{\bullet};
"DL"+(31,12.5)*{\bullet};
"DL"+(31,14.5)*{\scriptstyle \psi(s_1)};
"DL"+(25,-13)*{\bullet};
"DL"+(25,-15)*{\scriptstyle \psi(s_2)};
"DR"+(11,-7)*{\bullet};
"DD"+(28,4); "DD"+(38,-6) **\dir{-};
"DD"+(28,4); "DD"+(50,26) **\dir{.};
"DD"+(48,4); "DD"+(38,-6) **\dir{-};
"DD"+(48,4); "DD"+(70,26) **\dir{.};
"DD"+(82,18); "DD"+(74,26) **\dir{.};
"DD"+(82,18); "DD"+(58,-6) **\dir{-};
"DD"+(48,4); "DD"+(58,-6) **\dir{-};
"DD"+(82,18)*{\bullet};
"DD"+(78,18)*{\scriptstyle \psi(\alpha)};
"DD"+(28,4)*{\bullet};
"DD"+(32,4)*{\scriptstyle \psi(\beta)};
"DD"+(74,26); "DD"+(70,26) **\crv{"DD"+(72,28)};
"DD"+(72,30)*{\scriptstyle 2};
"DD"+(38,14)*{\bullet};
"DD"+(47,14)*{\scriptstyle \psi(\underline{s}=\alpha+\beta)};
"DL"+(59,0)*{{\rm(iii)'}};
"DD"+(48,4); "DD"+(38,14) **\dir{.};
"DD"+(80,4); "DD"+(90,-6) **\dir{-};
"DD"+(96,0); "DD"+(90,-6) **\dir{-};
"DD"+(96,0); "DD"+(102,-6) **\dir{-};
"DD"+(122,16); "DD"+(102,-6) **\dir{-};
"DD"+(80,4); "DD"+(102,26) **\dir{.};
"DD"+(122,16); "DD"+(112,26) **\dir{.};
"DD"+(96,0); "DD"+(117,21) **\dir{.};
"DD"+(96,0); "DD"+(86,10) **\dir{.};
"DD"+(117,21)*{\bullet};
"DD"+(112,21)*{\scriptstyle \psi(s_1)};
"DD"+(86,10)*{\bullet};
"DD"+(91,10)*{\scriptstyle \psi(s_2)};
"DD"+(102,26); "DD"+(112,26) **\crv{"DD"+(107,28)};
"DD"+(107,29)*{\scriptstyle l > 2};
"DD"+(80,4)*{\bullet};
"DL"+(109,0)*{{\rm (iv)'}};
"DD"+(84,4)*{\scriptstyle \psi(\beta)};
"DD"+(122,16)*{\bullet};
"DD"+(117,16)*{\scriptstyle \psi(\alpha)};
"DD"+(119,4); "DD"+(129,-6) **\dir{-};
"DD"+(135,0); "DD"+(129,-6) **\dir{-};
"DD"+(119,4); "DD"+(141,26) **\dir{-};
"DD"+(154,19); "DD"+(147,26) **\dir{-};
"DD"+(135,0); "DD"+(154,19) **\dir{-};
"DD"+(119,4)*{\bullet};
"DL"+(146,0)*{{\rm (v)'}};
"DD"+(123,4)*{\scriptstyle \psi(\beta)};
"DD"+(141,26); "DD"+(147,26) **\crv{"DD"+(144,28)};
"DD"+(154,19)*{\bullet};
"DD"+(154,17)*{\scriptstyle \psi(\alpha)};
"DD"+(129,-6)*{\bullet};
"DD"+(129,-8)*{\scriptstyle 2 \psi(s_2)};
"DD"+(144,29)*{\scriptstyle 2};
\endxy}}
\allowdisplaybreaks \\
& \scalebox{0.79}{{\xy
(-20,0)*{}="DL";(-10,-10)*{}="DD";(0,20)*{}="DT";(10,10)*{}="DR";
"DT"+(-30,-4); "DT"+(120,-4)**\dir{.};
"DD"+(-20,-6); "DD"+(130,-6) **\dir{.};
"DT"+(-34,-4)*{\scriptstyle 1};
"DT"+(-34,-8)*{\scriptstyle 2};
"DT"+(-34,-12)*{\scriptstyle \vdots};
"DT"+(-34,-16)*{\scriptstyle \vdots};
"DT"+(-36,-36)*{\scriptstyle n};
"DD"+(-17,4); "DD"+(-7,-6) **\dir{-};
"DD"+(-1,0); "DD"+(-7,-6) **\dir{-};
"DD"+(-17,4); "DD"+(2,23) **\dir{-};
"DD"+(12,13); "DD"+(2,23) **\dir{-};
"DD"+(-1,0); "DD"+(12,13) **\dir{-};
"DD"+(-17,4)*{\bullet};
"DD"+(2,23)*{\bullet};
"DL"+(10,0)*{{\rm (vi)'}};
"DD"+(-13,4)*{\scriptstyle \psi(\beta)};
"DD"+(2,25)*{\scriptstyle \psi(s_3)};
"DD"+(12,13)*{\bullet};
"DD"+(12,11)*{\scriptstyle \psi(\alpha)};
"DD"+(-7,-6)*{\bullet};
"DD"+(-7,-8)*{\scriptstyle 2 \psi(s_2)};
"DD"+(15,4); "DD"+(25,-6) **\dir{-};
"DD"+(15,4); "DD"+(5,-6) **\dir{-};
"DD"+(15,4); "DD"+(30,19) **\dir{.};
"DD"+(40,9);"DD"+(25,-6) **\dir{-};
"DD"+(40,9);"DD"+(30,19) **\dir{.};
"DD"+(40,9)*{\bullet};
"DD"+(30,19)*{\bullet};
"DD"+(30,21)*{\scriptstyle \psi(s_1)};
"DD"+(5,-6)*{\bullet};
"DD"+(5,-8)*{\scriptstyle \psi(\beta)};
"DD"+(25,-6.2)*{\bullet};
"DD"+(25,-8.2)*{\scriptstyle \psi(s_2)};
"DD"+(35,9)*{\scriptstyle \psi(\alpha)};
"DD"+(25,4)*{{\rm (vii)'}};
"DD"+(45,4); "DD"+(55,-6) **\dir{-};
"DD"+(45,4); "DD"+(35,-6) **\dir{-};
"DD"+(45,4); "DD"+(67,26) **\dir{.};
"DD"+(80,19);"DD"+(55,-6) **\dir{-};
"DD"+(80,19);"DD"+(73,26) **\dir{.};
"DD"+(80,19)*{\bullet};
"DD"+(55,4)*{{\rm (viii)'}};
"DD"+(76,19)*{\scriptstyle \psi(\alpha)};
"DD"+(35,-6)*{\bullet};
"DD"+(55,-6.2)*{\bullet};
"DD"+(55,-8.2)*{\scriptstyle \psi(\underline{s}=\alpha+\beta)};
"DD"+(35,-8)*{\scriptstyle \psi(\beta)};
"DD"+(67,26); "DD"+(73,26) **\crv{"DD"+(70,28)};
"DD"+(70,30)*{\scriptstyle 2};
"DD"+(92,16); "DD"+(114,-6) **\dir{-};
"DD"+(92,16); "DD"+(70,-6) **\dir{-};
"DD"+(92,4)*{ {\rm (ix)'}};
"DD"+(114,-6)*{\bullet};
"DD"+(114,-8)*{\scriptstyle \psi(\alpha)};
"DD"+(70,-6)*{\bullet};
"DD"+(92,16)*{\bullet};
"DD"+(92,18)*{\scriptstyle\psi( \underline{s}=\alpha+\beta)};
"DD"+(70,-8)*{\scriptstyle \psi(\beta)};
\endxy}}
\end{aligned}
\end{equation}
By applying $\psi$ to~\eqref{eq: dist al be 2}, we obtain the following:
\begin{align} \label{eq: dist al be 2 p}
\scalebox{0.79}{{\xy
(-20,0)*{}="DL";(-10,-10)*{}="DD";(0,20)*{}="DT";(10,10)*{}="DR";
"DT"+(-30,-4); "DT"+(70,-4)**\dir{.};
"DD"+(-20,-6); "DD"+(80,-6) **\dir{.};
"DT"+(-32,-4)*{\scriptstyle 1};
"DT"+(-32,-8)*{\scriptstyle 2};
"DT"+(-32,-12)*{\scriptstyle \vdots};
"DT"+(-32,-16)*{\scriptstyle \vdots};
"DT"+(-33,-36)*{\scriptstyle n};
"DD"+(-10,4); "DD"+(0,-6) **\dir{-};
"DD"+(6,0); "DD"+(0,-6) **\dir{-};
"DD"+(6,0); "DD"+(12,-6) **\dir{-};
"DD"+(32,16); "DD"+(12,-6) **\dir{-};
"DD"+(-10,4); "DD"+(12,26) **\dir{.};
"DD"+(32,16); "DD"+(22,26) **\dir{.};
"DD"+(6,0); "DD"+(27,21) **\dir{.};
"DD"+(6,0); "DD"+(-4,10) **\dir{.};
"DD"+(12,26); "DD"+(22,26) **\crv{"DD"+(17,28)};
"DD"+(17,30)*{\scriptstyle 2};
"DD"+(-10,4)*{\bullet};
"DL"+(17,0)*{{\rm(x)'}};
"DD"+(-6,4)*{\scriptstyle \psi(\beta)};
"DD"+(-4,12)*{\scriptstyle \psi(\eta_2)};
"DD"+(32,16)*{\bullet};
"DD"+(27,16)*{\scriptstyle \psi(\alpha)};
"DD"+(28,23)*{\scriptstyle \psi(\eta_1)};
"DD"+(27,21)*{\scriptstyle \bullet};
"DT"+(2,-36)*{\scriptstyle \bullet};
"DT"+(2,-38)*{\scriptstyle \psi(\tau_1)};
"DT"+(-10,-36)*{\scriptstyle \bullet};
"DT"+(-10,-38)*{\scriptstyle \psi(\zeta_1)};
"DT"+(-4,-30)*{\scriptstyle \bullet};
"DT"+(-4,-32)*{\scriptstyle \psi(\gamma)};
"DD"+(35,4); "DD"+(45,-6) **\dir{-};
"DD"+(51,0); "DD"+(45,-6) **\dir{-};
"DD"+(51,0); "DD"+(57,-6) **\dir{-};
"DD"+(69,8); "DD"+(57,-6) **\dir{-};
"DD"+(35,4); "DD"+(54,23) **\dir{.};
"DD"+(69,8); "DD"+(54,23) **\dir{.};
"DD"+(51,0); "DD"+(64,13) **\dir{.};
"DD"+(51,0); "DD"+(41,10) **\dir{.};
"DD"+(35,4)*{\bullet};
"DL"+(62,0)*{{\rm(xi)'}};
"DD"+(39,4)*{\scriptstyle \psi(\beta)};
"DD"+(41,10)*{\scriptstyle \bullet};
"DD"+(45,10)*{\scriptstyle \psi(\eta_2)};
"DD"+(69,8)*{\bullet};
"DD"+(64,12.5)*{\bullet};
"DD"+(59,12.5)*{\scriptstyle \psi(\eta_1)};
"DD"+(65,8)*{\scriptstyle \psi(\alpha)};
"DT"+(47,-36)*{\scriptstyle \bullet};
"DT"+(47,-38)*{\scriptstyle \psi(\tau_1)};
"DT"+(35,-36)*{\scriptstyle \bullet};
"DT"+(35,-38)*{\scriptstyle \psi(\zeta_1)};
"DT"+(41,-32)*{\scriptstyle \psi(s_2)};
"DT"+(44,-5)*{\scriptstyle \psi(s_1)};
"DT"+(44,-7)*{\scriptstyle \bullet};
"DT"+(41,-30)*{\scriptstyle \bullet};
\endxy}} \qquad
2 \le i,j \le n-2
\end{align}
In each case, the value $(\upalpha,\upbeta) = (\psi(\alpha),\psi(\beta))$ does not depend on the choice of $Q$ and is given in Table~\ref{table: al,be B}:
\begin{table}[h]
\centering
{ \arraycolsep=1.6pt\def1.5{1.5}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|}
\hline
& (i) & (ii) & (iii) & (iv) & (v) & (vi) & (vii) & (viii) & (ix) & (x) & (xi) \\ \hline
$(\alpha,\beta)$ & $-2$ & $0$ & $-2$ & $0$ & $0$ & $2$ & $0$ & $-2$ & $0$ & $-2$ & $0$ \\ \hline
\end{tabular}
}\\[1.5ex]
\caption{$(\alpha,\beta)$ for non $[Q]$-simple \prqs of type $B_n$}
\protect\label{table: al,be B}
\end{table}
For the cases when $\pair{\alpha,\beta}$ is a $[Q]$-minimal \prq for $\alpha+\beta \in \Phi^+$,
\begin{align}\label{eq: p be al B}
p_{\beta,\alpha}=1 \quad \text{ if $\pair{\alpha,\beta}$ is of case ${\rm (ix)'}$} \quad \text{ and } \quad \text{$p_{\beta,\alpha}=0$, otherwise. }
\end{align}
\begin{example} \label{ex: Uding B}
Using $\Gamma_Q$ in Example~\ref{ex: Label B4},
\begin{enumerate}[{\rm (1)}]
\item $ \srt{1,2} \prec_Q^\mathtt{b} \pair{\srt{1,-3},\srt{2,3}}$ corresponds to ${\rm (i)}'$,
\item $ \pair{\srt{1,-3},\srt{2,4}} \prec_Q^\mathtt{b} \pair{\srt{2,-3},\srt{1,4}}$ corresponds to ${\rm (ii)}'$,
\item $ \srt{2,-4} \prec_Q^\mathtt{b} \pair{\srt{2,-3},\srt{3,-4}}$ corresponds to ${\rm (iii)}'$,
\item $ \pair{\srt{1,-3},\srt{2,-4}} \prec_Q^\mathtt{b} \pair{\srt{2,-3},\srt{1,-4}}$ corresponds to ${\rm (iv)}'$,
\item $ 2\srt{1} \prec_Q^\mathtt{b} \pair{\srt{1,-3},\srt{1,3}}$ corresponds to ${\rm (v)}'$,
\item $ \seq{\srt{3,4},2\srt{1}} \prec_Q^\mathtt{b} \pair{\srt{1,4},\srt{1,3}}$ corresponds to ${\rm (vi)}'$,
\item $ \pair{\srt{1},\srt{2,3}} \prec_Q^\mathtt{b} \pair{\srt{1,2},\srt{3}}$ corresponds to ${\rm (vii)}'$,
\item $ \srt{1} \prec_Q^\mathtt{b} \pair{\srt{1,-3},\srt{3}}$ corresponds to ${\rm (viii)}'$,
\item $ \srt{2,3} \prec_Q^\mathtt{b} \pair{\srt{2},\srt{3}}$ corresponds to ${\rm (ix)'}$,
\item $ \srt{1,2} \prec_Q^\mathtt{b} \pair{\srt{1,-3},\srt{2,3}} ,\pair{\srt{2},\srt{1}} \prec_Q^\mathtt{b} \pair{\srt{2,-3},\srt{1,3}}$ corresponds to ${\rm (x)}'$,
\item $ \pair{\srt{1,2},\srt{3,4}} \prec_Q^\mathtt{b} \pair{\srt{1},\srt{2},\srt{3,4}} ,\pair{\srt{2,3},\srt{1,4}} \prec_Q^\mathtt{b} \pair{\srt{2,4},\srt{1,3}}$ corresponds to ${\rm (xi)}'$.
\end{enumerate}
\end{example}
By the similarities between the combinatorics for $\Gamma_Q$ and $\Gamma_{\overline{Q}}$, described in Propositions~\ref{prop: label1} and~\ref{prop: label2} for $\Gamma_Q$ and Proposition~\ref{prop: label12 B} for $\Gamma_{\overline{Q}}$, we can apply the same argument as in
\cite{Oh18} and obtain the following theorem:
\begin{theorem} \label{thm: soc B}
Let $Q=(\blacktriangle,\xi)$ be a Dynkin quiver of type $B_n$.
\begin{enumerate}[{\rm (1)}]
\item For any \prq $\pair{\alpha,\beta} \in (\Phi^+_{B_n})^2$, ${\operatorname{hd}}_Q(\pair{\alpha,\beta})$ is well-defined.
\item For a \prq $\pair{\alpha,\beta}$, if ${\operatorname{deg}}_Q(\pair{\alpha,\beta}) \ne 0$, then the relative position of $\pair{\alpha,\beta}$ is one of ${\rm (i)}' \sim {\rm (xi)}'$ in~\eqref{eq: dist al, be 1 p} and~\eqref{eq: dist al be 2 p}.
\item Every \ex ${\underline{m}} \prec_Q^\mathtt{b} \pair{\alpha,\beta}$ is also described in ~\eqref{eq: dist al, be 1 p} and~\eqref{eq: dist al be 2 p}.
\item For $\pair{\alpha,\beta} \in (\Phi^+_\blacktriangle)^2$, assume ${\operatorname{deg}}_Q(\pair{\alpha,\beta}) \ne 0$. Then there exist $\alpha' \in \psi^{-1}(\alpha)$
and $\beta' \in \psi^{-1}(\beta)$ such that ${\operatorname{deg}}_{\underline{Q}}( \pair{\alpha',\beta'} ) \ne 0$ and ${\operatorname{hd}}_Q(\pair{\alpha,\beta})=\psi({\operatorname{hd}}_{\underline{Q}}( \pair{\alpha',\beta'} ))$. Furthermore, in this case, we have
$$ {\operatorname{deg}}_Q(\pair{\alpha,\beta})={\operatorname{deg}}_{\underline{Q}}( \pair{\alpha',\beta'} ).$$
\end{enumerate}
\end{theorem}
Notice that, in cases ${\rm (v)}'$ and ${\rm (vi)}'$, the head of $\pair{\alpha,\beta}$
has a component with coefficient $2$, a phenomenon which does not happen in the
$D_{n+1}$-case.
\begin{proof}[Proof of Theorem~\ref{thm: B_n distance polynomial}]
Note that for $1 \le i ,j \le n$ with $\min(i,j)<n$, we have $\max(\ms{1mu}\mathsf{d}_i,\ms{1mu}\mathsf{d}_j)=2$. By Theorem~\ref{thm: soc B}, we have
$$\mathfrak{d}^{\blacktriangle}_{i,j}(t) = 2 \times \mathfrak{d}^{\triangle}_{i,j}(t),$$
which implies the assertion by the formulas~\eqref{eq: d poly D} and~\eqref{eq: d poly BCFG}.
In the case when $i=j=n$, by Theorem~\ref{thm: soc B} and
the facts that $\psi(\epsilon_i \pm \epsilon_{n+1})=\varepsilon_i$ and $\ms{1mu}\mathsf{d}_n=1$, we have
$$ \mathfrak{d}^{\blacktriangle}_{n,n}(t) = \left( \mathfrak{d}^{\triangle}_{n,n+1}(t)+\mathfrak{d}^{\triangle}_{n,n}(t) \right) = \sum_{s=1}^{n} t^{2s-1},$$
which completes the proof.
\end{proof}
By observing~\eqref{eq: dist al be 2 p}, we have the following proposition:
\begin{proposition} \label{prop: many pairs}
For a Dynkin quiver $Q$ of type $B_n$, let $\pair{\alpha,\beta}$ be a \prq such that ${\operatorname{deg}}_Q(\pair{\alpha,\beta})=2$.
\begin{enumerate}[{\rm (1)}]
\item If $\alpha+\beta =\gamma \in \Phi^+_{B_n}$, then there are two \prqs $\pair{\alpha_i,\beta_i}$ $(1 \le i \le 2)$ of $\gamma$ such that
$$ {\operatorname{hd}}_Q(\pair{\alpha,\beta}) =\gamma \prec^\mathtt{b}_{[Q]} \pair{\alpha_i,\beta_i} \prec^\mathtt{b}_{[Q]} \pair{\alpha,\beta} $$
\begin{enumerate}[{\rm (i)}]
\item $\pair{\alpha_1,\beta_1}$, $\pair{\alpha_2,\beta_2}$ are not comparable with respect to $\prec^\mathtt{b}_{[Q]}$,
\item $\{ \alpha_2,\beta_2\} = \{ \langle a \rangle , \langle b \rangle \}$ and $\mathrm{res}_Q(\alpha_1), \mathrm{res}_Q(\beta_1 )<n$.
\end{enumerate}
\item If $\alpha+\beta \not\in \Phi^+_{B_n}$, then there are \exs $\pair{\alpha_1,\beta_1}$, $\seq{\tau,\zeta,\gamma}$ such that
$$ {\operatorname{hd}}_Q(\pair{\alpha,\beta}) \prec^\mathtt{b}_{[Q]} \pair{\alpha_1,\beta_1} , \seq{\tau,\zeta,\gamma} \prec^\mathtt{b}_{[Q]} \pair{\alpha,\beta} $$
\begin{enumerate}[{\rm (i)}]
\item $\pair{\alpha_1,\beta_1}$, $\seq{\tau,\zeta,\gamma}$ are not comparable with respect to $\prec^\mathtt{b}_{[Q]}$,
\item $\mathrm{res}^{[Q]}(\alpha_1), \mathrm{res}^{[Q]}(\gamma),\mathrm{res}^{[Q]}(\beta_1 )<n$ and $\mathrm{res}^{[Q]}(\tau), \mathrm{res}^{[Q]}(\zeta)=n$.
\end{enumerate}
\end{enumerate}
\end{proposition}
\subsection{$C_n$-case}
Throughout this subsection, $\blacktriangle$ usually denotes the Dynkin diagram of type $C_{n}$.
This subsection is devoted to proving the following theorem:
\begin{theorem} \label{thm: distance C_n}
Let $\blacktriangle$ be the Dynkin diagram of type $C_n$. Then we have
\begin{align*}
\mathfrak{d}_{i,j}^\blacktriangle(t) +\delta_{i,j} \max(\ms{1mu}\mathsf{d}_i,\ms{1mu}\mathsf{d}_j)t^{{\ms{1mu}\mathsf{h}}-1}
= \widetilde{\de}_{i,j}^\blacktriangle(t).
\end{align*}
\end{theorem}
Recall the natural bijection $\psi^{\rm tr}\cl \Phi^+_{B_n} \to
\Phi^+_{C_n}$ and Proposition~\ref{prop: transpose}, which tells us that the
labelings of $\Gamma_Q$ and $\Gamma_{Q^{\rm tr}}$ for a Dynkin quiver $Q$ of
type $B_n$ are almost the same, when we use the notations $\langle a,b \rangle$, $\langle a\rangle$ and $\langle a,a\rangle$
for $1 \le a \le |b| \le n$.
For $\alpha \in \Phi^+_{B_n}$, we set $$\alpha^{\rm tr} \mathbin{:=} \psi^{\rm tr}(\alpha).$$
Using Proposition~\ref{prop: label12 B} for $\Gamma_{\overline{Q}}$ and a similar argument to the one in \cite{Oh18},
we obtain the following, as in the $B_n$-case:
\begin{theorem} \label{thm: soc C}
Let $Q^{\rm tr}$ be a Dynkin quiver of type $C_n$.
\begin{enumerate}[{\rm (1)}]
\item For any \prq $\pair{\alpha,\beta} \in (\Phi^+_{C_n})^2$, $ {\operatorname{hd}}_{Q^{\rm tr}} (\pair{\alpha,\beta})$ is well-defined.
\item For $\pair{\alpha,\beta} \in (\Phi^+_{B_n})^2$, ${\operatorname{deg}}_Q(\pair{\alpha,\beta}) \ne 0$ if and only if ${\operatorname{deg}}_{Q^{\rm tr}}(\pair{\alpha^{\rm tr},\beta^{\rm tr}}) \ne 0$. Moreover, we have
$$ {\operatorname{deg}}_Q(\pair{\alpha,\beta})={\operatorname{deg}}_{Q^{\rm tr}}(\pair{\alpha^{\rm tr},\beta^{\rm tr}}).$$
\end{enumerate}
\end{theorem}
Note that
$$
\langle i \rangle = \varepsilon_i \ \text{ if $\blacktriangle$ is of type $B_n$} \text{ and }
\langle i,i \rangle = 2\epsilon_i \ \text{ if $\blacktriangle$ is of type $C_n$}.
$$
By this difference, the descriptions of $ {\operatorname{hd}}_Q(\pair{\alpha,\beta}) $ for
\prqs $\pair{ \alpha,\beta} \in (\Phi^+_{C_n})^2$ and a Dynkin quiver $Q$ of
type $C_n$ with ${\operatorname{deg}}_Q(\pair{\alpha,\beta}) \ne 0$ are sometimes different
from the corresponding ones in the case of type $B_n$. Here we present
the \prqs $\pair{ \alpha,\beta} \in (\Phi^+_{C_n})^2$ with ${\operatorname{deg}}_Q(\pair{\alpha,\beta})
\ne 0$ together with descriptions of $ {\operatorname{hd}}_Q(\pair{\alpha,\beta})$ as follows:
\begin{align*}
& \scalebox{0.79}{{\xy
(-20,0)*{}="DL";(-10,-10)*{}="DD";(0,20)*{}="DT";(10,10)*{}="DR";
"DT"+(-30,-4); "DT"+(140,-4)**\dir{.};
"DD"+(-20,-6); "DD"+(150,-6) **\dir{.};
"DT"+(-32,-4)*{\scriptstyle 1};
"DT"+(-32,-8)*{\scriptstyle 2};
"DT"+(-32,-12)*{\scriptstyle \vdots};
"DT"+(-32,-16)*{\scriptstyle \vdots};
"DT"+(-34,-36)*{\scriptstyle n};
"DL"+(-10,0); "DD"+(-10,0) **\dir{-};"DL"+(-10,0); "DT"+(-14,-4) **\dir{.};
"DT"+(-6,-4); "DR"+(-10,0) **\dir{.};"DR"+(-10,0); "DD"+(-10,0) **\dir{-};
"DL"+(-6,0)*{\scriptstyle \beta};
"DL"+(0,0)*{{\rm(i)''}};
"DR"+(-14,0)*{\scriptstyle \alpha};
"DL"+(-10,0)*{\bullet};
"DR"+(-10,0)*{\bullet};
"DT"+(-6,-4); "DT"+(-14,-4) **\crv{"DT"+(-10,-2)};
"DT"+(-10,0)*{\scriptstyle 2};
"DD"+(-10,0)*{\bullet};
"DD"+(-10,-2)*{\scriptstyle \ \underline{s}=\alpha+\beta};
"DL"+(15,-3); "DD"+(15,-3) **\dir{-};
"DR"+(11,-7); "DD"+(15,-3) **\dir{-};
"DT"+(11,-7); "DR"+(11,-7) **\dir{-};
"DT"+(11,-7); "DL"+(15,-3) **\dir{-};
"DL"+(19,-3)*{\scriptstyle \beta};
"DR"+(7,-7)*{\scriptstyle \alpha};
"DL"+(30,0)*{{\rm(ii)''}};
"DL"+(15,-3)*{\bullet};
"DL"+(31,12.5)*{\bullet};
"DL"+(31,14.5)*{\scriptstyle s_1};
"DL"+(25,-13)*{\bullet};
"DL"+(25,-15)*{\scriptstyle s_2};
"DR"+(11,-7)*{\bullet};
"DD"+(28,4); "DD"+(38,-6) **\dir{-};
"DD"+(28,4); "DD"+(50,26) **\dir{.};
"DD"+(48,4); "DD"+(38,-6) **\dir{-};
"DD"+(48,4); "DD"+(70,26) **\dir{.};
"DD"+(82,18); "DD"+(74,26) **\dir{.};
"DD"+(82,18); "DD"+(58,-6) **\dir{-};
"DD"+(48,4); "DD"+(58,-6) **\dir{-};
"DD"+(82,18)*{\bullet};
"DD"+(78,18)*{\scriptstyle \alpha};
"DD"+(28,4)*{\bullet};
"DD"+(32,4)*{\scriptstyle \beta};
"DD"+(74,26); "DD"+(70,26) **\crv{"DD"+(72,28)};
"DD"+(72,30)*{\scriptstyle 2};
"DD"+(38,14)*{\bullet};
"DD"+(47,14)*{\scriptstyle \underline{s}=\alpha+\beta};
"DL"+(59,0)*{{\rm(iii)''}};
"DD"+(48,4); "DD"+(38,14) **\dir{.};
"DD"+(80,4); "DD"+(90,-6) **\dir{-};
"DD"+(96,0); "DD"+(90,-6) **\dir{-};
"DD"+(96,0); "DD"+(102,-6) **\dir{-};
"DD"+(122,16); "DD"+(102,-6) **\dir{-};
"DD"+(80,4); "DD"+(102,26) **\dir{.};
"DD"+(122,16); "DD"+(112,26) **\dir{.};
"DD"+(96,0); "DD"+(117,21) **\dir{.};
"DD"+(96,0); "DD"+(86,10) **\dir{.};
"DD"+(117,21)*{\bullet};
"DD"+(112,21)*{\scriptstyle s_1};
"DD"+(86,10)*{\bullet};
"DD"+(91,10)*{\scriptstyle s_2};
"DD"+(102,26); "DD"+(112,26) **\crv{"DD"+(107,28)};
"DD"+(107,29)*{\scriptstyle l > 2};
"DD"+(80,4)*{\bullet};
"DL"+(109,0)*{{\rm (iv)''}};
"DD"+(84,4)*{\scriptstyle \beta};
"DD"+(122,16)*{\bullet};
"DD"+(117,16)*{\scriptstyle \alpha};
"DD"+(119,4); "DD"+(129,-6) **\dir{-};
"DD"+(135,0); "DD"+(129,-6) **\dir{-};
"DD"+(119,4); "DD"+(141,26) **\dir{-};
"DD"+(154,19); "DD"+(147,26) **\dir{-};
"DD"+(135,0); "DD"+(154,19) **\dir{-};
"DD"+(119,4)*{\bullet};
"DL"+(146,0)*{{\rm (v)''}};
"DD"+(123,4)*{\scriptstyle \beta};
"DD"+(141,26); "DD"+(147,26) **\crv{"DD"+(144,28)};
"DD"+(154,19)*{\bullet};
"DD"+(154,17)*{\scriptstyle \alpha};
"DD"+(129,-6)*{\bullet};
"DD"+(129,-8)*{\scriptstyle s=\alpha+\beta};
"DD"+(144,29)*{\scriptstyle 2};
\endxy}}
\allowdisplaybreaks \\
& \scalebox{0.79}{{\xy
(-20,0)*{}="DL";(-10,-10)*{}="DD";(0,20)*{}="DT";(10,10)*{}="DR";
"DT"+(-30,-4); "DT"+(120,-4)**\dir{.};
"DD"+(-20,-6); "DD"+(130,-6) **\dir{.};
"DT"+(-34,-4)*{\scriptstyle 1};
"DT"+(-34,-8)*{\scriptstyle 2};
"DT"+(-34,-12)*{\scriptstyle \vdots};
"DT"+(-34,-16)*{\scriptstyle \vdots};
"DT"+(-36,-36)*{\scriptstyle n};
"DD"+(-17,4); "DD"+(-7,-6) **\dir{-};
"DD"+(-1,0); "DD"+(-7,-6) **\dir{-};
"DD"+(-17,4); "DD"+(2,23) **\dir{-};
"DD"+(12,13); "DD"+(2,23) **\dir{-};
"DD"+(-1,0); "DD"+(12,13) **\dir{-};
"DD"+(-17,4)*{\bullet};
"DD"+(2,23)*{\bullet};
"DL"+(10,0)*{{\rm (vi)''}};
"DD"+(-13,4)*{\scriptstyle \beta};
"DD"+(2,25)*{\scriptstyle s_2};
"DD"+(12,13)*{\bullet};
"DD"+(12,11)*{\scriptstyle \alpha};
"DD"+(-7,-6)*{\bullet};
"DD"+(-7,-8)*{\scriptstyle s_1};
"DD"+(15,4); "DD"+(25,-6) **\dir{-};
"DD"+(15,4); "DD"+(5,-6) **\dir{-};
"DD"+(15,4); "DD"+(30,19) **\dir{.};
"DD"+(40,9);"DD"+(25,-6) **\dir{-};
"DD"+(40,9);"DD"+(30,19) **\dir{.};
"DD"+(40,9)*{\bullet};
"DD"+(30,19)*{\bullet};
"DD"+(30,21)*{\scriptstyle s_1};
"DD"+(5,-6)*{\bullet};
"DD"+(5,-8)*{\scriptstyle \beta};
"DD"+(15,4.2)*{\bullet};
"DD"+(15,6.2)*{\scriptstyle s_2};
"DD"+(35,9)*{\scriptstyle \alpha};
"DD"+(25,4)*{{\rm (vii)''}};
"DD"+(45,4); "DD"+(55,-6) **\dir{-};
"DD"+(45,4); "DD"+(35,-6) **\dir{-};
"DD"+(45,4); "DD"+(67,26) **\dir{.};
"DD"+(80,19);"DD"+(55,-6) **\dir{-};
"DD"+(80,19);"DD"+(73,26) **\dir{.};
"DD"+(80,19)*{\bullet};
"DD"+(55,4)*{{\rm (viii)''}};
"DD"+(76,19)*{\scriptstyle \alpha};
"DD"+(35,-6)*{\bullet};
"DD"+(45,4.2)*{\bullet};
"DD"+(45,2.2)*{\scriptstyle \underline{s}=\alpha+\beta};
"DD"+(35,-8)*{\scriptstyle \beta};
"DD"+(67,26); "DD"+(73,26) **\crv{"DD"+(70,28)};
"DD"+(70,30)*{\scriptstyle 2};
"DD"+(92,16); "DD"+(114,-6) **\dir{-};
"DD"+(92,16); "DD"+(70,-6) **\dir{-};
"DD"+(92,4)*{ {\rm (ix)''}};
"DD"+(114,-6)*{\bullet};
"DD"+(114,-8)*{\scriptstyle \alpha};
"DD"+(70,-6)*{\bullet};
"DD"+(92,16)*{\bullet};
"DD"+(92,18)*{\scriptstyle 2s};
"DD"+(70,-8)*{\scriptstyle \beta};
\endxy}} \hspace{-3em} {\operatorname{deg}}_Q(\pair{\alpha,\beta})=1,
\allowdisplaybreaks \\
&\scalebox{0.79}{{\xy
(-20,0)*{}="DL";(-10,-10)*{}="DD";(0,20)*{}="DT";(10,10)*{}="DR";
"DT"+(-30,-4); "DT"+(70,-4)**\dir{.};
"DD"+(-20,-6); "DD"+(80,-6) **\dir{.};
"DT"+(-32,-4)*{\scriptstyle 1};
"DT"+(-32,-8)*{\scriptstyle 2};
"DT"+(-32,-12)*{\scriptstyle \vdots};
"DT"+(-32,-16)*{\scriptstyle \vdots};
"DT"+(-33,-36)*{\scriptstyle n};
"DD"+(-10,4); "DD"+(0,-6) **\dir{-};
"DD"+(6,0); "DD"+(0,-6) **\dir{-};
"DD"+(6,0); "DD"+(12,-6) **\dir{-};
"DD"+(32,16); "DD"+(12,-6) **\dir{-};
"DD"+(-10,4); "DD"+(12,26) **\dir{.};
"DD"+(32,16); "DD"+(22,26) **\dir{.};
"DD"+(6,0); "DD"+(27,21) **\dir{.};
"DD"+(6,0); "DD"+(-4,10) **\dir{.};
"DD"+(12,26); "DD"+(22,26) **\crv{"DD"+(17,28)};
"DD"+(17,30)*{\scriptstyle 2};
"DD"+(-10,4)*{\bullet};
"DL"+(17,0)*{{\rm(x)''}};
"DD"+(-6,4)*{\scriptstyle \beta};
"DD"+(-4,12)*{\scriptstyle \eta_2};
"DD"+(32,16)*{\bullet};
"DD"+(27,16)*{\scriptstyle \alpha};
"DD"+(28,23)*{\scriptstyle \eta_1};
"DD"+(27,21)*{\scriptstyle \bullet};
"DT"+(-4,-30)*{\scriptstyle \bullet};
"DT"+(-4,-32)*{\scriptstyle \gamma};
"DD"+(35,4); "DD"+(45,-6) **\dir{-};
"DD"+(51,0); "DD"+(45,-6) **\dir{-};
"DD"+(51,0); "DD"+(57,-6) **\dir{-};
"DD"+(69,8); "DD"+(57,-6) **\dir{-};
"DD"+(35,4); "DD"+(54,23) **\dir{.};
"DD"+(69,8); "DD"+(54,23) **\dir{.};
"DD"+(51,0); "DD"+(64,13) **\dir{.};
"DD"+(51,0); "DD"+(41,10) **\dir{.};
"DD"+(35,4)*{\bullet};
"DL"+(62,0)*{{\rm(xi)''}};
"DD"+(39,4)*{\scriptstyle \beta};
"DD"+(41,10)*{\scriptstyle \bullet};
"DD"+(45,10)*{\scriptstyle \eta_2};
"DD"+(69,8)*{\bullet};
"DD"+(64,12.5)*{\bullet};
"DD"+(61,12.5)*{\scriptstyle \eta_1};
"DD"+(65,8)*{\scriptstyle \alpha};
"DT"+(41,-32)*{\scriptstyle \gamma_2};
"DT"+(44,-5)*{\scriptstyle \gamma_1};
"DT"+(44,-7)*{\scriptstyle \bullet};
"DT"+(41,-30)*{\scriptstyle \bullet};
\endxy}} {\operatorname{deg}}_Q(\pair{\alpha,\beta})=2.
\end{align*}
Here one can recognize that the differences happen in cases (vii)$'' \sim$ (xi)$''$ when compared with the corresponding cases for $B_n$.
In each case, the value $(\alpha,\beta)$ does not depend on the choice of $Q$ and is given in Table~\ref{table: al,be C}:
\begin{table}[h]
\centering
{ \arraycolsep=1.6pt\def1.5{1.5}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|}
\hline
& (i) & (ii) & (iii) & (iv) & (v) & (vi) & (vii) & (viii) & (ix) & (x) & (xi) \\ \hline
$(\alpha,\beta)$ & $-1$ & $0$ & $-1$ & $0$ & $0$ & $1$ & $0$ & $-1$ & $0$ & $-1$ & $0$ \\ \hline
\end{tabular}
}\\[1.5ex]
\caption{$(\alpha,\beta)$ for non $[Q]$-simple \prqs of type $C_n$}
\protect\label{table: al,be C}
\end{table}
For the cases when a \prq $\pair{\alpha,\beta}$ is $[Q]$-minimal for $\alpha+\beta \in \Phi^+$,
\begin{align}\label{eq: p be al C}
p_{\beta,\alpha}=1 \quad \text{ if $\pair{\alpha,\beta}$ is of case ${\rm (v)''}$} \quad \text{ and } \quad \text{$p_{\beta,\alpha}=0$, otherwise. }
\end{align}
\begin{example} \label{ex: Uding C}
Using $\Gamma_Q$ in Example~\ref{ex: Label C4},
\begin{enumerate}[{\rm (1)}]
\item[(7)] $ \pair{\srt{1,3},\srt{3,4}} \prec_Q^\mathtt{b} \pair{\srt{1,4},\srt{3,3}}$ corresponds to ${\rm (vii)}''$,
\item[(8)] $ \srt{1,3} \prec_Q^\mathtt{b} \pair{\srt{1,4},\srt{3,-4}}$ corresponds to ${\rm (viii)}''$,
\item[(9)] $ 2\srt{3,4} \prec_Q^\mathtt{b} \pair{\srt{3,3},\srt{4,4}}$ corresponds to ${\rm (ix)''}$,
\item[(10)] $ \srt{1,3} \prec_Q^\mathtt{b} \pair{\srt{3,4},\srt{1,-4}} \prec_Q^\mathtt{b} \pair{\srt{1,4},\srt{3,-4}}$ corresponds to ${\rm (x)}''$,
\item[(11)] $ \pair{\srt{1,2},\srt{3,4}} \prec_Q^\mathtt{b} \pair{\srt{2,3},\srt{1,4}} \prec_Q^\mathtt{b} \pair{\srt{2,4},\srt{1,3}}$ corresponds to ${\rm (xi)}''$.
\end{enumerate}
\end{example}
By observing (x)$''$ and (xi)$''$, we have the following proposition:
\begin{proposition} For a Dynkin quiver $Q$ of type $C_n$, let $\pair{\alpha,\beta}$ be a \prq such that ${\operatorname{deg}}_Q(\alpha,\beta)=2$.
Then there exists a unique \prq $\underline{p}$ such that
$$ {\operatorname{hd}}_Q(\pair{\alpha,\beta}) \prec^\mathtt{b}_Q \underline{p} \prec_Q^\mathtt{b} \pair{\alpha,\beta}.$$
\end{proposition}
\begin{proof} [Proof of Theorem~\ref{thm: distance C_n}]
By Theorem~\ref{thm: soc C}, we have
$$
\sum_{k \in \mathbb{Z}_{\ge 0} } {\mathtt{O}_k^{B_n}(i,j)} t^{k-1} = \sum_{k \in \mathbb{Z}_{\ge 0} } {\mathtt{O}_k^{C_n}(i,j)} t^{k-1}
$$
for any $1 \le i,j \le n$.
Since, when $\max(i,j)=n$, we have $\max(\ms{1mu}\mathsf{d}_i,\ms{1mu}\mathsf{d}_j)=2$ and
$$
\sum_{s=1}^{\min(i,j)} (t^{|i-j|+2s-1}+t^{2n-i-j+2s-1})
= 2 \sum_{s=1}^{i}t^{n-i+2s-1},
$$
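(for instance, when $n=4$, $i=2$ and $j=4$, both sides above equal $2(t^{3}+t^{5})$)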
our assertion follows from the definition of degree polynomial
\[ \mathfrak{d}_{i,j}^\blacktriangle(t) \mathbin{:=} \sum_{k \in \mathbb{Z}_{\ge 0} } \max(\ms{1mu}\mathsf{d}_i,\ms{1mu}\mathsf{d}_j) {\mathtt{O}_k^{\blacktriangle}(i,j)} t^{k-1}. \qedhere \]
\end{proof}
\subsection{$F_4$-case} In this case, we have only finitely many Dynkin quivers. Recall that $\ms{1mu}\mathsf{d}_1=\ms{1mu}\mathsf{d}_2=2$ and $\ms{1mu}\mathsf{d}_3=\ms{1mu}\mathsf{d}_4=1$. Then one can check the results in this subsection directly, and we exhibit several examples instead of giving proofs.
\smallskip
For a Dynkin quiver $Q=(\blacktriangle_{F_4}, \xi)$ with
$$\xymatrix@R=0.5ex@C=6ex{ *{\circled{$\circ$}}<3pt> \ar@{->}[r]^<{ _{\underline{4}} \ \ }_<{1} & *{\circled{$\circ$}}<3pt> \ar@{->}[r]^<{ _{\underline{3}} \ \ }_<{2}
&*{\circ}<3pt> \ar@{->}[r]^>{ \ \ _{\underline{1}}}_<{3 \ \ } &*{\circ}<3pt> \ar@{}[l]^<{ \ \ 4}_>{ _{\underline{2}} \ \ } },$$
the AR-quiver $\Gamma_Q$ can be described as follows:
\begin{align}\label{eq: F_4Q}
\Gamma^{F_4}_{Q}= \hspace{-2ex} \raisebox{3.15em}{ \scalebox{0.63}{\xymatrix@!C=3.3ex@R=2ex{
(i\setminus p) & -9 & -8 & -7 & -6 & -5 & -4 & -3 & -2 & -1 & 0 & 1 & 2 & 3 & 4\\
1&&&& \sprt{1,0,-1,0}\ar[dr] && \sprt{0,0,1,1} \ar@{->}[dr] && \sprt{1,0,0,-1} \ar@{->}[dr] &&\sprt{0,1,0,1}\ar@{->}[dr]&& \sprt{0,0,1,-1}\ar@{->}[dr]&& \sprt{0,1,-1,0} \\
2&&& \sprt{1,-1,0,0} \ar@{->}[dr]\ar@{->}[ur] && \sprt{1,0,0,1} \ar@{->}[dr]\ar@{->}[ur] && \sprt{1,0,1,0} \ar@{->}[dr]\ar@{->}[ur]&& \sprt{1,1,0,0}\ar@{->}[dr]\ar@{->}[ur]&& \sprt{0,1,1,0}\ar@{->}[dr]\ar@{->}[ur]&& \sprt{0,1,0,-1} \ar@{->}[ul]\ar@{->}[ur] \\
3&&\sprt{\frac{1}{2},-\frac{1}{2},-\frac{1}{2},\frac{1}{2}} \ar@{->}[dr]\ar@{=>}[ur] &&\sprt{\frac{1}{2},-\frac{1}{2},\frac{1}{2},\frac{1}{2}}\ar@{->}[dr]\ar@{=>}[ur] && \sprt{1,0,0,0}\ar@{->}[dr]\ar@{=>}[ur] && \sprt{\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2}}\ar@{->}[dr]\ar@{=>}[ur] && \sprt{\frac{1}{2},\frac{1}{2},\frac{1}{2},-\frac{1}{2}}\ar@{->}[dr]\ar@{=>}[ur] &&\sprt{0,1,0,0} \ar@{=>}[ur]\\
4& \sprt{\frac{1}{2},-\frac{1}{2},-\frac{1}{2},-\frac{1}{2}} \ar@{->}[ur] && \sprt{0,0,0,1}\ar@{->}[ur] && \sprt{\frac{1}{2},-\frac{1}{2},\frac{1}{2},-\frac{1}{2}}\ar@{->}[ur] && \sprt{\frac{1}{2},\frac{1}{2},-\frac{1}{2},\frac{1}{2}} \ar@{->}[ur] &&\sprt{0,0,1,0}\ar@{->}[ur] & & \sprt{\frac{1}{2},\frac{1}{2},-\frac{1}{2},-\frac{1}{2}} \ar@{->}[ur]
}}}
\end{align}
Here we use the notation $\sprt{a,b,c,d}\mathbin{:=} a\varepsilon_1+b\varepsilon_2+c\varepsilon_3+d\varepsilon_4$ with $\varepsilon_i =\sqrt{2}\epsilon_i$, and
$$ \alpha_1 = \sprt{0,1,-1,0}, \quad \alpha_2 = \sprt{0,0,1,-1}, \quad \alpha_3 = \sprt{0,0,0,1} \quad \text{ and } \quad \alpha_4 = \sprt{1/2,-1/2,-1/2,-1/2}.$$
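For instance, one checks that the highest root $2\alpha_1+3\alpha_2+4\alpha_3+2\alpha_4$ is written as $\sprt{1,1,0,0}$ in this notation; it appears with residue $2$ in $\Gamma^{F_4}_Q$ above.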
\begin{theorem} For a Dynkin quiver $Q$ of type $F_4$ and any \prq $\pair{\alpha,\beta} \in (\Phi^+_{F_4})^2$, ${\operatorname{hd}}_Q(\pair{\alpha,\beta})$ is well-defined. In particular, if $\alpha+\beta=\gamma$,
then ${\operatorname{hd}}_Q(\pair{\alpha,\beta})=\gamma$.
\end{theorem}
\begin{remark} \label{rmk: F4 do not hold}
In the $F_4$-case, $\mathfrak{d}_{i,j}(t)+\delta_{i,j}t^{{\ms{1mu}\mathsf{h}}-1}$ no longer coincides with $\widetilde{\de}_{i,j}(t)$. For instance,
$$ \widetilde{\de}_{2,3}[9]=4 \quad \text{ while } \quad \mathfrak{d}_{2,3}[9]=6.$$
Here the fact that $\mathfrak{d}_{2,3}[9]=6$ can be computed by using
$\Gamma_Q$ in~\eqref{eq: F_4Q} as follows: Let $\alpha=\sprt{0,1,0,-1}$
and $\beta=\sprt{\frac{1}{2},-\frac{1}{2},\frac{1}{2},\frac{1}{2}}$.
Then $\gamma=\alpha+\beta \in \Phi^+$ and
\begin{align} \label{eq: F_4 de not tde}
\gamma \prec_Q^\mathtt{b}
\pair{\sprt{\frac{1}{2},\frac{1}{2},-\frac{1}{2},-\frac{1}{2}},\sprt{0,0,1,0} } \prec_Q^\mathtt{b}
\pair{\sprt{0,1,0,0},\sprt{\frac{1}{2},-\frac{1}{2},\frac{1}{2},-\frac{1}{2}} } \prec_Q^\mathtt{b} \pair{\alpha,\beta}.
\end{align}
\end{remark}
\subsection{$G_2$-case} \label{subsec: G2 degree}
In this case, we have only finitely many Dynkin quivers. Recall that $\ms{1mu}\mathsf{d}_1=1$ and $\ms{1mu}\mathsf{d}_2=3$. Then one can check the results in this subsection directly, and we exhibit several examples instead of giving proofs.
For a Dynkin quiver $Q=(\blacktriangle_{G_2}, \xi)$ with
$$ \ \xymatrix@R=0.5ex@C=6ex{ *{\circ}<3pt> \ar@{->}[r]_<{ 1 \ \ }^<{ _{\underline{2}} \ \ } & *{\circled{$\odot$}}<3pt> \ar@{-}[l]^<{ \ \ 2 }_<{ \ \ _{\underline{1}} } },$$
we have
$$
\Gamma^{G_2}_{Q}= \raisebox{2.3em}{ \scalebox{0.9}{\xymatrix@!C=4ex@R=2ex{
(i\setminus p) & -3 & -2 & -1 & 0 & 1 & 2\\
1& \ssrt{0,1,-1} \ar@{=>}[dr] && \ssrt{1,0,-1} \ar@{=>}[dr] &&\ssrt{1,-1,0} \ar@{=>}[dr] \\
2&& \ssrt{1,1,-2} \ar@{-}[ul]\ar@{->}[ur] && \ssrt{2,-1,-1} \ar@{-}[ul]\ar@{->}[ur] &&\ssrt{1,-2,1} \ar@{-}[ul]
}}}.
$$
Here we use the orthonormal basis $\{ \epsilon_i \ | \ 1 \le i \le 3 \}$ of $\mathbb{R}^3$, the notation $\ssrt{a,b,c}\mathbin{:=} a\epsilon_1+b\epsilon_2+c\epsilon_3$ and
$$ \alpha_1 = \ssrt{0,1,-1} \quad \text{ and } \quad \alpha_2 = \ssrt{1,-2,1}.$$
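In particular, one checks that the six vertices of $\Gamma^{G_2}_{Q}$ above are $\alpha_1=\ssrt{0,1,-1}$, $2\alpha_1+\alpha_2=\ssrt{1,0,-1}$, $\alpha_1+\alpha_2=\ssrt{1,-1,0}$, $3\alpha_1+\alpha_2=\ssrt{1,1,-2}$, $3\alpha_1+2\alpha_2=\ssrt{2,-1,-1}$ and $\alpha_2=\ssrt{1,-2,1}$, so that every positive root of $G_2$ appears exactly once.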
\begin{theorem}
For a Dynkin quiver $Q$ of type $G_2$ and any \prq $\pair{\alpha,\beta} \in (\Phi^+_{G_2})^2$, ${\operatorname{hd}}_Q(\pair{\alpha,\beta})$ is well-defined. In particular, if $\alpha+\beta=\gamma$,
then ${\operatorname{hd}}_Q(\pair{\alpha,\beta})=\gamma$.
\end{theorem}
\begin{remark} \label{rmk: G2 do not hold}
As in the $F_4$-case, $\mathfrak{d}_{i,j}(t)+\delta_{i,j}t^{{\ms{1mu}\mathsf{h}}-1}$ does not coincide with $\widetilde{\de}_{i,j}(t)$ in the $G_2$-case either. For instance,
$$ \widetilde{\de}_{1,1}[4]= 2 \quad \text{ while } \quad \mathfrak{d}_{1,1}[4]= 1. $$
\end{remark}
Note that, for each Dynkin quiver $Q$ of type $G_2$, there exists only one \prq of positive roots $\pair{\alpha,\beta}$ such that $\alpha+\beta =\gamma \in \Phi^+$ and ${\operatorname{deg}}_Q(\pair{\alpha,\beta})=2$ $(i=1,2)$. In that case,
there exists a unique \prq $\underline{p}$ such that
$$\gamma \prec_Q^\mathtt{b} \underline{p} \prec_Q^\mathtt{b} \pair{\alpha,\beta}.$$
\section{Quiver Hecke algebras and $\mathrm{R}$-matrices} \label{Sec: quiver Hecke}
In this section, we briefly recall the definition of the unipotent quantum coordinate ring and the quiver Hecke algebra $R$ associated to a finite Cartan datum $(\sfC,\sfP,\Pi,\wl^\vee,\Pi^\vee)$. Then we review the definition, invariants and properties of affinizations of modules over quiver Hecke algebras, which were introduced and investigated in \cite{KP18,KKOP19A}. In the final section, we will see that the $\mathfrak{d}$-invariants for pairs of cuspidal modules over $R$ \emph{associated with Dynkin quivers} coincide with $(\widetilde{\de}_{i,j}(t))_{i,j \in I}$.
\subsection{Unipotent quantum coordinate ring}
Let $q$ be an indeterminate. For $k \in \mathbb{Z}_{\ge 1}$ and $i \in I$, we set
$$ q_i =q^{\ms{1mu}\mathsf{d}_i}, \quad [k]_i \mathbin{:=} \dfrac{q_i^k -q_i^{-k}}{q_i-q_i^{-1}} \quad \text{ and } \quad [k]_i! \mathbin{:=} \prod_{s=1}^k \; [s]_i.$$
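For instance, $[2]_i = q_i+q_i^{-1}$ and $[3]_i = q_i^{2}+1+q_i^{-2}$, so that $[3]_i! = (q_i+q_i^{-1})(q_i^{2}+1+q_i^{-2})$.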
We denote by $U_q(\mathfrak{g})$ the quantum group associated to a finite
Cartan datum $(\mathsf{C},\sfP,\Pi,\sfP^\vee,\Pi^\vee)$, which is the
associative algebra over $\mathbb Q(q)$ generated by $e_i,f_i$ $(i
\in I)$ and $q^{h}$ ($h\in\sfP^\vee$). We denote by $U_q^\pm(\mathfrak{g})$
the subalgebra generated by $e_i$ (resp.\ $f_i$) for $i \in I$. Note
that $U_q(\mathfrak{g})$ admits the weight space decomposition:
$$ U_q(\mathfrak{g}) = \mathop{\mbox{\normalsize$\bigoplus$}}\limits_{\beta \in \sfQ} U_q(\mathfrak{g})_\beta. $$
For any $i \in I$, there exists a unique $\mathbb{Q}(q)$-linear endomorphism $e_ i'$ of $U_q^-(\mathfrak{g})$ such
that
$$ e_ i'(f_j)=\delta_{i,j},\quad e_i'(xy) = e_i'(x)y+ q^{(\alpha_i,\beta)}x e'_i(y)\quad (x \in U_q^-(\mathfrak{g})_\beta, \ y \in U_q^-(\mathfrak{g})).$$
Then there exists a unique non-degenerate symmetric bilinear form $( , )_K$ on $U_q^-(\mathfrak{g})$ such that
$$
( \mathbf{1},\mathbf{1})_K=1, \quad (f_iu,v)_K = (u,e_i'v)_K \text{ for $i \in I$, $u,v \in U_q^-(\mathfrak{g})$}.
$$
We set $\mathbb{A}=\mathbb{Z}[q^{\pm 1}]$ and let
$U_q^{+}(\mathfrak{g})_{\mathbb{A}}$ be the $\mathbb{A}$-subalgebra of
$U_q^{+}(\mathfrak{g})$ generated by $e_i^{(n)}\mathbin{:=} e_i^n/[n]_i!$
($i \in I$, $n\in\mathbb{Z}_{>0}$).
Let $\Delta_\mathfrak{n}$ be the algebra homomorphism $U_q^+(\mathfrak{g}) \to U_q^+(\mathfrak{g}) \mathop\otimes U_q^+(\mathfrak{g})$ given by $ \Delta_\mathfrak{n}(e_i) = e_i
\mathop\otimes 1 + 1 \mathop\otimes e_i$,
where the algebra structure on $U_q^+(\mathfrak{g})
\mathop\otimes U_q^+(\mathfrak{g})$ is defined by
$$(x_1 \mathop\otimes x_2) \cdot (y_1 \mathop\otimes y_2) = q^{-({\rm wt}(x_2),{\rm wt}(y_1))}(x_1y_1 \mathop\otimes x_2y_2).$$
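For instance, for $i,j \in I$ one computes directly from this twisted multiplication that
$$ \Delta_\mathfrak{n}(e_ie_j) = e_ie_j \mathop\otimes 1 + e_i \mathop\otimes e_j + q^{-(\alpha_i,\alpha_j)}\, e_j \mathop\otimes e_i + 1 \mathop\otimes e_ie_j,$$
where the power of $q$ in the third term comes from ${\rm wt}(e_i)=\alpha_i$ and ${\rm wt}(e_j)=\alpha_j$.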
Set
$$ A_q(\mathfrak{n}) = \mathop{\mbox{\normalsize$\bigoplus$}}\limits_{\beta \in \sfQ^-} A_q(\mathfrak{n})_\beta \quad \text{ where } A_q(\mathfrak{n})_\beta \mathbin{:=} \operatorname{Hom}_{\mathbb{Q}(q)}(U^+_q(\mathfrak{g})_{-\beta}, \mathbb{Q}(q)).$$
Then $A_q(\mathfrak{n})$ is an algebra with the multiplication given by
$(\psi \cdot \theta)(x)= \theta(x_{(1)})\psi(x_{(2)})$, where
$\Delta_\mathfrak{n}(x)=x_{(1)} \mathop\otimes x_{(2)}$ in Sweedler's notation.
Let us denote by $A_\mathbb{A}(\mathfrak{n})$ the $\mathbb{A}$-submodule
of $A_q(\mathfrak{n})$
consisting of $ \uppsi \in A_q(\mathfrak{n})$ such that
$ \uppsi \left( U_q^{+}(\mathfrak{g})_{\mathbb{A}} \right) \subset\mathbb{A}$. Then
it is an $\mathbb{A}$-subalgebra of $A_q(\mathfrak{n})$.
Note that Lusztig \cite{Lus90,Lus91} and Kashiwara \cite{K91} have constructed a specific $\mathbb{A}$-basis $\mathbf{B}^{{\rm up}}$ of $A_\mathbb{A}(\mathfrak{n})$.
\smallskip
Note that for a commutation class $\cc$,
the convex order
$\prec_\cc$ provides the \emph{dual PBW-vectors}
$$\st{\pbw(\beta)\mid\beta\in\Phi^+}$$ of
$A_q(\mathfrak{n})$ which satisfy the following conditions.
\begin{theorem} [\cite{BKM12}] \label{thm: minimal pair dual pbw}
The dual PBW vector $\pbw(\alpha_i)$ $(i\in I)$ is the element such that
$\bl f_j,\pbw(\alpha_i)\br_K=\delta_{i,j}$.
For a $\cc$-minimal \pr $\pair{\alpha,\beta}$ for
$\gamma \in \Phi^+ \setminus \Pi$, we have
\begin{align}\label{eq: BKMc}
\pbw(\alpha)\pbw(\beta) - q^{-(\alpha,\beta)}\pbw(\beta)\pbw(\alpha)
= q^{-p_{\beta,\alpha}}(1-q^{2(p_{\beta,\alpha}-(\alpha,\beta))} )\pbw(\gamma).
\end{align}
\end{theorem}
For a dominant weight $\lambda \in \sfP^+$, let $V(\lambda)$ be the
irreducible highest weight $U_q(\mathfrak{g})$-module with highest weight
vector $u_\lambda$ of weight $\lambda$. Let $( \ , \ )_\lambda$ be the
non-degenerate symmetric bilinear form on $V(\lambda)$ such that
$(u_\lambda,u_\lambda)_\lambda=1$ and $(xu,v)_\lambda=(u,\upvarphi(x)v)_\lambda$ for $u,v
\in V(\lambda)$ and $x\in U_q(\mathfrak{g})$, where $\upvarphi$ is the algebra
antiautomorphism on $U_q(\mathfrak{g})$ defined by $\upvarphi(e_i)=f_i$,
$\upvarphi(f_i)=e_i$ and $\upvarphi(q^h)=q^h$. For each $\mu, \zeta \in
\mathsf{W}_\mathfrak{g} \lambda$, the \emph{unipotent quantum minor} $D(\mu, \zeta)$ is
an element in $A_q(\mathfrak{n})$ given by
$D(\mu,\zeta)(x)=(x u_\mu, u_\zeta)_\lambda$
for $x\in U_q^+(\mathfrak{g})$, where $u_\mu$ and $u_{\zeta}$ are the
extremal weight vectors in $V(\lambda)$ of weight $\mu$ and $\zeta$,
respectively. Then we have $D(\mu,\zeta)\in \mathbf B^{{\rm up}}
\sqcup\{0\}$.
Note that $D(\mu,\zeta)\in \mathbf B^{{\rm up}}$ if and only if
$\mu\preceq \zeta$.
Here $\mu\preceq \zeta$ if there are $w,w'\in \mathsf{W}_\mathfrak{g}$
such that $w'\preceq w$ and $\mu=w\lambda$, $\zeta=w'\lambda$.
\smallskip
For a reduced expression ${\underline{w}}_0=s_{i_1}s_{i_2}\cdots s_{i_\ell}$ of the longest element
$w_0 \in \mathsf{W}_\mathfrak{g}$, define
${\underline{w}}_{\le k}\mathbin{:=} s_{i_1}\cdots s_{i_k}$ and
$ \lambda_k \mathbin{:=} {\underline{w}}_{\le k}\varpi_{i_k}$ for $1 \le k \le \ell$.
Note that $\lambda_{k^-} ={\underline{w}}_{\le k-1}\varpi_{i_k}$ if $k^- >0$. Here
$$k^-\mathbin{:=} \max \left( \{ 1 \le s < k \mid
i_s=i_k\}\sqcup \{0\} \right).
$$
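For instance, for the reduced expression ${\underline{w}}_0=s_1s_2s_1$ of type $A_2$, we have $1^-=2^-=0$ and $3^-=1$, so that $\lambda_3={\underline{w}}_{\le 3}\varpi_1=w_0\varpi_1$ and $\lambda_{3^-}=\lambda_1=s_1\varpi_1$.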
For $0 \le t \le s \le \ell$, we set
$$
D_{{\underline{w}}_0}(s,t) \mathbin{:=} \begin{cases}
D( {\underline{w}}_{\le s}\varpi_{i_s},\ {\underline{w}}_{\le t}\varpi_{i_t}) & \text{ if } 1 \le t \le s\le \ell \text{ and } i_s=i_t, \\
D( {\underline{w}}_{\le s}\varpi_{i_s},\ \varpi_{i_s}) & \text{ if } 0 = t < s \le \ell, \\
\mathbf{1} & \text{ if } 0 = t =s.
\end{cases}
$$
Then $D_{{\underline{w}}_0}(s,t)$ belongs to $\mathbf B^{{\rm up}}$ and
\begin{align} \label{eq: D[s]}
\pbw[{[{\underline{w}}_0]}](\beta^{{\underline{w}}_0}_s) = D_{{\underline{w}}_0}(s,s^-) \qquad \text{ for } \ 1\le s \le \ell.
\end{align}
\subsection{Quiver Hecke algebras} Let $\mathbf{k}$ be a field. For $i,j \in I$, we choose polynomials $\mathcal{Q}_{i,j}(u,v) \in \mathbf{k}[u,v]$ such that $\mathcal{Q}_{i,j}(u,v)=\mathcal{Q}_{j,i}(v,u)$, which is of the form
$$
\mathcal{Q}_{i,j}(u,v) = \begin{cases}
\sum_{p(\alpha_i,\alpha_i)+q(\alpha_j,\alpha_j)=-2(\alpha_i,\alpha_j)} t_{i,j;p,q}u^pv^q & \text{ if } i \ne j, \\
0 & \text{ if } i =j,
\end{cases}
$$
where $ t_{i,j;-c_{i,j},0} \in \mathbf{k}^\times$.
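For instance, if $(\alpha_i,\alpha_i)=(\alpha_j,\alpha_j)=2$ and $(\alpha_i,\alpha_j)=-1$, the condition on $(p,q)$ reads $2p+2q=2$, so that $\mathcal{Q}_{i,j}(u,v)=t_{i,j;1,0}\,u+t_{i,j;0,1}\,v$.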
For $\beta \in \sfQ^+$ with ${\rm ht}(\beta)=m$, we set
$$ I^\beta \mathbin{:=} \Bigl\{ \nu =( \nu_1,\ldots, \nu_m) \in I^m \ \bigm| \ \sum_{k=1}^m \alpha_{ \nu_k} =\beta \Bigr\},$$
on which the symmetric group $\mathfrak{S}_m = \langle r_k \mid k =1,\ldots,m-1 \rangle$ acts by place permutations.
\begin{definition}
For $\beta \in \sfQ^+$, the \emph{quiver Hecke algebra} $R(\beta)$ associated with $\sfC$ and $(\mathcal{Q}_{i,j}(u,v))_{i,j \in I}$ is the $\mathbf{k}$-algebra generated by
$$ \{ e( \nu) \ | \ \nu \in I^\beta \}, \ \{ x_k \ | \ 1 \le k \le m \}, \ \{ \tau_l \ | \ 1 \le l <m \} $$
satisfying the following defining relations:
\begin{align*}
& e( \nu)e( \nu') = \delta_{\nu,\nu'}e( \nu), \ \sum_{ \nu \in I^\beta} e( \nu)=1, \ x_ke(\nu)=e(\nu)x_k, \ x_kx_l=x_lx_k , \allowdisplaybreaks\\
&\tau_l e( \nu) = e\bl r_l( \nu)\br\tau_l, \ \tau_k\tau_l=\tau_l\tau_k \ \text{ if } |k-l|>1, \ \tau_k^2e( \nu) = \mathcal{Q}_{ \nu_k, \nu_{k+1}}(x_k,x_{k+1}) e( \nu), \allowdisplaybreaks\\
& (\tau_kx_l-x_{r_k(l)}\tau_k)e( \nu) = \begin{cases}
-e( \nu) & \text{ if } l =k \text{ and } \nu_k= \nu_{k+1}, \\
e( \nu) & \text{ if } l =k+1 \text{ and } \nu_k= \nu_{k+1}, \\
0 & \text{ otherwise},
\end{cases} \allowdisplaybreaks \\
& (\tau_{k+1}\tau_k\tau_{k+1}-\tau_k\tau_{k+1}\tau_k)e( \nu) = \begin{cases}
\overline{\mathcal{Q}}_{ \nu_k, \nu_{k+1}}(x_k,x_{k+1},x_{k+2})e( \nu) & \text{ if } \nu_k= \nu_{k+2}, \\
0 & \text{ otherwise},
\end{cases}
\end{align*}
where
$$
\overline{\mathcal{Q}}_{i,j}(u,v,w) \mathbin{:=} \dfrac{\mathcal{Q}_{i,j}(u,v)-\mathcal{Q}_{i,j}(w,v)}{u-w} \in \mathbf{k}[u,v,w].
$$
\end{definition}
The algebra $R(\beta)$ has the $\mathbb{Z}$-grading defined by
$$ \deg(e( \nu))=0, \quad \deg(x_ke( \eta))=(\alpha_{ \eta_k},\alpha_{ \eta_k}), \quad \deg(\tau_le( \eta))=-(\alpha_{ \eta_l},\alpha_{ \eta_{l+1}}).$$
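For instance, if $\eta_l$ and $\eta_{l+1}$ are adjacent vertices of a simply-laced Dynkin diagram, then $\deg(\tau_le(\eta))=1$, while $\deg(\tau_le(\eta))=-(\alpha_{\eta_l},\alpha_{\eta_l})=-2$ if $\eta_l=\eta_{l+1}$ with $\ms{1mu}\mathsf{d}_{\eta_l}=1$.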
We denote by $R(\beta) \text{-}\mathrm{gmod}$ the category of graded finite-dimensional $R(\beta)$-modules with degree preserving homomorphisms.
We set
$R\text{-}\mathrm{gmod} \mathbin{:=} \bigoplus_{\beta \in \sfQ^+} R(\beta)\text{-}\mathrm{gmod}$.
The trivial $R(0)$-module of degree 0 is denoted by $\mathbf{1}$.
For simplicity, we write ``a module'' instead of ``a graded module''.
We define the grading shift functor $q$
by $(q M)_k = M_{k-1}$ for a $ \mathbb{Z}$-graded module $M = \bigoplus_{k \in \mathbb{Z}} M_k $.
For $M, N \in R(\beta)\text{-}\mathrm{gmod} $, $\operatorname{Hom}_{R(\beta)}(M,N)$ denotes the space of degree preserving module homomorphisms.
We define
\[
\mathrm{H{\scriptstyle OM}}_{R(\beta)}( M,N ) \mathbin{:=} \bigoplus_{k \in \mathbb{Z}} \operatorname{Hom}_{R(\beta)}(q^{k}M, N),
\]
and set $ \deg(f) \mathbin{:=} k$ for $f \in \operatorname{Hom}_{R(\beta)}(q^{k}M, N)$.
We sometimes write $R$ for $R(\beta)$ in $\mathrm{H{\scriptstyle OM}}_{R(\beta)}( M,N )$ for simplicity.
For $M \in R(\beta)\text{-}\mathrm{gmod}$, we set $M^\star \mathbin{:=} \operatorname{Hom}_{\mathbf{k}}(M, \mathbf{k})$ with the $R(\beta)$-action given by
$$
(r \cdot f) (u) \mathbin{:=} f(\psi(r)u), \quad \text{for $f\in M^\star$, $r \in R(\beta)$ and $u\in M$,}
$$
where $\psi$ is the antiautomorphism of $R(\beta)$ which fixes the generators.
We say that $M$ is \emph{self-dual} if $M \simeq M^\star$. For an $R(\beta)$-module $M$, we set ${\rm wt}(M) \mathbin{:=} -\beta$.
\smallskip
\emph{We sometimes ignore grading shifts in the rest of this paper.}
\smallskip
Let
$
e(\beta, \beta') \mathbin{:=} \sum_{\nu \in I^\beta, \nu' \in I^{\beta'}} e(\nu, \nu'),
$
where $e(\nu, \nu')$ is the idempotent corresponding to the concatenation
$\nu\ast\nu'$ of
$\nu$ and $\nu'$.
Then there is an injective ring homomorphism
$$R(\beta)\otimes R(\beta')\to e(\beta,\beta')R(\beta+\beta')e(\beta,\beta')$$
given by
$e(\nu)\otimes e(\nu')\mapsto e(\nu,\nu')$,
$x_ke(\beta)\otimes 1\mapsto x_ke(\beta,\beta')$,
$1\otimes x_ke(\beta')\mapsto x_{k+{\rm ht}(\beta)}e(\beta,\beta')$,
$\tau_ke(\beta)\otimes 1\mapsto \tau_ke(\beta,\beta')$ and
$1\otimes \tau_ke(\beta')\mapsto \tau_{k+{\rm ht}(\beta)}e(\beta,\beta')$.
For $R(\beta)$-modules $M$ and $N$, we set
$$
M \mathop{\mathbin{\mbox{\large $\circ$}}} N \mathbin{:=} R(\beta+\beta') e(\beta, \beta') \otimes_{R(\beta) \otimes R(\beta')} (M \otimes N).
$$
We denote by $M \mathbin{\scalebox{.9}{$\nabla$}} N$ the head of $M \mathop{\mathbin{\mbox{\large $\circ$}}} N$ and by $M \mathbin{\scalebox{.9}{$\Delta$}} N$ the socle of $M \mathop{\mathbin{\mbox{\large $\circ$}}} N$.
We say that simple $R$-modules $M$ and $N$ \emph{strongly commute} if $M \mathop{\mathbin{\mbox{\large $\circ$}}} N$ is simple. A simple $R$-module
$L$ is \emph{real} if $L$ strongly commutes with itself. Note that if $M$ and $N$ strongly commute, then $M\mathop{\mathbin{\mbox{\large $\circ$}}} N \simeq N \mathop{\mathbin{\mbox{\large $\circ$}}} M$ up to a grading shift.
We denote by $K(R\text{-}\mathrm{gmod})$ the Grothendieck ring of $R\text{-}\mathrm{gmod}$.
\begin{theorem} [{\cite{KL09, KL11, R08}}] \label{Thm: categorification}
There exists an isomorphism of $\mathbb{A}$-bialgebras
\begin{align}\label{eq: upomega}
\Upomega \colon K(R\text{-}\mathrm{gmod}) \isoto A_{\mathbb{A}}(\mathfrak{n}),
\end{align}
which preserves weights.
\end{theorem}
\begin{proposition}[{\cite[Proposition 4.1]{KKOP18}}] \label{prop: cat of D}
For $\varpi \in\sfP^+$ and $\mu,\zeta \in W\varpi$ with $\mu\preceq \zeta$, there exists a self-dual real simple $R$-module $\mathsf{M}(\mu,\zeta)$ such that
$$ \Upomega([\mathsf{M}(\mu,\zeta)]) = D(\mu,\zeta).$$
\end{proposition}
We call $\mathsf{M}(\mu,\zeta)$ the \emph{determinantial module} associated to $D(\mu,\zeta)$. Note that every
determinantial module is real.
\subsection{Affinization and $\mathrm{R}$-matrices} In this subsection, we recall the notion of affinization and $\mathrm{R}$-matrices for quiver Hecke algebras
which were mainly investigated in~\cite{KKK18A,KKKO18,KP18}.
\smallskip
For $\beta \in \sfQ^+$ with ${\rm ht}(\beta)=m$ and $i \in I$, let
\begin{align} \label{eq: pibe}
\frakp_{i,\be} = \sum_{ \eta\, \in I^\beta} \left( \prod_{a \in [1,m]; \ \eta_a = i} x_a\right)e(\eta) \in R(\beta),
\end{align}
which belongs to the center $Z(R(\beta))$ of $R(\beta)$.
\begin{definition} \label{Def: aff}
Let $M$ be a simple $R(\beta)$-module. An \emph{affinization} of $M$ with degree $t_{\widehat{\sfM}}$ is an $R(\beta)$-module $\widehat{\sfM}$ with an endomorphism $z_{\widehat{\sfM}}$ of $\widehat{\sfM}$ with degree $t_{\widehat{\sfM}} \in \mathbb{Z}_{>0}$
and an isomorphism $\widehat{\sfM} / z_{\widehat{\sfM}} \widehat{\sfM} \simeq M$ such that
\begin{enumerate}[{\rm (i)}]
\item $\widehat{\sfM}$ is a finitely generated free module over the polynomial ring
$\cor[z_{\widehat{\sfM}}]$,
\item $\frakp_{i,\be} \widehat{\sfM} \ne 0$ for all $i\in I$.
\end{enumerate}
\end{definition}
Note that every affinization is \emph{essentially even}, i.e., $t_{\widehat{\sfM}} \in 2\mathbb{Z}_{>0}$ (\cite[Proposition 2.5]{KP18}).
Thus, from now on, we assume that every affinization has an even degree.
\begin{definition}\label{def: affreal}
We say that a simple $R$-module $M$ is \emph{affreal} if $M$ is real and admits an affinization.
If an affinization has degree $t \in 2\mathbb{Z}_{>0}$, we say that $M$ is \emph{$t$-affreal}.
\end{definition}
\begin{theorem} [{\cite[Theorem 3.26]{KKOP19A}}]
For $\varpi \in\sfP^+$ and $\mu,\zeta \in W\varpi$ with $\mu\preceq \zeta$, the determinantial module $\mathsf{M}(\mu,\zeta)$
admits an affinization $\widehat{\sfM}(\mu,\zeta)$. When $\varpi=\varpi_i$, $\mathsf{M}(\mu,\zeta)$ admits an affinization of degree $(\alpha_i,\alpha_i)=2\ms{1mu}\mathsf{d}_i$.
\end{theorem}
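In particular, combining this theorem with Proposition~\ref{prop: cat of D}, every determinantial module $\mathsf{M}(\mu,\zeta)$ is affreal in the sense of Definition~\ref{def: affreal}; we record this small observation here for later reference.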
Let $\beta \in \sfQ^+$ with $m = {\rm ht}(\beta)$. For $k=1, \ldots, m-1$ and $ \nu \in I^\beta$, the \emph{intertwiner} $\varphi_k$ is defined by
$$
\varphi_k e(\nu) =
\begin{cases}
(\tau_k x_k - x_k \tau_k) e(\nu)
= (x_{k+1}\tau_k - \tau_kx_{k+1}) e(\nu) & \text{ if } \nu_k = \nu_{k+1}, \\
\tau_k e(\nu) & \text{ otherwise.}
\end{cases}
$$
\begin{lemma} [{\cite[Lemma 1.5]{KKK18A}}] \label{Lem: intertwiners}
Let $\mathfrak{S}_m$ be the symmetric group of degree $m\in\mathbb{Z}_{>0}$.
\begin{enumerate}[\rm (i)]
\item $\varphi_k^2 e( \nu ) = ( Q_{\nu_k, \nu_{k+1}} (x_k, x_{k+1} )+ \delta_{\nu_k, \nu_{k+1}} )\, e(\nu)$.
\item $\{ \varphi_k \}_{k=1, \ldots, m-1}$ satisfies the braid relation.
\item For a reduced expression $w = s_{i_1} \cdots s_{i_t} \in \mathfrak{S}_m$, we set $\varphi_w \mathbin{:=} \varphi_{i_1} \cdots \varphi_{i_t} $. Then
$\varphi_w$ does not depend on the choice of reduced expression of $w$.
\item For $w \in \mathfrak{S}_m$ and $1 \le k \le m$, we have $\varphi_w x_k = x_{w(k)} \varphi_w$.
\item For $w \in \mathfrak{S}_m$ and $1 \le k < m$, if $w(k+1)=w(k)+1$, then $\varphi_w \tau_k = \tau_{w(k)} \varphi_w$.
\end{enumerate}
\end{lemma}
For $m,n \in \mathbb{Z}_{\ge 0}$, we denote by $w[m,n]$ the element of $\mathfrak{S}_{m+n}$ defined by
$$
w[m,n](k) \mathbin{:=}
\left\{
\begin{array}{ll}
k+n & \text{ if } 1 \le k \le m, \\
k-m & \text{ if } m < k \le m+n.
\end{array}
\right.
$$
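As a small illustration of this definition, included here only for the reader's convenience: for $m=2$ and $n=1$ we have
$$ w[2,1](1)=2,\qquad w[2,1](2)=3,\qquad w[2,1](3)=1, $$
so that $w[2,1]$ has exactly $mn=2$ inversions and hence length $2$.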
Let $\beta, \gamma \in \sfQ^+$ and set $m\mathbin{:=} {\rm ht}(\beta)$ and $n\mathbin{:=} {\rm ht}(\gamma)$.
For $M \in R(\beta)\text{-}\mathrm{Mod}$ and $N \in R(\gamma)\text{-}\mathrm{Mod}$, the $R(\beta)\otimes R(\gamma)$-linear map $M \otimes N \rightarrow N \mathop{\mathbin{\mbox{\large $\circ$}}} M$ defined by $u \otimes v \mapsto \varphi_{w[n,m]} (v \otimes u)$
can be extended to an $R(\beta+\gamma)$-module homomorphism (up to a grading shift)
$$
\mathrm{R}_{M,N}\colon M\mathop{\mathbin{\mbox{\large $\circ$}}} N \longrightarrow N \mathop{\mathbin{\mbox{\large $\circ$}}} M.
$$
Let $\widehat{\sfM}$ be an affinization of a simple $R$-module $M$, and let $N$ be a non-zero $R$-module. We define a homomorphism (up to a grading shift)
$$
R^{{\rm norm}}_{\widehat{\sfM}, N} \mathbin{:=} z_{\widehat{\sfM}}^{-s} \mathrm{R}_{\widehat{\sfM}, N}\colon \widehat{\sfM} \mathop{\mathbin{\mbox{\large $\circ$}}} N \longrightarrow N \mathop{\mathbin{\mbox{\large $\circ$}}} \widehat{\sfM},
$$
where $s$ is the largest integer such that $\mathrm{R}_{\widehat{\sfM}, N}(\widehat{\sfM} \mathop{\mathbin{\mbox{\large $\circ$}}} N) \subset z_{\widehat{\sfM}}^s (N \mathop{\mathbin{\mbox{\large $\circ$}}} \widehat{\sfM})$.
We define
$$
\mathbf{r}^{\mathrm{l}}_{M,N} \colon M \mathop{\mathbin{\mbox{\large $\circ$}}} N \longrightarrow N \mathop{\mathbin{\mbox{\large $\circ$}}} M
$$
to be the homomorphism (up to a grading shift) induced from $R^{{\rm norm}}_{\widehat{\sfM}, N}$ by specializing at $z_{\widehat{\sfM}}=0$. By the definition, $\mathbf{r}^{\mathrm{l}}_{M,N}$ never vanishes.
Similarly, for a simple module $N$ admitting an affinization $\widehat{\sfN}$, we can define
$$ \mathbf{r}^{\mathrm{r}}_{M,N} \colon M \mathop{\mathbin{\mbox{\large $\circ$}}} N \longrightarrow N \mathop{\mathbin{\mbox{\large $\circ$}}} M.$$
\begin{proposition}%
[{\cite{KKKO15}, \cite[Proposition 3.13]{KKOP19A}}]\label{prop: l=r}
Let $M$ and $N$ be simple modules.
We assume that one of them is affreal.
Then we have
$$\mathrm{H{\scriptstyle OM}}_R(M \mathop{\mathbin{\mbox{\large $\circ$}}} N,N \mathop{\mathbin{\mbox{\large $\circ$}}} M)=\cor\, \mathbf{r},$$
where $\mathbf{r}= \mathbf{r}^{\mathrm{l}}_{M,N}$ or $\mathbf{r}^{\mathrm{r}}_{M,N}$.
\end{proposition}
By the above proposition, we may drop the superscript and write this homomorphism as $\mathbf{r}_{M,N}$, which we call the \emph{$\mathrm{R}$-matrix}.
Now we define
\begin{align*}
\Lambda(M,N) &\mathbin{:=} \deg (\mathbf{r}_{M,N}) , \\
\widetilde{\Lambda}(M,N) &\mathbin{:=} \frac{1}{2} \big( \Lambda(M,N) + ({\rm wt}(M), {\rm wt}(N)) \big) , \\
\mathfrak{d}(M,N) &\mathbin{:=} \frac{1}{2} \big( \Lambda(M,N) + \Lambda(N,M)\big)
\end{align*}
for simple modules $M$, $N$ such that one of them admits an
affinization.
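Let us also record an elementary consequence of these definitions, included here only for the reader's convenience: whenever both sides are defined, we have
$$ \widetilde{\Lambda}(M,N)+\widetilde{\Lambda}(N,M) = \mathfrak{d}(M,N) + ({\rm wt}(M),{\rm wt}(N)), $$
since the bilinear form $(\;,\;)$ is symmetric.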
\begin{proposition}%
[{\cite[Theorem 3.2]{KKKO15}, \cite[Proposition 2.10]{KP18}, \cite[Proposition 3.13, Lemma 3.17]{KKOP19A}}] \label{prop: simple head}
Let $M$ and $N$ be simple $R$-modules such that one of $M$ and $N$ is affreal.
Then we have
\begin{enumerate}[{\rm (i)}]
\item $M \mathop{\mathbin{\mbox{\large $\circ$}}} N$ has a simple socle and a simple head,
\item ${\rm Im}(\mathbf{r}_{M,N})$ is isomorphic to $M \mathbin{\scalebox{.9}{$\nabla$}} N$ and to $N\mathbin{\scalebox{.9}{$\Delta$}} M$,
\item $M \mathbin{\scalebox{.9}{$\nabla$}} N$ and $M \mathbin{\scalebox{.9}{$\Delta$}} N$ each appear exactly once in the composition series of $M \mathop{\mathbin{\mbox{\large $\circ$}}} N$.\label{it:once}
\end{enumerate}
\end{proposition}
Now,
let us collect properties of $\Lambda(M,N)$, $\widetilde{\Lambda}(M,N)$ and $\mathfrak{d}(M,N)$
for simple $R$-modules $M$ and $N$ such that one of $M$ and $N$ is affreal.
They are proved in
\cite{KKKO15,KKKO18,KKOP18,KKOP19A} or can be proved by using the arguments in those papers:
\begin{lemma}[{\cite[Lemma 3.11]{KKOP19A}}] \label{lem: d/2}
Let $M$ and $N$ be simple $R$-modules.
Assume that one of them admits an affinization of degree $t$.
Then we have $$
\widetilde{\Lambda}(M,N) , \ \mathfrak{d}(M,N) \in \frac{\;t\;}{2}\; \mathbb{Z}_{\ge0}.
$$
\end{lemma}
By this lemma, for $t_1$-affreal $M$ and $t_2$-affreal $N$, we have
\begin{align*}
\widetilde{\Lambda}(M,N) , \ \mathfrak{d}(M,N) \in \frac{\; {\rm lcm}(t_1,t_2)\;}{2}\; \mathbb{Z}_{\ge0}.
\end{align*}
Here ${\rm lcm}(a,b)$ denotes the least common multiple of $a,b \in \mathbb{Z} \setminus\{ 0 \}$.
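For instance, if $M$ is $2$-affreal and $N$ is $6$-affreal, then $\widetilde{\Lambda}(M,N)$ and $\mathfrak{d}(M,N)$ lie in $3\,\mathbb{Z}_{\ge0}$, since ${\rm lcm}(2,6)/2=3$; we record this small numerical example only as an illustration.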
\begin{proposition}[{\cite[Proposition 3.2.17]{KKKO18}}] \label{prop: length 2} Let $M_i$ be $t_i$-affreal $(i=1,2)$.
Assume that $$\mathfrak{d}(M_1,M_2)=\dfrac{ \max(t_1,t_2)}{2}.$$ Then we have an exact sequence
$$ 0 \to M_1 \mathbin{\scalebox{.9}{$\Delta$}} M_2 \to M_1 \mathop{\mathbin{\mbox{\large $\circ$}}} M_2 \to M_1 \mathbin{\scalebox{.9}{$\nabla$}} M_2 \to 0.$$
\end{proposition}
\begin{lemma} [{\cite[Lemma 3.1.4]{KKKO18}}] \label{lem: tLa}
Let $M$ and $N$ be self-dual simple modules. If one of them is affreal, then
$$ q^{\widetilde{\Lambda}(M,N)} M \mathbin{\scalebox{.9}{$\nabla$}} N \text{ is a self-dual simple module}. $$
\end{lemma}
\medskip
\begin{proposition}[{\cite[Corollary 3.11]{KKKO15}}] \label{prop: not vanishing}
Let $M_k$ be $R$-modules $(k=1,2,3)$
and assume that $M_2$ is simple.
If $\varphi_1\cl L \to M_1 \mathop{\mathbin{\mbox{\large $\circ$}}} M_2$ and
$\varphi_2\cl M_2 \mathop{\mathbin{\mbox{\large $\circ$}}} M_3 \to L'$ are non-zero homomorphisms, then the composition
\[
L\mathop{\mathbin{\mbox{\large $\circ$}}} M_3\To[\varphi_1\circ M_3] M_1\mathop{\mathbin{\mbox{\large $\circ$}}} M_2\mathop{\mathbin{\mbox{\large $\circ$}}} M_3\To[M_1\circ\varphi_2]
M_1\mathop{\mathbin{\mbox{\large $\circ$}}} L'
\]
does not vanish. Similarly,
if $\psi_1\cl L \to M_2 \mathop{\mathbin{\mbox{\large $\circ$}}} M_3$ and
$\psi_2\cl M_1 \mathop{\mathbin{\mbox{\large $\circ$}}} M_2 \to L'$ are non-zero homomorphisms, then
the composition
\[
M_1 \mathop{\mathbin{\mbox{\large $\circ$}}} L \To[M_1\circ \psi_1] M_1\mathop{\mathbin{\mbox{\large $\circ$}}} M_2\mathop{\mathbin{\mbox{\large $\circ$}}} M_3 \To[\psi_2 \circ M_3]
L'\mathop{\mathbin{\mbox{\large $\circ$}}} M_3
\]
does not vanish.
\end{proposition}
The following proposition was stated incorrectly in \cite[Proposition 5.11]{Oh18}; we present here its corrected statement together with a proof.
\begin{proposition} [{\cite[Proposition 5.11]{Oh18}}] \label{prop: inj}
Let $M_i$ and $N_i$ be simple $R$-modules such that one of them is affreal
and $M_i \mathop{\mathbin{\mbox{\large $\circ$}}} N_i$ has composition length $2$
for $i=1,2$. Let $M$ and $N$ be simple $R$-modules
such that one of them is affreal.
Let $\varphi_i \colon M_i \mathop{\mathbin{\mbox{\large $\circ$}}} N_i \to M \mathop{\mathbin{\mbox{\large $\circ$}}} N$ $(i=1,2)$
be non-zero homomorphisms.
We assume that
$$M_1 \mathbin{\scalebox{.9}{$\Delta$}} N_1 \simeq M_2 \mathbin{\scalebox{.9}{$\Delta$}} N_2
\qtq M_1 \mathbin{\scalebox{.9}{$\nabla$}} N_1 \not\simeq M_2 \mathbin{\scalebox{.9}{$\nabla$}} N_2.$$
Then we have
$$M_1 \mathbin{\scalebox{.9}{$\Delta$}} N_1 \simeq M_2 \mathbin{\scalebox{.9}{$\Delta$}} N_2 \simeq M \mathbin{\scalebox{.9}{$\Delta$}} N \quad \text{ and } \quad \varphi_i \colon M_i \mathop{\mathbin{\mbox{\large $\circ$}}} N_i \monoto M \mathop{\mathbin{\mbox{\large $\circ$}}} N.$$
\end{proposition}
\Proof
If $\vphi_i$ is not injective, then
$\Im(\vphi_i)\simeq M_i\mathbin{\scalebox{.9}{$\nabla$}} N_i$, since $M_i\mathop{\mathbin{\mbox{\large $\circ$}}} N_i$ has composition length $2$; being a simple submodule of $M\mathop{\mathbin{\mbox{\large $\circ$}}} N$, it is moreover isomorphic to $M\mathbin{\scalebox{.9}{$\Delta$}} N$.
If $\vphi_i$ is injective, then
$M_i\mathbin{\scalebox{.9}{$\Delta$}} N_i\simeq M\mathbin{\scalebox{.9}{$\Delta$}} N$.
\snoi
(i)\ If $\vphi_1$ and $\vphi_2$ are not injective, then
$M_1 \mathbin{\scalebox{.9}{$\nabla$}} N_1 \simeq M_2 \mathbin{\scalebox{.9}{$\nabla$}} N_2$, which is a contradiction.
\snoi
(ii) Assume that one of $\vphi_1$ and $\vphi_2$ (say $\vphi_1$) is injective and the other is
not injective. Then we have
$M\mathbin{\scalebox{.9}{$\Delta$}} N\simeq M_1\mathbin{\scalebox{.9}{$\Delta$}} N_1\simeq M_2\mathbin{\scalebox{.9}{$\nabla$}} N_2$.
Hence $ M_2\mathbin{\scalebox{.9}{$\nabla$}} N_2\simeq M_1\mathbin{\scalebox{.9}{$\Delta$}} N_1\simeq M_2\mathbin{\scalebox{.9}{$\Delta$}} N_2$,
which contradicts Proposition~\ref{prop: simple head}\;\eqref{it:once}.
\snoi(iii)\ Hence $\vphi_1$ and $\vphi_2$ are injective,
and $M\mathbin{\scalebox{.9}{$\Delta$}} N\simeq M_1\mathbin{\scalebox{.9}{$\Delta$}} N_1\simeq M_2\mathbin{\scalebox{.9}{$\Delta$}} N_2$.
\QED
\begin{definition}[{\cite[Definition 2.5]{KK19}}]
Let $L_1, L_2, \ldots, L_r$ be affreal modules.
The sequence $(L_1, \ldots, L_r)$ is called a \emph{normal sequence} if the composition of the R-matrices
\begin{align*}
\mathbf{r}_{L_1, \ldots, L_r} \mathbin{:=}& \prod_{1 \le i < j \le r} \mathbf{r}_{L_i, L_j}\allowdisplaybreaks \\
= & (\mathbf{r}_{L_{r-1}, L_r}) \circ \cdots \circ ( \mathbf{r}_{L_2, L_r} \circ \cdots \circ \mathbf{r}_{L_2, L_3} ) \circ ( \mathbf{r}_{L_1, L_r} \circ \cdots \circ \mathbf{r}_{L_1, L_2} ) \allowdisplaybreaks\\
& \colon q\ms{4mu}\raisebox{1.5ex}{$\scriptstyle\sum_{1\le i < k \le r} \Lambda(L_i,L_k)$} L_1 \mathop{\mathbin{\mbox{\large $\circ$}}} L_2 \mathop{\mathbin{\mbox{\large $\circ$}}} \cdots \mathop{\mathbin{\mbox{\large $\circ$}}} L_r \longrightarrow L_r \mathop{\mathbin{\mbox{\large $\circ$}}} \cdots \mathop{\mathbin{\mbox{\large $\circ$}}} L_2 \mathop{\mathbin{\mbox{\large $\circ$}}} L_1
\end{align*}
does not vanish.
\end{definition}
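Note that for $r=2$ any pair $(L_1,L_2)$ of affreal modules is automatically a normal sequence, since the $\mathrm{R}$-matrix $\mathbf{r}_{L_1,L_2}$ never vanishes; we record this remark here for convenience, as it will be used implicitly below.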
The following lemmas are proved in \cite{KK19,KKOP21C} when $\mathfrak{g}$ is
of symmetric type. However, by using the same arguments, one can
prove them when $\mathfrak{g}$ is of symmetrizable type.
\begin{lemma} [{\cite[Lemma 2.6]{KK19}}]\label{Lem: normal head socle}
Let $(L_1, \ldots, L_r)$ be a normal sequence of affreal $R$-modules.
Then ${\rm Im}(\mathbf{r}_{L_1,\ldots, L_r})$ is simple
and it coincides with the head of $L_1\mathop{\mathbin{\mbox{\large $\circ$}}}\cdots \mathop{\mathbin{\mbox{\large $\circ$}}} L_r$
and also with the socle of $L_r\mathop{\mathbin{\mbox{\large $\circ$}}}\cdots \mathop{\mathbin{\mbox{\large $\circ$}}} L_1$.
\end{lemma}
\begin{lemma}[{\cite[Lemma 2.8, Lemma 2.10]{KK19}}] \label{lem: normality} \hfill
\begin{enumerate}[{\rm (i)}]
\item A sequence $(L_1 , \ldots , L_r)$ of affreal $R$-modules is a normal
sequence if and only if $(L_1 , \ldots , L_{r-1})$ is a normal sequence and
$$ \Lambda({\rm hd}(L_1 \mathop{\mathbin{\mbox{\large $\circ$}}} \cdots \mathop{\mathbin{\mbox{\large $\circ$}}} L_{r-1}),L_r ) = \sum_{1 \le j \le r-1}\Lambda(L_j,L_r).$$
\item A sequence $(L_1 , \ldots , L_r)$ of affreal $R$-modules is a normal
sequence if and only if $(L_{ 2}, \ldots , L_{r})$ is a normal sequence and
$$ \Lambda(L_1,{\rm hd}(L_2 \mathop{\mathbin{\mbox{\large $\circ$}}} \cdots \mathop{\mathbin{\mbox{\large $\circ$}}} L_{r}) ) = \sum_{2 \le j \le r}\Lambda(L_1,L_j).$$
\end{enumerate}
\end{lemma}
\begin{lemma}[{\cite[Lemma 2.23]{KKOP21C}}] \label{lem: de additive}
Let $L, M, N$ be affreal $R$-modules. Then $\mathfrak{d}(L, M \mathbin{\scalebox{.9}{$\nabla$}} N) = \mathfrak{d}(L, M) + \mathfrak{d}(L, N)$ if and only if $(L, M, N)$ and $(M, N, L)$
are normal sequences.
\end{lemma}
\section{Cuspidal modules and $t$-quantized Cartan matrix} \label{sec: Cuspidal}
Let $(\sfC,\sfP,\Pi,\wl^\vee,\Pi^\vee)$ be an arbitrary finite Cartan datum.
In this section, we first recall the \emph{cuspidal modules}
over quiver Hecke algebras categorifying the dual PBW vectors. Then we
will prove that the $\mathfrak{d}$-invariants among the cuspidal modules
associated to Dynkin quivers are encoded in $(\widetilde{\de}_{i,j}(t))_{i,j \in I}$, and that $(\mathfrak{d}_{i,j}(t))_{i,j \in I}$ carries information on the composition length of the convolution product of a
pair of cuspidal modules. To prove these results, we will use the combinatorial properties and statistics of AR-quivers in a crucial way.
\subsection{Cuspidal modules} \label{subsec: cuspidal}
Let $R$ be the quiver Hecke algebra associated to $\sfC$.
Then, in \cite{BKM12,McNa15,Kato12,KR09}, the following theorem is proved:
\begin{theorem} \label{thm: cuspidal}
For any reduced expression ${\underline{w}}_0=s_{i_1}\cdots s_{i_\ell}$ of $w_0 \in \mathsf{W}$, there exists a family of self-dual affreal simple modules
$$ \{ S_{{\underline{w}}_0}(\beta) \mid\beta\in\Phi^+ \} $$
satisfying the following properties:
\begin{enumerate}[{\rm (1)}]
\item For each $\beta\in\Phi^+$,
we have $$\Upomega([S_{{\underline{w}}_0}(\beta)]) = \pbw[{[{\underline{w}}_0]}](\beta) \quad \text{ and }\quad {\rm wt}( S_{{\underline{w}}_0}(\beta)) =-\beta.$$
\item For any \ex ${\underline{m}}=\seq{m_\beta}\in \mathbb{Z}_{\ge 0}^{\Phi^+}$, there exists a unique non-zero $R$-module homomorphism \ro up to grading shift\/\rfm
\begin{align*}
& \overset{\to}{\mathcal{S}}_{{\underline{w}}_0}({\underline{m}})\mathbin{:=} S_{{\underline{w}}_0}(\beta_\ell)^{\mathop{\mathbin{\mbox{\large $\circ$}}} m_{\beta_\ell}} \mathop{\mathbin{\mbox{\large $\circ$}}} \cdots \mathop{\mathbin{\mbox{\large $\circ$}}} S_{{\underline{w}}_0}(\beta_1)^{\mathop{\mathbin{\mbox{\large $\circ$}}} m_{\beta_1}} \\
& \hspace{20ex} \overset{\mathbf{r}_{{\underline{m}}}}{\To}
\overset{\gets}{\mathcal{S}}_{{\underline{w}}_0}({\underline{m}})\mathbin{:=} S_{{\underline{w}}_0}(\beta_1)^{\mathop{\mathbin{\mbox{\large $\circ$}}} m_{\beta_1}} \mathop{\mathbin{\mbox{\large $\circ$}}} \cdots \mathop{\mathbin{\mbox{\large $\circ$}}} S_{{\underline{w}}_0}(\beta_\ell)^{\mathop{\mathbin{\mbox{\large $\circ$}}} m_{\beta_\ell}}
\end{align*}
where $\beta_k=\beta_k^{{\underline{w}}_0}$.
Moreover, ${\rm Im}(\mathbf{r}_{{\underline{m}}})$ is simple and
${\rm Im}(\mathbf{r}_{{\underline{m}}}) \simeq {\operatorname{soc}}\left( \overset{\gets}{\mathcal{S}}_{{\underline{w}}_0} ( {\underline{m}} ) \right) \simeq {\operatorname{hd}}\left( \overset{\to}{\mathcal{S}}_{{\underline{w}}_0}({\underline{m}}) \right)$ up to a grading shift.
\item For any simple $R$-module $M$, there exists a unique \ex ${\underline{m}} \in \mathbb{Z}_{\ge 0}^{\Phi^+}$ such that
$$M \simeq {\rm hd}\left( \overset{\to}{\mathcal{S}}_{{\underline{w}}_0}({\underline{m}}) \right)\qt{up to a grading shift.}$$
\end{enumerate}
\end{theorem}
Note that, for each ${\underline{m}}=\seq{m_1,\ldots,m_\ell}\in \mathbb{Z}_{\ge 0}^\ell$,
${\operatorname{hd}}( q^{s_{\underline{m}}} \overset{\to}{\mathcal{S}}_{{\underline{w}}_0}({\underline{m}}) )$ is a self-dual simple $R$-module, where
$$ s_{\underline{m}} \mathbin{:=} \sum_{k=1}^\ell {\ms{1mu}\mathsf{d}_{\beta^{{\underline{w}}_0}_k}}\;\dfrac{ m_k (m_k-1)}{2}.$$
We denote
$$S_{{\underline{w}}_0}({\underline{m}})\mathbin{:=}{\operatorname{hd}}( q^{s_{\underline{m}}} \overset{\to}{\mathcal{S}}_{{\underline{w}}_0}({\underline{m}})).$$
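As a direct evaluation of this shift, included here only as an illustration: if $m_k \le 1$ for every $k$, then $s_{\underline{m}}=0$, so that $S_{{\underline{w}}_0}({\underline{m}})$ is the head of $\overset{\to}{\mathcal{S}}_{{\underline{w}}_0}({\underline{m}})$ itself; if ${\underline{m}}$ has a single non-zero entry $m_k=2$, then $s_{\underline{m}}=\ms{1mu}\mathsf{d}_{\beta^{{\underline{w}}_0}_k}$.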
We call $S_{{\underline{w}}_0}(\beta) $ the \emph{cuspidal module} associated to ${\underline{w}}_0$. It is known (see \cite[Theorem 5.8]{Oh18}) that the set $ \{ S_{{\underline{w}}_0}(\beta)
\mid \beta\in\Phi^+ \} $ depends only on the commutation class $[{\underline{w}}_0]$; i.e.,
\begin{align*}
S_{{\underline{w}}_0}(\beta) \simeq S_{{\underline{w}}'_0}(\beta) \quad \text{
if ${\underline{w}}_0$ and ${\underline{w}}'_0$ are commutation equivalent.}
\end{align*}
Thus, for any commutation class $\cc$, $S_{\cc}(\beta)$ for $\beta \in \Phi^+$ is well-defined.
\Lemma[{\cite[Proposition 5.7]{Oh18}}] \label{lem: non-com simple}
Let ${\underline{w}}_0$ be a reduced expression of $w_0$.
If $\alpha$ and $\beta$ are incomparable with respect to $\preceq_{[{\underline{w}}_0]}$,
then
$S_{[{\underline{w}}_0]}(\alpha)$ and $S_{[{\underline{w}}_0]}(\beta)$ commute.
\enlemma
By this lemma, $S_{{\underline{w}}_0}({\underline{m}})$,
as well as $\overset{\gets}{\mathcal{S}}_{{\underline{w}}_0}({\underline{m}})$ and
$\overset{\to}{\mathcal{S}}_{{\underline{w}}_0}({\underline{m}})$, depends only on the commutation class $[{\underline{w}}_0]$.
Hence, $S_{[{\underline{w}}_0]}({\underline{m}})$, $\overset{\gets}{\mathcal{S}}_{[{\underline{w}}_0]}({\underline{m}})$ and
$\overset{\to}{\mathcal{S}}_{[{\underline{w}}_0]}({\underline{m}})$ are well-defined.
\begin{proposition}[{\cite{BKM12,McNa15,Oh18}}] \label{prop: minimal,simple}
Let $\cc$ be a commutation class.
\begin{enumerate}[{\rm (i)}]
\item For any $\cc$-minimal \pr $\pair{\alpha,\beta}$ for $\gamma \in \Phi^+$, there exist exact sequences of $R$-modules
\begin{align*}
\ba{l}
0 \to S_\cc(\gamma)\to S_\cc(\beta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_\cc(\alpha)\to
S_\cc(\beta) \mathbin{\scalebox{.9}{$\nabla$}} S_\cc(\alpha)\to 0,\\
0 \to S_\cc(\beta) \mathbin{\scalebox{.9}{$\nabla$}} S_\cc(\alpha)\to S_\cc(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_\cc(\beta)\to S_\cc(\gamma)
\to 0
\ea
\end{align*}
and
$$ {[S_{{\cc}}(\alpha)]}\cdot{[S_{{\cc}}(\beta)]} = [S_{\cc}(\gamma)] + [S_{\cc}(\alpha) \mathbin{\scalebox{.9}{$\Delta$}} S_{\cc}(\beta)] \qt{in $K(R\text{-}\mathrm{gmod})$}$$
up to grading shifts.
\item For any \ex ${\underline{m}}$, we have
\begin{align}\label{eq: comp series}
[\overset{ \to }{\mathcal{S}}_{\cc}({\underline{m}})] \in [S_\cc({\underline{m}})] +
\sum_{{\underline{m}}' \prec^{\mathtt{b}}_{\cc} {\underline{m}}} \mathbb{Z}_{\ge 0} [S_\cc({\underline{m}}')].
\end{align} up to grading shifts.
\item If ${\underline{m}}$ is a $\cc$-simple sequence, then
$\overset{\to}{\mathcal{S}}_\cc({\underline{m}})$ and $\overset{\gets}{\mathcal{S}}_\cc({\underline{m}})$
are isomorphic up to a grading shift and they are simple.
\end{enumerate}
\end{proposition}
\begin{proposition} [{\cite[Theorem 4.7]{BKM12}}] \label{prop: tLa}
For any commutation class $\cc$ and any \pr $\pair{\alpha,\beta}$, we have
\begin{align} \label{eq: tla be al general}
\widetilde{\Lambda}( S_\cc(\beta),S_\cc(\alpha) ) =0 \qtq[and hence]\Lambda( S_\cc(\beta),S_\cc(\alpha) ) = -(\alpha,\beta).
\end{align}
In particular, if the \pr $\pair{\alpha,\beta}$ is $\cc$-minimal for a positive root $\gamma \in \Phi^+$, we have short exact sequences \ro including grading shifts\rfm
\begin{equation}\label{eq: 2 sess}
\begin{aligned}
& 0 \to q^{p_{\beta,\alpha}- (\beta,\alpha)} S_\cc(\gamma) \to S_\cc(\beta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_\cc(\alpha)
\to S_\cc(\pair{\alpha,\beta}) \to 0, \\
& 0 \to q^{-(\beta,\alpha)} S_\cc(\pair{\alpha,\beta}) \to S_\cc(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_\cc(\beta) \to q^{-p_{\beta,\alpha}} S_\cc(\gamma) \to 0,
\end{aligned}
\end{equation}
which yields
\eqn
&&\widetilde{\Lambda}( S_\cc(\alpha),S_\cc(\beta) ) = p_{\beta,\alpha},\quad
\Lambda( S_\cc(\alpha),S_\cc(\beta) ) =2 p_{\beta,\alpha} -(\alpha,\beta),\qtq\\
&&\mathfrak{d}( S_\cc(\alpha),S_\cc(\beta) )= p_{\beta,\alpha} -(\alpha,\beta).
\eneqn
\end{proposition}
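For the reader's convenience, let us spell out how the last value follows from the two displayed $\Lambda$-values:
$$ \mathfrak{d}( S_\cc(\alpha),S_\cc(\beta) ) = \tfrac{1}{2}\big( \Lambda( S_\cc(\alpha),S_\cc(\beta) ) + \Lambda( S_\cc(\beta),S_\cc(\alpha) ) \big) = \tfrac{1}{2}\big( 2 p_{\beta,\alpha} -(\alpha,\beta) -(\alpha,\beta) \big) = p_{\beta,\alpha} -(\alpha,\beta). $$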
\begin{proposition} [{\cite[Corollary 5.12]{Oh18}}] \label{prop: simple socle}
Let ${\underline{m}}^{(k)}$ $(k=1,2,3)$ be distinct sequences such that ${\rm wt}_{\cc}({\underline{m}}^{(a)}) = {\rm wt}_{\cc}({\underline{m}}^{(b)})$ $(a,b \in \{1,2,3\})$,
${\operatorname{deg}}_{\cc}({\underline{m}}^{(j)})=1$ $(j=1,2)$,
${\operatorname{deg}}_{\cc}({\underline{m}}^{(3)})>1$ and ${\underline{m}}^{(j)} \prec_{\cc}^\mathtt{b} {\underline{m}}^{(3)}$ $(j=1,2)$. If there exist non-zero $R(\gamma)$-homomorphisms $($up to grading shift$)$
$$ \overset{ \to }{\mathcal{S}}_{\cc}({\underline{m}}^{(j)}) \longrightarrow \overset{ \to }{\mathcal{S}}_{\cc}({\underline{m}}^{(3)})\qt{for $j=1,2$,}$$
and
$\overset{ \to }{\mathcal{S}}_{\cc}({\underline{m}}^{(k)})$ $(k=1,2,3)$ have the common simple head,
then we have
$$
\overset{ \to }{\mathcal{S}}_{\cc}({\underline{m}}^{(j)}) \hookrightarrow \overset{ \to }{\mathcal{S}}_{\cc}({\underline{m}}^{(3)}) \qquad \text{
for $j=1,2$.} $$
\end{proposition}
For a $\mathrm{Q}$-datum $Q=(\triangle,\xi)$
and $(i,p)\in(\Gamma_Q)_0$, we set
$$ S_Q(i;p) \mathbin{:=} S_{[Q]}(\beta)\qt{where $\phi_Q(i,p)=(\beta,0)$.}$$
The following is one of the main theorems of this paper:
\begin{theorem} \label{thm: determimant}
Let $Q = (\triangle, \xi)$ be a Dynkin quiver of an arbitrary type. Then we have
\eq
&&\mathfrak{d}( S_Q(i;p) , S_Q(j;s)) = \widetilde{\de}^\triangle_{i,j}[\;|p-s|\;]
\qt{for any $(i,p)$, $(j,s)\in(\Gamma_Q)_0$.}
\label{eq: de ADE}
\eneq
Moreover, for a \prq $\pair{\alpha,\beta}$ with $\phi_Q(i,p)=(\alpha,0)$ and $\phi_Q(j,s)=(\beta,0)$, we have the following statements:
\begin{enumerate}[{\rm (a)}]
\item \label{it: head} $S_Q(\alpha) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\beta) \simeq S_Q({\rm hd}_Q(\pair{\alpha,\beta}))$
up to a grading shift.
\item \label{it: length2} If $ \mathfrak{d}^\triangle_{i,j}[\;|p-s|\;] = \max(\ms{1mu}\mathsf{d}_i,\ms{1mu}\mathsf{d}_j)$, we have $\ell(S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta)) =2$.
\item \label{it: length>2} If $ \mathfrak{d}^\triangle_{i,j}[\;|p-s|\;] > \max(\ms{1mu}\mathsf{d}_i,\ms{1mu}\mathsf{d}_j)$, we have $\ell(S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta))>2$.
\item \label{it: length1} If $ \mathfrak{d}^\triangle_{i,j}[\;|p-s|\;] =0$, then $S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta)$ is simple.
\end{enumerate}
Here, $\ell(M)$ denotes the composition length of $M$ for $M \in R\text{-}\mathrm{gmod}$.
\end{theorem}
\begin{remark} \label{rmk: simple product}
In Theorem~\ref{thm: determimant}~\eqref{it: head}, $S_Q({\rm hd}_Q(\pair{\alpha,\beta}))$ is isomorphic to a convolution product of mutually commuting cuspidal modules, since ${\rm hd}_Q(\pair{\alpha,\beta})$ is $[Q]$-simple.
For $\mathfrak{g}$ of simply-laced type, we refer the reader to \cite[Theorem 5.18]{Oh18}.
\end{remark}
The simply-laced case of this theorem was obtained in
\cite[Theorem 4.12]{KKOP21C}, \cite[Theorem 1.2]{Fuj19}, \cite{Oh18}.
In the next subsection, we will give a proof of this theorem in the BCFG cases.
\smallskip
\begin{corollary} Recall~\eqref{eq: double pole}.
For $\mathfrak{g}$ of type $B_n$, $C_n$ or $D_{n+1}$ and $(i;p),(j;s)\in(\Gamma_Q)_0$,
the $R$-module $S_Q(i;p) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(j;s)$ is of composition length $>2$ if and only if %
\begin{align*}
2 \le i,j \le n-1, \ i+j \ge n+1, \ {\ms{1mu}\mathsf{h}}+2 -i-j \le |p-s| \le i+j \text{ and } |p-s| \equiv_2 i+j.
\end{align*}
\end{corollary}
The corollary below for the ADE case was also proved in \cite[Lemma 9.12]{FHOO} (see also \cite{KKOP21C}).
\begin{corollary} \label{cor: dual-phen}
For $\phi_Q(\alpha_a,0)=(i,p)$ and $\phi_Q(\alpha_b,0)=(j,s)$ with $a\ne b$, we have
\begin{align}\label{eq: dual-phenomenon}
\widetilde{\de}_{i^*,j}[{\ms{1mu}\mathsf{h}}-|p-s|] =\widetilde{\de}_{i,j^*}[{\ms{1mu}\mathsf{h}}-|p-s|] =0.
\end{align}
\end{corollary}
\begin{proof}
Note that
\begin{eqnarray} &&
\parbox{80ex}{
\begin{enumerate}[{\rm (A)}]
\item \label{it: D n inv} $\widetilde{\de}^{D_{n+1}}_{i,n}(t) =\widetilde{\de}^{D_{n+1}}_{i,n+1}(t)$ for $i<n$ and
\item \label{it: B C cor} $\widetilde{\de}^{C_n}_{i,j}(t) = \kappa_{i,j} \widetilde{\de}^{B_n}_{i,j}(t)$, where
$\kappa_{i,j} = \begin{cases}
1/2 & \text{ if } i,j<n, \\
1 & \text{ if } \min(i,j)<\max(i,j)=n,\\
2 & \text{ if } i=j =n.
\end{cases}
$
\end{enumerate}
}\label{eq: simple obervation}
\end{eqnarray}
\noindent
(1) Let us assume that $\max(a,b)<n$ and $\triangle$ is of type $B_n$ or $C_n$.
In this case, we have $\max(i,j)<n$ since $\ms{1mu}\mathsf{d}_{\alpha_a}=\ms{1mu}\mathsf{d}_{\alpha_b} \ne \ms{1mu}\mathsf{d}_{\alpha_n}$.
By regarding $\alpha_a \in \Phi^+_{B_n}$, we have $\alpha_a=\psi^{-1}(\alpha_a) \in \Phi_{D_{n+1}}^+$ and $\phi_{\underline{Q}}^{-1}(\alpha_a,0) = (\imath,p)$ for $\imath =\pi^{-1}(i)$ by Proposition~\ref{prop: surgery D to B}.
Similar observations hold for $\alpha_b = \psi^{-1}(\alpha_b)$. Since~\eqref{eq: dual-phenomenon} holds for $D_{n+1}$-case, we have
$$ \dfrac{1}{2} \widetilde{\de}^{B_n}_{i,j}[{\ms{1mu}\mathsf{h}}-|p-s|]=\widetilde{\de}^{C_n}_{i,j}[{\ms{1mu}\mathsf{h}}-|p-s|] = \widetilde{\de}^{D_{n+1}}_{\imath,\jmath}[{\ms{1mu}\mathsf{h}}-|p-s|] =0. $$
Here the second equation follows from Theorem~\ref{thm: tbij folding}. Thus the assertion for this case follows.
\noindent
(2) Let us assume that $\max(a,b)=n$ and $\triangle$ is of type $B_n$ or $C_n$. Without loss of generality, we set $\alpha_a=\alpha_n$. Note that $n$ is a sink or source in $Q$ and $i=n$.
By regarding $\alpha_n\in \Phi^+_{B_n}$, we have $\{\alpha_n,\alpha_{n+1} \}=\psi^{-1}(\alpha_n) \subset \Phi_{D_{n+1}}^+$ and $\{ \phi^{-1}_{\underline{Q}}(\alpha_{n},0),\phi^{-1}_{\underline{Q}}(\alpha_{n+1},0) \}= \{(n,p),(n+1,p)\}$. Then, as in (1)-case, we have
\[ \widetilde{\de}^{B_n}_{n,j}[{\ms{1mu}\mathsf{h}}-|p-s|]= \widetilde{\de}^{C_n}_{n,j}[{\ms{1mu}\mathsf{h}}-|p-s|] = \widetilde{\de}^{D_{n+1}}_{n,\jmath}[{\ms{1mu}\mathsf{h}}-|p-s|] + \widetilde{\de}^{D_{n+1}}_{n+1,\jmath}[{\ms{1mu}\mathsf{h}}-|p-s|] =0. \]
\noindent
(3) The cases for $F_4$ and $G_2$ can be directly checked by considering all Dynkin quivers of those types.
\end{proof}
\subsection{BCFG cases} Throughout this subsection, we consider the convolution products of $[Q]$-cuspidal modules, when
$Q$ is a Dynkin quiver of type BCFG.
\smallskip
We set $\phi_{Q}(i,p)=(\alpha,0)$ and $\phi_{Q}(j,s)=(\beta,0)$.
\begin{proposition} \label{prop: minimal pair ga de}
Let $Q$ be any Dynkin quiver of type $B_n$ or $C_n$,
and let $\pair{\alpha,\beta}$ be a $[Q]$-minimal \prq for $\alpha+\beta=\gamma \in \Phi^+$. Then we have
$$
\mathfrak{d}( S_Q(\alpha),S_Q(\beta) ) = \max(\ms{1mu}\mathsf{d}_i,\ms{1mu}\mathsf{d}_j) = \widetilde{\de}_{i,j}^\triangle[\;|p-s|\;].$$
\end{proposition}
\begin{proof}
The second equality follows from Theorem~\ref{thm:Main}.
\smallskip
\noindent
(1) Consider the case when $ \ms{1mu}\mathsf{d}_\gamma=2$, $\ms{1mu}\mathsf{d}_{\alpha}= \ms{1mu}\mathsf{d}_{\beta} =1$ and $\mathfrak{g}$ is of type $B_n$. Then we have $\mathrm{res}^{[Q]}(\alpha)=\mathrm{res}^{[Q]}(\beta)=n$ and $\mathrm{res}^{[Q]}(\gamma) <n$. Hence
$\gamma=\varepsilon_a+\varepsilon_b$ and $\{ \alpha,\beta \} =\{ \varepsilon_a,\varepsilon_b \}$ for $1 \le a \ne b \le n$. Thus we have
$$ \mathfrak{d}( S_Q(\alpha),S_Q(\beta) ) = 1 - (\varepsilon_a,\varepsilon_b) =1 = \max(\ms{1mu}\mathsf{d}_\alpha,\ms{1mu}\mathsf{d}_\beta).$$
\noindent
(2) For the remaining cases of type $B_n$ or $C_n$, we need to show that
$$ \mathfrak{d}( S_Q(\alpha),S_Q(\beta) ) = -(\alpha,\beta) = \max(\ms{1mu}\mathsf{d}_\alpha,\ms{1mu}\mathsf{d}_\beta).$$
Then we have
\begin{enumerate}[{\rm (a)}]
\item $\{\alpha,\beta\} = \{ \varepsilon_a - \varepsilon_b , \varepsilon_b \pm \varepsilon_c \}$ or $ \{ \varepsilon_a - \varepsilon_b , \varepsilon_b \}$ if $\mathfrak{g}$ is of type $B_n$,
\item $\{\alpha,\beta\} = \{ \epsilon_a - \epsilon_b , \epsilon_b \pm \epsilon_c \}$, $ \{ \epsilon_a - \epsilon_b , 2\epsilon_b\}$ or $ \{ \epsilon_a + \epsilon_b , \epsilon_a -\epsilon_b \}$ if $\mathfrak{g}$ is of type $C_n$,
\end{enumerate}
for pairwise distinct $1 \le a,b,c \le n$. Then one can easily see that
$$-(\alpha,\beta) = \max(\ms{1mu}\mathsf{d}_i,\ms{1mu}\mathsf{d}_j),$$
which proves our assertion.
\end{proof}
Note that the arguments in Proposition~\ref{prop: minimal pair ga de} also work for the $ADE$ cases.
\subsubsection{$B_n$ case} Let $Q$ be \emph{any} Dynkin quiver of type $B_n$.
Under the assumption that
$$\text{a \prq $\pair{\alpha,\beta}$ is neither $[Q]$-simple nor $[Q]$-minimal for $\alpha+\beta =\gamma \in \Phi^+$,}$$ it suffices to consider the cases
${\rm (ii)}'$, ${\rm (iv)}'$, ${\rm (v)}'$, ${\rm (vi)}'$, ${\rm (vii)}'$ in~\eqref{eq: dist al, be 1 p}, and ${\rm (x)}'$, ${\rm (xi)}'$ in~\eqref{eq: dist al be 2 p}. Recall that the values $(\alpha,\beta)$
and $p_{\beta,\alpha}$ are recorded in Table~\ref{table: al,be B} and~\eqref{eq: p be al B}, and that they do not depend on the choice of $Q$.
\smallskip
Throughout this section, we set
$$ \phi_Q(i,p)=(\alpha,0) \quad \text{ and } \quad \phi_Q(j,s)=(\beta,0).$$
\smallskip
In this subsection, we will prove Theorem~\ref{thm: determimant} by considering each case.
\smallskip
Consider the case ${\rm (ii)}'$.
By Proposition~\ref{prop: dir Q cnt} and the fact that $\psi$ is additive, there exists $\eta \in \Phi^+$ such that
\begin{enumerate}[{\rm (a)}]
\item \label{it: ii a} the \prq $\pair{\eta,s_1}$ is $[Q]$-minimal for $\alpha$ and of the form ${\rm (i)}'$, or
\item \label{it: ii b} the \prq $\pair{\alpha,\eta}$ is $[Q]$-minimal for $s_1$ and of the form ${\rm (iii)}'$.
\end{enumerate}
\begin{equation*}
\begin{aligned}
& \scalebox{0.79}{{\xy
(-20,0)*{}="DL";(-10,-10)*{}="DD";(0,20)*{}="DT";(10,10)*{}="DR";
"DT"+(-50,-4); "DT"+(50,-4)**\dir{.};
"DD"+(-40,-7); "DD"+(60,-7) **\dir{.};
"DT"+(-52,-4)*{\scriptstyle 1};
"DT"+(-52,-8)*{\scriptstyle 2};
"DT"+(-52,-12)*{\scriptstyle \vdots};
"DT"+(-52,-16)*{\scriptstyle \vdots};
"DT"+(-54,-37)*{\scriptstyle n};
"DL"+(15,-3); "DD"+(15,-3) **\dir{-};
"DR"+(11,-7); "DD"+(15,-3) **\dir{-};
"DT"+(11,-7); "DR"+(11,-7) **\dir{-};
"DT"+(11,-7); "DL"+(15,-3) **\dir{-};
"DT"+(14,-4); "DT"+(18,-4) **\crv{"DT"+(16,-2)};
"DT"+(16,-1)*{\scriptstyle 2};
"DT"+(11,-7); "DT"+(14,-4) **\dir{.};
"DT"+(18,-4); "DT"+(26,-12) **\dir{.};
"DT"+(26,-12); "DT"+(22,-16) **\dir{.};
"DT"+(28,-12)*{\scriptstyle \eta};
"DT"+(22,-12)*{\scriptstyle{{\rm (a)}}};
"DD"+(-22,26); "DD"+(-26,26) **\crv{"DD"+(-24,28)};
"DD"+(-24,29)*{\scriptstyle 2};
"DD"+(11,-7); "DD"+(-22,26) **\dir{.};
"DD"+(-26,26); "DD"+(-34,18) **\dir{.};
"DD"+(-36,18)*{\scriptstyle \eta};
"DD"+(-30,18)*{\scriptstyle{{\rm (b)}}};
"DD"+(11,-7); "DD"+(15,-3) **\dir{.};
"DL"+(15,-3); "DL"+(1,-17) **\dir{.};
"DL"+(-24,8); "DL"+(1,-17) **\dir{.};
"DL"+(19,-3)*{\scriptstyle \beta};
"DR"+(7,-7)*{\scriptstyle \alpha};
"DL"+(30,0)*{{\rm(ii)'}};
"DL"+(15,-3)*{\bullet};
"DL"+(31,12.5)*{\bullet};
"DL"+(31,14.5)*{\scriptstyle s_1};
"DL"+(25,-13)*{\bullet};
"DL"+(25,-15)*{\scriptstyle s_2 };
"DR"+(11,-7)*{\bullet};
\endxy}}
\end{aligned}
\end{equation*}
Assume~\eqref{it: ii a} first. Then~\eqref{eq: 2 sess} tells us that there exists a homogeneous $R$-module monomorphism
$$ q^{p_{s_1,\eta} - (s_1,\eta)} S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \hookrightarrow S_Q(s_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\eta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(\beta). $$
Note that $\pair{\eta,\beta}$ is a $[Q]$-minimal \prq of the form ${\rm (i)}'$ for $s_2$. Thus we have
$$ S_Q(s_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\eta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(\beta)\twoheadrightarrow q^{-p_{\beta,\eta}} S_{Q}(s_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(s_2) $$
by~\eqref{eq: 2 sess} again. Then the composition
\begin{align} \label{eq: head comp}
q^{p_{s_1,\eta} - (s_1,\eta)} S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \hookrightarrow S_Q(s_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\eta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(\beta)
\twoheadrightarrow q^{-p_{\beta,\eta}} S_{Q}(s_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(s_2)
\end{align}
does not vanish by Proposition~\ref{prop: not vanishing}.
Since $ S_{Q}(s_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(s_2)$ is self-dual simple by Lemma~\ref{lem: non-com simple},
we have $S_Q(\alpha) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\beta) \simeq S_{Q}(s_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(s_2)$ up to a grading shift, and
$$
S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \twoheadrightarrow q^{-p_{\beta,\eta}-p_{s_1,\eta} +(s_1,\eta)} S_{Q}(s_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(s_2),
$$
which yields
\begin{align*}
\widetilde{\Lambda}( S_Q(\alpha) , S_Q(\beta) ) = p_{\beta,\eta}+p_{s_1,\eta} -(s_1,\eta),
\end{align*}
by Lemma~\ref{lem: tLa}. Thus~\eqref{eq: tla be al general} tells that
\begin{align*}
\mathfrak{d}( S_Q(\alpha) , S_Q(\beta) ) = p_{\beta,\eta}+p_{s_1,\eta} -(s_1,\eta)-(\alpha,\beta).
\end{align*}
Since $ p_{\beta,\eta}=p_{s_1,\eta} =0 $, $(s_1,\eta)=-2$ and $(\alpha,\beta)=0$, we have
\begin{align*}
\mathfrak{d}( S_Q(\alpha) , S_Q(\beta) ) =2 = \max(\ms{1mu}\mathsf{d}_\alpha,\ms{1mu}\mathsf{d}_\beta)=\mathfrak{d}_{i,j}[p-s].
\end{align*}
Now let us assume~\eqref{it: ii b}. Similarly, we have a homogeneous $R$-module monomorphism
$$
q^{p_{\eta,\alpha}-(\eta,\alpha)} S_Q(s_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(s_1) \hookrightarrow S_Q(s_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\eta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\alpha).
$$
Note that the \prq $\pair{s_2,\eta}$ is $[Q]$-minimal for $\beta$ and of the form ${\rm (iii)}'$. Thus we have
a non-zero composition of $R$-module homomorphism,
\begin{align} \label{eq: head comp2}
q^{p_{\eta,\alpha}-(\eta,\alpha)} S_Q(s_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(s_1) \hookrightarrow S_Q(s_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\eta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\alpha) \twoheadrightarrow q^{-p_{\eta,s_2}} S_Q(\beta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\alpha),
\end{align}
which tells us that $ S_Q(\beta) \mathbin{\scalebox{.9}{$\Delta$}} S_Q(\alpha) \simeq S_Q(s_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(s_2)$ up to a grading shift and
\begin{align*}
\mathfrak{d}( S_Q(\alpha) , S_Q(\beta) ) & = p_{\eta,\alpha}+p_{\eta,s_2}-(s_2,s_1)-(\eta,\alpha) =2= \max(\ms{1mu}\mathsf{d}_\alpha,\ms{1mu}\mathsf{d}_\beta)=\mathfrak{d}_{i,j}[p-s].
\end{align*}
From~\eqref{eq: head comp} and~\eqref{eq: head comp2}, we have a non-zero $R$-module monomorphism (up to grading shifts)
$$ S_{Q}(s_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(s_1) \hookrightarrow S_Q(\beta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\alpha).$$
We can
apply a similar argument to the length-$1$ chains
of $\prec_Q^\mathtt{b}$-sequences, which are of the form ${\underline{m}} \prec^\mathtt{b}_Q {\underline{m}}'$ and depicted in the
cases ${\rm (iv)}'$, ${\rm (vii)}'$, ${\rm (x)}'$ and ${\rm (xi)}'$,
for obtaining non-zero $R$-module homomorphisms (up to grading shifts)
\begin{align}\label{eq: non-zero}
\overset{\gets}{\mathcal{S}}_{[{\underline{w}}_0]}({\underline{m}}) \to \overset{\gets}{\mathcal{S}}_{[{\underline{w}}_0]}({\underline{m}}').
\end{align}
More precisely, those length-$1$ chains are listed below:
$$
\begin{cases}
\pair{s_1,s_2} \prec_{Q}^\mathtt{b} \pair{\alpha,\beta} & \text{ ${\rm (iv)}'$ and ${\rm (vii)}'$ cases }, \\
\pair{\eta_1,\eta_2},\pair{\tau_1,\zeta_1} \prec_{Q}^\mathtt{b} \pair{\alpha,\beta} & \text{ ${\rm (x)}'$ case}, \\
\pair{\eta_1,\eta_2} \prec_{Q}^\mathtt{b} \pair{\alpha,\beta} ,\seq{s_1,\tau_1,\zeta_1} \prec_{Q}^\mathtt{b} \pair{\alpha,\beta}, & \text{ ${\rm (xi)}'$ case}, \\
\pair{s_1,s_2} \prec_{Q}^\mathtt{b} \pair{\eta_1,\eta_2}, \pair{s_1,s_2} \prec_{Q}^\mathtt{b} \seq{s_1,\tau_1,\zeta_1}, & \text{ ${\rm (xi)}'$ case continued}.
\end{cases}
$$
In cases ${\rm (iv)}'$ and ${\rm (vii)}'$,
one can prove
$$ S_Q(\alpha) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\beta) \simeq S_Q(s_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(s_2) \quad \text{ (up to grading shifts)} $$
and
$$\mathfrak{d}(S_Q(\alpha),S_Q(\beta))=2$$ as in ${\rm (ii)}'$-case.
Moreover, by~\eqref{eq: comp series}
together with Proposition~\ref{prop: simple head},
we have
$$[S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta)]= [S_Q(\alpha) \mathbin{\scalebox{.9}{$\Delta$}} S_Q(\beta)] +[S_Q(s_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(s_1)],$$
for the cases ${\rm (ii)}'$, ${\rm (iv)}'$ and ${\rm (vii)}'$. Thus the assertions in Theorem~\ref{thm: determimant} hold for those cases.
\begin{example}
Using $\Gamma_Q$ in Example~\ref{ex: Label B4} and the \prqs corresponding to ${\rm (iv)}'$ and ${\rm (vii)}'$ in Example~\ref{ex: Uding B}, we can compute
$\mathfrak{d}(S_Q(\alpha),S_Q(\beta))$ as follows:
\begin{enumerate}[{\rm (1)}]
\item Since $p_{\srt{1,-2},\srt{2,-4}}=0$, $p_{\srt{1,-2},\srt{2,-3}}=0$, $(\srt{1,-2},\srt{2,-4})=-2$ and $(\srt{2,-3},\srt{1,-4})=0$, we have
$$\mathfrak{d}(S_Q(\srt{2,-3}),S_Q(\srt{1,-4}) ) = 2.$$
\item Since $p_{\srt{1},\srt{2}}=1$, $p_{\srt{3},\srt{2}}=1$, $(\srt{1},\srt{2})=0$ and $(\srt{2},\srt{3})=0$, we have $$\mathfrak{d}(S_Q(\srt{1,2}),S_Q(\srt{3}) ) = 2.$$
\end{enumerate}
\end{example}
Now let us consider the case ${\rm (x)}'$.
In this case, we have non-zero $R$-module homomorphisms (up to grading shifts)
\begin{align}\label{eq: two non-zero}
S_Q(\eta_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\eta_1) \to S_Q(\beta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\alpha) \quad \text{ and } \quad S_Q(\zeta_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\tau_1)\to S_Q(\beta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\alpha)
\end{align}
by~\eqref{eq: non-zero}
and hence the homomorphisms are injective, and
\begin{align} \label{eq: common socle}
S_Q(\alpha) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\beta) & \simeq S_Q({\rm hd}_Q(\alpha,\beta)) \simeq S_Q(\gamma) \simeq S_Q(\eta_1) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\eta_2) \simeq S_Q(\tau_1) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\zeta_1)
\end{align}
(up to grading shifts) by Proposition~\ref{prop: inj} and Proposition~\ref{prop: simple socle}. Moreover, by Proposition~\ref{prop:
minimal,simple} and Proposition~\ref{prop: inj}, we can conclude
that $$\ell(S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) ) >2.$$
Similarly, we can conclude the same result for case ${\rm (xi)}'$.
Let us compute $\mathfrak{d}(S_Q(\alpha),S_Q(\beta))$ in case ${\rm (x)'}$: Since $(\alpha,\eta_1)$ is sectional,
there exists $\kappa \in \Phi^+$ such that
\begin{enumerate}[{\rm (a)}]
\item \label{it: x a} the \prq $\pair{\kappa,\eta_1}$ is $[Q]$-minimal for $\alpha$ and of the form ${\rm (i)'}$, or
\item \label{it: x b} the \prq $\pair{\alpha,\kappa}$ is $[Q]$-minimal for $\eta_1$ and of the form ${\rm (iii)'}$.
\end{enumerate}
\begin{align*}
\scalebox{0.79}{{\xy
(-20,0)*{}="DL";(-10,-10)*{}="DD";(0,20)*{}="DT";(10,10)*{}="DR";
"DT"+(-50,-4); "DT"+(70,-4)**\dir{.};
"DD"+(-40,-6); "DD"+(80,-6) **\dir{.};
"DT"+(-52,-4)*{\scriptstyle 1};
"DT"+(-52,-8)*{\scriptstyle 2};
"DT"+(-52,-12)*{\scriptstyle \vdots};
"DT"+(-52,-16)*{\scriptstyle \vdots};
"DT"+(-53,-36)*{\scriptstyle n};
"DD"+(-8,2); "DD"+(0,-6) **\dir{-};
"DD"+(6,0); "DD"+(0,-6) **\dir{-};
"DD"+(6,0); "DD"+(12,-6) **\dir{-};
"DD"+(33,15); "DD"+(12,-6) **\dir{-};
"DD"+(-8,2); "DD"+(16,26) **\dir{.};
"DD"+(32,16); "DD"+(22,26) **\dir{.};
"DD"+(6,0); "DD"+(27,21) **\dir{.};
"DD"+(32,26); "DD"+(27,21) **\dir{.};
"DD"+(32,26); "DD"+(36,26) **\crv{"DD"+(34,28)};
"DD"+(34,29)*{\scriptstyle 2};
"DD"+(36,26); "DD"+(40,22) **\dir{.};
"DD"+(33,15); "DD"+(40,22) **\dir{.};
"DD"+(42,22)*{\scriptstyle \kappa};
"DD"+(37,22)*{\scriptstyle {\rm (a)} };
"DD"+(6,0); "DD"+(-2,8) **\dir{.};
"DD"+(-20,26); "DD"+(-2,8) **\dir{.};
"DD"+(-20,26); "DD"+(-24,26) **\crv{"DD"+(-22,28)};
"DD"+(-22,29)*{\scriptstyle 2};
"DD"+(-30,22)*{\scriptstyle \kappa};
"DD"+(-25,22)*{\scriptstyle {\rm (b)} };
"DD"+(-24,26); "DD"+(-28,22) **\dir{.};
"DD"+(-28,22); "DD"+(-8,2) **\dir{.};
"DD"+(16,26); "DD"+(22,26) **\crv{"DD"+(19,28)};
"DD"+(19,29)*{\scriptstyle 2};
"DD"+(-8,2)*{\bullet};
"DL"+(18,-1)*{{\rm(x)'}};
"DD"+(-5,2)*{\scriptstyle \beta};
"DD"+(-2,8)*{\bullet};
"DD"+(-2,10)*{\scriptstyle \eta_2};
"DD"+(33,15)*{\bullet};
"DD"+(30,15)*{\scriptstyle \alpha};
"DD"+(27.5,18)*{\scriptstyle \eta_1 };
"DD"+(27,21)*{\scriptstyle \bullet};
"DT"+(2,-36)*{\scriptstyle \bullet};
"DT"+(2,-38)*{\scriptstyle \tau_1};
"DT"+(-10,-36)*{\scriptstyle \bullet};
"DT"+(-10,-38)*{\scriptstyle \zeta_1};
"DT"+(-4,-30)*{\scriptstyle \bullet};
"DT"+(-4,-32)*{\scriptstyle \gamma};
\endxy}}
\end{align*}
Assume~\eqref{it: x a}. Then as in the previous cases, we have a homogeneous $R$-module monomorphism
$$
q^{p_{\eta_1,\kappa}-(\eta_1,\kappa)} S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \hookrightarrow S_Q(\eta_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\kappa) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta).
$$
Since the \prq $\pair{\kappa ,\beta}$ is $[Q]$-minimal for $\eta_2$ and of the form ${\rm (iii)'}$, we have a non-zero composition of $R$-module homomorphisms
$$
q^{p_{\eta_1,\kappa}-(\eta_1,\kappa)} S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \hookrightarrow S_Q(\eta_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\kappa) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \twoheadrightarrow
q^{-p_{\beta,\kappa}} S_Q(\eta_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\eta_2).
$$
Note that the homomorphism is surjective by Proposition~\ref{prop: simple socle} and~\eqref{eq: two non-zero}. Hence
we can obtain a non-zero $R$-module homomorphism
$$
q^{p_{\eta_1,\kappa}-(\eta_1,\kappa)} S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \twoheadrightarrow q^{-p_{\beta,\kappa}-p_{\eta_2,\eta_1}} S_Q(\gamma),
$$
since $S_Q(\eta_1) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\eta_2) \simeq q^{-p_{\eta_2,\eta_1}} S_Q(\gamma)$.
Thus we have
\begin{align*}
\widetilde{\Lambda}(S_Q(\alpha),S_Q(\beta)) = p_{\beta,\kappa}+p_{\eta_2,\eta_1} + p_{\eta_1,\kappa}-(\eta_1,\kappa)
\end{align*}
and
\begin{align*}
\mathfrak{d}(S_Q(\alpha),S_Q(\beta)) = p_{\beta,\kappa}+p_{\eta_2,\eta_1} + p_{\eta_1,\kappa}-(\eta_1,\kappa)-(\alpha,\beta).
\end{align*}
Since $p_{\beta,\kappa}=p_{\eta_2,\eta_1} = p_{\eta_1,\kappa}=0$ and $(\eta_1,\kappa)=(\alpha,\beta)=-2$, we have
\begin{align}
\mathfrak{d}(S_Q(\alpha),S_Q(\beta)) = 4 = {\operatorname{deg}}_Q(\alpha,\beta) \times \max(\ms{1mu}\mathsf{d}_\alpha,\ms{1mu}\mathsf{d}_\beta)=\mathfrak{d}_{i,j}[p-s].
\end{align}
Similarly, we have the same results for $\widetilde{\Lambda}(S_Q(\alpha),S_Q(\beta)) $ and $\mathfrak{d}(S_Q(\alpha),S_Q(\beta))$ in the case~\eqref{it: x b}.
Let us compute $\mathfrak{d}(S_Q(\alpha),S_Q(\beta))$ in case ${\rm (xi)}'$: As in case ${\rm (x)}'$, there exists $\zeta \in \Phi^+$ such that
\begin{enumerate}[{\rm (a)}]
\item \label{it: xi a} the \prq $\pair{\zeta,\eta_1}$ is $[Q]$-minimal for $\alpha$ and of the form ${\rm (i)'}$, or
\item \label{it: xi b} the \prq $\pair{\alpha,\zeta}$ is $[Q]$-minimal for $\eta_1$ and of the form ${\rm (iii)'}$.
\end{enumerate}
Assume~\eqref{it: xi a}. As in case ${\rm (x)'}$, we have a non-zero composition of $R$-module homomorphisms
$$
q^{p_{\eta_1,\zeta}-(\eta_1,\zeta)} S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \hookrightarrow S_Q(\eta_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\zeta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \twoheadrightarrow
q^{-p_{\beta,\zeta}} S_Q(\eta_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\eta_2).
$$
Since the \prq $\pair{\eta_1,\eta_2}$ corresponds to case ${\rm (ii)'}$, \eqref{eq: common socle} tells that we can obtain a non-zero $R$-module homomorphism
$$
q^{p_{\eta_1,\zeta}-(\eta_1,\zeta)} S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \twoheadrightarrow q^{-p_{\beta,\zeta}-\widetilde{\Lambda}(S_Q(\eta_1),S_Q(\eta_2))} S_Q(s_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(s_2),
$$
which is surjective. Thus we have
\begin{align*}
\widetilde{\Lambda}(S_Q(\alpha),S_Q(\beta)) = p_{\beta,\zeta}+\widetilde{\Lambda}(S_Q(\eta_1),S_Q(\eta_2) )+ p_{\eta_1,\zeta}-(\eta_1,\zeta)
\end{align*}
and hence
\begin{align*}
\mathfrak{d}(S_Q(\alpha),S_Q(\beta)) = p_{\beta,\zeta}+\widetilde{\Lambda}(S_Q(\eta_1),S_Q(\eta_2) )+ p_{\eta_1,\zeta}-(\eta_1,\zeta)-(\alpha,\beta).
\end{align*}
Since the \prq $\pair{\eta_1,\eta_2}$ corresponds to case ${\rm (ii)'}$, we have
$$\widetilde{\Lambda}(S_Q(\eta_1),S_Q(\eta_2) ) =\mathfrak{d}(S_Q(\eta_1),S_Q(\eta_2) ) +(\eta_1,\eta_2) =2.$$
By Table~\ref{table: al,be B} and the facts that $p_{\beta,\zeta}=p_{\eta_1,\zeta}=0$, $(\eta_1,\zeta)=-2$ and $(\alpha,\beta)=0$, we have
$$
\mathfrak{d}(S_Q(\alpha),S_Q(\beta)) = 4 = {\operatorname{deg}}_Q(\alpha,\beta) \times \max(\ms{1mu}\mathsf{d}_\alpha,\ms{1mu}\mathsf{d}_\beta)=\mathfrak{d}_{i,j}[p-s].
$$
Thus the assertions in Theorem~\ref{thm: determimant} hold for cases ${\rm (x)}'$ and ${\rm (xi)}'$.
\smallskip
Let us consider the cases ${\rm (v)}'$ and ${\rm (vi)}'$, in which the head ${\underline{m}}$ of $\pair{\alpha,\beta}$ contains an entry $m_i =2$.
\begin{equation*}
\begin{aligned}
& \scalebox{0.79}{{\xy
(-20,0)*{}="DL";(-10,-10)*{}="DD";(0,20)*{}="DT";(10,10)*{}="DR";
"DT"+(-30,-4); "DT"+(60,-4)**\dir{.};
"DD"+(-20,-6); "DD"+(70,-6) **\dir{.};
"DT"+(-32,-4)*{\scriptstyle 1};
"DT"+(-32,-8)*{\scriptstyle 2};
"DT"+(-32,-12)*{\scriptstyle \vdots};
"DT"+(-32,-16)*{\scriptstyle \vdots};
"DT"+(-34,-36)*{\scriptstyle n};
"DD"+(-11,4); "DD"+(-1,-6) **\dir{-};
"DD"+(5,0); "DD"+(-1,-6) **\dir{-};
"DD"+(-11,4); "DD"+(11,26) **\dir{-};
"DD"+(24,19); "DD"+(17,26) **\dir{-};
"DD"+(5,0); "DD"+(24,19) **\dir{-};
"DD"+(-11,4)*{\bullet};
"DL"+(16,0)*{{\rm (v)'}};
"DD"+(-7,4)*{\scriptstyle \beta};
"DD"+(11,26); "DD"+(17,26) **\crv{"DD"+(14,28)};
"DD"+(24,19)*{\bullet};
"DD"+(24,17)*{\scriptstyle \alpha};
"DD"+(-1,-6)*{\bullet};
"DD"+(-1,-8)*{\scriptstyle 2 s};
"DD"+(14,29)*{\scriptstyle 2};
"DD"+(23,4); "DD"+(33,-6) **\dir{-};
"DD"+(39,0); "DD"+(33,-6) **\dir{-};
"DD"+(23,4); "DD"+(42,23) **\dir{-};
"DD"+(52,13); "DD"+(42,23) **\dir{-};
"DD"+(39,0); "DD"+(52,13) **\dir{-};
"DD"+(23,4)*{\bullet};
"DD"+(42,23)*{\bullet};
"DL"+(50,0)*{{\rm (vi)'}};
"DD"+(27,4)*{\scriptstyle \beta};
"DD"+(42,25)*{\scriptstyle s_2};
"DD"+(52,13)*{\bullet};
"DD"+(52,11)*{\scriptstyle \alpha};
"DD"+(33,-6)*{\bullet};
"DD"+(33,-8)*{\scriptstyle 2 s_1};
\endxy}}
\end{aligned}
\end{equation*}
Consider the ${\rm (v)}'$ case. As in the previous cases,
there exists $\eta \in \Phi^+$ such that
\begin{enumerate}[{\rm (a)}]
\item \label{it: va} the \prq $\pair{\eta,s}$ is $[Q]$-minimal for $\alpha$ and of the form ${\rm (ix)}'$, or
\item \label{it: vb} the \prq $\pair{s,\eta}$ is $[Q]$-minimal for $\beta$ and of the form ${\rm (ix)}'$.
\end{enumerate}
Assume~\eqref{it: va}. Then
we have a homogeneous $R$-module monomorphism
$$ q^{p_{s,\eta} - (s,\eta)} S_Q(\alpha)\mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \hookrightarrow S_{Q}(s) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\eta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta). $$
Note that $\pair{\eta,\beta}$ is a $[Q]$-minimal \prq for $s$ and of the form ${\rm (viii)}'$. Thus the composition
$$
q^{p_{s,\eta} - (s,\eta)} S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \hookrightarrow S_{Q}(s) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\eta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta)
\twoheadrightarrow q^{-p_{\beta,\eta}} S_{Q}(s)^{ \mathop{\mathbin{\mbox{\large $\circ$}}} 2}
$$
does not vanish by Proposition~\ref{prop: not vanishing} and hence $S_Q(\alpha) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\beta) \simeq S_{Q}(s)^{ \mathop{\mathbin{\mbox{\large $\circ$}}} 2}$ up to a grading shift.
Note that $ S_{Q}(s)^{ \mathop{\mathbin{\mbox{\large $\circ$}}} 2} $ is not self-dual simple, while $ S_{Q}(2s) \mathbin{:=} q S_{Q}(s)^{ \mathop{\mathbin{\mbox{\large $\circ$}}} 2} $ is self-dual simple. Thus we have
$$
S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \twoheadrightarrow q^{-p_{\beta,\eta}-p_{s,\eta} + (s,\eta)-1} S_{Q}(2s),
$$
which yields
\begin{align*}
\widetilde{\Lambda}(S_Q(\alpha), S_Q(\beta) ) =p_{\beta,\eta}+p_{s,\eta} - (s,\eta)+1
\end{align*}
and hence
\begin{align*}
\mathfrak{d}(S_Q(\alpha), S_Q(\beta) ) =p_{\beta,\eta}+p_{s,\eta} - (s,\eta)-(\alpha,\beta)+1.
\end{align*}
Since $p_{\beta,\eta}=1$ and $p_{s,\eta}= (s,\eta)=(\alpha,\beta)=0$, we have
\begin{align}\label{eq: tLa de v B}
\mathfrak{d}(S_Q(\alpha), S_Q(\beta) ) =2 = \max(\ms{1mu}\mathsf{d}_\alpha,\ms{1mu}\mathsf{d}_\beta)=\mathfrak{d}_{i,j}[p-s].
\end{align}
By Proposition~\ref{prop: length 2}, $\ell(S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta)) =2$.
We can
apply a similar argument to~\eqref{it: vb}.
Consider the ${\rm (vi)}'$ case. As in the ${\rm (v)}'$ case, there exists $\eta \in \Phi^+$ such that
\begin{enumerate}[{\rm (a)}]
\item \label{it: vi a} the \prq $\pair{\eta,s_2}$ is $[Q]$-minimal for $\alpha$ and of the form ${\rm (ix)}'$, or
\item \label{it: vi b} the \prq $\pair{s_2,\eta}$ is $[Q]$-minimal for $\beta$ and of the form ${\rm (ix)}'$.
\end{enumerate}
Assume~\eqref{it: vi a}. Then we have
$$ q^{p_{s_2,\eta} - (s_2,\eta)} S_Q(\alpha)\mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \hookrightarrow S_{Q}(s_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\eta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta). $$
Note that the \prq $\pair{\eta,\beta}$ corresponds to ${\rm (vii)}'$. Thus we have a non-zero composition of homogeneous $R$-module homomorphisms
\begin{align*}
& q^{p_{s_2,\eta} - (s_2,\eta)} S_Q(\alpha)\mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \hookrightarrow S_{Q}(s_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\eta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \\
& \ \twoheadrightarrow q^{-\widetilde{\Lambda}( S_Q(\eta), S_Q(\beta))} S_{Q}(s_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_{Q}(s_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(s_3) \simeq q^{-\widetilde{\Lambda}( S_Q(\eta), S_Q(\beta))-1} S_{Q}(2s_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(s_3),
\end{align*}
and hence $ S_Q(\alpha)\mathbin{\scalebox{.9}{$\nabla$}} S_Q(\beta) \simeq S_{Q}(2s_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(s_3)$ up to a grading shift.
Furthermore, we have
$$
S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) \twoheadrightarrow q^{-p_{s_2,\eta} +(s_2,\eta) -\widetilde{\Lambda}( S_Q(\eta), S_Q(\beta))-1} S_{Q}(2s_2) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(s_3)
$$
which yields
\begin{align*}
\widetilde{\Lambda}(S_Q(\alpha), S_Q(\beta) ) = p_{s_2,\eta} -(s_2,\eta) +\widetilde{\Lambda}( S_Q(\eta), S_Q(\beta))+1
\end{align*}
and hence
\begin{align*}
\mathfrak{d}(S_Q(\alpha), S_Q(\beta) ) =p_{s_2,\eta} +\widetilde{\Lambda}( S_Q(\eta), S_Q(\beta))-(s_2,\eta) -(\alpha,\beta)+1.
\end{align*}
Since the \prq $\pair{\eta,\beta}$ corresponds to ${\rm (vii)}'$, we have
$$
\widetilde{\Lambda}( S_Q(\eta), S_Q(\beta)) = \mathfrak{d}( S_Q(\eta), S_Q(\beta)) +(\eta,\beta) = 2 + 0= 2.
$$
Then the facts that $p_{s_2,\eta}=1$, $(s_2,\eta)=0$ and
$(\alpha,\beta)=2$ yield
\begin{align}
\mathfrak{d}(S_Q(\alpha), S_Q(\beta) ) = 2 = \max(\ms{1mu}\mathsf{d}_\alpha,\ms{1mu}\mathsf{d}_\beta)=\mathfrak{d}_{i,j}[ p-s].
\end{align}
We can
apply a similar argument to~\eqref{it: vi b}. Thus the assertions in Theorem~\ref{thm: determimant} hold for cases ${\rm (v)}'$ and ${\rm (vi)}'$.
\subsubsection{$C_n$ case} Now we shall consider the $C_n$-cases.
In contrast to the $B_n$ case, there is no $C_n$-analogue of Proposition~\ref{prop: many pairs}; i.e., for a \prq $\underline{p}=\pair{\alpha,\beta}$ with ${\operatorname{deg}}_Q(\alpha,\beta)=2$,
there exists a unique sequence ${\underline{m}}$ such that
$$ {\rm hd}_Q(\underline{p}) \prec_Q^{\mathtt{b}} {\underline{m}} \prec_Q^{\mathtt{b}} \underline{p}. $$
Thus we cannot apply Proposition~\ref{prop: inj} to the cases ${\rm (x)}''$ and ${\rm (xi)}''$. For the cases other than ${\rm (x)}''$ and ${\rm (xi)}''$, we can apply the same strategies as in the $B_n$ case to prove Theorem~\ref{thm: determimant}.
\smallskip
Let us consider the ${\rm (x)}''$-case. Before we consider the general ${\rm (x)}''$-case, let us observe the AR-quiver $\Gamma_Q$ of type $C_3$ described in Example~\ref{ex: AR Q BCFG}.
In this case, $\pair{\alpha,\beta}\mathbin{:=} \pair{\srt{1,-3},\srt{2,3}}$ is the unique \prq corresponding to ${\rm (x)}''$-case.
Now we want to show
$$\mathfrak{d}(S_Q(\srt{1,-3}),S_Q(\srt{2,3})) =2.$$
Since the \prq $\pair{\srt{2,-3} ,\srt{3,3} }$ is $[Q]$-minimal for $\srt{2,3}$, we have
$$S_Q(\srt{2,3}) \simeq S_Q(\srt{2,-3}) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\srt{3,3} ) \quad \text{(up to grading shift)}.$$
Let $L=S_Q(\srt{1,-3})$, $M=S_Q(\srt{2,-3})$ and $N=S_Q(\srt{3,3})$. Since $L$ and $M$ commute, the sequence
$(L,M,N)$ is a normal sequence. On the other hand,
we can compute
$$ \Lambda(M,L)=-1, \quad \Lambda(N,L) = 2 \quad \text{ and } \quad \Lambda(M \mathbin{\scalebox{.9}{$\nabla$}} N,L)= 1, $$
since $$\srt{1,-3} \prec_Q \srt{2,-3}, \quad \srt{1,-3} \prec_Q \srt{3,3} \quad \text{ and } \quad \srt{1,-3} \prec_Q \srt{2,3}.$$
Since $\Lambda(M \mathbin{\scalebox{.9}{$\nabla$}} N,L) = \Lambda(M,L)+\Lambda(N,L)$, Lemma~\ref{lem: normality} implies that $(M,N,L)$ is normal, and hence
$$\mathfrak{d}(L,M\mathbin{\scalebox{.9}{$\nabla$}} N) = \mathfrak{d}(L,M)+\mathfrak{d}(L,N) = 0+2 =2,$$
by Lemma~\ref{lem: de additive}. Here we use the fact that $\mathfrak{d}(L,N)=2$, which follows from Proposition~\ref{prop: minimal pair ga de}.
Moreover, we have
\begin{align*}
L \mathbin{\scalebox{.9}{$\nabla$}} (M\mathbin{\scalebox{.9}{$\nabla$}} N) \simeq {\operatorname{hd}}(L \mathop{\mathbin{\mbox{\large $\circ$}}} M \mathop{\mathbin{\mbox{\large $\circ$}}} N) &\simeq {\rm hd}(M \mathop{\mathbin{\mbox{\large $\circ$}}} L \mathop{\mathbin{\mbox{\large $\circ$}}} N) \simeq {\rm hd}(M \mathop{\mathbin{\mbox{\large $\circ$}}} (L \mathbin{\scalebox{.9}{$\nabla$}} N) ) \\
& \simeq S_Q(\srt{2,-3}) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\srt{1,3}) \simeq S_Q(\srt{1,2})
\end{align*}
as desired. Since there exists a non-zero $R$-module homomorphism
$$ S_Q(\srt{1,3}) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\srt{2,-3}) \to S_Q(\srt{2,3}) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\srt{1,-3}),$$
$\ell(S_Q(\srt{1,3}) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\srt{2,-3}))=2$ and they have a common simple socle, the $R$-module homomorphism
is injective. Thus we have
$$\ell( S_Q(\srt{2,3}) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\srt{1,-3})) > 2.$$
Thus we can check that the assertions in Theorem~\ref{thm: determimant} hold for this \prq.
\medskip
Let us consider the general ${\rm (x)}''$-case. Then there exists $\kappa \in \Phi^+$ such that
\begin{enumerate}[{\rm (a)}]
\item \label{it: Cx a} $\pair{\kappa,\eta_1}$ is a $[Q]$-minimal \prq for $\alpha$ and of the form ${\rm (i)}''$, or
\item \label{it: Cx b} $\pair{\alpha,\kappa}$ is a $[Q]$-minimal \prq for $\eta_1$ and of the form ${\rm (iii)}''$.
\end{enumerate}
\begin{align*}
\scalebox{0.79}{{\xy
(-20,0)*{}="DL";(-10,-10)*{}="DD";(0,20)*{}="DT";(10,10)*{}="DR";
"DT"+(-50,-4); "DT"+(70,-4)**\dir{.};
"DD"+(-40,-6); "DD"+(80,-6) **\dir{.};
"DT"+(-52,-4)*{\scriptstyle 1};
"DT"+(-52,-8)*{\scriptstyle 2};
"DT"+(-52,-12)*{\scriptstyle \vdots};
"DT"+(-52,-16)*{\scriptstyle \vdots};
"DT"+(-53,-36)*{\scriptstyle n};
"DD"+(-8,2); "DD"+(0,-6) **\dir{-};
"DD"+(6,0); "DD"+(0,-6) **\dir{-};
"DD"+(6,0); "DD"+(12,-6) **\dir{-};
"DD"+(33,15); "DD"+(12,-6) **\dir{-};
"DD"+(-8,2); "DD"+(16,26) **\dir{.};
"DD"+(32,16); "DD"+(22,26) **\dir{.};
"DD"+(6,0); "DD"+(27,21) **\dir{.};
"DD"+(32,26); "DD"+(27,21) **\dir{.};
"DD"+(32,26); "DD"+(36,26) **\crv{"DD"+(34,28)};
"DD"+(34,29)*{\scriptstyle 2};
"DD"+(36,26); "DD"+(40,22) **\dir{.};
"DD"+(33,15); "DD"+(40,22) **\dir{.};
"DD"+(42,22)*{\scriptstyle \kappa};
"DD"+(37,22)*{\scriptstyle {\rm (a)} };
"DD"+(6,0); "DD"+(-2,8) **\dir{.};
"DD"+(-20,26); "DD"+(-2,8) **\dir{.};
"DD"+(-20,26); "DD"+(-24,26) **\crv{"DD"+(-22,28)};
"DD"+(-22,29)*{\scriptstyle 2};
"DD"+(-30,22)*{\scriptstyle \kappa};
"DD"+(-25,22)*{\scriptstyle {\rm (b)} };
"DD"+(-24,26); "DD"+(-28,22) **\dir{.};
"DD"+(-28,22); "DD"+(-8,2) **\dir{.};
"DD"+(16,26); "DD"+(22,26) **\crv{"DD"+(19,28)};
"DD"+(19,29)*{\scriptstyle 2};
"DD"+(-8,2)*{\bullet};
"DL"+(18,-1)*{{\rm(x)''}};
"DD"+(-5,2)*{\scriptstyle \beta};
"DD"+(-2,8)*{\bullet};
"DD"+(-2,10)*{\scriptstyle \eta_2};
"DD"+(33,15)*{\bullet};
"DD"+(30,15)*{\scriptstyle \alpha};
"DD"+(27.5,18)*{\scriptstyle \eta_1 };
"DD"+(27,21)*{\scriptstyle \bullet};
"DT"+(2,-36)*{\scriptstyle \bullet};
"DT"+(-10,-36)*{\scriptstyle \bullet};
"DT"+(-4,-30)*{\scriptstyle \bullet};
"DT"+(-4,-32)*{\scriptstyle \gamma};
"DT"+(2,-38)*{\scriptstyle \tau_1};
"DT"+(-10,-38)*{\scriptstyle \tau_2};
\endxy}}
\end{align*}
Here $\tau_1+\tau_2 = 2\gamma = 2(\alpha+\beta)$.
\smallskip
Assume~\eqref{it: Cx a}. Then we have
\begin{eqnarray} &&
\parbox{75ex}{
\begin{enumerate}[{\rm (1)}]
\item $S_Q(\kappa) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\eta_1) \simeq S_Q(\alpha)$,
\item the \prq $\pair{\kappa,\beta}$ is $[Q]$-minimal for $\eta_2$ of the form ${\rm (iii)}''$,
\item the \prq $\pair{\alpha,\eta_2}$ is $[Q]$-minimal for $\tau_1$ of the form ${\rm (v)}''$,
\item the \prq $\pair{\eta_1,\beta}$ is $[Q]$-minimal for $\tau_2$ of the form ${\rm (v)}''$,
\item the \prq $\pair{\kappa,\tau_2}$ is $[Q]$-minimal for $\gamma$ of the form ${\rm (viii)}''$.
\end{enumerate}
}\label{eq: C X a}
\end{eqnarray}
Now we shall prove the following claims:
$$
\text{ The sequences $(S_Q(\beta),S_Q(\kappa),S_Q(\eta_1))$ and $(S_Q(\kappa),S_Q(\eta_1),S_Q(\beta))$ are normal.}
$$
\noindent
(I) Note that
$$\text{$\Lambda( S_Q(\beta),S_Q(\kappa) )=-(\beta,\kappa)$, $\Lambda( S_Q(\beta),S_Q(\eta_1) )=-(\beta,\eta_1)$ and $\Lambda( S_Q(\beta),S_Q(\alpha) )=-(\beta,\alpha)$,}$$
since $ \alpha,\eta_1,\kappa \prec_Q \beta$. Thus we have
$$ \Lambda( S_Q(\beta), S_Q(\kappa) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\eta_1) )= \Lambda( S_Q(\beta), S_Q(\kappa) )+ \Lambda( S_Q(\beta), S_Q(\eta_1) )$$
which implies that the sequence $(S_Q(\beta),S_Q(\kappa),S_Q(\eta_1))$ is normal.
\noindent
(II) We have
\begin{align*}
\Lambda( S_Q(\kappa),S_Q(\eta_1) ) & = 2\mathfrak{d}( S_Q(\kappa),S_Q(\eta_1) ) + (\kappa,\eta_1) \overset{{\rm (A)}}{=} 2+ (\kappa,\eta_1), \\
\Lambda( S_Q(\kappa),S_Q(\beta) ) & = 2\mathfrak{d}( S_Q(\kappa),S_Q(\beta) ) + (\kappa,\beta) \overset{{\rm (B)}}{=} 2+ (\kappa,\beta), \\
\Lambda( S_Q(\kappa),S_Q(\tau_2) ) & = 2\mathfrak{d}( S_Q(\kappa),S_Q(\tau_2) ) + (\kappa,\tau_2) \overset{{\rm (C)}}{=} 4+ (\kappa,\tau_2).
\end{align*}
Here we use the facts $$\text{$\mathfrak{d}( S_Q(\kappa),S_Q(\eta_1) ) \overset{{\rm (A)}}{=} 1$, $\mathfrak{d}( S_Q(\kappa),S_Q(\beta) ) \overset{{\rm (B)}}{=} 1$ and $\mathfrak{d}( S_Q(\kappa),S_Q(\tau_2) ) \overset{{\rm (C)}}{=} 2$}$$ that follow from Proposition~\ref{prop: minimal pair ga de}
and~\eqref{eq: C X a}.
Thus we have
$$
\Lambda( S_Q(\kappa),S_Q(\eta_1) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\beta)) = \Lambda( S_Q(\kappa),S_Q(\tau_2) ) = \Lambda( S_Q(\kappa),S_Q(\eta_1) )+ \Lambda( S_Q(\kappa),S_Q(\beta) )
$$
which implies that the sequence $(S_Q(\kappa),S_Q(\eta_1),S_Q(\beta))$ is normal.
\medskip
By (I) and (II), Lemma~\ref{lem: de additive} implies that
$$
\mathfrak{d}(S_Q(\alpha),S_Q(\beta) ) = \mathfrak{d} (S_Q(\kappa) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\eta_1) ,S_Q(\beta)) = \mathfrak{d} (S_Q(\kappa) ,S_Q(\beta)) + \mathfrak{d} (S_Q(\eta_1) ,S_Q(\beta)) =2,
$$
as desired. Moreover, we have
\begin{align*}
S_Q(\alpha) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\beta) & \simeq {\rm hd}( S_Q(\kappa) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\eta_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta) ) \\
& \simeq S_Q(\kappa) \mathbin{\scalebox{.9}{$\nabla$}} (S_Q(\eta_1) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\beta) ) \simeq S_Q(\kappa) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\tau_2) \\
& \simeq S_Q(\gamma),
\end{align*}
up to grading shifts. Finally we have
$$ \ell( S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta)) >2,$$
as in the particular case of type $C_3$. For the case~\eqref{it: Cx b} and for ${\rm (xi)}''$, we can apply a similar argument.
Thus the assertions in Theorem~\ref{thm: determimant} hold for cases ${\rm (x)}''$ and ${\rm (xi)}''$.
\subsubsection{$G_2$ case}
Note that there exist only $2$ reduced expressions of $w_0$ of type $G_2$, which are adapted to the two Dynkin quivers (up to constant).
We shall only consider $Q$ and $\Gamma_Q$ in Section~\ref{subsec: G2 degree}. Then we have
\begin{align*}
&\beta_1 = \alpha_2 = (1,-2,1), && \beta_2 = \alpha_1+\alpha_2 = (1,-1,0), && \beta_3 = 3\alpha_1+2\alpha_2 = (2,-1,-1), \\
&\beta_4 = 2\alpha_1+\alpha_2 = (1,0,-1), && \beta_5 = 3\alpha_1+\alpha_2 = (1,1,-2), && \beta_6 = \alpha_1 = (0,1,-1).
\end{align*}
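Indeed, with $\alpha_1=(0,1,-1)$ and $\alpha_2=(1,-2,1)$ one checks directly that
$$3\alpha_1+2\alpha_2=(2,-1,-1), \qquad 3\alpha_1+\alpha_2=(1,1,-2) \quad \text{ and } \quad \beta_2+\beta_4=(2,-1,-1)=\beta_3,$$
which is consistent with the coordinates above and with the relation $\beta_3 \prec_Q^\mathtt{b} \pair{\beta_2,\beta_4}$ below.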
Note that the non-$[Q]$-simple \prqs are given as follows:
\begin{equation} \label{eq: G2 regular}
\begin{aligned}
&\beta_3 \prec_Q^\mathtt{b} \pair{\beta_2,\beta_4}, && \beta_5 \prec_Q^\mathtt{b} \pair{\beta_4,\beta_6}, && \beta_4 \prec_Q^\mathtt{b} \pair{\beta_2,\beta_6}, \\
&2\beta_2 \prec_Q^\mathtt{b} \pair{\beta_1,\beta_4}, && 2\beta_4 \prec_Q^\mathtt{b} \pair{\beta_3,\beta_6}, &&2\beta_4 \prec_Q^\mathtt{b} \pair{\beta_2,\beta_5},
\end{aligned}
\end{equation}
and
\begin{align}\label{eq: G2 special}
&{\rm (i)} \ 3\beta_2 \prec_Q^\mathtt{b} \pair{\beta_1,\beta_3}, && {\rm (ii)} \ 3\beta_4 \prec_Q^\mathtt{b} \pair{\beta_3,\beta_5}, &&{\rm (iii)} \ \beta_3 \prec_Q^\mathtt{b} \pair{\beta_2,\beta_4} \prec_Q^\mathtt{b} \pair{\beta_5,\beta_1}.
\end{align}
The cases in~\eqref{eq: G2 regular} can be proved as in $B_n$ or $C_n$-cases. So let us consider the cases in~\eqref{eq: G2 special}.
First, we shall deal with (iii) in \eqref{eq: G2 special}. Take
$L= S_Q(\beta_1)$, $M= S_Q(\beta_4)$ and $N= S_Q(\beta_6)$, so that $M
\mathbin{\scalebox{.9}{$\nabla$}} N \simeq S_Q(\beta_5)$ and $L \mathbin{\scalebox{.9}{$\nabla$}} M \simeq
S_Q(\beta_2)^{\circ 2}$ up to grading shifts. Since
$\mathfrak{d}(S_Q(\beta_2),N)=2$, $\mathfrak{d}(L,N)=3$ and $\mathfrak{d}(M,N)=1$ by
Proposition~\ref{prop: tLa}, we have
\begin{align*}
2 \times 3 = 2 \Lambda(S_Q(\beta_2),N)& = \Lambda(L\mathbin{\scalebox{.9}{$\nabla$}} M,N) \\
&= \Lambda(L,N)+\Lambda(M,N) =3+3,
\end{align*}
which implies that the sequence $(L,M,N)$ is normal by Lemma~\ref{lem: normality}. By the convex order $\prec_Q$, the sequence $(M,N,L)$ is also normal. Thus
$$\mathfrak{d}(L,M\mathbin{\scalebox{.9}{$\nabla$}} N) =\mathfrak{d}(L,M)+\mathfrak{d}(L,N)=6,$$
by Lemma~\ref{lem: de additive}. Furthermore, we have
\begin{align*}
L \mathbin{\scalebox{.9}{$\nabla$}} (M \mathbin{\scalebox{.9}{$\nabla$}} N) & \simeq (L \mathbin{\scalebox{.9}{$\nabla$}} M)\mathbin{\scalebox{.9}{$\nabla$}} N \simeq S_Q(\beta_2)^{\circ 2} \mathbin{\scalebox{.9}{$\nabla$}} N \\
& \simeq S_Q(\beta_2) \mathbin{\scalebox{.9}{$\nabla$}} (S_Q(\beta_2) \mathbin{\scalebox{.9}{$\nabla$}} N) \simeq S_Q(\beta_2) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\beta_4) \simeq S_Q(\beta_3).
\end{align*}
Since we can construct a non-zero homomorphism
$$ S_Q(\beta_4) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta_2) \to S_Q(\beta_5) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta_1) $$
and they have a common simple socle, we have
$$ S_Q(\beta_4) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta_2) \hookrightarrow S_Q(\beta_5) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta_1) $$
and hence
$$ \ell(S_Q(\beta_5) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta_1)) >2.$$
Now let us consider (i) in~\eqref{eq: G2 special}. Since $\pair{\beta_2,\beta_4}$ is a minimal
\prq for $\beta_3$, we have a non-zero composition of $R$-module
homomorphisms
$$
S_Q(\beta_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta_3) \hookrightarrow S_Q(\beta_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta_4) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta_2) \twoheadrightarrow S_Q(\beta_2)^{\mathop{\mathbin{\mbox{\large $\circ$}}} 3},
$$
which implies
$$
S_Q(\beta_1) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta_3) \simeq S_Q(\beta_2)^{\mathop{\mathbin{\mbox{\large $\circ$}}} 3}.
$$
The assertion (ii) in~\eqref{eq: G2 special} can be proved in the same way.
\subsubsection{$F_4$ case}
In the $F_4$-case, we can prove Theorem~\ref{thm: determimant} by applying the techniques used in the previous subsections to each \prq $\pair{\alpha,\beta}$.
In this subsection, we shall present several cases, instead of
considering all cases. Let us use $\Gamma_Q$ in~\eqref{eq: F_4Q}
and assume that Theorem~\ref{thm: determimant} holds for \prqs
$\pair{\theta,\mu}$ with ${\operatorname{deg}}_Q(\theta,\mu) \le 1$.
\smallskip
\noindent
(1) Let us compute $\mathfrak{d}(S_Q(\alpha'),S_Q(\beta') )$ when $\alpha'=\sprt{0,1,0,0}$ and $\beta'= \sprt{\frac{1}{2},-\frac{1}{2},\frac{1}{2},-\frac{1}{2}}$. In this case, we have
\begin{enumerate}[{\rm (a)}]
\item $\phi_Q(3,2)=(\alpha',0)$, $\phi_Q(4,-5)=(\beta',0)$, $\widetilde{\de}_{3,4}[7]= 2$,
\item there exists a unique \prq $\underline{p} = \pair{\sprt{\frac{1}{2},\frac{1}{2},-\frac{1}{2},-\frac{1}{2}},\sprt{0,0,1,0} }=\pair{\alpha^{(1)},\beta^{(1)}} $ such that
$$\gamma\mathbin{:=} \alpha'+\beta' \prec_Q^\mathtt{b} \underline{p} \prec_Q^\mathtt{b} \pair{\alpha',\beta'}.$$
\end{enumerate}
Note that $\pair{\beta^{(1)},\eta \mathbin{:=} \sprt{\frac{1}{2},-\frac{1}{2},-\frac{1}{2},-\frac{1}{2}}}$ is a minimal \prq for $\beta'$ and $\pair{\alpha',\beta^{(1)}}$ is a minimal \prq for $\zeta = \sprt{0,1,1,0}$. Setting $L=S_Q(\alpha')$, $M=S_Q(\beta^{(1)})$, $N= S_Q(\eta)$ and $T=S_Q(\zeta)$,
we have $ \mathfrak{d}(S_Q(\alpha'),S_Q(\beta')) = \mathfrak{d}(L,M\mathbin{\scalebox{.9}{$\nabla$}} N)$ and $L \mathbin{\scalebox{.9}{$\nabla$}} M \simeq T$. Since $\alpha' \prec_Q \beta^{(1)} \prec_Q \beta' \prec_Q \eta$, the sequence $(M,N,L)$ is normal.
On the other hand, we have
$$ \Lambda(L \mathbin{\scalebox{.9}{$\nabla$}} M,N) = \Lambda(T,N) = 2\,\mathfrak{d}(T,N) +(\zeta,\eta) = 2\times 2 -2 =2 $$
and
$$ \Lambda(L,N) = 2 \mathfrak{d}(L,N) + (\alpha',\eta) = 2\times 1 -1 =1 , \ \ \Lambda(M,N) = 2 \mathfrak{d}(M,N) + (\beta^{(1)},\eta) =2 \times 1 -1 =1.$$
Thus the sequence $(L,M,N)$ is also normal. Hence we have
$$\mathfrak{d}(L,M\mathbin{\scalebox{.9}{$\nabla$}} N)=\mathfrak{d}(L,M)+\mathfrak{d}(L,N)= 1 + 1 =2.$$
Finally, we have
\begin{align*}
L \mathbin{\scalebox{.9}{$\nabla$}}(M\mathbin{\scalebox{.9}{$\nabla$}} N) \simeq (L\mathbin{\scalebox{.9}{$\nabla$}} M) \mathbin{\scalebox{.9}{$\nabla$}} N \simeq T \mathbin{\scalebox{.9}{$\nabla$}} N \simeq S_Q(\gamma)
\end{align*}
and hence $\ell(S_Q(\alpha')\mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta'))>2$.
\smallskip
\noindent
(2) Now let us keep the notation of (1) and compute $\mathfrak{d}(S_Q(\alpha),S_Q(\beta) )$ when $\alpha=\sprt{0,1,0,-1}$ and $\beta= \sprt{\frac{1}{2},-\frac{1}{2},\frac{1}{2},\frac{1}{2}}$ in~\eqref{eq: F_4 de not tde}.
As we have seen in~\eqref{eq: F_4 de not tde}, we have
$$
\gamma =\alpha+\beta \prec_Q^\mathtt{b}
\pair{\alpha^{(1)},\beta^{(1)} } \prec_Q^\mathtt{b}
\pair{\alpha',\beta' } \prec_Q^\mathtt{b} \pair{\alpha,\beta}.
$$
However, we have another chain given as follows:
$$
\gamma =\alpha+\beta \prec_Q^\mathtt{b}
\pair{\alpha^{(1)},\beta^{(1)} } \prec_Q^\mathtt{b}
\pair{\alpha'',\beta'' } \prec_Q^\mathtt{b} \pair{\alpha,\beta},
$$
where $\alpha''=\sprt{0,0,1,-1}$ and $\beta''= \sprt{\frac{1}{2},\frac{1}{2},-\frac{1}{2},\frac{1}{2}}$. As in (1), we can prove that $\mathfrak{d}(J,K)=2$ and $J \mathbin{\scalebox{.9}{$\nabla$}} K \simeq S_Q(\gamma)$, where
$J \mathbin{:=} S_Q(\alpha'')$ and $K \mathbin{:=} S_Q(\beta'')$. Let $\kappa = \sprt{0,1,-1,0}$, and set $P \mathbin{:=} S_Q(\kappa)$, $R=S_Q(\alpha)$, $X=S_Q(\beta')$ and $U=S_Q(\beta)$. Then
$ \mathfrak{d}(R,U) = \mathfrak{d}(P \mathbin{\scalebox{.9}{$\nabla$}} J, U) $, since $\pair{\alpha'',\kappa}$ is a minimal \prq of $\alpha$. Note that $J \mathbin{\scalebox{.9}{$\nabla$}} U \simeq M \mathop{\mathbin{\mbox{\large $\circ$}}} X \simeq X \mathop{\mathbin{\mbox{\large $\circ$}}} M$. Since
$$ \kappa \prec_Q \alpha'' \prec_Q \beta^{(1)} \prec_Q \beta' \prec_Q \beta,$$
the sequence $(J,U,P)$ is normal.
On the other hand, we have
\begin{align*}
\Lambda(P,J\mathbin{\scalebox{.9}{$\nabla$}} U) & = \Lambda(P, X \mathop{\mathbin{\mbox{\large $\circ$}}} M) = \Lambda(P,X)+\Lambda(P,M) \\
& = (2\mathfrak{d}(P,X)+(\kappa,\beta')) + (2\mathfrak{d}(P,M)+(\kappa,\beta^{(1)})) = (2 \times 2 - 2 ) + (2 \times 2 -2) =4
\end{align*}
and
\begin{align*}
\Lambda(P,J) &= 2\mathfrak{d}(P,J)+(\kappa,\alpha'') = 2 \times 2 -2 =2,\\
\Lambda(P,U) &= 2\mathfrak{d}(P,U)+(\kappa,\beta) =2 \times 2 -2 =2.
\end{align*}
Thus the sequence $(P,J,U)$ is also normal. Hence
$$ \mathfrak{d}(P\mathbin{\scalebox{.9}{$\nabla$}} J, U) = \mathfrak{d}(P,U)+\mathfrak{d}(J,U) =2+2=4 = \widetilde{\de}_{1,3}[9].$$
Consequently, we have
\begin{align*}
(P \mathbin{\scalebox{.9}{$\nabla$}} J) \mathbin{\scalebox{.9}{$\nabla$}} U &\simeq P \mathbin{\scalebox{.9}{$\nabla$}} (J\mathbin{\scalebox{.9}{$\nabla$}} U) \simeq P \mathbin{\scalebox{.9}{$\nabla$}} (X\mathop{\mathbin{\mbox{\large $\circ$}}} M) \simeq (P \mathbin{\scalebox{.9}{$\nabla$}} X) \mathbin{\scalebox{.9}{$\nabla$}} M \\
& \simeq S_Q(\alpha^{(1)}) \mathbin{\scalebox{.9}{$\nabla$}} M \simeq S_Q(\gamma).
\end{align*}
Thus we have injective $R$-module homomorphisms
$$ S_Q(\beta'') \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\alpha'') \hookrightarrow S_Q(\beta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\alpha) \quad \text{ and } \quad S_Q(\beta') \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\alpha') \hookrightarrow S_Q(\beta) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\alpha),$$
since they have the common socle $S_Q(\gamma)$, which appears once in their composition series.
Thus we have $\ell(S_Q(\alpha) \mathop{\mathbin{\mbox{\large $\circ$}}} S_Q(\beta)) >2$.
\subsection{Concluding Remark}
We can also apply the techniques in the previous subsections to prove \eqref{eq: de ADE} for $ADE$-types.
Moreover, the result for ${\operatorname{deg}}_Q(\alpha,\beta)=1$ can be written in the form of~\eqref{eq: BKMc}.
\begin{example}
For a reduced expression $s_1s_2s_3s_1s_2s_3s_1s_2s_3$ of $w_0$ of type $C_3$, adapted to $Q$ in Example~\ref{ex: AR Q BCFG}, the fact that
$$ S_Q(\srt{1,1} ) \mathbin{\scalebox{.9}{$\nabla$}} S_Q(\srt{2,2} ) \simeq S_Q(\srt{1,2} )^{\mathop{\mathbin{\mbox{\large $\circ$}}} 2} \quad \text{ (up to a grading shift)}$$
corresponds to
$$ r_{[Q]}(\srt{1,1}) r_{[Q]}(\srt{2,2}) - r_{[Q]}(\srt{2,2}) r_{[Q]}(\srt{1,1}) = (q^{-1}-q^3) r_{[Q]}(\srt{1,2})^2.$$
\end{example}
|
{
"arxiv_id": "2302.08702",
"language": "en",
"timestamp": "2023-02-20T02:07:18",
"url": "https://arxiv.org/abs/2302.08702",
"yymm": "2302"
} | \section{Introduction}
\hspace{1em}
The preference of any player in non-cooperative games represents his choice in the set of available options, and it mostly depends on the other players' choices \cite{shafer}. According to Debreu \cite{debreubook,debreu1}, the preference of any player is representable in terms of a real-valued objective function if it meets certain conditions, namely, transitivity and completeness. In game theory, one can maximize the corresponding objective functions in place of maximizing the preferences of players if the mentioned conditions are fulfilled. In particular, the following form of the generalized Nash equilibrium problem (GNEP) is well known in the literature, where each player intends to maximize a real-valued objective function representing his preference \cite{debreu,faccsurvey}. Consider a set $\Lambda=\{1,2,\cdots N\}$ consisting of $N$ players. Suppose any player $i\in \Lambda$ controls a strategy variable $x_i\in \mathbb{R}^{n_i}$, where $\sum_{i\in \Lambda} n_i=n$. Then, we denote $x=(x_i)_{i\in \Lambda}\in \mathbb{R}^n$ as $x=(x_{-i},x_i)$, where $x_{-i}$ is the strategy vector of all the players except player $i$. The feasible strategy set of each player is given as $K_i(x)\subseteq \mathbb{R}^{n_i}$, which depends on his strategy and the strategies chosen by his rivals. For a given strategy $x_{-i}$ of the rivals, any player $i$ intends to choose a strategy $x_i\in K_i(x)$ that maximizes his objective function $u_i:\mathbb{R}^n\rightarrow \mathbb{R}$,
\begin{equation}\label{GNEP2}
u_i(x_{-i},x_i) = \max_{y_i\in K_i(x)} u_i(x_{-i}, y_i).
\end{equation}
Suppose $Sol_i(x_{-i})$ consists of all vectors $x_i$ that solve the problem (\ref{GNEP2}). Then a vector $\bar x$ is known as a generalized Nash equilibrium if $\bar x_i\in Sol_i(\bar x_{-i})$ for each $i\in \Lambda$. For a given $x=(x_{-i},x_i)\in \mathbb{R}^n$, suppose the feasible strategy set $K_i(x)$ is defined as
\begin{equation}\label{Kiconv}
K_i(x)=\{y_i\in\mathbb{R}^{n_i}|\,(x_{-i},y_i)\in \mathcal X\},
\end{equation}
where $\mathcal X\subseteq\mathbb{R}^n$ is a non-empty closed convex set. Then, this type of GNEP is classified as a jointly convex GNEP \cite{rosen,facc,ausselGNEP}.
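As a simple illustration of (\ref{Kiconv}), consider two players with $n_1=n_2=1$ and the joint constraint set $\mathcal X=\{(x_1,x_2)\in \mathbb{R}^2_+|\,x_1+x_2\le 1\}$. Then, for any $x=(x_1,x_2)\in \mathcal X$, we obtain
$$K_1(x)=\{y_1\ge 0|\,y_1+x_2\le 1\}=[0,1-x_2] \quad \text{ and } \quad K_2(x)=[0,1-x_1],$$
so the feasible strategy sets of the two players are coupled through the common set $\mathcal X$.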
The variational inequality (VI) theory introduced by Stampacchia \cite{stampacchia} works as an efficient tool to study optimization problems and Nash games \cite{ausselnormal,ausselGNEP,facc,milasipref2021}.
It is well known that solving a generalized Nash equilibrium problem with concave differentiable objective functions is equivalent to solving a quasi-variational inequality (QVI) in which the operator is formed by using the gradients of all objective functions \cite{faccsurvey}. Further, one can determine some solutions of a jointly convex GNEP having concave differentiable objective functions by solving a VI problem in which the operator is formed by using the gradients of all objective functions \cite{facc}. In the case of a jointly convex GNEP with quasi-concave objective functions, the operator for an associated VI problem is formed by using normal cones to the super-level sets corresponding to the objective functions \cite{ausselGNEP,cotrina}. This characterization of jointly convex GNEPs through VI problems is helpful in studying several complex economic problems, for instance, a time-dependent electricity market model \cite{aussel-rachna}. In the existing literature, the variational reformulation of a GNEP relies on the numerical representation of players' preferences by objective functions, which is only possible if the preferences are transitive and complete \cite{debreubook}.
In fact, the preference of any player meets the completeness property if he is able to compare any pair of available options. As per \cite{neumann}, the completeness property for preferences is not always valid in real-world scenarios. This motivated several authors, such as Shafer-Sonnenschein \cite{shafer}, Tian \cite{tian} and He-Yannelis \cite{yannelis}, to study GNEPs with non-ordered (incomplete and non-transitive) inter-dependent preferences by using fixed point theory. On the other hand, Bade \cite{bade} studied Nash games with incomplete preferences by using a scalarization approach. Recently, Milasi-Scopelliti \cite{milasipref2021} and Milasi-Puglis-Vitanza \cite{milasipref}
studied the maximization problem for incomplete preference relations by using a variational approach and, as an application, derived existence results for economic equilibrium problems under uncertainty. However, there is no detailed study on the variational reformulation of a GNEP in which players have non-ordered inter-dependent preferences.
In this work, we aim to study a variational reformulation of generalized Nash equilibrium problems in which the players have non-ordered non-convex inter-dependent preferences (refer to \cite{shafer,tian,yannelis} for more details on such games). In particular, we characterize the jointly convex GNEP in terms of a variational inequality without requiring any numerical representation of preferences. In this regard, we use normal cones adapted to the preferences to form an operator for the variational inequality associated with the jointly convex GNEP. Furthermore, we provide suitable conditions under which any solution of a quasi-variational inequality becomes an equilibrium for the generalized Nash games considered in \cite{shafer,tian,yannelis}. We check the solvability of these games when the strategy maps of players are (possibly) unbounded by using a variational technique. Finally, we apply our results to ensure the occurrence of competitive equilibrium for the Arrow-Debreu economy under uncertainty.
\section{Preliminaries}
Suppose $C$ is a subset of $\mathbb{R}^m$. Then the polar set of $C$, denoted by $C^{\circ}$, is defined as \cite{ausselnormal,cotrina},
$$ C^{\circ}= \{y\in \mathbb{R}^m|\,\langle y,x\rangle\leq 0~\text{for all}~x\in C\}.$$
Furthermore, the normal cone of the set $C$ at some point $x\in \mathbb{R}^m$ is given as,
$$N_C(x)=(C-\{x\})^\circ=\{x^*\in \mathbb{R}^m|\,\langle x^*,y-x\rangle \leq 0~\text{for all}~y\in C\}.$$
We assume $N_C(x)=\mathbb{R}^m$ whenever $C=\emptyset$.
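For instance, if $C=\mathbb{R}^2_+$ and $x=(0,1)\in C$, then $x^*=(x_1^*,x_2^*)\in N_C(x)$ means $x_1^*y_1+x_2^*(y_2-1)\le 0$ for all $y_1,y_2\ge 0$; choosing $y=(t,1)$ with $t$ large gives $x_1^*\le 0$, while $y=(0,0)$ and $y=(0,2)$ give $x_2^*\ge 0$ and $x_2^*\le 0$, respectively. Hence
$$N_{\mathbb{R}^2_+}\big((0,1)\big)=\{(x_1^*,0)|\,x_1^*\le 0\},$$
which collects the outward normal directions to $C$ at the boundary point $(0,1)$.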
For a given set $C\subseteq \mathbb{R}^m$, we denote by $co(C)$ and $cl(C)$ the convex hull and the closure of the set $C$, respectively. Moreover, $P:C\rightrightarrows \mathbb{R}^p$ denotes a multi-valued map, that is, $P(x)\subseteq \mathbb{R}^p$ for any $x\in C$. The map $P$ is said to admit open upper sections \cite{tian} if $P(x)$ is open for any $x\in C$. The reader may refer to \cite{aubin} in order to recall some important concepts of upper semi-continuous (u.s.c.), lower semi-continuous (l.s.c.) and closed multi-valued maps.
Let us recollect the definitions of variational and quasi-variational inequality problems. Suppose $K\subseteq \mathbb{R}^n$ and $T:K\rightrightarrows \mathbb{R}^n$ is a multi-valued map. Then, the Stampacchia variational inequality problem $VI(T,K)$ \cite{aussel-hadj} consists in determining $\bar x\in K$ such that,
$$\text{there exists}~\bar x^*\in T(\bar x)~\text{satisfying}~\langle \bar x^*,z-\bar x\rangle\geq 0~\text{for all}~z\in K.$$
Suppose $T:D\rightrightarrows \mathbb{R}^n$ and $K:D\rightrightarrows D$ are multi-valued maps, where $D\subseteq \mathbb{R}^n$. Then, the quasi-variational inequality problem $QVI(T,K)$ \cite{chan,ausselcoer} consists in determining $\bar x\in K(\bar x)$ such that,
$$\text{there exists}~\bar x^*\in T(\bar x)~\text{satisfying}~\langle \bar x^*,z-\bar x\rangle\geq 0~\text{for all}~z\in K(\bar x).$$
For given $Y_1\subseteq\mathbb{R}^m$ and $Y_2\subseteq\mathbb{R}^p$, suppose $P:Y_1\times Y_2\rightrightarrows Y_2$ is a multi-valued map.
Let us define a multi-valued map $\mathcal N_{P}:Y_1\times Y_2\rightrightarrows \mathbb{R}^p$ corresponding to the map $P$ as follows \cite{ausselGNEP,cotrina},
\begin{equation}\label{normal}
\mathcal N_{P}(x,y)= N_{P(x,y)}(y)= \{y^*\in \mathbb{R}^p|\,\langle y^*,z-y\rangle \leq 0~\forall~z\in P(x,y)\}.
\end{equation}
We require the following results in order to prove the upcoming existence results on generalized Nash games defined by preference relations,
\begin{proposition} \label{closed}
Consider a map $P:Y_1\times Y_2\rightrightarrows Y_2$ where $Y_1$ and $Y_2$ are subsets of $\mathbb{R}^m$ and $\mathbb{R}^p$, respectively. Then, the map $\mathcal N_{P}:Y_1\times Y_2\rightrightarrows \mathbb{R}^p$ defined as (\ref{normal}) meets following properties,
\begin{itemize}
\item[(a)] $\mathcal N_{P}(x,y)\setminus \{0\}\neq \emptyset$ if $P(x,y)$ is a convex subset of $Y_2$ satisfying $y\notin P(x,y)$ for any $(x,y)\in Y_1\times Y_2$;
\item[(b)] $\mathcal N_{P}$ is a closed map if $P:Y_1\times Y_2\rightrightarrows Y_2$ is lower semi-continuous.
\end{itemize}
\end{proposition}
\begin{proof}
\begin{itemize}
\item[(a)] Suppose $(x,y)\in Y_1\times Y_2$ is arbitrary. If $P(x,y)=\emptyset$, then our claim follows trivially as $\mathcal N_{P}(x,y)=\mathbb {R}^p$. Suppose $P(x,y)\neq \emptyset$. Then, in view of assumption (a), we obtain some $0\neq y^*\in \mathbb R^p$ by applying the separation theorem \cite[Section 2.5.1]{boyd} such that $$\langle y^*,z\rangle \leq \langle y^*,y\rangle~\text{for all}~z\in P(x,y).$$ This finally leads us to the conclusion $y^*\in \mathcal N_{P}(x,y)\setminus \{0\}$.
\item[(b)] In order to prove that the map $\mathcal N_{P}$ is a closed map, we consider sequences $\{(x_n,y_n)\}_{n\in \mathbb{N}}\subseteq Y_1\times Y_2$ and $y_n^*\in \mathcal N_{P}(x_n, y_n)$ for all $n\in \mathbb{N}$ such that $(x_n,y_n)\rightarrow(x,y)$ and $y_n^*\rightarrow y^*$. We aim to show that $y^*\in \mathcal N_{P}(x,y)$. In fact, the claim follows trivially if $P(x,y)=\emptyset$. Suppose $z\in P(x,y)$. Then, by employing the lower semi-continuity of $P$, we obtain some sequence $z_n\in P(x_n,y_n)$ such that $z_n\rightarrow z$. Since $y_n^*\in \mathcal N_{P}(x_n, y_n)$, we obtain
$$ \langle y_n^*, z_n-y_n\rangle \leq 0,\quad \forall n\in \mathbb{N}.$$
Finally, we obtain $\langle y^*,z-y\rangle \leq 0$ by taking $n\rightarrow \infty$. Hence, $y^*\in \mathcal N_{P}(x,y)$ and $\mathcal N_{P}$ becomes a closed map.
\end{itemize}
\end{proof}
\begin{proposition}\label{relation}
Consider a map $P:Y_1\times Y_2\rightrightarrows Y_2$ where $Y_1$ and $Y_2$ are subsets of $\mathbb{R}^m$ and $\mathbb{R}^p$, respectively. Suppose $P(x,y)$ is an open convex subset of $Y_2$ for any $(x,y)\in Y_1\times Y_2$. Then, $\langle y^*,z-y\rangle<0$ for any $z\in P(x,y)$ and $0\neq y^*\in \mathcal N_{P}(x,y)$.
\end{proposition}
\begin{proof}
Suppose $z\in P(x,y)$. Then $z-y\in (P(x,y)-\{y\})={\rm int}\,(P(x,y)-\{y\})$, since $P(x,y)$ is open. In view of \cite[Lemma 2.1]{cotrina}, we obtain $\langle y^*,z-y\rangle<0$ for any $y^*\in (P(x,y)-\{y\})^\circ=\mathcal N_{P}(x,y)$ with $y^*\neq 0$.
\end{proof}
\section{Generalized Nash Games with Non-ordered Inter-dependent Preferences} \label{mainresult}
The concept of generalized Nash equilibrium problems with non-ordered inter-dependent preferences was introduced by Shafer-Sonnenschein \cite{shafer}. They investigated the occurrence of equilibrium for such problems by using the celebrated Kakutani fixed point theorem. Later, several other researchers (see, e.g., \cite{tian,yannelis}) derived the occurrence of equilibrium for such games by relaxing the sufficient conditions required in \cite{shafer}.
Let us recall the definition of a GNEP with non-ordered inter-dependent preferences \cite{shafer,yannelis,tian}. Suppose $\Lambda=\{1,2,\cdots N\}$ denotes the set of involved agents. Let $X_i\subseteq \mathbb{R}^{n_i}$ denote the choice set for each agent $i\in \Lambda$, where $\sum_{i\in \Lambda}n_i=n$. Suppose that
$$X=\prod_{i\in \Lambda} X_i \subseteq \mathbb{R}^n~\text{and}~ X_{-i}= \prod_{j\in (\Lambda\setminus \{i\})} X_j \subseteq \mathbb{R}^{n-n_i}.$$
We consider the multi-valued maps $P_i:X\rightrightarrows X_i$ and $K_i:X\rightrightarrows X_i$ as the preference map and the constraint map, respectively, of any player $i\in \Lambda$. Let us represent the generalized Nash game defined by preference maps, also known as an \textit{abstract economy}, by the $N$ ordered triples $\Gamma=(X_i,K_i,P_i)_{i\in \Lambda}$. A vector $\tilde x\in X$ is said to be an equilibrium for the game $\Gamma$ if $\tilde x\in K_i(\tilde x)$ and $P_i(\tilde x)\cap K_i(\tilde x)=\emptyset$ for each $i\in \Lambda$.
\begin{remark}\label{numGNEP}
If the preference map $P_i:X_{-i}\times X_i\rightrightarrows X_i$ is representable by a utility function $u_i:X_{-i}\times X_i\rightarrow \mathbb{R}$, then $P_i(x_{-i},x_i)=\{y_i\in X_i\,|\, u_i(x_{-i},y_i)>u_i(x_{-i},x_i)\}$ (see \cite{shafer}) and we observe that the given game $\Gamma$ reduces to the Arrow-Debreu abstract economy \cite{debreu}.
\end{remark}
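For instance, with $\Lambda=\{1,2\}$, $X_1=X_2=\mathbb{R}$ and the utility function $u_1(x_2,x_1)=-(x_1-x_2)^2$, the induced preference map is
$$P_1(x_2,x_1)=\{y_1\in \mathbb{R}|\,(y_1-x_2)^2<(x_1-x_2)^2\}=\big(x_2-|x_1-x_2|,\,x_2+|x_1-x_2|\big),$$
which is an open convex set (empty when $x_1=x_2$) with $x_1\notin co(P_1(x))$; in particular, $P_1$ admits open upper sections, a property that will be imposed on general preference maps below.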
\subsection{Jointly Convex GNEP with Preference maps and Variational Inequalities} \label{GNEP}
In the seminal work by Rosen \cite{rosen}, a special class of GNEP known as the \textit{jointly convex} GNEP, or generalized Nash equilibrium problem with \textit{coupled constraints}, was initiated. In the past few years, Facchinei et al. \cite{facc}, Aussel-Dutta \cite{ausselGNEP} and Bueno-Cotrina \cite{cotrina} characterized the jointly convex GNEP in terms of a variational inequality under concavity, semi-strict quasi-concavity and quasi-concavity of the objective functions, respectively. In this section, we characterize the jointly convex GNEP in terms of a variational inequality without requiring any numerical representation of preferences by objective functions. Thus, we consider that players have non-ordered inter-dependent preferences. Further, we provide the sufficient conditions required for the occurrence of equilibrium in such games by using this characterization.
\textcolor{black}{According to \cite{rosen,facc,ausselGNEP}, the feasible strategy set for any player $i\in \Lambda$ in a jointly convex GNEP is defined as (\ref{Kiconv}). From (\ref{Kiconv}), one can observe that the choice set $X_i$ of any player is actually the projection of the set $\mathcal{X}$ on $\mathbb{R}^{n_i}$ (see \cite{rosen,cotrina} for more details). In a given abstract economy $\Gamma=(X_i, K_i, P_i)_{i\in \Lambda}$, let us consider the case where the constraint map $K_i:X\rightrightarrows X_i$ is defined in terms of a non-empty, convex and closed set $\mathcal X \subset \mathbb{R}^n$,
\begin{equation}\label{constraint}
K_i(x)=\{z_i\in X_i|\,(x_{-i}, z_i)\in \mathcal X\}.
\end{equation}
If the map $K_i$ is defined as (\ref{constraint}) for each $i\in \Lambda$, the game $\Gamma=(X_i, K_i, P_i)_{i\in\Lambda}$ reduces to a jointly convex GNEP denoted by $\Gamma'=(\mathcal{X}, K_i, P_i)_{i\in \Lambda}$.} Note that the constraint maps $K_i$ are coupled through the joint constraint set $\mathcal X$.
Let us define the map $\tilde{P}_i:X_{-i}\times X_i\rightrightarrows \mathbb{R}^{n_i}$ as $\tilde P_i(x)= co(P_i(x))$. In order to construct a variational inequality corresponding to the game $\Gamma$, we define a map $T_i:X_{-i}\times X_i\rightrightarrows \mathbb{R}^{n_i}$ for each $i\in \Lambda$ as follows,
\begin{equation} \label{Ti}
T_i(x_{-i},x_i)=co(\mathcal{N}_{\tilde P_i}(x_{-i},x_i)\cap S_i[0,1]),
\end{equation}
where $\mathcal{N}_{\tilde P_i}:X_{-i}\times X_i\rightrightarrows \mathbb{R}^{n_i}$ is defined as (\ref{normal}) and $S_i[0,1]=\{x\in \mathbb{R}^{n_i}|\,\norm{x}=1\}$. Further, suppose $T:X\rightrightarrows \mathbb{R}^n$ is defined as,
\begin{equation}
T(x)=\prod_{i\in \Lambda}T_i(x).\label{T}
\end{equation}
Then, clearly, the map $T$ is a convex- and compact-valued map.
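To illustrate the construction of $T$, consider the toy data $\Lambda=\{1,2\}$, $X_1=X_2=\mathbb{R}$ and $P_1(x)=\big(x_2-|x_1-x_2|,\,x_2+|x_1-x_2|\big)$, which is already convex, so that $\tilde P_1=P_1$ and $S_1[0,1]=\{-1,1\}$. If $x_1>x_2$, then $x_1$ is the right endpoint of the interval $\tilde P_1(x)$, hence $\mathcal{N}_{\tilde P_1}(x)=[0,\infty)$ and $T_1(x)=co\big([0,\infty)\cap \{-1,1\}\big)=\{1\}$; similarly, $T_1(x)=\{-1\}$ if $x_1<x_2$. If $x_1=x_2$, then $\tilde P_1(x)=\emptyset$, so $\mathcal{N}_{\tilde P_1}(x)=\mathbb{R}$ and $T_1(x)=co\{-1,1\}=[-1,1]$.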
Now, we provide the sufficient conditions on the preference relations $P_i$ under which the map $T$ becomes upper semi-continuous and non-empty valued.
\begin{lemma} \label{usc}
Suppose $T:X\rightrightarrows \mathbb{R}^n$ is a multi-valued map as defined in (\ref{T}). If for each $i\in \Lambda$,
\begin{itemize}
\item[(a)] $P_i$ is a lower semi-continuous map, then $T$ is an upper semi-continuous map;
\item[(b)] $P_i$ satisfies $x_i\notin co (P_i(x))$ for any $x\in X$, then $T$ admits non-empty values.
\end{itemize}
\end{lemma}
\begin{proof}
\begin{itemize}
\item[(a)] Suppose $i\in \Lambda$ is arbitrary. Then the map $\tilde P_i$ is lower semi-continuous according to \cite[Theorem 5.9]{rockafellar}. It is easy to observe that $\mathcal{N}_{\tilde P_i}:X_{-i}\times X_i\rightrightarrows \mathbb R^{n_i}$ is a closed map as per Proposition \ref{closed} (b). Hence, the map $T_i:X \rightrightarrows \mathbb{R}^{n_i}$ defined as (\ref{Ti}) is also closed. Finally, the fact that $T_i(x)\subseteq \bar B_i(0,1)$ for each $x\in X$ implies that $T_i$ is a u.s.c. map. This leads us to the conclusion that $T=\prod_{i\in \Lambda} T_i$ is a u.s.c. map.
\item[(b)] Suppose $x\in X$ is arbitrary. According to Proposition \ref{closed}, the map $\mathcal{N}_{\tilde P_i}:X_{-i}\times X_i\rightrightarrows \mathbb R^{n_i}$ satisfies $\mathcal{N}_{\tilde P_i}(x_{-i},x_i)\setminus \{0\}\neq \emptyset$. Suppose $z_i^*\in \mathcal{N}_{\tilde P_i}(x_{-i},x_i)\setminus \{0\}$. Then, clearly $\frac{z_i^*}{\norm{z_i^*}}\in \mathcal{N}_{\tilde P_i}(x_{-i},x_i) \cap S_i[0,1]$. Hence, $T(x)=\prod_{i\in \Lambda} T_i(x)\neq \emptyset$ for any $x\in X$.
\end{itemize}
\end{proof}
Now, we provide the sufficient conditions under which any solution of a variational inequality solves the jointly convex GNEP, $\Gamma'=(\mathcal{X}, K_i, P_i)_{i\in \Lambda}$.
\begin{theorem}\label{VI}
Assume that $\mathcal X\subseteq \mathbb{R}^n$ is non-empty and the map $P_i:X\rightrightarrows X_i$ has open upper sections for any $i\in \Lambda$. Suppose the map $T:X \rightrightarrows \mathbb{R}^n$ is defined as (\ref{T}) and the map $K_i:X \rightrightarrows X_i$ is defined as (\ref{constraint}). Then, any solution of $VI(T,\mathcal X)$ is an equilibrium for $\Gamma'=(\mathcal{X}, K_i, P_i)_{i\in \Lambda}$.
\end{theorem}
\begin{proof}
Suppose $\tilde x\in \mathcal X$ solves $VI(T,\mathcal X)$. Then,
\begin{equation}\label{equ}
\exists~\tilde x^*\in T(\tilde x), \langle \tilde x^*, y-\tilde x\rangle \geq 0~\text{for all}~y\in \mathcal X.
\end{equation}
Suppose $\tilde x$ is not an equilibrium of $\Gamma'$. Then there exists $i\in \Lambda$ such that $P_i(\tilde x) \cap K_i(\tilde x)\neq \emptyset$. Suppose $z_i\in P_i(\tilde x) \cap K_i(\tilde x)$,
then $y=(\tilde x_{-i},z_i)\in \mathcal X$ according to (\ref{constraint}). Hence, from (\ref{equ}) one can obtain,
\begin{equation} \label{ineq}
\langle \tilde x^*_i, z_i-\tilde x_i\rangle\geq 0.
\end{equation}
Since $z_i\in \tilde P_i(\tilde x)$, we observe that $\tilde x_i^*=0$ by virtue of Proposition \ref{relation} as $\tilde x_i^*\in co(\mathcal N_{\tilde P_i}(\tilde x)\cap S_i[0,1])\subseteq \mathcal N_{\tilde P_i}(\tilde x)$ fulfills (\ref{ineq}).
Again, using the fact that $\tilde x_i^*\in co(\mathcal N_{\tilde P_i}(\tilde x)\cap S_i[0,1])$, one can obtain $x_{1i}^*,\cdots, x_{ri}^*\in \mathcal N_{\tilde P_i}(\tilde x)\cap S_i[0,1]$ with $\lambda_1,\cdots, \lambda_r\in [0,1]$ satisfying $\sum_{k=1}^{r}\lambda_k=1$ and,
$$0= \tilde x_i^*=\sum_{k=1}^{r}\lambda_k x_{ki}^*.$$
Suppose we have $k_\circ\in \{1,\cdots, r\}$ with $\lambda_{k_\circ} >0$. Then,
$$ -x_{k_\circ i}^*=\sum_{k\neq k_\circ,k=1}^{r} \frac{\lambda_k}{\lambda_{k_\circ}} x^*_{ki}.$$
We get $-x_{k_\circ i}^*\in \mathcal{N}_{\tilde P_i}(\tilde x)$ as it is a convex cone. Hence, $x_{k_\circ i}^*\in \mathcal{N}_{\tilde P_i}(\tilde x)\cap -\mathcal{N}_{\tilde P_i}(\tilde x)$. However, $\tilde P_i(\tilde x)$ is an open set due to our assumptions on $P_i$ and it also contains $z_i$. Therefore, $\mathcal{N}_{\tilde P_i}(\tilde x)\cap -\mathcal{N}_{\tilde P_i}(\tilde x)=\{0\}=\{x_{k_\circ i}^*\}$ by \cite[Lemma 2.1]{cotrina}. But, this contradicts the fact that $x_{k_\circ i}^*\in S_i[0,1]$. Consequently, our hypothesis is false and $\tilde x$ becomes an equilibrium for the game $\Gamma'$.
\end{proof}
Finally, we provide the sufficient conditions required for the occurrence of equilibrium for jointly convex GNEP $\Gamma'=(\mathcal{X}, K_i, P_i)_{i\in \Lambda}$ by employing the above characterization of $\Gamma'$ in terms of variational inequality.
\begin{theorem}\label{existence}
Suppose $\mathcal X$ is a non-empty closed convex subset of $\mathbb{R}^n$ and $K_i:X\rightrightarrows X_i$ is defined as (\ref{constraint}). Then there exists a solution for $\Gamma'=(\mathcal{X}, K_i, P_i)_{i\in \Lambda}$ if,
\begin{itemize}
\item[(a)] for each $i\in \Lambda$, the map $P_i:X\rightrightarrows X_i$ is lower semi-continuous with open upper sections and satisfies $x_i\notin co(P_i(x))$ for any $x\in X$;
\item[(b)] there exists $\rho>0$ such that for each $x\in \mathcal{X}\setminus \bar B(0,\rho)$ there exists a vector $z\in \mathcal{X}$ with $\norm{z}<\norm{x}$ and $z_i\in P_i(x)$ for each $i\in \Lambda$;
\item[(c)] there exists $\rho'>\rho$ such that $\mathcal X\cap \bar B(0,\rho')\neq \emptyset$.
\end{itemize}
\end{theorem}
\begin{proof}
\textcolor{black}{
In order to prove this result, it is enough to show that $VI(T,\mathcal X)$ admits a solution as per Theorem \ref{VI}.
In view of Lemma \ref{usc}, we know that $T$ is a u.s.c. map with non-empty, convex and compact values. Further, the set $\mathcal{X}_{\rho'}=\mathcal{X}\cap \bar B(0,\rho')$ is non-empty, convex and compact.
Therefore, by using \cite[Theorem 9.9]{aubin} we obtain $\tilde x\in \mathcal X_{\rho'}$ and $\tilde x^*\in T(\tilde x)$ which satisfy,}
\begin{equation}\label{VIeq}
\langle \tilde x^*, y-\tilde x \rangle \geq 0~\text{for each}~y\in \mathcal{X}_{\rho'}.
\end{equation}
We intend to show that inequality (\ref{VIeq}) indeed holds for each $z\in \mathcal{X}$, that is,
\begin{equation}\label{VIunbound}
\langle \tilde x^*, z-\tilde x \rangle \geq 0~\text{for each}~z\in \mathcal{X}.
\end{equation}
In this regard, one can observe that there exists $z^\circ \in \mathcal X\cap B(0,\rho')$ such that,
\begin{equation}\label{VIeqa}
\langle x^*,z^\circ-\tilde x\rangle\leq 0~\text{for each}~x^*\in T(\tilde x).
\end{equation}
In fact, one can consider $z^\circ=\tilde x$ if $\norm{\tilde x}<\rho'$. Otherwise, if $\norm{\tilde x}=\rho'>\rho$, then we obtain some $z^\circ\in\mathcal{X}$ with $\norm{z^\circ}<\rho'$ and $z^\circ_i\in P_i(\tilde x)$ due to hypothesis (b). We know that $\mathcal N_{P_i}(x_{-i},x_i)=\{x_i^*|\,\langle x_i^*, y_i-x_i\rangle\leq 0,~\forall\,y_i\in P_i(x_{-i},x_i)\}$ as per (\ref{normal}). Therefore, the fact that $z^\circ_i\in P_i(\tilde x)$ for any $i\in \Lambda$ implies,
\begin{equation}
\langle x_i^*, z^\circ_i-\tilde x_i\rangle \leq 0,~\text{for all}~x_i^*\in \mathcal N_{P_i}(\tilde x_{-i}, \tilde x_i).
\end{equation}
Since $T_i(\tilde x)\subseteq \mathcal{N}_{P_i}(\tilde x)$, we observe that (\ref{VIeqa}) holds true.
To prove (\ref{VIunbound}), suppose $z\in \mathcal{X}\setminus \bar B(0,\rho')$ is arbitrary. Since $\mathcal X$ is convex and $z^\circ \in \mathcal X\cap B(0,\rho')$, we obtain some $t\in (0,1)$ such that $tz+(1-t)z^\circ\in \mathcal{X}_{\rho'}$. Hence, in view of (\ref{VIeq}) we have,
\begin{equation}\label{VIeqb}
\langle \tilde x^*, tz+(1-t)z^\circ-\tilde x \rangle \geq 0.
\end{equation}
We observe that inequality (\ref{VIunbound}) follows on combining inequality (\ref{VIeqb}) with (\ref{VIeqa}). Finally, the vector $\tilde x\in\mathcal{X}$, which solves $VI(T,\mathcal{X})$ by (\ref{VIunbound}), is an equilibrium for the jointly convex GNEP $\Gamma'$ by virtue of Theorem \ref{VI}.
\end{proof}
\begin{remark}\label{boundedVI}
\begin{itemize}
\item[(i)] If $\mathcal{X}$ is bounded, the coercivity criterion in hypotheses (b) and (c) becomes true by assuming that $\rho>\sup \{\norm{x}\,|\,x\in \mathcal{X}\}$.
\item[(ii)] The coercivity criterion assumed by us in hypotheses (b) and (c) is motivated by the coercivity criterion for variational inequalities with unbounded constraint sets \cite[Theorem 2.1]{aussel-hadj}.
\item[(iii)] The coercivity criterion assumed in hypothesis (b) can be interpreted as follows: there exists $\rho>0$ such that for any vector $x=(x_{-i},x_i)$ with norm larger than $\rho$ there exists a vector $z=(z_{-i},z_i)$ whose norm is less than the norm of $x$ and such that every player $i$ prefers $z_i$ over $x_i$. It is easy to notice that the coercivity condition in hypothesis (b) is satisfied if the players can always find preferable strategies in some bounded subset of $\mathcal{X}$.
\end{itemize}
\end{remark}
\subsection{GNEP with Preference maps and Quasi-variational Inequalities}
In this section, we state suitable conditions under which any solution of a QVI problem becomes a generalized Nash equilibrium for the GNEP having non-ordered non-convex inter-dependent preferences \cite{shafer,tian,yannelis}. Moreover, we investigate the sufficient conditions required for the solvability of such games.
First of all, let us demonstrate that any solution of a quasi-variational inequality is an equilibrium for the considered generalized Nash game $\Gamma$ under suitable conditions.
\begin{theorem}\label{equivalence}
Assume that the following assumptions hold for any $i\in \Lambda$:
\begin{itemize}
\item [(a)] $K_i:X\rightrightarrows X_i$ admits non-empty values;
\item[(b)] $P_i:X\rightrightarrows X_i$ has open upper sections.
\end{itemize}
Suppose the map $T$ is defined as (\ref{T}) and the map $K:X\rightrightarrows X$ is defined as $K(x)=\prod_{i\in \Lambda} K_i(x)$. Then, any solution of $QVI(T,K)$ is an equilibrium for $\Gamma =(X_i,K_i,P_i)_{i\in \Lambda}$.
\end{theorem}
\begin{proof}
Suppose $\tilde x$ solves $QVI(T,K)$. Then $\tilde x\in K(\tilde x)$ satisfies,
\begin{equation}\label{GNEPQVI}
\exists~\tilde x^*\in T(\tilde x), \langle \tilde x^*, y-\tilde x\rangle \geq 0~\text{for all}~ y\in K(\tilde x).
\end{equation}
Let us assume that $\tilde x$ is not an equilibrium for the given generalized Nash game $\Gamma$. Then, $co(P_i(\tilde x))\cap K_i(\tilde x)\neq \emptyset$ for some $i\in \Lambda$. Suppose $z_i\in co(P_i(\tilde x))\cap K_i(\tilde x)$. Then, clearly $y=(\tilde x_{-i}, z_i)\in K(\tilde x)$ and from (\ref{GNEPQVI}) we obtain,
\begin{equation}\label{rel}
\langle \tilde x_i^*, z_i-\tilde x_i\rangle \geq 0.
\end{equation}
By virtue of Proposition \ref{relation}, one can observe that $\tilde x_i^*=0$ because $\tilde x_i^*\in T_i(\tilde x) \subseteq \mathcal N_{\tilde P_i}(\tilde x)$ fulfills (\ref{rel}).
Since $0=\tilde x_i^*\in co(\mathcal N_{\tilde P_i}(\tilde x)\cap S_i[0,1])$, one can obtain $x_{1i}^*,\cdots, x_{ri}^*\in \mathcal N_{\tilde P_i}(\tilde x)\cap S_i[0,1]$ with $\lambda_1,\cdots, \lambda_r\in [0,1]$ satisfying $\sum_{k=1}^{r}\lambda_k=1$ and $0= \tilde x_i^*=\sum_{k=1}^{r}\lambda_k x_{ki}^*.$ By following the argument presented in the proof of Theorem \ref{VI}, one can easily obtain a contradiction to the fact that $x_{ki}^*\in S_i[0,1]$ for each $k$.
\end{proof}
One can notice the practical importance of GNEPs with non-ordered inter-dependent preferences and unbounded constraint maps by referring to \cite{yannelis,yannelis2022}, where the case of Walrasian equilibrium problems has recently been considered. In an existing work on GNEPs with non-ordered inter-dependent preferences by Tian \cite{tian}, the GNEP with unbounded strategy sets but compact-valued strategy maps is studied. In particular, Tian \cite[Theorem 2]{tian} employed a coercivity condition to ensure the occurrence of equilibrium for such a GNEP by using a fixed point result.
Our aim is to derive the occurrence of equilibrium for a GNEP in which the players have non-ordered inter-dependent preferences, unbounded strategy sets and non-compact-valued strategy maps by using variational techniques. For this purpose, we will employ the following coercivity criterion: Suppose $P_i:X\rightrightarrows X_i$ and $K_i:X\rightrightarrows X_i$ denote the preference and strategy maps of player $i$, respectively. Let $K:X\rightrightarrows X$ be formed as $K(x)=\prod_{i\in \Lambda} K_i(x)$. Then, the coercivity criterion ($\mathcal{C}_x$) holds at a point $x\in X$ if,
\begin{align*}
(\mathcal{C}_x): \enspace \exists\,\rho_x & >0~\text{such that}~\forall\,y\in K(x)\setminus\bar B(0,\rho_x),\exists\, z\in K(x)\\& \quad \text{with}~\norm{z} <\norm{y}~\text{and}~z_i\in P_i(y)~\forall\,i\in\Lambda.
\end{align*}
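As a toy illustration of ($\mathcal{C}_x$) with an unbounded constraint map, take a single player with $X_1=\mathbb{R}$, $K_1(x)=\mathbb{R}$ and $P_1(x)=(-|x_1|,|x_1|)$. Choosing $\rho_x=1$, every $y\in K(x)$ with $\norm{y}>1$ admits $z=0\in K(x)$ with $\norm{z}<\norm{y}$ and $z\in P_1(y)$, so ($\mathcal{C}_x$) holds at every $x\in X$ although $K_1$ is not compact valued; the unique equilibrium of this game is $\tilde x=0$, since $P_1(\tilde x)\cap K_1(\tilde x)=\emptyset$ exactly when $\tilde x=0$.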
\begin{remark}
\begin{itemize}
\item[(i)] For any $x\in X$, if $K_i(x)$ is bounded for each $i\in \Lambda$ then the coercivity criterion ($\mathcal{C}_x$) is fulfilled by assuming that $\rho_x> \sup \{\norm{y}\,|\,y\in K(x)\}$.
\item[(ii)] One can observe that the coercivity criterion ($\mathcal{C}_x$) is comparable with the existing coercivity criterion for GNEPs having a numerical representation of preferences (as described in Remark \ref{numGNEP}). In fact, on strengthening the \textit{coerciveness condition} (1) in \cite{cotrinatime} to the following form: for any $x\in X$ there exists a non-empty convex compact subset $H_x$ such that,
\begin{align}
\text{for any}~y\in K(x)\setminus & H_x~\text{there exists}~ z\in K(x)\cap H_x~\text{satisfying,}\notag\\&u_i(y_{-i},z_i)> u_i(y_{-i},y_i);\label{numGNEP1}
\end{align}
we observe that ($\mathcal{C}_x$) is fulfilled if (\ref{numGNEP1}) holds. One can verify this by simply assuming $\rho_x>\sup \{\norm{y}\,|\,y\in H_x\}$.
\end{itemize}
\end{remark}
Finally, we check the solvability of the generalized Nash equilibrium problem $\Gamma= (X_i,K_i,P_i)_{i\in\Lambda}$ by employing the characterization of $\Gamma$ in terms of quasi-variational inequality derived in Theorem \ref{equivalence}.
\begin{theorem} \label{QVImainresult}
Suppose that the following assumptions hold for any $i\in \Lambda$:
\begin{itemize}
\item[(a)] $X_i$ is a non-empty, closed and convex subset of $\mathbb{R}^{n_i}$;
\item[(b)] $K_i$ is closed and lower semi-continuous map with $K_i(x)$ being non-empty convex for any $x\in X$;
\item[(c)] $P_i$ is lower semi-continuous map with open upper sections and $x_i\notin co(P_i(x))$ for any $x\in X$.
\end{itemize}
Then, there exists an equilibrium for $\Gamma =(X_i,K_i,P_i)_{i\in\Lambda}$ if the coercivity criterion ($\mathcal{C}_x$) holds at any $x\in X$ and there exists $\rho>\sup\{\rho_x\,|\,x\in X\}$ such that $(\prod_{i\in \Lambda} K_i(x))\cap \bar B(0,\rho)\neq \emptyset$ for each $x\in X$.
\end{theorem}
\begin{proof}
In order to prove this result, it is enough to show that $QVI(T,K)$ admits a solution according to Theorem \ref{equivalence}. Suppose $X_\rho =X\cap \bar B(0,\rho)$ and the map $K_\rho:X_\rho\rightrightarrows X_\rho$ is formed as $K_\rho (x)= K(x)\cap \bar B(0,\rho)$ where $K(x)=\prod_{i\in \Lambda} K_i(x)$.
We claim that $K_\rho$ is a lower semi-continuous map.
In fact, one can observe that $K(x)\cap \bar B(0,\rho)\neq \emptyset$ as per our hypothesis. Hence, there exists some $z\in K(x)\cap B(0,\rho)$ due to the coercivity criterion ($\mathcal{C}_x$). Then, we observe that $K_\rho$ is a lower semi-continuous map as per \cite[Lemma 2.3]{cotrinaeq}. Consequently, $K_\rho$ is a closed and lower semi-continuous map and $K_\rho (x)$ is non-empty, convex and compact for any $x\in X$.
According to Lemma \ref{usc}, $T:X_\rho \rightrightarrows \mathbb{R}^n$ is a u.s.c. map with non-empty convex and compact values. As per \cite[Corollary(Theorem 3)]{tan}, we obtain some $\tilde x\in K_\rho (\tilde x)$ such that,
\begin{equation}\label{coerc}
\text{there exists}~\tilde x^*\in T(\tilde x),\langle\tilde x^*,y-\tilde x\rangle \geq 0,~\text{for all}~y\in K_\rho(\tilde x).
\end{equation}
We claim that $\tilde x$ is a solution of $QVI(T,K)$, that is,
\begin{equation}\label{coere}
\text{there exists}~\tilde x^*\in T(\tilde x),\langle\tilde x^*,y-\tilde x\rangle \geq 0,~\text{for all}~y\in K(\tilde x).
\end{equation}
In this regard, we first show that there exists $z\in K(x)\cap B(0,\rho)$ such that,
\begin{equation}\label{coera}
\langle x^*,z-\tilde x\rangle \leq 0,~\text{for all}~x^*\in T(\tilde x).
\end{equation}
Since $\tilde x\in K_\rho(\tilde x)$, one can observe that (\ref{coera}) holds true if $\norm{\tilde x}<\rho$ (take $z=\tilde x$). On the other hand, in the case $\norm{\tilde x}=\rho$, we have $\tilde x\in K(\tilde x)\setminus \bar B(0,\rho_{\tilde x})$. Hence, there exists $z\in K(\tilde x)$ with $\norm z<\rho$ and $z_i\in P_i(\tilde x)$ for all $i\in \Lambda$. Then, for any $i\in \Lambda$ we obtain,
\begin{equation}\label{coerb}
\langle x_i^*, z_i-\tilde x_i\rangle \leq 0,~\text{for all}~x_i^*\in \mathcal N_{P_i}(\tilde x_{-i},\tilde x_i)
\end{equation}
by the definition of $\mathcal N_{P_i}$ (see (\ref{normal})). Since $T_i(\tilde x)\subseteq \mathcal N_{P_i}(\tilde x)$, we observe that (\ref{coera}) holds true. Finally, for any $y\in K(\tilde x)\setminus\bar B(0,\rho)$ we obtain some $t\in(0,1)$ such that $ty+(1-t)z\in K(\tilde x)\cap \bar B(0,\rho)$ by using the convexity of $K(\tilde x)$. Hence, in view of (\ref{coerc}) we observe,
\begin{equation}\label{coerd}
\langle\tilde x^*,ty+(1-t)z-\tilde x \rangle\geq 0.
\end{equation}
We observe that inequality (\ref{coere}) follows on combining (\ref{coerd}) with (\ref{coera}). Finally, the vector $\tilde x$, which solves $QVI(T,K)$ by (\ref{coere}), is an equilibrium for the given GNEP $\Gamma$ by virtue of Theorem \ref{equivalence}.
\end{proof}
\textcolor{black}{We observe that the coercivity condition $(\mathcal{C}_x)$ holds for any $x\in X$ if $X_i$ is bounded for any $i\in \Lambda$. Hence, we obtain the following result as a corollary of Theorem \ref{QVImainresult}.
\begin{corollary}\label{QVIcoro}
Suppose that the following assumptions hold for any $i\in \Lambda$:
\begin{itemize}
\item[(a)] $X_i$ is a non-empty, compact and convex subset of $\mathbb{R}^{n_i}$;
\item[(b)] $K_i$ is closed and lower semi-continuous map with $K_i(x)$ being non-empty convex for any $x\in X$;
\item[(c)] $P_i$ is lower semi-continuous map with open upper sections and $x_i\notin co(P_i(x))$ for any $x\in X$.
\end{itemize}
Then, there exists an equilibrium for $\Gamma =(X_i,K_i,P_i)_{i\in\Lambda}$.
\end{corollary}}
\begin{remark}
\begin{itemize}
\item[(i)] One can observe that \cite[Theorem 1]{shafer} and \cite[Theorem 1]{tian} follow from the above-stated Corollary \ref{QVIcoro}.
\item[(ii)] The coercivity criterion assumed by us in the above theorem is motivated by the coercivity criterion for quasi-variational inequalities with unbounded constraint maps in \cite{ausselcoer}.
\end{itemize}
\end{remark}
\section{GNEP with Preference Maps in terms of Binary Relations}\label{secbinary}
In this section, we establish the existence of equilibrium for the GNEP in which the preferences of the players are expressed in terms of binary relations.
Let us assume that the binary relation `$\succ_i$' characterizes the preferences of the $i^{th}$ player over the set $X$. For any given strategy vector $x_{-i}\in \prod_{j\in (\Lambda\setminus\{i\})} X_j$ of the rival players, whenever player $i$ strictly prefers some strategy $y_i\in X_i$ over $x_i$, we represent this situation as $(x_{-i},y_i)\succ_i (x_{-i},x_i)$ \cite{bade}. Clearly, the preference map $P_i:X\rightrightarrows X_i$ can be defined in terms of the binary relation as,
\begin{equation}\label{pref}
P_i(x_{-i},x_i)=\{y_i\in X_i\,|\, (x_{-i},y_i)\succ_i (x_{-i},x_i)\}.
\end{equation}
Moreover, the definition discussed in Section 3 can be given in terms of binary relations as follows: Suppose the $N$ ordered triples $\Gamma=(X_i,K_i,\succ_i)_{i\in \Lambda}$ represent the generalized Nash game defined by preference relations. A vector $\tilde x\in X$ is known as an equilibrium for the game $\Gamma$ if $\tilde x\in K_i(\tilde x)$ and, for each $i\in \Lambda$, there is no action $y_i\in K_i(\tilde x)$ such that $(\tilde x_{-i},y_i)\succ_i (\tilde x_{-i},\tilde x_i)$ \cite{bade}.
Let us recollect some important properties of preference relation from \cite{kreps}.
\begin{definition}
Suppose $\succ_i$ denotes the preference relation over the set $X$ \cite{debreubook,krepsweak,milasipref2021}. Then $\succ_i$ is known as,
\begin{itemize}
\item[(a)] irreflexive if there is no such $x\in X$ for which we have $x\succ_i x$;
\item[(b)] asymmetric if there is no $x,y\in X$ such that $x\succ_i y$ and $y\succ_i x$;
\item[(c)] negatively transitive if, for any $x,y,z \in X$ with $x\succ_i y$, either $x\succ_i z$ or $z\succ_i y$ (or both) holds.
\end{itemize}
\end{definition}
It is well known that a strict preference relation $\succ_i$ induces a weak preference relation `$\succeq_i$' \cite{milasipref2021, kreps}. For any $x,y\in X$, we have $x\succeq_i y$, that is, player $i$ weakly prefers $x$ over $y$, if $y\succ_i x$ is not the case. Furthermore, if the strict preference relation $\succ_i$ is asymmetric and negatively transitive, then the corresponding weak preference relation,
\begin{itemize}
\item[(a)] $\succeq_i$ is complete: For any $x,y\in X$ either $x\succeq_i y$ or $y\succeq_i x$;
\item[(b)] $\succeq_i$ is transitive: For any $x,y,z\in X$ if $x\succeq_i z$ and $z\succeq_i y$ then $x\succeq_i y$,
\end{itemize}
respectively. We say that the preference relation $\succeq_i$ (or $\succ_i$) is non-ordered if it fails to be complete and transitive (respectively, fails to be asymmetric and negatively transitive).
Let us discuss the continuity and convexity properties of preference relations which play an important role in establishing the existence of equilibrium for the proposed games.
\begin{itemize}
\item[(a)] $\succ_i$ is said to be l.s.c. if the set $\{x\in X\,|\, x\succ_i y\}$ is open for any $y\in X$ and it is said to be u.s.c. if the set $\{x\in X\,|\, y\succ_i x\}$ is open for any $y\in X$;
\item[(b)] $\succ_i$ is said to be continuous if it is both u.s.c. and l.s.c. Alternatively, `$\succ_i$' is continuous iff the induced weak preference relation `$\succeq_i$' satisfies $x_n\succeq_i y_n,~\forall n\implies x\succeq_i y$ for any sequences $\{x_n\}$ and $\{y_n\}$ converging to $x$ and $y$ respectively;
\item[(c)] $\succ_i$ is convex if for any $x,y,z\in X$ satisfying $x\succ_i z$ and $y\succ_i z$ we have $tx+(1-t)y\succ_i z$ for each $t\in (0,1)$;
\item[(d)] $\succ_i$ is said to be non-satiated if for any $x\in X$ there exists $z\in X$ satisfying $z\succ_i x$.
\end{itemize}
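For instance, the strict Pareto order on $X\subseteq \mathbb{R}^2$ given by $y\succ_i x$ if and only if $y_1>x_1$ and $y_2>x_2$ is irreflexive and convex, and it is continuous since both $\{x\in X|\, x\succ_i y\}$ and $\{x\in X|\, y\succ_i x\}$ are open. However, it is not negatively transitive: for $x=(2,2)$, $y=(0,0)$ and $z=(3,-1)$ we have $x\succ_i y$, while neither $x\succ_i z$ nor $z\succ_i y$ holds, so the induced weak relation $\succeq_i$ fails to be transitive. Such a relation nevertheless satisfies the irreflexivity, convexity and continuity assumptions imposed in the results below.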
The following result will be useful in proving the upcoming existence results on generalized Nash equilibrium problems with preference maps given in terms of binary relations:
\begin{proposition} \label{binary}
Assume that $\succ_i$ is the strict preference relation of player $i$ defined over $X$. Suppose the map $P_i:X\rightrightarrows X_i$ is defined as (\ref{pref}). Then,
\begin{itemize}
\item[(a)] $x_i\notin P_i(x)$ for any $x\in X$ if $\succ_i$ is irreflexive;
\item[(b)] $P_i(x)\neq \emptyset$ for any $x\in X$ if $\succ_i$ is non-satiated on $X$;
\item[(c)] $P_i$ admits open upper sections if $\succ_i$ is lower semi-continuous;
\item[(d)] $P_i$ is lower semi-continuous if $\succ_i$ is continuous.
\end{itemize}
\end{proposition}
\begin{proof}
\begin{itemize}
\item[(a)] Since $\succ_i$ is irreflexive, we cannot have $(x_{-i},x_i)\succ_i(x_{-i},x_i)$ for any $x\in X$. Hence, $x_i\notin P_i(x)$ for any $x\in X$.
\item[(b)] It follows from definition directly.
\item[(c)] Suppose $x\in X$ is arbitrary. If $P_i(x)=\emptyset$ then the case follows trivially. Suppose $y_i\in P_i(x)$ then we claim that for any sequence $\{(y_i)_n\}$ converging to $y_i$ there exists some $n_\circ \in \mathbb{N}$ such that $(y_i)_n\in P_i(x)$ for all $n\geq n_\circ$. In fact, $y_i\in P_i(x)$ implies $(x_{-i},y_i)\succ_i(x_{-i},x_i)$. Hence, on taking $y=(x_{-i},y_i)$ and $y_n=(x_{-i},(y_i)_n)$ we observe that the sequence $\{y_n\}$ converges to $y$. Now, by using lower semi-continuity of $\succ_i$ we get some $n_\circ\in \mathbb{N}$ such that $y_n\succ_i x$ for all $n\geq n_\circ$. Hence, $(y_i)_n\in P_i(x)$ for all $n\geq n_\circ$.
\item[(d)] Suppose $P_i$ is not lower semi-continuous at some $x=(x_{-i},x_i)\in X$. Then there exists an open set $V_i\subset X_i$ with $P_i(x_{-i},x_i)\cap V_i\neq \emptyset$ and, for each $n\in \mathbb{N}$, there exists $x_n\in B(x,\frac{1}{n})$ such that $P_i(x_n)\cap V_i=\emptyset$. Consequently,
\begin{equation} \label{cont}
((x_{-i})_n,(x_i)_n) \succeq_i ((x_{-i})_n,z_i)~\forall\,n\in \mathbb{N},~\forall\,z_i\in V_i.
\end{equation}
Suppose $z_i\in V_i$ is arbitrary. Assume that $z_n=((x_{-i})_n, z_i)$ and $z=(x_{-i},z_i)$. Clearly, the sequence $\{z_n\}$ converges to $z$. Now, in view of (\ref{cont}), we get $x\succeq_i z$ by using the continuity of $\succ_i$. Since $z_i$ was chosen arbitrarily, we observe that $(x_{-i},x_i) \succeq_i (x_{-i},z_i)$ holds for all $z_i\in V_i$.
But, this contradicts $P_i(x_{-i},x_i)\cap V_i\neq \emptyset$. Finally, we affirm that $P_i$ is l.s.c.
\end{itemize}
\end{proof}
The following result ensures the occurrence of equilibrium for the jointly convex GNEP in which the preference relations of the players are incomplete and non-transitive. It is obtained by combining Theorem \ref{existence}, Remark \ref{boundedVI}(i) and Proposition \ref{binary}.
\begin{theorem}
Assume that $\mathcal X$ is a convex, compact and non-empty subset of $\mathbb{R}^n$. Suppose that, for any $i\in \Lambda$, the binary relation $\succ_i$ is irreflexive, convex and continuous over the set $X$. Suppose $K_i:X\rightrightarrows X_i$ is defined as (\ref{constraint}). Then, there exists a solution for $\Gamma'=(\mathcal X,K_i,\succ_i)_{i\in \Lambda}$.
\end{theorem}
Furthermore, the following result proves the occurrence of equilibrium for generalized Nash equilibrium problems in which the preference relations of players are incomplete and non-transitive. It is obtained by combining Corollary \ref{QVIcoro} and Proposition \ref{binary}.
\begin{theorem}
Suppose that the following assumptions hold for any $i\in \Lambda$:
\begin{itemize}
\item[(a)] $X_i$ is a non-empty, convex and compact subset of $\mathbb{R}^{n_i}$;
\item[(b)] $K_i:X\rightrightarrows X_i$ is closed and lower semi-continuous map with $K_i(x)$ being non-empty convex set for any $x\in X$;
\item[(c)] $\succ_i$ is irreflexive, convex and continuous over the set $X$.
\end{itemize}
Then, there exists a solution for $\Gamma=(X_i,K_i,\succ_i)_{i\in \Lambda}$.
\end{theorem}
\section{Applications}
In this section, we apply the theoretical results established above to demonstrate the existence of equilibrium for the Arrow-Debreu market economy under uncertainty \cite{debreubook}.
These problems have recently been studied in \cite{milasipref, milasipref2021} by using a variational approach. It is worth noticing that Milasi et al. in \cite{milasipref, milasipref2021} considered economic equilibrium problems under uncertainty in which the preferences of agents are convex (or semi-strictly convex), incomplete and independent of the strategies chosen by other agents. We check the solvability of the economic equilibrium problem under uncertainty in which the preferences of agents are non-convex, non-ordered, price-dependent and inter-dependent.
Consider an economy consisting of a finite number of agents ($I$ consumers and $J$ producers) actively trading $L$ commodities. Suppose $\mathcal I=\{1,2,\cdots I\}$, $\mathcal J=\{1,2,\cdots J\}$ and $\mathcal{L}=\{1,2,\cdots L\}$ denote the sets of consumers, producers and commodities, respectively. Assume that these agents trade in two periods of time, $t=0$ (today) and $t=1$ (tomorrow). The market at $t=0$ is represented by the single state $\mathcal S_0=\{0\}$, while the uncertainty at $t=1$ is represented by the states which may take place tomorrow. Suppose the set $\mathcal S_1=\{1,2,\cdots s_1\}$ consists of all states which can possibly occur at $t=1$.
Then, the set $\mathcal S=\mathcal S_0\cup \mathcal S_1$ consists of $S=s_1+1$ states. We assume that $\mathcal E=\{\zeta_0,\zeta_1,\cdots \zeta_{s_1}\}$ represents the set of all possible situations occurring in these states.
In this economy with uncertainty, we assume that the evolution of the market is described by an oriented graph $\mathcal G$ with nodes $\mathcal E$ and root $\zeta_0$. Every node $\zeta_s\in\mathcal{E}$ in this graph indicates the \textit{contingency} plan of the market in the state $s$. A state-contingent commodity $a_i^l(\zeta_s)\geq 0$
indicates the quantity of physical commodity $l\in \mathcal{L}$ received by consumer $i\in \mathcal{I}$ if the state $s\in \mathcal{S}$ occurs. Thus, $a_i=((a_i^l(\zeta_s))_{l\in \mathcal{L}})_{s\in \mathcal{S}}$ represents the state-contingent commodity vector of consumer $i$ in the consumption set $A_i\subseteq \mathbb{R}_+^{H}$, where $H=L(s_1+1)$. We indicate $b_j^l(\zeta_s)\geq 0$ (or $\leq 0$) as the output (or input) of the commodity $l\in\mathcal{L}$ by a production unit $j\in \mathcal{J}$ if the state $s\in \mathcal{S}$ occurs. Thus, a vector $b_j=((b_j^l(\zeta_s))_{l\in \mathcal{L}})_{s\in \mathcal{S}}$ denotes the state-contingent commodity vector of producer $j$ in the set of possible production plans $B_j\subseteq \mathbb{R}^{H}$. Moreover, $p=((p^l(\zeta_s))_{l\in \mathcal{L}})_{s\in \mathcal{S}}\in \mathbb{R}^{H}$ is the system of prices for the state-contingent commodities.
Consider that the state-contingent commodity vector $e_i=((e_i^l(\zeta_s))_{l\in \mathcal{L}})_{s\in \mathcal{S}}$ is an initial endowment for any consumer $i$, which satisfies $e_i^l(\zeta_s)> \hat x_i^l(\zeta_s)$ for some fixed $\hat x_i=((\hat x_i^l(\zeta_s))_{l\in \mathcal{L}})_{s\in \mathcal{S}}\in A_i$. In the total production, the share of any consumer $i$ is $\sum_{j\in \mathcal{J}}\theta_{ij} b_j$, where the fixed weights $\theta_{ij}$ satisfy $\sum_{i\in \mathcal {I}}\theta_{ij}=1$. Suppose that the price vectors of contingent commodities, which belong to $\mathbb{R}_+^{H}$, are normalized so that the price set becomes the simplex,
$$\Delta=\bigg\{p\in\mathbb{R}_+^{H}|\, \sum_{l\in \mathcal{L},s\in \mathcal{S}}p^l(\zeta_s)=1\bigg\}.$$
For a given price vector $p$ and production vector $b=(b_j)_{j\in \mathcal{J}}$ of contingent commodities, the total wealth of consumer $i$ is $\langle p, e_i +\sum_{j\in \mathcal{J}}\theta_{ij} b_j\rangle$. Since the expenditure of a consumer on contingent commodities cannot exceed the total available wealth, any consumer $i$ faces the following budget constraint,
$$M_i(p,b)=\bigg\{a_i\in A_i|\,\langle p,a_i\rangle\leq \langle p,e_i\rangle +\max \bigg[0,\sum_{j\in \mathcal{J}}\theta_{ij}\langle p, b_j\rangle\bigg]\bigg \}.$$
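For illustration, consider a purely hypothetical instance (not taken from the cited works): let $L=1$ and $s_1=1$, so that $\mathcal S=\{0,1\}$ and $H=2$; take the normalized price $p=(\tfrac{1}{2},\tfrac{1}{2})\in\Delta$, the endowment $e_i=(2,1)$, and suppose consumer $i$ holds no production shares. Then $\langle p,e_i\rangle=\tfrac{3}{2}$ and the budget set reduces to
$$M_i(p,b)=\Big\{a_i\in A_i\,\Big|\,\tfrac{1}{2}\,a_i^1(\zeta_0)+\tfrac{1}{2}\,a_i^1(\zeta_1)\leq \tfrac{3}{2}\Big\},$$
that is, the state-contingent consumption plans whose quantities sum to at most $3$.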
Let us denote,
$$A=\prod_{i\in \mathcal{I}} A_i~\text{and}~ B= \prod_{j\in \mathcal{J}} B_j.$$
Motivated by the Walrasian equilibrium problem considered in \cite{shafer, tian, yannelis}, we assume that every consumer $i\in \mathcal{I}$ has a preference map $\hat P_i:A\times B\times\Delta\rightrightarrows A_i$ that depends simultaneously on the choices of the other agents and on the prices of contingent commodities. Further, motivated by \cite{faccsurvey,yannelis,milasipref,cotrinatime}, we define the competitive equilibrium for the Arrow-Debreu market economy under uncertainty as follows:
\begin{definition} \label{defequili}
A vector $(\tilde a_1,\tilde a_2,\cdots,\tilde a_I,\tilde b_1,\tilde b_2,\cdots \tilde b_J)\in A\times B\subset \mathbb{R}^{H(I+J)}$ and a price system $\tilde p=((\tilde p^l(\zeta_s))_{l\in \mathcal{L}})_{s\in\mathcal{S}}\in \Delta$ together form a competitive equilibrium for the Arrow-Debreu market economy under uncertainty if the following conditions hold:
\begin{align}
\langle \tilde p,\tilde b_j\rangle =\max_{b_j\in B_j} \langle&\tilde p,b_j\rangle,~\text{for each}~j\in \mathcal{J};\\
\hat P_i(\tilde a,\tilde b,\tilde p)\cap M_i(\tilde p,&\tilde b)= \emptyset, ~\text{for each}~ i\in \mathcal{I};\label{res2a}\\
\bigg \langle \tilde p, \sum_{i\in \mathcal{I}} (\tilde a_i- e_i) -\sum_{j\in \mathcal{J}} \tilde b_j\bigg \rangle &=\max_{p\in \Delta}\, \bigg\langle p, \sum_{i\in \mathcal{I}} (\tilde a_i- e_i)-\sum_{j\in \mathcal{J}} \tilde b_j\bigg\rangle. \label{req1}
\end{align}
\end{definition}
The following lemma states that, for any contingent commodity, the number of units available for consumption cannot exceed the sum of the initial endowments and the units produced. In particular, it shows that Definition \ref{defequili} implies \cite[Definition 3.2]{milasipref}.
\begin{lemma}\label{competitive}
Assume that the set $B_j\subseteq \mathbb{R}^H$ contains the zero vector for each $j\in \mathcal{J}$. Then, we have:
\begin{itemize}
\item[(a)] $(\tilde a, \tilde b,\tilde p)$ fulfills,
\begin{equation}\label{res1}
\sum_{i\in \mathcal{I}} \tilde a_i^l(\zeta_s)\leq \sum_{j\in \mathcal{J}} \tilde b_j^l(\zeta_s) +\sum_{i\in \mathcal{I}} e_i^l(\zeta_s),~\text{for any}~l\in \mathcal{L}~\text{and}~s\in \mathcal{S};
\end{equation}
\item[(b)] $(\tilde a, \tilde b,\tilde p)$ fulfills,
\begin{equation}\label{res2}
\langle \tilde p, \sum_{i\in \mathcal{I}} (\tilde a_i- e_i)-\sum_{j\in \mathcal{J}} \tilde b_j\rangle=0
\end{equation}
if for any $i\in \mathcal{I}$, $\hat P_i$ admits non-empty convex values and open upper sections with $a_i\in cl(\hat P_i(a,b,p))$ for each $(a,b,p)\in A\times B\times \Delta$.
\end{itemize}
\end{lemma}
\begin{proof}
\begin{itemize}
\item[(a)] Since $0\in B_j$, we have $\langle \tilde p,\tilde b_j\rangle \geq 0$ for any $j\in\mathcal{J}$. Further, $\tilde a_i\in M_i(\tilde p,\tilde b)$ for each $i\in \mathcal{I}$ implies,
\begin{equation}\label{req2a}
\langle \tilde p,\tilde a_i\rangle\leq \langle \tilde p,e_i\rangle +\sum_{j\in \mathcal{J}}\theta_{ij}\langle \tilde p, \tilde b_j\rangle~\text{for any}~i\in\mathcal{I}
\end{equation}
Taking sum over $i\in\mathcal{I}$ yields,
\begin{equation} \label{req2}
\sum_{i\in\mathcal I} \langle \tilde p, \tilde a_i-e_i\rangle -\sum_{j\in\mathcal{J}} \langle\tilde p, \tilde b_j\rangle \leq 0.
\end{equation}
Now, inequality (\ref{req2}) together with equation (\ref{req1}) of Definition \ref{defequili} yields,
\begin{equation}\label{req3}
\sum_{i\in\mathcal I} \langle p, \tilde a_i-e_i\rangle -\sum_{j\in\mathcal{J}} \langle p, \tilde b_j\rangle \leq 0~\text{for any}~p\in\Delta.
\end{equation}
For some $l_\circ\in\mathcal{L}$ and $s_\circ\in\mathcal{S}$, suppose $p=(p^l(\zeta_s))$ is defined as,
\begin{equation*}
p^l(\zeta_s)=\begin{cases}
1,~l=l_\circ,s=s_\circ,\\
0,~\text{otherwise}.
\end{cases}
\end{equation*}
Then, clearly $p\in \Delta$ and by substituting $p$ in the inequality (\ref{req3}) we obtain,
$$\sum_{i\in \mathcal{I}} \tilde a_i^{l_\circ}(\zeta_{s_\circ})\leq \sum_{j\in \mathcal{J}} \tilde b_j^{l_\circ}(\zeta_{s_\circ}) +\sum_{i\in \mathcal{I}} e_i^{l_\circ}(\zeta_{s_\circ}).$$
Since $l_\circ$ and $s_\circ$ are chosen arbitrarily, inequality (\ref{res1}) follows.
\item[(b)] We claim that
\begin{equation}\label{res2c}
\langle\tilde p, \tilde a_i\rangle=\langle\tilde p, e_i\rangle+ \sum_{j\in \mathcal{J}}\theta_{ij}\langle \tilde p, \tilde b_j\rangle
\end{equation}
for each $i\in \mathcal{I}$. In fact, in view of inequality (\ref{req2a}), suppose on the contrary that
\begin{equation}\label{res2b}
\langle\tilde p, \tilde a_i\rangle<\langle\tilde p, e_i\rangle+ \sum_{j\in \mathcal{J}}\theta_{ij}\langle \tilde p, \tilde b_j\rangle
\end{equation}
holds for some $i\in \mathcal{I}$. Then, we know that $\hat P_i(\tilde a,\tilde b,\tilde p)$ is a non-empty convex open set with $\tilde a_i\in cl(\hat P_i(\tilde a,\tilde b,\tilde p))$. Hence, we obtain a sequence $\{a_i^n\}_{n\in\mathbb{N}}\subseteq \hat P_i(\tilde a,\tilde b,\tilde p)$ converging to $\tilde a_i$. According to (\ref{res2a}), $\hat P_i(\tilde a,\tilde b,\tilde p)\cap M_i(\tilde p,\tilde b)= \emptyset$, and hence $a_i^n\notin M_i(\tilde p,\tilde b)$ for any $n\in\mathbb{N}$. Thus, we have
$$\langle\tilde p, a_i^n\rangle\geq \langle\tilde p, e_i\rangle+ \sum_{j\in \mathcal{J}}\theta_{ij}\langle \tilde p, \tilde b_j\rangle~\text{for any}~n\in \mathbb{N}.$$
Since the sequence $\{a_i^n\}_{n\in\mathbb{N}}$ converges to $\tilde a_i$, passing to the limit contradicts our assumption (\ref{res2b}). Hence, the claim (\ref{res2c}) holds. Finally, equation (\ref{res2}) follows by summing (\ref{res2c}) over $i\in \mathcal{I}$.
\end{itemize}
\end{proof}
Now, we establish the existence of a competitive equilibrium for the Arrow-Debreu economy under uncertainty by using Theorem \ref{QVImainresult}.
\begin{theorem}\label{comperesult}
Assume that for any $i\in \mathcal{I}$ and $j\in \mathcal{J}$ we have,
\begin{itemize}
\item[(a)] the sets $A_i\subseteq \mathbb{R}_+^{H}$ and $B_j\subseteq \mathbb{R}^{H}$ are non-empty convex compact;
\item[(b)] $\hat P_i$ is a lower semi-continuous map with non-empty convex values;
\item[(c)] $\hat P_i$ admits open upper sections with $a_i\notin \hat P_i(a,b,p)$ but $a_i\in cl(\hat P_i(a,b,p))$ for each
$(a,b,p)\in A\times B\times \Delta$.
\end{itemize}
Then, the considered Arrow-Debreu economy under uncertainty admits at least one competitive equilibrium.
\end{theorem}
\begin{proof}
We observe that the given Arrow-Debreu economy under uncertainty is a specific instance of the generalized Nash equilibrium problem $\Gamma=(X_i,K_i,P_i)_{i\in \Lambda}$ considered in Section \ref{mainresult}. In fact, one can consider that there are $I+J+1$ players, including $I$ consumers, $J$ producers and one fictitious player \cite{yannelis}. These players regulate strategy vectors in the sets $X_i$, which are precisely given as,
\begin{equation*}
X_i=\begin{cases}
A_i,&~i\in \{1,2,\cdots I\}\\
B_{i-I},&~ i\in \{I+1, I+2,\cdots I+J\}\\
\Delta,&~i= I+J+1.
\end{cases}
\end{equation*}
Then, $x=(a,b,p)=(a_1,\cdots,a_I,b_1,\cdots,b_J,p)$ in the product set $X$ is given as:
\begin{equation*}
x_i=\begin{cases}
a_i,&~i\in\{1,2,\cdots I\}\\
b_{i-I},&~ i\in \{I+1, I+2,\cdots I+J\}\\
p,&~i= I+J+1.
\end{cases}
\end{equation*}
Further, the preference maps $P_i: X \rightrightarrows X_i$ for the ${N}=I+J+1$ players in the set $\Lambda=\{1,\cdots,I,I+1,\cdots,I+J,I+J+1\}$ are formed as follows: for the $I$ consumers we set $P_i=\hat P_i$, for the $J$ producers we form $P_j:A\times B\times \Delta\rightrightarrows B_j$ as, $$P_{j}(a,b,p)=\big\{\bar b_j\in B_j|\,\langle p,\bar b_j\rangle>\langle p,b_j\rangle\big\}$$ and for the fictitious player we form $P_{I+J+1}:A\times B\times \Delta\rightrightarrows \Delta$ as,
$$P_{I+J+1}(a,b,p)=\bigg\{ \bar p\in \Delta~\bigg|\,\bigg\langle \bar p, \sum_{i\in \mathcal{I}} (a_i- e_i) -\sum_{j\in \mathcal{J}} b_j\bigg\rangle > \bigg\langle p, \sum_{i\in \mathcal{I}} (a_i- e_i) -\sum_{j\in \mathcal{J}} b_j\bigg\rangle\bigg\}.$$
Moreover, we define the constraint map $K_i$ for these $I+J+1$ players as follows,
\begin{equation*}
K_i(a,b,p)= \begin{cases}
M_i(p,b),~& i\in\{1,2,\cdots I\}\\
B_{i-I},~& i\in \{I+1,I+2,\cdots,I+J\}\\
\Delta,~& i=I+J+1.
\end{cases}
\end{equation*}
Since $A_i$ and $B_j$ are assumed to be non-empty convex compact sets, the map $K_i$ admits non-empty convex compact values for any $i\in \Lambda$. Further, $K_i$ is lower semi-continuous and closed according to \cite{milasi2013,milasipref}.
One can easily verify that the preference map $P_i$ of any player in the set $\Lambda$ has non-empty convex values and admits open upper sections. Further, $P_i$ is a lower semi-continuous map with $x_i\notin P_i(x)$ for any $x\in A\times B\times \Delta$.
Hence, the considered GNEP $\Gamma$ admits an equilibrium as per Corollary \ref{QVIcoro}. In view of Lemma \ref{competitive}, this equilibrium is the required competitive equilibrium for the Arrow-Debreu economy under uncertainty.
\end{proof}
If the preferences of players are represented in terms of a binary relation as described in Section \ref{secbinary}, with $U_i(x)=\{y_i\in X_i|\,(x_{-i},y_i)\succ_i (x_{-i},x_i)\}$ denoting the induced preference set for any $x\in\prod_{i\in \Lambda} X_i$, then we can ensure the existence of a competitive equilibrium by combining Theorem \ref{comperesult} with Proposition \ref{binary}.
\begin{corollary}\label{corol}
Assume that for any $i\in \mathcal{I}$ and $j\in\mathcal{J}$ we have,
\begin{itemize}
\item[(a)] the sets $A_i\subseteq \mathbb{R}_+^{H}$ and $B_j\subseteq \mathbb{R}^{H}$ are non-empty convex compact;
\item[(b)] $\succ_i$ is irreflexive, convex, continuous and non-satiated over the set $\prod_{i\in \mathcal{I}} A_i$;
\item[(c)] $a_i\in cl(U_i(a,b,p))$ for each $(a,b,p)\in A\times B\times \Delta$.
\end{itemize}
Then, the Arrow-Debreu economy under uncertainty admits at least one competitive equilibrium.
\end{corollary}
\begin{remark}
One can observe that, under the boundedness of consumption sets, the existence result in Corollary \ref{corol} extends \cite[Theorem 3.2]{milasipref}, because in \cite{milasipref} the preference relation $\succ_i$ of the involved consumers is semi-strictly convex, ordered (complete and transitive) and independent of the prices and of the choices of other agents. We relax these assumptions on $\succ_i$ by allowing the preference relation of any consumer to be convex, non-ordered, inter-dependent and price-dependent.
\end{remark}
\section*{Acknowledgement(s)}
The first author acknowledges Science and Engineering Research Board, India $\big(\rm{MTR/2021/000164}\big)$ for the financial support. The second author is grateful to the University Grants Commission (UGC), New Delhi, India for the financial assistance provided by them throughout this research work under the registration number: $\big(\rm{1313/(CSIRNETJUNE2019)}\big)$.
|
{
"arxiv_id": "2302.08641",
"language": "en",
"timestamp": "2023-02-20T02:04:36",
"url": "https://arxiv.org/abs/2302.08641",
"yymm": "2302"
} | \section{Introduction}\label{Introduction}
\IEEEPARstart{G}{enerative} Adversarial Network (GAN) was introduced by Goodfellow et al. \cite{gan} in 2014 to synthesize images in a given distribution. A GAN consists of two neural networks, namely a Generator ($G$) and a Discriminator ($D$), as shown in Fig. \ref{fig:gan_conceptual}. These networks are trained jointly in an adversarial manner. The generator network outputs a synthesized image from a random latent vector as input, whereas the discriminator network classifies the generated images as fake and the actual images as real. Since its inception, several variants of GANs have been introduced to synthesize high-quality images, such as Deep Convolutional GAN (DCGAN) \cite{dcgan}, Wasserstein GAN (WGAN) \cite{wgan}, Least Square GAN (LSGAN) \cite{lsgan}, ProgressiveGAN \cite{progressivegan}, StyleGAN \cite{staylegan}, \cite{karras2020analyzing}, DR-GAN \cite{tan2022dr}, DHI-GAN \cite{lin2022dhi}, ZeRGAN \cite{diao2022zergan}, Tensorizing GAN \cite{yu2021tensorizing}, etc.
Image-to-image translation is one of the major applications of GANs. The early work includes the Conditional GAN \cite{conditional_gan} based Pix2pix model \cite{pix2pix}, which takes images in a given domain as input and generates images in the target domain as output. In order to do so, the Pix2pix model modifies the generator network of GAN and uses an Encoder-Decoder framework, where the Encoder network is a CNN and the Decoder network is an Up-CNN. Pix2pix makes use of a pixel-wise error to train the model along with the adversarial training, and hence requires a paired dataset. To overcome this issue, CycleGAN \cite{cyclegan} uses a cycle-consistency loss that works on unpaired datasets. Several variants of GAN have been proposed for image-to-image translation, such as PCSGAN \cite{pcsgan}, MUNIT \cite{munit}, CUT \cite{CUT}, CouncilGAN \cite{councilgan}, MobileAR-GAN \cite{mobileargan}, etc.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{Figures/gan.png}
\caption{A conceptual representation of GAN \cite{gan}.}
\label{fig:gan_conceptual}
\end{figure}
Substantial progress has been observed in GAN models, their objective functions and training strategies: an overview of the working of the GAN model and its variants is presented in \cite{hong2019generative}, the challenges in GAN models with possible solutions are summarized in \cite{saxena2021generative}, a survey on GAN variants, applications, and training is performed in \cite{jabbar2021survey}, and a review of GAN algorithms, theory, and applications is conducted by Gui et al. \cite{gui2021review}.
The GAN-based models have been very successful for several applications as indicated by different surveys/reviews on GANs, such as computer vision \cite{wang2021generative}, image-to-image translation \cite{pang2021image}, face generation \cite{toshpulatov2021generative}, \cite{kammoun2022generative}, medical image analysis \cite{alamir2022role}, \cite{yi2019generative}, video generation \cite{aldausari2022video}, spatio-temporal data analysis \cite{gao2022generative}, and text-to-image synthesis \cite{zhou2021survey}.
In recent years, Transformer networks \cite{vaswani2017attention} have received a lot of attention due to their outstanding performance. The Transformer network is based on multi-head self-attention modules, which capture global relationships in the data while transforming input features into output features. Motivated by the success of Transformer networks, several variants have been introduced, such as Bidirectional Transformers (BERT) \cite{bert}, Generative Pre-Training Transformer (GPT) \cite{gpt3}, Swin Transformer \cite{swin_transformer}, Vision Transformer (ViT) \cite{vit}, Multitask ViT \cite{tian2023end}, Tokens-to-token ViT \cite{token_vit}, Video Transformer \cite{alfasly2022effective}, HSI Denoising Transformer (Hider) \cite{chen2022hider}, Word2Pix Transformer \cite{zhao2022word2pix}, and many more.
Transformer networks have been very successful for different problems of natural language processing and computer vision, as indicated by several survey papers. A survey of efficient transformer models is presented in \cite{tay2022efficient}. A review of BERT-based transformer models for text-based emotion detection is conducted by Acheampong et al. \cite{acheampong2021transformer}. Kalyan et al. present a survey of transformer-based biomedical pretrained language models \cite{kalyan2021ammu}. The potential of transformers is also witnessed in vision \cite{khan2022transformers}, \cite{han2022survey}. Shin et al. highlight the transformer architectures for cross-modal vision and language tasks in terms of different perspectives and prospects \cite{shin2022perspectives}.
Motivated by the wide acceptance of GAN and Transformer models, some researchers have recently proposed Transformer-based GAN models for image generation tasks \cite{transgan}, \cite{transformer_attngan}, \cite{swingan}, \cite{vitgan}, \cite{uvcgan}. The recently developed Transformer-based GAN models have shown very promising performance for different applications of image and video synthesis. The contributions of this paper are as follows:
\begin{itemize}
\item To the best of our knowledge, no survey paper exists on GAN models utilizing Transformer networks. Hence, this paper provides a comprehensive survey of the developments in Transformer-based GANs for computer vision applications.
\item The categorization of models is performed for different image and video applications, such as image generation, image translation, image inpainting, image restoration, image reconstruction, image enhancement, image super-resolution, image colorization, video generation, video inpainting, video translation, etc.
\item The comparison of models is also performed in terms of the generator architecture, discriminator architecture, loss functions and datasets used for different image and video applications.
\item A comparison and analysis of experimental results is also conducted for different applications using standard metrics on benchmark datasets to provide the status of state-of-the-art Transformer-based GANs.
\item The research trends and potential future directions are also narrated with a very concise view to benefit the researchers working on deep learning and computer vision problems.
\end{itemize}
Section \ref{background} presents a brief overview of Generative Adversarial Networks and Transformer networks. Sections III-VI are devoted to Transformer-based GAN models for image generation, image-to-image translation, video applications and miscellaneous applications, respectively. Section VII provides the conclusion and future directions.
\section{Background} \label{background}
In this section, the background of GANs and Transformers is presented.
\subsection{Generative Adversarial Network}
Generative Adversarial Network (GAN) was introduced by Goodfellow et al. \cite{gan} in 2014 for image generation. A conceptual representation of GAN is depicted in Fig. \ref{fig:gan_conceptual}. The generator network ($G$) in GAN synthesizes a new sample ($G(z)$) from a random latent vector/noise ($z$), i.e., $G: z \rightarrow G(z)$, where $z \in \mathbb{R}^d$ is sampled from a uniform probability distribution $p_v$ and $d$ is the dimensionality of $z$. If the probability distributions of the generated samples ($G(z)$) and the real samples ($x_{data}$) are given by $p_{model}$ and $p_{data}$, respectively, then we want to learn a $p_{model}$ that matches $p_{data}$. The training of the generator network is facilitated by the discriminator network ($D$), whose purpose is to distinguish the generated samples from the real samples. The output of the discriminator network is interpreted as the probability of a sample being real. Hence, the discriminator network tries to produce $D(x_{data}) \approx 1$, $\forall x_{data} \sim{p_{data}}$, and $D(G(z)) \approx 0$, $\forall z \sim{p_v}$. At the same time, the generator network tries to fool the discriminator network and to achieve $D(G(z)) \approx 1$, $\forall z \sim{p_v}$. This makes the training of the GAN model a min-max optimization. The objective function of GAN is given as,
\begin{equation}
\begin{aligned}
{\mathcal{L}}_{GAN}(G, D) =
& \min_{G}\max_{D}\big( \mathbb{E}_{x_{data}\sim{p_{data}}} [\log D(x_{data})] + \\
& \mathbb{E}_{z\sim{p_v}}[\log(1-D(G(z)))] \big)
\end{aligned}
\end{equation}
where ${\mathcal{L}}_{GAN}(G, D)$ is the adversarial loss function.
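As a concrete illustration of this alternating optimization, the following is a minimal PyTorch-style sketch of one training step; the networks $G$ and $D$, their optimizers and the real batch are assumed to be defined elsewhere, and the generator update uses the common non-saturating form rather than directly minimizing $\log(1-D(G(z)))$.
\begin{verbatim}
import torch
import torch.nn.functional as F

def gan_training_step(G, D, x_real, opt_G, opt_D, d=100):
    # One alternating update of the adversarial objective.
    # G, D, opt_G, opt_D and x_real are assumed to exist;
    # D is assumed to return raw logits of shape (batch, 1).
    batch = x_real.size(0)

    # Discriminator update: push D(x_real) -> 1 and D(G(z)) -> 0.
    z = torch.rand(batch, d)                 # z ~ uniform p_v
    x_fake = G(z).detach()                   # block gradients into G
    loss_D = F.binary_cross_entropy_with_logits(
                 D(x_real), torch.ones(batch, 1)) + \
             F.binary_cross_entropy_with_logits(
                 D(x_fake), torch.zeros(batch, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator update: fool D, i.e., push D(G(z)) -> 1.
    z = torch.rand(batch, d)
    loss_G = F.binary_cross_entropy_with_logits(
                 D(G(z)), torch.ones(batch, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
\end{verbatim}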
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/transformer.png}
\caption{(\textit{left}) Transformer network, (\textit{right, top}) Multi-head attention mechanism, and (\textit{right, bottom}) Scaled dot-product attention module \cite{vaswani2017attention}.}
\label{fig:transformer}
\end{figure}
Considering unlimited training data and unlimited capacity for generator and discriminator:
\begin{itemize}
\item The objective ${\mathcal{L}}_{GAN}(G, D)$ is, up to a constant, equivalent to the Jensen-Shannon divergence between $p_{data}$ and $p_{model}$, and the global optimum (Nash equilibrium) is attained when $p_{model}=p_{data}$.
\item If at each step, $D$ is allowed to reach its optimum given $G$, and $G$ is updated to decrease ${\mathcal{L}}_{GAN}(G, D)$, then $p_{model}$ will eventually converge to $p_{data}$.
\end{itemize}
The discriminator network is usually a Convolutional Neural Network (CNN) and works as a binary classifier. The last layer of the discriminator network is a Sigmoid function which outputs the probability of the real class. The discriminator network is required only during training. The generator network is usually an Up-CNN which takes a low-dimensional input and produces a high-dimensional output. The generator network architecture is suitably modified for different types of data and applications; for example, encoder-decoder based networks are used for image-to-image translation \cite{pang2021image} and RNN-CNN based networks are used for text-to-image synthesis \cite{zhou2021survey}.
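For completeness, a small DCGAN-style instantiation of these two networks is sketched below; the layer widths and the $64\times64$ output size are illustrative choices, not taken from any particular model discussed in this survey.
\begin{verbatim}
import torch.nn as nn

# Illustrative 64x64 DCGAN-style pair; widths are arbitrary choices.
generator = nn.Sequential(        # Up-CNN: latent vector -> image
    nn.ConvTranspose2d(100, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1),  nn.BatchNorm2d(64),  nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1),   nn.BatchNorm2d(32),  nn.ReLU(True),
    nn.ConvTranspose2d(32, 3, 4, 2, 1),    nn.Tanh(),  # output in [-1, 1]
)   # expects input of shape (batch, 100, 1, 1)

discriminator = nn.Sequential(    # CNN binary classifier: image -> real probability
    nn.Conv2d(3, 64, 4, 2, 1),    nn.LeakyReLU(0.2, True),
    nn.Conv2d(64, 128, 4, 2, 1),  nn.LeakyReLU(0.2, True),
    nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(256, 1, 8, 1, 0),   nn.Sigmoid(),  # 8x8 map -> single score
    nn.Flatten(),
)
\end{verbatim}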
\subsection{Transformer Network}
The Transformer network utilizes the self-attention mechanism introduced by Vaswani et al. in 2017 \cite{vaswani2017attention}. Originally, the Transformer was proposed for a machine translation task where the Encoder and Decoder networks are built using Transformers, as shown in Fig. \ref{fig:transformer} (\textit{left}). First, the feature embedding of the input is computed and combined with the positional embedding to generate the input feature embedding of dimensionality $k$ for the Transformer. A Transformer block in the Encoder consists of a multi-head attention module followed by a feed-forward module, each with a skip connection and normalization. A Transformer block in the Decoder includes an additional masked multi-head attention module. Basically, a Transformer block transforms an input feature embedding ($u \in \mathbb{R}^k$) into an output feature embedding ($v \in \mathbb{R}^k$). The Transformer block is repeated $N\times$ in the Encoder and Decoder networks. The multi-head attention module is illustrated in Fig. \ref{fig:transformer} (\textit{right, top}). It is basically the concatenation of the outputs of several independent scaled dot-product attention heads, followed by a linear layer. The use of multiple attention heads facilitates the extraction of features of different important regions or characteristics. The scaled dot-product attention mechanism is depicted in Fig. \ref{fig:transformer} (\textit{right, bottom}). The input ($u$) to a Transformer block is projected into the Query ($Q$), Key ($K$), and Value ($V$) using linear layers having weights $W_Q$, $W_K$, and $W_V$, respectively. The output of the scaled dot-product attention module is computed as,
\begin{equation}
\text{Attention}(Q,K,V) = \text{softmax}(\frac{QK^T}{\sqrt{d_k}})V
\end{equation}
where $d_k$ is the dimensionality of $Q$ and $K$. The masking in the scaled dot-product attention module is used only in the Decoder network. The feed-forward module consists of a linear layer, followed by the ReLU activation function, followed by another linear layer.
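The attention computation above can be summarized in a few lines; the following sketch uses illustrative toy shapes, and the projection weights named after $W_Q$, $W_K$ and $W_V$ are only meant to make the operations explicit.
\begin{verbatim}
import math
import torch

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)   # (..., seq_q, seq_k)
    if mask is not None:                                 # masking: Decoder only
        scores = scores.masked_fill(mask == 0, float('-inf'))
    return torch.softmax(scores, dim=-1) @ V             # (..., seq_q, d_v)

# A single attention head on toy shapes: 2 tokens of dimension k = 8.
u = torch.randn(1, 2, 8)                                  # input feature embeddings
W_Q, W_K, W_V = (torch.nn.Linear(8, 8, bias=False) for _ in range(3))
out = scaled_dot_product_attention(W_Q(u), W_K(u), W_V(u))  # same shape as u
\end{verbatim}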
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/vit.png}
\caption{Vision Transformer (ViT) for image classification \cite{vit}.}
\label{fig:vit}
\end{figure}
Vision Transformer (ViT) \cite{vit} is a variant of the Transformer network which enables its use on image inputs. Basically, it divides an image into patches. The feature embeddings extracted from these patches are used as the input to the Transformer network, as depicted in Fig. \ref{fig:vit}. The Transformer Encoder consists of $N$ Transformer blocks. An additional class embedding token is added in ViT, which is used as the input to the Multi-layer Perceptron (MLP) head to generate the scores for the different classes. Though the Transformer and ViT networks were originally proposed for machine translation and image classification tasks, respectively, they have been heavily utilized for different computer vision problems in recent years. In this paper, we survey the advancements in GAN models using Transformer networks for image and video synthesis and analyze them from different perspectives.
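The patchification and token construction of ViT can be sketched as follows; the image size, patch size and embedding dimension used here are illustrative defaults rather than prescriptions.
\begin{verbatim}
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    # Split an image into non-overlapping patches, project each to a k-dim
    # token, prepend a learnable class token and add positional embeddings.
    def __init__(self, img=224, patch=16, in_ch=3, k=768):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, k, kernel_size=patch, stride=patch)
        n = (img // patch) ** 2                 # number of patches
        self.cls = nn.Parameter(torch.zeros(1, 1, k))
        self.pos = nn.Parameter(torch.zeros(1, n + 1, k))

    def forward(self, x):
        tokens = self.proj(x).flatten(2).transpose(1, 2)   # (B, n, k)
        cls = self.cls.expand(x.size(0), -1, -1)           # (B, 1, k)
        return torch.cat([cls, tokens], dim=1) + self.pos  # (B, n + 1, k)
\end{verbatim}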
\begin{table*}[!t]
\caption{A summary of Transformer-based GANs for image generation.}
\centering
\begin{tabular}{p{0.23\columnwidth}|p{0.15\columnwidth}|p{0.4\columnwidth}|p{0.32\columnwidth}|p{0.34\columnwidth}|p{0.32\columnwidth}}
\hline
\textbf{Model} & \textbf{Venue} & \textbf{Generator} & \textbf{Discriminator} & \textbf{Objective Function} & \textbf{Datasets} \\
\hline
GANsformer \cite{gansformer} & ICML'21 & Bipartite Transformer having simplex and duplex attention & Attention CNN-based discriminator & Loss functions of StyleGAN & CLEVR \\
\hline
GANformer2 \cite{ganformer2} & NeurIPS'21 & Generator works in two stages: layout generation and layout to scene translation & One CNN for real vs. fake and one U-Net for semantic matching & Adversarial loss, Semantic-matching loss and Segment-fidelity loss & CLEVR, Bedrooms, CelebA, Cityscapes and COCO\\
\hline
TransGAN \cite{transgan} & NeurIPS'21 & A Transformer-based generator that progressively increases feature resolution & Transformer-based discriminator that takes input at multiple scales & WGAN-GP loss & CIFAR-10, STL-10, CelebA, CelebA-HQ and LSUN Church \\
\hline
HiT \cite{hit} & NeurIPS'21 & Multi-Axis Nested Transformer at low-resolution and Implicit Functions at high-resolution & ResNet-based discriminator & Non-saturating logistic GAN loss, R1 gradient penalty to only discriminator & ImageNet, CelebA-HQ and FFHQ\\
\hline
TokenGAN \cite{tokengan} & NeurIPS'21 & Visual Transformer with content and style tokens & Discriminator of StyleGAN2 & Non-saturating logistic adversarial loss, R1 regularization to only discriminator & FFHQ and LSUN Church \\
\hline
VQGAN \cite{vqgan} & CVPR'21 & CNN-based image constituents vocabulary and Transformer-based modeling of vocabulary composition within high-resolution image & CNN-based discriminator & Adversarial loss, reconstruction loss, commitment loss and perceptual reconstruction loss & ImageNet, ImageNet-Animal, LSUN Churches \& Towers, COCO-Stuff, ADE20K, CelebA-HQ and FFHQ \\
\hline
Styleformer \cite{styleformer} & CVPR'22 & Transformer with Styleformer Encoders having Increased Multi-Head Self-Attention & Discriminator of StyleGAN2-ADA & Losses of StyleGAN2-ADA & CIFAR-10, STL-10, CelebA, LSUN-Church, CLEVR and Cityscapes \\
\hline
StyleSwin \cite{styleswin} & CVPR'22 & Style-based GAN with Transformer having double attention modules & Wavelet-based discriminator & Non-saturating GAN loss with R1 gradient penalty and spectral normalization on the discriminator & FFHQ, CelebA-HQ and LSUN Church\\
\hline
ViTGAN \cite{vitgan} & ICLR'22 & ViT-based ordered patch generator & ViT-based discriminator & Non-saturating logistic adversarial loss & CIFAR-10, CelebA and LSUN Bedroom \\
\hline
Unleashing Transformer \cite{unleashing_transformer} & ECCV'22 & Trained Transformer using Masked Vector-Quantized tokens prediction & Traditional discriminator & Vector-Quantized loss, generator loss and reconstruction loss & FFHQ, LSUN Bedroom and LSUN Churches \\ \hline
Swin-GAN \cite{swin_gan_tvc} & TVC'22 & Swin Transformer-based generator & Swin Transformer-based multi-scale discriminator & WGAN-GP loss & CIFAR-10 and Anime images \\ \hline
PGTCEGAN \cite{pgtcegan} & SMC'22 & Capsule Embedding based Progressive Growing Transformer & CNN with multi-scale input in different layer & WGAN-GP loss & CIFAR-10, CelebA and LSUN-Church \\ \hline
MedViTGAN \cite{medvitgan} & ICPR'22 & ViT Encoder-based generator & ViT Encoder-based discriminator in conditional GAN setup & WGAN-GP loss with adaptive hybrid loss weighting mechanism & Histopathology image dataset: PatchCamelyon (PCam) and BreakHis \\ \hline
PTNet3D \cite{ptnet3d} & IEEE-TMI'22 & U-shape generator with performer encoder, transformer bottleneck and performer decoder & 3D ResNet-18 model pretrained on Kinetics-400 dataset & Adversarial loss, Perceptual loss and Mean square error & MRI datasets: Developing Human Connectome Project (dHCP) and longitudinal Baby Connectome Project (BCP) \\ \hline
SLATER \cite{slater} & IEEE-TMI'22 & Generator uses cross-attention transformers with input from a mapper & CNN-based discriminator & Non-saturating logistic adversarial loss, gradient penalty for discriminator & MRI synthesis: brain MRI data from fastMRI\\ \hline
SwinGAN \cite{swingan} & CBM'23 & Swin Transformer U-Net-based frequency-domain and image-domain generators & CNN-based discriminator & Adversarial loss, k-space loss and image domain loss & MRI reconstruction: IXI brain dataset \\ \hline
3D Face Transformer \cite{transformer_3d_face_reconstruction} & IEEE-TCSVT'22 & Residual blocks followed by a multi-layer transformer encoder-based generator & Traditional discriminator & Adversarial loss, L1 loss, Edge loss, L1 loss on the transformer outputs and Self-supervised reprojection consistency loss & 3D Face reconstruction: 300W-LP, AFLW, AFLW2000-3D, NoW, In-the-wild images \\ \hline
\end{tabular}
\label{tab:image_generation}
\end{table*}
\section{Transformer-based GANs for Image Generation}
Image generation has been a very important application of GANs, and several improvements in GANs have been validated for this task. Researchers have also exploited Transformer-based GANs for generating images of different types, such as objects, scenes, medical images, etc. A summary of the different models is presented in Table \ref{tab:image_generation} in terms of the generator, discriminator, losses and datasets.
\subsection{Image Generation}
In initial attempts, Transformers were utilized in an auto-regressive manner \cite{gpp}, \cite{vqgan} for image generation. However, these methods are not very efficient in terms of inference speed.
Jiang et al. in 2021 conducted the initial study on a pure Transformer-based GAN, dubbed TransGAN \cite{transgan}, which is completely free of convolution operations. TransGAN contains a Transformer-based generator network that increases the resolution of features in a progressive manner and a Transformer-based discriminator network. Both the generator and discriminator networks utilize grid-transformer blocks. TransGAN achieves a Fréchet Inception Distance (FID) of 9.26 on the CIFAR-10 dataset \cite{cifar}.
Hudson and Zitnick \cite{gansformer} proposed generative adversarial transformer (GANsformer) for image generation. GANsformer makes use of the bipartite transformer which is a stack of alternative simplex or duplex attention modules. The results of GANsformer are demonstrated on four benchmark datasets. GANsformer is further extended to GANformer2 by incorporating explicit and strong structural priors \cite{ganformer2}.
Researchers have observed that training GANs with Transformers is challenging. Xu et al. \cite{stransg} proposed the STrans-G generator network and the STrans-D discriminator network for image generation by studying the characteristics of the Transformer in the GAN framework. STrans-G is a CNN-free network. It is noticed in \cite{stransg} that residual connections in self-attention layers are not beneficial for Transformer-based discriminators and conditional generators. Zeng et al. \cite{tokengan} proposed TokenGAN, which exploits a token-based transformer generator to assign styles to the content tokens through an attention mechanism for controlling the image synthesis. TokenGAN generates high-fidelity images at high resolution (i.e., $1024 \times 1024$).
Zhao et al. \cite{hit} proposed an efficient high-resolution image transformer (HiT) based GAN model for image generation. HiT exploits a multi-axis blocked self-attention module for low-resolution synthesis and removes the costly global self-attention module for high-resolution synthesis. In another study, a Transformer-based generator network is utilized with a convolution-based discriminator by Durall et al. \cite{transG_convD} for image generation. The experimental results using the generator of TransGAN and the discriminator of SNGAN in \cite{transG_convD} show an improved FID of 8.95 on the CIFAR-10 dataset. Park and Kim \cite{styleformer} introduced the Styleformer model, which contains a Transformer structure-based generator network for synthesizing images using style vectors. An attention style injection module is introduced in Styleformer for the modulation and demodulation of style with self-attention. Styleformer shows an outstanding performance for image generation on the CIFAR-10 dataset with an FID score of 2.82. Recently, in 2022, the StyleSwin model was proposed by Zhang et al. \cite{styleswin} for image generation. StyleSwin is a Swin Transformer-based GAN model in a style-based architecture for high-resolution synthesis. In order to exploit the local and shifted window contexts, StyleSwin relies on a double attention mechanism. The transformer is also exploited as the generator network with a conditional GAN for image generation by Xi et al. \cite{transformer_cgan}. An unconstrained Transformer architecture is used as the backbone in \cite{unleashing_transformer}, which performs parallel prediction of Vector-Quantized tokens and achieves competitive FID scores on benchmark datasets.
A progressive growing transformer with capsule embedding GAN (PGTCEGAN) is introduced by Jiang et al. \cite{pgtcegan}. The generator network generates the images gradually using the transformer. The capsule encoder is utilized to generate the positional embedding. Wang et al. proposed a Swin-GAN by exploiting the shifted window attention mechanism-based transformer to gradually increase the resolution \cite{swin_gan_tvc}. Basically, the generator and discriminator networks of Swin-GAN use the Swin Transformer Blocks. The discriminator in Swin-GAN receives the input by dividing the image into patches of different sizes. Relative position coding, layer normalization and data enhancement are also exploited by Swin-GAN to improve the stability.
Lee et al. utilized the vision transformer in the GAN framework and proposed ViTGAN \cite{vitgan}. It is noticed that the training of the ViT discriminator is unstable because the self-attention module is unable to cope with the existing GAN regularization methods. ViTGAN exploits different regularization techniques to tackle this problem, such as enforcing the Lipschitzness of the Transformer discriminator, improved spectral normalization, overlapping image patches and convolutional projection. By carefully designing the ViT generator, ViTGAN is able to converge with performance comparable to the leading CNN-based GAN models for image generation. It is also observed that by using the generator network of ViTGAN (ViTGAN-G) and the discriminator network of StyleGAN2 (StyleGAN2-D), the performance on the image generation task is improved. A HybridViT model is introduced in \cite{hybridvit} by integrating the ViT architecture into denoising diffusion probabilistic models. The HybridViT model is utilized for joint image generation and classification.
\begin{table}[!t]
\caption{The results comparison of Transformer-based GANs for image generation in terms of FID score. \textbf{Best} and \textit{second best} results are highlighted in \textbf{bold} and \textit{italic}, respectively, even in other tables also.}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{p{0.23\columnwidth} p{0.13\columnwidth} p{0.1\columnwidth} p{0.08\columnwidth} p{0.08\columnwidth} p{0.1\columnwidth} p{0.1\columnwidth}}
\hline
Method & Venue & CIFAR10 ($32^2$) & STL10 ($48^2$) & CelebA ($64^2$) & ImageNet ($128^2$) \\
\hline
TransGAN \cite{transgan} & NeurIPS'21 & 9.26 & \textit{18.28} & 5.01 & - \\
HiT \cite{hit} & NeurIPS'21 & - & - & - & \textit{30.83}\\
STrans-G \cite{stransg} & arXiv'21 & \textbf{2.77} & - & \textbf{2.03} & \textbf{12.12} \\
Styleformer \cite{styleformer} & CVPR'22 & \textit{2.82} & \textbf{15.17} & 3.92 & - \\
PGTCEGAN \cite{pgtcegan} & SMC'22 & 8.43 & - & \textit{3.59} & - \\
Swin-GAN \cite{swin_gan_tvc} & TVC'22 & 9.23 & - & - & - \\
ViTGAN \cite{vitgan} & ICLR'22 & 4.92 & - & 3.74 & - \\
\hline
\end{tabular} }
\label{tab:results_image_generation}
\end{table}
\subsection{Medical Image Synthesis}
Image synthesis in the medical domain is a very demanding application of GANs. Synthetic medical images are generally useful for data augmentation to tackle the large-scale data requirements of deep learning models. Due to the improved quality of generated images, Transformer-based GANs have also been extensively utilized for medical image synthesis of different modalities.
In 2021, Korkmaz et al. \cite{gvtrans} proposed the generative vision transformer-based GVTrans model for synthesizing MRI data from noise variables and a latent space. The GVTrans model is tested for unsupervised MRI reconstruction with promising performance. In 2022, Zhang et al. introduced the PTNet3D model \cite{ptnet3d}, which uses a pyramid transformer network for the synthesis of 3D high-resolution longitudinal infant brain MRI data. The PTNet3D model makes use of attention in transformer and performer layers. The synthesis accuracy of PTNet3D is superior, with better generalization capability. Replacing corrupted scans with images synthesized by PTNet3D leads to better infant whole-brain segmentation. A multi-contrast multi-scale transformer (MMT) is proposed by Liu et al. \cite{mmt} for MRI missing data imputation. In order to capture inter- and intra-contrast dependencies, MMT exploits multi-contrast Swin transformer blocks. MMT is tested for image synthesis on two multi-contrast MRI datasets with appealing performance.
In 2022, Li et al. proposed MedViTGAN for histopathological image synthesis for data augmentation \cite{medvitgan}. MedViTGAN is a vision transformer-based conditional GAN where both the generator and discriminator networks are designed based on the transformer encoder module. The performance of MedViTGAN is highly improved as compared to the CNN models, such as ResNet and DenseNet. Li et al. \cite{hvitgan} further extended the MedViTGAN using an auxiliary discriminator head for classification. Korkmaz et al. \cite{slater} proposed a zero-Shot Learned Adversarial TransformERs (SLATER) for unsupervised MRI reconstruction. SLATER utilizes cross-attention transformers with the deep adversarial network to project latent variables and noise into coil-combined MR images.
\begin{table}[!t]
\caption{The results comparison of Transformer-based GANs for high-resolution image generation in terms of FID score. Here, HiT-B refers to HiT with large model capacity.}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{p{0.235\columnwidth} p{0.13\columnwidth} p{0.07\columnwidth} p{0.075\columnwidth} p{0.07\columnwidth} p{0.07\columnwidth} p{0.08\columnwidth}}
\hline
Method & Venue & FFHQ ($256^2$) & CelebA-HQ ($256^2$) & LSUN ($256^2$) & FFHQ ($1024^2$) & CelebA-HQ ($1024^2$)\\
\hline
VQGAN \cite{vqgan} & CVPR'21 & 11.40 & 10.70 & - & - & -\\
TransGAN \cite{transgan} & NeurIPS'21 & - & 9.60 & 8.94 & - & - \\
GANsformer \cite{gansformer} & ICML'21 & 7.42 & - & 6.51 & - & - \\
GANformer2 \cite{gansformer} & NeurIPS'21 & 7.77 & - & 6.05 & - & - \\
TokenGAN \cite{tokengan} & NeurIPS'21 & 5.41 & - & 5.56 & - & - \\
STrans-G \cite{stransg} & arXiv'21 & 4.84 & - & - & - & - \\
HiT-B \cite{hit} & NeurIPS'21 & \textit{2.95} & \textit{3.39} & - & \textit{6.37} & \textit{8.83}\\
PGTCEGAN \cite{pgtcegan} & SMC'22 & - & - & \textit{3.92} & - & - \\
StyleSwin \cite{styleswin} & CVPR'22 & \textbf{2.81} & \textbf{3.25} & \textbf{2.95} & \textbf{5.07} & \textbf{4.43} \\
\hline
\end{tabular}}
\label{tab:results_hr_image_generation}
\end{table}
\subsection{Results Comparison and Analysis}
The results comparison of Transformer-based GANs is reported in Table \ref{tab:results_image_generation} and Table \ref{tab:results_hr_image_generation}. The Fréchet inception distance (FID) metric is compared for the different models on benchmark datasets; a lower FID score represents better synthesized image quality. Table \ref{tab:results_image_generation} summarizes the FID scores on the CIFAR10 \cite{cifar}, STL10 \cite{stl}, CelebA \cite{celeba} and ImageNet \cite{imagenet} datasets for generating images of size $32\times32$, $48\times48$, $64\times64$ and $128\times128$, respectively. It is noticed that STrans-G \cite{stransg} provides the best FID scores on the CIFAR10, CelebA and ImageNet datasets. The Swin architecture at resolutions above $16\times16$ provides local attention in STrans-G, which is beneficial for high-quality image generation. The performance of Styleformer is also promising, with the best result on STL10 and the second best on CIFAR10. The FID scores for high-resolution image generation are illustrated in Table \ref{tab:results_hr_image_generation} over the Flickr-Faces-HQ (FFHQ) \cite{staylegan}, CelebA-HQ \cite{progressivegan} and LSUN Church \cite{lsun_church} datasets for generating images of size $256\times256$. The results for generating images of size $1024\times1024$ are also included over the FFHQ and CelebA datasets. It is evident that StyleSwin \cite{styleswin} is the best performing model over all the datasets for high-resolution image synthesis, as the capabilities of both StyleGAN and the Swin transformer are inherited by StyleSwin. Moreover, the local-global positional encoding in StyleSwin is able to maintain a good trade-off between local and global context while generating the high-resolution images. The results of HiT \cite{hit} and PGTCEGAN \cite{pgtcegan} are also very promising.
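For reference, the FID values compared in these tables are computed by fitting Gaussians to Inception features of real and generated images and measuring
$$\mathrm{FID}=\lVert \mu_r-\mu_g\rVert_2^2+\mathrm{Tr}\big(\Sigma_r+\Sigma_g-2(\Sigma_r\Sigma_g)^{1/2}\big),$$
where $(\mu_r,\Sigma_r)$ and $(\mu_g,\Sigma_g)$ denote the mean and covariance of the features of real and generated images, respectively.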
\subsection{Summary}
Following is the summary from the survey of Transformer-based GANs for image generation:
\begin{itemize}
\item The majority of the Transformer-based GAN models use a transformer-based generator network and a CNN-based discriminator network. However, a few models also exploit transformers in the discriminator network.
\item The use of both local and global information is important. Hence, several models try to exploit CNN-based encoder and decoder modules along with a transformer module in the generator network.
\item ViT and Swin transformers are heavily exploited for the generator network. It is observed that ViT and Swin transformer are useful to preserve the local context.
\item The objective function for different models generally includes Adversarial GAN loss and perceptual loss. Other losses such as L1 loss, gradient penalty and edge loss are also exploited by some models.
\item The CIFAR10, STL10, CelebA, ImageNet, FFHQ, CelebA-HQ and LSUN Church are the most common benchmark datasets for image generation.
\item The combination of StyleGAN with Swin transformer becomes very powerful to generate high-resolution images.
\item The transformer-based GANs are very useful in tackling the data limitation problem in the medical domain by generating very realistic medical images.
\end{itemize}
\begin{table*}[!t]
\caption{A summary of Transformer-based GANs for image-to-image translation.}
\centering
\begin{tabular}{p{0.21\columnwidth}|p{0.15\columnwidth}|p{0.4\columnwidth}|p{0.25\columnwidth}|p{0.36\columnwidth}|p{0.4\columnwidth}}
\hline
\textbf{Model} & \textbf{Venue} & \textbf{Generator} & \textbf{Discriminator} & \textbf{Objective Function} & \textbf{Application \& Datasets} \\
\hline
InstaFormer \cite{instaformer} & CVPR'22 & Generator with ViT encoder blocks consisting of adaptive instance normalization & Traditional discriminator & Adversarial loss, Global and Instance-level content loss, Image and Style reconstruction loss & Image translation: INIT, Domain adaptation: KITTI to Cityscapes \\ \hline
UVCGAN \cite{uvcgan} & WACV'23 & UNet-ViT Generator in CycleGAN framework (pre-trained on the image inpainting task) & CycleGAN discriminator with gradient penalty & GAN loss, cycle-consistency loss, and identity-consistency loss & Unpaired image translation: Selfie to Anime, Anime to Selfie, Male to Female, Female to Male, Remove Glasses, and Add Glasses\\ \hline
SwinIR \cite{swinir} & ICCV'21 & Generator with residual Swin Transformer blocks & Traditional discriminator for super-resolution & Super-resolution: Pixel loss, GAN loss and perceptual loss & Super-resolution: Set5, Set14, BSD100, Urban100, Manga109, RealSRSet \\ \hline
RFormer \cite{rformer} & IEEE-JBHI'22 & Transformer-based U-shaped generator with window-based self-attention block & Transformer-based discriminator with window-based self-attention block & Adversarial loss, Edge loss, Charbonnier loss and Fundus quality perception loss & Fundus image restoration: Real Fundus dataset \\ \hline
3D Transformer GAN \cite{3d_transformer_gan} & MICCAI'21 & Generator consisting of Encoder CNN, Transformer and Decoder CNN & CNN with four convolution blocks & Adversarial loss and L1 loss & PET reconstruction: PET data \\ \hline
3D CVT-GAN \cite{3dcvtgan} & MICCAI'22 & 3D convolutional vision transformer (CVT) based encoder and 3D transposed CVT based decoder & Patch-based discriminator embedded with 3D CVT blocks & Adversarial loss and L1 loss & PET reconstruction: PET data \\ \hline
Low-Light Transformer-GAN \cite{lowlight_transformer_gan} & IEEE-SPL'22 & Transformer using multi-head multi-covariance self-attention and Light feature-forward module structures & Convolutional discriminator & Adversarial loss, smooth L1 loss, perceptual loss, and multi-scale SSIM loss & Low-light enhancement: LOL and SICE \\ \hline
LightingNet \cite{lightingnet} & IEEE-TCI'23 & Fusion of CNN-based encoder-decoder and ViT-based encoder-decoder & CNN-based discriminator & Adversarial loss, smooth L1 loss, perceptual loss, and multi-scale SSIM loss & Low-light enhancement: LOL, SICE, ExDARK, DICM, LIME, MEF, and NPE \\ \hline
ICT \cite{ict} & ICCV'21 & Bi-directional transformer guided CNN & CNN-based discriminator & Adversarial loss and L1 loss & Image completion: FFHQ and Places2 \\ \hline
BAT-Fill \cite{batfill} & ACMMM'21 & Bidirectional and autoregressive transformer + CNN-based texture generation & CNN-based discriminator & Adversarial loss, perceptual loss and reconstruction loss & Image inpainting: CelebA-HQ, Places2 and Paris StreetView \\ \hline
\textit{T}-former \cite{tformer} & ACMMM'22 & U-shaped generator with transformer blocks & Patch GAN discriminator & Adversarial loss, style loss, reconstruction loss and perceptual loss & Image inpainting: CelebA-HQ, Places2 and Paris StreetView \\ \hline
APT \cite{apt} & ACMMM'22 & Atrous pyramid transformer and dual spectral transform convolution & CNN-based discriminator & Adversarial loss, perceptual loss, style loss and L1 loss for masked and preserved regions & Image inpainting: CelebA-HQ, Places2 and Paris StreetView \\ \hline
MAT \cite{mat} & CVPR'22 & A convolutional head, a mask-aware transformer body and a convolutional tail & Traditional discriminator & Adversarial loss, perceptual loss and R1 regularization & Large hole image inpainting: CelebA-HQ and Places365-Standard \\ \hline
ZITS \cite{zits} & CVPR'22 & Transformer-based structure restorer + CNN-based structure feature encoding and texture restoration & PatchGAN discriminator & Adversarial loss, L1 loss over unmasked region, feature match loss and high receptive field perceptual loss & Image inpainting: Places2, ShanghaiTech, NYUDepthV2 and MatterPort3D \\ \hline
HAN \cite{han} & ECCV'22 & Generator with CNN encoder, hourglass attention structure blocks and CNN decoder & PatchGAN discriminator with spectral norm & Adversarial loss, style loss, reconstruction loss and perceptual loss & Image inpainting: CelebA-HQ, Places2 and Paris StreetView \\ \hline
SRInpainter \cite{srinpaintor} & IEEE-TCI'22 & Resolution progressive CNN encoder, hierarchical transformer and CNN decoder & SNPatchGAN discriminator & Adversarial loss and super-resolved L1 loss & Image inpainting: CelebA-HQ, Places2 and Paris StreetView \\ \hline
NDMAL \cite{ndmal} & WACV'23 & Nested deformable attention layer mixed with convolution and de-convolution layers & PatchGAN discriminator & Adversarial loss, perceptual loss, edge loss and L1 loss & Image inpainting: CelebA-HQ and Places2 \\ \hline
Hint \cite{hint} & WACV'22 & ViT-based generated hint converts outpaining to inpainting & Traditional discriminator & Adversarial loss, style loss, perceptual loss and L1 loss & Image outpainting: SUN and Beach \\ \hline
ColorFormer \cite{color_former} & ECCV'22 & Generator using transformer-based encoder and a color memory decoder & PatchGAN discriminator & Adversarial loss, perceptual loss and content loss & Image colorization: ImageNet, COCO-Stuff and CelebA-HQ \\ \hline
SGA \cite{sga} & ECCV'22 & Generator with stop-gradient attention module between encoder and decoder & Conditional CNN discriminator & Adversarial loss, perceptual loss, reconstruction loss and style loss & Image colorization: Anime portraits and Animal FacesHQ \\ \hline
VTGAN \cite{vtgan} & ICCV'21 & CNN-based generator at different resolution & ViT for discriminator and classification at different resolution & Adversarial loss, mean square error, perceptual loss, embedding feature loss and cross-entropy loss & Retinal image synthesis and disease prediction using fundus and fluorescein angiogram images \\ \hline
ResViT \cite{resvit} & IEEE-TMI'22 & Transformer-based generator using aggregated residual transformer blocks & Conditional PatchGAN discriminator & Adversarial loss, reconstruction loss and pixel loss & Multimodal medical image synthesis: IXI brain MRI, BRATS and MRI-CT \\ \hline
\end{tabular}
\label{tab:image_translation}
\end{table*}
\section{Transformer-based GANs for Image-to-Image Translation}
The image generation mechanism synthesizes an artificial sample from a random latent vector in a given data distribution. In contrast, image-to-image translation aims to transform an image from one domain to another. Hence, the generator network in GAN is modified such that it takes an image as input and produces an image as output; mainly, encoder-decoder based architectures serve this purpose. Recently, several researchers have investigated Transformer-based GANs for image-to-image translation tasks. We provide a summary of Transformer-based GAN models for image-to-image translation in Table \ref{tab:image_translation} in terms of generator model, discriminator model, objective function, applications and datasets.
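As an illustration of the encoder-decoder design mentioned above, a deliberately small translation generator is sketched below; the depth and channel widths are arbitrary and are not taken from any model listed in Table \ref{tab:image_translation}.
\begin{verbatim}
import torch.nn as nn

def conv_block(cin, cout):    # downsampling step of the encoder
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1),
                         nn.InstanceNorm2d(cout), nn.ReLU(True))

def deconv_block(cin, cout):  # upsampling step of the decoder
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1),
                         nn.InstanceNorm2d(cout), nn.ReLU(True))

# Image-in / image-out generator: encoder compresses the input image,
# decoder reconstructs an image of the same spatial size in the target domain.
translator = nn.Sequential(
    conv_block(3, 64), conv_block(64, 128), conv_block(128, 256),   # H -> H/8
    deconv_block(256, 128), deconv_block(128, 64),                   # H/8 -> H/2
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),                   # back to RGB
)
\end{verbatim}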
\begin{table*}[!t]
\caption{The results comparison of Transformer-based GANs for PET reconstruction on NC subjects and MCI subjects datasets. }
\centering
\begin{tabular}{p{0.35\columnwidth} p{0.15\columnwidth} p{0.08\columnwidth} p{0.08\columnwidth} p{0.12\columnwidth} p{0.08\columnwidth} p{0.08\columnwidth} p{0.12\columnwidth} p{0.09\columnwidth} p{0.09\columnwidth}}
\hline
& & \multicolumn{3}{c}{NC subjects} & \multicolumn{3}{c}{MCI subjects} & Params & GFLOPs \\
Method & Venue & PSNR & SSIM & NMSE & PSNR & SSIM & NMSE \\ \hline
3D Transformer-GAN \cite{3d_transformer_gan} & MICCAI'21 & \textit{24.818} & \textit{0.986} & \textit{0.0212} & \textit{25.249} & \textbf{0.987} & \textit{0.0231} & \textit{76M} & \textbf{20.78}\\
3D CVT-GAN \cite{3dcvtgan} & MICCAI'22 & \textbf{24.893} & \textbf{0.987} & \textbf{0.0182} & \textbf{25.275} & \textbf{0.987} & \textbf{0.0208} & \textbf{16M} & \textit{23.80}\\
\hline
\end{tabular}
\label{tab:results_pet_reconstruction}
\end{table*}
\subsection{Image Translation}
Image translation from a source domain to a target domain is an attractive application of GANs, such as Pix2pix \cite{pix2pix} and CycleGAN \cite{cyclegan} are very popular for such tasks.
Recently, Kim et al. proposed InstaFormer, which exploits the transformer by integrating global information with instance information for unpaired image translation \cite{instaformer}. The instance-level features are generated with the help of bounding box detection. InstaFormer utilizes adaptive instance normalization and an instance-level content contrastive loss to improve the local context encoding. The ViT encoder block \cite{vit} is used as the backbone in the generator network of InstaFormer. Impressive results are observed using InstaFormer, with FID \& SSIM of 84.72 \& 0.872 on sunny$\rightarrow$night and 71.65 \& 0.818 on night$\rightarrow$sunny datasets, respectively.
The generator network of CycleGAN \cite{cyclegan} is replaced by a Vision Transformer (ViT) \cite{vit} for unsupervised image-to-image transformation in UVCGAN by Torbunov et al. \cite{uvcgan}. It is noticed from the experiments of UVCGAN that the self-supervised pre-training and gradient penalty are important for the improvements. UVCGAN reports the state-of-the-art FID scores of 79.0, 122.8, 9.6, 13.9, 14.4, and 13.6 on Selfie to Anime, Anime to Selfie, Male to Female, Female to Male, Remove Glasses, and Add Glasses datasets, respectively.
Using ViT as a generator in GAN framework is computationally challenging. Zheng et al. \cite{ittr} proposed ITTR model for image-to-image translation using transformer for unpaired scenario. ITTR reduces the computational complexity by utilizing a dual pruned self-attention mechanism. In order to utilize global semantics, ITTR performs the mixing of tokens of varying receptive fields through hybrid perception blocks. The FID scores achieved by ITTR are 45.1, 68.6, 33.6, 73.4, 93.7, and 91.6 on Scene$\rightarrow$Semantic Map (Cityscapes), Cat$\rightarrow$Dog, Horse$\rightarrow$Zebra, Selfie$\rightarrow$Anime, Face$\rightarrow$Metface, and Female$\rightarrow$Cartoon datasets, respectively. Wang et al. \cite{piti} performs generative pretraining for the diverse downstream tasks to generate a highly semantic latent space as tokens using a transformer from text-image pairs. An adversarial diffusion upsampler is utilized for increasing the resolution of generated samples.
Image translation using GANs has also been a popular choice for medical applications. A Swin transformer GAN (MMTrans) is proposed in \cite{swin_transformer_GAN} for multi-modal medical image translation. The generator network of MMTrans is followed by a registration network. MMTrans uses a convolution-based discriminator network.
\subsection{Image Restoration}
Liang et al. proposed a Swin Transformer-based SwinIR model for image restoration \cite{swinir}. Basically, SwinIR utilizes several Swin Transformer layers together with a residual connection in a residual Swin Transformer block. SwinIR is tested for image super-resolution, image denoising and JPEG compression artifact reduction. It is reported that SwinIR leads to a reduction in the total number of parameters by up to 67\%, while achieving a performance gain of up to 0.14$\sim$0.45 dB.
A global-local stepwise GAN (GLSGN) is proposed in \cite{glsgn}, which exploits a stepwise restoring strategy based on three local pathways and one global pathway for high-resolution image restoration. Inter-pathway consistency is applied for the mutual collaboration between the four pathways of GLSGN. Impressive results are reported by GLSGN for high-resolution image dehazing, image deraining, and image reflection removal. However, the complexity of GLSGN is increased due to the use of inputs at different resolutions for the different local pathways.
A Transformer-based GAN (RFormer) is introduced by Deng et al. \cite{rformer} to restore the real degradation of clinical fundus images. In order to exploit long-range dependencies and non-local self-similarity, a window-based self-attention block is utilized by RFormer. Moreover, a Transformer-based discriminator network is also used by RFormer. RFormer achieves a PSNR of 28.32 and an SSIM of 0.873 for fundus image restoration.
\subsection{Image Reconstruction}
Image reconstruction is very important for biomedical applications and can also be seen as image-to-image translation. Luo et al. \cite{3d_transformer_gan} proposed 3D Transformer-GAN for reconstruction of the standard-dose positron emission tomography (SPET) image from the low-dose PET (LPET) image. The generator network in \cite{3d_transformer_gan} is developed as a CNN-based Encoder followed by a Transformer followed by a CNN-based Decoder. 3D Transformer-GAN shows improved performance for clinical PET reconstruction. A convolutional ViT based GAN, dubbed 3D CVT-GAN, is introduced by Zeng et al. for SPET reconstruction from LPET images \cite{3dcvtgan}. The encoder and decoder of 3D CVT-GAN use 3D CVT blocks for feature encoding and 3D transposed CVT (TCVT) blocks for SPET restoration, respectively. Table \ref{tab:results_pet_reconstruction} shows the results comparison between 3D Transformer-GAN \cite{3d_transformer_gan} and 3D CVT-GAN \cite{3dcvtgan} for PET reconstruction on normal control (NC) subjects and mild cognitive impairment (MCI) subjects datasets in terms of PSNR, SSIM, NMSE, parameters and GFLOPs. The higher values of PSNR \& SSIM and the lower value of NMSE point out that 3D CVT-GAN is able to reconstruct the PET images with better quality. Moreover, 3D CVT-GAN is lighter than 3D Transformer-GAN in terms of the number of parameters. However, the GFLOPs suggest that 3D Transformer-GAN requires fewer operations as compared to 3D CVT-GAN.
A model-based adversarial transformer architecture (MoTran) is introduced for image reconstruction by Korkmaz et al. \cite{motran}. The generator of MoTran includes transformer and data-consistency blocks. MoTran reports better results than SLATER, with a best PSNR of 47.8 and SSIM of 0.992 on T2-weighted acquisitions of the IXI brain MRI dataset\footnote{http://brain-development.org/}. Zhao et al. \cite{swingan} proposed SwinGAN, a Swin Transformer-based GAN consisting of a frequency-domain generator and an image-domain generator for the reconstruction of MRI images. SwinGAN achieves an SSIM of 0.95 and a PSNR of 32.96 on the IXI brain dataset with a 20\% undersampling rate, and an SSIM of 0.939 and a PSNR of 34.50 on the MRNet knee dataset \cite{mrnet}.
Chen et al. utilized a conditional GAN for cross-domain face synthesis and a mesh transformer for 3D face reconstruction \cite{transformer_3d_face_reconstruction}. Basically, the conditional GAN translates the face images to a specific rendered style, which is exploited by the transformer network to output 3D mesh vertices. Promising performance is reported in \cite{transformer_3d_face_reconstruction} for 3D face reconstruction using the Transformer-based GAN.
\subsection{Low-light Image Enhancement}
Image-to-image translation is also useful for enhancing low-light images. Wang et al. \cite{spgat} introduced a structural prior driven generative adversarial transformer (SPGAT) consisting of a structural prior estimator, a generator and two discriminators. The generator is a U-shaped transformer model. The efficacy of SPGAT is tested on both synthetic and real-world datasets for low-light image enhancement. In 2022, a Transformer-based GAN (Transformer-GAN) was used by Yang et al. \cite{lowlight_transformer_gan} for low-light image enhancement. In the first stage, features are extracted by an iterative multi-branch network, and image enhancement is performed in the second, image reconstruction stage. Basically, a ViT-based generator is combined with a convolution-based discriminator in \cite{lowlight_transformer_gan}. Very recently, in 2023, Yang et al. proposed a LightingNet model for low-light image enhancement \cite{lightingnet}. LightingNet uses a ViT-based low-light enhancement subnetwork along with a Res2Net-based complementary learning subnetwork. However, a CNN with 10 convolution layers is used as the discriminator network by LightingNet. The PSNR and SSIM scores of Transformer-based GANs are reported in Table \ref{tab:results_enhancement} for low-light image enhancement over the LOL \cite{lol} and SICE \cite{sice} datasets. Transformer-GAN \cite{lowlight_transformer_gan} achieves the best PSNR and SSIM on the LOL dataset. However, LightingNet \cite{lightingnet} performs better on the SICE dataset in terms of PSNR. This shows the suitability of transformers with GANs for image enhancement.
\begin{table}[!t]
\caption{The results comparison of Transformer-based GANs for Low-light image enhancement over LOL \cite{lol} and SICE \cite{sice} datasets. }
\centering
\begin{tabular}{p{0.3\columnwidth} p{0.174\columnwidth} p{0.06\columnwidth} p{0.06\columnwidth} p{0.06\columnwidth} p{0.06\columnwidth} }
\hline
& & \multicolumn{2}{c}{LOL dataset} & \multicolumn{2}{c}{SICE dataset} \\
Method & Venue & PSNR & SSIM & PSNR & SSIM \\ \hline
SPGAT \cite{spgat} & arXiv'22 & 19.800 & 0.823 & - & - \\
Transformer-GAN \cite{lowlight_transformer_gan} & IEEE-SPL'22 & \textbf{23.501} & \textbf{0.851} & \textit{21.902} & \textbf{0.878}\\
LightingNet \cite{lightingnet} & IEEE-TCI'23 & \textit{22.735} & \textit{0.842} & \textbf{22.389} & \textit{0.801}\\
\hline
\end{tabular}
\label{tab:results_enhancement}
\end{table}
\subsection{Image Super-resolution}
Transformer-based GANs have also shown their suitability for image super-resolution. SwinIR \cite{swinir}, proposed for image restoration, is also tested for image super-resolution on five benchmark datasets, resulting in outstanding performance.
Kasem et al. utilized a spatial transformer to develop a robust super-resolution GAN (RSR-GAN) \cite{spatial_transformer_gan}. Both the generator and discriminator networks of RSR-GAN use a spatial transformer. RSR-GAN shows promising performance on five benchmark super-resolution datasets. Du and Tian \cite{tgan} utilized the transformer and GAN (T-GAN model) for medical image super-resolution. The generator of T-GAN processes the input with two sub-networks, a residual block based sub-network and a texture Transformer-based sub-network, and finally combines their outputs to generate the super-resolution image. The discriminator of T-GAN is a CNN-based network. The reported performance of T-GAN is a PSNR of 34.92 \& SSIM of 0.94964 on knee MRI images and a PSNR of 34.69 \& SSIM of 0.9353 on abdominal MRI images for super-resolution. Li et al. \cite{srinpaintor} proposed SRInpaintor, which inherits the characteristics of super-resolution and uses a transformer for image inpainting. Coarse-to-fine information propagation and long-term relation encoding using a hierarchical transformer are the key components of SRInpaintor. Very recently, in 2023, Bao et al. proposed the SCTANet model for face image super-resolution \cite{sctanet}, which is a CNN-Transformer aggregation network exploiting spatial attention guidance. SCTANet utilizes a tail consisting of a sub-pixel MLP-based upsampling module followed by a convolution layer. Outstanding performance is reported by SCTANet on the CelebA and Helen face datasets for 4$\times$, 8$\times$ and 16$\times$ super-resolution.
\begin{table}[!t]
\caption{The results comparison of Transformer-based GANs for image inpainting over CelebA-HQ \cite{progressivegan} and Places2 \cite{places2} datasets. }
\centering
\begin{tabular}{p{0.06\columnwidth}p{0.217\columnwidth} p{0.174\columnwidth} p{0.05\columnwidth} p{0.06\columnwidth} p{0.05\columnwidth} p{0.06\columnwidth} }
\hline
& & & \multicolumn{2}{c}{CelebA-HQ} & \multicolumn{2}{c}{Places2} \\
Mask & Method & Venue & SSIM & FID & SSIM & FID \\ \hline
\multirow{4}{*}{30-40\%} & \textit{T}-former \cite{tformer} & ACMMM'22 & \textbf{0.945} & \textbf{3.88} & 0.846 & \textit{26.56} \\
& SRInpaintor \cite{srinpaintor} & IEEE-TCI'22 & 0.943 & 5.70 & \textit{0.862} & \textbf{11.24} \\
& HAN \cite{han} & ECCV'22 & \textit{0.945} & \textit{3.93} & 0.839 & 28.85 \\
& APT \cite{apt} & ACMMM'22 & - & - & \textbf{0.912} & - \\
\hline
\multirow{5}{*}{40-60\%} & ICT \cite{ict} & ICCV'21 & - & - & 0.739 & 34.21 \\
& BAT-Fill \cite{batfill} & ACMMM'21 & 0.834 & \textbf{12.50} & 0.704 & \textbf{32.55} \\
& PLSA \cite{plsa_vqgan} & HDIS'22 & - & - & \textbf{0.801} & \textit{33.14}\\
& MAT \cite{mat} & CVPR'22 & \textit{0.847} & 13.12 & 0.726 & 35.81 \\
& NDMAL \cite{ndmal} & WACV'23 & \textbf{0.858} & \textit{12.90} & \textit{0.776} & 37.89 \\
\hline
\end{tabular}
\label{tab:results_inpainting}
\end{table}
\subsection{Image Inpainting and Outpainting}
Image inpainting/completion aims to fill the cracks and holes in images and is considered an application of image-to-image translation. CNNs tend to struggle to understand global structures for image completion due to some inherent characteristics (e.g., spatially invariant kernels and a local inductive prior). Recently, transformers have also been utilized for image inpainting as they can capture global relationships.
In 2021, Wang et al. performed image inpainting using an automatic consecutive context perceived transformer GAN (ACCP-GAN) \cite{accp_gan}. In the first stage, the broken areas are detected and repaired roughly using convolution and gated convolution-based sub-networks. In the second stage, the generated rough patches are refined by exploiting the serial perceive transformer, which also exploits information from neighboring images. ACCP-GAN is able to achieve an FID of 47.2252 on the N7 dataset for image inpainting. The transformer is combined with a CNN in ICT \cite{ict} by first modeling the pluralistic coherent structures and some coarse textures using a transformer and then enhancing the local texture details using a CNN. Impressive results on the large-scale ImageNet dataset are obtained using ICT for image completion.
Recent methods exploit masking with transformers \cite{mat}, \cite{zits}.
A dynamic mask based attention module is utilized in the Mask-Aware Transformer (MAT) \cite{mat}, which aggregates non-local information only from partial valid tokens. MAT uses the transformer block between a convolutional head and a convolutional tail. MAT shows outstanding performance for high-resolution image inpainting. A Zero-initialized Residual Addition based Incremental Transformer Structure (ZITS) is proposed in \cite{zits} for image inpainting. ZITS exploits orthogonal positional encoding in the masked regions. Impressive performance is reported using ZITS.
An hourglass attention network (HAN) with a Laplace attention based transformer is proposed for image inpainting in \cite{han}. In 2022, \textit{T}-former was developed for image inpainting by Deng et al. \cite{tformer}. \textit{T}-former follows a linear attention based on the Taylor expansion to reduce the computational complexity of self-attention for images. SRInpaintor performs inpainting using a transformer by exploiting super-resolution characteristics \cite{srinpaintor}.
Very recently, Phutke and Murala \cite{ndmal} proposed a nested deformable attention based transformer (NDMAL) for face image inpainting. The multi-head attention used in \cite{ndmal} incorporates a deformable convolution, leading to an efficient transformer model. Other Transformer-based GAN methods for image inpainting include an atrous pyramid transformer and spectral convolution based model \cite{apt}, a gated convolution and Transformer-based model \cite{generative_image_inpainting}, and visual transformers with multi-scale patch partitioning \cite{mspp}. Transformers have also been utilized for pluralistic image inpainting/completion \cite{ict}, \cite{plsa_vqgan}. Swin Transformer-based models are used for image inpainting in \cite{sfiswin}, \cite{cswin_transformer}.
The results comparison in terms of SSIM and FID is performed in Table \ref{tab:results_inpainting} on the CelebA-HQ \cite{progressivegan} and Places2 \cite{places2} datasets using different transformer driven GAN models for 30-40\% and 40-60\% masks. Higher SSIM and lower FID represent better performance. It is noticed that \textit{T}-former \cite{tformer} performs best on the CelebA-HQ dataset for the 30-40\% mask. However, NDMAL \cite{ndmal} shows the highest SSIM for the 40-60\% mask on the CelebA-HQ dataset. On the Places2 dataset, APT \cite{apt} and PLSA \cite{plsa_vqgan} lead to the highest SSIM for the 30-40\% and 40-60\% masks, respectively. It is also observed that BAT-Fill \cite{batfill} is a better model for maintaining a lower FID score across both datasets.
Similar to image inpainting, image outpainting is also performed as image-to-image translation, where the outer area is reconstructed. Transformer-based GANs have also been investigated for image outpainting, such as U-Transformer \cite{utransformer_outpainting} and the ViT-based Hint method \cite{hint}. U-Transformer exploits Swin transformer blocks in a U-shaped encoder-decoder \cite{utransformer_outpainting}. The Hint method attaches a representative hint at the boundaries and converts the outpainting problem into an inpainting problem \cite{hint}. The representative hint is generated using a ViT on different patches in \cite{hint}.
\subsection{Image Colorization}
Image colorization is performed as image-to-image translation. Recently, a few Transformer-based GANs have shown promising performance for image colorization. A color memory powered hybrid-attention transformer (ColorFormer) is introduced by Ji et al. \cite{color_former} for image colorization. The encoder of ColorFormer is a global-local hybrid attention based transformer. The decoder of ColorFormer utilizes a color memory decoder for image-adaptive queries through semantic-color mapping. A stop-gradient attention (SGA) mechanism is utilized by Li et al. \cite{sga}, which removes the conflicting gradient and becomes better suited for reference-based line-art colorization. Impressive results are obtained on four datasets. SGA is also utilized for Anime line drawing colorization \cite{sga_ext}. A dual decoder based DDColor model, consisting of a transformer-based color decoder and a multi-scale image decoder, is investigated for image colorization in \cite{ddcolor}. Plausible image colorization results are obtained using the DDColor model.
A cycle swin transformer GAN (CSTGAN) is proposed in \cite{cstgan} for colorizing infrared images in an unpaired scenario. CSTGAN combines two swin transformer blocks with one convolution block in a module and connects such modules with skip connections in the generator network.
\subsection{Medical Image Synthesis}
Li et al. \cite{slmtnet} performed cross-modality MR image synthesis using a self-supervised learning based multi-scale transformer network (SLMT-Net). A pretrained ViT encoder using edge information is followed by a multi-scale transformer U-Net to produce the image in target modality. Better PSNR scores are obtained using the SLMT-Net model for MR image synthesis.
A transformer generator is used to enhance the input image with global information, followed by a CNN generator, in the TCGAN model \cite{tcgan}. Basically, TCGAN takes a positron emission tomography (PET) image as the input and synthesizes the corresponding computed tomography (CT) image as the output. In another work, Zhao et al. \cite{rtcgan} synthesize the corresponding CT image from the MR image using a Residual Transformer Conditional GAN (RTCGAN). RTCGAN encodes the local texture information using a CNN and the global correlation using a Transformer. RTCGAN achieves an impressive SSIM of 0.9105. A cycle-consistent Transformer (CyTran) model utilizing a convolutional transformer block is used to synthesize contrast CT images from the corresponding non-contrast CT images \cite{cytran}.
The fundus retinal images are translated into corresponding angiogram images in VTGAN \cite{vtgan}, where two vision transformers are utilized as the discriminators-cum-classifiers for coarse and fine images, respectively. Basically, the vision transformer in VTGAN discriminates between the real and fake samples and at the same time outputs the normal vs. abnormal class labels. Different source-target modality configurations are combined into a ResViT model in \cite{resvit}. ResViT utilizes an Encoder-Decoder based generator which uses residual convolutional and Transformer-based building blocks. ResViT enjoys the precision of convolution operators along with the contextual sensitivity of vision transformers. ResViT is demonstrated for the synthesis of missing frames in multi-contrast MRI, and of CT images from MRI. A multi-view transformer-based generator exploiting a cross-view attention mechanism is introduced in \cite{mvt} for cardiac cine MRI reconstruction. The multi-view transformer model \cite{mvt} is able to focus on the important regions in different views for the reconstruction.
\begin{table*}[!t]
\caption{A summary of Transformer-based GANs for video applications.}
\centering
\begin{tabular}{p{0.18\columnwidth}|p{0.15\columnwidth}|p{0.5\columnwidth}|p{0.25\columnwidth}|p{0.36\columnwidth}|p{0.33\columnwidth}}
\hline
\textbf{Model} & \textbf{Venue} & \textbf{Generator} & \textbf{Discriminator} & \textbf{Objective Function} & \textbf{Application \& Datasets} \\
\hline
TATS \cite{tats} & ECCV'22 & Generator with time-agnostic 3D VQGAN and time-sensitive Transformer & Two discriminators: a spatial discriminator and a temporal discriminator & VQGAN: Adversarial loss, matching loss, reconstruction loss, codebook loss, commit loss, Transformer: Negative log-likelihood & Video generation: UCF-101, Sky Time-lapse and Taichi-HD \\ \hline
MAGViT \cite{magvit} & arXiv'22 & Generator consisting of the 3D-VQ Encoder, Bidirectional Transformer and 3D-VQ Decoder & StyleGAN-based 3D discriminator & GAN loss, image perceptual loss, LeCam regularization, reconstruction loss, refine loss, and masking loss & Video generation: Multi-task with 10 tasks including prediction, interpolation, inpainting, and outpainting \\ \hline
ActFormer \cite{actformer} & arXiv'22 & Generator having action-conditioned interaction and temporal transformers & Graph Convolutional Network & Conditional Wasserstein GAN loss & Motion generation: NTU-13, NTU RGB+D 120, BABEL, and GTA Combat \\ \hline
FuseFormer \cite{fuseformer} & ICCV'21 & A sub-token fusion enabled Transformer with a soft split and composition method with CNN encoder and decoder & CNN-based video discriminator & Adversarial loss and reconstruction loss & Video inpainting: DAVIS and YouTube-VOS \\ \hline
Style Transformer \cite{aast} & IEEE-TMM'22 & Generator with a deep encoder, axial attention block, transformer, and decoder & Temporal PatchGAN-based discriminator & Adversarial loss, L1 loss, and reconstruction loss & Video inpainting: DAVIS and YouTube-VOS \\ \hline
DeViT \cite{devit} & ACMMM'22 & Generator with Encoder, Patch-based deformed Transformer, and decoder & Temporal PatchGAN-based discriminator & Adversarial loss and reconstruction loss on hole and valid pixels & Video inpainting: DAVIS and YouTube-VOS \\ \hline
FGT \cite{fgt} & ECCV'22 & Generator with flow-guided content propagation, spatial and temporal Transformers & Temporal PatchGAN-based discriminator & Adversarial loss and reconstruction loss & Video inpainting: DAVIS and YouTube-VOS \\ \hline
FGT++ \cite{fgt_extended} & arXiv'23 & FGT with flow-guided feature integration and flow-guided feature propagation modules & Temporal PatchGAN-based discriminator & Adversarial loss, spatial domain reconstruction loss and amplitude loss & Video inpainting: DAVIS and YouTube-VOS \\ \hline
CT-D2GAN \cite{ct-d2gan} & ACMMM'21 & Convolutional transformer with encoder, temporal self-attention module and decoder & Two discriminators: 2D Conv and 3D Conv-based discriminators & Adversarial loss and pixel-wise L1 loss & Video anomaly detection: UCSD Ped2, CUHK Avenue, and ShanghaiTech Campus dataset \\ \hline
Trajectory Transformer \cite{trajectory_prediction} & IJIS'21 & Transformer with multi-head convolutional self‐attention & Discriminator with a decoder and prediction module & Adversarial loss and L2 loss & Trajectory prediction: ETH and UCY datasets \\ \hline
Bidirectional Transformer GAN \cite{btgan} & ACM-TMCCA'23 & Transformer-based motion generator for forward and backward processing & Frame-based and Sequence-based discriminators & Adversarial loss, inverse loss, and soft dynamic time warping (Soft-DTW) loss & Human motion prediction: public Human3.6M dataset \\ \hline
MaskViT \cite{maskvit} & ICLR'23 & Generator with VQGAN and Bidirectional window transformer for variable percentage masked tokens prediction & VQ-GAN discriminator & Adversarial loss, perceptual loss and reconstruction loss & Video prediction: BAIR, RoboNet and KITTI datasets \\ \hline
Recurrent Transformer Network \cite{rtn} & CVPR'22 & A bi-directional RNN architecture having temporal aggregation module in masked encoder and flow features and spatial restoration transformer followed by Conv layers & Temporal PatchGAN discriminator & Spatial-temporal adversarial loss, L1 loss and perceptual loss & Video colorization: DAVIS and REDS dataset\\
\hline
\end{tabular}
\label{tab:video_applications}
\end{table*}
\subsection{Other Image-to-Image Translation Applications}
Transformer-based GANs are also utilized for various other image translation applications. A Transformer-based CycleGAN is proposed in \cite{transformer_liver_segmentation} for liver tumor segmentation. A CASTformer is proposed in \cite{castformer} for 2D medical image segmentation, which is a class-aware transformer for learning the important object semantic regions. Trans-CycleGAN is developed in \cite{transcyclegan} by utilizing Transformer-based generator and discriminator networks for image style transfer.
A symmetric and semantic aware transformer (SSAT) is exploited in \cite{ssat} for makeup transfer and removal by learning the semantic correspondences. Pose guided human image synthesis (PGHIS) is performed in \cite{pghis} by exploiting a transformer module between the encoder and decoder networks. The synthesis using PGHIS is performed on the decoupled human body parts (e.g., face, hair, feet, hands, etc.). Pedestrian synthesis is performed using a pose and color-gamut guided GAN (PC-GAN) in \cite{pc-gan}. The generator in the PC-GAN consists of a local displacement estimator, a color-gamut transformer, and a pose transporter. PC-GAN is used to improve the performance of person re-identification by augmenting the training set.
In \cite{asset}, a Transformer-based autoregressive semantic scene editing (ASSET) method is developed for high-resolution images based on the user's defined semantic map edits. ASSET uses dense attention in the transformer at lower image resolutions and sparsifies the attention matrix at high resolutions. Xu et al. proposed a Transformer-based GAN, named TransEditor, for facial editing \cite{transeditor}. TransEditor exploits dual-space editing and inversion strategy to enhance the interaction in a dual-space GAN to provide additional editing flexibility. The results indicate that TransEditor is effective for highly controllable facial editing.
In 2023, a cycle transformer GAN (CTrGAN) was introduced by Mahpod et al. \cite{ctrgan} for gait transfer, having Transformer-based generator and discriminator networks. The source image is first converted into a dense pose, which is then translated into the most natural dense pose of the target using CTrGAN. Finally, the image of the target is synthesized by rendering the generated pose. CTrGAN shows promising results for gait transfer.
FusionGAN performs depth estimation using a Transformer-based GAN as multimodal image translation \cite{fusiongan}. FusionGAN utilizes a late fusion of the features of the transformer block with the sparse depth map and the RGB image, which is followed by a residual-in-residual dense block and a convolution layer to produce the dense depth map.
\subsection{Summary}
Following is the summary drawn from the Transformer-based GANs for image-to-image translation:
\begin{itemize}
\item It is noticed that both global and local contexts play important roles in image-to-image translation. Hence, the majority of the methods exploit both Transformer and Convolution operations in the generator network.
\item The generator generally consists of Encoder, Transformer and Decoder modules. Vision transformer based architectures are heavily utilized. Some methods also try to modify the self-attention module with task specific information.
\item CNN-based discriminators, such as the PatchGAN discriminator, are mostly exploited. Some regularizers, such as the gradient penalty and spectral normalization, are also used by a few models.
\item The L1 loss, perceptual loss, reconstruction loss, and style loss are commonly used in the objective function along with the adversarial loss; a minimal sketch of such a composite objective is given after this list.
\item Transformer-based GANs have shown state-of-the-art results for several image-to-image translation applications, such as image translation, image reconstruction, image restoration, image enhancement, image super-resolution, image inpainting \& outpainting, image colorization, medical image synthesis, image segmentation, human and pose synthesis, image editing and depth prediction.
\end{itemize}
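As a concrete illustration of such a composite objective, the sketch below combines an adversarial term against a (Patch)GAN discriminator with L1 and perceptual terms. The loss weights, the untrained VGG-16 slice used as the perceptual feature extractor, and the \texttt{disc}, \texttt{fake}, and \texttt{target} arguments are illustrative assumptions rather than choices taken from any surveyed paper.
\begin{verbatim}
# Illustrative sketch only; weights and the perceptual network are assumptions.
import torch
import torch.nn.functional as F
import torchvision

# Frozen (untrained) VGG-16 slice as a stand-in perceptual feature extractor
_vgg = torchvision.models.vgg16(weights=None).features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def generator_loss(disc, fake, target, w_adv=1.0, w_l1=100.0, w_perc=10.0):
    # Adversarial term against a (Patch)GAN discriminator producing logits
    logits = disc(fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # Pixel-wise reconstruction (L1) term
    l1 = F.l1_loss(fake, target)
    # Perceptual term computed in VGG feature space
    perc = F.l1_loss(_vgg(fake), _vgg(target))
    return w_adv * adv + w_l1 * l1 + w_perc * perc
\end{verbatim}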
\section{Transformer-based GANs for Video Applications}
In recent years, several attempts have been made to utilize the Transformer-based GANs for different video processing applications. A summary on Transformer-based GANs is presented in Table \ref{tab:video_applications} for different video applications in terms of generator network, discriminator network, objective function, application and datasets.
\subsection{Video Synthesis}
In 2022, Ge et al. \cite{tats} synthesized the longer videos of thousands of frames using Time-Agnostic VQGAN and Time-Sensitive Transformer (TATS). The hierarchical transformer in TATS helps to capture longer temporal dependencies. The experiments on benchmark UCF-101, Sky Time-lapse, and Taichi-HD datasets confirm the suitability of TATS \cite{tats} for generating the longer videos.
Yu et al. proposed a Masked Generative Video Transformer (MAGViT) \cite{magvit} by first 3D tokenizing the video and then applying an embedding method to model the masked video tokens. MAGViT is able to perform diverse video generation across different visual appearances.
In 2022, Xu et al. proposed ActFormer model to generate the action-conditioned 3D human motion from a latent vector for the frames of single-person as well as multi-person interactions \cite{actformer}.
\subsection{Video Translation}
\subsubsection{Video Inpainting}
Video inpainting is a very important application of video translation. In 2021, Liu et al. proposed a Transformer-based FuseFormer model for video inpainting \cite{fuseformer}. FuseFormer exploits soft split and composition method to perform sub-token fusion. In 2022, a deformed vision transformer (DeViT) is proposed for video inpainting \cite{devit}. Deformed patch homography based patch alignment and mask pruning based patch attention are exploited by DeViT. The attention to spatial-temporal tokens is obtained by a spatial-temporal weighting adaptor module in \cite{devit}. A generator, consisting of deep encoder, axial attention block, transformer, and decoder, is utilized in Axial Attention-based Style Transformer (AAST) \cite{aast} for video inpainting. The transformer in \cite{aast} exploits the high-frequency patch information between temporal and spatial features. Flow guided transformer (FGT) is also utilized for video inpainting by integrating the completed optical flow into the transformers \cite{fgt}, \cite{fgt_extended}.
The results comparison is performed in Table \ref{tab:results_video_inpainting} for video inpainting using transformer-based GAN models on YouTube-VOS \cite{youtube-vos} and DAVIS \cite{davis} datasets in terms of PSNR and SSIM. FGT++* with flow-guided content propagation \cite{fgt_extended} performs best for video inpainting. It is observed that the utilization of flow information with the transformer is very beneficial for video inpainting.
\subsubsection{Other Video Translation Applications}
Feng et al. proposed a CT-D2GAN model for future frame synthesis \cite{ct-d2gan}, consisting of a convolutional transformer generator, a 2D convolutional discriminator and a 3D convolutional discriminator. The generator in CT-D2GAN is made of a convolutional encoder followed by a temporal self-attention block followed by a convolutional decoder. Very recently, MaskViT was introduced for future frame prediction in videos \cite{maskvit}. MaskViT first converts the video frames into tokens using VQGAN, then randomly masks some tokens of the future frames and performs pre-training of a Bidirectional Window Transformer for masked token prediction. At inference time, frames are predicted in an iterative refinement fashion with an incrementally decreasing masking ratio. A recurrent transformer network (RTN) is introduced in \cite{rtn} for the restoration of degraded old films. RTN exploits the useful information from adjacent frames to ensure temporal coherency and restore challenging artifacts.
A bidirectional transformer GAN (BTGAN) is proposed in \cite{btgan} for human motion generation. BTGAN follows the CycleGAN framework with motion sequences as input to the generator and predicted motion sequences as output. A new soft dynamic time warping (Soft-DTW) loss is utilized by BTGAN for training the generator.
Transformer-based GANs are also utilized for pedestrian trajectory prediction in \cite{trajectory_prediction} by learning the pedestrian distribution to generate more reasonable future trajectories.
\begin{table}[!t]
\caption{The results comparison of Transformer-based GANs for video inpainting on YouTube-VOS \cite{youtube-vos} and DAVIS \cite{davis} datasets. The use of flow-guided content propagation is represented by *. }
\centering
\begin{tabular}{p{0.23\columnwidth} p{0.2\columnwidth} p{0.07\columnwidth} p{0.07\columnwidth} p{0.07\columnwidth} p{0.07\columnwidth}}
\hline
& & \multicolumn{2}{c}{Youtube-VOS} & \multicolumn{2}{c}{DAVIS} \\
Method & Venue & PSNR & SSIM & PSNR & SSIM \\ \hline
FuseFormer \cite{fuseformer} & ICCV'21 & 33.16 & 0.967 & 32.54 & 0.970 \\
DeViT \cite{devit} & ACMMM'22 & 33.42 & 0.973 & 32.43 & 0.972\\
AAST \cite{aast} & IEEE-TMM'22 & 33.23 & 0.967 & 32.71 & 0.972 \\
FGT \cite{fgt} & ECCV'22 & 34.04 & 0.971 & 32.60 & 0.965 \\
FGT* \cite{fgt} & ECCV'22 & 34.53 & \textit{0.976} & \textit{33.41} & \textit{0.974} \\
FGT++ \cite{fgt_extended} & arXiv'23 & \textit{35.02} & \textit{0.976} & 33.18 & 0.971 \\
FGT++* \cite{fgt_extended} & arXiv'23 & \textbf{35.36} & \textbf{0.978} & \textbf{33.72} & \textbf{0.976} \\
\hline
\end{tabular}
\label{tab:results_video_inpainting}
\end{table}
\subsection{Summary}
Following is the summary on Transformer-based GANs for video applications:
\begin{itemize}
\item Most of the models convert video into tokens using VQGAN-based Encoder and then apply the spatial and/or temporal transformers.
\item The flow information is also exploited and utilized at different levels in the generator of some models.
\item A temporal PatchGAN discriminator is utilized by a majority of the models, while convolutional discriminators are also used by some models; a minimal sketch of such a discriminator is given after this list.
\item The objective function usually contains adversarial loss, reconstruction loss, L1 loss and perceptual loss.
\item Transformer-based GANs have shown promising performance for different video applications, such as video generation, video inpainting, video prediction, video anomaly detection and video colorization.
\item It is noticed that the flow-guided feature integration and propagation is very important with transformers for video inpainting.
\end{itemize}
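As a concrete illustration of such a temporal PatchGAN discriminator, the sketch below stacks 3D convolutions so that each output logit judges a spatio-temporal patch of the input clip as real or fake. The layer configuration is an assumption made for illustration and does not reproduce any surveyed model.
\begin{verbatim}
# Illustrative sketch only; the layer configuration is an assumption.
import torch
import torch.nn as nn

class TemporalPatchGAN(nn.Module):
    def __init__(self, channels=3, base=64):
        super().__init__()
        def block(cin, cout, stride):
            return nn.Sequential(
                nn.Conv3d(cin, cout, kernel_size=(3, 4, 4),
                          stride=stride, padding=(1, 1, 1)),
                nn.LeakyReLU(0.2, inplace=True))
        self.net = nn.Sequential(
            block(channels, base, (1, 2, 2)),       # downsample space only
            block(base, base * 2, (1, 2, 2)),
            block(base * 2, base * 4, (1, 2, 2)),
            nn.Conv3d(base * 4, 1, kernel_size=(3, 4, 4),
                      stride=(1, 1, 1), padding=(1, 1, 1)))

    def forward(self, video):          # video: (B, C, T, H, W)
        return self.net(video)         # per spatio-temporal patch logits
\end{verbatim}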
\section{Transformer-based GANs for Miscellaneous Applications}
Apart from image and video synthesis, Transformer-based GANs have also been exploited for other applications, such as text-to-image generation, hyperspectral image classification, document image enhancement, etc.
\subsection{Text-to-Image Generation}
Transformer-based GANs have been utilized for text-to-image generation \cite{transformer_attngan}, \cite{dsegan}, \cite{layout_vqgan}, \cite{muse}. Naveen et al. performed text-to-image generation experiments with AttnGAN using different Transformer models such as BERT, GPT2, and XLNet \cite{transformer_attngan}. An improvement of 49.9\% in FID is observed in \cite{transformer_attngan}. Huang et al. proposed a dynamical semantic evolution GAN (DSE-GAN) \cite{dsegan} for text-to-image generation. The generator of DSE-GAN is composed of a single adversarial multi-stage structure having a series of grid-Transformer-based generative blocks which are weighted by a series of DSE modules at different stages. An object-guided joint decoding transformer is introduced by Wu et al. \cite{layout_vqgan} for generating the image and the corresponding layout from the text. The layout is encoded and decoded using a Layout-VQGAN to extract additional useful features for the synthesis of complex scenes. In 2023, a masked generative transformer, named Muse, was investigated for text-to-image generation \cite{muse}. Muse is trained on a masked modeling task to predict randomly masked image tokens using a pre-trained large language model (LLM). Muse shows very promising performance on the benchmark datasets, including an FID of 7.88 on zero-shot COCO evaluation using a 3B parameter model and an FID of 6.06 on the CC3M dataset using a 900M parameter model.
\subsection{Hyperspectral Image Classification}
The ViT-based generator, discriminator, and classifier networks are utilized in the HyperViTGAN model \cite{hypervitgan} for hyperspectral image (HSI) classification. The classifier network classifies the input HSI patch into one of the class categories, whereas the discriminator network classifies the input HSI patch into the real or fake category. Improved results on three benchmark HSI datasets are obtained by HyperViTGAN. Similarly, a transformer-based generator network is used for synthesizing class-specific HSI patches in \cite{transgan_hsi} to alleviate the data imbalance problem in HSI classification.
\subsection{Other Applications}
Kodym and Hradiš \cite{tg2} generated a high-quality text line image from the corresponding degraded low-quality text line image and its transcription as input. The low-quality image is translated into the high-quality image using an Encoder-Decoder network. The transcription information is aligned with the output of the Encoder using a transformer consisting of multi-head attention modules. Promising performance is recorded for document image restoration tasks, including inpainting, debinarization, and deblurring.
A transformer model is used for anomaly detection in \cite{transformer_based_gan} by exploiting the holistic features of different classes. In order to encode global semantic information, self-attention modules are used in the generator of the GAN in \cite{transformer_based_gan}.
A cross-modal Transformer-based GAN (CT-GAN) is utilized in \cite{ct-gan} for synthesizing the multi-modal brain connectivity. Basically, CT-GAN fuses the structural information and functional information from different imaging modalities. Two decoders are used to extract the functional connectivity and structural connectivity from the fused connectivity. Two discriminators are also employed in \cite{ct-gan} corresponding to two decoders.
\subsection{Summary}
Overall, it is noticed that Transformer-based GANs are very successful for different image applications, including text-to-image generation, hyperspectral image classification, document analysis, anomaly detection, etc. The use of Transformer-based GANs is still limited for many computer vision applications, leaving huge scope for future research in this domain.
\section{Conclusion and Future Directions}
\label{conclusion}
\subsection{Conclusion and Trend}
This paper presents a comprehensive survey of Transformer-based Generative Adversarial Networks (GANs) for computer vision applications. As most of the Transformer-based GANs are very recent, the progress and advancements of the past few years, i.e., 2020-2023, are presented. The survey is conducted for different types of applications, including image generation, image-to-image translation, video generation, video translation and other computer vision applications. A detailed discussion is provided with further application-specific categorization, such as translation, restoration, reconstruction, inpainting, super-resolution, etc. A summary of different Transformer-based GAN models is presented in terms of generator architecture, discriminator architecture, objective functions, applications and datasets. A performance analysis of the state-of-the-art Transformer-based GAN models is also presented in terms of FID/PSNR/SSIM for different applications, such as image generation, high-resolution image generation, PET image reconstruction, low-light image enhancement, image inpainting and video inpainting.
The research trend in image and video synthesis points out that Transformer-based GANs are the latest driving factor in the progress of computer vision methods and applications. Generators built with the recently developed Transformer, Vision Transformer and Swin Transformer models have shown superior performance for image and video synthesis. Several models exploit both local and global context by combining Convolution and Transformer networks. For translation tasks, a U-shaped generator with a Convolution-based Encoder followed by a Transformer followed by a Convolution-based Decoder is very well exploited. The StyleGAN with Swin Transformer has shown outstanding performance for image generation. For video translation tasks, the flow information is exploited with Transformers. Mostly, Transformer-based GANs rely on CNN-based discriminators; however, a few models have also tried to exploit Transformers for the discriminator. The utilization of suitable loss functions along with the adversarial loss is also a recent trend to improve the synthesis quality, perceptual quality, etc. Other trends include the prediction of masked tokens, the design of novel attention modules, and the utilization of cross-attention and convolutional transformers.
\subsection{Future Directions}
Future work on Transformer-based GANs includes the development of generator and discriminator architectures for different applications. There is huge scope to find better ways to fuse Convolution and Transformer features in order to exploit both local and global information. Modification of the self-attention module with task-specific characteristics is also a potential future direction. Another direction involves advancing Vision and Swin Transformers by exploiting the power of state-of-the-art GAN models, such as StyleGAN. The masking-based pre-training of transformer models can also be further explored. A major future aspect is to utilize Transformer-based GANs for different image and video processing applications in computer vision. Exploitation of flow information with Transformers can be explored further for video applications. The pre-training of Transformer-based GANs for diverse applications using large-scale datasets, the development of light-weight models, and the utilization of self-supervision offer further scope for progress. The development of better loss functions and the identification of suitable combinations of losses in the objective function can also be pursued in the future, along with the exploration of better hyperparameter settings to stabilize training and increase the generalization capability of Transformer-based GAN models.
{\small
\bibliographystyle{IEEEtran}
\section{Introduction}
The electric power grid is undergoing a massive transformation with the influx of new technologies, processes, and digitization to provide cleaner, smarter, affordable, and equitable electricity to everyone. Yet, at the same time, the grid is witnessing an unprecedented increase in threats caused by climate change and cyber attacks. Blackouts in California caused by heat waves and outages caused by hurricanes in Florida and Puerto Rico are some recent examples of massive power disruptions affecting millions of lives. Operating the grid under such ``dark sky'' events and, in general, ``intermittent sky'' days with uncertain renewable forecasts remains an ever-increasing challenge. Accurate and fast wide area analysis tools provide critical aid to grid operators and planners as they make decisions. However, with greater fidelity comes greater computational complexity.
Heterogeneous computing, such as combining \ac{cpu}s with \emph{hardware accelerators}, is becoming a dominant paradigm in the computing landscape. Spurred by demand from the video gaming industry, artificial intelligence and computer vision applications, Graphical Processing Units (\ac{gpu}s) are the most prevalent hardware accelerators today. \ac{gpu}s deliver high computational power at low cost through massive fine-grain parallelism.
The challenge for using \ac{gpu}s arises from their \ac{simd} architecture that imposes modeling and solver constraints. The \ac{simd} architecture is suitable for porting legacy CPU-based \textit{dense} vector and matrix operations. Legacy \textit{sparse} operations, which are used in current state-of-the-art \ac{acopf} analysis \cite{ONeill2012} cannot be recast in \ac{simd} terms without additional mathematical considerations.
In our prior work \cite{abhyankar2021acopf}, we discussed solving \ac{acopf} \cite{ONeill2012, Frank2012} on \ac{gpu}s with a dense solver, by compressing the sparse formulation that naturally arises from power systems. However, the computational complexity of dense linear solvers is $O(N^3)$ and their memory complexity is $O(N^2)$, where $N$ is the number of linear equations. For larger grid models, all performance gains on \ac{gpu} are offset by a cubic increase in computational cost and a quadratic increase in memory.
Here, we present our results on solving \emph{sparse} \ac{acopf} on \ac{gpu}s. In particular, we detail our efforts on accelerating the linear solver, a core component of \ac{acopf} that dominates the computational cost \cite{su2019full}. This exploration is motivated by our recent efforts on sparse linear solvers for \ac{acopf} problems \cite{swirydowicz2022linear, regev2022kkt}, as well as recent investigations of dynamic phasor simulations \cite{dinkelbach2021factorisation, razik2019comparative} and power flow simulations \cite{dorto2021comparing} on \ac{gpu}s.
The main contributions of this paper are:
\begin{itemize}
\item \textcolor{black}{First reported meaningful acceleration of \ac{acopf} on \ac{gpu}s, using our \emph{sparse} linear solver, demonstrated on large interconnection-level grid models and reproducible using the open-source ExaGO\textsuperscript{TM}\xspace package (Sec.~\ref{sec:gpu})}.
\item Performance analysis of \ac{acopf} on \ac{cpu}s (Sec.~\ref{sec:opf}) and \ac{gpu}s (Sec.~\ref{sec:gpu}) with key bottlenecks identified.
\item Performance projections and limitations of current technology and identifying research needed to achieve even better acceleration on \ac{gpu}s (Sec.~\ref{sec:conslusion}).
\end{itemize}
\section{State-of-the-Art Methods for Alternating Current Optimal Power Flow Analysis}\label{sec:opf}
\ac{acopf} falls in the category of nonlinear nonconvex optimization problems. In compact mathematical form it can be formulated as
\begin{align}
\underset{x}{\min}&~ F(x)& \label{eq:opfobj}\\
\text{s.t.}~~ & g(x) = 0 & \label{eq:opfeq}\\
h^{-}& \le h(x) \le h^{+}& \label{eq:opfineq}\\
x^{-}& \le x \le x^{+} & \label{eq:opfbounds}
\end{align}
where $F(x)$ is a scalar function defining the optimization objective, typically the total cost of generation; $x$ is the vector of optimization variables, such as generator power outputs;
vectors $x^{-},x^{+}$ define variable bounds, e.g.~resource capacity limits; vector function $g(x)$ defines equality constraints, like power balance; and vector function $h(x)$ defines security constraints with security limits given by vectors $h^{-},h^{+}$. The reader is referred to \cite{Zimmerman2011,ONeill2012} for comprehensive details on the \ac{acopf} formulation.
The optimization problem (\ref{eq:opfobj})-(\ref{eq:opfbounds}) is typically solved in an iterative manner using an interior-point method \cite{wachter2006implementation}. To keep our presentation streamlined and without loss of generality, we rewrite (\ref{eq:opfobj})-(\ref{eq:opfbounds}) in a more compact form
\begin{align}
\underset{y}{\min}~& f(y)& \label{eq:opfcompactobj}\\
\text{s.t.}~~& c(y) = 0& \label{eq:opfcompacteq}\\
& y \ge 0& \label{eq:opfcompactbounds}
\end{align}
Here we convert (\ref{eq:opfineq}) to equality constraints
by adding slack variables $s',s'' \ge 0$ satisfying $h(x)-h^- - s' = 0$ and $h(x)-h^+ + s'' = 0$. Bounds (\ref{eq:opfbounds}) are expressed in form (\ref{eq:opfcompactbounds}) by introducing variables $x',x'' \ge 0$ such that $x - x^- = x'$ and $x^+ - x = x''$. In the compact form, the vector of optimization primal variables is $y=(x',x'',s',s'') \in \mathbb{R}^n$ and $f: \mathbb{R}^n \to \mathbb{R}$ is the scalar objective function $F(x)$ expressed in the space of primal variables $y$. All constraints are collected in $c: \mathbb{R}^n \to \mathbb{R}^m$, where $m$ is the number of equality and inequality constraints.
To enforce variable bounds, the objective function (\ref{eq:opfcompactobj}) is augmented with barrier functions as
\begin{equation}
\phi(y) = f(y) - \mu \sum_{i=1}^n \ln y_i.
\end{equation}
The solution to the augmented problem then can be obtained by solving the nonlinear equations
\begin{align}
\nabla{f(y)} + \nabla{c(y)}\lambda - \mu Y^{-1}e &=0, \label{eq:gradient}\\
c(y) & = 0, \label{eq:constraint}
\end{align}
where $J\equiv \nabla{c(y)}$ is the $n \times m$ matrix of constraint gradients (the transpose of the constraint Jacobian), $\lambda \in \mathbb{R}^m$ is the vector of constraints Lagrange multipliers (dual variables), $Y \equiv \text{diag}(y)$, and $e \in \mathbb{R}^n$ is a vector of ones.
The augmented system (\ref{eq:gradient})-(\ref{eq:constraint}) is solved using a Newton method, and the solution to the original optimization problem is obtained by a continuation algorithm setting $\mu \to 0$. As it searches for the optimal solution, the interior point method solves a sequence of linearized systems $K_k \Delta x_k = r_k$, $k=1,2,\dots$ of Karush-Kuhn-Tucker (KKT) type
\begin{align} \label{eq:kktlinear}
\overbrace{\begin{bmatrix}
H + D_y & J
\\ J^T & 0
\end{bmatrix}}^{K_k}
\overbrace{\begin{bmatrix}
\Delta y \\
\Delta \lambda
\end{bmatrix}}^{\Delta x_k}=
\overbrace{\begin{bmatrix}
r_{y} \\ r_{\lambda}
\end{bmatrix}}^{r_k},
\end{align}
where index $k$ denotes the optimization solver iteration (including continuation step in $\mu$ and Newton iterations), $K_k$ is a system (KKT) matrix, vector $\Delta x_k$ is a search direction for the primal and dual variables, and $r_k$ is the residual vector for (\ref{eq:gradient})-(\ref{eq:constraint}) evaluated at the current value of the primal and dual variables.
The Hessian
\begin{equation*}
H \equiv \nabla^2 f(y) + \sum_{i=1}^{m} \lambda_{i} \nabla^2 c_i(y),
\end{equation*}
is a sparse symmetric $n \times n$ matrix and
$D_y \equiv \mu Y^{-2}$ is a diagonal $n \times n$ matrix.
Matrices $K_k$ in (\ref{eq:kktlinear}) are sparse symmetric indefinite. All $K_k$ have the \textit{same sparsity pattern}, a property that can be exploited by a solver. Characteristic for ACOPF is that $K_k$ generated in the analysis are \textit{very sparse with an irregular non-zero pattern} without dense blocks \cite{deAraujo2013}.
When the barrier parameter $\mu=0$, the matrix in (\ref{eq:kktlinear}) is singular. The continuation algorithm driving $\mu \to 0$ therefore needs to exit when the solution to the augmented system is close enough to the solution of the original system, but before (\ref{eq:kktlinear}) becomes too ill-conditioned. Typically, KKT matrices are characterized by \textit{extremely high condition numbers} ($>10^{16}$), which means that solving these systems in a naive way (e.g., without pivoting and equilibration) would result in solutions whose errors far exceed double-precision floating point accuracy.
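For illustration, the following minimal sketch assembles a matrix with the block structure of (\ref{eq:kktlinear}) using SciPy sparse matrices. The toy dimensions and random entries are placeholder assumptions standing in for the actual Hessian, constraint gradients, and barrier terms produced during an interior-point iteration.
\begin{verbatim}
# Illustrative sketch only; toy sizes and random data are assumptions.
import numpy as np
import scipy.sparse as sp

n, m = 8, 3                                   # toy primal/dual dimensions
H = sp.random(n, n, density=0.3, random_state=0)
H = ((H + H.T) * 0.5).tocsr()                 # symmetric Hessian-like block
J = sp.random(n, m, density=0.4, random_state=1)  # J = grad c(y), n x m
d_y = sp.diags(np.full(n, 1e-2))              # D_y = mu * Y^{-2} (diagonal)

# Symmetric indefinite KKT matrix with an empty (2,2) block
K = sp.bmat([[H + d_y, J],
             [J.T,     None]], format="csr")
print(K.shape, K.nnz)                         # (n+m, n+m); very sparse
\end{verbatim}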
Only a few linear solvers have been effective for this type of problem. These solvers are typically sparse direct solvers based on $\text{LDL}^\text{T}$ or $LU$ decomposition \cite{golub2013matrix}. State-of-the-art approaches usually employ KLU (for non-symmetric matrices) \cite{davis2010algorithm, razik2019comparative} or MA57~\cite{duff2004ma57}.
A sparse direct solver typically computes the solution to a linear system in three stages: ($i$) an analysis stage, where the system is equilibrated, column and row permutations are set, pivot order is established and the non-zero pattern of the factors is chosen; ($ii$) a numerical factorization stage, in which matrix factors are computed; and ($iii$) a solution stage, in which the factors computed in the previous stage are used for forward and backward triangular solves. Some solvers follow the third stage with iterative refinement~\cite{skeel1980iterative}. \textcolor{black}{The first stage is considered to be more expensive than the others, but if the matrix non-zero pattern does not change between the systems, as in (\ref{eq:kktlinear}), the analysis can be executed once and the results reused for all $K_k$ \cite{razik2019comparative,dorto2021comparing}. The numerical factorization stage typically requires pivoting for the computation to remain stable. Pivoting changes the sparsity pattern of the $L$ and $U$ factors and in some cases may even require memory reallocation. Therefore, while changes in the pivot sequence may not be a concern on \acp{cpu}, they degrade performance on \acp{gpu}, as we describe in Sec.~\ref{sec:gpu}.}
For \ac{acopf} problems, more than half of the overall computational cost is typically spent solving the KKT linear system. Within that, the largest cost is in matrix factorization (symbolic and numeric factorization) and, to a lesser degree, in triangular solves (backward-forward substitution). Other significant items are the evaluation of the derivative matrices -- the constraint Jacobian $J$ and the Hessian matrix $H$. We illustrate this in Fig. \ref{fig:ma57cost}, which shows the computational cost breakdown of \ac{acopf} for the 25,000-bus model of the Northeast U.S. grid and the 70,000-bus model of the Eastern U.S. grid, respectively. Table \ref{tab:tamu_cases} provides a description of the two synthetic grids evaluated.
\begin{figure}[h!]
\centering
{\includegraphics[width=0.8\columnwidth,trim={1.1cm 0.4cm 1.1cm 7.5cm},clip]{ma57.10k.pie}}
\begin{subfigure}{0.49\columnwidth}
{\includegraphics[width=\columnwidth,trim={3cm 2cm 3cm 0.1cm},clip]{ma57.25k.pie}}
\caption{Northeast U.S. grid}
\end{subfigure}
\begin{subfigure}{0.49\columnwidth}
{\includegraphics[width=\columnwidth,trim={3cm 2cm 3cm 0.1cm},clip]{ma57.70k.pie}}
\caption{Eastern U.S. grid}
\end{subfigure}
\caption{Computational cost of ACOPF broken down by solver functions. Linear solver functions (matrix factorization and triangular solve) contribute to 60\% of the overall ACOPF compute time.}
\label{fig:ma57cost}
\end{figure}
\begin{table}[h!]
\centering
\caption{Description of synthetic grid models from \cite{birchfield2017tamu-cases} used in performance analysis}
\begin{tabular}{|p{0.35\columnwidth}|p{0.1\columnwidth}|p{0.2\columnwidth}|p{0.15\columnwidth}|}
\hline
\textbf{Grid} & \textbf{Buses} & \textbf{Generators} & \textbf{Branches} \\ \hline
Northeast U.S. & 25,000 & 4,834 & 32,230\\ \hline
Eastern U.S. & 70,000 & 10,390 & 88,270\\ \hline
\end{tabular}
\label{tab:tamu_cases}
\end{table}
The analysis has been performed using the ExaGO\textsuperscript{TM}\xspace package \cite{ExaGo} with the HiOp\xspace optimization engine \cite{hiop_techrep} and the MA57 linear solver. Numerical matrix factorization takes roughly half of the total compute time in \ac{acopf}, while the triangular solve takes another 12\%. Model evaluation operations take about 30\% of the total compute time. All other operations (evaluation of the objective, objective gradient, infeasibility norm, etc.) in \ac{acopf} take less than 10\% of the overall time.
Achieving any speedup largely depends on how effectively the KKT linear system can be solved on a \ac{gpu}.
\section{Computing with graphical processing units}
\label{sec:gpu}
There has been a substantial effort in the scientific community devoted towards using \ac{gpu} acceleration to speed up sparse direct linear solvers, and the available software usually has at least some \ac{gpu} capabilities.
\textcolor{black}{However, solving the linear systems described herein on \acp{gpu} is particularly challenging: ($i$) Due to irregular and very sparse nature of power systems, the underlying linear problems do not have a structure that traditional multifrontal~\cite{duff1983multifrontal} and supernodal~\cite{demmel1999supernodal} approaches can use to form dense blocks where \ac{simd} operations can be performed. ($ii$) The underlying linear problems are ill-conditioned and require pivoting for numerical factorization stability. Pivoting degrades performance on \acp{gpu} because of conditional branching and considerable data movement.}
In \cite{swirydowicz2022linear}, we identified and tested several \ac{gpu} accelerated sparse direct solvers and concluded that none of them was faster than MA57, especially for the largest test cases.
These solvers only offload a part of the computations (e.g., matrix-matrix products or triangular solves) to the \ac{gpu}, while the rest of the computation happens on the \ac{cpu}. This results in a substantial overhead associated with allocating and freeing small chunks of \ac{gpu} memory and copying the data to and from the device. For our test cases, we also observed that factorization on the \ac{gpu} was particularly expensive.
\textcolor{black}{We use a refactorization approach to solve (\ref{eq:kktlinear}) on \ac{gpu} (see Algorithm~\ref{alg:kluglu}). A similar approach was successfully used for dynamic phasor \cite{razik2019comparative,dinkelbach2021factorisation} and power flow \cite{dorto2021comparing} simulations {\color{black}but not for \ac{acopf}. Our approach is the first we are aware of, where the sparse linear solvers in ACOPF analysis were accelerated using GPUs and outperformed state-of-the-art CPU solvers}. We replace the MA57 solver in our software stack with a solver we implemented using KLU and NVIDIA's \texttt{cuSolver} libraries. We factorize the first system on the \ac{cpu} using KLU, set the patterns of $L$ and $U$, and compute the permutation vectors. We also evaluated \texttt{cuSolver}'s methods for setting the patterns of $L$ and $U$, but found KLU to perform better. Once computed, we keep the sparsity pattern of the factors the same for the rest of the computation. }The non-zero structure of the factors and the permutation vectors are then copied to the \ac{gpu} and passed to \texttt{cuSolver}. The refactorization is set up, and each subsequent system is solved using \ac{gpu} refactorization, without a need for pivoting, which is prohibitively expensive on \ac{gpu}s.
\textcolor{black}{Using this method, all data movement and mapping are done only once on the \ac{cpu}, while the subsequent computations are implemented in terms of \ac{simd} operations on the \ac{gpu}.
The tradeoff is that the refactorization approach produces a lower-quality solution, so the solve stage may need to be followed by iterative refinement to recover the accuracy needed by the optimization method.}
\begin{algorithm*}[!h]
\caption{Refactorization Solution Strategy: KLU + \texttt{cuSolverGLU}}\label{alg:kluglu}
\hspace*{\algorithmicindent} \textbf{Input:} Sequence of linear systems $K_kx_k=b_k$, $k=1, 2, \ldots$ \\
\hspace*{\algorithmicindent} \textbf{Output:} Sequence of solution vectors $x_k$
\begin{algorithmic}[1]
\State{Use KLU to solve $K_1x_1=b_1$}
\State{Extract symbolic factorization and permutation vectors. \Comment{Using \texttt{klu\_extract}, returns L,U in CSC format}}
\State{Convert CSC to combined L+U CSR object.} \Comment{Up to here, computed on the \ac{cpu}}
\State{Setup \texttt{cuSolverGLU}}\Comment{\ac{gpu} computation starts.}
\For{$k=2\ldots$}
\State Update factorization object using new values from $K_k$.~~~~ \Comment{Using \texttt{cuSolverGLU} function}
\State Refactor.
\State Perform triangular solve for $b_k$. \Comment{Always followed by built-in iterative refinement.}
\EndFor
\end{algorithmic}
\end{algorithm*}
Two refactorization interfaces were tested: \texttt{cuSolverGLU} and \texttt{cuSolverRf}. \texttt{cuSolverGLU} is an undocumented (but publicly available) part of the \texttt{cuSolver} library. It is generally faster than \texttt{cuSolverRf}, more numerically stable, and comes with built-in iterative refinement. We used \texttt{cuSolverGLU} for all the results shown in this paper. \texttt{cuSolverGLU} requires a combined $L+U$ storage of the factors in CSR format; hence, we needed to perform a format conversion after using KLU, which operates on matrices in CSC format.
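The CSC-to-CSR conversion in step three of Algorithm~\ref{alg:kluglu} is inexpensive and can be prototyped in a few lines. The following Python sketch (illustrative only, using SciPy sparse matrices) assumes that $L$ is stored with an explicit unit diagonal, which is dropped when the factors are merged; the exact layout expected by \texttt{cuSolverGLU} may differ in detail.
\begin{verbatim}
import scipy.sparse as sp

def combine_lu(L, U):
    # L, U: sparse CSC factors of an LU factorization, L unit lower triangular.
    # Drop L's unit diagonal and merge the factors into a single CSR matrix
    # M = (L - I) + U, i.e., a combined L+U storage of the factors.
    n = L.shape[0]
    I = sp.identity(n, format="csc")
    return ((L - I) + U).tocsr()
\end{verbatim}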
Since the optimization solver typically calls the linear solver hundreds of times, the cost of one linear solve performed on the \ac{cpu} can be amortized over subsequent iterations. In Fig. \ref{fig:ma57_vs_cusolver} and Table \ref{tab:ma57_vs_cusolver}, we show the computational speedup obtained using this approach. The figure shows the average cost per optimization solver iteration, including the amortized cost of the one-time factorization with KLU. The numerical experiments were conducted with an IBM Power 9 \ac{cpu} and an NVIDIA V100 \ac{gpu}, which has 16 GB of high-bandwidth memory and approximately 7 TF of double-precision peak performance.
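For intuition, the amortization argument can be written out explicitly. The sketch below uses hypothetical timing values (not measurements from our runs) purely to illustrate how the one-time \ac{cpu} factorization cost is folded into the average per-iteration cost.
\begin{verbatim}
# Hypothetical values, for illustration only (seconds).
t_klu_once = 0.8      # one-time KLU factorization on the CPU
t_gpu_iter = 0.2      # per-iteration refactorization + solve on the GPU
n_iterations = 500    # typical number of optimization iterations

avg_per_iter = t_gpu_iter + t_klu_once / n_iterations
print(f"amortized linear-solver cost per iteration: {1e3 * avg_per_iter:.1f} ms")
\end{verbatim}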
\begin{figure}[htb]
\centering
{\includegraphics[width=0.8\columnwidth,trim={1.1cm 0.4cm 1.1cm 7.5cm},clip]{ma57.25k.pie}}
\begin{subfigure}{\columnwidth}
{\includegraphics[width=\columnwidth,trim={0.1cm 1cm 0.1cm 0.1cm},clip]{solverComparison.25k.bar}}
\caption{Northeast U.S. grid}
\end{subfigure}
\begin{subfigure}{\columnwidth}
{\includegraphics[width=\columnwidth,trim={0.1cm 1cm 0.1cm 0.1cm},clip]{solverComparison.70k.bar}}
\caption{Eastern U.S. grid}
\end{subfigure}
\caption{Comparison of the computational cost (average time per iteration in milliseconds) between \ac{cpu} and \ac{gpu}, with a breakdown in terms of the most expensive operations. The model evaluation time is the same in both test cases since only the linear solver is \ac{gpu}-accelerated.
}
\label{fig:ma57_vs_cusolver}
\end{figure}
\begin{table}[htb]
\centering
\caption{Comparison of total run times for \ac{acopf} shows a 30\% speedup when using the sparse linear solver on the \ac{gpu}}
\begin{tabular}{|p{0.3\columnwidth}|p{0.2\columnwidth}|p{0.15\columnwidth}|p{0.1\columnwidth}|}
\hline
\multicolumn{4}{|c|}{\textbf{Northeast U.S. grid}} \\
\hline
Linear solver used & KLU+cuSolver & MA57 & Speedup \\ \hline
Total time (s) & 116.0 & 152.1 & 1.3 \\ \hline
Number of iterations & 547 & 529 & - \\ \hline
\multicolumn{4}{|c|}{\textbf{Eastern U.S. grid}} \\
\hline
Linear solver used & KLU+cuSolver & MA57 & Speedup \\ \hline
Total time (s) & 146.7 & 196.4 & 1.3 \\ \hline
Number of iterations & 263 & 263 & - \\ \hline
\end{tabular}
\label{tab:ma57_vs_cusolver}
\end{table}
The overall speedup is entirely due to the acceleration of the numerical factorization. Including the cost of the initial KLU factorization, we obtain an average speedup of $2.4\times$ for the factorization and $1.3\times$ overall compared to the MA57 benchmark on the \ac{cpu}. Somewhat surprisingly, we find that the triangular solve is 30--40\% slower on the \ac{gpu}.
\textcolor{black}{This is likely due to the suboptimal iterative refinement embedded in the \texttt{cuSolverGLU} triangular solve function. }We also note a slight increase in ``other'' computational cost when using \acp{gpu}. This increase is mostly due to the overhead of the CUDA API, including the cost of launching \ac{gpu} kernels. Other parts of the computation were not \ac{gpu}-accelerated and have approximately the same cost in both test configurations. Data movement between the \ac{cpu} and the \ac{gpu} during the solution process accounts for less than 2\% of the \ac{gpu} time.
The solver requires little memory on the \ac{gpu} (1.1 GB and 2.9 GB for the 25,000-bus model of the Northeast U.S. grid and the 70,000-bus model of Eastern U.S. grid, respectively). This is in stark contrast with the implementation in \cite{abhyankar2021acopf}, which requires 21 GB of \ac{gpu} memory for the smaller 10,000-bus model of Western U.S. grid.
\textcolor{black}{Results in \cite{abhyankar2021acopf} show that the model evaluation could be \ac{gpu}-accelerated. We can therefore make projections for further performance improvements. We conservatively anticipate a $4\times$ speedup for the sparse Hessian evaluation on \ac{gpu} and $3\times$ for constraints, constraint Jacobians and other model components. In \cite{abhyankar2021acopf}, speedup of $2$-$3\times$ was achieved on vector kernels and more than $15\times$ on dense matrix kernels for the largest grid evaluated (2,000-bus Texas grid model). While memory access is more efficient when evaluating dense matrix elements, we nevertheless believe it is safe to expect at least $4\times$ speedup for sparse matrix evaluation, especially for the large use cases we consider. Fig. \ref{fig:ma57_vs_cusolver_projection} illustrates the projected performance for the Eastern U.S. grid ($2\times$ speedup on the \ac{gpu}) if the model evaluation were performed on the \ac{gpu}.}
\begin{figure}[h!]
\centering
{\includegraphics[width=0.8\columnwidth,trim={1.1cm 0.4cm 1.1cm 7.5cm},clip]{ma57.25k.pie}}
{\includegraphics[width=\columnwidth,trim={0.1cm 1cm 0.1cm 0.1cm},clip]{solverProjection.70k.bar}}
\caption{A conservative projection of speedup for \ac{acopf} analysis with technology already available in ExaGO\textsuperscript{TM}\xspace and HiOp\xspace libraries shows $2\times$ speedup for the Eastern U.S. grid.
}
\label{fig:ma57_vs_cusolver_projection}
\end{figure}
\section{Conclusions and Next steps}
\label{sec:conslusion}
\textcolor{black}{We eliminated the main bottleneck for efficient \ac{acopf} analysis on the \ac{gpu} by accelerating the sparse matrix factorization by $2.4\times$ compared to the MA57 baseline on the \ac{cpu}. With only the linear solver accelerated, we obtain a 30\% overall speedup for the \ac{acopf} analysis. We also show it is feasible to accelerate the entire analysis by at least $2\times$ with technology available today.}
The \ac{gpu} profiling results clearly show that the \textcolor{black}{next bottleneck} of the entire linear system solution pathway is now the \ac{gpu} triangular solve.
\textcolor{black}{We believe that significant improvement is possible by using more efficient and configurable iterative refinement than the one embedded in the \texttt{cuSolverGLU} library.} Triangular solves can also be approximated by Jacobi or Gauss--Seidel iterations with very promising performance results~\cite{anzt2015iterative}, but our investigation of that topic is at an early stage.
The ACOPF matrices exhibit what is known as {\it saddle-point} structure in the literature. A specialized, \ac{gpu}-ready solver, which exploits this structure, was developed in~\cite{regev2022kkt}. This solver performs triangular solves on smaller matrices, and early (standalone) results show an impressive speedup over MA57. In the future, we plan to integrate it into our software stack and evaluate its performance.
\section*{Acknowledgments}
We thank Cosmin Petra and Nai-Yuan Chiang for their guidance when using HiOp\xspace optimization solver.
We also want to thank Phil Roth and Christopher Oehmen for their support on this work. Finally, we thank Shaked Regev for critical reading of the manuscript and providing helpful comments.
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.08576",
"language": "en",
"timestamp": "2023-02-20T02:01:50",
"url": "https://arxiv.org/abs/2302.08576",
"yymm": "2302"
} | \section{Introduction}
In recent years several Internet websites have become the hubs for communities
where users can produce, consume, and disseminate content without central
oversight. Examples of these \emph{user-generated content} (UGC) websites
include major social media platforms, like Facebook or Twitter, or global
online knowledge production communities like Wikipedia, which is known as a
model for the production of vast reliable, high-quality
knowledge~\cite{yasseri2021crowdsourcing}.
However, a negative consequence of the popularity of UGC websites is that their
low barriers to access, combined with the lack of supervision from experts or
other gatekeepers, results in the proliferation of false or misleading
information on the Web as a whole~\cite{wardle2017information,
lazer2018science}.
False or misleading content often spreads on social networking
platforms~\cite{amoruso2020contrasting,castillo2011information,zareie2021minimizing,grinberg2019fake,guess2019less,guess2020exposure,allcott2017social},
but there are growing concerns that other UGC communities like Wikipedia may be
vulnerable to these threats too~\cite{diego2019online}. This is especially
worrisome since Wikipedia is one of the most visited websites on the
Internet~\cite{similarweb2022websites} and a popular source of
knowledge~\cite{okoli2014wikipedia}.
Wikipedia contains over 50 million articles in more than 300 languages; in
February 2022, the English language edition of Wikipedia alone received 781M
visits (from unique devices) and was edited over 5M
times~\cite{wikimedia2022size,wikimedia2022wikimedia}. Hence, preserving the
integrity of Wikipedia is of paramount importance for the Web as a
whole~\cite{diego2019online}.
There are many potential threats to the integrity of knowledge in
Wikipedia~\cite{diego2019online}. One common threat comes from vandalism, which
is ``a deliberate attempt to compromise the integrity of the encyclopedia,
often through the insertion of obscenities, insults, nonsense or crude humour,
or by page blanking''~\cite{wikimedia2022vandalism}.
Vandalism, however, is not the only threat to the integrity of Wikipedia's
content. Whereas vandalism focuses on defacing existing entries, there exists
evidence showing that Wikipedia is also targeted by \emph{hoaxes}, whose aim is
to create whole new entries about fake, fictitious topics. An example of a
famous Wikipedia hoax is the entry \emph{Jar'Edo Wens}, a fake Australian
Aboriginal deity, which went undetected for almost 10 years before being
debunked and deleted~\cite{dewey2015story}. But hoaxes remain a threat to
Wikipedia's content integrity to this day. Recently, one of the largest such
incidents the platform has ever seen was discovered on the Chinese Wikipedia:
between 2019 and 2022, a user named \emph{Zhemao} wrote 206 fake entries about
Russia's history in the Middle Ages~\cite{moon2022chinese}.
Hoaxes are thus not to be confused with vandalism: although vandalism is a much
bigger threat in scope and size, hoaxes constitute a subtler threat that has
received comparatively little attention.
A crucial question that remains unresolved is what drives the creation of
hoaxes on Wikipedia. Because their original authors are aware that these
articles are false, hoax articles are different from mere
\emph{misinformation}, but should rather be considered instances of
\emph{disinformation}~\cite{wardle2017information,lazer2018science}. As such,
understanding the factors that determine the supply of hoaxes on Wikipedia
could shed light on disinformation in general, including broader threats to the
integrity of the Web, like state-sponsored propaganda~\cite{king2017chinese,
zannettou2019disinformation,golovchenko2020cross} and conspiracy
theories~\cite{starbird2017examining}.
To bridge this gap, in this paper, we study the role of \emph{online
attention}, in the form of individual page views, in the supply of
disinformation in Wikipedia. The idea of an economy of attention was first
introduced by \citet{simon1971designing}, who observed that human attention is
a limited resource that needs to be allocated~\cite{goldhaber1997attention}.
Here, to quantify the flow of collective attention to individual topics of
knowledge, we take advantage of the unique Wikipedia traffic dataset and API.
Specifically, in this work we seek to answer the following questions:
\begin{enumerate}
\item[Q1.] Does online attention toward a topic increase the likelihood of
disinformation being created about it?
\item[Q2.] Operationally, is there a relationship between traffic to Wikipedia
and the production of hoax articles?
\end{enumerate}
To answer these questions, we collected a list of known hoax
articles~\cite{wikimedia2022list} along with their creation timestamps and
content. To control for potential confounding factors in the distribution of
traffic to Wikipedia over time, for each hoax, we considered a cohort
consisting of all the legitimate (i.e.~non-hoax) Wikipedia articles that were
created on the same day as the hoax. Similar to
\citet{kumar2016disinformation}, we find that hoaxes differ from legitimate
articles in key appearance features, but do not strongly differ in the number
of hyperlinks they contain. Next, for each article (either hoax or non-hoax),
we parsed its content and extracted all the out-links, i.e. its neighbors in
the Wikipedia hyperlink network. The presence of a link between two Wikipedia
entries is an indication that they are semantically related. Therefore, traffic
to these neighbors gives us a rough measure of the level of online attention to
a topic \emph{before} a new piece of information (in this case an entry in the
encyclopedia) is created.
Finally, we measure the relative change of traffic in the 7-day period before
and after the creation of a hoax and compare this change to that of the
articles in its cohort. To preview our results, we find that, on average,
online attention tends to precede the creation of hoaxes more than it does for
legitimate articles. This observation is consistent with the idea that the
supply of false and misleading information on a topic is driven by the
attention it receives.
In the rest of the paper we discuss related work (\Cref{sec:related_work}), and
then describe our methodology (\Cref{sec:data_and_methods}): the details of the
data collection process, the comparison between features of hoaxes and
legitimate articles, and the pre-processing of the Wikipedia traffic data.
\Cref{sec:results} discusses the techniques used to quantify online attention
and its relationship to the hoax creation, and the statistical procedures
performed to asses the results. Finally, \cref{sec:discussion_and_future_work}
summarizes our findings and future directions.
All code and data needed to replicate the findings of this study are available
on Github at
\href{https://github.com/CSDL-USF/wikihoaxes}{github.com/CSDL-USF/wikihoaxes}.
\section{Related Work}
\label{sec:related_work}
Over the years Wikipedia has developed an impressive array of socio-technical
solutions to ensure the quality of its content. Early work on Wikipedia has
shown that most acts of vandalism are repaired manually by the crowd of
contributors within a matter of minutes~\cite{viegas2004studying}. In addition
to human interventions, automated tools like ClueBot~NG play a crucial role in
keeping the encyclopedic entries clear from
damage~\cite{geiger2013levee,halfaker2012bots}. On top of these methods, there
exist other preventive measures such as patrolling recent changes, creating
watchlists, blocking frequent vandalism creators, and using editorial filters.
Finally, multiple research attempts have been conducted to aid in both the
manual and the automatic detection of
vandalism~\cite{potthast2008automatic,adler2010detecting,smets2008automatic,harpalani2011language}.
Despite this wealth of work, little is known about Wikipedia hoaxes.
\citet{kumar2016disinformation} collected a sample of known hoaxes from
resources compiled by the Wikipedia community, and studied their longevity,
along with other characteristics. They found that one in a hundred hoaxes
remain undetected for more than a year, with 92\% of the cases detected within
the first day. They also observed that, although only 1\% of all hoaxes remain
undetected for more than a year, those that stay undetected have a higher
chance over time of remaining so. Finally, they showed that, on average, hoaxes
have a lower density of internal links and receive less traffic than legitimate
(i.e.,~non-hoax) articles~\cite{kumar2016disinformation}.
Traffic to Wikipedia has been used before to study collective attention.
\citet{garcia2017memory} studied the patterns of attention to Wikipedia in the
wake of airplane crashes. They found that the traffic to entries about
\emph{previous} airplane crashes was greater than that of the current crash,
i.e. the one that triggered the attention surge~\cite{garcia2017memory}.
\citet{ciampaglia2015production} studied traffic patterns during the creation
of new Wikipedia entries (i.e.,~not just hoaxes) and observed that the creation
of new information about a topic is preceded by spikes of attention toward it,
as measured by traffic to neighboring entries~\cite{ciampaglia2015production}.
This is consistent with a model in which the demand for information on a topic
drives the supply of novel information about it. Consequently, measuring
traffic to Wikipedia entries can help us get a step closer to understanding why
and when hoaxes are more likely to be produced.
\section{Data and Methods}
\label{sec:data_and_methods}
We first describe how the dataset of hoaxes was collected and the process of
building the cohort of each hoax.
\subsection{Data Collection}
\label{sec:data_collection}
Prior work has relied on a broad definition of `hoaxes' that leverages the `New
Page Patrol' (or NPP) process~\cite{kumar2016disinformation}. Unfortunately,
access to these data was not public due to the nature of the NPP process.
Therefore, in the present work we relied on a smaller, public list documenting
known hoaxes discovered by Wikipedia editors outside of the NPP
process~\cite{wikimedia2022list}. To be included in this list, a discovered
hoax must meet either of the following two criteria: (\emph{i}\,) it has gone
undetected for more than a month after
patrolling~\cite{kumar2016disinformation}, or (\emph{ii}\,) it was discussed
by reliable media sources.
To collect this list, we queried the Wikipedia API using the `prefix search'
endpoint~\cite{mediawiki2022main} to collect the titles of the hoaxes residing
in the administrative list maintained by Wikimedia under the prefix 'List of
Hoaxes on Wikipedia'. The total number of titles retrieved was $N_h=190$. We
then used the Toolforge~\cite{wikitech2021toolforge} to query the database
replica of the English Wikipedia for the creation date of each hoax article,
defined as the timestamp of the first revision recorded in the database.
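For reproducibility, the query can be issued in a few lines of Python. The snippet below is a minimal sketch using the \texttt{requests} library and the MediaWiki \texttt{prefixsearch} list module; the exact prefix string follows the description above, and a \texttt{psnamespace} parameter may be needed depending on the namespace of the administrative list.
\begin{verbatim}
import requests

API = "https://en.wikipedia.org/w/api.php"
params = {
    "action": "query",
    "list": "prefixsearch",
    "pssearch": "List of hoaxes on Wikipedia",  # prefix described above
    "pslimit": "max",
    "format": "json",
}

titles = []
while True:
    resp = requests.get(API, params=params).json()
    titles += [page["title"] for page in resp["query"]["prefixsearch"]]
    if "continue" not in resp:       # no more result batches
        break
    params.update(resp["continue"])  # carry the continuation token (psoffset)
\end{verbatim}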
\Cref{fig:creation_years_cohort_size}~(left) shows a summary of the number of
hoaxes created over time, with the majority of hoaxes appearing in the period
2005--2007, and a decline starting in 2008. This observed behavior can be in
part explained by the fact that the Wikipedia community started patrolling new
pages in November of 2007~\cite{kumar2016disinformation,wikimedia2022new} and
is also consistent with the well-known peak of activity of the English
Wikipedia community~\cite{halfaker2013rise}.
Finally, to build the cohort of each hoax, we queried the Wikipedia database
replica for all legitimate articles created on the same day. Since Wikipedia
entries are often accessible through different titles, in collecting the
cohort, we resolved all redirects created the same day as the hoax. Treating
these redirects as separate entries would inflate the cohort size and could
skew traffic statistics used later for estimating the level of online
attention. \Cref{fig:creation_years_cohort_size}~(right) shows the effect that
redirects have on the size of each cohort. In some cases, failing to account
for redirects can increase the size of cohorts to up to $16,000$ articles.
\begin{figure*}[tbp]
\centering
%
\includegraphics[width=\columnwidth]{plots/hoax_dates.png}
%
\includegraphics[width=\columnwidth]{plots/cohort_count.png}
%
\caption{Left: Hoaxes detected in the English Wikipedia. Right: Cohort size
distribution for hoaxes in our dataset before (solid blue) and after (solid
orange) resolving redirects.}
%
\label{fig:creation_years_cohort_size}
%
\end{figure*}
\subsection{Appearance Characteristics Analysis}
\label{sec:appearance_characteristics_analysis}
To understand the differences between each hoax and its cohort members, we
analyzed their appearance features, inspired by the work of
\citet{kumar2016disinformation} who, in addition to appearance features,
studied network, support, and editor features for both hoax and legitimate
articles~\cite{kumar2016disinformation}. We considered the following four
features: (\emph{i})~the \emph{plain text length} is the number of words in an
article after removing all the wiki markup; (\emph{ii})~the \emph{ratio of
plain to markup text} is the number of words obtained after removing all markup
in the body of the article, divided by the number of words before removal;
(\emph{iii})~the \emph{density of wiki-links} is the number of wiki-links per
100 words, counted before markup removal; and, finally, (\emph{iv})~the
\emph{density of external links} is defined similarly as the density of
wiki-links, but for links to external Web pages.
To calculate these features for each hoax and its cohort, we used the API's
\emph{TextExtracts} extension~\cite{mediawiki2022text} to extract their plain
text. For the wiki markup we used the
revisions API~\cite{mediawiki2022revisions}. A regular expression was used to
count the number of words in plain and markup text. Finally, to find the wiki
and external links within each article we used
\emph{wikitextparser}~\cite{5j92022github}.
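Concretely, given the plain text and the wiki markup of an article, the four features can be computed roughly as follows. This is a simplified sketch: it assumes \emph{wikitextparser}'s \texttt{wikilinks} and \texttt{external\_links} attributes and a simple regular expression for word counting, so the resulting counts may differ slightly from those produced by our pipeline.
\begin{verbatim}
import re
import wikitextparser as wtp

def word_count(text):
    return len(re.findall(r"\w+", text))

def appearance_features(plain_text, markup_text):
    n_plain = word_count(plain_text)
    n_markup = word_count(markup_text)   # word count before markup removal
    parsed = wtp.parse(markup_text)
    return {
        "plain_text_length": n_plain,
        "plain_to_markup_ratio": n_plain / n_markup if n_markup else 0.0,
        "wikilink_density":
            100 * len(parsed.wikilinks) / n_markup if n_markup else 0.0,
        "external_link_density":
            100 * len(parsed.external_links) / n_markup if n_markup else 0.0,
    }
\end{verbatim}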
Aside from the plain text to markup ratio, the chosen appearance features have
very skewed distributions. To illustrate this point, \cref{fig:outliers} shows
the distribution of each score for five manually sampled cohorts in our data.
For the plain text length, \cref{fig:outliers} shows that the median is between
$100$ and $1,000$ words, yet there exist articles that reach and even exceed
$10,000$ words. The same pattern holds for the wiki-link density --- the median
is under $10$ links per $100$ words, yet some articles have up to $40$ links ---
and, similarly, for the other two features.
Thus, after collecting all the four features, we computed the modified
$z$-score $z'$ to compare different hoaxes together:
\begin{equation}
 z' = \frac{x - \tilde{x}}{\mathrm{MAD}}
\label{eq:z-score}
\end{equation}
where $x$ is a feature measured on a hoax, $\tilde{x}$ is the median value of
the feature on the non-hoaxes in its cohort, and $\mathrm{MAD}$ is the median
absolute deviation of the non-hoax feature values with respect to $\tilde{x}$.
We chose $z'$ instead of the regular $z$-score since it is more resilient to
outliers~\cite{iglewicz1993how}.
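In code, the modified $z$-score of \cref{eq:z-score} amounts to the following minimal NumPy sketch. Note that, as in \cref{eq:z-score}, we use the raw $\mathrm{MAD}$ in the denominator, without the consistency constant that some definitions of the modified $z$-score include.
\begin{verbatim}
import numpy as np

def modified_z_score(x, cohort_values):
    # x: feature value of the hoax; cohort_values: the same feature measured
    # on the non-hoax articles in its cohort.
    cohort = np.asarray(cohort_values, dtype=float)
    median = np.median(cohort)
    mad = np.median(np.abs(cohort - median))
    return (x - median) / mad
\end{verbatim}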
\begin{figure}[tbp]
\begin{subfigure}{0.23\textwidth}
\includegraphics[width=\linewidth]{plots/outliers_plain.png}
\end{subfigure}
\begin{subfigure}{0.23\textwidth}
\includegraphics[width=\linewidth]{plots/outliers_wikilink.png}
\end{subfigure}
\begin{subfigure}{0.23\textwidth}
\includegraphics[width=\linewidth]{plots/outliers_ratio.png}
\end{subfigure}
\begin{subfigure}{0.23\textwidth}
\includegraphics[width=\linewidth]{plots/outliers_extlink.png}
\end{subfigure}
\caption{Distribution of appearance features for five manually sampled
cohorts in our data.}
\label{fig:outliers}
\end{figure}
\subsection{Analyzing Wikipedia Traffic Data}
\label{sec:analyzing_wikipedia_traffic_data}
To analyze the traffic that the articles in our dataset receive, we used a
dataset on traffic compiled by the Wikimedia
foundation~\cite{wikimedia2022page}. The Wikimedia Foundation has published two
main traffic datasets: the earlier \emph{pagecounts-raw} (Dec. 2007--Aug.
2015), and the more recent \emph{pageviews} (started Jul. 2015). Since most of
the hoaxes in our dataset were created in the period between 2005 and 2011, we
have decided to use the older pagecounts-raw data. This dataset contains the
count of non-unique HTTP requests made for each article in an hourly time
frame, collected by the proxy server of
Wikipedia~\cite{ciampaglia2015production}, along with request title and
additional metadata. We pre-processed pagecounts-raw to resolve redirects,
filter unwanted entries, and clean illegal titles.
Pre-processing the data was performed in three steps. First, the raw data was
filtered: we kept only entries related to the English Wikipedia project and
removed all pages from namespaces other than the `main' MediaWiki namespace.
Second, the filtered data was cleaned of illegal titles, i.e., we discarded
entries whose titles contain characters that are not allowed in Wikipedia page
titles~\cite[cf.~`Page Restrictions']{wikimedia2022help}. The hashtag sign
`\#' is considered illegal only if it is the first character of a title;
otherwise it indicates a subsection within a page, so a title including `\#'
is discarded only in the former case. In addition to removing entries with
illegal titles, we decoded common URL-encoded characters (e.g., `\%20') and
replaced any space with an underscore character. Third, to resolve redirects,
the Toolforge was consulted to extract all the redirects within the main
namespace of the English Wikipedia. The result was a cleaned and filtered
hourly dataset of the view counts for pages within the main namespace of the
English Wikipedia.
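A minimal sketch of the title-cleaning step is shown below; the set of disallowed characters used here is an illustrative subset rather than the full list from the Wikipedia title restrictions.
\begin{verbatim}
from urllib.parse import unquote

ILLEGAL = set("<>[]{}|")  # illustrative subset of disallowed title characters

def clean_title(raw_title):
    # Decode %-escapes and normalize spaces to underscores, then discard
    # titles that start with '#' or contain a disallowed character.
    title = unquote(raw_title).replace(" ", "_")
    if title.startswith("#") or any(c in ILLEGAL for c in title):
        return None
    return title
\end{verbatim}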
\begin{figure*}[tbp]
\begin{subfigure}{0.245\textwidth}
\includegraphics[width=\linewidth]{plots/zplain.png}
\caption{Plain text length}
\label{fig:z-scores_plain}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\includegraphics[width=\linewidth]{plots/zwlink.png}
\caption{Wiki-link density}
\label{fig:z-scores_wiki-link}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\includegraphics[width=\linewidth]{plots/zmarkup.png}
\caption{Plain to markup text ratio}
\label{fig:z-scores_ratio}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\includegraphics[width=\linewidth]{plots/zext.png}
\caption{External link density}
\label{fig:z-scores_ext-link}
\end{subfigure}
\caption{Modified $z$-scores for all hoaxes in our sample relative to non-hoax articles in their cohorts for the four appearance features we considered. Hoaxes tend to have a similar or slightly smaller count of plain text words (with, however, several higher-count outliers), lower external link density, a higher plain to markup text ratio, and similar wiki-link density.}
\end{figure*}
\section{Results}
\label{sec:results}
\subsection{Appearance Features}
We start by analyzing the appearance features of hoaxes relative to the
non-hoaxes in their cohort. \Cref{fig:z-scores_plain} shows that most hoaxes
have either similar or slightly smaller plain text length compared to that of
their cohorts. We also observe the presence of several outliers, indicating
that a subset of hoaxes in our sample tends to have unusually higher word
counts. This is consistent with the results of \citet{kumar2016disinformation},
who observed that `successful' hoaxes (i.e., that have gone undetected for at
least 1 month) have a median plain text length of $134$ words --- almost twice
as large as that of legitimate articles. However, the analysis of
\citet{kumar2016disinformation} differs from ours in multiple ways. First, as
already mentioned, they used a different, larger set of hoaxes collected as
part of Wikipedia's regular NPP process. Second, they used a matching procedure
to compare each hoax to only one legitimate article created on the same day.
They also considered other types of articles, such as wrongly flagged articles
and failed hoaxes. Another potential differentiating factor is the method used
to extract the plain text, markup content, and links of each page, which may
explain why our results do not exactly match theirs.
\Cref{fig:z-scores_wiki-link} shows that hoaxes tend to have a similar density
of wiki-links when compared to non-hoaxes. This is important, since to quantify
online attention toward a topic we compute the volume of traffic to the
wiki-link neighbors of an article. Thus, in the following analysis on traffic,
we can safely exclude potential confounding factors due to different linking
patterns between hoaxes and non-hoaxes.
\Cref{fig:z-scores_ratio,fig:z-scores_ext-link} show the distributions of the
ratio of plain to markup text and of external link density, respectively. Aside
from a few outliers, hoaxes almost always contain more plain text than markup
text, compared with non-hoaxes. This is also consistent with the findings of
\cite{kumar2016disinformation}, who observed that, on average, 58\% of a
legitimate article, 71\% of a successful hoax, and 92\% of a failed hoax is
just plain text.
In summary, hoaxes tend to have more plain text than legitimate articles and
fewer links to external web pages outside of Wikipedia; that is, non-hoax
articles generally contain more references to resources residing outside
Wikipedia. Such behavior is expected, as a hoax's author would need to put
significant effort into crafting external resources at which the hoax can
point.
\subsection{Traffic Analysis}
Recall that the cohort of a hoax is defined as all the non-hoax articles
created on the same day it was created. To understand the nature of the
relationship between the creation of hoaxes and the attention their respective
topics receive, we first seek to quantify the relative volume change before and
after this creation day. Here, a \emph{topic} is defined as all of the
(non-hoax) neighbors linked within the contents of an article, i.e., its
(non-hoax) out-links. Traffic to Wikipedia is known to fluctuate following
circadian and weekly patterns, and is likely to depend on a host of additional,
unknown factors, such as the relative popularity of Wikipedia over the years,
the total number and geographic distribution of web
users~\cite{yasseri2012circadian}, etc. To account for these potential
confounding factors, \citet{ciampaglia2015production} proposed to quantify the
volume change in a way that controls for the circadian rhythm and the
fluctuating nature of traffic on the
Web~\cite{ciampaglia2015production,thompson1997wide}. They have shown that
studying traffic over a 14-day observation window, 7 days before and after the
creation day, considers both short spikes in attention and weekly changes in
traffic. The relative volume change is defined as:
\begin{align}
\frac{\Delta V}{V} & = \frac{V^{(b)} - V^{(a)}}{V^{(b)} + V^{(a)}}
\label{eq:delta_v}
\end{align}
where $V^{(b)}$ and $V^{(a)}$ are respectively the median traffic to neighbors
in the 7 days before and after the creation of the article. According to
\cref{eq:delta_v}, $\Delta V / V > 0$ when the majority of traffic occurs
before an article is created, i.e., attention toward the topic of the article
precedes its creation. When $\Delta V / V < 0$, attention tends instead to
follow the creation of the hoax. Note that our traffic data covers a period
spanning from December 2007 to August 2016. Since not all hoaxes in our dataset
fell within that time frame, $\Delta V / V$ was calculated only for the $83$
hoaxes (and their cohorts) whose creation dates fell within that period.
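Operationally, \cref{eq:delta_v} can be computed from the hourly traffic series of an article's neighbors as in the sketch below. Aggregation choices (here, neighbor traffic is assumed to be pre-summed into a single hourly series) are assumptions of this illustration.
\begin{verbatim}
import numpy as np

def relative_volume_change(neighbor_views, creation_idx, window=7 * 24):
    # neighbor_views: 1-D array of hourly view counts, summed over the
    # article's out-link neighbors; creation_idx: hour of article creation.
    v_before = np.median(neighbor_views[creation_idx - window:creation_idx])
    v_after = np.median(neighbor_views[creation_idx:creation_idx + window])
    return (v_before - v_after) / (v_before + v_after)
\end{verbatim}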
\Cref{fig:density_distributions} shows the distribution of the $\Delta V' / V'$
values for each cohort, the cohort mean, and the value of $\Delta V / V$ of the
corresponding hoax, for a manually selected sample of hoaxes collected from our
data.
Having defined a way to quantify whether traffic to a given article preceded or
followed its creation, we want to determine whether hoaxes tend to have a
greater $\Delta V / V$ than legitimate articles in general. Unfortunately, we
know very little about the distribution of $\Delta V / V$ over multiple pages,
and how it has changed over the course of the history of Wikipedia. However, if
hoaxes do not differ from legitimate articles, then on average the difference
between the $\Delta V/V$ of a hoax and that of its cohort should be zero. Therefore,
we define:
\begin{equation}
D = \frac{\Delta V}{V} - \mathsf{E}\left[\frac{\Delta V'}{V'}\right] =
\frac{\Delta V}{V} - \frac{1}{n}\sum_{i=1}^n \frac{\Delta V'_i} {V'_i}
\label{eq:D}
\end{equation}
where $\mathsf{E}\left[\frac{\Delta V'}{V'}\right]$ indicates the expected
$\Delta V' / V'$ of legitimate articles. Thus, when $D>0$ a hoax accumulates
more attention preceding its creation, compared to its cohort.
To test whether $D > 0$ holds in general, we estimate the mean value of $D$ in
our sample of hoaxes, and used bootstrapping to compute the confidence interval
of the mean.
To perform bootstrapping, we resampled the original list of $D$ values $10,000$
times with replacement.
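The bootstrap procedure is standard and can be summarized in a short NumPy sketch:
\begin{verbatim}
import numpy as np

def bootstrap_mean_ci(d_values, n_boot=10_000, alpha=0.05, seed=0):
    # Resample the list of D values with replacement and return the sample
    # mean together with a percentile confidence interval for the mean.
    rng = np.random.default_rng(seed)
    d = np.asarray(d_values, dtype=float)
    means = np.array([rng.choice(d, size=d.size, replace=True).mean()
                      for _ in range(n_boot)])
    low, high = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return d.mean(), (low, high)
\end{verbatim}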
\begin{figure}
\includegraphics[width=\columnwidth]{plots/density_histograms_3x3.pdf}
\caption{For a manually selected sample of hoaxes, the distribution of the $\Delta V' / V'$ values for each cohort (turquoise blue histograms) in comparison to the $\Delta V / V$ of the respective hoax (black dashed line). The $\Delta V / V$ of hoaxes tends, in general, to be higher than the mean of their cohorts (red solid line).}
\label{fig:density_distributions}
\end{figure}
In general, we observe a trend in which hoaxes tend to have greater $\Delta V /
V$ than their cohort: $D > 0$ for 75 of the 83 hoaxes in our data. The
histogram in \cref{fig:bootstrap}~(left) shows the distribution of the
differences; the mean is approximately $0.123$, with a bootstrapped 95\%
confidence interval of $(0.1227, 0.1234)$.
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{plots/mean_CI.png}
\includegraphics[width=\columnwidth]{plots/sampling_means.png}
\caption{Left: Histogram of the relative traffic change differences $D$ (see \Cref{eq:D}). The black dashed line is the sample mean, and the red area the 95\% bootstrapped CI. The blue solid line is a kernel density estimate. The inset shows the sample mean relative to the confidence interval. Right: The sampling distribution of means obtained by bootstrapping $10,000$ samples with replacement.}
\label{fig:bootstrap}
\end{figure*}
According to the Central Limit Theorem (CLT), the distribution of sample means
approximates the normal distribution with the increase of sample size $n$,
regardless of the original distribution of data~\cite{feller1991introduction}.
\Cref{fig:bootstrap}~(right) shows the distribution of the means for each of
the $10,000$ resampled vectors. It is worth noting that all of the means
returned were positive, implying a greater $\Delta V / V$ for the hoax.
\section{Discussion and future work}
\label{sec:discussion_and_future_work}
Our study analyzes the role of online attention in the supply of disinformation
on Wikipedia (Q1). From an operational point of view, we study the relationship
between the creation of hoaxes and the traffic preceding each hoax's creation
day $d$ (Q2). To do so, we collected the view count of the out-link neighbors
of the hoaxes and their cohorts for $d\pm{}7$ days. Following prior
work~\cite{ciampaglia2015production}, to assess the allocation of attention
during that period, we calculated the relative traffic volume change, which
accounts for potential confounding factors due to traffic fluctuations. We
observe that 90\% of hoaxes have a higher $\Delta V / V$ than their respective
cohort and confirm this result by means of resampling. This indicates that, on
average, hoaxes tend to have more traffic accumulated before their creation
than after. In summary, our observed $D$ indicates that the generation of
hoaxes in Wikipedia is associated with prior consumption of information, in the
form of online attention, providing an answer to our original research question
(Q1).
This study has some limitations that need to be acknowledged. First of all, our
results are based on a list of only $83$ hoaxes. Even though we originally
collected a dataset that was twice the size of this one, we were limited by the
fact that not all hoaxes were covered in our traffic dataset. Future work
should extend our results to larger available samples of hoaxes (e.g.,
NPP-based) to ensure consistent results with prior work.
Additional limitations stem from our operational definition of the topic of a
new article (hoax or non-hoax). In this work, we relied on outgoing hyperlinks
(out-links) and neglected incoming hyperlinks (or in-links), owing to our lack
of access to data on hyperlinks to hoaxes. This data is present but not
publicly accessible in the Wikipedia database, presumably due internal
regulations within the Wikipedia project. In the future, we would like to
extend our analysis to include in-links as well.
Future work should also consider a more advanced definition of an article's
topic that does not rely solely on hyperlinks, as they provide a very rough
notion of topic. Links to very generic entries like locations or dates (e.g.,
`United States of America' or `1492') typically convey little information about
an article's topic.
Third, our traffic dataset is based on an older definition of pagecounts, which
is affected by known limitations, including the presence of automated crawlers,
which are known to generate large numbers of hits in a short period of time. To
mitigate the presence of this type of outlier, our definitions of traffic
volume rely on the median instead of the mean, which is more robust
to outliers. However, in the future we would like to include a more recent
traffic dataset that is not affected by this and other
biases~\cite{wikimedia2022research}.
In conclusion, our study sheds light on an important factor affecting the
supply of disinformation on the Web. Future work should extend our results to
venues other than Wikipedia, for example social media platforms like Facebook
or Twitter. In addition, other types of media (like video, audio, etc.) should
be considered --- hoaxes do not only come in the form of textual articles, and
attention is an effective incentive for people to keep spreading more
disinformation, regardless of its medium. Future work should also consider
studying the role of attention in versions of Wikipedia other than English. We
expect similar trends to ones observed here to apply to non-English language
editions as well. However, the signal may be weaker owing to lower traffic
volume of non-English language editions. A comparative analysis of the role of
attention in the supply of disinformation across cultures could shed more light
on these types of threats to the Web as a whole.
|
{
"arxiv_id": "2302.08591",
"language": "en",
"timestamp": "2023-02-20T02:02:30",
"url": "https://arxiv.org/abs/2302.08591",
"yymm": "2302"
} |
\section{Introduction}
The field of activity recognition has gained substantial attention in recent years due to its usefulness in various domains, including healthcare \cite{straczkiewicz_systematic_2021}, sports \cite{zhuang_sport-related_2019}, transportation \cite{naseeb_activity_2020}, and human well-being \cite{meegahapola2020smartphone}. For instance, fitness-tracking mobile health applications enable users to access activity-specific metrics \cite{zhuang_sport-related_2019, siirtola_efficient_2011}. Similarly, smart home systems can make changes to the environment (e.g., lighting, temperature) based on the information gathered about people's activities \cite{makonin_smarter_2013, nef_evaluation_2015}. Context awareness, a key aspect of mobile phone user experience, is enabled with the integration of activity recognition \cite{wang_context_aware_music_rec, oh2015intelligent}.
Traditionally, sensor-based activity recognition relied on custom sensors attached to the body \cite{choudhury_mobile_2008}. While this approach is effective for small-scale studies, it is often challenging to scale up. The cost and maintenance required for these sensors can make them both expensive and obtrusive, reducing the motivation to use them. The alternative approach of using commercial wearables is not immune to these challenges, and these devices are often perceived as niche or abandoned after a short period of usage \cite{mercer_acceptance_2016, coorevits_rise_2016}. This is where the presence of smartphones comes in handy. In the United States, 85\% of adults and 96\% of young adults own a smartphone, making it easier to target a broader audience \cite{noauthor_mobile_2021}. Research in mobile sensing has revealed the potential of smartphone data for activity recognition \cite{straczkiewicz_systematic_2021, meegahapola2020smartphone}. The widespread ownership and unobtrusive nature of smartphones make them an attractive solution to traditional sensor-based activity recognition. However, there is still a need to understand how multiple sensing modalities in smartphones can be utilized for complex daily activity recognition. Additionally, the generalization of complex daily activity recognition models across different countries remains an under-explored area of research.
Recognizing complex daily activities is important. In the activity recognition literature, multiple types of activities have been considered, each at different granularity levels \cite{dernbach_simple_2012, saguna2013complex}. Coarse-grained or simple activities like walking, sitting, or cycling are repeated \emph{unitary} actions directly measurable from a proxy (e.g., inertial sensor unit). Fine-grained complex activities, or activities of daily living (ADL), are built on top of simple activities, but convey more specific contextual information \cite{wiener1990measuring, rai2012mining, saguna2013complex}. For example, eating, studying, working, and movie watching entail participants sitting. Such activities can not be measured by inertial sensor units alone \cite{boutonyour2022, meegahapola2020protecting, biel_bitesnbits_2018} and need a more holistic multimodal sensing approach that captures a wide range of contexts and behaviors that build on top of simple activities \cite{saguna2013complex}. Further, recognizing such complex daily activities could: \emph{(i)} allow tracking the digital well-being of individuals in a more fine-grained manner (e.g., providing a breakdown of time spent eating, resting, attending a lecture, and studying, instead of just sitting \cite{sewall2020psychosocial, boutonyour2022}); \emph{(ii)} provide context-aware user experiences and notifications by understanding user behavior better (e.g., not sending phone notifications when a person is studying or attending a lecture, suggesting products while a user is shopping \cite{mehrotra2017intelligent}); and \emph{(iii)} allow better content recommendation (e.g., recommending music based on the current daily activity such as working, studying, or shopping \cite{wang_context_aware_music_rec}), where complex activities can be more informative and valuable than simpler ones. However, even though inertial, location, or WiFi/Bluetooth data have been used separately for activity recognition \cite{rai2012mining, saguna2013complex}, prior work has not exhaustively studied complex daily activities by using multimodal smartphone sensing data.
The use of multimodal smartphone sensing data in machine learning models could provide a more comprehensive picture of complex daily activities when compared to using single modalities. This is especially relevant in light of the Covid-19 pandemic, which has brought about a significant shift in daily habits and activities \cite{stockwell_changes_2021, zheng_covid-19_2020}. The lockdown measures enforced to slow the spread of the virus resulted in a decrease in physical activity and an increase in sedentary behavior, particularly among young adults. This shift is evident in changes to smartphone use patterns \cite{ratan_smartphone_2021, saadeh_smartphone_2021, li_impact_2021}, which can impact the effectiveness of location-based activity recognition methods in a remote/hybrid work/study setting where individuals tend to remain sedentary for extended periods of time. Hence, the importance of inertial and location sensors as predictive features could diminish due to sedentary behavior. This underscores the importance of incorporating fine-grained multimodal sensing features to accurately characterize the complex daily activities of these emerging lifestyles through smartphones. However, there is currently little understanding of which smartphone sensing features are systematically useful in characterizing different complex daily activities.
Taking a few steps back, we can also consider the ``country'' dimension and its influence on smartphone usage. Country differences can affect smartphone usage in different world regions \cite{mathur_moving_2017}. For example, it could be socially frowned upon to take a call at a formal restaurant in Japan, while people in Europe could leave a movie theater to check their phone \cite{canton_cell_2012}. It has been shown that people in Japan tend to be more reticent than in Sweden about talking on the phone in public transportation or, more generally, about being loud in public \cite{phone_usage_jp_sw_usa}. Another study about smartphone addiction among young adults in 24 countries found that the rigidity of social norms and obligations highly influenced smartphone usage \cite{smartphone_addiction_24_countries}. In addition to how people use the phone, prior work also discussed how passively sensed behavioral data about people differ in many countries \cite{althoff2017large}. These differences across countries constitute a form of diversity, which is a growing area of interest in computing and AI research \cite{AICultures2023} \footnote{While we acknowledge that cultures can be multidimensional and exist in tension with each other and in plurality within the same country \cite{yuval2004gender}, some prior studies in mobile sensing, psychology, and sociology have used ``culture'' as a proxy to refer to the country of data collection \cite{phan_mobile_2022, khwaja_modeling_2019, van2004bias, he2012bias}. However, in this study, for consistency, we use ``country'' (a more specific geographic region) as the unit of analysis that could affect phone usage behavior and sensing data. We also used the term ``geographic'' rarely, when appropriate and when referring to regions (i.e., Europe).}. From a machine learning point-of-view, a diversified system contains more information and can better fit various environments \cite{gong_diversity_2019}. More generally, diversity-aware machine learning aims to improve the model's representational ability through various components such as input data, parameters, and outputs \cite{gong_diversity_2019}. Concretely, country-level, diversity-aware activity recognition should try to understand the effect of the country diversity of smartphone users, on inference model performance. However, the understanding of how country diversity affects the smartphone sensing pipeline (from collected data to model performance) is limited, as previous work aimed at quantifying such effects has been scarce \cite{khwaja_modeling_2019, meegahapola2020smartphone, phan_mobile_2022}, due to reasons including, but not limited to, logistical difficulties in conducting longitudinal smartphone sensing studies with the same protocol in diverse countries.
\begin{table*}
\centering
\caption{Terminology used in this study for training and testing approaches and target classes.}
\label{tab:keywords_explaination}
\resizebox{\textwidth}{!}{%
\begin{tabular}{>{\arraybackslash}m{3cm} >{\arraybackslash}m{13cm}}
\rowcolor[HTML]{EDEDED}
\textbf{\makecell[l]{Terminology}} &
\textbf{Description}
\\
Complex Daily Activity&
Based on prior studies that looked into complex activities of daily living \cite{saguna2013complex, rai2012mining, laput_sensing_2019}, we define these as activities that punctuate one's daily routine; that are complex in nature and occur over a non-instantaneous time window; and that have a semantic meaning and an intent, around which context-aware applications could be built. \\
\arrayrulecolor{Gray}
\hline
Country-Specific &
This approach uses training and testing data from the same single country.
Each country has its own model without leveraging data from other countries. As the name indicates, these models are specific to each country (e.g., a model trained in Italy and tested in Italy).\\
\arrayrulecolor{Gray}
\hline
Country-Agnostic &
This approach assumes that data and models are agnostic to the country. Hence, a trained model can be deployed to any country regardless of the country of training. There are two types of country-agnostic phases:\newline (Phase I) This phase uses training data from one country and testing data from another country. This corresponds to the scenario where a trained machine learning model already exists, and we need to understand how it would generalize to a new country (e.g., a model trained in Italy and tested in Mongolia). \newline (Phase II) This phase uses training data from four countries and testing data from the remaining country. This corresponds to a scenario where the model was already trained with data from several countries, and we need to understand how it would generalize to a new country (e.g., a model trained with data from Italy, Denmark, UK, and Paraguay, and tested in Mongolia). \\
\arrayrulecolor{Gray}
\hline
Multi-Country&
This one-size-fits-all approach uses training data from all five countries and tests the learned model in all countries. This corresponds to the setting in which multi-country data is aggregated to build one single generalized model. However, this is also how models are typically built without considering aspects such as country-level diversity. \\
\arrayrulecolor{Gray2}
\hline
\end{tabular}
}
\end{table*}
Our work uses a set of experimental approaches (country-specific, country-agnostic, and multi-country, described in Table~\ref{tab:keywords_explaination}), and model types (population-level and hybrid, described in Section~\ref{sec:inference}). With the support of rich multimodal smartphone sensing data collected in multiple countries under the same experimental protocol, we address three research questions:
\begin{itemize}[wide, labelwidth=!, labelindent=0pt]
\item[\textbf{RQ1:}] How are complex daily activities expressed in different countries, and what smartphone sensing features are the most useful in discriminating different activities?
\item[\textbf{RQ2:}] Is a generic multi-country approach well-suited for complex daily activity recognition? To which extent can country differences be accurately modeled by country-specific approaches?
\item[\textbf{RQ3:}] Can complex daily activity recognition models be country-agnostic? In other words, how well do models trained in one or more countries generalize to unseen countries?
\end{itemize}{}
In addressing the above research questions, we provide the following contributions:
\begin{itemize}[wide, labelwidth=!, labelindent=0pt]
\item[\textbf{Contribution 1:}] We examined a novel smartphone sensor dataset and over 216K self-reports (including complex daily activities) collected from 637 college students in five countries (Denmark, Italy, Mongolia, Paraguay, and the United Kingdom) for over four weeks. To represent each activity self-report, we extracted around 100 features by processing multimodal smartphone sensor data (Table~\ref{tab:agg-features}). Moreover, we defined 12 complex daily activity classes based on participant responses, prevalence, and prior work. The list includes sleeping, studying, eating, watching something, online communication and social media use, attending classes, working, resting, reading, walking, sports, and shopping. On the one hand, we found that similar features are most informative for all countries for specific activities (e.g., sleep, shopping, walking). On the other hand, for some other activities, the most informative features vary across countries. Interestingly, however, they remain approximately similar across geographically closer countries. For example, the ``sport'' activity has the use of ``health \& fitness apps'' as a top feature across European countries. However, the feature was not prominent in Mongolia and Paraguay, where such physical activity-related app usage is lower. This divide is also visible in the ``watching something'' activity, which is influenced by the use of entertainment apps in European countries, and not in the other two countries.
\item[\textbf{Contribution 2:}] We defined and evaluated a 12-class complex daily activity inference task with country-specific, country-agnostic, and multi-country approaches (Table~\ref{tab:keywords_explaination}). We also used population-level (not personalized) and hybrid (partially personalized) models to evaluate how model personalization affects performance within and across countries. We show that the generic multi-country approach, which directly pools data from all countries (a typical approach in many studies), achieved an AUROC of 0.70 with hybrid models. Country-specific models perform the best for the five countries, with AUROC scores in the range of 0.79--0.89. These results suggest that even though multi-country models are trained with more data, they could not leverage the additional information to achieve better performance, possibly due to the averaging effect of diverse behaviors across countries. The country-specific approach consistently worked better.
\item[\textbf{Contribution 3:}] With the country-agnostic approach, we found that models do not generalize well to other countries, with all AUROCs being below 0.7 in the population-level setting. With hybrid models, personalization increased the generalization of models reaching AUROC scores above 0.8, but not up to the same level as country-specific hybrid models. Moreover, even after partial personalization, we observed that models trained in European countries performed better when deployed in other European countries than in Mongolia or Paraguay. This shows that in addition to country diversity, behavior and technology usage habits could be what mediates the performance of models in different countries. In light of these findings, we believe that human-computer interaction and ubiquitous computing researchers should be aware of machine learning models' geographic sensitivities when training, testing, and deploying systems to understand real-life human behavior and complex daily activities. We also highlight the need for more work to address the domain shift challenge in multimodal mobile sensing datasets across countries.
\end{itemize}
To the best of our knowledge, this is the first study that focuses on the use of multimodal smartphone sensing data for complex daily activity recognition, while examining the effect of country-level diversity of data on complex activity recognition models with a large-scale multi-country dataset, and highlighting domain shift-related issues in daily activity recognition, even when the same experimental protocols are used to collect data in different countries.
The paper is organized as follows. In Section~\ref{sec:relatedwork}, we describe the related work and background. Then, we describe the dataset in Section~\ref{sec:dataset}. In Section~\ref{sec:descriptive}, we present the descriptive and statistical analysis regarding important features. We define and evaluate inference tasks in Section~\ref{sec:inference} and Section~\ref{sec:results}. Finally, we end the paper with the Discussion in Section~\ref{sec:discussion} and the Conclusion in Section~\ref{sec:conclusion}.
\section{Background and Related Work}\label{sec:relatedwork}
\subsection{Mobile Sensing}
In prior work, researchers have collected and analyzed mobile sensing data to understand various attributes of a particular population. Depending on the study, these goals fall under coarse categories such as behavior, context, and person-aspect recognition \cite{meegahapola2020smartphone}. Behavior recognition is aimed at understanding user activities broadly. Person-aspect recognition looks into demographic attributes (e.g., sex, age), psychology-related attributes (e.g., mood, stress, depression), and personality. Finally, context recognition identifies the different contexts (e.g., social context, location, environmental factors) in which mobile users operate.
Regarding behavior recognition, there are studies that aimed to capture two (sometimes three) states of a single complex activity/behavior such as eating (e.g., eating meals vs. snacks \cite{biel_bitesnbits_2018}, overeating vs. undereating vs. as usual eating \cite{meegahapola_one_2021}), smoking (e.g., smoking or not \cite{mcclernon2013your}) or drinking alcohol (e.g., drinking level \cite{phan2020understanding, bae2017detecting}, drinking or not \cite{santani_drinksense_2018}). Another study used the action logs of an audio-based navigation app to predict its usage and understand what drives user engagement \cite{low_vision_app_engagement}. Then, regarding person aspects, the MoodScope system \cite{likamwa_moodscope_2013} inferred the mood of smartphone users with a multi-linear regression based on interactions with email, phone, and SMS, as well as phone location and app usage. Servia-Rodriguez et al. \cite{servia-rodriguez_mobile_2017} observed a correlation between participants' routines and some psychological variables. They trained a deep neural network that could predict participants' moods using smartphone sensor data. Additionally, Khwaja et al. \cite{khwaja_modeling_2019} developed personality models based on random forests using smartphone sensor data. Finally, context recognition is aimed at detecting the context around behaviors and activities. The authors of \cite{meegahapola_alone_2020} used sensing data from Switzerland and Mexico to study the social context of college students' eating activities. More specifically, they built an inference model to detect whether a participant eats alone or with others. Similarly, the authors of \cite{meegahapola_examining_2021} examined smartphone data from young adults to infer the social context of drinking episodes using features from modalities such as the accelerometer, app usage, location, Bluetooth, and proximity. In this case, context detection is two-fold: it is based on the number of people in a group and on their relationship to the participant (e.g., alone, with another person, with friends, with colleagues). Similarly, mobile sensing studies attempted to infer other contexts, psychological traits, and activities by taking behavior and contexts sensed using smartphone sensors as proxies \cite{meegahapola2020smartphone, cornet2018systematic, hoseini2013survey}.
A common aspect of most of these studies is that they were done in the wild, focused on two- or three-class state inference, and did not use fine-grained sensing (i.e., behavior and context were used as proxies for the dependent variable). This paper follows a similar approach with a dataset captured in real life, using multimodal smartphone sensor data, and taking behavior and context as proxies for our dependent variable. However, in this study, the target attribute entails a 12-class daily activity recognition problem that is complex and novel compared to prior work. In addition, we are interested in examining model performance within and across five countries, with and without partial personalization.
\subsection{Activity Recognition}
Human activity recognition (HAR) aims to understand what people are doing at a given time. Large-scale datasets derived from the activity of smartphone users have great potential for this task. This ``digital footprint'' has been used to re-identify individuals using credit-card metadata \cite{credit_card_reidentification}: it has been shown that only four data points are required to re-identify 90\% of individuals. While the same approach could be followed using smartphone sensing data, our main focus is activity recognition at a single point in time rather than using time series for re-identification. We will focus on two types of activity recognition techniques: wearable-based and smartphone-based \cite{straczkiewicz2021systematic}.
\subsubsection{Wearable-based HAR}
In wearable-based activity recognition, users wear sensors such as accelerometers, whose data are analyzed and classified to detect activities. For example, in healthcare, wearable-based HAR can be used to analyze gait and prevent falling or to monitor physical activity and observe health outcomes \cite{liu_overview_2021}. The wearable-sensing trend emerged two decades ago and relied on custom-designed wearable sensors \cite{farringdon_wearable_1999, park_wearable_1999}, backed by encouraging findings in health research. With time, custom sensors were replaced by commercial fitness or activity trackers. Unfortunately, applying these findings to real-world settings was rare due to the high cost of producing custom sensors, the difficulty of distributing devices to a broad audience, and their unpopularity among some users \cite{coorevits_rise_2016}. This restricted most studies using wearables to performing experiments in a controlled environment or in the wild with smaller populations. However, wearable-based HAR models that can recognize simple activities are currently deployed across many commercial wearable devices.
\subsubsection{Smartphone-based HAR}
With the popularity of smartphones over the past two decades, many of the problems of wearable-based HAR were alleviated. Reality Mining \cite{eagle_reality_2006} is a pioneering study in the field of mobile sensing: it showed the utility of mobile sensing data in a free-living setting. In smartphone-based activity recognition, people do not need to use wearable sensors. Instead, the system relies on a smartphone that is always on and stays close to its user. Smartphones can replace wearable devices because they contain multiple sensors such as an accelerometer, gyroscope, GPS, proximity sensor, and thermometer. Nevertheless, smartphones capture data at multiple positions (e.g., a pocket, hand, or handbag), which introduces a bias in sensor measurements as they are position-dependent \cite{yang_pacp_2016}.
Regardless of the device used, most prior activity recognition tasks have been done in lab-based/controlled settings where accurate ground truth capture is possible \cite{straczkiewicz_systematic_2021}. The prime goal of such studies is to increase the accuracy of activity recognition models with precise ground truth and sensor data collection (e.g., by placing sensors on fixed body positions, recording ground truth with videos, etc.). However, these studies are hard to scale and do not capture the real behavior of participants, and this is especially true for complex daily activities \cite{saguna2013complex}. For example, a person's behavior when studying, working, or shopping in an unconstrained environment cannot be replicated in a lab. On the other hand, some studies are done in the wild \cite{laput2019sensing, saguna2013complex}, where the ground truth and sensor data collection might not be as precise but allow capturing complex daily activities in a naturalistic setting. Our study is similar: our intention was to take a more exploratory stance, build country-level diversity-aware models, and compare their performance within and across different countries.
\subsection{Activity Types}
One crucial difference across existing studies is in the selection of activities. A majority of studies work towards the recognition of simple activities. For example, Straczkiewicz et al. \cite{straczkiewicz_systematic_2021} classified activities into groups such as posture (lying, sitting, standing), mobility (walking, stair climbing, running, cycling), and locomotion (motorized activities). Laput and Harrison \cite{laput_sensing_2019} called such activities coarse or whole-body. Activities belonging to these groups are directly measurable from one or more proxies (e.g., inertial sensor unit, location). For example, when considering the accelerometer, each activity has a distinct pattern on the different axes \cite{dernbach_simple_2012}. However, they constitute a small subset of activities performed by people in their daily lives \cite{saguna2013complex, rai2012mining, cobian2013active}.
Notice that some of the simple activities described above are
usually part of more complex activities (e.g., sitting while eating, walking while shopping). Dernbach et al. \cite{dernbach_simple_2012} defined complex activities as a series of multiple actions, often overlapping. Along with Bao et al. \cite{bao_activity_2004}, they used the same techniques to recognize both simple and complex activities. This results in weaker performance for complex activities since their structure is more complicated. Another approach considers complex activities hierarchically, using combinations of simple activities to predict more complex ones. Huynh et al. \cite{huynh_discovery_2008} characterized user routines as a probabilistic combination of simple activities. Blanke et al. \cite{choudhury_daily_2009} used a top-down method that first identifies simple activities to recognize complex ones. However, this requires pre-defining simple activities and their mappings to complex activities. Some studies focus on detecting binary episodes of a single complex activity or a specific action. For example, the Bites'n'Bits study \cite{biel_bitesnbits_2018} examined the contextual differences between eating a meal and a snack, and presented a classifier able to discriminate eating episodes among students. Likewise, DrinkSense \cite{santani_drinksense_2018} aimed at detecting alcohol consumption events among young adults on weekend nights. Unfortunately, such task-specific classifiers will perform poorly when exposed to situations they were not trained on.
In this study, we focus on 12 daily activities, 11 of which are complex and one of which is simple (walking), derived by considering over 216K self-reports from college students in five countries. In this context, drawing from prior studies that looked into activities of daily living \cite{saguna2013complex, rai2012mining}, for the scope of this paper, we define complex activities as \emph{``activities that punctuate one's daily routine; that are complex in nature and occur over a non-instantaneous time window; and that have a semantic meaning and an intent, around which context-aware applications could be built''}. While it is impossible to create a classifier that could recognize all complex human activities, we believe the classifier we propose captures a wide range of prevalent activities/behaviors, especially among young adults.
\subsection{Diversity-Awareness in Smartphone Sensing}
Research in the field of smartphone sensing, including the studies mentioned above, lacks diversity in its study populations \cite{meegahapola2020smartphone}. Regarding country diversity, with a few exceptions \cite{servia-rodriguez_mobile_2017, khwaja_modeling_2019}, most experiments were conducted in a single country or, rarely, two. This can be problematic with respect to the generalization of findings since smartphone usage differs across geographic regions, which can lead to different patterns being observed in, for example, two populations of different genders or age ranges \cite{del_rosario_comparison_2014}. Khwaja et al. stressed the importance of diversity awareness in mobile sensing \cite{khwaja_modeling_2019}. Moreover, experiments performed in a controlled setting usually cannot accommodate many participants. While this makes the whole process lighter and more manageable, it also restricts the generalization of results to a broader free-living audience \cite{sasaki_performance_2016, van_hees_impact_2013}. According to Phan et al. \cite{phan_mobile_2022}, cross-country generalizability is the extent to which findings apply to groups other than those under investigation.
Diversity awareness and model generalization are two essential aspects, as they will allow an activity recognition system to be deployed and to perform well across different user groups and countries \cite{meegahapola2023generalization, Schelenz2021The}. In computer vision research, the lack of diversity has been repeatedly shown for specific attributes such as gender and skin color \cite{raji_actionable_2019, cuthbertson_self-driving_2019, karkkainen2021fairface}. In natural language processing and speech research, not accounting for dialects in different countries could marginalize groups of people from certain countries~\cite{prabhakaran2022cultural}. Hence, ignoring country diversity when developing AI systems could harm users in the long run by marginalizing certain groups of people \cite{prabhakaran2022cultural}. In this context, smartphone sensing studies that consider country-level diversity are still scarce \cite{phan_mobile_2022}. This could be due to the lack of large-scale datasets, logistical difficulties in data collection in different countries, and studies being time and resource-consuming. Khwaja et al. \cite{khwaja_modeling_2019} built personality inference models using smartphone sensor data from five countries and showed that such models perform well when tested in new countries. To the best of our knowledge, their study is one of the first to investigate the generalization of smartphone sensing-based inference models across different countries. In our work, we focus on complex daily activity recognition with smartphone sensing and aim to uncover and examine model behavior in multi-country settings.
\subsection{Human-Centered Aspects in Smartphone Usage}
Our literature review has so far focused on the technical aspects such as data collection or target variables. We now discuss the impact of smartphone usage on individuals and society, which is studied by various disciplines in the social sciences. Previous work includes the study of smartphone dependence among young adults, where it was found that problematic smartphone use varies by country and gender \cite{self_reported_phone_dependence, addictive_behaviour_modelling}, and that specific activities such as social networking, video games, and online shopping contribute to the addiction \cite{self_reported_phone_dependence, smartphone_addiction_24_countries}. Another study \cite{Rathod_Ingole_Gaidhane_Choudhari_2022} summarized findings on correlations between smartphone usage and psychological morbidities among teens and young adults. Excessive smartphone usage could lead to emotional difficulties, impulsivity, shyness, low self-esteem, and some medical issues such as insomnia, anxiety, or depression. From a sociological standpoint, Henriksen et al. \cite{soc10040078} studied how smartphones impact interactions in cafés and defined three concepts of social smartphone practices. \emph{Interaction suspension} occurs when an interaction is briefly paused (e.g., a friend goes to the bathroom), which can lead to using the smartphone to appear occupied or to avoid the discomfort of being alone. \emph{Deliberate interaction shielding} corresponds to situations where one suspends an ongoing interaction to answer a phone call or a text message, whether it is an emergency or just in fear of missing out. \emph{Accessing shareables} leads to a collective focus on shared content (e.g., pictures or short videos), giving the smartphone the role of enhancing face-to-face social interactions rather than obstructing them. Nelson and Pieper \cite{Nelson_Pieper_2022} showed that smartphone attachment ``inadvertently exacerbates feelings of despair while simultaneously promises to resolve them'', thus trapping users in negative cycles.
According to Van Deursen et al. \cite{addictive_behaviour_modelling}, older populations are less likely to develop addictive smartphone behaviors. While they are often associated with younger generations, smartphones are slowly gaining popularity among older generations, who are coming up with creative ways to integrate them into their habits. Miller et al. \cite{global_smartphone} investigated the role that smartphones play in different communities across nine countries. Through 16-month-long ethnographies, they showed that various groups of people have specific ways of taking ownership of their smartphones through apps, customization, and communication. For example, in Ireland, smartphones are used by the elderly in many of their daily activities, and in Brazil, the usage of messaging applications for health has led to the creation of a manual of best practices for health through such applications. More globally, smartphones can help users stay in touch with their extended families or distant friends, a feature that has been particularly important during the 2020 global pandemic. In this paper, we attempt to uncover country-specific smartphone usage patterns through multimodal sensing data. While these insights may not have the depth that field observations provide, they represent a starting point for future research to draw upon.
Hence, while considering these factors, we aim to examine smartphone sensing-based inference models for complex daily activity recognition with country-specific, country-agnostic, and multi-country approaches, as described in Figure~\ref{fig:overview}.
\section{Data, Features, and Target Classes}\label{sec:dataset}
\begin{figure*}
\includegraphics[width=0.7\textwidth]{images/overview_diagram.pdf}
\caption{High-level overview of the study. The study uses continuous and interaction sensing modalities and different approaches (country-specific, country-agnostic, and multi-country) to infer complex daily activities.}
\label{fig:overview}
\Description{High-level overview of the study is given in this diagram. The study uses continuous and interaction sensing modalities and different approaches such as country-specific, country-agnostic, and multi-country, to infer complex daily activities.}
\end{figure*}
\subsection{Dataset Information}
To address our research questions, we collected a smartphone sensing dataset regarding the everyday behavior of college students for four weeks during November 2020, in the context of the European project "WeNet: The Internet of Us" \footnote{The dataset is planned to be released for research purposes after the end of the project, by complying with all regulations governing the data collection protocol within and outside the European Union. Hence, future plans for dataset access will be made available on the project website: \url{https://www.internetofus.eu/}}. The study procedure is summarized in a technical report \cite{giunchiglia2022aworldwide}. The sample consisted of both undergraduate and graduate students. This dataset was collected to study the effect of the diversity of study participants on social interactions and smartphone sensor data. The dataset contains over 216K self-reported activities collected from 637 college students living in five countries (ordered by the number of participants): Italy, Mongolia, the United Kingdom, Denmark, and Paraguay. All data were collected using Android smartphones with the same mobile app. Table \ref{tab:num-participants} shows the distribution of participants across countries. Moreover, the data were collected with a protocol compliant with the EU General Data Protection Regulation (GDPR) and with each non-EU country's rules. In addition, written approvals from the ethical review boards (or similar entities) were acquired by each participating university, separately.
\begin{table*}
\centering
\caption{A summary of participants in the data collection. Countries are sorted by the number of participants.}
\label{tab:num-participants}
\begin{tabular}{llcccr}
\rowcolor[HTML]{EDEDED}
\textbf{University} &
\textbf{Country} &
\textbf{Participants} &
\textbf{$\mu$ Age ($\sigma$)} &
\textbf{\% Women} &
\textbf{\# of Self-Reports}
\\
\arrayrulecolor{Gray}
University of Trento &
Italy &
259 &
24.1 (3.3) &
58 &
116,170
\\
\arrayrulecolor{Gray}
\hline
National University of Mongolia &
Mongolia &
224 &
22.0 (3.1) &
65 &
65,387
\\
\arrayrulecolor{Gray}
\hline
\makecell[l]{London School of Economics\\ \& Political Science} &
UK &
86 &
26.6 (5.0) &
66 &
20,238
\\
\arrayrulecolor{Gray}
\hline
\makecell[l]{Universidad Católica \\"Nuestra Señora de la Asunción"} &
Paraguay &
42 &
25.3 (5.1) &
60 &
6,998
\\
\arrayrulecolor{Gray}
\hline
Aalborg University &
Denmark &
26 &
30.2 (6.3) &
58 &
7,461
\\
\arrayrulecolor{Gray2}
\rowcolor[HTML]{EDEDED}
Total/Mean &
&
637 &
24.0 (4.3) &
62 &
216,254
\\
\end{tabular}
\end{table*}
The first phase of the data collection obtained questionnaire data about the participants, their habits, social relations, individual practices (e.g., physical activities, leisure), and skills (personal and interpersonal). This data was aimed at capturing different aspects of diversity, including observable characteristics such as demographics as well as less observable aspects such as personality, traits, skills, values, and relations \cite{schelenz_theory_2021}. The second phase collected data through a smartphone application. Participants filled out time diaries multiple times throughout the day. Participants were asked about their sleep quality and expectations at the start of the day. At the end of the day, they had to report how their day went. At every hour, they had to self-report what they were doing (current activity, using a drop-down list of 34 activities), location (a list of 26 semantic locations), social context (a list of 8 social contexts), and mood (valence was captured similar to \cite{likamwa_moodscope_2013} with a five-point scale). The app continuously collected data from more than thirty smartphone sensors, which can be broken down into two categories \cite{meegahapola2020smartphone}: continuous sensing modalities such as the simple activity type (derived using inertial sensors and location with the Google Activity Recognition API \cite{GoogleActivity2022}), step count, location, WiFi, Bluetooth, phone signal, battery, and proximity; and interaction sensing modalities such as application usage time, screen usage episode counts and time, notification clicking behaviors, and user presence time.
\subsection{Deriving Features}
The choice of the dataset's format is key for the rest of the study. A \emph{tabular} dataset centered around activities or events enables the handcrafting of a multitude of sensor-specific features discussed in prior literature \cite{santani_drinksense_2018, meegahapola2020smartphone, canzian2015trajectories, servia-rodriguez_mobile_2017, meegahapola_one_2021}. This, in turn, enables the use of traditional machine learning methods. However, a \emph{temporal} dataset relies mainly on raw sensor measurements in the form of time series (i.e., raw accelerometer and gyroscope data in typical activity recognition). This approach allows deep learning methods to extract and learn relevant high-level features automatically. Past research \cite{zhu_efficient_2019, aviles-cruz_coarse-fine_2019} has shown that using deep learning techniques yields better-performing HAR classifiers. However, these studies typically include simple activities that are easier to detect with inertial sensors than the more complex ones we are interested in. This is particularly relevant in remote study/work settings, where many activities are performed at home and involve little distinctive motion. Therefore, we chose to perform the analysis using a tabular dataset with the heterogeneous handcrafted features described below.
We aggregated all sensor measurements around self-reports to create features from smartphone data. We followed a time-window-based approach similar to prior studies on event-level inferences \cite{meegahapola2020smartphone, straczkiewicz2021systematic, servia-rodriguez_mobile_2017}. Hence, we used 10 minutes before and after each self-report and aggregated sensor data in the corresponding 20-minute interval \footnote{We conducted experiments with different time windows between 5 minutes and 25 minutes. We did not go beyond 25 minutes because it would lead to overlapping sensor data segments, hence leaking data between data points. The 20-minute window performed best out of the examined time windows. For brevity, we only present results with the 20-minute window. Shorter windows might not have performed as well because they do not capture enough contextual information to make the inference. Prior work has also shown that larger time windows might be suitable to detect binary activities \cite{bae_detecting_2017, meegahapola_examining_2021, biel_bitesnbits_2018}}. While traditional inertial sensor-based recognition of simple activities attempts to capture repetitive movements using deep learning with a smaller time window, that method is not applicable here because we attempt to capture a set of non-repetitive activities that last longer. In addition, we consider behavior and context sensed with the smartphone as proxies for the target activity, similar to prior ubicomp studies \cite{mcclernon2013your, meegahapola_one_2021, servia-rodriguez_mobile_2017}. The corresponding features generated from each sensing modality are shown in Table \ref{tab:agg-features}. More details on how each sensing modality was pre-processed can be found in \cite{meegahapola2023generalization}. In addition to sensor data features, we added a feature that describes the time period of the day when the activity occurred and a weekend indicator. While there is no agreement in the literature as to how a day should be split into morning, afternoon, evening, and night \cite{obuchi2020predicting, wang2020predicting, wang2017predicting, nepal2020detecting, wang2022first}, we defined five time periods: morning from 6 AM to 10 AM, noon from 10 AM to 2 PM, afternoon from 2 PM to 6 PM, evening from 6 PM to 10 PM, and night from 10 PM to 6 AM, and included the period as another feature that could be used in training machine learning models.
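To make the aggregation concrete, the following is a minimal sketch (not the exact pipeline used in the study) of how a single sensing modality could be aggregated around one self-report; the DataFrame and column names (\texttt{reports}, \texttt{steps}, \texttt{timestamp}, \texttt{participant\_id}) are hypothetical, and the day-period boundaries follow the definition above.
\begin{verbatim}
import pandas as pd

WINDOW = pd.Timedelta(minutes=10)  # 10 min before/after -> 20-minute interval

def day_period(hour):
    # Five periods used as features (boundaries as defined above).
    if 6 <= hour < 10: return "morning"
    if 10 <= hour < 14: return "noon"
    if 14 <= hour < 18: return "afternoon"
    if 18 <= hour < 22: return "evening"
    return "night"

def aggregate_report(report, steps):
    """Aggregate one modality (step counts) within a self-report's window."""
    t = report["timestamp"]
    mask = ((steps["participant_id"] == report["participant_id"])
            & (steps["timestamp"] >= t - WINDOW)
            & (steps["timestamp"] <= t + WINDOW))
    window = steps.loc[mask]
    return {"steps_detected": window["steps_detected"].sum(),
            "hour_of_day": t.hour,
            "day_period": day_period(t.hour),
            "weekend": int(t.dayofweek >= 5)}

# One feature row per self-report:
# features = pd.DataFrame(
#     [aggregate_report(r, steps) for _, r in reports.iterrows()])
\end{verbatim}
In the actual feature set, analogous aggregations (means, standard deviations, counts, and durations) are computed for every modality listed in Table \ref{tab:agg-features}.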
\begin{table*}
\caption{Summary of 108 features extracted from raw sensing data, aggregated around activity self-reports using a time window.}
\label{tab:agg-features}
\centering
\resizebox{0.9\textwidth}{!}{%
\begin{tabular}{{p{0.2\linewidth}p{0.8\linewidth}}}
\rowcolor[HTML]{EDEDED}
\textbf{Modality} & \textbf{Corresponding Features and Description} \\
Location & radius of gyration, distance traveled, mean altitude calculated similarly to prior work \cite{canzian2015trajectories} \\
\arrayrulecolor{Gray}
\hline
Bluetooth {[}LE, normal{]} & number of devices (the total number of unique devices found), mean/std/min/max RSSI (Received Signal Strength Indication -- a measure of how close or distant devices are) \cite{santani_drinksense_2018} \\
\arrayrulecolor{Gray}
\hline
WiFi & connected to a network indicator, number of devices (the total number of unique devices found), mean/std/min/max RSSI \cite{santani_drinksense_2018} \\
\arrayrulecolor{Gray}
\hline
Cellular {[}GSM, WCDMA, LTE{]} & number of devices (the total number of unique devices found), mean/std/min/max phone signal strength \cite{santani_drinksense_2018} \\
\arrayrulecolor{Gray}
\hline
Notifications & notifications posted (the number of notifications that came to the phone), notifications removed (the number of notifications that were removed by the participant) -- these features were calculated with and without duplicates. \cite{likamwa_moodscope_2013}\\
\arrayrulecolor{Gray}
\hline
Proximity & mean/std/min/max of proximity values \cite{bae_detecting_2017} \\
\arrayrulecolor{Gray}
\hline
Activity & time spent doing the following simple activities: still, in\_vehicle, on\_bicycle, on\_foot, running, tilting, walking, other (derived using the Google Activity Recognition API \cite{GoogleActivity2022}) \\
\arrayrulecolor{Gray}
\hline
Steps & steps counter (steps derived using the total steps since the phone was last turned on), steps detected (steps derived using an event triggered for each new step) \cite{das2022semantic} \\
\arrayrulecolor{Gray}
\hline
Screen events & touch events (number of phone touch events), user presence time (derived using the Android API that indicates whether a person is using the phone or not), number of episodes (an episode lasts from when the screen is turned on until it is turned off), mean/min/max/std episode time (a time window could have multiple episodes), total time (total screen-on time within the time window) \cite{likamwa_moodscope_2013, bae_detecting_2017, meegahapola_one_2021} \\
\arrayrulecolor{Gray}
\hline
App events & time spent on apps of each category derived from Google Play Store \cite{likamwa_moodscope_2013, santani_drinksense_2018}: action, adventure, arcade, art \& design, auto \& vehicles, beauty, board, books \&
reference, business, card, casino, casual, comics, communication,
dating, education, entertainment, finance, food \& drink, health \&
fitness, house, lifestyle, maps \& navigation, medical, music, news \&
magazines, parenting, personalization, photography, productivity,
puzzle, racing, role-playing, shopping, simulation, social, sports,
strategy, tools, travel, trivia, video players \& editors, weather, word \\
\arrayrulecolor{Gray}
\hline
Time \& Day & hour of the day, period of the day (morning, noon, afternoon, evening, night), weekend indicator (weekday or weekend) \cite{biel_bitesnbits_2018, meegahapola_one_2021} \\
\arrayrulecolor{Gray2}
\hline
\end{tabular}}
\end{table*}
\subsection{Determining Target Classes}
Hourly self-reports required participants to log what they were doing at the time by selecting an activity from a predefined list of thirty-four items. These items were derived based on prior work \cite{giunchiglia2017personal, zhang2021putting}. By looking at their distribution in different countries (Figure \ref{fig:target-dist-before}), one can quickly notice that they are highly unbalanced. The remote work/study constraints during the time of data collection were one of the causes behind this imbalance, because activities such as traveling, walking, or shopping would have been more popular if mobility was not restricted. A closer look at the list of activities shows that some classes are too broad in terms of semantic meaning. Hence, similar to prior work that narrowed down activity lists based on various aspects \cite{laput_sensing_2019}, we narrowed down the original list of activities into 12 categories to capture complex daily activities that are common enough in the daily lives of people, especially in a remote work/study setting. For example, under ``hobbies'', one can be playing the piano or painting, and the two do not entail the same smartphone usage and are not common enough. Similarly, “social life” is too broad, as one could be in a bar, a restaurant, or a park. Moreover, to mitigate the class imbalance problem, we decided to filter the target classes. First, classes that had similar semantic meanings were merged: this is the case of eating and cooking, and social media and internet chatting. Classes representing a broad activity were removed, such as personal care, household care, games, and hobbies. Finally, classes that did not have enough data in all countries were removed, such as listening to music, movie, theatre, concert, and free-time study. Filler classes such as ``nothing special'' or ``other'' were also removed. This filtering reduced the number of target classes to twelve, and their updated distribution is shown in Figure \ref{fig:target-dist-after}. These classes entail activities performed during daily life that are complex in nature and have a semantic meaning around which context-aware applications could be built. Moreover, the selected activities also align with prior work that looked into complex daily activity recognition \cite{saguna2013complex}.
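The following is a minimal sketch of the merging and filtering logic just described; apart from the merges and removals explicitly named above, the raw label strings are hypothetical placeholders rather than the exact strings used in the time diaries.
\begin{verbatim}
# Merge semantically similar classes (label strings are illustrative).
MERGE = {"cooking": "eating",
         "internet chatting": "online comm./social media",
         "social media": "online comm./social media"}

# Broad, low-data, or filler classes that are removed.
DROP = {"personal care", "household care", "games", "hobbies",
        "listening to music", "movie/theatre/concert",
        "free-time study", "nothing special", "other"}

TARGETS = {"sleeping", "studying", "eating", "watching something",
           "online comm./social media", "attending class", "working",
           "resting", "reading", "walking", "sport", "shopping"}

def map_label(raw):
    if raw in DROP:
        return None                  # self-report excluded from the task
    label = MERGE.get(raw, raw)
    return label if label in TARGETS else None
\end{verbatim}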
\begin{figure*}
\includegraphics[width=0.8\textwidth]{images/count_per_activity_country_before.png}
\caption{The original distribution of target classes before any filtering or merging was done.}
\Description{The original distribution of target classes before any filtering or merging was done. X-axis shows the activities. Y-axis shows the country, separated by countries. Activities with the highest number of self-reports include sleeping, studying, eating, watching something, and attending class.}
\label{fig:target-dist-before}
\end{figure*}
\begin{figure}
\includegraphics[width=0.5\textwidth]{images/count_per_activity_country_after.png}
\caption{Distribution of target classes after removing classes that are semantically broad or lack data.}
\Description{Distribution of target classes after removing classes that are semantically broad or lack data. Now the 12 classes include sleeping, studying, eating, watching something, online communication or social media, attending class, working, resting, reading, walking, sport, and shopping.}
\label{fig:target-dist-after}
\end{figure}
\section{How are activities expressed in different countries, and what smartphone features are most discriminant? (RQ1)}\label{sec:descriptive}
\begin{figure*}
\includegraphics[width=\textwidth]{images/activities_per_hour.png}
\caption{Density functions of target classes as a function of the hour of day in each country.}
\Description{Density functions of target classes as a function of the hour of the day in each country. In each diagram, the x-axis refers to the hour of the day, and the y-axis refers to the density of each activity.}
\label{fig:activities-per-hour}
\end{figure*}
To understand the distribution of activities in each country and to determine the influence of features on the target, we provide a descriptive and statistical analysis of the dataset in this section, hence shedding light on \textbf{RQ1}.
\subsection{Hourly Distribution of Activities}
The activities we consider all seem to occur at different times: people tend to sleep at night, work during the day, and eat around noon and in the evening. However, not all schedules are the same, especially not across different countries \cite{fisher_daily_2010, esteban_ortiz-ospina_time_2020}. We report the density function of each target class at different hours of the day in Figure \ref{fig:activities-per-hour}. In each diagram, the x-axis refers to the hour of the day, and the y-axis refers to the density of each activity. On an important note, while most activities were reported as they were being performed, in the case of sleeping, participants reported the activity \emph{after} they woke up, while still in bed, meaning that peaks for that activity could also be interpreted as ``waking up''. This was later confirmed with many participants in all countries during post-study interviews. This also makes the time of the day less informative when inferring the sleeping activity.
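As an illustration, per-country densities of this kind could be produced with kernel density estimates over the self-report hours. The sketch below assumes a hypothetical DataFrame \texttt{df} with \texttt{country}, \texttt{activity}, and \texttt{timestamp} columns and uses seaborn, which is not necessarily the tool used for the original figure.
\begin{verbatim}
import seaborn as sns
import matplotlib.pyplot as plt

df["hour"] = df["timestamp"].dt.hour
g = sns.FacetGrid(df, col="country", hue="activity", sharey=False)
g.map(sns.kdeplot, "hour", clip=(0, 24), bw_adjust=0.5)
g.add_legend()
plt.show()
\end{verbatim}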
A first look at the distribution shows the ``expected'' patterns, such as a peak of sleeping during the night or peaks around eating times for lunch and dinner. Notice that participants from Paraguay tend to sleep less than others, reflecting that they start working and resting earlier in the day. Online communication and social media usage happen around noon, coinciding with a break from classes and lunchtime, followed by a high peak towards the end of the day. This is in line with prior studies that showed that depending on the location context and hour of the day, the use of certain social media applications (e.g., Twitter) could differ \cite{deng2018social}. Moreover, we also observe country differences in hourly social media and online communication app usage patterns as reported by users. For example, between noon and 6 pm, there is a dip in the usage of these types of apps in Italy, Paraguay, and Denmark, whereas that pattern is not visible in the UK. Prior work has also studied social media app usage and adoption-related differences, especially across countries. As per those studies, such usage differences could result from cultural characteristics within countries and from motives of people for using different apps \cite{alsaleh2019cross, lim2014investigating}. Most leisure activities (reading, shopping, sport, watching something) happen towards the end of the day, right when students have finished their classes.
Another activity that showed clear cross-country differences is ``Eating''. We can observe that Italians tend to eat later than others, which hints at their Mediterranean customs \cite{tobin_what_2018}. Italy also showed two clear peaks for lunch and dinner with a sharp dip in between the two meals. The dip is less visible in other countries, indicating that meals are more spread out across different times. Moreover, the dinner peak occurred at or after 6 pm in all countries except Mongolia, where it occurred before 6 pm. These findings suggest that the hour of the day could indicate whether people are eating or not---slightly differently in Italy, Mongolia, and other countries. In fact, prior studies that used mobile sensors to examine eating behavior showed that the hour of the day is an important feature in predicting aspects related to eating \cite{biel_bitesnbits_2018, meegahapola_one_2021}. To add to that, prior studies have also pointed out that meal times, frequency, and sizes could also differ between countries \cite{chiva1997cultural}, even within Europe \cite{southerton2012behavioural}. Finally, the activity ``walking'' had more or less similar distributions across countries. In fact, a smartphone-based activity tracking study by Althoff et al. \cite{althoff2017large} mentioned that the average number of steps walked by people across Italy, the UK, Denmark, and Mongolia were in the same ballpark (i.e., around 5000-6000 daily steps).
\subsection{Statistical Analysis of Features}
To understand the importance of each smartphone sensing feature in discriminating each target activity from others, we reported in Table \ref{tab:statistical-analysis} the top three features and their ANOVA (Analysis of variance) F-values \cite{kim2014analysis} for each activity and each country. The goal is to identify features that define an activity and how those differ across countries. We consider each country-activity pair alone to find features that influence the classification task in a binary setting (i.e., determining whether the participant is sleeping or not, studying or not, eating or not, etc.).
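A minimal sketch of this feature ranking is shown below, assuming a hypothetical feature matrix \texttt{X} (a pandas DataFrame) and activity labels \texttt{y} for a single country; the one-vs-rest binarization and the $p < 0.05$ filter follow the procedure described above.
\begin{verbatim}
import pandas as pd
from sklearn.feature_selection import f_classif

def top_features(X, y, activity, k=3):
    target = (y == activity).astype(int)    # doing the activity or not
    f_vals, p_vals = f_classif(X, target)   # one-way ANOVA per feature
    scores = pd.DataFrame({"feature": X.columns,
                           "F": f_vals, "p": p_vals})
    scores = scores[scores["p"] < 0.05]
    return scores.sort_values("F", ascending=False).head(k)

# Example: top_features(X_italy, y_italy, "sleeping")
\end{verbatim}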
\begin{table*}
\caption{ANOVA F-values (F) with p-value $< 0.05$ for each target activity
and each country. The best feature is listed first. Comparisons of
F-values are only valid locally, within the same activity and country.}
\label{tab:statistical-analysis}
\resizebox{1\textwidth}{!}{%
\begin{tabular}{cllllllllll}
\rowcolor[HTML]{EDEDED}
\multicolumn{1}{l}{\cellcolor[HTML]{EDEDED}} & \multicolumn{2}{c}{\cellcolor[HTML]{EDEDED}\textbf{Italy}} & \multicolumn{2}{c}{\cellcolor[HTML]{EDEDED}\textbf{Mongolia}} & \multicolumn{2}{c}{\cellcolor[HTML]{EDEDED}\textbf{UK}} & \multicolumn{2}{c}{\cellcolor[HTML]{EDEDED}\textbf{Denmark}} & \multicolumn{2}{c}{\cellcolor[HTML]{EDEDED}\textbf{Paraguay}}
\\
\multicolumn{1}{l}{} & \textit{Feature} & \textit{F} & \textit{Feature} & \textit{F} & \textit{Feature} & \textit{F} & \textit{Feature} & \textit{F} & \textit{Feature} & \textit{F}
\\
\arrayrulecolor{Gray}
\hline
& app\_tools & 5423 & day\_period & 6623 & day\_period & 1632 & day\_period & 354 & app\_not-found & 510
\\
& day\_period & 4439 & app\_not-found & 3595 & screen\_max\_episode & 603 & screen\_max\_episode & 249 & noti\_removed\_wo\_dups & 348 \\
\multirow{-3}{*}{\textbf{Sleeping}} & screen\_max\_episode & 2498 & noti\_removed\_wo\_dups & 1052 & screen\_time\_per\_episode & 534 & screen\_time\_per\_episode & 156 & notifications\_posted\_wo\_dups & 289
\\
\rowcolor[HTML]{EFEFEF}
\cellcolor[HTML]{EFEFEF} & screen\_max\_episode & 1447 & day\_period & 683 & screen\_time\_total & 446 & screen\_max\_episode & 241 & app\_video players \& editors & 147
\\
\rowcolor[HTML]{EFEFEF}
\cellcolor[HTML]{EFEFEF} & screen\_time\_total & 1378 & noti\_removed\_wo\_dups & 220 & screen\_max\_episode & 396 & screen\_time\_total & 225 & app\_not-found & 84
\\
\rowcolor[HTML]{EFEFEF}
\multirow{-3}{*}{\cellcolor[HTML]{EFEFEF}\textbf{Studying}} & day\_period & 1146 & app\_photography & 178 & screen\_time\_per\_episode & 247 & weekend & 154 & day\_period & 43 \\
& day\_period & 271 & day\_period & 518 & day\_period & 61 & proximity\_std & 38 & app\_not-found & 37
\\
& app\_tools & 98 & app\_not-found & 180 & app\_not-found & 26 & proximity\_max & 29 & wifi\_mean-rssi & 23
\\
\multirow{-3}{*}{\textbf{Eating}} & app\_not-found & 61 & activity\_still & 72 & app\_video players \& editors & 23 & app\_communication & 18 & wifi\_max-rssi & 21
\\
\rowcolor[HTML]{EFEFEF}
\cellcolor[HTML]{EFEFEF} & app\_entertainment & 715 & day\_period & 326 & app\_video players \& editors & 397 & app\_entertainment & 151 & wifi\_mean-rssi & 51
\\
\rowcolor[HTML]{EFEFEF}
\cellcolor[HTML]{EFEFEF} & app\_not-found & 426 & app\_not-found & 325 & wifi\_std\_rssi & 85 & app\_not-found & 59 & app\_lifestyle & 38
\\
\rowcolor[HTML]{EFEFEF}
\multirow{-3}{*}{\cellcolor[HTML]{EFEFEF}\textbf{\makecell[c]{Watching\\something}}} & weekend & 334 & wifi\_num\_of\_devices & 217 & app\_entertainment & 66 & notifications\_posted & 58 & weekend & 29 \\
& app\_social & 1381 & touch\_events & 503 & wifi\_num\_of\_devices & 112 & app\_tools & 64 & app\_tools & 95
\\
& screen\_time\_total & 565 & screen\_time\_total & 355 & wifi\_connected & 93 & app\_casual & 58 & proximity\_max & 58
\\
\multirow{-3}{*}{\textbf{\makecell[c]{Online comm./\\Social media}}} & screen\_max\_episode & 473 & app\_not-found & 354 & screen\_time\_total & 92 & screen\_time\_total & 42 & proximity\_mean & 48 \\
\rowcolor[HTML]{EFEFEF}
\cellcolor[HTML]{EFEFEF} & weekend & 3167 & day\_period & 455 & weekend & 357 & app\_not-found & 119 & notifications\_posted\_wo\_dups & 148
\\
\rowcolor[HTML]{EFEFEF}
\cellcolor[HTML]{EFEFEF} & screen\_num\_of\_episodes & 745 & weekend & 289 & day\_period & 260 & notifications\_posted & 104 & weekend & 112
\\
\rowcolor[HTML]{EFEFEF}
\multirow{-3}{*}{\cellcolor[HTML]{EFEFEF}\textbf{Attending class}} & app\_tools & 476 & app\_not-found & 251 & screen\_max\_episode & 70 & screen\_max\_episode & 37 & screen\_time\_total & 87
\\
& steps\_detected & 271 & wifi\_mean\_rssi & 1049 & screen\_time\_per\_episode & 143 & proximity\_mean & 305 & activity\_invehicle & 441
\\
& screen\_time\_per\_episode & 210 & wifi\_max\_rssi & 848 & proximity\_mean & 129 & proximity\_max & 304 & wifi\_num\_of\_devices & 226
\\
\multirow{-3}{*}{\textbf{Working}} & screen\_num\_of\_episodes & 206 & wifi\_min\_rssi & 633 & screen\_max\_episode & 124 & proximity\_std & 292 & activity\_walking & 163
\\
\rowcolor[HTML]{EFEFEF}
\cellcolor[HTML]{EFEFEF} & day\_period & 337 & day\_period & 191 & app\_medical & 374 & notifications\_posted & 22 & app\_photography & 145
\\
\rowcolor[HTML]{EFEFEF}
\cellcolor[HTML]{EFEFEF} & app\_tools & 117 & screen\_time\_total & 89 & app\_arcade & 72 & app\_not-found & 16 & app\_trivia & 64
\\
\rowcolor[HTML]{EFEFEF}
\multirow{-3}{*}{\cellcolor[HTML]{EFEFEF}\textbf{Resting}} & app\_educational & 66 & screen\_max\_episode & 75 & day\_period & 55 & touch\_events & 14 & app\_maps \& navigation & 23
\\
& app\_books \& reference & 955 & app\_not-found & 167 & app\_not-found & 215 & cellular\_lte\_min & 252 & app\_adventure & 21
\\
& app\_comics & 93 & touch\_events & 122 & wifi\_std\_rssi & 109 & app\_tools & 83 & app\_comics & 16
\\
\multirow{-3}{*}{\textbf{Reading}} & app\_news \& magazines & 93 & day\_period & 121 & wifi\_max\_rssi & 77 & location\_altitude & 76 & location\_altitude & 6
\\
\rowcolor[HTML]{EFEFEF}
\cellcolor[HTML]{EFEFEF} & activity\_onfoot & 3518 & activity\_onfoot & 1582 & steps\_detected & 376 & steps\_detected & 285 & activity\_walking & 25
\\
\rowcolor[HTML]{EFEFEF}
\cellcolor[HTML]{EFEFEF} & activity\_walking & 3497 & activity\_walking & 1579 & steps\_counter & 314 & activity\_walking & 101 & activity\_onfoot & 25
\\
\rowcolor[HTML]{EFEFEF}
\multirow{-3}{*}{\cellcolor[HTML]{EFEFEF}\textbf{Walking}} & steps\_detected & 3374 & steps\_detected & 1009 & activity\_walking & 232 & activity\_onfoot & 101 & location\_radius\_of\_gyration & 23 \\
& app\_health \& fitness & 502 & day\_period & 33 & app\_health \& fitness & 931 & app\_health \& fitness & 1248 & wifi\_max\_rssi & 50
\\
& day\_period & 233 & wifi\_num\_of\_devices & 32 & proximity\_min & 52 & noti\_removed & 72 & proximity\_std & 41
\\
\multirow{-3}{*}{\textbf{Sport}} & notifications\_posted & 132 & wifi\_min\_rssi & 23 & day\_period & 40 & day\_period & 34 & wifi\_mean\_rssi & 41
\\
\rowcolor[HTML]{EFEFEF}
\cellcolor[HTML]{EFEFEF} & steps\_detected & 283 & activity\_onfoot & 1270 & day\_period & 74 & activity\_walking & 132 & app\_weather & 86
\\
\rowcolor[HTML]{EFEFEF}
\cellcolor[HTML]{EFEFEF} & activity\_onfoot & 267 & activity\_walking & 1269 & user\_presence\_time & 41 & activity\_onfoot & 131 & app\_auto \& vehicles & 84
\\
\rowcolor[HTML]{EFEFEF}
\multirow{-3}{*}{\cellcolor[HTML]{EFEFEF}\textbf{Shopping}} & activity\_walking & 265 & steps\_detected & 504 & screen\_num\_of\_episodes & 38 & steps\_detected & 55 & activity\_walking & 79
\\
\hline
\end{tabular}%
}
\end{table*}
The resulting features across countries for the same activity are different in most cases, highlighting the dataset's diversity and each country's cultural differences or habits. For example, when studying, features regarding screen episodes dominate in the UK, Italy, and Denmark, while the day period appears in Italy, Mongolia, and Paraguay. This could mean that European students tend to use their phones when studying more (or less) than students from Paraguay or Mongolia. This divide is also visible when ``watching something'', which is influenced by the use of entertainment applications in Europe, but not in Paraguay or Mongolia. This effect could be due to the unpopularity of streaming services classified as entertainment applications in the latter two countries, where participants might rely on alternatives. In fact, differences in using streaming services across countries have been studied in prior work, highlighting differences in usage percentages \cite{lotz2021between} and the relations to income level \cite{nhan2020comparison}. On the other hand, it could also be that students watch something on a medium that is not their smartphone. In fact, research shows that young adults aged 18-29 use more online media streaming services as compared to television in the USA \cite{PewResearch2017About}. However, whether similar percentages hold across different countries with contrasting cultures, income levels, and internet quality remains a question. While not conclusive, these could be the reason for entertainment apps not being indicative of ``watching something'' in Mongolia and Paraguay, which are the non-European countries in this study.
For some activities, the top three features are inherent to the nature of the activity. For example, ``reading'' in Italy has features corresponding to reading applications such as books, comics, newspapers, and magazines. Other countries do not show this. The same observation can be made for the ``sports'' activity: health and fitness apps are one of the determining features in European countries. This effect could correspond to participants tracking their workouts using a smartphone app.
The ``walking'' activity has almost the same features in all five countries: steps detected and an on-foot or walking activity detected by the Google Activity Recognition API. This homogeneity is due to the nature of the activity---walking is considered a simple activity. This is also why shopping has some of the same features as walking since participants also walk when they shop. To summarize, in most cases, each country has different defining features when looking at the same activity. For some activities, the features found are inherent to the activity and are usually app categories. Finally, it is worth mentioning that the period of the day is an important feature, which matches what has been observed in Figure \ref{fig:activities-per-hour} --- all activities do not occur at the same frequency throughout the day.
Finally, it is worth noting that we could expect some of the highly informative features to change over time, as technology use and people's habits change in different countries \cite{xu2022globem, adler2022machine}. For example, a reason for the lack of use of streaming services in certain countries is the lack of laws surrounding the usage of illegally downloaded content (e.g., Germany has strict laws about not using illegal downloads \cite{rump2011kind}). Changes in the laws of countries could change the behavior of young adults. Further, internet prices could also affect the use of streaming services. While affordable, high-bandwidth internet is common in developed countries, this is not the case in developing nations in Asia, Africa, and South America, where internet usage is expensive, which discourages streaming. In addition, income levels could also strongly influence the captured features. For example, with increasing income levels (as usually happens when a country's GDP grows), young adults may use more wearables for fitness tracking, leading to the usage of health and fitness apps on mobile phones. Another aspect that could affect the captured behaviors is weather. All five countries in this study go through different seasons, as all are somewhat far from the equator. Hence, we could expect changes in features across seasons. More about this is discussed in the limitations section.
\section{Machine Learning-based Inference: Experimental Setup, Models, and Performance Measures}\label{sec:inference}
This study aims to perform multi-class inference from smartphone sensing data to predict what participants are doing at a particular time. The input space consists of the features in the tabular dataset previously described. We study three approaches to the problem, as summarized in Figure \ref{fig:overview}, going from country-specific to multi-country.
\begin{figure}
\includegraphics[width=0.5\textwidth]{images/missing_data.png}
\caption{Proportion of missing data per sensor type.}
\Description{Proportion of missing data per sensor type is shown as a bar chart. More than 90\% of GSM cellular sensor values were unavailable, possibly due to devices being put in airplane mode, sensor failure, or the phone mostly operating with LTE signals. For other modalities, data were available more than 70\% of the time.}
\label{fig:missing-data}
\end{figure}
\subsection{Data Imputation} The first step in preparing the dataset for inference was data imputation. Missing data in the context of smartphone sensing can occur for multiple reasons \cite{servia-rodriguez_mobile_2017, bae_detecting_2017, meegahapola2022sensing}: the device being on low-consumption mode, the failure of a sensor, or insufficient permissions from the participants. In the dataset we used, we noticed that most sensors have some missing values (see Figure \ref{fig:missing-data}). For example, more than 90\% of GSM cellular sensor values were unavailable, possibly due to devices being put in airplane mode, sensor failure, or the phone mostly operating with LTE signals. To deal with missing values, we decided to drop features from sensors that were missing more than 70\% of their data (refer to the dotted line on Figure \ref{fig:missing-data}) similar to prior work \cite{santani_drinksense_2018}. For the remaining features, and each country individually, we used k-Nearest Neighbour (kNN) imputation \cite{zhang2012nearest} to infer missing information from neighboring samples \footnote{We also tried mean imputation, user-based mean imputation, most frequent value imputation, last observation carried forward (LOCF) imputation, in addition to kNN. However, we obtained the best results for inferences with kNN. In addition, using kNN is common in studies that used passive sensing \cite{zhang2022predicting, rashid2020predicting, zhou2018missing, xu2021understanding}. Hence, we only reported results obtained with kNN.}.
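A minimal sketch of this step is given below, assuming a hypothetical per-country feature DataFrame \texttt{X}; the 70\% threshold follows the text, while the number of neighbours is an illustrative value not stated here.
\begin{verbatim}
import pandas as pd
from sklearn.impute import KNNImputer

def impute_country(X, max_missing=0.70, k=5):
    keep = X.columns[X.isna().mean() <= max_missing]  # drop sparse features
    imputer = KNNImputer(n_neighbors=k)                # k is an assumed value
    values = imputer.fit_transform(X[keep])
    return pd.DataFrame(values, columns=keep, index=X.index)
\end{verbatim}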
\subsection{Models and Performance Measures} To conduct all experiments, we used the scikit-learn \cite{scikit-learn} and Keras \cite{chollet2015keras} frameworks with Python. We first trained the following two baseline models: one that always predicts the most frequent label and another that randomly predicts targets by considering the class distribution. This allows us to check whether the trained models perform better than a randomized guess. The experiments were carried out with the following model types: Random Forest Classifier \cite{breiman_random_2001} (RF), AdaBoost with Decision Tree Classifier \cite{hastie_multi-class_2009}, and Multi-Layer Perceptron neural networks (MLP) \cite{wang2003artificial} \footnote{We initially tried out other model types such as Gradient Boosting and XGBoost in addition to the reported models. Results for these models are not reported considering their performance and page limits. All these model types are commonly used with small mobile sensing datasets that are in tabular format \cite{merrill2022self, biel_bitesnbits_2018, meegahapola_one_2021}}. The first two can inherently cope with class imbalance, and RFs also facilitate the interpretability of results. Each experiment was carried out ten times to account for the effect of randomness. For each experimental setup, we report the mean and standard deviation across the ten runs for the following metrics: F1 score \cite{F1score2022} and the area under the Receiver Operating Characteristic curve (AUROC) \cite{AUROC2022}. Although we calculated model accuracies, and accuracy is easy to interpret, it might not present a realistic picture in an imbalanced data setting; hence, we did not include it in the results. The weighted macro F1 score computes the metric for each class and averages the values according to their support, resulting in a metric that accounts for label imbalance. Moreover, it takes a significant hit if one of the classes has a lot of false positives. A low F1 score could imply that the classifier has difficulty with rare target classes. The AUROC score measures how well the model can distinguish each activity from the rest, summarizing performance across different classification thresholds. We also used a weighted macro version to account for label imbalance.
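The evaluation loop can be sketched as follows with scikit-learn, assuming hypothetical \texttt{X\_train}, \texttt{X\_test}, \texttt{y\_train}, and \texttt{y\_test} arrays; the hyperparameters are illustrative rather than the tuned values used in the study, and scikit-learn's MLP is used here as a stand-in for the Keras network.
\begin{verbatim}
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score, roc_auc_score

models = {
    "baseline_majority": DummyClassifier(strategy="most_frequent"),
    "baseline_random": DummyClassifier(strategy="stratified"),
    "rf": RandomForestClassifier(n_estimators=200),
    "adaboost_dt": AdaBoostClassifier(),  # decision-tree base learner by default
    "mlp": MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    f1 = f1_score(y_test, model.predict(X_test), average="weighted")
    auc = roc_auc_score(y_test, model.predict_proba(X_test),
                        multi_class="ovr", average="weighted",
                        labels=model.classes_)
    print(name, round(f1, 2), round(auc, 2))
\end{verbatim}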
Next, we examine results for the country-specific, country-agnostic, and multi-country approaches \cite{khwaja_modeling_2019}. Finally, for all three approaches, we examine population-level and hybrid models, which correspond to no and partial personalization, respectively, similar to \cite{meegahapola2023generalization, meegahapola2022sensing, likamwa_moodscope_2013}; training and testing splits were always done with a 70:30 ratio, and a minimal code sketch of the two splitting strategies is given after the list below:
\begin{figure}
\includegraphics[width=0.5\textwidth]{images/population-hybrid.png}
\caption{Personalization levels used in the country-specific, country-agnostic, and multi-country approaches. Population-level corresponds to models with no personalization. Hybrid corresponds to models with partial personalization.}
\label{fig:population-hybrid}
\Description{Personalization levels used in the country-specific, country-agnostic, and multi-country approaches. Population-level corresponds to models with no personalization. Hybrid corresponds to models with partial personalization.}
\end{figure}
\begin{itemize}[wide, labelwidth=!, labelindent=0pt]
\item \textbf{Population-Level} model, also known as leave-k-participants-out in country-specific and multi-country approaches, and leave-k-countries-out in country-agnostic approach: the set of participants present in the training set ($\approx$70\%) and the testing set ($\approx$30\%) are disjoint. The splitting was done in a stratified manner, meaning each split was made by preserving the percentage of samples for each class. This represents the case where the model was trained on a subset of the population, and a new set of participants joined a system that runs the model and started using it.
\begin{itemize}
\item In the country-specific approach, this means that data from disjoint participants are in training and testing splits, and everyone is from the same country. E.g., trained with a set of participants in Italy and tested with another set of participants in Italy who were not in the training set.
\item In the country-agnostic approach, this means the training set is from one (Phase I) or four (Phase II) countries, and the testing set is from a country not seen in training. E.g., Phase I --- trained with a set of participants in Italy and tested with a set of participants in Mongolia; Phase II --- trained with a set of participants in Italy, Denmark, the UK, and Mongolia, and tested with a set of participants in Paraguay.
\item In the multi-country approach, this means a disjoint set of participants in training and testing without considering country information. This is the typical way of training models even when data are collected from multiple countries \cite{servia-rodriguez_mobile_2017}. E.g., trained with a set of participants from all five countries and tested with a set of participants in all five countries who were not in the training set.
\end{itemize}
\item \textbf{Hybrid} model, also known as leave-k-samples-out: the sets of participants in the training and testing splits are not disjoint. For some participants present in the testing set, part of their data ($\approx$70\%) was used in training the models; testing was done with the rest of their data ($\approx$30\%). This represents the case where the model was trained on the population, and the same participants whose data were used in training continue to use the model. Hence, models are partially personalized.
\begin{itemize}
\item In the country-specific setting, this means that some data from participants in the testing set of a country can also appear in the training set. This represents a scenario where personalization is examined within the country. E.g., trained with a set of participants in Italy and tested with another set of participants in Italy, whose data were partially (70\%) included in the training set; the remaining data (30\%) were used for testing.
\item In the country-agnostic setting, this means the training set is from one or more countries, and the testing set is from another country, where a percentage of the test participants' past data (70\%) was also included in training. This represents a scenario where personalization is examined when deploying to a new country. E.g., Phase I --- trained with a set of participants in Italy and tested with a set of participants in Mongolia, whose data were partially (70\%) included in the training set; the remaining data (30\%) were used for testing. Phase II --- trained with a set of participants in Italy, Denmark, the UK, and Mongolia, and tested with a set of participants in Paraguay, whose data were partially (70\%) included in the training set; the remaining data (30\%) were used for testing.
\item In the multi-country setting, this means that training and testing participants are not disjoint, and country information is not considered. This is the typical way of partially personalizing models even when data are collected from multiple countries. E.g., trained with a set of participants from all five countries and tested with a set of participants from all five countries, whose data were partially (70\%) included in the training set; the remaining data (30\%) were used for testing.
\end{itemize}
\end{itemize}
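The two personalization levels above can be realized with standard scikit-learn utilities; the sketch below is a simplified illustration, assuming a dataframe with hypothetical \texttt{participant\_id} and \texttt{label} columns (the stratification over classes used in our population-level splits is omitted here for brevity).
\begin{verbatim}
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit, train_test_split

def population_level_split(df, test_size=0.3, seed=0):
    """Leave-k-participants-out: participants in train and test are disjoint."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size,
                                 random_state=seed)
    train_idx, test_idx = next(splitter.split(df, groups=df["participant_id"]))
    return df.iloc[train_idx], df.iloc[test_idx]

def hybrid_split(df, test_size=0.3, seed=0):
    """Leave-k-samples-out: ~70% of each participant's data goes to training."""
    train_parts, test_parts = [], []
    for _, part in df.groupby("participant_id"):
        tr, te = train_test_split(part, test_size=test_size, random_state=seed)
        train_parts.append(tr)
        test_parts.append(te)
    return pd.concat(train_parts), pd.concat(test_parts)
\end{verbatim}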
\section{Inference Results}\label{sec:results}
In this section, we present the results of the experiments. First, we discuss results from the country-specific and multi-country approaches, shedding light on \textbf{RQ2}. Then, we discuss the country-agnostic approach, providing answers to \textbf{RQ3} on model generalization.
\subsection{Country-Specific and Multi-Country Approaches (RQ2)}
\paragraph{Country-Specific Approach}
\begin{table*}
\caption{Mean ($\bar{S}$) and Standard Deviation ($S_\sigma$) of inference
F1-scores and AUROC scores computed from ten iterations using
three different models (and two baselines) for each country separately.
Results are presented as $\bar{S}\,(S_\sigma)$, where $S$ is either of the two
metrics.}
\label{tab:specific-results}
\resizebox{\textwidth}{!}{%
\begin{tabular}{
>{\columncolor[HTML]{FFFFFF}}l ll
>{\columncolor[HTML]{FFFFFF}}l
>{\columncolor[HTML]{FFFFFF}}l
>{\columncolor[HTML]{FFFFFF}}l
>{\columncolor[HTML]{FFFFFF}}l
>{\columncolor[HTML]{FFFFFF}}l
>{\columncolor[HTML]{FFFFFF}}l rr}
&
\multicolumn{2}{c}{\cellcolor[HTML]{EDEDED}\textbf{Baseline I}} & \multicolumn{2}{c}{\cellcolor[HTML]{EDEDED}\textbf{Baseline II}} & \multicolumn{2}{c}{\cellcolor[HTML]{EDEDED}\textbf{Random Forest}} &
\multicolumn{2}{c}{\cellcolor[HTML]{EDEDED}\textbf{AdaBoost}} & \multicolumn{2}{c}{\cellcolor[HTML]{EDEDED}\textbf{MLP}}
\\
\multicolumn{11}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Population-Level}} \\
\arrayrulecolor{Gray}
\hline
& \textit{F1} & \textit{AUROC} & \textit{F1} & \textit{AUROC} & \textit{F1} & \textit{AUROC} & \textit{F1} & \textit{AUROC} & \multicolumn{1}{l}{\textit{F1}} & \multicolumn{1}{l}{\textit{AUROC}}
\\
\arrayrulecolor{Gray2}
\hline
Italy &
0.17 (0.000) & 0.50 (0.000) &
0.19 (0.001) & 0.50 (0.001) &
0.41 (0.001) & 0.71 (0.001) &
0.39 (0.000) & 0.71 (0.000) &
0.38 (0.002) & 0.68 (0.002)
\\
Mongolia &
0.26 (0.000) & 0.50 (0.000) &
0.23 (0.001) & 0.50 (0.001) &
0.33 (0.002) & 0.62 (0.001) &
0.33 (0.000) & 0.63 (0.000) &
0.34 (0.003) & 0.61 (0.004)
\\
UK &
0.17 (0.000) & 0.50 (0.000) &
0.18 (0.002) & 0.50 (0.001) &
0.32 (0.004) & 0.63 (0.003) &
0.31 (0.000) & 0.59 (0.000) &
0.22 (0.006) & 0.56 (0.003)
\\
Denmark &
0.25 (0.000) & 0.50 (0.000) &
0.24 (0.006) & 0.49 (0.003) &
0.32 (0.008) & 0.61 (0.006) &
0.34 (0.000) & 0.57 (0.000) &
0.25 (0.008) & 0.57 (0.006)
\\
Paraguay &
0.19 (0.000) & 0.50 (0.000) &
0.19 (0.006) & 0.49 (0.002) &
0.30 (0.004) & 0.59 (0.003) &
0.28 (0.000) & 0.56 (0.000) &
0.31 (0.009) & 0.58 (0.004)
\\
\arrayrulecolor{Gray}
\hline
\multicolumn{11}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Hybrid}} \\
\arrayrulecolor{Gray}
\hline
& \textit{F1} & \textit{AUROC} & \textit{F1} & \textit{AUROC} & \textit{F1} & \textit{AUROC} & \textit{F1} & \textit{AUROC} & \multicolumn{1}{l}{\textit{F1}} & \multicolumn{1}{l}{\textit{AUROC}} \\
\arrayrulecolor{Gray2}
\hline
Italy &
\multicolumn{1}{r}{0.17 (0.000)} &
\multicolumn{1}{r}{0.50 (0.000)} &
\multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.19 (0.001)} & \multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.50 (0.001)} &
\multicolumn{1}{r}{0.63 (0.001)} &
\multicolumn{1}{r}{0.87 (0.001)} &
\multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.40 (0.000)} & \multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.73 (0.000)} &
0.51 (0.002) & 0.81 (0.000)
\\
Mongolia &
\multicolumn{1}{r}{0.26 (0.000)} &
\multicolumn{1}{r}{0.50 (0.000)} &
\multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.23 (0.002)} & \multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.50 (0.001)} &
\multicolumn{1}{r}{0.51 (0.001)} &
\multicolumn{1}{r}{0.79 (0.001)} &
\multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.34 (0.000)} & \multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.66 (0.000)} &
0.45 (0.002) &
0.75 (0.002)
\\
UK &
\multicolumn{1}{r}{0.17 (0.000)} &
\multicolumn{1}{r}{0.50 (0.000)} &
\multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.19 (0.003)} & \multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.50 (0.001)} &
\multicolumn{1}{r}{0.66 (0.001)} &
\multicolumn{1}{r}{0.88 (0.006)} &
\multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.34 (0.000)} & \multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.68 (0.000)} &
0.58 (0.003) &
0.83 (0.002)
\\
Denmark &
\multicolumn{1}{r}{0.25 (0.000)} &
\multicolumn{1}{r}{0.50 (0.000)} &
\multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.24 (0.003)} & \multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.50 (0.002)} &
\multicolumn{1}{r}{0.69 (0.002)} &
\multicolumn{1}{r}{0.89 (0.001)} &
\multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.41 (0.000)} & \multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.66 (0.000)} &
0.67 (0.002) &
0.87 (0.002)
\\
Paraguay &
\multicolumn{1}{r}{0.18 (0.000)} &
\multicolumn{1}{r}{0.50 (0.000)} &
\multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.19 (0.002)} &
\multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.49 (0.003)} &
\multicolumn{1}{r}{0.61 (0.003)} &
\multicolumn{1}{r}{0.84 (0.001)} &
\multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.30 (0.000)} &
\multicolumn{1}{r}{\cellcolor[HTML]{FFFFFF}0.61 (0.000)} &
0.58 (0.002) &
0.79 (0.001)
\\
\hline
\end{tabular}%
}
\end{table*}
We consider this approach to be the base setting that leverages country-level diversity by building separate models---each country has its own model, independent of the others. Table \ref{tab:specific-results} summarizes the results of experiments following the country-specific approach. In the population-level setting, the three models perform roughly similarly, but the RFs are generally better in terms of F1 and AUROC scores. In the case of the hybrid models, RFs performed best across the five countries, with AUROC scores in the range of 0.79-0.89, where the lowest was for Mongolia and the highest was for Denmark. Compared to population-level models, we notice a substantial increase in performance for the hybrid models, showing the effect of personalization within countries. These results suggest that random forest models in a partially personalized setting can recognize complex daily activities from passive sensing data with good performance. Given this conclusion, even though we obtained results for all model types in subsequent sections, we present results only for random forest models.
\paragraph{Multi-Country Approach}
This approach aims at building a generic multi-country (one-size-fits-all) model with the expectation that it would capture the diversity of all countries. All five countries are present in both the training and the testing set. We therefore consider all participants of the dataset, regardless of their country, similar to an experiment where country-level diversity is ignored. Hence, we can examine population-level and hybrid models for the multi-country approach in this context. Further, models were evaluated with two datasets: one with an imbalanced representation of the five countries (multi-country w/o downsampling --- MC w/o DS), and one with a balanced representation obtained by randomly downsampling countries with more data to the size of the country with the fewest self-reports (i.e., Paraguay) (multi-country w/ downsampling --- MC w/ DS). The results are shown in Figure~\ref{fig:multi_country_comparison} in comparison to country-specific results. MC w/o DS had an AUROC of 0.71 while MC w/ DS had an AUROC of 0.68, indicating that training on the original data distribution performed better. The reason could simply be that more data led to better performance. The motivation for training with downsampled data was to give equal emphasis to each country, in the expectation that the model would perform well for all countries; however, the results indicate that this is not the case.
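As an illustration of the balancing step described above, a minimal per-country downsampling sketch is given below; the dataframe and the \texttt{country} column name are assumptions for the example rather than the exact implementation used in our pipeline.
\begin{verbatim}
import pandas as pd

def downsample_per_country(df, seed=0):
    """MC w/ DS: randomly downsample each country to the size of the
    smallest country (Paraguay in our dataset)."""
    min_count = df["country"].value_counts().min()
    return (df.groupby("country", group_keys=False)
              .apply(lambda g: g.sample(n=min_count, random_state=seed)))
\end{verbatim}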
These results shed light on our \textbf{RQ2}: learning a multi-country model for complex activity recognition solely using passive smartphone sensing data is difficult (AUROC: 0.709 with hybrid models). It does not yield better performance than the country-specific approach (AUROCs in the range 0.791-0.894). This may stem from the data's imbalance between countries and classes, or from the context in which the dataset was collected. Another primary reason could be the behavioral differences highlighted in Table~\ref{tab:statistical-analysis}, which make it difficult for a model to learn a representation when the diversity of the data is unknown. Distributional shifts\footnote{\url{https://huyenchip.com/2022/02/07/data-distribution-shifts-and-monitoring.html\#data-shifts}} across datasets from different countries could explain this. When sensor feature and ground truth distributions (we discussed ground truth distributions in Section~\ref{sec:descriptive}) differ across countries, this could lead to an averaging effect, yielding models that perform worse than per-country models. Moreover, it is worth noting that few studies have trained country-specific and multi-country models for performance comparison \cite{phan_mobile_2022}. In one of the only other studies that we found \cite{khwaja_modeling_2019}, personality trait inference performance using smartphone sensor data was better with country-specific models, similar to what we found for complex daily activity inference. Finally, from a human-centered perspective, rather than targeting large samples from a single country, recruiting a reasonable number of participants from diverse countries could help deploy better-performing machine learning models to multiple countries.
\begin{figure*}
\includegraphics[width=\textwidth]{images/mc_comparison.png}
\caption{Mean AUROC score comparison for country-specific and multi-country approaches with population-level and hybrid models. MC: Multi-Country; w/o DS: without downsampling; w/ DS: with downsampling.}
\Description{Diagram shows mean AUROC score comparison for country-specific and multi-country approaches with population-level and hybrid models. MC: Multi-Country; w/o DS: without downsampling; w/ DS: with downsampling. Results show that the country-specific approach provides better results than the multi-country approach, by large margins across both population-level and hybrid models.}
\label{fig:multi_country_comparison}
\end{figure*}
\subsection{Generalization Issues with Country-Agnostic Approach (RQ3)}
We examined this research question in two phases, as detailed in Table~\ref{tab:keywords_explaination}. In the first phase, to evaluate the extent to which country-specific models generalize to new countries, we tested models trained with a single country's data on each of the other four countries separately. In the second phase, to evaluate the extent to which a model trained with four countries generalizes to the remaining country, we trained with different combinations of four countries and tested on the remaining one.
\begin{figure*}
\includegraphics[width=\textwidth]{images/agnostic_pl_results_v2.png}
\caption{Mean AUROC scores obtained in the country-agnostic approach with population-level models.}
\label{fig:agnostic_plm}
\Description{Mean AUROC scores obtained in the country-agnostic approach with population-level models.}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{images/agnostic_hl_results_v2.png}
\caption{Mean AUROC scores obtained in the country-agnostic approach with hybrid models.}
\label{fig:agnostic_hlm}
\Description{Mean AUROC scores obtained in the country-agnostic approach with hybrid models.}
\end{figure*}
\begin{itemize}[wide, labelwidth=!, labelindent=0pt]
\item \textbf{Phase I:} Figure~\ref{fig:agnostic_plm} summarizes results for population-level models and Figure~\ref{fig:agnostic_hlm} summarizes results for hybrid models. To allow easy comparison, in both figures, the performance reported when a model is trained and tested on the same country is the corresponding result from Table~\ref{tab:specific-results}. For instance, at the population level, Italy had an AUROC of 0.71 according to Table~\ref{tab:specific-results}, and this value is marked in Figure~\ref{fig:agnostic_plm} where both the training and testing country is Italy. Population-level results suggest that the country-agnostic approach tends to perform better in countries geographically close to the country where the model was originally trained. For example, the Italy model had an AUROC of 0.71 for the Italian population in a population-level setting and performed better in Denmark (AUROC: 0.69) and the UK (AUROC: 0.67) than it did in Mongolia (AUROC: 0.62) or Paraguay (AUROC: 0.62). Similar results can also be observed for hybrid models, where the Italian model performed better in Denmark and the UK. This observation suggests that college students from countries within the same geographic region (Europe) could have behaviors that translate into similar smartphone usage and contexts while doing similar activities. This is consistent with the observations made in the descriptive analysis above, where the countries that deviate from the general trends are usually those outside Europe. In summary, even when using the same experimental protocol for collecting mobile sensing data, we could still observe a distributional shift, reflected in the performance of models across geographically distant countries.
\item \textbf{Phase II: } The second phase looked into extending the work done in phase I. Instead of testing a country-specific model in a new country, we were interested in testing a model already exposed to diverse data (e.g., from four countries) in a new country. We present results for random forest models (because they performed the best across experiments) where the training set consisted of data from four countries, and the testing set had data from the fifth. As suggested in prior studies \cite{khwaja_modeling_2019}, each country contributed equally to the training set in terms of data volume, which means we had to downsample the data from each country to a common count (which was equal to the minimum number of data points available from one country). Table \ref{tab:country-agnostic} presents the results for experiments of the second phase. Similar to previous cases, we observed an increase in performance from population-level to hybrid models. More generally, and by looking at the F1 and AUROC scores, the performance of the hybrid models in the country-agnostic approach is lower than that of the same model in the country-specific approach. This is somewhat expected since including data from other distributions (i.e., other countries) in the training set increases the data's variance and makes it more difficult to represent all distributions accurately. This drop in performance could also be due to the downsampling. For instance, in a model where we train with four countries, including Italy and Paraguay, Italy represents the largest portion of the dataset compared to Paraguay, which is the smallest. When reducing the number of samples in each country to that of Paraguay, a lot of information is lost in the other countries: the larger the original dataset is, the larger the loss gets. This could explain the low performance of country-agnostic models in Italy and Mongolia, especially in the hybrid setting.
\end{itemize}
In addition, when comparing different modeling approaches, the results with Multi-Country w/o Downsampling are similar to those found in Phase II (hybrid) of the country-agnostic approach, which was expected since the training sets are similar. However, the increase in performance when going from population-level to hybrid is less noticeable here compared to previous cases. Furthermore, MC w/ DS performs worse than the previous approach, with an AUROC of 0.68 compared to 0.71. This could be because much data from many countries is lost due to downsampling, reducing the models' representational ability. To summarize, a hybrid model in the country-agnostic approach cannot predict complex activities better than its country-specific counterpart. Furthermore, while more data often means better performance, this does not apply when the data follow different distributions, one per country in this case. This suggests that each country has specific characteristics that make learning a single representation difficult.
These results shed light on our \textbf{RQ3}: complex activity recognition models trained in specific countries often generalize reasonably to other countries (especially with hybrid models). However, the performance is not comparable to the country-specific approach, suggesting that there is still a distributional shift between countries. In fact, in Section~\ref{sec:descriptive}, we discussed how the labels used in the inference (i.e., the complex daily activities shown in Figure~\ref{fig:activities-per-hour}, such as resting, studying, reading, etc.) had different distributions across the five countries. Further, the extent of the generalization often depended on whether countries are geographically close (i.e., within Europe) or not. This result is in line with findings from previous studies \cite{phan_mobile_2022, khwaja_modeling_2019} that highlighted the effect of geographic dimensions (i.e., the country of data collection) on mobile sensing model performance. For example, \cite{khwaja_modeling_2019} found that country-specific models that used mobile sensing data as input could perform well for the inference of three personality traits (Extraversion, Agreeableness, and Conscientiousness). Furthermore, we would also like to highlight that the issue of distributional shifts and generalization is an open problem in multimodal mobile sensing, as highlighted by two recent studies that examined similar datasets collected from the same country in different time periods~\cite{xu2022globem, adler2022machine}. This is possibly due to behavioral changes over time leading to different distributions in sensor data and ground truth. Our results go beyond this and show that even if data are collected within the same time period and with the same protocol, distributional shifts can still occur due to country differences.
\begin{table}
\caption{Mean ($\bar{S}$) and Standard Deviation ($S_\sigma$) of
F1-scores and AUROC scores obtained by testing each Country-Agnostic model (trained in four countries) on data from a new country. Results are presented as $\bar{S} (S_\sigma)$,
where $S$ is either of the two metrics.}
\label{tab:country-agnostic}
\resizebox{0.5\textwidth}{!}{%
\begin{tabular}{
>{\columncolor[HTML]{FFFFFF}}l l
>{\columncolor[HTML]{FFFFFF}}l
>{\columncolor[HTML]{FFFFFF}}l
>{\columncolor[HTML]{FFFFFF}}l }
& \multicolumn{2}{c}{\cellcolor[HTML]{EDEDED}\textbf{Population-Level}} & \multicolumn{2}{c}{\cellcolor[HTML]{EDEDED}\textbf{Hybrid}}
\\
\textit{Test Country} &
\textit{F1} &
\textit{AUROC} &
\textit{F1} &
\textit{AUROC}
\\
\arrayrulecolor{Gray2}
\hline
Italy &
0.33 (0.005) & 0.65 (0.006) &
0.37 (0.004) & 0.71 (0.002)
\\
Mongolia &
0.30 (0.011) & 0.60 (0.004) &
0.37 (0.006) & 0.67 (0.003)
\\
UK &
0.29 (0.004) & 0.63 (0.005) &
0.47 (0.004) & 0.78 (0.002)
\\
Denmark &
0.38 (0.006) & 0.65 (0.006) &
0.63 (0.008) & 0.86 (0.004)
\\
Paraguay &
0.28 (0.005) & 0.59 (0.006) &
0.55 (0.006) & 0.80 (0.008)
\\
\arrayrulecolor{Gray2}
\hline
\end{tabular}}
\end{table}
\begin{figure*}
\includegraphics[width=0.8\textwidth]{images/importances_v2.png}
\caption{Feature importance of each feature category for hybrid country-specific and multi-country models.}
\Description{Feature importance of each feature category for hybrid country-specific and multi-country models. These values were obtained using random forest classifiers.}
\label{fig:feat-importances}
\end{figure*}
\subsection{Feature Importance for Complex Daily Activity Recognition}\label{sec:features}
The random forest models trained in our experiments inherently provide the Gini importance of the features seen during training \cite{breiman_classification_1998}. In Figure \ref{fig:feat-importances}, each set of box plots represents the distribution of feature importances for a given modality (as defined in Table \ref{tab:agg-features}) for hybrid models under the country-specific approach and the multi-country approach (MC w/o DS). A first look shows that the multi-country distribution deviates from the country-specific distributions for all sensing modalities. For example, one cellular feature in the model from Denmark is more important than the cellular features of the other models. The temporal, WiFi, and notification features are more important in Paraguay than in other countries. App events are mostly unimportant, except for a few outliers across all countries. This is reasonable given that, out of the long list of app types used for the analysis, participants frequently used only a few types (e.g., entertainment, social, educational, health and fitness, etc.). Our analysis showed that the outliers here are, in fact, the app types used most by participants. Looking at the top whiskers of each set of box plots, the most predictive features overall belong to the following modalities: time \& day, WiFi, app usage, simple activity type, and location.
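The per-modality importance distributions shown in Figure~\ref{fig:feat-importances} can be obtained directly from the trained forests; a simplified sketch follows, where the mapping from feature names to modalities and the variable names are assumptions for illustration.
\begin{verbatim}
import pandas as pd

def importances_by_modality(rf, feature_names, feature_to_modality):
    """rf: fitted sklearn RandomForestClassifier; feature_to_modality: dict
    mapping each feature name to its modality (e.g., 'wifi', 'app_events')."""
    imp = pd.Series(rf.feature_importances_, index=feature_names)  # Gini importances
    return imp.groupby(feature_to_modality).apply(list)  # one distribution per modality
\end{verbatim}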
\section{Discussion}\label{sec:discussion}
\subsection{Summary of Results}
We examined a multi-country smartphone sensing dataset to develop inference models of complex daily activities. Our primary goal was to examine whether reasonably performing complex daily activity recognition models could be trained using multimodal sensor data from smartphones. Our further goal was to identify differences among countries visible through smartphone usage and to leverage these differences to decide whether it makes sense to build country-specific or generic multi-country models, and whether models generalize well. We believe these findings are important when designing and deploying sensing and ML-based apps and systems in geographically diverse settings. The main findings for the three research questions can be summarized as follows:
\begin{itemize}[wide, labelwidth=!, labelindent=0pt]
\item \textbf{RQ1}: Different features can characterize an activity in each country. Their distributions throughout the day also vary between countries and appear to be affected by country-level contextual factors. This finding points towards biases that could be propagated if proper care is not taken during the design and data collection phases of studies involving people and smartphones. In Section \ref{subsubsec:biases}, we discuss this in more detail under a set of biases: construct bias \cite{he2012bias}, sample bias \cite{meegahapola2020smartphone}, device-type bias \cite{blunck2013heterogeneity}, and bias from user practices \cite{van2004bias}.
\item \textbf{RQ2}: It is feasible to train models with the country-specific approach to infer 12 complex activities from smartphone data. Furthermore, personalization within countries increases performance (AUROCs of the range 0.79-0.89). Hence, the country-specific approach outperforms the multi-country approach, which only yields an AUROC of 0.71 with hybrid models. However, building multi-country models solely from sensing features is a non-trivial task that might require more effort with regard to data balance and feature selection. Our results also show that the sedentary lifestyles of the pandemic world can be captured with country-specific partially personalized machine learning models. In addition, we also show that multimodal smartphone sensors could be used to recognize complex daily activities that go beyond binary inferences to 12-class inferences. In Section \ref{subsubsec:inthewild}, we discuss why real-life studies are important to capture complex emerging lifestyles; in Section \ref{subsubsec:contextaware}, we also discuss how complex daily activities could be useful to design novel context-aware applications.
\item \textbf{RQ3}: Under the country-agnostic approach, we found that models generalize reasonably to new countries. However, unsurprisingly, the performance is not as high as when the model was tested in the same country where it was trained. Interestingly, models trained in European countries performed better in other European countries than in Paraguay or Mongolia. This issue broadly falls under the topic of domain shifts, which remains under-explored in mobile sensing literature. We elaborate more on this in Section~\ref{subsubsec:domainshift}.
\end{itemize}
\subsection{Implications}\label{subsec:implications}
Our work has implications aligned to both theoretical and practical aspects.
\subsubsection{Accounting for Country Biases in Study Design (RQ1).}\label{subsubsec:biases} Studies using sensing data drawn from geographically diverse samples (i.e., different countries) should account for and understand the \emph{sources of biases} that can occur at different stages of the study. Our study, and also previous studies on human behavior, sociology, and psychology, allow an understanding of these aspects in detail. For example, the following taxonomy can be used to characterize such biases \cite{phan_mobile_2022}. \textit{(i) Construct bias} occurs when the target is expressed differently across countries, depending on countries’ norms or environmental factors \cite{he2012bias}. For example, in a country where physical exercise is not widespread, the ``walking'' activity could be labeled simply as ``walking'', whereas in a country where many people walk for fitness, it could be labeled as ``sport'' as well. Hence, some behaviors can be specific to a particular environment or group of people. \textit{(ii) Sample bias} concerns the comparability of diverse samples that can be impacted by the recruitment process in each country \cite{meegahapola2020smartphone}. For example, if the samples in each country differ in age or gender, sensing data would likely not have similar distributions across countries. \textit{(iii) Device-type bias} is due to the differences in the devices used by participants across countries and in environmental factors affecting sensor measurements \cite{blunck2013heterogeneity}. Devices worldwide are not equipped with the same software and hardware, and similar sensors can differ in accuracy and precision (e.g., Apple devices are more prominent in developed countries, whereas Android phones are common in others). Finally, the \textit{(iv) bias from user practices} arises when participants from different countries use their mobile phones differently \cite{van2004bias}. Examples abound: how a phone is physically carried could distort measurements; how sensors are disabled to save battery or mobile data (especially in countries where unlimited mobile data plans are not standard) also changes what is measured; and different motivations to use certain apps in different countries also change the resulting logs \cite{lim2014investigating}. Phan et al. \cite{phan_mobile_2022} have proposed a set of mitigation strategies that aim to reduce biases and foster fair approaches to diversity-aware research. To achieve these objectives, the authors recommend taking several steps during both the planning and implementation phases of the study. During the planning phase, researchers are advised to acquire knowledge about potential cross-country differences and relevant environmental factors, with the assistance of local informants. Furthermore, researchers should ensure that their study targets are comparable across countries and that they exist in each country being studied. In the implementation phase, the authors suggest inclusive recruitment strategies that aim to make each sampled country representative of a given target. These recommended strategies are important in promoting diversity-aware research and mitigating the potential for biases that can skew results.
\subsubsection{Activities Captured in Real-Life Studies (RQ2).}\label{subsubsec:inthewild} In terms of theoretical implications, it is worth highlighting that the set of activities we considered are complex behaviors that typically cannot be captured during in-lab studies. Fine-grained sensing-based activity recognition studies help increase performance on simple activities (e.g., walking, running, sitting, climbing stairs, etc., which have a repetitive nature in sensor data) that can be captured in in-lab settings. In contrast, building sensing-based ML models to capture complex daily behaviors requires conducting real-life studies. Activities like studying, attending classes, or shopping are hard to replicate in lab settings. Further, while simple activities might not have led to differences in model performance across countries, complex daily activities tightly bound to cultural, country-level, or geographic norms lead to differences in behaviors, and therefore to differences in the sensed data. In this context, prior work in the domain has not focused on this aspect enough, in our view. We believe that studies must capture data from diverse communities to build models that work for all intended users. While this is a challenging task, it is much needed for the field to mature towards more real-life use cases. Our study is one of the first in this direction.
\subsubsection{Novel Applications of Context-Aware Mobile Apps (RQ2).}\label{subsubsec:contextaware} In terms of practical implications, our findings point towards adding richer context awareness to mobile applications. Current mobile applications provide context-aware services, interventions, and notifications based on location and simple activity sensing \cite{mehrotra2017intelligent, meegahapola2020smartphone}. However, a range of potential applications that go beyond the current offering could become feasible with complex daily activity recognition. Previously, a smartphone would only know that a user was sitting in a particular place. With complex activity recognition, it would know that the user is studying, attending a lecture, reading, or eating, all of which entail sitting. For example, if a student is reading, studying, or attending a lecture, automatically putting the phone in silent or do-not-disturb mode might make sense, since in many cases people forget to do so manually. In summary, complex daily activity recognition could offer diverse use cases around which to build mobile applications in the future.
\subsubsection{Domain Adaptation for Multimodal Mobile Sensing (RQ3).}\label{subsubsec:domainshift} Another theoretical implication can be described from a machine learning perspective. We discussed the challenges of generalization and domain shifts in our smartphone sensor dataset, and described how such shifts affect model performance, specifically for complex daily activity recognition with multimodal sensors. Although biases, distributional shifts, and model generalization have been widely studied in other domains such as natural language processing \cite{elsahar2019annotate}, speech \cite{sun2017unsupervised}, and computer vision \cite{luo2019taking}, these issues have yet to receive sufficient attention in smartphone sensing studies \cite{gong2021dapper}. We demonstrated that model personalization (the hybrid setting) could reduce the impact of distributional shifts to a certain extent. In transfer learning terms, this approach is similar to fine-tuning an already trained model for a specific user to achieve model personalization \cite{chen2020fedhealth}. Such personalization strategies have been used in prior work \cite{meegahapola_one_2021}. However, recent research in domain adaptation has shown limitations in mobile sensing, particularly with regard to time series data \cite{wilson2022domain}. The diversity of wearable device positioning poses a persistent issue in human activity recognition, which affects the performance of recognition models \cite{chang2020systematic, mathur2019unsupervised}. Wilson et al. \cite{wilson2022domain} conducted a study of domain adaptation on datasets captured from individuals of different age groups, yet the findings are limited to simpler time series accelerometer data. Other works admit that the current lack of solutions for domain adaptation and generalization with smartphone and wearable data presents an opportunity for future exploration \cite{adler2022machine, xu2022globem}. We have added to the literature by confirming that domain adaptation techniques are necessary for multi-country, multimodal smartphone sensor data. In addition, even on a fundamental level, approaches that allow quantifying cross-dataset distributional differences separately for multimodal sensing features and target labels (e.g., activity, mood, social context, etc.) are lacking in the domain. Research on such aspects could allow us to better understand distributional shifts in sensor data and to better counter them with domain adaptation techniques in multimodal settings.
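As a minimal illustration of quantifying such cross-country distributional differences for a single sensing feature, per-country samples can be compared with a two-sample statistic; the sketch below (with an assumed dataframe layout) is only one of several possible choices and is not the method used in this paper.
\begin{verbatim}
from scipy.stats import ks_2samp

def feature_shift(df, feature, country_a, country_b):
    """Two-sample Kolmogorov-Smirnov statistic between two countries
    for one sensing feature (larger statistic = larger shift)."""
    a = df.loc[df["country"] == country_a, feature].dropna()
    b = df.loc[df["country"] == country_b, feature].dropna()
    return ks_2samp(a, b)
\end{verbatim}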
\subsection{Limitations}
While the dataset covers five different countries from three
continents, students’ behavior in other countries and
continents could differ from what we have already
encountered. In addition, even though we found geographically closer countries performing well in Europe, such findings need to be confirmed for other regions where geographically closer countries could have contrasting behaviors and norms (e.g., India and China). Furthermore, the weather conditions in different countries during the time period of data collection could be slightly different. All five countries mentioned in this study go through different seasons, as all are somewhat far from the equator. Hence, we could expect changes in features in different seasons. However, in practical terms, collecting data in similar weather conditions is not feasible.
When aggregating sensor data around self-reports, the data corresponding to the moment the participant was filling out the self-report are considered part of the activity he/she was doing at the time. This noise could alter the recognition task if the window size is small. However, even though this could affect results if we intended to increase model performance in a fine-grained sensing task, we do not believe this noise significantly affects our findings on diversity awareness. In addition, it is worth noting that modeling our approach with a tabular dataset is similar to prior ubicomp/mobile sensing studies done in real life \cite{meegahapola2020smartphone}, because we do not have continuous ground truth labels. This restrains us from modeling the task as a time-series problem, which is the approach followed by a majority of activity recognition studies \cite{straczkiewicz2021systematic} with continuous, accurate ground truth measurements. So, the results should be interpreted with the study's exploratory nature in mind.
Further, it is worth noting that we could expect some of the highly informative features used in models to change over time, with changes in technology use and people's habits in different countries \cite{xu2022globem, adler2022machine}. For example, a reason for the lack of use of streaming services in certain countries (discussed in Section \ref{sec:descriptive}) is the lack of laws surrounding the usage of illegally downloaded content (e.g., Germany has strict laws against using illegal downloads \cite{rump2011kind}). Changes in the laws of countries could change the behavior of young adults. Further, internet prices could also affect the use of streaming services. While cheap, high-bandwidth internet is common in developed countries, this is not the case in developing nations in Asia, Africa, and South America, where internet usage is expensive, which discourages streaming. In addition, income levels could also substantially influence the captured features. For example, with increasing income levels (which usually accompany growth in a country's GDP), young adults may use more wearables for fitness tracking, leading to the usage of health and fitness apps on mobile phones.
The amount of data for each country is highly imbalanced. For a fair representation of each country, having the same number of participants and self-reports per country would ensure that a classifier learns to distinguish classes from each country equally. However, Italy and Mongolia are dominant in the current state of the dataset. If not done carefully, down-sampling would result in a loss of expressiveness and variance, making it difficult to discern different classes in a multi-country approach. Another imbalance is found among class labels, where activities such as sleeping or studying are more frequent than others. However, this does make sense since we do not expect all activities to appear at the same frequency in a participant's day or week. Further, we reported F1 and AUROC scores that are preferred in such imbalanced settings.
Finally, the dataset was collected in November 2020, during the Covid-19 pandemic, when most students stayed home due to work/study-from-home restrictions. This explains why most of the relevant features found in the statistical analysis are screen events and app events. While some relevant features relate to proximity and WiFi sensors, there are very few regarding activity and location unless the target corresponds to physical activities. This is probably an effect of a context where movement was highly discouraged. From another perspective, the behavior of college students from all countries during this time period reflects remote work or study arrangements. We could expect these practices to continue for years as more universities and companies adopt a remote work/study culture. Hence, while many prior studies in ubicomp used phone usage and sensing features for activity/behavior/psychological trait inference tasks, our findings indicate that phone usage features could be even more critical in the future with remote study/work settings due to sedentary behavior, which limits the informativeness of sensors such as location and inertial sensors.
\subsection{Future Work}
The study's population for the dataset collection consisted of students. Therefore, it might be worth exploring how people from different age groups use their smartphones and how their daily behavior is expressed through that usage. In addition to visible diversity, it is known that deep diversity attributes (innate to humans and not visible) such as personality (captured with the Big Five Inventory \cite{donnellan2006mini}), values (captured with the basic values survey \cite{gouveia2014functional} and the human values survey \cite{schwartz1994there, schwartz2001extending}), and intelligence (captured with the multiple intelligence scale \cite{tirri2008identification}) could also affect smartphone sensor data and the activities performed by people \cite{schelenz_theory_2021, khwaja_modeling_2019}. Hence, how such diversity attributes affect smartphone-based inference models for complex activities and other target variables is worth investigating. Further, future work could examine how classification performance is affected when excluding the sensing data corresponding to the time participants took to fill out the self-report about activities. Finally, domain adaptation for multimodal smartphone sensor data across time and countries remains an important problem worth investigating in future work.
\section{Conclusion}\label{sec:conclusion}
In this study, we examined the daily behavior of 637 students in Italy, Mongolia, the United Kingdom, Denmark, and Paraguay using over 216K self-reports and passive sensing data collected from their smartphones. The main goal of this study was to first examine whether multimodal smartphone sensor data could be used to infer complex daily activities, which in turn would be useful for context-aware applications, and then to examine whether models generalize well to different countries. We have a few primary findings: \textit{(i)} while each country has its own daily distribution of activities, we can observe similarities between the geographically closer countries in Europe; moreover, features such as the time of the day or the week, screen events, and app usage events are indicative of most daily activities; \textit{(ii)} 12 complex daily activities can be recognized with reasonable performance in a country-specific and partially personalized setting using passive sensing features; however, extending this to a multi-country model does not perform as well as the country-specific setting; and \textit{(iii)} models do not generalize well to other countries (at least compared to within-country performance), especially to geographically distant ones. More studies are needed along these lines regarding complex daily activity recognition and other target variables (e.g., mood, stress, fatigue, eating behavior, drinking behavior, social context inference, etc.) to confirm the findings. Hence, we believe research around geographic diversity awareness is fundamental for advancing mobile sensing and human behavior understanding towards more real-world utility across diverse countries. From a study design perspective, we advocate collecting data from diverse regions and populations to build better-represented machine learning models. From a machine learning perspective, we advocate developing domain adaptation techniques to better handle multimodal mobile sensing data collected from diverse countries. |
{
"arxiv_id": "2302.08663",
"language": "en",
"timestamp": "2023-02-20T02:05:47",
"url": "https://arxiv.org/abs/2302.08663",
"yymm": "2302"
} | \section{Introduction}
Quantum secret sharing \cite{quantum secret sharing} is a cryptographic scheme to encode a quantum secret into multiple pieces of quantum information (called shares) and distribute shares to participants so that qualified sets of participants can reconstruct the secret but forbidden sets can gain no information about the secret.
Quantum secret sharing protects the quantum secret, in the same way that classical secret sharing protects the classical secret.
A stabilizer code \cite{stabilizer code} is a class of quantum codes which can detect and correct errors.
An $[[n, k, d]]$ stabilizer code encodes $k$ qudits into $n$ qudits, detects up to $d-1$ quantum errors, and corrects up to $d-1$ quantum ``erasures'' \cite{erasure}. Here, an erasure means an error whose position is known.
Quantum secret sharing can be constructed by stabilizer codes \cite{quantum secret sharing}, \cite{erasure to share}.
Any particular sequence of $n$ qudits which the encoder might transmit is called a codeword.
Each qudit of the codeword of a stabilizer code is a share of the quantum secret sharing.
If an error-correcting code can correct $d-1$ erasures, any $n-(d-1)$ shares can recover the initial state by treating the missing $d-1$ shares as erasures.
Quantum secret sharing using stabilizer codes has two different procedures to reconstruct a secret from a set of shares.
One is an erasure correcting procedure \cite{quantum secret sharing}, and the other is a unitary procedure \cite{unitary procedure}.
Details of both procedures are described in the next section.
Depth, width, and the number of circuit gates are performance indices of quantum circuits.
Depth is the number of steps needed to complete the circuit operation.
Width is the number of qudits required to run the circuit.
Circuit gates consist of measurements and unitary gates acting on one or two qudits in a quantum circuit.
In terms of width and the number of measurements, the unitary procedure is better than the erasure correction procedure \cite{unitary procedure}.
However, we do not know which is better in terms of depth and the number of unitary gates.
In this paper, a quantum circuit for each method is designed for quantum secret sharing using $[[5, 1, 3]]$ binary stabilizer codes.
We compare the erasure correcting procedure with the unitary procedure by counting depth and the number of circuit gates of both circuits.
We find that the depth and the number of circuit gates of the unitary procedure are smaller than those of the erasure correcting procedure.
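As a toy illustration of these indices (written with Qiskit purely for exposition; the circuits in this paper were designed in Q\#), the depth, width, and gate counts of a small circuit can be obtained as follows.
\begin{verbatim}
from qiskit import QuantumCircuit

qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)
qc.cz(1, 2)
qc.measure_all()

print("depth:", qc.depth())          # number of time steps
print("width:", qc.width())          # qubits plus classical bits used
print("gate counts:", qc.count_ops())
\end{verbatim}
Note that Qiskit's \texttt{width()} also counts classical bits, whereas width in this paper refers to the number of qudits.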
\section{Two procedures to reconstruct a quantum secret}
There are two procedures to reconstruct a quantum secret.
One is an erasure correcting procedure \cite{quantum secret sharing}, and the other is a unitary procedure \cite{unitary procedure}. Details of both procedures are as follows.
\subsection{Erasure correcting procedure}
Participants put shares in the corresponding places of the codeword of the stabilizer code.
The participants add new qudits to the empty places and regard them as erasures.
If the erasures are correctable, the participants get the original codeword by performing erasure correction.
Decoding the codeword reveals the secret \cite{quantum secret sharing}.
\subsection{Unitary procedure}
There are multiple sets of shares that can reconstruct a secret.
Each set has its own unitary transformation to reconstruct the secret.
When participants get a set of shares that can reconstruct a secret, the secret can be found by performing the corresponding unitary transformation \cite{unitary procedure}.
\section{Condition of comparison and design method of circuit}
\subsection{Condition of comparison}
We reconstruct the secret from the third, fourth and fifth qubits of the codeword of the $[[5, 1, 3]]$ binary stabilizer codes.
We design and draw quantum circuits using Q\#, which is Microsoft's open-source programming language for developing and running quantum algorithms \cite{Qsharp}.
The program code that we have written can be found in the ipynb format as auxiliary files.
The auxiliary file erasure\_correcting\_procedure.ipynb contains the designing of the erasure correcting procedure and the visualisation of the circuit.
The auxiliary file unitary\_procedure.ipynb contains the designing of the unitary procedure and the visualisation of the circuit.
The circuit gates used in quantum circuits are measurements, CNOT gates, X gates and H gates.
We denote a quantum secret as $\alpha|0\rangle+\beta|1\rangle$ ($\alpha$, $\beta$ are complex numbers with $|\alpha|^2+|\beta|^2=1$).
We encode the secret into $\alpha|0_L\rangle+\beta|1_L\rangle$ according to ``EncodeIntoFiveQubitCode'' \cite{EncodeIntoFiveQubitCode} defined in Q\#, where $|0_L\rangle$ and $|1_L\rangle$ are as follows.
\begin{equation*}
\begin{split}
|0_L\rangle
=-\frac{1}{4}\bigl\lbrack
&|11111\rangle+|01101\rangle+|10110\rangle+|01011\rangle\\
+&|10101\rangle-|00100\rangle-|11001\rangle-|00111\rangle\\
-&|00010\rangle-|11100\rangle-|00001\rangle-|10000\rangle\\
-&|01110\rangle-|10011\rangle-|01000\rangle+|11010\rangle\bigr\rbrack.\\
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
|1_L\rangle
=-\frac{1}{4}\bigl\lbrack
&|00000\rangle+|10010\rangle+|01001\rangle+|10100\rangle\\
+&|01010\rangle-|11011\rangle-|00110\rangle-|11000\rangle\\
-&|11101\rangle-|00011\rangle-|11110\rangle-|01111\rangle\\
+&|10001\rangle-|01100\rangle-|10111\rangle+|00101\rangle\bigr\rbrack.\\
\end{split}
\end{equation*}\\
\subsection{Design method of circuit of the erasure correcting procedure}
We used Q\# operations to design the circuit.
The operations we used are ``Recover'' \cite{Recover} and ``DecodeFromFiveQubitCode'' \cite{DecodeFromFiveQubitCode}.
Recover is an operation that gives an error-corrected codeword and takes three arguments.
The first argument is the quantum state to be corrected; Recover gives the corrected codeword by operating on this argument.
The second argument consists of encoding and decoding procedures.
The third argument is a two-dimensional array that describes the correcting operation according to the measurement results.
DecodeFromFiveQubitCode reconstructs the secret from the codeword of the $[[5, 1, 3]]$ binary stabilizer code.
We used the command ``\%trace'' \cite{trace} to visualize the designed circuit.
\%trace is only available in the Jupyter Notebook.
We then manually redrew the circuit in a simpler form.
\subsection{Design method of circuit of the unitary procedure}
We designed the circuit by hand.
\section{Results}
\subsection{Designed circuits}
Fig.\ref{fig:erasure} is the quantum circuit of the erasure correcting procedure.
Fig.\ref{fig:unitary} is the quantum circuit of the unitary procedure.
The legend for the symbols in each figure is in Table \ref{tab:number}.
In a CNOT gate, the black dot is a control qubit and the $\bigoplus$ is a target qubit.
In a controlled-Z gate, one black dot is the control qubit and the other is the target qubit.
A controlled-Z gate gives the same result regardless of which black dot is treated as the control qubit.
The controlled-Z gate consists of one CNOT gate and two H gates; in this paper, each controlled-Z gate is counted as one CNOT gate and two H gates.
The double line from a measurement is a classical bit wire.
The box marked ``Apply Pauli'' in Fig.~\ref{fig:erasure} is a correction operation whose content changes according to the four measurement results; it contains 0, 1, or 2 Pauli gates.
``3rd share'', ``4th share'' and ``5th share'' in both circuits are the third, fourth and fifth qubits of the codeword of the $[[5, 1, 3]]$ binary stabilizer codes respectively.
The $|0\rangle$s labeled missing shares are new qubits added to treat the missing shares as erasures.
The $|0\rangle$ labeled ``Auxiliary qubit'' is a qubit used to simplify the measurements.
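The decomposition of the controlled-Z gate into one CNOT gate and two H gates stated above can be checked numerically; a minimal NumPy sketch, with the H gates applied to the target qubit, is given below.
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],   # control: first qubit, target: second qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
CZ = np.diag([1, 1, 1, -1])

IH = np.kron(I, H)               # H acting on the target qubit
assert np.allclose(IH @ CNOT @ IH, CZ)   # CZ = (I (x) H) CNOT (I (x) H)
\end{verbatim}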
\begin{figure}[htbp]
\begin{minipage}[b]{0.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{QECC.pdf}
\caption{{$\scriptstyle \mbox{Circuit of the erasure}\atop \scriptstyle \mbox{correcting procedures}$}}
\label{fig:erasure}
\end{minipage}
\begin{minipage}[b]{0.45\columnwidth}
\centering
\includegraphics[width=0.5\columnwidth]{Unitary.pdf}
\caption{{$\scriptstyle \mbox{Circuit of}\atop \scriptstyle \mbox{unitary procedures}$}}
\label{fig:unitary}
\end{minipage}
\end{figure}
\subsection{Depth, width, and the number of circuit gates}
We list depth, width and the number of circuit gates for both procedures in Table \ref{tab:number}.
We assume that the box marked Apply Pauli has no Pauli gates. This assumption is advantageous to the erasure correcting procedure.
\begin{table}[htb]
\caption{{$\scriptstyle \mbox{legend of each gate and}\atop \scriptstyle \mbox{depth, width, the number of circuit gates}$}}
\label{tab:number}
\centering
\begin{tabular}{c|ccc}
\hline
&\multirow{2}{*}{legends}&unitary&erasure correcting\\
&&procedures&procedures\\
\hline \hline
CNOT gates&\begin{minipage}{5mm}\centering\scalebox{0.5}{\includegraphics{CNOT.pdf}}\end{minipage}&9&23\\\hline
X gates&\begin{minipage}{5mm}\centering\scalebox{0.5}{\includegraphics{X.pdf}}\end{minipage}&5&1\\\hline
H gates&\begin{minipage}{5mm}\centering\scalebox{0.5}{\includegraphics{H.pdf}}\end{minipage}&2&28\\\hline
measurements&\begin{minipage}{5mm}\centering\scalebox{0.5}{\includegraphics{measurement.pdf}}\end{minipage}&0&4\\\hline
controlled-Z gates&\begin{minipage}{5mm}\centering\scalebox{0.5}{\includegraphics{CZ.pdf}}\end{minipage}&&\\\hline
depth&&15&38\\\hline
width&&3&6\\\hline
\end{tabular}
\end{table}
\section{Conclusion}
We counted the depth and the number of circuit gates of the quantum circuits for both procedures in quantum secret sharing using $[[5, 1, 3]]$ binary stabilizer codes.
Under the conditions of this paper, the unitary procedure requires a smaller depth and fewer circuit unitary gates than the erasure correcting procedure.
It has already been shown that the unitary procedure is better than the erasure correcting procedure in terms of width and the number of measurements.
Therefore, we conclude that the unitary procedure is better than the erasure correcting procedure under the conditions of this paper.
We thus obtain one example of the relationship between the two procedures.
In future work, we will investigate whether this relationship holds in the general case.
|
{
"arxiv_id": "2302.08584",
"language": "en",
"timestamp": "2023-02-20T02:02:13",
"url": "https://arxiv.org/abs/2302.08584",
"yymm": "2302"
} | \section{Introduction}\label{S1}
The widespread adoption and recent acceleration in the deployment of $5$G networks have resulted in an increased strain on the mid-band spectrum (\SIrange{4}{8}{\giga\hertz}), thereby causing providers around the world to shift their focus onto the millimeter wave bands (mmWave: \SIrange{30}{300}{\giga\hertz})~\cite{Commercial}. With mmWave deployments, both consumers and businesses can exploit the relatively higher bandwidth over extant mid-band networks: consumers can experience better quality-of-service vis-\`{a}-vis data rates at their user terminals, while businesses can leverage reliable service guarantees to optimize operations~\cite{Rappaport}.
While mmWave networks promise quality-of-service gains, the signals suffer from poor propagation characteristics, i.e., increased atmospheric attenuation and absorption~\cite{Rappaport}. Ergo, there have been concerted efforts from both academia and industry to develop well-rounded mmWave channel models for indoor and outdoor radio environments~\cite{Foliage, Indoor60G, NISTModeling}. Current efforts comprise a wide array of measurement campaigns and subsequent analyses~\cite{Agile-Link, Harvard, Purdue, Foliage, MolischSpatialOutdoor, Outdoor28G, Commercial, MacCartneySpatialStatistics}. These works suffer from poor system design approaches that introduce drawbacks vis-\`{a}-vis cost (expensive phased-arrays~\cite{Agile-Link}), computational complexity (exhaustive signal sampling~\cite{Agile-Link}), and ease of operations (inflexible alignment~\cite{Foliage}); fail to address diversity in transmitter (Tx) and receiver (Rx) deployment~\cite{Purdue, MolischSpatialOutdoor}; fail to empirically validate standardized pathloss models in diverse propagation conditions~\cite{Outdoor28G, Commercial}; and fail to analyze signal spatial consistency behavior under continuously varying Tx-Rx distance and alignment accuracy effects~\cite{Indoor60G, MolischSpatialOutdoor, MacCartneySpatialStatistics}. To address these limitations, this paper describes the design of a sliding correlator channel sounder~\cite{Purdue} in conjunction with a fully autonomous robotic beam-steering platform, employed in a V$2$X propagation modeling campaign on the NSF POWDER testbed~\cite{POWDER} along urban and suburban routes at \SI{28}{\giga\hertz}.
The novel design of our measurement platform enables several key features essential to mmWave V2X propagation modeling. First, our system facilitates greater deployment diversity along routes in both urban and suburban settings (location diversity); allows operations with manual, full-, and semi-autonomous alignment (beam-steering diversity); and enables the use of different mobile mounts (vehicle/cart, velocity diversity). In contrast, the \SI{28}{\giga\hertz} measurement campaign in~\cite{Purdue} centers around a manual antenna alignment platform in suburban neighborhoods under semi-stationary settings only; the sampling activities in~\cite{Harvard} are constrained to indoor environments and the underlying beam-steering platform also requires manual operation; and~\cite{MolischSpatialOutdoor, Outdoor28G} outline semi-stationary approaches restricted to urban deployments. Second, while electronic beam-alignment systems~\cite{Agile-Link} involving phased-array antenna modules offer increased flexibility in side-lobe \& beam-width control and demonstrate faster average tracking response times relative to mechanical fixed-beam steering, they constitute exhaustive beam-search strategies and their design necessitates expensive hardware to host inherently resource-heavy algorithms. Additionally, our work evaluates the signal pathloss behavior, computed from the collected measurements, against the $3$GPP TR$38.901$ and ITU-R M$.2135$ large-scale macro-cellular pathloss models~\cite{MacCartneyModelsOverview}. These comparisons enable the user to gain novel insights on adapting them to rapidly evolving mmWave V$2$V and V$2$I scenarios. Lastly, while~\cite{MolischSpatialOutdoor, MacCartneySpatialStatistics} conduct spatial consistency analyses of mmWave signals vis-\`{a}-vis Tx-Rx distance, they fail to investigate signal decorrelation behavior under variations in Tx-Rx alignment accuracy. With a fully autonomous alignment and tracking platform that aids in the continual series of measurements along unplanned vehicular routes onsite, our design facilitates signal decoherence studies under both distance and alignment effects. The insights gathered from these studies on mmWave signal propagation in vehicular settings can be leveraged in the efficient deployment of $5$G$+$ networks along interstates and in furthering spatial consistency research on V$2$V and V$2$I communications. A glossary of the notations, standards, and protocols referenced in this paper is provided in Table~\ref{T1}.
\indent{The} rest of the paper is structured as: Sec.~\ref{S2} elucidates the end-to-end design of our autonomous robotic beam-steering platform; Sec.~\ref{S3} details our measurement and post-processing activities; Sec.~\ref{S4} describes our numerical evaluations and the insights gained from pathloss and spatial consistency studies; and finally, Sec.~\ref{S5} outlines our concluding remarks.
\vspace{-3mm}
\section{Measurement System: Design Description}\label{S2}
With the objective of facilitating uninterrupted measurement operations for \SI{28}{\giga\hertz} propagation modeling along unplanned routes in V$2$X settings, our measurement campaign on the NSF POWDER testbed~\cite{POWDER} involved a sliding correlator channel sounder~\cite{Purdue} with directional horn antennas in conjunction with a fully autonomous mechanical beam-steering platform for continual antenna alignment and tracking~\cite{NRSM}. Specifically, under V$2$I evaluations, with a rooftop mounted Tx and a mobile Rx traversing unplanned vehicular routes onsite, this design enables the logging of geo-positioning data (i.e., GPS coordinates, speed, acceleration, and heading), alignment angles, and power-delay profile samples. With the system architecture shown in Fig.~\ref{F1}, in this section, we first discuss our channel sounder design and then describe the development of our autonomous alignment and tracking platform.
\begin{table} [tb]
\centering
\scriptsize
\begin{tabular}{|l||l|}
\hline
V$2$X & Vehicle-to-Vehicle (V$2$V) or Vehicle-to-Infrastructure (V$2$I)\\
\hline
$3$GPP TR$38.901$ UMa & $3$rd Generation Partnership Project (Urban Macrocells)\\
\hline
ITU-R M$.2135$ UMa & International Telecommunication Union (Urban Macrocells)\\
\hline
SAGE, AoA & Space-Alternating Generalized Expectation-maximization, Angle of Arrival\\
\hline
HPBW, SDR & Half-Power Beam-Width, Software Defined Radio\\
\hline
SSD, SBC & Solid State Drive, Single Board Computer\\
\hline
PWM & Pulse Width Modulation (digital control of servos)\\
\hline
GNSS, GPS & Global Navigation Satellite System, Global Positioning System\\
\hline
RTCM & Radio Technical Commission for Maritime Services\\
\hline
RTK & Real-Time Kinematics (GPS corrections)\\
\hline
NMEA-0183 & National Marine Electronics Association (data specification)\\
\hline
NTRIP & Networked Transport of RTCM over Internet Protocol\\
\hline
UNAVCO & University NAVstar COnsortium (GNSS data provisioning)\\
\hline
I$2$C & Inter Integrated Circuit (serial communication bus)\\
\hline
NTP & Network Time Protocol (timing synchronization)\\
\hline
\end{tabular}
\vspace{-1mm}
\caption{Glossary of Notations, Standards, and Protocols}
\label{T1}
\end{table}
\noindent{\textbf{Channel Sounder}}: The measurement system employed a custom broadband sliding correlator channel sounder at both the Tx and the Rx~\cite{Purdue}, each equipped with a Pseudorandom Noise (PN) sequence generator module producing the required known apriori signal for time-dilated cross-correlation studies, with the Rx module clocked at a slightly lower rate than the Tx; up-/down-converter to transition between the \SI{2.5}{\giga\hertz} and \SI{28}{\giga\hertz} regimes; a vertically polarized WR-28 directional horn antenna; and other commercially available components. This setup is depicted by the schematics in Fig.~\ref{F1} and is implemented by the circuits shown in Fig.~\ref{F2}. The operational specifications of the sounder are listed in Table~\ref{T2}, as detailed also in~\cite{Purdue}. At the Rx, complex-\SI{64}{} I/Q power-delay profiles are recorded onboard an SSD storage drive by a GNURadio sink on a Raspberry Pi SBC via a USRP B$200$mini SDR.
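To make the time-dilation relations concrete, the following minimal sketch (plain Python, using only the chip rates and PN sequence length of Table~\ref{T2}) reproduces the temporal resolution quoted above; it is an illustrative calculation, not part of our acquisition software.
\begin{verbatim}
# Minimal sketch: sliding-correlator time dilation from the chip rates in Table 2.
f_tx = 400.00e6          # Tx chip rate [chips/s]
f_rx = 399.95e6          # Rx chip rate [chips/s]
pn_length = 2047         # PN sequence length [chips]

gamma = f_tx / (f_tx - f_rx)               # slide factor (time-dilation ratio)
t_resolution = 1.0 / f_tx                  # multipath temporal resolution [s]
t_dilated_pdp = pn_length * gamma / f_tx   # duration of one time-dilated PDP [s]

print(f"slide factor         : {gamma:.0f}")
print(f"temporal resolution  : {t_resolution * 1e9:.2f} ns")
print(f"dilated PDP duration : {t_dilated_pdp * 1e3:.2f} ms")
\end{verbatim}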
\noindent{\textbf{Alignment \& Tracking}}: First, to enable uninhibited rotational mobility for alignment and tracking in the horizontal and the vertical planes, as shown in Fig.~\ref{F1}, at both the Tx and the Rx, the WR-$28$ antenna is mounted on a PT-$2645$S open-loop pan-and-tilt unit, driven by $2{\times}$ HSR-$2645$CRH continuous rotation servos, each actuating either yaw (horizontal) or pitch (vertical) alignment. These servos are controlled via PWM signals from an ATMega$328$P microcontroller with the angular position (absolute and relative) feedback provided by a BNO$080$ inertial motion unit. This principal axes positioning subsystem demonstrates an average accuracy of \SI{1.1}{\degree} across all fine- \& coarse-grained yaw and pitch movements. Next, for seamless operations in V$2$X scenarios, the alignment platform is augmented with a geo-positioning subsystem consisting of a UBlox ZED-F$9$P GPS unit (with a GNSS multi-band antenna) wherein the positioning accuracy is enhanced by RTCM v$3.0$ RTK correction streams over NTRIP from a UNAVCO caster via a Lefebure client. Demonstrating an average $3$D accuracy of \SI{17}{\centi\meter}, the relevant data members captured by this geo-positioning unit---namely, the coordinates (latitude, longitude, and ellipsoidal altitude), the horizontal speed \& acceleration, and the heading---are communicated to the microcontroller as NMEA-0183 messages over an I$2$C serial communication bus.
\begin{table} [tb]
\centering
\scriptsize
\begin{tabular}{|l||l|}
\hline
Carrier Frequency & \SI{28}{\giga\hertz}\\
\hline
PN Chip Sequence Length & \SI{2047}{}\\
\hline
RF Bandwidth & \SI{800}{\mega\hertz}\\
\hline
Tx Chip Rate & \SI{400}{\mega{cps}}\\
\hline
Temporal Resolution & \SI{2.5}{\nano\second}\\
\hline
Rx Chip Rate & \SI{399.95}{\mega{cps}}\\
\hline
Tx Power & \SI{23}{\deci\bel{m}}\\
\hline
Tx/Rx Antenna Gain & \SI{22}{\deci\bel{i}}\\
\hline
Nominal Tx/Rx Antenna HPBW & \SI{15}{\degree}\\
\hline
Measured Tx/Rx Azimuth HPBW & \SI{10.1}{\degree}\\
\hline
Measured Tx/Rx Elevation HPBW & \SI{11.5}{\degree}\\
\hline
Maximum Measurable Pathloss & \SI{182}{\decibel}\\
\hline
GNURadio Sink Center Frequency & \SI{2.5}{\giga\hertz}\\
\hline
USRP Gain & \SI{76}{\decibel}\\
\hline
USRP Sampling Rate & \SI{2}{\mega{sps}}\\
\hline
\end{tabular}
\vspace{-0.7mm}
\caption{Sliding Correlator Channel Sounder Specifications}
\label{T2}
\end{table}
Since our measurement system revolves around a decoupled design with the alignment and tracking platform replicated at both the Tx and the Rx, a centralized nerve-center handles asynchronous module registration \& de-registration via RPyC object proxying, global timing synchronization via NTP, and coordination between the Tx and Rx over fault-tolerant Apache Kafka messaging middleware. With Apache Zookeeper broker management, the samples generated by the principal axes positioning and geo-positioning subsystems are shared over Kafka message queues: the Tx subscribes to the alignment and geo-location messages published by the Rx, and vice-versa, resulting in a scalable event-driven modular architecture. Corroborated both onsite and in the laboratory, this publish-subscribe framework facilitates an average beam-steering response time of \SI{27.8}{\milli\second}, evaluated over \SI{12870}{} interactions. With system monitoring provided over an Android debug bridge and system troubleshooting enabled via serial communication interfaces, our platform demonstrates remote orchestration capabilities: a critical necessity for V$2$X propagation modeling. To augment these remote monitoring and troubleshooting features further, a GNURadio Qt GUI time-sink (with dynamic trigger levels) allows for real-time visualization of the recorded power-delay profiles over an ad-hoc WLAN with the Raspberry Pi SBC; additionally, via the Plotly Dash and MapBox APIs, the Tx and Rx geo-locations are annotated with their relative alignment accuracies and visualized in real-time at a control terminal for onsite validation of the routes traversed on NSF POWDER.
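As an illustration of this event-driven coordination, a minimal publish--subscribe sketch is given below (Python with the kafka-python client); the broker address, topic name, and message fields are illustrative assumptions and do not reproduce our middleware code.
\begin{verbatim}
# Minimal sketch of Tx/Rx coordination over Kafka (kafka-python client).
# Broker address, topic name, and message fields are illustrative assumptions.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="nerve-center:9092",
    value_serializer=lambda m: json.dumps(m).encode("utf-8"))

# The Rx publishes its geo-position and current alignment after every update.
producer.send("rx_state", {"lat": 40.7659, "lon": -111.8460,
                           "yaw_deg": 112.4, "pitch_deg": -3.1})
producer.flush()

# The Tx subscribes to the Rx state stream and re-steers toward each new fix.
consumer = KafkaConsumer(
    "rx_state",
    bootstrap_servers="nerve-center:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")))

for message in consumer:
    state = message.value
    # new yaw/pitch set-points would be computed from the reported Rx fix here
    print("re-steering toward", state["lat"], state["lon"])
\end{verbatim}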
\begin{figure*} [t]
\centering
\includegraphics[width=1.02\textwidth]{figs/system_architecture.jpg}
\vspace{-6mm}
\caption{The architecture of our autonomous antenna alignment \& tracking platform with a sliding correlator channel sounder.}
\label{F1}
\vspace{-7mm}
\end{figure*}
\vspace{-3.3mm}
\section{Measurements \& Post-Processing}\label{S3}
In this section, we discuss the operations involved in our \SI{28}{\giga\hertz} V$2$X measurement campaign on the NSF POWDER testbed. First, we describe the system calibration process; next, we outline the onsite system deployment procedure; and finally, we detail the post-processing steps involved in setting up the power-delay profiles recorded at the Rx for pathloss evaluations, multipath component extraction and parameter estimation via SAGE, and spatial consistency analyses.
\noindent{\textbf{Pre-deployment Calibration}}: After the Tx and Rx circuits for the sliding correlator channel sounder have been implemented as illustrated in Figs.~\ref{F2}(a) and~\ref{F2}(d), a calibration procedure is carried out onsite to map the power calculated from the power-delay profiles recorded by the USRP to reference measured power levels. Calibrating the measurement system before deployment ensures accurate Rx power calculations in the presence of imperfect circuit components, e.g., the Commscope LDF$4$-$50$A \SI{0.5}{{"}} coaxial cables employed at the Tx exhibit losses of up to \SI{0.12}{\deci\bel\per\meter} at \SI{2.5}{\giga\hertz}. Under \SI{0}{\deci\bel} and \SI{76}{\deci\bel} USRP gains, using a Keysight variable attenuator, the recorded power-delay profiles are processed to determine the calculated power values mapped to the reference power levels: the results of this procedure are studied in Sec.~\ref{S4}.
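A minimal sketch of this mapping step is given below (NumPy); the calibration points listed are placeholders rather than our measured values.
\begin{verbatim}
# Minimal sketch: map power calculated from recorded PDPs to reference levels.
# The arrays below are placeholders, not the actual calibration measurements.
import numpy as np

p_calc_db = np.array([-42.0, -37.1, -31.9, -27.2, -22.0])  # from USRP PDPs [dB]
p_ref_db  = np.array([-60.0, -55.0, -50.0, -45.0, -40.0])  # attenuator setting [dB]

slope, intercept = np.polyfit(p_calc_db, p_ref_db, deg=1)   # linear calibration

def calibrated_power(p_calc):
    """Convert a calculated power value (dB) to a calibrated Rx power (dB)."""
    return slope * p_calc + intercept
\end{verbatim}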
\noindent{\textbf{NSF POWDER Deployment}}: As described in Sec.~\ref{S2}, our measurement system assembly constitutes an autonomous beam-steering controller replicated at both the Tx and the Rx, their respective sounder circuits, and a centralized nerve center for aggregation (RPyC), synchronization (NTP), and coordination (Kafka-Zookeeper). On the NSF POWDER testbed, the nerve center is deployed on a high-availability cluster of four Dell R$740$ compute nodes at the Fort Douglas datacenter, with fault tolerance being a key feature to ensure storage redundancy for the recorded data. As depicted in Fig.~\ref{F2}(b), the Tx is mounted on a building rooftop; while, as shown in Fig.~\ref{F2}(c), the Rx is mounted on a van (or a push-cart) that is driven (or pushed) along unplanned routes onsite. Remote monitoring and troubleshooting is provided for validation of geo-positioning, alignment, and power-delay profile samples. The goal of this measurement campaign was to obtain a reasonably large dataset of site-specific measurements for evaluating the propagation characteristics of \SI{28}{\giga\hertz} signals in vehicular communication settings. Thus, our propagation modeling activities included V$2$I measurements under manual, semi-, and fully-autonomous alignment operations traversing nine routes spanning urban and suburban neighborhoods\footnote{Although our platform is capable of double-directional measurements and facilitates easy scalability to MIMO settings, in this campaign, we focus only on \emph{beam-steered measurements} for spatial consistency evaluations.}.
\noindent{\textbf{Post-Processing}}: Using GNURadio utilities, the metadata file corresponding to the route-specific power-delay profile records at the Rx is parsed to extract timestamp information, which is then associated with the geo-positioning and alignment logs at both the Tx and the Rx. The samples in each synchronized power-delay profile segment undergo pre-filtering via a low-pass filter (SciPy FIR implementation), time-windowing, and noise-elimination (via custom peak-search and thresholding heuristics). Coupled with transmission power and antenna gain values, the received power levels obtained from these processed samples allow the visualization of pathloss maps on the Google Maps API (rendered via the Bokeh toolbox), and the evaluation of pathloss behavior as a function of Tx-Rx distance, with validations against the $3$GPP TR$38.901$ and ITU-R M$.2135$ standards~\cite{MacCartneyModelsOverview}. The SAGE algorithm~\cite{SAGE} is used to extract multipath parameters, which facilitates RMS AoA direction spread studies~\cite{Indoor60G}. Under Tx-Rx distance and Tx-Rx alignment variations, we probe signal decoherence patterns via the spatial/angular autocorrelation coefficient~\cite{MacCartneySpatialStatistics}.
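The per-segment processing chain can be sketched as follows (SciPy/NumPy); the filter order, cut-off frequency, and thresholding rule below are illustrative stand-ins for our tuned heuristics.
\begin{verbatim}
# Minimal sketch of one PDP segment's processing chain (illustrative parameters).
import numpy as np
from scipy.signal import firwin, lfilter

def process_pdp(iq, fs=2e6, cutoff_hz=50e3, numtaps=101, noise_margin_db=10.0):
    """Low-pass filter and noise-threshold one complex PDP segment."""
    taps = firwin(numtaps, cutoff_hz, fs=fs)      # FIR low-pass pre-filter
    filtered = lfilter(taps, 1.0, iq)
    power = np.abs(filtered) ** 2

    # Simple stand-in for the peak-search / thresholding heuristics:
    noise_floor = np.median(power)
    threshold = noise_floor * 10 ** (noise_margin_db / 10.0)
    valid = power > threshold

    rx_power_db = 10.0 * np.log10(power[valid].sum()) if valid.any() else -np.inf
    return rx_power_db, valid
\end{verbatim}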
\begin{figure*}[t]
\centering
\begin{tabular}{cc}
\begin{minipage}{0.522\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{figs/tx_comm_setup.jpg}
\\ [0.55ex]
\centering
\includegraphics[width=0.8\linewidth]{figs/rx_comm_setup.jpg}
\end{minipage}&
\hspace{-20mm}
\begin{minipage}{0.478\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{figs/tx_deployment_browning.jpg}
\\ [0.55ex]
\centering
\includegraphics[width=0.8\linewidth]{figs/rx_cart_deployment.jpg}
\end{minipage}
\end{tabular}
\caption{Clockwise from top-left: The Tx circuit with a \SI{28}{\giga\hertz} up-converter, a WR-$28$ directional horn antenna, and other commercially available parts (a); the deployment of the Tx atop the William Browning building: the sounder circuits are housed in a climate-controlled enclosure with the antenna mounted on the pan-and-tilt platform (b); the deployment of the Rx mounted on a push-cart (or a minivan) and pushed (or driven) around onsite (c); the Rx circuit with a \SI{28}{\giga\hertz} down-converter, a WR-$28$ directional horn antenna, USRP B$200$mini SDR, Raspberry Pi SBC, and other commercially available parts (d).}
\label{F2}
\vspace{-7mm}
\end{figure*}
\vspace{-8mm}
\section{Numerical Evaluations}\label{S4}
In this section, we outline the results of our evaluations on the collected dataset. First, in Fig.~\ref{F3}(a), we visualize the calibration curves, obtained according to the procedures outlined in Sec.~\ref{S3}, depicting the linear relationship between the power calculated (in dB) from USRP readings corresponding to the received power-delay profiles and the reference power levels (in dB). Also, obtained via empirical recordings of the WR-$28$ antenna's operational characteristics, Fig.~\ref{F3}(b) and Fig.~\ref{F3}(c) illustrate its $2$D radiation patterns along the azimuth and elevation directions, respectively: these enable us to compute the gains at specific locations along a route and at specific degrees of alignment, crucial for the analyses detailed next.
Fig.~\ref{F4} and Fig.~\ref{F5} depict the received power and pathloss heatmaps, superimposed on Google Hybrid maps of the routes traversed by the Rx onsite, with the Tx affixed on the rooftop of the William Browning building. Also, we compare the pathloss versus distance behavior of signals in our measurement campaign with large-scale urban macro-cellular pathloss standards ($3$GPP TR$38.901$ and ITU-R M$.2135$~\cite{MacCartneyModelsOverview}). These constitute both line-of-sight and non-line-of-sight models, with a Tx height of $h_{\text{Tx}}{\approx}$\SI{25}{\meter} and a Tx-Rx $2$D separation range of \SI{10}{\meter}${\leq}d_{2\text{D}}{\leq}$\SI{5000}{\meter}, which match the deployment specifications of our campaign, making them suitable candidates for empirical validations. As shown in Fig.~\ref{F6}(a), evaluating the pathlosses computed from our collected measurements against these standards for the urban campus (President's Circle), suburban neighborhood (S Walcott St), and urban vegetation (campus foliage) routes, we observe that while the $3$GPP and ITU-R standards model the empirical pathloss values for the urban campus route, they fail to accurately capture the pathloss vs distance behavior in suburban settings; additionally, these standards do not serve as good benchmarks for studying \SI{28}{\giga\hertz} propagation around foliage in urban environments.
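For reference, the pre-breakpoint line-of-sight term of the $3$GPP TR$38.901$ UMa pathloss model used in these comparisons can be summarized (see~\cite{MacCartneyModelsOverview} for the full piecewise model and its NLOS counterpart) as
\begin{equation}
\text{PL}_{\text{UMa-LOS}} = 28.0 + 22\log_{10}(d_{3\text{D}}) + 20\log_{10}(f_{c})\ \ [\text{dB}],
\end{equation}
with $d_{3\text{D}}$ in meters and $f_{c}$ in GHz (here $f_{c}=28$), valid for $10\,\text{m} \leq d_{2\text{D}} \leq d_{\text{BP}}'$; beyond the breakpoint distance $d_{\text{BP}}'$ an additional correction term applies.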
\begin{figure*} [t]
\centering
\begin{subfigure}{0.368\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/calibration.jpg}
\caption{Calibration Curves}
\end{subfigure}
\begin{subfigure}{0.309\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/antenna_azimuth.jpg}
\caption{Normalized Antenna Pattern (Azimuth)}
\label{F3b}
\end{subfigure}
\begin{subfigure}{0.309\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/antenna_elevation.jpg}
\caption{Normalized Antenna Pattern (Elevation)}
\label{F3c}
\end{subfigure}
\vspace{-2mm}
\caption{The calibration curves depicting measured power versus calculated power for USRP gains of \SI{0}{\deci\bel} and \SI{76}{\deci\bel} (a); and the normalized $2$D antenna patterns along the azimuth and elevation directions for the WR-$28$ horn antennas used in our channel sounder (b), (c).}
\label{F3}
\vspace{-4mm}
\end{figure*}
\begin{figure*} [t]
\centering
\begin{subfigure}{0.460\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/rx_urban_campus.jpg}
\caption{Rx Power: Urban Campus (President's Circle)}
\label{F4a}
\end{subfigure}
\begin{subfigure}{0.2579\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/rx_urban_vegetation.jpg}
\caption{Rx Power: Urban Vegetation}
\label{F4b}
\end{subfigure}
\begin{subfigure}{0.2579\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/rx_suburban_fraternities.jpg}
\caption{Rx Power: Suburban Neighborhood}
\label{F4c}
\end{subfigure}
\vspace{-2mm}
\caption{The received power values superimposed on a Google Hybrid map for urban campus (Rx on minivan), urban vegetation (Rx on cart), and suburban neighborhood (Rx on cart) routes, respectively. The heat-map color palette dots denote the Rx locations, while the purple diamond denotes the Tx location.}
\label{F4}
\vspace{-4mm}
\end{figure*}
\begin{figure*} [t]
\centering
\begin{subfigure}{0.48\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/pl_urban_campus.jpg}
\caption{Pathloss: Urban Campus (President's Circle)}
\label{F5a}
\end{subfigure}
\begin{subfigure}{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/pl_urban_vegetation.jpg}
\caption{Pathloss: Urban Vegetation}
\label{F5b}
\end{subfigure}
\begin{subfigure}{0.245\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/pl_suburban_fraternities.jpg}
\caption{Pathloss: Suburban Neighborhood}
\label{F5c}
\end{subfigure}
\vspace{-2mm}
\caption{The pathloss values superimposed on a Google Hybrid map for urban campus (Rx on minivan), urban vegetation (Rx on cart), and suburban neighborhood (Rx on cart) routes, respectively. The heat-map color palette dots denote the Rx locations, while the purple diamond denotes the Tx location.}
\label{F5}
\vspace{-7mm}
\end{figure*}
\indent{Next}, to analyze the multipath propagation characteristics of \SI{28}{\giga\hertz} signals, we use the SAGE algorithm~\cite{SAGE} to extract the complex attenuation ($\alpha$), delay ($\tau$), Doppler shift ($\nu$), and angles-of-arrival ($\phi,\theta$) of the $M$ specular paths arriving at the Rx while traversing a particular route onsite. Solving for the exact high-resolution maximum likelihood estimate of this parameter vector ($\bm{\xi}_{i}{=}[\alpha_{i},\tau_{i},\nu_{i},\phi_{i},\theta_{i}]^{\intercal},i{=}1,2,{\dots},M$) directly involves prohibitively large computation times~\cite{SAGE}. Thus, the SAGE algorithm solves for an approximate estimate of $\bm{\xi}_{i}$ via iterative executions of the E-step which computes the expectation of the log-likelihood given the observations and the previous estimate, and the M-step which computes the current estimate by maximizing over the E-step result. This iterative execution occurs until convergence, i.e., the change in parameter values across consecutive iterations is smaller than a predefined threshold. Using these estimates, in Fig.~\ref{F6}(b) we plot the RMS direction-spread ($\sigma_{\Omega}$) of the received signals while traversing the urban campus and urban vegetation routes: these spread metrics are computed employing the procedures outlined in~\cite{Indoor60G}. From the estimated AoA azimuth and elevation values, we observe that the multipath components arriving at the Rx along the urban campus route demonstrate a relatively larger spread than those along the urban vegetation route.
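As a sketch of the spread computation, one common power-weighted definition of the RMS direction spread can be coded as follows (NumPy); this is an illustrative form and is not guaranteed to match every detail of the procedure in~\cite{Indoor60G}.
\begin{verbatim}
# Minimal sketch: RMS direction spread from SAGE AoA estimates
# (one common power-weighted definition; illustrative only).
import numpy as np

def rms_direction_spread(az_rad, el_rad, power):
    """az_rad, el_rad, power: arrays over the M estimated multipath components."""
    e = np.stack([np.cos(el_rad) * np.cos(az_rad),
                  np.cos(el_rad) * np.sin(az_rad),
                  np.sin(el_rad)], axis=1)          # unit direction vectors
    w = power / power.sum()
    mu = (w[:, None] * e).sum(axis=0)               # power-weighted mean direction
    return np.sqrt((w * np.linalg.norm(e - mu, axis=1) ** 2).sum())
\end{verbatim}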
\indent{Finally}, to study the spatial decoherence behavior of \SI{28}{\giga\hertz} signals in V$2$X scenarios, we evaluate the spatial/angular autocorrelation coefficient along specific Rx routes under the effects of Tx-Rx distance and alignment. This coefficient is computed according to the steps laid down in~\cite{MacCartneySpatialStatistics}: for each Rx location change, the amplitude term in~\cite{MacCartneySpatialStatistics} constitutes the magnitude of all the power-delay profile components (along the excess delay axis) in a recorded measurement at that location, with the expectation taken over the ensemble. Fig.~\ref{F7}(a) illustrates these spatial consistency plots vis-\`{a}-vis distance, under perfect Tx-Rx alignment conditions, wherein exponential functions are fit to the computed coefficient values for the urban campus, suburban neighborhood, and urban vegetation routes. We note decreasing correlation trends between the recorded power-delay profile samples under increasing Tx-Rx separation. Similarly, keeping the Tx-Rx separation constant, Fig.~\ref{F7}(b) depicts the variation of the angular autocorrelation coefficient under increasing levels of misalignment between the Tx and Rx antennas, while traversing the urban campus and suburban neighborhood routes. We observe rapid decorrelation across the evaluated samples even at small amounts of misalignment, highlighting the characteristics of our directional WR-$28$ horn antennas and illustrating the need for accurate beam-steering in mmWave V$2$X networks. In Figs.~\ref{F7}(a), (b), the channel does not get fully decorrelated since, in our \emph{beam-steered measurements}, the LoS component remains significant.
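A simplified stand-in for this estimator, operating on the PDP amplitude vectors at two Rx locations, is sketched below (NumPy); it omits the ensemble-averaging details of~\cite{MacCartneySpatialStatistics}.
\begin{verbatim}
# Minimal sketch: spatial autocorrelation of PDP amplitudes between two Rx
# locations (a simplified Pearson-style stand-in for the cited estimator).
import numpy as np

def spatial_autocorrelation(pdp_a, pdp_b):
    """pdp_a, pdp_b: amplitude vectors (along excess delay) at two Rx locations."""
    a = np.abs(pdp_a) - np.abs(pdp_a).mean()
    b = np.abs(pdp_b) - np.abs(pdp_b).mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
\end{verbatim}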
\begin{figure*} [t]
\centering
\begin{subfigure}{0.496\linewidth}
\centering
\includegraphics[width=0.805\linewidth]{figs/pl_distance_updated.jpg}
\vspace{-1mm}
\caption{Pathloss vs Tx-Rx Distance}
\label{F6a}
\end{subfigure}
\begin{subfigure}{0.496\linewidth}
\centering
\includegraphics[width=0.84\linewidth]{figs/rms_direction_spread_updated.jpg}
\vspace{-1mm}
\caption{RMS AoA Direction Spread}
\label{F6b}
\end{subfigure}
\vspace{-2mm}
\caption{An illustration of the pathloss values computed from the various routes traversed during the measurement campaign compared against the $3$GPP TR$38.901$ and ITU-R M.$2135$ standards, with the dashed lines denoting the corresponding fitted models (a); and a plot depicting the cumulative distribution function of the RMS AoA direction-spread computed from the parameter estimates obtained via SAGE, for the urban-campus and urban-vegetation routes (b).}
\label{F6}
\vspace{-4mm}
\end{figure*}
\begin{figure*} [t]
\centering
\begin{subfigure}{0.496\linewidth}
\centering
\includegraphics[width=0.84\linewidth]{figs/sc_distance_updated.jpg}
\vspace{-1mm}
\caption{Spatial Consistency vs Tx-Rx Distance}
\label{F7a}
\end{subfigure}
\begin{subfigure}{0.496\linewidth}
\centering
\includegraphics[width=0.83\linewidth]{figs/sc_alignment_updated.jpg}
\vspace{-1mm}
\caption{Spatial Consistency vs Tx-Rx Alignment Accuracy}
\label{F7b}
\end{subfigure}
\vspace{-2mm}
\caption{The plots depicting variations in the spatial/angular autocorrelation coefficient versus Tx-Rx distance (a) and Tx-Rx alignment accuracy (b).}
\label{F7}
\vspace{-6mm}
\end{figure*}
\vspace{-9mm}
\balance
\section{Conclusion}\label{S5}
We discuss the design of a fully autonomous beam-steering platform coupled with a sliding correlator channel sounder, well-suited for mmWave V$2$X modeling. Corroborated onsite, this beam-steering system demonstrates superior performance vis-\`{a}-vis geo-positioning accuracy, alignment reliability, and tracking response times. Processing the power-delay profiles via custom noise elimination and thresholding heuristics, we perform pathloss evaluations against the $3$GPP TR$38.901$ and ITU-R M$.2135$ standards. Importantly, the continuous series of measurements facilitated by our design enables V$2$X spatial consistency studies vis-\`{a}-vis Tx-Rx distance and alignment.
\balance
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.08667",
"language": "en",
"timestamp": "2023-02-20T02:05:53",
"url": "https://arxiv.org/abs/2302.08667",
"yymm": "2302"
} | \section{Introduction and conclusion}
\label{Introduction}
{\it Background on experiments:}
Horowitz, Pollock and Souder proposed a direct measurement
for neutron-skin thickness $r_{\rm skin}=r_{\rm n}-r_{\rm p}$~\cite{PRC.63.025501},
where $r_{\rm p}$ and $r_{\rm n}$ are proton and neutron radii, respectively.
This direct measurement $r_{\rm skin}$ consists of parity-violating and elastic electron scattering.
In fact, as for $^{208}$Pb,
the PREX group has reported,
\begin{equation}
r_{\rm skin}^{208}({\rm PREX2}) = 0.283\pm 0.071= 0.212 \sim 0.354
\,{\rm fm},
\label{Eq:Experimental constraint 208}
\end{equation}
combining the original Lead Radius EXperiment (PREX) result \cite{PRL.108.112502,PRC.85.032501}
with the updated PREX2 result \cite{Adhikari:2021phr}. This is the most reliable skin value
for $^{208}$Pb.
Very recently, as for $^{48}$Ca, the CREX group has reported~\cite{CREX:2022kgg}
\begin{eqnarray}
r_{\rm skin}^{48}({\rm CREX})
&=&0.121 \pm 0.026\ {\rm (exp)} \pm 0.024\ {\rm (model)}
\notag \\
&=&0.071\sim 0.171~{\rm fm} .
\end{eqnarray}
The value is the most reliable skin value for $^{48}$Ca.
These skin values and the $r_{\rm p}$ of Ref.~\cite{Angeli:2013epw,PRC.90.067304}
allow us to deduce matter radii $r_{\rm m}$. These values are
tabulated in Table \ref{reference values}.
As for the Sn isotopes, an indirect measurement on $r_{\rm skin}$
was made~\cite{Krasznahorkay:1999zz}.
In 1998, the cross sections of the isovector spin-dipole resonances (SDR) in the Sb isotopes, excited by the ($^{3}$He, t) charge-exchange reaction at 450~MeV for
$0^{\circ} \leq \theta_t \leq 1.15^{\circ}$, were measured.
In order to deduce $r_{\rm n}$, they used the sum rule of Ref.~\cite{Gaarde:1981vhv}, valid
for the spin-dipole operator, involving the difference between the $\beta^{-}$ and $\beta^{+}$
strengths, together with the energy-weighted sum rule for the SDR calculated
in a model where the unperturbed particle-hole energies
are degenerate at a single energy.
The skin values, $r_{\rm n}$, $r_{\rm m}$ and the $r_{\rm p}$
of Ref.~\cite{Angeli:2013epw} are also shown in Table \ref{reference values}.
As for $^{120}$Sn, in 2018, the electric dipole strength distribution between 5 and 22 MeV was determined
at RCNP from polarization transfer observables measured in proton inelastic scattering at
$E_{\rm lab} = 295$~MeV and forward angles including $0^{\circ}$~\cite{Hashimoto:2015ema}.
They extracted a highly precise electric dipole polarizability $\alpha_{\rm D} = 8.93(36)~{\rm fm}^3$
by combining it with photoabsorption data.
Within the model used, this yields $r_{\rm skin}= 0.148(34)$~fm. Their results are also shown in
Table \ref{reference values}. The result has a smaller error than that of Ref.~\cite{Krasznahorkay:1999zz}.
\begin{table}[htb]
\begin{center}
\caption
{Values of $r_{\rm m}$, $r_{\rm n}$, $r_{\rm skin}$, $r_{\rm p}$.
The $r_{\rm p}$ are determined with the electron scattering, where the charge radii are taken from Ref.~\cite{PRC.90.067304} for $^{208}$Pb and Ref.~\cite{Angeli:2013epw} for $^{48}$Ca and Sn isotopes.
The radii are shown in units of fm.
}
\begin{tabular}{cccccc}
\hline\hline
& Ref. & $r_{\rm p}$ & $r_{\rm m}$ & $r_{\rm n}$ & $r_{\rm skin}$ \\
\hline
$^{208}$Pb & PREX2 & $5.444$ & $5.617 \pm 0.044$ & $5.727 \pm 0.071$ & $0.283\pm 0.071$ \\
$^{48}$Ca & CREX & $3.385$ & $3.456 \pm 0.050$ & $3.506 \pm 0.050$ & $0.121 \pm 0.050$ \\
$^{116}$Sn & \cite{Krasznahorkay:1999zz} & $4.554$ & $4.67 \pm 0.06$ & $4.62 \pm 0.06$ & $0.12 \pm 0.06$ \\
$^{118}$Sn & \cite{Krasznahorkay:1999zz} & $4.569$ & $4.70 \pm 0.06$ & $4.65 \pm 0.06$ & $0.13 \pm 0.06$ \\
$^{120}$Sn & \cite{Krasznahorkay:1999zz} & $4.583$ & $4.76 \pm 0.07$ & $4.69 \pm 0.07$ & $0.18 \pm 0.07$ \\
$^{120}$Sn & \cite{Hashimoto:2015ema} & $4.583$ & $4.731 \pm 0.034$ & $4.670 \pm 0.034$ & $0.148 \pm 0.034$ \\
$^{122}$Sn & \cite{Krasznahorkay:1999zz} & $4.595$ & $4.82 \pm 0.07$ & $4.73 \pm 0.07$ & $0.22 \pm 0.07$ \\
$^{124}$Sn & \cite{Krasznahorkay:1999zz} & $4.606$ & $4.80 \pm 0.07$ & $4.72 \pm 0.07$ & $0.19 \pm 0.07$ \\
\hline
\end{tabular}
\label{reference values}
\end{center}
\end{table}
{\it Background on model:}
The reaction cross section $\sigma_{\rm R}$ is a standard way of determining matter radius $r_{\rm m}$.
One can then evaluate $r_{\rm skin}$ and $r_{\rm n}$ from the $r_{\rm m}$ and the
$r_{\rm p}$ of Ref.~\cite{Angeli:2013epw} calculated with the isotope shift method
based on electron scattering.
We tested the chiral (Kyushu) $g$-matrix folding model~\cite{Toyokawa:2017pdd}
for $^{12}$C+$^{12}$C scattering and found that
the Kyushu $g$-matrix folding model is reliable for reaction cross sections
$\sigma_{\rm R}$ in
$30 \lesssim E_{\rm in} \lesssim 100$~MeV and $250 \lesssim E_{\rm in} \lesssim 400$~MeV~\cite{Tagami:2019svt}.
The Kyushu $g$-matrix folding model was applied to measured $\sigma_{\rm R}$ of $^{4}$He+$^{116,120,124}$Sn scattering~\cite{Matsuzaki:2021hdm}; the results are
$r_{\rm skin}({\rm exp})=0.242 \pm 0.140$~fm for $^{116}$Sn,
$r_{\rm skin}({\rm exp})=0.377 \pm 0.140$~fm for $^{120}$Sn,
$r_{\rm skin}({\rm exp})=0.180 \pm 0.142$~fm for $^{124}$Sn.
These values have larger errors than those shown in Table \ref{reference values}.
As for $p$+$^{208}$Pb scattering, we determined a value of $r_{\rm skin}^{208}({\rm exp})$
from measured $\sigma_{\rm R}$ in a range of incident energies,
$30 \lesssim E_{\rm lab} \lesssim 100$~MeV;
the value is $r_{\rm skin}^{208}({\rm exp})=0.278 \pm 0.035$~fm~\cite{Tagami:2020bee}. Our result agrees with $r_{\rm skin}^{208}({\rm PREX2})$.
In this case, we used the D1S-GHFB+AMP proton and neutron densities,
where D1S-GHFB+AMP stands for Gogny-D1S HFB (D1S-GHFB)
with the angular momentum projection (AMP). The $r_{\rm p}$
calculated with D1S-GHFB+AMP agrees with the experimental value of Ref.~\cite{PRC.90.067304}.
Also for $^{116,118,120,122,124}$Sn, the $r_{\rm p}$ of D1S-GHFB+AMP agree with those~\cite{Angeli:2013epw}
calculated with the isotope shift method based on the electron scattering.
For this reason, we use
the D1S-GHFB+AMP proton and neutron densities in this paper.
The data~\cite{INGEMARSSON1999341,R.F.CARLSON:1995} on $\sigma_{\rm R}$
with a high accuracy of $2 \sim 3\%$
are available for p+$^{116,118,120,122,124}$Sn.
{\it Aim:}
Our aim is to determine $r_{\rm skin}({\rm exp})$ for $^{116,118,120,122,124}$Sn with
small errors by
using the Kyushu $g$-matrix folding model with the D1S-GHFB+AMP proton and neutron densities.
{\it Results:}
Our values are $r_{\rm skin}({\rm exp})=0.118 \pm 0.021$~fm for $^{116}$Sn,
$0.112 \pm 0.021$~fm for $^{118}$Sn,
$0.124 \pm 0.021$~fm for $^{120}$Sn,
$0.156 \pm 0.022$~fm for $^{124}$Sn,
where the data are taken in $30 \lesssim E_{\rm in} \lesssim 65$~MeV.
As for $^{122}$Sn, the skin value in $30 \lesssim E_{\rm in} \lesssim 50$~MeV is
$0.122 \pm 0.024$~fm.
{\it Conclusion:}
Our results of Table \ref{result values} are consistent with those shown in Table \ref{reference values}.
\section{Model}
\label{Sec-Framework}
Kohno calculated the $g$ matrix for the symmetric nuclear matter,
using the Brueckner-Hartree-Fock method with chiral N$^{3}$LO 2NFs and NNLO 3NFs~\cite{PRC.88.064005}.
He set $c_D=-2.5$ and $c_E=0.25$ so that the energy per nucleon can become minimum
at $\rho = \rho_{0}$; see Fig.~\ref{fig:diagram} for $c_{D}$ and $c_{E}$.
Toyokawa {\it et al.} localized the non-local chiral $g$ matrix into three-range Gaussian forms~\cite{Toyokawa:2017pdd}, using the localization method proposed
by the Melbourne group~\cite{von-Geramb-1991,Amos-1994}.
The resulting local $g$ matrix is called ``Kyushu $g$-matrix''.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.54\textwidth,clip]{fig1-eps-converted-to.pdf}
\caption{3NFs in NNLO.
Diagram (a) corresponds
to the Fujita-Miyazawa 2$\pi$-exchange 3NF \cite{PTP.17.360},
and diagrams (b) and (c) correspond to 1$\pi$-exchange and contact 3NFs.
The solid and dashed lines denote nucleon and pion propagations,
respectively, and filled circles and squares stand for vertices.
The strength of the filled-square vertex is often called $c_{D}$
in diagram (b) and $c_{E}$ in diagram (c).
}
\label{fig:diagram}
\end{center}
\end{figure}
Now, we show the folding model for nucleon-nucleus scattering.
The potential $U({\bfi R})$ consists of the direct and exchange parts~\cite{PRC.89.064611},
$U^{\rm DR}({\bfi R})$ and $U^{\rm EX}({\bfi R})$, defined by
\begin{subequations}
\begin{eqnarray}
U^{\rm DR}({\bfi R}) & = &
\sum_{\mu,\nu}\int \rho^{\nu}_{\rm T}({\bfi r}_{\rm T})
g^{\rm DR}_{\mu\nu}(s;\rho_{\mu\nu}) d
{\bfi r}_{\rm T}\ ,\label{eq:UD} \\
U^{\rm EX}({\bfi R}) & = &
\sum_{\mu,\nu}
\int \rho^{\nu}_{\rm T}({\bfi r}_{\rm T},{\bfi r}_{\rm T}+{\bfi s}) \nonumber \\
& &
\times g^{\rm EX}_{\mu\nu}(s;\rho_{\mu\nu}) \exp{[-i{\bfi K}({\bfi R}) \cdot {\bfi s}/M]}
d {\bfi r}_{\rm T}\ ,\label{eq:UEX}
\end{eqnarray}
\end{subequations}
where ${\bfi R}$ is the coordinate between a projectile (P) and
a target (${\rm T}$),
${\bfi s}=-{\bfi r}_{\rm T}+{\bfi R}$, and ${\bfi r}_{\rm T}$ is
the coordinate of the interacting nucleon from the center-of-mass of T.
Each of $\mu$ and $\nu$ denotes the $z$-component of isospin, i.e.,
$(1/2,-1/2)$ corresponds to (neutron, proton).
The nonlocal $U^{\rm EX}$ has been localized in Eq.~\eqref{eq:UEX}
with the local semi-classical approximation
\cite{NPA.291.299,*NPA.291.317,*NPA.297.206},
where {\bfi K}({\bfi R}) is the local momentum between P and T,
and $M= A/(1 +A)$ for the target mass number $A$;
see Ref.~\cite{Minomo:2009ds} for the validity of the localization.
The direct and exchange parts, $g^{\rm DR}_{\mu\nu}$ and
$g^{\rm EX}_{\mu\nu}$, of the $g$-matrix depend on the local density
\begin{eqnarray}
\rho_{\mu\nu}=\sigma^{\mu} \rho^{\nu}_{\rm T}({\bfi r}_{\rm T}+{\bfi s}/2)
\label{local-density approximation}
\end{eqnarray}
at the midpoint of the interacting nucleon pair, where $\sigma^{\mu}$ having ${\mu}=-1/2$
is the Pauli matrix of an
incident proton. As a way of taking the center-of-mass correction to
the D1S-GHFB+AMP densities, we use the method of Ref.~\cite{PRC.85.064613},
since the procedure is quite simple.
The direct and exchange parts, $g^{\rm DR}_{\mu\nu}$ and
$g^{\rm EX}_{\mu\nu}$, of the $g$-matrix, are described by~\cite{PRC.85.064613}
\begin{align}
&\hspace*{0.5cm} g_{\mu\nu}^{\rm DR}(s;\rho_{\mu\nu}) \nonumber \\
&=
\begin{cases}
\displaystyle{\frac{1}{4} \sum_S} \hat{S}^2 g_{\mu\nu}^{S1}
(s;\rho_{\mu\nu}) \hspace*{0.42cm} ; \hspace*{0.2cm}
{\rm for} \hspace*{0.1cm} \mu+\nu = \pm 1
\vspace*{0.2cm}\\
\displaystyle{\frac{1}{8} \sum_{S,T}}
\hat{S}^2 g_{\mu\nu}^{ST}(s;\rho_{\mu\nu}),
\hspace*{0.2cm} ; \hspace*{0.2cm}
{\rm for} \hspace*{0.1cm} \mu+\nu = 0
\end{cases}
\\
&\hspace*{0.5cm}
g_{\mu\nu}^{\rm EX}(s;\rho_{\mu\nu}) \nonumber \\
&=
\begin{cases}
\displaystyle{\frac{1}{4} \sum_S} (-1)^{S+1}
\hat{S}^2 g_{\mu\nu}^{S1} (s;\rho_{\mu\nu})
\hspace*{0.34cm} ; \hspace*{0.2cm}
{\rm for} \hspace*{0.1cm} \mu+\nu = \pm 1 \vspace*{0.2cm}\\
\displaystyle{\frac{1}{8} \sum_{S,T}} (-1)^{S+T}
\hat{S}^2 g_{\mu\nu}^{ST}(s;\rho_{\mu\nu})
\hspace*{0.2cm} ; \hspace*{0.2cm}
{\rm for} \hspace*{0.1cm} \mu+\nu = 0 ~~~~~
\end{cases}
\end{align}
where $\hat{S} = {\sqrt {2S+1}}$ and $g_{\mu\nu}^{ST}$ are
the spin-isospin components of the $g$-matrix; see Ref.~\cite{PTEP.2018.023D03} for the explicit form of
$g^{\rm DR}_{\mu\nu}$ and $g^{\rm EX}_{\mu\nu}$.
As for Sn isotopes, the proton and neutron densities, $\rho_{\rm p}(r)$ and $\rho_{\rm n}(r)$, are calculated with D1S-GHFB+AMP~\cite{Tagami:2019svt}.
As a way of taking the center-of-mass correction to the D1S-GHFB+AMP densities,
we use the method of Ref.~\cite{Sumi:2012fr}, since the procedure is quite simple.
\subsection{Scaling procedure of neutron density}
The neutron density $\rho_{\rm n}(r)$ is scaled from the D1S-GHFB+AMP one.
We can obtain the scaled density $\rho_{\rm scaling}({\bfi r})$ from the original density $\rho({\bfi r})$ as
\begin{eqnarray}
\rho_{\rm scaling}({\bfi r})=\frac{1}{\alpha^3}\rho({\bfi r}/\alpha)
\end{eqnarray}
with a scaling factor
\begin{eqnarray}
\alpha=\sqrt{ \frac{\langle {\bfi r}^2 \rangle_{\rm scaling}}{\langle {\bfi r}^2 \rangle}} .
\end{eqnarray}
We scale the neutron density so that the
$f \times \sigma_{\rm R}({\rm D1S})$ may reproduce the data ($\sigma_{\rm R}({\rm exp})$),
where $\sigma_{\rm R}({\rm D1S})$ is the result of D1S-GHFB+AMP and
$f$ is the average of
$\sigma_{\rm R}({\rm exp})/\sigma_{\rm R}({\rm D1S})$ over $E_{\rm lab}$.
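A schematic numerical sketch of this scaling procedure is given below (NumPy); the radial grid and cross-section values are placeholders rather than our D1S-GHFB+AMP densities or the measured data.
\begin{verbatim}
# Minimal sketch of the neutron-density scaling procedure (placeholder values).
import numpy as np

def rms_radius(r, rho):
    """Root-mean-square radius of a spherical density rho(r) on a radial grid r."""
    norm = np.trapz(4.0 * np.pi * r**2 * rho, r)
    return np.sqrt(np.trapz(4.0 * np.pi * r**4 * rho, r) / norm)

def scale_density(r, rho, alpha):
    """rho_scaling(r) = rho(r / alpha) / alpha**3, evaluated on the same grid."""
    return np.interp(r / alpha, r, rho, right=0.0) / alpha**3

# Average renormalization factor f = <sigma_R(exp) / sigma_R(D1S)> over E_lab.
sigma_exp = np.array([1588.0, 1625.0, 1610.0])   # placeholder data [mb]
sigma_d1s = np.array([1520.0, 1555.0, 1540.0])   # placeholder calculations [mb]
f = np.mean(sigma_exp / sigma_d1s)

# The scaling factor alpha is then tuned so that the folding-model sigma_R,
# computed with scale_density(r, rho_n, alpha), reproduces f * sigma_R(D1S);
# r_n follows as rms_radius of the scaled density, and r_skin = r_n - r_p.
\end{verbatim}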
\section{Results}
\label{Results}
Figure~\ref{Fig-RXsec-p+120Sn} shows reaction cross sections
$\sigma_{\rm R}$ for p+$^{120}$Sn scattering as a function of $E_{\rm lab}$.
The $\sigma_{\rm R}({\rm D1S}) $ calculated with D1S-GHFB+AMP
undershoots the data~\cite{INGEMARSSON1999341,R.F.CARLSON:1995}
($\sigma_{\rm R}({\rm exp}) $)
in $30.2 \leq E_{\rm lab} \leq 65.5$~MeV, but $f \times \sigma_{\rm R}({\rm D1S}) $ almost agrees
with the data within error bars, where $f$ is the average of
$f(E_{\rm lab}) \equiv \sigma_{\rm R}({\rm exp})/\sigma_{\rm R}({\rm D1S})$ over $E_{\rm lab}$.
In this case, $f$ is 1.04711.
As a result of the scaling procedure mentioned above,
we can obtain $r_{\rm m}=4.655 \pm 0.021$~fm, leading to $r_{\rm skin}({\rm exp})=0.124 \pm 0.021$~fm; see Table \ref{result values}.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth,clip]{120Sn-sigma-eps-converted-to.pdf}
\caption{
$E_{\rm lab}$ dependence of reaction cross sections $\sigma_{\rm R}$
for $p$+$^{120}$Sn scattering.
Open circles stand for the results of D1S-GHFB+AMP.
Closed circles correspond to $f \times \sigma_{\rm R}({\rm D1S}) $.
The data are taken from Refs.~\cite{INGEMARSSON1999341,R.F.CARLSON:1995}.
}
\label{Fig-RXsec-p+120Sn}
\end{center}
\end{figure}
Figure~\ref{Fig-RXsec-p+120Sn-e-skin} shows the skin value
$r_{\rm skin}(E_{\rm lab})$ for each $E_{\rm lab}$
for $p$+$^{120}$Sn scattering in
$30.2 \leq E_{\rm lab} \leq 65.5$~MeV.
The $r_{\rm skin}(E_{\rm lab})$ fluctuate between $-0.1$~fm and 0.3~fm.
This indicates that taking the weighted mean is important.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth,clip]{e-dep-skin-eps-converted-to.pdf}
\caption{
$r_{\rm skin}({\rm exp})$ for each $E_{\rm lab}$ for $p+^{120}$Sn scattering.
Closed circles with error bars show $r_{\rm skin}({\rm exp})$ for each $E_{\rm lab}$.
}
\label{Fig-RXsec-p+120Sn-e-skin}
\end{center}
\end{figure}
Figure~\ref{Fig-RXsec-p+120Sn-F} shows $E_{\rm lab}$ dependence of $f(E_{\rm lab})$ for $p$+$^{120}$Sn scattering. The $E_{\rm lab}$ dependence of $f(E_{\rm lab})$ is not smooth,
because the $\sigma_{\rm R}$ calculated with D1S-GHFB+AMP are smooth
for $E_{\rm lab}$ dependence but the central values of the data are not.
Note that the factor $f=1.04711$ is obtained by averaging $f(E_{\rm lab})$ over
$30.2 \leq E_{\rm lab} \leq 65.5$~MeV.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth,clip]{fig3-f-eps-converted-to.pdf}
\caption{
$E_{\rm lab}$ dependence of $f(E_{\rm lab})$ for $p+^{120}$Sn scattering.
Closed circles show $f(E_{\rm lab})$ at each $E_{\rm lab}$. }
\label{Fig-RXsec-p+120Sn-F}
\end{center}
\end{figure}
The same procedure is taken for p+$^{116,118,122,124}$Sn scattering.
Our results and $f$ are shown in Table \ref{result values}.
The $r_{\rm p}$ of D1S-GHFB+AMP agree with those of the electron scattering, where the charge radii are taken from Ref.~\cite{Angeli:2013epw}.
The values of $r_{\rm p}$ are shown in Table \ref{reference values}.
\squeezetable
\begin{table}[htb]
\begin{center}
\caption
{Values of $f$, $r_{\rm m}$, $r_{\rm n}$, $r_{\rm skin}$.
The values of $r_{\rm p}$ are shown in Table \ref{reference values}.
The radii are shown in units of fm.
}
\begin{tabular}{cccccc}
\hline\hline
& Ref. of data & $f$ & $r_{\rm m}$ & $r_{\rm n}$ & $r_{\rm skin}$ \\
\hline
$^{116}$Sn & \cite{INGEMARSSON1999341,R.F.CARLSON:1995} & $1.02447$ & $4.622
\pm 0.021$ & $4.672 \pm 0.021$ & $0.118 \pm 0.021$ \\
$^{118}$Sn & \cite{INGEMARSSON1999341,R.F.CARLSON:1995} & $1.05118$ & $4.634
\pm 0.021$ & $4.681 \pm 0.021$ & $0.112 \pm 0.021$ \\
$^{120}$Sn & \cite{INGEMARSSON1999341,R.F.CARLSON:1995}& $1.04711$ & $4.655
\pm 0.021$ & $4.706 \pm 0.021$ & $0.124 \pm 0.021$ \\
$^{122}$Sn & \cite{R.F.CARLSON:1995} & $1.04881$ & $4.667 \pm 0.024$ & $4.717
\pm 0.024$ & $0.122 \pm 0.024$ \\
$^{124}$Sn & \cite{INGEMARSSON1999341,R.F.CARLSON:1995} & $1.06002$ & $4.699
\pm 0.022$ & $4.761 \pm 0.022$ & $0.156 \pm 0.022$ \\
\hline
\end{tabular}
\label{result values}
\end{center}
\end{table}
Figure \ref{Fig-skins-compare} shows skin values as a function of $S_{\rm p}-S_{\rm n}$.
Our skin values calculated with D1S-GHFB+AMP are compared with
our previous work \cite{Matsuzaki:2021hdm} with SLy7, where the SLy7 parameter set is an improved
version of the widely used SLy4~\cite{Chabanat:1997un}. The data of
measured $\sigma_{\rm R}$ for $^{4}$He scattering on $^{116,120,124}$Sn targets have larger errors than the data for $p$+$^{116,118,120,122,124}$Sn scattering. Consequently, our results have smaller errors than the previous results. This indicates that the present values are more reliable.
As for $^{120}$Sn, in addition, the present value
$r_{\rm skin}=0.124 \pm 0.021$~fm
is consistent with $r_{\rm skin}=0.148 \pm 0.034$~fm~\cite{Hashimoto:2015ema} deduced from $\alpha_{\rm D}$.
Our values are near the lower bound of the previous result for $^{116}$Sn, and
near the central value of the previous result for $^{124}$Sn.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth,clip]{fig5-compare-eps-converted-to.pdf}
\caption{
Skin values as a function of $S_{\rm p}-S_{\rm n}$.
Open squares stand for the results of this work (TW) for $^{116,118,120,122,124}$Sn.
The symbol ``$^{4}$He scattering'' stands for our previous work \cite{Matsuzaki:2021hdm}
for $^{4}$He scattering on $^{116,120,124}$Sn targets.
}
\label{Fig-skins-compare}
\end{center}
\end{figure}
Finally, we summarize skin values determined from measured $\sigma_{\rm R}$ and those by using electroweak interaction.
Figure \ref{Fig-skins} shows skin values as a function of $S_{\rm p}-S_{\rm n}$,
where $S_{\rm p}$ ($S_{\rm n}$) is the proton (neutron) separation energy.
The skin values $r_{\rm skin}(\sigma_{\rm R})$ determined from measured $\sigma_{\rm R}$
for $^{116,118,120,122,124}$Sn are compared with the data of PREX2~\cite{Adhikari:2021phr},
$^{116,118,120,122,124}$Sn~\cite{Krasznahorkay:1999zz,Hashimoto:2015ema},
CREX~\cite{CREX:2022kgg}. As for Sn isotopes,
our results of Table \ref{result values} are consistent with the previous experimental skin-values of Refs.~\cite{Krasznahorkay:1999zz,Hashimoto:2015ema}.
Our value $r_{\rm skin}^{208}({\rm exp})=0.278 \pm 0.035$~fm of Ref.~\cite{Tagami:2020bee} agrees with $r_{\rm skin}^{208}({\rm PREX2})$.
Now we make a qualitative discussion.
Assuming
a linear relation between $r_{\rm skin}$ and $\delta=S_{\rm p}-S_{\rm n}$ and performing a $\chi^2$ fit to
our central skin values for $^{116,118,120,122,124}$Sn, we
obtain $r_{\rm skin}=0.0091\delta+0.1116$.
When we extrapolate our central skin-values for $^{116,118,120,122,124}$Sn by using
the linear relation, we can obtain $r_{\rm skin}=0.165$~fm for $^{48}$Ca.
In fact, we have already determined
$r_{\rm skin}^{48}({\rm exp})=0.158 \pm (0.023)_{\rm exp} \pm (0.012)_{\rm th}~{\rm fm}$
~\cite{TAGAMI2022105155} from $p$+$^{48}$Ca scattering
and $^{48}$Ca+$^{12}$C scattering.
These values are near the upper bound of CREX.
As for $^{40}$Ca, the linear relation yields $r_{\rm skin}^{40}({\rm exp})=0.045$~fm. The value
is near the upper bound of our previous value $r_{\rm skin}=-0.035 \pm 0.075$~fm~\cite{Matsuzaki:2021hdm} determined from $^{4}$He+ $^{40}$Ca scattering.
The skin values determined from $\sigma_{\rm R}$ for $^{116,118,120,122,124}$Sn and $^{40,48}$Ca
are near the linear line; see the linear line of Fig.~\ref{Fig-skins}.
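As an illustrative sketch of this fit (NumPy), the central skin values of Table~\ref{result values} can be regressed on $\delta$; the $\delta$ values below are placeholders, so the printed coefficients reproduce $r_{\rm skin}=0.0091\delta+0.1116$ only when the actual separation energies are used.
\begin{verbatim}
# Minimal sketch of the linear fit r_skin = a*delta + b to the central values.
# The delta values are placeholders; r_skin values are from Table 2.
import numpy as np

delta  = np.array([-4.0, -5.0, -6.0, -7.0, -8.0])        # S_p - S_n [MeV], placeholders
r_skin = np.array([0.118, 0.112, 0.124, 0.122, 0.156])   # central values [fm]

a, b = np.polyfit(delta, r_skin, deg=1)                  # least-squares line
print(f"r_skin = {a:.4f} * delta + {b:.4f}")
\end{verbatim}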
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth,clip]{skins-eps-converted-to.pdf}
\caption{
Skin values as a function of $S_{\rm p}-S_{\rm n}$.
The symbol ``SDR" shows the results~\cite{Krasznahorkay:1999zz} of SDR in Sn isotopes.
The symbol `` $\alpha_D$'' means the results of the $E1$ polarizability experiment ($E1$pE)
for $^{120}$Sn~\cite{Hashimoto:2015ema}.
The symbol ``PREX'' stands for the result of $r_{\rm skin}^{208}({\rm PREX2})$, whereas the symbol ``CREX'' corresponds to the result of $r_{\rm skin}^{48}({\rm CREX})$.
Open squares stand for the results of this work (TW) for $^{116,118,120,122,124}$Sn.
The symbol ``$\sigma_{\rm R}$'' stands for our previous works of Refs. \cite{Tagami:2020bee,Matsuzaki:2021hdm}.
The linear line shows $r_{\rm skin}=0.0091\delta+0.1116$.
The data (closed circles with error bars) are taken from
Refs.~\cite{Krasznahorkay:1999zz,Hashimoto:2015ema,Adhikari:2021phr,CREX:2022kgg}.
}
\label{Fig-skins}
\end{center}
\end{figure}
\noindent
\begin{acknowledgments}
We would like to thank Toyokawa and Fukui for their contribution.
\end{acknowledgments}
|
{
"arxiv_id": "2302.08642",
"language": "en",
"timestamp": "2023-02-20T02:04:37",
"url": "https://arxiv.org/abs/2302.08642",
"yymm": "2302"
} | \section{INTRODUCTION}
Level 3-5 of driving automation~\citep{SAE_j3016b_2018} are applied to cars and miniaturized personal mobility vehicles~(PMVs)~\citep{morales2017social,liu2020_what_timeing}.
Autonomous PMVs~(APMVs) are expected to be widely used in mixed traffic and shared space conditions, such as sidewalks, shopping centers, stations, and school campuses~\citep{Yoshinori2013, ali2019smart,liu2022implicit}.
Additionally, drivers (passengers) of AVs (including APMVs) are allowed to perform non-driving tasks during autonomous driving~\citep{sivak2015motion,wada2016motion,diels2016self,lihmi}, e.\,g.,\ reading a book~\citep{isu2014quantitative,Sato2022,LIU2022_IV}, watching videos ~\citep{kato2006study,isu2014quantitative}, and playing games~\citep{kuiper2018looking,li2021queasy}.
Unfortunately, the above usage scenarios pose a potential risk of motion sickness for passengers on the APMV by the following reasons:
\begin{itemize}
\item[1)] In these mixed traffic environments, other traffic participants, such as pedestrians, bicycles, and other vehicles, will frequently interact with AVs (including APMVs)~\citep{li2021autonomous,LIU2022_IV}. Passengers may be prone to motion sickness, owing to the lack of control of the APMV and to sensory conflict~\citep{sivak2015motion,wada2016motion,diels2016self,ISKANDER2019716}, when the APMV performs avoidance behaviors frequently.
\item[2)] Motion sickness may occur with a high probability when the visual and vestibular systems are stimulated with in-congruent information~\citep{reason1978motion2, diels2016self}.
\end{itemize}
Based on the aforementioned issues, preventing motion sickness can be considered an important challenge for the popularity and widespread use of APMVs.
To address these issues, various computational models have been used to evaluate or estimate the severity of motion sickness.
\subsection{Related works}
The sensory conflict~(SC) theory is widely used to explain the mechanism of motion sickness; it postulates that motion sickness is caused by conflicts between sensed signals and the signals expected on the basis of previous experience~\citep{reason1978motion2}.
Oman proposed a mathematical model of the SC theory based on an observer or optimal estimation theoretic framework, in which our motion perception was assumed to be influenced or corrected by the discrepancy between signals from sensory organs and those calculated by internal models in our central nervous systems, and the discrepancy is regarded as a conflict in the SC theory~\citep{oman1990motion}.
Based on the SC theory, \citet{bles1998motion} proposed the subjective vertical conflict~(SVC) theory, which postulates that motion sickness is caused by conflict between the vertical directions sensed by the sensory organs and those estimated by the central nervous system or its internal models.
Moreover, \citet{bos1998modelling} proposed the first computational motion sickness model based on the SVC theory.
This model simulates the process of motion sickness caused by conflicts between the otolith organs~(OTO) and their internal models using one-degree-of-freedom~(1~DoF) vertical acceleration inputs.
To express motion sickness caused by multiple degrees of freedom of head movement, including head rotation, \citet{kamiji2007modeling} extended the 1~DoF SVC model~\citep{bos1998modelling} to a six-degrees-of-freedom~(6~DoF) SVC model, which includes the OTO and semi-circular canals~(SCC) in the vestibular system to accept three-dimensional~(3D) acceleration and 3D angular velocity inputs.
Moreover, based on the \citet{kamiji2007modeling}'s 6~DoF SVC model, \citet{Inoue2022} optimized the structures of conflict feedback integration and parameters to increase the accuracy of the 6~DoF SVC model in presenting the tendency of motion sickness and motion perception of verticality.
Furthermore, \citet{bos2008theory} suggested that visually induced motion sickness can also be explained by SVC theory and proposed a model framework that includes visual information such as visual angular velocities and visual vertical~(VV) information.
However, this study did not consider a concrete method for the application of the experimental data.
As a computational model of motion sickness that can address visual–vestibular interactions, \citet{braccesi2011motion} proposed a motion sickness model based on the interaction between the OTO and visual acceleration.
However, this model does not consider the rotation of the head as perceived by the SCC and visual perception.
To address this issue, \citet{wada2020computational} expanded the original 6~DoF SVC model \citet{kamiji2007modeling} for vestibular motion sickness to include visual-based angular velocity perception using the optical flow method from camera images.
Recalling SVC theory, motion sickness is primarily caused by a conflict of vertical perception between sensor organs and their internal models.
Moreover, some medical studies point to a correlation between impaired vertical visual perception and motion sickness~\citep{yardley1990motion, michelson2018assessment, guerraz2001visual}.
Therefore, based on \citet{kamiji2007modeling}'s work, we proposed a 6~DoF~SVC-VV model that represents motion sickness owing to the vertical perception from interactions of visual-vestibular systems in our preliminary work~\citep{LIU2022_IV}.
However, in this pre-study, we did not compare the predicted results using the model with the participants' feelings of motion sickness.
Moreover, we only verified the accuracy of the proposed visual vertical prediction method in a wide outdoor environment; its performance in complex indoor environments remains uncertain.
\subsection{Purposes and contributions}
The purposes of this study are as follows.
\begin{itemize}
\item[1)] Confirming that frequent avoidance behaviors of an APMV cause its passengers to experience motion sickness;
\item[2)] Proposing a computational motion sickness model based on the SVC theory that addresses vertical perception and visual-vestibular interaction.
\end{itemize}
The contributions of this study are as follows.
\begin{itemize}
\item[1)] We verified that passengers experienced motion sickness while riding the APMV, particularly while working on it, by simulating frequent pedestrian avoidance scenarios of the APMV in a participant experiment.
\item[2)] A 6~DoF~SVC-VV model was proposed based on the conventional 6~DoF SVC model~\citep{Inoue2022}, which represents motion sickness owing to the vertical perception from interactions of visual-vestibular systems.
\item[3)] We verified that the proposed 6~DoF~SVC-VV model can represent the increase in motion sickness caused by passengers working with tablet devices while riding in the APMV.
\end{itemize}
\section{MOTION SICKNESS MODELING WITH VISUAL VERTICAL ESTIMATION}
\begin{figure}[tb]
\centering
\includegraphics[width=1\linewidth]{img/model.pdf}
\caption{Proposed 6~DoF~SVC-VV model: motion sickness computational model considering the vertical sensed using vestibular-visual interactions.}
\label{fig:model}
\end{figure}
In this study, we propose a 6~DoF SVC-VV model to predict the motion sickness incidence~(MSI) considering vestibular-visual interaction, as shown in Fig.~\ref{fig:model}.
This model adds the visual vertical~(VV) estimation (shown as red paths) to the 6~DoF SVC model proposed by \citet{Inoue2022} (shown as black paths).
Moreover, we improve this model to reduce the negative impact of measurement errors in the actual experiment.
In this section, the methods for modeling the visual vertical perception, vestibular system, and interactions with their internal models are presented separately.
\subsection{Visual Vertical Prediction Modeling}
Considering that the visual vertical~(VV) is thought to be derived from signals presumed to be parallel or perpendicular to vertical objects, such as buildings or the horizon in the environment~\citep{clark2019mathematical}, a simple image processing method is proposed to estimate the visual vertical by analyzing the directions of the edges of objects in images in our pre-study~\citep{LIU2022_IV}.
In this pre-study, we only considered the usage of APMV in open outdoor areas; thus, the visual vertical direction was calculated based on horizontal edge features such as the horizon and horizontal edge of the building.
However, in indoor scenes with various obstacles such as tables, chairs, and benches, the horizontal edge features change with the viewing angle because of the visual perspective,
particularly in frequent avoidance behaviors of APMV.
Therefore, in this study we use longitudinal edge features, e.\,g.,\ the edges of columns and window frames, to calculate the visual vertical direction.
An example of visual vertical prediction and a visualization of the predicted visual vertical are shown in Fig.~\ref{fig:VVP}.
The details of the visual vertical prediction are described as follows.
The \framebox{VVP} block shown in Fig.~\ref{fig:model} represents the process of the visual system predicting the visual vertical from an image.
The proposed visual vertical prediction method is shown in Algorithm~\ref{VV}.
The input is a color image $\bm{I}_t^{color}\in\mathbb{R}^{H\times W \times3}$ of the $t$-th frame captured by a camera attached to the human head to imitate human visual input (see Fig.~\ref{fig:VVP}~(a)).
Then, $\bm{I}_t^{color}$ is preprocessed by converting to a gray-scale image and normalizing through the global maximum and minimum (Algorithm~\ref{VV}, steps 1-2).
Subsequently, Sobel operators are used to compute the gradients in the transverse $\bm{\nabla x}_{t}$ and longitudinal directions $\bm{\nabla y}_{t}$ to detect the edges of the objects in the image (Algorithm~\ref{VV}, steps 3-4).
The gradient magnitudes $M_t$ and angles $\Theta_t$ can be calculated from $\bm{\nabla x}_{t}$ and $\bm{\nabla y}_{t}$ in steps 5-6 of Algorithm~\ref{VV}.
Here, $\odot$ and $\oslash$ denote the Hadamard (element-wise) product and division, respectively.
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\linewidth]{img/VVP.pdf}
\caption{An example shows the process of visual vertical prediction in (a)-(c). Visualization results of the predicted visual vertical are shown in (d) and (e).}
\label{fig:VVP}
\vspace{-2mm}
\end{figure}
\begin{algorithm} [htb]
\setstretch{1.2}
\caption{Visual vertical direction estimation method for VVP block in Fig.~\ref{fig:model}.}
\label{VV}
\vspace{-3mm}
\begin{multicols}{2}
{\bf Input:}
$\bm{I}^{color}_{t} \in \mathbb{R}^{H \times W \times 3}$ and $\theta^{vv}_0=90$, \\
\hspace*{9mm} where $H=400, W=1000, t\in\{1,\cdots\, T\}$\\
{\bf Output:} $\theta^{vv}$
\begin{algorithmic} [1]
\State $\bm{I}^{gray}_{t} \in \mathbb{R}^{H \times W}$ $\gets$ Gray($\bm{I}^{color}_{t}$)
\State $\bm{I}^{gray}_{t} \in \mathbb{R}^{H \times W}$ $\gets$ Normalization$_{min}^{max}$($\bm{I}^{gray}_{t}$)
\State $\bm{\nabla x}_{t}\in \mathbb{R}^{H \times W}$ $\gets$ Sobel$_x$($\bm{I}^{gray}_{t}$)
\State $\bm{\nabla y}_{t}\in \mathbb{R}^{H \times W}$ $\gets$ $ $Sobel$_y$( $\bm{I}^{gray}_{t}$)
\State $\bm{M}_{t}=(\bm{\nabla x}_{t}\odot\bm{\nabla x}_{t}+\bm{\nabla y}_{t}\odot\bm{\nabla y}_{t})^{\odot 1/2}$
\State $\Theta_{t}= (180 / \pi)\arctan(\bm{\nabla x}_{t}\oslash\bm{\nabla y}_{t})$
\For {$i = 0$ to $H$}
\For {$j = 0$ to $W$}
\State \hspace*{-4mm}$ (\Theta_{t})_{i,j}\gets
\begin{cases}
(\Theta_{t})_{i,j} & (0\leq(\Theta_{t})_{i,j}<180)\\
(\Theta_{t})_{i,j} -180& (180\leq(\Theta_{t})_{i,j}<360)\\
0 & ((\Theta_{t})_{i,j}=360)
\end{cases}$
\EndFor
\EndFor
\State $\bm{M}_{t}$ $\gets$ Normalization$_{min}^{max}$($\bm{M}_{t}$)
\For {$d = 0$ to 179}
\State$(\bm{\theta}^{hist}_{t})_d \gets \sum_{i=0}^{H}\sum_{j=0}^{W}\bm{1}_d[(\Theta_{t})_{i,j}] (\bm{M}_{t})_{i,j}$,
\Statex \hspace*{4mm} where $\bm{\theta}^{hist}_{t}\in\mathbb{N}^{180}$
\EndFor
\State $\bm{c}^{sort}_{t}\in \mathbb{N}^{121}\gets$Sort($(\bm{\theta}^{hist}_{t})_{29:149}$)
\State $\bm{\theta}^{sort}_{t}\in \mathbb{N}^{121}\gets$ argSort$((\bm{\theta}^{hist}_{t})_{29:149})$
\State $\bm{c}^{best3}_{t}\in \mathbb{R}^{3}\gets(\bm{c}^{sort}_{t})_{119:121}/\sum_{i=119}^{121}(\bm{c}^{sort}_{t})_i$
\State $\bm{\theta}^{best3}_{t}\in \mathbb{N}^{3}\gets(\bm{\theta}^{sort}_{t})_{119:121}$
\State $\theta^{vv}_t\gets \bm{\theta}^{best3}_{t} \cdot \bm{c}^{best3}_{t}+30$
\State $\theta^{vv}_t\gets
\begin{cases}
0.7~ \theta^{vv}_t+0.3~\theta^{vv}_{t-1} & (|\theta^{vv}_t-\theta^{vv}_{t-1}|\leq \ang{4})\\
0.2~ \theta^{vv}_t+0.8~\theta^{vv}_{t-1} & (|\theta^{vv}_t-\theta^{vv}_{t-1}|>\ang{4})\\
\end{cases}$
\State $\bm{vv}_t =\begin{bmatrix} vv_t^x\\vv_t^y\\vv_t^z\end{bmatrix}
\gets\begin{bmatrix} 9.81~\cos(\theta^{vv}_t~\pi / 180)\\
9.81~\sin(\theta^{vv}_t~\pi / 180)\\
0\end{bmatrix}$
\State $\bm{vv}(t)\gets$ZOH$(\bm{vv}_t)$
\end{algorithmic}
\end{multicols}
\vspace{-3mm}
\end{algorithm}
\clearpage
In $\Theta_t$, we map angles in [\ang{180}, \ang{360}] to [\ang{0}, \ang{180}), because the angle of a person's neck usually does not exceed \ang{180} (Algorithm~\ref{VV}, steps 8-11).
Further, the larger the gradient magnitude, the more likely the pixel lies on an edge (see Fig.~\ref{fig:VVP}~(b)).
The gradient magnitudes $\bm{M}_{t}$ are normalized through the global maximum and minimum to $[0,1]$ (Algorithm~\ref{VV}, step 12).
Next, as shown in Fig.~\ref{fig:VVP}~(c), the histogram of gradient angles is calculated using an indicator function from $\Theta_t$ with its weight matrix $\bm{M}_{t}$ (Algorithm~\ref{VV}, steps 13-15).
The number of bins in the histogram is set to 180.
After calculating the histogram, the gradient's angles in the range [\ang{30}, \ang{150}] are sorted in ascending order by their counts (Algorithm~\ref{VV}, steps 16-17).
We assume that the passenger head will not rotate out of the range [\ang{30}, \ang{150}] in most driving situations.
Subsequently, the best three angles~$\bm{\theta}^{best3}_{t}$ are selected based on the highest three counts~$\bm{c}^{best3}_{t}$ (Algorithm~\ref{VV}, steps 18-19).
Then, the direction of the visual vertical $\theta^{vv}_t$ is calculated as Step 20 of Algorithm~\ref{VV}, in which $\bm{c}^{best3}_{t}$ can be considered as the weight of $\bm{\theta}^{best3}_{t}$.
Meanwhile, the direction of the visual vertical $\theta^{vv}_t$ is also affected by the direction of the visual vertical in the previous frame, that is, $\theta^{vv}_{t-1}$.
As shown in Step 21 of Algorithm~\ref{VV}, we choose two different strategies to update $\theta^{vv}_t$ to reduce the instability due to the prediction errors.
A visualization of $\theta^{vv}_t$ is shown in Fig.~\ref{fig:VVP}~(d) and (e).
In Step 23 of Algorithm~\ref{VV}, the visual vertical vector $\bm{vv}=[vv_x, vv_y, vv_z]^T$ is calculated from $\theta^{vv}_t$ with a fixed L2 norm of $9.81~[m/s^2]$.
Notably, the value on the z-axis of $\bm{vv}$, i.\,e.,\ $vv_z$, should be $0$ because $\theta^{vv}$ is the rotation angle on the x-y plane of the head coordinate system.
In Step 24 of Algorithm~\ref{VV}, because the 6~DoF SVC model is a continuous-time system whereas $\bm{vv}_t$ is a discrete-time variable estimated from an image,
a zero-order hold (ZOH) is used to convert $\bm{vv}_t$ into a continuous signal $\bm{vv}(t)$.
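For concreteness, the core of Algorithm~\ref{VV} (gradient computation, magnitude-weighted angle histogram, weighted average of the best three angles, and temporal smoothing) can be sketched in Python as follows. This is a minimal sketch assuming OpenCV and NumPy; the function name and the small numerical-stability constants are illustrative, and the ZOH conversion to $\bm{vv}(t)$ is omitted.
\begin{verbatim}
import cv2
import numpy as np

def predict_visual_vertical(img_color, theta_prev=90.0):
    """Sketch of the VVP block: estimate the visual vertical direction
    (in degrees) from one BGR camera frame, following Algorithm 1."""
    gray = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY).astype(np.float64)
    gray = (gray - gray.min()) / (gray.max() - gray.min() + 1e-9)

    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)          # transverse gradient
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)          # longitudinal gradient
    mag = np.sqrt(gx * gx + gy * gy)
    ang = np.degrees(np.arctan2(gx, gy)) % 360.0    # gradient angle in [0, 360)
    ang = np.where(ang >= 180.0, ang - 180.0, ang)  # fold to [0, 180)

    mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-9)

    # Magnitude-weighted histogram of angles with 180 one-degree bins.
    hist, _ = np.histogram(ang, bins=180, range=(0.0, 180.0), weights=mag)

    # Keep only [30 deg, 150 deg] and take the three strongest bins.
    sub = hist[30:151]
    best = np.argsort(sub)[-3:]                     # indices of the top-3 bins
    weights = sub[best] / sub[best].sum()
    theta_vv = float(np.dot(best, weights)) + 30.0  # weighted mean angle [deg]

    # Temporal smoothing with the previous frame's estimate.
    if abs(theta_vv - theta_prev) <= 4.0:
        theta_vv = 0.7 * theta_vv + 0.3 * theta_prev
    else:
        theta_vv = 0.2 * theta_vv + 0.8 * theta_prev

    # Visual vertical vector with a fixed L2 norm of 9.81 m/s^2.
    vv = 9.81 * np.array([np.cos(np.radians(theta_vv)),
                          np.sin(np.radians(theta_vv)), 0.0])
    return theta_vv, vv
\end{verbatim}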
After block \framebox{VVP}, block \framebox{{\textbf{VIS$_g$}}} transfers $\bm{vv}$ to the sensed visual vertical $\bm{vv}_s$.
Note that the vertical signals $\bm{vv}_s$ and $\bm{v}_s$ sensed by the visual and vestibular systems are assumed to be three-dimensional.
For simplicity, a $3 \times 3$ identity matrix is used as the transform matrix $\bm{T}_{vis}$ in this study.
Therefore,
\begin{eqnarray}
\bm{vv}_s &=& \bm{T}_{vis}~\bm{vv} \nonumber\\
&=& \begin{bmatrix} 1 & 0 &0\\ 0 & 1 & 0\\ 0 & 0 &1 \end{bmatrix} \begin{bmatrix} vv_x\\ vv_y\\ vv_z\end{bmatrix}.
\end{eqnarray}
\subsection{Vestibular System Modeling}
The vestibular system is mainly composed of otolith organs and semi-circular canals.
As shown in Fig.~\ref{fig:model}, the otolith organ is modeled as a \framebox{\textbf{OTO}} block.
Its input is the gravity-inertial acceleration (GIA) in 3 DoF, which is $\bm{f=a+g}$.
Here, $\bm{a}$ is the inertial acceleration, and $\bm{g}$ is the gravitational acceleration (upward).
Referring to \citep{kamiji2007modeling}, in the \framebox{\textbf{OTO}} block, a $3 \times 3$ identity matrix is used to transform $\bm{f}$ into the sensed GIA $\bm{f}_s$ as
\begin{equation}
\bm{f}_s=\begin{bmatrix} 1 & 0 &0\\ 0 & 1 & 0\\ 0 & 0 &1 \end{bmatrix}\bm{f}.
\end{equation}
Further, the \framebox{\textbf{SCC}} block models the semi-circular canals.
It receives angular velocity $\bm{\omega}$ in 3 DoF and transforms it into the sensed angular velocity $\bm{\omega}_s$ using a transfer function~\citep{merfeld1995modeling}:
\begin{equation}
\bm{\omega}_s=\frac{\tau_a\tau_d s^2}{(\tau_a s+1)(\tau_d s+1)}\bm{\omega}.
\label{Eq:SCC}
\end{equation}
Subsequently, block \framebox{\textbf{LP}} represents the otolith-canal interaction that estimates the sensed vertical signal $\bm{v}_s$ from $\bm{f}_s$ and $\bm{\omega}_s$ using the following update law~\citep{bos2002}:
\begin{equation}
\frac{d\bm{v}_s}{dt}=\frac{1}{\tau}(\bm{f}_s-\bm{v}_s)-\bm{\omega}_s \times \bm{v}_s.
\end{equation}
Moreover, the sensed inertial acceleration $\bm{a}_s$ can be calculated as
\begin{equation}
\bm{a}_s=\bm{f}_s-\bm{v}_s.
\end{equation}
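As an illustration, the OTO, SCC, and LP blocks can be simulated together by filtering each axis of the angular velocity with Eq.~\ref{Eq:SCC} and integrating the vertical update law with a forward-Euler step. The following is a minimal sketch assuming NumPy and SciPy; the per-axis treatment of the SCC dynamics and the initial value of $\bm{v}_s$ are assumptions, and the time constants follow the values later listed in Table~\ref{tab:parameter}.
\begin{verbatim}
import numpy as np
from scipy import signal

TAU, TAU_A, TAU_D = 2.0, 190.0, 7.0   # illustrative time constants [s]

# SCC transfer function  tau_a*tau_d*s^2 / ((tau_a*s + 1)(tau_d*s + 1)),
# applied independently to each axis of the head angular velocity.
scc_tf = signal.TransferFunction(
    [TAU_A * TAU_D, 0.0, 0.0],
    [TAU_A * TAU_D, TAU_A + TAU_D, 1.0])

def simulate_vestibular(f, omega, dt):
    """f, omega: (N, 3) arrays of GIA [m/s^2] and angular velocity [rad/s].
    Returns the sensed vertical v_s and sensed acceleration a_s."""
    n = f.shape[0]
    t = np.arange(n) * dt

    f_s = f.copy()                        # OTO: identity transform, f_s = f

    # SCC: filter each axis with the transfer function above.
    omega_s = np.column_stack(
        [signal.lsim(scc_tf, omega[:, k], t)[1] for k in range(3)])

    # LP block: dv_s/dt = (f_s - v_s)/tau - omega_s x v_s (forward Euler).
    v_s = np.zeros((n, 3))
    v_s[0] = np.array([0.0, 9.81, 0.0])   # assumed initial vertical (90 deg)
    for i in range(n - 1):
        dv = (f_s[i] - v_s[i]) / TAU - np.cross(omega_s[i], v_s[i])
        v_s[i + 1] = v_s[i] + dt * dv

    a_s = f_s - v_s                       # sensed inertial acceleration
    return v_s, a_s
\end{verbatim}
A forward-Euler step at the IMU sampling period (0.01~s) is acceptable here because the time constants involved are several orders of magnitude larger than the step size.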
\subsection{Internal models}
The internal model is a hypothetical central neural representation of anticipatory information generated by the central nervous system concerning the sensory organs.
As shown in Fig.~\ref{fig:model}, the 6 DoF SVC-VV model has three internal models.
Specifically, the internal models of \framebox{\textbf{SCC}}, \framebox{\textbf{OTO}}, and \framebox{{\textbf{VIS$_g$}}} are modeled as blocks of \framebox{$\overline{\textbf{SCC}}$}, \framebox{$\overline{\textbf{OTO}}$}, and \framebox{$\overline{\bf{VIS_g}}$}, respectively.
Furthermore, the low-pass filter in the vestibular system, which separates the perceived signals of the vertical and linear acceleration, is also modeled as \framebox{$\overline{\textbf{LP}}$} in the internal model.
For blocks \framebox{$\overline{\textbf{SCC}}$} and \framebox{$\overline{\textbf{OTO}}$},
there are two types of inputs.
One type comprises signals that help create motion perception, such as motion predictions~\citep{wada2021computational} and the efference copy~\citep{jeannerod2006motor}, which is an internal copy of the neural signal that generates the movement.
The other type is obtained from the feedback of conflicts between the sensory organs and their internal models.
The \framebox{$\overline{\textbf{SCC}}$} represents the internal model of SCC, which transforms angular velocity $\hat{\bm{\omega}}$ predicted through the internal model to the sensed angular velocity $\hat{\bm{\omega}}_s$ using a transfer function~\citep{merfeld1995modeling}:
\begin{eqnarray}
\hat{\bm{\omega}}_s=\frac{\tau_ds}{\tau_ds+1}\hat{\bm{\omega}}.
\end{eqnarray}
The predicted $\hat{\bm{\omega}}$ combines the angular velocity signal $\bm{\Tilde{\omega}}$ and the feedback of the difference $\mathit{\Delta}\bm{\omega}$ between sensory information ${\bm{\omega}}_s$ and estimated information $\hat{\bm{\omega}}_s$, that is, $\mathit{\Delta}\bm{\omega}=\bm{\omega_s}-\bm{\hat{\omega}_s}$.
To simplify the generation process of a variety of signals to help create motion perceptions, this study uses the angular velocity of the head $\bm{\omega}$ as input.
Therefore,
\begin{eqnarray}
\hat{\bm{\omega}}
&=&\bm{\Tilde{\omega}} +K_{\omega c}\mathit{\Delta}\bm{\omega} \nonumber \\
&=& K_{\omega}\bm{\omega}+K_{\omega c}\mathit{\Delta}\bm{\omega} \nonumber \\
&=& K_{\omega}\bm{\omega}+K_{\omega c}(\bm{\omega_s}-\bm{\hat{\omega}_s}).
\end{eqnarray}
The \framebox{$\overline{\textbf{OTO}}$} represents the internal model of OTO, which transforms predicted GIA $\hat{\bm{f}}$ into the expected afferent signal of GIA $\hat{\bm{f}}_s$ using a $3 \times 3$ identity matrix, that is,
\begin{eqnarray}
\hat{\bm{f}}_s=\begin{bmatrix} 1 & 0 &0\\ 0 & 1 & 0\\ 0 & 0 &1 \end{bmatrix}\hat{\bm{f}}.
\end{eqnarray}
The predicted $\hat{\bm{f}}$ using the internal model can be calculated as follows:
\begin{eqnarray}
\hat{\bm{f}}=\hat{\bm{g}}+\hat{\bm{a}},
\label{Eq:hat_f}
\end{eqnarray}
where $\hat{\bm{a}}$ and $\hat{\bm{g}}$ are the gravitational and linear accelerations predicted by the internal model.
Specifically,
\begin{eqnarray}
\hat{\bm{a}}&=&\Tilde{\bm{a}}+K_{ac}\mathit{\Delta}\bm{a} \nonumber \\
&=& K_a\bm{a}+K_{ac}\mathit{\Delta}\bm{a}\nonumber \\
&=& K_a\bm{a}+K_{ac}(\bm{a_s}-\bm{\hat{a}_s});
\label{Eq:hat_a}
\end{eqnarray}
Here, the acceleration ${\bm{a}}$ is calculated from the measured GIA $\bm{f}$, that is, ${\bm{a}}=\bm{f}-\bm{g}$.
According to~\citep{kamiji2007modeling}, the gravitational acceleration $\bm{g}$ is calculated from $\bm{\omega}$ using the following update law:
\begin{eqnarray}
\frac{d\bm{g}}{dt}=-\bm{\omega} \times \bm{g}.
\end{eqnarray}
However, in practice, noise exists in $\bm{\omega}$ obtained from the IMU.
This results in a drift in $\bm{g}$ because the noise in $\bm{\omega}$ is also integrated.
To solve this problem, $\bm{\omega_q}$ is a calibrated angular velocity obtained using a complementary filter~\citep{Wetzstein2017VirtualRC}, which reduces the drift caused by the noisy IMU measurements.
The complementary filter $Q(\bm{\omega}, \bm{f})$ outputs a quaternion $\bm{q} \in \mathbb{R}^4$ that represents the orientation calculated from $\bm{\omega}$ and $\bm{f}$:
\begin{eqnarray}
\bm{q}=Q(\bm{\omega}, \bm{f}).
\label{Eq:quaternion}
\end{eqnarray}
Then, $\bm{\omega_q}$ can be approximated as
\begin{eqnarray}
\bm{\omega_q} =\begin{bmatrix} 0 & 1 & 0 &0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 &1 \end{bmatrix} (2\frac{d\bm{q}}{dt}\circ\bm{q}^{-1}),
\label{Eq:omega_q}
\end{eqnarray}
where $\circ$ denotes the quaternion product.
Thus, the new update law for $\bm{g}$ is
\begin{eqnarray}
\frac{d\bm{g}}{dt}=-\bm{\omega_q} \times \bm{g}.
\label{Eq:g_new}
\end{eqnarray}
Then, $\bm{g}$ is normalized to a fixed L2 norm of $9.81~[m/s^2]$ by
\begin{eqnarray}
\bm{g}=9.81~\frac{\bm{g}}{||\bm{g}||}.
\end{eqnarray}
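A minimal numerical sketch of Eqs.~\ref{Eq:quaternion}--\ref{Eq:g_new} is given below, assuming NumPy; the orientation quaternions are assumed to be supplied by an external complementary filter in the Hamilton convention $[w, x, y, z]$, and the quaternion rate is approximated by a finite difference.
\begin{verbatim}
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions p, q given as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw])

def quat_conj(q):
    """Conjugate; equals the inverse for a unit quaternion."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def update_gravity(g, q_prev, q_curr, dt):
    """One step of the quaternion-based gravity update: estimate the
    calibrated angular velocity omega_q = vec(2 * dq/dt o q^-1) and use
    it to rotate and re-normalize the gravity estimate g."""
    dq = (q_curr - q_prev) / dt              # numerical quaternion rate
    omega_quat = 2.0 * quat_mul(dq, quat_conj(q_curr))
    omega_q = omega_quat[1:]                 # vector part = angular velocity

    g = g + dt * (-np.cross(omega_q, g))     # dg/dt = -omega_q x g
    return 9.81 * g / np.linalg.norm(g)      # fixed L2 norm of 9.81 m/s^2
\end{verbatim}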
In addition, $\hat{\bm{g}}$ in $\hat{\bm{f}}=\hat{\bm{g}}+\hat{\bm{a}}$ (i.\,e.,\ Eq.~\ref{Eq:hat_f}) is calculated via
\begin{eqnarray}
\bm{\hat{g}}&=&\bm{K}_{vvc} \int_{0}^{t} \mathit{\Delta} \bm{vv} dt+\bm{K}_{vc} \int_{0}^{t} \mathit{\Delta} \bm{v} dt \nonumber \\
&=& \bm{K}_{vvc} \int_{0}^{t} (\bm{vv}_{s}-\bm{\hat{vv}}_{s} ) dt+\bm{K}_{vc} \int_{0}^{t} (\bm{v}_s-\bm{\hat{v}}_s) dt.
\label{Eq:hat_g}
\end{eqnarray}
Here, $\mathit{\Delta} \bm{v}$ and $\mathit{\Delta} \bm{vv}$ are the conflicts of the vertical and visual vertical signals between the sensor organs and their internal model, respectively.
The $\hat{\bm{v}}_s$ is calculated by \framebox{$\overline{\textbf{LP}}$} block by following update law~\citep{bos2002}:
\begin{eqnarray}
\frac{d\hat{\bm{v}}_s}{dt}=\frac{1}{\tau}(\hat{\bm{f}}_s-\hat{\bm{v}}_s)-\hat{\bm{\omega}}_s \times \hat{\bm{v}}_s,
\end{eqnarray}
which is the same as the update law for $\bm{v}_s$.
Subsequently, $\hat{\bm{a}}_s$ in Eq.~\ref{Eq:hat_a} is calculated using $\hat{\bm{a}}_s=\hat{\bm{f}}_s-\hat{\bm{v}}_s$.
Meanwhile, $\bm{\hat{vv}}_{s}$ represents the sensed visual vertical in the internal model, which is calculated by block \framebox{$\overline{\bf{VIS_g}}$} using the vertical sensed through visual-vestibular interaction, that is, $\hat{\bm{g}}$.
Thus,
\begin{eqnarray}
\hat{\bm{vv}}_s = \bm{T}_{\overline{vis}}~\hat{\bm{g}}= \begin{bmatrix} 1 & 0 &0\\ 0 & 1 & 0\\ 0 & 0 &0 \end{bmatrix} \begin{bmatrix} \hat{g}_x\\ \hat{g}_y\\ \hat{g}_z\end{bmatrix},
\end{eqnarray}
that projects the sensed vertical $\hat{\bm{g}}$ into the x-y plane in the head coordinate system, eliminating the value on the z-axis.
Finally, $\bm{\hat{g}}$ can be updated by
\begin{eqnarray}
\frac{d\hat{\bm{g}}}{dt}=\bm{K}_{vvc}~(\bm{vv}_{s}-\bm{\hat{vv}}_{s} )+\bm{K}_{vc}(\bm{v}_s-\bm{\hat{v}}_s).
\end{eqnarray}
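For illustration, one Euler step of this feedback update of $\hat{\bm{g}}$ from the two conflict signals can be written as the following sketch in NumPy, where the gains corresponding to $\bm{K}_{vc}$ and $\bm{K}_{vvc}$ are treated as scalars and the variable names are illustrative.
\begin{verbatim}
import numpy as np

T_VIS_BAR = np.diag([1.0, 1.0, 0.0])   # projects g_hat onto the x-y plane

def update_g_hat(g_hat, vv_s, v_s, v_hat_s, k_vvc, k_vc, dt):
    """One Euler step of the internal-model gravity update:
    dg_hat/dt = K_vvc*(vv_s - vv_hat_s) + K_vc*(v_s - v_hat_s)."""
    vv_hat_s = T_VIS_BAR @ g_hat   # sensed visual vertical (internal model)
    d_vv = vv_s - vv_hat_s         # visual vertical conflict
    d_v = v_s - v_hat_s            # vertical conflict
    return g_hat + dt * (k_vvc * d_vv + k_vc * d_v)
\end{verbatim}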
\subsection{Motion Sickness Estimation}
According to the SVC theory~\citep{bles1998motion}, motion sickness is mainly caused by a conflict between vertical perception through sensory organs and vertical feeling estimated by their internal models.
Therefore, \citet{bos1998modelling} proposed that motion sickness incidence~(MSI), which represents the percentage of vomiting subjects, is determined by the conflict of vertical signals $\mathit{\Delta} \bm{v}=\bm{v}_s-\hat{\bm{v}}_s$ using
\begin{eqnarray}
MSI = \frac{P}{(\tau_I s+1)^2} \frac{||\mathit{\Delta} \bm{v}||/b}{1+||\mathit{\Delta} \bm{v}||/b},
\end{eqnarray}
where $\frac{||\mathit{\Delta} \bm{v}||/b}{1+||\mathit{\Delta} \bm{v}||/b}$ is the Hill function that normalizes the L2 norm of the vertical conflict signal $||\mathit{\Delta} \bm{v}||$ to [0,1).
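A discrete-time sketch of this MSI computation is shown below, assuming NumPy and SciPy; the second-order lag $P/(\tau_I s+1)^2$ is applied to the Hill-normalized conflict, with parameter values as later listed in Table~\ref{tab:parameter}.
\begin{verbatim}
import numpy as np
from scipy import signal

P, B, TAU_I = 85.0, 0.5, 720.0            # gain [%], Hill constant, lag [s]

def predict_msi(delta_v, dt):
    """delta_v: (N, 3) array of the vertical conflict v_s - v_hat_s.
    Returns the predicted MSI time series in percent."""
    h = np.linalg.norm(delta_v, axis=1) / B
    hill = h / (1.0 + h)                   # Hill function, in [0, 1)

    # Second-order lag  P / (tau_I*s + 1)^2.
    lag = signal.TransferFunction([P], [TAU_I**2, 2.0 * TAU_I, 1.0])
    t = np.arange(len(hill)) * dt
    _, msi, _ = signal.lsim(lag, hill, t)
    return msi
\end{verbatim}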
\section{DRIVING EXPERIMENT}
This experiment aimed to verify whether the proposed 6~DoF~SVC-VV model can predict MSI while riding an APMV with different visual conditions.
Therefore, two APMV ride comparison conditions were established for this experiment: \ 1) looking ahead during autonomous driving (LAD), and 2) working with a tablet device during autonomous driving (WAD).
Considering that the tablet device used in WAD may hinder passengers' visual-spatial perception, we propose the following hypothesis:
\begin{itemize}
\item[\textbf{H~1}:] Passengers working while riding the APMV will have a higher probability of getting motion sickness than if they look ahead while riding the APMV.
\end{itemize}
To simulate the use of APMV in stations or shopping malls where frequent pedestrian avoidance is required, an experiment in which participants rode an APMV was conducted in an indoor room environment.
This study was conducted with the approval of the Research Ethics Committee of the Nara Institute of Science and Technology (No.~2021-I-38).
\subsection{Autonomous personal mobility vehicle} \label{sec:APMV}
\begin{figure}[b]
\centering
\includegraphics[width=1\linewidth]{img/whill2.pdf}
\caption{Autonomous driving robotic wheelchair used as the experimental vehicle. A helmet-mounted measurement instrument~(HMMI): an IMU and a camera were placed on a helmet to observe the acceleration and angular velocity of the passenger's head and visual information.}
\label{fig:WHILL}
\end{figure}
In this experiment, a robotic wheelchair {\it WHILL Model CR} with an autonomous driving system was used as the APMV.
As shown in Fig.~\ref{fig:WHILL}, the APMV was equipped with multilayered LiDAR (Velodyne VLP-16) and a controlling laptop PC.
An autonomous driving system based on the \textit{Robot Operating System} was applied to the APMV.
LiDAR was utilized for self-localization by the \textit{adaptive Monte Carlo localization} method on a previously built environmental map using the \textit{simultaneous localization and mapping} method.
Thus, it could automatically drive on pre-designed routes using a path-following controller~\citep{watanabe2016neonavigation}.
To ensure experimental safety, the APMV had an automatic brake function that was applied when there was an obstacle within 0.5 meters directly in front of it.
Passengers could also actively control the APMV with the on-board joystick and power button if they felt in danger.
Meanwhile, a wireless remote controller could control the APMV to stop based on the actual risks during the experiment.
Further, the maximum velocity was set to $6~[km/h]$, and the maximum linear acceleration was set to $1.7~[m/s^2]$.
\subsection{Driving conditions}
As shown in Fig.~\ref{fig:Driving_path}, a $6~[m] \times 12.5~[m]$ room at Nara Institute of Science and Technology was used as the experimental site.
The target temperature of the air conditioner in the room was set to \ang{25}C.
In this experiment, a 20-min slalom driving path (see the red line in Fig.~\ref{fig:Driving_path}) was designed to simulate APMV avoiding other traffic participants in mixed traffic, such as shared space.
Specifically, APMV performed slalom driving with four centers of rotation such that the number of left and right rotations was the same.
The diameter of each rotation was approximately $2.5~[m]$; therefore, the distance between the two rotary centers was also $2.5~[m]$.
To reduce the effect of the participants' predictions of driving dynamics on their motion sickness, no actual object was placed in these centers of rotation.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.75\linewidth]{img/map.pdf}
\caption{$12.5~[m] \times 6~[m]$ room used as the experimental environment. A slalom path was used for autonomous driving. Two posters showing 11 MISC points (see the middle of Fig.~\ref{fig:exp01_MISC}) were mounted on the walls at either end of the room.}
\label{fig:Driving_path}
\end{figure}
\subsection{Riding conditions}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{img/driving_scenes.pdf}
\caption{Two riding conditions: 1) looking ahead during autonomous driving (LAD); 2) working with a tablet device during autonomous driving (WAD).}
\label{fig:exp02_conditions}
\end{figure}
As shown in Fig.~\ref{fig:exp02_conditions}, two riding conditions were designed: 1) looking ahead during autonomous driving (LAD) and 2) working with a tablet device during autonomous driving (WAD).
Each scenario took 25 minutes, including 20 minutes of riding the APMV and 5 minutes of resting on the stopped APMV.
The detailed design of each scenario is as follows.
\subsubsection{Looking ahead during autonomous driving (LAD)}
In the LAD, participants were asked to look ahead during autonomous driving.
It was also hypothesized that participants could easily obtain vertical orientation information from the floor, walls, and surrounding objects such as windows, tables, chairs, and whiteboards.
\subsubsection{Working with a tablet device during autonomous driving (WAD)}
In the WAD, participants were asked to work with a tablet device (Sony DPT-RP1 Digital Paper: 224~[mm] $\times$ 302.6~[mm], used in landscape orientation).
They were asked to use a stylus to answer dummy questionnaires and read articles on e-books while riding APMV.
Note that the contents of these questionnaires and articles were not relevant to this experiment to avoid influencing the experimental results.
Moreover, a neck-hanging tablet stand was used to help participants hold the e-book, see the text clearly, and write more easily.
It also enabled the relative positions of the head and tablet device to be maintained within a certain range.
This scenario was also hypothesized to be more prone to causing motion sickness: the tablet device prevented the passengers from perceiving their body motion from dynamic visual information~\citep{Sato2022}, such as optical flow, and from static visual information, such as horizontal or vertical edges, so they may have had difficulty recognizing the vertical direction.
\subsection{Measurements}
\subsubsection{Head movement and visual information}
To measure the acceleration and angular velocity of the passenger's head as well as visual information, a helmet-mounted measurement instrument (HMMI) was used (see the center of Fig.~\ref{fig:WHILL}).
The HMMI included an inertial measurement unit (IMU) and a camera set in front of a helmet.
The IMU measured the 3~DoF acceleration $\bm{f}$ and the 3~DoF angular velocity $\bm{\omega}$ at 100~[Hz].
The camera resolution was set to $1280\times 720$ pixels at 30~[Hz].
To reduce the impact of lens distortion on the visual vertical prediction, we cropped the periphery of the camera video.
The cropped resolution was $1000\times 480$.
\subsubsection{Motion sickness}
To measure the severity of motion sickness during the 25-minute experiment, participants verbally reported their feelings of motion sickness every minute using an 11-point MIsery~SCale (MISC)~\citep{MISC} ranging from zero to 10 (see Fig.~\ref{fig:exp01_MISC}).
If MISC reached 6 and lasted for more than 2 minutes, then the APMV stopped, and participants continued to sit on the stopped APMV to report MISC every minute for 5 min.
To help participants refer to the definition of MISC, the two A0 size posters in Fig.~\ref{fig:exp01_MISC} were placed on whiteboards on both sides of the room in the LAD scenario; and the definition of MISC was available on each page of the e-book in the WAD scenario (see Fig.~\ref{fig:exp02_conditions}).
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\linewidth]{img/01.MISC.pdf}
\caption{An 11-point MIsery SCale (MISC)~\citep{MISC} in Japanese and English.}
\label{fig:exp01_MISC}
\end{figure}
\subsection{Participants and groups}
A total of 27 participants (10 females and 17 males) participated in this experiment as users of APMV.
They were 22-29 years old (mean: 23.5, standard deviation: 1.89).
They had no prior experience with autonomous cars or APMVs before this experiment.
All participants provided informed consent before participating in the experiment.
Each participant was asked to ride the APMV under the LAD and WAD conditions once.
To avoid the order effect of the experimental conditions on the experimental results, participants were randomly assigned to two groups.
Specifically, 14 participants in a group called \textit{LAD$\rightarrow$WAD} experienced the LAD scenario first and then the WAD scenario.
Thirteen participants in another group called \textit{WAD$\rightarrow$LAD} experienced these conditions in the opposite order.
The minimum time interval between the two conditions was 24 hours.
The total time of this experiment, including the two conditions, was four hours for each participant.
Each participant received 4,000 Japanese Yen as a reward.
\subsection{Procedure}
First, the participants were given the following information before the experiment:
\begin{itemize}
\item The purpose of the experiment was to investigate the effect of the riding conditions of an APMV on motion sickness.
\item In this experiment, participants rode on the APMV and experienced slalom driving repeatedly (up to 6 km/h).
\item There were two riding conditions. Each riding scenario was performed on a separate day. Each scenario took approximately 2 hours, and the total time for the two conditions was approximately 4 hours.
\item Details of those two riding conditions were introduced just before participants experienced one of them, separately.
\item In each riding scenario, the APMV drove autonomously for 20 minutes; then the APMV stopped, and participants could rest on it for 5 minutes.
\item During those 25 minutes, participants were required to report their MISC level every minute based on their feelings of motion sickness.
\item If the MISC reached 6 and lasted for more than 2 minutes, then the APMV was stopped immediately and participants continued to sit on the stopped APMV to report MISC every minute for 5 minutes.
\end{itemize}
Moreover, to reduce the restlessness and nervousness of the participants owing to a lack of knowledge about the APMV, we explained the principles of the autonomous driving system and its sensors (i.\,e.,\ LiDAR), and the operational design domain (e.\,g.,\ sensor range, maximum speed, maximum linear acceleration, and the judgment distance of emergency stop) to the participants in detail.
Participants were allowed to actively take over and stop the APMV if they thought there was danger.
\subsection{Evaluation methods}
\subsubsection{Motion sickness symptoms reported by MISC}
The MISC was reported by each participant every minute during each 25-min trial.
The mean and maximum of the MISC in each trial were counted to evaluate the degree of motion sickness of the participants.
We used $2\times2$ mixed-design ANOVAs to evaluate the mean and maximum of MISC in two riding conditions (within-subject factor: LAD and WAD) between two groups of condition order (between-subject factor: \textit{LAD$\rightarrow$WAD} and \textit{WAD$\rightarrow$LAD}).
Furthermore, the MISC results under the LAD and WAD conditions were also analyzed to test our proposed hypothesis \textbf{H~1}, i.\,e.,\ that passengers working with a tablet device while riding the APMV have a higher probability of motion sickness than when looking ahead while riding the APMV.
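As an implementation note, such a $2\times2$ mixed-design ANOVA can be computed, for example, with the \texttt{pingouin} package, whose output fields (SS, DF1, DF2, MS, F, np2, eps) match the columns reported in Table~\ref{tab:MISC_ANOVA}; the choice of this package, the column names of the long-format table, and the dummy values below are illustrative assumptions.
\begin{verbatim}
import pandas as pd
import pingouin as pg

# Long-format table: one row per participant x riding condition (dummy values).
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "group": ["LAD-WAD", "LAD-WAD", "LAD-WAD", "LAD-WAD",
              "WAD-LAD", "WAD-LAD", "WAD-LAD", "WAD-LAD"],
    "condition": ["LAD", "WAD"] * 4,
    "misc_mean": [0.5, 1.2, 0.0, 0.8, 1.1, 2.0, 0.3, 0.9],
})

# 2x2 mixed-design ANOVA: within factor = condition, between factor = group.
aov = pg.mixed_anova(data=df, dv="misc_mean", within="condition",
                     subject="participant", between="group")
print(aov)
\end{verbatim}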
\subsubsection{Calculated visual vertical}
The calculated visual vertical of each trial was evaluated by analyzing the Pearson correlation coefficient between the direction of VV, i.\,e.,\ $\theta^{vv}$, and direction of gravitational acceleration, i.\,e.,\ $\theta^{g}$, from each 25-min trial independently.
Particularly, $\theta^{vv}$ was estimated using Algorithm~\ref{VV} from the camera data, and the direction of the gravitational acceleration projected in the 2D head coordinate system was calculated as $\theta^{g}=180~\arctan(g_y/g_x)/\pi$.
Here, the gravitational acceleration $\bm{g}$ was estimated from the IMU data, that is, $\bm{f}$ and $\bm{\omega}$.
Moreover, a two-sided paired t-test was used to analyze the significant difference between the Pearson correlation coefficients for the LAD and WAD conditions.
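In implementation terms, the per-trial correlation and the paired comparison described above can be computed as in the following sketch with SciPy; the array names are placeholders, and \texttt{arctan2} is used instead of \texttt{arctan} only for quadrant safety.
\begin{verbatim}
import numpy as np
from scipy import stats

def trial_correlation(theta_vv, g_xy):
    """Pearson correlation between the predicted visual vertical direction
    theta_vv (deg, length N) and the direction of gravitational acceleration
    computed from g_xy (N x 2 array, gravity projected on the x-y plane)."""
    theta_g = np.degrees(np.arctan2(g_xy[:, 1], g_xy[:, 0]))
    r, _ = stats.pearsonr(theta_vv, theta_g)
    return r

def compare_conditions(r_lad, r_wad):
    """Two-sided paired t-test on per-participant correlation coefficients."""
    return stats.ttest_rel(r_lad, r_wad)
\end{verbatim}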
\subsubsection{Motion sickness prediction by MSI}
The proposed 6~DoF~SVC-VV model was used to predict the MSI from the IMU data (i.\,e.,\ $\bm{f}$ and $\bm{\omega}$) and the camera images measured in the experiment. To implement the calculation of the zero-order holder, the calculated visual vertical was up-sampled from 30 [Hz] to 100 [Hz] to synchronize with the IMU data.
A conventional 6~DoF SVC model proposed by \citet{Inoue2022} (called the In1 model in this study) and its optimized parameters were used as a baseline.
Furthermore, the parameters of the 6~DoF~SVC-VV model were the same as those of the 6~DoF SVC model, i.\,e.,\ In1 model in~\citep{Inoue2022}, except for $K_{vc}$ and a new parameter $K_{vvc}$.
Table~\ref{tab:parameter} lists the parameters used in the two models.
Note that the conventional 6~DoF SVC model (In1) can be described using the proposed 6~DoF~SVC-VV model if the parameters are set as $K_{vc}=5.0$ and $K_{vvc}=0.0$. Therefore,
for the 6~DoF~SVC-VV model, the parameters $K_{vvc}=2.5$ and $K_{vc}=2.5$ were used to balance the feedback strength of the two conflict signals.
Similar to the MISC evaluation method, $2\times2$ mixed-design ANOVAs (within-subject factors: LAD and WAD; between-subject factors: \textit{LAD$\rightarrow$WAD} and \textit{WAD$\rightarrow$LAD}) were used to evaluate the mean and maximum values of the predicted MSI in LAD and WAD, respectively.
\begin{figure}[!ht]
\centering
\captionof{table}{Parameters for the 6 DoF SVC model and the 6 DoF SVC-VV model.}
\label{tab:parameter}
\setstretch{1.3}
\begin{tabular}{@{}ccccccccccccc@{}}
\toprule
Model & $K_a$ & $K_{\omega}$ & $K_{\omega c}$ & $K_{ac}$ & $K_{vc}$ & $K_{vvc}$ & \begin{tabular}[c]{@{}c@{}}$\tau$\\ $[s]$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\tau_{a}$\\$[s]$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\tau_{d}$\\ $[s]$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$b$\\ $[m/s^2]$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\tau_{I}$\\ $[s]$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$P$\\ $[\%]$\end{tabular} \\ \midrule
6~DoF SVC & 0.1 & 0.1 & 10 & 0.5 &\textbf{5.0} & \textbf{0.0} & 2.0 &190.0 & 7.0 & 0.5 & 720.0 & 85 \\
6~DoF~SVC-VV & 0.1 & 0.1 & 10 & 0.5 &\textbf{2.5} & \textbf{2.5} & 2.0 & 190.0 & 7.0 & 0.5 & 720.0 & 85 \\ \bottomrule
\end{tabular}
\end{figure}
\subsubsection{Comparison between predicted MSI and reported MISC}
To investigate the performance of the proposed 6 DoF SVC-VV model, we compared the predicted MSI with the reported MISC.
Note that MSI and MISC are different indicators; i.\,e.,\ MSI indicates the percentage of participants who experience vomiting when exposed to motion for a certain time, whereas MISC indicates each participant's subjective assessment of the severity of motion sickness.
Basically, the MISC is an evaluation indicator for individuals, whereas the MSI is an evaluation indicator for the whole group.
Considering the difference in meaning between MSI and MISC, the performance of our proposed 6 DoF SVC-VV model was evaluated by comparing the high-low relationship of the reported MISC from each participant under the LAD and WAD, and that of the predicted MSI under those two riding conditions.
Based on the confusion matrix presented in Table~\ref{tab:TP_FP}, multiple evaluation indexes, that is, accuracy, precision, recall, and F1 score (Table~\ref{tab:evaluation_index}), were used to evaluate the performance of the proposed 6 DoF SVC-VV model.
We took each participant's reported MISC as the true result and the predicted MSI as the predicted result of their motion sickness.
Therefore, as summarized in Table~\ref{tab:TP_FP}, we refer to \textbf{H~1}, ${MISC}_{LAD} < {MISC}_{WAD}$ and ${MISC}_{LAD} \geq {MISC}_{WAD}$ as the positive and negative states of the true result, respectively.
Meanwhile, ${MSI}_{LAD} < {MSI}_{WAD}$ and ${MSI}_{LAD} \geq {MSI}_{WAD}$ are considered the positive and negative states of the predicted result, respectively.
Moreover, the mean and maximum values were used as representative values for MISC and MSI.
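In other words, the confusion matrix in Table~\ref{tab:TP_FP} and the indexes in Table~\ref{tab:evaluation_index} reduce to a per-participant comparison of representative values, as in the following NumPy sketch with illustrative variable names.
\begin{verbatim}
import numpy as np

def evaluate(misc_lad, misc_wad, msi_lad, msi_wad):
    """Per-participant arrays of a representative MISC/MSI value (mean or
    maximum). A case is positive when the WAD value exceeds the LAD value."""
    true_pos = np.asarray(misc_lad) < np.asarray(misc_wad)
    pred_pos = np.asarray(msi_lad) < np.asarray(msi_wad)

    tp = np.sum(true_pos & pred_pos)
    fp = np.sum(~true_pos & pred_pos)
    fn = np.sum(true_pos & ~pred_pos)
    tn = np.sum(~true_pos & ~pred_pos)

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return dict(accuracy=accuracy, precision=precision, recall=recall, f1=f1)
\end{verbatim}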
\begin{figure}[t]
\centering
\captionof{table}{Definition of confusion matrix for evaluating motion sickness prediction by the predicted MSI comparing to the reported MISC.}
\label{tab:TP_FP}
\setstretch{1.2}
\begin{tabular}{@{}cccc@{}}
\toprule
& & \multicolumn{2}{|c}{\begin{tabular}[c]{@{}c@{}}MISC\\ (True result)\end{tabular}} \\ \cmidrule(l){3-4}
& \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Positive\\ ($MISC_{LAD} < MISC_{WAD}$)\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Negative\\ ($MISC_{LAD} \geq MISC_{WAD}$)\end{tabular} \\ \midrule
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\\MSI\\ (Predicted result)\end{tabular}} & \multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Positive\\ ($MSI_{LAD} < MSI_{WAD}$)\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}TP\\ (True Positive)\end{tabular}} & \begin{tabular}[c]{@{}c@{}}FP\\ (False Positive)\end{tabular} \\ \cmidrule(l){2-4}
& \multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Negative\\ ($MSI_{LAD}\geq MSI_{WAD}$)\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}FN\\ (False Negative)\end{tabular}} & \begin{tabular}[c]{@{}c@{}}TN\\ (True Negative)\end{tabular} \\ \bottomrule
\end{tabular}
\end{figure}
\begin{figure}[t]
\centering
\centering
\captionof{table}{Evaluation indexes for the prediction of motion sickness using predicted MSI compared to reported MISC.}
\label{tab:evaluation_index}
\setstretch{2}
\begin{tabular}{@{}ll@{}}
\toprule
Evaluation index & Explanation \\ \midrule
$\displaystyle Accuracy = \frac{TP+TN}{TP+TN+FP+FN}$& Rate of the correct predictions over all predictions. \\
$\displaystyle Precision= \frac{TP}{TP+FP}$& Rate of correct positive predictions over all positive predictions. \\
$\displaystyle Recall=\frac{TP}{TP+FN}$& Rate of correct positive predictions over all the positive true results. \\
$\displaystyle F1~score=\frac{2\cdot Precision \cdot Recall}{Precision+Recall}$ & The harmonic mean of the precision and recall. \vspace{2mm}\\
\bottomrule
\end{tabular}
\end{figure}
\section{RESULTS}
\subsection{Reported MISC}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.495\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{img/MISC_mean_max_a.pdf}
\caption{Mean of MISC}
\label{fig:MISC_mean_max_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.495\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{img/MISC_mean_max_b.pdf}
\caption{Maximum of MISC}
\label{fig:MISC_mean_max_b}
\end{subfigure}
\captionof{figure}{Mean and Maximum of MISC reported under LAD and WAD conditions (error bar: $95\%$ confidence interval)}
\label{fig:MISC_mean_max}
\centering
\captionof{table}{Two-way mixed-design ANOVA for mean and maximum of MISC. * shows the $p<.05$}
\label{tab:MISC_ANOVA}
\setstretch{1.3}
\begin{tabular}{rrrrrrrlrr}
\toprule
\multicolumn{1}{c}{Measurement} &\multicolumn{1}{c}{Effect} & \multicolumn{1}{c}{SS} & \multicolumn{1}{c}{DF1} & \multicolumn{1}{c}{DF2} & \multicolumn{1}{c}{MS} & \multicolumn{1}{c}{\textit{F-value}} & \multicolumn{1}{c}{\textit{p-value}} & \multicolumn{1}{c}{np2} & \multicolumn{1}{c}{eps} \\ \midrule
Mean of MISC & Groups & 0.092 & 1 & 25 & 0.092 & 0.065 & 0.801 & 0.003 & NaN \\
& Conditions & 2.506 & 1 & 25 & 2.506 & 4.296 & \textbf{0.049 *} & 0.147 & 1.0 \\
& Interaction & 0.918 & 1 & 25 & 0.918 & 1.574 & 0.221 & 0.059 & NaN \\ \midrule
Maximum of MISC & Groups & 0.002 & 1 & 25 & 0.002 & 0.000 & 0.986 & 0.000 & NaN \\
& Conditions & 6.685 & 1 & 25 & 6.685 & 3.362 & 0.079 & 0.119 & 1.0 \\
& Interaction & 1.101 & 1 & 25 & 1.101 & 0.553 & 0.464 & 0.022 & NaN \\ \bottomrule
\end{tabular}
\end{figure}
The MISC per minute reported for 27 participants is shown in Fig.~\ref{fig:MISC_TS_all}.
Among them, 21 participants reported that they developed symptoms of motion sickness in this experiment; however, six participants (four in group \textit{LAD$\rightarrow$WAD} and two in group \textit{WAD$\rightarrow$LAD}) reported $MISC=0$ at all times in both the LAD and WAD conditions.
In the WAD, participant \#24 reported that $MISC=6$ at approximately 8.5 minutes and was asked to stop the APMV.
Then, the APMV was stopped, and the participant rested on the APMV.
However, after 1 minute of rest, the motion sickness symptoms of participant \#24 continued to develop, reaching $MISC=9$ at approximately 9.5 minutes; thus, we immediately terminated the experiment.
For the two groups, i.\,e.,\ \textit{LAD$\rightarrow$WAD} and \textit{WAD$\rightarrow$LAD}, we did not find a significant effect of the experimental order on the MISC results, as summarized in Table~\ref{tab:MISC_ANOVA}, and the mean and maximum of the MISC during the 25 minutes of driving in LAD were lower than those in WAD, as shown in Fig.~\ref{fig:MISC_mean_max}.
For the two conditions, i.\,e.,\ LAD and WAD, the two-way mixed-design ANOVA (see Table~\ref{tab:MISC_ANOVA}) showed a significant difference in the mean MISC between conditions ($p=0.049$); however, there was no significant difference between groups or in their interaction.
Moreover, there was no significant difference in the maximum MISC between groups, between conditions, or in their interaction.
\subsection{Calculated visual vertical}
Fig.~\ref{fig:Corr_LAD_WAD} shows that $\theta^{vv}$ and $\theta^{g}$ had positive Pearson correlations ($N=27, M = 0.44, SD = 0.14$) under the LAD condition, i.\,e.,\ without obstruction of the view, and weakly positive Pearson correlations under the WAD condition, i.\,e.,\ with obstruction of the view.
Furthermore, a two-sided paired t-test showed that the mean of the Pearson correlation coefficients in the LAD condition was significantly higher than that in the WAD condition, i.\,e.,\ $t(26) = 11.11, p < .001$.
\begin{figure}[!hb]
\centering
\includegraphics[width=0.65\linewidth]{img/Corr_LAD_WAD.pdf}
\caption{Pearson correlation between the directions of the predicted visual vertical and estimated gravitational acceleration. A two-sided paired t-test reports a significant difference between the Pearson correlation coefficients under LAD and WAD conditions.} \label{fig:Corr_LAD_WAD}
\end{figure}
\begin{figure}[!hp]
\vspace{3mm}
\centering
\includegraphics[width=1\linewidth]{img/MISC_TS_all.pdf}
\caption{MISC reported from 27 participants every minute. The horizontal coordinate shows the time whereas vertical coordinate shows the MISC. Graphs on the sky blue background represent MISCs reported by participants from \textit{LAD$\rightarrow$WAD} group whereas graphs on the yellow background represent MISCs reported by participants from \textit{WAD$\rightarrow$LAD} group. Green lines represent MISC reported by participants in LAD whereas red lines represent MISC reported by participants in WAD. The vertical broken lines indicate the moment of APMV stopping.}
\label{fig:MISC_TS_all}
\end{figure}
\begin{figure}[!hp]
\vspace{3mm}
\centering
\includegraphics[width=1\linewidth]{img/MSI_TS_2.5_2.5_new.pdf}
\caption{Predicted MSI during the 25-minute experiment by the 6 DoF SVC and 6 DoF SVC-VV models for 27 participants. The horizontal coordinate is the time and the vertical coordinate is the vomiting rate, i.\,e.,\ MSI. Graphs on the sky blue background represent MSIs predicted for participants from the \textit{LAD$\rightarrow$WAD} group, whereas graphs on the yellow background represent MSIs predicted for participants from the \textit{WAD$\rightarrow$LAD} group. Green lines represent MSI predicted under the LAD condition, whereas red lines represent MSI predicted under the WAD condition. Dotted lines represent MSI predicted by the 6 DoF SVC model, whereas solid lines represent MSI predicted by the 6 DoF SVC-VV model. The vertical broken lines indicate the APMV stopping moment.}
\label{fig:MSI_TS_all}
\end{figure}
\newpage
\subsection{Predicted MSI}
\begin{figure}[b]
\centering
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{img/MSI_mean_max_a.pdf}
\caption{Mean of MSI by\\6 DoF SVC model}
\label{fig:MSI_mean_max_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{img/MSI_mean_max_b.pdf}
\caption{Mean of MSI by\\6 DoF SVC-VV model}
\label{fig:MSI_mean_max_b}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{img/MSI_mean_max_c.pdf}
\caption{Maximum of MSI by\\6 DoF SVC model}
\label{fig:MSI_mean_max_c}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.245\linewidth}
\centering
\includegraphics[width=\linewidth]{img/MSI_mean_max_d.pdf}
\caption{Maximum of MSI by\\6 DoF SVC-VV model}
\label{fig:MSI_mean_max_d}
\end{subfigure}
\caption{Mean and maximum MSI predicted by the 6 DoF SVC model and the 6 DoF SVC-VV model (error bar: $95\%$ confidence interval).}
\label{fig:MSI_resuts}
\centering
\captionof{table}{Two-way mixed-design ANOVA for mean of MSI predicted by 6 DoF SVC model and 6 DoF SVC-VV model, separately. ** shows the $p<.01$.}
\label{tab:ANOVA_Mean_MSI}
\begin{tabular}{rrrrrrrlrr}
\toprule
\multicolumn{1}{c}{Model} & \multicolumn{1}{c}{Source} & \multicolumn{1}{c}{SS} & \multicolumn{1}{c}{DF1} & \multicolumn{1}{c}{DF2} & \multicolumn{1}{c}{MS} & \multicolumn{1}{c}{\textit{F-value}} & \multicolumn{1}{c}{\textit{p-value}} & \multicolumn{1}{c}{np2} & \multicolumn{1}{c}{eps} \\ \midrule
6 DoF SVC model & Groups & 0.231 & 1 & 25 & 0.231 & 0.989 & 0.329 & 0.038 & NaN \\
& Conditions & 0.043 & 1 & 25 & 0.043 & 0.244 & 0.626 & 0.010 & 1.0 \\
& Interaction & 0.126 & 1 & 25 & 0.126 & 0.709 & 0.408 & 0.028 & NaN \\ \midrule
6 DoF SVC-VV model & Groups & 0.081 & 1 & 25 & 0.081 & 0.077 & 0.784 & 0.003 & NaN \\
& Conditions & 8.527 & 1 & 25 & 8.527 & 9.487 & \textbf{0.005 **} & 0.275 & 1.0 \\
& Interaction & 2.545 & 1 & 25 & 2.545 & 2.831 & 0.105& 0.102 & NaN \\ \bottomrule
\end{tabular}
\vspace{3mm}
\centering
\captionof{table}{Two-way mixed-design ANOVA for Maximum of MSI predicted by 6 DoF SVC model and 6 DoF SVC-VV model, separately. ** shows the $p<.01$.}
\label{tab:ANOVA_Max_MSI}
\begin{tabular}{rrrrrrrlrr}
\toprule
\multicolumn{1}{c}{Model} & \multicolumn{1}{c}{Source} & \multicolumn{1}{c}{SS} & \multicolumn{1}{c}{DF1} & \multicolumn{1}{c}{DF2} & \multicolumn{1}{c}{MS} & \multicolumn{1}{c}{\textit{F-value}} & \multicolumn{1}{c}{\textit{p-value}} & \multicolumn{1}{c}{np2} & \multicolumn{1}{c}{eps} \\ \midrule
6 DoF SVC model & Groups & 0.625 & 1 & 25 & 0.625 & 0.943 & 0.341 & 0.036 & NaN \\
& Conditions & 0.048 & 1 & 25 & 0.048 & 0.104 & 0.750 & 0.004 & 1.0 \\
& Interaction & 0.234 & 1 & 25 & 0.234 & 0.511 & 0.481 & 0.020 & NaN \\ \midrule
6 DoF SVC-VV model & Groups & 0.042 & 1 & 25 & 0.042 & 0.007 & 0.935 & 0.000 & NaN\\
& Conditions & 44.45 & 1 & 25 & 44.45 & 10.72 & \textbf{0.003 **} & 0.300 & 1.0 \\
& Interaction & 10.07 & 1 & 25 & 10.07 & 2.429 & 0.132 & 0.089 & NaN \\ \bottomrule
\end{tabular}
\vspace{-3mm}
\end{figure}
The time series MSI predicted by the 6 DoF SVC and 6 DoF SVC-VV models are shown in Fig.~\ref{fig:MSI_TS_all}.
As a summarized index of the time-series MSI in Fig.~\ref{fig:MSI_TS_all}, the mean MSI predicted by the 6 DoF SVC and 6 DoF SVC-VV models is shown in Figs.~\ref{fig:MSI_resuts}\subref{fig:MSI_mean_max_a} and~\subref{fig:MSI_mean_max_b}, respectively.
As presented in Table~\ref{tab:ANOVA_Mean_MSI}, a two-way mixed-design ANOVA for the mean MSI predicted by the 6 DoF SVC model revealed no significant effect of groups, conditions, or their interaction.
The two-way mixed-design ANOVA for the mean MSI predicted using the 6 DoF SVC-VV model revealed a significant main effect of conditions, with the LAD condition showing lower MSI than the WAD condition ($p=0.005$); however, no significant effect was found for the groups or their interaction.
Using the maximum MSI as an evaluation index, Figs.~\ref{fig:MSI_resuts}\subref{fig:MSI_mean_max_c} and~\subref{fig:MSI_mean_max_d} show the maximum MSI predicted by the 6 DoF SVC and 6 DoF SVC-VV models, respectively.
As presented in Table~\ref{tab:ANOVA_Max_MSI}, a two-way mixed-design ANOVA for the maximum MSI predicted by the 6 DoF SVC model revealed no significant effect of groups, conditions, or their interaction.
For the maximum MSI predicted using the 6 DoF SVC-VV model, the two-way mixed-design ANOVA revealed a significant main effect of conditions, with the LAD condition showing significantly lower MSI than the WAD condition ($p=0.003$); however, no significant effect was found for the groups or their interaction.
\subsection{Comparison between predicted MSI and reported MISC}
As shown in Fig.~\ref{fig:MISC_TS_all}, six participants (\#02, \#06, \#15, \#18, \#22, \#26) did not experience any motion sickness symptoms, i.\,e.,\ all reported MISC values were zero, in both the LAD and WAD conditions.
Considering that our proposed 6 DoF SVC-VV model predicts MSI, i.\,e.,\ the percentage of participants who may vomit during the experiment, it is difficult for the predicted MSI to represent the individual characteristics of participants who did not suffer from motion sickness.
Therefore, in the analysis in this subsection, we excluded data from these six participants.
By analyzing the MISC of the remaining 21 participants, the mean MISC had 15 positive cases and 6 negative cases; the maximum MISC had 14 positive cases and 7 negative cases.
Taking the MISC results as the true result, confusion matrices of the mean and maximum MSI predicted by the 6 DoF SVC and 6 DoF SVC-VV models are shown in Fig.~\ref{fig:TP_FP}.
For both the mean and maximum MSI, the TP when using the 6 DoF SVC-VV model was higher than that when using the 6 DoF SVC model; however, the TN when using the 6 DoF SVC-VV model was lower than that when using the 6 DoF SVC model.
Based on these confusion matrices, the scores of accuracy, precision, recall, and F1-score for the mean and maximum MSI predicted by the 6 DoF SVC and 6 DoF SVC-VV models are shown in Fig.~\ref{fig:F1_score}.
Both the mean and maximum MSI predicted using the 6 DoF SVC-VV model had higher scores for accuracy, precision, recall, and F1 score than those predicted using the 6 DoF SVC model.
However, the difference between the precision scores of the mean and maximum MSI when these two models were used was small.
\begin{figure}[!hb]
\vspace{3mm}
\centering
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{img/TP_FP_a.pdf}
\caption{Mean of MSI by\\6 DoF SVC model}
\label{fig:TP_FP_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{img/TP_FP_b.pdf}
\caption{Mean of MSI by\\6 DoF SVC-VV model}
\label{fig:TP_FP_b}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{img/TP_FP_c.pdf}
\caption{Maximum of MSI by\\6 DoF SVC model}
\label{fig:TP_FP_c}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.245\linewidth}
\centering
\includegraphics[width=\linewidth]{img/TP_FP_d.pdf}
\caption{Maximum of MSI by\\6 DoF SVC-VV model}
\label{fig:TP_FP_d}
\end{subfigure}
\caption{Confusion matrices for the mean and maximum MSI predicted by 6 DoF SVC and 6 DoF SVC-VV models based on the mean and maximum MISC reported from 21 participants who felt symptoms of motion sickness in at least one of the riding conditions. As the true result, the mean of MISC had 15 positive cases and 6 negative cases; the Maximum of MISC had 14 positive cases and 7 negative cases. }
\label{fig:TP_FP}
\end{figure}
\begin{figure}[!hb]
\centering
\begin{subfigure}[b]{0.495\textwidth}
\centering
\includegraphics[width=\textwidth]{img/F1_score_a.pdf}
\caption{Mean of MSI}
\label{fig:F1_score_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.495\textwidth}
\centering
\includegraphics[width=\textwidth]{img/F1_score_b.pdf}
\caption{Maximum of MSI}
\label{fig:F1_score_b}
\end{subfigure}
\caption{Accuracy, precision, recall, and F1 score for the mean and maximum MSI predicted by the 6 DoF SVC and 6 DoF SVC-VV models based on the mean and maximum MISC reported from 21 participants who felt symptoms of motion sickness in at least one of the riding conditions.}
\label{fig:F1_score}
\end{figure}
\section{DISCUSSIONS}
\subsection{Reported MISC}
The MISC reported by 27 participants at each minute is shown in Fig.~\ref{fig:MISC_TS_all}.
We found that 21 of the 27 participants experienced motion sickness symptoms during the APMV riding experiment.
This result illustrates that although APMVs are driven at a lower speed than cars, their passengers also face a high risk of motion sickness when the APMV frequently avoids obstacles or other pedestrians.
Whereas many other studies have reported that passengers are prone to motion sickness when using cars~\citep{turner1999motion, Wada2012, brietzke2021influence}
or other vehicles, such as vessels~\citep{pepper1979repeated, turan2009motion} and aircraft~\citep{samuel2015airsickness}, we reported unprecedented results of motion sickness in passengers of miniaturized autonomous vehicles such as the APMV.
Therefore, this study presents a novel issue regarding motion sickness and a new mindset for improving passenger ride comfort for researchers and manufacturers of miniaturized autonomous vehicles.
As presented in Table~\ref{tab:MISC_ANOVA}, we did not find a significant effect of the experimental order of LAD and WAD on the MISC results based on the between-group design.
For the within-group design, the results of the statistical tests validated our hypothesis \textbf{H~1}: participants working with a tablet device on the APMV (WAD) developed significantly more profound motion sickness symptoms than when looking ahead (LAD) (Fig.~\ref{fig:MISC_mean_max} and Table~\ref{tab:MISC_ANOVA}).
This result is consistent with the conclusions of~\citep{griffin2004visual,kato2006study,KARJANTO2018678,karjanto2018effect,kuiper2018looking,suwareducing,irmak2021objective}, in which motion sickness was induced by visual obstruction when passengers rode in a car.
Furthermore, we found that, in Fig.~\ref{fig:MISC_mean_max}, the degree of motion sickness symptoms differed among participants.
Such individual differences in motion sickness susceptibility were also reported in ~\citep{GOLDING2006237,irmak2021objective}.
Although this study aimed to model motion sickness to predict MSI, considering these individual differences in the 6 DoF SVC-VV model is challenging future work.
\subsection{Calculated visual vertical}
The calculated visual vertical directions $\theta^{vv}$ by the proposed method were compared with the gravitational acceleration directions $\theta^{g}$ which were estimated from the measured acceleration ($\bm{f}=\bm{g}+\bm{a}$) by IMU attached to the participant’s head.
The results in Fig.~\ref{fig:Corr_LAD_WAD} show that the mean of the correlation coefficients under the LAD condition was significantly higher than their correlation coefficients under the WAD condition, suggesting that the proposed visual vertical prediction method can: 1) calculate the visual vertical direction from environmental images with correlation to the direction of gravitational acceleration in the absence of visual occlusion; 2) represent the effect of visual occlusion on the prediction of visual vertical information from environmental images, e.\,g.,\ participants look-ahead and look-at-the-tablet device in an indoor environment.
This conclusion is consistent with the results of our previous study~\citep{LIU2022_IV}, which focused on the use of APMV under outdoor conditions.
However, the correlation coefficients between the calculated visual vertical direction and gravitational acceleration direction in this experiment were not as high as those in our previous study.
Two reasons can be considered: 1) The experimental scenes in this study were indoors; thus, the contours of objects such as tables and chairs placed in different directions affected the visual vertical prediction.
In \citep{LIU2022_IV}, the horizon line and contours of buildings in an open scene contribute to the prediction of the visual vertical direction.
2) There was noise in estimating gravitational acceleration owing to the movement of APMV.
In \citep{LIU2022_IV}, the calculated visual vertical directions were evaluated against the direction of the acceleration $\bm{f}$ under a static condition (the APMV was stopped, $\bm{a}=[0,0,0]^T$), in which the measured acceleration was $\bm{f}=\bm{g}$, whereas the present study used an acceleration signal composed of the sum of the inertial and gravitational accelerations, i.\,e.,\ $\bm{f}=\bm{g}+\bm{a}$.
\subsection{Predicted MSI}
The conventional 6 DoF SVC and 6 DoF SVC-VV models were
used to predict MSI from IMU and camera data from 27 participants (Fig.~\ref{fig:MSI_resuts}).
As previously explained, MSI and MISC are different indicators for evaluating motion sickness, i.\,e.,\ MSI indicates the percentage of participants experiencing vomiting when exposed to motion for a certain time; MISC indicates each participant's subjective assessment of the severity of motion sickness.
Compared to the MISC reported in Fig.~\ref{fig:MISC_TS_all}, the predicted MSI cannot represent the individual differences in motion sickness susceptibility.
In the between-group design,
the results in Tables~\ref{tab:ANOVA_Mean_MSI} and~\ref{tab:ANOVA_Max_MSI} show that the mean and maximum of the predicted MSI using the two models were not significantly different between the groups (that is, the order of LAD and WAD), which is consistent with the MISC results (see Table~\ref{tab:MISC_ANOVA}).
In the within-group design,
the mean and maximum of the predicted MSI using the 6 DoF SVC model showed no significant difference between the LAD and WAD conditions.
This is inconsistent with the MISC results because the MISC in WAD is significantly higher than that in LAD (see Table~\ref{tab:MISC_ANOVA}).
Contrarily, the proposed 6 DoF SVC-VV model predicted a significantly higher MSI under WAD than under LAD with the same trend as the MISC reported by the participants (see Fig.~\ref{fig:MISC_mean_max} and Table~\ref{tab:MISC_ANOVA}).
This implies that adding a visual vertical part to the conventional vestibular-based 6~DoF SVC model facilitates describing the difference in motion sickness under different visual conditions.
In summary, the results obtained in this study imply that the proposed 6~DoF SVC-VV model can describe the difference in the severity of motion sickness under different vertical visual conditions, such as the increased motion sickness when working with a tablet device while riding the APMV, whereas the conventional 6~DoF SVC model cannot.
\subsection{Comparison between the predicted MSI and the reported MISC}
In this subsection, we discuss the performance of the predicted MSI by comparing it with the reported MISC.
Figure~\ref{fig:TP_FP} shows the confusion matrices for the mean and maximum MSI predicted by the 6 DoF SVC and 6 DoF SVC-VV models based on the mean and maximum MISC reported by 21 participants (excluding the six participants who did not have any motion sickness symptoms).
For the mean MSI (Fig.~\ref{fig:TP_FP}~(a) and (b)), TP=12 when using the 6~DoF SVC-VV model was higher than TP=7 when using the 6~DoF SVC model.
Similarly, for the maximum MSI, TP=11 when using the 6~DoF SVC-VV model was also higher than TP=8 when using the 6~DoF SVC model.
However, for both the mean and maximum MSI, the number of correctly predicted negative cases (TN) by the 6~DoF SVC-VV model was smaller than that by the 6~DoF SVC model.
This may be caused by an excessive feedback gain $K_{vvc}$ of the visual vertical conflict.
We will address this issue in future studies by adjusting the balance of parameters $K_{vc}$ and $K_{vvc}$.
Based on the confusion matrices above, the accuracy, precision, recall, and F1 score for the mean and maximum MSI predicted by the 6 DoF SVC and 6 DoF SVC-VV models are shown in Fig.~\ref{fig:F1_score}.
The accuracy and F1 score are the overall evaluations of the prediction results.
The accuracy and F1 scores of the 6 DoF SVC-VV model were higher than those of the 6 DoF SVC model for both the mean and maximum values of the MSI.
Moreover, precision and recall are sub-scores of the F1 score.
For both the mean and maximum of the MSI, the precision scores of the 6 DoF SVC-VV and 6 DoF SVC models were similar; however, the recall scores of the 6 DoF SVC-VV model were higher than those of the 6 DoF SVC model.
Based on Table~\ref{tab:evaluation_index}, both models had comparable performance in predicting correct positive results over all positive predictions; however, the 6 DoF SVC-VV model performed better in predicting correct positive results over all positive true results.
\subsection{Limitations}
The visual vertical prediction method proposed in this study can only predict the visual vertical direction in a 2D plane.
Therefore, the visual vertical direction changes caused by head rotations along the pitch axis cannot yet be calculated.
The parameters in Table~\ref{tab:parameter} were obtained from the conventional 6~DoF SVC model of a previous study~\citep{Inoue2022}, which did not include visual-vestibular interaction.
Therefore, these parameters may not be optimal for the proposed 6~DoF~SVC-VV model.
All participants were in their 20s.
A broader demographic survey is necessary, particularly for the elderly, who are potential wheelchair users.
Furthermore, the proposed 6~DoF~SVC-VV model cannot be used to represent the individual traits of motion sickness susceptibility because this model is designed to predict MSI.
Particularly, it is difficult to apply this model to people who are extremely insensitive to motion sickness.
\subsection{Future works}
The visual vertical prediction method will be improved to extract 3D visual vertical features from image data.
This will further help the 6 DoF SVC-VV model represent the change in the visual vertical direction caused by head rotations about the pitch axis.
Although the 6 DoF SVC-VV model uses the parameters optimized by~\citet{Inoue2022}, a new parameter $K_{vvc}$, which is the feedback gain of the visual vertical conflict, has not yet been optimized.
We will address this issue in future studies by adjusting the balance of parameters $K_{vc}$ and $K_{vvc}$.
Furthermore, because the proposed 6~DoF~SVC-VV model cannot be used to represent individual traits of motion sickness susceptibility, we will develop a new MISC prediction model that builds on the 6~DoF~SVC-VV model, following the approach proposed in~\citep{irmak2021objective}.
Moreover, we consider that working with a tablet device while riding an APMV hinders not only the passengers' visual vertical perception but also their motion perception through dynamic vision.
Therefore, the integration of the proposed 6~DoF~SVC-VV model with the visual flow, referring to \citet{wada2020computational}, is an important future direction.
\section{CONCLUSION}
To model motion sickness in passengers under different visual conditions while using the APMV, this study proposes a new computational model of SVC theory for predicting motion sickness that considers the interactions between vertical perception from the visual and vestibular systems.
We added a visual vertical perception module to the 6~DoF SVC model of~\citep{Inoue2022}.
To this end, we also proposed a visual vertical prediction method based on an image-processing technique.
In the experiment, 27 participants experienced APMV with two visual conditions: looking ahead (LAD) and working with a tablet device (WAD).
Of these, 21 participants reported motion sickness symptoms, particularly in the WAD condition.
Furthermore, based on the MISC reported by the participants, we found that the proposed 6~DoF~SVC-VV model more accurately predicted MSI than the conventional 6~DoF SVC model without visual input when the visual vertical direction and direction of gravitational acceleration differed, such as when participants worked with a tablet device while using an APMV.
\section*{ACKNOWLEDGMENTS}
This work was supported by JSPS KAKENHI Grant Numbers 21K18308 and 21H01296, Japan.
\section*{CRediT authorship contribution statement}
\textbf{Hailong Liu}: Conceptualization, Investigation, Resources, Methodology, Validation, Formal analysis, Visualization, Writing - Original Draft, Writing - Review \& Editing.
\textbf{Shota Inoue}: Conceptualization, Methodology, Writing - Review \& Editing.
\textbf{Takahiro Wada}: Conceptualization, Methodology, Writing - Review \& Editing, Project administration, Funding acquisition.
\section*{Declaration of Competing Interest}
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
\bibliographystyle{cas-model2-names}
|
{
"arxiv_id": "2302.08610",
"language": "en",
"timestamp": "2023-02-20T02:03:14",
"url": "https://arxiv.org/abs/2302.08610",
"yymm": "2302"
} | \section{Introduction}
\label{sec: introduction}
In recent years many different nonlocal inverse problems have been studied. The prototypical example is the inverse problem for the fractional Schr\"odinger operator $(-\Delta)^s+q$, where the measurements are encoded in the (exterior) Dirichlet to Neumann (DN) map $f\mapsto\Lambda_qf=(-\Delta)^su_f|_{\Omega_e}$. Here $\Omega_e={\mathbb R}^n\setminus\overline{\Omega}$ is the exterior of a smoothly bounded domain $\Omega\subset{\mathbb R}^n$ and $0<s<1$. This problem, nowadays called \emph{fractional Calder\'on problem}, was first considered for $q\in L^{\infty}(\Omega)$ in \cite{GSU20} and initiated many of the later developments. The classical proof of the (interior) uniqueness for the fractional Calder\'on problem, that is of the assertion that $\Lambda_{q_1}=\Lambda_{q_2}$ implies $q_1=q_2$ in $\Omega$, relies on the Alessandrini identity, the unique continuation principle (UCP) of the fractional Laplacian and the Runge approximation. Following a similar approach, in the works \cite{bhattacharyya2021inverse,CMR20,CMRU20,GLX,CL2019determining,CLL2017simultaneously,cekic2020calderon,feizmohammadi2021fractional,harrach2017nonlocal-monotonicity,harrach2020monotonicity,GRSU18,GU2021calder,ghosh2021non,lin2020monotonicity,LL2020inverse,LL2022inverse,LLR2019calder,LLU2022calder,KLW2021calder,RS17,ruland2018exponential,RZ2022unboundedFracCald}, it has been shown that one can uniquely recover lower order, local perturbations of many different nonlocal models.
On the other hand, the author together with different collaborators considered in \cite{RZ2022unboundedFracCald,counterexamples,RGZ2022GlobalUniqueness,RZ2022LowReg,StabilityFracCond} the \emph{inverse fractional conductivity problem}. The main objective in this problem is to uniquely determine the conductivity $\gamma\colon{\mathbb R}^n\to{\mathbb R}_+$ from the DN map $f\mapsto \Lambda_{\gamma}f$ related to the Dirichlet problem
\begin{equation}
\label{eq: frac cond eq intro}
\begin{split}
L^s_\gamma u &= 0 \enspace\text{in}\enspace\Omega,\\
u &= f \enspace\text{in}\enspace\Omega_e.
\end{split}
\end{equation}
Here $L_{\gamma}^s$ denotes the \emph{fractional conductivity operator}, which can be strongly defined via
\begin{equation}
\label{eq: frac cond operator intro}
L_{\gamma}^su(x)=C_{n,s}\gamma^{1/2}(x)\,\text{p.v.}\int_{{\mathbb R}^n}\gamma^{1/2}(y)\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy.
\end{equation}
In this formula, $C_{n,s}>0$ is some positive constant and $\text{p.v.}$ denotes the Cauchy principal value. More concretely, in the aforementioned articles it has been shown that a conductivity $\gamma$ with background deviation $m_{\gamma}=\gamma^{1/2}-1$ in $H^{s,n/s}({\mathbb R}^n)$ can be uniquely recovered from the DN data, that in the measurement set the conductivity can be explicitly reconstructed with a Lipschitz modulus of continuity, and that on smooth, bounded domains the full data inverse fractional conductivity problem is, under suitable a priori assumptions, logarithmically unstable.
Let us note that as $s$ converges to 1, the fractional conductivity operator $L_{\gamma}^s$ becomes the \emph{conductivity operator} $L_{\gamma}u=-\text{div}(\gamma\nabla u)$. Hence the above inverse problem can be considered as a nonlocal analogue of the classical \emph{Calder\'on problem} \cite{calderon}, that is, the problem of uniquely recovering the conductivity $\gamma\colon\overline{\Omega}\to{\mathbb R}_+$ from the DN map $f\mapsto\Lambda_{\gamma}f=\gamma\partial_{\nu}u_f|_{\partial\Omega}$, where $u_f\in H^1(\Omega)$ is the unique solution to the Dirichlet problem of the \emph{conductivity equation}
\begin{equation}
\label{calderon prob intro}
\begin{split}
L_\gamma u &= 0 \,\enspace\text{in}\enspace\Omega,\\
u &= f \enspace\text{on}\enspace\partial\Omega
\end{split}
\end{equation}
and $\nu$ denotes the outward pointing unit normal vector of the smoothly bounded domain $\Omega\subset{\mathbb R}^n$. The mathematical investigation of the inverse conductivity problem dates at least back to the work \cite{Langer33-calderon-halfspace} of Langer. The uniqueness proof of the Calderón problem is based on the \emph{Liouville reduction}, which allows one to reduce this inverse problem for a variable coefficient operator to the inverse problem for the Schrödinger equation $-\Delta+q$, on the construction of complex geometric optics (CGO) solutions \cite{SU87}, and on a boundary determination result \cite{KV84}. The first uniqueness proof for the inverse fractional conductivity problem also relied on a reduction of the problem via a \emph{fractional Liouville reduction} to the inverse problem for the fractional Schr\"odinger equation, and the boundary determination of Kohn and Vogelius was replaced by an exterior determination result (cf.~\cite{RGZ2022GlobalUniqueness} for the case $m_{\gamma}\in H^{2s,n/2s}({\mathbb R}^n)$ and \cite{RZ2022LowReg} for $m_{\gamma}\in H^{s,n/s}({\mathbb R}^n)$). Since the UCP and the Runge approximation are much stronger for nonlocal operators than for local ones, which in turn relies on the fact that solutions to $(-\Delta)^s+q$ are much less rigid than the ones to the local Schr\"odinger equation $-\Delta+q$, the uniqueness for the nonlocal Schr\"odinger equation can be established without the construction of CGO solutions. In fact, it is an open problem whether these exist for the fractional Schr\"odinger equation.
\subsection{The optical tomography equation}
Recently, in the articles \cite{OptTomHarrach,SimDetDiffAbs}, it has been investigated whether the diffusion $\gamma$ and the absorption coefficient $q$ in the \emph{optical tomography equation}
\begin{equation}
\label{eq: opt tom prob}
\begin{split}
L_{\gamma}u +qu &= F \,\enspace\text{in}\enspace\Omega
\end{split}
\end{equation}
can be uniquely recovered from the partial Cauchy data $(u|_{\Gamma},\gamma\partial_{\nu}u|_{\Gamma})$, where $\Omega\subset{\mathbb R}^n$ is a bounded domain and $\Gamma \subset\partial\Omega$ is an arbitrarily small region of the boundary. This problem arises in the (stationary) diffusion based optical tomography and therefore we refer to \eqref{eq: opt tom prob} as the optical tomography equation. Generally speaking, in optical tomography one uses low energy visible or near infrared light (wavelength $\lambda\sim 700-1000$nm) to test highly scattering media (as a tissue sample of a human body) and wants to reconstruct the optical properties within the sample by intensity measurements on the boundary. In a possible experimental situation, light is sent via optical fibres to the surface of the medium under investigation and the transilluminated light is measured by some detecting fibres.
The starting point to describe the radiation propagation in highly scattering media is the radiative transfer equation (Boltzmann equation)
\begin{equation}
\label{eq: radiation equation}
\begin{split}
&\partial_tI(x,t,v)+v\cdot\nabla I(x,t,v)+(\mu_a+\mu_s)I(x,t,v)\\
&\quad=\mu_s\int_{S^{n-1}}f(v,v')I(x,t,v')\,d\sigma(v')+G(x,t,v),
\end{split}
\end{equation}
which describes the change of the radiance $I=I(x,t,v)$ at spacetime point $(x,t)$ into the direction $v\in S^{n-1}=\{x\,;\, |x|=1\}$. Here, we set $c=1$ (speed of light) and the other quantities have the following physical meaning:\\
\begin{tabular}{ c l }
$\mu_a$ & absorption coefficient \\
$\mu_s$ & scattering coefficient \\
$f(v,v')$ & scattering phase function - probability that the wave \\
& incident in direction $v'$ is scattered into direction $v$ \\
$G$ & isotropic source
\end{tabular}\\
In the diffusion approximation, as explained in detail in \cite{OptTomArridge} or \cite[Appendix]{schweiger1995finite}, one gets equation \eqref{eq: opt tom prob}, where the quantities are related as follows:\\
\begin{tabular}{ c l }
$u$ & photon density - $u(x,t)=\int_{S^{n-1}}I(x,t,v')\,d\sigma(v')$ \\
$\gamma$ & diffusion coefficient of the medium - $\gamma=[3(\mu_a+\mu'_s)]^{-1}$ \\
& with $\mu'_s$ being the reduced scattering coefficient \\
$q$ & absorption coefficient $\mu_a$ \\
$F$ & isotropic source
\end{tabular}\\
and
$-\gamma\partial_{\nu}u|_{\Gamma}$ describes the normal photon current (or exitance) across $\Gamma\subset\partial\Omega$. Let us remark that in the diffusion approximation one assumes $\mu_a\ll \mu_s$ and that the light propagation is weakly anisotropic, which is incorporated in $\mu'_s$. For further discussion on this classical model, we refer to the above-cited articles and \cite{gibson2005recent}.
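As a purely illustrative numerical example, the diffusion-approximation relations above can be evaluated directly; the coefficient values below are assumptions chosen for this sketch and are not taken from the cited references.
\begin{verbatim}
# Illustration of the diffusion-approximation relations (assumed example values).
mu_a = 0.01        # absorption coefficient mu_a          [1/mm]
mu_s_prime = 1.0   # reduced scattering coefficient mu_s'  [1/mm]

gamma = 1.0 / (3.0 * (mu_a + mu_s_prime))  # diffusion coefficient gamma = [3(mu_a + mu_s')]^(-1)
q = mu_a                                   # absorption coefficient entering the equation

print(f"gamma = {gamma:.4f} mm, q = {q} 1/mm")  # gamma is approximately 0.33 mm
\end{verbatim}
The chosen values respect the smallness assumption $\mu_a\ll\mu_s'\leq\mu_s$ mentioned above.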
\subsubsection{Non-uniqueness in diffusion based optical tomography}
\label{subsubsec: nonunique OT}
In \cite{arridge1998nonuniqueness}, Arridge and Lionheart constructed counterexamples to uniqueness for the inverse problem of the diffusion based optical tomography equation \eqref{eq: opt tom prob}. They consider a smoothly bounded domain $\Omega\subset{\mathbb R}^n$ containing a compact subdomain $\Omega_0\Subset \Omega$ such that the isotropic source is supported in $\Omega_1\vcentcolon = \Omega\setminus\overline{\Omega}_0$. Then they observe that if the diffusion coefficient $\gamma$ is sufficiently regular, the optical tomography equation \eqref{eq: opt tom prob} is reduced via the Liouville reduction to
\begin{equation}
\label{eq: reduced opt tom prob}
\begin{split}
-\Delta v +\eta v &= \frac{F}{\gamma^{1/2}} \,\enspace\text{in}\enspace\Omega
\end{split}
\quad\text{with}\quad \eta\vcentcolon = \frac{\Delta\gamma^{1/2}}{\gamma^{1/2}}+\frac{q}{\gamma},
\end{equation}
where $v=\gamma^{1/2}u$. Now, one can change the coefficients $(\gamma,q)$ to
\begin{equation}
\label{eq: equivalent coeff}
\widetilde{\gamma}\vcentcolon = \gamma+\gamma_0,\quad \widetilde{q}\vcentcolon = q+q_0\quad\text{and}\quad\widetilde{\eta}\vcentcolon=\frac{\Delta\widetilde{\gamma}^{1/2}}{\widetilde{\gamma}^{1/2}}+\frac{\widetilde{q}}{\widetilde{\gamma}},
\end{equation}
where these new parameters satisfy
\begin{enumerate}[(i)]
\item\label{cond 1 non} $\gamma_0\geq 0$ with $\gamma_0|_{\Omega_1}=0$
\item\label{cond 2 non} and $\widetilde{\eta}=\eta$ in $\Omega$.
\end{enumerate}
The latter condition means nothing else than
\begin{equation}
\label{eq: effective potentials}
\frac{\Delta(\gamma+\gamma_0)^{1/2}}{(\gamma+\gamma_0)^{1/2}}+\frac{q+q_0}{\gamma+\gamma_0}=\frac{\Delta\gamma^{1/2}}{\gamma^{1/2}}+\frac{q}{\gamma}\quad\text{in}\quad\Omega.
\end{equation}
Hence, given $\gamma_0$, this relation can always be used to calculate $q_0$ via
\begin{equation}
\label{eq: calculation of potential perturb}
q_0=(\gamma+\gamma_0)\left(\frac{\Delta\gamma^{1/2}}{\gamma^{1/2}}-\frac{\Delta(\gamma+\gamma_0)^{1/2}}{(\gamma+\gamma_0)^{1/2}}+\frac{q}{\gamma}\right)-q.
\end{equation}
As the transformations \eqref{eq: equivalent coeff} under the conditions \ref{cond 1 non}, \ref{cond 2 non} leave the Dirichlet and Neumann data of solutions to \eqref{eq: reduced opt tom prob} invariant, this leads to the desired counterexamples.
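As a sanity check, relation \eqref{eq: calculation of potential perturb} can be verified symbolically. The following minimal one-dimensional sketch, with hypothetical smooth coefficients chosen only for illustration, confirms that the perturbed pair $(\widetilde{\gamma},\widetilde{q})$ produces the same effective potential, i.e.\ $\widetilde{\eta}=\eta$.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x', real=True)

# Hypothetical smooth 1D coefficients (assumptions of this sketch only).
gamma  = 2 + sp.sin(x)**2      # original diffusion, uniformly elliptic
q      = 1 + x**2              # original absorption
gamma0 = sp.exp(-x**2)         # nonnegative diffusion perturbation gamma_0

def eta(g, p):
    # Effective potential of the Liouville-reduced equation:
    # Delta(g^(1/2)) / g^(1/2) + p / g, with Delta = d^2/dx^2 in 1D.
    return sp.diff(sp.sqrt(g), x, 2) / sp.sqrt(g) + p / g

g_tilde = gamma + gamma0
# q_0 computed from the perturbation formula for the potential:
q0 = g_tilde * (sp.diff(sp.sqrt(gamma), x, 2) / sp.sqrt(gamma)
                - sp.diff(sp.sqrt(g_tilde), x, 2) / sp.sqrt(g_tilde)
                + q / gamma) - q

# The difference of the two effective potentials should simplify to zero.
print(sp.simplify(eta(g_tilde, q + q0) - eta(gamma, q)))  # expected output: 0
\end{verbatim}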
\subsubsection{Uniqueness in diffusion based optical tomography}
In \cite{OptTomHarrach,SimDetDiffAbs}, Harrach considered in more detail the discrepancy between the counterexamples of the last section and the positive experimental results in \cite[Section~3.4.3]{gibson2005recent} on recovering $\gamma$ and $q$ simultaneously. In these works it is established that uniqueness in the inverse problem for the optical tomography equation is obtained when the diffusion $\gamma$ is piecewise constant and the absorption coefficient is piecewise analytic. The main tool to obtain this result is the technique of localized potentials (see~\cite{LocPotential}), which are solutions of \eqref{eq: opt tom prob} that are large on a particular subset but otherwise small. The use of special singular solutions to prove uniqueness in inverse problems for (local or nonlocal) PDEs has become a popular technique in recent years (see for example \cite{KV84,KV85,Alessandrini-singular,Nachman1996GlobalUniqueness,SU87} for local PDEs and \cite{RGZ2022GlobalUniqueness,RZ2022LowReg,LRZ22,KLZ22FracpLap} for nonlocal PDEs).
\subsection{Nonlocal optical tomography equation and main results}
The main goal of this article is to study a nonlocal variant of the previously introduced inverse problem for the optical tomography equation. More concretely, we consider the \emph{nonlocal optical tomography equation}
\begin{equation}
\label{eq: nonlocal tomography equation intro}
L_{\gamma}^su+qu=0\quad\text{in}\quad\Omega,
\end{equation}
where $\Omega\subset{\mathbb R}^n$ is a domain bounded in one direction, $0<s<1$, $\gamma\colon {\mathbb R}^n\to{\mathbb R}_+$ is a diffusion coefficient, $q\colon{\mathbb R}^n\to{\mathbb R}$ an absorption coefficient (also called a potential) and $L_{\gamma}^s$ the variable coefficient nonlocal operator defined in \eqref{eq: frac cond operator intro}. Then we ask:
\begin{question}
\label{question uniqueness}
Let $W_1,W_2\subset \Omega_e$ be two measurement sets. Under what conditions does the DN map $C_c^{\infty}(W_1)\ni f\mapsto \Lambda_{\gamma,q}f|_{W_2}$ related to \eqref{eq: nonlocal tomography equation intro} uniquely determine the coefficients $\gamma$ and $q$?
\end{question}
By \cite[Theorem~1.8]{RZ2022LowReg}, we know that the measurement sets need to satisfy $W_1\cap W_2\neq\emptyset$ and hence, we consider the setup illustrated in Figure~\ref{figure 1}.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}
\filldraw[color=blue!50, fill=blue!5, xshift=11cm, yshift=1.5cm] (3,-2.5) ellipse (0.9 and 0.9);
\node[xshift=11cm, yshift=1.5cm] at (3,-2.5) {$\raisebox{-.35\baselineskip}{\ensuremath{\Lambda_{\gamma,q}f|_{W_2}}}$};
\filldraw[color=green!50, fill=green!5, xshift=11cm, yshift=0.1cm, opacity=0.8] (3,-2.5) ellipse (1.3 and 0.75);
\node[xshift=11cm, yshift=0.1cm] at (3,-2.5) {$\raisebox{-.35\baselineskip}{\ensuremath{f\in C_c^{\infty}(W_1)}}$};
\filldraw [color=orange!80, fill = orange!5, xshift=8cm, yshift=-2cm,opacity=0.8] plot [smooth cycle] coordinates {(-1,0.9) (3,1.5) (4.5,-0.5) (3,-1) (-1.5,-0.25)};
\node[xshift=3cm] at (6.3,-1.75) {$\raisebox{-.35\baselineskip}{\ensuremath{L_{\gamma}^su+qu=0\enspace\text{in}\enspace\Omega}}$};
\end{tikzpicture}
\caption{\begin{small} Here, $\Omega$ represents the scattering medium, $\gamma$ and $q$ the diffusion and absorption coefficients, $f$ a light pulse in $W_1$, and $\Lambda_{\gamma,q}f|_{W_2}$ the nonlocal photon current in $W_2$.\end{small}}
\label{figure 1}
\end{figure}
Moreover, motivated by the counterexamples in Section~ \ref{subsubsec: nonunique OT}, we expect that the potentials $q_1,q_2$ should coincide in the measurement sets $W_1,W_2\subset\Omega_e$. Indeed, under slightly weaker assumptions we establish that the DN map $\Lambda_{\gamma,q}$ uniquely determines the coefficients $\gamma$ and $q$. More precisely, we will prove in Section~\ref{sec: inverse problem} the following result:
\begin{theorem}[Global uniqueness]
\label{main thm}
Let $0 < s < \min(1,n/2)$, suppose $\Omega\subset {\mathbb R}^n$ is a domain bounded in one direction and let $W_1,W_2\subset \Omega_e$ be two non-disjoint measurement sets. Assume that the diffusions $\gamma_1, \gamma_2\in L^{\infty}({\mathbb R}^n)$ with background deviations $m_{\gamma_1},m_{\gamma_2}\in H^{s,n/s}({\mathbb R}^n)$ and potentials $q_1,q_2\in \distr({\mathbb R}^n)$ satisfy
\begin{enumerate}[(i)]
\item\label{uniform ellipticity diffusions} $\gamma_1,\gamma_2$ are uniformly elliptic with lower bound $\gamma_0>0$,
\item\label{continuity diffusions} $\gamma_1, \gamma_2$ are a.e. continuous in $W_1\cap W_2$,
\item\label{integrability potentials} $q_1,q_2\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})\cap L^p_{loc}(W_1\cap W_2)$ for $\frac{n}{2s}< p\leq \infty$
\item\label{equal potentials in measurement sets} and $q_1|_{W_1\cap W_2}=q_2|_{W_1\cap W_2}$.
\end{enumerate}
If $\Lambda_{\gamma_1,q_1}f|_{W_2}=\Lambda_{\gamma_2,q_2}f|_{W_2}$ for all $f\in C_c^{\infty}(W_1)$, then there holds $\gamma_1=\gamma_2$ in ${\mathbb R}^n$ and $q_1=q_2$ in $\Omega$.
\end{theorem}
\begin{remark}
\label{remark: def of constant}
In the above theorem and throughout this article, we set $\delta_0\vcentcolon =2\max(1,C_{opt})$, where $C_{opt}=C_{opt}(n,s,\Omega)>0$ is the optimal fractional Poincar\'e constant defined via
\begin{equation}
\label{eq: optimal fractional Poincare constant}
C_{opt}^{-1}=\inf_{0\neq u\in\widetilde{H}^s(\Omega)}\frac{[u]^2_{H^s({\mathbb R}^n)}}{\|u\|_{L^2({\mathbb R}^n)}^2}<\infty
\end{equation}
(see~Theorem~\ref{thm: Poinc Unbounded Doms}).
\end{remark}
\begin{remark}
Let us note that when we change $q$ away from $\Omega$ and the measurement sets $W_1,W_2$, the DN data $C_c^{\infty}(W_1)\ni f\mapsto \Lambda_{\gamma,q}f|_{W_2}$ remain the same. Therefore, in the above theorem we can only expect uniqueness of the potential in $\Omega$.
\end{remark}
Next, let us discuss the assumption that the potentials $q_1,q_2$ coincide in $W=W_1\cap W_2$, where $W_1,W_2\subset\Omega_e$ are two non-disjoint measurement sets. First of all, one can observe that the proofs given in Sections~\ref{subsec: exterior reconstruction} and~\ref{subsec: determination of diffusion coeff} still work under the seemingly weaker assumption $W\cap \text{int}(\{q_1=q_2\})\neq\emptyset$. Hence, one can again conclude that $\gamma_1=\gamma_2$ in ${\mathbb R}^n$. Now, the UCP of the fractional conductivity operator $L_{\gamma}^s$ (see~Theorem~\ref{thm: uniqueness q}) and \cite[Corollary~2.7]{RZ2022unboundedFracCald} show that $q_1=q_2$ in $W$. Therefore, if the DN maps coincide, then the assumption $W\cap \text{int}(\{q_1=q_2\})\neq\emptyset$ is as strong as the assumption $q_1=q_2$ in $W$. This leads us to the following question:
\begin{question}
\label{question non-uniqueness}
For a given measurement set $W\subset\Omega_e$, can one find two distinct pairs of diffusion and absorption coefficients $(\gamma_1,q_1),(\gamma_2,q_2)$ satisfying the conditions \ref{uniform ellipticity diffusions}--\ref{integrability potentials} in Theorem~\ref{main thm} that generate the same DN data, i.e.\ $\Lambda_{\gamma_1,q_1}f|_{W}=\Lambda_{\gamma_2,q_2}f|_{W}$ for all $f\in C_c^{\infty}(W)$, but $q_1\not\equiv q_2$ in $W$?
\end{question}
We establish the following result:
\begin{theorem}[Non-uniqueness]
\label{thm: non uniqueness}
Let $0 < s < \min(1,n/2)$, suppose $\Omega\subset {\mathbb R}^n$ is a domain bounded in one direction and let $W\subset \Omega_e$ be a measurement set. Then there exist two different pairs $(\gamma_1,q_1)$ and $(\gamma_2,q_2)$ satisfying $\gamma_1,\gamma_2\in L^{\infty}({\mathbb R}^n)$, $m_{\gamma_1},m_{\gamma_2}\in H^{s,n/s}({\mathbb R}^n)$, \ref{uniform ellipticity diffusions}--\ref{integrability potentials} of Theorem~\ref{main thm} and $\left.\Lambda_{\gamma_1,q_1}f\right|_{W}=\left.\Lambda_{\gamma_2,q_2}f\right|_{W}$ for all $f\in C_c^{\infty}(W)$, but there holds $q_1(x)\neq q_2(x)$ for all $x\in W$.
\end{theorem}
Finally, let us note that whether uniqueness or non-uniqueness holds in the general case where $q_1\not\equiv q_2$ on $W$ but $W\cap \{q_1=q_2\}$ has no interior points is not answered by the above results. In fact, if $q_1,q_2$ are arbitrary potentials and the assumption $\Lambda_{\gamma_1,q_1}f|_{W}=\Lambda_{\gamma_2,q_2}f|_{W}$ for all $f\in C_c^{\infty}(W)$ implies $\gamma_1=\gamma_2$ in ${\mathbb R}^n$, then \cite[Corollary~2.7]{RZ2022unboundedFracCald} again shows $q_1=q_2$ in $W$. Hence, if one wants to establish uniqueness also for potentials $q_1,q_2\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$ satisfying $q_1\not\equiv q_2$ on $W$ and $W\cap \text{int}(\{q_1=q_2\})=\emptyset$, one would need to come up with a proof which, unlike the one given in this article, does not rely on the separate determination of the coefficients.
\section{Preliminaries}
\label{sec: Preliminaries}
Throughout this article $\Omega\subset {\mathbb R}^n$ is always an open set and the space dimension $n$ is fixed but otherwise arbitrary.
\subsection{Fractional Laplacian and fractional conductivity operator}
We define for $s> 0$ the fractional Laplacian of order $s$ by
\begin{equation}
(-\Delta)^su\vcentcolon = \mathcal{F}^{-1}(|\xi|^{2s}\widehat{u}),
\end{equation}
whenever the right hand side is well-defined. Here, $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and the inverse Fourier transform, respectively. In this article we use the following convention
\[
\mathcal{F} u(\xi)\vcentcolon = \hat u(\xi) \vcentcolon = \int_{{\mathbb R}^n} u(x)e^{-ix \cdot \xi} \,dx.
\]
If $u\colon{\mathbb R}^n\to{\mathbb R}$ is sufficiently regular and $s\in(0,1)$, the fractional Laplacian can be calculated via
\begin{equation}
\label{eq: singular int def frac Lap}
\begin{split}
(-\Delta)^su(x)&=C_{n,s}\,\text{p.v.}\int_{{\mathbb R}^n}\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy\\
&= -\frac{C_{n,s}}{2}\int_{{\mathbb R}^n}\frac{u(x+y)+u(x-y)-2u(x)}{|y|^{n+2s}}\,dy,
\end{split}
\end{equation}
where $C_{n,s}>0$ is a normalization constant. Based on formula \eqref{eq: singular int def frac Lap}, we introduce the fractional conductivity operator $L_{\gamma}^s$ by
\begin{equation}
\label{eq: frac cond op}
L_{\gamma}^su(x)=C_{n,s}\gamma^{1/2}(x)\,\text{p.v.}\int_{{\mathbb R}^n}\gamma^{1/2}(y)\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy
\end{equation}
where $\gamma\colon{\mathbb R}^n\to{\mathbb R}_+$ is the so-called conductivity.
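Although no numerics are used in this article, the Fourier definition above is easy to illustrate. The following minimal sketch works on a periodic one-dimensional grid (an assumption of this toy example, which only mimics the whole-space operator) and applies the multiplier $|\xi|^{2s}$ via the FFT.
\begin{verbatim}
import numpy as np

def fractional_laplacian_1d(u, length, s):
    # Spectral (-Delta)^s of samples u on a uniform periodic grid of total
    # length `length': multiply the FFT of u by |xi|^(2s) and transform back.
    n = u.size
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular frequencies
    return np.real(np.fft.ifft(np.abs(xi) ** (2 * s) * np.fft.fft(u)))

# Example: u(x) = cos(x) on [0, 2*pi) contains only the frequency |xi| = 1,
# so (-Delta)^s u = cos(x) for every s.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.cos(x)
err = np.max(np.abs(fractional_laplacian_1d(u, 2.0 * np.pi, 0.5) - np.cos(x)))
print(err)  # of the order of machine precision
\end{verbatim}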
\subsection{Sobolev spaces}
The classical Sobolev spaces of order $k\in{\mathbb N}$ and integrability exponent $p\in [1,\infty]$ are denoted by $W^{k,p}(\Omega)$. Moreover, we let $W^{s,p}(\Omega)$ stand for the fractional Sobolev spaces, when $s\in {\mathbb R}_+\setminus{\mathbb N}$ and $1\leq p < \infty$. These spaces are also called Slobodeckij spaces or Gagliardo spaces. If $1\leq p<\infty$ and $s=k+\sigma$ with $k\in {\mathbb N}_0$, $0<\sigma<1$, then they are defined by
\[
W^{s,p}(\Omega)\vcentcolon =\{\,u\in W^{k,p}(\Omega)\,;\, [\partial^{\alpha} u]_{W^{\sigma,p}(\Omega)}<\infty\quad \forall |\alpha|=k\, \},
\]
where
\[
[u]_{W^{\sigma,p}(\Omega)}\vcentcolon =\left(\int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^p}{|x-y|^{n+\sigma p}}\,dxdy\right)^{1/p}
\]
is the so-called Gagliardo seminorm. The Slobodeckij spaces are naturally endowed with the norm
\[
\|u\|_{W^{s,p}(\Omega)}\vcentcolon =\left(\|u\|_{W^{k,p}(\Omega)}^p+\sum_{|\alpha|=k}[\partial^{\alpha}u]_{W^{\sigma,p}(\Omega)}^p\right)^{1/p}.
\]
We define the Bessel potential space $H^{s,p}({\mathbb R}^n)$ for $1\leq p<\infty$, $s\in{\mathbb R}$ by
\begin{equation}
\label{eq: Bessel pot spaces}
H^{s,p}({\mathbb R}^n) \vcentcolon = \{ u \in \mathscr{S}^{\prime}({\mathbb R}^n)\,;\, \vev{D}^su \in L^p({\mathbb R}^n)\},
\end{equation}
which we endow with the norm $\norm{u}_{H^{s,p}({\mathbb R}^n)} \vcentcolon = \norm{\vev{D}^su}_{L^p({\mathbb R}^n)}$. Here $\mathscr{S}^{\prime}({\mathbb R}^n)$ denotes the space of tempered distributions, which is the dual of the space of Schwartz functions $\mathscr{S}({\mathbb R}^n)$, and $\langle D\rangle^s$ is the Fourier multiplier with symbol $\langle\xi\rangle^s=(1+|\xi|^2)^{s/2}$. In the special case $p=2$ and $0<s<1$, the spaces $H^{s,2}({\mathbb R}^n)$ and $W^{s,2}({\mathbb R}^n)$ coincide and they are commonly denoted by $H^s({\mathbb R}^n)$.
More concretely, the Gagliardo seminorm $[\,\cdot\,]_{H^s({\mathbb R}^n)}$ and $\|\cdot\|_{\dot{H}^s({\mathbb R}^n)}$ are equivalent on $H^s({\mathbb R}^n)$ (cf.~\cite[Proposition~3.4]{DINEPV-hitchhiker-sobolev}).
Throughout this article, we will assume that $0<s<\min(1,n/2)$, so that $H^s({\mathbb R}^n)\hookrightarrow L^{2^*}({\mathbb R}^n)$, where $2^*$ is the critical Sobolev exponent given by $2^*=\frac{2n}{n-2s}$.
If $\Omega\subset {\mathbb R}^n$ is a given open set, then we define the local Bessel potential space
\begin{equation}\label{eq: local bessel pot spaces}
\widetilde{H}^{s,p}(\Omega) \vcentcolon = \mbox{closure of } C_c^\infty(\Omega) \mbox{ in } H^{s,p}({\mathbb R}^n).
\end{equation}
We close this section by introducing the notion of domains bounded in one direction and recalling the related fractional Poincar\'e inequalities. We say that an open set $\Omega_\infty \subset{\mathbb R}^n$ of the form $\Omega_\infty={\mathbb R}^{n-k}\times \omega$, where $n\geq k\geq 1$ and $\omega \subset {\mathbb R}^k$ is a bounded open set, is a \emph{cylindrical domain}. An open set $\Omega \subset {\mathbb R}^n$ is called \emph{bounded in one direction} if there exists a cylindrical domain $\Omega_\infty \subset {\mathbb R}^n$ and a rigid Euclidean motion $A(x) = Lx + x_0$, where $L$ is a linear isometry and $x_0 \in {\mathbb R}^n$, such that $\Omega \subset A\Omega_\infty$. Fractional Poincaré inequalities in Bessel potential spaces on domains bounded in one direction were recently studied in \cite{RZ2022unboundedFracCald}. In that article, an $L^p$ generalization of the following result is established:
\begin{theorem}[{Poincar\'e inequality, \cite[Theorem~2.2]{RZ2022unboundedFracCald}}]
\label{thm: Poinc Unbounded Doms} Let $\Omega\subset{\mathbb R}^n$ be an open set that is bounded in one direction and $0<s<1$. Then there exists $C(n,s,\Omega)>0$ such that
\begin{equation}
\label{eq: poincare on L1}
\|u\|^2_{L^2({\mathbb R}^n)}\leq C[u]_{H^s({\mathbb R}^n)}^2
\end{equation}
for all $u\in \widetilde{H}^{s}(\Omega)$.
\end{theorem}
\begin{remark}
Let us note that in \cite[Theorem~2.2]{RZ2022unboundedFracCald} the right-hand side of \eqref{eq: poincare on L1} is stated with the seminorm $\|u\|_{\dot{H}^s({\mathbb R}^n)}=\|(-\Delta)^{s/2}u\|_{L^{2}({\mathbb R}^n)}$, but, as already noted, for $H^s({\mathbb R}^n)$ functions these two expressions are equivalent.
\end{remark}
\subsection{Sobolev multiplier}
In this section we briefly introduce Sobolev multipliers between the energy spaces $H^s({\mathbb R}^n)$ and $H^{-s}({\mathbb R}^n)$; for more details we refer to the book \cite{MS-theory-of-sobolev-multipliers} of Maz'ya and Shaposhnikova.
Let $s,t\in{\mathbb R}$. If $f\in \distr({\mathbb R}^n)$ is a distribution, we say that $f\in M(H^s\rightarrow H^t)$ whenever the norm
\begin{equation}
\|f\|_{s,t} \vcentcolon = \sup \{\abs{\ip{f}{u v}} \,;\, u,v \in C_c^\infty(\mathbb R^n), \norm{u}_{H^s({\mathbb R}^n)} = \norm{v}_{H^{-t}({\mathbb R}^n)} =1 \}
\end{equation}
is finite.
In the special case $t=-s$, we write $\|\cdot\|_s$ instead of $\|\cdot\|_{s,-s}$. Note that for any $f\in M(H^s\rightarrow H^t)$ and $u,v \in C_c^\infty(\mathbb R^n)$, we have the multiplier estimate
\begin{equation}
\label{multiplier-estimate}
\abs{\ip{f}{uv}} \leq \|f\|_{s,t}\norm{u}_{H^s({\mathbb R}^n)} \norm{v}_{H^{-t}({\mathbb R}^n)}.
\end{equation}
By a density argument one easily sees that there is a unique continuous linear multiplication map $m_f \colon H^s({\mathbb R}^n) \to H^t({\mathbb R}^n)$ satisfying $m_f(u)=fu$ for all $u\in C_c^{\infty}({\mathbb R}^n)$. To simplify the notation we will write $fu$ instead of $m_f(u)$.
Finally, we define certain subclasses of Sobolev multipliers from $H^s({\mathbb R}^n)$ to $H^{-s}({\mathbb R}^n)$.
For all $\delta > 0$ and $0<s<1$, we define the following convex sets
\begin{equation}
\label{eq: special classes of Sobolev multipliers}
\begin{split}
M_{\delta}(H^s \to H^{-s}) &\vcentcolon = \{\,q \in M(H^s \to H^{-s}) \,;\,\|q\|_{s} < \delta\,\},\\
M_{+}(H^s \to H^{-s})& \vcentcolon =M(H^s\to H^{-s})\cap \distr_+({\mathbb R}^n),\\
M_{\delta,+}(H^s\to H^{-s})&\vcentcolon =M_{\delta}(H^s\to H^{-s})+M_{+}(H^s\to H^{-s}),
\end{split}
\end{equation}
where $\distr_+({\mathbb R}^n)$ denotes the non-negative distributions.
Note that by definition of the multiplication map $u\mapsto fu$ one has $\langle qu,u\rangle \geq 0$ for all $u\in H^s({\mathbb R}^n)$, whenever $q\in M_{+}(H^s \to H^{-s})$.
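A simple model case, recorded here only as an illustration of these definitions, is that every $q\in L^{n/2s}({\mathbb R}^n)$ belongs to $M(H^s\to H^{-s})$: by H\"older's inequality and the Sobolev embedding $H^s({\mathbb R}^n)\hookrightarrow L^{2^*}({\mathbb R}^n)$ one has
\[
|\langle q,uv\rangle|\leq \|q\|_{L^{n/2s}({\mathbb R}^n)}\|u\|_{L^{2^*}({\mathbb R}^n)}\|v\|_{L^{2^*}({\mathbb R}^n)}\leq C\|q\|_{L^{n/2s}({\mathbb R}^n)}\|u\|_{H^s({\mathbb R}^n)}\|v\|_{H^s({\mathbb R}^n)}
\]
for all $u,v\in C_c^{\infty}({\mathbb R}^n)$, since $\frac{2s}{n}+\frac{1}{2^*}+\frac{1}{2^*}=1$.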
\section{Well-posedness and DN map of forward problem}
\label{subsec: forward}
We start in Section~\ref{sec: basics} by recalling basic properties of the operator $L_{\gamma}^s$, like the fractional Liouville reduction, and then in Section~\ref{subsec: well-posedness and DN maps} we establish well-posedness results for the nonlocal optical tomography equation and the related fractional Schr\"odinger equation as well as introduce the associated DN maps.
\subsection{Basics on the fractional conductivity operator $L_{\gamma}^s$}
\label{sec: basics}
In this section, we recall several results related to the operator $L_{\gamma}^s$.
First, for any uniformly elliptic coefficient $\gamma\in L^{\infty}({\mathbb R}^n)$ and $0<s<1$, the operator $L_{\gamma}^s$ is weakly defined via the bilinear map $B_{\gamma}\colon H^s({\mathbb R}^n)\times H^s({\mathbb R}^n)\to{\mathbb R}$ with
\begin{equation}
\label{eq: bilinear frac cond eq}
B_{\gamma}(u, v)
\vcentcolon =\frac{C_{n,s}}{2}\int_{{\mathbb R}^{2n}}\gamma^{1/2}(x)\gamma^{1/2}(y)\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}\,dxdy
\end{equation}
for all $u,v\in H^s({\mathbb R}^n)$. Similarly, if $q\in M(H^s\to H^{-s})$, the bilinear map $B_q\colon H^s({\mathbb R}^n)\times H^s({\mathbb R}^n)\to {\mathbb R}$ representing the weak form of the fractional Schr\"odinger operator $(-\Delta)^s+q$ is defined via
\begin{equation}
\label{eq: schroedinger operator}
B_{q}(u,v)\vcentcolon = \langle (-\Delta)^{s/2}u,(-\Delta)^{s/2}v\rangle_{L^2({\mathbb R}^n)}+\langle qu,v\rangle
\end{equation}
for all $u,v\in H^s({\mathbb R}^n)$.
In \cite[Section~3]{RZ2022LowReg}, we showed that if the background deviation $m_{\gamma}=\gamma^{1/2}-1$ belongs to $H^{s,n/s}({\mathbb R}^n)$, then the \emph{fractional Liouville reduction} is still valid, which was first established in \cite{RZ2022unboundedFracCald} for conductivities having background deviation in $H^{2s,n/2s}({\mathbb R}^n)$ and hence $(-\Delta)^sm_{\gamma}\in L^{n/2s}({\mathbb R}^n)$. More precisely, we established the following results:
\begin{lemma}[{Fractional Liouville reduction}]
\label{Lemma: fractional Liouville reduction}
Let $0<s<\min(1,n/2)$, suppose $\Omega\subset{\mathbb R}^n$ is an open set and assume that the background deviation $m_{\gamma}=\gamma^{1/2}-1$ of the uniformly elliptic conductivity $\gamma\in L^{\infty}({\mathbb R}^n)$ belongs to $H^{s,n/s}({\mathbb R}^n)$. Then the following assertions hold:
\begin{enumerate}[(i)]
\item\label{estimate mathcal M} If $\mathcal{M}=m$ or $\frac{m}{m+1}$, then $\mathcal{M}\in L^{\infty}({\mathbb R}^n)\cap H^{s,n/s}({\mathbb R}^n)$ and one has the estimate
\begin{equation}
\label{eq: continuity estimate background}
\begin{split}
\|\mathcal{M}v\|_{H^s({\mathbb R}^n)}\leq C(\|\mathcal{M}\|_{L^{\infty}({\mathbb R}^n)}+\|\mathcal{M}\|_{H^{s,n/s}({\mathbb R}^n)})\|v\|_{H^s({\mathbb R}^n)}
\end{split}
\end{equation}
for all $v\in H^s({\mathbb R}^n)$ and some $C>0$. Moreover, if $u\in \widetilde{H}^s(\Omega)$, then there holds $\gamma^{\pm 1/2}u\in \widetilde{H}^s(\Omega)$
\item\label{potential frac Liouville reduction} The distribution $q_{\gamma}=-\frac{(-\Delta)^sm_{\gamma}}{\gamma^{1/2}}$, defined by
\[
\langle q_{\gamma},\varphi\rangle\vcentcolon =-\langle (-\Delta)^{s/2}m_{\gamma},(-\Delta)^{s/2}(\gamma^{-1/2}\varphi)\rangle_{L^2({\mathbb R}^n)}
\]
for all $\varphi\in C_c^{\infty}({\mathbb R}^n)$, belongs to $M(H^s\rightarrow H^{-s})$. Moreover, for all $u,\varphi \in H^s({\mathbb R}^n)$, we have
\begin{equation}
\langle q_{\gamma}u,\varphi\rangle = -\langle (-\Delta)^{s/2}m_{\gamma},(-\Delta)^{s/2}(\gamma^{-1/2}u\varphi)\rangle_{L^2({\mathbb R}^n)}
\end{equation}
satisfying the estimate
\begin{equation}
\label{eq: bilinear estimate}
\begin{split}
|\langle q_{\gamma}u,\varphi\rangle|&\leq C(1+\|m_{\gamma}\|_{L^{\infty}({\mathbb R}^n)}+\|m_{\gamma}\|_{H^{s,n/s}({\mathbb R}^n)})
\\ &\quad\quad \cdot \|m_{\gamma}\|_{H^{s,n/s}({\mathbb R}^n)}\|u\|_{H^s({\mathbb R}^n)}\|\varphi\|_{H^s({\mathbb R}^n)}.
\end{split}
\end{equation}
\item\label{liouville red identity} There holds $B_{\gamma}(u,\varphi)=B_{q_{\gamma}}(\gamma^{1/2}u,\gamma^{1/2}\varphi)$ for all $u,\varphi\in H^s({\mathbb R}^n)$, where $B_{q_{\gamma}}\colon H^s({\mathbb R}^n)\times H^s({\mathbb R}^n)\to {\mathbb R}$ is defined via \eqref{eq: schroedinger operator}.
\end{enumerate}
\end{lemma}
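At a purely formal level (ignoring all regularity issues), the identity in \ref{liouville red identity} can also be read off pointwise from the strong form \eqref{eq: frac cond op}: setting $v=\gamma^{1/2}u$ and splitting
\[
\gamma^{1/2}(y)\big(u(x)-u(y)\big)=\big(v(x)-v(y)\big)-u(x)\big(\gamma^{1/2}(x)-\gamma^{1/2}(y)\big),
\]
one obtains, using $(-\Delta)^s\gamma^{1/2}=(-\Delta)^sm_{\gamma}$,
\[
L_{\gamma}^su=\gamma^{1/2}\Big((-\Delta)^sv-\frac{(-\Delta)^sm_{\gamma}}{\gamma^{1/2}}\,v\Big)=\gamma^{1/2}\big((-\Delta)^sv+q_{\gamma}v\big),
\]
and hence $L_{\gamma}^su+qu=\gamma^{1/2}\big((-\Delta)^sv+(q_{\gamma}+q/\gamma)v\big)$, which is the strong form of the fractional Liouville reduction used below.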
\subsection{Well-posedness results and DN maps}
\label{subsec: well-posedness and DN maps}
First, let us introduce for a given uniformly elliptic function $\gamma\in L^{\infty}({\mathbb R}^n)$ and a potential $q\in M(H^s\to H^{-s})$ the bilinear map $B_{\gamma,q}\colon H^s({\mathbb R}^n)\times H^s({\mathbb R}^n)\to {\mathbb R}$ representing the weak form of the nonlocal optical tomography operator $L_{\gamma}^s+q$ via
\begin{equation}
B_{\gamma,q}(u,v)=B_{\gamma}(u,v)+\langle qu,v\rangle
\end{equation}
for all $u,v\in H^s({\mathbb R}^n)$. As usual we say that a function $u\in H^s({\mathbb R}^n)$ solves the Dirichlet problem
\begin{equation}
\begin{split}
L_{\gamma}^s u+qu&= F\quad\text{in}\quad\Omega,\\
u&= f\quad\text{in}\quad\Omega_e
\end{split}
\end{equation}
for a given function $f\in H^s({\mathbb R}^n)$ and $F\in (\widetilde{H}^s(\Omega))^*$ if there holds
\[
B_{\gamma,q}(u,\varphi)=\langle F,\varphi\rangle \enspace\text{for all}\enspace \varphi\in \widetilde{H}^s(\Omega)
\]
and $u-f\in \widetilde{H}^s(\Omega)$. We have the following well-posedness result for the nonlocal optical tomography equation.
\begin{theorem}[Well-posedness and DN map for nonlocal optical tomography equation]
\label{thm: well-posedness opt tom eq}
Let $\Omega\subset {\mathbb R}^n$ be an open set which is bounded in one direction and $0<s<1$. Moreover, assume that the uniformly elliptic diffusion $\gamma\in L^{\infty}({\mathbb R}^n)$ is bounded from below by $\gamma_0>0$ and the potential $q$ belongs to $M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$.
Then the following assertions hold:
\begin{enumerate}[(i)]
\item\label{item 1 well-posedness opt tom eq} For all $f\in X\vcentcolon = H^s({\mathbb R}^n)/\widetilde{H}^s(\Omega)$ there is a unique weak solution $u_f\in H^s({\mathbb R}^n)$ of the nonlocal optical tomography equation
\begin{equation}
\label{eq: nonlocal opt tomography problem}
\begin{split}
L_{\gamma}^su+qu&= 0\quad\text{in}\quad\Omega,\\
u&= f\quad\text{in}\quad\Omega_e.
\end{split}
\end{equation}
\item\label{item 2 DN map opt tom eq} The exterior DN map $\Lambda_{\gamma,q}\colon X\to X^*$ given by
\begin{equation}
\label{eq: DN map opt tom eq}
\begin{split}
\langle \Lambda_{\gamma,q}f,g\rangle \vcentcolon =B_{\gamma,q}(u_f,g),
\end{split}
\end{equation}
where $u_f\in H^s({\mathbb R}^n)$ is the unique solution to \eqref{eq: nonlocal opt tomography problem} with exterior value $f$, is a well-defined bounded linear map.
\end{enumerate}
\end{theorem}
\begin{remark}
In the above theorem and everywhere else in this article, we write $f$ instead of $[f]$ for elements of the trace space $X$. Let us note that on the right hand side of the formula \eqref{eq: DN map opt tom eq}, the function $g$ can be any representative of its equivalence class $[g]$.
\end{remark}
\begin{proof}
\ref{item 1 well-posedness opt tom eq}: First, let us note that the bilinear form $B_{\gamma}$ is continuous on $H^s({\mathbb R}^n)$ and that, by the multiplier estimate \eqref{multiplier-estimate}, any Sobolev multiplier $q\in M(H^s\to H^{-s})$ induces a continuous bilinear form on $H^s({\mathbb R}^n)$. Hence, $B_{\gamma,q}\colon H^s({\mathbb R}^n)\times H^s({\mathbb R}^n)\to {\mathbb R}$ is continuous. Moreover, as $q\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$ we may decompose $q$ as $q=q_1+q_2$, where $q_1\in M_{\gamma_0/\delta_0}(H^s\to H^{-s})$ and $q_2\in M_+(H^s\to H^{-s})$. Therefore, we can calculate
\begin{equation}
\label{eq: coercivity estimate potential q}
\begin{split}
B_{\gamma,q}(u,u)&\geq \gamma_0[u]_{H^s({\mathbb R}^n)}^2+\langle q_1u,u\rangle+\langle q_2u,u\rangle\\
&\geq \frac{\gamma_0}{2}\left([u]_{H^s({\mathbb R}^n)}^2+C_{opt}^{-1}\|u\|_{L^2({\mathbb R}^n)}^2\right)-|\langle q_1u,u\rangle|\\
&\geq \frac{\gamma_0}{2\max(1,C_{opt})}\|u\|_{H^s({\mathbb R}^n)}^2-\|q_1\|_{s}\|u\|_{H^s({\mathbb R}^n)}^2\\
&\geq (\gamma_0/\delta_0-\|q_1\|_{s})\|u\|_{H^s({\mathbb R}^n)}^2=\alpha\|u\|_{H^s({\mathbb R}^n)}^2
\end{split}
\end{equation}
for any $u\in \widetilde{H}^s(\Omega)$, where we used the (optimal) fractional Poincar\'e inequality (see~Theorem~\ref{thm: Poinc Unbounded Doms} and eq.~\eqref{eq: optimal fractional Poincare constant}). Using the fact that $q_1\in M_{\gamma_0/\delta_0}(H^s\to H^{-s})$, we deduce $\alpha>0$ and hence the bilinear form $B_{\gamma,q}$ is coercive over $\widetilde{H}^s(\Omega)$.
Next note that for given $f\in H^s({\mathbb R}^n)$, the function $u\in H^s({\mathbb R}^n)$ solves \eqref{eq: nonlocal opt tomography problem} if and only if $v=u-f\in H^s({\mathbb R}^n)$ solves
\begin{equation}
\label{eq: hom nonlocal opt tomography problem}
\begin{split}
L_{\gamma}^sv+qv&= F\quad\text{in}\quad\Omega,\\
v&= 0\quad\text{in}\quad\Omega_e
\end{split}
\end{equation}
with $F=-(L_{\gamma}^sf+qf)\in (\widetilde{H}^s(\Omega))^*$. Now, since $B_{\gamma,q}$ is a continuous, coercive bilinear form, the Lax--Milgram theorem implies that \eqref{eq: hom nonlocal opt tomography problem} has a unique solution $v\in \widetilde{H}^s(\Omega)$ and so the same holds for \eqref{eq: nonlocal opt tomography problem}. Next, we show that if $f_1,f_2\in H^s({\mathbb R}^n)$ satisfy $f_1-f_2\in\widetilde{H}^s(\Omega)$, then $u_{f_1}=u_{f_2}$ in ${\mathbb R}^n$, where $u_{f_j}\in H^s({\mathbb R}^n)$, $j=1,2$, is the unique solution to \eqref{eq: nonlocal opt tomography problem} with exterior value $f_j$. Define $v=u_{f_1}-u_{f_2}\in \widetilde{H}^s(\Omega)$. Then $v$ solves
\begin{equation}
\label{eq: uniqueness trace space}
\begin{split}
L_{\gamma}^sv+qv&= 0\quad\text{in}\quad\Omega,\\
v&= 0\quad\text{in}\quad\Omega_e.
\end{split}
\end{equation}
By testing \eqref{eq: uniqueness trace space} with $v$ and using the coercivity of $B_{\gamma,q}$ over $\widetilde{H}^s(\Omega)$, it follows that $v=0$ in ${\mathbb R}^n$. Hence, for any $f\in X$, there is a unique solution $u_f\in H^s({\mathbb R}^n)$.
\noindent \ref{item 2 DN map opt tom eq}: For any $f\in X$, let us define $\Lambda_{\gamma,q}f$ via the formula \eqref{eq: DN map opt tom eq}, where $g\in H^s({\mathbb R}^n)$ is any representative of the related equivalence class in $X$. First, we verify that this map is well-defined. If $h\in H^s({\mathbb R}^n)$ is any other representative, that is $g-h\in \widetilde{H}^s(\Omega)$, then since $u_f$ solves \eqref{eq: nonlocal opt tomography problem} we have
\[
B_{\gamma,q}(u_f,g)=B_{\gamma,q}(u_f,g-h)+B_{\gamma,q}(u_f,h)=B_{\gamma,q}(u_f,h)
\]
and so the expression for $\langle \Lambda_{\gamma,q}f,g\rangle$ is unambiguous. By the continuity of the bilinear form $B_{\gamma,q}$ it is easily seen that $\Lambda_{\gamma,q}f\in X^*$ for any $f\in X$.
\end{proof}
\begin{theorem}[Well-posedness and DN map for fractional Schr\"odinger equation]
\label{thm: well-posedness for fractional Schrödinger type equation}
Let $\Omega\subset {\mathbb R}^n$ be an open set which is bounded in one direction and $0<s<\min(1,n/2)$. Moreover, assume that the uniformly elliptic diffusion $\gamma\in L^{\infty}({\mathbb R}^n)$ with lower bound $\gamma_0>0$ satisfies $m_{\gamma}\in H^{s,n/s}({\mathbb R}^n)$ and the potential $q$ belongs to $M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$. Then the following assertions hold:
\begin{enumerate}[(i)]
\item\label{item 1 well-posedness schrödinger} The distribution $Q_{\gamma,q}$ defined by
\begin{equation}
\label{eq: reduced potential}
Q_{\gamma,q}=-\frac{(-\Delta)^sm_{\gamma}}{\gamma^{1/2}}+\frac{q}{\gamma}
\end{equation}
belongs to $M(H^s\to H^{-s})$.
\item\label{item 2 well-posedness schrödinger} If $u\in H^s({\mathbb R}^n)$, $f\in X$ and $v\vcentcolon =\gamma^{1/2}u,\,g\vcentcolon =\gamma^{1/2}f$, then $v\in H^s({\mathbb R}^n), g\in X$ and $u$ is a solution of
\begin{equation}
\label{eq: NOTE well-posedness schrödinger}
\begin{split}
L_{\gamma}^su+qu&= 0\quad\text{in}\quad\Omega,\\
u&= f\quad\text{in}\quad\Omega_e
\end{split}
\end{equation}
if and only if $v$ is a weak solution of the fractional Schr\"odinger equation
\begin{equation}
\label{eq: FSE well-posedness schrödinger}
\begin{split}
((-\Delta)^s+Q_{\gamma,q})v&=0\quad\text{in}\quad\Omega,\\
v&=g\quad\text{in}\quad\Omega_e.
\end{split}
\end{equation}
\item\label{item 3 well-posedness schrödinger} Conversely, if $v\in H^s({\mathbb R}^n), g\in X$ and $u\vcentcolon =\gamma^{-1/2}v,\,f\vcentcolon =\gamma^{-1/2}g$, then $v$ is a weak solution of \eqref{eq: FSE well-posedness schrödinger} if and only if $u$ is a weak solution of \eqref{eq: NOTE well-posedness schrödinger}.
\item\label{item 4 well-posedness schrödinger} For all $g\in X$ there is a unique weak solution $v_g\in H^s({\mathbb R}^n)$ of the fractional Schr\"odinger equation
\begin{equation}
\label{eq: well-posedness fractional Schrödinger equation}
\begin{split}
((-\Delta)^s+Q_{\gamma,q})v&=0\quad\text{in}\quad\Omega,\\
v&=g\quad\text{in}\quad\Omega_e.
\end{split}
\end{equation}
\item\label{item 5 well-posedness schrödinger} The exterior DN map $\Lambda_{Q_{\gamma,q}}\colon X\to X^*$ given by
\begin{equation}
\label{eq: well-defined DN map Schrödinger}
\begin{split}
\langle \Lambda_{Q_{\gamma,q}}f,g\rangle\vcentcolon =B_{Q_{\gamma,q}}(v_f,g),
\end{split}
\end{equation}
where $v_f\in H^s({\mathbb R}^n)$ is the unique solution to \eqref{eq: FSE well-posedness schrödinger} with exterior value $f$, is a well-defined bounded linear map.
\end{enumerate}
\end{theorem}
\begin{proof}
\ref{item 1 well-posedness schrödinger}: Since $q\in M(H^s\to H^{-s})$, we can estimate
\begin{equation}
\label{eq: continuity estimate potential}
\begin{split}
|\langle q/\gamma u,v\rangle|&=|\langle q (\gamma^{-1/2}u),\gamma^{-1/2}v\rangle|\\
&\leq \|q\|_s\|\gamma^{-1/2}u\|_{H^s({\mathbb R}^n)}\|\gamma^{-1/2}v\|_{H^s({\mathbb R}^n)}\\
&\leq \|q\|_s\|(1-\frac{m}{m+1})u\|_{H^s({\mathbb R}^n)}\|(1-\frac{m}{m+1})v\|_{H^s({\mathbb R}^n)}\\
&\leq C\|q\|_s\|u\|_{H^s({\mathbb R}^n)}\|v\|_{H^s({\mathbb R}^n)}
\end{split}
\end{equation}
for all $u,v\in H^s({\mathbb R}^n)$, where we used that the assertion \ref{estimate mathcal M} of Lemma~\ref{Lemma: fractional Liouville reduction} implies $\gamma^{-1/2}w\in H^s({\mathbb R}^n)$ for all $w\in H^s({\mathbb R}^n)$ with $\|\frac{m}{m+1}w\|_{H^s({\mathbb R}^n)}\leq C\|w\|_{H^s({\mathbb R}^n)}$ for some constant $C>0$ only depending polynomially on the $L^{\infty}$ and $H^{s,n/s}$ norms of $\frac{m}{m+1}$. Now, the estimate \eqref{eq: continuity estimate potential} can be used to see that $q/\gamma$ is a distribution and belongs to $M(H^s\to H^{-s})$. On the other hand, by the statement \ref{potential frac Liouville reduction} of Lemma~\ref{Lemma: fractional Liouville reduction} we know that $q_{\gamma}=-\frac{(-\Delta)^sm_{\gamma}}{\gamma^{1/2}}\in M(H^s\to H^{-s})$. This in turn implies $Q_{\gamma,q}\in M(H^s\to H^{-s})$.
\noindent\ref{item 2 well-posedness schrödinger}: The assertions $v\in H^s({\mathbb R}^n)$, $g\in X$ and $u-f\in \widetilde{H}^s(\Omega)$ if and only if $v-g\in \widetilde{H}^s(\Omega)$ are direct consequences of the property \ref{estimate mathcal M} of Lemma~\ref{Lemma: fractional Liouville reduction}. Furthermore, the fact that $u$ solves $L_{\gamma}^su+qu=0$ in $\Omega$ if and only if $v$ solves $(-\Delta)^sv+Q_{\gamma,q}v=0$ in $\Omega$ follows by the definition of $Q_{\gamma,q}$, \ref{liouville red identity} and \ref{estimate mathcal M} of Lemma~\ref{Lemma: fractional Liouville reduction}.
\noindent\ref{item 3 well-posedness schrödinger}: The proof of this fact is essentially the same as for \ref{item 2 well-posedness schrödinger} and therefore we omit it.
\noindent\ref{item 4 well-posedness schrödinger}: By \ref{item 3 well-posedness schrödinger}, we know that $v\in H^s({\mathbb R}^n)$ solves \eqref{eq: well-posedness fractional Schrödinger equation} if and only if $u=\gamma^{-1/2}v$ solves \eqref{eq: NOTE well-posedness schrödinger} with exterior value $f=\gamma^{-1/2}g$. The latter Dirichlet problem is well-posed by Theorem~\ref{thm: well-posedness opt tom eq} and hence it follows from \ref{item 2 well-posedness schrödinger} and \ref{item 3 well-posedness schrödinger} that the unique solution of \eqref{eq: well-posedness fractional Schrödinger equation} is given by $v_g=\gamma^{1/2}u_{\gamma^{-1/2}g}\in H^s({\mathbb R}^n)$.
\noindent\ref{item 5 well-posedness schrödinger}: The fact that $\Lambda_{Q_{\gamma,q}}$ defined via formula \eqref{eq: well-defined DN map Schrödinger} is well-defined follows from the properties \ref{item 4 well-posedness schrödinger}, \ref{item 1 well-posedness schrödinger} and the same calculation as in the proof of Theorem~\ref{thm: well-posedness opt tom eq}, \ref{item 2 DN map opt tom eq}.
\end{proof}
\begin{remark}
\label{remark: interior source problem}
Let us note that essentially the same proofs as in Theorems~\ref{thm: well-posedness opt tom eq} and~\ref{thm: well-posedness for fractional Schrödinger type equation} can be used to show that the Dirichlet problems
\[
\begin{split}
L_{\gamma}^su+qu&= F\quad\text{in}\quad\Omega,\\
u&= u_0\quad\text{in}\quad\Omega_e
\end{split}
\]
and
\[
\begin{split}
((-\Delta)^s+Q_{\gamma,q})v&=G\quad\text{in}\quad\Omega,\\
v&=v_0\quad\text{in}\quad\Omega_e
\end{split}
\]
are well-posed for all $u_0,v_0\in H^s({\mathbb R}^n)$ and $F,G\in(\widetilde{H}^s(\Omega))^*$.
\end{remark}
\section{Inverse problem}
\label{sec: inverse problem}
In Section~\ref{sec: uniquness} we first prove Theorem~\ref{main thm} and hence provide an answer to Question~\ref{question uniqueness}. We establish this result in four steps. First, in Section~\ref{subsec: exterior reconstruction} we extend the exterior determination result of the fractional conductivity equation to the nonlocal tomography equation (Theorem~\ref{thm: exterior reconstruction}). Then in Lemma~\ref{lemma: relation of sols} we show that $\gamma_1^{1/2}u_f^{(1)}$ and $\gamma_2^{1/2}u_f^{(2)}$ coincide in ${\mathbb R}^n$ whenever $\gamma_1=\gamma_2$ and $q_1=q_2$ hold in the measurement set and the two coefficient pairs generate the same DN data. These two preparatory steps then allow us to prove that the diffusion coefficients are the same in ${\mathbb R}^n$ (Section~\ref{subsec: determination of diffusion coeff}) and to conclude that in that case also the absorption coefficients are necessarily identical (Section~\ref{subsubsec: equality of q}). Then in Section~\ref{sec: non-uniqueness}, we provide an answer to Question~\ref{question non-uniqueness}. Following a strategy similar to that in \cite{RZ2022LowReg}, we first derive a characterization of uniqueness in the inverse problem for the nonlocal optical tomography equation and then use it to construct counterexamples to uniqueness when the potentials differ in the measurement set (see~Theorem~\ref{thm: non uniqueness}).
\subsection{Uniqueness}
\label{sec: uniquness}
\subsubsection{Exterior reconstruction formula}
\label{subsec: exterior reconstruction}
The main result of this section is the following reconstruction formula in the exterior.
\begin{theorem}[Exterior reconstruction formula]
\label{thm: exterior reconstruction}
Let $\Omega\subset {\mathbb R}^n$ be an open set which is bounded in one direction, $W\subset\Omega_e$ a measurement set and $0<s<\min(1,n/2)$. Assume that the uniformly elliptic diffusion $\gamma\in L^{\infty}({\mathbb R}^n)$, which is bounded from below by $\gamma_0>0$, and the potential $q\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$ satisfy the following additional properties
\begin{enumerate}[(i)]
\item\label{prop 1 reconstruction} $\gamma$ is a.e. continuous in $W$
\item\label{prop 2 reconstruction} and $q\in L^p_{loc}(W)$ for some $\frac{n}{2s}<p\leq \infty$.
\end{enumerate}
Then for a.e. $x_0\in W$ there exists a sequence $(\Phi_N)_{N\in{\mathbb N}}\subset C_c^{\infty}(W)$ such that
\begin{equation}
\label{eq: reconstruction formula}
\gamma(x_0)=\lim_{N\to\infty}\langle \Lambda_{\gamma,q}\Phi_N,\Phi_N\rangle.
\end{equation}
\end{theorem}
Before giving the proof of this result, we prove the following interpolation estimate:
\begin{lemma}[Interpolation estimate for the potential term]\label{lem: cont potential term}
Let $0 < s < \min(1,n/2)$ and assume $W\subset{\mathbb R}^n$ is a non-empty open set. If $q\in M(H^s\to H^{-s})\cap L^p_{loc}(W)$ for some $\frac{n}{2s} < p \le \infty$, then for any $V\Subset W$ the following estimate holds
\begin{equation}
\label{eq: potential goes to zero estimate}
|\langle qu,v\rangle|\leq C \|u\|^{1-\theta}_{H^s({\mathbb R}^n)}\|u\|_{L^2(V)}^{\theta} \|v\|_{H^s({\mathbb R}^n)}
\end{equation}
for all $u,v\in C_c^{\infty}(V)$ and some $C>0$, where $\theta\in (0,1]$ is given by
\begin{equation}
\theta=\begin{cases}
2-\frac{n}{sp},&\enspace\text{if}\enspace \frac{n}{2s}<p\leq \frac{n}{s}, \\
1,&\enspace\text{otherwise}.
\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
Without loss of generality we can assume that there holds $\frac{n}{2s}<p\leq \frac{n}{s}$. First, by H\"older's inequality and Sobolev's embedding we have
\begin{equation}
\label{eq: first estimate}
|\langle qu,v\rangle|\leq \|qu\|_{L^{\frac{2n}{n+2s}}(V)}\|v\|_{L^{\frac{2n}{n-2s}}(V)}\leq C\|qu\|_{L^{\frac{2n}{n+2s}}(V)}\|v\|_{H^s({\mathbb R}^n)}.
\end{equation}
Next, observe that if $\theta=2-\frac{n}{sp}\in (0,1]$, then there holds
\[
\frac{n+2s}{2n}=\frac{1}{p}+\frac{1-\theta}{\frac{2n}{n-2s}}+\frac{\theta}{2}.
\]
Therefore, by interpolation in $L^q$ and Sobolev's embedding we can estimate
\begin{equation}
\label{eq: second estimate}
\begin{split}
\|qu\|_{L^{\frac{2n}{n+2s}}(V)}&\leq \|q\|_{L^p(V)}\|u\|_{L^{\frac{2n}{n-2s}}(V)}^{1-\theta}\|u\|_{L^2(V)}^{\theta}\\
&\leq C\|q\|_{L^p(V)}\|u\|_{H^s({\mathbb R}^n)}^{1-\theta}\|u\|_{L^2(V)}^{\theta}.
\end{split}
\end{equation}
Combining the estimates \eqref{eq: first estimate} and \eqref{eq: second estimate}, we obtain \eqref{eq: potential goes to zero estimate}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm: exterior reconstruction}]
Let $x_0\in W$ be such that $\gamma$ is continuous at $x_0$. By \cite[Theorem~1.1]{KLZ22FracpLap}, there exists a sequence $(\Phi_N)_{N\in{\mathbb N}}\subset C_c^{\infty}(W)$ satisfying the following conditions:
\begin{enumerate}[(i)]
\item\label{support cond} $\supp(\Phi_N)\to \{x_0\}$ as $N\to\infty$,
\item\label{normalization cond} $[\Phi_N]_{H^s({\mathbb R}^n)}=1$ for all $N\in{\mathbb N}$
\item\label{convergence cond} and $\Phi_N\to 0$ in $H^t({\mathbb R}^n)$ as $N\to\infty$ for all $0\leq t<s$.
\end{enumerate}
The last condition implies that $\Phi_N\to 0$ in $L^p({\mathbb R}^n)$ for all $1\leq p <\frac{2n}{n-2s}$ as $N\to\infty$. Next, let $u_N\in H^s({\mathbb R}^n)$ be the unique solution to
\begin{equation}
\begin{split}
L^s_{\gamma}u+qu&= 0\quad\enspace\text{in}\enspace\Omega,\\
u&= \Phi_N\,\enspace\text{in}\enspace\Omega_e.
\end{split}
\end{equation}
By linearity $v_N\vcentcolon = u_N-\Phi_N\in \widetilde{H}^s(\Omega)$ is the unique solution to
\begin{equation}
\label{eq: sol hom ext cond}
\begin{split}
L_{\gamma}^s v+qv&= -B_{\gamma,q}(\Phi_N,\cdot)\enspace\text{in}\enspace\Omega,\\
v&= 0\quad\,\,\,\quad\quad\quad\quad\text{in}\enspace\Omega_e.
\end{split}
\end{equation}
One easily sees that $B_{\gamma,q}(\Phi_N,\cdot)\in (\widetilde{H}^s(\Omega))^*$. Similarly as in \cite[Lemma~3.1]{RZ2022LowReg}, for any $v\in \widetilde{H}^s(\Omega)$ we may calculate
\allowdisplaybreaks
\[
\begin{split}
&|B_{\gamma,q}(\Phi_N,v)|=|B_{\gamma}(\Phi_N,v)|=C\left|\int_{W\times\Omega}\gamma^{1/2}(x)\gamma^{1/2}(y)\frac{\Phi_N(x)v(y)}{|x-y|^{n+2s}}\,dxdy\right|\\
&\quad\leq C\int_{\Omega}\gamma^{1/2}(y)|v(y)|\left(\int_W\frac{\gamma^{1/2}(x)|\Phi_N(x)|}{|x-y|^{n+2s}}\,dx\right)dy\\
&\quad\leq C\|\gamma\|_{L^{\infty}(\Omega\cup W)}\|v\|_{L^2(\Omega)}\left\|\int_W\frac{|\Phi_N(x)|}{|x-y|^{n+2s}}\,dx\right\|_{L^2(\Omega)}\\
&\quad\leq C\|\gamma\|_{L^{\infty}(\Omega\cup W)}\|v\|_{L^2(\Omega)}\int_W|\Phi_N(x)|\left(\int_{\Omega}\frac{dy}{|x-y|^{2(n+2s)}}\right)^{1/2}\,dx\\
&\quad\leq C\|\gamma\|_{L^{\infty}(\Omega\cup W)}\|v\|_{L^2(\Omega)}\int_W|\Phi_N(x)|\left(\int_{(B_r(x))^c}\frac{dy}{|x-y|^{2(n+2s)}}\right)^{1/2}\,dx\\
&\quad\leq \frac{C}{r^{\frac{n+4s}{2}}}\|\gamma\|_{L^{\infty}({\mathbb R}^n)}\|v\|_{L^2(\Omega)}\|\Phi_N\|_{L^1(W)}.
\end{split}
\]
In the above estimates we used that $\gamma\in L^{\infty}({\mathbb R}^n)$ is uniformly elliptic, $\supp(\Phi_N)\subset \supp(\Phi_1)\Subset W$ (see~\ref{support cond}), H\"older's and Minkowski's inequality and set $r\vcentcolon = \dist(\Omega,\supp(\Phi_1))>0$. This implies
\begin{equation}
\label{eq: bounded on norm}
\|B_{\gamma,q}(\Phi_N,\cdot)\|_{(\widetilde{H}^s(\Omega))^*}\leq \frac{C}{r^{\frac{n+4s}{2}}}\|\gamma\|_{L^{\infty}({\mathbb R}^n)}\|\Phi_N\|_{L^1(W)}.
\end{equation}
Now, testing equation \eqref{eq: sol hom ext cond} by $v_N\in\widetilde{H}^s(\Omega)$, using the fractional Poincar\'e inequality (see~Theorem~\ref{thm: Poinc Unbounded Doms}), the uniform ellipticity of $\gamma$ and the coercivity estimate \eqref{eq: coercivity estimate potential q}, we get
\[
\begin{split}
\|v_N\|_{H^s({\mathbb R}^n)}^2&\leq C|B_{\gamma,q}(\Phi_N,v_N)|\leq C\|B_{\gamma,q}(\Phi_N,\cdot)\|_{(\widetilde{H}^s(\Omega))^*}\|v_N\|_{H^s({\mathbb R}^n)}\\
&\leq \frac{C}{r^{\frac{n+4s}{2}}}\|\gamma\|_{L^{\infty}({\mathbb R}^n)}\|\Phi_N\|_{L^1(W)}\|v_N\|_{H^s({\mathbb R}^n)},
\end{split}
\]
which in turn implies
\[
\|v_N\|_{H^s({\mathbb R}^n)}\leq \frac{C}{r^{\frac{n+4s}{2}}}\|\gamma\|_{L^{\infty}({\mathbb R}^n)}\|\Phi_N\|_{L^1(W)}.
\]
Recalling that $v_N=u_N-\Phi_N$ and the property \ref{convergence cond} of the sequence $\Phi_N\in C_c^{\infty}(W)$, we deduce
\begin{equation}
\label{eq: conv to zero of diff}
\|u_N-\Phi_N\|_{H^s({\mathbb R}^n)}\to 0\quad\text{as}\quad N\to\infty.
\end{equation}
Let us next define the energy
\[
E_{\gamma,q}(v)\vcentcolon = B_{\gamma,q}(v,v)
\]
for any $v\in H^s({\mathbb R}^n)$. Using the computation in the proof of \cite[Theorem~3.2]{RZ2022LowReg} we have
\begin{equation}
\label{eq: concentration of energy}
\begin{split}
\lim_{N\to\infty}E_{\gamma,q}(\Phi_N)&=\lim_{N\to\infty}B_{\gamma}(\Phi_N,\Phi_N)+\lim_{N\to\infty}\langle q\Phi_N,\Phi_N\rangle_{L^2({\mathbb R}^n)}\\
&=\lim_{N\to\infty}B_{\gamma}(\Phi_N,\Phi_N)=\gamma(x_0),
\end{split}
\end{equation}
where we used Lemma~\ref{lem: cont potential term} and the properties \ref{normalization cond}, \ref{convergence cond} of the sequence $(\Phi_N)_{N\in{\mathbb N}}$ to see that the term involving the potential $q$ vanishes.
On the other hand, we can rewrite the DN map as follows
\[
\begin{split}
\langle \Lambda_{\gamma,q}\Phi_N,\Phi_N\rangle &=B_{\gamma,q}(u_N,\Phi_N)=B_{\gamma,q}(u_N,u_N)\\
&=E_{\gamma,q}(u_N-\Phi_N)+2B_{\gamma,q}(u_N-\Phi_N,\Phi_N)+E_{\gamma,q}(\Phi_N).
\end{split}
\]
Thus, using that $B_{\gamma,q}$ is bounded on $H^s({\mathbb R}^n)\times H^s({\mathbb R}^n)$, that $(\Phi_N)_{N\in{\mathbb N}}$ is bounded in $H^s({\mathbb R}^n)$ by \ref{normalization cond} and \ref{convergence cond}, and the convergence \eqref{eq: conv to zero of diff}, we see that the first two terms on the right hand side vanish in the limit $N\to\infty$. Hence, $\langle \Lambda_{\gamma,q}\Phi_N,\Phi_N\rangle\to\gamma(x_0)$ as $N\to\infty$ and we can conclude the proof.
\end{proof}
\subsubsection{Uniqueness of the diffusion coefficient $\gamma$}
\label{subsec: determination of diffusion coeff}
\begin{lemma}[Relation of solutions]
\label{lemma: relation of sols}
Let $\Omega\subset {\mathbb R}^n$ be an open set which is bounded in one direction, suppose $W_1,W_2\subset\Omega_e$ are two measurement sets and $0<s<\min(1,n/2)$. Assume that the uniformly elliptic diffusions $\gamma_1,\gamma_2\in L^{\infty}({\mathbb R}^n)$ with lower bound $\gamma_0>0$ satisfy $m_{\gamma_1},m_{\gamma_2}\in H^{s,n/s}({\mathbb R}^n)$ and the potentials $q_1,q_2$ belong to $M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$. If $\gamma_1|_{W_2} = \gamma_2|_{W_2}$ and $\Lambda_{\gamma_1, q_1} f|_{W_2} = \Lambda_{\gamma_2, q_2} f|_{W_2}$ for some $f\in \widetilde{H}^s(W_1)$ with $W_2\setminus \supp(f)\neq \emptyset$, then there holds
\begin{equation}
\gamma_1^{1/2}u_f^{(1)} = \gamma_2^{1/2}u_f^{(2)}\enspace \text{a.e.\@ in }{\mathbb R}^n
\end{equation}
where, for $j = 1, 2$, $u_f^{(j)}\in H^s({\mathbb R}^n)$ is the unique solution of \begin{equation}
\label{eq: NOTE relation solutions}
\begin{split}
L_{\gamma_j}^su+q_ju&= 0\quad\text{in}\quad\Omega,\\
u&= f\quad\text{in}\quad\Omega_e
\end{split}
\end{equation}
(see~Theorem~\ref{thm: well-posedness opt tom eq}).
\end{lemma}
\begin{proof}
First let $\gamma,q$ satisfy the assumptions of Lemma~\ref{lemma: relation of sols} and assume that $f,g\in H^s({\mathbb R}^n)$ have disjoint support. Then there holds
\begin{equation}
\label{eq: disjoint support}
\begin{split}
B_{\gamma,q}(f,g)&=B_{\gamma}(f,g)=-C_{n,s}\int_{{\mathbb R}^{2n}}\gamma^{1/2}(x)\gamma^{1/2}(y)\frac{f(x)g(y)}{|x-y|^{n+2s}}\,dxdy\\
&=\langle (-\Delta)^{s/2}(\gamma^{1/2}f),(-\Delta)^{s/2}(\gamma^{1/2}g)\rangle_{L^2({\mathbb R}^n)}.
\end{split}
\end{equation}
Now, let $f\in \widetilde{H}^s(W_1)$ and $u_f^{(j)}\in H^s({\mathbb R}^n)$ for $j=1,2$ be as in the statement of the lemma. Set $V\vcentcolon = W_2\setminus\supp(f)$ and take any $\varphi\in \widetilde{H}^s(V)$. Then we have $\supp(u_f^{(j)})\cap \supp(\varphi)=\emptyset$ and the assumption that the DN maps coincide implies
\[
\begin{split}
B_{\gamma_1,q_1}(u_f^{(1)},\varphi)=\langle \Lambda_{\gamma_1,q_1}f,\varphi\rangle =\langle \Lambda_{\gamma_2,q_2}f,\varphi\rangle=B_{\gamma_2,q_2}(u_f^{(2)},\varphi).
\end{split}
\]
By \eqref{eq: disjoint support} and the assumption $\gamma_1=\gamma_2$ on $W_2$, this is equivalent to
\[
\langle (-\Delta)^{s/2}(\gamma_1^{1/2}u_f^{(1)}-\gamma_2^{1/2}u_f^{(2)}),(-\Delta)^{s/2}(\gamma_1^{1/2}\varphi)\rangle_{L^2({\mathbb R}^n)}=0
\]
for all $\varphi\in \widetilde{H}^s(V)$. By our assumptions on the diffusion coefficients $\gamma_j$ and Lemma~\ref{Lemma: fractional Liouville reduction}, we can replace $\varphi$ by $g=\gamma_1^{-1/2}\varphi$ to obtain
\[
\langle (-\Delta)^{s/2}(\gamma_1^{1/2}u_f^{(1)}-\gamma_2^{1/2}u_f^{(2)}),(-\Delta)^{s/2}g\rangle_{L^2({\mathbb R}^n)}=0
\]
for all $g\in \widetilde{H}^s(V)$. We know that $\gamma_1^{1/2}u_f^{(1)}-\gamma_2^{1/2}u_f^{(2)}=0$ on $V$, since $u_f^{(j)}=0$ on $V$ for $j=1,2$. Therefore, Lemma~\ref{Lemma: fractional Liouville reduction} and the usual UCP for the fractional Laplacian for $H^s$ functions imply $\gamma_1^{1/2}u_f^{(1)}=\gamma_2^{1/2}u_f^{(2)}$ a.e. in ${\mathbb R}^n$.
\end{proof}
\begin{figure}[!ht]
\centering
\begin{tikzpicture}
\filldraw[color=blue!50, fill=blue!15, xshift=11cm, yshift=1.35cm] (3,-2.5) ellipse (0.9 and 0.9);
\node[xshift=13cm, yshift=1.5cm] at (3,-2.5) {$\raisebox{-.35\baselineskip}{\ensuremath{W_2}}$};
\filldraw[color=green!70, fill=green!15, xshift=11cm, yshift=0.1cm, opacity=0.8] (3,-2.5) ellipse (1.3 and 0.75);
\filldraw[color=red!60, fill=red!15, xshift=11cm, yshift=0.1cm, opacity=0.8] (3,-2.5) ellipse (1.1 and 0.65);
\node[xshift=11cm, yshift=0.1cm] at (3,-2.5) {$\raisebox{-.35\baselineskip}{\ensuremath{\supp(f)}}$};
\node[xshift=13cm, yshift=0.1cm] at (3,-2.5) {$\raisebox{-.35\baselineskip}{\ensuremath{W_1}}$};
\draw[xshift=13cm, yshift=0.1cm] (2.4,-2.5) -- (2.6,-2.5);
\draw[xshift=13cm, yshift=0.1cm] (2,-1.1) -- (2.6,-1.1);
\filldraw[color=yellow!70, fill=yellow!45, xshift=11cm, yshift=1.35cm, opacity=0.8] (3,-2.5) ellipse (0.8 and 0.55);
\node[xshift=11cm, yshift=1.35cm] at (3,-2.5) {$\raisebox{-.35\baselineskip}{\ensuremath{\supp(\varphi)}}$};
\filldraw [color=orange!80, fill = orange!15, xshift=8cm, yshift=-2cm,opacity=0.8] plot [smooth cycle] coordinates {(-1,0.9) (3,1.5) (4.5,-0.5) (3,-1) (-1.5,-0.25)};
\node[xshift=3cm] at (6.3,-1.75) {$\raisebox{-.35\baselineskip}{\ensuremath{L_{\gamma_j}^su_f^{(j)}+q_ju_f^{(j)}=0\enspace\text{in}\enspace\Omega}}$};
\end{tikzpicture}
\caption{\begin{small} A graphical illustration of the sets and functions used in the proof of Lemma~\ref{lemma: relation of sols}. \end{small}}
\end{figure}
\begin{theorem}[Uniqueness of $\gamma$]
\label{thm: uniqueness of gamma}
Let $0 < s < \min(1,n/2)$, suppose $\Omega\subset {\mathbb R}^n$ is a domain bounded in one direction and let $W_1,W_2\subset \Omega_e$ be two non-disjoint measurement sets. Assume that the diffusions $\gamma_1, \gamma_2\in L^{\infty}({\mathbb R}^n)$ with background deviations $m_{\gamma_1},m_{\gamma_2}\in H^{s,n/s}({\mathbb R}^n)$ and potentials $q_1,q_2\in \distr({\mathbb R}^n)$ satisfy
\begin{enumerate}[(i)]
\item\label{uniform ellipticity diffusions} $\gamma_1,\gamma_2$ are uniformly elliptic with lower bound $\gamma_0>0$,
\item\label{continuity diffusions} $\gamma_1, \gamma_2$ are a.e. continuous in $W_1\cap W_2$,
\item\label{integrability potentials} $q_1,q_2\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})\cap L^p_{loc}(W_1\cap W_2)$ for $\frac{n}{2s}< p\leq \infty$
\item\label{equal potentials in measurement sets} and $q_1|_{W_1\cap W_2}=q_2|_{W_1\cap W_2}$.
\end{enumerate}
If $\Lambda_{\gamma_1,q_1}f|_{W_2}=\Lambda_{\gamma_2,q_2}f|_{W_2}$ for all $f\in C_c^{\infty}(W_1)$, then there holds $\gamma_1=\gamma_2$ in ${\mathbb R}^n$.
\end{theorem}
\begin{proof}
Let $W\vcentcolon = W_1\cap W_2$. Then Theorem~\ref{thm: exterior reconstruction} ensures that $\gamma_1=\gamma_2$ on $W$. Next choose $V\Subset W$ and let $f\in \widetilde{H}^s(V)$. By assumption there holds
\[
\begin{split}
0 &=\langle (\Lambda_{\gamma_1,q_1}-\Lambda_{\gamma_2,q_2} )f,f\rangle =B_{\gamma_1,q_1}(u_f^{(1)},f)-B_{\gamma_2,q_2}(u_f^{(2)},f)\\
&=B_{\gamma_1}(u_f^{(1)},f)-B_{\gamma_2}(u_f^{(2)},f)+\langle (q_1-q_2)f,f\rangle\\
&=\langle (-\Delta)^{s/2}(\gamma_1^{1/2}u_f^{(1)}),(-\Delta)^{s/2}(\gamma_1^{1/2}f)\rangle_{L^2({\mathbb R}^n)}\\
&\quad-\left\langle \frac{(-\Delta)^sm_{\gamma_1}}{\gamma_1^{1/2}}\gamma_1^{1/2}u_f^{(1)},\gamma_1^{1/2}f\right\rangle\\
&\quad -\langle (-\Delta)^{s/2}(\gamma_2^{1/2}u_f^{(2)}),(-\Delta)^{s/2}(\gamma_2^{1/2}f)\rangle_{L^2({\mathbb R}^n)}\\
&\quad+\left\langle \frac{(-\Delta)^sm_{\gamma_2}}{\gamma_2^{1/2}}\gamma_2^{1/2}u_f^{(2)},\gamma_2^{1/2}f\right\rangle\\
&\quad +\langle (q_1-q_2)f,f\rangle\\
&=\left\langle \frac{(-\Delta)^s(m_{\gamma_2}-m_{\gamma_1})}{\gamma_1^{1/2}}\gamma_1^{1/2}f,\gamma_1^{1/2}f\right\rangle+\langle (q_1-q_2)f,f\rangle\\
&\quad +\langle (-\Delta)^{s/2}(\gamma_1^{1/2}u_f^{(1)}-\gamma_2^{1/2}u_f^{(2)}),(-\Delta)^{s/2}(\gamma_1^{1/2}f)\rangle_{L^2({\mathbb R}^n)},
\end{split}
\]
where in the fourth equality we used the fractional Liouville reduction (Lemma~\ref{Lemma: fractional Liouville reduction}, \ref{liouville red identity}) and in the fifth equality that $\gamma_1=\gamma_2$ in $W$. By Lemma~\ref{lemma: relation of sols}, applied with $W_1=V$ and $W_2=W\setminus\overline{V}$, the term in the last line vanishes. Moreover, since $q_1=q_2$ in $W$, the term involving the potentials is zero as well. Using the polarization identity, we deduce that there holds
\[
\left\langle \frac{(-\Delta)^s(m_{\gamma_2}-m_{\gamma_1})}{\gamma_1^{1/2}}\gamma_1^{1/2}f,\gamma_1^{1/2}g\right\rangle=0
\]
for all $f,g\in \widetilde{H}^s(V)$. In particular, by first changing $f\mapsto \gamma_1^{-1/2}f\in \widetilde{H}^s(V)$ and $g\mapsto \gamma_1^{-1/2}g\in \widetilde{H}^s(V)$ (see~Lemma~\ref{Lemma: fractional Liouville reduction}, \ref{estimate mathcal M}) and then selecting $U\Subset V$, $g\in C_c^{\infty}(V)$ with $0\leq g\leq 1$, $g|_{\overline{U}}=1$, this implies
\[
\left\langle (-\Delta)^s(m_{\gamma_2}-m_{\gamma_1}),\gamma_1^{-1/2}f\right\rangle=0
\]
for all $f\in \widetilde{H}^s(U)$. Using again the assertion \ref{estimate mathcal M} of Lemma~\ref{Lemma: fractional Liouville reduction}, we deduce
\[
\left\langle (-\Delta)^s(m_{\gamma_2}-m_{\gamma_1}),f\right\rangle=0
\]
for all $f\in \widetilde{H}^s(U)$. Hence, $m=m_{\gamma_2}-m_{\gamma_1}\in H^{s,n/s}({\mathbb R}^n)$ satisfies
\[
(-\Delta)^sm=m=0\quad \text{in}\quad U.
\]
Now, the UCP for the fractional Laplacian in $H^{s,n/s}({\mathbb R}^n)$ (see~\cite[Theorem~2.2]{KRZ2022Biharm}) guarantees that $m\equiv 0$ and hence $\gamma_1=\gamma_2$ in ${\mathbb R}^n$.
\end{proof}
\subsubsection{Uniqueness of the potential $q$}
\label{subsubsec: equality of q}
In this section, we finally establish the uniqueness assertion in Theorem~\ref{main thm}. In fact, under the given assumptions of Theorem~\ref{main thm}, Theorem~\ref{thm: uniqueness of gamma} implies $\gamma_1=\gamma_2$ in ${\mathbb R}^n$. The next theorem now ensures that there also holds $q_1=q_2$ in $\Omega$.
\begin{theorem}
\label{thm: uniqueness q}
Let $0 < s < \min(1,n/2)$, suppose $\Omega\subset {\mathbb R}^n$ is a domain bounded in one direction and let $W_1,W_2\subset \Omega_e$ be two non-disjoint measurement sets. Assume that the diffusion $\gamma\in L^{\infty}({\mathbb R}^n)$ with background deviation $m_{\gamma}\in H^{s,n/s}({\mathbb R}^n)$ and the potentials $q_1,q_2\in \distr({\mathbb R}^n)$ satisfy
\begin{enumerate}[(i)]
\item\label{uniform ellipticity diffusion q} $\gamma$ is uniformly elliptic with lower bound $\gamma_0>0$,
\item\label{continuity diffusion q} $\gamma$ is a.e. continuous in $W_1\cap W_2$,
\item\label{integrability potentials q} $q_1,q_2\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})\cap L^p_{loc}(W_1\cap W_2)$ for $\frac{n}{2s}< p\leq \infty$,
\item\label{equal potentials in measurement sets q} and $q_1|_{W_1\cap W_2}=q_2|_{W_1\cap W_2}$.
\end{enumerate}
If $\Lambda_{\gamma,q_1}f|_{W_2}=\Lambda_{\gamma,q_2}f|_{W_2}$ for all $f\in C_c^{\infty}(W_1)$, then there holds $q_1=q_2$ in $\Omega$.
\end{theorem}
\begin{proof}
We first show that the fractional conductivity operator $L_{\gamma}^s$ has the UCP on $H^s({\mathbb R}^n)$ as long as $m_{\gamma}\in H^{s,n/s}({\mathbb R}^n)$. For this purpose, assume that $V\subset{\mathbb R}^n$ is a nonempty, open set and $u\in H^s({\mathbb R}^n)$ satisfies $L_{\gamma}^su=u=0$ in $V$. By the fractional Liouville reduction (Lemma~\ref{Lemma: fractional Liouville reduction}, \ref{liouville red identity}) and $u|_V=0$, there holds
\[
\begin{split}
0&=\langle L_{\gamma}^su,\varphi\rangle \\
&=\langle (-\Delta)^{s/2}(\gamma^{1/2}u),(-\Delta)^{s/2}(\gamma^{1/2}\varphi)\rangle_{L^2({\mathbb R}^n)}-\left\langle \frac{(-\Delta)^sm_{\gamma}}{\gamma^{1/2}}\gamma^{1/2} u,\gamma^{1/2}\varphi \right\rangle\\
&=\langle (-\Delta)^{s/2}(\gamma^{1/2}u),(-\Delta)^{s/2}(\gamma^{1/2}\varphi)\rangle_{L^2({\mathbb R}^n)}
\end{split}
\]
for any $\varphi\in C_c^{\infty}(V)$. By approximation the above identity holds for all $\varphi\in \widetilde{H}^s(V)$. By the property \ref{estimate mathcal M} of Lemma~\ref{Lemma: fractional Liouville reduction}, we can replace $\varphi\in \widetilde{H}^s(V)$ by $\psi=\gamma^{-1/2}\varphi\in \widetilde{H}^s(V)$ to see that $(-\Delta)^{s}(\gamma^{1/2}u)=0$ in $V$. Since additionally $\gamma^{1/2}u=0$ in $V$, the UCP for the fractional Laplacian implies $\gamma^{1/2}u=0$ in ${\mathbb R}^n$. Hence, the uniform ellipticity of $\gamma$ ensures $u=0$ in ${\mathbb R}^n$.
Therefore, the problem at hand satisfies the conditions in \cite[Theorem~2.6]{RZ2022unboundedFracCald} (see also \cite[Remark~4.2]{RZ2022unboundedFracCald}, Theorem~\ref{thm: well-posedness for fractional Schrödinger type equation} and Remark~\ref{remark: interior source problem}) and we obtain $q_1=q_2$ in $\Omega$.
\end{proof}
\subsection{Non-uniqueness}
\label{sec: non-uniqueness}
In this section, we construct counterexamples to uniqueness when the potentials are not assumed to coincide on the whole measurement set $W$, and hence prove Theorem~\ref{thm: non uniqueness}. Similarly as in the articles \cite{counterexamples,RZ2022LowReg}, the construction of the counterexamples relies on a PDE characterization of the equality of the DN maps. To derive such a correspondence between DN maps and a PDE for the coefficients, we need the following lemma:
\begin{lemma}[Relation to fractional Schr\"odinger problem]
\label{Auxiliary lemma}
Let $\Omega\subset{\mathbb R}^n$ be an open set which is bounded in one direction, $W\subset\Omega_e$ an open set and $0<s<\min(1,n/2)$. Assume that $\gamma,\Gamma\in L^{\infty}({\mathbb R}^n)$ with background deviations $m_{\gamma},m_{\Gamma}$ satisfy $\gamma(x),\Gamma(x)\geq \gamma_0>0$ and $m_{\gamma},m_{\Gamma}\in H^{s,n/s}({\mathbb R}^n)$. Moreover, let $q\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$. If $\gamma|_{W}=\Gamma|_{W}$, then
\begin{equation}
\label{eq: identity DN maps}
\langle \Lambda_{\gamma,q}f,g\rangle=\langle \Lambda_{Q_{\gamma,q}}(\Gamma^{1/2}f),(\Gamma^{1/2}g)\rangle
\end{equation}
holds for all $f,g\in\widetilde{H}^{s}(W)$, where the potential $Q_{\gamma,q}\in M(H^s\to H^{-s})$ is given by formula \eqref{eq: reduced potential}.
\end{lemma}
\begin{proof}
First recall that if $u_f\in H^s({\mathbb R}^n)$ is the unique solution to
\[
\begin{split}
L_{\gamma}^s u+qu&= 0\enspace\text{in}\enspace\Omega,\\
u&= f\enspace\text{in}\enspace\Omega_e
\end{split}
\]
with $f\in \widetilde{H}^s(W)$, then $\gamma^{1/2}u_f\in H^s({\mathbb R}^n)$ is the unique solution to
\[
\begin{split}
((-\Delta)^s+Q_{\gamma,q})v&=0\quad\,\,\,\,\enspace\text{in}\enspace\Omega,\\
v&=\gamma^{1/2}f\enspace\text{in}\enspace\Omega_e
\end{split}
\]
(see~Theorem~\ref{thm: well-posedness for fractional Schrödinger type equation}, \ref{item 2 well-posedness schrödinger}). Since $\gamma|_W = \Gamma|_W$, we have $\gamma^{1/2}f=\Gamma^{1/2}f$ and therefore $\gamma^{1/2}u_f$ is the unique solution to
\[
\begin{split}
((-\Delta)^s+Q_{\gamma,q})v&=0\quad\,\,\,\,\enspace\text{in}\enspace\Omega,\\
v&=\Gamma^{1/2}f\enspace\text{in}\enspace\Omega_e,
\end{split}
\]
which we denote by $v_{\Gamma^{1/2}f}$. Using the property \ref{liouville red identity} of Lemma~\ref{Lemma: fractional Liouville reduction} and the definition of $Q_{\gamma,q}$ via formula \eqref{eq: reduced potential}, we deduce
\[
\begin{split}
\langle \Lambda_{\gamma,q}f,g\rangle&=B_{\gamma,q}(u_f,g)=B_{\gamma}(u_f,g)+\langle qu_f,g\rangle\\
&=B_{q_{\gamma}}(\gamma^{1/2}u_f,\gamma^{1/2}g)+\left\langle \frac{q}{\gamma}(\gamma^{1/2}u_f),\gamma^{1/2}g\right\rangle\\
&=B_{Q_{\gamma,q}}(\gamma^{1/2}u_f,\gamma^{1/2}g)=B_{Q_{\gamma,q}}(v_{\Gamma^{1/2}f},\Gamma^{1/2}g)\\
&=\langle \Lambda_{Q_{\gamma,q}}(\Gamma^{1/2}f),(\Gamma^{1/2}g)\rangle
\end{split}
\]
for all $f,g\in\widetilde{H}^s(W)$. In the last equality we used the definition of the DN map $\Lambda_{Q_{\gamma,q}}$ given in Theorem~\ref{thm: well-posedness for fractional Schrödinger type equation}, \ref{item 5 well-posedness schrödinger}.
\end{proof}
With this at hand, we can now give the proof of Theorem~\ref{thm: non uniqueness}:
\begin{proof}[{Proof of Theorem~\ref{thm: non uniqueness}}]
First assume that the coefficients $(\gamma_1,q_1)$ and $(\gamma_2,q_2)$ satisfy the regularity assumptions of Theorem~\ref{main thm}. Next, denote by $\Gamma\colon{\mathbb R}^n\to{\mathbb R}_+$ any function satisfying the following conditions:
\begin{enumerate}[(a)]
\item $\Gamma\in L^{\infty}({\mathbb R}^n)$,
\item $\Gamma\geq \gamma_0$,
\item $\Gamma|_W=\gamma_1|_W=\gamma_2|_W$
\item and $m_{\Gamma}=\Gamma^{1/2}-1\in H^{s,n/s}({\mathbb R}^n)$.
\end{enumerate}
By Lemma~\ref{Auxiliary lemma}, Theorem~\ref{thm: well-posedness for fractional Schrödinger type equation} and Theorem~\ref{thm: exterior reconstruction}, one sees that $\Lambda_{\gamma_1,q_1}f|_W=\Lambda_{\gamma_2,q_2}f|_W$ for all $f\in C_c^{\infty}(W)$ is equivalent to $\Lambda_{Q_{\gamma_1,q_1}}f|_W=\Lambda_{Q_{\gamma_2,q_2}}f|_W$ for all $f\in C_c^{\infty}(W)$ and $\gamma_1|_W=\gamma_2|_W$. Next, we claim this is equivalent to the following assertions:
\begin{enumerate}[(i)]
\item $\gamma_1=\gamma_2$ in $W$,
\item\label{item 1 equal potentials} $Q_{\gamma_1,q_1}=Q_{\gamma_2,q_2}$ in $\Omega$
\item\label{item 2 equality in exterior set} and
\begin{equation}
\label{eq: equivalence measurement set}
(-\Delta)^sm+\frac{q_2-q_1}{\gamma_2^{1/2}}=0\enspace\text{in}\enspace W,
\end{equation}
where $m=m_{\gamma_1}-m_{\gamma_2}$.
\end{enumerate}
If $\Lambda_{Q_{\gamma_1,q_1}}f|_W=\Lambda_{Q_{\gamma_2,q_2}}f|_W$ for all $f\in C_c^{\infty}(W)$, then \cite[Theorem~2.6, Corollary~2.7]{RZ2022unboundedFracCald} ensure that $Q_{\gamma_1,q_1}=Q_{\gamma_2,q_2}$ in $\Omega$ and $W$. Next note that
\begin{equation}
\label{eq: some calculation}
\begin{split}
0&=Q_{\gamma_1,q_1}-Q_{\gamma_2,q_2}=-\frac{(-\Delta)^sm_{\gamma_1}}{\gamma_1^{1/2}}+\frac{(-\Delta)^sm_{\gamma_2}}{\gamma_2^{1/2}}+\frac{q_1}{\gamma_1}-\frac{q_2}{\gamma_2}\\
&=-\frac{(-\Delta)^sm}{\gamma_1^{1/2}}+\left(\frac{1}{\gamma_2^{1/2}}-\frac{1}{\gamma_1^{1/2}}\right)(-\Delta)^sm_{\gamma_2}+\frac{q_1}{\gamma_1}-\frac{q_2}{\gamma_2}\\
&=-\frac{(-\Delta)^sm}{\gamma_1^{1/2}}+\frac{m}{\gamma_1^{1/2}\gamma_2^{1/2}}(-\Delta)^sm_{\gamma_2}+\frac{q_1}{\gamma_1}-\frac{q_2}{\gamma_2},
\end{split}
\end{equation}
where, as before, $m=m_{\gamma_1}-m_{\gamma_2}$. As $\gamma_1=\gamma_2$ in $W$, the identity \eqref{eq: some calculation} reduces to the one in statement \ref{item 2 equality in exterior set}.
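Spelled out: since $\gamma_1=\gamma_2$ and hence $m=0$ in $W$, the second term on the right hand side of \eqref{eq: some calculation} vanishes in $W$, and multiplying the remaining identity by $-\gamma_1^{1/2}=-\gamma_2^{1/2}$ yields
\[
(-\Delta)^sm+\frac{q_2-q_1}{\gamma_2^{1/2}}=0\enspace\text{in}\enspace W,
\]
which is exactly \eqref{eq: equivalence measurement set}.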
Next, assume conversely that $\gamma_1=\gamma_2$ in $W$ and that $m=m_{\gamma_1}-m_{\gamma_2}$ as well as $Q_{\gamma_j,q_j}$ for $j=1,2$ satisfy \ref{item 1 equal potentials} and \ref{item 2 equality in exterior set}. Then for any given Dirichlet value $f\in C_c^{\infty}(W)$, the Dirichlet problems for $(-\Delta)^sv+Q_{\gamma_1,q_1}v=0$ in $\Omega$ and $(-\Delta)^sv+Q_{\gamma_2,q_2}v=0$ in $\Omega$ have the same solution $v_f^{(1)}=v_f^{(2)}$. Hence, one has \[
\begin{split}
B_{Q_{\gamma_1,q_1}}(v_f^{(1)},g)&=\langle (-\Delta)^{s/2}v_f^{(1)},(-\Delta)^{s/2}g\rangle+\langle Q_{\gamma_1,q_1}v_f^{(1)},g\rangle\\
&=\langle (-\Delta)^{s/2}v_f^{(2)},(-\Delta)^{s/2}g\rangle+\langle Q_{\gamma_1,q_1}f,g\rangle\\
&=\langle (-\Delta)^{s/2}v_f^{(2)},(-\Delta)^{s/2}g\rangle+\langle Q_{\gamma_2,q_2}f,g\rangle\\
&=\langle (-\Delta)^{s/2}v_f^{(2)},(-\Delta)^{s/2}g\rangle+\langle Q_{\gamma_2,q_2}v_f^{(2)},g\rangle
\end{split}
\]
for any $g\in C_c^{\infty}(W)$, where in the third equality we used that \ref{item 1 equal potentials} and \ref{item 2 equality in exterior set} together with $\gamma_1=\gamma_2$ in $W$ also give $Q_{\gamma_1,q_1}=Q_{\gamma_2,q_2}$ in $W$ (cf.~\eqref{eq: some calculation}). But this is nothing else than $\Lambda_{Q_{\gamma_1,q_1}}f|_W=\Lambda_{Q_{\gamma_2,q_2}}f|_W$.
Next, choose $\gamma_2=1$ and $q_2=0$ and assume that $(\gamma_1,q_1)$ satisfies the assumptions of Theorem~\ref{main thm}. This implies that there holds $\Lambda_{\gamma_1,q_1}f|_W=\Lambda_{1,0}f|_W$ for all $f\in C_c^{\infty}(W)$ if and only if we have
\begin{enumerate}[(I)]
\item\label{item 2 measurement set 2} $\gamma_1=1$ on $W$,
\item\label{item 1 equal potentials 2} $Q_{\gamma_1,q_1}=0$ in $\Omega$
\item\label{item 3 equality in exterior set} and $(-\Delta)^sm_{\gamma_1}=q_1\enspace\text{in}\enspace W$.
\end{enumerate}
Therefore, if we define $q_1$ via
\begin{equation}
\label{eq: specification of potential}
q_1=\gamma_1^{1/2}(-\Delta)^sm_{\gamma_1}\enspace\text{in}\enspace{\mathbb R}^n
\end{equation}
for a given sufficiently regular function $\gamma_1\colon{\mathbb R}^n\to{\mathbb R}_+$ with $\gamma_1|_W=1$, then the conditions \ref{item 2 measurement set 2}, \ref{item 1 equal potentials 2} and \ref{item 3 equality in exterior set} are satisfied (see the computation following this paragraph). Hence, the remaining task is to select $\gamma_1$ in such a way that the required regularity properties of Theorem~\ref{main thm} are met. We construct $m_{\gamma_1}\in H^{s,n/s}({\mathbb R}^n)\cap H^s({\mathbb R}^n)$ as follows: First, choose open sets $\Omega',\omega\subset{\mathbb R}^n$ satisfying $\Omega'\Subset\Omega$ and $\omega\Subset \Omega\setminus\overline{\Omega'}$. Next, let us fix some $\epsilon>0$ such that $\Omega'_{5\epsilon}, \omega_{5\epsilon}, \Omega_e$ are disjoint. Here and in the rest of the proof, we denote by $A_{\delta}$ the open $\delta$-neighborhood of the set $A\subset{\mathbb R}^n$.
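Before carrying out this construction, let us record why the choice \eqref{eq: specification of potential} automatically yields \ref{item 1 equal potentials 2}: inserting $q_1=\gamma_1^{1/2}(-\Delta)^sm_{\gamma_1}$ into the formula for the reduced potential (cf.~the first line of \eqref{eq: some calculation}), one finds
\[
Q_{\gamma_1,q_1}=-\frac{(-\Delta)^sm_{\gamma_1}}{\gamma_1^{1/2}}+\frac{q_1}{\gamma_1}
=-\frac{(-\Delta)^sm_{\gamma_1}}{\gamma_1^{1/2}}+\frac{(-\Delta)^sm_{\gamma_1}}{\gamma_1^{1/2}}=0\enspace\text{in}\enspace{\mathbb R}^n,
\]
while \ref{item 2 measurement set 2} and \ref{item 3 equality in exterior set} follow directly from $\gamma_1|_W=1$.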
Now, choose any nonnegative cut-off function $\eta\in C_c^{\infty}(\omega_{3\epsilon})$ satisfying $\eta|_{\omega}=1$. We define $\widetilde{m}\in H^s({\mathbb R}^n)$ as the unique solution to
\begin{equation}
\label{eq: PDE in extended domain}
(-\Delta)^s\widetilde{m}=0\quad \text{in}\quad \Omega'_{2\epsilon},\quad \widetilde{m}=\eta\quad\text{in}\quad {\mathbb R}^n\setminus\overline{\Omega'}_{2\epsilon}.
\end{equation}
Since $\eta\geq 0$, the maximum principle for the fractional Laplacian shows $\widetilde{m}\geq 0$ (cf.~\cite[Proposition~4.1]{RosOton16-NonlocEllipticSurvey}). Proceeding as in \cite[Proof of Theorem 1.6]{RZ2022LowReg} one can show that
\[
m_{\gamma_1}\vcentcolon =C_{\epsilon}\rho_{\epsilon}\ast\widetilde{m}\in H^s({\mathbb R}^n)\quad\text{with}\quad C_{\epsilon}\vcentcolon=\frac{\epsilon^{n/2}}{2|B_1|^{1/2}\|\rho\|_{L^{\infty}({\mathbb R}^n)}^{1/2}\|\widetilde{m}\|_{L^2({\mathbb R}^n)}},
\]
where $\rho_{\epsilon}\in C_c^{\infty}({\mathbb R}^n)$ is the standard mollifier of width $\epsilon$, solves
\[
(-\Delta)^sm=0\quad \text{in}\quad \Omega',\quad m=m_{\gamma_1}\quad\text{in}\quad \Omega'_e.
\]
Furthermore, $m_{\gamma_1}$ has the following properties
\begin{enumerate}[(A)]
\item\label{item 2 m} $m_{\gamma_1}\in L^{\infty}({\mathbb R}^n)$ with $\|m_{\gamma_1}\|_{L^{\infty}({\mathbb R}^n)}\leq 1/2$ and $m_{\gamma_1}\geq 0$,
\item\label{item 3 m} $m_{\gamma_1}\in H^{s}({\mathbb R}^n)\cap H^{s,n/s}({\mathbb R}^n)$
\item\label{item 4 m} and $\supp(m_{\gamma_1})\subset \Omega$.
\end{enumerate}
Now, we define $\gamma_1\in L^{\infty}({\mathbb R}^n)$ via $\gamma_1=(m_{\gamma_1}+1)^2\geq 1$. Therefore, $\gamma_1$ satisfies all required properties and even belongs to $C^{\infty}_b({\mathbb R}^n)$, since $m_{\gamma_1}$ is defined via mollification of an $L^2$ function. Using a similar calculation as for Lemma~\ref{Lemma: fractional Liouville reduction}, \ref{potential frac Liouville reduction}, we have $q_1\in M(H^s\to H^{-s})$, and by scaling of $m_{\gamma_1}$ we can make the norm $\|q_1\|_s$ as small as we want. In particular, this allows us to guarantee $q_1\in M_{\gamma_0/\delta_0}(H^s\to H^{-s})$ with $\gamma_0=1$. Note that we cannot have $q_1|_W=0$, since otherwise the UCP would imply $m_{\gamma_1}=0$. Hence, we can conclude the proof.
\end{proof}
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=1.2]
\draw[color=cyan!70, fill=cyan!30] plot [smooth cycle, tension=0.05] coordinates {(0,0) (0,3) (2,3) (2,2) (1,2) (1,1) (2,1) (2,0)};
\filldraw[color=red!60, fill=red!20, opacity = 0.4] plot [smooth cycle, tension=0.2] coordinates {(-0.47,-0.6) (-0.47,3.6) (4.3,3.62) (4.3,-0.65) };
\draw[color=cyan!70] plot [smooth cycle, tension=0.05] coordinates { (-0.1,-0.1) (-0.1,3.1) (2.1,3.1) (2.1,1.9) (1.1,1.9) (1.1,1.1) (2.1,1.1) (2.1,-0.1)};
\draw[color=cyan!70] plot [smooth cycle, tension=0.05] coordinates { (-0.2,-0.2) (-0.2,3.2) (2.2,3.2) (2.2,1.8) (1.2,1.8) (1.2,1.2) (2.2,1.2) (2.2,-0.2)};
\draw[color=cyan!70] plot [smooth cycle, tension=0.05] coordinates { (-0.3,-0.3) (-0.3,3.3) (2.3,3.3) (2.3,1.7) (1.3,1.7) (1.3,1.3) (2.3,1.3) (2.3,-0.3)};
\draw[color=cyan!70] plot [smooth cycle, tension=0.05] coordinates { (-0.4,-0.4) (-0.4,3.4) (2.4,3.4) (2.4,1.6) (1.4,1.6) (1.4,1.4) (2.4,1.4) (2.4,-0.4)};
\draw[color=gray!70] (-0.03,2.1)--(-1.1,2.1);
\node at (-1.3,2.1) {$\raisebox{-.35\baselineskip}{\ensuremath{\Omega'}}$};
\draw[color=gray!70] (-0.68,1.6)--(-1.1,1.6);
\node at (-1.3,1.6) {$\raisebox{-.35\baselineskip}{\ensuremath{\Omega}}$};
\filldraw[color=blue!60, fill=blue!15, opacity=0.5] (5.5,1) ellipse (0.8 and 0.6);
\node at (5.5,1) {$\raisebox{-.35\baselineskip}{\ensuremath{W}}$};
\filldraw[color=green!70, fill=green!40, opacity=0.8] (3.5,1) ellipse (0.3 and 0.7);
\draw[color=green!70] (3.5,1) ellipse (0.4 and 0.8);
\draw[color=green!70] (3.5,1) ellipse (0.5 and 0.9);
\draw[color=green!70] (3.5,1) ellipse (0.6 and 1);
\draw[color=green!70] (3.5,1) ellipse (0.7 and 1.1);
\draw[color=green!70] (3.5,1) ellipse (0.8 and 1.2);
\node at (3.5,1) {$\raisebox{-.35\baselineskip}{\ensuremath{\omega}}$};
\end{tikzpicture}
\caption{A graphical illustration of the sets used in the proof of Theorem~\ref{thm: non uniqueness}.}\label{fig: Geometric setting 2}
\end{figure}
\medskip
\newpage
|